Juniper SRX Security Chassis Cluster
Release
15.1X49-D40
Modified: 2016-06-24
Juniper Networks assumes no responsibility for any inaccuracies in this document. Juniper Networks reserves the right to change, modify,
transfer, or otherwise revise this publication without notice.
Junos OS® Chassis Cluster Feature Guide for Branch SRX Series Devices
Release 15.1X49-D40
Copyright © 2016, Juniper Networks, Inc.
All rights reserved.
The information in this document is current as of the date on the title page.
Juniper Networks hardware and software products are Year 2000 compliant. Junos OS has no known time-related limitations through the
year 2038. However, the NTP application is known to have some difficulty in the year 2036.
The Juniper Networks product that is the subject of this technical documentation consists of (or is intended for use with) Juniper Networks
software. Use of such software is subject to the terms and conditions of the End User License Agreement (“EULA”) posted at
http://www.juniper.net/support/eula.html. By downloading, installing or using such software, you agree to the terms and conditions of
that EULA.
Part 1 Overview
Chapter 1 Introduction to Chassis Cluster (page 3)
  Chassis Cluster Overview (page 3)
    High Availability Using Chassis Clusters (page 3)
    How High Availability Is Achieved by Chassis Cluster (page 3)
    Chassis Cluster Active/Active and Active/Passive Modes (page 4)
    Chassis Cluster Functionality (page 4)
    IPv6 Clustering Support (page 4)
  Chassis Cluster Supported Features (page 5)
  Chassis Cluster Features Support (page 23)
  Chassis Cluster Limitations (page 25)
Chapter 2 Understanding Chassis Cluster License Requirements (page 29)
  Understanding Chassis Cluster Licensing Requirements (page 29)
  Installing Licenses on the Devices in a Chassis Cluster (page 30)
  Verifying Licenses for an SRX Series Device in a Chassis Cluster (page 32)
Chapter 3 Planning Your Chassis Cluster Configuration (page 35)
  Preparing Your Equipment for Chassis Cluster Formation (page 35)
  SRX Series Chassis Cluster Configuration Overview (page 36)
Part 7 Index
  Index (page 405)
Figures
Figure 19: Synchronizing Time from the NTP Server Using fxp0 (page 185)
Tables
Table 3: Features Supported on a Branch SRX Series Device in a Chassis Cluster (page 5)
Table 4: Chassis Cluster Feature Support on Branch SRX Series Devices (page 23)
If the information in the latest release notes differs from the information in the
documentation, follow the product Release Notes.
Juniper Networks Books publishes books by Juniper Networks engineers and subject
matter experts. These books go beyond the technical documentation to explore the
nuances of network architecture, deployment, and administration. The current list can
be viewed at http://www.juniper.net/books.
Supported Platforms
For the features described in this document, the following platforms are supported:
• SRX Series
• vSRX
If you want to use the examples in this manual, you can use the load merge or the load
merge relative command. These commands cause the software to merge the incoming
configuration into the current candidate configuration. The example does not become
active until you commit the candidate configuration.
If the example configuration contains the top level of the hierarchy (or multiple
hierarchies), the example is a full example. In this case, use the load merge command.
If the example configuration does not start at the top level of the hierarchy, the example
is a snippet. In this case, use the load merge relative command. These procedures are
described in the following sections.
Merging a Full Example
To merge a full example, follow these steps:
1. From the HTML or PDF version of the manual, copy a configuration example into a
text file, save the file with a name, and copy the file to a directory on your routing
platform.
For example, copy the following configuration to a file and name the file ex-script.conf.
Copy the ex-script.conf file to the /var/tmp directory on your routing platform.
system {
    scripts {
        commit {
            file ex-script.xsl;
        }
    }
}
interfaces {
    fxp0 {
        disable;
        unit 0 {
            family inet {
                address 10.0.0.1/24;
            }
        }
    }
}
2. Merge the contents of the file into your routing platform configuration by issuing the
load merge configuration mode command:
[edit]
user@host# load merge /var/tmp/ex-script.conf
load complete
Merging a Snippet
To merge a snippet, follow these steps:
1. From the HTML or PDF version of the manual, copy a configuration snippet into a text
file, save the file with a name, and copy the file to a directory on your routing platform.
For example, copy the following snippet to a file and name the file
ex-script-snippet.conf. Copy the ex-script-snippet.conf file to the /var/tmp directory
on your routing platform.
commit {
    file ex-script-snippet.xsl;
}
2. Move to the hierarchy level that is relevant for this snippet by issuing the following
configuration mode command:
[edit]
user@host# edit system scripts
[edit system scripts]
3. Merge the contents of the file into your routing platform configuration by issuing the
load merge relative configuration mode command:
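[edit system scripts]
user@host# load merge relative /var/tmp/ex-script-snippet.conf
load complete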
For more information about the load command, see the CLI User Guide.
Documentation Conventions
Caution: Indicates a situation that might result in loss of data or hardware damage.

Laser warning: Alerts you to the risk of personal injury from a laser.

Table 2 on page xv defines the text and syntax conventions used in this guide.

Bold text like this: Represents text that you type. For example, to enter configuration mode, type the configure command:

  user@host> configure

Fixed-width text like this: Represents output that appears on the terminal screen. For example:

  user@host> show chassis alarms
  No alarms currently active

Italic text like this: Introduces or emphasizes important new terms, identifies guide names, and identifies RFC and Internet draft titles. Examples: "A policy term is a named structure that defines match conditions and actions."; Junos OS CLI User Guide; RFC 1997, BGP Communities Attribute.

Italic text like this: Represents variables (options for which you substitute a value) in commands or configuration statements. For example, configure the machine's domain name:

  [edit]
  root@# set system domain-name domain-name

Text like this: Represents names of configuration statements, commands, files, and directories; configuration hierarchy levels; or labels on routing platform components. Examples: "To configure a stub area, include the stub statement at the [edit protocols ospf area area-id] hierarchy level."; "The console port is labeled CONSOLE."

< > (angle brackets): Encloses optional keywords or variables. Example: stub <default-metric metric>;

# (pound sign): Indicates a comment specified on the same line as the configuration statement to which it applies. Example: rsvp { # Required for dynamic MPLS only

[ ] (square brackets): Encloses a variable for which you can substitute one or more values. Example: community name members [ community-ids ]

GUI Conventions

Bold text like this: Represents graphical user interface (GUI) items you click or select. Examples: "In the Logical Interfaces box, select All Interfaces."; "To cancel the configuration, click Cancel."

> (bold right angle bracket): Separates levels in a hierarchy of menu selections. Example: In the configuration editor hierarchy, select Protocols>Ospf.
Documentation Feedback
• Online feedback rating system—On any page of the Juniper Networks TechLibrary site
at http://www.juniper.net/techpubs/index.html, simply click the stars to rate the content,
and use the pop-up form to provide us with information about your experience.
Alternately, you can use the online feedback form at
http://www.juniper.net/techpubs/feedback/.
Technical product support is available through the Juniper Networks Technical Assistance
Center (JTAC). If you are a customer with an active J-Care or Partner Support Service
support contract, or are covered under warranty, and need post-sales technical support,
you can access our tools and resources online or open a case with JTAC.
• JTAC hours of operation—The JTAC centers have resources available 24 hours a day,
7 days a week, 365 days a year.
• Find solutions and answer questions using our Knowledge Base: http://kb.juniper.net/
To verify service entitlement by product serial number, use our Serial Number Entitlement
(SNE) Tool: https://tools.juniper.net/SerialNumberEntitlementSearch/
Overview
• Introduction to Chassis Cluster on page 3
• Understanding Chassis Cluster License Requirements on page 29
• Planning Your Chassis Cluster Configuration on page 35
When configured as a chassis cluster, the two nodes back up each other, with one node
acting as the primary device and the other as the secondary device, ensuring stateful
failover of processes and services in the event of system or hardware failure. If the primary
device fails, the secondary device takes over processing of traffic.
• The devices must be running the same version of the Junos operating system (Junos
OS).
• The control ports on the respective nodes are connected to form a control plane that
synchronizes the configuration and kernel state to facilitate the high availability of
interfaces and services.
• The data plane on the respective nodes is connected over the fabric ports to form a
unified data plane. The fabric link allows for the management of cross-node flow
processing and for the management of session redundancy.
The data plane software operates in active/active mode. In a chassis cluster, session
information is updated as traffic traverses either device, and this information is transmitted
between the nodes over the fabric link to guarantee that established sessions are not
dropped when a failover occurs. In active/active mode, it is possible for traffic to ingress
the cluster on one node and egress from the other node.
• Resilient system architecture, with a single active control plane for the entire cluster
and multiple Packet Forwarding Engines. This architecture presents a single device
view of the cluster.
• Monitoring of physical interfaces, and failover if the failure parameters cross a configured
threshold.
• Support for Generic Routing Encapsulation (GRE) tunnels used to route encapsulated
IPv4/IPv6 traffic by means of an internal interface, gr-0/0/0. This interface is created
by Junos OS at system bootup and is used only for processing GRE tunnels. See the
Interfaces Feature Guide for Security Devices.
At any given instant, a cluster node can be in one of the following states: hold, primary,
secondary-hold, secondary, ineligible, and disabled. A state transition can be triggered
by an event such as interface monitoring, SPU monitoring, a failure, or a manual
failover.
Address book entries can include any combination of IPv4 addresses, IPv6 addresses, and Domain Name System
(DNS) names.
Table 3 on page 5 lists the features that are supported on branch SRX Series devices
in a chassis cluster.
Table 3: Features Supported on a Branch SRX Series Device in a Chassis Cluster

Feature                                          Active/Backup   Active/Backup Failover   Active/Active   Active/Active Failover
Simple filters                                   No              No                       No              No
DHCPv6 (2)                                       Yes             Yes                      Yes             Yes
J-Flow version 9                                 No              No                       No              No
Ping MPLS                                        No              No                       No              No
Package dynamic VPN client (Dynamic VPN) (3)     –               –                        –               –
40/100-Gigabit Ethernet interface (MPC slots)    –               –                        –               –
Packet-based processing                          No              No                       No              No
Message-length filtering                         No              No                       No              No
Message-rate limiting                            No              No                       No              No
Message-type filtering                           No              No                       No              No
Policy-based inspection                          No              No                       No              No
Stateful inspection                              No              No                       No              No
Traffic logging                                  No              No                       No              No
Tunnel cleanup                                   No              No                       No              No
DSCP marking                                     No              No                       No              No
Jumbo frames                                     No              No                       No              No
Q-in-Q tunneling                                 No              No                       No              No
System log archival (System Log Files)           Yes             Yes                      Yes             Yes
Class of service                                 No              No                       No              No
Antivirus–Sophos                                 Yes             No                       No              No
ISSU                                             No              No                       No              No
1. When the application ID is identified before a session failover, the action taken before the failover remains in effect after the failover; the action is published to the AppSecure service modules and is applied based on the application ID of the traffic. If the application is still in the process of being identified when a failover occurs, the application ID is not determined and the session information is lost. The application identification process is applied to new sessions created on the new primary node.

2. DHCPv6 is supported on SRX Series devices running Junos OS Release 12.1 and later releases.

3. The packaged dynamic VPN client is supported on branch SRX Series devices until Junos OS Release 12.3X48.
Table 4: Chassis Cluster Feature Support on Branch SRX Series Devices

Feature                                                                           Branch SRX Series
IP monitoring                                                                     Yes
HA monitoring                                                                     Yes
Low-Impact ISSU                                                                   No
Point-to-Point Protocol over Ethernet (PPPoE) over redundant Ethernet interface   Yes
SPU monitoring                                                                    No
Synchronization–configuration                                                     Yes
Synchronization–policies                                                          Yes
WAN interfaces                                                                    No
The SRX Series devices have the following chassis cluster limitations:
Chassis Cluster
• Starting with Junos OS Release 12.1X45-D10, sampling features such as flow
monitoring, packet capture, and port mirroring are supported on reth interfaces.
• On all SRX Series devices in a chassis cluster, flow monitoring for version 5 and
version 8 is supported. However, flow monitoring for version 9 is not supported.
• If you use packet capture on reth interfaces, two files are created, one for ingress
packets and the other for egress packets based on the reth interface name. These files
can be merged outside of the device using tools such as Wireshark or Mergecap.
• If you use port mirroring on reth interfaces, the reth interface cannot be configured as
the output interface; you must use a physical interface as the output interface. If you
configure the reth interface as an output interface using the set forwarding-options
port-mirroring family inet output command, an error message is displayed.
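A minimal sketch of the supported alternative, with the mirror output pointed at a physical interface (the interface name and next-hop address are illustrative, and the next-hop statement may vary by platform):

{primary:node0}[edit]
user@host# set forwarding-options port-mirroring family inet output interface ge-0/0/2.0 next-hop 10.1.1.2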
• Packet-based services such as MPLS and CLNS are not supported.
• On all SRX Series devices, the packet-based forwarding for MPLS and ISO protocol
families is not supported.
• For SRX300, SRX320, SRX340, SRX345, and SRX550 devices, the reboot parameter
is not available, because the devices in a cluster are automatically rebooted following
an in-band cluster upgrade (ICU).
Interfaces
• On the lsq-0/0/0 interface, Link services MLPPP, MLFR, and CRTP are not supported.
Layer 2 Switching
• On SRX Series device failover, access points on the Layer 2 switch reboot and all
wireless clients lose connectivity for 4 to 6 minutes.
MIBs
Monitoring
• The maximum number of monitoring IPs that can be configured per cluster is 64 for
the branch SRX Series devices.
• On SRX300, SRX320, SRX340, SRX345, SRX550, and SRX1500 devices, logs cannot
be sent to NSM when logging is configured in the stream mode. Logs cannot be sent
because the security log does not support configuration of the source IP address for
the fxp0 interface and the security log destination in stream mode cannot be routed
through the fxp0 interface. This implies that you cannot configure the security log
server in the same subnet as the fxp0 interface and route the log server through the
fxp0 interface.
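A minimal sketch of a stream-mode security log configuration that satisfies this constraint, with the source address and log server on a revenue-interface subnet rather than the fxp0 subnet (the stream name and addresses are illustrative):

{primary:node0}[edit]
user@host# set security log mode stream
user@host# set security log source-address 192.168.10.1
user@host# set security log stream traffic-log host 192.168.10.100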
Some Junos OS software features require a license to activate the feature. To enable a
licensed feature, you need to purchase, install, manage, and verify a license key that
corresponds to each licensed feature.
There is no separate license required for chassis cluster. However, to configure and use
the licensed feature in a chassis cluster setup, you must purchase one license per feature
per device and the license needs to be installed on both nodes of the chassis cluster.
Each license is tied to one software feature pack, and that license is valid for only one
device.
For a chassis cluster, you must install licenses on each device; licenses are unique to
each device and cannot be shared between the devices. Both devices that are going to
form a chassis cluster must have valid, identical feature licenses installed on them. If
both devices do not have an identical set of licenses, then after a failover, a feature that
is not licensed on both devices might not work, or the configuration might not
synchronize during chassis cluster formation.
Licensing is usually ordered when the device is purchased, and this information is bound
to the chassis serial number. For example, Intrusion Detection and Prevention (IDP) is a
licensed feature and the license for this specific feature is tied to the serial number of
the device.
For information about how to purchase software licenses, contact your Juniper Networks
sales representative at http://www.juniper.net/in/en/contact-us/.
You can add a license key from a file or a URL, from a terminal, or from the J-Web user
interface. Use the filename option to activate a perpetual license directly on the device.
Use the url option to send a subscription-based license key entitlement (such as unified
threat management [UTM]) to the Juniper Networks licensing server for authorization.
If authorized, the server downloads the license to the device and activates it.
• Set the chassis cluster node ID and the cluster ID. See “Example: Setting the Chassis
Cluster Node ID and Cluster ID for Branch SRX Series Devices” on page 51 or Example:
Setting the Chassis Cluster Node ID and Cluster ID for High-End SRX Series Devices.
• Ensure that your SRX Series device has a connection to the Internet if a particular
feature requires Internet access or if licenses are to be renewed automatically over the Internet.
For instructions on establishing basic connectivity, see the Getting Started Guide or
Quick Start Guide for your device.
To install licenses on the primary node of an SRX Series device in a chassis cluster:
1. Run the show chassis cluster status command and identify which node is primary for
redundancy group 0 on your SRX Series device.
{primary:node0}
user@host> show chassis cluster status redundancy-group 0
Cluster ID: 9
Node Priority Status Preempt Manual failover
The output of this command indicates that node 0 is primary and node 1 is secondary.
2. From CLI operational mode, enter one of the following CLI commands:
• To add a license key from a file or a URL, enter the following command, specifying
the filename or the URL where the key is located:
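{primary:node0}
user@host> request system license add /var/tmp/license-file.txt

(The file path shown is illustrative; a URL can be given in its place.)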
• To add a license key from the terminal, enter the following command:
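{primary:node0}
user@host> request system license add terminal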
3. When prompted, enter the license key, separating multiple license keys with a blank
line.
If the license key you enter is invalid, an error appears in the CLI output when you press
Ctrl+d to exit license entry mode.
For more details, see Working with License Keys for SRX Series Devices.
To install licenses on the secondary node of an SRX Series device in a chassis cluster:
NOTE: Initiating a failover to the secondary node is not required if you are
installing licenses manually on the device. However, if you are installing
the license directly from the Internet, you must initiate a failover.
NOTE: You must install the updated license on both nodes of the chassis
cluster before the existing license expires.
TIP: If you are not using a specific feature or license, you can delete the
license from both devices in a chassis cluster. You need to connect to each
node separately to delete the licenses. For details, see Example: Deleting a
License Key.
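A minimal sketch of deleting a license key from one node (the license identifier is illustrative; repeat the command on the other node):

user@host> request system license delete JUNOS363684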
Related • Verifying Licenses for an SRX Series Device in a Chassis Cluster on page 32
Documentation
• Understanding Chassis Cluster Licensing Requirements on page 29
Purpose You can verify the licenses installed on both the devices in a chassis cluster setup by
using the show system license installed command to view license usage.
Licenses installed:
License identifier: JUNOS363684
License version: 2
Valid for device: JN111A654AGB
Features:
services-offload - services offload mode
permanent
{secondary-hold:node1}
user@host> show system license
License usage:
Licenses Licenses Licenses Expiry
Feature name used installed needed
idp-sig 0 1 0 permanent
logical-system 1 26 0 permanent
services-offload 0 1 0 permanent
Licenses installed:
License identifier: JUNOS209661
License version: 2
Valid for device: JN111AB4DAGB
Features:
idp-sig - IDP Signature
permanent
License version: 2
Valid for device: JN111AB4DAGB
Features:
services-offload - services offload mode
permanent
Meaning Use the License version and Features fields to make sure that the licenses installed on
both nodes are identical.
To form a chassis cluster, a pair of the same kind of supported SRX Series devices is
combined to act as a single system that enforces the same overall security.
The following are the device-specific matches required to form a chassis cluster:
• SRX300, SRX320, SRX340, SRX345, and SRX550: Although the devices must be of
the same type, they can contain different Physical Interface Modules (PIMs).
When a device joins a cluster, it becomes a node of that cluster. With the exception of
unique node settings and management IP addresses, nodes in a cluster share the same
configuration.
You can deploy up to 255 chassis clusters in a Layer 2 domain. Clusters and nodes are
identified in the following way:
The following message is displayed when you try to set a cluster ID greater than 15
while the fabric and control link interfaces are neither connected back-to-back nor
connected on separate private VLANs:
{primary:node1}
user@host> set chassis cluster cluster-id 254 node 1 reboot
For cluster-ids greater than 15 and when deploying more than one cluster in a
single Layer 2 BROADCAST domain, it is mandatory that fabric and control links
are either connected back-to-back or are connected on separate private VLANS.
• On SRX Series branch devices, any existing configurations associated with interfaces
that transform to the fxp0 management port and the control port should be removed.
For more information, see “Understanding SRX Series Chassis Cluster Slot Numbering
and Physical Port and Logical Interface Naming” on page 47.
• Confirm that hardware and software are the same on both devices.
This section provides an overview of the basic steps to create an SRX Series chassis
cluster.
NOTE: For SRX300, SRX320, SRX340, SRX345, and SRX550 chassis clusters,
the placement and type of GPIMs, XGPIMs, XPIMs, and Mini-PIMs (as
applicable) must match in the two devices.
1. Physically connect a pair of the same kind of supported SRX Series devices together.
For more information, see “Connecting SRX Series Devices to Create a Chassis Cluster”
on page 43.
a. Create the fabric link between two nodes in a cluster by connecting any pair of
Ethernet interfaces. For most SRX Series devices, the only requirement is that both
interfaces be Gigabit Ethernet interfaces (or 10-Gigabit Ethernet interfaces). For
SRX300, SRX320, SRX340, SRX345, and SRX550 devices, connect a pair of Gigabit
Ethernet interfaces. For SRX1500 devices, fabric child must be of a similar type.
When using dual fabric link functionality, connect the two pairs of Ethernet
interfaces that you will use on each device. See “Understanding Chassis Cluster
Dual Fabric Links” on page 151.
2. Connect the first device to be initialized in the cluster to the console port. This is the
node that forms the cluster.
For connection instructions, see the Getting Started Guide for your device.
3. Use CLI operational mode commands to enable clustering:

a. Identify the cluster that the device belongs to by assigning a cluster ID.

b. Identify the node by giving it its own node ID and then reboot the system.
See “Example: Setting the Chassis Cluster Node ID and Cluster ID for Branch SRX
Series Devices” on page 51.
4. Connect to the console port on the other device and use CLI operational mode
commands to enable clustering:
a. Identify the cluster that the device is joining by setting the same cluster ID you set
on the first node.
b. Identify the node by giving it its own node ID and then reboot the system.
5. Configure the management interfaces on the cluster. See “Example: Configuring the
Chassis Cluster Management Interface” on page 53.
6. Configure redundancy groups and redundant Ethernet interfaces. See "Example:
Configuring Chassis Cluster Redundant Ethernet Interfaces for IPv4 and IPv6
Addresses" on page 79.
7. Initiate manual failover. See “Initiating a Chassis Cluster Manual Redundancy Group
Failover” on page 146.
9. Verify the configuration. See “Verifying a Chassis Cluster Configuration” on page 99.
You must use the following ports to form the control link on the branch SRX Series devices:
• For SRX300 devices, connect the ge-0/0/1 on node 0 to the ge-1/0/1 on node 1.
• For SRX320 devices, connect the ge-0/0/1 on node 0 to the ge-3/0/1 on node 1.
• For SRX340 and SRX345 devices, connect the ge-0/0/1 on node 0 to the ge-5/0/1 on
node 1.
• For SRX550 devices, connect the ge-0/0/1 on node 0 to the ge-9/0/1 on node 1.
The fabric link must be formed from a pair of either Gigabit Ethernet or 10-Gigabit
Ethernet interfaces on all SRX Series devices.
• For SRX300 and SRX320 devices, connect any interface except ge-0/0/0 and ge-0/0/1.
• For SRX340 and SRX345 devices, connect any interface except fxp0 and ge-0/0/1.
Figure 2 on page 44, Figure 3 on page 44, Figure 4 on page 44, Figure 5 on page 44,
Figure 6 on page 44, and Figure 7 on page 45 show pairs of SRX Series devices with the
fabric links and control links connected.
• Understanding SRX Series Chassis Cluster Slot Numbering and Physical Port and
Logical Interface Naming on page 47
• Example: Setting the Chassis Cluster Node ID and Cluster ID for Branch SRX Series
Devices on page 51
Understanding SRX Series Chassis Cluster Slot Numbering and Physical Port and
Logical Interface Naming
Normally, on SRX Series devices, the built-in interfaces are numbered as follows:
For chassis clustering, all SRX Series devices have a built-in management interface
named fxp0. For most SRX Series devices, the fxp0 interface is a dedicated port.
For SRX340 and SRX345 devices, the fxp0 interface is a dedicated port. For SRX300
and SRX320 devices, after you enable chassis clustering and reboot the system, the
built-in interface named ge-0/0/0 is repurposed as the management interface and is
automatically renamed fxp0.
For SRX300, SRX320, SRX340, and SRX345 devices, after you enable chassis clustering
and reboot the system, the built-in interface named ge-0/0/1 is repurposed as the control
interface and is automatically renamed fxp1.
For SRX550 devices, control interfaces are dedicated Gigabit Ethernet ports.
After the devices are connected as a cluster, the slot numbering on one device changes
and thus the interface numbering will change. The slot number for each slot in both nodes
is determined using the following formula:
cluster slot number = (node ID * maximum slots per node) + local slot number
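For example, on an SRX345 with five slots per node, local slot 0 on node 1 maps to cluster slot (1 × 5) + 0 = 5, which is why the control port ge-0/0/1 on the second device appears as ge-5/0/1 after the cluster forms.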
In chassis cluster mode, all FPC related configuration is performed under edit chassis
node node-id fpc hierarchy. In non-cluster mode, the FPC related configuration is performed
under edit chassis fpc hierarchy.
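A minimal sketch of navigating to the cluster-mode hierarchy (the node and FPC numbers are illustrative):

{primary:node0}[edit]
user@host# edit chassis node 1 fpc 5
[edit chassis node 1 fpc 5]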
Table 6 on page 48 shows the slot numbering, as well as the physical port and logical
interface numbering, for both of the SRX Series devices that become node 0 and node
1 of the chassis cluster after the cluster is formed.
Table 6: SRX Series Chassis Cluster Slot Numbering and Physical Port and Logical Interface Naming

SRX340 and SRX345 (5 PIM slots per chassis node):
• Node 0: slots numbered 0–4 in a cluster; management port fxp0; control port ge-0/0/1 (logical interface em0); fabric port: any Ethernet port (logical interface fab0).
• Node 1: slots numbered 5–9 in a cluster; management port fxp0; control port ge-5/0/1 (logical interface em0); fabric port: any Ethernet port (logical interface fab1).
NOTE: See the hardware documentation for your particular model (SRX
Series Services Gateways) for details about SRX Series devices. See Interfaces
Feature Guide for Security Devices for a full discussion of interface naming
conventions.
After you enable chassis clustering, the two chassis joined together cease to exist as
individuals and now represent a single system. As a single system, the cluster now has
twice as many slots. (See Figure 8 on page 49, Figure 9 on page 50, Figure 10 on page 50,
Figure 11 on page 50, Figure 12 on page 50, and Figure 13 on page 50.)
[Figure: Slot numbering in a chassis cluster. Two SRX Series devices are joined as node 0 (slot 0) and node 1 (slot 1); the front-panel detail shows the ALARM, STATUS, POWER, HA, CONSOLE, AUX, MPIM, RPS, ACE, STORAGE, and RESET CONFIG ports and LEDs on each node.]
Related • Example: Configuring an SRX Series Services Gateway for the Branch as a Chassis
Documentation Cluster on page 87
Example: Setting the Chassis Cluster Node ID and Cluster ID for Branch SRX Series
Devices
This example shows how to set the chassis cluster node ID and chassis cluster ID, which
you must configure after connecting two devices together. A chassis cluster ID identifies
the cluster to which the devices belong, and a chassis cluster node ID identifies a unique
node within the cluster. After wiring the two devices together, you use CLI operational
mode commands to enable chassis clustering by assigning a cluster ID and node ID on
each chassis in the cluster. The cluster ID is the same on both nodes.
• Requirements on page 51
• Overview on page 51
• Configuration on page 51
• Verification on page 52
Requirements
Before you begin, ensure that you can connect to each device through the console port.
Overview
The system uses the chassis cluster ID and chassis cluster node ID to apply the correct
configuration for each node (for example, when you use the apply-groups command to
configure the chassis cluster management interface). The chassis cluster ID and node
ID statements are written to the EPROM, and the statements take effect when the system
is rebooted.
In this example, you configure a chassis cluster ID of 1. You also configure a chassis cluster
node ID of 0 for the first node, which allows redundancy groups to be primary on this
node when priority settings for both nodes are the same, and a chassis cluster node ID
of 1 for the other node.
Configuration
Step-by-Step To specify the chassis cluster node ID and cluster ID, you need to set two devices to
Procedure cluster mode and reboot the devices. You must enter the following operational mode
commands on both devices:
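Using the cluster ID and node IDs from this example, enter the following on the first device:

user@host> set chassis cluster cluster-id 1 node 0 reboot

And on the second device:

user@host> set chassis cluster cluster-id 1 node 1 reboot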
Verification
Action From operational mode, enter the show chassis cluster status command.
{primary:node0}[edit]
user@host> show chassis cluster status
Cluster ID: 1
Node Priority Status Preempt Manual failover
The fxp0 interfaces function like standard management interfaces on SRX Series devices
and allow network access to each node in the cluster.
For most SRX Series chassis clusters, the fxp0 interface is a dedicated port; SRX340
and SRX345 devices have a dedicated fxp0 interface. SRX300 and SRX320 devices do
not have a dedicated port for fxp0; instead, the fxp0 interface is repurposed from a
built-in interface and is created when the system reboots the devices after you designate
one node as the primary device and the other as the secondary device.
We recommend giving each node in a chassis cluster a unique IP address for the fxp0
interface of each node. This practice allows independent node management.
Related • Understanding SRX Series Chassis Cluster Slot Numbering and Physical Port and
Documentation Logical Interface Naming on page 47
This example shows how to provide network management access to a chassis cluster.
• Requirements on page 54
• Overview on page 54
• Configuration on page 54
• Verification on page 56
Requirements
Before you begin, set the chassis cluster node ID and cluster ID. See “Example: Setting
the Chassis Cluster Node ID and Cluster ID for Branch SRX Series Devices” on page 51
or Example: Setting the Chassis Cluster Node ID and Cluster ID for High-End SRX Series
Devices.
Overview
You must assign a unique IP address to each node in the cluster to provide network
management access. This configuration is not replicated across the two nodes.
NOTE: If you try to access the nodes in a cluster over the network before you
configure the fxp0 interface, you will lose access to the cluster.
In this example, the IPv4 configuration uses the following settings:
• Node 0 name—node0-router; fxp0 address 10.1.1.1/24
• Node 1 name—node1-router; fxp0 address 10.1.1.2/24

The IPv6 configuration uses the following settings:
• Node 0 name—node0-router; fxp0 address 2010:2010:201::2/64
• Node 1 name—node1-router; fxp0 address 2010:2010:201::3/64
Configuration
CLI Quick To quickly configure this example, copy the following commands, paste them into a text
Configuration file, remove any line breaks, change any details necessary to match your network
configuration, and copy and paste the commands into the CLI at the [edit] hierarchy
level, and then enter commit from configuration mode.
{primary:node0}[edit]
user@host#
set groups node0 system host-name node0-router
set groups node0 interfaces fxp0 unit 0 family inet address 10.1.1.1/24
set groups node1 system host-name node1-router
set groups node1 interfaces fxp0 unit 0 family inet address 10.1.1.2/24
{primary:node0}[edit]
user@host#
set groups node0 system host-name node0-router
set groups node0 interfaces fxp0 unit 0 family inet6 address 2010:2010:201::2/64
set groups node1 system host-name node1-router
set groups node1 interfaces fxp0 unit 0 family inet6 address 2010:2010:201::3/64
{primary:node0}[edit]
user@host# set groups node0 system host-name node0-router
user@host# set groups node0 interfaces fxp0 unit 0 family inet address 10.1.1.1/24
{primary:node0}[edit]
user@host# set groups node1 system host-name node1-router
user@host# set groups node1 interfaces fxp0 unit 0 family inet address 10.1.1.2/24
{primary:node0}[edit]
user@host# commit
{primary:node0}[edit]
user@host# set groups node0 system host-name node0-router
user@host# set groups node0 interfaces fxp0 unit 0 family inet6 address
2010:2010:201::2/64
{primary:node0}[edit]
user@host# set groups node1 system host-name node1-router
user@host# set groups node1 interfaces fxp0 unit 0 family inet6 address
2010:2010:201::3/64
{primary:node0}[edit]
user@host# commit
Results From configuration mode, confirm your configuration by entering the show groups and
show apply-groups commands. If the output does not display the intended configuration,
repeat the configuration instructions in this example to correct it.
{primary:node0}[edit]
user@host# show groups
node0 {
system {
host-name node0-router;
}
interfaces {
fxp0 {
unit 0 {
family inet {
address 10.1.1.1/24;
}
}
}
}
}
node1 {
system {
host-name node1-router;
}
interfaces {
fxp0 {
unit 0 {
family inet {
address 10.1.1.2/24;
}
}
}
}
}
{primary:node0}[edit]
user@host# show apply-groups
## Last changed: 2010-09-16 11:08:29 UTC
apply-groups "${node}";
If you are done configuring the device, enter commit from configuration mode.
Verification
Action To verify that the configuration is working properly, enter the show configuration command.
Related • Management Interface on an Active Chassis Cluster for Branch SRX Series Devices on
Documentation page 53
The data plane software, which operates in active/active mode, manages flow processing
and session state redundancy and processes transit traffic. All packets belonging to a
particular session are processed on the same node to ensure that the same security
treatment is applied to them. The system identifies the node on which a session is active
and forwards its packets to that node for processing. (After a packet is processed, the
Packet Forwarding Engine transmits the packet to the node on which its egress interface
exists if that node is not the local one.)
To provide for session (or flow) redundancy, the data plane software synchronizes its
state by sending special payload packets called runtime objects (RTOs) from one node
to the other across the fabric data link. By transmitting information about a session
between the nodes, RTOs ensure the consistency and stability of sessions if a failover
were to occur, and thus they enable the system to continue to process traffic belonging
to existing sessions. To ensure that session information is always synchronized between
the two nodes, the data plane software gives RTOs transmission priority over transit
traffic.
The data link is referred to as the fabric interface. It is used by the cluster's Packet
Forwarding Engines to transmit transit traffic and to synchronize the data plane software’s
dynamic runtime state. The fabric provides for synchronization of session state objects
created by operations such as authentication, Network Address Translation (NAT),
Application Layer Gateways (ALGs), and IP Security (IPsec) sessions.
When the system creates the fabric interface, the software assigns it an internally derived
IP address to be used for packet transmission.
The fabric is a physical connection between two nodes of a cluster and is formed by
connecting a pair of Ethernet interfaces back-to-back (one from each node).
Unlike for the control link, whose interfaces are determined by the system, you specify
the physical interfaces to be used for the fabric data link in the configuration.
For SRX1500 devices, the fabric link can be any pair of Ethernet interfaces spanning the
cluster: any pair of Gigabit Ethernet interfaces or any pair of 10-Gigabit Ethernet
interfaces. For SRX300, SRX320, SRX340, and SRX345 devices, the fabric link can be
any pair of Gigabit Ethernet interfaces.
For SRX Series chassis clusters made up of SRX550 devices, SFP interfaces on Mini-PIMs
cannot be used as the fabric link.
For details about port and interface usage for management, control, and fabric links, see
“Understanding SRX Series Chassis Cluster Slot Numbering and Physical Port and Logical
Interface Naming” on page 47.
The fabric data link does not support fragmentation. To accommodate this limitation,
jumbo frame support is enabled by default on the link with an MTU size of 8940 bytes.
To ensure that traffic that transits the data link does not exceed this size, we recommend
that no other interface exceed the fabric data link's MTU size.
The RTOs sent over the fabric link include:
• RTOs for creating and deleting temporary openings in the firewall (pinholes) and
child session pinholes
A chassis cluster can receive traffic on an interface on one node and send it out to an
interface on the other node. (In active/active mode, the ingress interface for traffic might
exist on one node and its egress interface on the other.)
• When packets are processed on one node, but need to be forwarded out an egress
interface on the other node
• When packets arrive on an interface on one node, but must be processed on the other
node
If the ingress and egress interfaces for a packet are on one node, but the packet must
be processed on the other node because its session was established there, it must
traverse the data link twice. This can be the case for some complex media sessions,
such as voice-over-IP (VoIP) sessions.
The fabric data link is vital to the chassis cluster. If the link is unavailable, traffic forwarding
and RTO synchronization are affected, which can result in loss of traffic and unpredictable
system behavior.
To eliminate this possibility, Junos OS uses fabric monitoring to check whether the fabric
link, or the two fabric links in the case of a dual fabric link configuration, are alive by
periodically transmitting probes over the fabric links. If Junos OS detects fabric faults,
RG1+ status of the secondary node changes to ineligible. It determines that a fabric fault
has occurred if a fabric probe is not received but the fabric interface is active. To recover
from this state, both fabric links must come back online and start exchanging probes.
When this happens, all the FPCs on the previously ineligible node are reset; they then
come online and rejoin the cluster.
NOTE: If you make any changes to the configuration while the secondary
node is disabled, execute the commit command to synchronize the
configuration after you reboot the node. If you did not make configuration
changes, the configuration file remains synchronized with that of the primary
node.
Starting with Junos OS Release 12.1X47-D10, recovery of the fabric link and synchronization
take place automatically.
When both the primary and secondary nodes are healthy (that is, there are no failures)
and the fabric link goes down, RG1+ redundancy group(s) on the secondary node becomes
ineligible. When one of the nodes is unhealthy (that is, there is a failure), RG1+ redundancy
group(s) on this node (either the primary or secondary node) becomes ineligible. When
both nodes are unhealthy and the fabric link goes down, RG1+ redundancy group(s) on
the secondary node becomes ineligible. When the fabric link comes up, the node on which
RG1+ became ineligible performs a cold synchronization on all Services Processing Units
and transitions to active standby.
NOTE:
• If RG0 is primary on an unhealthy node, then RG0 will fail over from an
unhealthy to a healthy node. For example, if node 0 is primary for RG0+
and node 0 becomes unhealthy, then RG1+ on node 0 will transition to
ineligible after 66 seconds of a fabric link failure and RG0+ fails over to
node 1, which is the healthy node.
Use the show chassis cluster interfaces CLI command to verify the status of the fabric
link.
This example shows how to configure the chassis cluster fabric. The fabric is the
back-to-back data connection between the nodes in a cluster. Traffic on one node that
needs to be processed on the other node or to exit through an interface on the other node
passes over the fabric. Session state information also passes over the fabric.
• Requirements on page 61
• Overview on page 61
• Configuration on page 62
• Verification on page 62
Requirements
Before you begin, set the chassis cluster ID and chassis cluster node ID. See Example:
Setting the Chassis Cluster Node ID and Cluster ID.
Overview
In most SRX Series devices in a chassis cluster, you can configure any pair of Gigabit
Ethernet interfaces or any pair of 10-Gigabit Ethernet interfaces to serve as the fabric
between nodes.
You cannot configure filters, policies, or services on the fabric interface. Fragmentation
is not supported on the fabric link. The MTU size is 8980 bytes. We recommend that no
interface in the cluster exceed this MTU size. Jumbo frame support on the member links
is enabled by default.
Only the same type of interfaces can be configured as fabric children, and you must
configure an equal number of child links for fab0 and fab1.
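For example, a dual fabric link configuration with two child links per fabric interface might look like this sketch (interface names are illustrative):

{primary:node0}[edit]
user@host# set interfaces fab0 fabric-options member-interfaces ge-0/0/2
user@host# set interfaces fab0 fabric-options member-interfaces ge-0/0/3
user@host# set interfaces fab1 fabric-options member-interfaces ge-7/0/2
user@host# set interfaces fab1 fabric-options member-interfaces ge-7/0/3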
NOTE: If you are connecting each of the fabric links through a switch, you
must enable the jumbo frame feature on the corresponding switch ports. If
both of the fabric links are connected through the same switch, the
RTO-and-probes pair must be in one virtual LAN (VLAN) and the data pair
must be in another VLAN. Here too, the jumbo frame feature must be enabled
on the corresponding switch ports.
Configuration
CLI Quick To quickly configure this example, copy the following commands, paste them into a text
Configuration file, remove any line breaks, change any details necessary to match your network
configuration, and copy and paste the commands into the CLI at the [edit] hierarchy
level, and then enter commit from configuration mode.
{primary:node0}[edit]
set interfaces fab0 fabric-options member-interfaces ge-0/0/1
set interfaces fab1 fabric-options member-interfaces ge-7/0/1
{primary:node0}[edit]
user@host# set interfaces fab0 fabric-options member-interfaces ge-0/0/1
{primary:node0}[edit]
user@host# set interfaces fab1 fabric-options member-interfaces ge-7/0/1
Results From configuration mode, confirm your configuration by entering the show interfaces
command. If the output does not display the intended configuration, repeat the
configuration instructions in this example to correct it.
For brevity, this show command output includes only the configuration that is relevant
to this example. Any other configuration on the system has been replaced with ellipses
(...).
{primary:node0}[edit]
user@host# show interfaces
...
fab0 {
fabric-options {
member-interfaces {
ge-0/0/1;
}
}
}
fab1 {
fabric-options {
member-interfaces {
ge-7/0/1;
}
}
}
If you are done configuring the device, enter commit from configuration mode.
Verification
Action From operational mode, enter the show interfaces terse | match fab command.
{primary:node0}
Action From the CLI, enter the show chassis cluster data-plane interfaces command:
{primary:node1}
user@host> show chassis cluster data-plane interfaces
fab0:
Name Status
ge-2/1/9 up
ge-2/2/5 up
fab1:
Name Status
ge-8/1/9 up
ge-8/2/5 up
Related • Understanding Chassis Cluster Fabric Interfaces for Branch SRX Series on page 57
Documentation
• Understanding Chassis Cluster Fabric Interfaces for High-End SRX Series
Action From the CLI, enter the show chassis cluster data-plane statistics command:
{primary:node1}
user@host> show chassis cluster data-plane statistics
Services Synchronized:
Related • Understanding Chassis Cluster Fabric Interfaces for Branch SRX Series on page 57
Documentation
• Understanding Chassis Cluster Fabric Interfaces for High-End SRX Series
To clear displayed chassis cluster data plane statistics, enter the clear chassis cluster
data-plane statistics command from the CLI:
{primary:node1}
user@host> clear chassis cluster data-plane statistics
Related • Understanding Chassis Cluster Fabric Interfaces for Branch SRX Series on page 57
Documentation
• Understanding Chassis Cluster Fabric Interfaces for High-End SRX Series
• Runs on the Routing Engine and oversees the entire chassis cluster system, including
interfaces on both nodes
• Manages system and data plane resources, including the Packet Forwarding Engine
(PFE) on each node
• Manages routing state, Address Resolution Protocol (ARP) processing, and Dynamic
Host Configuration Protocol (DHCP) processing
• On the primary node (where the Routing Engine is active), control information flows
from the Routing Engine to the local Packet Forwarding Engine.
• Control information flows across the control link to the secondary node's Routing
Engine and Packet Forwarding Engine.
The control plane software running on the master Routing Engine maintains state for
the entire cluster, and only processes running on its node can update state information.
The master Routing Engine synchronizes state for the secondary node and also processes
all host traffic.
NOTE: For a single control link in a chassis cluster, the same control port
should be used for the control link connection and for configuration on both
nodes. For example, if port 0 is configured as a control port on node 0, then
port 0 should be configured as a control port on node 1 with a cable connection
between the two ports. For dual control links, control port 0 on node 0 should
be connected to control port 0 on node 1 and control port 1 should be
connected to control port 1 on node 1. Cross connections, that is, connecting
port 0 on one node to port 1 on the other node and vice versa, do not work.
The control link relies on a proprietary protocol to transmit session state, configuration,
and liveliness signals across the nodes.
For SRX300, SRX320, SRX340, SRX345, and SRX550 devices, the control link uses the
ge-0/0/1 interface.
For details about port and interface usage for management, control, and fabric links, see
Table 6 on page 48.
Action From the CLI, enter the show chassis cluster control-plane statistics command:
{primary:node1}
user@host> show chassis cluster control-plane statistics
To clear displayed chassis cluster control plane statistics, enter the clear chassis cluster
control-plane statistics command from the CLI:
{primary:node1}
user@host> clear chassis cluster control-plane statistics
Chassis clustering provides high availability of interfaces and services through redundancy
groups and primacy within groups.
Redundancy groups are independent units of failover. Each redundancy group fails over
from one node to the other independent of other redundancy groups. When a redundancy
group fails over, all its objects fail over together.
Three things determine the primacy of a redundancy group: the priority configured for
the node, the node ID (in case of tied priorities), and the order in which the node comes
up. If a lower priority node comes up first, then it will assume the primacy for a redundancy
group (and will stay as primary if preempt is not enabled). If preempt is added to a
redundancy group configuration, the device with the higher priority in the group can initiate
a failover to become primary. By default, preemption is disabled. For more information
on preemption, see preempt (Chassis Cluster).
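For example, the following statement, which also appears in the redundancy group example later in this guide, enables preemption for redundancy group 1:

{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 preempt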
A chassis cluster can include many redundancy groups, some of which might be primary
on one node and some of which might be primary on the other. Alternatively, all
redundancy groups can be primary on a single node. One redundancy group's primacy
does not affect another redundancy group's primacy. You can create up to 128 redundancy
groups.
You can configure redundancy groups to suit your deployment. You configure a redundancy
group to be primary on one node and backup on the other node. You specify the node on
which the group is primary by setting priorities for both nodes within a redundancy group
configuration. The node with the higher priority takes precedence, and the redundancy
group's objects on it are active.
If a redundancy group is configured so that both nodes have the same priority, the node
with the lowest node ID number always takes precedence, and the redundancy group is
primary on it. In a two-node cluster, node 0 always takes precedence in a priority tie.
The redundancy group 0 configuration specifies the priority for each node. The following
priority scheme determines redundancy group 0 primacy. Note that the three-second
value is the interval if the default heartbeat-threshold and heartbeat-interval values are
used.
• The node that comes up first (at least three seconds prior to the other node) is the
primary node.
• If both nodes come up at the same time (or within three seconds of each other):
• The node with the higher configured priority is the primary node.
• If there is a tie (either because the same value was configured or because default
settings were used), the node with the lower node ID (node 0) is the primary node.
You cannot enable preemption for redundancy group 0. If you want to change the primary
node for redundancy group 0, you must do a manual failover.
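A manual failover is initiated from operational mode; a minimal sketch, failing redundancy group 0 over to node 1 (the target node is illustrative):

{primary:node0}
user@host> request chassis cluster failover redundancy-group 0 node 1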
Each redundancy group x contains one or more redundant Ethernet interfaces. A redundant
Ethernet interface is a pseudointerface that contains at minimum a pair of physical
Gigabit Ethernet interfaces or a pair of Fast Ethernet interfaces. If a redundancy group is
active on node 0, then the child links of all the associated redundant Ethernet interfaces
on node 0 are active. If the redundancy group fails over to node 1, then the child links of
all redundant Ethernet interfaces on node 1 become active.
The following priority scheme determines redundancy group x primacy, provided preempt
is not configured. If preempt is configured, the node with the higher priority is the primary
node. Note that the three-second value is the interval if the default heartbeat-threshold
and heartbeat-interval values are used.
• The node that comes up first (at least three seconds prior to the other node) is the
primary node.
• If both nodes come up at the same time (or within three seconds of each other):
• The node with the higher configured priority is the primary node.
• If there is a tie (either because the same value was configured or because default
settings were used), the node with the lower node ID (node 0) is the primary node.
On SRX Series chassis clusters, you can configure multiple redundancy groups to
load-share traffic across the cluster. For example, you can configure some redundancy
groups x to be primary on one node and some redundancy groups x to be primary on the
other node. You can also configure a redundancy group x in a one-to-one relationship
with a single redundant Ethernet interface to control which interface traffic flows through.
The traffic for a redundancy group is processed on the node where the redundancy group
is active. Because more than one redundancy group can be configured, it is possible that
the traffic from some redundancy groups will be processed on one node while the traffic
for other redundancy groups is processed on the other node (depending on where the
redundancy group is active). Multiple redundancy groups make it possible for traffic to
arrive over an ingress interface of one redundancy group and over an egress interface
that belongs to another redundancy group. In this situation, the ingress and egress
interfaces might not be active on the same node. When this happens, the traffic is
forwarded over the fabric link to the appropriate node.
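For example, the following minimal sketch (the priority values are illustrative) makes redundancy group 1 primary on node 0 and redundancy group 2 primary on node 1, so that their traffic is load-shared across the two nodes:

{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 node 0 priority 200
user@host# set chassis cluster redundancy-group 1 node 1 priority 100
user@host# set chassis cluster redundancy-group 2 node 0 priority 100
user@host# set chassis cluster redundancy-group 2 node 1 priority 200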
When you configure a redundancy group x, you must specify a priority for each node to
determine the node on which the redundancy group x is primary. The node with the higher
priority is selected as primary. The primacy of a redundancy group x can fail over from
one node to the other. When a redundancy group x fails over to the other node, its
redundant Ethernet interfaces on that node are active and their interfaces are passing
traffic.
NOTE: Some devices have both Gigabit Ethernet ports and Fast Ethernet
ports.
Example: Configuring Chassis Cluster Redundancy Groups
• Requirements on page 73
• Overview on page 73
• Configuration on page 74
• Verification on page 75
Requirements
Before you begin:
1. Set the chassis cluster node ID and cluster ID. See “Example: Setting the Chassis
Cluster Node ID and Cluster ID for Branch SRX Series Devices” on page 51 or Example:
Setting the Chassis Cluster Node ID and Cluster ID for High-End SRX Series Devices.
2. Configure the chassis cluster management interface. See “Example: Configuring the
Chassis Cluster Management Interface” on page 53.
3. Configure the chassis cluster fabric. See “Example: Configuring the Chassis Cluster
Fabric Interfaces” on page 61.
Overview
A chassis cluster redundancy group is an abstract entity that includes and manages a
collection of objects. Each redundancy group acts as an independent unit of failover and
is primary on only one node at a time.
In this example, you create two chassis cluster redundancy groups, 0 and 1. For redundancy group 1, the preempt option is enabled, and the number of gratuitous ARP requests that an interface can send to notify other network devices of its presence after the redundancy group it belongs to has failed over is set to 4.
Configuration
CLI Quick Configuration

To quickly configure this example, copy the following commands, paste them into a text file, remove any line breaks, change any details necessary to match your network configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
[edit]
set chassis cluster redundancy-group 0 node 0 priority 100
set chassis cluster redundancy-group 0 node 1 priority 1
set chassis cluster redundancy-group 1 node 0 priority 100
set chassis cluster redundancy-group 1 node 1 priority 1
set chassis cluster redundancy-group 1 preempt
set chassis cluster redundancy-group 1 gratuitous-arp-count 4
Step-by-Step Procedure

1. Specify a priority for each node in redundancy groups 0 and 1:

{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 0 node 0 priority 100
user@host# set chassis cluster redundancy-group 0 node 1 priority 1
user@host# set chassis cluster redundancy-group 1 node 0 priority 100
user@host# set chassis cluster redundancy-group 1 node 1 priority 1
2. Specify whether a node with a higher priority can initiate a failover to become primary
for the redundancy group.
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 preempt
3. Specify the number of gratuitous ARP requests that an interface can send to notify
other network devices of its presence after the redundancy group it belongs to has
failed over.
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 gratuitous-arp-count 4
Results From configuration mode, confirm your configuration by entering the show chassis cluster command. If the output does not display the intended configuration, repeat the configuration instructions in this example to correct it.
{primary:node0}[edit]
user@host# show chassis cluster
chassis {
cluster {
redundancy-group 0 {
node 0 priority 100;
node 1 priority 1;
}
redundancy-group 1 {
node 0 priority 100;
node 1 priority 1;
preempt;
gratuitous-arp-count 4;
}
}
}
If you are done configuring the device, enter commit from configuration mode.
Verification
Action From operational mode, enter the show chassis cluster status redundancy-group command.
{primary:node0}
user@host>show chassis cluster status redundancy-group 1
Cluster ID: 1
Node Priority Status Preempt Manual failover
The maximum number of redundant Ethernet interfaces that you can configure varies, depending on the device type you are using, as shown in Table 8 on page 78.

Table 8: Maximum Number of Redundant Ethernet Interfaces Allowed

Device                            Maximum Number of reth Interfaces
SRX300, SRX320, SRX340, SRX345    128
SRX550                            58
SRX1500                           128
A redundant Ethernet interface's child interface is associated with the redundant Ethernet
interface as part of the child interface configuration. The redundant Ethernet interface
child interface inherits most of its configuration from its parent.
A redundant Ethernet interface inherits its failover properties from the redundancy group
x that it belongs to. A redundant Ethernet interface remains active as long as its primary
child interface is available or active. For example, if reth0 is associated with redundancy
group 1 and redundancy group 1 is active on node 0, then reth0 is up as long as the node
0 child of reth0 is up.
Point-to-Point Protocol over Ethernet (PPPoE) over redundant Ethernet (reth) interfaces is supported on SRX300, SRX320, SRX340, SRX345, and SRX550 devices in chassis cluster mode. This feature allows an existing PPPoE session to continue without starting a new PPPoE session in the event of a failover.
NOTE: On all branch SRX Series devices, the number of child interfaces per node on a reth interface is restricted to eight, and the number of child interfaces per reth interface is restricted to eight.
For example, the following snippets show an interface configured with an IP address, and then the same interface configured as a child of reth2, with the address configuration moved to the reth interface:
ge-2/0/2 {
unit 0 {
family inet {
address 1.1.1.1/24;
}
}
}
interfaces {
ge-2/0/2 {
gigether-options {
redundant-parent reth2;
}
}
reth2 {
redundant-ether-options {
redundancy-group 1;
}
unit 0 {
family inet {
address 1.1.1.1/24;
}
}
}
}
Related Documentation

• Example: Configuring Chassis Cluster Redundant Ethernet Interfaces for IPv4 and IPv6 Addresses on page 79
Example: Configuring Chassis Cluster Redundant Ethernet Interfaces for IPv4 and IPv6
Addresses
This example shows how to configure chassis cluster redundant Ethernet interfaces. A
redundant Ethernet interface is a pseudointerface that contains two or more physical
interfaces, with at least one from each node of the cluster.
• Requirements on page 80
• Overview on page 80
• Configuration on page 80
• Verification on page 83
Requirements
Before you begin:
• Understand how to set the chassis cluster node ID and cluster ID. See “Example: Setting
the Chassis Cluster Node ID and Cluster ID for Branch SRX Series Devices” on page 51
or Example: Setting the Chassis Cluster Node ID and Cluster ID for High-End SRX Series
Devices.
• Understand how to set the chassis cluster fabric. See “Example: Configuring the Chassis
Cluster Fabric Interfaces” on page 61.
• Understand how to set the chassis cluster node redundancy groups. See “Example:
Configuring Chassis Cluster Redundancy Groups” on page 73.
Overview
After physical interfaces have been assigned to the redundant Ethernet interface, you
set the configuration that pertains to them at the level of the redundant Ethernet interface,
and each of the child interfaces inherits the configuration.
If multiple child interfaces are present, then the speed of all the child interfaces must be
the same.
Configuration
CLI Quick Configuration

To quickly configure this example, copy the following commands, paste them into a text file, remove any line breaks, change any details necessary to match your network configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
{primary:node0}[edit]
set interfaces ge-0/0/0 gigether-options redundant-parent reth1
set interfaces ge-7/0/0 gigether-options redundant-parent reth1
set interfaces fe-1/0/0 fast-ether-options redundant-parent reth2
set interfaces fe-8/0/0 fast-ether-options redundant-parent reth2
set interfaces reth1 redundant-ether-options redundancy-group 1
set interfaces reth1 unit 0 family inet mtu 1500
set interfaces reth1 unit 0 family inet address 10.1.1.3/24
set security zones security-zone Trust interfaces reth1.0
To quickly configure this example, copy the following commands, paste them into a text
file, remove any line breaks, change any details necessary to match your network
configuration, and copy and paste the commands into the CLI at the [edit] hierarchy
level, and then enter commit from configuration mode.
{primary:node0}[edit]
set interfaces ge-0/0/0 gigether-options redundant-parent reth1
set interfaces ge-7/0/0 gigether-options redundant-parent reth1
set interfaces fe-1/0/0 fast-ether-options redundant-parent reth2
set interfaces fe-8/0/0 fast-ether-options redundant-parent reth2
set interfaces reth2 redundant-ether-options redundancy-group 1
set interfaces reth2 unit 0 family inet6 mtu 1500
set interfaces reth2 unit 0 family inet6 address 2010:2010:201::2/64
set security zones security-zone Trust interfaces reth2.0
Step-by-Step Procedure

To configure redundant Ethernet interfaces for IPv4:

1. Specify the redundant parent of the Gigabit Ethernet interfaces:

{primary:node0}[edit]
user@host# set interfaces ge-0/0/0 gigether-options redundant-parent reth1
user@host# set interfaces ge-7/0/0 gigether-options redundant-parent reth1

2. Specify the redundant parent of the Fast Ethernet interfaces:

{primary:node0}[edit]
user@host# set interfaces fe-1/0/0 fast-ether-options redundant-parent reth2
user@host# set interfaces fe-8/0/0 fast-ether-options redundant-parent reth2

3. Assign reth1 to a redundancy group:

{primary:node0}[edit]
user@host# set interfaces reth1 redundant-ether-options redundancy-group 1

4. Set the MTU on the reth1 interface:

{primary:node0}[edit]
user@host# set interfaces reth1 unit 0 family inet mtu 1500

NOTE: The maximum transmission unit (MTU) set on the reth interface can be different from the MTU on the child interface.

5. Assign an IPv4 address to reth1:

{primary:node0}[edit]
user@host# set interfaces reth1 unit 0 family inet address 10.1.1.3/24

6. Assign reth1.0 to a security zone:

{primary:node0}[edit]
user@host# set security zones security-zone Trust interfaces reth1.0

To configure redundant Ethernet interfaces for IPv6:

1. Specify the redundant parent of the Gigabit Ethernet interfaces:

{primary:node0}[edit]
user@host# set interfaces ge-0/0/0 gigether-options redundant-parent reth1
user@host# set interfaces ge-7/0/0 gigether-options redundant-parent reth1

2. Specify the redundant parent of the Fast Ethernet interfaces:

{primary:node0}[edit]
user@host# set interfaces fe-1/0/0 fast-ether-options redundant-parent reth2
user@host# set interfaces fe-8/0/0 fast-ether-options redundant-parent reth2

3. Assign reth2 to a redundancy group:

{primary:node0}[edit]
user@host# set interfaces reth2 redundant-ether-options redundancy-group 1

4. Set the MTU on the reth2 interface:

{primary:node0}[edit]
user@host# set interfaces reth2 unit 0 family inet6 mtu 1500

5. Assign an IPv6 address to reth2:

{primary:node0}[edit]
user@host# set interfaces reth2 unit 0 family inet6 address 2010:2010:201::2/64

6. Assign reth2.0 to a security zone:

{primary:node0}[edit]
user@host# set security zones security-zone Trust interfaces reth2.0
Results From configuration mode, confirm your configuration by entering the show interfaces command. If the output does not display the intended configuration, repeat the configuration instructions in this example to correct it.
For brevity, this show command output includes only the configuration that is relevant
to this example. Any other configuration on the system has been replaced with ellipses
(...).
{primary:node0}[edit]
user@host# show interfaces
interfaces {
...
fe-1/0/0 {
fastether-options {
redundant-parent reth2;
}
}
fe-8/0/0 {
fastether-options {
redundant-parent reth2;
}
}
ge-0/0/0 {
gigether-options {
redundant-parent reth1;
}
}
ge-7/0/0 {
gigether-options {
redundant-parent reth1;
}
}
reth1 {
redundant-ether-options {
redundancy-group 1;
}
unit 0 {
family inet {
mtu 1500;
address 10.1.1.3/24;
}
}
}
reth2 {
redundant-ether-options {
redundancy-group 1;
}
unit 0 {
family inet6 {
mtu 1500;
address 2010:2010:201::2/64;
}
}
}
...
}
If you are done configuring the device, enter commit from configuration mode.
Verification
Confirm that the configuration is working properly.
Purpose Verify the configuration of the chassis cluster redundant Ethernet interfaces.
Action From operational mode, enter the show interfaces | match reth1 command:
{primary:node0}
user@host> show interfaces | match reth1
ge-0/0/0.0 up down aenet --> reth1.0
ge-7/0/0.0 up down aenet --> reth1.0
reth1 up down
reth1.0 up down inet 10.1.1.3/24
Purpose Verify information about the control interface in a chassis cluster configuration.
Action From operational mode, enter the show chassis cluster interfaces command:
{primary:node0}
user@host> show chassis cluster interfaces
Control interfaces:
Index Interface Monitored-Status Internal-SA
0 em0 Down Disabled
1 em1 Down Disabled
Fabric interfaces:
Name Child-interface Status
(Physical/Monitored)
fab0
fab0
Redundant-pseudo-interface Information:
Name Status Redundancy-group
reth1 Up 1
This example shows how to specify the number of redundant Ethernet interfaces for a chassis cluster. You must configure the redundant Ethernet interface count so that the redundant Ethernet interfaces that you configure are recognized.
• Requirements on page 85
• Overview on page 85
• Configuration on page 85
• Verification on page 85
Requirements
Before you begin, set the chassis cluster ID and chassis cluster node ID. See “Example:
Setting the Chassis Cluster Node ID and Cluster ID for Branch SRX Series Devices” on
page 51 or Example: Setting the Chassis Cluster Node ID and Cluster ID for High-End SRX
Series Devices.
Overview
Before you configure redundant Ethernet interfaces for a chassis cluster, you must specify
the number of redundant Ethernet interfaces for the chassis cluster.
In this example, you set the number of redundant Ethernet interfaces for a chassis cluster
to 2.
Configuration
Step-by-Step Procedure

To set the number of redundant Ethernet interfaces for a chassis cluster:
1. Specify the number of redundant Ethernet interfaces:
{primary:node0}[edit]
user@host# set chassis cluster reth-count 2
[edit]
user@host# commit
Verification
Action To verify the configuration, enter the show configuration chassis cluster command.
Related Documentation

• Example: Configuring Chassis Cluster Redundant Ethernet Interfaces for IPv4 and IPv6 Addresses on page 79
• Example: Configuring an SRX Series Services Gateway for the Branch as a Chassis
Cluster on page 87
• Verifying a Chassis Cluster Configuration on page 99
• Verifying Chassis Cluster Statistics on page 99
• Clearing Chassis Cluster Statistics on page 101
Example: Configuring an SRX Series Services Gateway for the Branch as a Chassis
Cluster
This example shows how to set up chassis clustering on an SRX Series for the branch
device.
• Requirements on page 87
• Overview on page 88
• Configuration on page 89
• Verification on page 95
Requirements
Before you begin:
• Physically connect the two devices and ensure that they are the same models. For
example, on the SRX1500 Services Gateway, connect the dedicated control ports on
node 0 and node 1.
• Set the two devices to cluster mode and reboot the devices. You must enter the
following operational mode commands on both devices, for example:
• On node 0:
user@host> set chassis cluster cluster-id 1 node 0 reboot
• On node 1:
user@host> set chassis cluster cluster-id 1 node 1 reboot
The cluster-id is the same on both devices, but the node ID must be different because
one device is node 0 and the other device is node 1. The range for the cluster-id is 0
through 255 and setting it to 0 is equivalent to disabling cluster mode.
• After clustering occurs for the devices, continuing with the SRX1500 Services Gateway
example, the ge-0/0/0 interface on node 1 changes to ge-7/0/0.
NOTE:
After the reboot, the following interfaces are assigned and repurposed to
form a cluster:
• For SRX300 and SRX320 devices, ge-0/0/0 becomes fxp0 and is used
for individual management of the chassis cluster.
See “Understanding SRX Series Chassis Cluster Slot Numbering and Physical Port and
Logical Interface Naming” on page 47 for complete mapping of the SRX Series devices.
From this point forward, configuration of the cluster is synchronized between the node
members and the two separate devices function as one device.
Overview
This example shows how to set up chassis clustering on an SRX Series device, using the SRX1500 device as an example.
Node 1 renumbers its interfaces by adding the total number of system FPCs to the original FPC number of the interface. See Table 9 on page 89 for interface renumbering on the SRX Series device.
After clustering is enabled, the system creates the fxp0, fxp1, and em0 interfaces. The physical interfaces to which fxp0, fxp1, and em0 are mapped depend on the device and are not user defined. However, the fab interface is user defined.
Configuration
CLI Quick Configuration

To quickly configure a chassis cluster on an SRX1500 Services Gateway, copy the following commands and paste them into the CLI:
{primary:node0}[edit]
set groups node0 system host-name srx1500-1
set groups node0 interfaces fxp0 unit 0 family inet address 192.16.35.46/24
set groups node1 system host-name srx1500-2
set groups node1 interfaces fxp0 unit 0 family inet address 192.16.35.47/24
set groups node0 system backup-router <backup next-hop from fxp0> destination
<management network/mask>
set groups node1 system backup-router <backup next-hop from fxp0> destination
<management network/mask>
set apply-groups "${node}"
set interfaces fab0 fabric-options member-interfaces ge-0/0/1
set interfaces fab1 fabric-options member-interfaces ge-7/0/1
set chassis cluster redundancy-group 0 node 0 priority 100
set chassis cluster redundancy-group 0 node 1 priority 1
set chassis cluster redundancy-group 1 node 0 priority 100
set chassis cluster redundancy-group 1 node 1 priority 1
set chassis cluster redundancy-group 1 interface-monitor ge-0/0/3 weight 255
set chassis cluster redundancy-group 1 interface-monitor ge-0/0/2 weight 255
set chassis cluster redundancy-group 1 interface-monitor ge-7/0/3 weight 255
set chassis cluster redundancy-group 1 interface-monitor ge-7/0/2 weight 255
set chassis cluster reth-count 2
set interfaces ge-0/0/2 gigether-options redundant-parent reth1
set interfaces ge-7/0/2 gigether-options redundant-parent reth1
set interfaces reth1 redundant-ether-options redundancy-group 1
set interfaces reth1 unit 0 family inet address 1.2.0.233/24
set interfaces ge-0/0/3 gigether-options redundant-parent reth0
set interfaces ge-7/0/3 gigether-options redundant-parent reth0
set interfaces reth0 redundant-ether-options redundancy-group 1
set interfaces reth0 unit 0 family inet address 10.16.8.1/24
set security zones security-zone Untrust interfaces reth1.0
set security zones security-zone Trust interfaces reth0.0
If you are configuring a Branch SRX Series device, see Table 10 on page 90 for command
and interface settings for your device and substitute these commands into your CLI.
Table 10: SRX Series Services Gateways for the Branch Interface Settings

Command                                  SRX300               SRX320               SRX340               SRX550
set chassis cluster redundancy-group 1   ge-0/0/3 weight 255  ge-0/0/3 weight 255  ge-0/0/3 weight 255  ge-1/0/0 weight 255
interface-monitor
set chassis cluster redundancy-group 1   ge-0/0/4 weight 255  ge-0/0/4 weight 255  ge-0/0/4 weight 255  ge-10/0/0 weight 255
interface-monitor
set chassis cluster redundancy-group 1   ge-1/0/3 weight 255  ge-3/0/3 weight 255  ge-5/0/3 weight 255  ge-1/0/1 weight 255
interface-monitor
set chassis cluster redundancy-group 1   ge-1/0/4 weight 255  ge-3/0/4 weight 255  ge-5/0/4 weight 255  ge-10/0/1 weight 255
interface-monitor
Step-by-Step Procedure

The following example requires you to navigate various levels in the configuration hierarchy. For instructions on how to do that, see Using the CLI Editor in Configuration Mode in the CLI User Guide.
NOTE: Perform Steps 1 through 5 on the primary device (node 0). They are
automatically copied over to the secondary device (node 1) when you execute
a commit command. The configurations are synchronized because the control
link and fab link interfaces are activated. To verify the configurations, use the
show interfaces terse command and review the output.
1. Set up hostnames and management IP addresses for each device using configuration
groups. These configurations are specific to each device and are unique to its specific
node.
Set the default route and backup router for each node.
user@host# set groups node0 system backup-router <backup next-hop from fxp0>
destination <management network/mask>
user@host# set groups node1 system backup-router <backup next-hop from fxp0>
destination <management network/mask>
Set the apply-group command so that the individual configurations for each node
set by the previous commands are applied only to that node.
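For example, using the values from the CLI quick configuration above:

{primary:node0}[edit]
user@host# set groups node0 system host-name srx1500-1
user@host# set groups node0 interfaces fxp0 unit 0 family inet address 192.16.35.46/24
user@host# set groups node1 system host-name srx1500-2
user@host# set groups node1 interfaces fxp0 unit 0 family inet address 192.16.35.47/24
user@host# set apply-groups "${node}"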
2. Define the interfaces used for the fab connection (data plane links for RTO sync)
by using physical ports ge-0/0/1 from each node. These interfaces must be
connected back-to-back, or through a Layer 2 infrastructure.
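For example, using the fabric ports from the CLI quick configuration above (on the SRX1500, node 1's ge-0/0/1 is renumbered ge-7/0/1):

{primary:node0}[edit]
user@host# set interfaces fab0 fabric-options member-interfaces ge-0/0/1
user@host# set interfaces fab1 fabric-options member-interfaces ge-7/0/1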
3. Set up redundancy group 0 for the Routing Engine failover properties, and set up
redundancy group 1 (all interfaces are in one redundancy group in this example) to
define the failover properties for the redundant Ethernet interfaces.
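For example, from the CLI quick configuration above:

{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 0 node 0 priority 100
user@host# set chassis cluster redundancy-group 0 node 1 priority 1
user@host# set chassis cluster redundancy-group 1 node 0 priority 100
user@host# set chassis cluster redundancy-group 1 node 1 priority 1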
4. Set up interface monitoring to monitor the health of the interfaces and trigger
redundancy group failover.
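For example, from the CLI quick configuration above:

{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-0/0/3 weight 255
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-0/0/2 weight 255
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-7/0/3 weight 255
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-7/0/2 weight 255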
5. Set up the redundant Ethernet (reth) interfaces and assign the redundant interface
to a zone.
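For example, from the CLI quick configuration above:

{primary:node0}[edit]
user@host# set chassis cluster reth-count 2
user@host# set interfaces ge-0/0/2 gigether-options redundant-parent reth1
user@host# set interfaces ge-7/0/2 gigether-options redundant-parent reth1
user@host# set interfaces reth1 redundant-ether-options redundancy-group 1
user@host# set interfaces reth1 unit 0 family inet address 1.2.0.233/24
user@host# set interfaces ge-0/0/3 gigether-options redundant-parent reth0
user@host# set interfaces ge-7/0/3 gigether-options redundant-parent reth0
user@host# set interfaces reth0 redundant-ether-options redundancy-group 1
user@host# set interfaces reth0 unit 0 family inet address 10.16.8.1/24
user@host# set security zones security-zone Untrust interfaces reth1.0
user@host# set security zones security-zone Trust interfaces reth0.0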
Results From operational mode, confirm your configuration by entering the show configuration
command. If the output does not display the intended configuration, repeat the
configuration instructions in this example to correct it.
For brevity, this show command output includes only the configuration that is relevant
to this example. Any other configuration on the system has been replaced with ellipses
(...).
reth1 {
redundant-ether-options {
redundancy-group 1;
}
unit 0 {
family inet {
address 1.2.0.233/24;
}
}
}
}
...
security {
zones {
security-zone Untrust {
interfaces {
reth1.0;
}
}
security-zone Trust {
interfaces {
reth0.0;
}
}
}
policies {
from-zone Trust to-zone Untrust {
policy 1 {
match {
source-address any;
destination-address any;
application any;
}
then {
permit;
}
}
}
}
}
If you are done configuring the device, enter commit from configuration mode.
Verification
Confirm that the configuration is working properly.
Purpose Verify the chassis cluster status, failover status, and redundancy group information.
Action From operational mode, enter the show chassis cluster status command.
{primary:node0}
user@host> show chassis cluster status
Cluster ID: 1
Node Priority Status Preempt Manual failover
Action From operational mode, enter the show chassis cluster interfaces command.
{primary:node0}
user@host> show chassis cluster interfaces
Control link name: em0
Redundant-ethernet Information:
Name Status Redundancy-group
reth0 Up 1
reth1 Up 1
Interface Monitoring:
Interface Weight Status Redundancy-group
ge-7/0/3 255 Up 1
ge-7/0/2 255 Up 1
ge-0/0/2 255 Up 1
ge-0/0/3 255 Up 1
Purpose Verify information about the statistics of the different objects being synchronized, the
fabric and control interface hellos, and the status of the monitored interfaces in the
cluster.
Action From operational mode, enter the show chassis cluster statistics command.
{primary:node0}
user@host> show chassis cluster statistics
Purpose Verify information about chassis cluster control plane statistics (heartbeats sent and
received) and the fabric link statistics (probes sent and received).
Action From operational mode, enter the show chassis cluster control-plane statistics command.
{primary:node0}
user@host> show chassis cluster control-plane statistics
Purpose Verify information about the number of RTOs sent and received for services.
Action From operational mode, enter the show chassis cluster data-plane statistics command.
{primary:node0}
user@host> show chassis cluster data-plane statistics
Services Synchronized:
Purpose Verify the state and priority of both nodes in a cluster and information about whether
the primary node has been preempted or whether there has been a manual failover.
Action From operational mode, enter the show chassis cluster status redundancy-group command.
{primary:node0}
user@host> show chassis cluster status redundancy-group 1
Cluster ID: 1
Node Priority Status Preempt Manual failover
Purpose Use these commands to identify any chassis cluster issues. You should run these commands on both nodes.
Action From the CLI, enter the show chassis cluster ? command:
{primary:node1}
user@host> show chassis cluster ?
Possible completions:
interfaces Display chassis-cluster interfaces
statistics Display chassis-cluster traffic statistics
status Display chassis-cluster status
Action From the CLI, enter the show chassis cluster statistics command:
{primary:node1}
user@host> show chassis cluster statistics
To clear displayed information about chassis cluster services and interfaces, enter the
clear chassis cluster statistics command from the CLI:
{primary:node1}
user@host> clear chassis cluster statistics
For a redundancy group to automatically failover to another node, its interfaces must be
monitored. When you configure a redundancy group, you can specify a set of interfaces
that the redundancy group is to monitor for status (or “health”) to determine whether
the interface is up or down. A monitored interface can be a child interface of any of its
redundant Ethernet interfaces. When you configure an interface for a redundancy group
to monitor, you give it a weight.
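For example, the following statement assigns a weight of 255 to a monitored interface (the interface name and weight are illustrative):

{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-0/0/3 weight 255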
Every redundancy group has a threshold tolerance value initially set to 255. When an
interface monitored by a redundancy group becomes unavailable, its weight is subtracted
from the redundancy group's threshold. When a redundancy group's threshold reaches
0, it fails over to the other node. For example, if redundancy group 1 was primary on node
0, on the threshold-crossing event, redundancy group 1 becomes primary on node 1. In
this case, all the child interfaces of redundancy group 1's redundant Ethernet interfaces
begin handling traffic.
A redundancy group failover occurs because the cumulative weight of the redundancy
group's monitored interfaces has brought its threshold value to 0. When the monitored
interfaces of a redundancy group on both nodes reach their thresholds at the same time,
the redundancy group is primary on the node with the lower node ID, in this case node 0.
NOTE:
• If you want to dampen the failovers occurring because of interface
monitoring failures, use the hold-down-interval statement.
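For example, the following statement (the 300-second value is illustrative) keeps the redundancy group from failing over again until the interval has passed:

{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 hold-down-interval 300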
Requirements
Before you begin, create a redundancy group. See “Example: Configuring Chassis Cluster
Redundancy Groups” on page 73.
Overview
To track how the redundancy group threshold decreases after a monitored interface goes down, you configure the system to monitor the health of the interfaces belonging to a redundancy group. When you assign a weight to an interface to be monitored, the system monitors the interface for availability. If a physical interface fails, its weight is deducted from the corresponding redundancy group's threshold. Every redundancy group has a threshold of 255. If the threshold reaches 0, a failover is triggered, even if the redundancy group is in manual failover mode and the preempt option is not enabled.
In this example, you observe how the remaining threshold of a redundancy group changes when monitored interfaces go down. You configure two interfaces from each node and map them to Redundancy Group 1 (RG1), each with a different weight: 130 and 140 for the node 0 interfaces, and 150 and 120 for the node 1 interfaces. You also configure one interface from each node and map the interfaces to Redundancy Group 2 (RG2), each with the default weight of 255.
Figure 15 on page 108 illustrates the network topology used in this example.
Configuration
CLI Quick Configuration

To quickly configure this example, copy the following commands, paste them into a text file, remove any line breaks, change any details necessary to match your network configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
Step-by-Step Procedure

The following example requires you to navigate various levels in the configuration hierarchy. For instructions on how to do that, see Using the CLI Editor in Configuration Mode in the Junos OS CLI User Guide.
3. Set up redundancy group 0 for the Routing Engine failover properties, and set up RG1 and RG2 to define the failover properties for the redundant Ethernet interfaces.
4. Set up interface monitoring to monitor the health of the interfaces and trigger
redundancy group failover.
NOTE: Redundancy group failover occurs only after the redundancy group's threshold reaches zero.
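For example, using the weights described in the overview and shown in the Results section below:

{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-0/0/1 weight 130
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-0/0/2 weight 140
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-8/0/1 weight 150
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-8/0/2 weight 120
user@host# set chassis cluster redundancy-group 2 interface-monitor ge-0/0/3 weight 255
user@host# set chassis cluster redundancy-group 2 interface-monitor ge-8/0/3 weight 255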
5. Set up the redundant Ethernet (reth) interfaces and assign them to a zone.
[edit interfaces]
user@host# set ge-0/0/1 gigether-options redundant-parent reth0
user@host# set ge-0/0/2 gigether-options redundant-parent reth1
user@host# set ge-0/0/3 gigether-options redundant-parent reth2
user@host# set ge-8/0/1 gigether-options redundant-parent reth0
Results From configuration mode, confirm your configuration by entering the show chassis and
show interfaces commands. If the output does not display the intended configuration,
repeat the configuration instructions in this example to correct it.
[edit]
user@host# show chassis
cluster {
traceoptions {
flag all;
}
reth-count 3;
node 0; ## Warning: 'node' is deprecated
node 1; ## Warning: 'node' is deprecated
redundancy-group 0 {
node 0 priority 254;
node 1 priority 1;
}
redundancy-group 1 {
node 0 priority 200;
node 1 priority 100;
interface-monitor {
ge-0/0/1 weight 130;
ge-0/0/2 weight 140;
ge-8/0/1 weight 150;
ge-8/0/2 weight 120;
}
}
redundancy-group 2 {
node 0 priority 200;
node 1 priority 100;
interface-monitor {
ge-0/0/3 weight 255;
ge-8/0/3 weight 255;
}
}
}
[edit]
user@host# show interfaces
ge-0/0/1 {
gigether-options {
redundant-parent reth0;
}
}
ge-0/0/2 {
gigether-options {
redundant-parent reth1;
}
}
ge-0/0/3 {
gigether-options {
redundant-parent reth2;
}
}
ge-8/0/1 {
gigether-options {
redundant-parent reth0;
}
}
ge-8/0/2 {
gigether-options {
redundant-parent reth1;
}
}
ge-8/0/3 {
gigether-options {
redundant-parent reth2;
}
}
reth0 {
redundant-ether-options {
redundancy-group 1;
}
unit 0 {
family inet {
address 10.1.1.1/8;
}
}
}
reth1 {
redundant-ether-options {
redundancy-group 1;
}
unit 0 {
family inet {
address 11.1.1.1/8;
}
}
}
reth2 {
redundant-ether-options {
redundancy-group 2;
}
unit 0 {
family inet {
address 12.1.1.1/8;
}
}
}
If you are done configuring the device, enter commit from configuration mode.
Verification
The following sections walk you through the process of verifying and (in some cases) troubleshooting the interface status. The process shows you how to check the status of each interface in the redundancy group, check the interfaces again after they have been disabled, and look at details about each interface, until you have cycled through all the interfaces in the redundancy group.
In this example, you verify how the remaining threshold of a redundancy group changes when monitored interfaces go down. You configure two interfaces from each node and map them to RG1, each with a different weight: 130 and 140 for the node 0 interfaces, and 150 and 120 for the node 1 interfaces. You configure one interface from each node and map the interfaces to RG2, each with the default weight of 255.
• Verifying Chassis Cluster Interfaces After Enabling Interface ge-0/0/3 on page 131
• Verifying Chassis Cluster Information After Enabling Interface ge-0/0/3 on page 132
Purpose Verify the chassis cluster status, failover status, and redundancy group information.
Action From operational mode, enter the show chassis cluster status command.
{primary:node0}
user@host> show chassis cluster status
Monitor Failure codes:
CS Cold Sync monitoring FL Fabric Connection monitoring
GR GRES monitoring HW Hardware monitoring
IF Interface monitoring IP IP monitoring
LB Loopback monitoring MB Mbuf monitoring
NH Nexthop monitoring NP NPC monitoring
SP SPU monitoring SM Schedule monitoring
CF Config Sync monitoring
Cluster ID: 2
Node Priority Status Preempt Manual Monitor-failures
Meaning Use the show chassis cluster status command to confirm that devices in the chassis
cluster are communicating properly, with one device functioning as the primary node
and the other as the secondary node.
Purpose Verify information about the statistics of the different objects being synchronized, the
fabric and control interface hellos, and the status of the monitoring interfaces in the
cluster.
Action From operational mode, enter the show chassis cluster interfaces command.
{primary:node0}
user@host> show chassis cluster interfaces
Control link status: Up
Control interfaces:
Index Interface Monitored-Status Internal-SA
0 em0 Up Disabled
1 em1 Down Disabled
Fabric interfaces:
Name Child-interface Status
(Physical/Monitored)
fab0 ge-0/0/0 Up / Up
fab0
fab1 ge-8/0/0 Up / Up
fab1
Redundant-ethernet Information:
Name Status Redundancy-group
reth0 Up 1
reth1 Up 1
reth2 Up 2
Redundant-pseudo-interface Information:
Name Status Redundancy-group
lo0 Up 0
Interface Monitoring:
Interface Weight Status Redundancy-group
ge-8/0/2 120 Up 1
ge-8/0/1 150 Up 1
ge-0/0/2 140 Up 1
ge-0/0/1 130 Up 1
ge-8/0/3 255 Up 2
ge-0/0/3 255 Up 2
Meaning The sample output confirms that the monitoring interfaces are up and that the weight of each monitored interface is displayed as configured. These values do not change when an interface goes up or down; only the redundancy group's weight changes, and it can be viewed by using the show chassis cluster information command.
Purpose Verify information about the statistics of the different objects being synchronized, the
fabric and control interface hellos, and the status of the monitoring interfaces in the
cluster.
Action From operational mode, enter the show chassis cluster information command.
{primary:node0}
user@host> show chassis cluster information
node0:
--------------------------------------------------------------------------
Redundancy Group Information:
node1:
--------------------------------------------------------------------------
Redundancy Group Information:
Meaning The sample output confirms that node 0 and node 1 are healthy, and the green LED on
the device indicates that there are no failures. Also, the default weight of the redundancy
group (255) is displayed. The default weight is deducted whenever an interface mapped
to the corresponding redundancy group goes down.
Refer to subsequent verification sections to see how the redundancy group value varies
when a monitoring interface goes down or comes up.
Action From configuration mode, enter the set interfaces ge-0/0/1 disable command.
{primary:node0}
user@host# set interfaces ge-0/0/1 disable
user@host# commit
node0:
configuration check succeeds
node1:
commit complete
node0:
commit complete
{primary:node0}
user@host# show interfaces ge-0/0/1
disable;
gigether-options {
redundant-parent reth0;
}
Verifying Chassis Cluster Status After Disabling Interface ge-0/0/1 of RG1 in Node
0 with a Weight of 130
Purpose Verify the chassis cluster status, failover status, and redundancy group information.
Action From operational mode, enter the show chassis cluster status command.
{primary:node0}
user@host> show chassis cluster status
Monitor Failure codes:
CS Cold Sync monitoring FL Fabric Connection monitoring
GR GRES monitoring HW Hardware monitoring
IF Interface monitoring IP IP monitoring
LB Loopback monitoring MB Mbuf monitoring
NH Nexthop monitoring NP NPC monitoring
SP SPU monitoring SM Schedule monitoring
CF Config Sync monitoring
Cluster ID: 2
Node Priority Status Preempt Manual Monitor-failures
Meaning Use the show chassis cluster status command to confirm that devices in the chassis
cluster are communicating properly, with one device functioning as the primary node
and the other as the secondary node.
Purpose Verify information about the statistics of the different objects being synchronized, the
fabric and control interface hellos, and the status of the monitoring interfaces in the
cluster.
Action From operational mode, enter the show chassis cluster interfaces command.
{primary:node0}
user@host> show chassis cluster interfaces
Control link status: Up
Control interfaces:
Index Interface Monitored-Status Internal-SA
0 em0 Up Disabled
1 em1 Down Disabled
Fabric interfaces:
Name Child-interface Status
(Physical/Monitored)
fab0 ge-0/0/0 Up / Up
fab0
fab1 ge-8/0/0 Up / Up
fab1
Redundant-ethernet Information:
Name Status Redundancy-group
reth0 Down 1
reth1 Up 1
reth2 Up 2
Redundant-pseudo-interface Information:
Name Status Redundancy-group
lo0 Up 0
Interface Monitoring:
Interface Weight Status Redundancy-group
ge-8/0/2 120 Up 1
ge-8/0/1 150 Up 1
ge-0/0/2 140 Up 1
ge-0/0/1 130 Down 1
ge-8/0/3 255 Up 2
ge-0/0/3 255 Up 2
Meaning The sample output confirms that monitoring interface ge-0/0/1 is down.
Purpose Verify information about the statistics of the different objects being synchronized, the
fabric and control interface hellos, and the status of the monitoring interfaces in the
cluster.
Action From operational mode, enter the show chassis cluster information command.
{primary:node0}
user@host> show chassis cluster information
node0:
--------------------------------------------------------------------------
Redundancy Group Information:
Failure Information:
node1:
--------------------------------------------------------------------------
Redundancy Group Information:
Meaning The sample output confirms that in node 0, the RG1 weight is reduced to 125 (that is, 255
minus 130) because monitoring interface ge-0/0/1 (weight of 130) went down. The
monitoring status is unhealthy, the device LED is amber, and the interface status of
ge-0/0/1 is down.
NOTE: If interface ge-0/0/1 is brought back up, the weight of RG1 in node 0
becomes 255. Conversely, if interface ge-0/0/2 is also disabled, the weight
of RG1 in node 0 becomes 0 or less (in this example, 125 minus 140 = -15) and
triggers failover, as indicated in the next verification section.
Action From configuration mode, enter the set interfaces ge-0/0/2 disable command.
{primary:node0}
user@host# set interfaces ge-0/0/2 disable
user@host# commit
node0:
configuration check succeeds
node1:
commit complete
node0:
commit complete
{primary:node0}
user@host# show interfaces ge-0/0/2
disable;
gigether-options {
redundant-parent reth1;
}
Purpose Verify the chassis cluster status, failover status, and redundancy group information.
Action From operational mode, enter the show chassis cluster status command.
{primary:node0}
user@host> show chassis cluster status
Monitor Failure codes:
CS Cold Sync monitoring FL Fabric Connection monitoring
GR GRES monitoring HW Hardware monitoring
IF Interface monitoring IP IP monitoring
LB Loopback monitoring MB Mbuf monitoring
NH Nexthop monitoring NP NPC monitoring
SP SPU monitoring SM Schedule monitoring
CF Config Sync monitoring
Cluster ID: 2
Node Priority Status Preempt Manual Monitor-failures
Meaning Use the show chassis cluster status command to confirm that devices in the chassis cluster are communicating properly, with one device functioning as the primary node and the other as the secondary node. For RG1, you see an interface monitoring failure, because both interfaces mapped to RG1 on node 0 failed interface monitoring.
Action From operational mode, enter the show chassis cluster interfaces command.
{primary:node0}
user@host> show chassis cluster interfaces
Control link status: Up
Control interfaces:
Index Interface Monitored-Status Internal-SA
0 em0 Up Disabled
1 em1 Down Disabled
Fabric interfaces:
Name Child-interface Status
(Physical/Monitored)
fab0 ge-0/0/0 Up / Up
fab0
fab1 ge-8/0/0 Up / Up
fab1
Redundant-ethernet Information:
Name Status Redundancy-group
reth0 Up 1
reth1 Up 1
reth2 Up 2
Redundant-pseudo-interface Information:
Name Status Redundancy-group
lo0 Up 0
Interface Monitoring:
Interface Weight Status Redundancy-group
ge-8/0/2 120 Up 1
ge-8/0/1 150 Up 1
ge-0/0/2 140 Down 1
ge-0/0/1 130 Down 1
ge-8/0/3 255 Up 2
ge-0/0/3 255 Up 2
Meaning The sample output confirms that monitoring interfaces ge-0/0/1 and ge-0/0/2 are down.
Purpose Verify information about the statistics of the different objects being synchronized, the
fabric and control interface hellos, and the status of the monitoring interfaces in the
cluster.
Action From operational mode, enter the show chassis cluster information command.
{primary:node0}
user@host> show chassis cluster information
node0:
--------------------------------------------------------------------------
Redundancy Group Information:
Failure Information:
node1:
--------------------------------------------------------------------------
Redundancy Group Information:
Meaning The sample output confirms that in node 0, monitoring interfaces ge-0/0/1 and ge-0/0/2 are down. The weight of RG1 on node 0 reached zero, which triggered the RG1 failover shown in the show chassis cluster status command output.
NOTE: For RG2, the default weight of 255 is set for redundant Ethernet interface 2 (reth2). When interface monitoring is required, we recommend that you use the default weight when you do not have backup links like those in RG1. That is, if interface ge-0/0/3 is disabled, it immediately triggers a failover because the weight becomes 0 (255 minus 255), as indicated in the next verification section.
Action From configuration mode, enter the set interfaces ge-0/0/3 disable command.
{primary:node0}
user@host# set interfaces ge-0/0/3 disable
user@host# commit
node0:
configuration check succeeds
node1:
commit complete
node0:
commit complete
{primary:node0}
user@host# show interfaces ge-0/0/3
disable;
gigether-options {
redundant-parent reth2;
}
Purpose Verify the chassis cluster status, failover status, and redundancy group information.
Action From operational mode, enter the show chassis cluster status command.
{primary:node0}
user@host> show chassis cluster status
Monitor Failure codes:
CS Cold Sync monitoring FL Fabric Connection monitoring
GR GRES monitoring HW Hardware monitoring
IF Interface monitoring IP IP monitoring
LB Loopback monitoring MB Mbuf monitoring
NH Nexthop monitoring NP NPC monitoring
SP SPU monitoring SM Schedule monitoring
CF Config Sync monitoring
Cluster ID: 2
Node Priority Status Preempt Manual Monitor-failures
Meaning Use the show chassis cluster status command to confirm that devices in the chassis
cluster are communicating properly, with one device functioning as the primary node
and the other as the secondary node.
Action From operational mode, enter the show chassis cluster interfaces command.
{primary:node0}
user@host> show chassis cluster interfaces
Control link status: Up
Control interfaces:
Index Interface Monitored-Status Internal-SA
0 em0 Up Disabled
1 em1 Down Disabled
Fabric interfaces:
Name Child-interface Status
(Physical/Monitored)
fab0 ge-0/0/0 Up / Up
fab0
fab1 ge-8/0/0 Up / Up
fab1
Redundant-ethernet Information:
Name Status Redundancy-group
reth0 Up 1
reth1 Up 1
reth2 Up 2
Redundant-pseudo-interface Information:
Name Status Redundancy-group
lo0 Up 0
Interface Monitoring:
Interface Weight Status Redundancy-group
ge-8/0/2 120 Up 1
ge-8/0/1 150 Up 1
ge-0/0/2 140 Down 1
ge-0/0/1 130 Down 1
ge-8/0/3 255 Up 2
ge-0/0/3 255 Down 2
Meaning The sample output confirms that monitoring interfaces ge-0/0/1, ge-0/0/2, and ge-0/0/3
are down.
Purpose Verify information about the statistics of the different objects being synchronized, the
fabric and control interface hellos, and the status of the monitoring interfaces in the
cluster.
Action From operational mode, enter the show chassis cluster information command.
{primary:node0}
user@host> show chassis cluster information
node0:
--------------------------------------------------------------------------
Redundancy Group Information:
Failure Information:
node1:
--------------------------------------------------------------------------
Redundancy Group Information:
Meaning The sample output confirms that in node 0, monitoring interfaces ge-0/0/1, ge-0/0/2,
and ge-0/0/3 are down.
Action From configuration mode, enter the delete interfaces ge-0/0/2 disable command.
{primary:node0}
user@host# delete interfaces ge-0/0/2 disable
user@host# commit
node0:
configuration check succeeds
node1:
commit complete
node0:
commit complete
Meaning The sample output confirms that the disable statement on interface ge-0/0/2 has been deleted.
Purpose Verify the chassis cluster status, failover status, and redundancy group information.
Action From operational mode, enter the show chassis cluster status command.
{primary:node0}
user@host> show chassis cluster status
Monitor Failure codes:
CS Cold Sync monitoring FL Fabric Connection monitoring
GR GRES monitoring HW Hardware monitoring
IF Interface monitoring IP IP monitoring
LB Loopback monitoring MB Mbuf monitoring
NH Nexthop monitoring NP NPC monitoring
SP SPU monitoring SM Schedule monitoring
CF Config Sync monitoring
Cluster ID: 2
Node Priority Status Preempt Manual Monitor-failures
Meaning Use the show chassis cluster status command to confirm that devices in the chassis cluster are communicating properly, with one device functioning as the primary node and the other as the secondary node.
Action From operational mode, enter the show chassis cluster interfaces command.
{primary:node0}
user@host> show chassis cluster interfaces
Control link status: Up
Control interfaces:
Index Interface Monitored-Status Internal-SA
0 em0 Up Disabled
1 em1 Down Disabled
Fabric interfaces:
Name Child-interface Status
(Physical/Monitored)
fab0 ge-0/0/0 Up / Up
fab0
fab1 ge-8/0/0 Up / Up
fab1
Redundant-ethernet Information:
Name Status Redundancy-group
reth0 Up 1
reth1 Up 1
reth2 Up 2
Redundant-pseudo-interface Information:
Name Status Redundancy-group
lo0 Up 0
Interface Monitoring:
Interface Weight Status Redundancy-group
ge-8/0/2 120 Up 1
ge-8/0/1 150 Up 1
ge-0/0/2 140 Up 1
ge-0/0/1 130 Down 1
ge-8/0/3 255 Up 2
ge-0/0/3 255 Down 2
Meaning The sample output confirms that monitoring interfaces ge-0/0/1 and ge-0/0/3 are down.
Monitoring interface ge-0/0/2 is up after the disable has been deleted.
Purpose Verify information about the statistics of the different objects being synchronized, the
fabric and control interface hellos, and the status of the monitoring interfaces in the
cluster.
Action From operational mode, enter the show chassis cluster information command.
{primary:node0}
user@host> show chassis cluster information
node0:
--------------------------------------------------------------------------
Redundancy Group Information:
Failure Information:
node1:
--------------------------------------------------------------------------
Redundancy Group Information:
Meaning The sample output confirms that in node 0, monitoring interfaces ge-0/0/1 and ge-0/0/3
are down. Monitoring interface ge-0/0/2 is active after the disable has been deleted.
Action From configuration mode, enter the set chassis cluster redundancy-group 2 preempt
command.
{primary:node0}
user@host# set chassis cluster redundancy-group 2 preempt
user@host# commit
node0:
configuration check succeeds
node1:
commit complete
node0:
commit complete
Meaning The sample output confirms that preempt is enabled for chassis cluster RG2 on node 0.
NOTE: In the next section, you check that RG2 fails back to node 0, because preempt is enabled, when the disabled node 0 interface is brought back online.
Purpose Verify the chassis cluster status, failover status, and redundancy group information.
Action From operational mode, enter the show chassis cluster status command.
{primary:node0}
user@host> show chassis cluster status
Monitor Failure codes:
CS Cold Sync monitoring FL Fabric Connection monitoring
GR GRES monitoring HW Hardware monitoring
IF Interface monitoring IP IP monitoring
LB Loopback monitoring MB Mbuf monitoring
NH Nexthop monitoring NP NPC monitoring
SP SPU monitoring SM Schedule monitoring
CF Config Sync monitoring
Cluster ID: 2
Meaning Use the show chassis cluster status command to confirm that devices in the chassis
cluster are communicating properly, with one device functioning as the primary node
and the other as the secondary node.
Action From configuration mode, enter the delete interfaces ge-0/0/3 disable command.
{primary:node0}
user@host# delete interfaces ge-0/0/3 disable
user@host# commit
node0:
configuration check succeeds
node1:
commit complete
node0:
commit complete
Meaning The sample output confirms that the disable statement on interface ge-0/0/3 has been deleted.
Purpose Verify the chassis cluster status, failover status, and redundancy group information.
Action From operational mode, enter the show chassis cluster status command.
{primary:node0}
user@host> show chassis cluster status
Monitor Failure codes:
CS Cold Sync monitoring FL Fabric Connection monitoring
GR GRES monitoring HW Hardware monitoring
IF Interface monitoring IP IP monitoring
LB Loopback monitoring MB Mbuf monitoring
NH Nexthop monitoring NP NPC monitoring
SP SPU monitoring SM Schedule monitoring
CF Config Sync monitoring
Cluster ID: 2
Node Priority Status Preempt Manual Monitor-failures
Meaning Use the show chassis cluster status command to confirm that devices in the chassis
cluster are communicating properly, with one device functioning as the primary node
and the other as the secondary node.
Action From operational mode, enter the show chassis cluster interfaces command.
{primary:node0}
user@host> show chassis cluster interfaces
Control link status: Up
Control interfaces:
Index Interface Monitored-Status Internal-SA
0 em0 Up Disabled
1 em1 Down Disabled
Fabric interfaces:
Name Child-interface Status
(Physical/Monitored)
fab0 ge-0/0/0 Up / Up
fab0
fab1 ge-8/0/0 Up / Up
fab1
Redundant-ethernet Information:
Name Status Redundancy-group
reth0 Up 1
reth1 Up 1
reth2 Up 2
Redundant-pseudo-interface Information:
Name Status Redundancy-group
lo0 Up 0
Interface Monitoring:
Interface Weight Status Redundancy-group
ge-8/0/2 120 Up 1
ge-8/0/1 150 Up 1
ge-0/0/2 140 Up 1
ge-0/0/1 130 Down 1
ge-8/0/3 255 Up 2
ge-0/0/3 255 Up 2
Meaning The sample output confirms that monitoring interface ge-0/0/1 is down. Monitoring interfaces ge-0/0/2 and ge-0/0/3 are up after the disable statements have been deleted.
Purpose Verify information about the statistics of the different objects being synchronized, the
fabric and control interface hellos, and the status of the monitoring interfaces in the
cluster.
Action From operational mode, enter the show chassis cluster information command.
{primary:node0}
user@host> show chassis cluster information
node0:
--------------------------------------------------------------------------
Redundancy Group Information:
Failure Information:
node1:
--------------------------------------------------------------------------
Redundancy Group Information:
Meaning The sample output confirms that in node 0, monitoring interface ge-0/0/1 is down. RG2 on node 0 is back in the primary state (because preempt is enabled), with a healthy weight of 255, now that interface ge-0/0/3 is back up.
• Understanding SRX Series Chassis Cluster Slot Numbering and Physical Port and
Logical Interface Naming for Branch SRX Series Devices on page 47
• Understanding SRX Series Chassis Cluster Slot Numbering, Physical Port and Logical
Interface Naming for High-End SRX Series Devices
IP address monitoring configuration allows you to set not only the address to monitor and its failover weight but also a global IP address monitoring threshold and weight. Only after the IP address monitoring global-threshold is reached, because of cumulative monitored address reachability failures, is the IP address monitoring global-weight value deducted from the redundancy group's failover threshold. Thus, multiple addresses can be monitored simultaneously, and each can be weighted to reflect its importance to maintaining traffic flow. Also, the weight of an IP address that is unreachable and then becomes reachable again is restored to the monitoring threshold. This does not, however, cause a failback unless the preempt option has been enabled.
NOTE: Starting in Junos OS Release 12.1X46-D35, for all SRX Series devices,
the reth interface supports proxy ARP.
One Services Processing Unit (SPU) or Packet Forwarding Engine (PFE) per node is
designated to send Internet Control Message Protocol (ICMP) ping packets for the
monitored IP addresses on the cluster. The primary PFE sends ping packets using Address
Resolution Protocol (ARP) requests resolved by the Routing Engine (RE). The source for
these pings is the redundant Ethernet interface MAC and IP addresses. The secondary
PFE resolves ARP requests for the monitored IP address itself. The source for these pings
is the physical child MAC address and a secondary IP address configured on the redundant
Ethernet interface. For the ping reply to be received on the secondary interface, the I/O
card (IOC), central PFE processor, or Flex IOC adds both the physical child MAC address
and the redundant Ethernet interface MAC address to its MAC table. The secondary PFE
responds with the physical child MAC address to ARP requests sent to the secondary IP
address configured on the redundant Ethernet interface.
The default interval to check the reachability of a monitored IP address is once per second.
The interval can be adjusted using the retry-interval command. The default number of
permitted consecutive failed ping attempts is 5. The number of allowed consecutive
failed ping attempts can be adjusted using the retry-count command. After failing to
reach a monitored IP address for the configured number of consecutive attempts, the IP
address is determined to be unreachable and its failover value is deducted from the
redundancy group's global-threshold.
Once the IP address is determined to be unreachable, its weight is deducted from the
global-threshold. If the recalculated global-threshold value is not 0, the IP address is
marked unreachable, but the global-weight is not deducted from the redundancy group’s
threshold. If the redundancy group IP monitoring global-threshold reaches 0 and there
are unreachable IP addresses, the redundancy group will continuously fail over and fail
back between the nodes until either an unreachable IP address becomes reachable or
a configuration change removes unreachable IP addresses from monitoring. Note that
both default and configured hold-down-interval failover dampening is still in effect.
Every redundancy group x has a threshold tolerance value initially set to 255. When an
IP address monitored by redundancy group x becomes unavailable, its weight is subtracted
from the redundancy group x's threshold. When redundancy group x's threshold reaches
0, it fails over to the other node. For example, if redundancy group 1 was primary on node
0, on the threshold-crossing event, redundancy group 1 becomes primary on node 1. In
this case, all the child interfaces of redundancy group 1's redundant Ethernet interfaces
begin handling traffic.
A redundancy group x failover occurs because the cumulative weight of the redundancy
group x's monitored IP addresses and other monitoring has brought its threshold value
to 0. When the monitored IP addresses of redundancy group x on both nodes reach their
thresholds at the same time, redundancy group x is primary on the node with the lower
node ID, which is typically node 0.
NOTE: Upstream device failure detection for the chassis cluster feature is
supported on SRX300, SRX320, SRX340, SRX345, and SRX1500 devices.
This example shows how to configure redundancy group IP address monitoring for an
SRX Series device in a chassis cluster.
Requirements
Before you begin:
• Set the chassis cluster node ID and cluster ID. See “Example: Setting the Chassis Cluster
Node ID and Cluster ID for Branch SRX Series Devices” on page 51 or Example: Setting
the Chassis Cluster Node ID and Cluster ID for High-End SRX Series Devices.
• Configure the chassis cluster management interface. See “Example: Configuring the
Chassis Cluster Management Interface” on page 53.
• Configure the chassis cluster fabric. See “Example: Configuring the Chassis Cluster
Fabric Interfaces” on page 61.
Overview
You can configure redundancy groups to monitor upstream resources by pinging specific
IP addresses that are reachable through redundant Ethernet interfaces on either node
in a cluster. You can also configure global threshold, weight, retry interval, and retry count
parameters for a redundancy group. When a monitored IP address becomes unreachable,
the weight of that monitored IP address is deducted from the redundancy group IP
address monitoring global threshold. When the global threshold reaches 0, the global
weight is deducted from the redundancy group threshold. The retry interval determines
the ping interval for each IP address monitored by the redundancy group. The pings are
sent as soon as the configuration is committed. The retry count sets the number of
allowed consecutive ping failures for each IP address monitored by the redundancy group.
In this example, you configure the following settings for redundancy group 1:
• Global weight—100
• Global threshold—200
• Retry interval—3
• Retry count—10
• IP address to monitor—10.1.1.10
• Weight—150
• Secondary IP address—10.1.1.101
Configuration
CLI Quick Configuration
To quickly configure this example, copy the following commands, paste them into a text
file, remove any line breaks, change any details necessary to match your network
configuration, copy and paste the commands into the CLI at the [edit] hierarchy level,
and then enter commit from configuration mode.
{primary:node0}[edit]
set chassis cluster redundancy-group 1 ip-monitoring global-weight 100
set chassis cluster redundancy-group 1 ip-monitoring global-threshold 200
set chassis cluster redundancy-group 1 ip-monitoring retry-interval 3
set chassis cluster redundancy-group 1 ip-monitoring retry-count 10
set chassis cluster redundancy-group 1 ip-monitoring family inet 10.1.1.10 weight 150 interface reth1.0 secondary-ip-address 10.1.1.101
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 ip-monitoring global-weight 100
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 ip-monitoring global-threshold 200
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 ip-monitoring retry-interval 3
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 ip-monitoring retry-count 10
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 ip-monitoring family inet 10.1.1.10 weight 150 interface reth1.0 secondary-ip-address 10.1.1.101
Results From configuration mode, confirm your configuration by entering the show chassis cluster
redundancy-group 1 command. If the output does not display the intended configuration,
repeat the configuration instructions in this example to correct it.
For brevity, this show command output includes only the configuration that is relevant
to this example. Any other configuration on the system has been replaced with ellipses
(...).
{primary:node0}[edit]
user@host# show chassis cluster redundancy-group 1
ip-monitoring {
global-weight 100;
global-threshold 200;
family {
inet {
10.1.1.10 {
weight 150;
interface reth1.0 secondary-ip-address 10.1.1.101;
}
}
}
}
If you are done configuring the device, enter commit from configuration mode.
Verification
Action From operational mode, enter the show chassis cluster ip-monitoring status command.
For information about a specific group, enter the show chassis cluster ip-monitoring status
redundancy-group command.
{primary:node0}
user@host> show chassis cluster ip-monitoring status
node0:
--------------------------------------------------------------------------
Redundancy group: 1
Global threshold: 200
Current threshold: -120
node1:
--------------------------------------------------------------------------
Redundancy group: 1
Global threshold: 200
Related Documentation
• Understanding Chassis Cluster Redundancy Group Interface Monitoring on page 105
• Understanding Chassis Cluster Redundancy Group IP Address Monitoring for Branch
SRX Series Devices on page 133
Chassis cluster employs a number of highly efficient failover mechanisms that promote
high availability to increase your system's overall reliability and productivity.
A redundancy group is a collection of objects that fail over as a group. Each redundancy
group monitors a set of objects (physical interfaces), and each monitored object is
assigned a weight. Each redundancy group has an initial threshold of 255. When a
monitored object fails, the weight of the object is subtracted from the threshold value
of the redundancy group. When the threshold value reaches zero, the redundancy group
fails over to the other node. As a result, all the objects associated with the redundancy
group fail over as well. Graceful restart of the routing protocols enables the SRX Series
device to minimize traffic disruption during a failover.
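As a minimal sketch of how monitored-object weights drive a failover (the interface name
and weight here are illustrative, not part of any example in this guide), assigning a weight
of 255 to a single monitored interface causes its failure alone to drop the redundancy
group threshold from 255 to 0 and trigger a failover, whereas two interfaces weighted
128 each must both fail:
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-0/0/1 weight 255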
Back-to-back failovers of a redundancy group in a short interval can cause the cluster
to exhibit unpredictable behavior. To prevent such unpredictable behavior, configure a
dampening time between failovers. On failover, the previous primary node of a redundancy
group moves to the secondary-hold state and stays in the secondary-hold state until the
hold-down interval expires. After the hold-down interval expires, the previous primary
node moves to the secondary state. If a failure occurs on the new primary node during
the hold-down interval, the system fails over immediately and overrides the hold-down
interval.
The default dampening time for redundancy group 0 is 300 seconds (5 minutes) and
is configurable up to 1800 seconds with the hold-down-interval statement. For some
configurations, such as those with a large number of routes or logical interfaces, the
default interval or the user-configured interval might not be sufficient. In such cases, the
system automatically extends the dampening time in increments of 60 seconds until
the system is ready for failover.
The hold-down interval affects manual failovers, as well as automatic failovers associated
with monitoring failures.
On SRX Series devices, chassis cluster failover performance is optimized to scale with
more logical interfaces. Previously, during redundancy group failover, gratuitous ARP
(GARP) was sent by the Juniper Services Redundancy Protocol (jsrpd) process running
in the Routing Engine on each logical interface to steer the traffic to the appropriate
node. With logical interface scaling, the Routing Engine becomes a bottleneck, so GARP
is now sent directly from the Services Processing Unit (SPU).
• Understanding SNMP Failover Traps for Chassis Cluster Redundancy Group Failover
on page 145
This example shows how to configure the dampening time between back-to-back
redundancy group failovers for a chassis cluster. Back-to-back redundancy group failovers
that occur too quickly can cause a chassis cluster to exhibit unpredictable behavior.
Requirements
Before you begin:
Overview
The dampening time is the minimum interval allowed between back-to-back failovers
for a redundancy group. This interval affects manual failovers and automatic failovers
caused by interface monitoring failures.
In this example, you set the minimum interval allowed between back-to-back failovers
to 420 seconds for redundancy group 0.
Configuration
Step-by-Step Procedure
To configure the dampening time between back-to-back redundancy group failovers:
1. Set the dampening time for the redundancy group.
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 0 hold-down-interval 420
{primary:node0}[edit]
user@host# commit
Verification
Action To verify the configuration, enter the show configuration chassis cluster command.
• Understanding SNMP Failover Traps for Chassis Cluster Redundancy Group Failover
on page 145
You can initiate a redundancy group x (redundancy groups numbered 1 through 128)
failover manually. A manual failover applies until a failback event occurs.
For example, suppose that you manually do a redundancy group 1 failover from node 0
to node 1. Then an interface that redundancy group 1 is monitoring fails, dropping the
threshold value of the new primary redundancy group to zero. This event is considered
a failback event, and the system returns control to the original node.
You can also initiate a redundancy group 0 failover manually if you want to change the
primary node for redundancy group 0. You cannot enable preemption for redundancy
group 0.
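For comparison, preemption can be enabled for redundancy groups 1 through 128 so that
a higher-priority node reclaims the primary role when it recovers (a minimal sketch; the
group number is illustrative):
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 preempt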
When you do a manual failover for redundancy group 0, the node in the primary state
transitions to the secondary-hold state. The node stays in the secondary-hold state for
the default or configured time (a minimum of 300 seconds) and then transitions to the
secondary state.
State transitions in cases where one node is in the secondary-hold state and the other
node reboots, or the control link connection or fabric link connection is lost to that node,
are described as follows:
• Reboot case—The node in the secondary-hold state transitions to the primary state;
the other node goes dead (inactive).
• Control link failure case—The node in the secondary-hold state transitions to the
ineligible state and then to a disabled state; the other node transitions to the primary
state.
• Fabric link failure case—The node in the secondary-hold state transitions directly to
the ineligible state.
Keep in mind that during an in-service software upgrade (ISSU), the transitions described
here cannot happen. Instead, the other (primary) node transitions directly to the secondary
state, because Junos OS releases earlier than 10.0 do not recognize the secondary-hold
state. When you start an ISSU, if one of the nodes has one or more redundancy groups
in the secondary-hold state, you must wait for them to move to the secondary state
before you can do manual failovers to make all the redundancy groups primary on one
node.
• Understanding SNMP Failover Traps for Chassis Cluster Redundancy Group Failover
on page 145
• Understanding Chassis Cluster Redundant Ethernet Interfaces for Branch SRX Series
Devices on page 77
• Understanding Chassis Cluster Redundant Ethernet Interfaces for High-End SRX Series
Devices
Understanding SNMP Failover Traps for Chassis Cluster Redundancy Group Failover
Chassis clustering supports SNMP traps, which are triggered whenever there is a
redundancy group failover.
The trap message identifies the redundancy group involved and the state transition that
occurred, which can help you troubleshoot failovers.
A cluster node can be in any of the following states at a given instant: hold, primary,
secondary-hold, secondary, ineligible, and disabled. Traps are generated for all state
transitions except transitions from the hold state.
A transition can be triggered by any event, such as interface monitoring, SPU
monitoring, failures, and manual failovers.
The trap is forwarded over the control link if the outgoing interface is on a node different
from the node on the Routing Engine that generates the trap.
You can specify that a trace log be generated by setting the snmp flag in the traceoptions
statement.
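A minimal sketch of enabling this trace flag, assuming it is configured at the [edit chassis
cluster] hierarchy:
{primary:node0}[edit]
user@host# set chassis cluster traceoptions flag snmp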
Related Documentation
• Understanding Chassis Cluster Redundancy Group Manual Failover on page 143
• Initiating a Chassis Cluster Manual Redundancy Group Failover on page 146
• Understanding Chassis Cluster Redundant Ethernet Interfaces for Branch SRX Series
Devices on page 77
• Understanding Chassis Cluster Redundant Ethernet Interfaces for High-End SRX Series
Devices
You can initiate a failover manually with the request command. A manual failover bumps
up the priority of the redundancy group for that member to 255.
• Example: Configuring Chassis Cluster Redundant Ethernet Interfaces for IPv4 and IPv6
Addresses on page 79
NOTE: Be cautious and judicious in your use of redundancy group 0 manual
failovers. A redundancy group 0 failover implies a Routing Engine failover, in
which case all processes running on the primary node are killed and then
spawned on the new primary node. This failover could result in loss of state,
such as routing state, and degrade performance by introducing system churn.
Use the show command to display the status of nodes in the cluster:
{primary:node0}
user@host> show chassis cluster status redundancy-group 0
Cluster ID: 9
Node Priority Status Preempt Manual failover
Use the request command to trigger a failover and make node 1 primary:
{primary:node0}
user@host> request chassis cluster failover redundancy-group 0 node 1
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Initiated manual failover for redundancy group 0
Use the show command to display the new status of nodes in the cluster:
{secondary-hold:node0}
user@host> show chassis cluster status redundancy-group 0
Cluster ID: 9
Node Priority Status Preempt Manual failover
The output of this command shows that node 1 is now primary and node 0 is in the
secondary-hold state. After 5 minutes, node 0 transitions to the secondary state.
You can reset the failover for redundancy groups by using the request command. This
change is propagated across the cluster.
{secondary-hold:node0}
user@host> request chassis cluster failover reset redundancy-group 0 node 0
node0:
--------------------------------------------------------------------------
No reset required for redundancy group 0.
node1:
--------------------------------------------------------------------------
Successfully reset manual failover for redundancy group 0
You cannot trigger a back-to-back failover until the 5-minute interval expires.
{secondary-hold:node0}
user@host> request chassis cluster failover redundancy-group 0 node 0
node0:
--------------------------------------------------------------------------
Manual failover is not permitted as redundancy-group 0 on node0 is in
secondary-hold state.
Use the show command to display the new status of nodes in the cluster:
{secondary-hold:node0}
user@host> show chassis cluster status redundancy-group 0
Cluster ID: 9
Node Priority Status Preempt Manual failover
The output of this command shows that a back-to-back failover has not occurred for
either node.
After doing a manual failover, you must issue the reset failover command before requesting
another failover.
When the primary node fails and comes back up, election of the primary node is done
based on regular criteria (priority and preempt).
Related Documentation
• Understanding Chassis Cluster Redundancy Group Manual Failover on page 143
• Example: Configuring a Chassis Cluster with a Dampening Time Between Back-to-Back
Redundancy Group Failovers on page 142
• Understanding SNMP Failover Traps for Chassis Cluster Redundancy Group Failover
on page 145
• Understanding Chassis Cluster Redundant Ethernet Interfaces for Branch SRX Series
Devices on page 77
• Understanding Chassis Cluster Redundant Ethernet Interfaces for High-End SRX Series
Devices
Action From the CLI, enter the show chassis cluster status command:
{primary:node1}
user@host> show chassis cluster status
Cluster ID: 3
Node name Priority Status Preempt Manual failover
{primary:node1}
user@host> show chassis cluster status
Cluster ID: 15
Node Priority Status Preempt Manual failover
{primary:node1}
user@host> show chassis cluster status
Cluster ID: 15
Node Priority Status Preempt Manual failover
Related Documentation
• Initiating a Chassis Cluster Manual Redundancy Group Failover on page 146
• Example: Configuring the Number of Redundant Ethernet Interfaces in a Chassis Cluster
on page 84
To clear the failover status of a chassis cluster, enter the clear chassis cluster failover-count
command from the CLI:
{primary:node1}
user@host> clear chassis cluster failover-count
Cleared failover-count for all redundancy-groups
Related Documentation
• Initiating a Chassis Cluster Manual Redundancy Group Failover on page 146
You can connect two fabric links between each device in a cluster, which provides a
redundant fabric link between the members of a cluster. Having two fabric links helps to
avoid a possible single point of failure.
When you use dual fabric links, the RTOs and probes are sent on one link and the
fabric-forwarded and flow-forwarded packets are sent on the other link. If one fabric link
fails, the other fabric link handles the RTOs and probes, as well as the data forwarding.
The system selects the physical interface with the lowest slot, PIC, or port number on
each node for the RTOs and probes.
For all SRX Series devices, you can connect two fabric links between two devices,
effectively reducing the chance of a fabric link failure.
In most SRX Series devices in a chassis cluster, you can configure any pair of Gigabit
Ethernet interfaces or any pair of 10-Gigabit interfaces to serve as the fabric between
nodes.
For dual fabric links, both of the child interface types should be the same type. For
example, both should be Gigabit Ethernet interfaces or 10-Gigabit interfaces.
Example: Configuring the Chassis Cluster Dual Fabric Links with Matching Slots and
Ports
This example shows how to configure the chassis cluster fabric with dual fabric links
with matching slots and ports. The fabric is the back-to-back data connection between
the nodes in a cluster. Traffic on one node that needs to be processed on the other node
or to exit through an interface on the other node passes over the fabric. Session state
information also passes over the fabric.
Requirements
Before you begin, set the chassis cluster ID and chassis cluster node ID. See “Example:
Setting the Chassis Cluster Node ID and Cluster ID for Branch SRX Series Devices” on
page 51 or Example: Setting the Chassis Cluster Node ID and Cluster ID for High-End SRX
Series Devices.
Overview
In most SRX Series devices in a chassis cluster, you can configure any pair of Gigabit
Ethernet interfaces or any pair of 10-Gigabit interfaces to serve as the fabric between
nodes.
You cannot configure filters, policies, or services on the fabric interface. Fragmentation
is not supported on the fabric link. The MTU size is 8980 bytes. We recommend that no
interface in the cluster exceed this MTU size. Jumbo frame support on the member links
is enabled by default.
This example illustrates how to configure the fabric link with dual fabric links with
matching slots and ports on each node.
A typical configuration is where the dual fabric links are formed with matching slots/ports
on each node. That is, ge-3/0/0 on node 0 and ge-10/0/0 on node 1 match, as do ge-0/0/0
on node 0 and ge-7/0/0 on node 1 (the FPC slot offset is 7).
Only the same type of interfaces can be configured as fabric children, and you must
configure an equal number of child links for fab0 and fab1.
NOTE: If you are connecting each of the fabric links through a switch, you
must enable the jumbo frame feature on the corresponding switch ports. If
both of the fabric links are connected through the same switch, the
RTO-and-probes pair must be in one virtual LAN (VLAN) and the data pair
must be in another VLAN. Here, too, the jumbo frame feature must be enabled
on the corresponding switch ports.
Configuration
CLI Quick Configuration
To quickly configure this example, copy the following commands, paste them into a text
file, remove any line breaks, change any details necessary to match your network
configuration, copy and paste the commands into the CLI at the [edit] hierarchy level,
and then enter commit from configuration mode.
{primary:node0}[edit]
set interfaces fab0 fabric-options member-interfaces ge-0/0/0
set interfaces fab0 fabric-options member-interfaces ge-3/0/0
set interfaces fab1 fabric-options member-interfaces ge-7/0/0
set interfaces fab1 fabric-options member-interfaces ge-10/0/0
Step-by-Step Procedure
To configure the chassis cluster fabric with dual fabric links with matching slots and
ports on each node:
{primary:node0}[edit]
user@host# set interfaces fab0 fabric-options member-interfaces ge-0/0/0
user@host# set interfaces fab0 fabric-options member-interfaces ge-3/0/0
user@host# set interfaces fab1 fabric-options member-interfaces ge-7/0/0
user@host# set interfaces fab1 fabric-options member-interfaces ge-10/0/0
Results From configuration mode, confirm your configuration by entering the show interfaces
command. If the output does not display the intended configuration, repeat the
configuration instructions in this example to correct it.
For brevity, this show command output includes only the configuration that is relevant
to this example. Any other configuration on the system has been replaced with ellipses
(...).
{primary:node0}[edit]
user@host# show interfaces
...
fab0 {
fabric-options {
member-interfaces {
ge-0/0/0;
ge-3/0/0;
}
}
}
fab1 {
fabric-options {
member-interfaces {
ge-7/0/0;
ge-10/0/0;
}
}
}
If you are done configuring the device, enter commit from configuration mode.
Verification
Action From operational mode, enter the show interfaces terse | match fab command.
{primary:node0}
Related Documentation
• Understanding Chassis Cluster Dual Fabric Links for Branch SRX Series on page 151
• Understanding Chassis Cluster Dual Fabric Links for High-End SRX Series
• Example: Configuring Chassis Cluster Dual Fabric Links with Different Slots and Ports
on page 154
Example: Configuring Chassis Cluster Dual Fabric Links with Different Slots and Ports
This example shows how to configure the chassis cluster fabric with dual fabric links
with different slots and ports. The fabric is the back-to-back data connection between
the nodes in a cluster. Traffic on one node that needs to be processed on the other node
or to exit through an interface on the other node passes over the fabric. Session state
information also passes over the fabric.
Requirements
Before you begin, set the chassis cluster ID and chassis cluster node ID. See “Example:
Setting the Chassis Cluster Node ID and Cluster ID for Branch SRX Series Devices” on
page 51 or Example: Setting the Chassis Cluster Node ID and Cluster ID for High-End SRX
Series Devices.
Overview
In most SRX Series devices in a chassis cluster, you can configure any pair of Gigabit
Ethernet interfaces or any pair of 10-Gigabit interfaces to serve as the fabric between
nodes.
You cannot configure filters, policies, or services on the fabric interface. Fragmentation
is not supported on the fabric link. The MTU size is 8980 bytes. We recommend that no
interface in the cluster exceed this MTU size. Jumbo frame support on the member links
is enabled by default.
This example illustrates how to configure the fabric link with dual fabric links with different
slots and ports on each node.
Make sure you physically connect the RTO-and-probes link to the RTO-and-probes link
on the other node. Likewise, make sure you physically connect the data link to the data
link on the other node. In this example, you connect:
• The node 0 RTO-and-probes link ge-2/1/9 to the node 1 RTO-and-probes link ge-11/0/0
• The node 0 data link ge-2/2/5 to the node 1 data link ge-11/3/0
Only the same type of interfaces can be configured as fabric children, and you must
configure an equal number of child links for fab0 and fab1.
NOTE: If you are connecting each of the fabric links through a switch, you
must enable the jumbo frame feature on the corresponding switch ports. If
both of the fabric links are connected through the same switch, the
RTO-and-probes pair must be in one virtual LAN (VLAN) and the data pair
must be in another VLAN. Here too, the jumbo frame feature must be enabled
on the corresponding switch ports.
Configuration
CLI Quick Configuration
To quickly configure this example, copy the following commands, paste them into a text
file, remove any line breaks, change any details necessary to match your network
configuration, copy and paste the commands into the CLI at the [edit] hierarchy level,
and then enter commit from configuration mode.
{primary:node0}[edit]
set interfaces fab0 fabric-options member-interfaces ge-2/1/9
set interfaces fab0 fabric-options member-interfaces ge-2/2/5
set interfaces fab1 fabric-options member-interfaces ge-11/0/0
set interfaces fab1 fabric-options member-interfaces ge-11/3/0
Step-by-Step Procedure
To configure the chassis cluster fabric with dual fabric links with different slots and
ports on each node:
{primary:node0}[edit]
user@host# set interfaces fab0 fabric-options member-interfaces ge-2/1/9
user@host# set interfaces fab0 fabric-options member-interfaces ge-2/2/5
user@host# set interfaces fab1 fabric-options member-interfaces ge-11/0/0
user@host# set interfaces fab1 fabric-options member-interfaces ge-11/3/0
Results From configuration mode, confirm your configuration by entering the show interfaces
command. If the output does not display the intended configuration, repeat the
configuration instructions in this example to correct it.
For brevity, this show command output includes only the configuration that is relevant
to this example. Any other configuration on the system has been replaced with ellipses
(...).
{primary:node0}[edit]
user@host# show interfaces
...
fab0 {
fabric-options {
member-interfaces {
ge-2/1/9;
ge-2/2/5;
}
}
}
fab1 {
fabric-options {
member-interfaces {
ge-11/0/0;
ge-11/3/0;
}
}
}
If you are done configuring the device, enter commit from configuration mode.
Verification
Action From operational mode, enter the show interfaces terse | match fab command.
{primary:node0}
Related Documentation
• Understanding Chassis Cluster Dual Fabric Links for Branch SRX Series on page 151
• Understanding Chassis Cluster Dual Fabric Links for High-End SRX Series
• Example: Configuring the Chassis Cluster Dual Fabric Links with Matching Slots and
Ports on page 152
The goal of conditional route advertisement in a chassis cluster is to ensure that incoming
traffic from the upstream network arrives on the node that is on the currently active
redundant Ethernet interface. To understand how this works, keep in mind that in a
chassis cluster, each node has its own set of interfaces. Figure 16 on page 160 shows a
typical scenario, with a redundant Ethernet interface connecting the corporate LAN,
through a chassis cluster, to an external network segment.
Related Documentation
• Example: Configuring Conditional Route Advertising in a Chassis Cluster on page 160
• Verifying a Chassis Cluster Configuration on page 99
This example shows how to configure conditional route advertising in a chassis cluster
to ensure that incoming traffic from the upstream network arrives on the node that is on
the currently active redundant Ethernet interface.
Requirements
Before you begin, understand conditional route advertising in a chassis cluster. See
“Understanding Conditional Route Advertising in a Chassis Cluster” on page 159.
Overview
As illustrated in Figure 17 on page 162, routing prefixes learned from the redundant Ethernet
interface through the IGP are advertised toward the network core using BGP. Two BGP
sessions are maintained, one from interface t1-1/0/0 and one from t1-1/0/1 for BGP
multihoming. All routing prefixes are advertised on both sessions. Thus, for a route
advertised by BGP and learned over a redundant Ethernet interface, if the active redundant
Ethernet interface is on the same node as the BGP session, you advertise the route with
a preferred BGP attribute, such as a lower MED.
To achieve this behavior, you apply a policy to BGP before exporting routes. An additional
term in the policy match condition determines the current active redundant Ethernet
interface child interface of the next hop before making the routing decision. When the
active status of a child redundant Ethernet interface changes, BGP reevaluates the export
policy for all routes affected.
The condition statement in this configuration works as follows. The command states
that any routes evaluated against this condition will pass only if:
• The current child interface of the redundant Ethernet interface is active at node 0 (as
specified by the route-active-on node0 keyword).
{primary:node0}[edit]
user@host# set policy-options condition reth-nh-active-on-0 route-active-on node0
Note that a route might have multiple equal-cost next hops, and those next hops might
be redundant Ethernet interfaces, regular interfaces, or a combination of both. As long
as at least one next hop is a redundant Ethernet interface, the route satisfies the
requirement.
If you use the BGP export policy set for node 0 in the previous example command, only
OSPF routes that satisfy the following requirements will be advertised through the session:
• The OSPF routes have a redundant Ethernet interface as their next hop.
• The current child interface of the redundant Ethernet interface is currently active at
node 0.
You must also create and apply a separate policy statement for the other BGP session
by using this same process.
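A minimal sketch of the mirrored policy for the other session (the policy, term, and
condition names here are illustrative, following the naming pattern of this example):
{primary:node0}[edit]
user@host# set policy-options condition reth-nh-active-on-1 route-active-on node1
user@host# set policy-options policy-statement reth-nh-active-on-1 term ospf-on-1 from protocol ospf
user@host# set policy-options policy-statement reth-nh-active-on-1 term ospf-on-1 from condition reth-nh-active-on-1
user@host# set policy-options policy-statement reth-nh-active-on-1 term ospf-on-1 then metric 10
user@host# set policy-options policy-statement reth-nh-active-on-1 term ospf-on-1 then accept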
In addition to the BGP MED attribute, you can define additional BGP attributes, such as
origin-code, as-path, and community.
Configuration
CLI Quick Configuration
To quickly configure this example, copy the following commands, paste them into a text
file, remove any line breaks, change any details necessary to match your network
configuration, copy and paste the commands into the CLI at the [edit] hierarchy level,
and then enter commit from configuration mode.
{primary:node0}[edit]
set policy-options policy-statement reth-nh-active-on-0 term ospf-on-0 from protocol
ospf
set policy-options policy-statement reth-nh-active-on-0 term ospf-on-0 from condition
reth-nh-active-on-0
set policy-options policy-statement reth-nh-active-on-0 term ospf-on-0 then metric 10
set policy-options policy-statement reth-nh-active-on-0 term ospf-on-0 then accept
set policy-options condition reth-nh-active-on-0 route-active-on node0
{primary:node0}[edit]
user@host# set policy-options policy-statement reth-nh-active-on-0 term ospf-on-0
from protocol ospf
{primary:node0}[edit]
user@host# set policy-options policy-statement reth-nh-active-on-0 term ospf-on-0
from condition reth-nh-active-on-0
{primary:node0}[edit]
user@host# set policy-options policy-statement reth-nh-active-on-0 term ospf-on-0
then metric 10
{primary:node0}[edit]
user@host# set policy-options policy-statement reth-nh-active-on-0 term ospf-on-0
then accept
{primary:node0}[edit]
user@host# set policy-options condition reth-nh-active-on-0 route-active-on node0
Results From configuration mode, confirm your configuration by entering the show policy-options
command. If the output does not display the intended configuration, repeat the
configuration instructions in this example to correct it.
{primary:node0}[edit]
user@host# show policy-options
policy-statement reth-nh-active-on-0 {
term ospf-on-0 {
from {
protocol ospf;
condition reth-nh-active-on-0;
}
then {
metric 10;
accept;
}
}
}
condition reth-nh-active-on-0 route-active-on node0;
If you are done configuring the device, enter commit from configuration mode.
Support for Ethernet link aggregation groups (LAGs) based on IEEE 802.3ad makes it
possible to aggregate physical interfaces on a standalone device. LAGs on standalone
devices provide increased interface bandwidth and link availability. Aggregation of links
in a chassis cluster allows a redundant Ethernet interface to add more than two physical
child interfaces, thereby creating a redundant Ethernet interface LAG. A redundant Ethernet
interface LAG can have up to eight links per redundant Ethernet interface per node (for
a total of 16 links per redundant Ethernet interface).
The aggregated links in a redundant Ethernet interface LAG provide the same bandwidth
and redundancy benefits of a LAG on a standalone device with the added advantage of
chassis cluster redundancy. A redundant Ethernet interface LAG has two types of
simultaneous redundancy. The aggregated links within the redundant Ethernet interface
on each node are redundant; if one link in the primary aggregate fails, its traffic load is
taken up by the remaining links. If enough child links on the primary node fail, the redundant
Ethernet interface LAG can be configured so that all traffic on the entire redundant
Ethernet interface fails over to the aggregate link on the other node. You can also configure
interface monitoring for LACP-enabled redundancy group reth child links for added
protection.
Aggregated Ethernet interfaces, known as local LAGs, are also supported on either node
of a chassis cluster but cannot be added to redundant Ethernet interfaces. Local LAGs
are indicated in the system interfaces list using an ae- prefix. Likewise, any child interface
of an existing local LAG cannot be added to a redundant Ethernet interface and vice
versa. Note that it is necessary for the switch (or switches) used to connect the nodes
in the cluster to have a LAG link configured and 802.3ad enabled for each LAG on both
nodes so that the aggregate links are recognized as such and correctly pass traffic. The
total maximum number of combined individual node LAG interfaces (ae) and redundant
Ethernet (reth) interfaces per cluster is 128.
NOTE: The redundant Ethernet interface LAG child links from each node in
the chassis cluster must be connected to a different LAG at the peer devices.
If a single peer switch is used to terminate the redundant Ethernet interface
LAG, two separate LAGs must be used in the switch.
Links from different PICs or IOCs and using different cable types (for example, copper
and fiber-optic) can be added to the same redundant Ethernet interface LAG but the
speed of the interfaces must be the same and all interfaces must be in full duplex mode.
We recommend, however, that for purposes of reducing traffic processing overhead,
interfaces from the same PIC or IOC be used whenever feasible. Regardless, all interfaces
configured in a redundant Ethernet interface LAG share the same virtual MAC address.
Note the following about redundant Ethernet interface LAGs:
• Layer 2 transparent mode and Layer 2 security features are supported in redundant
Ethernet interface LAGs.
• Network processor (NP) bundling can coexist with redundant Ethernet interface LAGs
on the same cluster. However, assigning an interface simultaneously to a redundant
Ethernet interface LAG and a network processor bundle is not supported.
NOTE: IOC2 cards do not have network processors but IOC1 cards do have
them.
• Single flow throughput is limited to the speed of a single physical link regardless of the
speed of the aggregate interface.
NOTE: For more information about Ethernet interface link aggregation and
LACP, see the “Aggregated Ethernet” information in the Interfaces Feature
Guide for Security Devices.
This example shows how to configure a redundant Ethernet interface link aggregation
group for a chassis cluster. Chassis cluster configuration supports more than one child
interface per node in a redundant Ethernet interface. When at least two physical child
interface links from each node are included in a redundant Ethernet interface configuration,
the interfaces are combined within the redundant Ethernet interface to form a redundant
Ethernet interface link aggregation group.
Requirements
Before you begin:
• Understand chassis cluster redundant Ethernet interface link aggregation groups. See
“Understanding Chassis Cluster Redundant Ethernet Interface Link Aggregation Groups
for Branch SRX Series Devices” on page 165 or Understanding Chassis Cluster Redundant
Ethernet Interface Link Aggregation Groups for High-End SRX Series Devices.
Overview
NOTE: For aggregation to take place, the switch used to connect the nodes
in the cluster must enable IEEE 802.3ad link aggregation for the redundant
Ethernet interface physical child links on each node. Because most switches
support IEEE 802.3ad and are also LACP capable, we recommend that you
enable LACP on SRX Series devices. In cases where LACP is not available on
the switch, you should not enable LACP on SRX Series devices.
In this example, you assign six Ethernet interfaces to reth1 to form the Ethernet interface
link aggregation group:
• ge-1/0/1—reth1
• ge-1/0/2—reth1
• ge-1/0/3—reth1
• ge-12/0/1—reth1
• ge-12/0/2—reth1
• ge-12/0/3—reth1
Configuration
CLI Quick Configuration
To quickly configure this example, copy the following commands, paste them into a text
file, remove any line breaks, change any details necessary to match your network
configuration, copy and paste the commands into the CLI at the [edit] hierarchy level,
and then enter commit from configuration mode.
{primary:node0}[edit]
set interfaces ge-1/0/1 gigether-options redundant-parent reth1
set interfaces ge-1/0/2 gigether-options redundant-parent reth1
set interfaces ge-1/0/3 gigether-options redundant-parent reth1
set interfaces ge-12/0/1 gigether-options redundant-parent reth1
set interfaces ge-12/0/2 gigether-options redundant-parent reth1
set interfaces ge-12/0/3 gigether-options redundant-parent reth1
{primary:node0}[edit]
user@host# set interfaces ge-1/0/1 gigether-options redundant-parent reth1
user@host# set interfaces ge-1/0/2 gigether-options redundant-parent reth1
user@host# set interfaces ge-1/0/3 gigether-options redundant-parent reth1
user@host# set interfaces ge-12/0/1 gigether-options redundant-parent reth1
user@host# set interfaces ge-12/0/2 gigether-options redundant-parent reth1
user@host# set interfaces ge-12/0/3 gigether-options redundant-parent reth1
Results From configuration mode, confirm your configuration by entering the show interfaces
command. If the output does not display the intended configuration, repeat the
configuration instructions in this example to correct it.
For brevity, this show command output includes only the configuration that is relevant
to this example. Any other configuration on the system has been replaced with ellipses
(...).
{primary:node0}[edit]
user@host# show interfaces
...
ge-12/0/2 {
gigether-options {
redundant-parent reth1;
}
}
ge-12/0/3 {
gigether-options {
redundant-parent reth1;
}
}
...
If you are done configuring the device, enter commit from configuration mode.
Verification
Action From operational mode, enter the show interfaces terse | match reth command.
{primary:node0}
Related Documentation
• Understanding Chassis Cluster Redundant Ethernet Interface Link Aggregation Groups
for Branch SRX Series Devices on page 165
• Understanding Chassis Cluster Redundant Ethernet Interface LAG Failover on page 170
Consider a reth0 interface LAG with four underlying physical links and the minimum-links
value set to 2. In this case, a failover is triggered only when the number of active physical
links is less than 2.
NOTE:
• Interface-monitor and minimum-links values are used to monitor LAG link
status and correctly calculate failover weight.
• The minimum-links value is used to keep the redundant Ethernet link status.
However, to trigger a failover, interface-monitor must be set.
{primary:node0}[edit]
user@host# set interfaces ge-0/0/4 gigether-options redundant-parent reth0
user@host# set interfaces ge-0/0/5 gigether-options redundant-parent reth0
user@host# set interfaces ge-0/0/6 gigether-options redundant-parent reth0
user@host# set interfaces ge-0/0/7 gigether-options redundant-parent reth0
Specify the minimum number of links for the redundant Ethernet interface as 2.
{primary:node0}[edit]
user@host# set interfaces reth0 redundant-ether-options minimum-links 2
Set up interface monitoring to monitor the health of the interfaces and trigger redundancy
group failover.
The following scenarios provide examples of how to monitor redundant Ethernet LAG
failover:
Scenario 1: Monitor two of the four child links, each with a weight of 255.
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-0/0/4 weight
255
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-0/0/5 weight
255
In this case, if one of the monitored links goes down, three physical links are still active
and the redundant Ethernet LAG could have handled the traffic because of the configured
minimum-links value. Nevertheless, the failed link's weight of 255 brings the redundancy
group threshold to 0, which triggers a failover.
Scenario 2: Monitor all four child links, each with a weight of 75.
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-0/0/4 weight
75
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-0/0/5 weight
75
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-0/0/6 weight
75
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-0/0/7 weight
75
In this case, when three physical links are down, the redundant Ethernet interface goes
down because of the configured minimum-links value. However, the deducted
interface-monitoring weight (3 x 75 = 225) does not bring the redundancy group threshold
of 255 to 0, so no failover occurs, which in turn results in a traffic outage.
Scenario 3: Monitor all four child links, each with a weight of 100.
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-0/0/4 weight
100
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-0/0/5 weight
100
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-0/0/6 weight
100
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-0/0/7 weight
100
In this case, when three physical links are down, the redundant Ethernet interface goes
down because of the minimum-links value. At the same time, the deducted
interface-monitoring weight (3 x 100 = 300) brings the redundancy group threshold of
255 to 0, triggering a failover and ensuring that there is no traffic disruption.
Of the three scenarios, scenario 3 illustrates the best way to manage redundant
Ethernet LAG failover.
Related Documentation
• Understanding Chassis Cluster Redundant Ethernet Interface Link Aggregation Groups
for Branch SRX Series Devices on page 165
You can combine multiple physical Ethernet ports to form a logical point-to-point link,
known as a link aggregation group (LAG) or bundle, such that a media access control
(MAC) client can treat the LAG as if it were a single link.
LAGs can be established across nodes in a chassis cluster to provide increased interface
bandwidth and link availability.
The Link Aggregation Control Protocol (LACP) provides additional functionality for LAGs.
LACP is supported in standalone deployments, where aggregated Ethernet interfaces
are supported, and in chassis cluster deployments, where aggregated Ethernet interfaces
and redundant Ethernet interfaces are supported simultaneously.
You configure LACP on a redundant Ethernet interface by setting the LACP mode for the
parent link with the lacp statement. The LACP mode can be off (the default), active, or
passive.
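For example, to set the LACP mode on a redundant Ethernet interface (a minimal sketch
using reth1; see “Example: Configuring LACP on Chassis Clusters” on page 175 for the
full procedure):
{primary:node0}[edit]
user@host# set interfaces reth1 redundant-ether-options lacp active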
• Chassis Cluster Redundant Ethernet Interface Link Aggregation Groups on page 173
• Sub-LAGs on page 174
• Supporting Hitless Failover on page 175
• Managing Link Aggregation Control PDUs on page 175
When at least two physical child interface links from each node are included in a redundant
Ethernet interface configuration, the interfaces are combined within the redundant
Ethernet interface to form a redundant Ethernet interface LAG.
Having multiple active redundant Ethernet interface links reduces the possibility of
failover. For example, when an active link is out of service, all traffic on this link is
distributed to other active redundant Ethernet interface links, instead of triggering a
redundant Ethernet active/standby failover.
Aggregated Ethernet interfaces, known as local LAGs, are also supported on either node
of a chassis cluster but cannot be added to redundant Ethernet interfaces. Likewise, any
child interface of an existing local LAG cannot be added to a redundant Ethernet interface,
and vice versa. The total maximum number of combined individual node LAG interfaces
(ae) and redundant Ethernet (reth) interfaces per cluster is 128.
However, aggregated Ethernet interfaces and redundant Ethernet interfaces can coexist,
because the functionality of a redundant Ethernet interface relies on the Junos OS
aggregated Ethernet framework.
For more information, see “Understanding Chassis Cluster Redundant Ethernet Interface
Link Aggregation Groups for Branch SRX Series Devices” on page 165 or Understanding
Chassis Cluster Redundant Ethernet Interface Link Aggregation Groups for High-End SRX
Series Devices.
Sub-LAGs
LACP maintains a point-to-point LAG. Any port connected to a third point is denied.
However, a redundant Ethernet interface, by design, connects to two different systems
or two remote aggregated Ethernet interfaces.
To support LACP on both redundant Ethernet interface active and standby links, a
redundant Ethernet interface can be modeled to consist of two sub-LAGs, where all
active links form an active sub-LAG and all standby links form a standby sub-LAG.
In this model, LACP selection logic is applied and limited to one sub-LAG at a time. In
this way, two redundant Ethernet interface sub-LAGs are maintained simultaneously
while all the LACP advantages are preserved for each sub-LAG.
It is necessary for the switches used to connect the nodes in the cluster to have a LAG
link configured and 802.3ad enabled for each LAG on both nodes so that the aggregate
links will be recognized as such and correctly pass traffic.
NOTE: The redundant Ethernet interface LAG child links from each node in
the chassis cluster must be connected to a different LAG at the peer devices.
If a single peer switch is used to terminate the redundant Ethernet interface
LAG, two separate LAGs must be used in the switch.
The lacpd process manages both the active and standby links of the redundant Ethernet
interfaces. A redundant Ethernet interface state remains up when the number of active
up links is more than the number of minimum links configured. Therefore, to support
hitless failover, the LACP state on the redundant Ethernet interface standby links must
be collected and distributed before failover occurs.
You can configure Ethernet links to passively transmit link aggregation control PDUs,
sending them out only when they are received from the remote end of the same link.
The local end of a child link is known as the actor, and the remote end of the link is known
as the partner. That is, the actor sends link aggregation control PDUs to its protocol
partner that convey what the actor knows about its own state and about the partner's
state.
You configure the interval at which the interfaces on the remote side of the link transmit
link aggregation control PDUs by configuring the periodic statement on the interfaces on
the local side. It is the configuration on the local side that specifies the behavior of the
remote side. That is, the remote side transmits link aggregation control PDUs at the
specified interval. The interval can be fast (every second) or slow (every 30 seconds).
For more information, see “Example: Configuring LACP on Chassis Clusters” on page 175.
By default, the actor and partner transmit link aggregation control PDUs every second.
You can configure different periodic rates on active and passive interfaces. When you
configure the active and passive interfaces at different rates, the transmitter honors the
receiver’s rate.
Requirements
Before you begin:
• Add the aggregated Ethernet interfaces using the device count. See Example: Configuring
the Number of Aggregated Ethernet Interfaces on a Device.
• Associate physical interfaces with the aggregated Ethernet Interfaces. See Example:
Associating Physical Interfaces with Aggregated Ethernet Interfaces.
• Configure the aggregated Ethernet link speed. See Example: Configuring Aggregated
Ethernet Link Speed.
• Configure the aggregated Ethernet minimum links speed. See Example: Configuring
Aggregated Ethernet Minimum Links.
Overview
In this example, you set LACP to passive mode for the reth0 interface. You set the LACP
mode for the reth1 interface to active and set the link aggregation control PDU transmit
interval to slow, which is every 30 seconds.
Configuration
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration
hierarchy. For instructions on how to do that, see Using the CLI Editor in Configuration
Mode in the CLI User Guide.
[edit interfaces]
user@host# set reth0 redundant-ether-options lacp passive
[edit interfaces]
user@host# set reth1 redundant-ether-options lacp active
user@host# set reth1 redundant-ether-options lacp periodic slow
[edit interfaces]
user@host# commit
Verification
Action From operational mode, enter the show lacp interfaces reth0 command.
The output shows redundant Ethernet interface information, such as the following:
• The LACP state—Indicates whether the link in the bundle is an actor (local or near-end
of the link) or a partner (remote or far-end of the link).
• The LACP mode—Indicates whether both ends of the aggregated Ethernet interface
are enabled (active or passive)—at least one end of the bundle must be active.
This example shows how to specify a minimum number of physical links assigned to a
redundant Ethernet interface on the primary node that must be working for the interface
to be up.
Requirements
Before you begin:
Overview
When a redundant Ethernet interface has more than two child links, you can set a
minimum number of physical links assigned to the interface on the primary node that
must be working for the interface to be up. When the number of physical links on the
primary node falls below the minimum-links value, the interface will be down even if
some links are still working.
In this example, you specify that three child links on the primary node and bound to reth1
(minimum-links value) be working to prevent the interface from going down. For example,
in a redundant Ethernet interface LAG configuration in which six interfaces are assigned
to reth1, setting the minimum-links value to 3 means that all reth1 child links on the primary
node must be working to prevent the interface’s status from changing to down.
Configuration
Step-by-Step Procedure
To specify the minimum number of links:
1. Specify the minimum number of links for the redundant Ethernet interface.
{primary:node0}[edit]
user@host# set interfaces reth1 redundant-ether-options minimum-links 3
2. Commit the configuration.
{primary:node0}[edit]
user@host# commit
Verification
Purpose Verify that the minimum-links configuration is working properly.
Action From operational mode, enter the show interfaces reth1 command.
{primary:node0}
user@host> show interfaces reth1
Physical interface: reth1, Enabled, Physical link is Down
Interface index: 129, SNMP ifIndex: 548
Link-level type: Ethernet, MTU: 1514, Speed: Unspecified, BPDU Error: None,
MAC-REWRITE Error: None, Loopback: Disabled, Source filtering: Disabled,
Flow control: Disabled, Minimum links needed: 3, Minimum bandwidth needed: 0
Device flags : Present Running
Interface flags: Hardware-Down SNMP-Traps Internal: 0x0
Current address: 00:10:db:ff:10:01, Hardware address: 00:10:db:ff:10:01
Last flapped : 2010-09-15 15:54:53 UTC (1w0d 22:07 ago)
Input rate : 0 bps (0 pps)
Output rate : 0 bps (0 pps)
Related Documentation
• Understanding Chassis Cluster Redundant Ethernet Interface Link Aggregation Groups
for Branch SRX Series Devices on page 165
When you set up an SRX Series chassis cluster, the SRX Series devices must be identical,
including their configuration. The chassis cluster synchronization feature automatically
synchronizes the configuration from the primary node to the secondary node when the
secondary node joins the primary node as a cluster. By eliminating the manual work
needed to ensure the same configuration on each node in the cluster, this feature reduces
administrative effort.
If you want to disable automatic chassis cluster synchronization between the primary
and secondary nodes, you can do so by entering the set chassis cluster
configuration-synchronize no-secondary-bootup-auto command in configuration mode.
At any time, to reenable automatic chassis cluster synchronization, use the delete chassis
cluster configuration-synchronize no-secondary-bootup-auto command in configuration
mode.
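For example, at the CLI prompt, the disable and reenable operations look like this:
{primary:node0}[edit]
user@host# set chassis cluster configuration-synchronize no-secondary-bootup-auto
{primary:node0}[edit]
user@host# delete chassis cluster configuration-synchronize no-secondary-bootup-auto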
To see whether the automatic chassis cluster synchronization is enabled or not, and to
see the status of the synchronization, enter the show chassis cluster information
configuration-synchronization operational command.
Either the entire configuration from the primary node is applied successfully to the
secondary node, or the secondary node retains its original configuration. There is no
partial synchronization.
NOTE: If you have a cluster set up and running with an earlier release of Junos
OS, you can upgrade to Junos OS Release 12.1X45-D10 or later and re-create
a cluster with a cluster ID greater than 16. However, if for any reason you
decide to revert to a previous release image that does not support extended
cluster IDs, the devices come up as standalone devices after you reboot. If
the cluster ID is less than 16 and you roll back to a previous release, the cluster
comes back up with the previous setup.
Action From the CLI, enter the show chassis cluster information configuration-synchronization
command:
{primary:node0}
user@host> show chassis cluster information configuration-synchronization
node0:
--------------------------------------------------------------------------
Configuration Synchronization:
Status:
Activation status: Enabled
Last sync operation: Auto-Sync
Last sync result: Not needed
Last sync mgd messages:
Events:
Mar 5 01:48:53.662 : Auto-Sync: Not needed.
node1:
--------------------------------------------------------------------------
Configuration Synchronization:
Status:
Activation status: Enabled
Last sync operation: Auto-Sync
Last sync result: Succeeded
Events:
Mar 5 01:48:55.339 : Auto-Sync: In progress. Attempt: 1
Mar 5 01:49:40.664 : Auto-Sync: Succeeded. Attempt: 1
Network Time Protocol (NTP) is used to synchronize the time between the Packet
Forwarding Engine and the Routing Engine in a standalone device and between two
devices in a chassis cluster.
In both standalone and chassis cluster modes, the primary Routing Engine runs the NTP
process to get the time from the external NTP server. The secondary Routing Engine
also runs the NTP process and attempts to get the time from the external NTP server,
but the attempt fails because the secondary node cannot reach the server through its
revenue ports. For this reason, the secondary Routing Engine uses NTP to get the time
from the primary Routing Engine.
NTP synchronization in a chassis cluster works as follows:
• Send the time from the primary Routing Engine to the secondary Routing Engine through
the chassis cluster control link.
• Get the time from an external NTP server to the primary or a standalone Routing Engine.
• Get the time from the Routing Engine NTP process to the Packet Forwarding Engine.
Related Documentation
• Simplifying Network Management by Synchronizing the Primary and Backup Routing
Engines with NTP on page 183
This example shows how to simplify management by synchronizing the time between
two SRX Series devices operating in a chassis cluster. Using a Network Time Protocol
(NTP) server, you synchronize the primary Routing Engine with the secondary Routing
Engine through the management port.
Requirements
Before you begin:
• Understand the basics of the Network Time Protocol. See the Time Management
Administration Guide for Routing Devices.
Overview
When SRX Series devices are operating in chassis cluster mode, the backup Routing
Engine cannot access the external NTP server through the revenue port. The following
two examples synchronize the time from the peer Routing Engine or from the NTP server,
both using the management port:
• Synchronizing Time from the Peer Routing Engine with the Management Port (fxp0)
• Synchronizing Time from the NTP Server with the Management Port (fxp0)
Topology
Figure 18 on page 184 shows the time synchronization from the peer Routing Engine using
the management port, fxp0.
Figure 18: Synchronizing Time from the Peer Routing Engine Using fxp0
With this configuration, both Routing Engines in a chassis cluster will have two NTP
servers (one Routing Engine points to the external server, and the other Routing Engine
points to the peer Routing Engine fxp0 address). In the primary Routing Engine, both NTP
servers are reachable, and the NTP process selects the best server for synchronizing time.
The secondary Routing Engine can reach only one NTP server (pointing to the peer fxp0
address), so this server is used for synchronizing time.
Figure 19 on page 185 shows the time synchronization from the NTP server using the
management port, fxp0.
Figure 19: Synchronizing Time from the NTP Server Using fxp0
In this configuration, the NTP server address is 10.208.0.50, which is reachable through
the management port. The management ports of both Routing Engines in chassis cluster
mode are enabled. In this configuration, both Routing Engines can access the NTP server
to synchronize time.
Configuration
• Synchronizing Time from the Peer Routing Engine Using the Management Port
(fxp0) on page 186
• Synchronizing Time from the NTP Server Using the Management Port (fxp0) on page 186
• Results on page 186
CLI Quick Configuration
To quickly configure this example and synchronize the time from the peer Routing Engine
using the management port, copy the following commands, paste them into a text file,
remove any line breaks, change any details necessary to match your network configuration,
copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter
commit from configuration mode.
set system ntp server 1.1.1.121
set groups node0 system ntp server 10.208.131.32
set groups node1 system ntp server 10.208.131.31
To quickly configure this example and synchronize the time from the NTP server using
the management port, copy the following commands, paste them into a text file, remove
any line breaks, change any details necessary to match your network configuration, copy
and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit
from configuration mode.
set system ntp server 10.208.0.50
Synchronizing Time from the Peer Routing Engine Using the Management Port
(fxp0)
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration
hierarchy. For instructions on how to do that, see Using the CLI Editor in Configuration
Mode in the CLI User Guide.
To synchronize the time from the peer Routing Engine using the management port:
[edit system]
user@host# set ntp server 1.1.1.121
[edit groups]
user@host# set node0 system ntp server 10.208.131.32
[edit groups]
user@host# set node1 system ntp server 10.208.131.31
Synchronizing Time from the NTP Server Using the Management Port (fxp0)
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration
hierarchy. For instructions on how to do that, see Using the CLI Editor in Configuration
Mode in the CLI User Guide.
To synchronize the time from the NTP server using the management port:
[edit system]
user@host# set ntp server 10.208.0.50
Results
From configuration mode, confirm your configuration by entering the show system ntp
command. If the output does not display the intended configuration, repeat the
configuration instructions in this example to correct it.
For brevity, this show command output includes only the configuration that is relevant
to this example. Any other configuration on the system has been replaced with ellipses
(...).
{primary:node0}[edit]
user@host# show system ntp
server 1.1.1.121
{primary:node0}[edit]
user@host# show groups node0 system ntp
server 10.208.131.32;
{primary:node0}[edit]
user@host# show groups node1 system ntp
server 10.208.131.31;
If you are done configuring the device, enter commit from configuration mode.
Verification
Confirm that the configuration is working properly.
Action From operational mode, enter the show ntp associations command:
Meaning The output on the primary node shows the NTP association as follows:
• refid—Reference identifier of the remote peer. If the reference identifier is not known,
this field shows a value of 0.0.0.0.
The output of the show ntp status command on the primary node shows the NTP status as follows:
• x events—Number of events that have occurred since the last code change. An event
is often the receipt of an NTP polling message.
• system—Detailed description of the name and version of the operating system in use.
• precision—Precision of the peer clock, how precisely the frequency and time can be
maintained with this particular timekeeping system.
• refid—Reference identifier of the remote peer. If the reference identifier is not known,
this field shows a value of 0.0.0.0.
• reftime—Local time, in timestamp format, when the local clock was last updated. If
the local clock has never been synchronized, the value is zero.
Action From operational mode, enter the show ntp associations command:
Meaning The output on the secondary node shows the NTP association as follows:
• refid—Reference identifier of the remote peer. If the reference identifier is not known,
this field shows a value of 0.0.0.0.
The output of the show ntp status command on the secondary node shows the NTP status as follows:
• x events—Number of events that have occurred since the last code change. An event
is often the receipt of an NTP polling message.
• system—Detailed description of the name and version of the operating system in use.
• precision—Precision of the peer clock, how precisely the frequency and time can be
maintained with this particular timekeeping system.
• refid—Reference identifier of the remote peer. If the reference identifier is not known,
this field shows a value of 0.0.0.0.
• reftime—Local time, in timestamp format, when the local clock was last updated. If
the local clock has never been synchronized, the value is zero.
In this case, a single device in the cluster is used to route all traffic while the other device
is used only in the event of a failure (see Figure 20 on page 194). When a failure occurs,
the backup device becomes master and controls all forwarding.
This configuration minimizes the traffic over the fabric link because only one node in the
cluster forwards traffic at any given time.
Related Documentation
• Example: Configuring an Active/Passive Chassis Cluster Pair (CLI) on page 194
• Example: Configuring an Active/Passive Chassis Cluster Pair (J-Web) on page 205
This example shows how to configure active/passive chassis clustering for devices.
Requirements
Before you begin:
1. Physically connect a pair of devices together, ensuring that they are the same models.
2. Create a fabric link by connecting a Gigabit Ethernet interface on one device to another
Gigabit Ethernet interface on the other device.
3. Create a control link by connecting the control port of the two SRX1500 devices.
4. Connect to one of the devices using the console port (this is the node that forms the
cluster) and set the cluster ID and node number.
5. Connect to the other device using the console port and set the cluster ID and node
number.
Overview
In this example, a single device in the cluster is used to route all traffic, and the other
device is used only in the event of a failure. (See Figure 21 on page 195.) When a failure
occurs, the backup device becomes master and controls all forwarding.
In this example, you configure group (applying the configuration with the apply-groups
command) and chassis cluster information. Then you configure security zones and security
policies. See Table 11 on page 196 through Table 14 on page 197.
Chassis cluster configuration:
• Heartbeat threshold: 3
Redundancy group 1:
• Priority:
• Node 0: 100
• Node 1: 1
• Interface monitoring:
• ge-0/0/4
• ge-7/0/4
• ge-0/0/5
• ge-7/0/5
Interfaces:
• reth0: Unit 0, 10.16.8.1/24
• reth1: Unit 0, 1.2.0.233/24
Security policy:
This security policy permits traffic from the trust zone to the untrust zone.
• Policy name: ANY
• Match criteria: source-address any; destination-address any; application any
• Action: permit
Configuration
CLI Quick Configuration
To quickly configure this example, copy the following commands, paste them into a text
file, remove any line breaks, change any details necessary to match your network
configuration, copy and paste the commands into the CLI at the [edit] hierarchy level,
and then enter commit from configuration mode.
[edit]
set groups node0 system host-name srx1500-A
set groups node0 interfaces fxp0 unit 0 family inet address 192.168.3.110/24
set groups node1 system host-name srx1500-B
set groups node1 interfaces fxp0 unit 0 family inet address 192.168.3.111/24
set apply-groups "${node}"
set interfaces fab0 fabric-options member-interfaces ge-0/0/1
set interfaces fab1 fabric-options member-interfaces ge-7/0/1
set chassis cluster heartbeat-interval 1000
set chassis cluster heartbeat-threshold 3
set chassis cluster redundancy-group 0 node 0 priority 100
set chassis cluster redundancy-group 0 node 1 priority 1
set chassis cluster redundancy-group 1 node 0 priority 100
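The remaining commands in this listing, assembled from the step-by-step procedure
and the Results section that follow, are:
set chassis cluster redundancy-group 1 node 1 priority 1
set chassis cluster redundancy-group 1 interface-monitor ge-0/0/4 weight 255
set chassis cluster redundancy-group 1 interface-monitor ge-7/0/4 weight 255
set chassis cluster redundancy-group 1 interface-monitor ge-0/0/5 weight 255
set chassis cluster redundancy-group 1 interface-monitor ge-7/0/5 weight 255
set chassis cluster reth-count 2
set interfaces ge-0/0/5 gigether-options redundant-parent reth1
set interfaces ge-7/0/5 gigether-options redundant-parent reth1
set interfaces ge-0/0/4 gigether-options redundant-parent reth0
set interfaces ge-7/0/4 gigether-options redundant-parent reth0
set interfaces reth0 redundant-ether-options redundancy-group 1
set interfaces reth0 unit 0 family inet address 10.16.8.1/24
set interfaces reth1 redundant-ether-options redundancy-group 1
set interfaces reth1 unit 0 family inet address 1.2.0.233/24
set security zones security-zone untrust interfaces reth1.0
set security zones security-zone trust interfaces reth0.0
set security policies from-zone trust to-zone untrust policy ANY match source-address any
set security policies from-zone trust to-zone untrust policy ANY match destination-address any
set security policies from-zone trust to-zone untrust policy ANY match application any
set security policies from-zone trust to-zone untrust policy ANY then permit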
{primary:node0}[edit]
user@host# set groups node0 system host-name srx1500-A
user@host# set groups node0 interfaces fxp0 unit 0 family inet address
192.168.3.110/24
user@host# set groups node1 system host-name srx1500-B
user@host# set groups node1 interfaces fxp0 unit 0 family inet address
192.168.3.111/24
user@host# set apply-groups "${node}"
{primary:node0}[edit]
user@host# set interfaces fab0 fabric-options member-interfaces ge-0/0/1
user@host# set interfaces fab1 fabric-options member-interfaces ge-7/0/1
{primary:node0}[edit]
user@host# set chassis cluster heartbeat-interval 1000
user@host# set chassis cluster heartbeat-threshold 3
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 0 node 0 priority 100
user@host# set chassis cluster redundancy-group 0 node 1 priority 1
user@host# set chassis cluster redundancy-group 1 node 0 priority 100
user@host# set chassis cluster redundancy-group 1 node 1 priority 1
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-0/0/4 weight 255
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-7/0/4 weight 255
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-0/0/5 weight 255
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-7/0/5 weight 255
{primary:node0}[edit]
user@host# set chassis cluster reth-count 2
user@host# set interfaces ge-0/0/5 gigether-options redundant-parent reth1
user@host# set interfaces ge-7/0/5 gigether-options redundant-parent reth1
user@host# set interfaces ge-0/0/4 gigether-options redundant-parent reth0
user@host# set interfaces ge-7/0/4 gigether-options redundant-parent reth0
user@host# set interfaces reth0 redundant-ether-options redundancy-group 1
user@host# set interfaces reth0 unit 0 family inet address 10.16.8.1/24
user@host# set interfaces reth1 redundant-ether-options redundancy-group 1
user@host# set interfaces reth1 unit 0 family inet address 1.2.0.233/24
{primary:node0}[edit]
user@host# set security zones security-zone untrust interfaces reth1.0
user@host# set security zones security-zone trust interfaces reth0.0
{primary:node0}[edit]
user@host# set security policies from-zone trust to-zone untrust policy ANY match
source-address any
user@host# set security policies from-zone trust to-zone untrust policy ANY match
destination-address any
user@host# set security policies from-zone trust to-zone untrust policy ANY match
application any
user@host# set security policies from-zone trust to-zone untrust policy ANY then
permit
Results From configuration mode, confirm your configuration by entering the show configuration
command. If the output does not display the intended configuration, repeat the
configuration instructions in this example to correct it.
For brevity, this show command output includes only the configuration that is relevant
to this example. Any other configuration on the system has been replaced with ellipses
(...).
groups {
    node0 {
        system {
            host-name srx1500-A;
        }
        interfaces {
            fxp0 {
                unit 0 {
                    family inet {
                        address 192.168.3.110/24;
                    }
                }
            }
        }
    }
    node1 {
        system {
            host-name srx1500-B;
        }
        interfaces {
            fxp0 {
                unit 0 {
                    family inet {
                        address 192.168.3.111/24;
                    }
                }
            }
        }
    }
}
apply-groups "${node}";
chassis {
cluster {
reth-count 2;
heartbeat-interval 1000;
heartbeat-threshold 3;
redundancy-group 0 {
node 0 priority 100;
node 1 priority 1;
}
redundancy-group 1 {
node 0 priority 100;
node 1 priority 1;
interface-monitor {
ge-0/0/4 weight 255;
ge-7/0/4 weight 255;
ge-0/0/5 weight 255;
ge-7/0/5 weight 255;
}
}
}
}
interfaces {
ge-0/0/4 {
gigether-options {
redundant-parent reth0;
}
}
ge-7/0/4 {
gigether-options {
redundant-parent reth0;
}
}
ge-0/0/5 {
gigether-options {
redundant-parent reth1;
}
}
ge-7/0/5 {
gigether-options {
redundant-parent reth1;
}
}
fab0 {
fabric-options {
member-interfaces {
ge-0/0/1;
}
}
}
fab1 {
fabric-options {
member-interfaces {
ge-7/0/1;
}
}
}
reth0 {
redundant-ether-options {
redundancy-group 1;
}
unit 0 {
family inet {
address 10.16.8.1/24;
}
}
}
reth1 {
redundant-ether-options {
redundancy-group 1;
}
unit 0 {
family inet {
address 1.2.0.233/24;
}
}
}
}
...
security {
zones {
security-zone untrust {
interfaces {
reth1.0;
}
}
security-zone trust {
interfaces {
reth0.0;
}
}
}
policies {
from-zone trust to-zone untrust {
policy ANY {
match {
source-address any;
destination-address any;
application any;
}
then {
permit;
}
}
}
}
}
If you are done configuring the device, enter commit from configuration mode.
Verification
Confirm that the configuration is working properly.
Purpose Verify the chassis cluster status, failover status, and redundancy group information.
Action From operational mode, enter the show chassis cluster status command.
{primary:node0}
user@host> show chassis cluster status
Cluster ID: 1
Node Priority Status Preempt Manual failover
Action From operational mode, enter the show chassis cluster interfaces command.
{primary:node0}
user@host> show chassis cluster interfaces
Control link name: fxp1
Redundant-ethernet Information:
Name Status Redundancy-group
reth0 Up 1
reth1 Up 1
Interface Monitoring:
Interface Weight Status Redundancy-group
ge-0/0/4 255 Up 1
ge-7/0/4 255 Up 1
ge-0/0/5 255 Up 1
ge-7/0/5 255 Up 1
Purpose Verify information about the statistics of the different objects being synchronized, the
fabric and control interface hellos, and the status of the monitored interfaces in the
cluster.
Action From operational mode, enter the show chassis cluster statistics command.
{primary:node0}
user@host> show chassis cluster statistics
GPRS GTP 0 0
Purpose Verify information about chassis cluster control plane statistics (heartbeats sent and
received) and the fabric link statistics (probes sent and received).
Action From operational mode, enter the show chassis cluster control-plane statistics command.
{primary:node0}
user@host> show chassis cluster control-plane statistics
Purpose Verify information about the number of RTOs sent and received for services.
Action From operational mode, enter the show chassis cluster data-plane statistics command.
{primary:node0}
user@host> show chassis cluster data-plane statistics
Services Synchronized:
Service name RTOs sent RTOs received
Translation context 0 0
Incoming NAT 0 0
Resource manager 6 0
Session create 161 0
Session close 148 0
Session change 0 0
Gate create 0 0
Session ageout refresh requests 0 0
Session ageout refresh replies 0 0
IPSec VPN 0 0
Firewall user authentication 0 0
MGCP ALG 0 0
H323 ALG 0 0
SIP ALG 0 0
SCCP ALG 0 0
PPTP ALG 0 0
RPC ALG 0 0
RTSP ALG 0 0
RAS ALG 0 0
MAC address learning 0 0
GPRS GTP 0 0
Purpose Verify the state and priority of both nodes in a cluster and information about whether
the primary node has been preempted or whether there has been a manual failover.
Action From operational mode, enter the show chassis cluster status redundancy-group 1 command.
{primary:node0}
user@host> show chassis cluster status redundancy-group 1
Cluster ID: 1
Node Priority Status Preempt Manual failover
Purpose Use these logs to identify any chassis cluster issues. You should run these logs on both
nodes.
Heartbeat Threshold: 3
Nodes: 0
Group Number: 0
Priorities: 100
Nodes: 0
Group Number: 1
Priorities: 100
Nodes: 1
Group Number: 0
Priorities: 1
Nodes: 1
Group Number: 1
Priorities: 1
In this case, a single device in the cluster terminates the IPsec tunnel and is used to
process all traffic while the other device is used only in the event of a failure (see
Figure 22 on page 207). When a failure occurs, the backup device becomes master and
controls all forwarding. Because the tunnel is terminated on a redundant Ethernet
interface shared by the members of the chassis cluster, a failover does not require the
tunnel to be renegotiated, and all established sessions are maintained.
This example shows how to configure active/passive chassis clustering with an IPsec
tunnel for SRX Series devices.
Requirements
Before you begin:
• Get two SRX5000 models with identical hardware configurations, one SRX1500 edge
router, and four EX Series Ethernet switches.
• Physically connect the two devices (back-to-back for the fabric and control ports)
and ensure that they are the same models. You can configure both the fabric and
control ports on the SRX5000 line.
• Set the two devices to cluster mode and reboot the devices. You must enter the
following operational mode commands on both devices, for example:
• On node 0:
• On node 1:
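For example, with a cluster ID of 1 (an example value):
On node 0: user@host> set chassis cluster cluster-id 1 node 0 reboot
On node 1: user@host> set chassis cluster cluster-id 1 node 1 reboot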
The cluster ID is the same on both devices, but the node ID must be different because
one device is node 0 and the other device is node 1. The range for the cluster ID is 1
through 255. Setting the cluster ID to 0 is equivalent to disabling the cluster.
A cluster ID greater than 15 can be set only when the fabric and control link interfaces
are connected back-to-back.
From this point forward, configuration of the cluster is synchronized between the node
members and the two separate devices function as one device. Member-specific
configurations (such as the IP address of the management port of each member) are
entered using configuration groups.
Overview
In this example, a single device in the cluster terminates an IPsec tunnel and is used
to process all traffic, and the other device is used only in the event of a failure. (See
Figure 23 on page 209.) When a failure occurs, the backup device becomes master and
controls all forwarding.
In this example, you configure group (applying the configuration with the apply-groups
command) and chassis cluster information. Then you configure IKE, IPsec, static route,
security zone, and security policy parameters. See Table 15 on page 210 through
Table 21 on page 212.
Chassis cluster configuration:
• Heartbeat threshold: 3
Redundancy group 1:
• Priority:
• Node 0: 254
• Node 1: 1
• Interface monitoring:
• xe-5/0/0
• xe-5/1/0
• xe-17/0/0
• xe-17/1/0
Interfaces:
• reth0: Unit 0, 10.1.1.60/16
• reth1: Unit 0, 10.2.1.60/16
• st0: Unit 0, multipoint, 10.10.1.1/30
Proposal: proposal-set standard
NOTE: On all high-end SRX Series devices, only reth interfaces are supported for IKE
external interface configuration in IPsec VPN. Other interface types can be configured,
but IPsec VPN might not work.
On all branch SRX Series devices, reth interfaces and the lo0 interface are supported
for IKE external interface configuration in IPsec VPN. Other interface types can be
configured, but IPsec VPN might not work.
On all high-end SRX Series devices, the lo0 logical interface cannot be configured with
RG0 if used as an IKE gateway external interface.
Policy: std
NOTE: The manual VPN name and the site-to-site gateway name cannot
be the same.
Security policies:
This security policy permits traffic from the trust zone to the untrust zone.
• Policy name: ANY
• Match criteria: source-address any; destination-address any; application any
• Action: permit
This security policy permits traffic from the trust zone to the vpn zone.
• Policy name: vpn-any
• Match criteria: source-address any; destination-address any; application any
• Action: permit
Configuration
CLI Quick Configuration
To quickly configure this example, copy the following commands, paste them into a text
file, remove any line breaks, change any details necessary to match your network
configuration, copy and paste the commands into the CLI at the [edit] hierarchy level,
and then enter commit from configuration mode.
{primary:node0}[edit]
set chassis cluster control-ports fpc 2 port 0
set chassis cluster control-ports fpc 14 port 0
set groups node0 system host-name SRX5800-1
set groups node0 interfaces fxp0 unit 0 family inet address 172.19.100.50/24
set groups node1 system host-name SRX5800-2
set groups node1 interfaces fxp0 unit 0 family inet address 172.19.100.51/24
set apply-groups "${node}"
set interfaces fab0 fabric-options member-interfaces xe-5/3/0
set interfaces fab1 fabric-options member-interfaces xe-17/3/0
set chassis cluster reth-count 2
set chassis cluster heartbeat-interval 1000
set chassis cluster heartbeat-threshold 3
set chassis cluster redundancy-group 0 node 0 priority 254
set chassis cluster redundancy-group 0 node 1 priority 1
set chassis cluster redundancy-group 1 node 0 priority 254
set chassis cluster redundancy-group 1 node 1 priority 1
set chassis cluster redundancy-group 1 preempt
set chassis cluster redundancy-group 1 interface-monitor xe-5/0/0 weight 255
set chassis cluster redundancy-group 1 interface-monitor xe-5/1/0 weight 255
set chassis cluster redundancy-group 1 interface-monitor xe-17/0/0 weight 255
set chassis cluster redundancy-group 1 interface-monitor xe-17/1/0 weight 255
set interfaces xe-5/1/0 gigether-options redundant-parent reth1
set interfaces xe-17/1/0 gigether-options redundant-parent reth1
set interfaces xe-5/0/0 gigether-options redundant-parent reth0
set interfaces xe-17/0/0 gigether-options redundant-parent reth0
set interfaces reth0 redundant-ether-options redundancy-group 1
set interfaces reth0 unit 0 family inet address 10.1.1.60/16
set interfaces reth1 redundant-ether-options redundancy-group 1
set interfaces reth1 unit 0 family inet address 10.2.1.60/16
set interfaces st0 unit 0 multipoint family inet address 10.10.1.1/30
set security ike policy preShared mode main
set security ike policy preShared proposal-set standard
set security ike policy preShared pre-shared-key ascii-text "$ABC123" ## Encrypted password
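The remaining commands in this listing, assembled from the step-by-step procedure
and the Results section that follow, are:
set security ike gateway SRX1500-1 ike-policy preShared
set security ike gateway SRX1500-1 address 10.1.1.90
set security ike gateway SRX1500-1 external-interface reth0.0
set security ipsec policy std proposal-set standard
set security ipsec vpn SRX1500-1 bind-interface st0.0
set security ipsec vpn SRX1500-1 vpn-monitor optimized
set security ipsec vpn SRX1500-1 ike gateway SRX1500-1
set security ipsec vpn SRX1500-1 ike ipsec-policy std
set security ipsec vpn SRX1500-1 establish-tunnels immediately
set routing-options static route 0.0.0.0/0 next-hop 10.2.1.1
set routing-options static route 10.3.0.0/16 next-hop 10.10.1.2
set security zones security-zone untrust host-inbound-traffic system-services all
set security zones security-zone untrust host-inbound-traffic protocols all
set security policies from-zone trust to-zone untrust policy ANY match source-address any
set security policies from-zone trust to-zone untrust policy ANY match destination-address any
set security policies from-zone trust to-zone untrust policy ANY match application any
set security policies from-zone trust to-zone untrust policy ANY then permit
set security policies from-zone trust to-zone vpn policy vpn-any match source-address any
set security policies from-zone trust to-zone vpn policy vpn-any match destination-address any
set security policies from-zone trust to-zone vpn policy vpn-any match application any
set security policies from-zone trust to-zone vpn policy vpn-any then permit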
{primary:node0}[edit]
user@host# set chassis cluster control-ports fpc 2 port 0
user@host# set chassis cluster control-ports fpc 14 port 0
{primary:node0}[edit]
user@host# set groups node0 system host-name SRX5800-1
user@host# set groups node0 interfaces fxp0 unit 0 family inet address
172.19.100.50/24
user@host# set groups node1 system host-name SRX5800-2
user@host# set groups node1 interfaces fxp0 unit 0 family inet address
172.19.100.51/24
user@host# set apply-groups "${node}"
{primary:node0}[edit]
user@host# set interfaces fab0 fabric-options member-interfaces xe-5/3/0
user@host# set interfaces fab1 fabric-options member-interfaces xe-17/3/0
{primary:node0}[edit]
user@host# set chassis cluster reth-count 2
user@host# set chassis cluster heartbeat-interval 1000
user@host# set chassis cluster heartbeat-threshold 3
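Configure the redundancy groups, preemption, and interface monitoring (these commands
mirror the quick configuration above):
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 0 node 0 priority 254
user@host# set chassis cluster redundancy-group 0 node 1 priority 1
user@host# set chassis cluster redundancy-group 1 node 0 priority 254
user@host# set chassis cluster redundancy-group 1 node 1 priority 1
user@host# set chassis cluster redundancy-group 1 preempt
user@host# set chassis cluster redundancy-group 1 interface-monitor xe-5/0/0 weight 255
user@host# set chassis cluster redundancy-group 1 interface-monitor xe-5/1/0 weight 255
user@host# set chassis cluster redundancy-group 1 interface-monitor xe-17/0/0 weight 255
user@host# set chassis cluster redundancy-group 1 interface-monitor xe-17/1/0 weight 255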
{primary:node0}[edit]
user@host# set interfaces xe-5/1/0 gigether-options redundant-parent reth1
user@host# set interfaces xe-17/1/0 gigether-options redundant-parent reth1
user@host# set interfaces xe-5/0/0 gigether-options redundant-parent reth0
user@host# set interfaces xe-17/0/0 gigether-options redundant-parent reth0
user@host# set interfaces reth0 redundant-ether-options redundancy-group 1
user@host# set interfaces reth0 unit 0 family inet address 10.1.1.60/16
user@host# set interfaces reth1 redundant-ether-options redundancy-group 1
user@host# set interfaces reth1 unit 0 family inet address 10.2.1.60/16
{primary:node0}[edit]
user@host# set interfaces st0 unit 0 multipoint family inet address 10.10.1.1/30
user@host# set security ike policy preShared mode main
user@host# set security ike policy preShared proposal-set standard
user@host# set security ike policy preShared pre-shared-key ascii-text "$ABC123" ## Encrypted password
user@host# set security ike gateway SRX1500-1 ike-policy preShared
user@host# set security ike gateway SRX1500-1 address 10.1.1.90
user@host# set security ike gateway SRX1500-1 external-interface reth0.0
user@host# set security ipsec policy std proposal-set standard
user@host# set security ipsec vpn SRX1500-1 bind-interface st0.0
user@host# set security ipsec vpn SRX1500-1 vpn-monitor optimized
user@host# set security ipsec vpn SRX1500-1 ike gateway SRX1500-1
user@host# set security ipsec vpn SRX1500-1 ike ipsec-policy std
user@host# set security ipsec vpn SRX1500-1 establish-tunnels immediately
{primary:node0}[edit]
user@host# set routing-options static route 0.0.0.0/0 next-hop 10.2.1.1
user@host# set routing-options static route 10.3.0.0/16 next-hop 10.10.1.2
{primary:node0}[edit]
user@host# set security zones security-zone untrust host-inbound-traffic
system-services all
user@host# set security zones security-zone untrust host-inbound-traffic protocols
all
{primary:node0}[edit]
user@host# set security policies from-zone trust to-zone untrust policy ANY match
source-address any
user@host# set security policies from-zone trust to-zone untrust policy ANY match
destination-address any
user@host# set security policies from-zone trust to-zone untrust policy ANY match
application any
user@host# set security policies from-zone trust to-zone vpn policy vpn-any then
permit
Results From configuration mode, confirm your configuration by entering the show configuration
command. If the output does not display the intended configuration, repeat the
configuration instructions in this example to correct it.
For brevity, this show command output includes only the configuration that is relevant
to this example. Any other configuration on the system has been replaced with ellipses
(...).
groups {
    node0 {
        system {
            host-name SRX5800-1;
        }
        interfaces {
            fxp0 {
                unit 0 {
                    family inet {
                        address 172.19.100.50/24;
                    }
                }
            }
        }
    }
    node1 {
        system {
            host-name SRX5800-2;
        }
        interfaces {
            fxp0 {
                unit 0 {
                    family inet {
                        address 172.19.100.51/24;
                    }
                }
            }
        }
    }
}
apply-groups "${node}";
system {
root-authentication {
encrypted-password "$ABC123";
}
}
chassis {
cluster {
reth-count 2;
heartbeat-interval 1000;
heartbeat-threshold 3;
control-ports {
fpc 2 port 0;
fpc 14 port 0;
}
redundancy-group 0 {
node 0 priority 254;
node 1 priority 1;
}
redundancy-group 1 {
node 0 priority 254;
node 1 priority 1;
preempt;
interface-monitor {
xe-5/0/0 weight 255;
xe-5/1/0 weight 255;
xe-17/0/0 weight 255;
xe-17/1/0 weight 255;
}
}
}
}
interfaces {
xe-5/0/0 {
gigether-options {
redundant-parent reth0;
}
}
xe-5/1/0 {
gigether-options {
redundant-parent reth1;
}
}
xe-17/0/0 {
gigether-options {
redundant-parent reth0;
}
}
xe-17/1/0 {
gigether-options {
redundant-parent reth1;
}
}
fab0 {
fabric-options {
member-interfaces {
xe-5/3/0;
}
}
}
fab1 {
fabric-options {
member-interfaces {
xe-17/3/0;
}
}
}
reth0 {
redundant-ether-options {
redundancy-group 1;
}
unit 0 {
family inet {
address 10.1.1.60/16;
}
}
}
reth1 {
redundant-ether-options {
redundancy-group 1;
}
unit 0 {
family inet {
address 10.2.1.60/16;
}
}
}
st0 {
unit 0 {
multipoint;
family inet {
address 10.10.1.1/30;
}
}
}
}
routing-options {
static {
route 0.0.0.0/0 {
next-hop 10.2.1.1;
}
route 10.3.0.0/16 {
next-hop 10.10.1.2;
}
}
}
security {
zones {
security-zone trust {
host-inbound-traffic {
system-services {
all;
}
}
interfaces {
reth0.0;
}
}
security-zone untrust {
host-inbound-traffic {
system-services {
all;
}
}
protocols {
all;
}
interfaces {
reth1.0;
}
}
security-zone vpn {
host-inbound-traffic {
system-services {
all;
}
}
protocols {
all;
}
interfaces {
st0.0;
}
}
}
policies {
from-zone trust to-zone untrust {
policy ANY {
match {
source-address any;
destination-address any;
application any;
}
then {
permit;
}
}
}
from-zone trust to-zone vpn {
policy vpn-any {
match {
source-address any;
destination-address any;
application any;
}
then {
permit;
}
}
}
}
}
If you are done configuring the device, enter commit from configuration mode.
Verification
Confirm that the configuration is working properly.
Purpose Verify the chassis cluster status, failover status, and redundancy group information.
Action From operational mode, enter the show chassis cluster status command.
{primary:node0}
user@host> show chassis cluster status
Cluster ID: 1
Node Priority Status Preempt Manual failover
Action From operational mode, enter the show chassis cluster interfaces command.
{primary:node0}
user@host> show chassis cluster interfaces
Control link name: fxp1
Redundant-ethernet Information:
Name Status Redundancy-group
reth0 Up 1
reth1 Up 1
Interface Monitoring:
Interface Weight Status Redundancy-group
xe-5/0/0 255 Up 1
xe-5/1/0 255 Up 1
xe-17/0/0 255 Up 1
xe-17/1/0 255 Up 1
Purpose Verify information about chassis cluster services and control link statistics (heartbeats
sent and received), fabric link statistics (probes sent and received), and the number of
RTOs sent and received for services.
Action From operational mode, enter the show chassis cluster statistics command.
{primary:node0}
user@host> show chassis cluster statistics
Purpose Verify information about chassis cluster control plane statistics (heartbeats sent and
received) and the fabric link statistics (probes sent and received).
Action From operational mode, enter the show chassis cluster control-plane statistics command.
{primary:node0}
user@host> show chassis cluster control-plane statistics
Purpose Verify information about the number of RTOs sent and received for services.
Action From operational mode, enter the show chassis cluster data-plane statistics command.
{primary:node0}
user@host> show chassis cluster data-plane statistics
Services Synchronized:
Service name RTOs sent RTOs received
Translation context 0 0
Incoming NAT 0 0
Resource manager 6 0
Session create 161 0
Session close 148 0
Session change 0 0
Gate create 0 0
Session ageout refresh requests 0 0
Session ageout refresh replies 0 0
IPSec VPN 0 0
Firewall user authentication 0 0
MGCP ALG 0 0
H323 ALG 0 0
SIP ALG 0 0
SCCP ALG 0 0
PPTP ALG 0 0
RPC ALG 0 0
RTSP ALG 0 0
RAS ALG 0 0
MAC address learning 0 0
GPRS GTP 0 0
Purpose Verify the state and priority of both nodes in a cluster and information about whether
the primary node has been preempted or whether there has been a manual failover.
Action From operational mode, enter the show chassis cluster status redundancy-group 1 command.
{primary:node0}
user@host> show chassis cluster status redundancy-group 1
Cluster ID: 1
Node Priority Status Preempt Manual failover
Purpose Use these logs to identify any chassis cluster issues. You should run these logs on both
nodes.
Heartbeat Threshold: 3
Nodes: 0
Group Number: 0
Priorities: 254
Nodes: 0
Group Number: 1
Priorities: 254
Nodes: 1
Group Number: 0
Priorities: 1
Nodes: 1
Group Number: 1
Priorities: 1
Multicast routing support across nodes in a chassis cluster allows multicast protocols,
such as Protocol Independent Multicast (PIM) versions 1 and 2, Internet Group
Management Protocol (IGMP), Session Announcement Protocol (SAP), and Distance
Vector Multicast Routing Protocol (DVMRP), to send traffic across interfaces in the
cluster. Note, however, that the multicast protocols should not be enabled on the chassis
management interface (fxp0) or on the fabric interfaces (fab0 and fab1). Multicast
sessions will be synchronized across the cluster and will be maintained during redundancy
group failovers. During failover, as with other types of traffic, there might be some
multicast packet loss.
Multicast data forwarding in a chassis cluster uses the incoming interface to determine
whether or not the session remains active. Packets will be forwarded to the peer node if
a leaf session’s outgoing interface is on the peer instead of on the incoming interface’s
node. Multicast routing on a chassis cluster supports tunnels for both incoming and
outgoing interfaces.
Multicast traffic has an upstream (toward source) and downstream (toward subscribers)
direction in traffic flows. The devices replicate (fanout) a single multicast packet to
multiple networks that contain subscribers. In the chassis cluster environment, multicast
packet fanouts can be active on either node.
If the incoming interface is active on the current node and backup on the peer node, then
the session is active on the current node and backup on the peer node.
A PIM session encapsulates multicast data into a PIM unicast packet. A PIM session
creates the following sessions:
• Control session
• Data session
The data session saves the control session ID. The control session and the data session
are closed independently. The incoming interface is used to determine whether the PIM
session is active or not. If the outgoing interface is active on the peer node, packets are
transferred to the peer node for transmission.
In PIM sessions, the control session is synchronized to the backup node, and then the
data session is synchronized.
In multicast sessions, the template session is synchronized to the peer node, then all the
leaf sessions are synchronized, and finally the template session is synchronized again.
In this case, chassis cluster makes use of its asymmetric routing capability (see
Figure 24 on page 229). Traffic received by a node is matched against that node’s session
table. The result of this lookup determines whether or not that node should process the
packet or forward it to the other node over the fabric link. Sessions are anchored on the
egress node for the first packet that created the session. If traffic is received on the node
in which the session is not anchored, those packets are forwarded over the fabric link to
the node where the session is anchored.
NOTE: The anchor node for the session can change if there are changes in
routing during the session.
In this scenario, two Internet connections are used, with one being preferred. The
connection to the trust zone is made through a redundant Ethernet interface to provide
LAN redundancy for the devices in the trust zone. This scenario describes two failover
cases in which sessions originate in the trust zone with a destination of the Internet
(untrust zone).
• Understanding Failures in the Trust Zone Redundant Ethernet Interface on page 229
• Understanding Failures in the Untrust Zone Interfaces on page 229
A failure in interface ge-0/0/1 triggers a failover of the redundancy group, causing interface
ge-7/0/1 in node 1 to become active. After the failover, traffic arrives at node 1. After
session lookup, the traffic is sent to node 0 because the session is active on this node.
Node 0 then processes the traffic and forwards it to the Internet. The return traffic follows
a similar process. The traffic arrives at node 0 and gets processed for security
purposes—for example, antispam scanning, antivirus scanning, and application of security
policies—on node 0 because the session is anchored to node 0. The packet is then sent
to node 1 through the fabric interface for egress processing and eventual transmission
out of node 1 through interface ge-7/0/1.
When interface ge-0/0/0 fails, the preferred route to the Internet is no longer available.
After the failure, sessions in node 0 become inactive, and the passive sessions in node 1 become
active. Traffic arriving from the trust zone is still received on interface ge-0/0/1, but is
forwarded to node 1 for processing. After traffic is processed in node 1, it is forwarded to
the Internet through interface ge-7/0/0.
In this chassis cluster configuration, redundancy group 1 is used to control the redundant
Ethernet interface connected to the trust zone. As configured in this scenario, redundancy
group 1 fails over only if interface ge-0/0/1 or ge-7/0/1 fails, but not if the interfaces
connected to the Internet fail. Optionally, the configuration could be modified to permit
redundancy group 1 to monitor all interfaces connected to the Internet and fail over if an
Internet link were to fail. So, for example, the configuration can allow redundancy group
1 to monitor ge-0/0/0 and make ge-7/0/1 active for reth0 if the ge-0/0/0 Internet link
fails. (This option is not described in the following configuration examples.)
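A minimal sketch of that optional monitoring, assuming ge-0/0/0 and ge-7/0/0 are the
Internet-facing links as in the discussion above:
set chassis cluster redundancy-group 1 interface-monitor ge-0/0/0 weight 255
set chassis cluster redundancy-group 1 interface-monitor ge-7/0/0 weight 255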
This example shows how to configure a chassis cluster pair of devices to allow asymmetric
routing. Configuring asymmetric routing for a chassis cluster allows traffic received on
either device to be processed seamlessly.
Requirements
Before you begin:
1. Physically connect a pair of devices together, ensuring that they are the same models.
a. To create the fabric link, connect a Gigabit Ethernet interface on one device to
another Gigabit Ethernet interface on the other device.
b. To create the control link, connect the control port of the two SRX1500 devices.
2. Connect to one of the devices using the console port. (This is the node that forms the
cluster.)
Overview
In this example, a chassis cluster provides asymmetric routing. As illustrated in
Figure 25 on page 231, two Internet connections are used, with one being preferred. The
connection to the trust zone is provided by a redundant Ethernet interface to provide
LAN redundancy for the devices in the trust zone.
In this example, you configure group (applying the configuration with the apply-groups
command) and chassis cluster information. Then you configure security zones and security
policies. See Table 22 on page 231 through Table 25 on page 233.
Chassis cluster configuration:
• Heartbeat threshold: 3
Redundancy group 1:
• Interface monitoring:
• ge-0/0/3
• ge-7/0/3
Interfaces:
• ge-7/0/1: Unit 0, 1.2.1.233/24
• ge-0/0/3: redundant parent reth0
• ge-7/0/3: redundant parent reth0
• reth0: Unit 0, 10.16.8.1/24
untrust: The ge-0/0/1 and ge-7/0/1 interfaces are bound to this zone.
This security policy permits traffic from the trust zone to the untrust zone.
• Policy name: ANY
• Match criteria: source-address any; destination-address any; application any
• Action: permit
Configuration
CLI Quick Configuration
To quickly configure this example, copy the following commands, paste them into a text
file, remove any line breaks, change any details necessary to match your network
configuration, copy and paste the commands into the CLI at the [edit] hierarchy level,
and then enter commit from configuration mode.
{primary:node0}[edit]
set groups node0 system host-name srxseries-1
set groups node0 interfaces fxp0 unit 0 family inet address 192.168.100.50/24
set groups node1 system host-name srxseries-2
set groups node1 interfaces fxp0 unit 0 family inet address 192.168.100.51/24
set apply-groups "${node}"
set interfaces fab0 fabric-options member-interfaces ge-0/0/7
set interfaces fab1 fabric-options member-interfaces ge-7/0/7
set chassis cluster reth-count 1
set chassis cluster heartbeat-interval 1000
set chassis cluster heartbeat-threshold 3
set chassis cluster redundancy-group 1 node 0 priority 100
set chassis cluster redundancy-group 1 node 1 priority 1
set chassis cluster redundancy-group 1 interface-monitor ge-0/0/3 weight 255
set chassis cluster redundancy-group 1 interface-monitor ge-7/0/3 weight 255
set interfaces ge-0/0/1 unit 0 family inet address 1.4.0.202/24
set interfaces ge-0/0/3 gigether-options redundant-parent reth0
set interfaces ge-7/0/1 unit 0 family inet address 1.2.1.233/24
set interfaces ge-7/0/3 gigether-options redundant-parent reth0
set interfaces reth0 unit 0 family inet address 10.16.8.1/24
set routing-options static route 0.0.0.0/0 qualified-next-hop 1.4.0.1 metric 10
set routing-options static route 0.0.0.0/0 qualified-next-hop 1.2.1.1 metric 100
set security zones security-zone untrust interfaces ge-0/0/1.0
set security zones security-zone untrust interfaces ge-7/0/1.0
set security zones security-zone trust interfaces reth0.0
set security policies from-zone trust to-zone untrust policy ANY match source-address
any
set security policies from-zone trust to-zone untrust policy ANY match destination-address
any
set security policies from-zone trust to-zone untrust policy ANY match application any
set security policies from-zone trust to-zone untrust policy ANY then permit
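Configure the host names and management interface addresses for each node (these
commands mirror the quick configuration above):
{primary:node0}[edit]
user@host# set groups node0 system host-name srxseries-1
user@host# set groups node0 interfaces fxp0 unit 0 family inet address 192.168.100.50/24
user@host# set groups node1 system host-name srxseries-2
user@host# set groups node1 interfaces fxp0 unit 0 family inet address 192.168.100.51/24
user@host# set apply-groups "${node}"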
{primary:node0}[edit]
user@host# set interfaces fab0 fabric-options member-interfaces ge-0/0/7
user@host# set interfaces fab1 fabric-options member-interfaces ge-7/0/7
{primary:node0}[edit]
user@host# set chassis cluster reth-count 1
{primary:node0}[edit]
user@host# set chassis cluster heartbeat-interval 1000
user@host# set chassis cluster heartbeat-threshold 3
user@host# set chassis cluster redundancy-group 1 node 0 priority 100
user@host# set chassis cluster redundancy-group 1 node 1 priority 1
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-0/0/3
weight 255
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-7/0/3
weight 255
{primary:node0}[edit]
user@host# set interfaces ge-0/0/1 unit 0 family inet address 1.4.0.202/24
user@host# set interfaces ge-0/0/3 gigether-options redundant-parent reth0
user@host# set interfaces ge-7/0/1 unit 0 family inet address 1.2.1.233/24
user@host# set interfaces ge-7/0/3 gigether-options redundant-parent reth0
user@host# set interfaces reth0 unit 0 family inet address 10.16.8.1/24
6. Configure the static routes (one to each ISP, with preferred route through ge-0/0/1).
{primary:node0}[edit]
user@host# set routing-options static route 0.0.0.0/0 qualified-next-hop 1.4.0.1
metric 10
user@host# set routing-options static route 0.0.0.0/0 qualified-next-hop 1.2.1.1
metric 100
{primary:node0}[edit]
user@host# set security zones security-zone untrust interfaces ge-0/0/1.0
user@host# set security zones security-zone untrust interfaces ge-7/0/1.0
user@host# set security zones security-zone trust interfaces reth0.0
{primary:node0}[edit]
user@host# set security policies from-zone trust to-zone untrust policy ANY match
source-address any
user@host# set security policies from-zone trust to-zone untrust policy ANY match
destination-address any
user@host# set security policies from-zone trust to-zone untrust policy ANY match
application any
user@host# set security policies from-zone trust to-zone untrust policy ANY then
permit
Results From configuration mode, confirm your configuration by entering the show configuration
command. If the output does not display the intended configuration, repeat the
configuration instructions in this example to correct it.
For brevity, this show command output includes only the configuration that is relevant
to this example. Any other configuration on the system has been replaced with ellipses
(...).
...
routing-options {
    static {
        route 0.0.0.0/0 {
            qualified-next-hop 1.4.0.1 {
                metric 10;
            }
            qualified-next-hop 1.2.1.1 {
                metric 100;
            }
        }
    }
}
security {
zones {
security-zone untrust {
interfaces {
ge-0/0/1.0;
ge-7/0/1.0;
}
}
security-zone trust {
interfaces {
reth0.0;
}
}
}
policies {
from-zone trust to-zone untrust {
policy ANY {
match {
source-address any;
destination-address any;
application any;
}
then {
permit;
}
}
}
}
}
If you are done configuring the device, enter commit from configuration mode.
Verification
Confirm that the configuration is working properly.
Purpose Verify the chassis cluster status, failover status, and redundancy group information.
Action From operational mode, enter the show chassis cluster status command.
{primary:node0}
user@host> show chassis cluster status
Cluster ID: 1
Node Priority Status Preempt Manual failover
Action From operational mode, enter the show chassis cluster interfaces command.
{primary:node0}
user@host> show chassis cluster interfaces
Control link name: fxp1
Redundant-ethernet Information:
Name Status Redundancy-group
reth0 Up 1
Interface Monitoring:
Interface Weight Status Redundancy-group
ge-0/0/3 255 Up 1
ge-7/0/3 255 Up 1
Purpose Verify information about the statistics of the different objects being synchronized, the
fabric and control interface hellos, and the status of the monitored interfaces in the
cluster.
Action From operational mode, enter the show chassis cluster statistics command.
{primary:node0}
user@host> show chassis cluster statistics
Purpose Verify information about chassis cluster control plane statistics (heartbeats sent and
received) and the fabric link statistics (probes sent and received).
Action From operational mode, enter the show chassis cluster control-plane statistics command.
{primary:node0}
user@host> show chassis cluster control-plane statistics
Purpose Verify information about the number of RTOs sent and received for services.
Action From operational mode, enter the show chassis cluster data-plane statistics command.
{primary:node0}
user@host> show chassis cluster data-plane statistics
Services Synchronized:
Service name RTOs sent RTOs received
Translation context 0 0
Incoming NAT 0 0
Resource manager 6 0
Session create 160 0
Session close 147 0
Session change 0 0
Gate create 0 0
Session ageout refresh requests 0 0
Session ageout refresh replies 0 0
IPSec VPN 0 0
Firewall user authentication 0 0
MGCP ALG 0 0
H323 ALG 0 0
SIP ALG 0 0
SCCP ALG 0 0
PPTP ALG 0 0
RPC ALG 0 0
RTSP ALG 0 0
RAS ALG 0 0
MAC address learning 0 0
GPRS GTP 0 0
Purpose Verify the state and priority of both nodes in a cluster and information about whether
the primary node has been preempted or whether there has been a manual failover.
Action From operational mode, enter the show chassis cluster status redundancy-group 1 command.
{primary:node0}
user@host> show chassis cluster status redundancy-group 1
Cluster ID: 1
Node Priority Status Preempt Manual failover
Purpose Use these logs to identify any chassis cluster issues. You should run these logs on both
nodes.
Understanding Layer 2 Ethernet Switching Capability in Chassis Cluster on SRX Series Devices
Ethernet ports support various Layer 2 features such as Spanning Tree Protocols (xSTP),
DOT1X, Link Aggregation (LAG), Internet Group Management Protocol (IGMP), GARP
VLAN Registration Protocol (GVRP), Link Layer Discovery Protocol (LLDP), and snooping.
The enhanced feature extends Layer 2 switching capability to devices in a chassis cluster.
This feature allows users to use Ethernet switching features on both nodes of a chassis
cluster. The Ethernet ports on either of the nodes can be configured for family Ethernet
switching. Users can configure a Layer 2 VLAN domain with member ports from both the
nodes and the Layer 2 switching protocols on both the devices.
Figure 26 on page 244 shows the Layer 2 switching across chassis cluster nodes:
To ensure that Layer 2 switching works seamlessly across chassis cluster nodes, a
dedicated physical link connecting the nodes is required. This type of link is called a
switching fabric interface (swfab). Its purpose is to carry Layer 2 traffic between the nodes.
NOTE: The Q-in-Q feature is not supported in chassis cluster mode because
of a chip limitation in Broadcom chipsets that affects swfab interface configuration.
Related Documentation
• Example: Configuring Switch Fabric Interfaces to Enable Switching in Chassis Cluster
Mode (CLI) on page 245
• Example: Configuring IRB and VLAN with Members Across Two Nodes (CLI) on page 246
• Example: Configuring Aggregated Ethernet Device with LAG and LACP (CLI) on page 248
This example shows how to configure swfab to enable switching in chassis cluster mode.
Requirements
The physical links used as switch fabric members must be directly connected, and
switching-capable ports must be used for the swfab interfaces.
Before you begin, review how the chassis cluster fabric is configured.
Overview
The pseudointerfaces swfab0 and swfab1 provide the Layer 2 fabric functionality. You
must configure dedicated Ethernet ports on each node to be associated with the swfab
interface.
Configuration
Step-by-Step Procedure
To configure swfab interfaces:
1. Configure swfab0 and swfab1 to associate switch fabric interfaces to enable
switching across the nodes. Note that swfab0 corresponds to node 0 and
swfab1 corresponds to node 1.
{primary:node0} [edit]
user@host# set interfaces swfab0 fabric-options member-interfaces ge-0/0/6
user@host# set interfaces swfab0 fabric-options member-interfaces ge-0/0/7
user@host# set interfaces swfab1 fabric-options member-interfaces ge-5/0/6
user@host# set interfaces swfab1 fabric-options member-interfaces ge-5/0/7
{primary:node0} [edit]
user@host# commit
Verification
Purpose Verify that multiple ports can be configured as members of the swfab
interfaces.
Action From configuration mode, enter the show interfaces swfab0 command to view the
configured interfaces for each port.
user@host# show interfaces swfab0
fabric-options {
member-interfaces {
ge-0/0/6;
ge-0/0/7;
}
}
From configuration mode, enter the run show chassis cluster ethernet-switching
interfaces command to view the member interfaces.
user@host# run show chassis cluster ethernet-switching interfaces
swfab0:
Name Status
ge-0/0/6 up
ge-0/0/7 up
swfab1:
Name Status
ge-5/0/6 up
ge-5/0/7 up
Example: Configuring IRB and VLAN with Members Across Two Nodes (CLI)
Requirements
No special configuration beyond device initialization is required before configuring this
feature.
Overview
This example shows configuration of IRB and configuration of VLAN with members across
node 0 and node 1.
Configuration
Step-by-Step Procedure
To configure the VLAN, follow steps 1 through 4 and then commit the configuration. To
configure IRB, follow steps 1 through 8.
1. Configure the node 0 interface for Ethernet switching.
{primary:node0} [edit]
user@host# set interfaces ge-2/0/0 unit 0 family ethernet-switching
2. Configure the node 1 interface for Ethernet switching.
{primary:node0} [edit]
user@host# set interfaces ge-11/0/0 unit 0 family ethernet-switching
3. Create the VLAN.
{primary:node0} [edit]
user@host# set vlans vlan10 vlan-id 10
4. Add the interfaces on both nodes as members of the VLAN.
{primary:node0} [edit]
user@host# set vlans vlan10 interface ge-2/0/0
user@host# set vlans vlan10 interface ge-11/0/0
5. Configure the Layer 3 VLAN interface for IRB.
{primary:node0} [edit]
user@host# set interfaces vlan unit 10 family inet address 45.45.45.1/24
6. Bind the Layer 3 interface to the VLAN.
{primary:node0} [edit]
user@host# set vlans vlan10 l3-interface vlan.10
7. Check the configuration by entering the show vlans and show interfaces commands.
8. Commit the configuration.
[edit]
user@host# commit
Verification
Purpose Verify that the VLAN and IRB configurations are working properly.
Action From configuration mode, enter the show interfaces terse ge-2/0/0 command to view
the node 0 interface.
user@host# run show interfaces terse ge-2/0/0
Interface Admin Link Proto Local Remote
ge-2/0/0 up up
ge-2/0/0.0 up up eth-switch
From configuration mode, enter the show interfaces terse ge-11/0/0 command to view
the node 1 interface.
user@host# run show interfaces terse ge-11/0/0
Interface Admin Link Proto Local Remote
ge-11/0/0 up up
ge-11/0/0.0 up up eth-switch
From configuration mode, enter the show vlans command to view the VLAN interface.
Example: Configuring Aggregated Ethernet Device with LAG and LACP (CLI)
Requirements
No special configuration beyond device initialization is required before configuring this
feature.
Overview
This example shows the configuration of aggregated Ethernet (ae) devices with LAG
and LACP.
Configuration
Step-by-Step Procedure
To configure LAG:
1. Configure the number of aggregated Ethernet (ae) devices that you need to create.
[edit]
user@host# set chassis aggregated-devices ethernet device-count 5
2. Add the member interfaces to the ae0 bundle.
[edit]
user@host# set interfaces ge-2/0/1 gigether-options 802.3ad ae0
user@host# set interfaces ge-2/0/2 gigether-options 802.3ad ae0
3. Enable LACP in active mode on the ae0 interface.
[edit]
user@host# set interfaces ae0 aggregated-ether-options lacp active
4. Configure the ae0 interface for Ethernet switching.
[edit]
user@host# set interfaces ae0 unit 0 family ethernet-switching
5. Configure VLAN.
[edit]
user@host# set vlans vlan20 vlan-id 20
6. Add ae0 as a member of the VLAN.
[edit]
user@host# set vlans vlan20 interface ae0
7. Check the configuration by entering the show vlans and show interfaces commands.
8. Commit the configuration.
[edit]
user@host# commit
NOTE: Likewise, you can configure other devices with LAG and LACP.
Verification
Purpose Verify that you can configure ae devices with LAG and LACP.
Action From configuration mode, enter the show lacp interfaces command to view the LACP interfaces.
From configuration mode, enter the show vlans command to view the VLAN interfaces.
From configuration mode, enter the show interfaces (interface name) command to view
the status of the ge-2/0/1 and ge-2/0/2 interfaces.
user@host# run show interfaces ge-2/0/1 terse
Interface Admin Link Proto Local Remote
ge-2/0/1 up up
ge-2/0/1.0 up up aenet --> ae0.0
Devices in a chassis cluster can be upgraded separately one at a time; some models
allow one device after the other to be upgraded using failover and an in-service software
upgrade (ISSU) to reduce the operational impact of the upgrade.
Related Documentation
• Upgrading Both Devices in a Chassis Cluster Using an ISSU for High-End SRX Series
Devices
• Upgrading Devices in a Chassis Cluster Using ICU for Branch SRX Series Devices on
page 255
Supported Platforms SRX300, SRX320, SRX340, SRX345, SRX550, SRX1500, vSRX
Starting with Junos OS Release 15.1X49-D50, in-band cluster upgrade (ICU) is supported
on SRX1500 devices.
For SRX300, SRX320, SRX340, SRX345, and SRX550 devices, the devices in a chassis
cluster can be upgraded with a minimal service disruption of approximately 30 seconds
using in-band cluster upgrade (ICU) with the no-sync option. The chassis cluster ICU
feature allows both devices in a cluster to be upgraded from supported Junos OS versions.
• Before starting ICU, you should ensure that sufficient disk space is available. See
Upgrading ICU Using a Build Available Locally on a Primary Node in a Chassis Cluster and
Upgrading ICU Using a Build Available on an FTP Server.
• This feature cannot be used to downgrade to a release earlier than Junos OS Release 11.2R2.
The upgrade is initiated with the Junos OS build locally available on the primary node of
the device or on an FTP server.
NOTE:
• The node that is primary for redundancy group 0 (RG0) becomes the secondary node
after an ICU upgrade.
• During ICU, the chassis cluster redundancy groups are failed over to the
primary node to change the cluster to active/passive mode.
• ICU states can be checked from the syslog or with the console/terminal
logs.
Upgrading ICU Using a Build Available Locally on a Primary Node in a Chassis Cluster
NOTE: Ensure that sufficient disk space is available for the Junos OS package
in the /var/tmp location in the secondary node of the cluster.
To upgrade ICU using a build locally available on the primary node of a cluster:
1. Copy the Junos OS package build to the primary node at any location, or mount a
network file server folder containing the Junos OS build.
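2. Start the upgrade with the no-sync option. The package path below is an example;
substitute the location of your Junos OS image:
user@host> request system software in-service-upgrade /var/tmp/junos-srxsme-15.1X49-D40-domestic.tgz no-sync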
NOTE: Ensure that sufficient disk space is available for the Junos OS package
in the /var/tmp location in both the primary and the secondary nodes of the
cluster.
user@root> request system software in-service-upgrade <ftp url for junos image>
no-sync
WARNING: A reboot is required to load this software correctly. Use the request
system reboot command when software installation is complete.
This warning message can be ignored because the ICU process automatically
reboots both the nodes.
You can abort an ICU at any time by issuing the following command on the primary node:
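A sketch of the abort command (verify the exact form on your release):
user@host> request system software abort in-service-upgrade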
NOTE: Issuing an abort command during or after the secondary node reboots
puts the cluster in an inconsistent state. The secondary node boots up running
the new Junos OS build, while the primary continues to run the older Junos
OS build.
To recover from the chassis cluster inconsistent state, perform the following actions
sequentially on the secondary node:
NOTE: You must execute the above steps sequentially to complete the
recovery process and avoid cluster instability.
Table 26 on page 258 lists the options and their descriptions for the request system software
in-service-upgrade command.
no-sync: Disables the flow state from syncing up when the old secondary node has booted with a
new Junos OS image.
no-tcp-syn-check: Creates a window wherein the TCP SYN check for the incoming packets will be
disabled. The default value for the window is 7200 seconds (2 hours).
no-validate: Disables the validation of the configuration at the time of the installation. The system
behavior is similar to software add.
unlink: Removes the package from the local media after installation.
NOTE:
• During ICU, if an abort command is executed, ICU will abort only after the
current operation finishes. This is required to avoid any inconsistency with
the devices.
• After an abort, ICU tries to roll back the build on the nodes if the node upgrade
step had already completed.
NOTE: After the chassis cluster is disabled with this CLI command, there is
no corresponding CLI option to reenable it.
You can also use the following CLI command to disable the chassis cluster:
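For example (an operational-mode sketch; each node reboots as a standalone device):
user@host> set chassis cluster disable reboot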
• Upgrading Devices in a Chassis Cluster Using ICU for Branch SRX Series Devices on
page 255
Configuration Statements
Use the statements in the chassis configuration hierarchy to configure alarms, aggregated
devices, clusters, the Routing Engine, and other chassis properties.
chassis {
aggregated-devices {
ethernet {
device-count number;
lacp {
link-protection {
non-revertive;
}
system-priority number;
}
}
sonet {
device-count number;
}
}
alarm {
ds1 {
ais (ignore | red | yellow);
ylw (ignore | red | yellow);
}
ethernet {
link-down (ignore | red | yellow);
}
integrated-services {
failure (ignore | red | yellow);
}
management-ethernet {
link-down (ignore | red | yellow);
}
serial {
cts-absent (ignore | red | yellow);
}
}
cluster {
configuration-synchronize {
no-secondary-bootup-auto;
}
control-link-recovery;
heartbeat-interval milliseconds;
heartbeat-threshold number;
network-management {
cluster-master;
}
redundancy-group group-number {
gratuitous-arp-count number;
hold-down-interval number;
interface-monitor interface-name {
weight number;
}
ip-monitoring {
family {
inet {
ipv4-address {
interface {
logical-interface-name;
secondary-ip-address ip-address;
}
weight number;
}
}
}
global-threshold number;
global-weight number;
retry-count number;
retry-interval seconds;
}
node (0 | 1 ) {
priority number;
}
preempt;
}
reth-count number;
traceoptions {
file {
filename;
files number;
match regular-expression;
(world-readable | no-world-readable);
size maximum-file-size;
}
flag flag;
level {
(alert | all | critical | debug | emergency | error | info | notice | warning);
}
no-remote-trace;
}
}
config-button {
no-clear;
no-rescue;
}
craft-lockout;
fpc slot-number {
offline;
pic slot-number {
aggregate-ports;
framing {
(e1 | e3 | sdh | sonet | t1 | t3);
}
ingress-policer-overhead bytes
max-queues-per-interface (4 | 8);
mlfr-uni-nni-bundles number;
no-multi-rate;
np-cache;
port slot-number {
framing (e1 | e3 | sdh | sonet | t1 | t3);
speed (oc12-stm4 | oc3-stm1 | oc48-stm16);
}
q-pic-large-buffer (large-scale | small-scale);
services-offload {
low-latency;
per-session-statistics;
}
shdsl {
pic-mode (1-port-atm | 2-port-atm | 4-port-atm | efm);
}
sparse-dlcis;
traffic-manager {
egress-shaping-overhead number;
ingress-shaping-overhead number;
mode (egress-only | ingress-and-egress);
}
tunnel-queuing;
}
services-offload;
}
ioc-npc-connectivity {
ioc slot-number {
npc (npc-slot-number | none);
}
}
maximum-ecmp (16 | 32 | 64);
network-services (ethernet | IP);
routing-engine {
bios {
no-auto-upgrade;
}
on-disk-failure {
disk-failure-action (halt | reboot);
}
usb-wwan {
port 1;
}
}
usb {
storage {
disable;
}
}
}
Use the statements in the security configuration hierarchy to configure actions, certificates,
dynamic virtual private networks (VPNs), firewall authentication, flow, forwarding options,
group VPNs, Intrusion Detection Prevention (IDP), Internet Key Exchange (IKE), Internet
Protocol Security (IPsec), logging, Network Address Translation (NAT), public key
infrastructure (PKI), policies, resource manager, rules, screens, secure shell known hosts,
trace options, user identification, unified threat management (UTM), and zones.
Statements that are exclusive to the SRX Series devices running Junos OS are described
in this section.
Each of the following topics lists the statements at a sub-hierarchy of the [edit security]
hierarchy.
cluster (Chassis)
Syntax cluster {
configuration-synchronize {
no-secondary-bootup-auto;
}
control-link-recovery;
heartbeat-interval milliseconds;
heartbeat-threshold number;
network-management {
cluster-master;
}
redundancy-group group-number {
gratuitous-arp-count number;
hold-down-interval number;
interface-monitor interface-name {
weight number;
}
ip-monitoring {
family {
inet {
ipv4-address {
interface {
logical-interface-name;
secondary-ip-address ip-address;
}
weight number;
}
}
}
global-threshold number;
global-weight number;
retry-count number;
retry-interval seconds;
}
node (0 | 1) {
priority number;
}
preempt;
}
reth-count number;
traceoptions {
file {
filename;
files number;
match regular-expression;
(world-readable | no-world-readable);
size maximum-file-size;
}
flag flag;
level {
(alert | all | critical | debug | emergency | error | info | notice | warning);
}
no-remote-trace;
}
}
Options The remaining statements are explained separately. See CLI Explorer.
configuration-synchronize
Syntax configuration-synchronize {
no-secondary-bootup-auto;
}
Description Disables the automatic chassis cluster synchronization between the primary and
secondary nodes. To reenable automatic chassis cluster synchronization, use the delete
chassis cluster configuration-synchronize no-secondary-bootup-auto command in
configuration mode.
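For example, the following configuration-mode commands disable and then reenable
automatic synchronization (shown as an illustrative sketch):
{primary:node0}[edit]
user@host# set chassis cluster configuration-synchronize no-secondary-bootup-auto
user@host# delete chassis cluster configuration-synchronize no-secondary-bootup-auto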
control-link-recovery
Syntax control-link-recovery;
Description Enable control link recovery to be done automatically by the system. After the control
link recovers, the system checks whether it receives at least 30 consecutive heartbeats
on the control link. This is to ensure that the control link is not flapping and is perfectly
healthy. Once this criterion is met, the system issues an automatic reboot on the node
that was disabled when the control link failed. When the disabled node reboots, the node
rejoins the cluster. There is no need for any manual intervention.
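For example, the following configuration-mode command enables automatic control link
recovery (illustrative):
{primary:node0}[edit]
user@host# set chassis cluster control-link-recovery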
ethernet (Chassis)
Syntax ethernet {
device-count number;
lacp {
link-protection {
non-revertive;
}
system-priority number;
}
}
Options The remaining statements are explained separately. See CLI Explorer.
fabric-options
Syntax fabric-options {
member-interfaces member-interface-name;
}
NOTE: When you run the system autoinstallation command, the command
will configure unit 0 logical interface for all the active state physical interfaces.
However, a few statements, such as fabric-options, do not allow their physical
interface to be configured with a logical interface. If the system autoinstallation and
the fabric-options statements are configured together, the following message
is displayed: incompatible with 'system autoinstallation'.
Options The remaining statements are explained separately. See CLI Explorer.
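For example, the following configuration-mode commands assign one child interface to
each fabric interface (the interface names are illustrative only):
{primary:node0}[edit]
user@host# set interfaces fab0 fabric-options member-interfaces ge-0/1/0
user@host# set interfaces fab1 fabric-options member-interfaces ge-6/1/0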
gigether-options
Syntax gigether-options {
802.3ad {
(backup | primary);
lacp {
port-priority number;
}
}
auto-negotiation {
remote-fault;
}
(flow-control | no-flow-control);
ieee-802-3az-eee;
ignore-l3-incompletes;
(loopback | no-loopback);
loopback-remote;
no-auto-negotiation;
redundant-parent interface-name;
}
Options The remaining statements are explained separately. See CLI Explorer.
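For example, the following configuration-mode commands assign a child interface on
each node to a redundant Ethernet interface (the interface names are illustrative only):
{primary:node0}[edit]
user@host# set interfaces ge-0/0/5 gigether-options redundant-parent reth0
user@host# set interfaces ge-5/0/5 gigether-options redundant-parent reth0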
global-threshold
Description Specify the failover value for all IP addresses monitored by the redundancy group. When
IP addresses with a configured total weight in excess of the threshold have become
unreachable, the weight of IP monitoring is deducted from the redundancy group
threshold.
Options number —Value at which the IP monitoring weight will be applied against the redundancy
group failover threshold.
Range: 0 through 255
Default: 0
global-weight
Description Specify the relative importance of all IP address monitored objects to the operation of
the redundancy group. Every monitored IP address is assigned a weight. If the monitored
address becomes unreachable, the weight of the object is deducted from the
global-threshold of IP monitoring objects in its redundancy group. When the
global-threshold reaches 0, the global-weight is deducted from the redundancy group.
Every redundancy group has a default threshold of 255. If the threshold reaches 0, a
failover is triggered. Failover is triggered even if the redundancy group is in manual failover
mode and preemption is not enabled.
Options number —Combined weight assigned to all monitored IP addresses. A higher weight value
indicates a greater importance.
Range: 0 through 255
Default: 255
gratuitous-arp-count
Description Specify the number of gratuitous Address Resolution Protocol (ARP) requests to send
on an active interface after failover.
Options number—Number of gratuitous ARP requests that a newly elected primary device in a
chassis cluster sends out to announce its presence to the other network devices.
Range: 1 through 16
Default: 4
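For example, the following configuration-mode command sets an illustrative gratuitous
ARP count for redundancy group 1:
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 gratuitous-arp-count 8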
heartbeat-interval
Release Information Statement introduced in Junos OS Release 9.0. Statement updated in Junos OS Release
10.4.
Description Set the interval between the periodic signals broadcast to the devices in a chassis cluster
to indicate that the active node is operational.
heartbeat-threshold
Release Information Statement introduced in Junos OS Release 9.0. Statement updated in Junos OS Release
10.4.
Description Set the number of consecutive missed heartbeat signals that a device in a chassis cluster
must exceed to trigger failover of the active node.
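For example, the following configuration-mode commands set illustrative heartbeat
values:
{primary:node0}[edit]
user@host# set chassis cluster heartbeat-interval 2000
user@host# set chassis cluster heartbeat-threshold 8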
hold-down-interval
Description Set the minimum interval to be allowed between back-to-back failovers for the specified
redundancy group (affects manual failovers, as well as automatic failovers associated
with monitoring failures).
For redundancy group 0, this setting prevents back-to-back failovers from occurring less
than 5 minutes (300 seconds) apart. Note that a redundancy group 0 failover implies a
Routing Engine failure.
For some configurations, such as ones with a large number of routes or logical interfaces,
the default or specified interval for redundancy group 0 might not be sufficient. In such
cases, the system automatically extends the dampening time in increments of 60 seconds
until the system is ready for failover.
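For example, the following configuration-mode command sets an illustrative 600-second
dampening interval for redundancy group 1:
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 hold-down-interval 600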
interface (Chassis Cluster)
Syntax interface {
logical-interface-name;
secondary-ip-address ip-address;
}
Hierarchy Level [edit chassis cluster redundancy-group group-number ip-monitoring family family-name
ip-address]
Description Specify the redundant Ethernet interface, including its logical-unit-number, through which
the monitored IP address must be reachable. The specified redundant Ethernet interface
can be in any redundancy group. Likewise specify a secondary IP address to be used as
a ping source for monitoring the IP address through the secondary node’s redundant
Ethernet interface link.
interface-monitor
Description Specify a redundancy group interface to be monitored for failover and the relative weight
of the interface.
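For example, the following configuration-mode command monitors an interface with a
weight of 255, so that its failure alone triggers failover of redundancy group 1 (the
interface name is illustrative only):
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 interface-monitor ge-0/0/3 weight 255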
ip-monitoring
Syntax ip-monitoring {
family {
inet {
ipv4-address {
interface {
logical-interface-name;
secondary-ip-address ip-address;
}
weight number;
}
}
}
global-threshold number;
global-weight number;
retry-count number;
retry-interval seconds;
}
Description Specify a global IP address monitoring threshold and weight, and the interval between
pings (retry-interval) and the number of consecutive ping failures (retry-count) permitted
before an IP address is considered unreachable for all IP addresses monitored by the
redundancy group. Also specify IP addresses, a monitoring weight, a redundant Ethernet
interface number, and a secondary IP monitoring ping source for each IP address, for the
redundancy group to monitor.
Options family inet IPv4 address—The address to be continually monitored for reachability.
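For example, the following configuration-mode commands sketch an illustrative IP
monitoring configuration for redundancy group 1 (all addresses, weights, and interface
names are illustrative only):
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 ip-monitoring global-weight 255
user@host# set chassis cluster redundancy-group 1 ip-monitoring global-threshold 240
user@host# set chassis cluster redundancy-group 1 ip-monitoring retry-count 5
user@host# set chassis cluster redundancy-group 1 ip-monitoring retry-interval 3
user@host# set chassis cluster redundancy-group 1 ip-monitoring family inet 10.1.1.10 weight 100 interface reth0.0 secondary-ip-address 10.1.1.101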
lacp (Interfaces)
Syntax lacp {
port-priority port-number;
}
Description For redundant Ethernet interfaces in a chassis cluster only, configure Link Aggregation
Control Protocol (LACP).
Default: If you do not specify lacp as either active or passive, LACP remains off (the
default).
link-protection
Syntax link-protection {
non-revertive;
}
Description Enable Link Aggregation Control Protocol (LACP) link protection at the global (chassis)
level.
Options non-revertive—Disable the ability to switch to a better-priority link (if one is available)
once a link is established as active and is collecting and distributing.
member-interfaces
Description Specify the member interface name. Member interfaces that connect to each other must
be of the same type.
network-management
Syntax network-management {
cluster-master;
}
Description Define parameters for network management. To manage an SRX Series Services Gateway
cluster through a non-fxp0 interface, use this command to define the node as a virtual
chassis in NSM. This command establishes a single DMI connection from the primary
node to the NSM server. This connection is used to manage both nodes in the cluster.
Note that the non-fxp0 interface (regardless of which node it is present on) is always
controlled by the primary node in the cluster. The output of a <get-system-information>
RPC returns a <chassis-cluster> tag in all SRX Series devices. When NSM receives this
tag, it models SRX Series clusters as devices with autonomous control planes.
Options cluster-master—Enable in-band management on the primary cluster node through NSM.
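For example, the following configuration-mode command enables in-band management
of the cluster through NSM on the primary node (illustrative):
{primary:node0}[edit]
user@host# set chassis cluster network-management cluster-master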
node (Chassis Cluster)
Syntax node (0 | 1) {
priority number;
}
Description Identify the device in a chassis cluster. The node 0 device in the cluster has the chassis
ID 1, and the node 1 device in the cluster has the chassis ID 2.
node (Chassis Cluster Redundancy Group)
Syntax node (0 | 1) {
priority number;
}
Description Identify each cluster node in a redundancy group and set its relative priority for mastership.
Options node-number —Cluster node number, set with the chassis cluster node node-number
statement.
preempt (Chassis Cluster)
Syntax preempt;
Description Enable chassis cluster node preemption within a redundancy group. If preempt is added
to a redundancy group configuration, the device with the higher priority in the group can
initiate a failover to become master. By default, preemption is disabled.
Initiating a failover with the request chassis cluster failover node or request chassis cluster
failover redundancy-group command overrides the priority settings and preemption.
priority (Chassis Cluster)
Description Define the priority of a node (device) in a redundancy group. Initiating a failover with the
request chassis cluster failover node or request chassis cluster failover redundancy-group
command overrides the priority settings.
Options priority-number —Priority value of the node. The eligible node with the highest priority is
elected master.
Range: 1 through 254
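For example, the following configuration-mode commands assign illustrative node
priorities and enable preemption for redundancy group 1, so that node 0 is preferred as
primary whenever it is healthy:
{primary:node0}[edit]
user@host# set chassis cluster redundancy-group 1 node 0 priority 200
user@host# set chassis cluster redundancy-group 1 node 1 priority 100
user@host# set chassis cluster redundancy-group 1 preempt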
redundancy-group (Chassis Cluster)
Description Define a redundancy group. Except for redundancy group 0, a redundancy group
controls one or more redundant Ethernet (reth) interfaces. A reth is a logical interface
consisting of two physical Ethernet interfaces, one on each chassis. One interface is
active, and the other is on standby. When the active interface fails, the standby interface
becomes active.
Redundancy group 0 consists of the two Routing Engines in the chassis cluster and
controls which Routing Engine is primary. You must define redundancy group 0 in the
chassis cluster configuration.
redundancy-interface-process
Syntax redundancy-interface-process {
command binary-file-path;
disable;
failover (alternate-media | other-routing-engine);
}
Options • failover—Configure the device to reboot if the software process fails four times within
30 seconds, and specify the software to use during the reboot.
redundant-ether-options
Syntax redundant-ether-options {
(flow-control | no-flow-control);
lacp {
(active | passive);
periodic (fast | slow);
}
link-speed speed;
(loopback | no-loopback);
minimum-links number;
redundancy-group number;
source-address-filter mac-address;
(source-filtering | no-source-filtering);
}
Options The remaining statements are explained separately. See CLI Explorer.
Related • Example: Enabling Eight Queue Class of Service on Redundant Ethernet Interfaces
Documentation
• Example: Configuring Chassis Cluster Redundant Ethernet Interfaces for IPv4 and IPv6
Addresses on page 79
redundant-parent (Interfaces)
Description Assign local (child) interfaces to the redundant Ethernet (reth) interfaces. A redundant
Ethernet interface contains a pair of Fast Ethernet interfaces or a pair of Gigabit Ethernet
interfaces that are referred to as child interfaces of the redundant Ethernet interface (the
redundant parent).
Related • Example: Configuring Chassis Cluster Redundant Ethernet Interfaces for IPv4 and IPv6
Documentation Addresses on page 79
redundant-pseudo-interface-options
Syntax redundant-pseudo-interface-options {
redundancy-group redundancy-group;
}
An Internet Key Exchange (IKE) gateway operating in a chassis cluster needs an external
interface to communicate with a peer device. When an external interface (a reth interface
or a standalone interface) is used for communication, the interface might go down when
the physical interfaces are down. Use a loopback interface as an alternative to
physical interfaces.
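For example, the following configuration-mode command binds the lo0 redundant
pseudointerface to redundancy group 1 (illustrative):
{primary:node0}[edit]
user@host# set interfaces lo0 redundant-pseudo-interface-options redundancy-group 1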
reth-count
Description Specify the number of redundant Ethernet (reth) interfaces allowed in the chassis cluster.
Note that the number of reth interfaces configured determines the number of redundancy
groups that can be configured.
reth (Interfaces)
vlan-rewrite {
translate {
from-vlan-id;
to-vlan-id;
}
}
}
inet {
accounting {
destination-class-usage;
source-class-usage {
input;
output;
}
}
address source-address/prefix {
arp destination-address;
}
broadcast address;
preferred;
primary;
vrrp-group group-id {
(accept-data | no-accept-data);
advertise-interval seconds;
advertisements-threshold number;
authentication-key key-value;
authentication-type (md5 | simple);
fast-interval milliseconds;
inet6-advertise-interval milliseconds;
(preempt <hold-time seconds> | no-preempt);
preferred;
priority value;
track {
interface interface-name {
bandwidth-threshold bandwidth;
priority-cost value;
}
priority-hold-time seconds;
route route-address {
routing-instance routing-instance;
priority-cost value;
}
}
virtual-address [address];
virtual-link-local-address address;
vrrp-inherit-from {
active-group value;
active-interface interface-name;
}
}
web-authentication {
http;
https;
redirect-to-https;
}
}
dhcp {
client-identifier {
(ascii string | hexadecimal string);
}
lease-time (length | infinite);
retransmission-attempt value;
retransmission-interval seconds;
server-address server-address;
update-server;
vendor-id vendor-id;
}
dhcp-client {
client-identifier {
prefix {
host-name;
logical-system-name;
routing-instance-name;
}
use-interface-description (device | logical);
user-id (ascii string| hexadecimal string);
}
lease-time (length | infinite);
retransmission-attempt value;
retransmission-interval seconds;
server-address server-address;
update-server;
vendor-id vendor-id;
}
filter {
group number;
input filter-name;
input-list [filter-name];
output filter-name;
output-list [filter-name];
}
mtu value;
no-neighbor-learn;
no-redirects;
policer {
input input-name;
}
primary;
rpf-check {
fail-filter filter-name;
mode {
loose;
}
}
sampling {
input;
output;
}
simple-filter;
unconditional-src-learn;
unnumbered-address {
interface-name;
preferred-source-address preferred-source-address;
}
}
inet6 {
accounting {
destination-class-usage;
source-class-usage {
input;
output;
}
}
address source-address/prefix {
eui-64;
ndp address {
(mac mac-address | multicast-mac multicast-mac-address);
publish;
}
preferred;
primary;
vrrp-inet6-group group_id {
(accept-data | no-accept-data);
advertisements-threshold number;
authentication-key value;
authentication-type (md5 | simple);
fast-interval milliseconds;
inet6-advertise-interval milliseconds;
(preempt <hold-time seconds> | no-preempt);
priority value;
track {
interface interface-name {
bandwidth-threshold value;
priority-cost value;
}
priority-hold-time seconds;
route route-address {
routing-instance routing-instance;
}
}
vrrp-inherit-from {
active-group value;
active-interface interface-name;
}
}
web-authentication {
http;
https;
redirect-to-https;
}
}
(dad-disable | no-dad-disable);
filter {
group number;
input filter-name;
input-list [filter-name];
output filter-name;
output-list [filter-name];
}
mtu value;
nd6-stale-time seconds;
no-neighbor-learn;
no-redirects;
rpf-check {
fail-filter filter-name;
mode {
loose;
}
}
sampling {
input;
output;
}
unnumbered-address;
}
iso {
address source-address;
mtu value;
}
vpls {
filter {
group number;
input filter-name;
input-list [filter-name];
output filter-name;
output-list [filter-name];
}
policer {
input input-name;
output output-name;
}
}
}
native-inner-vlan-id value;
(no-traps | traps);
proxy-arp (restricted | unrestricted);
traps;
vlan-id vlan-id;
vlan-id-list vlan-id-list;
vlan-id-range vlan-id1-vlan-id2;
}
vlan-tagging;
}
Description Configure a redundant Ethernet interface (reth) for chassis cluster. It is a pseudointerface
that includes at minimum one physical interface from each node of the cluster.
Options The remaining statements are explained separately. See CLI Explorer.
Related • Example: Configuring Chassis Cluster Redundant Ethernet Interfaces for IPv4 and IPv6
Documentation Addresses on page 79
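For example, the following configuration-mode commands sketch a minimal reth
configuration (the interface names and the address are illustrative only):
{primary:node0}[edit]
user@host# set chassis cluster reth-count 2
user@host# set interfaces ge-0/0/5 gigether-options redundant-parent reth1
user@host# set interfaces ge-5/0/5 gigether-options redundant-parent reth1
user@host# set interfaces reth1 redundant-ether-options redundancy-group 1
user@host# set interfaces reth1 unit 0 family inet address 192.0.2.1/24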
retry-count
Description Specify the number of consecutive ping attempts that must fail before an IP address
monitored by the redundancy group is declared unreachable. (See retry-interval for a
related redundancy group IP address monitoring variable.)
retry-interval
Description Specify the ping packet send frequency (in seconds) for each IP address monitored by
the redundancy group. (See retry-count for a related IP address monitoring configuration
variable.)
Options interval—Pause time between each ping sent to each IP address monitored by the
redundancy group.
Range: 1 to 30 seconds
Default: 1 second
route-active-on
Description For chassis cluster configurations, identify the device (node) on which a route is active.
traceoptions (Chassis Cluster)
Syntax traceoptions {
file {
filename;
files number;
match regular-expression;
(world-readable | no-world-readable);
size maximum-file-size;
}
flag flag;
level {
(alert | all | critical | debug | emergency | error | info | notice | warning);
}
no-remote-trace;
}
Options • file filename—Name of the file to receive the output of the tracing operation. Enclose
the name within quotation marks. All files are placed in the directory /var/log.
• files number—(Optional) Maximum number of trace files. When a trace file named
trace-file reaches its maximum size, it is renamed trace-file.0, then trace-file.1, and
so on, until the maximum number of trace files is reached. The oldest archived file is
overwritten.
• If you specify a maximum number of files, you also must specify a maximum file size
with the size option and a filename.
• match regular-expression—(Optional) Refine the output to include lines that contain
the regular expression.
• size maximum-file-size—(Optional) Maximum size of each trace file, in kilobytes (KB),
megabytes (MB), or gigabytes (GB). When a trace file named trace-file reaches this
size, it is renamed trace-file.0. When trace-file again reaches its maximum size,
trace-file.0 is renamed trace-file.1 and trace-file is renamed trace-file.0. This
renaming scheme continues until the maximum number of trace files is reached. Then
the oldest trace file is overwritten.
• If you specify a maximum file size, you also must specify a maximum number of trace
files with the files option and filename.
Range: 0 KB through 1 GB
Default: 128 KB
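For example, the following configuration-mode commands set illustrative trace file
parameters:
{primary:node0}[edit]
user@host# set chassis cluster traceoptions file cluster-trace size 1m files 3
user@host# set chassis cluster traceoptions level all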
weight
Description Specify the relative importance of the object to the operation of the redundancy group.
This statement is primarily used with interface monitoring and IP address monitoring
objects. The failure of an object—such as an interface—with a greater weight brings the
group closer to failover. Every monitored object is assigned a weight.
• interface-monitor objects—If the object fails, its weight is deducted from the threshold
of its redundancy group.
• ip-monitoring objects—If the monitored IP address becomes unreachable, its weight is
deducted from the global-threshold of IP monitoring objects in its redundancy group.
Every redundancy group has a default threshold of 255. If the threshold reaches 0, a
failover is triggered. Failover is triggered even if the redundancy group is in manual failover
mode and preemption is not enabled.
Options number —Weight assigned to the interface or monitored IP address. A higher weight value
indicates a greater importance.
Range: 0 through 255
Operational Commands
clear chassis cluster control-plane statistics
List of Sample Output clear chassis cluster control-plane statistics on page 307
Output Fields When you enter this command, you are provided feedback on the status of your request.
Sample Output
clear chassis cluster control-plane statistics
user@host> clear chassis cluster control-plane statistics
Cleared control-plane statistics
clear chassis cluster data-plane statistics
List of Sample Output clear chassis cluster data-plane statistics on page 308
Output Fields When you enter this command, you are provided feedback on the status of your request.
Sample Output
clear chassis cluster data-plane statistics
user@host> clear chassis cluster data-plane statistics
Cleared data-plane statistics
clear chassis cluster failover-count
Output Fields When you enter this command, you are provided feedback on the status of your request.
Sample Output
The following example displays the redundancy groups before and after the
failover-counts are cleared.
Cluster ID: 3
Node name Priority Status Preempt Manual failover
Cluster ID: 3
Node name Priority Status Preempt Manual failover
clear chassis cluster ip-monitoring failure-count
Output Fields When you enter this command, you are provided feedback on the status of your request.
Sample Output
user@host> clear chassis cluster ip-monitoring failure-count
node0:
--------------------------------------------------------------------------
Cleared failure count for all IPs
node1:
--------------------------------------------------------------------------
Cleared failure count for all IPs
clear chassis cluster ip-monitoring failure-count ip-address
Output Fields When you enter this command, you are provided feedback on the status of your request.
Sample Output
user@host> clear chassis cluster ip-monitoring failure-count ip-address 1.1.1.1
node0:
--------------------------------------------------------------------------
Cleared failure count for IP: 1.1.1.1
node1:
--------------------------------------------------------------------------
Cleared failure count for IP: 1.1.1.1
clear chassis cluster statistics
Description Clear the control plane and data plane statistics of a chassis cluster.
Output Fields When you enter this command, you are provided feedback on the status of your request.
Sample Output
clear chassis cluster statistics
user@host> clear chassis cluster statistics
Cleared control-plane statistics
Cleared data-plane statistics
request chassis cluster configuration-synchronize
Description Synchronizes the configuration from the primary node to the secondary node when the
secondary node joins the primary node in a cluster.
Output Fields When you enter this command, you are provided feedback on the status of your request.
Sample Output
request chassis cluster configuration-synchronize
user@host> request chassis cluster configuration-synchronize
Performing configuration synchronization from remote node.
request chassis cluster failover node
Description For chassis cluster configurations, initiate manual failover in a redundancy group from
one node to the other, which becomes the primary node, and automatically reset the
priority of the group to 255. The failover stays in effect until the new primary node becomes
unavailable, the threshold of the redundancy group reaches 0, or you use the request
chassis cluster failover reset command.
After a manual failover, you must use the request chassis cluster failover reset command
before initiating another failover.
Options • node node-number-Number of the chassis cluster node to which the redundancy group
fails over.
• Range: 0 through 1
List of Sample Output request chassis cluster failover node 0 redundancy-group 1 on page 315
Output Fields When you enter this command, you are provided feedback on the status of your request.
Sample Output
request chassis cluster failover node 0 redundancy-group 1
user@host> request chassis cluster failover node 0 redundancy-group 1
Initiated manual failover for redundancy group 1
request chassis cluster failover redundancy-group
Description For chassis cluster configurations, initiate manual failover in a redundancy group from
one node to the other, which becomes the primary node, and automatically reset the
priority of the group to 255. The failover stays in effect until the new primary node becomes
unavailable, the threshold of the redundancy group reaches 0, or you use the request
chassis cluster failover reset command.
After a manual failover, you must use the request chassis cluster failover reset command
before initiating another failover.
Options • node node-number-Number of the chassis cluster node to which the redundancy group
fails over.
• Range: 0 through 1
Related • Initiating a Chassis Cluster Manual Redundancy Group Failover on page 146
Documentation
• Verifying Chassis Cluster Failover Status on page 148
List of Sample Output request chassis cluster failover redundancy-group 0 node 1 on page 316
Output Fields When you enter this command, you are provided feedback on the status of your request.
Sample Output
request chassis cluster failover redundancy-group 0 node 1
user@host> request chassis cluster failover redundancy-group 0 node 1
{primary:node0}
user@host> request chassis cluster failover redundancy-group 0 node 1
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Initiated manual failover for redundancy group 0
request chassis cluster failover reset
Description In chassis cluster configurations, undo the previous manual failover and return the
redundancy group to its original settings.
List of Sample Output request chassis cluster failover reset redundancy-group 0 on page 317
Output Fields When you enter this command, you are provided feedback on the status of your request.
Sample Output
request chassis cluster failover reset redundancy-group 0
user@host> request chassis cluster failover reset redundancy-group 0
request chassis fpc
Output Fields When you enter this command, you are provided feedback on the status of your request.
Sample Output
request chassis fpc
user@host> request chassis fpc online slot 0
FPC 0 already online
request system software in-service-upgrade
Supported Platforms SRX300, SRX320, SRX340, SRX345, SRX5400, SRX550, SRX5600, SRX5800
Release Information For SRX5600 and SRX5800 devices, command introduced in Junos OS
Release 9.6 and support for reboot as a required parameter added in Junos OS Release
11.2R2. For SRX5400 devices, the command is introduced in Junos OS Release
12.1X46-D20. For SRX300, SRX320, SRX340, and SRX345 devices, command introduced
in Junos OS Release 15.1X49-D40.
Description The in-service software upgrade (ISSU) feature allows a chassis cluster pair to be
upgraded from supported Junos OS versions with a traffic impact similar to that of
redundancy group failovers. Before upgrading, you should perform failovers so that all
redundancy groups are active on only one device. We recommend that graceful restart
for routing protocols be enabled before you initiate an ISSU.
For SRX300, SRX320, SRX340, SRX345, and SRX550 devices, you must use the no-sync
parameter to perform an in-band cluster upgrade (ICU). This allows a chassis cluster
pair to be upgraded with a minimal service disruption of approximately 30 seconds.
• no-copy—(Optional) Installs the software upgrade package but does not save the
copies of package files.
• no-sync—Stops the flow state from synchronizing when the old secondary node has
booted with a new Junos OS image.
This parameter applies to SRX300, SRX320, SRX340, SRX345, and SRX550 devices
only. It is required for an ICU.
• no-tcp-syn-check—(Optional) Creates a window wherein the TCP SYN check for the
incoming packets is disabled. The default value for the window is 7200 seconds (2
hours).
This parameter applies to SRX300, SRX320, SRX340, SRX345, and SRX550 devices
only.
• reboot—Reboots each device in the chassis cluster pair after installation is completed.
This parameter applies to SRX5400, SRX5600, and SRX5800 devices only. It is required
for an ISSU. (The devices in a cluster are automatically rebooted following an ICU.)
List of Sample Output request system software in-service-upgrade (High-End SRX Series Devices) on page 320
request system software in-service-upgrade (Branch SRX Series Devices) on page 321
Output Fields When you enter this command, you are provided feedback on the status of your request.
Sample Output
request system software in-service-upgrade (High-End SRX Series Devices)
user@host> request system software in-service-upgrade
/var/tmp/junos-srx1k3k-11.2R2.5-domestic.tgz no-copy reboot
Chassis ISSU Started
node0:
--------------------------------------------------------------------------
Chassis ISSU Started
ISSU: Validating Image
Inititating in-service-upgrade
node0:
--------------------------------------------------------------------------
Inititating in-service-upgrade
Checking compatibility with configuration
mgd: commit complete
Validation succeeded
ISSU: Preparing Backup RE
Finished upgrading secondary node node0
Rebooting Secondary Node
node0:
--------------------------------------------------------------------------
Shutdown NOW!
[pid 3257]
ISSU: Backup RE Prepare Done
Waiting for node0 to reboot.
node0 booted up.
Waiting for node0 to become secondary
node0 became secondary.
Waiting for node0 to be ready for failover
ISSU: Preparing Daemons
Secondary node0 ready for failover.
Failing over all redundancy-groups to node0
ISSU: Preparing for Switchover
Initiated failover for all the redundancy groups to node1
Waiting for node0 take over all redundancy groups
node0:
--------------------------------------------------------------------------
Exiting in-service-upgrade window
Exiting in-service-upgrade window
Chassis ISSU Aborted
node0:
--------------------------------------------------------------------------
Chassis ISSU Ended
ISSU completed successfully, rebooting...
Shutdown NOW!
[pid 4294]
Sample Output
request system software in-service-upgrade (Branch SRX Series Devices)
user@host> request system software in-service-upgrade
/var/tmp/junos-srxsme-11.2R2.2-domestic.tgz no-sync
ISSU: Validating package
WARNING: in-service-upgrade shall reboot both the nodes
in your cluster. Please ignore any subsequent
reboot request message
ISSU: start downloading software package on secondary node
Pushing bundle to node1
NOTICE: Validating configuration against junos-srxsme-11.2R2.2-domestic.tgz.
NOTICE: Use the 'no-validate' option to skip this if desired.
Formatting alternate root (/dev/ad0s1a)...
/dev/ad0s1a: 630.5MB (1291228 sectors) block size 16384, fragment size 2048
using 4 cylinder groups of 157.62MB, 10088 blks, 20224 inodes.
super-block backups (for fsck -b #) at:
32, 322848, 645664, 968480
Checking compatibility with configuration
Initializing...
Verified manifest signed by PackageProduction_11_2_0
Verified junos-11.2R2.2-domestic signed by PackageProduction_11_2_0
Using junos-11.2R2.2-domestic from
/altroot/cf/packages/install-tmp/junos-11.2R2.2-domestic
Copying package ...
Saving boot file package in /var/sw/pkg/junos-boot-srxsme-11.2R2.2.tgz
Verified manifest signed by PackageProduction_11_2_0
Hardware Database regeneration succeeded
Validating against /config/juniper.conf.gz
cp: /cf/var/validate/chroot/var/etc/resolv.conf and /etc/resolv.conf are identical
(not copied).
cp: /cf/var/validate/chroot/var/etc/hosts and /etc/hosts are identical (not
copied).
mgd: commit complete
Validation succeeded
Installing package '/altroot/cf/packages/install-tmp/junos-11.2R2.2-domestic' ...
Verified junos-boot-srxsme-11.2R2.2.tgz signed by PackageProduction_11_2_0
Verified junos-srxsme-11.2R2.2-domestic signed by PackageProduction_11_2_0
Saving boot file package in /var/sw/pkg/junos-boot-srxsme-11.2R2.2.tgz
JUNOS 11.2R2.2 will become active at next reboot
WARNING: A reboot is required to load this software correctly
WARNING: Use the 'request system reboot' command
WARNING: when software installation is complete
Saving state for rollback ...
ISSU: finished upgrading on secondary node node1
ISSU: start upgrading software package on primary node
NOTICE: Validating configuration against junos-srxsme-11.2R2.2-domestic.tgz.
NOTICE: Use the 'no-validate' option to skip this if desired.
node1:
--------------------------------------------------------------------------
Successfully reset all redundancy-groups priority back to configured ones.
Redundancy-groups-0 will not be reset and the primaryship remains unchanged.
node0:
--------------------------------------------------------------------------
Initiated manual failover for all redundancy-groups to node0
Redundancy-groups-0 will not failover and the primaryship remains unchanged.
ISSU: rebooting Secondary Node
node1:
--------------------------------------------------------------------------
Shutdown NOW!
[pid 7023]
ISSU: Waiting for secondary node node1 to reboot.
ISSU: node 1 went down
ISSU: Waiting for node 1 to come up
ISSU: node 1 came up
ISSU: secondary node node1 booted up.
Shutdown NOW!
[pid 45056]
set chassis cluster cluster-id
Release Information Support for extended cluster identifiers (more than 15 identifiers) added in Junos OS
Release 12.1X45-D10.
Description This operational mode command sets the chassis cluster identifier (ID) and node ID on
each device, and reboots the devices to enable clustering. The system uses the chassis
cluster ID and chassis cluster node ID to apply the correct configuration for each node
(for example, when you use the apply-groups command to configure the chassis cluster
management interface). The chassis cluster ID and node ID statements are written to
the EPROM, and the statements take effect when the system is rebooted.
NOTE: If you have a cluster set up and running with an earlier release of Junos
OS, you can upgrade to Junos OS Release 12.1X45-D10 or later and re-create
a cluster with cluster IDs greater than 16. If for any reason you decide to revert
to the previous version of Junos OS that did not support extended cluster
IDs, the system comes up with standalone devices after you reboot. If the
cluster ID set is less than 16 and you roll back to a previous release, the system
comes back with the previous setup.
Options cluster-id cluster-id —Identifies the cluster within the Layer 2 domain.
Range: 0 through 255
Related • Example: Setting the Chassis Cluster Node ID and Cluster ID for Branch SRX Series
Documentation Devices on page 51
• Example: Setting the Chassis Cluster Node ID and Cluster ID for High-End SRX Series
Devices
Output Fields When you enter this command, you are provided feedback on the status of your request.
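For example, the following operational-mode commands form a cluster by assigning the
same cluster ID and a unique node ID on each device (the cluster ID shown is illustrative;
run one command on each node):
user@host> set chassis cluster cluster-id 1 node 0 reboot
user@host> set chassis cluster cluster-id 1 node 1 reboot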
show chassis cluster control-plane statistics
Release Information Command introduced in Junos OS Release 9.3. Output changed to support dual control
ports in Junos OS Release 10.0.
List of Sample Output show chassis cluster control-plane statistics on page 325
show chassis cluster control-plane statistics (SRX5000 line devices) on page 325
Output Fields Table 27 on page 324 lists the output fields for the show chassis cluster control-plane
statistics command. Output fields are listed in the approximate order in which they appear.
Control link statistics Statistics of the control link used by chassis cluster traffic. Statistics for Control link 1 are
displayed when you use dual control links (SRX5000 lines only).
Fabric link statistics Statistics of the fabric link used by chassis cluster traffic. Statistics for Child Link 1 are
displayed when you use dual fabric links.
Switch fabric link statistics Statistics of the switch fabric link used by chassis cluster traffic.
Sample Output
show chassis cluster control-plane statistics
user@host> show chassis cluster control-plane statistics
Control link statistics:
Control link 0:
Heartbeat packets sent: 11646
Heartbeat packets received: 8343
Heartbeat packet errors: 0
Fabric link statistics:
Child link 0
Probes sent: 11644
Probes received: 8266
Switch fabric link statistics:
Probe state : DOWN
Probes sent: 8145
Probes received: 8013
Probe recv errors: 0
Probe send errors: 0
Sample Output
show chassis cluster control-plane statistics (SRX5000 line devices)
user@host> show chassis cluster control-plane statistics
Control link statistics:
Control link 0:
Heartbeat packets sent: 258698
Heartbeat packets received: 258693
Heartbeat packet errors: 0
Control link 1:
Heartbeat packets sent: 258698
Heartbeat packets received: 258693
Heartbeat packet errors: 0
Fabric link statistics:
Child link 0
Probes sent: 258690
Probes received: 258690
Child link 1
Probes sent: 258505
Probes received: 258505
show chassis cluster data-plane interfaces
Description Display the status of the data plane interface (also known as a fabric interface) in a
chassis cluster configuration.
List of Sample Output show chassis cluster data-plane interfaces on page 326
Output Fields Table 28 on page 326 lists the output fields for the show chassis cluster data-plane
interfaces command. Output fields are listed in the approximate order in which they
appear.
Sample Output
show chassis cluster data-plane interfaces
user@host> show chassis cluster data-plane interfaces
fab0:
Name Status
ge-2/1/9 up
ge-2/2/5 up
fab1:
Name Status
ge-8/1/9 up
ge-8/2/5 up
show chassis cluster data-plane statistics
List of Sample Output show chassis cluster data-plane statistics on page 328
Output Fields Table 29 on page 327 lists the output fields for the show chassis cluster data-plane statistics
command. Output fields are listed in the approximate order in which they appear.
Sample Output
show chassis cluster data-plane statistics
user@host> show chassis cluster data-plane statistics
Services Synchronized:
Service name RTOs sent RTOs received
Translation context 0 0
Incoming NAT 0 0
Resource manager 0 0
Session create 0 0
Session close 0 0
Session change 0 0
Gate create 0 0
Session ageout refresh requests 0 0
Session ageout refresh replies 0 0
IPsec VPN 0 0
Firewall user authentication 0 0
MGCP ALG 0 0
H323 ALG 0 0
SIP ALG 0 0
SCCP ALG 0 0
PPTP ALG 0 0
RTSP ALG 0 0
show chassis cluster ethernet-switching interfaces
Description Display the status of the switch fabric interfaces (swfab) in a chassis cluster.
List of Sample Output show chassis cluster ethernet-switching interfaces on page 329
Output Fields Table 30 on page 329 lists the output fields for the show chassis cluster ethernet-switching
interfaces command. Output fields are listed in the approximate order in which they
appear.
Sample Output
show chassis cluster ethernet-switching interfaces
user@host> show chassis cluster ethernet-switching interfaces
swfab0:
Name Status
ge-0/0/9 up
ge-0/0/10 up
swfab1:
Name Status
ge-5/0/9 up
ge-5/0/10 up
show chassis cluster ethernet-switching status
List of Sample Output show chassis cluster ethernet-switching status on page 331
Output Fields Table 31 on page 330 lists the output fields for the show chassis cluster ethernet-switching
status command. Output fields are listed in the approximate order in which they appear.
NOTE: If you create a cluster with cluster IDs greater than 16, and then
decide to roll back to a previous release image that does not support
extended cluster IDs, the system comes up as standalone.
NOTE: If you have a cluster set up and running with an earlier release of
Junos OS, you can upgrade to Junos OS Release 12.1X45-D10 and re-create
a cluster with cluster IDs greater than 16. However, if for any reason you
decide to revert to the previous version of Junos OS that did not support
extended cluster IDs, the system comes up with standalone devices after
you reboot.
Sample Output
show chassis cluster ethernet-switching status
user@host> show chassis cluster ethernet-switching status
Cluster ID: 10
Node Priority Status Preempt Manual failover
show chassis cluster information
Description Display chassis cluster messages. The messages indicate each node's health condition
and details of the monitored failure.
Output Fields Table 32 on page 332 lists the output fields for the show chassis cluster information
command. Output fields are listed in the approximate order in which they appear.
Redundancy Group Information • Redundancy Group—ID number (0 - 255) of a redundancy group in the cluster.
• Current State—State of the redundancy group: primary, secondary, hold, or
secondary-hold.
• Weight—Relative importance of the redundancy group.
• Time—Time when the redundancy group changed the state.
• From—State of the redundancy group before the change.
• To—State of the redundancy group after the change.
• Reason—Reason for the change of state of the redundancy group.
Chassis cluster LED information • Current LED color—Current color state of the LED.
• Last LED change reason—Reason for change of state of the LED.
Sample Output
show chassis cluster information
user@host> show chassis cluster information
node0:
--------------------------------------------------------------------------
Redundancy Group Information:
node1:
--------------------------------------------------------------------------
Redundancy Group Information:
Sample Output
show chassis cluster information
user@host> show chassis cluster information
The following output is specific to monitoring abnormal (unhealthy) case.
node0:
--------------------------------------------------------------------------
Redundancy Group Information:
Failure Information:
node1:
--------------------------------------------------------------------------
Redundancy Group Information:
Failure Information:
show chassis cluster information configuration-synchronization
Description Display chassis cluster messages. The messages indicate the redundancy mode,
automatic synchronization status, and if automatic synchronization is enabled on the
device.
List of Sample Output show chassis cluster information configuration-synchronization on page 336
Output Fields Table 33 on page 336 lists the output fields for the show chassis cluster information
configuration-synchronization command. Output fields are listed in the approximate order
in which they appear.
Events The timestamp of the event, the automatic configuration synchronization status, and
the number of synchronization attempts.
Sample Output
show chassis cluster information configuration-synchronization
user@host> show chassis cluster information configuration-synchronization
node0:
--------------------------------------------------------------------------
Configuration Synchronization:
Status:
Activation status: Enabled
Last sync operation: Auto-Sync
Last sync result: Not needed
Last sync mgd messages:
Events:
Feb 25 22:21:49.174 : Auto-Sync: Not needed
node1:
--------------------------------------------------------------------------
Configuration Synchronization:
Status:
Activation status: Enabled
Last sync operation: Auto-Sync
Last sync result: Succeeded
Last sync mgd messages:
mgd: rcp: /config/juniper.conf: No such file or directory
Network security daemon: warning: You have enabled/disabled inet6 flow.
Network security daemon: You must reboot the system for your change to
take effect.
Network security daemon: If you have deployed a cluster, be sure to reboot
all nodes.
mgd: commit complete
Events:
Feb 25 23:02:33.467 : Auto-Sync: In progress. Attempt: 1
Feb 25 23:03:13.200 : Auto-Sync: Succeeded. Attempt: 1
show chassis cluster interfaces
Release Information Command modified in Junos OS Release 9.0. Output changed to support dual control
ports in Junos OS Release 10.0. Output changed to support control interfaces in Junos
OS Release 11.2. Output changed to support redundant pseudo interfaces in Junos OS
Release 12.1X44-D10. For high-end SRX Series devices, output changed to support the
internal security association (SA) option in Junos OS Release 12.1X45-D10.
Description Display the status of the control interface in a chassis cluster configuration.
Output Fields Table 34 on page 338 lists the output fields for the show chassis cluster interfaces
command. Output fields are listed in the approximate order in which they appear.
Control link status State of the chassis cluster control interface: up or down.
Sample Output
show chassis cluster interfaces
user@host> show chassis cluster interfaces
Control link status: Up
Control interfaces:
Index Interface Monitored-Status
0 em0 Up
1 em1 Down
Fabric interfaces:
Name Child-interface Status
fab0 ge-0/1/0 Up
fab0
fab1 ge-6/1/0 Up
fab1
Redundant-ethernet Information:
Name Status Redundancy-group
reth0 Up 1
reth1 Up 2
reth2 Down Not configured
reth3 Down Not configured
reth4 Down Not configured
reth5 Down Not configured
reth6 Down Not configured
reth7 Down Not configured
reth8 Down Not configured
reth9 Down Not configured
reth10 Down Not configured
reth11 Down Not configured
Redundant-pseudo-interface Information:
Interface Monitoring:
Interface Weight Status Redundancy-group
ge-0/1/9 100 Up 0
ge-0/1/9 100 Up
Sample Output
show chassis cluster interfaces (SRX5000 line devices)
user@host> show chassis cluster interfaces
Control link status: Up
Control interfaces:
Index Interface Monitored-Status Internal SA
0 em0 Up enabled
1 em1 Down enabled
Fabric interfaces:
Name Child-interface Status
fab0 ge-0/1/0 Up
fab0
fab1 ge-6/1/0 Up
fab1
Redundant-ethernet Information:
Name Status Redundancy-group
reth0 Up 1
reth1 Up 2
reth2 Down Not configured
reth3 Down Not configured
reth4 Down Not configured
reth5 Down Not configured
reth6 Down Not configured
reth7 Down Not configured
reth8 Down Not configured
reth9 Down Not configured
reth10 Down Not configured
reth11 Down Not configured
Redundant-pseudo-interface Information:
Name Status Redundancy-group
lo0 Up 1
Interface Monitoring:
Interface Weight Status Redundancy-group
ge-0/1/9 100 Up 0
ge-0/1/9 100 Up
Sample Output
show chassis cluster interfaces
user@host> show chassis cluster interfaces
The following output is specific to fabric monitoring failure:
Control interfaces:
Index Interface Monitored-Status Internal-SA
0 fxp1 Up Disabled
Fabric interfaces:
Name Child-interface Status
(Physical/Monitored)
fab0 ge-0/0/2 Down / Down
fab0
fab1 ge-9/0/2 Up / Up
fab1
Redundant-pseudo-interface Information:
Name Status Redundancy-group
lo0 Up 0
Sample Output
show chassis cluster interfaces
(SRX5400, SRX5600, and SRX5800 devices with SRX5000 line SRX5K-SCB3 (SCB3) with enhanced midplanes
and SRX5K-MPC3-100G10G (IOC3) or SRX5K-MPC3-40G10G (IOC3))
user@host> show chassis cluster interfaces
The following output is specific to SRX5400, SRX5600, and SRX5800 devices in a
chassis cluster, when the PICs containing fabric links on the SRX5K-MPC3-40G10G
(IOC3) are powered off to turn on alternate PICs. If no alternate fabric links are configured
on the PICs that are turned on, RTO synchronous communication between the two nodes
stops and the chassis cluster session state will not back up, because the fabric link is
missing.
Control interfaces:
Index Interface Monitored-Status Internal-SA
0 em0 Up Disabled
1 em1 Down Disabled
Fabric interfaces:
Name Child-interface Status
(Physical/Monitored)
fab0 <<< fab child missing once PIC offlined
fab0
fab1 xe-10/2/7 Up / Down
fab1
Redundant-ethernet Information:
Name Status Redundancy-group
reth0 Up Not configured
reth1 Down 1
Redundant-pseudo-interface Information:
Name Status Redundancy-group
lo0 Up 0
show chassis cluster ip-monitoring status
Release Information Command introduced in Junos OS Release 9.6. Support for global threshold, current
threshold, and weight of each monitored IP address added in Junos OS Release
12.1X47-D10.
Description Display the status of all monitored IP addresses for a redundancy group.
Options • none— Display the status of monitored IP addresses for all redundancy groups on the
node.
List of Sample Output show chassis cluster ip-monitoring status on page 344
show chassis cluster ip-monitoring status redundancy-group on page 345
Output Fields Table 35 on page 343 lists the output fields for the show chassis cluster ip-monitoring
status command.
Global threshold Failover value for all IP addresses monitored by the redundancy group.
Current threshold Value equal to the global threshold minus the total weight of the unreachable IP addresses.
Status Values for this field are: reachable, unreachable, and unknown. The status is “unknown”
if Packet Forwarding Engines (PFEs) are not yet up and running.
Table 35: show chassis cluster ip-monitoring status Output Fields (continued)
Field Name Field Description
Reason Explanation for the reported status. See Table 36 on page 344.
Weight Combined weight (0 - 255) assigned to all monitored IP addresses. A higher weight value
indicates greater importance.
Expanded reason output fields for unreachable IP addresses added in Junos OS Release
10.1. You might see any of the following reasons displayed.
Table 36: show chassis cluster ip-monitoring status redundancy group Reason Fields
Reason Reason Description
No route to host The router could not resolve the ARP, which is needed to send the ICMP packet to the
host with the monitored IP address.
No auxiliary IP found The redundant Ethernet interface does not have an auxiliary IP address configured.
redundancy-group state unknown Unable to obtain the state (primary, secondary, secondary-hold, disable) of a
redundancy-group.
No reth child MAC address Could not extract the MAC address of the redundant Ethernet child interface.
Secondary link not monitored The secondary link might be down (the secondary child interface of a redundant Ethernet
interface is either down or non-functional).
Unknown The IP address has just been configured and the router still does not know the status of
this IP.
Sample Output
show chassis cluster ip-monitoring status
user@host> show chassis cluster ip-monitoring status
node0:
--------------------------------------------------------------------------
Redundancy group: 1
Global threshold: 200
Current threshold: -120
node1:
--------------------------------------------------------------------------
Redundancy group: 1
Global threshold: 200
Current threshold: -120
Sample Output
show chassis cluster ip-monitoring status redundancy-group
user@host> show chassis cluster ip-monitoring status redundancy-group 1
node0:
--------------------------------------------------------------------------
Redundancy group: 1
node1:
--------------------------------------------------------------------------
Redundancy group: 1
show chassis cluster statistics
Release Information Command modified in Junos OS Release 9.0. Output changed to support dual control
ports in Junos OS Release 10.0.
Output Fields Table 37 on page 346 lists the output fields for the show chassis cluster statistics command.
Output fields are listed in the approximate order in which they appear.
Control link statistics Statistics of the control link used by chassis cluster traffic. Statistics for Control link 1 are
displayed when you use dual control links (SRX5000 lines only). Note that the output
for the SRX5000 lines will always show Control link 0 and Control link 1 statistics, even
though only one control link is active or working.
Fabric link statistics Statistics of the fabric link used by chassis cluster traffic. Statistics for Child Link 1 are
displayed when you use dual fabric links.
Sample Output
show chassis cluster statistics
user@host> show chassis cluster statistics
Control link statistics:
Control link 0:
Heartbeat packets sent: 798
Heartbeat packets received: 784
Heartbeat packets errors: 0
Fabric link statistics:
Child link 0
Probes sent: 793
Probes received: 0
Services Synchronized:
Service name RTOs sent RTOs received
Translation context 0 0
Incoming NAT 0 0
Resource manager 0 0
Session create 0 0
Session close 0 0
Session change 0 0
Gate create 0 0
Sample Output
show chassis cluster statistics (SRX5000 line devices)
user@host> show chassis cluster statistics
Control link statistics:
Control link 0:
Heartbeat packets sent: 258689
Heartbeat packets received: 258684
Heartbeat packets errors: 0
Control link 1:
Heartbeat packets sent: 258689
Heartbeat packets received: 258684
Heartbeat packets errors: 0
Fabric link statistics:
Child link 0
Probes sent: 258681
Probes received: 258681
Child link 1
Probes sent: 258501
Probes received: 258501
Services Synchronized:
Service name RTOs sent RTOs received
Translation context 0 0
Incoming NAT 0 0
Resource manager 0 0
Session create 1 0
Session close 1 0
Session change 0 0
Gate create 0 0
Session ageout refresh requests 0 0
Session ageout refresh replies 0 0
IPSec VPN 0 0
Firewall user authentication 0 0
MGCP ALG 0 0
H323 ALG 0 0
SIP ALG 0 0
SCCP ALG 0 0
PPTP ALG 0 0
RPC ALG 0 0
RTSP ALG 0 0
RAS ALG 0 0
MAC address learning 0 0
GPRS GTP 0 0
Sample Output
show chassis cluster statistics (SRX5000 line devices)
user@host> show chassis cluster statistics
Control link statistics:
Control link 0:
Heartbeat packets sent: 82371
Heartbeat packets received: 82321
Heartbeat packets errors: 0
Control link 1:
Heartbeat packets sent: 0
Heartbeat packets received: 0
Heartbeat packets errors: 0
Fabric link statistics:
Child link 0
Probes sent: 258681
Probes received: 258681
Child link 1
Probes sent: 258501
Probes received: 258501
Services Synchronized:
Service name RTOs sent RTOs received
Translation context 0 0
Incoming NAT 0 0
Resource manager 0 0
Session create 1 0
Session close 1 0
Session change 0 0
Gate create 0 0
Session ageout refresh requests 0 0
Session ageout refresh replies 0 0
IPSec VPN 0 0
Firewall user authentication 0 0
MGCP ALG 0 0
H323 ALG 0 0
SIP ALG 0 0
SCCP ALG 0 0
PPTP ALG 0 0
RPC ALG 0 0
RTSP ALG 0 0
RAS ALG 0 0
MAC address learning 0 0
GPRS GTP 0 0
show chassis cluster status
Release Information Command modified in Junos OS Release 9.2. Support for dual control ports added in
Junos OS Release 10.0. Support for monitoring failures added in Junos OS Release
12.1X47-D10.
Options • none—Display the status of all redundancy groups in the chassis cluster.
Output Fields Table 38 on page 350 lists the output fields for the show chassis cluster status command.
Output fields are listed in the approximate order in which they appear.
Cluster ID ID number (1-15) of a cluster applies to releases up to Junos OS Release 12.1X45-D10.
ID number (1-255) applies to Junos OS Release 12.1X45-D10 and later. Setting a cluster
ID to 0 is equivalent to disabling a cluster.
Manual failover • Yes: Mastership is set manually through the CLI with the request chassis cluster
failover node or request chassis cluster failover redundancy-group command. This
setting overrides Priority and Preempt.
• No: Mastership is not set manually through the CLI.
Sample Output
Displays chassis cluster status with all redundancy groups.
Cluster ID: 1
Node Priority Status Preempt Manual Monitor-failures
Sample Output
Displays chassis cluster status with redundancy group 1 only.
Cluster ID: 1
Node Priority Status Preempt Manual Monitor-failures
show chassis environment
Description Display environmental information about the services gateway chassis, including the
temperature and information about the fans, power supplies, and Routing Engine.
Output Fields Table 39 on page 353 lists the output fields for the show chassis environment command.
Output fields are listed in the approximate order in which they appear.
Temp Temperature of air flowing through the chassis in degrees Celsius (C) and Fahrenheit (F).
Fan Fan status: OK, Testing (during initial power-on), Failed, or Absent.
Sample Output
show chassis environment
user@host> show chassis environment
Class Item Status Measurement
Temp PEM 0 OK 40 degrees C / 104 degrees F
show chassis ethernet-switch
Description Display information about the ports on the Control Board (CB) Ethernet switch on
SRX Series devices.
Output Fields Table 40 on page 357 lists the output fields for the show chassis ethernet-switch command.
Output fields are listed in the approximate order in which they appear.
Link is good on port n connected to device Information about the link between each port on the CB's
Ethernet switch and one of the following devices:
Autonegotiate is Enabled (or Disabled) By default, built-in Fast Ethernet ports on a PIC autonegotiate
whether to operate at 10 Mbps or 100 Mbps. All other interfaces automatically choose the correct
speed based on the PIC type and whether the PIC is configured to operate in multiplexed mode.
Sample Output
show chassis ethernet-switch
user@host> show chassis ethernet-switch
node0:
--------------------------------------------------------------------------
Displaying summary for switch 0
Link is good on GE port 0 connected to device: FPC0
Speed is 1000Mb
Duplex is full
Autonegotiate is Enabled
Flow Control TX is Disabled
Flow Control RX is Disabled
node1:
--------------------------------------------------------------------------
Displaying summary for switch 0
Link is good on GE port 0 connected to device: FPC0
Speed is 1000Mb
Duplex is full
Autonegotiate is Enabled
Flow Control TX is Disabled
Flow Control RX is Disabled
Autonegotiate is Enabled
Flow Control TX is Disabled
Flow Control RX is Disabled
show chassis fabric plane
List of Sample Output show chassis fabric plane (SRX5600 and SRX5800 devices with SRX5000 line SCB
II (SRX5K-SCBE) and SRX5K-RE-1800X4) on page 362
Output Fields Table 41 on page 361 lists the output fields for the show chassis fabric plane command.
Output fields are listed in the approximate order in which they appear.
PFE Slot number of each Packet Forwarding Engine and the state of the
links to the FPC:
Sample Output
show chassis fabric plane
(SRX5600 and SRX5800 devices with SRX5000 line SCB II (SRX5K-SCBE) and SRX5K-RE-1800X4)
user@host> show chassis fabric plane
node0:
--------------------------------------------------------------------------
Fabric management PLANE state
Plane 0
Plane state: ACTIVE
FPC 0
PFE 0 :Links ok
FPC 2
PFE 0 :Links ok
FPC 3
PFE 0 :Links ok
FPC 4
PFE 0 :Links ok
FPC 7
PFE 0 :Links ok
FPC 8
PFE 0 :Links ok
FPC 9
PFE 0 :Links ok
FPC 10
PFE 0 :Links ok
Plane 1
Plane state: ACTIVE
FPC 0
PFE 0 :Links ok
FPC 2
PFE 0 :Links ok
FPC 3
PFE 0 :Links ok
FPC 4
PFE 0 :Links ok
FPC 7
PFE 0 :Links ok
FPC 8
PFE 0 :Links ok
FPC 9
PFE 0 :Links ok
FPC 10
PFE 0 :Links ok
Plane 2
Plane state: ACTIVE
FPC 0
PFE 0 :Links ok
FPC 2
PFE 0 :Links ok
FPC 3
PFE 0 :Links ok
FPC 4
PFE 0 :Links ok
FPC 7
PFE 0 :Links ok
FPC 8
PFE 0 :Links ok
FPC 9
PFE 0 :Links ok
FPC 10
PFE 0 :Links ok
Plane 3
Plane state: ACTIVE
FPC 0
PFE 0 :Links ok
FPC 2
PFE 0 :Links ok
FPC 3
PFE 0 :Links ok
FPC 4
PFE 0 :Links ok
FPC 7
PFE 0 :Links ok
FPC 8
PFE 0 :Links ok
FPC 9
PFE 0 :Links ok
FPC 10
PFE 0 :Links ok
Plane 4
Plane state: SPARE
FPC 0
PFE 0 :Links ok
FPC 2
PFE 0 :Links ok
FPC 3
PFE 0 :Links ok
FPC 4
PFE 0 :Links ok
FPC 7
PFE 0 :Links ok
FPC 8
PFE 0 :Links ok
FPC 9
PFE 0 :Links ok
FPC 10
PFE 0 :Links ok
Plane 5
Plane state: SPARE
FPC 0
PFE 0 :Links ok
FPC 2
PFE 0 :Links ok
FPC 3
PFE 0 :Links ok
FPC 4
PFE 0 :Links ok
FPC 7
PFE 0 :Links ok
FPC 8
PFE 0 :Links ok
FPC 9
PFE 0 :Links ok
FPC 10
PFE 0 :Links ok
node1:
--------------------------------------------------------------------------
Fabric management PLANE state
Plane 0
Plane state: ACTIVE
FPC 0
PFE 0 :Links ok
FPC 1
PFE 0 :Links ok
FPC 2
PFE 0 :Links ok
FPC 3
PFE 0 :Links ok
FPC 4
PFE 0 :Links ok
FPC 7
PFE 0 :Links ok
FPC 8
PFE 0 :Links ok
FPC 10
PFE 0 :Links ok
Plane 1
Plane state: ACTIVE
FPC 0
PFE 0 :Links ok
FPC 1
PFE 0 :Links ok
FPC 2
PFE 0 :Links ok
FPC 3
PFE 0 :Links ok
FPC 4
PFE 0 :Links ok
FPC 7
PFE 0 :Links ok
FPC 8
PFE 0 :Links ok
FPC 10
PFE 0 :Links ok
Plane 2
Plane state: ACTIVE
FPC 0
PFE 0 :Links ok
FPC 1
PFE 0 :Links ok
FPC 2
PFE 0 :Links ok
FPC 3
PFE 0 :Links ok
FPC 4
PFE 0 :Links ok
FPC 7
PFE 0 :Links ok
FPC 8
PFE 0 :Links ok
FPC 10
PFE 0 :Links ok
Plane 3
Plane state: ACTIVE
FPC 0
PFE 0 :Links ok
FPC 1
PFE 0 :Links ok
FPC 2
PFE 0 :Links ok
FPC 3
PFE 0 :Links ok
FPC 4
PFE 0 :Links ok
FPC 7
PFE 0 :Links ok
FPC 8
PFE 0 :Links ok
FPC 10
PFE 0 :Links ok
Plane 4
Plane state: SPARE
FPC 0
PFE 0 :Links ok
FPC 1
PFE 0 :Links ok
FPC 2
PFE 0 :Links ok
FPC 3
PFE 0 :Links ok
FPC 4
PFE 0 :Links ok
FPC 7
PFE 0 :Links ok
FPC 8
PFE 0 :Links ok
FPC 10
PFE 0 :Links ok
Plane 5
Plane state: SPARE
FPC 0
PFE 0 :Links ok
FPC 1
PFE 0 :Links ok
FPC 2
PFE 0 :Links ok
FPC 3
PFE 0 :Links ok
FPC 4
PFE 0 :Links ok
FPC 7
PFE 0 :Links ok
FPC 8
PFE 0 :Links ok
FPC 10
PFE 0 :Links ok
List of Sample Output show chassis fabric plane-location (SRX5600 and SRX5800 devices with SRX5000
line SCB II (SRX5K-SCBE) and SRX5K-RE-1800X4) on page 367
Output Fields Table 42 on page 367 lists the output fields for the show chassis fabric plane-location
command. Output fields are listed in the approximate order in which they appear.
Sample Output
show chassis fabric plane-location
(SRX5600 and SRX5800 devices with SRX5000 line SCB II (SRX5K-SCBE) and SRX5K-RE-1800X4)
user@host> show chassis fabric plane-location
node0:
--------------------------------------------------------------------------
------------Fabric Plane Locations-------------
Plane 0 Control Board 0
Plane 1 Control Board 0
Plane 2 Control Board 1
Plane 3 Control Board 1
Plane 4 Control Board 2
Plane 5 Control Board 2
node1:
--------------------------------------------------------------------------
------------Fabric Plane Locations-------------
Plane 0 Control Board 0
Plane 1 Control Board 0
Plane 2 Control Board 1
Plane 3 Control Board 1
Plane 4 Control Board 2
Plane 5 Control Board 2
List of Sample Output show chassis fabric summary (SRX5600 and SRX5800 devices with SRX5000 line
SCB II (SRX5K-SCBE) and SRX5K-RE-1800X4) on page 370
Output Fields Table 43 on page 369 lists the output fields for the show chassis fabric summary command.
Output fields are listed in the approximate order in which they appear.
For information about link and destination errors, issue the show
chassis fabric fpcs command (see the example after the note below).
• Spare—SIB is redundant and will move to active state if one of
the working SIBs fails.
• None—No errors
• Link Errors—Fabric link errors were found on the SIB RX link.
• Cell drops—Fabric cell drops were found on the SIB ASIC.
• Link, Cell drops—Both link errors and cell drops were detected on
at least one of the FPC’s fabric links.
NOTE: The Errors column is empty only when the FPC or SIB is
offline.
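If the Errors column reports link errors or cell drops, you can drill down to the affected
FPC's fabric links with the command named above:
user@host> show chassis fabric fpcs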
Sample Output
show chassis fabric summary
(SRX5600 and SRX5800 devices with SRX5000 line SCB II (SRX5K-SCBE) and SRX5K-RE-1800X4)
user@host> show chassis fabric summary
node0:
--------------------------------------------------------------------------
Plane State Uptime
0 Online 14 minutes, 10 seconds
1 Online 14 minutes, 5 seconds
2 Online 14 minutes
3 Online 13 minutes, 55 seconds
node1:
--------------------------------------------------------------------------
Plane State Uptime
0 Online 14 minutes, 7 seconds
1 Online 14 minutes, 2 seconds
2 Online 13 minutes, 57 seconds
3 Online 13 minutes, 51 seconds
4 Spare 13 minutes, 46 seconds
5 Spare 13 minutes, 41 seconds
Release Information Command introduced in Junos OS Release 9.2. Command modified in Junos OS Release
9.2 to include the node option.
• models—(Optional) Display model numbers and part numbers for orderable FRUs.
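For example, to list the FRU model numbers and part numbers (compare the last sample
output for this command):
user@host> show chassis hardware models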
Output Fields Table 44 on page 372 lists the output fields for the show chassis hardware command.
Output fields are listed in the approximate order in which they appear.
Item Chassis component—Information about the backplane; power supplies; fan trays; Routing
Engine; each Physical Interface Module (PIM)—reported as FPC and PIC—and each fan,
blower, and impeller.
Serial Number Serial number of the chassis component. The serial number of the backplane is also the
serial number of the device chassis. Use this serial number when you need to contact
Juniper Networks Customer Support about the device chassis.
CLEI code Common Language Equipment Identifier code. This value is displayed only for hardware
components that use ID EEPROM format v2. This value is not displayed for components
that use ID EEPROM format v1.
EEPROM Version ID EEPROM version used by hardware component: 0x01 (version 1) or 0x02 (version 2).
• There are three SCB slots in SRX5800 devices. The third slot can be used for an
SCB or an FPC. When an SRX5K-SCB is used, the third SCB slot serves as an
FPC slot. SCB redundancy is provided in chassis cluster mode.
• With an SCB2, a third SCB is supported. If a third SCB is plugged in, it provides
intra-chassis fabric redundancy.
• The Ethernet switch in the SCB2 provides the Ethernet connectivity among all the
FPCs and the Routing Engine. The Routing Engine uses this connectivity to distribute
forwarding and routing tables to the FPCs. The FPCs use this connectivity to send
exception packets to the Routing Engine.
• Fabric connects all FPCs in the data plane. The Fabric Manager executes on the
Routing Engine and controls the fabric system in the chassis. Packet Forwarding
Engines on the FPC and fabric planes on the SCB are connected through HSL2
channels.
• SCB2 supports HSL2 with both 3.11 Gbps and 6.22 Gbps (SerDes) link speed and
various HSL2 modes. When an FPC is brought online, the link speed and HSL2 mode
are determined by the type of FPC.
Starting with Junos OS Release 15.1X49-D10, the SRX5K-SCB3 (SCB3) with enhanced
midplanes is introduced.
• Type of Flexible PIC Concentrator (FPC), Physical Interface Card (PIC), Modular
Interface Card (MIC), and PIM.
• IOCs
Starting with Junos OS Release 15.1X49-D10, the SRX5K-MPC3-100G10G (IOC3) and
the SRX5K-MPC3-40G10G (IOC3) are introduced.
• Two types of IOC3 MPCs, each with different built-in MICs, are available: the 24x10GE
+ 6x40GE MPC and the 2x100GE + 4x10GE MPC.
• IOC3 supports the SCB3 with both the SRX5000 line backplane and the enhanced
backplane.
• IOC3 works only with the SRX5000 line SCB2 and SCB3. If an SRX5000 line SCB is
detected, the IOC3 goes offline, an FPC misconfiguration alarm is raised, and a
system log message is generated.
• IOC3 interoperates with the SRX5K-SPC-4-15-320 (SPC2) and the SRX5K-MPC
(IOC2).
• The maximum power consumption for one IOC3 is 645W. An enhanced power
module must be used.
• The IOC3 does not support the following command to set a PIC to go offline or
online:
request chassis pic fpc-slot <fpc-slot> pic-slot <pic-slot> <offline | online>.
• IOC3 supports 240 Gbps of throughput with the enhanced SRX5000 line backplane.
• Chassis cluster functions the same as for the SRX5000 line IOC2.
• IOC3 supports intra-chassis and inter-chassis fabric redundancy mode.
• IOC3 supports ISSU and ISHU in chassis cluster mode.
• IOC3 supports intra-FPC and inter-FPC Express Path (previously known as services
offloading) with IPv4.
• IOC3 supports NAT for IPv4 and IPv6 in normal mode, and for IPv4 in Express Path
mode.
• Not all four PICs on the 24x10GE + 6x40GE MPC can be powered on at the same
time; a maximum of two PICs can be powered on simultaneously.
Use the set chassis fpc <slot> pic <pic> power off command to choose which PICs
to power off, as shown in the example that follows.
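For example, to leave only PIC 0 and PIC 1 powered on, you might power off the other
two PICs from configuration mode (the FPC slot number 2 is illustrative):
user@host# set chassis fpc 2 pic 2 power off
user@host# set chassis fpc 2 pic 3 power off
user@host# commit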
NOTE: The RE2 provides significantly better performance than the previously used
Routing Engine, even with a single core.
PEM 0 Rev 05 740-034724 QCS17460203K PS 4.1kW; 200-240V AC in
PEM 1 Rev 04 740-034724 QCS172302017 PS 4.1kW; 200-240V AC in
Routing Engine 0 REV 01 740-056658 9013040855 SRX5k RE-1800X4
Routing Engine 1
CB 0 REV 01 750-056587 CACG1424 SRX5k SCB II
CB 1 REV 01 750-056587 CACC9307 SRX5k SCB II
CB 2 REV 01 750-056587 CAAZ1128 SRX5k SCB II
FPC 0 REV 10 750-056758 CACS2667 SRX5k SPC II
CPU BUILTIN BUILTIN SRX5k DPC PPC
PIC 0 BUILTIN BUILTIN SPU Cp
PIC 1 BUILTIN BUILTIN SPU Flow
PIC 2 BUILTIN BUILTIN SPU Flow
PIC 3 BUILTIN BUILTIN SPU Flow
FPC 1 REV 18 750-054877 CACH4092 SRX5k SPC II
CPU BUILTIN BUILTIN SRX5k DPC PPC
PIC 0 BUILTIN BUILTIN SPU Flow
PIC 1 BUILTIN BUILTIN SPU Flow
PIC 2 BUILTIN BUILTIN SPU Flow
PIC 3 BUILTIN BUILTIN SPU Flow
FPC 2 REV 10 750-056758 CACV0038 SRX5k SPC II
CPU BUILTIN BUILTIN SRX5k DPC PPC
PIC 0 BUILTIN BUILTIN SPU Flow
PIC 1 BUILTIN BUILTIN SPU Flow
PIC 2 BUILTIN BUILTIN SPU Flow
PIC 3 BUILTIN BUILTIN SPU Flow
FPC 3 REV 10 750-043157 CACB6877 SRX5k IOC II
CPU REV 04 711-043360 CACH6074 SRX5k MPC PMB
node1:
--------------------------------------------------------------------------
Hardware inventory:
Item Version Part number Serial number Description
Chassis JN1235BC7AGA SRX5800
Midplane REV 01 710-024803 ACRC3244 SRX5800 Backplane
FPM Board REV 01 710-024632 CACA2108 Front Panel Display
PDM Rev 03 740-013110 QCS1739519B Power Distribution Module
PEM 0 Rev 04 740-034724 QCS17230201Z PS 4.1kW; 200-240V AC in
PEM 1 Rev 05 740-034724 QCS174502014 PS 4.1kW; 200-240V AC in
Routing Engine 0 REV 01 740-056658 9009153221 SRX5k RE-1800X4
Routing Engine 1
CB 0 REV 01 750-056587 CACC9541 SRX5k SCB II
CB 1 REV 01 750-056587 CACG1447 SRX5k SCB II
CB 2 REV 01 750-056587 CACH9058 SRX5k SCB II
FPC 0 REV 18 750-054877 CACH4004 SRX5k SPC II
CPU BUILTIN BUILTIN SRX5k DPC PPC
PIC 0 BUILTIN BUILTIN SPU Cp
PIC 1 BUILTIN BUILTIN SPU Flow
PIC 2 BUILTIN BUILTIN SPU Flow
PIC 3 BUILTIN BUILTIN SPU Flow
FPC 1 REV 18 750-054877 CACH4082 SRX5k SPC II
PEM 1 Rev 03 740-034701 QCS13090904T PS 1.4-2.6kW; 90-264V AC in
Routing Engine 0 REV 01 740-056658 9009196496 SRX5k RE-1800X4
CB 0 REV 01 750-062257 CAEC2501 SRX5k SCB3
FPC 0 REV 10 750-056758 CADC8067 SRX5k SPC II
CPU BUILTIN BUILTIN SRX5k DPC PPC
PIC 0 BUILTIN BUILTIN SPU Cp
PIC 1 BUILTIN BUILTIN SPU Flow
PIC 2 BUILTIN BUILTIN SPU Flow
PIC 3 BUILTIN BUILTIN SPU Flow
FPC 2 REV 01 750-062243 CAEE5924 SRX5k IOC3 24XGE+6XLG
CPU REV 01 711-062244 CAEB4890 SRX5k IOC3 PMB
PIC 0 BUILTIN BUILTIN 12x 10GE SFP+
PIC 1 BUILTIN BUILTIN 12x 10GE SFP+
PIC 2 BUILTIN BUILTIN 3x 40GE QSFP+
Xcvr 0 REV 01 740-038623 MOC13156230449 QSFP+-40G-CU1M
Xcvr 2 REV 01 740-038623 MOC13156230449 QSFP+-40G-CU1M
PIC 3 BUILTIN BUILTIN 3x 40GE QSFP+
WAN MEZZ REV 01 750-062682 CAEE5817 24x 10GE SFP+ Mezz
FPC 4 REV 11 750-043157 CACY1595 SRX5k IOC II
CPU REV 04 711-043360 CACZ8879 SRX5k MPC PMB
MIC 1 REV 04 750-049488 CACM6062 10x 10GE SFP+
PIC 2 BUILTIN BUILTIN 10x 10GE SFP+
Xcvr 7 REV 01 740-021308 AD1439301TU SFP+-10G-SR
Xcvr 8 REV 01 740-021308 AD1439301SD SFP+-10G-SR
Xcvr 9 REV 01 740-021308 AD1439301TS SFP+-10G-SR
FPC 5 REV 05 750-044175 ZZ1371 SRX5k SPC II
CPU BUILTIN BUILTIN SRX5k DPC PPC
PIC 0 BUILTIN BUILTIN SPU Flow
PIC 1 BUILTIN BUILTIN SPU Flow
PIC 2 BUILTIN BUILTIN SPU Flow
PIC 3 BUILTIN BUILTIN SPU Flow
Fan Tray Enhanced Fan Tray
node1:
--------------------------------------------------------------------------
Hardware inventory:
Item Version Part number Serial number Description
Chassis JN124FEC0AGB SRX5600
Midplane REV 01 760-063936 ACRE2946 Enhanced SRX5600 Midplane
FPM Board test 710-017254 test Front Panel Display
PEM 0 Rev 01 740-038514 QCS114111003 DC 2.6kW Power Entry Module
PEM 1 Rev 01 740-038514 QCS12031100J DC 2.6kW Power Entry Module
Routing Engine 0 REV 01 740-056658 9009186342 SRX5k RE-1800X4
CB 0 REV 01 750-062257 CAEB8178 SRX5k SCB3
FPC 0 REV 07 750-044175 CAAD0769 SRX5k SPC II
CPU BUILTIN BUILTIN SRX5k DPC PPC
PIC 0 BUILTIN BUILTIN SPU Cp
PIC 1 BUILTIN BUILTIN SPU Flow
PIC 2 BUILTIN BUILTIN SPU Flow
PIC 3 BUILTIN BUILTIN SPU Flow
FPC 4 REV 11 750-043157 CACY1592 SRX5k IOC II
node1:
--------------------------------------------------------------------------
Hardware inventory:
Item Version Part number Serial number Description
Chassis JN1235BC7AGA SRX5800
Midplane REV 01 710-024803 ACRC3244 SRX5800 Backplane
FPM Board REV 01 710-024632 CACA2108 Front Panel Display
PDM Rev 03 740-013110 QCS1739519B Power Distribution Module
PEM 0 Rev 04 740-034724 QCS17230201Z PS 4.1kW; 200-240V AC in
PEM 1 Rev 05 740-034724 QCS174502014 PS 4.1kW; 200-240V AC in
Routing Engine 0 REV 01 740-056658 9009153221 SRX5k RE-1800X4
ad0 3998 MB Virtium - TuffDrive VCF P1T0200298450703 72 Compact Flash
ad1 114304 MB VSFA18PI128G-KC 32779-073 Disk 1
usb0 (addr 1) EHCI root hub 0 Intel uhub0
usb0 (addr 2) product 0x0020 32 vendor 0x8087 uhub1
DIMM 0 VL31B5263F-F8SD DIE REV-0 PCB REV-0 MFR ID-ce80
DIMM 1 VL31B5263F-F8SD DIE REV-0 PCB REV-0 MFR ID-ce80
DIMM 2 VL31B5263F-F8SD DIE REV-0 PCB REV-0 MFR ID-ce80
node1:
--------------------------------------------------------------------------
Hardware inventory:
Item Version Part number Serial number Description
Chassis JN124FEC0AGB SRX5600
Address 0x50: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Address 0x60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Address 0x70: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Midplane REV 01 710-024803 ACRC3244 SRX5800 Backplane
Jedec Code: 0x7fb0 EEPROM Version: 0x01
P/N: 710-024803 S/N: S/N ACRC3244
Assembly ID: 0x091a Assembly Version: 01.01
Date: 02-26-2014 Assembly Flags: 0x00
Version: REV 01
ID: SRX5800 Backplane FRU Model Number: SRX5800-BP-A
Board Information Record:
Address 0x00: ad 01 08 00 4c 96 14 d3 28 00 00 ff ff ff ff ff
I2C Hex Data:
Address 0x00: 7f b0 01 ff 09 1a 01 01 52 45 56 20 30 31 00 00
Address 0x10: 00 00 00 00 37 31 30 2d 30 32 34 38 30 33 00 00
Address 0x20: 53 2f 4e 20 41 43 52 43 33 32 34 34 00 1a 02 07
Address 0x30: de ff ff ff ad 01 08 00 4c 96 14 d3 28 00 00 ff
Address 0x40: ff ff ff ff 01 00 00 00 00 00 00 00 00 00 00 53
Address 0x50: 52 58 35 38 30 30 2d 42 50 2d 41 00 00 00 00 00
Address 0x60: 00 00 00 00 00 00 ff ff ff ff ff ff ff ff ff ff
Address 0x70: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
FPM Board REV 01 710-024632 CACA2108 Front Panel Display
Jedec Code: 0x7fb0 EEPROM Version: 0x01
P/N: 710-024632 S/N: S/N CACA2108
Assembly ID: 0x096f Assembly Version: 01.01
Date: 02-05-2014 Assembly Flags: 0x00
Version: REV 01
ID: Front Panel Display FRU Model Number: SRX5800-CRAFT-A
Board Information Record:
Address 0x00: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
I2C Hex Data:
Address 0x00: 7f b0 01 ff 09 6f 01 01 52 45 56 20 30 31 00 00
Address 0x10: 00 00 00 00 37 31 30 2d 30 32 34 36 33 32 00 00
Address 0x20: 53 2f 4e 20 43 41 43 41 32 31 30 38 00 05 02 07
Address 0x30: de ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
Address 0x40: ff ff ff ff 01 00 00 00 00 00 00 00 00 00 00 53
Address 0x50: 52 58 35 38 30 30 2d 43 52 41 46 54 2d 41 00 00
Address 0x60: 00 00 00 00 00 00 ff ff ff ff ff ff ff ff ff ff
Address 0x70: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
PDM Rev 03 740-013110 QCS1739519B Power Distribution Module
Routing Engine 1
CB 0 REV 01 750-056587 CACC9541 SRX5k SCB II
Jedec Code: 0x7fb0 EEPROM Version: 0x02
P/N: 750-056587 S/N: S/N CACC9541
Assembly ID: 0x0c19 Assembly Version: 01.01
Date: 03-07-2014 Assembly Flags: 0x00
Version: REV 01 CLEI Code: PROTOXCLEI
ID: SRX5k SCB II FRU Model Number: SRX5K-SCBE
Board Information Record:
Address 0x00: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
I2C Hex Data:
Address 0x00: 7f b0 02 fe 0c 19 01 01 52 45 56 20 30 31 00 00
Address 0x10: 00 00 00 00 37 35 30 2d 30 35 36 35 38 37 00 00
Address 0x20: 53 2f 4e 20 43 41 43 43 39 35 34 31 00 07 03 07
Address 0x30: de ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
Address 0x40: ff ff ff ff 01 50 52 4f 54 4f 58 43 4c 45 49 53
Address 0x50: 52 58 35 4b 2d 53 43 42 45 00 00 00 00 00 00 00
Address 0x60: 00 00 00 00 00 00 41 00 00 ff ff ff ff ff ff ff
Address 0x70: ff ff ff 08 ff ff ff ff ff ff ff ff ff ff ff ff
CB 1 REV 01 750-056587 CACG1447 SRX5k SCB II
Jedec Code: 0x7fb0 EEPROM Version: 0x02
P/N: 750-056587 S/N: S/N CACG1447
Assembly ID: 0x0c19 Assembly Version: 01.01
Date: 03-07-2014 Assembly Flags: 0x00
Version: REV 01 CLEI Code: PROTOXCLEI
ID: SRX5k SCB II FRU Model Number: SRX5K-SCBE
Board Information Record:
Address 0x00: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
I2C Hex Data:
Address 0x00: 7f b0 02 fe 0c 19 01 01 52 45 56 20 30 31 00 00
Address 0x10: 00 00 00 00 37 35 30 2d 30 35 36 35 38 37 00 00
Address 0x20: 53 2f 4e 20 43 41 43 47 31 34 34 37 00 07 03 07
Address 0x30: de ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
Address 0x40: ff ff ff ff 01 50 52 4f 54 4f 58 43 4c 45 49 53
Address 0x50: 52 58 35 4b 2d 53 43 42 45 00 00 00 00 00 00 00
Address 0x60: 00 00 00 00 00 00 41 00 00 ff ff ff ff ff ff ff
Address 0x70: ff ff ff 08 ff ff ff ff ff ff ff ff ff ff ff ff
CB 2 REV 01 750-056587 CACH9058 SRX5k SCB II
Jedec Code: 0x7fb0 EEPROM Version: 0x02
P/N: 750-056587 S/N: S/N CACH9058
Assembly ID: 0x0c19 Assembly Version: 01.01
Date: 03-06-2014 Assembly Flags: 0x00
Version: REV 01 CLEI Code: PROTOXCLEI
ID: SRX5k SCB II FRU Model Number: SRX5K-SCBE
Board Information Record:
Address 0x00: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
I2C Hex Data:
Address 0x00: 7f b0 02 fe 0c 19 01 01 52 45 56 20 30 31 00 00
Address 0x10: 00 00 00 00 37 35 30 2d 30 35 36 35 38 37 00 00
Address 0x20: 53 2f 4e 20 43 41 43 48 39 30 35 38 00 06 03 07
Address 0x30: de ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
Address 0x40: ff ff ff ff 01 50 52 4f 54 4f 58 43 4c 45 49 53
Address 0x50: 52 58 35 4b 2d 53 43 42 45 00 00 00 00 00 00 00
Address 0x60: 00 00 00 00 00 00 41 00 00 ff ff ff ff ff ff ff
Address 0x70: ff ff ff 08 ff ff ff ff ff ff ff ff ff ff ff ff
node1:
--------------------------------------------------------------------------
Hardware inventory:
Item Version Part number Serial number Description
Chassis JN124FEC0AGB SRX5600
Jedec Code: 0x7fb0 EEPROM Version: 0x02
S/N: JN124FEC0AGB
Assembly ID: 0x051b Assembly Version: 00.00
Date: 00-00-0000 Assembly Flags: 0x08
ID: SRX5600
Board Information Record:
Address 0x00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
I2C Hex Data:
Address 0x00: 7f b0 02 ff 05 1b 00 00 00 00 00 00 00 00 00 00
Address 0x10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Address 0x20: 4a 4e 31 32 34 46 45 43 30 41 47 42 08 00 00 00
Address 0x30: 00 00 00 ff 00 00 00 00 00 00 00 00 00 00 00 00
Address 0x40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Address 0x50: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Address 0x60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Address 0x70: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Midplane REV 01 760-063936 ACRE2946 Enhanced SRX5600 Midplane
Jedec Code: 0x7fb0 EEPROM Version: 0x02
P/N: 760-063936 S/N: ACRE2946
Assembly ID: 0x0914 Assembly Version: 01.01
Date: 03-19-2015 Assembly Flags: 0x08
Version: REV 01 CLEI Code: CLEI-CODE
ID: SRX5600 Midplane FRU Model Number: SRX5600X-CHAS
Board Information Record:
Address 0x00: ad 01 08 00 88 a2 5e 12 68 00 ff ff ff ff ff ff
I2C Hex Data:
Address 0x00: 7f b0 02 ff 09 14 01 01 52 45 56 20 30 31 00 00
Address 0x10: 00 00 00 00 37 36 30 2d 30 36 33 39 33 36 00 00
Address 0x20: 53 2f 4e 20 41 43 52 45 32 39 34 36 08 13 03 07
Address 0x30: df ff ff ff ad 01 08 00 88 a2 5e 12 68 00 ff ff
Address 0x40: ff ff ff ff 01 43 4c 45 49 2d 43 4f 44 45 20 53
Address 0x50: 52 58 35 36 30 30 58 2d 43 48 41 53 20 20 20 20
Address 0x60: 20 20 20 20 20 20 31 30 31 ff ff ff ff ff ff ff
Address 0x70: ff ff ff ba ff ff ff ff ff ff ff ff ff ff ff ff
FPM Board test 710-017254 test Front Panel Display
Jedec Code: 0x7fb0 EEPROM Version: 0x02
P/N: 710-017254 S/N: test
Assembly ID: 0x01ff Assembly Version: 01.00
Date: 06-18-2007 Assembly Flags: 0x00
Version: test
ID: Front Panel Display
Board Information Record:
Address 0x00: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
I2C Hex Data:
Address 0x00: 7f b0 02 ff 01 ff 01 00 74 65 73 74 00 00 00 00
Address 0x10: 00 00 00 00 37 31 30 2d 30 31 37 32 35 34 00 00
Address 0x20: 74 65 73 74 00 00 00 00 00 00 00 00 00 12 06 07
Address 0x30: d7 ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
Address 0x40: ff ff ff ff 00 00 00 00 00 00 00 00 00 00 00 00
Address 0x50: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Address 0x60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Address 0x70: 00 00 00 00 ff ff ff ff ff ff ff ff ff ff ff ff
PEM 0 Rev 01 740-038514 QCS114111003 DC 2.6kW Power Entry Module
Address 0x70: 00 00 00 00 c0 02 ab 1c 00 00 00 00 0a b5 00 00
WAN MEZZ REV 01 750-062682 CAEA4788 24x 10GE SFP+ Mezz
Jedec Code: 0x7fb0 EEPROM Version: 0x01
P/N: 750-062682 S/N: CAEA4788
Assembly ID: 0x0c76 Assembly Version: 01.01
Date: 04-28-2015 Assembly Flags: 0x00
Version: REV 01
ID: 24x 10GE SFP+ Mezz
Board Information Record:
Address 0x00: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
I2C Hex Data:
Address 0x00: 7f b0 01 ff 0c 76 01 01 52 45 56 20 30 31 00 00
Address 0x10: 00 00 00 00 37 35 30 2d 30 36 32 36 38 32 00 00
Address 0x20: 53 2f 4e 20 43 41 45 41 34 37 38 38 00 1c 04 07
Address 0x30: df ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
Address 0x40: ff ff ff ff 00 ff ff ff ff ff ff ff ff ff ff ff
Address 0x50: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
Address 0x60: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
Address 0x70: ff ff ff ff 00 00 00 00 00 00 00 00 00 00 00 00
FPC 4 REV 11 750-043157 CACY1592 SRX5k IOC II
Jedec Code: 0x7fb0 EEPROM Version: 0x02
P/N: 750-043157 S/N: CACY1592
Assembly ID: 0x0bd1 Assembly Version: 04.11
Date: 07-30-2014 Assembly Flags: 0x00
Version: REV 11 CLEI Code: COUIBCWBAA
ID: SRX5k IOC II FRU Model Number: SRX5K-MPC
Board Information Record:
Address 0x00: ff ff ff ff ff ff ff ff ff ff ae 01 f2 06 00 ff
I2C Hex Data:
Address 0x00: 7f b0 02 ff 0b d1 04 0b 52 45 56 20 31 31 00 00
Address 0x10: 00 00 00 00 37 35 30 2d 30 34 33 31 35 37 00 00
Address 0x20: 53 2f 4e 20 43 41 43 59 31 35 39 32 00 1e 07 07
Address 0x30: de ff ff ff ff ff ff ff ff ff ff ff ff ff ae 01
Address 0x40: f2 06 00 ff 01 43 4f 55 49 42 43 57 42 41 41 53
Address 0x50: 52 58 35 4b 2d 4d 50 43 00 00 00 00 00 00 00 00
Address 0x60: 00 00 00 00 00 00 41 00 00 ff ff ff ff ff ff ff
Address 0x70: ff ff ff 92 ff ff ff ff ff ff ff ff ff ff ff ff
CPU REV 04 711-043360 CACZ8831 SRX5k MPC PMB
Jedec Code: 0x7fb0 EEPROM Version: 0x01
P/N: 711-043360 S/N: CACZ8831
Assembly ID: 0x0bd2 Assembly Version: 01.04
Date: 07-28-2014 Assembly Flags: 0x00
Version: REV 04
ID: SRX5k MPC PMB
Board Information Record:
Address 0x00: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
I2C Hex Data:
Address 0x00: 7f b0 01 ff 0b d2 01 04 52 45 56 20 30 34 00 00
Address 0x10: 00 00 00 00 37 31 31 2d 30 34 33 33 36 30 00 00
Address 0x20: 53 2f 4e 20 43 41 43 5a 38 38 33 31 00 1c 07 07
Address 0x30: de ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
Address 0x40: ff ff ff ff 00 ff ff ff ff ff ff ff ff ff ff ff
Address 0x50: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
Address 0x60: ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
Address 0x70: ff ff ff ff 00 00 00 00 49 fa 60 10 40 05 76 5c
MIC 1 REV 04 750-049488 CACN0239 10x 10GE SFP+
Jedec Code: 0x7fb0 EEPROM Version: 0x02
P/N: 750-049488 S/N: CACN0239
Assembly ID: 0x0a88 Assembly Version: 02.04
Date: 02-26-2014 Assembly Flags: 0x00
Version: REV 04 CLEI Code: COUIBCXBAA
Address 0x00: 00 00 00 00 0a 21 00 00 00 00 00 00 00 00 00 00
Address 0x10: 00 00 00 00 42 55 49 4c 54 49 4e 00 41 73 73 65
Address 0x20: 42 55 49 4c 54 49 4e 00 41 73 73 65 00 00 00 00
Address 0x30: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Address 0x40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Address 0x50: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Address 0x60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Address 0x70: 00 00 00 00 de ad be ef 46 13 5b d0 40 43 43 c0
PIC 2 BUILTIN BUILTIN SPU Flow
Jedec Code: 0x0000 EEPROM Version: 0x00
P/N: BUILTIN S/N: BUILTIN
Assembly ID: 0x0a21 Assembly Version: 00.00
Date: 00-00-0000 Assembly Flags: 0x00
ID: SPU Flow
Board Information Record:
Address 0x00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
I2C Hex Data:
Address 0x00: 00 00 00 00 0a 21 00 00 00 00 00 00 00 00 00 00
Address 0x10: 00 00 00 00 42 55 49 4c 54 49 4e 00 41 73 73 65
Address 0x20: 42 55 49 4c 54 49 4e 00 41 73 73 65 00 00 00 00
Address 0x30: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Address 0x40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Address 0x50: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Address 0x60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Address 0x70: 00 00 00 00 de ad be ef 46 0c 66 40 40 43 43 c0
PIC 3 BUILTIN BUILTIN SPU Flow
Jedec Code: 0x0000 EEPROM Version: 0x00
P/N: BUILTIN S/N: BUILTIN
Assembly ID: 0x0a21 Assembly Version: 00.00
Date: 00-00-0000 Assembly Flags: 0x00
ID: SPU Flow
Board Information Record:
Address 0x00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
I2C Hex Data:
Address 0x00: 00 00 00 00 0a 21 00 00 00 00 00 00 00 00 00 00
Address 0x10: 00 00 00 00 42 55 49 4c 54 49 4e 00 41 73 73 65
Address 0x20: 42 55 49 4c 54 49 4e 00 41 73 73 65 00 00 00 00
Address 0x30: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Address 0x40: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Address 0x50: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Address 0x60: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Address 0x70: 00 00 00 00 de ad be ef 46 0e db 00 40 43 43 c0
Fan Tray Enhanced Fan Tray
FRU Model Number: SRX5600-HC-FAN
node1:
--------------------------------------------------------------------------
Hardware inventory:
Item Version Part number CLEI code FRU model number
Midplane REV 01 760-063936 CLEI-CODE SRX5600X-CHAS
PEM 0 Rev 01 740-038514 SRX5600-PWR-2400-DC-S
PEM 1 Rev 01 740-038514 SRX5600-PWR-2400-DC-S
Routing Engine 0 REV 01 740-056658 COUCATTBAA SRX5K-RE-1800X4
CB 0 REV 01 750-062257 CLEI-CODE SRX5K-SCB3
FPC 0 REV 07 750-044175 COUCASFBAA SRX5K-SPC-4-15-320
CPU BUILTIN
FPC 4 REV 11 750-043157 COUIBCWBAA SRX5K-MPC
MIC 1 REV 04 750-049488 COUIBCXBAA SRX-MIC-10XG-SFPP
List of Sample Output show chassis routing-engine (Sample 1 - SRX550) on page 400
show chassis routing-engine (Sample 2 - vSRX) on page 400
Output Fields Table 45 on page 399 lists the output fields for the show chassis routing-engine command.
Output fields are listed in the approximate order in which they appear.
CPU utilization Current CPU utilization statistics on the control plane core.
User Current CPU utilization in user mode on the control plane core.
Background Current CPU utilization in nice mode on the control plane core.
Kernel Current CPU utilization in kernel mode on the control plane core.
Interrupt Current CPU utilization in interrupt mode on the control plane core.
Idle Current CPU utilization in idle mode on the control plane core.
Uptime Length of time the Routing Engine has been up (running) since the last start.
Last reboot reason Reason for the last reboot of the Routing Engine.
Load averages The average number of threads waiting in the run queue or currently executing over 1-,
5-, and 15-minute periods.
Sample Output
show chassis routing-engine (Sample 1 - SRX550)
user@host> show chassis routing-engine
Routing Engine status:
Temperature 38 degrees C / 100 degrees F
CPU temperature 36 degrees C / 96 degrees F
Total memory 512 MB Max 435 MB used ( 85 percent)
Control plane memory 344 MB Max 296 MB used ( 86 percent)
Data plane memory 168 MB Max 138 MB used ( 82 percent)
CPU utilization:
User 8 percent
Background 0 percent
Kernel 4 percent
Interrupt 0 percent
Idle 88 percent
Model RE-SRX5500-LOWMEM
Serial ID AAAP8652
Start time 2009-09-21 00:04:54 PDT
Uptime 52 minutes, 47 seconds
Last reboot reason 0x200:chassis control reset
Load averages: 1 minute 5 minute 15 minute
0.12 0.15 0.10
Sample Output
show chassis routing-engine (Sample 2 - vSRX)
user@host> show chassis routing-engine
Routing Engine status:
Total memory 1024 MB Max 358 MB used ( 35 percent)
Control plane memory 1024 MB Max 358 MB used ( 35 percent)
5 sec CPU utilization:
User 2 percent
Background 0 percent
Kernel 4 percent
Interrupt 6 percent
Idle 88 percent
Model VSRX RE
Start time 2015-03-03 07:04:18 UTC
Uptime 2 days, 11 hours, 51 minutes, 11 seconds
Last reboot reason Router rebooted after a normal shutdown.
Load averages: 1 minute 5 minute 15 minute
0.07 0.04 0.06
Description Display tracing options for the chassis cluster redundancy process.
List of Sample Output show configuration chassis cluster traceoptions on page 402
Output Fields Table 46 on page 402 lists the output fields for the show configuration chassis cluster
traceoptions command. Output fields are listed in the approximate order in which they
appear.
file Name of the file that receives the output of the tracing operation.
Sample Output
show configuration chassis cluster traceoptions
user@host> show configuration chassis cluster traceoptions
file chassis size 10k files 300;
level all;
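The configuration shown above could be produced with set commands along these lines
(a minimal sketch mirroring the displayed values):
user@host# set chassis cluster traceoptions file chassis size 10k files 300
user@host# set chassis cluster traceoptions level all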
Index
• Index on page 405
configuring
  conditional route advertising...................................159
  interface monitoring............................................106
  redundant Ethernet interfaces....................................79
control link.........................................................65
control-link-recovery statement.....................................272
conventions
  text and syntax..................................................xv
creating an SRX Series chassis cluster...............................36
curly braces, in configuration statements...........................xvi
customer support...................................................xvii
  contacting JTAC................................................xvii

D
data
  fabric (dual)...................................................151
  forwarding.......................................................59
  plane............................................................57
device-count statement..............................................272
disabling
  chassis clusters................................................259
documentation
  comments on....................................................xvii

E
environmental information
  chassis, displaying.............................................353

F
fabric data link (dual).............................................151
fabric data-link failure.............................................59
fabric monitoring....................................................59
fabric-options statement............................................274
font conventions.....................................................xv
FPC
  operation of, controlling.......................................318

G
global-threshold statement..........................................276
global-weight statement.............................................277
gratuitous-arp-count statement......................................278

H
hardware setup, chassis cluster......................................43
heartbeat-interval statement........................................279
heartbeat-threshold statement.......................................280
hold-down-interval statement........................................281

I
in-band cluster upgrade
  aborting........................................................257
  chassis cluster.................................................255
  using FTP Server................................................256
  using local build...............................................256
Index is the statement name. Example - action statement.............287
initiating manual redundancy group failover.........................146
interface monitoring configuration..................................106
interface statement.................................................282
interface-monitor statement.........................................283
interfaces
  redundant Ethernet..............................................173
interfaces on SRX Series devices
  management.......................................................53
  node.............................................................47
ip-monitoring statement.............................................284

L
LACP (Link Aggregation Control Protocol)
  configuring on chassis clusters.................................175
  understanding in chassis cluster mode...........................173
lacp statement......................................................285
link-protection statement...........................................286

M
management interfaces................................................53
manuals
  comments on....................................................xvii
member-interfaces statement.........................................286

N
node interfaces on SRX Series devices................................47
node statement
  (Cluster)................................269, 271, 273, 275, 288
  (Redundancy-Group)..............................................288

P
parentheses, in syntax descriptions.................................xvi
preempt statement...................................................289
priority statement..................................................289

R
redundancy group
  initiating manual failover......................................146