Junos Node Slicing
Modified: 2018-12-16
Juniper Networks, the Juniper Networks logo, Juniper, and Junos are registered trademarks of Juniper Networks, Inc. in the United States
and other countries. All other trademarks, service marks, registered marks, or registered service marks are the property of their respective
owners.
Juniper Networks assumes no responsibility for any inaccuracies in this document. Juniper Networks reserves the right to change, modify,
transfer, or otherwise revise this publication without notice.
The information in this document is current as of the date on the title page.
Juniper Networks hardware and software products are Year 2000 compliant. Junos OS has no known time-related limitations through the
year 2038. However, the NTP application is known to have some difficulty in the year 2036.
The Juniper Networks product that is the subject of this technical documentation consists of (or is intended for use with) Juniper Networks
software. Use of such software is subject to the terms and conditions of the End User License Agreement (“EULA”) posted at
https://support.juniper.net/support/eula/. By downloading, installing or using such software, you agree to the terms and conditions of
that EULA.
peer-gnf  83
description (AF)  84
Chapter 5  Operational Commands for BSYS  85
show chassis network-slices  86
show chassis fpc pic-status (Node Slicing)  89
show chassis fpc (Node Slicing)  91
show chassis adc (Node Slicing)  93
show chassis network-slices fpcs  95
show system anomalies gnf-id (BSYS)  96
Chapter 6  Operational Commands for GNF  99
show chassis gnf  100
show chassis gnf  102
show chassis hardware (GNF)  104
show chassis fpc (GNF)  107
show chassis fpc pic-status (GNF)  109
show chassis adc (GNF)  110
show interfaces (Abstracted Fabric)  112
show system anomalies (GNF)  127
Chapter 7  Configuration Statements for JDM  129
virtual-network-functions  130
server  131
interfaces (Junos Node Slicing)  132
routing-options (Junos Node Slicing)  133
system login (Junos Node Slicing)  134
root-login (JDM)  135
Chapter 8  Operational Commands for JDM  137
Generic Guidelines for Using JDM Server Commands  137
clear log (JDM)  139
monitor list (JDM)  140
monitor start (JDM)  141
monitor stop (JDM)  143
request server authenticate-peer-server  144
request virtual-network-functions  145
show virtual-network-functions  146
show version vnf  150
show version (JDM)  152
show system cpu (JDM)  154
show system storage (JDM)  156
show system memory (JDM)  158
show system network (JDM)  160
restart (JDM)  164
If the information in the latest release notes differs from the information in the
documentation, follow the product Release Notes.
Juniper Networks Books publishes books by Juniper Networks engineers and subject
matter experts. These books go beyond the technical documentation to explore the
nuances of network architecture, deployment, and administration. The current list can
be viewed at https://www.juniper.net/books.
If you want to use the examples in this manual, you can use the load merge or the load
merge relative command. These commands cause the software to merge the incoming
configuration into the current candidate configuration. The example does not become
active until you commit the candidate configuration.
If the example configuration contains the top level of the hierarchy (or multiple
hierarchies), the example is a full example. In this case, use the load merge command.
If the example configuration does not start at the top level of the hierarchy, the example
is a snippet. In this case, use the load merge relative command. These procedures are
described in the following sections.
1. From the HTML or PDF version of the manual, copy a configuration example into a
text file, save the file with a name, and copy the file to a directory on your routing
platform.
For example, copy the following configuration to a file and name the file ex-script.conf.
Copy the ex-script.conf file to the /var/tmp directory on your routing platform.
system {
    scripts {
        commit {
            file ex-script.xsl;
        }
    }
}
interfaces {
    fxp0 {
        disable;
        unit 0 {
            family inet {
                address 10.0.0.1/24;
            }
        }
    }
}
2. Merge the contents of the file into your routing platform configuration by issuing the
load merge configuration mode command:
[edit]
user@host# load merge /var/tmp/ex-script.conf
load complete
Merging a Snippet
To merge a snippet, follow these steps:
1. From the HTML or PDF version of the manual, copy a configuration snippet into a text
file, save the file with a name, and copy the file to a directory on your routing platform.
For example, copy the following snippet to a file and name the file
ex-script-snippet.conf. Copy the ex-script-snippet.conf file to the /var/tmp directory
on your routing platform.
commit {
    file ex-script-snippet.xsl;
}
2. Move to the hierarchy level that is relevant for this snippet by issuing the following
configuration mode command:
[edit]
user@host# edit system scripts
[edit system scripts]
3. Merge the contents of the file into your routing platform configuration by issuing the
load merge relative configuration mode command:
[edit system scripts]
user@host# load merge relative /var/tmp/ex-script-snippet.conf
load complete
For more information about the load command, see CLI Explorer.
Documentation Conventions
Caution: Indicates a situation that might result in loss of data or hardware damage.
Laser warning: Alerts you to the risk of personal injury from a laser.
Table 2 on page xiv defines the text and syntax conventions used in this guide.
Bold text like this: Represents text that you type. For example, to enter configuration
mode, type the configure command:
user@host> configure
Italic text like this: Introduces or emphasizes important new terms, identifies guide
names, and identifies RFC and Internet draft titles. Examples: A policy term is a named
structure that defines match conditions and actions; Junos OS CLI User Guide; RFC 1997,
BGP Communities Attribute.
Italic text like this: Represents variables (options for which you substitute a value)
in commands or configuration statements. For example, configure the machine’s domain
name:
[edit]
root@# set system domain-name domain-name
Text like this: Represents names of configuration statements, commands, files, and
directories; configuration hierarchy levels; or labels on routing platform components.
Examples: To configure a stub area, include the stub statement at the [edit protocols
ospf area area-id] hierarchy level; the console port is labeled CONSOLE.
< > (angle brackets): Encloses optional keywords or variables. Example:
stub <default-metric metric>;
# (pound sign): Indicates a comment specified on the same line as the configuration
statement to which it applies. Example: rsvp { # Required for dynamic MPLS only
[ ] (square brackets): Encloses a variable for which you can substitute one or more
values. Example: community name members [ community-ids ]
GUI Conventions
Bold text like this: Represents graphical user interface (GUI) items you click or select.
Examples: In the Logical Interfaces box, select All Interfaces; to cancel the
configuration, click Cancel.
> (bold right angle bracket): Separates levels in a hierarchy of menu selections.
Example: In the configuration editor hierarchy, select Protocols>Ospf.
Documentation Feedback
We encourage you to provide feedback so that we can improve our documentation. You
can use either of the following methods:
• Online feedback system—Click TechLibrary Feedback, on the lower right of any page
on the Juniper Networks TechLibrary site, and do one of the following:
• Click the thumbs-up icon if the information on the page was helpful to you.
• Click the thumbs-down icon if the information on the page was not helpful to you
or if you have suggestions for improvement, and use the pop-up form to provide
feedback.
Technical product support is available through the Juniper Networks Technical Assistance
Center (JTAC). If you are a customer with an active J-Care or Partner Support Service
support contract, or are covered under warranty, and need post-sales technical support,
you can access our tools and resources online or open a case with JTAC.
• JTAC hours of operation—The JTAC centers have resources available 24 hours a day,
7 days a week, 365 days a year.
• Find solutions and answer questions using our Knowledge Base: https://kb.juniper.net/
To verify service entitlement by product serial number, use our Serial Number Entitlement
(SNE) Tool: https://entitlementsearch.juniper.net/entitlementsearch/
Before setting up Junos Node Slicing, it is important to understand the concept of Junos
Node Slicing and its various components.
Using Junos Node Slicing, you can create multiple partitions in a single physical MX Series
router. These partitions are referred to as guest network functions (GNFs). Each GNF
behaves as an independent router, with its own dedicated control plane, data plane, and
management plane. This enables you to run multiple services on a single converged MX
Series router, while still maintaining operational isolation between them. You can leverage
the same physical device to create parallel partitions that do not share the control plane
or the forwarding plane, but only share the same chassis, space, and power.
You can also send traffic between GNFs through the switch fabric by using an Abstracted
Fabric (AF) interface, a pseudo interface that behaves as a first class Ethernet interface.
An AF interface facilitates routing control, data, and management traffic between GNFs.
Junos Node Slicing supports multi-version software compatibility, thereby allowing the
GNFs to be independently upgraded.
• Reduced time-to-market for new services and capabilities—Each GNF can operate
on a different Junos software version. This advantage enables companies to evolve
each GNF at its own pace. If a new service or a feature needs to be deployed on a
certain GNF, and it requires a new software release, only the GNF involved requires an
update. Additionally, the increased agility of Junos Node Slicing enables service
providers and enterprises to introduce a highly flexible Everything-as-a-Service business
model and respond rapidly to ever-changing market conditions.
The MX Series router functions as the base system (BSYS). The BSYS owns all the
physical components of the router, including the line cards and the switching fabric. The
BSYS assigns line cards to GNFs.
The Juniper Device Manager (JDM) software orchestrates the GNF VMs. In JDM, a GNF
VM is referred to as a virtual network function (VNF). A GNF thus comprises a VNF and
a set of line cards.
JDM and VNFs are hosted on a pair of external industry standard x86 servers.
Through configuration at the BSYS, you can assign line cards of the chassis to different
GNFs. Figure 1 on page 19 shows three GNFs with their dedicated line cards running on
an external server.
[Figure 1: GNF VMs hosted on an external x86 server, connected to the MX Series chassis]
See “Connecting the Servers and the Router” on page 30 for information about how to
connect an MX Series router to a pair of external x86 servers.
In Junos Node Slicing, the MX Series router functions as the base system (BSYS). The
BSYS owns all the physical components of the router, including all line cards and fabric.
Through Junos OS configuration at the BSYS, you can assign line cards to GNFs and
define Abstracted Fabric (AF) interfaces between GNFs. The BSYS software runs on a
pair of redundant Routing Engines of the MX Series router.
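For example, line-card assignment at the BSYS takes the following general form (the GNF IDs and FPC slots shown here are illustrative; see “Configuring Guest Network Functions” on page 47 for the full procedure):

```
[edit]
user@router# set chassis network-slices guest-network-functions gnf 1 fpcs 0
user@router# set chassis network-slices guest-network-functions gnf 2 fpcs 1
user@router# commit
```

After the commit, FPC 0 belongs to GNF 1 and FPC 1 belongs to GNF 2.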
A guest network function (GNF) logically owns the line cards assigned to it by the base
system (BSYS), and maintains the forwarding state of the line cards. You can configure
multiple GNFs on an MX Series router (see “Configuring Guest Network Functions” on
page 47). The Junos OS control plane of each GNF runs as a virtual machine (VM). The
Juniper Device Manager (JDM) software, hosted on a pair of x86 servers, orchestrates
the GNF VMs. In the JDM, the GNFs are referred to as virtual network functions (VNF).
Creating a GNF requires two sets of configurations, one to be performed at the BSYS,
and the other at the JDM.
A GNF is defined by an ID. This ID must be the same at the BSYS and JDM.
The BSYS part of the GNF configuration comprises giving it an ID and a set of line cards.
The JDM part of the GNF configuration comprises specifying the following attributes:
• A VNF name.
• A GNF ID. This ID must be the same as the GNF ID used at the BSYS.
The server resource template defines the number of dedicated CPU cores and the size
of DRAM to be assigned to a GNF. For a list of predefined server resource templates
available for GNFs, see the Server Hardware Resource Requirements (Per GNF) section
in “Minimum Hardware and Software Requirements for Junos Node Slicing” on page 27.
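At the JDM, these attributes come together roughly as follows (the VNF name gnf-a is hypothetical, and the statement paths are a sketch based on the virtual-network-functions hierarchy documented in this guide):

```
[edit]
root@jdm# set virtual-network-functions gnf-a id 1
root@jdm# set virtual-network-functions gnf-a resource-template 2core-16g
root@jdm# commit
```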
After a GNF is configured, you can access it by connecting to the virtual console port of
the GNF. Using the Junos OS CLI at the GNF, you can then configure the GNF system
properties such as hostname and management IP address, and subsequently access it
through its management port.
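For example, a minimal first configuration entered at the GNF console might look like this (the hostname, address, and management interface name fxp0 are illustrative and may differ on your system):

```
[edit]
root@gnf# set system host-name gnf-a
root@gnf# set interfaces fxp0 unit 0 family inet address 192.0.2.10/24
root@gnf# commit
```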
The Juniper Device Manager (JDM), a virtualized Linux container, enables provisioning
and management of the GNF VMs.
JDM supports a Junos OS-like CLI, NETCONF for configuration and management, and SNMP
for monitoring.
A JDM instance is hosted on each of the x86 servers. The JDM instances are typically
configured as peers that synchronize the GNF configurations: when a GNF VM is created
on one server, the backup GNF VM is automatically created on the other server.
[Figure 2: An Abstracted Fabric (AF) interface connecting GNF1 and GNF2; the Packet
Forwarding Engines (PFE 0 and PFE 1) of each GNF attach to the AF interface]
An AF interface connects two GNFs through the fabric and aggregates all the Packet
Forwarding Engines (PFEs) that connect the two GNFs. The capacity of an AF interface
is the sum of the bandwidth of the Packet Forwarding Engines that belong to it.
For example, if GNF1 has one MPC8 (which has four Packet Forwarding Engines with
240 Gbps capacity each), and GNF1 is connected with GNF2 and GNF3 using AF interfaces
(af1 and af2), the maximum AF capacity of GNF1 would be 4x240 Gbps = 960 Gbps.
GNF1—af1——GNF2
GNF1—af2——GNF3
For information on the bandwidth supported on each MPC, see Table 3 on page 22.
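The capacity arithmetic in the example above can be sketched as follows (a plain shell illustration, not a Juniper tool):

```shell
# An AF interface aggregates the bandwidth of every PFE behind it.
num_pfes=4        # an MPC8 has four Packet Forwarding Engines
per_pfe_gbps=240  # each rated at 240 Gbps
echo "$((num_pfes * per_pfe_gbps)) Gbps"   # maximum AF capacity of GNF1: 960 Gbps
```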
NOTE: The non-AF interfaces support all the protocols that work on Junos
OS.
The AF interface supports the following:
• Multicast forwarding
• MPLS applications where the AF interface acts as a core interface (L3VPN, VPLS,
L2VPN, L2CKT, EVPN, and IP over MPLS)
• IPv4 Forwarding
• IPv6 Forwarding
• MPLS
• ISO
• CCC
With the AF interface configuration, GNFs support only AF-capable MPCs. Table 3 on page 22
lists the AF-capable MPCs, the number of PFEs supported per MPC, and the bandwidth
supported per MPC.
MPC  Initial Release  Number of PFEs  Total Bandwidth
NOTE:
• A GNF that does not have the AF interface configuration supports all the
MPCs that are supported by a standalone MX Series router. For the list of
supported MPCs, see MPCs Supported by MX Series Routers.
• We recommend that you set the MTU settings on the AF interface to align
to the maximum allowed value on the XE/GE interfaces. This ensures
minimal or no fragmentation of packets over the AF interface.
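A sketch of that recommendation at the GNF (the interface name and MTU value here are illustrative; use the maximum value allowed on the XE/GE interfaces in your setup):

```
[edit]
user@gnf# set interfaces af0 mtu 9192
```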
AF Interface Restrictions
• There can be minimal traffic drops (both transit and host) during the offline/restart
of an MPC hosted on a remote GNF.
• Interoperability between AF-capable MPCs and non-AF capable MPCs is not supported.
See Also • Configuring Abstracted Fabric Interfaces Between a Pair of GNFs on page 50
Figure 3 on page 24 shows the mastership behavior of GNF and BSYS with Routing Engine
redundancy.
BSYS Mastership
The BSYS Routing Engine mastership arbitration behavior is identical to that of Routing
Engines on MX Series routers.
GNF Mastership
The GNF mastership is independent of the BSYS mastership and that of other GNFs. The
GNF mastership arbitration is done through Junos OS. Under connectivity failure conditions,
GNF mastership is handled conservatively.
• JDM administrator—Responsible for the JDM server port configuration, and for the
provisioning and life-cycle management of the GNF VMs (VNFs). JDM CLI commands
are available for these tasks.
While JDM software versioning does not have a similar restriction with respect to the GNF
or BSYS software versions, we recommend that you regularly update the JDM software.
A JDM upgrade does not affect any of the running GNFs.
Please contact Juniper Networks if you have queries pertaining to Junos Node Slicing
licenses.
• Minimum Hardware and Software Requirements for Junos Node Slicing on page 27
• Preparing for Junos Node Slicing Setup on page 30
• Setting Up Junos Node Slicing on page 38
• Setting Up YANG-Based Orchestration of GNFs on page 59
To set up Junos Node Slicing, you need an MX Series router and a pair of industry standard
x86 servers. The x86 servers host the Juniper Device Manager (JDM) along with the GNF
VMs.
MX Series Router
The following routers support Junos Node Slicing:
• MX2010
• MX2020
• MX480
• MX960
• MX2008
NOTE: For the MX960 and MX480 routers, the Control Boards must be
SCBE2; and the Routing Engines must be interoperable with SCBE2
(RE-S-1800, RE-S-X6-64G).
x86 Servers
Ensure that both servers have a similar (preferably identical) hardware configuration.
The server hardware requirements are a function of how many GNFs you plan to use: they
are the sum of the requirements of the individual GNFs plus the shared resource
requirements.
x86 CPU:
BIOS:
Storage:
• /vm-primary, which must have a minimum available storage space of 350 GB.
Each GNF must be associated with a resource template, which defines the number of
dedicated CPU cores and the size of DRAM to be assigned for that GNF.
Template  CPU Cores  DRAM (GB)
2core-16g  2  16
4core-32g  4  32
6core-48g  6  48
8core-64g  8  64
Table 5 on page 29 lists the server hardware resources that are shared between all the
guest network functions (GNFs) on a server:
Component Specification
CPU • Four cores to be allocated for JDM and Linux host processing.
Network Ports • Two 10-Gbps Ethernet interfaces for control plane connection between the server and the router.
• Minimum—1 PCIe NIC card with Intel X710 dual port 10-Gbps Direct Attach, SFP+, Converged
Network Adapter, PCIe 3.0, x8
• Recommended—2 NIC cards of the above type. Use one port from each card to provide redundancy
at the card level.
• One Ethernet interface (1/10 Gbps) for Linux host management network.
• One Ethernet interface (1/10 Gbps) for JDM management network.
• One Ethernet interface (1/10 Gbps) for GNF management network. (This port is shared by all the
GNFs on that server).
• Serial port or an equivalent interface (iDRAC, IPMI) for server console access.
To enable virtualization for RHEL, choose “Virtualization Host” for the Base Environment
and “Virtualization Platform” as an Add-On from the Software Selection screen during
installation.
NOTE:
• The hypervisor supported is KVM.
• Install additional packages required for Intel X710 NIC Driver and JDM. For more
information, see the “Updating Intel X710 NIC Driver for x86 Servers” on page 34 and
“Installing Additional Packages for JDM” on page 36 sections.
• If you are using Intel X710 NIC, ensure that you have the latest driver (2.0.23 or later)
installed.
The servers must also have the BIOS setup as described in “x86 Server CPU BIOS Settings”
on page 32 and the Linux GRUB configuration as described in “x86 Server Linux GRUB
Configuration” on page 33.
NOTE:
• The x86 servers require internet connectivity for you to be able to perform
host OS updates and install the additional packages.
• Ensure that you have the same host OS software version on both the
servers.
NOTE: The following software packages are required to set up Junos Node
Slicing:
• JDM package
Before setting up Junos Node Slicing, you need to perform a few preparatory steps, such
as connecting the servers and the router, installing additional packages, configuring x86
server Linux GRUB, and setting up the BIOS of the x86 server CPUs.
To set up Junos Node Slicing, you must directly connect a pair of external x86 servers to
the MX Series router. Besides the management port for the Linux host, each server also
requires two additional ports for providing management connectivity for the JDM and
the GNF VMs, respectively, and two ports for connecting to the MX Series router.
Figure 4 on page 31 shows how an MX2020 router is connected to a pair of x86 external
servers.
[Figure 4: An MX2020 router connected to two external x86 servers (Server 0 and
Server 1). Each server has iDRAC and serial-port console access; separate management
ports for the Linux host, JDM, and GNFs; and ports p3p1 and p3p2 connected to the
Control Boards (RE0 and RE1) of the MX2020]
According to the example in Figure 4 on page 31, em1, em2, and em3 on the x86 servers
are the ports that are used for the management of the Linux host, the JDM and the GNFs,
respectively. p3p1 and p3p2 on each server are the two 10-Gbps ports that are connected
to the Control Boards of the MX Series router.
NOTE: The names of interfaces on the server, such as em1 and p3p1, might vary
according to the server hardware configuration.
For more information on the XGE ports of the MX Series router Control Board (CB)
mentioned in Figure 4 on page 31, see:
NOTE: Use the show chassis ethernet-switch command to view these XGE
ports. In the command output on MX960, refer to the port numbers 24 and
26 to view these ports on the SCBE2. In the command output on MX2010 and
MX2020, refer to the port numbers 26 and 27 to view these ports on the
Control Board-Routing Engine (CB-RE).
Ensure the following BIOS settings on the server:
• Hyperthreading is disabled.
• The CPU cores are set to reduce jitter by limiting C-state use.
To find the rated frequency of the CPU cores on the server, run the Linux host command
lscpu, and check the value for the field Model name. See the following example:
..
Model name: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
..
To find the frequency at which the CPU cores are currently running, run the Linux host
command grep MHz /proc/cpuinfo and check the value for each CPU core.
On a server that has the BIOS set to operate the CPU cores at their rated frequency, the
observed values for the CPU cores will all match the rated frequency (or be very close
to it), as shown in the following example.
…
cpu MHz : 2499.902
cpu MHz : 2500.000
cpu MHz : 2500.000
cpu MHz : 2499.902
…
On a server that does not have the BIOS set to operate the CPU cores at their rated
frequency, the observed values for the CPU cores do not match the rated frequency, and
the values could also vary with time (you can check this by rerunning the command).
…
cpu MHz : 1200.562
cpu MHz : 1245.468
cpu MHz : 1217.625
cpu MHz : 1214.156
To set the x86 server BIOS system profile to operate the CPU cores at their rated
frequency, reduce jitter, and disable hyperthreading, consult the server manufacturer,
because these settings vary with server model and BIOS versions.
For x86 servers running Red Hat Enterprise Linux (RHEL) 7.3, perform the following steps:
1. Determine the number of CPU cores on the x86 server. Ensure that hyperthreading
has already been disabled, as described in “x86 Server CPU BIOS Settings” on page 32.
You can use the Linux command lscpu to find the total number of CPU cores, as shown
in the following example:
…
Cores per socket: 12
Sockets: 2
…
Here, there are 24 cores (12 x 2). The CPU cores are numbered as core 0 to core 23.
2. As per this example, the isolcpus parameter must be set to isolcpus=2-23 (isolating
all CPU cores other than cores 0 and 1 for use by JDM and the GNF VMs).
To set the isolcpus parameter in the Linux GRUB configuration file, follow the procedure
described in the section Isolating CPUs from the process scheduler in this Red Hat
document. A summary of the section is as follows:
a. Edit the Linux GRUB file /etc/default/grub to append the isolcpus parameter to
the variable GRUB_CMDLINE_LINUX, as shown in the following example:
GRUB_CMDLINE_LINUX=
"crashkernel=auto rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet
isolcpus=2-23"
b. Run the Linux shell command grub2-mkconfig to regenerate the GRUB configuration
file. On BIOS-based servers:
# grub2-mkconfig -o /boot/grub2/grub.cfg
On UEFI-based servers:
# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
c. Reboot the server for the updated kernel command line to take effect.
d. Verify that the isolcpus parameter has been set by checking the output of the
Linux command cat /proc/cmdline, as shown in the following example:
# cat /proc/cmdline
For x86 servers running Ubuntu 16.04, perform the following steps:
1. Determine the number of CPU cores on the x86 server. Ensure that hyperthreading
has already been disabled, as described in x86 Server CPU BIOS Settings. You can
use the Linux command lscpu to find the total number of CPU cores.
2. Edit the /etc/default/grub file to append the isolcpus parameter to the variable
GRUB_CMDLINE_LINUX_DEFAULT, as shown in the following example:
GRUB_CMDLINE_LINUX_DEFAULT=
"intel_pstate=disable processor.ignore_ppc=1 isolcpus=2-15"
3. Run the update-grub command to regenerate the GRUB configuration file:
# update-grub
4. Reboot the server for the updated kernel command line to take effect.
5. Verify that the isolcpus parameter has been set by checking the output of the
Linux command cat /proc/cmdline.
You need to first identify the X710 NIC interface on the servers. For example, this could
be p3p1.
You can check the NIC driver version by running the Linux command ethtool -i interface.
See the following example:
driver: i40e
version: 2.0.23
firmware-version: 5.05 0x80002899 17.5.12
...
Refer to the Intel support page for instructions on updating the driver.
NOTE: Ensure that the host OS is up to date prior to updating the Intel X710
NIC driver.
• For RedHat:
• kernel-devel
• Development Tools
• For Ubuntu:
• make
• gcc
If you are using RedHat, run the following commands to install the packages:
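(The exact commands are elided in this copy; given the packages listed above, a likely form is:)

```
yum install kernel-devel
yum groupinstall "Development Tools"
```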
If you are using Ubuntu, run the following commands to install the packages:
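(The exact commands are elided in this copy; given the packages listed above, a likely form is:)

```
apt-get install make gcc
```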
NOTE: After updating the Intel X710 NIC driver, you might notice the following
message in the host OS log:
"i40e: module verification failed: signature and/or required key missing - tainting
kernel"
Ignore this message. It appears because the updated NIC driver module has
superseded the base version of the driver that was packaged with the host
OS.
See Also • Minimum Hardware and Software Requirements for Junos Node Slicing on page 27
NOTE: The x86 Servers must have the virtualization packages installed.
For RHEL 7.3, install the following additional packages, which can be downloaded from
the Red Hat Customer Portal.
• python-psutil-1.2.1-1.el7.x86_64.rpm
• net-snmp-5.7.2-24.el7.x86_64.rpm
• net-snmp-libs-5.7.2-24.el7.x86_64.rpm
• libvirt-snmp-0.0.3-5.el7.x86_64.rpm
Only for Junos OS Releases 17.4R1 and earlier, and for 18.1R1, if you are running RHEL 7.3,
also install the following additional package:
• libstdc++-4.8.5-11.el7.i686.rpm
NOTE:
• The package version numbers shown are the minimum versions. Newer
versions might be available in the latest RHEL 7.3 patches.
• For RHEL, we recommend that you install the packages using the yum
command.
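For example, a likely yum invocation for the packages listed above (versions are resolved by yum):

```
yum install python-psutil net-snmp net-snmp-libs libvirt-snmp
```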
• python-psutil
Only for Junos OS Releases 17.4R1 and earlier, and for 18.1R1, if you are running Ubuntu,
also install the following additional package:
• libstdc++6:i386
NOTE:
• For Ubuntu, you can use the apt-get command to install the latest version
of these packages. For example, use:
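(The original example is elided in this copy; given the python-psutil package listed above, a likely form is:)

```
apt-get install python-psutil
```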
• Ensure that the MX Series router is connected to the x86 servers as described in
“Connecting the Servers and the Router” on page 30.
• Power on the two x86 servers and both the Routing Engines on the MX Series router.
• Identify the Linux host management port on both the x86 servers. For example, em1.
• Identify the ports to be assigned for the JDM and the GNF management ports. For
example, em2 and em3.
• Identify the two 10-Gbps ports that are connected to the Control Boards on the MX
Series router. For example, p3p1 and p3p2.
• Minimum Hardware and Software Requirements for Junos Node Slicing on page 27
Before proceeding to perform the Junos Node Slicing setup tasks, you must have
completed the procedures described in the chapter “Preparing for Junos Node Slicing
Setup” on page 30.
NOTE: Ensure that the MX Series router is connected to the x86 servers as
described in “Connecting the Servers and the Router” on page 30.
Junos Node Slicing requires the MX Series router to function as the base system (BSYS).
Use the following steps to configure an MX Series router to operate in BSYS mode:
1. Install the Junos OS package for BSYS on both the Routing Engines of the MX Series
router.
b. Click Base System > Junos OS version number > Junos version number (64-bit
High-End).
c. On the Software Download page, select the I Agree option under End User License
Agreement and then click Proceed.
2. On the MX Series router, run the show chassis hardware command and verify that the
transceivers on both the Control Boards (CBs) are detected. The following text
represents a sample output:
NOTE: A router in BSYS mode is not expected to run features other than
those required for the basic management functions of Junos Node Slicing.
For example, the BSYS is not expected to have interface configurations
associated with the line cards installed in the system. Instead, guest network
functions (GNFs) carry the full-fledged router configurations.
Download and install the JDM RPM package for x86 servers running RHEL as follows:
b. Click JDM > Junos OS version number > Juniper Device Manager version number (for
Redhat).
c. On the Software Download page, select the I Agree option under the End User License
Agreement and then click Proceed.
To install the package on x86 servers running RHEL, perform the following steps on each
of the servers:
1. Disable SELINUX and reboot the server. You can disable SELINUX by setting the value
for SELINUX to disabled in the /etc/selinux/config file.
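The edit amounts to changing one line in /etc/selinux/config. The sketch below demonstrates the change against a local copy of the file (the stanza mirrors the RHEL default; editing the real file requires root, and the reboot still applies):

```shell
# Work on a local copy; on the server, edit /etc/selinux/config itself.
cat > selinux-config <<'EOF'
SELINUX=enforcing
SELINUXTYPE=targeted
EOF

# Disable SELinux at boot by rewriting the SELINUX line.
sed -i 's/^SELINUX=.*/SELINUX=disabled/' selinux-config
grep '^SELINUX=' selinux-config   # prints: SELINUX=disabled
```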
2. Install the JDM RPM package (indicated by the .rpm extension) by using the following
command. An example of the JDM RPM package used is shown below:
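The installation command is elided in this extract; it is a standard rpm install, sketched here with a placeholder for the downloaded JDM package filename:

```shell
rpm -ivh <jdm-package>.rpm
```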
Download and install the JDM Ubuntu package for x86 servers running Ubuntu 16.04 as
follows:
b. Click JDM > Junos OS version number > Juniper Device Manager version number (for
Debian).
c. On the Software Download page, select the I Agree option under the End User License
Agreement and then click Proceed.
To install the JDM package on the x86 servers running Ubuntu 16.04, perform the following
steps on each of the servers:
2. Install the JDM Ubuntu package (indicated by the .deb extension) by using the following
command. An example of the JDM Ubuntu package used is shown below:
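The installation command is elided in this extract; it is a standard dpkg install, sketched here with a placeholder for the downloaded JDM package filename:

```shell
dpkg -i <jdm-package>.deb
```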
1. On each server, start the JDM and assign the two servers the identities server0
and server1, respectively, as follows:
Starting JDM
Starting JDM
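The start commands themselves are elided in this extract; from the Linux host shell they take roughly this form (syntax recalled from JDM documentation, not verified against your release):

```shell
jdm start server=0    # on the first server
jdm start server=1    # on the second server
```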
2. Enter the JDM console on each server by running the following command:
root@jdm% cli
New Password:
NOTE:
• The JDM root password must be the same on both the servers.
root@jdm# commit
8. From the Linux host, run the ssh jdm command to log in to the JDM shell.
• Minimum Hardware and Software Requirements for Junos Node Slicing on page 27
• Monitor the state of the JDM, the host server and the GNFs by using JDM CLI commands.
NOTE: The non-root user accounts function only inside JDM, not on the host
server.
2. Define a user name and assign the user with a predefined login class.
New Password:
root@jdm# commit
Table 6 on page 43 contains the predefined login classes that JDM supports for non-root
users:
read-only Similar to the operator class, except that users cannot restart daemons inside JDM.
• The two 10-Gbps server ports that are connected to the MX Series router.
Therefore, you need to identify the following on each server before starting the
configuration of the ports:
• The server interfaces (for example, p3p1 and p3p2) that are connected to CB0 and CB1
on the MX Series router.
• The server interfaces (for example, em2 and em3) to be used for JDM management
and GNF management.
For more information, see the figure “Connecting the Servers and the Router” on page 30.
NOTE:
• You need this information for both server0 and server1.
To configure the x86 server interfaces in JDM, perform the following steps on both the
servers:
NOTE: Ensure that you apply the same configuration on both server0 and
server1.
At both server0 and server1, run the following JDM CLI command:
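The command is elided in this extract; given the jdm-get-server-connections RPC listed later in this document, it is presumably the corresponding show command, which reports whether the two JDMs can reach each other:

```
root@jdm> show server connections
```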
For example, to log in to the peer server from server0, exit the JDM CLI,
and use the following command from JDM shell:
Similarly, to log in to the peer server from server1, use the following
command:
4. Apply the configuration statements in the JDM CLI configuration mode to set the JDM
management IP address, default route, and the JDM hostname for each JDM instance
as shown in the following example.
root@jdm# set groups server0 interfaces jmgmt0 unit 0 family inet address 10.216.105.112/21
root@jdm# set groups server1 interfaces jmgmt0 unit 0 family inet address 10.216.105.113/21
root@jdm# set groups server0 routing-options static route 0.0.0.0/0 next-hop 10.216.111.254
root@jdm# set groups server1 routing-options static route 0.0.0.0/0 next-hop 10.216.111.254
root@jdm# set groups server0 system host-name test-jdm-server0
root@jdm# set groups server1 system host-name test-jdm-server1
root@jdm# commit
NOTE:
• jmgmt0 stands for the JDM management port. This is different from the
Linux host management port. Both JDM and the Linux host management
ports are independently accessible from the management network.
5. Run the following JDM CLI command on each server and ensure that all the interfaces
are up.
Component                 Interface       Status  Comments
Host to JDM port          virbr0          up
Physical CB0 port         p3p1            up
Physical CB1 port         p3p2            up
Physical JDM mgmt port    em2             up
Physical VNF mgmt port    em3             up
JDM-GNF bridge            bridge_jdm_vm   up
CB0                       cb0             up
CB1                       cb1             up
JDM mgmt port             jmgmt0          up
JDM to HOST port          bme1            up
JDM to GNF port           bme2            up
JDM to JDM link0*         cb0.4002        up
JDM to JDM link1          cb1.4002        up
NOTE: For sample JDM configurations, see “Sample Configuration for Junos
Node Slicing” on page 54.
If you want to modify the server interfaces configured in the JDM, perform the following
steps:
2. From the configuration mode, deactivate the virtual network functions configuration,
and then commit the change.
3. Configure and commit the new interfaces as described in step 1 of the main
procedure.
root@jdm:~# reboot
5. From the configuration mode, activate the virtual network functions configuration,
and then commit the change.
NOTE:
• Before attempting to create a GNF, you must ensure that the servers have
sufficient resources (CPU, memory, storage) for that GNF.
• You need to assign an ID to each GNF. This ID must be the same at the
BSYS and the JDM.
At the BSYS, specify a GNF by assigning it an ID and a set of line cards by applying the
configuration as shown in the following example:
user@router# commit
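The set statements themselves are elided in this extract; based on the network-slices hierarchy documented later in this guide, a minimal sketch (the GNF ID, description, and FPC slot are placeholders) is:

```
[edit]
user@router# set chassis network-slices guest-network-functions gnf 1 description gnf-a
user@router# set chassis network-slices guest-network-functions gnf 1 fpcs 0
user@router# commit
```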
In the JDM, the GNF VMs are referred to as virtual network functions (VNFs). A VNF has
the following attributes:
• A VNF name.
• A GNF ID. This ID must be the same as the GNF ID used at the BSYS.
1. Retrieve the Junos OS image for GNFs and place it in the host OS directory
/var/jdm-usr/gnf-images/ on both the servers.
b. Click GNF > Junos OS version number > Junos version number (Guest Network
Function) .
c. On the Software Download page, select the I Agree option under the End User
License Agreement and then click Proceed.
2. Assign this image to a GNF by using the JDM CLI command as shown in the following
example:
Server0:
Added image: /vm-primary/test-gnf/test-gnf.img
Server1:
Added image: /vm-primary/test-gnf/test-gnf.img
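The command that produces the output above is elided in this extract; it follows the request virtual-network-functions ... add-image form cited later in this document (the image filename is a placeholder):

```
root@jdm> request virtual-network-functions test-gnf add-image /var/jdm-usr/gnf-images/<gnf-image>.img all-servers
```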
3. Configure the VNF by applying the configuration statements as shown in the following
example:
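The configuration statements are elided in this extract; the sample configuration later in this document suggests this shape:

```
[edit]
root@jdm# set virtual-network-functions test-gnf id 1
root@jdm# set virtual-network-functions test-gnf chassis-type mx2020
root@jdm# set virtual-network-functions test-gnf resource-template 2core-16g
root@jdm# commit
```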
To also specify a baseline or initial Junos OS configuration for a GNF, prepare the GNF
configuration file (example: /var/jdm-usr/gnf-config/test-gnf.conf) on both the servers
and specify the filename as the parameter in the base-config statement as shown
below:
• You use the same GNF ID as the one specified earlier in BSYS.
• The baseline configuration filename (with the path) is the same on both
the servers.
• The GNF name used here is the same as the one assigned to the Junos
OS image for the GNF in step 2.
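The base-config statement described above, sketched consistently with the sample configuration later in this document:

```
root@jdm# set virtual-network-functions test-gnf base-config /var/jdm-usr/gnf-config/test-gnf.conf
root@jdm# commit
```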
4. To verify that the VNF is created, run the following JDM CLI command:
5. Log in to the console of the VNF by issuing the following JDM CLI command:
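The JDM CLI commands for steps 4 and 5 are elided in this extract; based on command forms used elsewhere in this document, they would be approximately (the console syntax in particular is an assumption):

```
root@jdm> show virtual-network-functions
root@jdm> request virtual-network-functions test-gnf console
```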
6. Configure the VNF the same way as you configure an MX Series Routing Engine.
NOTE:
• For sample configurations, see “Sample Configuration for Junos Node
Slicing” on page 54.
• If you had previously brought down any physical x86 CB interfaces or the
GNF management interface from Linux shell (by using the command ifconfig
interface-name down), these will automatically be brought up when the
GNF is started.
• As exceptions, the following two parameters under the chassis configuration hierarchy
should be applied at both BSYS and GNF:
In this example, af2 is the Abstracted Fabric interface instance 2 and af4 is the
Abstracted Fabric interface instance 4.
NOTE: The allowed AF interface values range from af0 through af9.
The GNF AF interface will be visible and up. You can configure an AF interface the way
you configure any other interface.
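On the BSYS side, the af2 example described above maps onto the network-slices hierarchy shown in the sample configuration later in this document; a sketch for one end of the link:

```
[edit chassis network-slices guest-network-functions]
gnf 1 {
    af2 {
        peer-gnf id 2 af1;
    }
}
```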
NOTE:
• If you want to apply MPLS family configurations on the AF interfaces, you
can apply the command set interfaces af-name unit logical-unit-number
family mpls on both the GNFs between which the AF interface is configured.
The following sections explain the forwarding class-to-queue mapping, and the behavior
aggregate (BA) classifiers and rewrites supported on Abstracted Fabric (AF)
interfaces.
An AF interface is a simulated WAN interface with most of the capabilities of any other
interface, except that traffic destined for a remote Packet Forwarding Engine must
still traverse the two fabric queues (low priority and high priority).
NOTE: Presently, the AF interface operates in 2-queue mode only. Queue-based
features such as scheduling, policing, and shaping are therefore not
available on an AF interface.
Packets on the AF interface inherit the fabric queue that is determined by the fabric
priority configured for the forwarding class to which that packet belongs. For example,
see the following forwarding class to queue map configuration:
[edit]
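The map itself is elided in this extract; a sketch consistent with the VoiceSig discussion that follows (the queue number is assumed, and the statement syntax should be verified against your release):

```
user@router# set class-of-service forwarding-classes class VoiceSig queue-num 1 fabric-priority high
user@router# commit
```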
As shown in the preceding example, when a packet gets classified to the forwarding class
VoiceSig, the code in the forwarding path examines the fabric priority of that forwarding
class and decides which fabric queue to choose for this packet. In this case, high-priority
fabric queue is chosen.
You can also apply rewrites to IP packets entering an MPLS tunnel, rewriting both
the EXP and IPv4 type-of-service (ToS) bits. This works on AF interfaces as it
does on other interfaces.
NOTE:
The following are not supported:
Standard linkUp/linkDown SNMP traps are generated. A default community string jdm
is used.
Standard linkUp/linkDown SNMP traps are generated. A default community string host
is used.
JDM to JDM connectivity loss/regain traps are sent using generic syslog traps
(jnxSyslogTrap) through the host management interface.
The JDM connectivity down trap JDM_JDM_LINK_DOWN is sent when the JDM is not
able to communicate with the peer JDM on another server over cb0 or cb1 links. See
the following example:
The JDM to JDM Connectivity up trap JDM_JDM_LINK_UP is sent when either the cb0 or
cb1 link comes up, and JDMs on both the servers are able to communicate again. See
the following example:
SNMP traps are sent to the target NMS server. To configure the target NMS server details
in the JDM, see the following example:
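The example is elided in this extract; assuming JDM follows the standard Junos snmp trap-group syntax, a sketch (the group name and target address are hypothetical):

```
[edit]
root@jdm# set snmp trap-group jdm-traps targets 192.0.2.10
root@jdm# commit
```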
[edit]
interfaces {
cb0 p3p1;
cb1 p3p2;
jdm-management em2;
vnf-management em3;
}
}
interfaces {
jmgmt0 {
unit 0 {
family inet {
address 10.216.105.113/21;
}
}
}
routing-options {
static {
route {
0.0.0.0/0 next-hop 10.216.111.254;
}
}
}
}
}
}
apply-groups [ server0 server1 ];
system {
root-authentication {
encrypted-password "..."; ## SECRET-DATA
}
services {
ssh;
netconf {
ssh;
rfc-compliant;
}
}
}
virtual-network-functions {
test-gnf {
id 1;
chassis-type mx2020;
resource-template 2core-16g;
base-config /var/jdm-usr/gnf-config/test-gnf.conf;
}
}
peer-gnf id 2 af1;
}
af4 {
peer-gnf id 4 af1;
}
description gnf-a;
fpcs [ 0 19 ];
}
gnf 2 {
af1 {
peer-gnf id 1 af2;
}
af4 {
peer-gnf id 4 af2;
}
description gnf-b;
fpcs [ 1 6 ];
}
gnf 4 {
af1 {
peer-gnf id 1 af4;
}
af2 {
peer-gnf id 2 af4;
}
description gnf-d;
fpcs [ 3 4 ];
}
}
}
Assume that there is an AF interface between GNF1 and GNF2. The following sample
configuration illustrates how to apply rewrites on the AF interface at GNF1 and apply
classifiers on the AF interface on GNF2, in a scenario where traffic comes from GNF1 to
GNF2:
GNF1 Configuration
interfaces {
xe-4/0/0 {
unit 0 {
family inet {
address 22.1.2.2/24;
}
}
}
af2 {
unit 0 {
family inet {
address 32.1.2.1/24;
}
}
}
}
class-of-service {
classifiers {
dscp testdscp {
forwarding-class assured-forwarding {
loss-priority low code-points [ 001001 000000 ];
}
}
}
interfaces {
xe-4/0/0 {
unit 0 {
classifiers {
dscp testdscp;
}
}
classifiers {
dscp testdscp;
}
}
af1 {
unit 0 {
rewrite-rules {
dscp testdscp; /*Rewrite rule applied on egress AF interface on GNF1.*/
}
}
}
}
rewrite-rules {
dscp testdscp {
forwarding-class assured-forwarding {
loss-priority low code-point 001001;
}
}
}
}
GNF2 Configuration
interfaces {
xe-3/0/0:0 {
unit 0 {
family inet {
address 42.1.2.1/24;
}
}
}
af1 {
unit 0 {
family inet {
address 32.1.2.2/24;
}
}
}
}
class-of-service {
classifiers {
dscp testdscp {
forwarding-class network-control {
loss-priority low code-points 001001;
}
}
}
interfaces {
af1 {
unit 0 {
classifiers {
dscp testdscp; /*Classifier applied on AF at ingress of GNF2*/
}
}
}
}
}
Related • Minimum Hardware and Software Requirements for Junos Node Slicing on page 27
Documentation
• Connecting the Servers and the Router on page 30
Starting from Junos OS Release 17.4R1, Junos Node Slicing supports YANG-based
abstraction to orchestrate guest network functions (GNFs) using a single touchpoint.
NOTE: Junos Node Slicing also supports GNF life cycle management using
the dual touchpoint method. In this method, ODL sends RPCs to, and receives
responses from, JDM and BSYS separately. To enable dual touchpoint, you
need to mount both the BSYS and the Juniper Device Manager (JDM) on ODL.
To install the YANG package on the BSYS, use the following command:
NOTE: You need to install the same package on the backup Routing Engine
as well.
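The install command is elided in this extract; using the standard Junos YANG package CLI, it would look roughly like this (the module filename is a placeholder):

```
user@router> request system yang add package junos-node-slicing module /var/tmp/<yang-module-file>
```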
After successful installation, you can find the YANG package contents at the following
location:
root@router:/opt/yang-pkg/junos-node-slicing
You can verify the package installation status on both the Routing Engines of the BSYS,
using the following command:
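The verification command is elided in this extract; the standard Junos command for listing installed YANG packages is:

```
user@router> show system yang package
```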
NOTE:
• Ensure that server0 and server1 are up and running.
• Ensure that you replace the jdm-server0-ip and jdm-server1-ip with proper
IP addresses.
To complete the single touchpoint setup, you need to set the BSYS to communicate
with the JDM. This enables the BSYS to service the RPC requests from ODL.
To enable the BSYS to communicate with the JDM, issue the following commands on
the BSYS.
NOTE:
• In the single touchpoint method, use the XML with the jdm- prefix, as seen
in Table 7 on page 61 (for example,
<rpc><jdm-get-route-information/></rpc>).
• In the dual touchpoint method, use the XML without the jdm- prefix (for
example, <rpc><get-route-information/></rpc>).
Task RPC
Task RPC
Stop a GNF.
<rpc>
<jdm-request-virtual-network-functions>
<vnf-name>
GNF-NAME
</vnf-name>
<stop/>
</jdm-request-virtual-network-functions>
</rpc>
Start a GNF.
<rpc>
<jdm-request-virtual-network-functions>
<vnf-name>
GNF-NAME
</vnf-name>
<start/>
</jdm-request-virtual-network-functions>
</rpc>
Restart a GNF.
<rpc>
<jdm-request-virtual-network-functions>
<vnf-name>
GNF-NAME
</vnf-name>
<restart/>
</jdm-request-virtual-network-functions>
</rpc>
<jdm-get-software-information/>
</rpc>
<jdm-get-server-connections/>
</rpc>
<jdm-get-inventory-software-vnf-information/>
</rpc>
<jdm-get-visibility-vnf-information/>
</rpc>
2. Issue the following command on both the master and backup Routing Engines:
Junos Node Slicing upgrade involves upgrading Juniper Device Manager (JDM), guest
network functions (GNFs), and the base system (BSYS).
You can upgrade each of these components independently, as long as they are within
the allowed range of software versions (see “Multi-Version Software Interoperability
Overview” on page 25 for more details). You can also upgrade all of them together.
NOTE: Before starting the upgrade process, save the JDM, GNF VM, and BSYS
configurations for reference.
Upgrading JDM
1. Upgrade the JDM by performing the following tasks on both the servers:
a. Copy the new JDM package (RPM or Ubuntu) to a directory on the host (for
example, /var/tmp).
Stopping JDM
If you are upgrading the JDM RHEL package, use the following command:
If you are upgrading the JDM Ubuntu package, use the following command:
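The upgrade commands are elided in this extract; sketches using the standard package tools (filenames are placeholders):

```shell
# RHEL:
rpm -Uvh <new-jdm-package>.rpm
# Ubuntu:
dpkg -i <new-jdm-package>.deb
```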
NOTE: A JDM upgrade does not affect any of the running GNFs.
See also:
• Installing JDM Ubuntu Package on x86 Servers Running Ubuntu 16.04 on page 40
The GNF and BSYS packages can be upgraded in the same way as you would upgrade
Junos OS on a standalone MX Series router.
NOTE: You can also overwrite an existing GNF image with a new one through
JDM by using the JDM CLI command request virtual-network-functions vnf-name
add-image new-image-name force. This can be useful in a rare situation where
the GNF image does not boot. You can also use the force option to perform
a cleanup if, for example, you abruptly stopped an earlier add-image process
by pressing Ctrl-C (example: request virtual-network-functions vnf-name
delete-image image-name force).
You can run unified ISSU on each GNF independently, without affecting other
GNFs. See also Understanding the Unified ISSU Process.
The following are sample messages that appear if incompatibilities are detected during
software upgrade:
--------------------------------------------------------------------------------
CFG Anomalies for: set snmp interface
--------------------------------------------------------------------------------
FRU-ID Ano-ID ACTION MESSAGE
--------------------------------------------------------------------------------
NONE 102 WARN <sample config incompatibility 1>
--------------------------------------------------------------------------------
FRU Anomalies:
--------------------------------------------------------------------------------
FRU-ID Ano-ID ACTION MESSAGE
--------------------------------------------------------------------------------
0xaa0b 100 WARN <sample FRU incompatibility 1>
Sample output showing how to use the 'force' option to proceed with an upgrade:
After a software update of a GNF or BSYS, if software incompatibilities between the GNF
and the BSYS exist, they will be raised as a chassis alarm. You can view the incompatibility
alarm information by using the show chassis alarms command. You can further view the
details of the incompatibilities by using the show system anomalies command. For more
details, see “Viewing Incompatibilities Between Software Versions” on page 69.
The alarms appear only on GNFs even if the upgrade is performed on the BSYS. The
following types of alarm can occur:
To view software incompatibilities from the BSYS, use the CLI as shown in the following
example:
To view software incompatibilities from a GNF, use the CLI as shown in the following
example:
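Both command forms are elided in this extract; based on the show system anomalies documentation later in this chapter, they would be approximately (the GNF-side form without the gnf-id option is an assumption):

```
user@router> show system anomalies gnf-id 1    # from the BSYS; the GNF ID is required
user@router> show system anomalies             # from a GNF
```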
NOTE:
• As shown in the CLI, remember to specify the GNF ID while viewing the
incompatibilities from BSYS.
If you are restarting both the servers, choose the all-servers option while stopping
each GNF as shown in the following example:
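The command itself is elided in this extract; it is the stop command shown elsewhere in this document, with the all-servers option:

```
root@jdm> request virtual-network-functions gnf_name stop all-servers
```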
gnf_name stopped
If you are restarting a particular server, stop the GNFs on that server by specifying the
server-id as shown in the following example:
gnf_name stopped
NOTE: If you want to view the status of GNFs on both the servers, choose
the all-servers option (example: show virtual-network-functions all-servers).
3. From the Linux host shell, stop the JDM by using the following command:
Stopping JDM
4. From the Linux host shell, verify that the JDM status shows as stopped.
JDM is stopped
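The host-shell JDM commands for steps 3 through 5 are elided in this extract; they take roughly this form (syntax recalled from JDM documentation, not verified against your release):

```shell
jdm stop      # step 3: prints "Stopping JDM"
jdm status    # steps 4 and 5: reports whether JDM is stopped or running
```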
5. After rebooting, verify that the JDM status now shows as running.
After a server reboot, the JDM and the configured GNFs will automatically start running.
If you are replacing the servers, ensure that the operating server pair continues to have
similar or identical hardware configuration. If the server pair were to become temporarily
dissimilar during the replacement (this could be the case when replacing the servers
sequentially), it is recommended that you disable GRES and NSR for this period, and
re-enable them only when both the servers are similar once again.
Following the host OS update, if you are using Intel X710 NICs, ensure that the version of
the X710 NIC driver in use continues to be the latest version as described in “Updating
Intel X710 NIC Driver for x86 Servers” on page 34.
1. Shut down a VNF by using the JDM CLI command request virtual-network-functions
gnf-name stop all-servers. For example:
server0:
--------------------------------------------------------------------------
test-gnf stopped
server1:
--------------------------------------------------------------------------
test-gnf stopped
2. Delete the VNF configuration by applying the JDM CLI configuration statement delete
virtual-network-functions gnf-name. See the following example:
3. Delete the VNF image repository by using the JDM CLI command request
virtual-network-functions gnf-name delete-image all-servers. For example:
server0:
--------------------------------------------------------------------------
Deleted the image repository
server1:
--------------------------------------------------------------------------
Deleted the image repository
NOTE:
• To delete a VNF completely, you must perform all three steps.
• If you want to delete a VNF management interface, you must stop and
delete the VNF first.
• JDM package
NOTE: Save the JDM configuration if you want to use it for reference.
1. Delete the GNFs first by performing all the steps described in the section “Deleting
Guest Network Functions” on page 71.
2. Stop the JDM on each server by running the following command at the host Linux
shell:
3. Uninstall the JDM on each server by running the following command at the host Linux
shell.
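The commands for steps 2 and 3 are elided in this extract; sketches using the standard package tools (the JDM package name is a placeholder, and the jdm stop syntax is recalled, not verified against your release):

```shell
# Step 2, on each server:
jdm stop
# Step 3, on each server:
rpm -e <jdm-package-name>          # RHEL
apt-get remove <jdm-package-name>  # Ubuntu
```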
4. To revert the MX Series router from BSYS mode to standalone mode, apply the
following configuration statements on the MX Series router:
root@router# commit
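The statements referred to above are elided in this extract; deleting the node-slicing hierarchy is the natural sketch (verify the exact revert procedure for your release):

```
[edit]
root@router# delete chassis network-slices
root@router# commit
```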
• network-slices on page 76
• guest-network-functions on page 77
• gnf on page 78
• control-plane-bandwidth-percent (Node Slicing) on page 79
• description (GNF) on page 80
• fpcs (Node Slicing) on page 81
• af-name on page 82
• peer-gnf on page 83
• description (AF) on page 84
network-slices
Syntax network-slices {
guest-network-functions{
gnf id {
control-plane-bandwidth-percent percent;
description description;
fpcs fpcs;
af-name
}
}
}
guest-network-functions
Syntax guest-network-functions {
gnf id {
control-plane-bandwidth-percent percent;
description description;
fpcs fpcs;
af-name
}
}
gnf
Syntax gnf id {
control-plane-bandwidth-percent percent;
description description;
fpcs fpcs;
af-name
}
Options id—GNF ID
Range: 1–10
• guest-network-functions on page 77
Description Allocate a percentage of the bandwidth that exists on the control plane on the router to
the specified guest network function (GNF). Allocating bandwidth prevents potential
overutilization by one GNF over another.
• gnf on page 78
• description on page 80
• fpcs on page 81
description (GNF)
Description Provide a description string for the specified guest network function (GNF).
Options description—A description string for the specified guest network function (GNF).
• gnf on page 78
• control-plane-bandwidth-percent on page 79
• fpcs on page 81
• gnf on page 78
• control-plane-bandwidth-percent on page 79
• description on page 80
af-name
Syntax af-name {
peer-gnf {
id peer-gnf-id;
remote-af-name;
}
description af-description;
}
Description Configure an Abstracted Fabric (AF) interface between a pair of guest network functions
(GNFs). An AF interface is a pseudo interface that behaves like a first-class Ethernet
interface. An AF interface is created on a GNF to communicate with the peer GNF when
the two GNFs are connected to each other through the CLI.
peer-gnf
Syntax peer-gnf {
id peer-gnf-id;
remote-af-name;
}
Description Configure the details of the GNF peer connected using the AF interface.
Options id peer-gnf-id—ID of the GNF peer connected through the Abstracted Fabric (AF)
interface.
Range: 1 through 10
description (AF)
Description Provide a description string for the specified Abstracted Fabric (AF) interface.
Options description—A description string for the specified Abstracted Fabric (AF) interface.
• control-plane-bandwidth-percent on page 79
• fpcs on page 81
Description Display Junos Node Slicing information for the guest network functions (GNFs) configured
on the base system (BSYS). The gnf gnf-id option displays the information about a
particular GNF.
Output Fields Table 8 on page 86 lists the output fields for the show chassis network-slices command.
Output fields are listed in the approximate order in which they appear.
Uptime Duration for which the GNFs have been up and running.
FPCs assigned Shows the FPC slot numbers assigned to the GNF.
BSYS sw version Shows the Junos software version used in the BSYS.
GNF sw version Shows the Junos software version used in the GNF.
GNF uptime Duration for which the GNF has been up and running.
GNF Routing Engine Shows the details of the Routing Engines in slots 0 and 1. It includes the current state (master
Status: or backup), the Routing Engine model, and the GNF host name.
Sample Output
guest-network-functions:
GNF Description State Uptime
1 gnf-a Online 12 hours, 46 minutes, 11 seconds
2 gnf-b Online 12 hours, 13 minutes, 57 seconds
3 gnf-c Online 12 hours, 3 minutes, 55 seconds
4 gnf-d Online 12 hours, 8 minutes, 20 seconds
5 gnf-e Online 12 hours, 2 minutes, 46 seconds
6 gnf-f Online 11 hours, 56 minutes, 29 seconds
GNF ID 1
GNF description NA
GNF state Online
FPCs assigned 7 8
FPCs online 7 8
BSYS router(mx960)
BSYS sw version 18.2-20180321_0948_bsys
GNF sw version 18.2-20180314_gnf
Chassis mx960
BSYS master RE 0
GNF uptime 4 days, 23 hours, 55 minutes, 1 second
GNF Routing Engine Status:
Slot 0:
Current state Master
Model RE-GNF-2400x4
GNF host name gnf-host0
Slot 1:
Current state Backup
Model RE-GNF-2400x4
GNF host name gnf-host1
GNF ID 2
GNF description NA
GNF state Online
FPCs assigned NA
FPCs online NA
BSYS router(mx960)
BSYS sw version 18.2-20180321_0948_bsys
GNF sw version 18.2-20180216_gnf
Chassis mx960
BSYS master RE 0
GNF uptime 4 days, 23 hours, 53 minutes, 54 seconds
GNF Routing Engine Status:
Slot 0:
Current state Master
Model RE-GNF-2400x4
GNF host name gnf-host2
Slot 1:
Current state Backup
Model RE-GNF-2400x4
GNF host name gnf-host3
Description Display the status of the physical interface cards (PICs) of each Flexible PIC Concentrator
(FPC) assigned to different guest network functions (GNFs).
Sample Output
user@router> show chassis fpc pic-status
Description Display information about Flexible PIC Concentrators (fpcs) assigned to different guest
network functions (GNFs).
Output Fields Table 9 on page 91 lists the output fields for the show chassis fpc command. Output
fields are listed in the approximate order in which they appear.
Slot or Slot State Slot number and state. The state can be one of the following conditions:
Temp (C) or Temperature of the air passing by the FPC, in degrees Celsius or in both Celsius
Temperature and Fahrenheit.
Total CPU Total percentage of CPU being used by the FPC's processor.
Utilization (%)
Interrupt CPU Of the total CPU being used by the FPC's processor, the percentage being used
Utilization (%) for interrupts.
1 min CPU Information about the Routing Engine's CPU utilization in the past 1 minute.
Utilization (%)
5 min CPU Information about the Routing Engine's CPU utilization in the past 5 minutes.
Utilization (%)
15 min CPU Information about the Routing Engine's CPU utilization in the past 15 minutes.
Utilization (%)
Heap Utilization Percentage of heap space (dynamic memory) being used by the FPC's processor.
(%) If this number exceeds 80 percent, there may be a software problem (memory
leak).
Buffer Utilization Percentage of buffer space being used by the FPC's processor for buffering
(%) internal messages.
Sample Output
Release Information Command introduced in Junos OS Release 12.3 for MX2020 Universal Routing Platforms.
Command introduced in Junos OS Release 12.3 for MX2010 Universal Routing Platforms.
Command introduced in Junos OS Release 17.2 for MX2008 Universal Routing Platforms.
Output Fields Table 10 on page 93 lists the output fields for the show chassis adc command. Output
fields are listed in the approximate order in which they appear.
Uptime How long the Routing Engine has been connected to the adapter card and, therefore, how long the
adapter card has been up and running.
Sample Output
Description Display information about the FPCs associated with different guest network functions
(GNFs).
Output Fields Table 11 on page 95 lists the output fields for the show chassis network-slices fpcs
command. Output fields are listed in the approximate order in which they appear.
Sample Output
guest-network-functions:
GNF FPCs
1 6
2 1
3 0 2
4 5
5 7 8 9
6 3 4
Description Display incompatibilities between the software version running on the base system
(BSYS) and the software running on a specific guest network function (GNF).
Options gnf-id id—Specify the GNF ID for which you want to view the software incompatibilities.
Output Fields Table 12 on page 96 lists the output fields for the show system anomalies gnf-id command.
Output fields are listed in the approximate order in which they appear.
Anomaly Type Shows the software incompatibility type. The following are the possible values:
Default Action Shows the default actions associated with incompatibilities. The following are
the possible values:
Sample Output
Description Display information about the guest network function (GNF) that you are logged in to.
Output Fields Table 13 on page 100 lists the output fields for the show chassis gnf command. Output
fields are listed in the approximate order in which they appear.
GNF uptime Duration for which the GNF has been up and running.
GNF Routing Engine The details of the Routing Engines in slots 0 and 1, including the current state (master
Status: or backup), the Routing Engine model, and the GNF host name.
Sample Output
GNF ID 1
GNF description NA
GNF state Online
FPCs assigned 8 9
FPCs online 8 9
BSYS router(mx960)
BSYS sw version 18.2-20180321_0948_bsys
GNF sw version 18.2-20180314_1035_gnf
Chassis mx960
BSYS master RE 0
GNF uptime 54 minutes, 37 seconds
GNF Routing Engine Status:
Slot 0:
Current state Master
Model RE-GNF-2100x4
GNF host name gnf-host0
Slot 1:
Current state Backup
Model RE-GNF-2100x4
GNF host name gnf-host1
Description Display a list of all hardware components of the chassis, including the hardware version
level and serial number, the GNF Routing Engine details, and the FPCs assigned to the
GNF.
Output Fields Table 15 on page 104 lists the output fields for the show chassis hardware command.
Output fields are listed in the approximate order in which they appear.
Serial number Serial number of the chassis component. The serial number of the backplane
is also the serial number of the router chassis. Use this serial number when you
need to contact Juniper Networks Customer Support about the router or switch
chassis.
Sample Output
bsys-re0:
--------------------------------------------------------------------------
Hardware inventory:
Item Version Part number Serial number Description
Chassis JN11C9CDDAFK MX2010
Midplane REV 35 750-044636 ABAB9184 Lower Backplane
Midplane 1 REV 02 711-044557 ABAB9048 Upper Backplane
PMP REV 04 711-032426 ACAJ2622 Power Midplane
FPM Board REV 09 760-044634 ABCF2618 Front Panel Display
PSM 0 REV 01 740-050037 1EDB3130084 DC 52V Power Supply
Module
PSM 1 REV 01 740-050037 1EDB313001Z DC 52V Power Supply
Module
PSM 2 REV 01 740-050037 1EDB321018D DC 52V Power Supply
Module
PSM 3 REV 01 740-050037 1EDB32101AZ DC 52V Power Supply
Module
PSM 4 REV 01 740-050037 1EDB32202C2 DC 52V Power Supply
Module
PSM 5 REV 01 740-050037 1EDB32100TC DC 52V Power Supply
Module
PSM 6 REV 01 740-050037 1EDB3210166 DC 52V Power Supply
Module
PSM 7 REV 01 740-050037 1EDB3210165 DC 52V Power Supply
Module
PSM 8 REV 01 740-050037 1EDB3210163 DC 52V Power Supply
Module
PDM 0 REV 03 740-045234 1EGA3170177 DC Power Dist Module
Routing Engine 0 REV 08 750-055814 CAFV5537 RE-S-2X00x8
CB 0 REV 08 750-055087 CAFN3426 MX2K Enhanced SCB
Xcvr 0 REV 01 740-031980 ALM0HC7 SFP+-10G-SR
Xcvr 1 REV 01 740-031980 123363A00418 SFP+-10G-SR
CB 1 REV 08 750-055087 CAFN3423 MX2K Enhanced SCB
SPMB 0 REV 05 711-041855 CAEZ5998 PMB Board
SPMB 1 REV 05 711-041855 CAEZ5993 PMB Board
SFB 0 REV 06 711-044466 ABCD6742 Switch Fabric Board
SFB 1 REV 06 711-044466 ABCG5627 Switch Fabric Board
SFB 2 REV 06 711-044466 ABCG5659 Switch Fabric Board
SFB 3 REV 06 711-044466 ABCG5653 Switch Fabric Board
SFB 4 REV 06 711-044466 ABCG5611 Switch Fabric Board
SFB 5 REV 06 711-044466 ABCG5635 Switch Fabric Board
SFB 6 REV 06 711-044466 ABCG5638 Switch Fabric Board
SFB 7 REV 06 711-044466 ABCG3650 Switch Fabric Board
FPC 8 REV 68 750-044130 ABCY5967 MPC6E 3D
CPU REV 12 711-045719 ABCY9696 RMPC PMB
Fan Tray 0 REV 06 760-046960 ACAY0428 172mm FanTray - 6 Fans
Fan Tray 1 REV 06 760-046960 ACAY0800 172mm FanTray - 6 Fans
Fan Tray 2 REV 06 760-046960 ACAY0797 172mm FanTray - 6 Fans
Fan Tray 3 REV 06 760-046960 ACAY1047 172mm FanTray - 6 Fans
gnf2-re0:
--------------------------------------------------------------------------
Chassis GN59081553B0 MX2010-GNF <<<
Routing Engine 0 RE-GNF-1700x4
Description Display information about the Flexible PIC Concentrators (FPCs) assigned to the guest
network function (GNF).
Output Fields Table 16 on page 107 lists the output fields for the show chassis fpc command. Output
fields are listed in the approximate order in which they appear.
Slot or Slot State Slot number and state. The state can be one of the following conditions:
Temp (C) or Temperature   Temperature of the air passing by the FPC, in degrees Celsius or in both Celsius and Fahrenheit.
Total CPU Utilization (%)   Total percentage of CPU being used by the FPC's processor.
Interrupt CPU Utilization (%)   Of the total CPU being used by the FPC's processor, the percentage being used for interrupts.
1 min CPU utilization (%)   Information about the FPC's CPU utilization in the past 1 minute.
5 min CPU utilization (%)   Information about the FPC's CPU utilization in the past 5 minutes.
15 min CPU utilization (%)   Information about the FPC's CPU utilization in the past 15 minutes.
Heap Utilization (%)   Percentage of heap space (dynamic memory) being used by the FPC's processor. If this number exceeds 80 percent, there might be a software problem (memory leak).
Buffer Utilization (%)   Percentage of buffer space being used by the FPC's processor for buffering internal messages.
Sample Output
4 Online 42 20 0 19 19 19 3584 8 25 2
6 Online 46 12 0 11 11 11 3136 8 19 2
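The Heap Utilization note above (values over 80 percent may indicate a memory leak) lends itself to a simple monitoring check. The sketch below is a hypothetical illustration, not a Juniper tool; the slot numbers mirror the sample rows above, but the heap percentages are invented for the example.

```python
HEAP_THRESHOLD = 80  # percent; the text above treats higher values as a possible memory leak

def slots_with_heap_pressure(heap_by_slot):
    """Return FPC slots whose heap utilization exceeds the documented threshold."""
    return sorted(slot for slot, heap in heap_by_slot.items() if heap > HEAP_THRESHOLD)

# Slots 4 and 6 match the sample output above; the heap readings are hypothetical.
readings = {4: 25, 6: 19, 7: 85}
suspect = slots_with_heap_pressure(readings)  # [7]
```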
Description Display the status of the physical interface cards (PICs) of each Flexible PIC Concentrator
(FPC) assigned to the guest network function (GNF).
Sample Output
Release Information Command introduced in Junos OS Release 12.3 for MX2020 Universal Routing Platforms.
Command introduced in Junos OS Release 12.3 for MX2010 Universal Routing Platforms.
Command introduced in Junos OS Release 17.2 for MX2008 Universal Routing Platforms.
Description Display chassis information about the adapter cards (ADCs) assigned to the guest network
function (GNF).
Output Fields Table 17 on page 110 lists the output fields for the show chassis adc command. The output
fields are listed in the approximate order in which they appear.
Uptime How long the Routing Engine has been connected to the adapter card and, therefore, how long the
adapter card has been up and running.
Sample Output
Description Display status information for the specified Abstracted Fabric (AF) interface.
Options brief | detail | extensive | terse—(Optional) Display the specified level of output.
Output Fields Table 18 on page 113 describes the output fields for the show interfaces (Abstracted
Fabric) command. Output fields are listed in the approximate order in which they appear.
Physical Interface
Physical interface Name and status of the physical interface. All levels
Interface index Index number of the physical interface, which reflects its initialization sequence. detail extensive none
SNMP ifIndex SNMP index number for the physical interface. detail extensive none
Generation Unique number for use by Juniper Networks technical support only. detail extensive
Link-level type Encapsulation being used on the physical interface. All levels
MTU Maximum transmission unit size on the physical interface. All levels
Device flags   Information about the physical device. Possible values are described in the “Device Flags” section under Common Output Fields Description. All levels
Interface flags   Information about the interface. Possible values are described in the “Interface Flags” section under Common Output Fields Description. All levels
Hold-times   Current interface hold-time up and hold-time down, in milliseconds (ms). detail extensive
Last flapped   Date, time, and how long ago the interface went from down to up. The format is Last flapped: year-month-day hour:minute:second:timezone (hour:minute:second ago). For example, Last flapped: 2002-04-26 10:52:40 PDT (04:33:20 ago). detail extensive none
Statistics last cleared   Time when the statistics for the interface were last set to zero. detail extensive
Traffic statistics   Number and rate of bytes and packets received and transmitted on the physical interface. detail extensive
IPv6 transit statistics   Number of IPv6 transit bytes and packets received and transmitted on the interface if IPv6 statistics tracking is enabled. extensive
Input errors   Input errors on the interface. The following paragraphs explain the counters whose meaning might not be obvious: extensive
Output errors   Output errors on the interface. The following paragraphs explain the counters whose meaning might not be obvious: extensive
• Carrier transitions—Number of times the interface has gone from down to up.
• Errors—Sum of the outgoing frame aborts and FCS errors.
• Drops—Number of packets dropped by the output queue.
• MTU errors—Number of packets whose size exceeded the MTU of the interface.
• Resource errors—Sum of transmit drops.
Peer GNF id   The GNF peer connected using the AF interface. detail extensive none
Peer GNF Forwarding element (FE) view   Shows the forwarding element (FE) number and the FPC slot, FE bandwidth, and FE status (up/down). detail extensive none
Residual Transmit Statistics   Displays the historical transmit statistics for the peer AF interface from the FPC events (FPC offline/online/removal/power off on the local or remote side).
Fabric Queue Statistics   Displays the statistics of the transmit traffic on the fabric queues (high and low queues) for each peer PFE on the AF interface.
Residual Queue Statistics   Displays the historical fabric queue statistics for the peer AF interface from the FPC events (FPC offline/online/removal/power off on the local or remote side).
Logical Interface
Logical interface Name of the logical interface. All levels
Index Index number of the logical interface, which reflects its initialization sequence. detail extensive none
SNMP ifIndex SNMP interface index number for the logical interface. detail extensive none
Generation Unique number for use by Juniper Networks technical support only. detail extensive
Flags   Information about the logical interface. Possible values are described in the “Logical Interface Flags” section under Common Output Fields Description. All levels
Protocol   Protocol family. Possible values are described in the “Protocol Field” section under Common Output Fields Description. detail extensive none
MTU   Maximum transmission unit size on the logical interface. detail extensive none
Traffic statistics   Number and rate of bytes and packets received and transmitted on the specified interface set. detail extensive
Transit statistics   Number of IPv6 transit bytes and packets received and transmitted on the logical interface if IPv6 statistics tracking is enabled. extensive
Local statistics   Number and rate of bytes and packets destined to the router. extensive
Generation   Unique number for use by Juniper Networks technical support only. detail extensive
Route Table   Route table in which the logical interface address is located. For example, 0 refers to the routing table inet.0. detail extensive none
Flags   Information about protocol family flags. Possible values are described in the “Family Flags” section under Common Output Fields Description. detail extensive
Addresses, Flags   Information about the address flags. Possible values are described in the “Addresses Flags” section under Common Output Fields Description. detail extensive none
protocol-family   Protocol family configured on the logical interface. If the protocol is inet, the IP address of the interface is also displayed. brief
Flags   Information about the address flag. Possible values are described in the “Addresses Flags” section under Common Output Fields Description. detail extensive none
Destination   IP address of the remote side of the connection. detail extensive none
Generation   Unique number for use by Juniper Networks technical support only. detail extensive
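The Last flapped field described above follows a fixed year-month-day hour:minute:second timezone (hour:minute:second ago) layout, so it can be picked apart with a single regular expression. This is a standalone Python sketch based only on the format and example given in the table; it is not a Juniper API.

```python
import re

# Matches the documented layout, e.g.:
#   Last flapped: 2002-04-26 10:52:40 PDT (04:33:20 ago)
FLAP_RE = re.compile(
    r"Last flapped\s*:\s*"
    r"(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "
    r"(?P<tz>\S+) "
    r"\((?P<ago>[\d:]+) ago\)"
)

def parse_last_flapped(line):
    """Return the timestamp, time zone, and 'ago' duration, or None if no match."""
    m = FLAP_RE.search(line)
    return m.groupdict() if m else None

info = parse_last_flapped("Last flapped: 2002-04-26 10:52:40 PDT (04:33:20 ago)")
```

For the documented example, `info["ago"]` is `"04:33:20"`.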
Sample Output
Logical interface af2.0 (Index 334) (SNMP ifIndex 647) (Generation 234)
Flags: Up SNMP-Traps 0x4000 VLAN-Tag [ 0x8100.1 ] Encapsulation: ENET2
Traffic statistics:
Input bytes : 23382038
Output bytes : 650688227
Input packets: 341621
Output packets: 5986312
Local statistics:
Input bytes : 23381827
Output bytes : 28423617
Input packets: 341618
Output packets: 329361
Transit statistics:
Input bytes : 211 0 bps
Output bytes : 622264610 0 bps
Input packets: 3 0 pps
Output packets: 5656951 0 pps
Protocol inet, MTU: 1500
Max nh cache: 75000, New hold nh limit: 75000, Curr nh cnt: 1, Curr new hold
cnt: 0, NH drop cnt: 0
Generation: 314, Route table: 0
Flags: Sendbcast-pkt-to-re
Addresses, Flags: Is-Preferred Is-Primary
Destination: 10.10.10/24, Local: 10.10.10.1, Broadcast: 10.10.10.255,
Generation: 224
Protocol mpls, MTU: 1488, Maximum labels: 3, Generation: 315, Route table: 0
Flags: Is-Primary
Protocol multiservice, MTU: Unlimited, Generation: 316, Route table: 0
Flags: Is-Primary
Policer: Input: __default_arp_policer__
Logical interface af2.1 (Index 336) (SNMP ifIndex 649) (Generation 235)
Flags: Up SNMP-Traps 0x4000 VLAN-Tag [ 0x8100.2 ] Encapsulation: ENET2
Traffic statistics:
Input bytes : 0
Output bytes : 0
Input packets: 0
Output packets: 0
Local statistics:
Input bytes : 0
Output bytes : 0
Input packets: 0
Output packets: 0
Transit statistics:
Input bytes : 0 0 bps
Output bytes : 0 0 bps
Input packets: 0 0 pps
Output packets: 0 0 pps
Protocol inet, MTU: 1500
Max nh cache: 75000, New hold nh limit: 75000, Curr nh cnt: 0, Curr new hold
cnt: 0, NH drop cnt: 0
Generation: 317, Route table: 0
Flags: Sendbcast-pkt-to-re
Protocol mpls, MTU: 1488, Maximum labels: 3, Generation: 318, Route table: 0
Logical interface af2.32767 (Index 337) (SNMP ifIndex 675) (Generation 236)
Flags: Up SNMP-Traps 0x4004000 VLAN-Tag [ 0x0000.0 ] Encapsulation: ENET2
Traffic statistics:
Input bytes : 0
Output bytes : 0
Input packets: 0
Output packets: 0
Local statistics:
Input bytes : 0
Output bytes : 0
Input packets: 0
Output packets: 0
Transit statistics:
Logical interface af2.0 (Index 334) (SNMP ifIndex 647) (Generation 234)
Flags: Up SNMP-Traps 0x4000 VLAN-Tag [ 0x8100.1 ] Encapsulation: ENET2
Traffic statistics:
Input bytes : 23383187
Output bytes : 650689682
Input packets: 341638
Output packets: 5986329
Local statistics:
Input bytes : 23382976
Output bytes : 28425072
Input packets: 341635
Output packets: 329378
Transit statistics:
Input bytes : 211 0 bps
Output bytes : 622264610 0 bps
Input packets: 3 0 pps
Output packets: 5656951 0 pps
Protocol inet, MTU: 1500
Max nh cache: 75000, New hold nh limit: 75000, Curr nh cnt: 1, Curr new hold
cnt: 0, NH drop cnt: 0
Generation: 314, Route table: 0
Flags: Sendbcast-pkt-to-re
Addresses, Flags: Is-Preferred Is-Primary
Destination: 10.10.10/24, Local: 10.10.10.1, Broadcast: 10.10.10.255,
Generation: 224
Protocol mpls, MTU: 1488, Maximum labels: 3, Generation: 315, Route table: 0
Flags: Is-Primary
Protocol multiservice, MTU: Unlimited, Generation: 316, Route table: 0
Flags: Is-Primary
Policer: Input: __default_arp_policer__
Logical interface af2.1 (Index 336) (SNMP ifIndex 649) (Generation 235)
Flags: Up SNMP-Traps 0x4000 VLAN-Tag [ 0x8100.2 ] Encapsulation: ENET2
Traffic statistics:
Input bytes : 0
Output bytes : 0
Input packets: 0
Output packets: 0
Local statistics:
Input bytes : 0
Output bytes : 0
Input packets: 0
Output packets: 0
Transit statistics:
Input bytes : 0 0 bps
Output bytes : 0 0 bps
Input packets: 0 0 pps
Output packets: 0 0 pps
Protocol inet, MTU: 1500
Max nh cache: 75000, New hold nh limit: 75000, Curr nh cnt: 0, Curr new hold
cnt: 0, NH drop cnt: 0
Logical interface af2.32767 (Index 337) (SNMP ifIndex 675) (Generation 236)
Flags: Up SNMP-Traps 0x4004000 VLAN-Tag [ 0x0000.0 ] Encapsulation: ENET2
Traffic statistics:
Input bytes : 0
Output bytes : 0
Input packets: 0
Output packets: 0
Local statistics:
Input bytes : 0
Output bytes : 0
Input packets: 0
Output packets: 0
Transit statistics:
Input bytes : 0 0 bps
Output bytes : 0 0 bps
Input packets: 0 0 pps
Output packets: 0 0 pps
Protocol multiservice, MTU: Unlimited, Generation: 320, Route table: 0
Flags: None
Policer: Input: __default_arp_policer__
state: unsuppressed
Current address: 2c:6b:f5:55:eb:f6, Hardware address: 2c:6b:f5:55:eb:f6
Alternate link address: Unspecified
Last flapped : 2017-10-18 19:40:00 EDT (02:50:04 ago)
Statistics last cleared: Never
Traffic statistics:
Input bytes : 4048 0 bps
Output bytes : 144092 59440 bps
Input packets: 88 0 pps
Output packets: 186 4 pps
IPv6 transit statistics:
Input bytes : 0
Output bytes : 0
Input packets: 0
Output packets: 0
Input errors:
Errors: 0, Drops: 0, Framing errors: 0, Runts: 0, Giants: 0, Policed discards:
0, Resource errors: 0
Output errors:
Carrier transitions: 0, Errors: 0, Drops: 0, MTU errors: 0, Resource errors:
0
Bandwidth : 480 Gbps
Peer GNF id : 4
Peer GNF Forwarding element(FE) view :
FPC slot:FE Num FE Bandwidth(Gbps) Status
6:0 240 Up
6:1 240 Up
Logical interface af4.1 (Index 328) (SNMP ifIndex 593) (Generation 137)
Flags: Up SNMP-Traps 0x4000 VLAN-Tag [ 0x8100.1 ] Encapsulation: ENET2
Traffic statistics:
Input bytes : 414
Output bytes : 139906
Input packets: 9
Output packets: 107
Local statistics:
Input bytes : 414
Output bytes : 598
Input packets: 9
Output packets: 13
Transit statistics:
Input bytes : 0 0 bps
Output bytes : 139308 59240 bps
Input packets: 0 0 pps
Output packets: 94 4 pps
Protocol inet, MTU: 1500
Max nh cache: 75000, New hold nh limit: 75000, Curr nh cnt: 1, Curr new hold
cnt: 0, NH drop cnt: 0
Generation: 162, Route table: 0
Flags: Sendbcast-pkt-to-re
Output Filters: f-basic-sr-tcm-ca
Addresses, Flags: Is-Preferred Is-Primary
Destination: 1.1.1/24, Local: 1.1.1.1, Broadcast: 1.1.1.255, Generation:
148
Protocol multiservice, MTU: Unlimited, Generation: 163, Route table: 0
Policer: Input: __default_arp_policer__
Logical interface af4.1 (Index 328) (SNMP ifIndex 593) (Generation 137)
Flags: Up SNMP-Traps 0x4000 VLAN-Tag [ 0x8100.1 ] Encapsulation: ENET2
Traffic statistics:
Input bytes : 77518213184
Output bytes : 3054342
Input packets: 52450568
Output packets: 4591
Local statistics:
Input bytes : 460
Output bytes : 4600
Input packets: 10
Output packets: 100
Transit statistics:
Input bytes : 77518212724 1944810944 bps
Output bytes : 3049742 68632 bps
Input packets: 52450558 164494 pps
Output packets: 4491 20 pps
Protocol inet, MTU: 1500
Max nh cache: 75000, New hold nh limit: 75000, Curr nh cnt: 1, Curr new hold
cnt: 0, NH drop cnt: 0
Generation: 162, Route table: 0
Flags: Sendbcast-pkt-to-re
Output Filters: f-basic-sr-tcm-ca
Addresses, Flags: Is-Preferred Is-Primary
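In the AF-interface sample above, the reported bandwidth (Bandwidth : 480 Gbps) equals the sum of the per-FE bandwidths in the Peer GNF Forwarding element (FE) view (two FEs at 240 Gbps each). The following is a hypothetical sketch of that cross-check; treating only Up FEs as usable is an assumption of this sketch, not a documented rule.

```python
def af_usable_bandwidth(fe_view):
    """Sum the bandwidth of forwarding elements reported as Up.

    fe_view holds (fpc_slot_fe, bandwidth_gbps, status) tuples mirroring the
    'FPC slot:FE Num / FE Bandwidth(Gbps) / Status' columns in the sample output.
    """
    return sum(bw for _, bw, status in fe_view if status == "Up")

# Values taken from the sample output above.
fe_view = [("6:0", 240, "Up"), ("6:1", 240, "Up")]
total = af_usable_bandwidth(fe_view)  # 480, matching the Bandwidth line above
```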
Description Display the incompatibilities between the software version running on the guest network
function (GNF) and the version running on the base system (BSYS).
Output Fields Table 19 on page 127 lists the output fields for the show system anomalies command.
Output fields are listed in the approximate order in which they appear.
Anomaly Type Shows the software incompatibility type. The following are the possible values:
Default Action   Shows the default actions associated with incompatibilities. The following are the possible values:
Sample Output
virtual-network-functions
Description Associate a GNF ID, base configuration, chassis type, and resource template with the
VNF.
The GNFs that are configured and committed will appear as auto-complete options in
operational commands.
chassis-type chassis-type—Choose the type of the router chassis (for example, MX960)
used as the base system (BSYS) in the node slicing setup.
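Putting these attributes together, a minimal configuration sketch might look as follows; the GNF name gnf1 and the base-config path are hypothetical, and the exact statement names should be checked against your JDM release:

```
virtual-network-functions {
    gnf1 {                                   # hypothetical GNF name
        id 1;                                # GNF ID; must match the BSYS side
        chassis-type mx960;                  # router chassis used as the BSYS
        resource-template 2core-16g;         # CPU/memory template for the GNF VM
        base-config /var/tmp/gnf1-base.conf; # hypothetical baseline Junos configuration
    }
}
```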
server
Syntax server {
interfaces {
cb0 cb0-interface;
cb1 cb1-interface;
jdm-management jdm-management-interface;
vnf-management gnf-management-interface;
}
}
Description Configure the server interfaces for the JDM and GNFs. These include a JDM management
interface, a GNF management interface, and two server interfaces that are connected
to the MX Series router.
Options cb0 cb0-interface—The server interface that is connected to the control board 0 of the
MX Series router.
cb1 cb1-interface—The server interface that is connected to the control board 1 of the
MX Series router.
unit unit—Interface unit number. This is a logical unit number. The only supported value
is 0.
• inet—Indicates IPv4.
• inet6—Indicates IPv6.
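Putting the options above together, a configuration sketch might look like this; all interface names are hypothetical and depend on the NIC naming of the x86 server:

```
server {
    interfaces {
        cb0 p3p1;            # server port cabled to control board 0 (hypothetical name)
        cb1 p3p2;            # server port cabled to control board 1 (hypothetical name)
        jdm-management eno1; # JDM out-of-band management port (hypothetical name)
        vnf-management eno2; # GNF management port (hypothetical name)
    }
}
```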
Syntax routing-options {
static {
route route {
next-hop next-hop;
}
}
}
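For example, a default static route through a management gateway could be sketched as follows (the gateway address is hypothetical):

```
routing-options {
    static {
        route 0.0.0.0/0 {
            next-hop 10.10.20.1;   # hypothetical management gateway
        }
    }
}
```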
Description Create a non-root user in JDM for Junos Node Slicing. The non-root user accounts function
only inside JDM, not on the host server.
class class-name—Predefined login classes that JDM supports for non-root users.
• super-user
• operator
• read-only
• unauthorized
Required Privilege Level
root-login (JDM)
deny—Disable users from logging in to the JDM as root through SSH. This configuration
option is applicable only to the JDM management interface (jmgmt0). Setting this
option does not block internal JDM-to-JDM communication, which uses the root
account with password-less authentication.
The following are general guidelines on how to use the JDM server commands:
• Use the commit synchronize command to ensure that the configuration committed on
one server is synchronized with the other server. The synchronization is bidirectional.
A JDM configuration change at either of the servers is synchronized with the other
server. When a virtual machine (VM) is instantiated, the GNF-re0 VM instance starts
on server0 and the GNF-re1 VM instance starts on server1.
NOTE: If you do not use the commit synchronize command, you must
configure and manage the VMs on both the servers manually.
Sample Output
clear log
user@jdm> clear log syslog
Sample Output
monitor list
user@jdm> monitor list
Description Start displaying the system log or trace file and additional entries being added to those
files.
Additional Information Log files are generated by the routing protocol process or by system logging.
Output Fields Table 20 on page 141 describes the output fields for the monitor start command. Output
fields are listed in the approximate order in which they appear.
*** filename ***   Name of the file from which entries are being displayed.
Sample Output
monitor start
user@jdm> monitor start syslog
Oct 19 19:44:36 jdm mgd[3268]: UI_COMMIT: User 'root' requested 'commit' operati
on (comment: none)
Oct 19 19:44:36 jdm mgd[3268]: UI_COMMIT_PROGRESS: Commit operation in progress:
Description Copy the ssh public key to the peer JDM. This command is equivalent to ssh-copy-id
user@jdm-server<0/1>.
NOTE: If the JDM fails to establish an SSH connection with its peer on either of
the two CB links, you need to run the JDM CLI command request server
authenticate-peer-server. You can use the JDM CLI command show server
connections to view the status of the SSH connection between the JDM peers.
Note that the command request server authenticate-peer-server prompts
for user confirmation twice, once per CB link.
Related Documentation
• Generic Guidelines for Using JDM Server Commands on page 137
• show virtual-network-functions on page 146
Sample Output
request virtual-network-functions
Description Start, stop, or restart the VNFs. You can also add or remove the base image.
NOTE: You can issue these commands either on both the servers (server0
and server1) or on one specific server.
Related Documentation
• Generic Guidelines for Using JDM Server Commands on page 137
• show virtual-network-functions on page 146
show virtual-network-functions
Description Display the list of guest network functions (GNFs) along with their IDs, status and
availability.
server—Display the details of the GNFs on one specific server. Applicable value is 0 or 1.
vnf-name—Display additional details of a particular GNF. You can use the detail option
to view the detailed output. For example, show virtual-network-functions gnf1 detail.
Related Documentation
• Generic Guidelines for Using JDM Server Commands on page 137
• request virtual-network-functions on page 145
Output Fields Table 21 on page 146 lists the output fields for the show virtual-network-functions
command.
• Up
• Down
VNF CPU Utilization and Allocation Information   Shows the GNF CPU utilization details. See also: show system cpu (JDM).
VNF Memory Information   Displays the following memory information about the GNFs:
• Name—GNF name.
• Resident—The memory used by the GNFs.
• Actual—Actual memory.
VNF Storage Information   Displays the following guest network function (GNF) storage information:
• Directories—Names of the directories.
• Size—Total storage size.
• Used—Storage used.
VNF Interfaces Statistics   Shows the GNF interface statistics information. See also: show system network (JDM).
VNF Network Information   Shows the list of Physical Interfaces, Virtual Interfaces and MAC addresses.
Sample Output
show virtual-network-functions
user@jdm> show virtual-network-functions
5 bittern-gnf-e Running Up
Sample Output
VNF Information
---------------------------
ID 1
Name: gnf1
Status: Running
Liveness: up
IP Address: 192.168.2.1
Cores: 2
Memory: 16GB
Resource Template: 2core-16g
Qemu Process id: 20478
SMBIOS version: v1
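The Resource Template name in the output above (2core-16g) encodes the Cores and Memory values shown alongside it. Assuming that naming convention holds, a hypothetical sketch to split it apart:

```python
import re

def parse_resource_template(name):
    """Split a template name such as '2core-16g' into (cores, memory_gb).

    The '<n>core-<m>g' convention is inferred from the sample output above and
    is an assumption of this sketch, not a documented contract.
    """
    m = re.fullmatch(r"(\d+)core-(\d+)g", name)
    if m is None:
        raise ValueError(f"unrecognized resource template: {name!r}")
    return int(m.group(1)), int(m.group(2))

cores, memory_gb = parse_resource_template("2core-16g")  # (2, 16)
```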
Description Display the hostname and version information about the specified guest network function
(GNF).
Options vnf-name—Name of the GNF for which you want to view the version details.
Related Documentation
• Generic Guidelines for Using JDM Server Commands on page 137
• request virtual-network-functions on page 145
Sample Output
Depending on the platform running Junos OS, you might see different installed
sub-packages.
Hostname: gnf2
Model: mx960
Junos: 17.4X48-D10.3
JUNOS OS Kernel 64-bit [20170913.201739_fbsd-builder_stable_11]
JUNOS OS libs [20170913.201739_fbsd-builder_stable_11]
JUNOS OS runtime [20170913.201739_fbsd-builder_stable_11]
JUNOS OS time zone information [20170913.201739_fbsd-builder_stable_11]
JUNOS network stack and utilities [20170926.111120_builder_junos_174_x48_d10]
JUNOS modules [20170926.111120_builder_junos_174_x48_d10]
JUNOS mx modules [20170926.111120_builder_junos_174_x48_d10]
JUNOS libs [20170926.111120_builder_junos_174_x48_d10]
JUNOS OS libs compat32 [20170913.201739_fbsd-builder_stable_11]
JUNOS OS 32-bit compatibility [20170913.201739_fbsd-builder_stable_11]
JUNOS libs compat32 [20170926.111120_builder_junos_174_x48_d10]
JUNOS runtime [20170926.111120_builder_junos_174_x48_d10]
Junos vmguest package [20170926.111120_builder_junos_174_x48_d10]
JUNOS py extensions [20170926.111120_builder_junos_174_x48_d10]
JUNOS py base [20170926.111120_builder_junos_174_x48_d10]
JUNOS OS vmguest [20170913.201739_fbsd-builder_stable_11]
JUNOS OS crypto [20170913.201739_fbsd-builder_stable_11]
JUNOS mx libs compat32 [20170926.111120_builder_junos_174_x48_d10]
JUNOS mx runtime [20170926.111120_builder_junos_174_x48_d10]
JUNOS common platform support [20170926.111120_builder_junos_174_x48_d10]
Description Display the version information about the Juniper Device Manager (JDM).
Options all-servers—Display the version details of the JDM instances on both the servers.
server—Display the version details of the JDM instance on one specific server.
Range: 0 through 1
vnf vnf-name—Display the version details for a particular guest network function (GNF). Specify
the GNF name in the command. For example: show version vnf gnf2.
Related Documentation
• Generic Guidelines for Using JDM Server Commands on page 137
• request virtual-network-functions on page 145
Sample Output
show version
user@jdm> show version
Hostname: mgb-dvaita-ixr1-jdm
Model: junos_node_slicing
Server slot : 1
JDM package version : 17.4-R1.7
Host Software [Red Hat Enterprise Linux]
JDM container Software [Ubuntu 14.04.1 LTS]
JDM daemon jdmd [Version: 17.4R1.7-secure]
JDM daemon jinventoryd [Version: 17.4R1.7-secure]
JDM daemon jdmmon [Version: 17.4R1.7-secure]
Host daemon jlinkmon [Version: 17.4R1.7-secure]
Hostname: mgb-dvaita-ixr1-jdm
Model: junos_node_slicing
Server slot : 1
Hostname: mgb-dvaita-ixr1-jdm
Model: junos_node_slicing
Server slot : 1
JDM package version : 17.4-R1.7
Host Software [Red Hat Enterprise Linux]
JDM container Software [Ubuntu 14.04.1 LTS]
JDM daemon jdmd [Version: 17.4R1.7-secure]
JDM daemon jinventoryd [Version: 17.4R1.7-secure]
JDM daemon jdmmon [Version: 17.4R1.7-secure]
Host daemon jlinkmon [Version: 17.4R1.7-secure]
KERNEL 3.10.0-514.el7.x86_64
MGD release 17.4R1.7 built by builder on 2017-11-17 11:29:41 UTC
CLI release 17.4R1.7 built by builder on 2017-11-17 10:53:44 UTC
base-actions-dd release 17.4R1.7 built by builder on 2017-11-17 10:06:17 UTC
jdmd_common-actions-dd release 17.4R1.7 built by builder on 2017-11-17 10:06:09
UTC
jdmd_nv_jdm-actions-dd release 17.4R1.7 built by builder on 2017-11-17 10:06:09
UTC
Related Documentation
• Generic Guidelines for Using JDM Server Commands on page 137
• request virtual-network-functions on page 145
Output Fields Table 22 on page 154 describes the output fields for the show system cpu command.
Output fields are listed in the approximate order in which they appear.
Sample Output
-------------------------------------------------------------------------------
VNF                  CPU-Id(s)               Usage   Qemu Pid   State
-------------------------------------------------------------------------------
test                 4,5,6,7,8,9,10,11       5.0%    32392      Running
Description Display the JDM storage details such as storage size, used space, and available space.
Related Documentation
• Generic Guidelines for Using JDM Server Commands on page 137
• request virtual-network-functions on page 145
Output Fields Table 23 on page 156 describes the output fields for the show system storage command.
Output fields are listed in the approximate order in which they appear.
VNF Storage Information Displays the following guest network function (GNF) storage information:
Sample Output
Description Display the memory usage information about the host server, Juniper Device Manager
(JDM), and guest network functions (GNF).
Related Documentation
• Generic Guidelines for Using JDM Server Commands on page 137
• request virtual-network-functions on page 145
Output Fields Table 24 on page 158 describes the output fields for the show system memory command.
Output fields are listed in the approximate order in which they appear.
Memory Usage Information Displays the following memory usage information about host server and JDM:
• Total—Total memory.
• Used—Used memory.
• Free—Available memory.
VNF Memory Information Displays the following memory information about the GNFs:
• Name—GNF name.
• Resident—The memory used by the GNFs.
• Actual—Actual memory.
Sample Output
Description Display the statistics information for physical interface, JDM interface, and interfaces per
guest network function (GNF).
Related Documentation
• Generic Guidelines for Using JDM Server Commands on page 137
• request virtual-network-functions on page 145
Output Fields Table 25 on page 160 describes the output fields for the show system network command.
Output fields are listed in the approximate order in which they appear.
Physical Interfaces
Name Name of the physical interface.
Sample Output
Physical Interfaces
---------------------------------------------------------------------------------------------------------------------------------------------
Name Index MTU Hardware-address Rcvd Bytes Rcvd Packets Rcvd Error
Rcvd Drop Trxd Bytes Trxd Packets Trxd Error Trxd Drop Flags
-------- ----- ------- ----------------- ------------ ------------ ----------
--------- ------------ ------------ ---------- --------- ------
enp3s0f1 4 1500 00:25:90:b5:75:51 8787662837 51975964 0
538926 40009223 407379 0 0 BMPRU
ens3f1 7 1500 3c:fd:fe:08:87:02 1019880532 16723722 0
11243028 19265494115 31971968 0 0 BMPRU
ens3f0 3 1500 3c:fd:fe:08:87:00 5951717054 81330473 0
11226877 139135292735 124708008 0 0 BMPRU
enp3s0f2 5 1500 00:25:90:b5:75:52 3343179197 40806691 0
461955 3449064446 12191724 0 0 BMRU
-------------------------------------------------------------------------------------------------------------------------------------------
Name Index MTU Hardware-address Rcvd Bytes Rcvd Packets Rcvd Error Rcvd
Drop Trxd Bytes Trxd Packets Trxd Error Trxd Drop Flags
-------- ----- ----- ----------------- ------------ ------------ ----------
--------- ------------ ------------ ---------- --------- ------
bme1 1433 1500 52:54:00:21:20:2e 502730 4506 0 0
477328 2619 0 0 BMRU
jmgmt0 1439 1500 00:f1:60:3d:20:22 4991675 66429 0 2862
100548 891 0 0 BMRU
bme2 1435 1500 52:54:00:88:b5:dd 2930 33 0 0
3466 39 0 0 ABMRU
cb0.4002 2 1500 00:f1:60:3d:20:20 12204921 209269 0 0
3688591023 195579 0 0 ABMRU
cb1.4002 3 1500 00:f1:60:3d:20:21 161850 3026 0 0
204784 3029 0 0 ABMRU
.......................
restart (JDM)
jlinkmon—Restart the JDM link monitor daemon, which runs on the Linux host.
NOTE: The options gracefully, immediately, and soft are not available for
restarting the Juniper link monitor daemon.
Related Documentation
• Generic Guidelines for Using JDM Server Commands on page 137
• request virtual-network-functions on page 145
Output Fields When you enter this command, you are provided feedback on the status of your request.
Sample Output