Juniper MPC3-5 Data Sheet
In this test, we used the topology shown in Figure 4. Juniper configured 8,000 VPN instances across the three PE routers. Once these VPNs were all successfully established between the Juniper MX240 routers, we configured the Ixia emulator to advertise 821 unique IPv4 prefixes with three selected subnet masks (/16, /24 and /32) to the three PE routers. All prefixes of the same VPN instance were based on a single mask length.

VPNv4 Route Scalability

MPC5E line cards successfully demonstrated support for up to 6,584,000 IPv4 routes while operating at only 12 % CPU and 48 % memory resource usage.

Once we had confirmed the control plane (service) scalability and performance of L3VPNs, we verified the data plane forwarding performance, i.e. the IPv4 throughput. Measuring in accordance with RFC 2544's frame loss test, we tested the throughput and latency performance of the L3VPN traffic, using the test topology shown in Figure 1.

We forwarded IPv4 test traffic at 240 Gbit/s via 96 L3VPNs with 10,243 prefixes each, resulting in a total of 983,328 prefixes. The MPC5E line card reached full line rate on all interfaces at 128-byte, 256-byte, 512-byte, 1024-byte, 1280-byte and 1518-byte fixed frame sizes as well as at IMIX packet sizes. During the official test runs of 600 seconds duration, the MPC5E line card did not experience any packet loss.
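For reference, "full line rate" at a fixed frame size follows directly from Ethernet framing overhead. The short Python listing below is not part of the test tooling described in this report; it only illustrates the arithmetic behind the line-rate frame rates for the tested frame sizes, assuming the standard 20 bytes of preamble, start-of-frame delimiter and inter-frame gap per frame, and it cross-checks the prefix count quoted above.

# Theoretical frame rates at line rate for the fixed frame sizes used in the
# RFC 2544 runs. Assumes the standard 20 bytes of per-frame overhead on the
# wire (7 B preamble + 1 B SFD + 12 B inter-frame gap).
ETHERNET_OVERHEAD = 20  # bytes per frame in addition to the frame itself

def line_rate_fps(link_bps: float, frame_bytes: int) -> float:
    """Maximum frames per second for a given frame size at a given bit rate."""
    return link_bps / ((frame_bytes + ETHERNET_OVERHEAD) * 8)

aggregate = 240e9  # 240 Gbit/s aggregate test load, as stated above
for size in (128, 256, 512, 1024, 1280, 1518):
    print(f"{size:>5} B: {line_rate_fps(aggregate, size):,.0f} frames/s")

print(96 * 10_243)  # 983,328 prefixes in total, matching the figure above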
IP/LDP Fast Reroute

As the next test in the L3VPN group, we measured the resiliency of VPN services in case of node or link failures in the MPLS transport network.

Using IP Fast Reroute, the MPC5E line card successfully restored service to 8,000 L3VPNs with 6,584,000 routes in 16.3 ms or less.

Juniper configured 128,000 VPWS services on each of the two MX480 PE routers in the scenario. We validated that the services were up and running by sending Ethernet traffic across each VPWS.

In all test runs, the CPU usage observed via CLI remained below 5 % and memory usage below 28 %.

Using LDP Fast Reroute, the MPC5E demonstrated failover of 22.7 ms or less when configured with 8,188 VPLS instances and 511,124 MAC addresses.

We ran this test using a load of 5 Gbit/s per direction. Our team created a BFD-only failure (no loss of optical carrier) between the top PE router and the P router next to it on the bottom right. The failover scenario was staged exactly like the IP FRR failover test for L3VPNs.

The measurements showed between 15.1–22.7 ms recovery time for all services, well below our target of 50 ms. We validated that the routers adjacent to the link failure used repair routes and had updated their label information base. We also verified that the subsequent restoration was hitless and had no impact on traffic.
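Failover times like those above are typically derived from frame loss at a constant offered rate rather than measured directly; the report does not detail its tooling, so the Python sketch below only illustrates that standard conversion. The frame size and loss count are illustrative placeholders, while the 5 Gbit/s per-direction load matches the test description.

# Derive an out-of-service time from frame loss at a constant offered rate,
# the usual way sub-50 ms failover figures are computed in loss-based
# convergence tests. Loss count and frame size below are illustrative only.

def out_of_service_ms(lost_frames: int, offered_fps: float) -> float:
    """Loss duration in milliseconds at a constant offered frame rate."""
    return 1000.0 * lost_frames / offered_fps

# 5 Gbit/s per direction of (assumed) 128-byte frames, incl. 20 B overhead:
offered_fps = 5e9 / ((128 + 20) * 8)                      # ~4.22 million frames/s
print(round(out_of_service_ms(70_000, offered_fps), 1))   # ~16.6 ms for this example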
Tail-End Protection

While LDP Fast Reroute (FRR) offers a local repair mechanism at the network level, it is largely ineffective when a failure occurs at the egress node. In a multi-homed egress node scenario, tail-end protection offers a solution with repair times considerably lower than simple IP rerouting.

Nonstop Routing

Nonstop routing is a high-availability feature that preserves routing protocol information by running the routing protocol process on a backup routing engine in addition to the primary routing engine on an MX Series 3D router. The failover takes place purely internally; in contrast to Graceful Restart mechanisms, no alerts are sent to neighboring routers.

We ran this test in the topology shown in Figure 4, failing the topmost PE router by pulling its active (primary) routing engine during operation. Juniper set up 1,000 L3VPNs with 500,000 unique IPv4 routes, 1,000 VPWS and 1,000 VPLS instances. The device under test was populated with an MPC5E card with 24 GbE and 6 10GbE ports.

The MX Series 3D routers successfully showed nonstop active routing functionality: all test traffic, consisting of 1,000 L3VPN traffic flows, 1,000 VPLS traffic flows and 1,000 VPWS traffic flows, was forwarded hitless.

Power Consumption

In an initial baseline test, we measured the power consumption of a fully populated router configuration including the module under test (MX-MPC5E 2x100GbE+4x10GbE). In a second step, we removed the module under test from the router and measured the power consumption again. We calculated the modular power consumption of the line card under test (Pwi) as the difference between the baseline power consumption and the power consumption without the module.

The Weighted Modular Energy Consumption of the MPC5E line card reached 390.4 W. The Energy Efficiency Rating for the MPC5E at full load was measured at 2.1 W/Gbit.
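The calculation described above reduces to a subtraction plus a power-per-throughput ratio. The Python sketch below restates it; the chassis readings and the load-level weighting are hypothetical placeholders (the report does not spell out the weighting scheme it applied), and only the 240 Gbit/s full-load throughput of the 2x100GbE+4x10GbE configuration follows from the interface mix.

# Modular power and energy-efficiency arithmetic as described in the text.
# All numeric inputs below are hypothetical placeholders; the weighting over
# load levels is an assumption.

def modular_power(full_chassis_w: float, without_module_w: float) -> float:
    """Power attributable to the line card under test (Pwi)."""
    return full_chassis_w - without_module_w

def weighted_consumption(power_by_load: dict, weights: dict) -> float:
    """Weighted modular energy consumption over load levels (weights sum to 1)."""
    return sum(weights[load] * p for load, p in power_by_load.items())

def efficiency_rating(power_w: float, throughput_gbps: float) -> float:
    """Energy Efficiency Rating in W/Gbit at a given load."""
    return power_w / throughput_gbps

# Placeholder chassis readings: load level -> (with module, without module), in W.
readings = {0.0: (1900.0, 1560.0), 0.5: (2000.0, 1590.0), 1.0: (2100.0, 1600.0)}
pwi = {load: modular_power(full, without) for load, (full, without) in readings.items()}

print(weighted_consumption(pwi, {0.0: 0.1, 0.5: 0.6, 1.0: 0.3}))  # assumed weights
print(efficiency_rating(pwi[1.0], throughput_gbps=240.0))         # W/Gbit at full load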
Junos Continuity

Service providers require maximum router uptime, and rebooting a router is not a welcome activity. Juniper Networks Junos Continuity is a solution that reduces the frequency of MX Series 3D router reboots. Part of this solution is a software package called Juniper Agile Deployment (JAM), which manages the line card software.

We performed the following three test steps:

• Baseline test: We sent IPv4 test traffic across the single router under test.

• Juniper installed an additional MPC3E-NG line card. "Unknown hardware status" was displayed via CLI, as expected, for the newly inserted line card. We then started the installation of the JAM package (jam-mpc-2e-3e-ng64-14.1R6.2) onto the base software version (Junos 14.1R6.2). During this operation, the traffic flows on the other line cards continued without packet loss.

• After the JAM package was installed, the correct hardware information was shown via CLI. We sent additional test traffic consisting of 1,000 L3VPN flows, 1,000 VPLS flows and 1,000 VPWS flows.
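The pass criteria in these three steps reduce to two checks per step: the status the router reports for the inserted card and zero frame loss on the already-running flows. A minimal Python sketch of that bookkeeping follows; the status strings and counter values are hypothetical placeholders and do not reproduce actual CLI output.

# Per-step pass criteria for the Junos Continuity / JAM exercise above:
# expected hardware status plus zero frame loss on existing traffic.
# Status strings and counters are hypothetical placeholders.

def step_passes(reported_status: str, expected_status: str,
                frames_sent: int, frames_received: int) -> bool:
    """True if the card status matches expectations and no frames were lost."""
    return reported_status == expected_status and frames_sent == frames_received

# Step 2: before the JAM package is active, an unknown status is expected.
print(step_passes("unknown hardware status", "unknown hardware status",
                  12_000_000, 12_000_000))
# Step 3: after installation, the correct card model should be reported.
print(step_passes("MPC3E-NG", "MPC3E-NG", 12_000_000, 12_000_000))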
Figure 10: Juniper Continuity / Juniper Agile Deployment (JAM) Test Setup
IPv4/IPv6 Performance

Figure 11: IPv4/IPv6 Performance Test Setup

We used the topology in Figure 12; the MX480 was configured for an IPv4/IPv6 eBGP routing scenario. We forwarded IPv4, IPv6, and mixed IPv4 and IPv6 (80:20 proportion) test traffic at 240 Gbit/s. Packet sizes for all IPv4 and IPv6 streams were chosen to be identical: 128 bytes, 256 bytes, 512 bytes, 1024 bytes, 1280 bytes and 1518 bytes, plus the IMIX. In all cases, the MPC5E line card reached full line rate throughput without any packet loss.
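For the mixed run, the 80:20 IPv4:IPv6 proportion simply partitions the 240 Gbit/s aggregate load. The Python sketch below shows the per-family rates and the corresponding frame rate at one of the fixed frame sizes; it is plain arithmetic, not the traffic generator configuration used in the test.

# Per-family offered load for the 80:20 IPv4:IPv6 mix at 240 Gbit/s
# aggregate, and the resulting frame rate at the 512-byte fixed size.
AGGREGATE_BPS = 240e9
SPLIT = {"IPv4": 0.8, "IPv6": 0.2}

for family, share in SPLIT.items():
    bps = AGGREGATE_BPS * share
    fps = bps / ((512 + 20) * 8)   # 512-byte frames plus 20 B Ethernet overhead
    print(f"{family}: {bps / 1e9:.0f} Gbit/s, {fps:,.0f} frames/s at 512 B")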
With IPv4 traffic, the router showed average latency values between 17.5–20.2 μs and maximum latency values of 33.1–34.6 μs for fixed packet sizes. The IMIX average latency was measured as 29.4 μs and the maximum latency as 46.5 μs.

With IPv6 traffic, the router showed average latency values between 19.0–21.3 μs and maximum

Figure 13: IPv6 Forwarding Delay

With IPv4:IPv6 traffic, the router showed average latency values between 18.7–21.0 μs and maximum latency values of 33.2–35.1 μs for fixed packet sizes. The IMIX average latency was measured as 29.7 μs and the maximum latency as 46.5 μs.

Figure 14: IPv4/IPv6 Forwarding Delay