
Juniper Networks MPC3E-NG and MPC5E

EANTC Performance and Scalability Test

Juniper commissioned EANTC with an independent test of two Modular Port Concentrators, the MPC5E and the MPC3E-NG, for Juniper's MX Series 3D Universal Edge Routers. EANTC validated performance, scalability, energy efficiency and high availability capabilities in realistic test scenarios. Our team developed a detailed, reproducible test plan and executed the tests on site at Juniper's labs in Bangalore, India, in September 2015, and at Juniper's labs in Sunnyvale, USA, in March 2016.

When the EANTC team tests new router line cards designed and manufactured by a leading vendor such as Juniper, we baseline data plane and control plane scalability as well as service and high availability support. We specifically focus on the integration of all the requirements in a stable solution supporting diverse multi-service scenarios.

Executive Summary

Juniper's new line cards showed flawless data throughput, excelled in control plane scalability for L2VPNs and L3VPNs, and impressed in the L3VPN service failover tests, as shown in the table below.

Test Highlights

• Up to 6,584,000 IPv4 VPN routes without impact to performance or scale
• Demonstrated full line rate performance (240 Gbps) for IMIX and packet sizes of 128 bytes or larger
• Demonstrated impressive hitless failover of 16.3 ms for 8,000 L3VPNs and 6,584,000 routes
• Showed 128,000 VPWS instances
• FIB supported 10 million IPv4 routes
• RIB supported 65 million IPv4 entries

Test Setup

Juniper's MPCs provide packet forwarding services for MX240, MX480, MX960, MX2010, and MX2020 routers. In the EANTC test, we used MX240s, MX480s and MX2010s to host the MPCs.

The MPC3E-NG utilizes up to two MICs (Modular Interface Cards), which provide the physical interfaces. The MPC5E is available in a variety of fixed configurations which combine packet forwarding and high density Ethernet interfaces on a single line card.

MPC3E-NG & MPC5E-40G10G

For our test, Juniper provided the following MPCs:

• MPC3E-3D-NG-Q: Flexible configuration, here used with one 4-port 10-Gigabit Ethernet MIC (MIC-3D-4XGE-XFP)

• MPC5EQ-40G10G: Fixed configuration MPC with six 40-Gigabit Ethernet ports, 24 10-Gigabit Ethernet ports and up to one million queues per port; in some cases, an MPC5E-40G10G (32,000 queues per port) was used instead

• MPC5E-100G10G: Fixed configuration MPC with two 100-Gigabit Ethernet ports and four 10-Gigabit Ethernet ports; in the power efficiency test, one such module and one MPC5EQ-100G10G with more queues were configured in the system under test

In our test, we used a number of test topologies for the different test areas, as shown in Figures 1–4.



Figure 1: L3VPN Performance Test Setup

Figure 2: Power Efficiency Test Setup

EANTC defined an IPv4 and IPv6 traffic mix ("IMIX") with a range of packet sizes to ensure a realistic traffic load for the line cards, as shown in the table below:

Frame Size (Bytes)        Weight
64 (IPv4) or 78 (IPv6)    3
100                       26
373                       6
570                       5
1300                      6
1518                      16
9000                      1
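To make the traffic profile easy to reproduce, the short Python sketch below recomputes the weighted average frame size and the byte share of each frame size from the weights in the table. It uses the 64-byte (IPv4) value for the first row; the resulting ~777-byte average is our own derived figure, not a number quoted in the report.

    # IMIX weights from the table above (IPv4 variant of the first row).
    imix = [(64, 3), (100, 26), (373, 6), (570, 5), (1300, 6), (1518, 16), (9000, 1)]

    total_weight = sum(w for _, w in imix)            # 63 frames per repetition
    total_bytes = sum(size * w for size, w in imix)
    avg_frame = total_bytes / total_weight            # roughly 777 bytes on average

    for size, w in imix:
        print(f"{size:>5} B: {w / total_weight:6.1%} of frames, "
              f"{size * w / total_bytes:6.1%} of bytes")
    print(f"Average IMIX frame size: {avg_frame:.1f} bytes")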
Figure 3: VPWS Scalability Test Setup

Figure 4: L3VPN Scalability VRF Instances Test Setup

Test Phase 1, Bangalore

Initially, we executed all tests using an Ixia XG12 test chassis with tester ports in a mix of 10GbE and 100GbE. IxOS version 6.30.850.30 and IxNetwork version 6.30 GA were used. Junos OS version 15.1R2 was used for all tests.

Test Phase 2, Sunnyvale

In March 2016, EANTC conducted the L3VPN and IPv4/IPv6 forwarding performance tests a second time with adjusted parameters (different frame sizes). The tests took place in Sunnyvale, USA. The same hardware models and software versions were used as in the first test phase, with the following exceptions: In the L3VPN test, the transit router was changed and an MX2010 router running Junos OS version 15.1F5.3 was used instead of an MX960 router. The PE routers under test remained the same type of MX480 routers. The line cards under test remained the same type as well, running software version 15.1R2 as before. The Ixia test equipment software was updated to IxNetwork 8.00.1027.17EA and IxOS 8.00.1200.7EA.

For the second test phase, Juniper configured the MX Series 3D routers in Hyper Mode, which has been available since Junos OS Release 13.3R4. Enhanced MPCs such as the MPC3E, MPC4E, MPC5E, and MPC6E can be configured in Hyper Mode to support increased packet processing rates; this mode enables the router to provide better performance and throughput in common use cases.



While this configuration can significantly improve performance, EANTC notes that a number of features are not supported in Hyper Mode, including the creation of virtual chassis, interoperability with legacy DPCs or non-Ethernet interfaces, and termination or tunneling of subscriber-based services.

Test Results: L3VPNs

VRF Instance and Route Scalability

MPC5E line cards successfully demonstrated up to 8,000 L3VPN instances with 821 IPv4 prefixes per instance.

L3VPNs, more precisely MPLS/BGP VPNs providing IPv4 multi-point transport for business connectivity, are the most common managed packet transport service offered today. L3VPN service scalability is a critical performance metric for edge routers: the number of VPNs supported and the number of routes per VPN (or routes in total) are key baseline figures characterizing router capabilities.

In this test, we used the topology shown in Figure 4. Juniper configured 8,000 VPN instances across the three PE routers. Once these VPNs were all successfully established between the Juniper MX240 routers, we configured the Ixia emulator to advertise 821 unique IPv4 prefixes per instance, with three selected subnet masks (/16, /24 and /32), to the three PE routers. All prefixes of the same VPN instance were based on a single mask length.

VPNv4 Route Scalability

MPC5E line cards successfully demonstrated support for up to 6,584,000 IPv4 routes while operating at only 12 % CPU and 48 % memory resource usage.

Continuing with the same topology and configuration as in the previous test case, we generated traffic in all 8,000 VPNs and routes across the whole infrastructure to verify that the prefixes had been successfully learned and distributed between the three PE routers.

The MX240 routers successfully learned all 6,568,000 advertised routes and 16,000 interface routes. Medium levels of traffic passed without packet loss, using the IMIX defined above.

In parallel, we monitored the CPU and memory usage of the PE routers to verify how stressful the L3VPN administration was. CPU loads remained very low (around 12 %). The PE routers used around 10 GB each to maintain all routes, resulting in a maximum of 48 % memory use based on 32 GB, or 73 % based on 16 GB, of total RAM per router. The results confirmed that there was still sufficient main memory left for other services.
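The headline figures of this test follow from a short calculation; the Python sketch below reproduces the route totals and the per-route memory cost implied by the numbers above. The bytes-per-route value is our own approximation derived from the quoted 10 GB figure, not a separately measured result.

    # Route totals from the VPNv4 route scalability test.
    vpn_instances = 8_000
    prefixes_per_vpn = 821
    advertised = vpn_instances * prefixes_per_vpn     # 6,568,000 advertised routes
    total_routes = advertised + 16_000                # plus interface routes: 6,584,000

    # Approximate per-route memory cost implied by ~10 GB of RAM used per PE router.
    per_route_bytes = 10 * 1024**3 / total_routes     # roughly 1.6 KB per route

    print(f"advertised={advertised:,}  total={total_routes:,}  "
          f"~{per_route_bytes:,.0f} bytes/route")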

L3VPN Forwarding Performance

MPC5E line cards demonstrated full line rate (240 Gbit/s) IPv4 L3VPN throughput without any packet drops, using IMIX and a variety of single packet sizes starting at 128 bytes.

These results were achieved in the second test phase.

Once we had confirmed the control plane (service) scalability and performance of L3VPNs, we verified the data plane forwarding performance, i.e. the IPv4 throughput. Measuring in accordance with the RFC 2544 frame loss test, we tested the throughput and latency performance of the L3VPN traffic, using the test topology shown in Figure 1.

We forwarded IPv4 test traffic at 240 Gbit/s via 96 L3VPNs with 10,243 prefixes each, resulting in a total of 983,328 prefixes. The MPC5E line card reached full line rate on all interfaces at 128, 256, 512, 1024, 1280 and 1518 byte single frame sizes as well as IMIX packet sizes. During the official test runs of 600 seconds duration, the MPC5E line card did not experience any packet loss.
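As a plausibility check on the line-rate figures, the Python sketch below computes the maximum frame rate a tester can offer at an aggregate 240 Gbit/s for each fixed frame size. It assumes the standard Ethernet per-frame overhead (7-byte preamble, 1-byte start-of-frame delimiter, 12-byte inter-frame gap) commonly used in RFC 2544 style calculations.

    # Theoretical line-rate frame rates at an aggregate load of 240 Gbit/s.
    PREAMBLE_SFD_BYTES = 8
    INTERFRAME_GAP_BYTES = 12
    AGGREGATE_BPS = 240e9

    for frame_size in (128, 256, 512, 1024, 1280, 1518):
        bits_on_wire = (frame_size + PREAMBLE_SFD_BYTES + INTERFRAME_GAP_BYTES) * 8
        max_fps = AGGREGATE_BPS / bits_on_wire
        print(f"{frame_size:>5} byte frames: {max_fps / 1e6:8.2f} Mframes/s at line rate")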



The maximum latency was measured at 70.0 μs for any fixed packet size and 83.6 μs for IMIX. The average latency was 49.0 μs for any fixed packet size and 67.3 μs for IMIX. The reported latency values represent the whole chain of three routers.

Figure 5: L3VPN Throughput

Figure 6: Latency L3VPN

IP/LDP Fast Reroute

As the next test in the L3VPN group, we measured the resiliency of VPN services in case of node or link failures in the MPLS transport network.

Using IP Fast Reroute, the MPC5E line card successfully restored service to 8,000 L3VPNs with 6,584,000 routes in 16.3 ms or less.

Using the same test topology as in the first VPN test cases (Figure 4), Juniper configured Fast Reroute support to provide a loop-free alternate route (LFA) in the IP/MPLS test network. We emulated a link failure between the top PE router and the router tagged "route reflector" in Figure 4. The emulation was achieved in a realistic way by adding a transparent shadow switch in between, which interrupted only the Bidirectional Forwarding Detection (BFD) session between the two MX240 routers. This failure emulated an underlying transport network issue more realistically than the usual lab practice of pulling an optical fiber.

Prior to the failover event, we configured all services to pass across the primary path. Following the failover event, we verified that all services passed across the backup path. The out-of-service time was measured in three test repetitions at between 13.7 and 16.3 ms. After this time, traffic to all routes in all VPNs was fully restored.

Figure 7: IP FRR Out of Service Time

We also tested recovery of the network when the emulated BFD failure was switched off. All services returned to normal; recovery was hitless, with zero packet loss.
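Failover times in this range are normally derived from the number of frames lost during the event rather than from timestamps; the Python sketch below shows that conversion with purely illustrative numbers (the offered frame rate and loss count are not values taken from this test).

    # Convert frames lost during a failover into an out-of-service time,
    # assuming traffic was offered at a constant, known frame rate.
    def out_of_service_ms(frames_lost: int, offered_fps: float) -> float:
        return frames_lost / offered_fps * 1_000.0

    # Illustrative example: at 1.5 million frames/s offered, a loss of
    # 24,450 frames corresponds to roughly 16.3 ms out of service.
    print(f"{out_of_service_ms(24_450, 1_500_000):.1f} ms")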

Tail-End Protection

While LDP Fast Reroute (FRR) offers a local repair mechanism at the network level, it is largely ineffective when a failure occurs at the egress node. In a multi-homed egress node scenario, tail-end protection offers a solution with repair times considerably lower than simple IP rerouting.



At the egress, the MPC5E line card demonstrated tail-end protection support, restoring 3.28 million IPv4 routes across 8,000 VPNs in 14.1 ms or less. IP services restoration may take considerably longer depending on the individual BGP configuration.

Using the same topology (Figure 4) as previously, we emulated a node failure of the bottom right router. We configured the two bottom Ixia load generator ports to be part of an emulated multi-homed CPE installation, so that the network could reach the destination connected to the bottom right PE router by routing to the bottom left PE router instead.

Figure 8: Tail-End Protection Test Setup

The failure itself was emulated similarly to before: an additional transparent shadow switch was inserted upstream of the bottom right router. Some redundant links were removed to enforce the node failure scenario. We configured two identical sets of 3,280,000 IPv4 prefixes for each of the bottom two PE routers, emulating the dual-homed CPE.

Prior to the failover event, we ensured that all routes took the primary path on the right-hand side of the diagram. After the failover event (disabling BFD where marked in Figure 8), services on all 3.28 million IPv4 routes were recovered within 12.7–14.1 ms across three test runs.

Figure 9: Tail-end Protection

We verified that the backup labels were in use in the forwarding tables and label information bases of the MPLS routers involved.

Following this fast failover, there was a subsequent service interruption of 28 s, caused by BGP global convergence within the entire test network. This was a glitch that could certainly be fixed with a proper network design; it shows that any failover design needs to take race conditions between parallel resiliency mechanisms into account.

When we switched the connection through the bottom right router back on, recovery of the service was hitless.

Test Results: L2 (Ethernet) VPNs

Ethernet VPNs are among the most important transport services to be implemented by any service provider router; they are particularly important for wholesale access markets and some enterprise industry sectors.

VPWS Scalability

The MPC5E demonstrated 128,000 simultaneously active VPWS services. Only 6 % of the router's CPU resources and 12 % of its memory resources were used.



A point-to-point service (Virtual Private Wire Service, VPWS) has two endpoints and does not require MAC address learning. It is widely used due to its simple configuration and provisioning.

In the first test case in this area, we aimed to qualify the number of VPWS instances supported on a single MPC line card. We used the test scenario shown in Figure 3 with one MPC5E card, using 12 of its 10GbE ports. This test scenario facilitated functional tests, since the core router was attached with one 10GbE port to each PE router.

Juniper configured 128,000 VPWS services on each of the two MX480 PE routers in the scenario. We validated that the services were up and running by sending Ethernet traffic across each VPWS.

In all test runs, the CPU usage observed via CLI remained below 5 % and memory usage below 28 %.

IP/LDP FRR

Orthogonal to the L3VPN failover case, we tested the failover performance for L2VPN services. The mechanisms in use are identical: IP Fast Reroute (IP FRR) provides a resiliency solution to cope efficiently with node and link failures. Failing segments are repaired locally by the router detecting the failure, using pre-established backup routes, without the immediate need to inform other routers of the failure.

Using LDP Fast Reroute, the MPC5E demonstrated failover of 22.7 ms or less when configured with 8,188 VPLS instances and 511,124 MAC addresses.

We ran this test using a load of 5 Gbit/s per direction. Our team created a BFD-only failure (no loss of optical carrier) between the top PE router and the P router next to it on the bottom right. The failover scenario was staged identically to the IP FRR failover test for L3VPNs.

The measurements showed between 15.1 and 22.7 ms recovery time for all services. This was well below our target of 50 ms. We validated that the routers adjacent to the link failure used repair routes and had updated their label information bases.

We measured that the restoration was hitless and had no impact on traffic.

Test Results: Hardware and Operational Aspects

This test section covers a number of scenarios that highlight MPC5E capabilities which are important from the operational point of view, will be useful for specific usage scenarios, or are baseline figures important to qualify the router's performance.

Nonstop Active Routing (NSR)

Nonstop active routing is a high availability feature that preserves routing protocol information by running the routing protocol process on a backup routing engine in addition to the primary routing engine of an MX Series 3D router. The failover takes place purely internally; in contrast to Graceful Restart mechanisms, no alerts are sent to neighboring routers.

During failover of the primary routing engine, the MX Series 3D router continued to service L3VPNs, VPWS and VPLS services with IPv4-only traffic without any service disruption.

This test verified that the state of the routing and signaling protocols IS-IS, BGP, and LDP would be fully maintained, and that no traffic would be dropped in case of a primary routing engine failure.

We ran this test in the topology according to Figure 4, failing the topmost PE router by pulling its active (primary) routing engine during operations. Juniper set up 1,000 L3VPNs with 500,000 unique IPv4 routes, 1,000 VPWS and 1,000 VPLS instances. The device under test was populated with an MPC5E card with 24 10GbE and six 40GbE ports.

The MX Series 3D routers successfully showed nonstop active routing functionality: all test traffic, consisting of 1,000 L3VPN traffic flows, 1,000 VPLS traffic flows and 1,000 VPWS traffic flows, was forwarded hitless.



Power Efficiency

Network operators are sensitive to the electrical power consumption of their network devices. In this test we measured the power consumption of the MPC5E line card based on the test methodology defined in "Energy efficiency for telecommunication equipment: methodology for measurement and reporting for router and Ethernet switch products", ATIS-0600015.03.2013.

In an initial baseline test, we measured the power consumption of a fully populated router configuration including the module under test (MX-MPC5E 2x100GbE+4x10GbE). In a second step, we removed the module under test from the router and measured the power consumption again. We calculated the modular power consumption of the line card under test (Pwi) as the difference between the baseline power consumption and the power consumption without the module.

The Weighted Modular Energy Consumption of the MPC5E line card reached 390.4 W. The Energy Efficiency Rating for the MPC5E at full load was measured at 2.1 W/Gbit.
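The two reported quantities follow directly from the subtraction method described above; the Python sketch below shows the derivation with placeholder wattages, since the raw chassis readings are not listed in this report, and it does not attempt to reproduce the ATIS load-point weighting itself.

    # Modular power consumption per the subtraction method:
    # Pwi = P(chassis with module under test) - P(chassis without the module).
    def modular_power_w(p_with_module_w: float, p_without_module_w: float) -> float:
        return p_with_module_w - p_without_module_w

    def efficiency_rating_w_per_gbit(p_module_w: float, throughput_gbps: float) -> float:
        return p_module_w / throughput_gbps

    # Placeholder readings for illustration only (not the measured values):
    pwi = modular_power_w(p_with_module_w=2500.0, p_without_module_w=2100.0)
    print(f"Pwi = {pwi:.1f} W, EER = {efficiency_rating_w_per_gbit(pwi, 200.0):.2f} W/Gbit")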
Junos Continuity

Service providers require maximum router uptime, and rebooting a router is not a welcome activity. Juniper Networks Junos Continuity is a solution that reduces the frequency of MX Series 3D router reboots: it enables upgrades of line card software (drivers) without installing new Junos OS software or rebooting the router as a whole.

The MX Series 3D router demonstrated hitless service operation during the upgrade of an unused line card; no router reboot was required.

Part of this solution is a software component called Juniper Agile Deployment (JAM), which manages the line card software.

We performed the following test steps:

• Baseline test: We sent IPv4 test traffic across the single router under test.

• Juniper installed an additional MPC3E-NG line card. "Unknown hardware status" was displayed via CLI, as expected, for the newly inserted line card. We then started the installation of the JAM package (jam-mpc-2e-3e-ng64-14.1R6.2) on top of the base software version (Junos version 14.1R6.2). During this operation, the traffic flows on the other line cards continued without packet loss.

Figure 10: Junos Continuity / Juniper Agile Deployment (JAM) Test Setup



• After the JAM package was installed, the correct hardware information was shown via CLI. We sent additional test traffic consisting of 1,000 L3VPN flows, 1,000 VPLS flows and 1,000 VPWS flows on the newly detected line card under test. No packet loss was observed.

• JAM package deinstallation: as expected, we did not observe any service interruption in the IPv4 background traffic running on other line cards.

The router under test was able to install and uninstall the JAM package without any impact on existing IPv4 test traffic. In addition, we confirmed that the newly installed line card became usable after the upgrade by transferring VPN traffic.

FIB Scalability

The MX240, using the MPC5E line card, successfully installed 10 million unique IPv4 routes and (separately) 10 million unique IPv6 routes on a single port into the FIB.

Three additional baseline tests reviewed the control plane scalability aspects of the forwarding information base (FIB) and the routing information base (RIB) for IPv4 and IPv6. Edge routers must effectively support a large number of business service flows, Internet flows and broadband network gateway (BNG) flows in any combination without restriction. To ensure sufficient scale in all use cases, the maximum numbers of supported routes at the FIB and RIB level are key router selection criteria.

In the first test, with IPv4 routes only, the Ixia test equipment emulated a total of 10 million unique routes with different prefix lengths from /8 to /30. The large number of prefixes with lengths larger than 24 bits is not unusual for internal service provider networks. Most routes were adjacent. EANTC validated that route aggregation was disabled on the router. All IPv4 routes were advertised to the router on a single interface using BGP.

Separately, Juniper asked us to run the same test with 10 million unique IPv6 routes only, using an otherwise identical configuration.

Address Prefix (IPv4)   Prefix Length   Prefix Step   # IPv4 Prefixes
2.0.0.0                 8               2.0.0.0       5
12.0.0.0                9               1.0.0.0       5
17.0.0.0                10              0.128.0.0     20
27.0.0.0                11              0.64.0.0      20
32.0.0.0                12              0.32.0.0      20
34.128.0.0              13              0.16.0.0      30
36.96.0.0               14              0.8.0.0       30
37.80.0.0               15              0.4.0.0       30
37.200.0.0              16              0.2.0.0       30
38.4.0.0                17              0.1.0.0       50
38.54.0.0               18              0.0.128.0     60
38.84.0.0               19              0.0.64.0      70
38.101.128.0            20              0.0.32.0      200
38.126.128.0            21              0.0.16.0      300
38.145.64.0             22              0.0.8.0       400
38.157.192.0            23              0.0.4.0       500
38.165.144.0            24              0.0.2.0       1k
38.173.96.0             25              0.0.1.0       1k
38.177.72.0             26              0.0.0.128     996k
46.74.152.0             27              0.0.0.64      8M
76.207.24.0             28              0.0.0.32      1M
78.183.96.0             30              0.0.0.8       230
Total                                                 10M

Emulated BGP IPv4 Prefixes

In both cases, we verified the actual installation in the FIB by sending 5 Gbit/s of data traffic using all routes across the line card under test.
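The table can be turned into a reproducible route set; the Python sketch below expands a (first address, prefix length, address step, count) row into concrete prefixes and sums the counts. Only a few rows are listed here for brevity; the full table adds up to 10 million prefixes.

    import ipaddress

    # (first address, prefix length, address step, number of prefixes):
    # a few example rows from the table above.
    rows = [
        ("2.0.0.0",      8, "2.0.0.0",   5),
        ("12.0.0.0",     9, "1.0.0.0",   5),
        ("17.0.0.0",    10, "0.128.0.0", 20),
        ("38.177.72.0", 26, "0.0.0.128", 996_000),
    ]

    def expand(first: str, length: int, step: str, count: int):
        base = int(ipaddress.IPv4Address(first))
        stride = int(ipaddress.IPv4Address(step))
        for i in range(count):
            yield f"{ipaddress.IPv4Address(base + i * stride)}/{length}"

    print(f"{sum(r[3] for r in rows):,} prefixes in the listed rows")
    print(list(expand("2.0.0.0", 8, "2.0.0.0", 5)))   # 2.0.0.0/8, 4.0.0.0/8, ...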



IPv4 and IPv6 Mixed Performance

The results described below were achieved during the March 2016 test session.

Routers are faced with a growing fraction of IPv6 traffic as service providers and enterprises deploy new networks and services using IPv6 addresses. Currently, IPv6 still accounts for only a small share of traffic in service provider networks worldwide, on the order of one percent, but in certain markets the share rises to two to three percent, and enterprise values may be much higher. Therefore, a router's capability to concurrently handle IPv4 and IPv6 (dual stack) across the same physical interface is very important.

With IPv4 traffic, IPv6 traffic, and an 80:20 IPv4:IPv6 traffic mix, the MPC5E line card exhibited line-rate performance for IMIX and single packet sizes of 128, 256, 512, 1024, 1280 and 1518 bytes.

Figure 11: IPv4/IPv6 Performance Test Setup

We used the topology shown in Figure 11; the MX480 was configured for an IPv4/IPv6 eBGP routing scenario. We forwarded IPv4, IPv6 and mixed IPv4/IPv6 (80:20 proportion) test traffic at 240 Gbit/s. Packet sizes for all IPv4 and IPv6 streams were chosen to be identical: 128 bytes, 256 bytes, 512 bytes, 1024 bytes, 1280 bytes, 1518 bytes and the IMIX. In all cases, the MPC5E line card reached full line rate throughput without any packet loss.
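The mixed load can be broken down as in the Python sketch below, which splits the aggregate bit rate 80:20 between the two address families and derives the resulting per-family frame rate for one example frame size; the 20-byte per-frame Ethernet overhead and the 512-byte example size are assumptions of the sketch, not test parameters quoted above.

    # Split a 240 Gbit/s dual-stack load 80:20 between IPv4 and IPv6 and
    # derive per-family frame rates for a single fixed frame size.
    AGGREGATE_BPS = 240e9
    OVERHEAD_BYTES = 20        # preamble + SFD + inter-frame gap

    def frames_per_second(bps: float, frame_size: int) -> float:
        return bps / ((frame_size + OVERHEAD_BYTES) * 8)

    for family, share in (("IPv4", 0.8), ("IPv6", 0.2)):
        bps = AGGREGATE_BPS * share
        print(f"{family}: {bps / 1e9:5.1f} Gbit/s, "
              f"{frames_per_second(bps, 512) / 1e6:6.2f} Mframes/s at 512 bytes")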
With IPv4 traffic, the router showed average latency values between 17.5 and 20.2 μs and maximum latency values of 33.1–34.6 μs for fixed packet sizes. The IMIX average latency was measured as 29.4 μs and the maximum latency as 46.5 μs.

Figure 12: IPv4 Forwarding Delay

With IPv6 traffic, the router showed average latency values between 19.0 and 21.3 μs and maximum latency values of 24.4–27.7 μs for fixed packet sizes. The IMIX average latency was measured as 30.5 μs and the maximum latency as 38.7 μs.

Figure 13: IPv6 Forwarding Delay

With the mixed IPv4:IPv6 traffic, the router showed average latency values between 18.7 and 21.0 μs and maximum latency values of 33.2–35.1 μs for fixed packet sizes. The IMIX average latency was measured as 29.7 μs and the maximum latency as 46.5 μs.

Figure 14: IPv4/IPv6 Forwarding Delay



RIB Scalability IPv4 and IPv6

The MPC5E line card demonstrated support for 65 million IPv4 routes installed in the RIB. In a separate test run, the MPC5E demonstrated support for 65 million IPv6 routes installed in the RIB.

Often, routers have to process multiple routes per IPv4 prefix, since prefixes can usually be reached via multiple gateways. In this final baseline test, we evaluated the size of the routing information base (RIB), which needs to hold all these copies of routes. We verified the maximum number of IPv4 BGP routes that the DUT can sustain in the RIB. Separately, we verified the same for IPv6 routes only.

Figure 15: RIB Test Topology

For this test, Juniper suggested another test topology in which 13 identical copies of 5 million unique routes were advertised over ten interfaces of an MPC5E card, resulting in a total of 65 million routes to be installed in the RIB. The eleventh port was used as a traffic source, and the twelfth port remained unused at Juniper's request.

In the IPv4 case, the Ixia emulator measured 21.2 minutes to populate the RIB and an additional 5.8 minutes to populate the FIB; the delay was probably related to internal processing times. We did not measure such a delay for IPv6 (both RIB and FIB population times were 22.7 minutes).

Juniper suggested using 5 Gbit/s of bidirectional throughput across the entire card (using two ports only). There was no frame loss monitored, as expected.

The DUT (a Juniper MX240 using the module under test, the MPC5E 3D 24XGE+6XLGE line card) successfully installed up to 65,000,044 IPv4 routes, including 5,000,000 unique IPv4 prefixes, in the RIB table.
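The RIB figures can be cross-checked with the Python sketch below; the average learning rate is our own derivation from the quoted 21.2-minute population time and should be read as a rough average, not a measured per-second rate.

    # Total RIB size: 13 identical copies of 5 million unique prefixes.
    copies, unique_prefixes = 13, 5_000_000
    rib_routes = copies * unique_prefixes                    # 65,000,000 routes

    # Average learning rate implied by the 21.2-minute RIB population time.
    rib_population_seconds = 21.2 * 60
    routes_per_second = rib_routes / rib_population_seconds  # ~51,000 routes/s

    print(f"{rib_routes:,} RIB routes, ~{routes_per_second:,.0f} routes/s on average")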
About EANTC

EANTC (European Advanced Networking Test Center) is internationally recognized as one of the world's leading independent test centers for telecommunication technologies. Based in Berlin, Germany, the company has offered vendor-neutral consultancy and realistic, reproducible, high-quality testing services since 1991. Customers include leading network equipment manufacturers, tier-1 service providers, large enterprises and governments worldwide. EANTC's proof of concept, acceptance tests and network audits cover established and next-generation fixed and mobile network technologies.

http://www.eantc.com

EANTC AG, Salzufer 14, 10587 Berlin, Germany
[email protected], http://www.eantc.com

v2.5 20160628

