Junos Multicast Protocols User Guide
Published
2021-04-18
Juniper Networks, the Juniper Networks logo, Juniper, and Junos are registered trademarks of Juniper Networks, Inc.
in the United States and other countries. All other trademarks, service marks, registered marks, or registered service
marks are the property of their respective owners.
Juniper Networks assumes no responsibility for any inaccuracies in this document. Juniper Networks reserves the right
to change, modify, transfer, or otherwise revise this publication without notice.
The information in this document is current as of the date on the title page.
Juniper Networks hardware and software products are Year 2000 compliant. Junos OS has no known time-related
limitations through the year 2038. However, the NTP application is known to have some difficulty in the year 2036.
The Juniper Networks product that is the subject of this technical documentation consists of (or is intended for use
with) Juniper Networks software. Use of such software is subject to the terms and conditions of the End User License
Agreement ("EULA") posted at https://support.juniper.net/support/eula/. By downloading, installing or using such
software, you agree to the terms and conditions of that EULA.
Table of Contents
About This Guide | xlv
1 Overview
Understanding Multicast | 2
Multicast Overview | 2
Configuring IGMP | 25
Understanding IGMP | 27
Configuring IGMP | 29
Enabling IGMP | 31
Disabling IGMP | 57
Configuring MLD | 60
Understanding MLD | 60
Configuring MLD | 64
Enabling MLD | 65
Modifying the MLD Version | 67
Requirements | 73
Overview | 73
Configuration | 74
Verification | 75
Requirements | 86
Overview | 86
Configuration | 87
Verification | 89
Disabling MLD | 91
Requirements | 129
Configuration | 132
Requirements | 135
Configuration | 136
Requirements | 153
Configuration | 157
Verification | 161
Configuring IGMP Snooping Trace Operations | 161
Requirements | 164
Configuration | 165
Configuring MLD Snooping on a Switch VLAN with ELS Support (CLI Procedure) | 195
Requirements | 202
Configuration | 204
Requirements | 207
Configuration | 209
Configuring MLD Snooping Tracing Operations on EX Series Switches (CLI Procedure) | 214
Configuring MLD Snooping Tracing Operations on EX Series Switch VLANs (CLI Procedure) | 217
Requirements | 221
Configuration | 223
Requirements | 226
Configuration | 229
Viewing MVLAN and MVR Receiver VLAN Information on EX Series Switches with ELS | 263
Configuring Multicast VLAN Registration on non-ELS EX Series Switches | 264
Example: Configuring Multicast VLAN Registration on EX Series Switches Without ELS | 266
Requirements | 266
Configuration | 270
Routing Content to Densely Clustered Receivers with PIM Dense Mode | 294
Routing Content to Larger, Sparser Groups with PIM Sparse Mode | 305
Requirements | 321
Overview | 321
Configuration | 323
Verification | 326
Example: Configuring Multicast for Virtual Routers with IPv6 Interfaces | 334
Requirements | 334
Overview | 334
Configuration | 335
Verification | 340
Requirements | 345
Overview | 345
Configuration | 345
Verification | 347
Configuring the Static PIM RP Address on the Non-RP Routing Device | 349
Overview | 353
Configuration | 353
Verification | 356
Example: Rejecting PIM Bootstrap Messages at the Boundary of a PIM Domain | 368
Requirements | 381
Overview | 382
Configuration | 382
Verification | 384
Overview | 388
Configuration | 389
Verification | 391
Understanding Multicast Rendezvous Points, Shared Trees, and Rendezvous-Point Trees | 396
Requirements | 408
Overview | 408
Configuration | 410
Requirements | 412
Overview | 412
Configuration | 414
Verification | 416
Requirements | 440
Overview | 440
Configuration | 442
Verification | 444
Requirements | 459
Overview | 459
Configuration | 461
Verification | 463
Example: Configuring SSM Maps for Different Groups to Different Sources | 464
Requirements | 465
Overview | 465
Configuration | 465
Verification | 468
Requirements | 478
Overview | 478
Configuration | 482
Verification | 489
Rapidly Detecting Communication Failures with PIM and the BFD Protocol | 499
Configuring PIM and the Bidirectional Forwarding Detection (BFD) Protocol | 499
Requirements | 509
Overview | 509
Configuration | 510
Verification | 515
Requirements | 519
Overview | 519
Configuration | 521
Verification | 534
Requirements | 551
Overview | 552
Configuration | 555
Verification | 560
Example: Configuring MSDP with Active Source Limits and Mesh Groups | 562
Requirements | 562
Overview | 563
Configuration | 567
Verification | 569
Requirements | 591
Overview | 592
Configuration | 593
Verification | 596
Requirements | 600
Overview | 601
Configuration | 602
Verification | 604
Requirements | 605
Overview | 605
Configuration | 607
Verification | 610
Requirements | 618
Overview | 618
Configuration | 621
Verification | 630
Example: Configuring a Specific Tunnel for IPv4 Multicast VPN Traffic (Using Draft-Rosen MVPNs) | 636
Requirements | 636
Overview | 636
PE Router Configuration | 638
Verification | 650
Requirements | 656
Overview | 656
Configuration | 659
Verification | 668
Requirements | 675
Overview | 676
Configuration | 680
Verification | 688
Example: Configuring Data MDTs and Provider Tunnels Operating in Any-Source Multicast Mode | 690
Requirements | 690
Overview | 691
Configuration | 694
Verification | 695
Example: Configuring Data MDTs and Provider Tunnels Operating in Source-Specific Multicast
Mode | 696
Requirements | 696
Overview | 697
Configuration | 704
Verification | 709
Example: Configuring Data MDTs and Provider Tunnels Operating in Source-Specific Multicast
Mode | 713
Requirements | 714
Overview | 714
Configuration | 721
Verification | 726
Example: Configuring Data MDTs and Provider Tunnels Operating in Any-Source Multicast
Mode | 728
Requirements | 728
Overview | 728
Configuration | 731
Verification | 733
Requirements | 734
Overview | 734
Configuration | 735
Verification | 743
Generating Next-Generation MVPN VRF Import and Export Policies Overview | 765
Comparison of Draft Rosen Multicast VPNs and Next-Generation Multiprotocol BGP Multicast
VPNs | 769
PIM Sparse Mode, PIM Dense Mode, Auto-RP, and BSR for MBGP MVPNs | 771
Example: Configuring Point-to-Multipoint LDP LSPs as the Data Plane for Intra-AS MBGP
MVPNs | 781
Requirements | 781
Overview | 783
Configuration | 786
Verification | 788
Example: Configuring Ingress Replication for IP Multicast Using MBGP MVPNs | 789
Requirements | 789
Overview | 790
Configuration | 792
Verification | 798
Requirements | 807
Configuration | 809
Requirements | 832
Overview | 832
Configuration | 834
Verification | 844
Requirements | 844
Overview | 845
Configuration | 847
Verification | 851
Example: Configuring BGP Route Flap Damping Based on the MBGP MVPN Address Family | 851
Requirements | 852
Overview | 852
Configuration | 853
Verification | 865
Requirements | 868
Configuring Sender-Only and Receiver-Only Sites Using PIM ASM Provider Tunnels | 874
Requirements | 892
Configuration | 894
Requirements | 947
Overview | 947
Configuration | 948
Verification | 959
Requirements | 966
Overview | 967
Verification | 983
Example: Configuring Sender-Based RPF in a BGP MVPN with MLDP Point-to-Multipoint Provider
Tunnels | 1003
Requirements | 1003
Overview | 1004
Verification | 1019
Understanding Wildcards to Configure Selective Point-to-Multipoint LSPs for an MBGP MVPN | 1039
Anti-spoofing support for MPLS labels in BGP/MPLS IP VPNs (Inter-AS Option B) | 1086
Example: Configuring PIM Join Load Balancing on Draft-Rosen Multicast VPN | 1098
Requirements | 1099
Configuration | 1104
Verification | 1108
Example: Configuring PIM Join Load Balancing on Next-Generation Multicast VPN | 1110
Requirements | 1111
Configuration | 1114
Verification | 1120
Requirements | 1123
Overview | 1124
Configuration | 1125
Verification | 1131
Requirements | 1140
Overview | 1140
Configuration | 1141
Verification | 1152
Requirements | 1159
Overview | 1160
Configuration | 1161
Requirements | 1165
Overview | 1165
Configuration | 1165
Verification | 1168
Requirements | 1171
Overview | 1171
Configuration | 1172
Verification | 1174
Requirements | 1174
Overview | 1175
Configuration | 1176
Verification | 1179
Use Multicast-Only Fast Reroute (MoFRR) to Minimize Packet Loss During Link Failures | 1180
Requirements | 1193
Overview | 1193
Verification | 1201
Requirements | 1204
Overview | 1205
Verification | 1212
Requirements | 1216
Overview | 1216
Configuration | 1226
Verification | 1233
Enable Multicast Between Layer 2 and Layer 3 Devices Using Snooping | 1239
Requirements | 1243
Configuration | 1246
Verification | 1249
Enabling Bulk Updates for Multicast Snooping | 1250
Enabling Multicast Snooping for Multichassis Link Aggregation Group Interfaces | 1251
Configuring Multicast Snooping to Ignore Spanning Tree Topology Change Messages | 1253
Requirements | 1259
Overview | 1259
Configuration | 1261
Verification | 1271
Requirements | 1278
Overview | 1279
Configuration | 1279
Verification | 1282
Requirements | 1282
Overview | 1283
Configuration | 1283
Verification | 1286
Requirements | 1290
Overview | 1291
Configuration | 1292
Verification | 1294
Requirements | 1294
Configuration | 1299
Verification | 1311
Requirements | 1317
Overview | 1317
Configuration | 1318
Verification | 1320
Requirements | 1321
Overview | 1321
Configuration | 1323
Verification | 1325
Requirements | 1327
Overview | 1327
Configuration | 1329
Verification | 1333
7 Troubleshooting
Knowledge Base | 1336
accept-remote-source | 1350
active-source-limit | 1360
advertise-from-main-vpn-tables | 1368
algorithm | 1370
anycast-pim | 1377
anycast-prefix | 1379
asm-override-ssm | 1380
assert-timeout | 1382
authentication-key | 1385
auto-rp | 1386
autodiscovery | 1388
autodiscovery-only | 1389
backoff-period | 1391
backup-pe-group | 1393
backups | 1396
bandwidth | 1397
bootstrap | 1403
bootstrap-export | 1405
bootstrap-import | 1406
bootstrap-priority | 1408
cont-stats-collection-interval | 1414
count | 1416
create-new-ucast-tunnel | 1417
dampen | 1419
data-encapsulation | 1420
data-forwarding | 1422
data-mdt-reuse | 1424
default-peer | 1425
default-vpn-source | 1427
defaults | 1428
dense-groups | 1430
df-election | 1433
disable | 1434
distributed-dr | 1450
dr-election-on-p2p | 1453
dr-register-policy | 1454
dvmrp | 1456
embedded-rp | 1458
export-target | 1468
flood-groups | 1479
flow-map | 1480
group-ranges | 1526
group-rp-mapping | 1528
hello-interval | 1533
host-only-interface | 1540
idle-standby-path-switchover-delay | 1545
igmp | 1547
igmp-snooping | 1551
igmp-snooping-options | 1557
ignore-stp-topology-change | 1558
immediate-leave | 1559
import-target | 1568
inclusive | 1570
infinity | 1571
ingress-replication | 1572
inet-mdt | 1576
interface | 1593
interface-name | 1600
interval | 1602
intra-as | 1605
join-load-balance | 1607
join-prune-timeout | 1608
l2-querier | 1613
ldp-p2mp | 1617
listen | 1623
local | 1624
loose-check | 1643
mapping-agent-election | 1644
maximum-bandwidth | 1649
maximum-rps | 1651
mdt | 1655
min-rate | 1661
minimum-receive-interval | 1665
mld | 1667
mld-snooping | 1669
mpls-internet-multicast | 1689
msdp | 1690
multicast | 1693
multicast-replication | 1697
multicast-snooping-options | 1703
multichassis-lag-replicate-state | 1707
multiplier | 1708
multiple-triggered-joins | 1710
mvpn | 1713
mvpn-iana-rt-import | 1716
mvpn-mode | 1720
neighbor-policy | 1721
nexthop-hold-time | 1723
no-bidirectional-mode | 1727
no-qos-adjust | 1730
offer-period | 1731
omit-wildcard-address | 1735
override-interval | 1738
pim | 1747
pim-asm | 1754
pim-snooping | 1755
pim-to-igmp-proxy | 1760
pim-to-mld-proxy | 1761
prefix | 1771
process-non-null-as-null-register | 1782
propagation-delay | 1784
provider-tunnel | 1787
proxy | 1793
qualified-vlan | 1797
receiver | 1817
redundant-sources | 1820
register-limit | 1822
register-probe-time | 1824
reset-tracking-bit | 1828
restart-duration | 1831
reverse-oif-mapping | 1832
robustness-count | 1846
rp | 1850
rp-register-policy | 1853
rp-set | 1855
rpf-selection | 1858
rpt-spt | 1861
sap | 1866
scope | 1868
scope-policy | 1869
secret-key-timeout | 1871
selective | 1872
sglimit | 1877
signaling | 1879
snoop-pseudowires | 1881
source-active-advertisement | 1882
source-address | 1899
spt-only | 1908
spt-threshold | 1909
ssm-groups | 1911
standby-path-creation-delay | 1921
static-lsp | 1932
stickydr | 1935
subscriber-leave-timer | 1939
threshold-rate | 1954
tunnel-source | 2001
unicast-umh-election | 2007
upstream-interface | 2008
use-p2mp-lsp | 2010
vrf-advertise-selective | 2019
vpn-group-address | 2031
wildcard-group-inet | 2032
wildcard-group-inet6 | 2034
mtrace | 2096
RELATED DOCUMENTATION
Overview
Understanding Multicast | 2
CHAPTER 1
Understanding Multicast
IN THIS CHAPTER
Multicast Overview | 2
Multicast Overview
IN THIS SECTION
IP Multicast Uses | 4
IP Multicast Terminology | 6
IP Multicast Addressing | 8
Multicast Addresses | 9
IP has three fundamental types of addresses: unicast, broadcast, and multicast. A unicast address is used
to send a packet to a single destination. A broadcast address is used to send a datagram to an entire
subnetwork. A multicast address is used to send a datagram to a set of hosts that can be on different
subnetworks and that are configured as members of a multicast group.
A multicast datagram is delivered to destination group members with the same best-effort reliability as a
standard unicast IP datagram. This means that multicast datagrams are not guaranteed to reach all
members of a group or to arrive in the same order in which they were transmitted. The only difference
between a multicast IP packet and a unicast IP packet is the presence of a group address in the IP
header destination address field. Multicast addresses use the Class D address format.
NOTE: On all SRX Series devices, reordering is not supported for multicast fragments. Reordering
of unicast fragments is supported.
Individual hosts can join or leave a multicast group at any time. There are no restrictions on the physical
location or the number of members in a multicast group. A host can be a member of more than one
multicast group at any time. A host does not have to belong to a group to send packets to members of a
group.
Routers use a group membership protocol to learn about the presence of group members on directly
attached subnetworks. When a host joins a multicast group, it transmits a group membership protocol
message for the group or groups that it wants to receive and sets its IP process and network interface
card to receive frames addressed to the multicast group.
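The join process described above can be sketched with the standard socket API. This is an illustrative example, not a Junos procedure; the group address and port are arbitrary values chosen for the sketch, and the join is guarded because it requires a multicast-capable interface.

```python
import socket
import struct

GROUP = "239.1.1.1"   # illustrative, administratively scoped group
PORT = 5000

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# IP_ADD_MEMBERSHIP carries the group address plus the local interface
# (0.0.0.0 lets the kernel choose). Setting it is what triggers the group
# membership protocol (IGMP) report on the wire and programs the network
# interface card to accept frames addressed to the group.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
try:
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    # data, addr = sock.recvfrom(1500)   # receive datagrams sent to GROUP
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_DROP_MEMBERSHIP, mreq)
except OSError:
    pass  # the host needs a multicast-capable interface to join
sock.close()
```

Dropping the membership (or closing the socket) is what causes the host to stop listening for the group's frames.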
The Junos® operating system (Junos OS) routing protocol process supports a wide variety of routing
protocols. These routing protocols carry network information among routing devices not only for unicast
traffic streams sent between one pair of clients and servers, but also for multicast traffic streams
containing video, audio, or both, between a single server source and many client receivers. The routing
protocols used for multicast differ in many key ways from unicast routing protocols.
Information is delivered over a network by three basic methods: unicast, broadcast, and multicast.
The differences among unicast, broadcast, and multicast can be summarized as follows:
• Unicast: One-to-one, from one source to one destination.
• Broadcast: One-to-all, from one source to all possible destinations on a network segment.
• Multicast: One-to-many, from one source to multiple destinations expressing an interest in receiving
the traffic.
NOTE: This list does not include a special category for many-to-many applications, such as
online gaming or videoconferencing, where there are many sources for the same receiver and
where receivers often double as sources. Many-to-many is a service model that repeatedly
employs one-to-many multicast and therefore requires no unique protocol. The original
multicast specification, RFC 1112, supports both the any-source multicast (ASM) many-to-
many model and the source-specific multicast (SSM) one-to-many model.
With unicast traffic, many streams of IP packets that travel across networks flow from a single source,
such as a website server, to a single destination such as a client PC. Unicast traffic is still the most
common form of information transfer on networks.
Broadcast traffic flows from a single source to all possible destinations reachable on the network, which
is usually a LAN. Broadcasting is the easiest way to make sure traffic reaches its destinations.
Television networks use broadcasting to distribute video and audio. Even if the television network is a
cable television (CATV) system, the source signal reaches all possible destinations, which is the main
reason that some channels’ content is scrambled. Broadcasting is not feasible on the Internet because of
the enormous amount of unnecessary information that would constantly arrive at each end user's
device, the complexities and impact of scrambling, and related privacy issues.
Multicast traffic lies between the extremes of unicast (one source, one destination) and broadcast (one
source, all destinations). Multicast is a “one source, many destinations” method of traffic distribution,
meaning only the destinations that explicitly indicate their need to receive the information from a
particular source receive the traffic stream.
On an IP network, because destinations (clients) do not often communicate directly with sources
(servers), the routing devices between source and destination must be able to determine the topology of
the network from the unicast or multicast perspective to avoid routing traffic haphazardly. Multicast
routing devices replicate packets received on one input interface and send the copies out on multiple
output interfaces.
In IP multicast, the source and destination are almost always hosts and not routing devices. Multicast
routing devices distribute the multicast traffic across the network from source to destinations. The
multicast routing device must find multicast sources on the network, send out copies of packets on
several interfaces, prevent routing loops, connect interested destinations with the proper source, and
keep the flow of unwanted packets to a minimum. Standard multicast routing protocols provide most of
these capabilities, but some router architectures cannot send multiple copies of packets and so do not
support multicasting directly.
IP Multicast Uses
Multicast allows an IP network to support more than just the unicast model of data delivery that
prevailed in the early stages of the Internet. Multicast, originally defined as a host extension in RFC
1112 in 1989, provides an efficient method for delivering traffic flows that can be characterized as one-
to-many or many-to-many.
Unicast traffic is not strictly limited to data applications. Telephone conversations, wireless or not,
contain digital audio samples and might contain digital photographs or even video and still flow from a
single source to a single destination. In the same way, multicast traffic is not strictly limited to
multimedia applications. In some data applications, the flow of traffic is from a single source to many
destinations that require the packets, as in a news or stock ticker service delivered to many PCs. For this
reason, the term receiver is preferred to listener for multicast destinations, although both terms are
common.
Network applications that can function with unicast but are better suited for multicast include
collaborative groupware, teleconferencing, periodic or “push” data delivery (stock quotes, sports scores,
magazines, newspapers, and advertisements), server or website replication, and distributed interactive
simulation (DIS) such as war simulations or virtual reality. Any IP network concerned with reducing
network resource overhead for one-to-many or many-to-many data or multimedia applications with
multiple receivers benefits from multicast.
If unicast were employed by radio or news ticker services, the server would need a separate traffic
session for each listener or viewer (this is actually the method used by some Web-based
services). The processing load and bandwidth consumed by the server would increase linearly as
more people “tune in” to the server. This is extremely inefficient when dealing with the global scale of
the Internet. Unicast places the burden of packet duplication on the server and consumes more and
more backbone bandwidth as the number of users grows.
If broadcast were employed instead, the source could generate a single IP packet stream using a
broadcast destination address. Although broadcast eliminates the server packet duplication issue, this is
not a good solution for IP because IP broadcasts can be sent only to a single subnetwork, and IP routing
devices normally isolate IP subnetworks on separate interfaces. Even if an IP packet stream could be
addressed to literally go everywhere, and there were no need to “tune” to any source at all, broadcast
would be extremely inefficient because of the bandwidth strain and need for uninterested hosts to
discard large numbers of packets. Broadcast places the burden of packet rejection on each host and
consumes the maximum amount of backbone bandwidth.
For radio station or news ticker traffic, multicast provides the most efficient and effective outcome, with
none of the drawbacks and all of the advantages of the other methods. A single source of multicast
packets finds its way to every interested receiver. As with broadcast, the transmitting host generates
only a single stream of IP packets, so the load remains constant whether there is one receiver or one
million. The network routing devices replicate the packets and deliver the packets to the proper
receivers, but only the replication role is a new one for routing devices. The links leading to subnets
consisting of entirely uninterested receivers carry no multicast traffic. Multicast minimizes the burden
placed on sender, network, and receiver.
IP Multicast Terminology
Multicast has its own particular set of terms and acronyms that apply to IP multicast routing devices and
networks. Figure 1 on page 6 depicts some of the terms commonly used in an IP multicast network.
In a multicast network, the key component is the routing device, which is able to replicate packets and is
therefore multicast-capable. The routing devices in the IP multicast network, which has exactly the same
topology as the unicast network it is based on, use a multicast routing protocol to build a distribution
tree that connects receivers (the preferred term over listeners, which carries multimedia connotations,
although listeners is also used) to sources. In multicast terminology, the distribution tree is rooted at the source (the root of the
distribution tree is the source). The interface on the routing device leading toward the source is the
upstream interface, although the less precise terms incoming or inbound interface are used as well. To
keep bandwidth use to a minimum, it is best for only one upstream interface on the routing device to
receive multicast packets. The interface on the routing device leading toward the receivers is the
downstream interface, although the less precise terms outgoing or outbound interface are used as well.
There can be 0 to N–1 downstream interfaces on a routing device, where N is the number of logical
interfaces on the routing device. To prevent looping, the upstream interface must never receive copies
of downstream multicast packets.
Routing loops are disastrous in multicast networks because of the risk of repeatedly replicated packets.
One of the complexities of modern multicast routing protocols is the need to avoid routing loops, packet
by packet, much more rigorously than in unicast routing protocols.
The routing device's multicast forwarding state runs more logically based on the reverse path, from the
receiver back to the root of the distribution tree. This process is called reverse-path forwarding (RPF). In
RPF, every multicast packet received must pass an RPF check before it can be replicated or forwarded on
any interface. When it receives a multicast packet on an interface, the routing device verifies that the
source address in the multicast IP packet is the destination address for a unicast IP packet back to the
source.
If the outgoing interface found in the unicast routing table is the same interface that the multicast
packet was received on, the packet passes the RPF check. Multicast packets that fail the RPF check are
dropped, because the incoming interface is not on the shortest path back to the source. Routing devices
can build and maintain separate tables for RPF purposes.
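The RPF check described above can be expressed as a short sketch. The routing table, prefixes, and interface names here are hypothetical stand-ins for a real unicast RIB lookup:

```python
import ipaddress

# Hypothetical unicast routing table: prefix -> outgoing interface.
UNICAST_ROUTES = {
    ipaddress.ip_network("10.1.0.0/16"): "ge-0/0/0",
    ipaddress.ip_network("10.2.0.0/16"): "ge-0/0/1",
}

def unicast_lookup(addr):
    """Longest-prefix match: which interface leads back toward addr?"""
    best = None
    for prefix, iface in UNICAST_ROUTES.items():
        if addr in prefix and (best is None or prefix.prefixlen > best[0].prefixlen):
            best = (prefix, iface)
    return best[1] if best else None

def rpf_check(source_addr, arrival_iface):
    """A multicast packet passes RPF only if it arrived on the interface
    the router would use to send unicast traffic back to its source;
    otherwise the packet is dropped."""
    return unicast_lookup(ipaddress.ip_address(source_addr)) == arrival_iface

# A packet from source 10.1.5.9 arriving on ge-0/0/0 passes the check;
# the same packet arriving on ge-0/0/1 fails and would be dropped.
```

The same logic applies whether the RPF lookup uses the main unicast table or a separate table maintained for RPF purposes.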
The distribution tree used for multicast is rooted at the source and is the shortest-path tree (SPT), but
this path can be long if the source is at the periphery of the network. Providing a shared tree on the
backbone as the distribution tree locates the multicast source more centrally in the network. Shared
distribution trees with roots in the core network are created and maintained by a multicast routing
device operating as a rendezvous point (RP), a feature of sparse mode multicast protocols.
Scoping limits the routing devices and interfaces that can forward a multicast packet. Multicast scoping
is administrative in the sense that a range of multicast addresses is reserved for scoping purposes, as
described in RFC 2365, Administratively Scoped IP Multicast. Routing devices at the boundary must
filter multicast packets and ensure that packets do not stray beyond the established limit.
Each subnetwork attached to the routing device that contains at least one interested receiver is a leaf on
the distribution tree. Routing devices can have multiple leaves on different interfaces and must send a
copy of the IP multicast packet out on each interface with a leaf. When a new leaf subnetwork is added
to the tree (that is, the interface to the host subnetwork previously received no copies of the multicast
packets), a new branch is built, the leaf is joined to the tree, and replicated packets are sent out on the
interface. The number of leaves on a particular interface does not affect the routing device. The action is
the same for one leaf or a hundred.
NOTE: On Juniper Networks security devices, if the maximum number of leaves on a multicast
distribution tree is exceeded, multicast sessions are created up to the maximum number of
leaves, and any multicast sessions that exceed the maximum number of leaves are ignored. The
maximum number of leaves on a multicast distribution tree is device specific.
When a branch contains no leaves because there are no interested hosts on the routing device interface
leading to that IP subnetwork, the branch is pruned from the distribution tree, and no multicast packets
are sent out that interface. Packets are replicated and sent out multiple interfaces only where the
distribution tree branches at a routing device, and no link ever carries a duplicate flow of packets.
Collections of hosts all receiving the same stream of IP packets, usually from the same multicast source,
are called groups. In IP multicast networks, traffic is delivered to multicast groups based on an IP
multicast address, or group address. The groups determine the location of the leaves, and the leaves
determine the branches on the multicast network.
IP Multicast Addressing
Multicast uses the Class D IP address range (224.0.0.0 through 239.255.255.255). Class D addresses are
now commonly referred to simply as multicast addresses, because the classful addressing concept is obsolete.
Multicast addresses can never appear as the source address in an IP packet and can only be the
destination of a packet.
Multicast addresses usually have a prefix length of /32, although other prefix lengths are allowed.
Multicast addresses represent logical groupings of receivers and not physical collections of devices.
Blocks of multicast addresses can still be described in terms of prefix length in traditional notation, but
only for convenience. For example, the multicast address range from 232.0.0.0 through
232.255.255.255 can be written as 232.0.0.0/8 or 232/8.
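The address ranges and prefix notation above can be checked with Python's standard ipaddress module (the module choice is incidental; any prefix calculator gives the same results):

```python
import ipaddress

# 224.0.0.0/4 is the multicast (Class D) range.
multicast_block = ipaddress.ip_network("224.0.0.0/4")
assert ipaddress.ip_address("239.255.255.255") in multicast_block
assert ipaddress.ip_address("223.255.255.255") not in multicast_block

# The range 232.0.0.0 through 232.255.255.255 written as a prefix:
ssm_block = ipaddress.ip_network("232.0.0.0/8")
print(ssm_block[0], ssm_block[-1])   # first and last addresses in the block

# ipaddress also exposes the multicast property directly:
assert ipaddress.ip_address("232.1.2.3").is_multicast
```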
Internet service providers (ISPs) do not typically allocate multicast addresses to their customers because
multicast addresses relate to content, not to physical devices. Receivers are not assigned their own
multicast addresses, but need to know the multicast address of the content. Sources need to be
assigned multicast addresses only to produce the content, not to identify their place in the network.
Every source and receiver still needs an ordinary, unicast IP address.
Multicast addressing most often references the receivers, and the source of multicast content is usually
not even a member of the multicast group for which it produces content. If the source needs to monitor
the packets it produces, monitoring can be done locally, and there is no need to make the packets
traverse the network.
Many applications have been assigned a range of multicast addresses for their own use. These
applications assign multicast addresses to sessions created by that application. You do not usually need
to statically assign a multicast address, but you can do so.
Multicast Addresses
Multicast host group addresses are defined to be the IP addresses whose high-order four bits are 1110,
giving an address range from 224.0.0.0 through 239.255.255.255, or simply 224.0.0.0/4. (These
addresses also are referred to as Class D addresses.)
The Internet Assigned Numbers Authority (IANA) maintains a list of registered IP multicast groups. The
base address 224.0.0.0 is reserved and cannot be assigned to any group. The block of multicast
addresses from 224.0.0.1 through 224.0.0.255 is reserved for local wire use. Groups in this range are
assigned for various uses, including routing protocols and local discovery mechanisms.
The range from 239.0.0.0 through 239.255.255.255 is reserved for administratively scoped addresses.
Because packets addressed to administratively scoped multicast addresses do not cross configured
administrative boundaries, and because administratively scoped multicast addresses are locally assigned,
these addresses do not need to be unique across administrative boundaries.
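The reserved blocks named above (the base address, the local wire range, and the administratively scoped range) can be summarized in a small classifier. The category labels are illustrative, not standard terminology:

```python
import ipaddress

LINK_LOCAL = ipaddress.ip_network("224.0.0.0/24")     # local wire block
ADMIN_SCOPED = ipaddress.ip_network("239.0.0.0/8")    # administratively scoped

def classify(group):
    g = ipaddress.ip_address(group)
    if not g.is_multicast:
        return "not multicast"
    if g == ipaddress.ip_address("224.0.0.0"):
        return "reserved base address"   # cannot be assigned to any group
    if g in LINK_LOCAL:
        return "local wire (never forwarded)"
    if g in ADMIN_SCOPED:
        return "administratively scoped"
    return "other multicast"

# For example, 224.0.0.5 falls in the local wire block (it is assigned
# to a routing protocol), while 239.1.1.1 must stay inside an
# administrative boundary.
```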
When an IP multicast packet is carried on a LAN, which MAC addresses are used on the frame containing it? The packet source address—the
unicast IP address of the host originating the multicast content—translates easily and directly to the
MAC address of the source. But what about the packet’s destination address? This is the IP multicast
group address. Which destination MAC address for the frame corresponds to the packet’s multicast
group address?
One option is for LANs simply to use the LAN broadcast MAC address, which guarantees that the frame
is processed by every station on the LAN. However, this procedure defeats the whole purpose of
multicast, which is to limit the circulation of packets and frames to interested hosts. Also, hosts might
have access to many multicast groups, which multiplies the amount of traffic to noninterested
destinations. Broadcasting frames at the LAN level to support multicast groups makes no sense.
However, there is an easy way to effectively use Layer 2 frames for multicast purposes. The MAC
address has a bit that is set to 0 for unicast (the LAN term is individual address) and set to 1 to indicate
that this is a multicast address. Some of these addresses are reserved for multicast groups of specific
vendors or MAC-level protocols. Internet multicast applications use the range 0x01-00-5E-00-00-00 to
0x01-00-5E-FF-FF-FF. Multicast receivers (hosts running TCP/IP) listen for frames with one of these
addresses when the application joins a multicast group. The host stops listening when the application
terminates or the host leaves the group at the packet layer (Layer 3).
This means that 3 bytes, or 24 bits, are available to map IPv4 multicast addresses at Layer 3 to MAC
multicast addresses at Layer 2. However, all IPv4 addresses, including multicast addresses, are 32 bits
long, leaving 8 IP address bits left over. Which method of mapping IPv4 multicast addresses to MAC
multicast addresses minimizes the chance of “collisions” (that is, two different IP multicast groups at the
packet layer mapping to the same MAC multicast address at the frame layer)?
First, it is important to realize that all IPv4 multicast addresses begin with the same 4 bits (1110), so
there are really only 4 bits of concern, not 8. A LAN must not drop the last bits of the IPv4 address
because these are almost guaranteed to be host bits, depending on the subnet mask. But the high-order
bits, the leftmost address bits, are almost always network bits, and there is only one LAN (for now).
One other bit of the remaining 24 MAC address bits is reserved (an initial 0 indicates an Internet
multicast address), so the 5 bits following the initial 1110 in the IPv4 address are dropped. The 23
remaining bits are mapped, one for one, into the last 23 bits of the MAC address. An example of this
process is shown in Figure 2 on page 12.
Note that this process means that there are 32 (2^5) IPv4 multicast addresses that could map to the
same MAC multicast address. For example, multicast IPv4 addresses 224.8.7.6 and 229.136.7.6
translate to the same MAC address (0x01-00-5E-08-07-06). This is a real concern: because the host
could be interested in frames sent to both of those multicast groups, the IP software must examine the
packet's destination address and discard packets for any group the host has not joined.
NOTE: This “collision” problem does not exist in IPv6 because of the way IPv6 handles multicast
groups, but it is always a concern in IPv4. The procedure for placing IPv6 multicast packets inside
multicast frames is nearly identical to that for IPv4, except for the MAC destination address
0x3333 prefix (and the lack of “collisions”).
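The mapping can be reproduced in a few lines of code. This Python sketch (the function name and output format are illustrative) copies the low-order 23 bits of the group address into the 0x01-00-5E prefix and demonstrates the collision described above:

```python
def ipv4_multicast_to_mac(addr: str) -> str:
    """Map an IPv4 multicast address to its Ethernet MAC address.

    The low-order 23 bits of the IP address are copied into the
    low-order 23 bits of the 01-00-5E-00-00-00 MAC prefix; the 5 bits
    after the leading 1110 are dropped, so 32 IP groups share one MAC.
    """
    octets = [int(o) for o in addr.split(".")]
    if not 224 <= octets[0] <= 239:
        raise ValueError("not an IPv4 multicast address")
    low23 = ((octets[1] & 0x7F) << 16) | (octets[2] << 8) | octets[3]
    return "01:00:5e:{:02x}:{:02x}:{:02x}".format(
        (low23 >> 16) & 0xFF, (low23 >> 8) & 0xFF, low23 & 0xFF)

# Both groups collide on the same frame-level address:
print(ipv4_multicast_to_mac("224.8.7.6"))    # 01:00:5e:08:07:06
print(ipv4_multicast_to_mac("229.136.7.6"))  # 01:00:5e:08:07:06
```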
Once the MAC address for the multicast group is determined, the host's operating system essentially
orders the LAN interface card to join or leave the multicast group. Once joined to a multicast group, the
host accepts frames sent to the multicast address as well as the host's unicast address, and ignores
other multicast groups' frames. It is possible, of course, for a host to join and receive multicast content
from more than one group at the same time.
To avoid multicast routing loops, every multicast routing device must always be aware of the interface
that leads to the source of that multicast group content by the shortest path. This is the upstream
(incoming) interface, and packets are never to be forwarded back toward a multicast source. All other
interfaces are potential downstream (outgoing) interfaces, depending on the number of branches on the
distribution tree.
Routing devices closely monitor the status of the incoming and outgoing interfaces, a process that
determines the multicast forwarding state. A routing device with a multicast forwarding state for a
particular multicast group is essentially “turned on” for that group's content. Interfaces on the routing
device's outgoing interface list send copies of the group's packets received on the incoming interface list
for that group. The incoming and outgoing interface lists might be different for different multicast
groups.
The multicast forwarding state in a routing device is usually written in either (S,G) or (*,G) notation.
These are pronounced “ess comma gee” and “star comma gee,” respectively. In (S,G), the S refers to the
unicast IP address of the source for the multicast traffic, and the G refers to the particular multicast
group IP address for which S is the source. All multicast packets sent from this source have S as the
source address and G as the destination address.
The asterisk (*) in the (*,G) notation is a wildcard indicating that the state applies to any multicast
application source sending to group G. So, if two sources are originating exactly the same content for
multicast group 224.1.1.2, a routing device could use (*,224.1.1.2) to represent the state of a routing
device forwarding traffic from both sources to the group.
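As an illustration only (the interface names, addresses, and table layout are hypothetical, not how Junos represents state), the forwarding behavior described above can be modeled in Python:

```python
# Hypothetical forwarding-state table: keys are (source, group) tuples,
# with "*" as the wildcard source; values give the incoming interface
# and the list of outgoing interfaces.
forwarding_state = {
    ("10.0.1.5", "224.1.1.1"): {"iif": "ge-0/0/0", "oifs": ["ge-0/0/1", "ge-0/0/2"]},
    ("*", "224.1.1.2"):        {"iif": "ge-0/0/3", "oifs": ["ge-0/0/1"]},
}

def forward(src: str, group: str, arrived_on: str) -> list:
    """Return the interfaces to replicate a packet onto.

    An (S,G) entry is preferred over (*,G); the RPF rule drops any
    packet that did not arrive on the expected incoming interface.
    """
    entry = forwarding_state.get((src, group)) or forwarding_state.get(("*", group))
    if entry is None or arrived_on != entry["iif"]:
        return []  # no state, or RPF check failed: drop
    return entry["oifs"]

print(forward("10.0.1.5", "224.1.1.1", "ge-0/0/0"))  # ['ge-0/0/1', 'ge-0/0/2']
print(forward("10.0.1.5", "224.1.1.1", "ge-0/0/1"))  # [] (RPF failure)
```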
Multicast routing protocols enable a collection of multicast routing devices to build (join) distribution
trees when a host on a directly attached subnet, typically a LAN, wants to receive traffic from a certain
multicast group, to prune branches, to locate sources and groups, and to prevent routing loops.
• Distance Vector Multicast Routing Protocol (DVMRP)—The first of the multicast routing protocols
and hampered by a number of limitations that make this method unattractive for large-scale Internet
use. DVMRP is a dense-mode-only protocol, and uses the flood-and-prune or implicit join method to
deliver traffic everywhere and then determine where the uninterested receivers are. DVMRP uses
source-based distribution trees in the form (S,G), and builds its own multicast routing tables for RPF
checks.
• Multicast OSPF (MOSPF)—Extends OSPF for multicast use, but only for dense mode. However,
MOSPF has an explicit join message, so routing devices do not have to flood their entire domain with
multicast traffic from every source. MOSPF uses source-based distribution trees in the form (S,G).
• Bidirectional PIM mode—A variation of PIM. Bidirectional PIM builds bidirectional shared trees that
are rooted at a rendezvous point (RP) address. Bidirectional traffic does not switch to shortest path
trees as in PIM-SM and is therefore optimized for routing state size instead of path length. This
means that the end-to-end latency might be longer compared to PIM sparse mode. Bidirectional PIM
routes are always wildcard-source (*,G) routes. The protocol eliminates the need for (S,G) routes and
data-triggered events. The bidirectional (*,G) group trees carry traffic both upstream from senders
toward the RP, and downstream from the RP to receivers. As a consequence, the strict reverse path
forwarding (RPF)-based rules found in other PIM modes do not apply to bidirectional PIM. Instead,
bidirectional PIM (*,G) routes forward traffic from all sources and the RP. Bidirectional PIM routing
devices must have the ability to accept traffic on many potential incoming interfaces. Bidirectional
PIM scales well because it needs no source-specific (S,G) state. Bidirectional PIM is recommended in
deployments with many dispersed sources and many dispersed receivers.
• PIM dense mode—In this mode of PIM, the assumption is that almost all possible subnets have at
least one receiver wanting to receive the multicast traffic from a source, so the network is flooded
with traffic on all possible branches, then pruned back when branches do not express an interest in
receiving the packets, explicitly (by message) or implicitly (time-out silence). This is the dense mode
of multicast operation. LANs are appropriate networks for dense-mode operation. Some multicast
routing protocols, especially older ones, support only dense-mode operation, which makes them
inappropriate for use on the Internet. In contrast to DVMRP and MOSPF, PIM dense mode allows a
routing device to use any unicast routing protocol and performs RPF checks using the unicast routing
table. PIM dense mode has an implicit join message, so routing devices use the flood-and-prune
method to deliver traffic everywhere and then determine where the uninterested receivers are. PIM
dense mode uses source-based distribution trees in the form (S,G), as do all dense-mode protocols.
PIM also supports sparse-dense mode, with mixed sparse and dense groups, but there is no special
notation for that operational mode. If sparse-dense mode is supported, the multicast routing
protocol allows some multicast groups to be sparse and other groups to be dense.
• PIM sparse mode—In this mode of PIM, the assumption is that very few of the possible receivers
want packets from each source, so the network establishes and sends packets only on branches that
have at least one leaf indicating (by message) an interest in the traffic. This multicast protocol allows
a routing device to use any unicast routing protocol and performs reverse-path forwarding (RPF)
checks using the unicast routing table. PIM sparse mode has an explicit join message, so routing
devices determine where the interested receivers are and send join messages upstream to their
neighbors, building trees from receivers to the rendezvous point (RP). PIM sparse mode uses an RP
routing device as the initial source of multicast group traffic and therefore builds distribution trees in
the form (*,G), as do all sparse-mode protocols. PIM sparse mode migrates to an (S,G) source-based
tree if that path is shorter than through the RP for a particular multicast group's traffic. WANs are
appropriate networks for sparse-mode operation, and indeed a common multicast guideline is not to
run dense mode on a WAN under any circumstances.
• Core Based Trees (CBT)—Shares all of the characteristics of PIM sparse mode (sparse mode, explicit
join, and shared (*,G) trees), but is said to be more efficient at finding sources than PIM sparse mode.
CBT is rarely encountered outside academic discussions. There are no large-scale deployments of
CBT, commercial or otherwise.
• PIM source-specific multicast (SSM)—Enhancement to PIM sparse mode that allows a client to
receive multicast traffic directly from the source, without the help of an RP. Used with IGMPv3 to
create a shortest-path tree between receiver and source.
Three versions of the Internet Group Management Protocol (IGMP) run between receiver hosts and
routing devices:
• IGMPv1—The original protocol defined in RFC 1112, Host Extensions for IP Multicasting. IGMPv1
sends an explicit join message to the routing device, but uses a timeout to determine when hosts
leave a group.
• IGMPv2—Defined in RFC 2236, Internet Group Management Protocol, Version 2. Among other
features, IGMPv2 adds an explicit leave message to the join message.
• IGMPv3—Defined in RFC 3376, Internet Group Management Protocol, Version 3. Among other
features, IGMPv3 optimizes support for a single source of content for a multicast group, or source-
specific multicast (SSM). Used with PIM SSM to create a shortest-path tree between receiver and
source.
• Bootstrap Router (BSR) and Auto-Rendezvous Point (RP)—Allow sparse-mode routing protocols to
find RPs within the routing domain (autonomous system, or AS). RP addresses can also be statically
configured.
• Multicast Source Discovery Protocol (MSDP)—Allows groups located in one multicast routing domain
to find RPs in other routing domains. MSDP typically runs on the same routing device as the PIM
sparse mode RP. MSDP is not needed if all receivers and sources are located in the same routing
domain.
• Session Announcement Protocol (SAP) and Session Description Protocol (SDP)—Display multicast
session names and correlate the names with multicast traffic. SDP is a session directory protocol that
advertises multimedia conference sessions and communicates setup information to participants who
want to join the session. A client commonly uses SDP to announce a conference session by
periodically multicasting an announcement packet to a well-known multicast address and port using
SAP.
• Pragmatic General Multicast (PGM)—Special protocol layer for multicast traffic that can be used
between the IP layer and the multicast application to add reliability to multicast traffic. PGM allows a
receiver to detect missing information in all cases and request replacement information if the
receiver application requires it.
The differences among the multicast routing protocols are summarized in Table 1 on page 16.
Multicast Routing Protocol   Dense Mode   Sparse Mode   Implicit Join   Explicit Join   (S,G) SBT   (*,G) Shared Tree
DVMRP                        Yes          —             Yes             —               Yes         —
MOSPF                        Yes          —             —               Yes             Yes         —
PIM dense mode               Yes          —             Yes             —               Yes         —
PIM sparse mode              —            Yes           —               Yes             Yes*        Yes
Bidirectional PIM            —            Yes           —               Yes             —           Yes
CBT                          —            Yes           —               Yes             —           Yes

* PIM sparse mode migrates to an (S,G) source-based tree when that path is shorter than the path
through the RP.
It is important to realize that retransmissions due to a high bit-error rate on a link or an overloaded
routing device can make multicast as inefficient as repeated unicast. Therefore, many multicast
applications face a trade-off between the session support provided by the Transmission Control
Protocol (TCP), which always resends missing segments, and the simple drop-and-continue strategy of
the User Datagram Protocol (UDP) datagram service, in which reordering can become an issue. Modern
multicast uses UDP almost exclusively.
The Juniper Networks T Series Core Routers handle extreme multicast packet replication requirements
with a minimum of router load. Each memory component replicates a multicast packet twice at most.
Even in the worst-case scenario involving maximum fan-out, when 1 input port and 63 output ports
need a copy of the packet, the T Series routing platform copies a multicast packet only six times. Most
multicast distribution trees are much sparser, so in many cases only two or three replications are
necessary. In no case does the T Series architecture have an impact on multicast performance, even with
the largest multicast fan-out requirements.
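The worst-case arithmetic follows from binary replication: if each stage can at most double the number of copies, the number of replication stages grows as the base-2 logarithm of the fan-out, so 63 output ports need at most 6 replications. A quick illustrative check in Python:

```python
import math

def binary_replication_stages(fanout: int) -> int:
    """Stages needed when each stage at most doubles the copy count."""
    return math.ceil(math.log2(fanout))

# Worst case described in the text: 63 output ports need a copy.
print(binary_replication_stages(63))  # 6
```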
Multicast is a “one source, many destinations” method of traffic distribution, meaning that only the
destinations that explicitly indicate their need to receive the information from a particular source receive
the traffic stream.
In the data plane of the SRX Series chassis, the SRX5000 line Module Port Concentrator (SRX5K-MPC)
forwards Layer 3 IP multicast traffic, which includes both multicast protocol packets (for example, MLD,
IGMP, and PIM packets) and multicast data packets.
In the incoming direction, the MPC receives multicast packets from an interface and forwards them to
the central point or to a Services Processing Unit (SPU). The SPU performs multicast route lookup, flow-
based security checks, and packet replication.
In the outgoing direction, the MPC receives copies of a multicast packet or Layer 3 multicast control
protocol packets from the SPU, and transmits them to either multicast-capable routers or hosts in a
multicast group.
In the SRX Series chassis, the SPU performs a multicast route lookup, if a route is available, to forward
an incoming multicast packet, and replicates the packet for each multicast outgoing interface. After
receiving replicated multicast packets and their corresponding outgoing-interface information from the
SPU, the MPC transmits these packets to their next hops.
NOTE: On all SRX Series devices, during an RG1 failover with multicast traffic and a high number
of multicast sessions, the failover delay is 90 through 120 seconds before traffic resumes on the
secondary node. This delay applies only to the first failover. For subsequent failovers, traffic
resumes within 8 through 18 seconds.
You configure a router network to support multicast applications with a related family of protocols. To
use multicast, you must understand the basic components of a multicast network and their relationships,
and then configure the device to act as a node in the network.
RELATED DOCUMENTATION
Multicast Overview | 2
Verifying a Multicast Configuration
IN THIS SECTION
• Fragment handling
• Packet reordering
The structure and processing of IPv6 multicast data sessions are the same as those of IPv4.
The reverse path forwarding (RPF) check behavior for IPv6 is the same as that for IPv4. Incoming
multicast data is accepted only if the RPF check succeeds. In an IPv6 multicast flow, incoming Multicast
Listener Discovery (MLD) protocol packets are accepted only if MLD or PIM is enabled in the security
zone for the incoming interface. Sessions for multicast protocol packets have a default timeout value of
300 seconds. This value cannot be configured. The null register packet is sent to the rendezvous point
(RP).
In IPv6 multicast flow, a multicast router has the following three roles:
• Designated router
This router receives the multicast packets from the source, encapsulates them in unicast PIM register
packets, and sends them toward the RP.
• Intermediate router
There are two sessions for the packets: the control session, for the outer unicast packets, and the data
session. Security policies are applied to the data session, and the control session is used for
forwarding.
• Rendezvous point
The RP receives the unicast PIM register packet, separates the unicast header, and then forwards the
inner multicast packet. The packets received by RP are sent to the pd interface for decapsulation and
are later handled like normal multicast packets.
On a Services Processing Unit (SPU), the multicast session is created as a template session for matching
the incoming packet's tuple. Leaf sessions are connected to the template session. On the Customer
Premises Equipment (CPE), only the template session is created. Each CPE session carries the fan-out lists
that are used for load-balanced distribution of multicast SPU sessions.
NOTE: IPv6 multicast uses the IPv4 multicast behavior for session distribution.
The network service access point identifier (nsapi) of the leaf session is set up on the multicast transit
traffic going into the tunnels, to point to the outgoing tunnel. The zone ID of the tunnel is used for
policy lookup for the leaf session in the second stage. Multicast packets are unidirectional; thus, for
multicast transit sessions sent into the tunnels, forwarding sessions are not created.
When the multicast route ages out or changes, the corresponding chain of multicast sessions is
deleted. This forces the next packet hitting the multicast route to take the first path and re-create the
chain of sessions; the multicast route counter is not affected.
NOTE: The IPv6 multicast packet reorder approach is the same as that for IPv4.
For the encapsulating router, the incoming packet is multicast, and the outgoing packet is unicast. For
the intermediate router, the incoming packet is unicast, and the outgoing packet is unicast.
RELATED DOCUMENTATION
Junos OS substantially supports the following RFCs and Internet drafts, which define standards for IP
multicast protocols, including the Distance Vector Multicast Routing Protocol (DVMRP), Internet Group
Management Protocol (IGMP), Multicast Listener Discovery (MLD), Multicast Source Discovery Protocol
(MSDP), Pragmatic General Multicast (PGM), Protocol Independent Multicast (PIM), Session
Announcement Protocol (SAP), and Session Description Protocol (SDP).
• RFC 3956, Embedding the Rendezvous Point (RP) Address in an IPv6 Multicast Address
• RFC 3590, Source Address Selection for the Multicast Listener Discovery (MLD) Protocol
• RFC 7761, Protocol Independent Multicast – Sparse Mode (PIM-SM): Protocol Specification
• RFC 5059, Bootstrap Router (BSR) Mechanism for Protocol Independent Multicast (PIM)
• RFC 6514, BGP Encodings and Procedures for Multicast in MPLS/BGP IP VPNs
The following RFCs and Internet drafts do not define standards, but provide information about multicast
protocols and related technologies. The IETF classifies them variously as “Best Current Practice,”
“Experimental,” or “Informational.”
• RFC 3446, Anycast Rendezvous Point (RP) Mechanism Using Protocol Independent Multicast (PIM)
and Multicast Source Discovery Protocol (MSDP)
• RFC 3973, Protocol Independent Multicast – Dense Mode (PIM-DM): Protocol Specification
(Revised)
RELATED DOCUMENTATION
CHAPTER 2
IN THIS CHAPTER
Configuring IGMP | 25
Configuring MLD | 60
Configuring IGMP
IN THIS SECTION
Understanding IGMP | 27
Configuring IGMP | 29
Enabling IGMP | 31
Disabling IGMP | 57
Multicast group membership protocols enable a routing device to detect when a host on a directly
attached subnet, typically a LAN, wants to receive traffic from a certain multicast group. Even if more
than one host on the LAN wants to receive traffic for that multicast group, the routing device sends only
one copy of each packet for that multicast group out on that interface, because of the inherent
broadcast nature of LANs. When the multicast group membership protocol informs the routing device
that there are no interested hosts on the subnet, the packets are withheld and that leaf is pruned from
the distribution tree.
The Internet Group Management Protocol (IGMP) and the Multicast Listener Discovery (MLD) Protocol
are the standard IP multicast group membership protocols: IGMP and MLD have several versions that
are supported by hosts and routing devices:
• IGMPv1—The original protocol defined in RFC 1112. An explicit join message is sent to the routing
device, but a timeout is used to determine when hosts leave a group. This process wastes processing
cycles on the routing device, especially on older or smaller routing devices.
• IGMPv2—Defined in RFC 2236. Among other features, IGMPv2 adds an explicit leave message to
the join message so that routing devices can more easily determine when a group has no interested
listeners on a LAN.
• IGMPv3—Defined in RFC 3376. Among other features, IGMPv3 optimizes support for a single source
of content for a multicast group, or source-specific multicast (SSM).
The various versions of IGMP and MLD are backward compatible. It is common for a routing device to
run multiple versions of IGMP and MLD on LAN interfaces. Backward compatibility is achieved by
dropping back to the most basic of all versions run on a LAN. For example, if one host is running
IGMPv1, any routing device attached to the LAN running IGMPv2 can drop back to IGMPv1 operation,
effectively eliminating the IGMPv2 advantages. Running multiple IGMP versions ensures that both
IGMPv1 and IGMPv2 hosts find peers for their versions on the routing device.
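The fallback rule amounts to taking the lowest version present on the LAN, as this illustrative Python snippet shows:

```python
def effective_igmp_version(versions_on_lan: set) -> int:
    """All participants drop back to the most basic version in use."""
    return min(versions_on_lan)

# A single IGMPv1 host pulls an IGMPv2/IGMPv3 LAN down to version 1:
print(effective_igmp_version({1, 2, 3}))  # 1
print(effective_igmp_version({2, 3}))     # 2
```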
SEE ALSO
Configuring MLD
Understanding IGMP
The Internet Group Management Protocol (IGMP) manages the membership of hosts and routing
devices in multicast groups. IP hosts use IGMP to report their multicast group memberships to any
immediately neighboring multicast routing devices. Multicast routing devices use IGMP to learn, for
each of their attached physical networks, which groups have members.
IGMP is also used as the transport for several related multicast protocols (for example, Distance Vector
Multicast Routing Protocol [DVMRP] and Protocol Independent Multicast version 1 [PIMv1]).
A routing device receives explicit join and prune messages from those neighboring routing devices that
have downstream group members. When PIM is the multicast protocol in use, IGMP begins the process
as follows:
1. To join a multicast group, G, a host conveys its membership information through IGMP.
2. The routing device then forwards data packets addressed to a multicast group G to only those
interfaces on which explicit join messages have been received.
3. A designated router (DR) sends periodic join and prune messages toward a group-specific rendezvous
point (RP) for each group for which it has active members. One or more routing devices are
automatically or statically designated as the RP, and all routing devices must explicitly join through
the RP.
4. Each routing device along the path toward the RP builds a wildcard (any-source) state for the group
and sends join and prune messages toward the RP.
The term route entry is used to refer to the state maintained in a routing device to represent the
distribution tree. A route entry can include such fields as:
• source address
• group address
• timers
• flag bits
The wildcard route entry's incoming interface points toward the RP.
The outgoing interfaces point to the neighboring downstream routing devices that have sent join and
prune messages toward the RP as well as the directly connected hosts that have requested
membership to group G.
5. This state creates a shared, RP-centered, distribution tree that reaches all group members.
IGMP is an integral part of IP and must be enabled on all routing devices and hosts that need to receive
IP multicast traffic.
For each attached network, a multicast routing device can be either a querier or a nonquerier. The
querier routing device periodically sends general query messages to solicit group membership
information. Hosts on the network that are members of a multicast group send report messages. When
a host leaves a group, it sends a leave group message.
IGMP version 3 (IGMPv3) supports inclusion and exclusion lists. Inclusion lists enable you to specify
which sources can send to a multicast group. This type of multicast group is called a source-specific
multicast (SSM) group, and its multicast addresses are allocated from the 232/8 range.
IGMPv3 provides support for source filtering. For example, a receiver can specify the particular sources
from which it accepts or rejects traffic. With IGMPv3, a multicast routing device can learn which
sources are of interest to neighboring routing devices.
Exclusion mode works the opposite of an inclusion list. It allows any source but the ones listed to send
to the SSM group.
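The two filter modes reduce to a simple predicate. This Python sketch (names are illustrative; it is not an IGMP implementation) captures the accept/reject logic of inclusion and exclusion lists:

```python
def source_permitted(source: str, mode: str, sources: set) -> bool:
    """IGMPv3-style source filtering.

    In INCLUDE mode only listed sources are accepted; in EXCLUDE mode
    every source except the listed ones is accepted.
    """
    if mode == "INCLUDE":
        return source in sources
    if mode == "EXCLUDE":
        return source not in sources
    raise ValueError("mode must be INCLUDE or EXCLUDE")

# A receiver interested only in source 10.1.1.1 for an SSM group:
print(source_permitted("10.1.1.1", "INCLUDE", {"10.1.1.1"}))  # True
print(source_permitted("10.9.9.9", "INCLUDE", {"10.1.1.1"}))  # False
# An exclusion list blocks the listed source but admits everything else:
print(source_permitted("10.9.9.9", "EXCLUDE", {"10.1.1.1"}))  # True
```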
IGMPv3 interoperates with versions 1 and 2 of the protocol. However, to remain compatible with older
IGMP hosts and routing devices, IGMPv3 routing devices must also implement versions 1 and 2 of the
protocol. IGMPv3 supports the following membership report record types: current-state records (mode
is include and mode is exclude), allow new sources, and block old sources.
SEE ALSO
Configuring IGMP
Before you begin:
1. Determine whether the router is directly attached to any multicast sources. Receivers must be able
to locate these sources.
2. Determine whether the router is directly attached to any multicast group receivers. If receivers are
present, IGMP is needed.
3. Determine whether to configure multicast to use sparse, dense, or sparse-dense mode. Each mode
has different configuration considerations.
5. Determine whether to locate the RP with the static configuration, BSR, or auto-RP method.
6. Determine whether to configure multicast to use its own RPF routing table when configuring PIM in
sparse, dense, or sparse-dense mode.
7. Configure the SAP and SDP protocols to listen for multicast session announcements. See Configuring
the Session Announcement Protocol.
To configure the Internet Group Management Protocol (IGMP), include the igmp statement:
igmp {
accounting;
interface interface-name {
disable;
(accounting | no-accounting);
group-policy [ policy-names ];
immediate-leave;
oif-map map-name;
promiscuous-mode;
ssm-map ssm-map-name;
static {
group multicast-group-address {
exclude;
group-count number;
group-increment increment;
source ip-address {
source-count number;
source-increment increment;
}
}
}
version version;
}
query-interval seconds;
query-last-member-interval seconds;
query-response-interval seconds;
robust-count number;
traceoptions {
file filename <files number> <size size> <world-readable | no-world-readable>;
flag flag <flag-modifier> <disable>;
}
}
You can include this statement at the following hierarchy level:
• [edit protocols]
By default, IGMP is enabled on all interfaces on which you configure Protocol Independent Multicast
(PIM), and on all broadcast interfaces on which you configure the Distance Vector Multicast Routing
Protocol (DVMRP).
NOTE: You can configure IGMP on an interface without configuring PIM. PIM is generally not
needed on IGMP downstream interfaces. Therefore, only one “pseudo PIM interface” is created
to represent all IGMP downstream (IGMP-only) interfaces on the router. This reduces the amount
of router resources, such as memory, that are consumed. You must configure PIM on upstream
IGMP interfaces to enable multicast routing, perform reverse-path forwarding for multicast data
packets, populate the multicast forwarding table for upstream interfaces, and in the case of
bidirectional PIM and PIM sparse mode, to distribute IGMP group memberships into the
multicast routing domain.
Enabling IGMP
The Internet Group Management Protocol (IGMP) manages multicast groups by establishing,
maintaining, and removing groups on a subnet. Multicast routing devices use IGMP to learn which
groups have members on each of their attached physical networks. IGMP must be enabled for the router
to receive IPv4 multicast packets. IGMP is only needed for IPv4 networks, because multicast is handled
differently in IPv6 networks. IGMP is automatically enabled on all IPv4 interfaces on which you
configure PIM and on all IPv4 broadcast interfaces when you configure DVMRP.
If IGMP is not running on an interface—either because PIM and DVMRP are not configured on the
interface or because IGMP is explicitly disabled on the interface—you can explicitly enable IGMP.
1. If PIM and DVMRP are not running on the interface, explicitly enable IGMP by including the interface
name.
2. See if IGMP is disabled on any interfaces. In the following example, IGMP is disabled on a Gigabit
Ethernet interface.
5. Verify the operation of IGMP on the interfaces by checking the output of the show igmp interface
command.
SEE ALSO
Understanding IGMP
Disabling IGMP
show igmp interface
The query interval, the response interval, and the robustness variable are related in that they are all
variables that are used to calculate the group membership timeout. The group membership timeout is
the number of seconds that must pass before a multicast router determines that no more members of a
host group exist on a subnet. The group membership timeout is calculated as the (robustness variable x
query-interval) + (query-response-interval). If no reports are received for a particular group before the
group membership timeout has expired, the routing device stops forwarding remotely originated
multicast packets for that group onto the attached network.
By default, host-query messages are sent every 125 seconds. You can change this interval to change the
number of IGMP messages sent on the subnet.
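The formula can be checked directly. Assuming the standard protocol defaults (robustness variable 2, query interval 125 seconds, query response interval 10 seconds; the robustness value is an assumption, as it is not stated in this passage), the group membership timeout works out to 260 seconds:

```python
def group_membership_timeout(robustness: int, query_interval: int,
                             query_response_interval: int) -> int:
    """(robustness variable x query-interval) + (query-response-interval)."""
    return robustness * query_interval + query_response_interval

# Assumed defaults: robustness 2, query interval 125 s, response interval 10 s.
print(group_membership_timeout(2, 125, 10))  # 260
```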
SEE ALSO
Understanding IGMP
Modifying the IGMP Query Response Interval
Modifying the IGMP Robustness Variable
show igmp interface
show igmp statistics
The query response interval, the host-query interval, and the robustness variable are related in that they
are all variables that are used to calculate the group membership timeout. The group membership
timeout is the number of seconds that must pass before a multicast router determines that no more
members of a host group exist on a subnet. The group membership timeout is calculated as the
(robustness variable x query-interval) + (query-response-interval). If no reports are received for a
particular group before the group membership timeout has expired, the routing device stops forwarding
remotely originated multicast packets for that group onto the attached network.
The default query response interval is 10 seconds. You can configure a subsecond interval with up to
one digit to the right of the decimal point. The configurable range is 0.1 through 0.9 seconds, and then
1 through 999,999 seconds in 1-second increments.
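A minimal configuration sketch, assuming the query-response-interval statement at the [edit protocols
igmp] hierarchy level; the 0.5-second value is an example only:

[edit protocols igmp]
user@host# set query-response-interval 0.5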
2. Verify the configuration by checking the IGMP Query Response Interval field in the output of the
show igmp interface command.
3. Verify the operation of the query interval by checking the Membership Query field in the output of
the show igmp statistics command.
SEE ALSO
Understanding IGMP
Modifying the IGMP Host-Query Message Interval
Modifying the IGMP Robustness Variable
show igmp interface
show igmp statistics
The immediate-leave setting enables host tracking, meaning that the device keeps track of the hosts that
send join messages. This allows IGMP to determine when the last host sends a leave message for the
multicast group.
When the immediate leave setting is enabled, the device removes an interface from the forwarding-table
entry without first sending IGMP group-specific queries to the interface. The interface is pruned from
the multicast tree for the multicast group specified in the IGMP leave message. The immediate leave
setting ensures optimal bandwidth management for hosts on a switched network, even when multiple
multicast groups are being used simultaneously.
When immediate leave is disabled and one host sends a leave group message, the routing device first
sends a group query to determine if another receiver responds. If no receiver responds, the routing
device removes all hosts on the interface from the multicast group. Immediate leave is disabled by
default for both IGMP version 2 and IGMP version 3.
NOTE: Although host tracking is enabled for IGMPv2 and MLDv1 when you enable immediate
leave, use immediate leave with these versions only when there is one host on the interface. The
reason is that IGMPv2 and MLDv1 use a report suppression mechanism whereby only one host
on an interface sends a group join report in response to a membership query. The other
interested hosts suppress their reports. The purpose of this mechanism is to avoid a flood of
reports for the same group. But it also interferes with host tracking, because the router only
knows about the one interested host and does not know about the others.
2. Verify the configuration by checking the Immediate Leave field in the output of the show igmp
interface command.
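A minimal sketch of enabling the setting, assuming the immediate-leave statement at the IGMP
interface level; the interface name is an example only:

[edit protocols igmp]
user@host# set interface ge-0/0/0.0 immediate-leave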
SEE ALSO
Understanding IGMP
show igmp interface
You define the policy to match only IGMP group addresses (for IGMPv2) by using the policy's route-
filter statement to match the group address. You define the policy to match IGMP (source, group)
addresses (for IGMPv3) by using the policy's route-filter statement to match the group address and the
policy's source-address-filter statement to match the source address.
3. Apply the policies to the IGMP interfaces on which you prefer not to receive specific group or
(source, group) reports. In this example, ge-0/0/0.1 is running IGMPv2, and ge-0/1/1.0 is running
IGMPv3.
4. Verify the operation of the filter by checking the Rejected Report field in the output of the show
igmp statistics command.
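A sketch of such a policy and its application, assuming the group-policy statement at the IGMP
interface level; the policy name, group address, and source address are examples only:

[edit policy-options]
policy-statement reject-source-group {
    term t1 {
        from {
            route-filter 233.252.0.1/32 exact;
            source-address-filter 192.0.2.1/32 exact;
        }
        then reject;
    }
}
[edit protocols igmp]
interface ge-0/1/1.0 {
    group-policy reject-source-group;
}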
SEE ALSO
Understanding IGMP
Example: Configuring Policy Chains and Route Filters
NOTE: When you enable IGMP on an unnumbered Ethernet interface that uses a /32 loopback
address as a donor address, you must configure IGMP promiscuous mode to accept the IGMP
packets received on this interface.
NOTE: When enabling promiscuous mode, you must configure all routers on the Ethernet segment
with the promiscuous-mode statement. Otherwise, only the interface configured with the lowest
IPv4 address acts as the IGMP querier for the Ethernet segment.
2. Verify the configuration by checking the Promiscuous Mode field in the output of the show igmp
interface command.
3. Verify the operation of the filter by checking the Rx non-local field in the output of the show igmp
statistics command.
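A minimal sketch, assuming the promiscuous-mode statement is configured at the IGMP interface
level; the interface name is an example only:

[edit protocols igmp]
user@host# set interface ge-0/0/0.0 promiscuous-mode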
SEE ALSO
Understanding IGMP
Loopback Interface Configuration
Junos OS Network Interfaces Library for Routing Devices
show igmp interface
show igmp statistics
When the routing device that is serving as the querier receives a leave-group message from a host, the
routing device sends multiple group-specific queries to the group being left. The querier sends a specific
number of these queries at a specific interval. The number of queries sent is called the last-member
query count. The interval at which the queries are sent is called the last-member query interval. Because
both settings are configurable, you can adjust the leave latency. The IGMP leave latency is the time
between a request to leave a multicast group and the receipt of the last byte of data for the multicast
group.
The last-member query count multiplied by the last-member query interval equals the amount of time it
takes a routing device to determine that the last member of a group has left the group and to stop
forwarding group traffic.
The default last-member query interval is 1 second. You can configure a subsecond interval with up to
one digit to the right of the decimal point. The configurable range is 0.1 through 0.9 seconds, and then
1 through 999,999 seconds in 1-second increments.
1. Configure the time (in seconds) that the routing device waits for a report in response to a group-
specific query.
2. Verify the configuration by checking the IGMP Last Member Query Interval field in the output of the
show igmp interfaces command.
NOTE: You can configure the last-member query count by configuring the robustness variable.
The two are always equal.
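For example, with the default robustness variable (2, which equals the last-member query count) and
the default last-member query interval (1 second), the leave latency is approximately 2 x 1 = 2 seconds.
A sketch of changing the interval, assuming the query-last-member-interval statement at the [edit
protocols igmp] hierarchy level; the 0.5-second value is an example only:

[edit protocols igmp]
user@host# set query-last-member-interval 0.5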
SEE ALSO
When the querier routing device receives an IGMP leave message on a shared network running IGMPv2,
it must send an IGMP group-specific query a specified number of times. The number of IGMP group
query messages sent is determined by the robust count.
The value of the robustness variable is also used in calculating the following IGMP message intervals:
• Group member interval—Amount of time that must pass before a multicast router determines that
there are no more members of a group on a network. This interval is calculated as follows:
(robustness variable x query-interval) + (1 x query-response-interval).
• Other querier present interval—The robust count is used to calculate the amount of time that must
pass before a multicast router determines that there is no longer another multicast router that is the
querier. This interval is calculated as follows: (robustness variable x query-interval) + (0.5 x query-
response-interval).
• Last-member query count—Number of group-specific queries sent before the router assumes there
are no local members of a group. The number of queries is equal to the value of the robustness
variable.
In IGMPv3, a change of interface state causes the system to immediately transmit a state-change report
from that interface. In case the state-change report is missed by one or more multicast routers, it is
retransmitted. The number of times it is retransmitted is the robust count minus one. In IGMPv3, the
robust count is also a factor in determining the group membership interval, the older version querier
interval, and the other querier present interval.
By default, the robustness variable is set to 2. You might want to increase this value if you expect a
subnet to lose packets.
2. Verify the configuration by checking the IGMP Robustness Count field in the output of the show
igmp interfaces command.
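With the default values (robustness variable 2, query interval 125 seconds, query response interval
10 seconds), the derived intervals work out as follows, matching the Derived Parameters section of the
show igmp interface output:

Group member interval:          (2 x 125) + (1 x 10)   = 260 seconds
Other querier present interval: (2 x 125) + (0.5 x 10) = 255 seconds
Last-member query count:        2

A sketch of increasing the value, assuming the robust-count statement at the [edit protocols igmp]
hierarchy level; 3 is an example value:

[edit protocols igmp]
user@host# set robust-count 3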
SEE ALSO
Increasing the maximum number of IGMP packets transmitted per second might be useful on a router
with a large number of interfaces participating in IGMP.
To change the limit for the maximum number of IGMP packets the router can transmit in 1 second,
include the maximum-transmit-rate statement and specify the maximum number of packets per second
to be transmitted.
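A minimal sketch, assuming the maximum-transmit-rate statement at the [edit protocols igmp]
hierarchy level; the packets-per-second value is an example only:

[edit protocols igmp]
user@host# set maximum-transmit-rate 5000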
SEE ALSO
To enable source-specific multicast (SSM) functionality, you must configure version 3 on the host and
the host’s directly connected routing device. If a source address is specified in a multicast group that is
statically configured, the version must be set to IGMPv3.
If a static multicast group is configured with the source address defined, and the IGMP version is
configured to be version 2, the source is ignored and only the group is added. In this case, the join is
treated as an IGMPv2 group join.
BEST PRACTICE: If you configure the IGMP version setting at the individual interface hierarchy
level, it overrides the interface all statement. That is, the new interface does not inherit the
version number that you specified with the interface all statement. By default, that new interface
is enabled with version 2. You must explicitly specify a version number when adding a new
interface. For example, if you specified version 3 with interface all, you would need to configure
the version 3 statement for the new interface. Additionally, if you configure an interface for a
multicast group at the [edit interface interface-name static group multicast-group-address]
hierarchy level, you must specify a version number as well as the other group parameters.
Otherwise, the interface is enabled with the default version 2.
If you have already configured the routing device to use IGMP version 1 (IGMPv1) and then configure it
to use IGMPv2, the routing device continues to use IGMPv1 for up to 6 minutes and then uses IGMPv2.
2. Verify the configuration by checking the version field in the output of the show igmp interfaces
command. The show igmp statistics command has version-specific output fields, such as V1
Membership Report, V2 Membership Report, and V3 Membership Report.
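A sketch of setting the version on one interface, assuming the version statement at the IGMP interface
level; the interface name is an example only:

[edit protocols igmp]
user@host# set interface ge-0/0/0.0 version 3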
SEE ALSO
Understanding IGMP
show pim interfaces
show igmp statistics
When enabling IGMP static group membership, you cannot configure multiple groups using the group-
count, group-increment, source-count, and source-increment statements if the all option is specified as
the IGMP interface.
Class-of-service (CoS) adjustment is not supported with IGMP static group membership.
1. On the DR, configure the static groups to be created by including the static statement and group
statement and specifying which IP multicast address of the group to be created. When creating
groups individually, you must specify a unique address for each group.
2. After you commit the configuration, use the show configuration protocols igmp command to verify
the IGMP protocol configuration.
interface fe-0/1/2.0 {
static {
group 233.252.0.1;
}
}
3. After you have committed the configuration and the source is sending traffic, use the show igmp
group command to verify that static group 233.252.0.1 has been created.
NOTE: When you configure static IGMP group entries on point-to-point links that connect
routing devices to a rendezvous point (RP), the static IGMP group entries do not generate join
messages toward the RP.
When you create IGMP static group membership to test multicast forwarding on an interface on which
you want to receive multicast traffic, you can specify that a number of static groups be automatically
created. This is useful when you want to test forwarding to multiple receivers without having to
configure each receiver separately.
1. On the DR, configure the number of static groups to be created by including the group-count
statement and specifying the number of groups to be created.
2. After you commit the configuration, use the show configuration protocols igmp command to verify
the IGMP protocol configuration.
interface fe-0/1/2.0 {
static {
group 233.252.0.1 {
group-count 3;
}
}
}
3. After you have committed the configuration and after the source is sending traffic, use the show
igmp group command to verify that static groups 233.252.0.1, 233.252.0.2, and 233.252.0.3 have
been created.
When you create IGMP static group membership to test multicast forwarding on an interface on which
you want to receive multicast traffic, you can also configure the group address to be automatically
incremented for each group created. This is useful when you want to test forwarding to multiple
receivers without having to configure each receiver separately and when you do not want the group
addresses to be sequential.
In this example, you create three groups and increase the group address by an increment of two for each
group.
1. On the DR, configure the group address increment by including the group-increment statement and
specifying the number by which the address should be incremented for each group. The increment is
specified in dotted decimal notation similar to an IPv4 address.
2. After you commit the configuration, use the show configuration protocols igmp command to verify
the IGMP protocol configuration.
interface fe-0/1/2.0 {
version 3;
static {
group 233.252.0.1 {
group-increment 0.0.0.2;
group-count 3;
}
}
}
3. After you have committed the configuration and after the source is sending traffic, use the show
igmp group command to verify that static groups 233.252.0.1, 233.252.0.3, and 233.252.0.5 have
been created.
When you create IGMP static group membership to test multicast forwarding on an interface on which
you want to receive multicast traffic, and your network is operating in source-specific multicast (SSM)
mode, you can also specify that the multicast source address be accepted. This is useful when you want
to test forwarding to multicast receivers from a specific multicast source.
If you specify a group address in the SSM range, you must also specify a source.
If a source address is specified in a multicast group that is statically configured, the IGMP version on the
interface must be set to IGMPv3. IGMPv2 is the default value.
In this example, you create group 233.252.0.1 and accept IP address 10.0.0.2 as the only source.
1. On the DR, configure the source address by including the source statement and specifying the IPv4
address of the source host.
2. After you commit the configuration, use the show configuration protocols igmp command to verify
the IGMP protocol configuration.
interface fe-0/1/2.0 {
version 3;
static {
group 233.252.0.1 {
source 10.0.0.2;
}
}
}
3. After you have committed the configuration and the source is sending traffic, use the show igmp
group command to verify that static group 233.252.0.1 has been created and that source 10.0.0.2
has been accepted.
When you create IGMP static group membership to test multicast forwarding on an interface on which
you want to receive multicast traffic, you can specify that a number of multicast sources be
automatically accepted. This is useful when you want to test forwarding to multicast receivers from
more than one specified multicast source.
In this example, you create group 233.252.0.1 and accept addresses 10.0.0.2, 10.0.0.3, and 10.0.0.4 as
the sources.
1. On the DR, configure the number of multicast source addresses to be accepted by including the
source-count statement and specifying the number of sources to be accepted.
2. After you commit the configuration, use the show configuration protocols igmp command to verify
the IGMP protocol configuration.
interface fe-0/1/2.0 {
version 3;
static {
group 233.252.0.1 {
source 10.0.0.2 {
source-count 3;
}
}
}
}
3. After you have committed the configuration and the source is sending traffic, use the show igmp
group command to verify that static group 233.252.0.1 has been created and that sources 10.0.0.2,
10.0.0.3, and 10.0.0.4 have been accepted.
Source: 10.0.0.3
Last reported by: Local
Timeout: 0 Type: Static
Group: 233.252.0.1
Source: 10.0.0.4
Last reported by: Local
Timeout: 0 Type: Static
When you configure static groups on an interface on which you want to receive multicast traffic, and
specify that a number of multicast sources be automatically accepted, you can also specify the number
by which the address should be incremented for each source accepted. This is useful when you want to
test forwarding to multiple receivers without having to configure each receiver separately and you do
not want the source addresses to be sequential.
In this example, you create group 233.252.0.1 and accept addresses 10.0.0.2, 10.0.0.4, and 10.0.0.6 as
the sources.
1. Configure the multicast source address increment by including the source-increment statement and
specifying the number by which the address should be incremented for each source. The increment is
specified in dotted decimal notation similar to an IPv4 address.
2. After you commit the configuration, use the show configuration protocols igmp command to verify
the IGMP protocol configuration.
interface fe-0/1/2.0 {
version 3;
static {
group 233.252.0.1 {
source 10.0.0.2 {
source-count 3;
source-increment 0.0.0.2;
}
}
}
}
3. After you have committed the configuration and after the source is sending traffic, use the show
igmp group command to verify that static group 233.252.0.1 has been created and that sources
10.0.0.2, 10.0.0.4, and 10.0.0.6 have been accepted.
When you configure static groups on an interface on which you want to receive multicast traffic and
your network is operating in source-specific multicast (SSM) mode, you can specify that certain
multicast source addresses be excluded.
By default, the multicast source address configured in a static group operates in include mode. In include
mode, the multicast traffic for the group is accepted from the configured source address. You can also
configure the static group to operate in exclude mode. In exclude mode, the multicast traffic for the
group is accepted from any address other than the configured source address.
If a source address is specified in a multicast group that is statically configured, the IGMP version on the
interface must be set to IGMPv3. IGMPv2 is the default value.
In this example, you exclude address 10.0.0.2 as a source for group 233.252.0.1.
1. On the DR, configure a multicast static group to operate in exclude mode by including the exclude
statement and specifying which IPv4 source address to exclude.
2. After you commit the configuration, use the show configuration protocols igmp command to verify
the IGMP protocol configuration.
interface fe-0/1/2.0 {
version 3;
static {
group 233.252.0.1 {
exclude;
source 10.0.0.2;
}
}
}
3. After you have committed the configuration and the source is sending traffic, use the show igmp
group detail command to verify that static group 233.252.0.1 has been created and that the static
group is operating in exclude mode.
SEE ALSO
1. Enable accounting globally or on an IGMP interface. This example shows both options.
2. Configure the events to be recorded and filter the events to a system log file with a descriptive
filename, such as igmp-events.
4. You can monitor the system log file as entries are added to the file by running the monitor start and
monitor stop commands.
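A sketch of step 1, assuming the accounting statement at the global IGMP level and at the interface
level; the interface name is an example only:

[edit protocols igmp]
user@host# set accounting
user@host# set interface ge-0/0/0.0 accounting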
SEE ALSO
Understanding IGMP
Specifying Log File Size, Number, and Archiving Properties
When configuring limits for IGMP multicast groups, keep the following in mind:
• Each any-source group (*,G) counts as one group toward the limit.
• Each source-specific group (S,G) counts as one group toward the limit.
• Multiple source-specific groups count individually toward the group limit, even if they are for the
same group. For example, (S1, G1) and (S2, G1) would count as two groups toward the configured
limit.
• Combinations of any-source groups and source-specific groups count individually toward the group
limit, even if they are for the same group. For example, (*, G1) and (S, G1) would count as two groups
toward the configured limit.
• Configuring and committing a group limit that is lower than the number of groups that already exist
on the network results in the removal of all groups from the configuration. The groups must then
request to rejoin the network (up to the newly configured group limit).
• You can dynamically limit multicast groups on IGMP logical interfaces using dynamic profiles.
Starting in Junos OS Release 12.2, you can optionally configure a system log warning threshold for
IGMP multicast group joins received on the logical interface. Reviewing the system log messages is
helpful for troubleshooting and for detecting whether an excessive number of IGMP multicast group
joins has been received on the interface. These log messages convey when the configured group limit
has been exceeded, when the configured threshold has been exceeded, and when the number of groups
drops below the configured threshold.
The group-threshold statement enables you to configure the threshold at which a warning message is
logged. The range is 1 through 100 percent. The warning threshold is a percentage of the group limit, so
you must configure the group-limit statement before you can configure a warning threshold. For
instance, when the number of groups exceeds the configured warning threshold but remains below the
configured group limit, multicast groups continue to be accepted, and the device logs the warning
message. The device also logs a warning message after the number of groups drops below the
configured warning threshold. You can further specify the amount of time (in seconds) between log
messages by configuring the log-interval statement. The range is 6 through 32,767 seconds.
You might consider throttling log messages because every entry added after the configured threshold
and every entry rejected after the configured limit causes a warning message to be logged. By
configuring a log interval, you can throttle the amount of system log warning messages generated for
IGMP multicast group joins.
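A sketch combining the three statements at the IGMP interface level; the interface name and values are
examples only (the threshold is a percentage of the group limit):

[edit protocols igmp interface ge-0/0/0.0]
user@host# set group-limit 100
user@host# set group-threshold 80
user@host# set log-interval 60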
NOTE: On ACX Series routers, the maximum number of multicast routes is 1024.
[edit]
user@host# edit protocols igmp interface interface-name
To confirm your configuration, use the show protocols igmp command. To verify the operation of IGMP
on the interface, including the configured group limit and the optional warning threshold and interval
between log messages, use the show igmp interface command.
SEE ALSO
In the following example, tracing is enabled for all routing protocol packets. Then tracing is narrowed to
focus only on IGMP packets of a particular type. To configure tracing operations for IGMP:
1. (Optional) Configure tracing at the routing options level to trace all protocol packets.
6. Configure tracing flags. Suppose you are troubleshooting issues with a particular multicast group. The
following example shows how to flag all events for packets associated with the group IP address.
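A minimal sketch of trace configuration, assuming the traceoptions statement at the [edit protocols
igmp] hierarchy level; the file name is an example, and the flag shown (group) is one plausible choice
for group-related events:

[edit protocols igmp]
user@host# set traceoptions file igmp-trace
user@host# set traceoptions flag group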
SEE ALSO
Understanding IGMP
Tracing and Logging Junos OS Operations
mtrace
Disabling IGMP
To disable IGMP on an interface, include the disable statement:
disable;
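For example, to disable IGMP on a single interface while leaving it enabled on the others (the interface
name is an example only):

[edit protocols igmp]
interface ge-0/0/0.0 {
    disable;
}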
SEE ALSO
Understanding IGMP
Configuring IGMP
Enabling IGMP
SEE ALSO
Starting in Junos OS Release 12.2, you can optionally configure a system log warning threshold for
IGMP multicast group joins received on the logical interface.
RELATED DOCUMENTATION
Configuring MLD | 60
IN THIS SECTION
Purpose | 59
Action | 59
Meaning | 59
Purpose
Action
Sample Output
show igmp interface
Configured Parameters:
IGMP Query Interval: 125.0
IGMP Query Response Interval: 10.0
IGMP Last Member Query Interval: 1.0
IGMP Robustness Count: 2
Derived Parameters:
IGMP Membership Timeout: 260.0
IGMP Other Querier Present Timeout: 255.0
Meaning
The output shows a list of the interfaces that are configured for IGMP. Verify the following information:
Configuring MLD
IN THIS SECTION
Understanding MLD | 60
Configuring MLD | 64
Enabling MLD | 65
Disabling MLD | 91
Understanding MLD
The Multicast Listener Discovery (MLD) Protocol manages the membership of hosts and routers in
multicast groups. IP version 6 (IPv6) multicast routers use MLD to learn, for each of their attached
physical networks, which groups have interested listeners. Each routing device maintains a list of host
multicast addresses that have listeners for each subnetwork, as well as a timer for each address.
However, the routing device does not need to know the address of each listener—just the address of
each host. The routing device provides addresses to the multicast routing protocol it uses, which
ensures that multicast packets are delivered to all subnetworks where there are interested listeners. In
this way, MLD is used as the transport for the Protocol Independent Multicast (PIM) Protocol.
MLD is an integral part of IPv6 and must be enabled on all IPv6 routing devices and hosts that need to
receive IP multicast traffic. The Junos OS supports MLD versions 1 and 2. Version 2 is supported for
source-specific multicast (SSM) include and exclude modes.
In include mode, the receiver specifies the source or sources from which it wants to receive the
multicast group traffic. Exclude mode works the opposite way: the receiver specifies the source or
sources from which it does not want to receive the multicast group traffic.
For each attached network, a multicast routing device can be either a querier or a nonquerier. A querier
routing device, usually one per subnet, solicits group membership information by transmitting MLD
queries. When a host reports to the querier routing device that it has interested listeners, the querier
routing device forwards the membership information to the rendezvous point (RP) routing device by
means of the receiver's (host's) designated router (DR). This builds the rendezvous-point tree (RPT)
connecting the host with interested listeners to the RP routing device. The RPT is the initial path used
by the sender to transmit information to the interested listeners. Nonquerier routing devices do not
transmit MLD queries on a subnet but can do so if the querier routing device fails.
All MLD-configured routing devices start as querier routing devices on each attached subnet (see Figure
3 on page 61). The querier routing device on the right is the receiver's DR.
To elect the querier routing device, the routing devices exchange query messages containing their IPv6
source addresses. If a routing device hears a query message whose IPv6 source address is numerically
lower than its own selected address, it becomes a nonquerier. In Figure 4 on page 62, the routing
device on the left has a source address numerically lower than the one on the right and therefore
becomes the querier routing device.
NOTE: In the practical application of MLD, several routing devices on a subnet are nonqueriers.
If the elected querier routing device fails, query messages are exchanged among the remaining
routing devices. The routing device with the lowest IPv6 source address becomes the new
querier routing device. The IPv6 Neighbor Discovery Protocol (NDP) implementation drops
incoming Neighbor Announcement (NA) messages that have a broadcast or multicast address in
the target link-layer address option. This behavior is recommended by RFC 2461.
The querier routing device sends general MLD queries on the link-scope all-nodes multicast address
FF02::1 at short intervals to all attached subnets to solicit group membership information (see Figure 5
on page 62). Within the query message is the maximum response delay value, specifying the maximum
allowed delay for the host to respond with a report message.
If interested listeners are attached to the host receiving the query, the host sends a report containing
the host's IPv6 address to the routing device (see Figure 6 on page 63). If the reported address is not
yet in the routing device's list of multicast addresses with interested listeners, the address is added to
the list and a timer is set for the address. If the address is already on the list, the timer is reset. The
host's address is transmitted to the RP in the PIM domain.
If the host has no interested multicast listeners, it sends a done message to the querier routing device.
On receipt, the querier routing device issues a multicast address-specific query containing the last
listener query interval value to the multicast address of the host. If the routing device does not receive a
report from the multicast address, it removes the multicast address from the list and notifies the RP in
the PIM domain of its removal (see Figure 7 on page 63).
Figure 7: Host Has No Interested Receivers and Sends a Done Message to Routing Device
If a done message is not received by the querier routing device, the querier routing device continues to
send multicast address-specific queries. If the timer set for the address on receipt of the last report
expires, the querier routing device assumes there are no longer interested listeners on that subnet,
removes the multicast address from the list, and notifies the RP in the PIM domain of its removal (see
Figure 8 on page 64).
Figure 8: Host Address Timer Expires and Address Is Removed from Multicast Address List
SEE ALSO
Enabling MLD
Example: Recording MLD Join and Leave Events
Example: Modifying the MLD Robustness Variable
Configuring MLD
To configure the Multicast Listener Discovery (MLD) Protocol, include the mld statement:
mld {
accounting;
interface interface-name {
disable;
(accounting | no-accounting);
group-policy [ policy-names ];
immediate-leave;
oif-map [ map-names ];
passive;
ssm-map ssm-map-name;
static {
group multicast-group-address {
exclude;
group-count number;
group-increment increment;
source ip-address {
source-count number;
source-increment increment;
}
}
}
version version;
}
maximum-transmit-rate packets-per-second;
query-interval seconds;
query-last-member-interval seconds;
query-response-interval seconds;
robust-count number;
}
• [edit protocols]
By default, MLD is enabled on all broadcast interfaces when you configure Protocol Independent
Multicast (PIM) or the Distance Vector Multicast Routing Protocol (DVMRP).
Enabling MLD
The Multicast Listener Discovery (MLD) Protocol manages multicast groups by establishing, maintaining,
and removing groups on a subnet. Multicast routing devices use MLD to learn which groups have
members on each of their attached physical networks. MLD must be enabled for the router to receive
IPv6 multicast packets. MLD is needed only for IPv6 networks; in IPv4 networks, IGMP performs the
equivalent function. MLD is enabled on all IPv6 interfaces on which you configure PIM and on all IPv6
broadcast interfaces when you configure DVMRP.
MLD specifies different behaviors for multicast listeners and for routers. When a router is also a listener,
the router responds to its own messages. If a router has more than one interface to the same link, it
needs to perform the router behavior over only one of those interfaces. Listeners, on the other hand,
must perform the listener behavior on all interfaces connected to potential receivers of multicast traffic.
If MLD is not running on an interface—either because PIM and DVMRP are not configured on the
interface or because MLD is explicitly disabled on the interface—you can explicitly enable MLD.
1. If PIM and DVMRP are not running on the interface, explicitly enable MLD by including the interface
name.
2. Check to see if MLD is disabled on any interfaces. In the following example, MLD is disabled on a
Gigabit Ethernet interface.
interface fe-0/0/0.0;
interface ge-0/0/0.0 {
disable;
}
interface fe-0/0/0.0;
interface ge-0/0/0.0;
5. Verify the operation of MLD by checking the output of the show mld interface command.
SEE ALSO
Understanding MLD
Disabling MLD
If you configure the MLD version setting at the individual interface hierarchy level, it overrides the
MLD version configured using the interface all statement.
If a source address is specified in a multicast group that is statically configured, the version must be set
to MLDv2.
2. Verify the configuration by checking the version field in the output of the show mld interface
command. The show mld statistics command has version-specific output fields, such as the
counters in the MLD Message type field.
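A sketch of setting MLDv2 on one interface, assuming the version statement at the MLD interface
level; the interface name is an example only:

[edit protocols mld]
user@host# set interface ge-0/0/0.0 version 2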
SEE ALSO
Understanding MLD
Source-Specific Multicast Groups Overview
Example: Configuring Source-Specific Multicast Groups with Any-Source Override | 458
Example: Configuring an SSM-Only Domain | 454
Example: Configuring PIM SSM on a Network | 452
Example: Configuring SSM Mapping | 455
scope all-nodes address FF02::1. A general host-query message has a maximum response time that you
can set by configuring the query response interval.
The query response timeout, the query interval, and the robustness variable are related in that they are
all variables that are used to calculate the multicast listener interval. The multicast listener interval is the
number of seconds that must pass before a multicast router determines that no more members of a host
group exist on a subnet. The multicast listener interval is calculated as the (robustness variable x query-
interval) + (1 x query-response-interval). If no reports are received for a particular group before the
multicast listener interval has expired, the routing device stops forwarding remotely-originated multicast
packets for that group onto the attached network.
By default, host-query messages are sent every 125 seconds. You can change this interval to change the
number of MLD messages sent on the subnet.
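With the default values (robustness variable 2, query interval 125 seconds, query response interval 10 seconds), the multicast listener interval works out to (2 x 125) + (1 x 10) = 260 seconds. As a sketch, a configuration that lowers the query interval to 100 seconds might look like this (the value is illustrative):

```
protocols {
    mld {
        query-interval 100;
    }
}
```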
SEE ALSO
Understanding MLD | 0
Modifying the MLD Query Response Interval | 0
Example: Modifying the MLD Robustness Variable | 0
show mld interface | 2230
CLI Explorer
show mld statistics | 2237
CLI Explorer
interval to adjust the burst peaks of MLD messages on the subnet. Set a larger interval to make the
traffic less bursty.
The query response timeout, the query interval, and the robustness variable are related in that they are
all variables that are used to calculate the multicast listener interval. The multicast listener interval is the
number of seconds that must pass before a multicast router determines that no more members of a host
group exist on a subnet. The multicast listener interval is calculated as the (robustness variable x query-
interval) + (1 x query-response-interval). If no reports are received for a particular group before the
multicast listener interval has expired, the routing device stops forwarding remotely-originated multicast
packets for that group onto the attached network.
The default query response interval is 10 seconds. You can configure a subsecond interval with up to one digit to the right of the decimal point: the configurable range is 0.1 through 0.9 seconds, and then 1 through 999,999 in 1-second increments.
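As a sketch, a configuration that raises the query response interval to 15 seconds might look like this (the value is illustrative):

```
protocols {
    mld {
        query-response-interval 15;
    }
}
```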
2. Verify the configuration by checking the MLD Query Response Interval field in the output of the
show mld interface command.
3. Verify the operation of the query interval by checking the Listener Query field in the output of the
show mld statistics command.
SEE ALSO
Understanding MLD | 0
Modifying the MLD Host-Query Message Interval | 0
Example: Modifying the MLD Robustness Variable | 0
show mld interface | 2230
CLI Explorer
show mld statistics | 2237
CLI Explorer
on the link-scope-all-routers address FF02::2. You can lower this interval to reduce the amount of time it
takes a router to detect the loss of the last member of a group.
When the routing device that is serving as the querier receives a leave-group (done) message from a
host, the routing device sends multiple group-specific queries to the group. The querier sends a specific
number of these queries, and it sends them at a specific interval. The number of queries sent is called
the last-listener query count. The interval at which the queries are sent is called the last-listener query
interval. Both settings are configurable, thus allowing you to adjust the leave latency. The MLD leave latency is the time between a request to leave a multicast group and the receipt of the last byte of data for the multicast group.
The last-listener query count multiplied by the last-listener query interval equals the amount of time it takes a routing device to determine that the last member of a group has left the group and to stop forwarding group traffic.
The default last-listener query interval is 1 second. You can configure a subsecond interval with up to one digit to the right of the decimal point: the configurable range is 0.1 through 0.9 seconds, and then 1 through 999,999 in 1-second increments.
1. Configure the time (in seconds) that the routing device waits for a report in response to a group-
specific query.
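As a sketch, assuming the query-last-member-interval statement applies to MLD as it does to IGMP, setting the interval to 0.5 seconds might look like this (the value is illustrative):

```
protocols {
    mld {
        query-last-member-interval 0.5;
    }
}
```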
2. Verify the configuration by checking the MLD Last Member Query Interval field in the output of the show mld interface command.
NOTE: You can configure the last-member query count by configuring the robustness variable.
The two are always equal.
SEE ALSO
Understanding MLD | 0
Modifying the MLD Query Response Interval | 0
Example: Modifying the MLD Robustness Variable | 0
show mld interface | 2230
CLI Explorer
The immediate-leave setting enables host tracking, meaning that the device keeps track of the hosts that
send join messages. This allows MLD to determine when the last host sends a leave message for the
multicast group.
When the immediate leave setting is enabled, the device removes an interface from the forwarding-table
entry without first sending MLD group-specific queries to the interface. The interface is pruned from the
multicast tree for the multicast group specified in the MLD leave message. The immediate leave setting
ensures optimal bandwidth management for hosts on a switched network, even when multiple multicast
groups are being used simultaneously.
When immediate leave is disabled and one host sends a leave group message, the routing device first
sends a group query to determine if another receiver responds. If no receiver responds, the routing
device removes all hosts on the interface from the multicast group. Immediate leave is disabled by
default for both MLD version 1 and MLD version 2.
NOTE: Although host tracking is enabled for IGMPv2 and MLDv1 when you enable immediate
leave, use immediate leave with these versions only when there is one host on the interface. The
reason is that IGMPv2 and MLDv1 use a report suppression mechanism whereby only one host
on an interface sends a group join report in response to a membership query. The other
interested hosts suppress their reports. The purpose of this mechanism is to avoid a flood of
reports for the same group. But it also interferes with host tracking, because the router only
knows about the one interested host and does not know about the others.
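As a sketch, assuming an interface named ge-0/0/0.0 with a single attached host, enabling immediate leave might look like this:

```
protocols {
    mld {
        interface ge-0/0/0.0 {
            immediate-leave;
        }
    }
}
```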
2. Verify the configuration by checking the Immediate Leave field in the output of the show mld
interface command.
SEE ALSO
Understanding MLD | 0
When the group-policy statement is enabled on a router, after the router receives an MLD report, the
router compares the group against the specified group policy and performs the action configured in that
policy (for example, rejects the report if the policy matches the defined address or network).
You define the policy to match only MLD group addresses (for MLDv1) by using the policy's route-filter
statement to match the group address. You define the policy to match MLD (source, group) addresses
(for MLDv2) by using the policy's route-filter statement to match the group address and the policy's
source-address-filter statement to match the source address.
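As a sketch, assuming a hypothetical policy name and illustrative addresses, an MLDv2 (source, group) filter and its application to an interface might look like this:

```
policy-options {
    policy-statement reject-group-reports {
        term t1 {
            from {
                route-filter ff0e::1:2/128 exact;
                source-address-filter 2001:db8::1/128 exact;
            }
            then reject;
        }
    }
}
protocols {
    mld {
        interface ge-0/1/1.0 {
            group-policy reject-group-reports;
        }
    }
}
```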
3. Apply the policies to the MLD interfaces where you prefer not to receive specific group or (source,
group) reports. In this example, ge-0/0/0.1 is running MLDv1 and ge-0/1/1.0 is running MLDv2.
4. Verify the operation of the filter by checking the Rejected Report field in the output of the show mld
statistics command.
SEE ALSO
Understanding MLD | 0
Routing Policies, Firewall Filters, and Traffic Policers User Guide
show mld statistics | 2237
CLI Explorer
IN THIS SECTION
Requirements | 73
Overview | 73
Configuration | 74
Verification | 75
This example shows how to configure and verify the MLD robustness variable in a multicast domain.
Requirements
• Configure an interior gateway protocol or static routing. See the Junos OS Routing Protocols Library
for Routing Devices.
• Enable IPv6 unicast routing. See the Junos OS Routing Protocols Library for Routing Devices.
Overview
The MLD robustness variable can be fine-tuned to allow for expected packet loss on a subnet.
Increasing the robust count allows for more packet loss but increases the leave latency of the
subnetwork.
The value of the robustness variable is used in calculating the following MLD message intervals:
• Group member interval—Amount of time that must pass before a multicast router determines that
there are no more members of a group on a network. This interval is calculated as follows:
(robustness variable x query-interval) + (1 x query-response-interval).
• Other querier present interval—Amount of time that must pass before a multicast router determines
that there is no longer another multicast router that is the querier. This interval is calculated as
follows: (robustness variable x query-interval) + (0.5 x query-response-interval).
• Last-member query count—Number of group-specific queries sent before the router assumes there
are no local members of a group. The default number is the value of the robustness variable.
By default, the robustness variable is set to 2. The number can be from 2 through 10. You might want to
increase this value if you expect a subnet to lose packets.
Configuration
IN THIS SECTION
Procedure | 74
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
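As a sketch, assuming an interface named ge-0/0/0.0, raising the robustness variable to 3 might look like this (the Junos statement name is robust-count; the value is illustrative):

```
protocols {
    mld {
        interface ge-0/0/0.0 {
            robust-count 3;
        }
    }
}
```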
Verification
To verify the configuration is working properly, check the MLD Robustness Count field in the output of
the show mld interfaces command.
SEE ALSO
Understanding MLD | 0
Modifying the MLD Query Response Interval | 0
Modifying the MLD Last-Member Query Interval | 0
show mld interface | 2230
CLI Explorer
Increasing the maximum number of MLD packets transmitted per second might be useful on a router
with a large number of interfaces participating in MLD.
To change the limit for the maximum number of MLD packets the router can transmit in 1 second,
include the maximum-transmit-rate statement and specify the maximum number of packets per second
to be transmitted.
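As a sketch, a configuration that raises the limit to 50 MLD packets per second might look like this (the value is illustrative):

```
protocols {
    mld {
        maximum-transmit-rate 50;
    }
}
```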
IN THIS SECTION
You can create MLD static group membership to test multicast forwarding without a receiver host.
When you enable MLD static group membership, data is forwarded to an interface without that
interface receiving membership reports from downstream hosts.
Class-of-service (CoS) adjustment is not supported with MLD static group membership.
When you configure static groups on an interface on which you want to receive multicast traffic, you
can specify the number of static groups to be automatically created.
1. Configure the static groups to be created by including the static statement and group statement and specifying the IPv6 multicast address of the group to be created.
2. After you commit the configuration, use the show configuration protocols mld command to verify the MLD protocol configuration.
interface fe-0/1/2.0 {
static {
group ff0e::1:ff05:1a8d;
}
}
3. After you have committed the configuration and after the source is sending traffic, use the show mld
group command to verify that static group ff0e::1:ff05:1a8d has been created.
When you create MLD static group membership to test multicast forwarding on an interface on which
you want to receive multicast traffic, you can specify that a number of static groups be automatically
created. This is useful when you want to test forwarding to multiple receivers without having to
configure each receiver separately.
1. Configure the number of static groups to be created by including the group-count statement and
specifying the number of groups to be created.
2. After you commit the configuration, use the show configuration protocols mld command to verify the MLD protocol configuration.
interface fe-0/1/2.0 {
static {
group ff0e::1:ff05:1a8d {
group-count 3;
}
}
}
3. After you have committed the configuration and the source is sending traffic, use the show mld
group command to verify that static groups ff0e::1:ff05:1a8d, ff0e::1:ff05:1a8e, and ff0e::1:ff05:1a8f
have been created.
Interface: fe-0/1/2
Group: ff0e::1:ff05:1a8d
Source: fe80::2e0:81ff:fe05:1a8d
Last reported by: Local
Timeout: 0 Type: Static
Interface: fe-0/1/2
Group: ff0e::1:ff05:1a8e
Source: fe80::2e0:81ff:fe05:1a8d
Last reported by: Local
Timeout: 0 Type: Static
Interface: fe-0/1/2
Group: ff0e::1:ff05:1a8f
Source: fe80::2e0:81ff:fe05:1a8d
When you configure static groups on an interface on which you want to receive multicast traffic and you
specify the number of static groups to be automatically created, you can also configure the group
address to be automatically incremented by some number of addresses.
In this example, you create three groups and increase the group address by an increment of two for each
group.
1. Configure the group address increment by including the group-increment statement and specifying
the number by which the address should be incremented for each group. The increment is specified
in a format similar to an IPv6 address.
2. After you commit the configuration, use the show configuration protocols mld command to verify the MLD protocol configuration.
interface fe-0/1/2.0 {
static {
group ff0e::1:ff05:1a8d {
group-increment ::2;
group-count 3;
}
}
}
3. After you have committed the configuration and the source is sending traffic, use the show mld
group command to verify that static groups ff0e::1:ff05:1a8d, ff0e::1:ff05:1a8f, and ff0e::1:ff05:1a91
have been created.
Interface: fe-0/1/2
Group: ff0e::1:ff05:1a8d
Source: fe80::2e0:81ff:fe05:1a8d
Last reported by: Local
Timeout: 0 Type: Static
Interface: fe-0/1/2
Group: ff0e::1:ff05:1a8f
Source: fe80::2e0:81ff:fe05:1a8d
Last reported by: Local
Timeout: 0 Type: Static
Interface: fe-0/1/2
Group: ff0e::1:ff05:1a91
Source: fe80::2e0:81ff:fe05:1a8d
Last reported by: Local
Timeout: 0 Type: Static
When you configure static groups on an interface on which you want to receive multicast traffic and
your network is operating in source-specific multicast (SSM) mode, you can specify the multicast source
address to be accepted.
If you specify a group address in the SSM range, you must also specify a source.
If a source address is specified in a multicast group that is statically configured, the MLD version must be
set to MLDv2 on the interface. MLDv1 is the default value.
In this example, you create group ff0e::1:ff05:1a8d and accept IPv6 address fe80::2e0:81ff:fe05:1a8d as
the only source.
1. Configure the source address by including the source statement and specifying the IPv6 address of
the source host.
2. After you commit the configuration, use the show configuration protocols mld command to verify the MLD protocol configuration.
interface fe-0/1/2.0 {
static {
group ff0e::1:ff05:1a8d {
source fe80::2e0:81ff:fe05:1a8d;
}
}
}
3. After you have committed the configuration and the source is sending traffic, use the show mld
group command to verify that static group ff0e::1:ff05:1a8d has been created and that source
fe80::2e0:81ff:fe05:1a8d has been accepted.
Interface: fe-0/1/2
Group: ff0e::1:ff05:1a8d
Source: fe80::2e0:81ff:fe05:1a8d
Last reported by: Local
Timeout: 0 Type: Static
When you configure static groups on an interface on which you want to receive multicast traffic, you
can specify a number of multicast sources to be automatically accepted.
In this example, you create static group ff0e::1:ff05:1a8d and accept fe80::2e0:81ff:fe05:1a8d,
fe80::2e0:81ff:fe05:1a8e, and fe80::2e0:81ff:fe05:1a8f as the source addresses.
1. Configure the number of multicast source addresses to be accepted by including the source-count
statement and specifying the number of sources to be accepted.
2. After you commit the configuration, use the show configuration protocols mld command to verify the MLD protocol configuration.
interface fe-0/1/2.0 {
static {
group ff0e::1:ff05:1a8d {
source fe80::2e0:81ff:fe05:1a8d {
source-count 3;
}
}
}
}
3. After you have committed the configuration and the source is sending traffic, use the show mld
group command to verify that static group ff0e::1:ff05:1a8d has been created and that sources
fe80::2e0:81ff:fe05:1a8d, fe80::2e0:81ff:fe05:1a8e, and fe80::2e0:81ff:fe05:1a8f have been
accepted.
Interface: fe-0/1/2
Group: ff0e::1:ff05:1a8d
Source: fe80::2e0:81ff:fe05:1a8d
Last reported by: Local
Timeout: 0 Type: Static
Interface: fe-0/1/2
Group: ff0e::1:ff05:1a8d
Source: fe80::2e0:81ff:fe05:1a8e
Last reported by: Local
Timeout: 0 Type: Static
Interface: fe-0/1/2
Group: ff0e::1:ff05:1a8d
Source: fe80::2e0:81ff:fe05:1a8f
Last reported by: Local
Timeout: 0 Type: Static
When you configure static groups on an interface on which you want to receive multicast traffic, and
specify a number of multicast sources to be automatically accepted, you can also specify the number by
which the address should be incremented for each source accepted.
In this example, you create static group ff0e::1:ff05:1a8d and accept fe80::2e0:81ff:fe05:1a8d,
fe80::2e0:81ff:fe05:1a8f, and fe80::2e0:81ff:fe05:1a91 as the sources.
1. Configure the address increment by including the source-increment statement and specifying the number by which the address should be incremented for each source accepted.
2. After you commit the configuration, use the show configuration protocols mld command to verify the MLD protocol configuration.
interface fe-0/1/2.0 {
static {
group ff0e::1:ff05:1a8d {
source fe80::2e0:81ff:fe05:1a8d {
source-count 3;
source-increment ::2;
}
}
}
}
3. After you have committed the configuration and the source is sending traffic, use the show mld
group command to verify that static group ff0e::1:ff05:1a8d has been created and that sources
fe80::2e0:81ff:fe05:1a8d, fe80::2e0:81ff:fe05:1a8f, and fe80::2e0:81ff:fe05:1a91 have been
accepted.
Interface: fe-0/1/2
Group: ff0e::1:ff05:1a8d
Source: fe80::2e0:81ff:fe05:1a8d
Last reported by: Local
Timeout: 0 Type: Static
Interface: fe-0/1/2
Group: ff0e::1:ff05:1a8d
Source: fe80::2e0:81ff:fe05:1a8f
Last reported by: Local
Timeout: 0 Type: Static
Interface: fe-0/1/2
Group: ff0e::1:ff05:1a8d
Source: fe80::2e0:81ff:fe05:1a91
Last reported by: Local
Timeout: 0 Type: Static
Interface: fe-0/1/2
Group: ff0e::1:ff05:1a8d
Group mode: Include
Source: fe80::2e0:81ff:fe05:1a8d
Last reported by: Local
Timeout: 0 Type: Static
Group: ff0e::1:ff05:1a8d
Group mode: Include
Source: fe80::2e0:81ff:fe05:1a8f
Last reported by: Local
Timeout: 0 Type: Static
Group: ff0e::1:ff05:1a8d
Group mode: Include
Source: fe80::2e0:81ff:fe05:1a91
Last reported by: Local
Timeout: 0 Type: Static
When you configure static groups on an interface on which you want to receive multicast traffic and
your network is operating in source-specific multicast (SSM) mode, you can specify that certain
multicast source addresses be excluded.
By default the multicast source address configured in a static group operates in include mode. In include
mode the multicast traffic for the group is accepted from the configured source address. You can also
configure the static group to operate in exclude mode. In exclude mode the multicast traffic for the
group is accepted from any address other than the configured source address.
If a source address is specified in a multicast group that is statically configured, the MLD version must be
set to MLDv2 on the interface. MLDv1 is the default value.
In this example, you exclude address fe80::2e0:81ff:fe05:1a8d as a source for group ff0e::1:ff05:1a8d.
1. Configure a multicast static group to operate in exclude mode by including the exclude statement and specifying the IPv6 source address to be excluded.
2. After you commit the configuration, use the show configuration protocols mld command to verify the MLD protocol configuration.
interface fe-0/1/2.0 {
static {
group ff0e::1:ff05:1a8d {
exclude;
source fe80::2e0:81ff:fe05:1a8d;
}
}
}
3. After you have committed the configuration and the source is sending traffic, use the show mld
group detail command to verify that static group ff0e::1:ff05:1a8d has been created and that the
static group is operating in exclude mode.
Similar configuration is available for IPv4 multicast traffic using the IGMP protocol.
RELATED DOCUMENTATION
IN THIS SECTION
Requirements | 86
Overview | 86
Configuration | 87
Verification | 89
This example shows how to determine whether MLD tuning is needed in a network by configuring the
routing device to record MLD join and leave events.
Requirements
• Configure an interior gateway protocol or static routing. See the Junos OS Routing Protocols Library
for Routing Devices.
• Enable IPv6 unicast routing. See the Junos OS Routing Protocols Library for Routing Devices.
Overview
Table 3 on page 86 describes the recordable MLD join and leave events.
Configuration
IN THIS SECTION
Procedure | 87
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
1. Enable accounting globally or on an MLD interface. This example shows the interface configuration.
2. Configure the events to be recorded, and filter the events to a system log file with a descriptive
filename, such as mld-events.
This example rotates the file every 24 hours (1440 minutes) or when it reaches 100 KB, whichever comes first, and keeps three files.
Verification
You can view the system log file by running the file show command.
You can monitor the system log file as entries are added to the file by running the monitor start and
monitor stop commands.
SEE ALSO
Understanding MLD | 0
When configuring limits for MLD multicast groups, keep the following in mind:
• Each any-source group (*,G) counts as one group toward the limit.
• Each source-specific group (S,G) counts as one group toward the limit.
• Multiple source-specific groups count individually toward the group limit, even if they are for the
same group. For example, (S1, G1) and (S2, G1) would count as two groups toward the configured
limit.
• Combinations of any-source groups and source-specific groups count individually toward the group
limit, even if they are for the same group. For example, (*, G1) and (S, G1) would count as two groups
toward the configured limit.
• Configuring and committing a group limit that is lower than the number of groups that already exist on the network results in the removal of all groups from the configuration. The groups must then request to rejoin the network (up to the newly configured group limit).
• You can dynamically limit multicast groups on MLD logical interfaces by using dynamic profiles. For
detailed information about creating dynamic profiles, see the Junos OS Subscriber Management and
Services Library .
Beginning with Junos OS 12.2, you can optionally configure a system log warning threshold for MLD
multicast group joins received on the logical interface. It is helpful to review the system log messages for
troubleshooting purposes and to detect if an excessive amount of MLD multicast group joins have been
received on the interface. These log messages convey when the configured group limit has been
exceeded, when the configured threshold has been exceeded, and when the number of groups drops below the configured threshold.
The group-threshold statement enables you to configure the threshold at which a warning message is
logged. The range is 1 through 100 percent. The warning threshold is a percentage of the group limit, so
you must configure the group-limit statement before you can configure a warning threshold. For instance, when the number of groups exceeds the configured warning threshold but remains below the configured group limit, multicast groups continue to be accepted, and the device logs a warning message. In addition, the device logs a warning message after the number of groups drops below the configured warning threshold.
You can further specify the amount of time (in seconds) between the log messages by configuring the
log-interval statement. The range is 6 through 32,767 seconds.
You might consider throttling log messages because every entry added after the configured threshold
and every entry rejected after the configured limit causes a warning message to be logged. By
configuring a log interval, you can throttle the amount of system log warning messages generated for
MLD multicast group joins.
[edit]
user@host# edit protocols mld interface interface-name
To confirm your configuration, use the show protocols mld command. To verify the operation of MLD on
the interface, including the configured group limit and the optional warning threshold and interval
between log messages, use the show mld interface command.
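As a sketch, assuming an interface named ge-0/0/0.0, a group limit of 100 with a warning threshold at 80 percent and a 60-second log interval might look like this (values are illustrative):

```
protocols {
    mld {
        interface ge-0/0/0.0 {
            group-limit 100;
            group-threshold 80;
            log-interval 60;
        }
    }
}
```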
SEE ALSO
Disabling MLD
To disable MLD on an interface, include the disable statement:
interface interface-name {
disable;
}
SEE ALSO
Enabling MLD | 0
Release Description
12.2 Beginning with Junos OS 12.2, you can optionally configure a system log warning threshold for MLD
multicast group joins received on the logical interface.
RELATED DOCUMENTATION
Configuring IGMP | 25
IN THIS SECTION
By default, Internet Group Management Protocol (IGMP) processing takes place on the Routing Engine
for MX Series routers. This centralized architecture may lead to reduced performance in scaled
environments or when the Routing Engine undergoes CLI changes or route updates. You can improve
system performance for IGMP processing by enabling distributed IGMP, which utilizes the Packet
Forwarding Engine to maintain a higher system-wide processing rate for join and leave events.
Distributed IGMP works by moving IGMP processing from the Routing Engine to the Packet Forwarding
Engine. When distributed IGMP is not enabled, IGMP processing is centralized on the routing protocol
process (rpd) running on the Routing Engine. When you enable distributed IGMP, join and leave events
are processed across Modular Port Concentrators (MPCs) on the Packet Forwarding Engine. Because
join and leave processing is distributed across multiple MPCs instead of being processed through a
centralized rpd on the Routing Engine, performance improves and join and leave latency decreases.
When you enable distributed IGMP, each Packet Forwarding Engine processes reports and generates
queries, maintains local group membership to the interface mapping table and updates the forwarding
state based on this table, runs distributed IGMP independently, and implements the group-policy and
ssm-map-policy IGMP interface options.
NOTE: Information from group-policy and ssm-map-policy IGMP interface options passes from
the Routing Engine to the Packet Forwarding Engine.
When you enable distributed IGMP, the rpd on the Routing Engine synchronizes all IGMP configurations
(including global and interface-level configurations) from the rpd to each Packet Forwarding Engine, runs
passive IGMP on distributed interfaces, and notifies Protocol Independent Multicast (PIM) of all group
memberships per distributed IGMP interface.
Consider the following guidelines when you configure distributed IGMP on an MX Series router with
MPCs:
• Distributed IGMP increases network performance by reducing the maximum join and leave latency and by increasing the rate at which join and leave events are processed.
NOTE: Join and leave latency may increase if multicast traffic is not preprovisioned and
destined for an MX Series router when a join or leave event is received from a client interface.
• Distributed IGMP is supported for Ethernet interfaces. It does not improve performance on PIM
interfaces.
• Starting in Junos OS release 18.2, distributed IGMP is supported on aggregated Ethernet interfaces,
and for enhanced subscriber management. As such, IGMP processing for subscriber flows is moved
from the Routing Engine to the Packet Forwarding Engine of supported line cards. Multicast groups
can consist of mixed receivers, that is, some centralized IGMP and some distributed IGMP.
• You can reduce initial join delays by enabling Protocol Independent Multicast (PIM) static joins or
IGMP static joins. You can reduce initial delays even more by preprovisioning multicast traffic. When
you preprovision multicast traffic, MPCs with distributed IGMP interfaces receive multicast traffic.
• For distributed IGMP to function properly, you must enable enhanced IP network services on a
single-chassis MX Series router. Virtual Chassis is not supported.
• When you enable distributed IGMP, the following interface options are not supported on the Packet
Forwarding Engine: oif-map, group-limit, ssm-map, and static. The traceoptions and accounting
statements can only be enabled for IGMP operations still performed on the Routing Engine; they are
not supported on the Packet Forwarding Engine. The clear igmp membership command is not
supported when distributed IGMP is enabled.
Release Description
18.2 Starting in Junos OS release 18.2, distributed IGMP is supported on aggregated Ethernet interfaces, and
for enhanced subscriber management. As such, IGMP processing for subscriber flows is moved from the
Routing Engine to the Packet Forwarding Engine of supported line cards. Multicast groups can be
composed of mixed receivers, that is, some centralized IGMP and some distributed IGMP.
RELATED DOCUMENTATION
Understanding IGMP | 27
Junos OS Multicast Protocols User Guide
IN THIS SECTION
Configuring distributed IGMP improves performance by reducing join and leave latency. This works by
moving IGMP processing from the Routing Engine to the Packet Forwarding Engine. In contrast to
centralized IGMP processing on the Routing Engine, distributed IGMP spreads the processing of join and leave events across multiple Modular Port Concentrators (MPCs).
You can enable distributed IGMP on static interfaces or dynamic interfaces. As a prerequisite, you must
enable enhanced IP network services on a single-chassis MX Series router.
Issuing the distributed keyword at this hierarchy level enables static joins for specific multicast (S,G)
groups and preprovisions all of them so that all distributed IGMP Packet Forwarding Engines receive
traffic.
Issuing the distributed keyword at this hierarchy level enables static joins for multicast (S,G) groups
so that all distributed IGMP Packet Forwarding Engines receive traffic and preprovisions a specific
multicast group address (G).
Issuing the distributed keyword at this hierarchy level enables static joins for multicast (S,G) groups
so that all Packet Forwarding Engines receive traffic, but preprovisions a specific multicast (S,G)
group.
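As a sketch, assuming an Ethernet interface named ge-1/0/0.0 on a single-chassis MX Series router, enabling enhanced IP network services and distributed IGMP on the interface might look like this:

```
chassis {
    network-services enhanced-ip;
}
protocols {
    igmp {
        interface ge-1/0/0.0 {
            distributed;
        }
    }
}
```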
2. (Optional) Enable static joins for specific (S,G) addresses and preprovision all of them so that all
distributed IGMP Packet Forwarding Engines receive traffic. In the example, multicast traffic for all of
the groups (225.0.0.1, 10.10.10.1), (225.0.0.1, 10.10.10.2), and (225.0.0.2, *) is preprovisioned.
3. (Optional) Enable static joins for specific multicast (S,G) groups so that all distributed IGMP Packet
Forwarding Engines receive traffic and preprovision a specific multicast group address (G). In the
example, multicast traffic for groups (225.0.0.1, 10.10.10.1) and (225.0.0.1, 10.10.10.2) is
preprovisioned, but group (225.0.0.2, *) is not preprovisioned.
4. (Optional) Enable a static join for specific multicast (S,G) groups so that all Packet Forwarding Engines
receive traffic, but preprovision only one specific multicast address group. In the example, multicast
traffic for group (225.0.0.1, 10.10.10.1) is preprovisioned, but all other groups are not preprovisioned.
CHAPTER 3
Internet Group Management Protocol (IGMP) snooping constrains the flooding of IPv4 multicast traffic
on VLANs on a device. With IGMP snooping enabled, the device monitors IGMP traffic on the network
and uses what it learns to forward multicast traffic to only the downstream interfaces that are
connected to interested receivers. The device conserves bandwidth by sending multicast traffic only to
interfaces connected to devices that want to receive the traffic, instead of flooding the traffic to all the
downstream interfaces in a VLAN.
Devices usually learn unicast MAC addresses by checking the source address field of the frames they
receive and then send any traffic for that unicast address only to the appropriate interfaces. However, a
multicast MAC address can never be the source address for a packet. As a result, when a device receives
traffic for a multicast destination address, it floods the traffic on the relevant VLAN, sending a significant
amount of traffic for which there might not necessarily be interested receivers.
IGMP snooping prevents this flooding. When you enable IGMP snooping, the device monitors IGMP
packets between receivers and multicast routers and uses the content of the packets to build a multicast
forwarding table—a database of multicast groups and the interfaces that are connected to members of
the groups. When the device receives multicast packets, it uses the multicast forwarding table to
selectively forward the traffic to only the interfaces that are connected to members of the appropriate
multicast groups.
On EX Series and QFX Series switches that do not support the Enhanced Layer 2 Software (ELS)
configuration style, IGMP snooping is enabled by default on all VLANs (or only on the default VLAN on
some devices) and you can disable it selectively on one or more VLANs. On all other devices, you must
explicitly configure IGMP snooping on a VLAN or in a bridge domain to enable it.
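For example, on a switch that uses the ELS configuration style you enable IGMP snooping per VLAN, and on an MX Series router you enable it in a bridge domain (vlan100 and bd100 are hypothetical names):

```
[edit]
user@host# set protocols igmp-snooping vlan vlan100
user@host# set bridge-domains bd100 protocols igmp-snooping
```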
NOTE: You can’t configure IGMP snooping on a secondary (private) VLAN (PVLAN). However,
starting in Junos OS Release 18.3R1 on EX4300 switches and EX4300 Virtual Chassis, and Junos
OS Release 19.2R1 on EX4300 multigigabit switches, when you enable IGMP snooping on a
primary VLAN, you also implicitly enable it on any secondary VLANs defined for that primary
VLAN. See "IGMP Snooping on Private VLANs (PVLANs)" on page 104 for details.
The device can use a routed VLAN interface (RVI) to forward traffic between VLANs in its configuration.
IGMP snooping works with Layer 2 interfaces and RVIs to forward multicast traffic in a switched
network.
When the device receives a multicast packet, its Packet Forwarding Engines perform a multicast lookup
on the packet to determine how to forward the packet to its local interfaces. From the results of the
lookup, each Packet Forwarding Engine extracts a list of Layer 3 interfaces that have ports local to the
Packet Forwarding Engine. If the list includes an RVI, the device provides a bridge multicast group ID for
the RVI to the Packet Forwarding Engine.
For VLANs that include multicast receivers, the bridge multicast ID includes a sub-next-hop ID, which
identifies the Layer 2 interfaces in the VLAN that are interested in receiving the multicast stream. The
Packet Forwarding Engine then forwards multicast traffic to bridge multicast IDs that have multicast
receivers for a given multicast group.
Multicast routers use IGMP to learn which groups have interested listeners for each of their attached
physical networks. In any given subnet, one multicast router acts as an IGMP querier. The IGMP querier
sends out the following types of queries to hosts:
• General query—Asks whether any host is listening to any multicast group. The querier sends general
queries periodically to refresh the group membership state.
• Group-specific query—(IGMPv2 and IGMPv3 only) Asks whether any host is listening to a specific
multicast group. This query is sent in response to a host leaving the multicast group and allows the
router to quickly determine whether any remaining hosts are interested in the group.
Hosts that are multicast listeners send the following kinds of messages:
• Membership report—Indicates that the host wants to join a particular multicast group.
• Leave report—(IGMPv2 and IGMPv3 only) Indicates that the host wants to leave a particular
multicast group.
Hosts can join a multicast group in either of two ways:
• By sending an unsolicited IGMP join message to a multicast router that specifies the IP multicast
group the host wants to join.
• By sending an IGMP join message in response to a general query from a multicast router.
A multicast router continues to forward multicast traffic to a VLAN provided that at least one host on
that VLAN responds to the periodic general IGMP queries. For a host to remain a member of a multicast
group, it must continue to respond to the periodic general IGMP queries.
Hosts can leave a multicast group in either of two ways:
• By not responding to periodic queries within a particular interval of time, which is considered a
“silent leave.” This is the only leave method for IGMPv1 hosts.
• By sending a leave report. This method can be used by IGMPv2 and IGMPv3 hosts.
In IGMPv3, a host can send a membership report that includes a list of source addresses. When the host
sends a membership report in INCLUDE mode, the host is interested in group multicast traffic only from
those sources in the source address list. If the host sends a membership report in EXCLUDE mode, the host
is interested in group multicast traffic from any source except the sources in the source address list. A
host can also send an EXCLUDE report in which the source-list parameter is empty, which is known as
an EXCLUDE NULL report. An EXCLUDE NULL report indicates that the host wants to join the multicast
group and receive packets from all sources.
Devices that support IGMPv3 process INCLUDE and EXCLUDE membership reports, and most devices
forward source-specific multicast (SSM) traffic only from the requested sources to subscribed receivers.
However, in some configurations the device might not strictly forward multicast traffic on a per-source
basis, such as on:
• EX Series and QFX Series switches that do not use the Enhanced Layer 2 Software (ELS)
configuration style
• EX4300 switches running Junos OS Releases prior to 18.2R1, 18.1R2, 17.4R2, 17.3R3, 17.2R3, and
14.1X53-D47
In these cases, the device might consolidate all INCLUDE and EXCLUDE mode reports it receives on a
VLAN for a specified group into a single route that includes all multicast sources for that group, with the
next hop representing all interfaces that have interested receivers for the group. As a result, interested
receivers on the VLAN can receive traffic from a source that they did not include in their INCLUDE
report or from a source they excluded in their EXCLUDE report. For example, if Host 1 wants traffic for
G from Source A and Host 2 wants traffic for group G from Source B, they both receive traffic for group
G regardless of whether A or B sends the traffic.
To determine how to forward multicast traffic, the device with IGMP snooping enabled maintains
information about the following interfaces in its multicast forwarding table:
• Multicast-router interfaces—These interfaces lead toward multicast routers or IGMP queriers. If an
interface receives IGMP queries or Protocol Independent Multicast (PIM) updates, the device adds the
interface to its multicast forwarding table as a multicast-router interface.
• Group-member interfaces—These interfaces lead toward hosts that are members of multicast groups.
If an interface receives membership reports for a multicast group, the device adds the interface to its
multicast forwarding table as a group-member interface.
The device learns about both types of interfaces by monitoring IGMP traffic.
Learned interface table entries age out after a time period. For example, if a learned multicast-router
interface does not receive IGMP queries or PIM hellos within a certain interval, the device removes the
entry for that interface from its multicast forwarding table.
NOTE: For the device to learn multicast-router interfaces and group-member interfaces, the
network must include an IGMP querier. The querier is usually a multicast router, but if there is no
multicast router on the local network, you can configure the device itself to be an IGMP querier.
An interface in a VLAN with IGMP snooping enabled receives multicast traffic and forwards it according
to the following rules.
IGMP traffic:
• Forward IGMP general queries received on a multicast-router interface to all other interfaces in the
VLAN.
• Forward IGMP reports received on a host interface to multicast-router interfaces in the same VLAN,
but not to the other host interfaces in the VLAN.
Multicast data traffic:
• Flood multicast packets with a destination address in the reserved 224.0.0.0/24 range to all other
interfaces on the VLAN.
• Forward unregistered multicast packets (packets for a group that has no current members) to all
multicast-router interfaces in the VLAN.
• Forward registered multicast packets to those host interfaces in the VLAN that are members of the
multicast group and to all multicast-router interfaces in the VLAN.
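The forwarding rules above (other than the always-flooded reserved range) can be modeled as a small decision function. This is a conceptual sketch, not Junos code; the interface names and the group address are illustrative:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class Vlan:
    mrouter_ifaces: List[str]               # learned multicast-router interfaces
    host_ifaces: List[str]                  # other (host-facing) interfaces in the VLAN
    groups: Dict[str, List[str]] = field(default_factory=dict)  # group -> member interfaces

def forward_targets(kind: str, in_iface: str, group: Optional[str], vlan: Vlan) -> List[str]:
    """Return the interfaces a packet is forwarded to under the snooping rules."""
    def others(ifaces: List[str]) -> List[str]:
        return [i for i in ifaces if i != in_iface]

    if kind == "query":                     # general query: all other interfaces in the VLAN
        return others(vlan.mrouter_ifaces + vlan.host_ifaces)
    if kind == "report":                    # report: multicast-router interfaces only
        return others(vlan.mrouter_ifaces)
    members = vlan.groups.get(group)        # multicast data packet
    if members is None:                     # unregistered group: mrouter interfaces only
        return others(vlan.mrouter_ifaces)
    return others(vlan.mrouter_ifaces + members)  # registered group: members + mrouters

v = Vlan(mrouter_ifaces=["ge-0/0/0"],
         host_ifaces=["ge-0/0/1", "ge-0/0/2"],
         groups={"233.252.0.1": ["ge-0/0/1"]})
forward_targets("data", "ge-0/0/0", "233.252.0.1", v)   # -> ["ge-0/0/1"]
```

Note that a report received on a host interface reaches only the multicast-router interfaces, which is why, in a router-less Layer 2 network, a configured IGMP querier is needed to keep the membership tables populated.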
With IGMP snooping on a pure Layer 2 local network (that is, Layer 3 is not enabled on the network), if
the network doesn’t include a multicast router, multicast traffic might not be properly forwarded
through the network. You might see this problem if the local network is configured such that multicast
traffic must be forwarded between devices in order to reach a multicast receiver. In this case, an
upstream device does not forward multicast traffic to a downstream device (and therefore to the
multicast receivers attached to the downstream device) because the downstream device does not
forward IGMP reports to the upstream device. You can solve this problem by configuring one of the
devices to be an IGMP querier. The IGMP querier device sends periodic general query packets to all the
devices in the network, which ensures that the snooping membership tables are updated and prevents
multicast traffic loss.
If you configure multiple devices to be IGMP queriers, the device with the lowest (smallest) IGMP
querier source address takes precedence and acts as the querier. The devices with higher IGMP querier
source addresses stop sending IGMP queries unless they do not receive IGMP queries for 255 seconds.
If the device with a higher IGMP querier source address does not receive any IGMP queries during that
period, it starts sending queries again.
NOTE: QFabric systems support the igmp-querier statement in Junos OS Release 14.1X53-D15,
but do not support it in Junos OS Release 15.1.
To configure a device to act as an IGMP querier, enter the following:
[edit protocols]
user@host# set igmp-snooping vlan vlan-name l2-querier source-address source-address
To configure a QFabric Node device to act as an IGMP querier, enter the following:
[edit protocols]
user@host# set igmp-snooping vlan vlan-name igmp-querier source-address source-address
A PVLAN consists of secondary isolated and community VLANs configured within a primary VLAN.
Without IGMP snooping support on the secondary VLANs, multicast streams received on the primary
VLAN are flooded to the secondary VLANs.
Starting in Junos OS Release 18.3R1, EX4300 switches and EX4300 Virtual Chassis support IGMP
snooping with PVLANs. Starting in Junos OS Release 19.2R1, EX4300 multigigabit model switches
support IGMP snooping with PVLANs. When you enable IGMP snooping on a primary VLAN, you also
implicitly enable it on all secondary VLANs. The device learns and stores multicast group information
on the primary VLAN, and also learns the multicast group information on the secondary VLANs in the
context of the primary VLAN. As a result, the device further constrains multicast streams only to
interested receivers on secondary VLANs, rather than flooding the traffic in all secondary VLANs.
The CLI prevents you from explicitly configuring IGMP snooping on secondary isolated or community
VLANs. You only need to configure IGMP snooping on the primary VLAN under which the secondary
VLANs are defined. For example, for a primary VLAN vlan-pri with a secondary isolated VLAN vlan-iso
and a secondary community VLAN vlan-comm:
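Continuing the example above, you configure IGMP snooping only on the primary VLAN, and the secondary VLANs inherit it implicitly:

```
[edit protocols]
user@host# set igmp-snooping vlan vlan-pri
```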
IGMP reports and leave messages received on secondary VLAN ports are learned in the context of the
primary VLAN. Promiscuous trunk ports or inter-switch links acting as multicast router interfaces for the
PVLAN receive incoming multicast data streams from multicast sources and forward them only to the
secondary VLAN ports with learned multicast group entries.
This feature does not support secondary VLAN ports as multicast router interfaces. The CLI does not
strictly prevent you from statically configuring an interface on a community VLAN as a multicast router
port, but IGMP snooping does not work properly on PVLANs with this configuration. When IGMP
snooping is configured on a PVLAN, the switch also automatically disables dynamic multicast router port
learning on any isolated or community VLAN interfaces. IGMP snooping with PVLANs also does not
support configurations with an IGMP querier on isolated or community VLAN interfaces.
See Understanding Private VLANs and Creating a Private VLAN Spanning Multiple EX Series Switches
with ELS Support (CLI Procedure) for details on configuring PVLANs.
Release Description
19.2R1 Starting in Junos OS Release 19.2R1, EX4300 multigigabit model switches support IGMP snooping
with PVLANs.
18.3R1 Starting in Junos OS Release 18.3R1, EX4300 switches and EX4300 Virtual Chassis support IGMP
snooping with PVLANs.
14.1X53-D15 QFabric systems in Junos OS Release 14.1X53-D15 support the igmp-querier statement, but do
not support this statement in Junos OS 15.1.
IN THIS SECTION
Supported IGMP or MLD Versions and Group Membership Report Modes | 108
Use Case 2: Inter-VLAN Multicast Routing and Forwarding—IRB Interfaces with PIM | 114
Use Case 3: Inter-VLAN Multicast Routing and Forwarding—PIM Gateway with Layer 2 Connectivity | 118
Use Case 4: Inter-VLAN Multicast Routing and Forwarding—PIM Gateway with Layer 3 Connectivity | 121
Use Case 5: Inter-VLAN Multicast Routing and Forwarding—External Multicast Router | 123
Internet Group Management Protocol (IGMP) snooping and Multicast Listener Discovery (MLD)
snooping constrain multicast traffic in a broadcast domain to interested receivers and multicast devices.
In an environment with a significant volume of multicast traffic, using IGMP or MLD snooping preserves
bandwidth because multicast traffic is forwarded only on those interfaces where there are multicast
listeners. IGMP snooping optimizes IPv4 multicast traffic flow. MLD snooping optimizes IPv6 multicast
traffic flow.
Starting with Junos OS Release 17.2R1, QFX10000 switches support IGMP snooping in an Ethernet
VPN (EVPN)-Virtual Extensible LAN (VXLAN) edge-routed bridging overlay (EVPN-VXLAN topology
with a collapsed IP fabric).
Starting with Junos OS Release 17.3R1, QFX10000 switches support the exchange of traffic between
multicast sources and receivers in an EVPN-VXLAN edge-routed bridging overlay, which uses IGMP, and
sources and receivers in an external Protocol Independent Multicast (PIM) domain. A Layer 2 multicast
VLAN (MVLAN) and associated IRB interfaces enable the exchange of multicast traffic between these
two domains.
IGMP snooping support in an EVPN-VXLAN network is available on the following switches in the
QFX5000 line. In releases up until Junos OS Releases 18.4R2 and 19.1R2, with IGMP snooping enabled,
these switches only constrain flooding for multicast traffic coming in on the VXLAN tunnel network
ports; they still flood multicast traffic coming in from an access interface to all other access and network
interfaces:
• Starting with Junos OS Release 18.1R1, QFX5110 switches support IGMP snooping in an EVPN-
VXLAN centrally-routed bridging overlay (EVPN-VXLAN topology with a two-layer IP fabric) for
forwarding multicast traffic within VLANs. You can’t configure IRB interfaces on a VXLAN with IGMP
snooping for forwarding multicast traffic between VLANs. (You can only configure and use IRB
interfaces for unicast traffic.)
• Starting with Junos OS Release 18.4R2 (but not Junos OS Releases 19.1R1 and 19.2R1),
QFX5120-48Y switches support IGMP snooping in an EVPN-VXLAN centrally-routed bridging
overlay.
• Starting with Junos OS Release 19.1R1, QFX5120-32C switches support IGMP snooping in EVPN-
VXLAN centrally-routed and edge-routed bridging overlays.
• Starting in Junos OS Releases 18.4R2 and 19.1R2, selective multicast forwarding is enabled by
default on QFX5110 and QFX5120 switches when you configure IGMP snooping in EVPN-VXLAN
networks, further constraining multicast traffic flooding. With IGMP snooping and selective multicast
forwarding, these switches send the multicast traffic only to interested receivers in both the EVPN
core and on the access side for multicast traffic coming in either from an access interface or an EVPN
network interface.
Starting with Junos OS Release 19.3R1, EX9200 switches, MX Series routers, and vMX virtual routers
support IGMP version 2 (IGMPv2) and IGMP version 3 (IGMPv3), IGMP snooping, selective multicast
forwarding, external PIM gateways, and external multicast routers with an EVPN-VXLAN centrally-
routed bridging overlay.
NOTE: Unless called out explicitly, the information in this topic applies to IGMPv2, IGMPv3,
MLDv1, and MLDv2 on the devices that support these protocols in the following IP fabric
architectures:
NOTE: On a Juniper Networks switching device, for example, a QFX10000 switch, you can
configure a VLAN. On a Juniper Networks routing device, for example, an MX480 router, you can
configure the same entity, which is called a bridge domain. To keep things simple, this topic uses
the term VLAN when referring to the same entity configured on both Juniper Networks
switching and routing devices.
• In an environment with a significant volume of multicast traffic, using IGMP snooping or MLD
snooping constrains multicast traffic in a VLAN to interested receivers and multicast devices, which
conserves network bandwidth.
• Synchronizing the IGMP or MLD state among all EVPN devices for multihomed receivers ensures
that all subscribed listeners receive multicast traffic, even in cases such as the following:
• IGMP or MLD membership reports for a multicast group might arrive on an EVPN device that is
not the Ethernet segment’s designated forwarder (DF).
• An IGMP or MLD message to leave a multicast group arrives at a different EVPN device than the
EVPN device where the corresponding join message for the group was received.
• Selective multicast forwarding conserves bandwidth usage in the EVPN core and reduces the load on
egress EVPN devices that do not have listeners.
• The support of external PIM gateways enables the exchange of multicast traffic between sources and
listeners in an EVPN-VXLAN network and sources and listeners in an external PIM domain. Without
this support, the sources and listeners in these two domains would not be able to communicate.
Table 4 on page 109 outlines the supported IGMP versions and the membership report modes
supported for each version.
To explicitly configure EVPN devices to process only SSM (S,G) membership reports for IGMPv3 or
MLDv2, set the evpn-ssm-reports-only configuration option at the [edit protocols igmp-snooping vlan
vlan-name] hierarchy level.
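For example, with vlan-name as a placeholder for the VLAN you are configuring:

```
[edit protocols]
user@host# set igmp-snooping vlan vlan-name evpn-ssm-reports-only
```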
You can enable SSM-only processing for one or more VLANs in an EVPN routing instance (EVI). When
you enable this option for a routing instance of type virtual-switch, the behavior applies to all VLANs in
the virtual switch instance. When you enable this option, the device doesn’t process ASM reports and
drops them.
If you don’t configure the evpn-ssm-reports-only option, by default, EVPN devices process IGMPv2,
IGMPv3, MLDv1, or MLDv2 ASM reports and drop IGMPv3 or MLDv2 SSM reports.
Table 5 on page 110 provides a summary of the multicast traffic forwarding and routing use cases that
we support in EVPN-VXLAN networks and our recommendation for when you should apply a use case
to your EVPN-VXLAN network.
Table 5: Supported Multicast Traffic Forwarding and Routing Use Cases and Recommended Usage
For example, in a typical EVPN-VXLAN edge-routed bridging overlay, you can implement use case 1 for
intra-VLAN forwarding and use case 2 for inter-VLAN routing and forwarding. Or, if you want an
external multicast router to handle inter-VLAN routing in your EVPN-VXLAN network instead of EVPN
devices with IRB interfaces running PIM, you can implement use case 5 instead of use case 2. If there
are hosts in an existing external PIM domain that you want hosts in your EVPN-VXLAN network to
communicate with, you can also implement use case 3.
When implementing any of the use cases in an EVPN-VXLAN centrally-routed bridging overlay, you can
use a mix of spine devices—for example, MX Series routers, EX9200 switches, and QFX10000 switches.
However, if you do this, keep in mind that the functionality of all spine devices is determined by the
limitations of each spine device. For example, QFX10000 switches support a single routing instance of
type virtual-switch. Although MX Series routers and EX9200 switches support multiple routing
instances of type evpn or virtual-switch, on each of these devices, you would have to configure a single
routing instance of type virtual-switch to interoperate with the QFX10000 switches.
This use case supports the forwarding of multicast traffic to hosts within the same VLAN and includes
the following key features:
• Hosts that are single-homed to an EVPN device or multihomed to more than one EVPN device in all-
active mode.
NOTE: EVPN-VXLAN multicast uses special IGMP and MLD group leave processing to handle
multihomed sources and receivers, so we don’t support the immediate-leave configuration
option in the [edit protocols igmp-snooping] or [edit protocols mld-snooping] hierarchies in
EVPN-VXLAN networks.
• Routing instances:
• (MX Series routers, vMX virtual routers, and EX9200 switches) Multiple routing instances of type
evpn or virtual-switch.
• EVI route target extended community attributes associated with multihomed EVIs. BGP EVPN
Type 7 (Join Synch Route) and Type 8 (Leave Synch Route) routes carry these attributes to
enable the simultaneous support of multiple EVPN routing instances.
For information about another supported extended community, see the “EVPN Multicast Flags
Extended Community” section.
• IGMPv2, IGMPv3, MLDv1 or MLDv2. For information about the membership report modes
supported for each IGMP or MLD version, see Table 4 on page 109. For information about IGMP or
MLD route synchronization between multihomed EVPN devices, see Overview of Multicast
Forwarding with IGMP or MLD Snooping in an EVPN-MPLS Environment.
• IGMP snooping or MLD snooping. Hosts in a network send IGMP reports (for IPv4 traffic) or MLD
reports (for IPv6 traffic) expressing interest in particular multicast groups from multicast sources.
EVPN devices with IGMP snooping or MLD snooping enabled listen to the IGMP or MLD reports,
and use the snooped information on the access side to establish multicast routes that only forward
traffic for a multicast group to interested receivers.
IGMP snooping or MLD snooping supports multicast senders and receivers in the same or different
sites. A site can have either receivers only, sources only, or both senders and receivers attached to it.
• Selective multicast forwarding (advertising EVPN Type 6 Selective Multicast Ethernet Tag (SMET)
routes for forwarding only to interested receivers). This feature enables EVPN devices to selectively
forward multicast traffic to only the devices in the EVPN core that have expressed interest in that
multicast group.
NOTE: We support selective multicast forwarding to devices in the EVPN core only in EVPN-
VXLAN centrally-routed bridging overlays.
When you enable IGMP snooping or MLD snooping, selective multicast forwarding is enabled
by default.
• EVPN devices that do not support IGMP snooping, MLD snooping, and selective multicast
forwarding.
Although you can implement this use case in an EVPN single-homed environment, this use case is
particularly effective in an EVPN multihomed environment with a high volume of multicast traffic.
All multihomed interfaces must have the same configuration, and all multihomed peer EVPN devices
must be in active mode (not standby or passive mode).
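As a sketch, all-active multihoming is typically configured by assigning the same Ethernet segment identifier (ESI) on each peer device's interface toward the multihomed host; the interface name ae0 and the ESI value are illustrative:

```
[edit interfaces ae0]
user@host# set esi 00:11:22:33:44:55:66:77:88:99
user@host# set esi all-active
```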
An EVPN device that initially receives traffic from a multicast source is known as the ingress device. The
ingress device handles the forwarding of intra-VLAN multicast traffic as follows:
• With IGMP snooping or MLD snooping enabled (which also enable selective multicast forwarding on
supporting devices):
• As shown in Figure 9 on page 113, the ingress device (leaf 1) selectively forwards the traffic to
other EVPN devices with access interfaces where there are interested receivers for the same
multicast group.
• The traffic is then selectively forwarded to egress devices in the EVPN core that have advertised
the EVPN Type 6 SMET routes.
• If any EVPN devices do not support IGMP snooping or MLD snooping, or the ability to originate
EVPN Type 6 SMET routes, the ingress device floods multicast traffic to these devices.
• If a host is multihomed to more than one EVPN device, the EVPN devices exchange EVPN Type 7
and Type 8 routes as shown in Figure 9 on page 113. This exchange synchronizes IGMP or MLD
membership reports received on multihomed interfaces to coordinate status from messages that go
to different EVPN devices or in case one of the EVPN devices fails.
NOTE: The EVPN Type 7 and Type 8 routes carry EVI route extended community attributes
to ensure the right EVPN instance gets the IGMP state information on devices with multiple
routing instances. QFX Series switches support IGMP snooping only in the default EVPN
routing instance (default-switch). In Junos OS releases before 17.4R2, 17.3R3, or 18.1R1,
these switches did not include EVI route extended community attributes in Type 7 and Type 8
routes, so they can’t properly synchronize the IGMP state if you also have other routing
instances configured. Starting in Junos OS releases 17.4R2, 17.3R3, and 18.1R1, QFX10000
switches include the EVI route extended community attributes that identify the target routing
instance, and can synchronize IGMP state if IGMP snooping is enabled in the default EVPN
routing instance when other routing instances are configured.
In releases that support MLD and MLD snooping in EVPN-VXLAN fabrics with multihoming,
the same behavior applies to synchronizing the MLD state.
Figure 9: Intra-VLAN Multicast Traffic Flow with IGMP Snooping and Selective Multicast Forwarding
If you have configured IRB interfaces with PIM on one or more of the Layer 3 devices in your EVPN-
VXLAN network (use case 2), note that the ingress device forwards the multicast traffic to the Layer 3
devices. The ingress device takes this action to register itself with the Layer 3 device that acts as the
PIM rendezvous point (RP).
Use Case 2: Inter-VLAN Multicast Routing and Forwarding—IRB Interfaces with PIM
We recommend this basic use case for all EVPN-VXLAN networks except when you prefer to use an
external multicast router to handle inter-VLAN routing (see Use Case 5: Inter-VLAN Multicast Routing
and Forwarding—External Multicast Router).
For this use case, IRB interfaces using Protocol Independent Multicast (PIM) route multicast traffic
between source and receiver VLANs. The EVPN devices on which the IRB interfaces reside then forward
the routed traffic using these key features:
The default behavior of inclusive multicast forwarding is to replicate multicast traffic and flood the
traffic to all devices. For this use case, however, we support inclusive multicast forwarding coupled with
IGMP snooping (or MLD snooping) and selective multicast forwarding. As a result, the multicast traffic is
replicated but selectively forwarded to access interfaces and devices in the EVPN core that have
interested receivers.
For information about the EVPN multicast flags extended community, which Juniper Networks devices
that support EVPN and IGMP snooping (or MLD snooping) include in EVPN Type 3 (Inclusive Multicast
Ethernet Tag) routes, see the “EVPN Multicast Flags Extended Community” section.
In an EVPN-VXLAN centrally-routed bridging overlay, you can configure the spine devices so that some
of them perform inter-VLAN routing and forwarding of multicast traffic and some do not. At a minimum,
we recommend that you configure two spine devices to perform inter-VLAN routing and forwarding.
When there are multiple devices that can perform the inter-VLAN routing and forwarding of multicast
traffic, one device is elected as the designated router (DR) for each VLAN.
In the sample EVPN-VXLAN centrally-routed bridging overlay shown in Figure 10 on page 115, assume
that multicast traffic needs to be routed from source VLAN 100 to receiver VLAN 101. Receiver VLAN
101 is configured on spine 1, which is designated as the DR for that VLAN.
Figure 10: Inter-VLAN Multicast Traffic Flow with IRB Interface and PIM
After the inter-VLAN routing occurs, the EVPN device forwards the routed traffic to:
• Access interfaces that have multicast listeners (IGMP snooping or MLD snooping).
• Egress devices in the EVPN core that have sent EVPN Type 6 SMET routes for the multicast group
members in the receiver VLAN (selective multicast forwarding).
To understand how IGMP snooping (or MLD snooping) and selective multicast forwarding reduce the
impact of the replicating and flooding behavior of inclusive multicast forwarding, assume that an EVPN-
VXLAN centrally-routed bridging overlay includes the following elements:
• 100 IRB interfaces using PIM starting with irb.1 and going up to irb.100
• 100 VLANs
• 20 EVPN devices
For the sample EVPN-VXLAN centrally-routed bridging overlay, m represents the number of VLANs, and
n represents the number of EVPN devices. Assuming that IGMP snooping (or MLD snooping) and
selective multicast forwarding are disabled, when multicast traffic arrives on irb.1, the EVPN device
replicates the traffic m * n times, or 100 * 20 times, which equals 2,000 copies of each packet. If the
incoming traffic rate for a particular multicast group is 100 packets per second (pps), the EVPN device
would have to replicate 200,000 pps for that multicast group.
If IGMP snooping (or MLD snooping) and selective multicast forwarding are enabled in the sample
EVPN-VXLAN centrally-routed bridging overlay, assume that there are interested receivers for a
particular multicast group on only 4 VLANs and 3 EVPN devices. In this case, with m = 4 and n = 3, the
EVPN device replicates the traffic at a rate of 100 pps * 4 * 3, which equals 1,200 pps. Note the
significant reduction in the replication rate and the amount of traffic that must be forwarded.
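The replication arithmetic above can be sketched as a quick calculation (a conceptual model; the numbers come from the sample overlay):

```python
def replication_pps(incoming_pps, num_vlans, num_devices):
    """Inclusive multicast flooding replicates each packet once per
    (VLAN, EVPN device) pair: load = incoming rate * m * n."""
    return incoming_pps * num_vlans * num_devices

# Flooding: 100 pps arriving on irb.1, m = 100 VLANs, n = 20 EVPN devices
flooded = replication_pps(100, 100, 20)    # 200000 pps

# Selective forwarding: interested receivers on only 4 VLANs and 3 devices
selective = replication_pps(100, 4, 3)     # 1200 pps
```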
When implementing this use case, keep in mind that there are important differences for EVPN-VXLAN
centrally-routed bridging overlays and EVPN-VXLAN edge-routed bridging overlays. Table 6 on page
116 outlines these differences.
Table 6: Use Case 2: Important Differences for EVPN-VXLAN Edge-routed and Centrally-routed
Bridging Overlays
EVPN-VXLAN IP Fabric Architecture | Supports a Mix of Juniper Networks Devices? | All EVPN Devices
Required to Host All VLANs in the EVPN-VXLAN Network? | All EVPN Devices Required to Host All
VLANs that Include Multicast Listeners? | Required PIM Configuration
In addition to the differences described in Table 6 on page 116, a hair-pinning issue exists with an
EVPN-VXLAN centrally-routed bridging overlay. Multicast traffic typically flows from a source host to a leaf
device to a spine device, which handles the inter-VLAN routing. The spine device then replicates and
forwards the traffic to VLANs and EVPN devices with multicast listeners. When forwarding the traffic in
this type of EVPN-VXLAN overlay, be aware that the spine device returns the traffic to the leaf device
from which the traffic originated (hair-pinning). This issue is inherent with the design of the EVPN-
VXLAN centrally-routed bridging overlay. When designing your EVPN-VXLAN overlay, keep this issue in
mind especially if you expect the volume of multicast traffic in your overlay to be high and the
replication rate of traffic (m * n times) to be large.
Use Case 3: Inter-VLAN Multicast Routing and Forwarding—PIM Gateway with Layer
2 Connectivity
We recommend the PIM gateway with Layer 2 connectivity use case for both EVPN-VXLAN edge-
routed bridging overlays and EVPN-VXLAN centrally-routed bridging overlays.
• Use this use case when there are multicast sources and receivers within the data center that you want
to communicate with multicast sources and receivers in an external PIM domain.
NOTE: We support this use case with both EVPN-VXLAN edge-routed bridging overlays and
EVPN-VXLAN centrally-routed bridging overlays.
This use case provides a mechanism for the data center, which uses IGMP (or MLD) and PIM, to
exchange multicast traffic with the external PIM domain. Using a Layer 2 multicast VLAN (MVLAN) and
associated IRB interfaces on the EVPN devices in the data center to connect to the PIM domain, you
can enable the forwarding of multicast traffic from internal sources to external receivers, and from
external sources to internal receivers.
NOTE: In this section, external refers to components in the PIM domain. Internal refers to
components in your EVPN-VXLAN network that supports a data center.
Figure 11 on page 119 shows the required key components for this use case in a sample EVPN-VXLAN
centrally-routed bridging overlay.
Figure 11: Use Case 3: PIM Gateway with Layer 2 Connectivity—Key Components
• A PIM gateway that acts as an interface between an existing PIM domain and the EVPN-VXLAN
network. The PIM gateway is a Juniper Networks or third-party Layer 3 device on which PIM and
a routing protocol such as OSPF are configured. The PIM gateway does not run EVPN. You can
connect the PIM gateway to one, some, or all EVPN devices.
• A PIM rendezvous point (RP) is a Juniper Networks or third-party Layer 3 device on which PIM
and a routing protocol such as OSPF are configured. You must also configure the PIM RP to
translate PIM join or prune messages into corresponding IGMP (or MLD) report or leave messages
and then forward those messages to the PIM gateway.
NOTE: These components are in addition to the components already configured for use cases
1 and 2.
• EVPN devices. For redundancy, we recommend multihoming the EVPN devices to the PIM
gateway through an aggregated Ethernet interface on which you configure an Ethernet segment
identifier (ESI). On each EVPN device, you must also configure the following for this use case:
• A Layer 2 multicast VLAN (MVLAN). The MVLAN is a VLAN that is used to connect to the PIM
gateway. PIM is enabled in the MVLAN.
• An MVLAN IRB interface on which you configure PIM, IGMP snooping (or MLD snooping), and
a routing protocol such as OSPF. To reach the PIM gateway, the EVPN device forwards
multicast traffic out of this interface.
• To enable the EVPN devices to forward multicast traffic to the external PIM domain, configure:
• PIM-to-IGMP translation:
For EVPN-VXLAN centrally-routed bridging overlays, you do not need to include the
pim-to-igmp-proxy upstream-interface irb-interface-name or pim-to-mld-proxy upstream-interface
irb-interface-name configuration statements. In this type of overlay, the PIM
protocol handles the routing of multicast traffic from the PIM domain to the EVPN-VXLAN
network and vice versa.
• PIM passive mode. For EVPN-VXLAN edge-routed bridging overlays only, you must ensure that
the PIM gateway views the data center as only a Layer 2 multicast domain. To do so, include the
passive configuration statement at the [edit protocols pim] hierarchy level.
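Taken together, the per-device configuration described above can be sketched as follows. This is an
outline only, not a complete or verified configuration: the MVLAN name (mvlan500), VLAN ID, and IRB
unit number are hypothetical, and the exact hierarchies (for example, vlans versus bridge-domains) vary
by platform:

[edit]
user@switch# set vlans mvlan500 vlan-id 500
user@switch# set vlans mvlan500 l3-interface irb.500
user@switch# set protocols igmp-snooping vlan mvlan500
user@switch# set protocols pim interface irb.500
user@switch# set protocols ospf area 0.0.0.0 interface irb.500

For an EVPN-VXLAN edge-routed bridging overlay only, you would also include set protocols pim
passive so that the PIM gateway views the data center as only a Layer 2 multicast domain.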
Use Case 4: Inter-VLAN Multicast Routing and Forwarding—PIM Gateway with Layer
3 Connectivity
We recommend the PIM gateway with Layer 3 connectivity use case for EVPN-VXLAN centrally-routed
bridging overlays only.
• Use this use case when there are multicast sources and receivers within the data center that you want
to communicate with multicast sources and receivers in an external PIM domain.
NOTE: We recommend the PIM gateway with Layer 3 connectivity use case for EVPN-VXLAN
centrally-routed bridging overlays only.
This use case provides a mechanism for the data center, which uses IGMP (or MLD) and PIM, to
exchange multicast traffic with the external PIM domain. Using Layer 3 interfaces on the EVPN devices
in the data center to connect to the PIM domain, you can enable the forwarding of multicast traffic
from internal sources to external receivers, and from external sources to internal receivers.
NOTE: In this section, external refers to components in the PIM domain. Internal refers to
components in your EVPN-VXLAN network that supports a data center.
Figure 12 on page 122 shows the required key components for this use case in a sample EVPN-VXLAN
centrally-routed bridging overlay.
Figure 12: Use Case 4: PIM Gateway with Layer 3 Connectivity—Key Components
• A PIM gateway that acts as an interface between an existing PIM domain and the EVPN-VXLAN
network. The PIM gateway is a Juniper Networks or third-party Layer 3 device on which PIM and
a routing protocol such as OSPF are configured. The PIM gateway does not run EVPN. You can
connect the PIM gateway to one, some, or all EVPN devices.
• A PIM rendezvous point (RP) is a Juniper Networks or third-party Layer 3 device on which PIM
and a routing protocol such as OSPF are configured. You must also configure the PIM RP to
translate PIM join or prune messages into corresponding IGMP or MLD report or leave messages
and then forward those messages to the PIM gateway.
NOTE: These components are in addition to the components already configured for use cases
1 and 2.
• EVPN devices. You can connect one, some, or all EVPN devices to a PIM gateway. You must make
each connection through a Layer 3 interface on which PIM is configured. Other than the Layer 3
interface with PIM, this use case does not require additional configuration on the EVPN devices.
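As a sketch, the only EVPN-device configuration this use case adds is a PIM-enabled Layer 3 interface
toward the PIM gateway. The interface name and address here are hypothetical:

[edit]
user@switch# set interfaces xe-0/0/10 unit 0 family inet address 198.51.100.1/30
user@switch# set protocols pim interface xe-0/0/10.0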
Starting with Junos OS Release 17.3R1, you can configure an EVPN device to perform inter-VLAN
forwarding of multicast traffic without having to configure IRB interfaces on the EVPN device. In such a
scenario, an external multicast router is used to send IGMP or MLD queries to solicit reports and to
forward VLAN traffic through a Layer 3 multicast protocol such as PIM. IRB interfaces are not supported
with the use of an external multicast router.
For this use case, you must include the igmp-snooping proxy or mld-snooping proxy configuration
statements at the [edit routing-instances routing-instance-name protocols vlan vlan-name] hierarchy
level.
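For example, assuming a routing instance named evpn-vs and a VLAN named v100 (both names
hypothetical, and the exact placement of the proxy statement can vary by platform and release):

[edit]
user@switch# set routing-instances evpn-vs protocols igmp-snooping vlan v100 proxy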
Juniper Networks devices that support EVPN-VXLAN and IGMP snooping also support the EVPN
multicast flags extended community. When you have enabled IGMP snooping on one of these devices,
the device adds the community to EVPN Type 3 (Inclusive Multicast Ethernet Tag) routes.
The absence of this community in an EVPN Type 3 route can indicate the following about the device
that advertises the route:
• The device is running a Junos OS software release that doesn’t support the community.
• The device does not support the advertising of EVPN Type 6 Selective Multicast Ethernet Tag (SMET) routes.
• The device has IGMP snooping and a Layer 3 interface with PIM enabled on it. Although the Layer 3
interface with PIM performs snooping on the access side and selective multicast forwarding on the
EVPN core, the device needs to attract all traffic to perform source registration to the PIM RP and
inter-VLAN routing.
The behavior described above also applies to devices that support EVPN-VXLAN with MLD and MLD
snooping.
Figure 13 on page 124 shows the EVPN multicast flags extended community, which has the following
characteristics:
• The IGMP Proxy Support flag is set to 1, which means that the device supports IGMP proxy.
The same applies to the MLD Proxy Support flag; if that flag is set to 1, the device supports MLD
proxy. Either or both flags might be set.
Release Description
20.4R1 Starting in Junos OS Release 20.4R1, in EVPN-VXLAN centrally-routed bridging overlay fabrics,
QFX5110, QFX5120, and the QFX10000 line of switches support IGMPv3 with IGMP snooping for IPv4
multicast traffic, and MLD version 1 (MLDv1) and MLD version 2 (MLDv2) with MLD snooping for IPv6
multicast traffic.
19.3R1 Starting with Junos OS Release 19.3R1, EX9200 switches, MX Series routers, and vMX virtual routers
support IGMP version 2 (IGMPv2) and IGMP version 3 (IGMPv3), IGMP snooping, selective multicast
forwarding, external PIM gateways, and external multicast routers with an EVPN-VXLAN centrally-
routed bridging overlay.
19.1R1 Starting with Junos OS Release 19.1R1, QFX5120-32C switches support IGMP snooping in EVPN-
VXLAN centrally-routed and edge-routed bridging overlays.
18.4R2 Starting with Junos OS Release 18.4R2 (but not Junos OS Releases 19.1R1 and 19.2R1), QFX5120-48Y
switches support IGMP snooping in an EVPN-VXLAN centrally-routed bridging overlay.
18.4R2 Starting in Junos OS Releases 18.4R2 and 19.1R2, selective multicast forwarding is enabled by default
on QFX5110 and QFX5120 switches when you configure IGMP snooping in EVPN-VXLAN networks,
further constraining multicast traffic flooding. With IGMP snooping and selective multicast forwarding,
these switches send the multicast traffic only to interested receivers in both the EVPN core and on the
access side for multicast traffic coming in either from an access interface or an EVPN network interface.
18.1R1 Starting with Junos OS Release 18.1R1, QFX5110 switches support IGMP snooping in an EVPN-VXLAN
centrally-routed bridging overlay (EVPN-VXLAN topology with a two-layer IP fabric) for forwarding
multicast traffic within VLANs.
17.3R1 Starting with Junos OS Release 17.3R1, QFX10000 switches support the exchange of traffic between
multicast sources and receivers in an EVPN-VXLAN edge-routed bridging overlay, which uses IGMP, and
sources and receivers in an external Protocol Independent Multicast (PIM) domain. A Layer 2 multicast
VLAN (MVLAN) and associated IRB interfaces enable the exchange of multicast traffic between these
two domains.
17.3R1 Starting with Junos OS Release 17.3R1, you can configure an EVPN device to perform inter-VLAN
forwarding of multicast traffic without having to configure IRB interfaces on the EVPN device.
17.2R1 Starting with Junos OS Release 17.2R1, QFX10000 switches support IGMP snooping in an Ethernet
VPN (EVPN)-Virtual Extensible LAN (VXLAN) edge-routed bridging overlay (EVPN-VXLAN topology
with a collapsed IP fabric).
RELATED DOCUMENTATION
distributed-dr
igmp-snooping
mld-snooping
multicast-router-interface
Example: Preserving Bandwidth with IGMP Snooping in an EVPN-VXLAN Environment
Internet Group Management Protocol (IGMP) snooping constrains the flooding of IPv4 multicast traffic
on VLANs on a device. With IGMP snooping enabled, the device monitors IGMP traffic on the network
and uses what it learns to forward multicast traffic to only the downstream interfaces that are
connected to interested receivers. The device conserves bandwidth by sending multicast traffic only to
interfaces connected to devices that want to receive the traffic, instead of flooding the traffic to all the
downstream interfaces in a VLAN.
NOTE: You cannot configure IGMP snooping on a secondary (private) VLAN (PVLAN). However,
starting in Junos OS Release 18.3R1 on EX4300 switches and EX4300 Virtual Chassis, and Junos
OS Release 19.2R1 on EX4300 multigigabit switches, you can configure the vlan statement at
the [edit protocols igmp-snooping] hierarchy level with a primary VLAN, which implicitly enables
IGMP snooping on its secondary VLANs and avoids flooding multicast traffic on PVLANs. See
"IGMP Snooping on Private VLANs (PVLANs)" on page 98 for details.
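On the platforms and releases cited in the note, the configuration can be sketched as follows, where
pvlan100 is a hypothetical primary VLAN name:

[edit]
user@switch# set protocols igmp-snooping vlan pvlan100

IGMP snooping is then implicitly enabled on the secondary VLANs of pvlan100.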
NOTE: Starting in Junos OS Releases 14.1X53 and 15.2, QFabric Systems support the igmp-
querier statement to configure a Node device as an IGMP querier.
The default factory configuration on legacy EX Series switches enables IGMP snooping on all
VLANs. In this case, you don’t need any other configuration for IGMP snooping to work. However, if you
want IGMP snooping enabled only on some VLANs, you can either disable the feature on all VLANs and
then enable it selectively on the desired VLANs, or simply disable the feature selectively on those where
you do not want IGMP snooping. You can also customize other available IGMP snooping options.
TIP: When you configure IGMP snooping using the vlan all statement (where supported), any
VLAN that is not individually configured for IGMP snooping inherits the vlan all configuration.
Any VLAN that is individually configured for IGMP snooping, on the other hand, does not inherit
the vlan all configuration. Any parameters that are not explicitly defined for the individual VLAN
assume their default values, not the values specified in the vlan all configuration. For example, in
the following configuration:
protocols {
igmp-snooping {
vlan all {
robust-count 8;
}
vlan employee-vlan {
interface ge-0/0/8.0 {
static {
group 233.252.0.1;
}
}
}
}
}
all VLANs except employee-vlan have a robust count of 8. Because you individually configured
employee-vlan, its robust count value is not determined by the value set under vlan all. Instead,
its robust-count value is 2, the default value.
On switches without IGMP snooping enabled in the default factory configuration, you must explicitly
enable IGMP snooping and configure any other of the available IGMP snooping options you want on a
VLAN.
Use the following configuration steps as needed for your network to enable IGMP snooping on all
VLANs (where supported), enable or disable IGMP snooping selectively on a VLAN, and configure
available IGMP snooping options:
1. To enable IGMP snooping on all VLANs (where supported, such as on some EX Series switches):
[edit protocols]
user@switch# set igmp-snooping vlan all
NOTE: The default factory configuration on legacy EX Series switches has IGMP snooping
enabled on all VLANs.
Or disable IGMP snooping on all VLANs (where supported, such as on some EX Series switches):
[edit protocols]
user@switch# set igmp-snooping vlan all disable
2. To enable IGMP snooping on a specified VLAN, for example, on a VLAN named employee-vlan:
[edit protocols]
user@switch# set igmp-snooping vlan employee-vlan
3. To configure the switch to immediately remove group memberships from interfaces on a VLAN when
it receives a leave message through that VLAN, so it doesn’t forward any membership queries for the
multicast group to the VLAN (IGMPv2 only):
[edit protocols]
user@switch# set igmp-snooping vlan vlan-name immediate-leave
4. To configure an interface as a static member of a multicast group on a VLAN:
[edit protocols]
user@switch# set igmp-snooping vlan vlan-name interface interface-name static group group-address
5. To statically configure an interface on a VLAN as a multicast-router interface:
[edit protocols]
user@switch# set igmp-snooping vlan vlan-name interface interface-name multicast-router-interface
6. To change the default number of timeout intervals the device waits before timing out and removing a
multicast group on a VLAN:
[edit protocols]
user@switch# set igmp-snooping vlan vlan-name robust-count number
7. To configure the device to act as an IGMP querier on a VLAN, sending IGMP queries with the
specified source address (where the l2-querier statement is supported):
[edit protocols]
user@switch# set igmp-snooping vlan vlan-name l2-querier source-address source-address
Or on QFabric Systems only, if you want a QFabric Node device to act as an IGMP querier, enter the
following:
[edit protocols]
user@switch# set igmp-snooping vlan vlan-name igmp-querier source-address source-address
The switch sends IGMP queries with the configured source address. To ensure this switch is always
the IGMP querier on the network, make sure the source address is lower (a smaller number) than the
IP addresses of any other multicast routers on the same local network.
Release Description
14.1X53 Starting in Junos OS Releases 14.1X53 and 15.2, QFabric Systems support the igmp-querier statement
to configure a Node device as an IGMP querier.
RELATED DOCUMENTATION
IN THIS SECTION
Requirements | 129
Configuration | 132
You can enable IGMP snooping on a VLAN to constrain the flooding of IPv4 multicast traffic on that VLAN.
When IGMP snooping is enabled, a switch examines IGMP messages between hosts and multicast
routers and learns which hosts are interested in receiving multicast traffic for a multicast group. Based
on what it learns, the switch then forwards multicast traffic only to those interfaces connected to
interested receivers instead of flooding the traffic to all interfaces.
Requirements
This example uses the following software and hardware components:
IN THIS SECTION
Topology | 131
In this example, interfaces ge-0/0/0, ge-0/0/1, and ge-0/0/2 on the switch are in vlan100 and are
connected to hosts that are potential multicast receivers. Interface ge-0/0/12, a trunk interface also in
vlan100, is connected to a multicast router. The router acts as the IGMP querier and forwards multicast
traffic for group 225.100.100.100 to the switch from a multicast source.
Topology
In this example topology, the multicast router forwards multicast traffic to the switch from the source
when it receives a membership report for group 225.100.100.100 from one of the hosts—for example,
Host B. If IGMP snooping is not enabled on vlan100, the switch floods the multicast traffic on all
interfaces in vlan100 (except for interface ge-0/0/12). If IGMP snooping is enabled on vlan100, the
switch monitors the IGMP messages between the hosts and router, allowing it to determine that only
Host B is interested in receiving the multicast traffic. The switch then forwards the multicast traffic only
to interface ge-0/0/1.
IGMP snooping is enabled on all VLANs in the default factory configuration. For many implementations,
IGMP snooping requires no additional configuration. This example shows how to perform the following
optional configurations, which can reduce group join and leave latency:
• Configure immediate leave on the VLAN. When immediate leave is configured, the switch stops
forwarding multicast traffic on an interface when it detects that the last member of the multicast
group has left the group. If immediate leave is not configured, the switch waits until the group-
specific queries time out before it stops forwarding traffic.
Immediate leave is supported by IGMP version 2 (IGMPv2) and IGMPv3. With IGMPv2, we
recommend that you configure immediate leave only when there is only one IGMP host on an
interface. In IGMPv2, only one host on an interface sends a membership report in response to a
group-specific query—any other interested hosts suppress their reports to avoid a flood of reports for
the same group. This report-suppression feature means that the switch only knows about one
interested host at any given time.
• Configure ge-0/0/12 as a static multicast-router interface. In this topology, ge-0/0/12 always leads
to the multicast router. By statically configuring ge-0/0/12 as a multicast-router interface, you avoid
any delay imposed by the switch having to learn that ge-0/0/12 is a multicast-router interface.
Configuration
IN THIS SECTION
Procedure | 132
Procedure
To quickly configure IGMP snooping, copy the following commands and paste them into the switch
terminal window:
[edit]
set protocols igmp-snooping vlan vlan100 immediate-leave
set protocols igmp-snooping vlan vlan100 interface ge-0/0/12 multicast-router-interface
Step-by-Step Procedure
1. Configure the switch to immediately remove a group membership from an interface when it receives
a leave report from the last member of the group on the interface:
[edit protocols]
user@switch# set igmp-snooping vlan vlan100 immediate-leave
2. Configure interface ge-0/0/12 as a static multicast-router interface:
[edit protocols]
user@switch# set igmp-snooping vlan vlan100 interface ge-0/0/12 multicast-router-interface
Results
[edit protocols]
user@switch# show igmp-snooping
vlan all;
vlan vlan100 {
immediate-leave;
interface ge-0/0/12.0 {
multicast-router-interface;
}
}
To verify that IGMP snooping is operating as configured, perform the following task:
Purpose
Verify that IGMP snooping is enabled on vlan100 and that ge-0/0/12 is recognized as a multicast-router
interface.
Action
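One way to check is with the show igmp snooping membership command described later in this guide,
limited to the VLAN of interest (supported command options vary by platform and release):

user@switch> show igmp snooping membership vlan vlan100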
Meaning
By showing information for vlan100, the command output confirms that IGMP snooping is configured
on the VLAN. Interface ge-0/0/12.0 is listed as multicast-router interface, as configured. Because none
of the host interfaces are listed, none of the hosts are currently receivers for the multicast group.
RELATED DOCUMENTATION
IN THIS SECTION
Requirements | 135
Configuration | 136
Internet Group Management Protocol (IGMP) snooping constrains the flooding of IPv4 multicast traffic
on VLANs on a device. With IGMP snooping enabled, the device monitors IGMP traffic on the network
and uses what it learns to forward multicast traffic to only the downstream interfaces that are
connected to interested receivers. The device conserves bandwidth by sending multicast traffic only to
interfaces connected to devices that want to receive the traffic, instead of flooding the traffic to all the
downstream interfaces in a VLAN.
Requirements
This example requires Junos OS Release 11.1 or later on a QFX Series product.
IN THIS SECTION
Topology | 135
In this example you configure an interface to receive multicast traffic from a source and configure some
multicast-related behavior for downstream interfaces. The example assumes that IGMP snooping was
previously disabled for the VLAN.
Topology
Table 7 on page 135 shows the components of the topology for this example.
(Columns: Components | Settings)
Configuration
IN THIS SECTION
Procedure | 136
Procedure
To quickly configure IGMP snooping, copy the following commands and paste them into a terminal
window:
[edit protocols]
set igmp-snooping vlan employee-vlan
set igmp-snooping vlan employee-vlan interface ge-0/0/3 static group 225.100.100.100
set igmp-snooping vlan employee-vlan interface ge-0/0/2 multicast-router-interface
set igmp-snooping vlan employee-vlan robust-count 4
Step-by-Step Procedure
1. Enable IGMP snooping on the VLAN:
[edit protocols]
user@switch# set igmp-snooping vlan employee-vlan
2. Configure interface ge-0/0/3 as a static member of multicast group 225.100.100.100:
[edit protocols]
user@switch# set igmp-snooping vlan employee-vlan interface ge-0/0/3 static group 225.100.100.100
3. Configure interface ge-0/0/2 as a static multicast-router interface:
[edit protocols]
user@switch# set igmp-snooping vlan employee-vlan interface ge-0/0/2 multicast-router-interface
4. Configure the switch to wait for four timeout intervals before timing out a multicast group on a
VLAN:
[edit protocols]
user@switch# set igmp-snooping vlan employee-vlan robust-count 4
Results
RELATED DOCUMENTATION
The IGMP snooping group timeout value determines how long a switch waits to receive an IGMP query
from a multicast router before removing a multicast group from its multicast cache table. A switch
calculates the timeout value by using the query-interval and query-response-interval values.
When you enable IGMP snooping, the query-interval and query-response-interval values are applied to
all VLANs on the switch. The values are:
• query-interval—125 seconds
• query-response-interval—10 seconds
The switch automatically calculates the group timeout value for an IGMP snooping-enabled switch by
multiplying the query-interval value by 2 (the default robust-count value) and then adding the query-
response-interval value. By default, the switch waits 260 seconds to receive an IGMP query before
removing a multicast group from its multicast cache table: (125 x 2) + 10 = 260.
You can modify the group timeout value by changing the robust-count value. For example, if you want
the system to wait 510 seconds before timing groups out—(125 x 4) + 10 = 510—enter this command:
[edit protocols]
user@switch# set igmp-snooping vlan employee-vlan robust-count 4
RELATED DOCUMENTATION
IN THIS SECTION
Purpose | 139
Action | 139
Meaning | 140
Purpose
Use the monitoring feature to view status and information about the IGMP snooping configuration.
Action
To display details about IGMP snooping, enter the following operational commands:
• show igmp snooping interface—Display information about interfaces enabled with IGMP snooping,
including which interfaces are being snooped in a learning domain and the number of groups on each
interface.
• show igmp snooping membership—Display IGMP snooping membership information, including the
multicast group address and the number of active multicast groups.
• show igmp snooping options—Display brief or detailed information about IGMP snooping.
• show igmp snooping statistics—Display IGMP snooping statistics, including the number of messages
sent and received.
The show igmp snooping interface, show igmp snooping membership, and show igmp snooping
statistics commands also support the following options:
• instance instance-name
• interface interface-name
• qualified-vlan vlan-identifier
• vlan vlan-name
Meaning
Field Values:
• Next-Hop—Next hop assigned by the switch after performing the route lookup.
RELATED DOCUMENTATION
IN THIS SECTION
Internet Group Management Protocol (IGMP) snooping constrains the flooding of IPv4 multicast traffic
on VLANs on a switch. This topic describes how to verify IGMP snooping operation on the switch.
It covers:
IN THIS SECTION
Purpose | 141
Action | 141
Meaning | 142
Purpose
Determine group memberships, multicast-router interfaces, host IGMP versions, and the current values
of timeout counters.
Action
Group: 233.252.0.1
ge-1/0/17.0 259 Last reporter: 10.0.0.90 Receiver count: 1
Uptime: 00:00:19 timeout: 259 Flags: <V3-hosts>
Include source: 10.2.11.5, 10.2.11.12
Meaning
The switch has multicast membership information for one VLAN on the switch, vlan2. IGMP snooping
might be enabled on other VLANs, but the switch does not have any multicast membership information
for them. The following information is provided:
• Information on the multicast-router interfaces for the VLAN—in this case, ge-1/0/0.0. The multicast-
router interface has been learned by IGMP snooping, as indicated by the dynamic value. The timeout
value shows how many seconds from now the interface will be removed from the multicast
forwarding table if the switch does not receive IGMP queries or Protocol Independent Multicast
(PIM) updates on the interface.
• Currently, the VLAN has membership in only one multicast group, 233.252.0.1.
• The host or hosts that have reported membership in the group are on interface ge-1/0/17.0. The
last host that reported membership in the group has address 10.0.0.90. The number of hosts
belonging to the group on the interface is shown in the Receiver count field, which is displayed
only when host tracking is enabled (that is, when immediate leave is configured on the VLAN).
• The Uptime field shows that the multicast group has been active on the interface for 19 seconds.
The interface group membership will time out in 259 seconds if no hosts respond to membership
queries during this interval. The Flags field shows the lowest version of IGMP used by a host that
is currently a member of the group, which in this case is IGMP version 3 (IGMPv3).
• Because the interface has IGMPv3 hosts on it, the source addresses from which the IGMPv3
hosts want to receive group multicast traffic are shown (addresses 10.2.11.5 and 10.2.11.12). The
timeout value for the interface group membership is derived from the largest timeout value for all
source addresses for the group.
IN THIS SECTION
Purpose | 143
Action | 143
Meaning | 143
Purpose
Display IGMP snooping statistics, such as number of IGMP queries, reports, and leaves received and
how many of these IGMP messages contained errors.
Action
Meaning
The output shows how many IGMP messages of each type—Queries, Reports, Leaves—the switch
received or transmitted on interfaces on which IGMP snooping is enabled. For each message type, it also
shows the number of IGMP packets the switch received that had errors—for example, packets that do
not conform to the IGMPv1, IGMPv2, or IGMPv3 standards. If the Recv Errors count increases, verify
that the hosts are compliant with IGMP standards. If the switch is unable to recognize the IGMP
message type for a packet, it counts the packet under Receive unknown.
IN THIS SECTION
Purpose | 144
Action | 144
Meaning | 144
Purpose
Action
Meaning
The output shows the next-hop interfaces for a given multicast group on a VLAN.
RELATED DOCUMENTATION
Routers can handle both Layer 2 and Layer 3 addressing information because the frame and its
addresses must be processed to access the encapsulated packet inside. Routers can run Layer 3
multicast protocols such as PIM or IGMP and determine where to forward multicast content or when a
host on an interface joins or leaves a group. However, bridges and LAN switches, as Layer 2 devices, are
not supposed to have access to the multicast information inside the packets that their frames carry.
How then are bridges and other Layer 2 devices to determine when a device on an interface joins or
leaves a multicast tree, or whether a host on an attached LAN wants to receive the content of a
particular multicast group?
The answer is for the Layer 2 device to implement multicast snooping. Multicast snooping is a general
term and applies to the process of a Layer 2 device “snooping” at the Layer 3 packet content to
determine which actions are taken to process or forward a frame. There are more specific forms of
snooping, such as IGMP snooping or PIM snooping. In all cases, snooping involves a device configured to
function at Layer 2 having access to normally “forbidden” Layer 3 (packet) information. Snooping makes
multicasting more efficient in these devices.
SEE ALSO
Layer 2 devices (LAN switches or bridges) handle multicast packets and the frames that contain them
much the same way that Layer 3 devices (routers) handle broadcasts. So, a Layer 2 switch processes an
arriving frame having a multicast destination media access control (MAC) address by forwarding a copy
of the packet (frame) onto each of the other network interfaces of the switch that are in a forwarding
state.
However, this approach (sending multicast frames everywhere the device can) is not the most efficient
use of network bandwidth, particularly for IPTV applications. IGMP snooping functions by “snooping” at
the IGMP packets received by the switch interfaces and building a multicast database similar to that a
multicast router builds in a Layer 3 network. Using this database, the switch can forward multicast traffic
only onto downstream interfaces with interested receivers, and this technique allows more efficient use
of network bandwidth.
You configure IGMP snooping for each bridge on the router. A bridge instance without qualified learning
has just one learning domain. For a bridge instance with qualified learning, snooping will function
separately within each learning domain in the bridge. That is, IGMP snooping and multicast forwarding
will proceed independently in each learning domain in the bridge.
This discussion focuses on bridge instances without qualified learning (those forming one learning
domain on the device). Therefore, all the interfaces mentioned are logical interfaces of the bridge or
VPLS instance.
• Bridge or VPLS instance interfaces are either multicast-router interfaces or host-side interfaces.
NOTE: When integrated routing and bridging (IRB) is used, if the router is an IGMP querier, any
leave message received on any Layer 2 interface will cause a group-specific query on all Layer 2
interfaces (as a result of this practice, some corresponding reports might be received on all
Layer 2 interfaces). However, if some of the Layer 2 interfaces are also router (Layer 3)
interfaces, reports and leaves from other Layer 2 interfaces will not be forwarded on those
interfaces.
If an IRB interface is used as an outgoing interface in a multicast forwarding cache entry (as determined by the routing process), then the output interface list is expanded into a subset of the Layer 2 interfaces in the corresponding bridge. The subset is based on the snooped multicast membership information, according to the multicast forwarding cache entry installed by the snooping process for the bridge.
If no snooping is configured, the IRB output interface list is expanded to all Layer 2 interfaces in the
bridge.
The Junos OS does not support IGMP snooping in a VPLS configuration on a virtual switch. This
configuration is disallowed in the CLI.
All other interfaces that are not multicast-router interfaces are considered host-side interfaces.
Any multicast traffic received on a bridge interface with IGMP snooping configured is forwarded according to the following rules:
• Any IGMP packet is sent to the Routing Engine for snooping processing.
• Other multicast traffic with a destination address in 224.0.0.0/24 is flooded onto all other interfaces of the bridge.
• Other multicast traffic is sent to all the multicast-router interfaces but only to those host-side
interfaces that have hosts interested in receiving that multicast group.
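These forwarding rules can be sketched as a simple decision function. This is an illustrative model only, not the Junos implementation; the interface names and group address are hypothetical, and the first rule (punting IGMP packets themselves to the Routing Engine) is noted in a comment rather than modeled:

```python
from ipaddress import ip_address, ip_network

LINK_LOCAL = ip_network("224.0.0.0/24")  # always flooded within the bridge

def forward_targets(dest, in_ifc, mrouter_ifcs, members, all_ifcs):
    """Return the interfaces a multicast frame is copied to.
    (IGMP packets themselves would first be sent to the Routing Engine
    for snooping processing and never reach this decision.)"""
    others = all_ifcs - {in_ifc}
    if ip_address(dest) in LINK_LOCAL:
        return others                                # flood 224.0.0.0/24
    interested = members.get(dest, set())            # snooped receivers
    return (mrouter_ifcs | interested) & others      # mrouters + hosts

# Hypothetical bridge: ge-0/0/0.0 faces the multicast router.
ifcs = {"ge-0/0/0.0", "ge-0/0/1.0", "ge-0/0/2.0", "ge-0/0/3.0"}
mrouters = {"ge-0/0/0.0"}
members = {"233.252.0.100": {"ge-0/0/2.0"}}

# Group traffic arriving from the router reaches only the interested host:
print(forward_targets("233.252.0.100", "ge-0/0/0.0", mrouters, members, ifcs))
# {'ge-0/0/2.0'}
```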
• Query—All general and group-specific IGMP query messages received on a multicast-router interface
are forwarded to all other interfaces (both multicast-router interfaces and host-side interfaces) on
the bridge.
• Report—IGMP reports received on any interface of the bridge are forwarded toward other multicast-
router interfaces. The receiving interface is added as an interface for that group if a multicast routing
entry exists for this group. Also, a group timer is set for the group on that interface. If this timer
expires (that is, there was no report for this group during the IGMP group timer period), then the
interface is removed as an interface for that group.
• Leave—IGMP leave messages received on any interface of the bridge are forwarded toward other
multicast-router interfaces on the bridge. The Leave Group message reduces the time it takes for the
multicast router to stop forwarding multicast traffic when there are no longer any members in the
host group.
Proxy snooping reduces the number of IGMP reports sent toward an IGMP router.
NOTE: With proxy snooping configured, an IGMP router is not able to perform host tracking.
As proxy for its host-side interfaces, IGMP snooping in proxy mode replies to the queries it receives
from an IGMP router on a multicast-router interface. On the host-side interfaces, IGMP snooping in
proxy mode behaves as an IGMP router and sends general and group-specific queries on those
interfaces.
NOTE: Only group-specific queries are generated by IGMP snooping directly. General queries
received from the multicast-router interfaces are flooded to host-side interfaces.
All the queries generated by IGMP snooping are sent using 0.0.0.0 as the source address. Also, all
reports generated by IGMP snooping are sent with 0.0.0.0 as the source address unless there is a
configured source address to use.
Proxy mode functions differently on multicast-router interfaces than it does on host-side interfaces.
Besides replying to queries, IGMP snooping in proxy mode forwards all queries, reports, and leaves
received on a multicast-router interface to other multicast-router interfaces. IGMP snooping keeps the
membership information learned on this interface but does not send a group-specific query for leave
messages received on this interface. It simply times out the groups learned on this interface if there are
no reports for the same group within the timer duration.
NOTE: For the hosts on all the multicast-router interfaces, it is the IGMP router, not the IGMP
snooping proxy, that generates general and group-specific queries.
If a group is removed from a host-side interface and this was the last host-side interface for that group, a
leave is sent to the multicast-router interfaces. If a group report is received on a host-side interface and
this was the first host-side interface for that group, a report is sent to all multicast-router interfaces.
igmp-snooping {
immediate-leave;
interface interface-name {
group-limit limit;
host-only-interface;
immediate-leave;
multicast-router-interface;
static {
group ip-address {
source ip-address;
}
}
}
proxy {
source-address ip-address;
}
query-interval seconds;
query-last-member-interval seconds;
query-response-interval seconds;
robust-count number;
vlan vlan-id {
immediate-leave;
interface interface-name {
group-limit limit;
host-only-interface;
immediate-leave;
multicast-router-interface;
static {
group ip-address {
source ip-address;
}
}
}
proxy {
source-address ip-address;
}
query-interval seconds;
query-last-member-interval seconds;
query-response-interval seconds;
robust-count number;
}
}
By default, IGMP snooping is not enabled. Statements configured at the VLAN level apply only to that
particular VLAN.
vlan vlan-id {
immediate-leave;
interface interface-name {
group-limit limit;
host-only-interface;
multicast-router-interface;
static {
group ip-address {
source ip-address;
}
}
}
proxy {
source-address ip-address;
}
query-interval seconds;
query-last-member-interval seconds;
query-response-interval seconds;
robust-count number;
}
IN THIS SECTION
Requirements | 153
Configuration | 157
Verification | 161
This example shows how to configure IGMP snooping. IGMP snooping can reduce unnecessary traffic
from IP multicast applications.
Requirements
• Configure the interfaces. See the Interfaces User Guide for Security Devices.
• Configure an interior gateway protocol. See the Junos OS Routing Protocols Library for Routing
Devices.
• Configure a multicast protocol. This feature works with the following multicast protocols:
• DVMRP
• PIM-DM
• PIM-SM
• PIM-SSM
IN THIS SECTION
Topology | 156
IGMP snooping controls multicast traffic in a switched network. When IGMP snooping is not enabled, the Layer 2 device floods multicast traffic out of all of its ports, even if the hosts on the network do not want the multicast traffic. With IGMP snooping enabled, a Layer 2 device monitors the IGMP join
and leave messages sent from each connected host to a multicast router. This enables the Layer 2
device to keep track of the multicast groups and associated member ports. The Layer 2 device uses this
information to make intelligent decisions and to forward multicast traffic to only the intended
destination hosts.
• proxy—Enables the Layer 2 device to actively filter IGMP packets to reduce load on the multicast
router. Joins and leaves heading upstream to the multicast router are filtered so that the multicast
router has a single entry for the group, regardless of how many active listeners have joined the
group. When a listener leaves a group but other listeners remain in the group, the leave message is
filtered because the multicast router does not need this information. The status of the group remains
the same from the router's point of view.
• immediate-leave—When only one IGMP host is connected, the immediate-leave statement enables
the multicast router to immediately remove the group membership from the interface and suppress
the sending of any group-specific queries for the multicast group.
When you configure this feature on IGMPv2 interfaces, ensure that the IGMP interface has only one
IGMP host connected. If more than one IGMPv2 host is connected to a LAN through the same
interface, and one host sends a leave message, the router removes all hosts on the interface from the
multicast group. The router loses contact with the hosts that properly remain in the multicast group
until they send join requests in response to the next general multicast listener query from the router.
When IGMP snooping is enabled on a router running IGMP version 3 (IGMPv3), after the router receives a report with the type BLOCK_OLD_SOURCES, the router suppresses the sending of group-and-source queries and relies on the Junos OS host-tracking mechanism to determine whether to remove a particular source-group membership from the interface.
• query-interval—Enables you to change the number of IGMP messages sent on the subnet by
configuring the interval at which the IGMP querier router sends general host-query messages to
solicit membership information.
By default, the query interval is 125 seconds. You can configure any value in the range 1 through
1024 seconds.
• query-last-member-interval—Configures the last-member query interval, the maximum amount of time between group-specific query messages, including those sent in response to leave-group messages.
By default, the last-member query interval is 1 second. You can configure any value in the range 0.1
through 0.9 seconds, and then 1-second intervals from 1 through 1024 seconds.
• query-response-interval—Configures how long the router waits to receive a response from its host-
query messages.
By default, the query response interval is 10 seconds. You can configure any value in the range 1
through 1024 seconds. This interval should be less than the interval set in the query-interval
statement.
• robust-count—Provides fine-tuning to allow for expected packet loss on a subnet. The robust count is the number of query intervals the device waits for reports before timing out a group. Use a higher value if packet loss on the subnet is high and IGMP report messages might be lost.
By default, the robust count is 2. You can configure any value in the range 2 through 10 intervals.
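To see how the robust count interacts with the query timers, the group timeout follows the standard IGMP group membership interval from RFC 3376 (robust count × query interval + query response interval). A quick check with the Junos defaults:

```python
def group_membership_interval(robust_count, query_interval, query_response_interval):
    """RFC 3376 group membership interval: how long a group entry survives
    with no reports received before it is timed out (all values in seconds)."""
    return robust_count * query_interval + query_response_interval

# Junos defaults: robust-count 2, query-interval 125 s, query-response-interval 10 s
print(group_membership_interval(2, 125, 10))  # 260 seconds
```

Raising robust-count to 3 with the default timers extends the timeout to 385 seconds, giving hosts one extra query cycle to get a report through on a lossy subnet.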
• group-limit—Configures a limit for the number of multicast groups (or [S,G] channels in IGMPv3) that
can join an interface. After this limit is reached, new reports are ignored and all related flows are
discarded, not flooded.
By default, there is no limit to the number of groups that can join an interface. You can configure a
limit in the range 0 through a 32-bit number.
By default, the router learns about multicast groups on the interface dynamically.
Topology
Figure 15 on page 156 shows networks without IGMP snooping. Suppose host A is an IP multicast
sender and hosts B and C are multicast receivers. The router forwards IP multicast traffic only to those
segments with registered receivers (hosts B and C). However, the Layer 2 devices flood the traffic to all
hosts on all interfaces.
Figure 16 on page 157 shows the same networks with IGMP snooping configured. The Layer 2 devices
forward multicast traffic to registered receivers only.
Configuration
IN THIS SECTION
Procedure | 158
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
3. Configure the limit for the number of multicast groups allowed on the ge-0/0/1.1 interface to 50.
4. Configure the router to immediately remove a group membership from an interface when it receives
a leave message from that interface without waiting for any other IGMP messages to be exchanged.
7. Configure an interface to be an exclusively host-facing interface (to drop IGMP query messages).
user@host# commit
Results
Verification
SEE ALSO
You can configure tracing operations for IGMP snooping globally or in a routing instance. The following
example shows the global configuration.
5. Configure tracing flags. Suppose you are troubleshooting issues with a policy related to received
packets on a particular logical interface with an IP address of 192.168.0.1. The following example
shows how to flag all policy events for received packets associated with the IP address.
IN THIS SECTION
Requirements | 164
Configuration | 165
You can enable IGMP snooping on a VLAN to constrain the flooding of IPv4 multicast traffic on a VLAN.
When IGMP snooping is enabled, the device examines IGMP messages between hosts and multicast
routers and learns which hosts are interested in receiving multicast traffic for a multicast group. Based
on what it learns, the device then forwards multicast traffic only to those interfaces that are connected
to relevant receivers instead of flooding the traffic to all interfaces.
Requirements
This example uses the following hardware and software components:
IN THIS SECTION
Topology | 165
IGMP snooping controls multicast traffic in a switched network. When IGMP snooping is not enabled, the SRX Series device floods multicast traffic out of all of its ports, even if the hosts on the network do not want the multicast traffic. With IGMP snooping enabled, the SRX Series device monitors the
IGMP join and leave messages sent from each connected host to a multicast router. This enables the
SRX Series device to keep track of the multicast groups and associated member ports. The SRX Series
device uses this information to make intelligent decisions and to forward multicast traffic to only the
intended destination hosts.
Topology
In this sample topology, the multicast router forwards multicast traffic to the device from the source
when it receives a membership report for group 233.252.0.100 from one of the hosts—for example,
Host B. If IGMP snooping is not enabled on vlan100, the device floods the multicast traffic on all
interfaces in vlan100 (except for interface ge-0/0/2.0). If IGMP snooping is enabled on vlan100, the
device monitors the IGMP messages between the hosts and router, allowing it to determine that only
Host B is interested in receiving the multicast traffic. The device then forwards the multicast traffic only
to interface ge-0/0/2.0.
Configuration
IN THIS SECTION
Procedure | 166
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
instructions on how to do that, see Using the CLI Editor in Configuration Mode in the CLI User Guide.
[edit]
user@host# set interfaces ge-0/0/1 unit 0 family ethernet-switching interface-mode access
[edit]
user@host# set vlans v1 vlan-id 100
[edit]
user@host# set protocols igmp-snooping vlan v1 proxy
4. Configure the limit for the number of multicast groups allowed on the ge-0/0/1.0 interface to 50.
[edit]
user@host# set protocols igmp-snooping vlan v1 interface ge-0/0/1.0 group-limit 50
5. Configure the device to immediately remove a group membership from an interface when it receives
a leave message from that interface without waiting for any other IGMP messages to be exchanged.
[edit]
user@host# set protocols igmp-snooping vlan v1 immediate-leave
[edit]
user@host# set protocols igmp-snooping vlan v1 interface ge-0/0/4.0 static group 233.252.0.100
7. Configure an interface to be an exclusively host-facing interface (to drop IGMP query messages).
[edit]
user@host# set protocols igmp-snooping vlan v1 interface ge-0/0/1.0 host-only-interface
[edit]
user@host# set protocols igmp-snooping vlan v1 query-interval 200
user@host# set protocols igmp-snooping vlan v1 query-response-interval 0.4
user@host# set protocols igmp-snooping vlan v1 query-last-member-interval 0.1
user@host# set protocols igmp-snooping vlan v1 robust-count 4
user@host# commit
Results
From configuration mode, confirm your configuration by entering the show protocols igmp-snooping
command. If the output does not display the intended configuration, repeat the configuration
instructions in this example to correct it.
[edit]
user@host# show protocols igmp-snooping
vlan v1 {
query-interval 200;
query-response-interval 0.4;
query-last-member-interval 0.1;
robust-count 4;
immediate-leave;
proxy;
interface ge-0/0/1.0 {
host-only-interface;
group-limit 50;
}
interface ge-0/0/4.0 {
static {
group 233.252.0.100;
}
}
}
To verify that IGMP snooping is operating as configured, perform the following task:
Purpose
Verify that IGMP snooping is enabled on vlan v1 and that ge-0/0/4 is recognized as a multicast-router
interface.
Action
From operational mode, enter the show igmp snooping membership command.
Vlan: v1
Learning-Domain: default
Interface: ge-0/0/4.0, Groups: 1
Group: 233.252.0.100
Group mode: Exclude
Source: 0.0.0.0
Last reported by: Local
Group timeout: 0 Type: Static
Meaning
By showing information for VLAN v1, the command output confirms that IGMP snooping is configured on
the VLAN. Interface ge-0/0/4.0 is listed as a multicast-router interface, as configured. Because none of
the host interfaces are listed, none of the hosts are currently receivers for the multicast group.
RELATED DOCUMENTATION
By default, IGMP snooping in VPLS uses multiple parallel streams when forwarding multicast traffic to
PE routers participating in the VPLS. However, you can enable point-to-multipoint LSP for IGMP
snooping to have multicast data traffic in the core take the point-to-multipoint path rather than using a
pseudowire path. The effect is a reduction in the amount of traffic generated on the PE router when
sending multicast packets for multiple VPLS sessions.
Figure 18 shows the effect on multicast traffic generated on the PE1 router (the device where the setting is enabled). When a pseudowire LSP is used, the PE1 router sends multiple packets, whereas with point-to-multipoint LSP enabled, the PE1 router sends only a single copy of each packet.
The options configured for IGMP snooping are applied per routing instance, so all IGMP snooping routes in the same instance use the same mode, point-to-multipoint or pseudowire.
NOTE: The point-to-multipoint option is available on MX960, MX480, MX240, and MX80
routers running Junos OS 13.3 and later.
NOTE: IGMP snooping is not supported on the core-facing pseudowire interfaces; all PE routers
participating in VPLS will continue to receive multicast data traffic even when this option is
enabled.
Figure 18: Point-to-multipoint LSP generates less traffic on the PE router than pseudowire.
In a VPLS instance with IGMP snooping that uses a point-to-multipoint LSP, mcsnoopd (the multicast snooping process, which allows Layer 3 inspection on a Layer 2 device) listens for point-to-multipoint next-hop notifications and manages the IGMP snooping routes accordingly. Configuring the use-p2mp-lsp statement allows the IGMP snooping routes to start using this next hop. In short,
if point-to-multipoint is configured for a VPLS instance, multicast data traffic in the core can avoid
ingress replication by taking the point-to-multipoint path. If the point-to-multipoint next-hop is
unavailable, packets are handled in the VPLS instance in the same way as broadcast packets or unknown
unicast frames. Note that IGMP snooping is not supported on the core-facing pseudowire interfaces. PE routers participating in VPLS continue to receive multicast data traffic regardless of how the point-to-multipoint option is set.
[edit]
user@host# set routing-instances instance-name instance-type vpls
user@host# set routing-instances instance-name igmp-snooping-options use-p2mp-lsp
routing-instances {
<instance-name> {
instance-type vpls;
igmp-snooping-options {
use-p2mp-lsp;
}
}
}
To show the operational status of point-to-multipoint LSP for IGMP snooping routes, use the show igmp snooping options command:

user@host> show igmp snooping options
Instance: master
P2MP LSP in use: no
Instance: default-switch
P2MP LSP in use: no
Instance: name
P2MP LSP in use: yes
RELATED DOCUMENTATION
use-p2mp-lsp | 2010
show igmp snooping options | 2180
multicast-snooping-options | 1703
CHAPTER 4
IN THIS CHAPTER
Configuring MLD Snooping on a Switch VLAN with ELS Support (CLI Procedure) | 195
Configuring MLD Snooping Tracing Operations on EX Series Switches (CLI Procedure) | 214
Configuring MLD Snooping Tracing Operations on EX Series Switch VLANs (CLI Procedure) | 217
Multicast Listener Discovery (MLD) snooping constrains the flooding of IPv6 multicast traffic on VLANs.
When MLD snooping is enabled on a VLAN, a Juniper Networks device examines MLD messages
between hosts and multicast routers and learns which hosts are interested in receiving traffic for a
multicast group. On the basis of what it learns, the device then forwards multicast traffic only to those
interfaces in the VLAN that are connected to interested receivers instead of flooding the traffic to all
interfaces.
MLD snooping supports MLD version 1 (MLDv1) and MLDv2. For details on MLDv1 and MLDv2, see
the following standards:
• MLDv2—See RFC 3810, Multicast Listener Discovery Version 2 (MLDv2) for IPv6.
By default, the device floods Layer 2 multicast traffic received on a VLAN to all of the interfaces belonging to that VLAN, except for the interface that is the source of the multicast traffic. This behavior can consume significant amounts of bandwidth.
You can enable MLD snooping to avoid this flooding. When you enable MLD snooping, the device
monitors MLD messages between receivers (hosts) and multicast routers and uses the content of the
messages to build an IPv6 multicast forwarding table—a database of IPv6 multicast groups and the
interfaces that are connected to the interested members of each group. When the device receives
multicast traffic for a multicast group, it uses the forwarding table to forward the traffic only to
interfaces that are connected to receivers that belong to the multicast group.
Figure 19 on page 176 shows an example of multicast traffic flow with MLD snooping enabled.
Multicast routers use MLD to learn, for each of their attached physical networks, which groups have
interested listeners. In any given subnet, one multicast router is elected to act as an MLD querier. The
MLD querier sends out the following types of queries to hosts:
• General query—Asks whether any host is listening to any multicast group. The querier sends this query periodically.
• Group-specific query—Asks whether any host is listening to a specific multicast group. This query is sent in response to a host leaving the multicast group and allows the router to quickly determine whether any remaining hosts are interested in the group.
• Group-and-source-specific query—(MLD version 2 only) Asks whether any host is listening to group multicast traffic from a specific multicast source. This query is sent in response to a host indicating that it is no longer interested in receiving group multicast traffic from the multicast source and allows the router to quickly determine whether any remaining hosts are interested in receiving group multicast traffic from that source.
Hosts that are multicast listeners send the following kinds of messages:
• Membership report—Indicates that the host wants to join a particular multicast group.
• Leave report—Indicates that the host wants to leave a particular multicast group.
MLDv1 hosts use two different kinds of reports to indicate whether they want to join or leave a group, whereas MLDv2 hosts send only one kind of report, whose contents indicate whether the host wants to join or leave a group. For simplicity's sake, however, the MLD snooping documentation uses the term membership report for a report that indicates that a host wants to join a group and the term leave report for a report that indicates that a host wants to leave a group.
• By sending an unsolicited membership report that specifies the multicast group that the host is
attempting to join.
A multicast router continues to forward multicast traffic to an interface provided that at least one host
on that interface responds to the periodic general queries indicating its membership. For a host to
remain a member of a multicast group, therefore, it must continue to respond to the periodic general
queries.
• By not responding to periodic queries within a set interval of time. This results in what is known as a
“silent leave.”
NOTE: If a host is connected to the device through a hub, the host does not automatically leave
the multicast group if it disconnects from the hub. The host remains a member of the group until
group membership times out and a silent leave occurs. If another host connects to the hub port
before the silent leave occurs, the new host might receive the group multicast traffic until the
silent leave, even though it never sent a membership report.
In MLDv2, a host can send a membership report that includes a list of source addresses. When the host
sends a membership report in INCLUDE mode, the host is interested in group multicast traffic only from
those sources in the source address list. If a host sends a membership report in EXCLUDE mode, the host
is interested in group multicast traffic from any source except the sources in the source address list. A
host can also send an EXCLUDE report in which the source-list parameter is empty, which is known as
an EXCLUDE NULL report. An EXCLUDE NULL report indicates that the host wants to join the multicast
group and receive packets from all sources.
Devices that support MLD snooping support MLDv2 membership reports that are in INCLUDE and
EXCLUDE mode. However, SRX Series devices, QFX Series switches, and EX Series switches running
MLD snooping, except for EX9200 switches, do not support forwarding on a per-source basis. Instead,
the device consolidates all INCLUDE and EXCLUDE mode reports it receives on a VLAN for a specified
group into a single route that includes all multicast sources for that group, with the next hop being all
interfaces that have interested receivers for the group. As a result, interested receivers on the VLAN can
receive traffic from a source that they did not include in their INCLUDE report or from a source they
excluded in their EXCLUDE report. For example, if Host 1 wants traffic for group G from Source A and
Host 2 wants traffic for group G from Source B, they both receive traffic for group G regardless of
whether A or B sends the traffic.
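The consolidation described above can be sketched as follows. This is a simplified model, not device code; the interface names and source addresses are hypothetical. Whatever a host's INCLUDE or EXCLUDE source list says, its interface is simply added to the single source-agnostic next-hop list for the group:

```python
def consolidate(reports):
    """Collapse per-host MLDv2 INCLUDE/EXCLUDE reports for one group into a
    single source-agnostic next-hop list (no per-source forwarding)."""
    # The source lists are ignored: any report marks the interface interested.
    return sorted({ifc for ifc, mode, sources in reports})

reports = [
    ("ge-0/0/1.0", "INCLUDE", ["2001:db8::a"]),  # Host 1 wants Source A only
    ("ge-0/0/2.0", "EXCLUDE", ["2001:db8::b"]),  # Host 2 excludes Source B
]
# Both interfaces receive group traffic from ALL sources:
print(consolidate(reports))  # ['ge-0/0/1.0', 'ge-0/0/2.0']
```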
To determine how to forward multicast traffic, the device with MLD snooping enabled maintains
information about the following interfaces in its multicast forwarding table:
• Group-member interfaces—These interfaces lead toward hosts that are members of multicast groups.
The device learns about these interfaces by monitoring MLD traffic. If an interface receives MLD
queries, the device adds the interface to its multicast forwarding table as a multicast-router interface. If
an interface receives membership reports for a multicast group, the device adds the interface to its
multicast forwarding table as a group-member interface.
Table entries for interfaces that the device learns about are subject to aging. For example, if a learned
multicast-router interface does not receive MLD queries within a certain interval, the device removes
the entry for that interface from its multicast forwarding table.
NOTE: For the device to learn multicast-router interfaces and group-member interfaces, an MLD
querier must exist in the network. For the device itself to function as an MLD querier, MLD must
be enabled on the device.
Multicast traffic received on the device interface in a VLAN on which MLD snooping is enabled is
forwarded according to the following rules.
• MLD general queries received on a multicast-router interface are forwarded to all other interfaces in
the VLAN.
• MLD group-specific queries received on a multicast-router interface are forwarded to only those
interfaces in the VLAN that are members of the group.
• MLD reports received on a host interface are forwarded to multicast-router interfaces in the same
VLAN, but not to the other host interfaces in the VLAN.
• An unregistered multicast packet—that is, a packet for a group that has no current members—is
forwarded to all multicast-router interfaces in the VLAN.
• A registered multicast packet is forwarded only to those host interfaces in the VLAN that are
members of the multicast group and to all multicast-router interfaces in the VLAN.
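A compact model of these forwarding rules (illustrative only; the P1–P4 interface labels are hypothetical, chosen to match the scenario figures in this discussion):

```python
def mld_forward(kind, in_ifc, group, mrouters, members, all_ifcs):
    """Simplified MLD snooping forwarding decision for one VLAN.
    kind: 'general-query' | 'group-query' | 'report' | 'data'."""
    def others(ifcs):
        return ifcs - {in_ifc}                   # never send back out the ingress
    if kind == "general-query":
        return others(all_ifcs)                  # flood to the whole VLAN
    if kind == "group-query":
        return others(members.get(group, set())) # only group-member interfaces
    if kind == "report":
        return others(mrouters)                  # toward multicast routers only
    # data: member interfaces (registered) plus all multicast-router interfaces;
    # an unregistered group has no members, so only mrouter interfaces get it
    return others(members.get(group, set()) | mrouters)

ifcs = {"P1", "P2", "P3", "P4"}
mrouters = {"P1"}
members = {"ff1e::2010": {"P2", "P4"}}

print(mld_forward("data", "P1", "ff1e::2010", mrouters, members, ifcs))
print(mld_forward("data", "P2", "ff15::99", mrouters, members, ifcs))  # unregistered
```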
NOTE: When IGMP and MLD snooping are both enabled on the same VLAN, multicast-router interfaces are created as part of both the IGMP and the MLD snooping configuration. Unregistered multicast traffic is not blocked and can pass through router interfaces, so due to hardware limitations, unregistered IPv4 multicast traffic might pass through the multicast-router interfaces created as part of the MLD snooping configuration, and unregistered IPv6 multicast traffic might pass through the multicast-router interfaces created as part of the IGMP snooping configuration.
The following examples are provided to illustrate how MLD snooping forwards multicast traffic in
different topologies:
In the topology shown in Figure 20 on page 181, the device acting as a Layer 2 device receives multicast
traffic belonging to multicast group ff1e::2010 from Source A, which is connected to the multicast
router. It also receives multicast traffic belonging to multicast group ff15::2 from Source B, which is
connected directly to the device. All interfaces on the device belong to the same VLAN.
Because the device receives MLD queries from the multicast router on interface P1, MLD snooping
learns that interface P1 is a multicast-router interface and adds the interface to its multicast forwarding
table. It forwards any MLD general queries it receives on this interface to all host interfaces on the
device, and, in turn, forwards membership reports it receives from hosts to the multicast-router
interface.
In the example, Hosts A and C have responded to the general queries with membership reports for
group ff1e::2010. MLD snooping adds interfaces P2 and P4 to its multicast forwarding table as member
interfaces for group ff1e::2010. It forwards the group multicast traffic received from Source A to Hosts
A and C, but not to Hosts B and D.
Host B has responded to the general queries with a membership report for group ff15::2. The device
adds interface P3 to its multicast forwarding table as a member interface for group ff15::2 and forwards
multicast traffic it receives from Source B to Host B. The device also forwards the multicast traffic it
receives from Source B to the multicast-router interface P1.
Figure 20: Scenario 1: Device Forwarding Multicast Traffic to a Multicast Router and Hosts
In the topology shown in Figure 21 on page 182, a multicast source is connected to Device A. Device A in
turn is connected to another device, Device B. Hosts on both Device A and B are potential members of
the multicast group. Both devices are acting as Layer 2 devices, and all interfaces on the devices are
members of the same VLAN.
Device A receives MLD queries from the multicast router on interface P1, making interface P1 a
multicast-router interface for Device A. Device A forwards all general queries it receives on this
interface to the other interfaces on the device, including the interface connecting Device B. Because
Device B receives the forwarded MLD queries on interface P6, P6 is the multicast-router interface for
Device B. Device B forwards the membership report it receives from Host C to Device A through its
multicast-router interface. Device A forwards the membership report to its multicast-router interface,
includes interface P5 in its multicast forwarding table as a group-member interface, and forwards
multicast traffic from the source to Device B.
In the topology shown in Figure 22 on page 184, the device is connected to a multicast source and to
hosts. There is no multicast router in this topology—hence there is no MLD querier. Without an MLD
querier to respond to, a host does not send periodic membership reports. As a result, even if the host
sends an unsolicited membership report to join a multicast group, its membership in the multicast group
will time out.
For MLD snooping to work correctly in this network so that the device forwards multicast traffic to
Hosts A and C only, you can:
• Configure a routed VLAN interface (RVI), also referred to as an integrated routing and bridging (IRB)
interface, on the VLAN and enable MLD on it. In this case, the device itself acts as an MLD querier,
and the hosts can dynamically join the multicast group and refresh their group membership by
responding to the queries.
Figure 22: Scenario 3: Device Connected to Hosts Only (No MLD Querier)
In the topology shown in Figure 23 on page 185, a multicast source, Multicast Router A, and Hosts A
and B are connected to the device and are in VLAN 10. Multicast Router B and Hosts C and D are also
connected to the device and are in VLAN 20.
In a pure Layer 2 environment, traffic is not forwarded between VLANs. For Host C to receive the
multicast traffic from the source on VLAN 10, RVIs (or IRB interfaces) must be created on VLAN 10 and
VLAN 20 to permit routing of the multicast traffic between the VLANs.
Figure 23: Scenario 4: Layer 2/Layer 3 Device Forwarding Multicast Traffic Between VLANs
You can enable MLD snooping on a VLAN to constrain the flooding of IPv6 multicast traffic on a VLAN.
When MLD snooping is enabled, a switch examines MLD messages between hosts and multicast routers
and learns which hosts are interested in receiving multicast traffic for a multicast group. Based on what
it learns, the switch then forwards IPv6 multicast traffic only to those interfaces connected to interested
receivers instead of flooding the traffic to all interfaces.
MLD snooping is not enabled on the switch by default. To enable MLD snooping on all VLANs:
[edit]
user@switch# set protocols mld-snooping vlan all
• Specify the MLD version for the general query that the switch sends on an interface when the
interface comes up.
• Enable immediate leave on a VLAN or all VLANs. Immediate leave reduces the length of time it takes
the switch to stop forwarding multicast traffic when the last member host on the interface leaves the
group.
• Configure an interface as a static multicast-router interface for a VLAN or for all VLANs so that the
switch does not need to dynamically learn that the interface is a multicast-router interface.
• Configure an interface as a static member of a multicast group so that the switch does not need to
dynamically learn the interface’s membership.
• Change the value for certain timers and counters to match the values configured on the multicast
router serving as the MLD querier.
TIP: When you configure MLD snooping using the vlan all statement, any VLAN that is not
individually configured for MLD snooping inherits the vlan all configuration. Any VLAN that is
individually configured for MLD snooping, on the other hand, inherits none of its configuration
from vlan all. Any parameters that are not explicitly defined for the individual VLAN assume their
default values, not the values specified in the vlan all configuration. For example, in the following
configuration:
protocols {
mld-snooping {
vlan all {
robust-count 8;
}
vlan employee {
interface ge-0/0/8.0 {
static {
group ff1e::1;
}
}
}
}
}
all VLANs, except employee, have a robust count of 8. Because employee has been individually
configured, its robust count value is not determined by the value set under vlan all. Instead, its
robust count is the default value of 2.
This topic describes how you can enable or disable MLD snooping on specific VLANs or on all VLANs on
the switch.
For example, to enable MLD snooping on all VLANs except vlan100 and vlan200:
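One way to do this (the per-VLAN disable statement is assumed here) is:
[edit]
user@switch# set protocols mld-snooping vlan all
user@switch# set protocols mld-snooping vlan vlan100 disable
user@switch# set protocols mld-snooping vlan vlan200 disable
Because vlan100 and vlan200 are individually configured with disable, they do not inherit the vlan all configuration.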
You can also deactivate the MLD snooping protocol on the switch without changing the MLD snooping
VLAN configurations:
[edit]
user@switch# deactivate protocols mld-snooping
Typically, a switch passively monitors MLD messages sent between multicast routers and hosts and does
not send MLD queries. The exception is when a switch detects that an interface has come up. When an
interface comes up, the switch sends an immediate general membership query to all hosts on the
interface. By doing so, the switch enables the multicast routers to learn group memberships more
quickly than they would if they had to wait until the MLD querier sent its next general query.
The MLD version of the general query determines the MLD version of the host membership reports as
follows:
• MLD version 1 (MLDv1) general query—Both MLDv1 and MLDv2 hosts respond with an MLDv1
membership report.
• MLDv2 general query—MLDv2 hosts respond with an MLDv2 membership report, while MLDv1
hosts are unable to respond to the query.
By default, the switch sends MLDv1 queries. This ensures compatibility with hosts and multicast routers
that support MLDv1 only and cannot process MLDv2 reports. However, if your VLAN contains MLDv2
multicast routers and hosts and the routers are running PIM-SSM, we recommend that you configure
MLD snooping for MLDv2. Doing so enables the routers to quickly learn which multicast sources the
hosts on the interface want to receive traffic from.
NOTE: Configuring the MLD version does not limit the version of MLD messages that the switch
can snoop. A switch can snoop both MLDv1 and MLDv2 messages regardless of the MLD
version configured.
For example, to set the MLD version to version 2 for VLAN marketing:
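A sketch of the command (the version statement under the VLAN is assumed here):
[edit]
user@switch# set protocols mld-snooping vlan marketing version 2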
You can decrease the leave latency created by this default behavior by enabling immediate leave on a
VLAN.
When you enable immediate leave on a VLAN, host tracking is also enabled, allowing the switch to keep
track of the hosts on an interface that have joined a multicast group. When the switch receives a leave
report from the last member of the group, it immediately stops forwarding traffic to the interface and
does not wait for the interface group membership to time out.
Immediate leave is supported for both MLD version 1 (MLDv1) and MLDv2. However, with MLDv1, we
recommend that you configure immediate leave only when there is only one MLD host on an interface.
In MLDv1, only one host on an interface sends a membership report in response to a group-specific query
—any other interested hosts suppress their reports. This report-suppression feature means that the
switch only knows about one interested host at any given time.
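For example, a sketch of enabling immediate leave on a VLAN (the VLAN name employee is illustrative):
[edit]
user@switch# set protocols mld-snooping vlan employee immediate-leave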
In addition to dynamically learned interfaces, the multicast forwarding table can include interfaces that
you explicitly configure to be multicast router interfaces. Unlike the table entries for dynamically learned
interfaces, table entries for statically configured interfaces are not subject to aging and deletion from the
forwarding table.
Examples of when you might want to configure a static multicast-router interface include:
• You have an unusual network configuration that prevents MLD snooping from reliably learning about
a multicast-router interface through monitoring MLD queries or PIM updates.
• You have a stable topology and want to avoid the delay the dynamic learning process entails.
NOTE: If the interface you are configuring as a multicast-router interface is a trunk port, the
interface becomes a multicast-router interface for all VLANs configured on the trunk port even if
you have not explicitly configured it for all the VLANs. In addition, all unregistered multicast
packets, whether they are IPv4 or IPv6 packets, are forwarded to the multicast-router interface,
even if the interface is configured as a multicast-router interface only for MLD snooping.
For example, to configure ge-0/0/5.0 as a multicast-router interface for all VLANs on the switch:
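A sketch of the command (the multicast-router-interface statement is assumed here):
[edit]
user@switch# set protocols mld-snooping vlan all interface ge-0/0/5.0 multicast-router-interface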
In addition to such dynamically learned interfaces, the multicast forwarding table can include interfaces
that you statically configure to be members of multicast groups. When you configure a static group
interface, the switch adds the interface to the forwarding table as a host interface for the group. Unlike
an entry for a dynamically learned interface, a static interface entry is not subject to aging and deletion
from the forwarding table.
Examples of when you might want to configure static group membership on an interface include:
• The interface has receivers that cannot send MLD membership reports.
• You want the multicast traffic for a specific group to be immediately available to a receiver without
any delay imposed by the dynamic join process.
You cannot configure multicast source addresses for a static group interface. The MLD version of a
static group interface is always MLD version 1.
NOTE: The switch does not simulate MLD membership reports on behalf of a statically
configured interface. Thus a multicast router might be unaware that the switch has an interface
that is a member of the multicast group. You can configure a static group interface on the router
to ensure that the switch receives the group multicast traffic.
For example, to configure interface ge-0/0/11.0 in VLAN ip-camera-vlan as a static member of multicast
group ff1e::1:
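A sketch of the command (the static group statement is assumed here, mirroring the configuration results shown elsewhere in this guide):
[edit]
user@switch# set protocols mld-snooping vlan ip-camera-vlan interface ge-0/0/11.0 static group ff1e::1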
There might be cases, however, where you might want to adjust the timer and counter values—for
example, to reduce burstiness, to reduce leave latency, or to adjust for expected packet loss on a subnet.
If you change a timer or counter value for the MLD querier on a VLAN, we recommend that you change
the value for all multicast routers and switches on the VLAN so that all devices time out group
memberships at approximately the same time.
• query-interval—The length of time the MLD querier waits between sending general queries (the
default is 125 seconds). You can change this interval to tune the number of MLD messages on the
subnet; larger values cause general queries to be sent less often.
You cannot configure this value directly for MLD snooping. MLD snooping inherits the value from the
MLD value configured on the switch, which is applied to all VLANs on the switch.
• query-response-interval—The maximum length of time the host can wait until it responds (the default
is 10 seconds). You can change this interval to adjust the burst peaks of MLD messages on the
subnet. Set a larger interval to make the traffic less bursty.
You cannot configure this value directly for MLD snooping. MLD snooping inherits the value from the
MLD value configured on the switch, which is applied to all VLANs on the switch.
• query-last-member-interval—The length of time the MLD querier waits between sending group-
specific membership queries (the default is 1 second). The MLD querier sends a group-specific query
after receiving a leave report from a host. You can decrease this interval to reduce the amount of
time it takes for multicast traffic to stop forwarding after the last member leaves a group.
You cannot configure this value directly for MLD snooping. MLD snooping inherits the value from the
MLD value configured on the switch, which is applied to all VLANs on the switch.
• robust-count—The number of times the querier resends a general membership query or a group-
specific membership query (the default is 2 times). You can increase this count to tune for higher
expected packet loss.
For MLD snooping, you can configure robust-count for a specific VLAN. If a VLAN does not have
robust-count configured, the robust-count value is inherited from the value configured for MLD.
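For example, a sketch of setting the robust count on a single VLAN (the VLAN name and value are illustrative):
[edit]
user@switch# set protocols mld-snooping vlan employee robust-count 4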
The values configured for query-interval, query-response-interval, and robust-count determine the
multicast listener interval—the length of time the switch waits for a group membership report after a
general query before removing a multicast group from its multicast forwarding table. The switch
calculates the multicast listener interval by multiplying query-interval by robust-count and then adding
query-response-interval:
(query-interval x robust-count) + query-response-interval
For example, the multicast listener interval is 260 seconds when the default settings for query-interval,
query-response-interval, and robust-count are used:
(125 x 2) + 10 = 260
You can display the time remaining in the multicast listener interval before a group times out by using
the show mld-snooping membership command.
RELATED DOCUMENTATION
Configuring MLD | 60
IN THIS SECTION
NOTE: This task uses Junos OS with support for the Enhanced Layer 2 Software (ELS)
configuration style. If your switch runs software that does not support ELS, see "Configuring
MLD Snooping on an EX Series Switch VLAN (CLI Procedure)" on page 186. For ELS details, see
Using the Enhanced Layer 2 Software CLI.
You can enable MLD snooping on a VLAN to constrain the flooding of IPv6 multicast traffic on the
VLAN. When MLD snooping is enabled, a switch examines MLD messages between hosts and multicast
routers and learns which hosts are interested in receiving multicast traffic for a multicast group. Based
on what it learns, the switch then forwards IPv6 multicast traffic only to those interfaces connected to
interested receivers instead of flooding the traffic to all interfaces.
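To enable MLD snooping on a VLAN, a sketch (vlan100 is an illustrative VLAN name):
[edit]
user@switch# set protocols mld-snooping vlan vlan100
After enabling MLD snooping, you can also: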
• Specify the MLD version for the general query that the switch sends on an interface when the
interface comes up.
• Enable immediate leave to reduce the length of time it takes the switch to stop forwarding multicast
traffic when the last member host on the interface leaves the group.
• Configure an interface as a static multicast-router interface so that the switch does not need to
dynamically learn that the interface is a multicast-router interface.
• Configure an interface as a static member of a multicast group so that the switch does not need to
dynamically learn the interface’s membership.
• Change the value for certain timers and counters to match the values configured on the multicast
router serving as the MLD querier.
You can also deactivate the MLD snooping protocol on the switch without changing the MLD snooping
VLAN configurations:
[edit]
user@switch# deactivate protocols mld-snooping
Typically, a switch passively monitors MLD messages sent between multicast routers and hosts and does
not send MLD queries. The exception is when a switch detects that an interface has come up. When an
interface comes up, the switch sends an immediate general membership query to all hosts on the
interface. By doing so, the switch enables the multicast routers to learn group memberships more
quickly than they would if they had to wait until the MLD querier sent its next general query.
The MLD version of the general query determines the MLD version of the host membership reports as
follows:
• MLD version 1 (MLDv1) general query—Both MLDv1 and MLDv2 hosts respond with an MLDv1
membership report.
• MLDv2 general query—MLDv2 hosts respond with an MLDv2 membership report, while MLDv1
hosts are unable to respond to the query.
By default, the switch sends MLDv1 queries. This ensures compatibility with hosts and multicast routers
that support MLDv1 only and cannot process MLDv2 reports. However, if your VLAN contains MLDv2
multicast routers and hosts and the routers are running PIM-SSM, we recommend that you configure
MLD snooping for MLDv2. Doing so enables the routers to quickly learn which multicast sources the
hosts on the interface want to receive traffic from.
NOTE: Configuring the MLD version does not limit the version of MLD messages that the switch
can snoop. A switch can snoop both MLDv1 and MLDv2 messages regardless of the MLD
version configured.
You can decrease the leave latency created by this default behavior by enabling immediate leave on a
VLAN.
When you enable immediate leave on a VLAN, host tracking is also enabled, allowing the switch to keep
track of the hosts on an interface that have joined a multicast group. When the switch receives a leave
report from the last member of the group, it immediately stops forwarding traffic to the interface and
does not wait for the interface group membership to time out.
Immediate leave is supported for both MLD version 1 (MLDv1) and MLDv2. However, with MLDv1, we
recommend that you configure immediate leave only when there is only one MLD host on an interface.
In MLDv1, only one host on an interface sends a membership report in response to a group-specific query
—any other interested hosts suppress their reports. This report-suppression feature means that the
switch only knows about one interested host at any given time.
In addition to dynamically learned interfaces, the multicast forwarding table can include interfaces that
you explicitly configure to be multicast router interfaces. Unlike the table entries for dynamically learned
interfaces, table entries for statically configured interfaces are not subject to aging and deletion from the
forwarding table.
Examples of when you might want to configure a static multicast-router interface include:
• You have an unusual network configuration that prevents MLD snooping from reliably learning about
a multicast-router interface through monitoring MLD queries or PIM updates.
• You have a stable topology and want to avoid the delay the dynamic learning process entails.
In addition to such dynamically learned interfaces, the multicast forwarding table can include interfaces
that you statically configure to be members of multicast groups. When you configure a static group
interface, the switch adds the interface to the forwarding table as a host interface for the group. Unlike
an entry for a dynamically learned interface, a static interface entry is not subject to aging and deletion
from the forwarding table.
Examples of when you might want to configure static group membership on an interface include:
• The interface has receivers that cannot send MLD membership reports.
• You want the multicast traffic for a specific group to be immediately available to a receiver without
any delay imposed by the dynamic join process.
You cannot configure multicast source addresses for a static group interface. The MLD version of a
static group interface is always MLD version 1.
NOTE: The switch does not simulate MLD membership reports on behalf of a statically
configured interface. Thus a multicast router might be unaware that the switch has an interface
that is a member of the multicast group. You can configure a static group interface on the router
to ensure that the switch receives the group multicast traffic.
For example, to configure interface ge-0/0/11.0 in VLAN employee as a static member of multicast
group ff1e::1:
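A sketch of the command (the static group statement is assumed here, matching the Results output in the examples that follow):
[edit]
user@switch# set protocols mld-snooping vlan employee interface ge-0/0/11.0 static group ff1e::1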
There might be cases, however, where you might want to adjust the timer and counter values—for
example, to reduce burstiness, to reduce leave latency, or to adjust for expected packet loss on a subnet.
If you change a timer or counter value for the MLD querier on a VLAN, we recommend that you change
the value for all multicast routers and switches on the VLAN so that all devices time out group
memberships at approximately the same time.
• query-interval—The length of time in seconds the MLD querier waits between sending general
queries (the default is 125 seconds). You can change this interval to tune the number of MLD
messages on the subnet; larger values cause general queries to be sent less often.
• query-response-interval—The maximum length of time in seconds the host waits before it responds
(the default is 10 seconds). You can change this interval to accommodate the burst peaks of MLD
messages on the subnet. Set a larger interval to make the traffic less bursty.
• query-last-member-interval—The length of time the MLD querier waits between sending group-
specific membership queries (the default is 1 second). The MLD querier sends a group-specific query
after receiving a leave report from a host. You can decrease this interval to reduce the amount of
time it takes for multicast traffic to stop forwarding after the last member leaves a group.
• robust-count—The number of times the querier resends a general membership query or a group-
specific membership query (the default is 2 times). You can increase this count to tune for higher
anticipated packet loss.
For MLD snooping, you can configure robust-count for a specific VLAN. If a VLAN does not have
robust-count configured, the value is inherited from the value configured for MLD.
The values configured for query-interval, query-response-interval, and robust-count determine the
multicast listener interval—the length of time the switch waits for a group membership report after a
general query before removing a multicast group from its multicast forwarding table. The switch
calculates the multicast listener interval by multiplying the query-interval value by the robust-count value
and then adding the query-response-interval value to the product:
(query-interval x robust-count) + query-response-interval
For example, the multicast listener interval is 260 seconds when the default settings for query-interval,
query-response-interval, and robust-count are used:
(125 x 2) + 10 = 260
To display the time remaining in the multicast listener interval before a group times out, use the show
mld-snooping membership command.
IN THIS SECTION
Requirements | 202
Configuration | 204
You can enable MLD snooping on a VLAN to constrain the flooding of IPv6 multicast traffic on a VLAN.
When MLD snooping is enabled, a switch examines MLD messages between hosts and multicast routers
and learns which hosts are interested in receiving multicast traffic for a multicast group. Based on what
it learns, the switch then forwards IPv6 multicast traffic only to those interfaces connected to interested
receivers instead of flooding the traffic to all interfaces.
Requirements
This example uses the following software and hardware components:
IN THIS SECTION
Topology | 203
In this example, interfaces ge-0/0/0, ge-0/0/1, and ge-0/0/2 on the switch are in vlan100 and are
connected to hosts that are potential multicast receivers. Interface ge-0/0/12, a trunk interface also in
vlan100, is connected to a multicast router. The router acts as the MLD querier and forwards multicast
traffic for group ff1e::2010 to the switch from a multicast source.
Topology
In this example topology, the multicast router forwards multicast traffic to the switch from the source
when it receives a membership report for group ff1e::2010 from one of the hosts—for example, Host B.
If MLD snooping is not enabled on vlan100, the switch floods the multicast traffic on all interfaces in
vlan100 (except for interface ge-0/0/12). If MLD snooping is enabled on vlan100, the switch monitors
the MLD messages between the hosts and router, allowing it to determine that only Host B is interested
in receiving the multicast traffic. The switch then forwards the multicast traffic only to interface
ge-0/0/1.
This example shows how to enable MLD snooping on vlan100. It also shows how to perform the
following optional configurations, which can reduce group join and leave latency:
• Configure immediate leave on the VLAN. When immediate leave is configured, the switch stops
forwarding multicast traffic on an interface when it detects that the last member of the multicast
group has left the group. If immediate leave is not configured, the switch waits until the group-
specific membership queries time out before it stops forwarding traffic.
• Configure ge-0/0/12 as a static multicast-router interface. In this topology, ge-0/0/12 always leads
to the multicast router. By statically configuring ge-0/0/12 as a multicast-router interface, you avoid
any delay imposed by the switch having to learn that ge-0/0/12 is a multicast-router interface.
Configuration
IN THIS SECTION
Procedure | 204
Procedure
To quickly configure MLD snooping, copy the following commands and paste them into the switch
terminal window:
[edit]
set protocols mld-snooping vlan vlan100
Step-by-Step Procedure
1. Enable MLD snooping on vlan100:
[edit protocols]
user@switch# set mld-snooping vlan vlan100
2. Configure the switch to immediately remove a group membership from an interface when it receives
a leave report from the last member of the group on the interface:
[edit protocols]
user@switch# set mld-snooping vlan vlan100 immediate-leave
3. Configure interface ge-0/0/12 as a static multicast-router interface:
[edit protocols]
user@switch# set mld-snooping vlan vlan100 interface ge-0/0/12 multicast-router-interface
Results
[edit protocols]
user@switch# show mld-snooping
vlan vlan100 {
immediate-leave;
interface ge-0/0/12.0 {
multicast-router-interface;
}
}
IN THIS SECTION
To verify that MLD snooping is enabled on the VLAN and the MLD snooping forwarding interfaces are
correct, perform the following task:
Purpose
Verify that MLD snooping is enabled on vlan100 and that the multicast-router interface is statically
configured:
Action
Meaning
MLD snooping is running on vlan100, and interface ge-0/0/12.0 is a statically configured multicast-
router interface. Because the multicast group ff1e::2010 is listed, at least one host in the VLAN is a
current member of the multicast group and that host is on interface ge-0/0/1.0.
IN THIS SECTION
Requirements | 207
Configuration | 209
You can enable MLD snooping on a VLAN to constrain the flooding of IPv6 multicast traffic on a VLAN.
When MLD snooping is enabled, an SRX Series device examines MLD messages between hosts and
multicast routers and learns which hosts are interested in receiving multicast traffic for a multicast
group. Based on what it learns, the device then forwards IPv6 multicast traffic only to those interfaces
connected to interested receivers instead of flooding the traffic to all interfaces.
Requirements
This example uses the following software and hardware components:
IN THIS SECTION
Topology | 208
In this example, interfaces ge-0/0/0, ge-0/0/1, and ge-0/0/2 on the device are in vlan100 and are
connected to hosts that are potential multicast receivers. Interface ge-0/0/3, a trunk interface also in
vlan100, is connected to a multicast router. The router acts as the MLD querier and forwards multicast
traffic for group 2001:db8::1 to the device from a multicast source.
Topology
In this example topology, the multicast router forwards multicast traffic to the device from the source
when it receives a membership report for group 2001:db8::1 from one of the hosts—for example, Host
B. If MLD snooping is not enabled on vlan100, then the device floods the multicast traffic on all
interfaces in vlan100 (except for interface ge-0/0/3). If MLD snooping is enabled on vlan100, the device
monitors the MLD messages between the hosts and router, allowing it to determine that only Host B is
interested in receiving the multicast traffic. The device then forwards the multicast traffic only to
interface ge-0/0/1.
This example shows how to enable MLD snooping on vlan100. It also shows how to perform the
following optional configurations, which can reduce group join and leave latency:
• Configure immediate leave on the VLAN. When immediate leave is configured, the device stops
forwarding multicast traffic on an interface when it detects that the last member of the multicast
group has left the group. If immediate leave is not configured, the device waits until the group-
specific membership queries time out before it stops forwarding traffic.
• Configure ge-0/0/3 as a static multicast-router interface. In this topology, ge-0/0/3 always leads to
the multicast router. By statically configuring ge-0/0/3 as a multicast-router interface, you avoid any
delay imposed by the device having to learn that ge-0/0/3 is a multicast-router interface.
Configuration
IN THIS SECTION
Procedure | 209
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
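The following command listing, reconstructed from the step-by-step procedure and the Results section below, configures this example:
[edit]
set interfaces ge-0/0/0 unit 0 family ethernet-switching interface-mode access
set interfaces ge-0/0/0 unit 0 family ethernet-switching vlan members vlan100
set interfaces ge-0/0/1 unit 0 family ethernet-switching interface-mode access
set interfaces ge-0/0/1 unit 0 family ethernet-switching vlan members vlan100
set interfaces ge-0/0/2 unit 0 family ethernet-switching interface-mode access
set interfaces ge-0/0/2 unit 0 family ethernet-switching vlan members vlan100
set interfaces ge-0/0/3 unit 0 family ethernet-switching interface-mode trunk
set interfaces ge-0/0/3 unit 0 family ethernet-switching vlan members vlan100
set routing-options nonstop-routing
set protocols mld-snooping vlan vlan100 query-interval 200
set protocols mld-snooping vlan vlan100 query-response-interval 0.4
set protocols mld-snooping vlan vlan100 query-last-member-interval 0.1
set protocols mld-snooping vlan vlan100 robust-count 4
set protocols mld-snooping vlan vlan100 immediate-leave
set protocols mld-snooping vlan vlan100 interface ge-0/0/0.0 group-limit 50
set protocols mld-snooping vlan vlan100 interface ge-0/0/1.0 host-only-interface
set protocols mld-snooping vlan vlan100 interface ge-0/0/2.0 static group 2001:db8::1
set protocols mld-snooping vlan vlan100 interface ge-0/0/3.0 multicast-router-interface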
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
instructions on how to do that, see Using the CLI Editor in Configuration Mode in the CLI User Guide.
[edit interfaces]
user@host# set ge-0/0/0 unit 0 family ethernet-switching interface-mode access
user@host# set ge-0/0/0 unit 0 family ethernet-switching vlan members vlan100
user@host# set ge-0/0/1 unit 0 family ethernet-switching interface-mode access
user@host# set ge-0/0/1 unit 0 family ethernet-switching vlan members vlan100
user@host# set ge-0/0/2 unit 0 family ethernet-switching interface-mode access
user@host# set ge-0/0/2 unit 0 family ethernet-switching vlan members vlan100
[edit interfaces]
user@host# set ge-0/0/3 unit 0 family ethernet-switching interface-mode trunk
user@host# set ge-0/0/3 unit 0 family ethernet-switching vlan members vlan100
[edit]
user@host# set routing-options nonstop-routing
5. Configure the limit for the number of multicast groups allowed on the ge-0/0/0.0 interface to 50.
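Based on the Results section, the command for this step is likely:
[edit]
user@host# set protocols mld-snooping vlan vlan100 interface ge-0/0/0.0 group-limit 50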
6. Configure the device to immediately remove a group membership from an interface when it
receives a leave message from that interface without waiting for any other MLD messages to be
exchanged.
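A sketch of the command for this step (immediate-leave, as shown in the Results section):
[edit]
user@host# set protocols mld-snooping vlan vlan100 immediate-leave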
9. Configure an interface to be an exclusively host-facing interface (to drop MLD query messages).
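Based on the Results section, the command for this step is likely:
[edit]
user@host# set protocols mld-snooping vlan vlan100 interface ge-0/0/1.0 host-only-interface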
11. If you are done configuring the device, commit the configuration.
user@host# commit
Results
From configuration mode, confirm your configuration by entering the show protocols mld-snooping
command. If the output does not display the intended configuration, repeat the configuration
instructions in this example to correct it.
[edit]
user@host# show protocols mld-snooping
vlan vlan100 {
query-interval 200;
query-response-interval 0.4;
query-last-member-interval 0.1;
robust-count 4;
immediate-leave;
interface ge-0/0/1.0 {
host-only-interface;
}
interface ge-0/0/0.0 {
group-limit 50;
}
interface ge-0/0/2.0 {
static {
group 2001:db8::1;
}
}
interface ge-0/0/3.0 {
multicast-router-interface;
}
}
IN THIS SECTION
To verify that MLD snooping is enabled on the VLAN and the MLD snooping forwarding interfaces are
correct, perform the following task:
Purpose
Verify that MLD snooping is enabled on vlan100 and that the multicast-router interface is statically
configured:
Action
From operational mode, enter the show mld-snooping membership command.
Vlan: vlan100
Learning-Domain: default
Interface: ge-0/0/0.0, Groups: 0
Interface: ge-0/0/1.0, Groups: 0
Interface: ge-0/0/2.0, Groups: 1
Group: 2001:db8::1
Group mode: Exclude
Source: ::
Last reported by: Local
Group timeout: 0 Type: Static
Meaning
MLD snooping is running on vlan100, and interface ge-0/0/3.0 is a statically configured multicast-
router interface. Multicast group 2001:db8::1 is listed on interface ge-0/0/2.0 with type Static,
confirming that the static group membership configured on that interface is in effect.
RELATED DOCUMENTATION
mld-snooping | 1669
Understanding MLD Snooping | 174
IN THIS SECTION
By enabling tracing operations for MLD snooping, you can record detailed messages about the
operation of the protocol, such as the various types of protocol packets sent and received. Table 9 on
page 214 describes the tracing operations you can enable and the flags used to specify them in the
tracing configuration.
normal—Trace normal MLD snooping protocol events. If you do not specify this flag, only
unusual or abnormal operations are traced.
For example:
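A sketch of the command (the traceoptions file statement and the file name mld-snoop-trace are assumed here, mirroring the VLAN-scoped procedure later in this guide):
[edit protocols mld-snooping]
user@switch# set traceoptions file mld-snoop-trace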
2. (Optional) Configure the maximum number of trace files and size of the trace files:
[edit protocols mld-snooping]
user@switch# set traceoptions file files number size size
For example:
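A sketch of the command (values chosen to match the description that follows; the traceoptions hierarchy is assumed):
[edit protocols mld-snooping]
user@switch# set traceoptions file files 5 size 1m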
causes the contents of the trace file to be emptied and archived in a .gz file when the file reaches 1
MB. Four archive files are maintained, the contents of which are rotated whenever the current active
trace file is archived.
If you omit this step, the maximum number of trace files defaults to 10, and the maximum file size
to 128 KB.
3. Specify one of the tracing flags shown in Table 9 on page 214:
For example, to perform trace operations on VLAN-related events and MLD query messages:
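A sketch of the commands (the flag statement under traceoptions is assumed here):
[edit protocols mld-snooping]
user@switch# set traceoptions flag vlan
user@switch# set traceoptions flag query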
You can stop and restart tracing operations by deactivating and reactivating the configuration.
IN THIS SECTION
By enabling tracing operations for MLD snooping, you can record detailed messages about the
operation of the protocol, such as the various types of protocol packets sent and received. Table 10 on
page 218 describes the tracing operations you can enable and the flags used to specify them in the
tracing configuration.
Trace normal MLD snooping protocol events. If you do not specify this flag, only
unusual or abnormal operations are traced.
[edit protocols mld-snooping]
user@switch# set vlan vlan-name traceoptions file filename
For example:
[edit protocols mld-snooping]
user@switch# set vlan vlan100 traceoptions file mld-snoop-trace
2. (Optional) Configure the maximum number of trace files and size of the trace files:
[edit protocols mld-snooping]
user@switch# set vlan vlan-name traceoptions file files number size size
For example:
[edit protocols mld-snooping]
user@switch# set vlan vlan100 traceoptions file files 5 size 1m
causes the contents of the trace file to be emptied and archived in a .gz file when the file reaches 1
MB. Four archive files are maintained, the contents of which are rotated whenever the current active
trace file is archived.
If you omit this step, the maximum number of trace files defaults to 10, and the maximum file size to
128 KB.
[edit protocols mld-snooping]
user@switch# set vlan vlan-name traceoptions flag flagname
For example, to perform trace operations on VLAN-related events and on MLD query messages:
[edit protocols mld-snooping]
user@switch# set vlan vlan100 traceoptions flag vlan
user@switch# set vlan vlan100 traceoptions flag query
You can stop and restart tracing operations by deactivating and reactivating the configuration:
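For example, using the standard Junos deactivate and activate configuration commands:
[edit]
user@switch# deactivate protocols mld-snooping vlan vlan100 traceoptions
user@switch# commit
user@switch# activate protocols mld-snooping vlan vlan100 traceoptions
user@switch# commit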
RELATED DOCUMENTATION
IN THIS SECTION
Requirements | 221
Configuration | 223
You can enable MLD snooping on a VLAN to constrain the flooding of IPv6 multicast traffic on a VLAN.
When MLD snooping is enabled, a switch examines MLD messages between hosts and multicast routers
and learns which hosts are interested in receiving multicast traffic for a multicast group. Based on what
it learns, the switch then forwards IPv6 multicast traffic only to those interfaces connected to interested
receivers instead of flooding the traffic to all interfaces.
Requirements
This example uses the following software and hardware components:
IN THIS SECTION
Topology | 222
In this example, interfaces ge-0/0/0, ge-0/0/1, and ge-0/0/2 on the switch are in vlan100 and are
connected to hosts that are potential multicast receivers. Interface ge-0/0/12, a trunk interface also in
vlan100, is connected to a multicast router. The router acts as the MLD querier and forwards multicast
traffic for group ff1e::2010 to the switch from a multicast source.
Topology
In this example topology, the multicast router forwards multicast traffic to the switch from the source
when it receives a membership report for group ff1e::2010 from one of the hosts—for example, Host B.
If MLD snooping is not enabled on vlan100, the switch floods the multicast traffic on all interfaces in
vlan100 (except for interface ge-0/0/12). If MLD snooping is enabled on vlan100, the switch monitors
the MLD messages between the hosts and router, allowing it to determine that only Host B is interested
in receiving the multicast traffic. The switch then forwards the multicast traffic only to interface
ge-0/0/1.
This example shows how to enable MLD snooping on vlan100. It also shows how to perform the
following optional configurations, which can reduce group join and leave latency:
• Configure immediate leave on the VLAN. When immediate leave is configured, the switch stops
forwarding multicast traffic on an interface when it detects that the last member of the multicast
group has left the group. If immediate leave is not configured, the switch waits until the group-
specific membership queries time out before it stops forwarding traffic.
• Configure ge-0/0/12 as a static multicast-router interface. In this topology, ge-0/0/12 always leads
to the multicast router. By statically configuring ge-0/0/12 as a multicast-router interface, you avoid
any delay imposed by the switch having to learn that ge-0/0/12 is a multicast-router interface.
Configuration
IN THIS SECTION
Procedure | 223
Procedure
To quickly configure MLD snooping, copy the following commands and paste them into the switch
terminal window:
[edit]
set protocols mld-snooping vlan vlan100
set protocols mld-snooping vlan vlan100 immediate-leave
set protocols mld-snooping vlan vlan100 interface ge-0/0/12 multicast-router-interface
Step-by-Step Procedure
1. Enable MLD snooping on vlan100:
[edit protocols]
user@switch# set mld-snooping vlan vlan100
2. Configure the switch to immediately remove a group membership from an interface when it receives
a leave report from the last member of the group on the interface:
[edit protocols]
user@switch# set mld-snooping vlan vlan100 immediate-leave
3. Configure ge-0/0/12 as a static multicast-router interface:
[edit protocols]
user@switch# set mld-snooping vlan vlan100 interface ge-0/0/12 multicast-router-interface
Results
[edit protocols]
user@switch# show mld-snooping
vlan vlan100 {
immediate-leave;
interface ge-0/0/12.0 {
multicast-router-interface;
}
}
IN THIS SECTION
To verify that MLD snooping is enabled on the VLAN and the MLD snooping forwarding interfaces are
correct, perform the following task:
Purpose
Verify that MLD snooping is enabled on vlan100 and that the multicast-router interface is statically
configured:
Action
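From operational mode, enter the membership command (shown here in its non-ELS form; this is an assumption based on the commands used elsewhere in this guide):
user@switch> show mld-snooping membership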
Meaning
MLD snooping is running on vlan100, and interface ge-0/0/12.0 is a statically configured multicast-
router interface. Because the multicast group ff1e::2010 is listed, at least one host in the VLAN is a
current member of the multicast group and that host is on interface ge-0/0/1.0.
RELATED DOCUMENTATION
IN THIS SECTION
Requirements | 226
Configuration | 229
NOTE: This example uses Junos OS with support for the Enhanced Layer 2 Software (ELS)
configuration style. For ELS details, see Using the Enhanced Layer 2 Software CLI.
You can enable MLD snooping on a VLAN to constrain the flooding of IPv6 multicast traffic on a VLAN.
When MLD snooping is enabled, a switch examines MLD messages between hosts and multicast routers
and learns which hosts are interested in receiving multicast traffic for a multicast group. On the basis of
what it learns, the switch then forwards IPv6 multicast traffic only to those interfaces connected to
interested receivers instead of flooding the traffic to all interfaces.
Requirements
This example uses the following software and hardware components:
• Junos OS Release 13.3 or later for EX Series switches or Junos OS Release 15.1X53-D10 or later for
QFX10000 switches
See Configuring VLANs for EX Series Switches or Configuring VLANs on Switches with Enhanced Layer
2 Support.
IN THIS SECTION
Topology | 228
In this example, interfaces ge-0/0/0, ge-0/0/1, and ge-0/0/2 on the switch are in vlan100 and are
connected to hosts that are potential multicast receivers. Interface ge-0/0/12, a trunk interface also in
vlan100, is connected to a multicast router. The router acts as the MLD querier and forwards multicast
traffic for group ff1e::2010 to the switch from a multicast source.
Topology
In this sample topology, the multicast router forwards multicast traffic to the switch from the source
when it receives a membership report for group ff1e::2010 from one of the hosts—for example, Host B.
If MLD snooping is not enabled on vlan100, the switch floods the multicast traffic on all interfaces in
vlan100 (except for interface ge-0/0/12). If MLD snooping is enabled on vlan100, the switch monitors
the MLD messages between the hosts and router, allowing it to determine that only Host B is interested
in receiving the multicast traffic. The switch then forwards the multicast traffic only to interface
ge-0/0/1.
This example shows how to enable MLD snooping on vlan100. It also shows how to perform the
following optional configurations, which can reduce group join and leave latency:
• Configure immediate leave on the VLAN. When immediate leave is configured, the switch stops
forwarding multicast traffic on an interface when it detects that the last member of the multicast
group has left the group. If immediate leave is not configured, the switch waits until the group-
specific membership queries time out before it stops forwarding traffic.
• Configure ge-0/0/12 as a static multicast-router interface. In this topology, ge-0/0/12 always leads
to the multicast router. By statically configuring ge-0/0/12 as a multicast-router interface, you avoid
any delay imposed by the switch having to learn that ge-0/0/12 is a multicast-router interface.
Configuration
IN THIS SECTION
Procedure | 229
Procedure
To quickly configure MLD snooping, copy the following commands and paste them into the switch
terminal window:
[edit]
set protocols mld-snooping vlan vlan100
set protocols mld-snooping vlan vlan100 immediate-leave
set protocols mld-snooping vlan vlan100 interface ge-0/0/12 multicast-router-interface
Step-by-Step Procedure
1. Enable MLD snooping on vlan100:
[edit protocols]
user@switch# set mld-snooping vlan vlan100
2. Configure the switch to immediately remove a group membership from an interface when it receives
a leave report from the last member of the group on the interface:
[edit protocols]
user@switch# set mld-snooping vlan vlan100 immediate-leave
3. Configure ge-0/0/12 as a static multicast-router interface:
[edit protocols]
user@switch# set mld-snooping vlan vlan100 interface ge-0/0/12 multicast-router-interface
Results
[edit protocols]
user@switch# show mld-snooping
vlan vlan100 {
immediate-leave;
interface ge-0/0/12.0 {
multicast-router-interface;
}
}
IN THIS SECTION
To verify that MLD snooping is enabled on the VLAN and the MLD snooping forwarding interfaces are
correct, perform the following task:
Purpose
Verify that MLD snooping is enabled on vlan100 and that the multicast-router interface is statically
configured:
Action
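From operational mode, enter the following command (the command name is an assumption based on the interface-level output that follows):
user@switch> show mld snooping interface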
Vlan: vlan100
Learning-Domain: default
Interface: ge-0/0/12.0
State: Up Groups: 3
Immediate leave: On
Router interface: yes
Configured Parameters:
MLD Query Interval: 125.0
MLD Query Response Interval: 10.0
MLD Last Member Query Interval: 1.0
MLD Robustness Count: 2
Meaning
MLD snooping is running on vlan100, and interface ge-0/0/12.0 is a statically configured multicast-
router interface. Immediate leave is enabled on the interface.
RELATED DOCUMENTATION
Configuring MLD Snooping on a Switch VLAN with ELS Support (CLI Procedure) | 195
Verifying MLD Snooping on Switches | 237
Understanding MLD Snooping | 174
IN THIS SECTION
Multicast Listener Discovery (MLD) snooping constrains the flooding of IPv6 multicast traffic on VLANs
on a switch. This topic describes how to verify MLD snooping operation on the switch.
IN THIS SECTION
Purpose | 232
Action | 232
Meaning | 233
Purpose
Determine group memberships, multicast-router interfaces, host MLD versions, and the current values
of timeout counters.
Action
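From operational mode, enter the membership command (non-ELS form; an assumption based on the membership details discussed below):
user@switch> show mld-snooping membership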
Meaning
The switch has multicast membership information for one VLAN on the switch, mld-vlan. MLD snooping
might be enabled on other VLANs, but the switch does not have any multicast membership information
for them. The following information is provided:
• Information on the multicast-router interfaces for the VLAN—in this case, ge-1/0/0.0. The multicast-
router interface has been learned by MLD snooping, as indicated by dynamic. The timeout value
shows how many seconds from now the interface will be removed from the multicast forwarding
table if the switch does not receive MLD queries or Protocol Independent Multicast (PIM) updates on
the interface.
• Currently, the VLAN has membership in only one multicast group, ff1e::2010.
• The host or hosts that have reported membership in the group are on interface ge-1/0/30.0. The
interface group membership will time out in 180 seconds if no hosts respond to membership
queries during this interval. The flags field shows the lowest version of MLD used by a host that is
currently a member of the group, which in this case is MLD version 2 (MLDv2).
• The last host that reported membership in the group has address fe80::2020:1:1:3.
• Because the interface has MLDv2 hosts on it, the source addresses from which the MLDv2 hosts
want to receive group multicast traffic are shown (addresses 2020:1:1:1::2 and 2020:1:1:1::5). The
timeout value for the interface group membership is derived from the largest timeout value for all
source addresses for the group.
IN THIS SECTION
Purpose | 234
Action | 234
Meaning | 234
Purpose
Verify that MLD snooping is enabled on a VLAN and display MLD snooping information for each VLAN
on which MLD snooping is enabled.
Action
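From operational mode, enter the VLAN command (non-ELS form; an assumption based on the per-VLAN information discussed below):
user@switch> show mld-snooping vlans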
Meaning
MLD snooping is configured on two VLANs on the switch: v10 and v20. Each interface in each VLAN is
listed and the following information is provided:
IN THIS SECTION
Purpose | 235
Action | 235
Meaning | 235
Purpose
Display MLD snooping statistics, such as number of MLD queries, reports, and leaves received and how
many of these MLD messages contained errors.
Action
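From operational mode, enter the statistics command (non-ELS form; an assumption based on the statistics discussed below):
user@switch> show mld-snooping statistics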
Meaning
The output shows how many MLD messages of each type—Queries, Reports, Leaves—the switch
received or transmitted on interfaces on which MLD snooping is enabled. For each message type, it also
shows the number of MLD packets the switch received that had errors—for example, packets that do
not conform to the MLDv1 or MLDv2 standards. If the Recv Errors count increases, verify that the hosts
are compliant with MLDv1 or MLDv2 standards. If the switch is unable to recognize the MLD message
type for a packet, it counts the packet under Receive unknown.
IN THIS SECTION
Purpose | 236
Action | 236
Meaning | 236
Purpose
Display the next-hop information maintained in the multicast snooping forwarding table.
Action
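From operational mode, enter the route command (non-ELS form; an assumption based on the route information discussed below):
user@switch> show mld-snooping route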
Meaning
The output shows the next-hop interfaces for a given multicast group on a VLAN. Only the last 32 bits
of the group address are shown because the switch uses only these bits in determining multicast routes.
For example, route ::0000:2010 on mld-vlan has next-hop interfaces ge-1/0/30.0 and ge-1/0/33.0.
RELATED DOCUMENTATION
IN THIS SECTION
NOTE: This topic uses Junos OS with support for the Enhanced Layer 2 Software (ELS)
configuration style. If your switch runs software that does not support ELS, see "Verifying MLD
Snooping on EX Series Switches (CLI Procedure)" on page 232. For ELS details, see Using the
Enhanced Layer 2 Software CLI.
Multicast Listener Discovery (MLD) snooping constrains the flooding of IPv6 multicast traffic on VLANs.
This topic describes how to verify MLD snooping operation on a VLAN.
IN THIS SECTION
Purpose | 237
Action | 238
Meaning | 238
Purpose
Verify that MLD snooping is enabled on a VLAN and determine group memberships.
Action
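From operational mode, enter the show mld snooping membership command:
user@switch> show mld snooping membership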
Vlan: v1
Learning-Domain: default
Interface: ge-0/0/1.0, Groups: 1
Group: ff05::1
Group mode: Exclude
Source: ::
Last reported by: fe80::
Group timeout: 259 Type: Dynamic
Interface: ge-0/0/2.0, Groups: 0
Meaning
The switch has multicast membership information for one VLAN on the switch, v1. MLD snooping might
be enabled on other VLANs, but the switch does not have any multicast membership information for
them.
• The following information is provided about the group memberships for the VLAN:
• Currently, the VLAN has membership in only one multicast group, ff05::1.
• The host or hosts that have reported membership in the group are on interface ge-0/0/1.0.
• The last host that reported membership in the group has address fe80::.
• The interface group membership will time out in 259 seconds if no hosts respond to membership
queries during this interval.
• The group membership has been learned by MLD snooping, as indicated by Dynamic.
IN THIS SECTION
Purpose | 239
Action | 239
Meaning | 239
Purpose
Display MLD snooping information for each interface on which MLD snooping is enabled.
Action
Vlan: v100
Learning-Domain: default
Interface: ge-0/0/1.0
State: Up Groups: 1
Immediate leave: Off
Router interface: no
Interface: ge-0/0/2.0
State: Up Groups: 0
Immediate leave: Off
Router interface: no
Configured Parameters:
MLD Query Interval: 125.0
MLD Query Response Interval: 10.0
MLD Last Member Query Interval: 1.0
MLD Robustness Count: 2
Meaning
MLD snooping is configured on one VLAN on the switch, v100. Each interface in each VLAN is listed
and the following information is provided:
The output also shows the configured parameters for the MLD querier.
IN THIS SECTION
Purpose | 240
Action | 240
Meaning | 241
Purpose
Display MLD snooping statistics, such as number of MLD queries, reports, and leaves received and how
many of these MLD messages contained errors.
Action
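From operational mode, enter the statistics command (an assumption based on the statistics output that follows):
user@switch> show mld snooping statistics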
Vlan: v2
MLD Message type          Received   Sent   Rx errors
Listener Query (v1/v2)           0      4           0
Listener Report (v1)           154      0           0
Listener Done (v1/v2)            0      0           0
Listener Report (v2)             0      0           0
Other Unknown types              0
Instance: default-switch
MLD Message type          Received   Sent   Rx errors
Listener Query (v1/v2)           0      8           0
Listener Report (v1)           601      0           0
Listener Done (v1/v2)            0      0           0
Listener Report (v2)             0      0           0
Other Unknown types              0
Meaning
The output shows how many MLD messages of each type (Listener Query, Listener Report, and Listener Done) the switch received
or transmitted on interfaces on which MLD snooping is enabled. For each message type, it also shows
the number of MLD packets the switch received that had errors—for example, packets that do not
conform to the MLDv1 or MLDv2 standards. If the Rx errors count increases, verify that the hosts are
compliant with MLDv1 or MLDv2 standards. If the switch is unable to recognize the MLD message type
for a packet, it counts the packet under Other Unknown types.
IN THIS SECTION
Purpose | 241
Action | 242
Meaning | 242
Purpose
Display the next-hop information maintained in the multicast snooping forwarding table.
Action
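From operational mode, enter the multicast snooping route command (the inet6 qualifier is an assumption based on the Family: INET6 output that follows):
user@switch> show multicast snooping route inet6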
Family: INET6
Group: ff00::/8
Source: ::/128
Vlan: v1
Group: ff02::1/128
Source: ::/128
Vlan: v1
Downstream interface list:
ge-1/0/16.0
Group: ff05::1/128
Source: ::/128
Vlan: v1
Downstream interface list:
ge-1/0/16.0
Group: ff06::1/128
Source: ::/128
Vlan: v1
Downstream interface list:
ge-1/0/16.0
Meaning
The output shows the next-hop interfaces for a given multicast group on a VLAN. For example, route
ff02::1/128 on VLAN v1 has the next-hop interface ge-1/0/16.0.
RELATED DOCUMENTATION
CHAPTER 5
IN THIS CHAPTER
Example: Configuring Multicast VLAN Registration on EX Series Switches Without ELS | 266
IN THIS SECTION
Multicast VLAN registration (MVR) enables more efficient distribution of IPTV multicast streams across
an Ethernet ring-based Layer 2 network.
In a standard Layer 2 network, a multicast stream received on one VLAN is never distributed to
interfaces outside that VLAN. If hosts in multiple VLANs request the same multicast stream, a separate
copy of that multicast stream is distributed to each requesting VLAN.
When you configure MVR, you create a multicast VLAN (MVLAN) that becomes the only VLAN over
which IPTV multicast traffic flows throughout the Layer 2 network. Devices with MVR enabled
selectively forward IPTV multicast traffic from interfaces on the MVLAN (source interfaces) to hosts that
are connected to interfaces that are not part of the MVLAN that you designate as MVR receiver ports.
MVR receiver ports can receive traffic from a port on the MVLAN but cannot send traffic onto the
MVLAN, and those ports remain in their own VLANs for bandwidth and security reasons.
• Reduces the bandwidth required to distribute IPTV multicast streams by eliminating duplication of
multicast streams from the same source to interested receivers on different VLANs.
MVR operates similarly to and in conjunction with Internet Group Management Protocol (IGMP)
snooping. Both MVR and IGMP snooping monitor IGMP join and leave messages and build forwarding
tables based on the media access control (MAC) addresses of the hosts sending those IGMP messages.
Whereas IGMP snooping operates within a given VLAN to regulate multicast traffic, MVR can operate
with hosts on different VLANs in a Layer 2 network to selectively deliver IPTV multicast traffic to any
requesting hosts. This reduces the bandwidth needed to forward the traffic.
MVR Basics
MVR is not enabled by default on devices that support MVR. You explicitly configure an MVLAN and
assign a range of multicast group addresses to it. That VLAN carries MVLAN traffic for the configured
multicast groups. You then configure other VLANs to be MVR receiver VLANs that receive multicast
streams from the MVLAN. When MVR is configured on a device, the device receives only one copy of
each MVR multicast stream, and then replicates the stream only to the hosts that want to receive it,
while forwarding all other types of multicast traffic without modification.
You can configure multiple MVLANs on a device, but they must have disjoint multicast group subnets.
An MVR receiver VLAN can be associated with more than one MVLAN on the device.
MVR does not support MVLANs or MVR receiver VLANs on a private VLAN (PVLAN).
On non-ELS switches, the MVR receiver ports comprise all the interfaces that exist on any of the MVR
receiver VLANs.
On ELS switches, the MVR receiver ports are all the interfaces on the MVR receiver VLANs except the
multicast router ports; an interface can be configured in both an MVR receiver VLAN and its MVLAN
only if it is configured as a multicast router port in both VLANs. ELS EX Series switches support MVR as
follows:
• Starting in Junos OS Release 18.3R1, EX4300 switches and Virtual Chassis support MVR. You can
configure up to 10 MVLANs on these devices.
• Starting in Junos OS Release 18.4R1, EX2300 and EX3400 switches and Virtual Chassis support
MVR. You can configure up to 5 MVLANs on these devices.
• Starting in Junos OS Release 19.4R1, EX4300 multigigabit model (EX4300-48MP) switches and
Virtual Chassis support MVR. You can configure up to 10 MVLANs on these devices.
NOTE: MVR has some configuration and operational differences on EX Series switches that use
the Enhanced Layer 2 Software (ELS) configuration style compared to MVR on switches that do
not support ELS. Where applicable, the following sections explain these differences.
MVR Modes
MVR can operate in two modes: MVR transparent mode and MVR proxy mode. Both modes enable
MVR to forward only one copy of a multicast stream to the Layer 2 network. However, the main
difference between the two modes is in how the device sends IGMP reports upstream to the multicast
router. The device essentially handles IGMP queries the same way in either mode.
You configure MVR modes differently on non-ELS and ELS switches. Also, on ELS switches, you can
associate an MVLAN with some MVR receiver VLANs operating in proxy mode and others operating in
transparent mode if you have multicast requirements for both modes in your network.
Transparent mode is the default mode when you configure an MVR receiver VLAN, also called a data-
forwarding receiver VLAN.
NOTE: On ELS switches, you can explicitly configure transparent mode, although it is also the
default setting if you don’t configure an MVR receiver mode.
In MVR transparent mode, the device handles IGMP packets destined for both the multicast source
VLAN and multicast receiver VLANs similarly to the way that it handles them when MVR is not being
used. Without MVR, when a host on a VLAN sends IGMP join and leave messages, the device forwards
the messages to all multicast router interfaces in the VLAN. Similarly, when a VLAN receives IGMP
queries from its multicast router interfaces, it forwards the queries to all interfaces in the VLAN.
With MVR in transparent mode, the device handles IGMP reports and queries as follows:
• Receives IGMP join and leave messages on MVR receiver VLAN interfaces and forwards them to the
multicast router ports on the MVR receiver VLAN.
• Forwards IGMP queries on the MVR receiver VLAN to all MVR receiver ports.
• Forwards IGMP queries received on the MVLAN only to the MVR receiver ports that are in receiver
VLANs associated with that MVLAN, even though those ports might not be on the MVLAN itself.
NOTE: Devices in transparent mode only send IGMP reports in the context of the MVR receiver
VLAN. In other words, if MVR receiver ports receive an IGMP query from an upstream multicast
router on the MVLAN, they only send replies on the MVR receiver VLAN multicast router ports.
The upstream router (that sent the queries on the MVLAN) does not receive the replies and does
not forward any traffic, so to solve this problem, you must configure static membership. As a
result, we recommend that you use MVR proxy mode instead of transparent mode on the device
that is closest to the upstream multicast router. See "MVR Proxy Mode" on page 246.
If a host on a multicast receiver port in the MVR receiver VLAN joins a group, the device adds the
appropriate bridging entry on the MVLAN for that group. When the device receives traffic on the
MVLAN for that group, it forwards the traffic on that port tagged with the MVLAN tag (even though the
port is not in the MVLAN). Likewise, if a host on a multicast receiver port on the MVR receiver VLAN
leaves a group, the device deletes the matching bridging entry, and the MVLAN stops forwarding that
group’s MVR traffic on that port.
When in transparent mode, by default, the device installs bridging entries only on the MVLAN that is the
source for the group address, so if the device receives MVR receiver VLAN traffic for that group, the
device would not forward the traffic to receiver ports on the MVR receiver VLAN that sent the join
message for that group. The device only forwards traffic to MVR receiver interfaces on the MVLAN. To
enable MVR receiver VLAN ports to receive traffic forwarded on the MVR receiver VLAN, you can
configure the install option at the [edit protocols igmp-snooping vlan vlan-name data-forwarding
receiver] hierarchy level so the device also installs the bridging entries on the MVR receiver VLAN.
When you configure MVR in proxy mode, the device acts as an IGMP proxy to the multicast router for
MVR group membership requests received on MVR receiver VLANs. That means the device forwards
IGMP reports from hosts on MVR receiver VLANs in the context of the MVLAN and only forwards
them to the multicast router ports on the MVLAN. The multicast router receives IGMP reports only on
the MVLAN for those MVR receiver hosts.
The device handles IGMP queries in the same way as in transparent mode:
• Forwards IGMP queries received on the MVR receiver VLAN to all MVR receiver ports.
• Forwards IGMP queries received on the MVLAN only to the MVR receiver ports that are in receiver
VLANs belonging to that MVLAN, even though those ports might not be on the MVLAN itself.
In proxy mode, for multicast group memberships established in the context of the MVLAN, the device
installs bridging entries only on the MVLAN and forwards incoming MVLAN traffic to hosts on the MVR
receiver VLANs subscribed to those groups. Proxy mode doesn’t support the install option that enables
the device to also install bridging entries on the MVR receiver VLANs. As a result, when the device
receives traffic on an MVR receiver VLAN, it does not forward the traffic to the hosts on the MVR
receiver VLAN because the device does not have bridging entries for those MVR receiver ports on the
MVR receiver VLANs.
On non-ELS switches, you configure MVR proxy mode on an MVLAN using the "proxy" on page 1795
statement at the [edit protocols igmp-snooping vlan vlan-name] hierarchy level along with other IGMP
snooping configuration options.
NOTE: On non-ELS switches, this proxy configuration statement only supports MVR proxy mode
configuration. General IGMP snooping proxy operation is not supported.
When this option is enabled on non-ELS switches, the device acts as an IGMP proxy for any MVR
groups sourced by the MVLAN in both the upstream and downstream directions. In the downstream
direction, the device acts as the querier for those multicast groups in the MVR receiver VLANs. In the
upstream direction, the device originates the IGMP reports and leave messages, and answers IGMP
queries from multicast routers. Configuring this proxy option on an MVLAN automatically enables MVR
proxy operation for all MVR receiver VLANs associated with the MVLAN.
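A minimal non-ELS sketch under the hierarchy named above (the VLAN name mvlan and the group range are hypothetical):
set protocols igmp-snooping vlan mvlan data-forwarding source groups 225.1.1.0/24
set protocols igmp-snooping vlan mvlan proxy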
On ELS switches, you configure MVR proxy mode on the MVR receiver VLANs. You can configure MVR
proxy mode separately from IGMP snooping proxy mode, as follows:
• IGMP snooping proxy mode—You can use the "proxy" on page 1795 statement at the [edit protocols
igmp-snooping vlan vlan-name] hierarchy level on ELS switches to enable IGMP proxy operation
with or without MVR configuration. When you configure this option for a VLAN without configuring
MVR, the device acts as an IGMP proxy to the multicast router for ports in that VLAN. When you
configure this option on an MVLAN, the device acts as an IGMP proxy between the multicast router
and hosts in any associated MVR receiver VLANs.
NOTE: You configure this proxy mode on the MVLAN only, not on MVR receiver VLANs.
• MVR proxy mode—On ELS switches, you configure MVR proxy mode on an MVR receiver VLAN
(rather than on the MVLAN), using the proxy option at the [edit protocols igmp-snooping vlan
vlan-name data-forwarding receiver mode] hierarchy level, when you associate the MVR receiver VLAN with an
MVLAN. An ELS switch operating in MVR proxy mode for an MVR receiver VLAN acts as an IGMP
proxy for that MVR receiver VLAN to the multicast router in the context of the MVLAN.
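An ELS sketch combining the two roles described above (the VLAN names mvlan and v10 and the group range are hypothetical; the receiver options follow the hierarchy named above):
set protocols igmp-snooping vlan mvlan data-forwarding source groups 225.1.1.0/24
set protocols igmp-snooping vlan v10 data-forwarding receiver source-vlans mvlan
set protocols igmp-snooping vlan v10 data-forwarding receiver mode proxy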
When you configure MVR, the device sends multicast traffic and IGMP query packets downstream to
hosts in the context of the MVLAN by default. The MVLAN tag is included for VLAN-tagged traffic
egressing on trunk ports, while traffic egressing on access ports is untagged.
On ELS EX Series switches that support MVR, for VLANs with trunk ports and hosts on a multicast
receiver VLAN that expect traffic in the context of that receiver VLAN, you can configure the device to
translate the MVLAN tags into the multicast receiver VLAN tags. See the translate option at the [edit
protocols igmp-snooping vlan vlan-name data-forwarding receiver] hierarchy level.
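For example (v10 is a hypothetical receiver VLAN with hosts on trunk ports):
[edit protocols igmp-snooping vlan v10 data-forwarding receiver]
user@switch# set translate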
Based on the access layer topology of your network, the following sections describe recommended
ways you should configure MVR on devices in the access layer to smoothly deliver a single multicast
stream to subscribed hosts in multiple VLANs.
NOTE: These sections apply to EX Series switches running Junos OS with the Enhanced Layer 2
Software (ELS) configuration style only.
Figure 28 on page 249 shows a device in a single-tier access layer topology. The device is connected to
a multicast router in the upstream direction (INTF-1), with host trunk or access ports in the downstream
direction connected to multicast receivers in two different VLANs (v10 on INTF-2 and v20 on INTF-3).
Without MVR, the upstream interface (INTF-1) acts as a multicast router interface to the upstream
router and a trunk port in both VLANs. In this configuration, the upstream router would require two
integrated routing and bridging (IRB) interfaces to send two copies of the multicast stream to the device,
which then would forward the traffic to the receivers on the two different VLANs on INTF-2 and
INTF-3.
With MVR configured as indicated in Figure 28 on page 249, the multicast stream can be sent to
receivers in different VLANs in the context of a single MVLAN, and the upstream router only requires
one downstream IRB interface on which to send one MVLAN stream to the device.
For MVR to operate smoothly in this topology, we recommend you set up the following elements on the
single-tier device as illustrated in Figure 28 on page 249:
• An MVLAN with the device’s upstream multicast router interface configured as a trunk port and a
multicast router interface in the MVLAN. This upstream interface was already a trunk port and a
multicast router port for the receiver VLANs that will be associated with the MVLAN.
Figure 28 on page 249 shows an MVLAN configured on the device, and the upstream interface
INTF-1 configured previously as a trunk port and multicast router port in v10 and v20, is
subsequently added as a trunk and multicast router port in the MVLAN as well.
In Figure 28 on page 249, the device is connected to Host 1 on VLAN v10 (using trunk interface
INTF-2) and Host 2 on v20 (using access interface INTF-3). VLANs v10 and v20 use INTF-1 as a
trunk port and multicast router port in the upstream direction. These VLANs become MVR receiver
VLANs for the MVLAN, with INTF-1 also added as a trunk port and multicast router port in the
MVLAN.
• MVR running in proxy mode on the device, so the device processes MVR receiver VLAN IGMP group
memberships in the context of the MVLAN. The upstream router sends only one multicast stream on
the MVLAN downstream to the device, which is forwarded to hosts on the MVR receiver VLANs that
are subscribed to the multicast groups sourced by the MVLAN.
The device in Figure 28 on page 249 is configured in proxy mode and establishes group memberships
on the MVLAN for hosts on MVR receiver VLANs v10 and v20. The upstream router in the figure
sends only one multicast stream on the MVLAN through INTF-1 to the device, which forwards the
traffic to subscribed hosts on MVR receiver VLANs v10 and v20.
• MVR receiver VLAN tag translation enabled on receiver VLANs that have hosts on trunk ports, so
those hosts receive the multicast traffic in the context of their receiver VLANs. Hosts reached by way
of access ports receive untagged multicast packets (and don’t need MVR VLAN tag translation).
In Figure 28 on page 249, the device has translation enabled on v10 and substitutes the v10 VLAN
tag for the mvlan VLAN tag when forwarding the multicast stream on trunk interface INTF-2. The
device does not have translation enabled on v20, and forwards untagged multicast packets on access
port INTF-3.
Figure 29 on page 251 shows devices in a two-tier access layer topology. The upper or upstream device
is connected to the multicast router in the upstream direction (INTF-1) and to a second device
downstream (INTF-2). The lower or downstream device connects to the upstream device (INTF-3), and
uses trunk or access ports in the downstream direction to connect to multicast receivers in two different
VLANs (v10 on INTF-4 and v20 on INTF-5).
Without MVR, similar to the single-tier access layer topology, the upper device connects to the
upstream multicast router using a multicast router interface that is also a trunk port in both receiver
VLANs. The two layers of devices are connected with trunk ports in the receiver VLANs. The lower
device has trunk or access ports in the receiver VLANs connected to the multicast receiver hosts. In this
configuration, the upstream router must duplicate the multicast stream and use two IRB interfaces to
send copies of the same data to the two VLANs. The upstream device also sends duplicate streams
downstream for receivers on the two VLANs.
With MVR configured as shown in Figure 29 on page 251, the multicast stream can be sent to receivers
in different VLANs in the context of a single MVLAN from the upstream router and through the multiple
tiers in the access layer.
For MVR to operate smoothly in this topology, we recommend that you set up the following elements on
the different tiers of devices in the access layer, as illustrated in Figure 29 on page 251:
• An MVLAN configured on the devices in all tiers in the access layer. The device in the uppermost tier
connects to the upstream multicast router with a multicast router interface and a trunk port in the
MVLAN. This upstream interface was already a trunk port and a multicast router port for the receiver
VLANs that will be associated with the MVLAN.
Figure 29 on page 251 shows an MVLAN configured on all tiers of devices. The upper-tier device is
connected to the multicast router using interface INTF-1, configured previously as a trunk port and
multicast router port in v10 and v20, and subsequently added to the configuration as a trunk and
multicast router port in the MVLAN as well.
• MVR receiver VLANs associated with the MVLAN on the devices in all tiers in the access layer.
In Figure 29 on page 251, the lower-tier device is connected to Host 1 on VLAN v10 (using trunk
interface INTF-4) and Host 2 on v20 (using access interface INTF-5). VLANs v10 and v20 use INTF-3
as a trunk port and multicast router port in the upstream direction to the upper-tier device. The
upper device connects to the lower device using INTF-2 as a trunk port in the downstream direction
to send IGMP queries and forward multicast traffic on v10 and v20. VLANs v10 and v20 are then
configured as MVR receiver VLANs for the MVLAN, with INTF-3 also added as a trunk port and
multicast router port in the MVLAN. VLANs v10 and v20 are also configured on the upper-tier
device as MVR receiver VLANs for the MVLAN.
• MVR running in proxy mode on the device in the uppermost tier for the MVR receiver VLANs, so the
device acts as a proxy to the multicast router for group membership requests received on the MVR
receiver VLANs. The upstream router sends only one multicast stream on the MVLAN downstream
to the device.
In Figure 29 on page 251, the upper-tier device is configured in proxy mode and establishes group
memberships on the MVLAN for hosts on MVR receiver VLANs v10 and v20. The upstream router in
the figure sends only one multicast stream on the MVLAN, which reaches the upper device through
INTF-1. The upper device forwards the stream to the devices in the lower tiers using INTF-2.
• No MVR receiver VLAN tag translation enabled on MVLAN traffic egressing from upper-tier devices.
Devices in the intermediate tiers should forward MVLAN traffic downstream in the context of the
MVLAN, tagged with the MVLAN tag.
The upper device in the figure does not have translation enabled for either receiver VLAN v10 or v20
for the interface INTF-2 that connects to the lower-tier device.
• MVR running in transparent mode on the devices in the lower tiers of the access layer. Because they
operate in transparent mode, the lower devices send IGMP reports upstream in the context of the
receiver VLANs, and install bridging entries for the MVLAN only (the default) or, with the install
option configured, for both the MVLAN and the MVR receiver VLANs. The uppermost device runs in
proxy mode and installs bridging entries for the MVLAN only. The upstream router sends only one
multicast stream on the MVLAN downstream toward the receivers, and the traffic is forwarded to
the MVR receiver VLANs in the context of the MVLAN, with VLAN tag translation if the translate
option is enabled (described next).
In Figure 29 on page 251, the lower device is connected to the upper device with INTF-3 as a trunk
port and the multicast router port for receiver VLANs v10 and v20. To enable MVR on the lower-tier
device, the two MVR receiver VLANs are configured in MVR transparent mode, and INTF-3 is
additionally configured to be a trunk port and multicast router port for the MVLAN.
• MVR receiver VLAN tag translation enabled on receiver VLANs on lower-tier devices that have hosts
on trunk ports, so those hosts receive the multicast traffic in the context of their receiver VLANs.
Hosts reached by way of access ports receive untagged packets, so no VLAN tag translation is
needed in that case.
In Figure 29 on page 251, the device has translation enabled on v10 and substitutes the v10 receiver
VLAN tag for mvlan’s VLAN tag when forwarding the multicast stream on trunk interface INTF-4.
The device does not have translation enabled on v20, and forwards untagged multicast packets on
access port INTF-5.
Release Description
19.4R1 Starting in Junos OS Release 19.4R1, EX4300 multigigabit model (EX4300-48MP) switches and Virtual
Chassis support MVR. You can configure up to 10 MVLANs on these devices.
18.4R1 Starting in Junos OS Release 18.4R1, EX2300 and EX3400 switches and Virtual Chassis support MVR.
You can configure up to 5 MVLANs on these devices.
18.3R1 Starting in Junos OS Release 18.3R1, EX4300 switches and Virtual Chassis support MVR. You can
configure up to 10 MVLANs on these devices.
RELATED DOCUMENTATION
IN THIS SECTION
Viewing MVLAN and MVR Receiver VLAN Information on EX Series Switches with ELS | 263
Multicast VLAN registration (MVR) enables hosts that are not part of a multicast VLAN (MVLAN) to
receive multicast streams from the MVLAN, sharing the MVLAN across multiple VLANs in a Layer 2
network. Hosts remain in their own VLANs for bandwidth and security reasons but are able to receive
multicast streams on the MVLAN.
MVR is not enabled by default on switches that support MVR. You must explicitly configure a switch
with a data-forwarding source MVLAN and associate it with one or more data-forwarding MVR receiver
VLANs. When you configure one or more VLANs on a switch to be MVR receiver VLANs, you must
configure at least one associated source MVLAN. However, you can configure a source MVLAN without
associating MVR receiver VLANs with it at the same time.
The overall purpose and benefits of employing MVR are the same on switches that use Enhanced
Layer 2 Software (ELS) configuration style and those that do not use ELS. However, there are differences
in MVR configuration and operation on the two types of switches.
• In an access layer with a single tier of switches, where a switch is connected to a multicast router in
the upstream direction, and has host trunk or access ports connecting to downstream multicast
receivers:
• Statically configure the upstream interface to the multicast router as a multicast router port in the
MVLAN.
• Configure the translate option on MVR receiver VLANs that have trunk ports, so hosts on those
trunk ports receive the multicast packets tagged for their own VLANs.
• In an access layer with multiple tiers of switches, with a switch connected upstream to the multicast
router and a path through one or more downstream switches to multicast receivers:
• Configure MVR on the receiver VLANs to operate in proxy mode on the uppermost switch that is
directly connected to the upstream multicast router.
• Configure MVR on the receiver VLANs to operate in transparent mode for the remaining
downstream tiers of switches.
• Statically configure a multicast router port to the switch in the upstream direction on each tier for
the MVLAN.
• On the lowest tier of MVR switches (connected to receiver hosts), configure MVLAN tag
translation for MVR receiver VLANs that have trunk ports, so hosts on those trunk ports receive
the multicast stream with the packets tagged with their own VLANs.
NOTE: When enabling MVR on ELS switches, depending on your multicast network
requirements, you can have some MVR receiver VLANs configured in proxy mode and some in
transparent mode that are associated with the same MVLAN, because the MVR mode setting
applies individually to an MVR receiver VLAN. The mode configurations described here are only
recommendations for smooth MVR operation in those topologies.
The following constraints apply when configuring MVR on ELS EX Series switches:
• A VLAN can be configured as either an MVLAN or an MVR receiver VLAN, not both. However, an
MVR receiver VLAN can be associated with more than one MVLAN.
• An MVLAN can be the source for only one multicast group subnet, so multiple MVLANs configured
on a switch must have unique multicast group subnet ranges.
• You can configure an interface in both an MVR receiver VLAN and its MVLAN only if it is configured
as a multicast router port in both VLANs.
• You cannot configure proxy mode with the install option to also install forwarding entries on an MVR
receiver VLAN. In proxy mode, IGMP reports are sent to the upstream router only in the context of
the MVLAN. Multicast sources will not receive IGMP reports on the MVR receiver VLAN, and
multicast traffic will not be sent on the MVR receiver VLAN.
• MVR does not support configuring an MVLAN or MVR receiver VLANs on private VLANs (PVLANs).
For example, configure VLAN mvlan as an MVLAN for multicast group subnet 233.252.0.0/8:
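A sketch of that configuration, following the data-forwarding statement hierarchy shown elsewhere in
this guide (verify the statements against your Junos OS release):

[edit protocols]
user@switch# set igmp-snooping vlan mvlan data-forwarding source groups 233.252.0.0/8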
2. Configure one or more data-forwarding MVR receiver VLANs associated with the source MVLAN:
For example, configure two MVR receiver VLANs v10 and v20 associated with the MVLAN named
mvlan:
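A sketch of those statements, assuming the same data-forwarding receiver hierarchy used in the
examples later in this guide:

[edit protocols]
user@switch# set igmp-snooping vlan v10 data-forwarding receiver source-vlans mvlan
user@switch# set igmp-snooping vlan v20 data-forwarding receiver source-vlans mvlan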
For example, configure the two MVR receiver VLANs v10 and v20 (associated with the MVLAN
named mvlan) from the previous step to use proxy mode:
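A sketch of that configuration (the mode statement under data-forwarding receiver is assumed here;
verify it against your Junos OS release):

[edit protocols]
user@switch# set igmp-snooping vlan v10 data-forwarding receiver mode proxy
user@switch# set igmp-snooping vlan v20 data-forwarding receiver mode proxy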
NOTE: On ELS switches, the MVR mode setting applies to individual MVR receiver VLANs. All
MVR receiver VLANS associated with an MVLAN are not required to have the same mode
setting. Depending on your multicast network requirements, you might want to configure
some MVR receiver VLANs in proxy mode and others that are associated with the same
MVLAN in transparent mode.
4. In a multiple-tier topology, for the remaining switches that are not the uppermost switch, configure
each MVR receiver VLAN on each switch to operate in transparent mode. An MVR receiver VLAN
operates in transparent mode by default if you do not set the mode explicitly, so this step is optional
on these switches.
For example, configure two MVR receiver VLANs v10 and v20 that are associated with the MVLAN
named mvlan to use transparent mode:
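A sketch of that configuration (optional, because transparent mode is the default; the mode statement
is assumed here and should be verified against your Junos OS release):

[edit protocols]
user@switch# set igmp-snooping vlan v10 data-forwarding receiver mode transparent
user@switch# set igmp-snooping vlan v20 data-forwarding receiver mode transparent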
5. Configure a multicast router port in the upstream direction for the MVLAN on the MVR switch in a
single-tier topology or on the MVR switch in each tier of a multiple-tier topology:
For example, configure a multicast router interface ge-0/0/10.0 for the MVLAN named mvlan:
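A sketch using the multicast-router-interface statement (verify against your Junos OS release):

[edit protocols]
user@switch# set igmp-snooping vlan mvlan interface ge-0/0/10.0 multicast-router-interface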
6. On an MVR switch connected to the receiver hosts with trunk or access ports (applies only to the
lowest tier in a multiple-tier topology), configure MVLAN tag translation on MVR receiver VLANs
that have trunk ports, so hosts on the trunk ports can receive the multicast stream with the packets
tagged with their own VLANs:
For example, a switch connects to receiver hosts on MVR receiver VLAN v10 using a trunk port, but
reaches receiver hosts on MVR receiver VLAN v20 on an access port, so configure the MVR translate
option only on VLAN v10:
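A sketch of that statement (the translate option under data-forwarding receiver is assumed here;
verify against your Junos OS release):

[edit protocols]
user@switch# set igmp-snooping vlan v10 data-forwarding receiver translate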
7. (Optional and applicable only to MVR receiver VLANs configured in transparent mode) Install
forwarding entries for an MVR receiver VLAN as well as the MVLAN:
NOTE: This option cannot be configured for an MVR receiver VLAN configured in proxy
mode.
For example:
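A sketch, assuming v20 is an MVR receiver VLAN operating in transparent mode:

[edit protocols]
user@switch# set igmp-snooping vlan v20 data-forwarding receiver install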
Figure 30 on page 259 illustrates a single-tier access layer topology in which MVR is employed with an
MVLAN named mvlan and receiver hosts on MVR receiver VLANs v10 and v20. A sample of the
recommended MVR configuration for this topology follows the figure.
The MVR switch in Figure 30 on page 259 is configured in proxy mode, connects to the upstream
multicast router on interface INTF-1, and connects to receiver hosts on v10 using trunk port INTF-2 and
on v20 using access port INTF-3. The switch is configured to translate MVLAN tags in the multicast
stream into the receiver VLAN tags only for v10 on INTF-2.
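A sketch of the recommended configuration for this single-tier topology (INTF-1 through INTF-3
stand for the actual interface names, such as ge-0/0/1.0; the group subnet and the mode and translate
statement names are assumptions to be verified against your Junos OS release):

[edit protocols]
user@switch# set igmp-snooping vlan mvlan data-forwarding source groups 233.252.0.0/8
user@switch# set igmp-snooping vlan mvlan interface INTF-1 multicast-router-interface
user@switch# set igmp-snooping vlan v10 data-forwarding receiver source-vlans mvlan
user@switch# set igmp-snooping vlan v10 data-forwarding receiver mode proxy
user@switch# set igmp-snooping vlan v10 data-forwarding receiver translate
user@switch# set igmp-snooping vlan v20 data-forwarding receiver source-vlans mvlan
user@switch# set igmp-snooping vlan v20 data-forwarding receiver mode proxy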
Figure 31 on page 261 illustrates a two-tier access layer topology in which MVR is employed with an
MVLAN named mvlan, MVR receiver VLANs v10 and v20, and receiver hosts connected to trunk port
INTF-4 on v10 and access port INTF-5 on v20. A sample of the recommended MVR configuration for
this topology follows the figure.
The upper switch in Figure 31 on page 261 connects to the upstream multicast router on INTF-1, and
the lower switch connects to the upper switch on INTF-3, both configured as trunk ports and multicast
router interfaces in the MVLAN. The upper switch is configured in proxy mode and the lower switch is
configured in transparent mode for all MVR receiver VLANs. The lower switch is configured to translate
MVLAN tags in the multicast stream into the receiver VLAN tags for v10 on INTF-4.
Upper Switch:
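A sketch for the upper switch (proxy mode on both receiver VLANs, no tag translation on the
downstream trunk; interface and statement names are assumptions to verify against your release):

[edit protocols]
user@switch# set igmp-snooping vlan mvlan data-forwarding source groups 233.252.0.0/8
user@switch# set igmp-snooping vlan mvlan interface INTF-1 multicast-router-interface
user@switch# set igmp-snooping vlan v10 data-forwarding receiver source-vlans mvlan
user@switch# set igmp-snooping vlan v10 data-forwarding receiver mode proxy
user@switch# set igmp-snooping vlan v20 data-forwarding receiver source-vlans mvlan
user@switch# set igmp-snooping vlan v20 data-forwarding receiver mode proxy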
Lower Switch:
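A sketch for the lower switch (transparent mode is the default, so no mode statement is needed;
tag translation is enabled only for the trunk-connected receiver VLAN v10; interface and statement
names are assumptions to verify against your release):

[edit protocols]
user@switch# set igmp-snooping vlan mvlan data-forwarding source groups 233.252.0.0/8
user@switch# set igmp-snooping vlan mvlan interface INTF-3 multicast-router-interface
user@switch# set igmp-snooping vlan v10 data-forwarding receiver source-vlans mvlan
user@switch# set igmp-snooping vlan v10 data-forwarding receiver translate
user@switch# set igmp-snooping vlan v20 data-forwarding receiver source-vlans mvlan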
Viewing MVLAN and MVR Receiver VLAN Information on EX Series Switches with
ELS
On EX Series switches with the Enhanced Layer 2 Software (ELS) configuration style that support MVR,
you can use the "show igmp snooping data-forwarding" on page 2159 command to view information
about the MVLANs and MVR receiver VLANs configured on a switch, as follows:
Vlan: v2
Learning-Domain : default
Type : MVR Source Vlan
Group subnet : 225.0.0.0/24
Receiver vlans:
vlan: v1
vlan: v3
Vlan: v1
Learning-Domain : default
Type : MVR Receiver Vlan
Mode : PROXY
Egress translate : FALSE
Install route : FALSE
Source vlans:
vlan: v2
Vlan: v3
Learning-Domain : default
Type : MVR Receiver Vlan
Mode : TRANSPARENT
Egress translate : FALSE
Install route : TRUE
Source vlans:
vlan: v2
MVLANs are listed as Type: MVR Source Vlan with the associated group subnet range and MVR
receiver VLANs. MVR receiver VLANs are listed as Type: MVR Receiver Vlan with the associated source
MVLANs and configured options (proxy or transparent mode, VLAN tag translation, and installation of
receiver VLAN forwarding entries).
In addition, the "show igmp snooping interface" on page 2163 and "show igmp snooping membership" on
page 2171 commands on ELS EX Series switches list MVR receiver VLAN interfaces under both the MVR
receiver VLAN and its MVLAN, and display the output field Data-forwarding receiver: yes when MVR
receiver ports are listed under the MVLAN. This field is not displayed for other interfaces in an MVLAN
listed under the MVLAN that are not in MVR receiver VLANs.
• A VLAN can be configured as an MVLAN or an MVR receiver VLAN, but not both. However, an MVR
receiver VLAN can be associated with more than one MVLAN.
• An MVLAN can be the source for only one multicast group subnet, so multiple MVLANs configured
on a switch must have disjoint multicast group subnets.
• After you configure a VLAN as an MVLAN, that VLAN is no longer available for other uses.
• You cannot enable multicast protocols on VLAN interfaces that are members of MVLANs.
• If you configure an MVLAN in proxy mode, IGMP snooping proxy mode is automatically enabled on
all MVR receiver VLANs of this MVLAN. If a VLAN is an MVR receiver VLAN for multiple MVLANs,
all of the MVLANs must have proxy mode enabled or all must have proxy mode disabled. You can
enable proxy mode only on VLANs that are configured as MVR source VLANs and that are not
configured for Q-in-Q tunneling.
• You cannot configure proxy mode with the install option to also install forwarding entries for
received IGMP packets on an MVR receiver VLAN.
[edit protocols]
user@switch# set igmp-snooping vlan mv0 data-forwarding source groups 225.10.0.0/16
[edit protocols]
user@switch# set igmp-snooping vlan mv0 proxy source-address 10.0.0.1
3. Configure the VLAN named v2 to be an MVR receiver VLAN with mv0 as its source:
[edit protocols]
user@switch# set igmp-snooping vlan v2 data-forwarding receiver source-vlans mv0
[edit protocols]
user@switch# set igmp-snooping vlan v2 data-forwarding receiver install
SEE ALSO
RELATED DOCUMENTATION
IN THIS SECTION
Requirements | 266
Configuration | 270
Multicast VLAN registration (MVR) enables hosts that are not part of a multicast VLAN (MVLAN) to
receive multicast streams from the MVLAN, which enables the MVLAN to be shared across the Layer 2
network and eliminates the need to send duplicate multicast streams to each requesting VLAN in the
network. Hosts remain in their own VLANs for bandwidth and security reasons.
NOTE: This example describes configuring MVR only on EX Series and QFX Series switches that
do not support the Enhanced Layer 2 Software configuration style.
Requirements
This example uses the following hardware and software components:
• Junos OS Release 9.6 or later for EX Series switches or Junos OS Release 12.3 or later for the QFX
Series
• Configured two or more VLANs on the switch. See the task for your platform:
• Example: Setting Up Bridging with Multiple VLANs on Switches for the QFX Series and EX4600
switch
• Connected the switch to a network that can transmit IPTV multicast streams from a video server.
• Connected a host that is capable of receiving IPTV multicast streams to an interface in one of the
VLANs.
IN THIS SECTION
Topology | 267
In a standard Layer 2 network, a multicast stream received on one VLAN is never distributed to
interfaces outside that VLAN. If hosts in multiple VLANs request the same multicast stream, a separate
copy of that multicast stream is distributed to the requesting VLANs.
MVR introduces the concept of a multicast source VLAN (MVLAN), which is created by MVR and
becomes the only VLAN over which multicast traffic flows throughout the Layer 2 network. Multicast
traffic can then be selectively forwarded from interfaces on the MVLAN (source ports) to hosts that are
connected to interfaces (multicast receiver ports) that are not part of the multicast source VLAN. When
you configure an MVLAN, you assign a range of multicast group addresses to it. You then configure
other VLANs to be MVR receiver VLANs, which receive multicast streams from the MVLAN. The MVR
receiver ports comprise all the interfaces that exist on any of the MVR receiver VLANs.
Topology
You can configure MVR to operate in one of two modes: transparent mode (the default mode) or proxy
mode. Both modes enable MVR to forward only one copy of a multicast stream to the Layer 2 network.
In transparent mode, the switch receives one copy of each IPTV multicast stream and then replicates the
stream only to those hosts that want to receive it, while forwarding all other types of multicast traffic
without modification. Figure 32 on page 268 shows how MVR operates in transparent mode.
In proxy mode, the switch acts as a proxy for the IGMP multicast router in the MVLAN for MVR group
memberships established in the MVR receiver VLANs and generates and sends IGMP packets into the
MVLAN as needed. Figure 33 on page 269 shows how MVR operates in proxy mode.
This example shows how to configure MVR in both transparent mode and proxy mode on an EX Series
switch or the QFX Series. The topology includes a video server that is connected to a multicast router,
which in turn forwards the IPTV multicast traffic in the MVLAN to the Layer 2 network.
Figure 32 on page 268 shows the MVR topology in transparent mode. Interfaces P1 and P2 on Switch C
belong to service VLAN s0 and MVLAN mv0. Interface P4 of Switch C also belongs to service VLAN s0.
In the upstream direction of the network, only non-IPTV traffic is being carried in individual customer
VLANs of service VLAN s0. VLAN c0 is an example of this type of customer VLAN. IPTV traffic is being
carried on MVLAN mv0. If any host on any customer VLAN connected to port P4 requests an MVR
stream, Switch C takes the stream from VLAN mv0 and replicates that stream onto port P4 with tag
mv0. IPTV traffic, along with other network traffic, flows from port P4 out to the Digital Subscriber Line
Access Multiplexer (DSLAM) D1.
Figure 33 on page 269 shows the MVR topology in proxy mode. Interfaces P1 and P2 on Switch C
belong to MVLAN mv0 and customer VLAN c0. Interface P4 on Switch C is an access port of customer
VLAN c0. In the upstream direction of the network, only non-IPTV traffic is being carried on customer
VLAN c0. Any IPTV traffic requested by hosts on VLAN c0 is replicated untagged to port P4 based on
streams received in MVLAN mv0. IPTV traffic flows from port P4 out to an IPTV-enabled device in Host
H1. Other traffic, such as data and voice traffic, also flows from port P4 to other network devices in
Host H1.
For information on VLAN tagging, see the topic for your platform:
Configuration
IN THIS SECTION
Procedure | 270
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, and then copy and paste
the commands into the CLI at the [edit protocols igmp-snooping] hierarchy level.
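The command set for this example was lost in this extract; a sketch for the proxy-mode topology,
entered at the [edit protocols igmp-snooping] hierarchy level, might look like the following (the VLAN
names mv0 and c0 come from the figures, while the group range and proxy source address are
assumptions based on the sample statements shown earlier):

set vlan mv0 data-forwarding source groups 225.10.0.0/16
set vlan mv0 proxy source-address 10.0.0.1
set vlan c0 data-forwarding receiver source-vlans mv0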
Step-by-Step Procedure
The following example requires that you navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the CLI User
Guide.
To configure MVR:
Results
From configuration mode, confirm your configuration by entering the show command at the [edit
protocols igmp-snooping] hierarchy level. If the output does not display the intended configuration,
repeat the instructions in this example to correct the configuration.
RELATED DOCUMENTATION
Routing Content to Densely Clustered Receivers with PIM Dense Mode | 294
Routing Content to Larger, Sparser Groups with PIM Sparse Mode | 305
Rapidly Detecting Communication Failures with PIM and the BFD Protocol | 499
CHAPTER 6
Understanding PIM
IN THIS CHAPTER
PIM Overview
IN THIS SECTION
The predominant multicast routing protocol in use on the Internet today is Protocol Independent
Multicast, or PIM. The type of PIM used on the Internet is PIM sparse mode. PIM sparse mode is so
accepted that when the simple term “PIM” is used in an Internet context, some form of sparse mode
operation is assumed.
PIM emerged as an algorithm to overcome the limitations of dense-mode protocols such as the Distance
Vector Multicast Routing Protocol (DVMRP), which was efficient for dense clusters of multicast
receivers, but did not scale well for the larger, sparser groups encountered on the Internet. The Core
Based Trees (CBT) Protocol was intended to support sparse mode as well, but CBT, with its all-powerful
core approach, made placement of the core critical, and large conference-type applications (many-to-
many) resulted in bottlenecks in the core. PIM was designed to avoid the dense-mode scaling issues of
DVMRP and the potential performance issues of CBT at the same time.
Starting in Junos OS Release 15.2, only PIM version 2 is supported. In the CLI, the command for
specifying a version (1 or 2) is removed.
PIMv1 and PIMv2 can coexist on the same routing device and even on the same interface. The main
difference between PIMv1 and PIMv2 is the packet format. PIMv1 messages use Internet Group
Management Protocol (IGMP) packets, whereas PIMv2 has its own IP protocol number (103) and packet
structure. All routing devices connecting to an IP subnet such as a LAN must use the same PIM version.
Some PIM implementations can recognize PIMv1 packets and automatically switch the routing device
interface to PIMv1. Because the difference between PIMv1 and PIMv2 involves the message format, but
not the meaning of the message or how the routing device processes the PIM message, a routing device
can easily mix PIMv1 and PIMv2 interfaces.
PIM is used for efficient routing to multicast groups that might span wide-area and interdomain
internetworks. It is called “protocol independent” because it does not depend on a particular unicast
routing protocol. Junos OS supports bidirectional mode, sparse mode, dense mode, and sparse-dense
mode.
NOTE: ACX Series routers support only sparse mode. Dense mode on ACX Series routers is supported
only for control multicast groups used for auto-discovery of the rendezvous point (auto-RP).
PIM operates in several modes: bidirectional mode, sparse mode, dense mode, and sparse-dense mode.
In sparse-dense mode, some multicast groups are configured as dense mode (flood-and-prune, [S,G]
state) and others are configured as sparse mode (explicit join to rendezvous point [RP], [*,G] state).
PIM drafts also establish a mode known as PIM source-specific mode, or PIM SSM. In PIM SSM there is
only one specific source for the content of a multicast group within a given domain.
Because the PIM mode you choose determines the PIM configuration properties, you first must decide
whether PIM operates in bidirectional, sparse, dense, or sparse-dense mode in your network. Each mode
has distinct operating advantages in different network environments.
• In sparse mode, routing devices must join and leave multicast groups explicitly. Upstream routing
devices do not forward multicast traffic to a downstream routing device unless the downstream
routing device has sent an explicit request (by means of a join message) to the rendezvous point (RP)
routing device to receive this traffic. The RP serves as the root of the shared multicast delivery tree
and is responsible for forwarding multicast data from different sources to the receivers.
Sparse mode is well suited to the Internet, where frequent interdomain join messages and prune
messages are common.
Starting in Junos OS Release 19.2R1, on SRX300, SRX320, SRX340, SRX345, SRX550, SRX1500, and
vSRX 2.0 and vSRX 3.0 (with 2 vCPUs) Series devices, Protocol Independent Multicast (PIM) using
point-to-multipoint (P2MP) mode supports AutoVPN and Auto Discovery VPN in which a new p2mp
interface type is introduced for PIM. The p2mp interface tracks all PIM joins per neighbor to ensure
that multicast forwarding or replication happens only to those neighbors that are in the joined state. In
addition, PIM in point-to-multipoint mode supports chassis cluster mode.
NOTE: On all EX Series switches (except EX4300 and EX9200), QFX5100 switches, and
OCX Series switches, the rate limit is set to 1 pps per (S,G) entry to avoid overwhelming the
rendezvous point (RP) and first-hop router (FHR) with PIM sparse mode (PIM-SM) register
messages, which can cause high CPU usage. This rate limit improves scaling and convergence
times by preventing duplicate packets from being trapped and tunneled to the RP in software.
(Platform support depends on the Junos OS release in your installation.)
• Bidirectional PIM is similar to sparse mode, and is especially suited to applications that must scale to
support a large number of dispersed sources and receivers. In bidirectional PIM, routing devices build
shared bidirectional trees and do not switch to a source-based tree. Bidirectional PIM scales well
because it needs no source-specific (S,G) state. Instead, it builds only group-specific (*,G) state.
• Unlike sparse mode and bidirectional mode, in which data is forwarded only to routing devices
sending an explicit PIM join request, dense mode implements a flood-and-prune mechanism, similar
to the Distance Vector Multicast Routing Protocol (DVMRP). In dense mode, a routing device
receives the multicast data on the incoming interface, then forwards the traffic to the outgoing
interface list. Flooding occurs periodically and is used to refresh state information, such as the source
IP address and multicast group pair. If the routing device has no interested receivers for the data, and
the outgoing interface list becomes empty, the routing device sends a PIM prune message upstream.
Dense mode works best in networks where few or no prunes occur. In such instances, dense mode is
actually more efficient than sparse mode.
• Sparse-dense mode, as the name implies, allows the interface to operate on a per-group basis in
either sparse or dense mode. A group specified as “dense” is not mapped to an RP. Instead, data
packets destined for that group are forwarded by means of PIM dense mode rules. A group specified
as “sparse” is mapped to an RP, and data packets are forwarded by means of PIM sparse-mode rules.
Sparse-dense mode is useful in networks implementing auto-RP for PIM sparse mode.
NOTE: On SRX Series devices, PIM does not support upstream and downstream interfaces
across different virtual routers in flow mode.
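For example, sparse-dense mode with auto-RP is commonly configured along the following lines,
where the two dense groups carry the auto-RP announce and discovery messages (a sketch; the
group addresses and statement names should be verified against your Junos OS release):

[edit protocols]
user@host# set pim dense-groups 224.0.1.39/32
user@host# set pim dense-groups 224.0.1.40/32
user@host# set pim interface all mode sparse-dense
user@host# set pim rp auto-rp discovery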
PIM dense mode requires only a multicast source and series of multicast-enabled routing devices
running PIM dense mode to allow receivers to obtain multicast content. Dense mode makes sure that all
multicast traffic gets everywhere by periodically flooding the network with multicast traffic, and relies
on prune messages to make sure that subnets where all receivers are uninterested in that particular
multicast group stop receiving packets.
PIM sparse mode is more complicated and requires the establishment of special routing devices called
rendezvous points (RPs) in the network core. These routing devices are where upstream join messages
from interested receivers meet downstream traffic from the source of the multicast group content. A
network can have many RPs, but PIM sparse mode allows only one RP to be active for any multicast
group.
If there is only one RP in a routing domain, the RP and adjacent links might become congested and form
a single point of failure for all multicast traffic. Thus, multiple RPs are the rule, but the issue then
becomes how other multicast routing devices find the RP that is the source of the multicast group the
receiver is trying to join. This RP-to-group mapping is controlled by a special bootstrap router (BSR)
running the PIM BSR mechanism. There can be more than one bootstrap router as well, again to avoid a
single point of failure.
The bootstrap router does not have to be an RP itself, although this is a common implementation. The
bootstrap router's main function is to manage the collection of RPs and allow interested receivers to find
the source of their group's multicast traffic. PIM bootstrap messages are sourced from the loopback
address, which is always up. The loopback address must be routable. If it is not routable, then the
bootstrap router is unable to send bootstrap messages to update the RP domain members. The show
pim bootstrap command displays only those bootstrap routers that have routable loopback addresses.
PIM SSM can be seen as a subset, or special case, of PIM sparse mode and requires no specialized
equipment other than that used for PIM sparse mode (and IGMP version 3).
Bidirectional PIM RPs, unlike RPs for PIM sparse mode, do not need to perform PIM Register tunneling
or other specific protocol action. Bidirectional PIM RPs implement no specific functionality. RP
addresses are simply a location in the network to rendezvous toward. In fact, for bidirectional PIM, RP
addresses need not be loopback interface addresses or even be addresses configured on any routing
device, as long as they are covered by a subnet that is connected to a bidirectional PIM-capable routing
device and advertised to the network.
Release Description
19.2R1 Starting in Junos OS Release 19.2R1, on SRX300, SRX320, SRX340, SRX345, SRX550, SRX1500, and
vSRX 2.0 and vSRX 3.0 (with 2 vCPUs) Series devices, Protocol Independent Multicast (PIM) using point-
to-multipoint (P2MP) mode supports AutoVPN and Auto Discovery VPN in which a new p2mp interface
type is introduced for PIM.
15.2 Starting in Junos OS Release 15.2, only PIM version 2 is supported. In the CLI, the command for
specifying a version (1 or 2) is removed.
RELATED DOCUMENTATION
You can configure several Protocol Independent Multicast (PIM) features on an interface regardless of its
PIM mode (bidirectional, sparse, dense, or sparse-dense mode).
NOTE: ACX Series routers support only sparse mode. Dense mode on ACX Series routers is supported
only for control multicast groups used for auto-discovery of the rendezvous point (auto-RP).
If you configure PIM on an aggregated (ae- or as-) interface, each of the interfaces in the aggregate is
included in the multicast output interface list and carries the single stream of replicated packets in a
load-sharing fashion. The multicast aggregate interface is “expanded” into its constituent interfaces in
the next-hop database.
RELATED DOCUMENTATION
CHAPTER 7
IN THIS CHAPTER
PIM instances are supported only for VRF instance types. You can configure multiple instances of PIM to
support multicast over VPNs.
routing-instances {
    routing-instance-name {
        interface interface-name;
        instance-type vrf;
        protocols {
            pim {
                ... pim-configuration ...
            }
        }
    }
}
RELATED DOCUMENTATION
Starting in Junos OS Release 15.2, it is no longer necessary to configure the PIM version. Support for
PIM version 1 has been removed; the remaining (and default) version is PIM version 2.
PIM version 2 is the default for both rendezvous point (RP) mode (at the [edit protocols pim rp static
address address] hierarchy level) and for interface mode (at the [edit protocols pim interface interface-
name] hierarchy level).
15.2 Starting in Junos OS Release 15.2, it is no longer necessary to configure the PIM version.
Because of the distributed nature of QFabric systems, the default configuration does not allow the
maximum number of supported Layer 3 multicast flows to be created. To allow a QFabric system to
create the maximum number of supported flows, configure the following statement:
After configuring this statement, you must reboot the QFabric Director group to make the change take
effect.
Routing devices send hello messages at a fixed interval on all PIM-enabled interfaces. By using hello
messages, routing devices advertise their existence as PIM routing devices on the subnet. With all PIM-
enabled routing devices advertised, a single designated router for the subnet is established.
When a routing device is configured for PIM, it sends a hello message at a 30-second default interval.
The interval range is 0 through 255 seconds. When the interval timer counts down to 0, the routing device sends
another hello message, and the timer is reset. A routing device that receives no response from a
neighbor in 3.5 times the interval value drops the neighbor. In the case of a 30-second interval, the
amount of time a routing device waits for a response is 105 seconds.
If a PIM hello message contains the hold-time option, the neighbor timeout is set to the hold-time sent
in the message. If a PIM hello message does not contain the hold-time option, the neighbor timeout is
set to the default hello hold time.
To modify how often the routing device sends hello messages out of an interface:
1. Configure the hello interval on the PIM interface. You can configure the interface globally or in a
routing instance; this example shows the configuration for the routing instance.
2. Verify the configuration by checking the Hello Option Holdtime field in the output of the show pim
neighbors detail command.
Interface: lo0.0
Address: 10.255.245.91, IPv4, PIM v2, Mode: Sparse
Hello Option Holdtime: 255 seconds
Interface: pd-6/0/0.32768
Address: 0.0.0.0, IPv4, PIM v2, Mode: Sparse
Hello Option Holdtime: 255 seconds
Hello Option DR Priority: 0
Hello Option LAN Prune Delay: delay 500 ms override 2000 ms
Join Suppression supported
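As a sketch of step 1, the hello interval can be set directly on a PIM interface. The interface name ge-0/0/0.0, the routing-instance name VPN-A, and the 45-second value here are hypothetical placeholders, not values from this example's topology:

```
[edit protocols pim]
user@host# set interface ge-0/0/0.0 hello-interval 45

[edit routing-instances VPN-A protocols pim]
user@host# set interface ge-0/0/0.0 hello-interval 45
```

With a 45-second interval, a neighbor would be dropped after 3.5 x 45 = 157.5 seconds without a response.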
RELATED DOCUMENTATION
The ping utility uses ICMP Echo messages to verify connectivity to any device with an IP address.
However, in the case of multicast applications, a single ping sent to a multicast address can degrade the
performance of routers because the stream of packets is replicated multiple times.
You can disable the router's response to ping (ICMP Echo) packets sent to multicast addresses. The
system responds normally to unicast ping packets.
1. Include the no-multicast-echo statement at the [edit system] hierarchy level:
[edit system]
user@host# set no-multicast-echo
2. Verify the configuration by checking the echo drops with broadcast or multicast destination address
field in the output of the show system statistics icmp command.
icmp:
0 drops due to rate limit
0 calls to icmp_error
0 errors not generated because old message was icmp
Output histogram:
echo reply: 21
0 messages with bad code fields
0 messages less than the minimum length
0 messages with bad checksum
0 messages with bad source address
0 messages with bad length
100 echo drops with broadcast or multicast destination address
0 timestamp drops with broadcast or multicast destination address
Input histogram:
echo: 21
21 message responses generated
RELATED DOCUMENTATION
Tracing operations record detailed messages about the operation of routing protocols, such as the
various types of routing protocol packets sent and received, and routing policy actions. You can specify
which trace operations are logged by including specific tracing flags. The following table describes the
flags that you can include.
Flag Description
join Trace join messages, which are sent to join a branch onto the multicast distribution tree.
prune Trace prune messages, which are sent to prune a branch off the multicast distribution tree.
In the following example, tracing is enabled for all routing protocol packets. Then tracing is narrowed to
focus only on PIM packets of a particular type.
1. (Optional) Configure tracing at the [edit routing-options] hierarchy level to trace all protocol packets.
6. Configure tracing flags. Suppose you are troubleshooting issues with PIM version 1 control packets
that are received on an interface configured for PIM version 2. The following example shows how to
trace messages associated with this problem.
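A minimal sketch of these two steps follows. The trace file names (routing-trace, pim-trace), file sizes, and the choice of the packets flag are illustrative assumptions; they are not the exact statements from the omitted steps of this example:

```
[edit routing-options]
user@host# set traceoptions file routing-trace size 1m files 10
user@host# set traceoptions flag all

[edit protocols pim]
user@host# set traceoptions file pim-trace
user@host# set traceoptions flag packets
```

You can then inspect the trace output with the show log pim-trace command.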
RELATED DOCUMENTATION
The Bidirectional Forwarding Detection (BFD) Protocol is a simple hello mechanism that detects failures
in a network. BFD works with a wide variety of network environments and topologies. A pair of routing
devices exchanges BFD packets. Hello packets are sent at a specified, regular interval. A neighbor failure
is detected when the routing device stops receiving a reply after a specified interval. The BFD failure
detection timers have shorter time limits than the Protocol Independent Multicast (PIM) hello hold time,
so they provide faster detection.
The BFD failure detection timers are adaptive and can be adjusted to be faster or slower. The lower the
BFD failure detection timer value, the faster the failure detection and vice versa. For example, the
timers can adapt to a higher value if the adjacency fails (that is, the timer detects failures more slowly).
Or a neighbor can negotiate a higher value for a timer than the configured value. The timers adapt to a
higher value when a BFD session flap occurs more than three times in a span of 15 seconds. A back-off
algorithm increases the receive (Rx) interval by a factor of two if the local BFD instance is the reason for
the session flap. The transmission (Tx) interval is increased by a factor of two if the remote BFD instance
is the reason for the session flap. You can use the clear bfd adaptation command to return BFD interval timers to their
configured values. The clear bfd adaptation command is hitless, meaning that the command does not
affect traffic flow on the routing device.
You must specify the minimum transmit and minimum receive intervals to enable BFD on PIM.
3. Configure the minimum interval after which the routing device expects to receive a reply from a
neighbor with which it has established a BFD session.
Specifying an interval smaller than 300 ms can cause undesired BFD flapping.
5. Configure the threshold for the adaptation of the BFD session detection time.
When the detection time adapts to a value equal to or greater than the threshold, a single trap and a
single system log message are sent.
6. Configure the number of hello packets not received by a neighbor that causes the originating
interface to be declared down.
8. Specify that BFD sessions should not adapt to changing network conditions.
We recommend that you leave BFD adaptation enabled unless your network specifically requires fixed
BFD intervals.
9. Verify the configuration by checking the output of the show bfd session command.
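Putting the required statements together, a BFD-for-PIM configuration might be sketched as follows. The interface name ge-0/0/0.0 and the 300-ms and multiplier-of-3 values are assumptions chosen to respect the 300-ms guidance above, not values mandated by this procedure:

```
[edit protocols pim interface ge-0/0/0.0 family inet]
user@host# set bfd-liveness-detection transmit-interval minimum-interval 300
user@host# set bfd-liveness-detection minimum-receive-interval 300
user@host# set bfd-liveness-detection multiplier 3
```

With these values, a neighbor failure is declared after three missed packets, roughly 900 ms.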
RELATED DOCUMENTATION
IN THIS SECTION
Beginning with Junos OS Release 9.6, you can configure authentication for Bidirectional Forwarding
Detection (BFD) sessions running over Protocol Independent Multicast (PIM). Routing instances are also
supported.
The following sections provide instructions for configuring and viewing BFD authentication on PIM:
NOTE: Nonstop active routing (NSR) is not supported with the meticulous-keyed-md5 and
meticulous-keyed-sha-1 authentication algorithms. BFD sessions using these algorithms
might go down after a switchover.
2. Specify the keychain to be used to associate BFD sessions on the specified PIM route or routing
instance with the unique security authentication keychain attributes.
The keychain you specify must match the keychain name configured at the [edit security
authentication key-chains] hierarchy level.
NOTE: The algorithm and keychain must be configured on both ends of the BFD session, and
they must match. Any mismatch in configuration prevents the BFD session from being
created.
• At least one key, a unique integer between 0 and 63. Creating multiple keys allows multiple clients
to use the BFD session.
• The time at which the authentication key becomes active, in the format yyyy-mm-dd.hh:mm:ss.
[edit security]
user@host# set authentication-key-chains key-chain bfd-pim key 53 secret $ABC123$/ start-time
2009-06-14.10:00:00
4. (Optional) Specify loose authentication checking if you are transitioning from nonauthenticated
sessions to authenticated sessions.
5. (Optional) View your configuration by using the show bfd session detail or show bfd session
extensive command.
6. Repeat these steps to configure the other end of the BFD session.
The following example shows BFD authentication configured for the ge-0/1/5 interface. It specifies the
keyed SHA-1 authentication algorithm and a keychain name of bfd-pim. The authentication keychain is
configured with two keys. Key 1 contains the secret data “$ABC123/” and a start time of June 1, 2009,
at 9:46:02 AM PST. Key 2 contains the secret data “$ABC123/” and a start time of June 1, 2009, at
3:29:20 PM PST.
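Following the example described above (interface ge-0/1/5, keychain name bfd-pim, keyed SHA-1 algorithm), the PIM side of the BFD authentication configuration might be sketched as follows. The unit number .0 is an assumption:

```
[edit protocols pim interface ge-0/1/5.0 family inet]
user@host# set bfd-liveness-detection authentication key-chain bfd-pim
user@host# set bfd-liveness-detection authentication algorithm keyed-sha-1
```

Remember that the same keychain name and algorithm must be configured on the neighboring device for the session to come up.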
If you commit these updates to your configuration, you see output similar to the following example. In
the output for the show bfd session detail command, Authenticate is displayed to indicate that BFD
authentication is configured. For more information about the configuration, use the show bfd session
extensive command. The output for this command provides the keychain name, the authentication
algorithm and mode for each client in the session, and the overall BFD authentication configuration
status, keychain name, and authentication algorithm and mode.
Detect Transmit
Address State Interface Time Interval Multiplier
192.0.2.2 Up ge-0/1/5.0 0.900 0.300 3
Client PIM, TX interval 0.300, RX interval 0.300, Authenticate
Session up time 3d 00:34
Local diagnostic None, remote diagnostic NbrSignal
Remote state Up, version 1
Replicated
Release Description
9.6 Beginning with Junos OS Release 9.6, you can configure authentication for Bidirectional Forwarding
Detection (BFD) sessions running over Protocol Independent Multicast (PIM). Routing instances are also
supported.
RELATED DOCUMENTATION
CHAPTER 8
IN THIS CHAPTER
PIM dense mode is less sophisticated than PIM sparse mode. PIM dense mode is useful for multicast
LAN applications, the main environment for all dense mode protocols.
PIM dense mode implements the same flood-and-prune mechanism that DVMRP and other dense mode
routing protocols employ. The main difference between DVMRP and PIM dense mode is that PIM dense
mode introduces the concept of protocol independence. PIM dense mode can use the routing table
populated by any underlying unicast routing protocol to perform reverse-path-forwarding (RPF) checks.
Internet service providers (ISPs) typically appreciate the ability to use any underlying unicast routing
protocol with PIM dense mode because they do not need to introduce and manage a separate routing
protocol just for RPF checks. While unicast routing protocols extended as multiprotocol BGP (MBGP)
and Multitopology Routing in IS-IS (M-IS-IS) were later employed to build special tables to perform RPF
checks, PIM dense mode does not require them.
PIM dense mode can use the unicast routing table populated by OSPF, IS-IS, BGP, and so on, or PIM
dense mode can be configured to use a special multicast RPF table populated by MBGP or M-IS-IS when
performing RPF checks.
Unlike sparse mode, in which data is forwarded only to routing devices sending an explicit request,
dense mode implements a flood-and-prune mechanism, similar to DVMRP. In PIM dense mode, there is
no RP. A routing device receives the multicast data on the interface closest to the source, then forwards
the traffic to all other interfaces (see Figure 34 on page 295).
Figure 34: Multicast Traffic Flooded from the Source Using PIM Dense Mode
Flooding occurs periodically. It is used to refresh state information, such as the source IP address and
multicast group pair. If the routing device has no interested receivers for the data, and the OIL becomes
empty, the routing device sends a prune message upstream to stop delivery of multicast traffic (see
Figure 35 on page 296).
Figure 35: Prune Messages Sent Back to the Source to Stop Unwanted Multicast Traffic
Sparse-dense mode, as the name implies, allows the interface to operate on a per-group basis in either
sparse or dense mode. A group specified as dense is not mapped to an RP. Instead, data packets
destined for that group are forwarded by means of PIM dense-mode rules. A group specified as sparse is
mapped to an RP, and data packets are forwarded by means of PIM sparse-mode rules.
For information about PIM sparse-mode and PIM dense-mode rules, see "Understanding PIM Sparse
Mode" on page 305 and "Understanding PIM Dense Mode" on page 294.
RELATED DOCUMENTATION
It is possible to mix PIM dense mode, PIM sparse mode, and PIM source-specific multicast (SSM) on the
same network, the same routing device, and even the same interface. This is because modes are
effectively tied to multicast groups, an IP multicast group address must be unique for a particular
group's traffic, and scoping limits enforce the division between potential or actual overlaps.
NOTE: PIM sparse mode was capable of forming shortest-path trees (SPTs) already. Changes to
PIM sparse mode to support PIM SSM mainly involved defining behavior in the SSM address
range, because shared-tree behavior is prohibited for groups in the SSM address range.
A multicast routing device employing sparse-dense mode is a good example of mixing PIM modes on the
same network or routing device or interface. Dense modes are easy to support because of the flooding,
but scaling issues make dense modes inappropriate for Internet use beyond very restricted uses.
IN THIS SECTION
By default, PIM is disabled. When you enable PIM, it operates in sparse mode by default.
You can configure PIM dense mode globally or for a routing instance. This example shows how to
configure the routing instance and how to specify that PIM dense mode use inet.2 as its RPF routing
table instead of inet.0.
1. (Optional) Create an IPv4 routing table group so that interface routes are installed into two routing
tables, inet.0 and inet.2.
2. (Optional) Associate the routing table group with a PIM routing instance.
3. Configure the PIM interface. If you do not specify any interfaces, PIM is enabled on all router
interfaces. Generally, you specify interface names only if you are disabling PIM on certain interfaces.
NOTE: You cannot configure both PIM and Distance Vector Multicast Routing Protocol
(DVMRP) in forwarding mode on the same interface. You can configure PIM on the same
interface only if you configured DVMRP in unicast-routing mode.
4. Monitor the operation of PIM dense mode by running the show pim interfaces, show pim join, show
pim neighbors, and show pim statistics commands.
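The first three steps might be sketched as follows. The rib-group name mcast-rib and the routing-instance name VPN-A are hypothetical, and this is only one way to arrange the statements:

```
[edit routing-options]
user@host# set rib-groups mcast-rib import-rib [ inet.0 inet.2 ]
user@host# set interface-routes rib-group inet mcast-rib

[edit routing-instances VPN-A protocols pim]
user@host# set rib-group inet mcast-rib
user@host# set interface all mode dense
```

The rib-group statement under PIM is what directs the RPF checks to inet.2 instead of inet.0.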
SEE ALSO
RELATED DOCUMENTATION
IN THIS SECTION
For information about PIM sparse-mode and PIM dense-mode rules, see "Understanding PIM Sparse
Mode" on page 305 and "Understanding PIM Dense Mode" on page 294.
SEE ALSO
By default, PIM is disabled. When you enable PIM, it operates in sparse mode by default.
You can configure PIM sparse-dense mode globally or for a routing instance. This example shows how to
configure PIM sparse-dense mode globally on all interfaces, specifying that the groups 224.0.1.39 and
224.0.1.40 are using dense mode.
[edit protocols pim]
user@host# set dense-groups 224.0.1.39
user@host# set dense-groups 224.0.1.40
2. Configure all interfaces on the routing device to use sparse-dense mode. When configuring all
interfaces, exclude the fxp0.0 management interface by adding the disable statement for that
interface.
3. Monitor the operation of PIM sparse-dense mode by running the show pim interfaces, show pim join,
show pim neighbors, and show pim statistics commands.
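Step 2 might be sketched like this, enabling sparse-dense mode on all interfaces while excluding the fxp0.0 management interface as described:

```
[edit protocols pim]
user@host# set interface all mode sparse-dense
user@host# set interface fxp0.0 disable
```

Combined with the dense-groups statements above, only 224.0.1.39 and 224.0.1.40 are handled with dense-mode rules; all other groups use sparse mode.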
SEE ALSO
RELATED DOCUMENTATION
CHAPTER 9
IN THIS CHAPTER
IN THIS SECTION
A Protocol Independent Multicast (PIM) sparse-mode domain uses reverse-path forwarding (RPF) to
create a path from a data source to the receiver requesting the data. When a receiver issues an explicit
join request, an RPF check is triggered. A (*,G) PIM join message is sent toward the RP from the
receiver's designated router (DR). (By definition, this message is actually called a join/prune message, but
for clarity in this description, it is called either join or prune, depending on its context.) The join message
is multicast hop by hop upstream to the ALL-PIM-ROUTERS group (224.0.0.13) by means of each
router’s RPF interface until it reaches the RP. The RP router receives the (*,G) PIM join message and
adds the interface on which it was received to the outgoing interface list (OIL) of the rendezvous-point
tree (RPT) forwarding state entry. This builds the RPT connecting the receiver with the RP. The RPT
remains in effect, even if no active sources generate traffic.
NOTE: State—the (*,G) or (S,G) entries—is the information used for forwarding unicast or
multicast packets. S is the source IP address, G is the multicast group address, and * represents
any source sending to group G. Routers keep track of the multicast forwarding state for the
incoming and outgoing interfaces for each group.
When a source becomes active, the source DR encapsulates multicast data packets into a PIM register
message and sends them by means of unicast to the RP router.
If the RP router has interested receivers in the PIM sparse-mode domain, it sends a PIM join message
toward the source to build a shortest-path tree (SPT) back to the source. The source sends multicast
packets out on the LAN, and the source DR encapsulates the packets in a PIM register message and
forwards the message toward the RP router by means of unicast. The RP router receives PIM register
messages back from the source, and thus adds a new source to the distribution tree, keeping track of
sources in a PIM table. Once an RP router receives packets natively (with S,G), it sends a register stop
message to stop receiving the register messages by means of unicast.
In actual application, many receivers with multiple SPTs are involved in a multicast traffic flow. To
illustrate the process, we track the multicast traffic from the RP router to one receiver. In such a case,
the RP router begins sending multicast packets down the RPT toward the receiver’s DR for delivery to
the interested receivers. When the receiver’s DR receives the first packet from the RPT, the DR sends a
PIM join message toward the source DR to start building an SPT back to the source. When the source
DR receives the PIM join message from the receiver’s DR, it starts sending traffic down all SPTs. When
the first multicast packet is received by the receiver’s DR, the receiver’s DR sends a PIM prune message
to the RP router to stop duplicate packets from being sent through the RPT. In turn, the RP router stops
sending multicast packets to the receiver’s DR, and sends a PIM prune message for this source over the
RPT toward the source DR to halt multicast packet delivery to the RP router from that particular source.
If the RP router receives a PIM register message from an active source but has no interested receivers in
the PIM sparse-mode domain, it still adds the active source into the PIM table. However, after adding
the active source into the PIM table, the RP router sends a register stop message. The RP router
discovers the active source’s existence and no longer needs to receive advertisement of the source
(which utilizes resources).
NOTE: In IPv6 PIM sparse mode, PIM join messages that exceed the configured MTU are
fragmented. To avoid fragmentation of PIM join messages, the interface MTU, rather than the
path MTU, is applied to the multicast traffic.
• Routers with downstream receivers join a PIM sparse-mode tree through an explicit join message.
• PIM sparse-mode RPs are the routers where receivers meet sources.
• Senders announce their existence to one or more RPs, and receivers query RPs to find multicast
sessions.
• Once receivers get content from sources through the RP, the last-hop router (the router closest to
the receiver) can optionally remove the RP from the shared distribution tree (*,G) if the new source-
based tree (S,G) is shorter. Receivers can then get content directly from the source.
The transitional aspect of PIM sparse mode from shared to source-based tree is one of the major
features of PIM, because it prevents overloading the RP or surrounding core links.
There are related issues regarding source, RPs, and receivers when sparse mode multicast is used:
• Receivers initially need to know only one RP (they later learn about others).
• Receivers that never transition to a source-based tree are effectively running Core Based Trees (CBT).
PIM sparse mode has standard features for all of these issues.
Rendezvous Point
The RP router serves as the information exchange point for the other routers. All routers in a PIM
domain must provide mapping to an RP router. It is the only router that needs to know the active
sources for a domain—the other routers just need to know how to reach the RP. In this way, the RP
matches receivers with sources.
The RP router is downstream from the source and forms one end of the shortest-path tree. As shown in
Figure 38 on page 308, the RP router is upstream from the receiver and thus forms one end of the
rendezvous-point tree.
The benefit of using the RP as the information exchange point is that it reduces the amount of state in
non-RP routers. No network flooding is required to provide non-RP routers information about active
sources.
RP Mapping Options
• Static configuration
• Anycast RP
• Auto-RP
• Bootstrap router
We recommend a static RP mapping with anycast RP instead of a bootstrap router (BSR) and auto-RP
configuration, because static mapping provides all the benefits of a bootstrap router and auto-RP
without the complexity of the full BSR and auto-RP mechanisms.
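For the static option, the mapping might be sketched as follows. The RP address 203.0.113.1 is a hypothetical placeholder; the same address must be configured consistently on all routers in the domain:

```
[edit protocols pim]
user@host# set rp static address 203.0.113.1
```

On the RP router itself, the address would instead be configured under the rp local hierarchy.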
RELATED DOCUMENTATION
IN THIS SECTION
Example: Configuring Multicast for Virtual Routers with IPv6 Interfaces | 334
IN THIS SECTION
A Protocol Independent Multicast (PIM) sparse-mode domain uses reverse-path forwarding (RPF) to
create a path from a data source to the receiver requesting the data. When a receiver issues an explicit
join request, an RPF check is triggered. A (*,G) PIM join message is sent toward the RP from the
receiver's designated router (DR). (By definition, this message is actually called a join/prune message, but
for clarity in this description, it is called either join or prune, depending on its context.) The join message
is multicast hop by hop upstream to the ALL-PIM-ROUTERS group (224.0.0.13) by means of each
router’s RPF interface until it reaches the RP. The RP router receives the (*,G) PIM join message and
adds the interface on which it was received to the outgoing interface list (OIL) of the rendezvous-point
tree (RPT) forwarding state entry. This builds the RPT connecting the receiver with the RP. The RPT
remains in effect, even if no active sources generate traffic.
310
NOTE: State—the (*,G) or (S,G) entries—is the information used for forwarding unicast or
multicast packets. S is the source IP address, G is the multicast group address, and * represents
any source sending to group G. Routers keep track of the multicast forwarding state for the
incoming and outgoing interfaces for each group.
When a source becomes active, the source DR encapsulates multicast data packets into a PIM register
message and sends them by means of unicast to the RP router.
If the RP router has interested receivers in the PIM sparse-mode domain, it sends a PIM join message
toward the source to build a shortest-path tree (SPT) back to the source. The source sends multicast
packets out on the LAN, and the source DR encapsulates the packets in a PIM register message and
forwards the message toward the RP router by means of unicast. The RP router receives PIM register
messages back from the source, and thus adds a new source to the distribution tree, keeping track of
sources in a PIM table. Once an RP router receives packets natively (with S,G), it sends a register stop
message to stop receiving the register messages by means of unicast.
In actual application, many receivers with multiple SPTs are involved in a multicast traffic flow. To
illustrate the process, we track the multicast traffic from the RP router to one receiver. In such a case,
the RP router begins sending multicast packets down the RPT toward the receiver’s DR for delivery to
the interested receivers. When the receiver’s DR receives the first packet from the RPT, the DR sends a
PIM join message toward the source DR to start building an SPT back to the source. When the source
DR receives the PIM join message from the receiver’s DR, it starts sending traffic down all SPTs. When
the first multicast packet is received by the receiver’s DR, the receiver’s DR sends a PIM prune message
to the RP router to stop duplicate packets from being sent through the RPT. In turn, the RP router stops
sending multicast packets to the receiver’s DR, and sends a PIM prune message for this source over the
RPT toward the source DR to halt multicast packet delivery to the RP router from that particular source.
If the RP router receives a PIM register message from an active source but has no interested receivers in
the PIM sparse-mode domain, it still adds the active source into the PIM table. However, after adding
the active source into the PIM table, the RP router sends a register stop message. The RP router
discovers the active source’s existence and no longer needs to receive advertisement of the source
(which utilizes resources).
NOTE: If the number of PIM join messages exceeds the configured MTU, the messages are
fragmented in IPv6 PIM sparse mode. To avoid the fragmentation of PIM join messages, the
multicast traffic receives the interface MTU instead of the path MTU.
• Routers with downstream receivers join a PIM sparse-mode tree through an explicit join message.
311
• PIM sparse-mode RPs are the routers where receivers meet sources.
• Senders announce their existence to one or more RPs, and receivers query RPs to find multicast
sessions.
• Once receivers get content from sources through the RP, the last-hop router (the router closest to
the receiver) can optionally remove the RP from the shared distribution tree (*,G) if the new source-
based tree (S,G) is shorter. Receivers can then get content directly from the source.
The transitional aspect of PIM sparse mode from shared to source-based tree is one of the major
features of PIM, because it prevents overloading the RP or surrounding core links.
There are related issues regarding sources, RPs, and receivers when sparse mode multicast is used:
• Receivers initially need to know only one RP (they later learn about others).
• Receivers that never transition to a source-based tree are effectively running Core Based Trees (CBT).
PIM sparse mode has standard features for all of these issues.
Rendezvous Point
The RP router serves as the information exchange point for the other routers. All routers in a PIM
domain must provide mapping to an RP router. It is the only router that needs to know the active
sources for a domain—the other routers just need to know how to reach the RP. In this way, the RP
matches receivers with sources.
The RP router is downstream from the source and forms one end of the shortest-path tree. As shown in
Figure 39 on page 312, the RP router is upstream from the receiver and thus forms one end of the
rendezvous-point tree.
The benefit of using the RP as the information exchange point is that it reduces the amount of state in
non-RP routers. No network flooding is required to provide non-RP routers information about active
sources.
RP Mapping Options
• Static configuration
• Anycast RP
• Auto-RP
• Bootstrap router
We recommend static RP mapping with anycast RP over a bootstrap router (BSR) and auto-RP
configuration, because static mapping provides all the benefits of a bootstrap router and auto-RP
without the complexity of the full BSR and auto-RP mechanisms.
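As a sketch of the recommended approach, static RP mapping can be combined with anycast RP. All addresses below are hypothetical examples, and the anycast RP address must be configured on every RP in the domain:

```
# Hypothetical static RP mapping with anycast RP (all addresses are examples).
# The shared anycast RP address (10.1.1.1) is configured on each RP's loopback
# in addition to that router's unique loopback address.
set interfaces lo0 unit 0 family inet address 10.255.245.6/32   # unique router address
set interfaces lo0 unit 0 family inet address 10.1.1.1/32       # shared anycast RP address
set protocols pim rp local address 10.1.1.1                     # on each RP router
set protocols pim rp static address 10.1.1.1                    # on every non-RP router
```

Because each non-RP router points at the shared address, unicast routing delivers register messages and joins to the topologically nearest RP, giving failover without BSR or auto-RP.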
SEE ALSO
• The receiver DR sends PIM join and PIM prune messages from the receiver network toward the RP.
• The source DR sends PIM register messages from the source network to the RP.
Neighboring PIM routers multicast periodic PIM hello messages to each other every 30 seconds (the
default). The PIM hello message usually includes a holdtime value for the neighbor to use, but this is not
a requirement. If the PIM hello message does not include a holdtime value, a default timeout value (in
Junos OS, 105 seconds) is used. On receipt of a PIM hello message, a router stores the IP address and
priority for that neighbor. If the DR priorities match, the router with the highest IP address is selected as
the DR.
If a DR fails, a new one is selected using the same process of comparing IP addresses.
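When the hello messages carry DR priorities, the router with the highest priority wins the election, and a priority can be set per interface. A minimal sketch, with a hypothetical interface name and priority value:

```
# Raise this router's DR priority on a LAN interface (hypothetical values).
# Higher priority wins the DR election; a tie falls back to highest IP address.
set protocols pim interface ge-0/0/0.0 priority 200
```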
NOTE: DR priority is specific to PIM sparse mode. Per RFC 3973, DR priority cannot be explicitly
configured in PIM dense mode (PIM-DM) with IGMPv2; PIM-DM supports DRs only with IGMPv1.
CAUTION: For redundancy, we strongly recommend that each routing device has
multiple Tunnel Services PICs. In the case of MX Series routers, the recommendation is
to configure multiple tunnel-services statements.
We also recommend that the Tunnel PICs be installed (or configured) on different FPCs.
If you have only one Tunnel PIC or if you have multiple Tunnel PICs installed on a single
FPC and then that FPC is removed, the multicast session will not come up. Having
redundant Tunnel PICs on separate FPCs can help ensure that at least one Tunnel PIC is
available and that multicast will continue working.
On MX Series routers, the redundant configuration looks like the following example:
[edit chassis]
user@mx-host# set fpc 1 pic 0 tunnel-services bandwidth 1g
user@mx-host# set fpc 2 pic 0 tunnel-services bandwidth 1g
In PIM sparse mode, the source DR takes the initial multicast packets and encapsulates them in PIM
register messages. The source DR then unicasts the packets to the PIM sparse-mode RP router, where
the PIM register message is de-encapsulated.
When a router is configured as a PIM sparse-mode RP router (by specifying an address using the
address statement at the [edit protocols pim rp local] hierarchy level) and a Tunnel PIC is present on the
router, a PIM register de-encapsulation interface, or pd interface, is automatically created. The pd
interface receives PIM register messages and de-encapsulates them by means of the hardware.
If PIM sparse mode is enabled and a Tunnel Services PIC is present on the router, a PIM register
encapsulation interface (pe interface) is automatically created for each RP address. The pe interface is
used to encapsulate source data packets and send the packets to RP addresses on the PIM DR and the
PIM RP. The pe interface receives PIM register messages and encapsulates the packets by means of the
hardware.
Do not confuse the configurable pe and pd hardware interfaces with the nonconfigurable pime and pimd
software interfaces. Both pairs encapsulate and de-encapsulate multicast packets, and are created
automatically. However, the pe and pd interfaces appear only if a Tunnel Services PIC is present. The
pime and pimd interfaces are not useful in situations requiring the pe and pd interfaces.
If the source DR is the RP, then there is no need for PIM register messages and consequently no need
for a Tunnel Services PIC.
When PIM sparse mode is used with IP version 6 (IPv6), a Tunnel PIC is required on the RP, but not on
the IPv6 PIM DR. The lack of a Tunnel PIC requirement on the IPv6 DR applies only to IPv6 PIM sparse
mode and is not to be confused with IPv4 PIM sparse-mode requirements.
Table 11 on page 314 shows the complete matrix of IPv4 and IPv6 PIM Tunnel PIC requirements.
Table 11: Tunnel PIC Requirements for IPv4 and IPv6 Multicast

IP Version    Tunnel PIC Required on the RP    Tunnel PIC Required on the DR
IPv4          Yes                              Yes
IPv6          Yes                              No
Starting in Junos OS Release 16.1, PIM is disabled by default. When you enable PIM, it operates in
sparse mode by default. You do not need to configure Internet Group Management Protocol (IGMP)
version 2 for a sparse mode configuration. After you enable PIM, by default, IGMP version 2 is also
enabled.
Junos OS uses PIM version 2 for both rendezvous point (RP) mode (at the [edit protocols pim rp static
address address] hierarchy level) and interface mode (at the [edit protocols pim interface interface-
name] hierarchy level).
You can configure PIM sparse mode globally or for a routing instance. This example shows how to
configure PIM sparse mode globally on all interfaces. It also shows how to configure a static RP router
and how to configure the non-RP routers.
2. Configure the RP router interfaces. When configuring all interfaces, exclude the fxp0.0 management
interface by including the disable statement for that interface.
3. Configure the non-RP routers. Include the following configuration on all of the non-RP routers.
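Steps 2 and 3 can be sketched as follows. The RP address (10.255.245.6, an example loopback address used elsewhere in this guide) is hypothetical and must be consistent across the domain:

```
# On the RP router (10.255.245.6 is a hypothetical loopback address):
set protocols pim rp local address 10.255.245.6
set protocols pim interface all mode sparse
set protocols pim interface fxp0.0 disable

# On each non-RP router, point to the same RP address:
set protocols pim rp static address 10.255.245.6
set protocols pim interface all mode sparse
set protocols pim interface fxp0.0 disable
```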
SEE ALSO
For PIM sparse mode, you can configure PIM join load balancing to spread join messages and traffic
across equal-cost upstream paths (interfaces and routing devices) provided by unicast routing toward a
source. PIM join load balancing is only supported for PIM sparse mode configurations.
PIM join load balancing is supported on draft-rosen multicast VPNs (also referred to as dual PIM
multicast VPNs) and multiprotocol BGP-based multicast VPNs (also referred to as next-generation
Layer 3 VPN multicast). When PIM join load balancing is enabled in a draft-rosen Layer 3 VPN scenario,
the load balancing is achieved based on the join counts for the far-end PE routing devices, not for any
intermediate P routing devices.
If an internal BGP (IBGP) multipath forwarding VPN route is available, the Junos OS uses the multipath
forwarding VPN route to send join messages to the remote PE routers to achieve load balancing over
the VPN.
By default, when multiple PIM joins are received for different groups, all joins are sent to the same
upstream gateway chosen by the unicast routing protocol. Even if there are multiple equal-cost paths
available, these alternative paths are not utilized to distribute multicast traffic from the source to the
various groups.
When PIM join load balancing is configured, the PIM joins are distributed equally among all equal-cost
upstream interfaces and neighbors. Every new join triggers the selection of the least-loaded upstream
interface and neighbor. If there are multiple neighbors on the same interface (for example, on a LAN),
join load balancing maintains a value for each of the neighbors and distributes multicast joins (and
downstream traffic) among these as well.
Join counts for interfaces and neighbors are maintained globally, not on a per-source basis. Therefore,
there is no guarantee that joins for a particular source are load-balanced. However, the joins for all
sources and all groups known to the routing device are load-balanced. There is also no way to
administratively give preference to one neighbor over another: all equal-cost paths are treated the same
way.
You can configure message filtering globally or for a routing instance. This example shows the global
configuration.
You configure PIM join load balancing on the non-RP routers in the PIM domain.
1. Determine if there are multiple paths available for a source (for example, an RP) with the output of
the show pim join extensive or show pim source commands.
Group: 224.1.1.1
Source: *
RP: 10.255.245.6
Flags: sparse,rptree,wildcard
Upstream interface: t1-0/2/3.0
Upstream neighbor: 192.168.38.57
Upstream state: Join to RP
Downstream neighbors:
Interface: t1-0/2/1.0
192.168.38.16 State: JOIN Flags: SRW Timeout: 164
Group: 224.2.127.254
Source: *
RP: 10.255.245.6
Flags: sparse,rptree,wildcard
Upstream interface: so-0/3/0.0
Upstream neighbor: 192.168.38.47
Upstream state: Join to RP
Downstream neighbors:
Interface: t1-0/2/3.0
192.168.38.16 State: JOIN Flags: SRW Timeout: 164
Note that for this router, the RP at IP address 10.255.245.6 is the source for two multicast groups:
224.1.1.1 and 224.2.127.254. This router has two equal-cost paths through two different upstream
interfaces (t1-0/2/3.0 and so-0/3/0.0) with two different neighbors (192.168.38.57 and
192.168.38.47). This router is a good candidate for PIM join load balancing.
2. On the non-RP router, configure PIM sparse mode and join load balancing.
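A minimal sketch of this step, assuming PIM is configured globally rather than in a routing instance:

```
# Enable PIM sparse mode and distribute joins across equal-cost upstream paths.
set protocols pim interface all mode sparse
set protocols pim interface fxp0.0 disable
set protocols pim join-load-balance
```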
Note that the two equal-cost paths shown by the show pim interfaces command now have nonzero
join counts. If the counts differ by more than one and were zero (0) when load balancing commenced,
an error occurs (joins before load balancing are not redistributed). The join count also appears in the
show pim neighbors detail output:
Interface: t1-0/2/3.0
Note that the join count is nonzero on the two load-balanced interfaces toward the upstream
neighbors.
PIM join load balancing only takes effect when the feature is configured. Prior joins are not
redistributed to achieve perfect load balancing. In addition, if an interface or neighbor fails, the new
joins are redistributed among remaining active interfaces and neighbors. However, when the
interface or neighbor is restored, prior joins are not redistributed. The clear pim join-distribution
command redistributes the existing flows to new or restored upstream neighbors. Redistributing the
existing flows causes traffic to be disrupted, so we recommend that you perform PIM join
redistribution during a maintenance window.
SEE ALSO
A downstream router periodically sends join messages to refresh the join state on the upstream router. If
the join state is not refreshed before the timeout expires, the join state is removed.
By default, the join state timeout is 210 seconds. You can change this timeout to allow additional time
to receive join messages. Because the messages are join-prune messages, the statement is named
join-prune-timeout.
The join timeout value can be from 210 through 420 seconds.
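For example, to extend the timeout to 300 seconds. The hierarchy placement shown here is an assumption; confirm the join-prune-timeout statement location for your release:

```
# Extend the join state timeout from the default 210 seconds to 300 seconds
# (placement under the interface statement is an assumption).
set protocols pim interface all join-prune-timeout 300
```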
SEE ALSO
join-prune-timeout
IN THIS SECTION
Requirements | 321
Overview | 321
Configuration | 323
Verification | 326
Requirements
• Configure an interior gateway protocol or static routing. See the Junos OS Routing Protocols Library
for Routing Devices.
• Configure PIM Sparse Mode on the interfaces. See Enabling PIM Sparse Mode.
Overview
IN THIS SECTION
Topology | 322
PIM join suppression enables a router on a multiaccess network to defer sending join messages to an
upstream router when it sees identical join messages on the same network. Eventually, only one router
sends these join messages, and the other routers suppress identical messages. Limiting the number of
join messages improves scalability and efficiency by reducing the number of messages sent to the same
router.
• override-interval—Sets the maximum time in milliseconds to delay sending override join messages.
When a router sees a prune message for a join it is currently suppressing, it waits before it sends an
override join message. Waiting helps avoid multiple downstream routers sending override join
messages at the same time. The override interval is a random timer with a value of 0 through the
maximum override value.
• propagation-delay—Sets a value in milliseconds for a prune pending timer, which specifies how long
to wait before executing a prune on an upstream router. During this period, the router waits for any
prune override join messages that might be currently suppressed. The period for the prune pending
timer is the sum of the override-interval value and the value specified for propagation-delay.
When multiple identical join messages are received, a random join suppression timer is activated,
with a range of 66 through 84 seconds. The timer is reset each time join suppression is triggered.
Topology
• Routers R2, R3, R4, and R5 are downstream routers in the multicast LAN.
This example shows the configuration of the downstream devices: Routers R2, R3, R4, and R5.
Configuration
IN THIS SECTION
Procedure | 324
Results | 325
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
[edit]
set protocols pim traceoptions file pim.log
set protocols pim traceoptions file size 5m
set protocols pim traceoptions file world-readable
set protocols pim traceoptions flag join detail
set protocols pim traceoptions flag prune detail
set protocols pim traceoptions flag normal detail
set protocols pim traceoptions flag register detail
set protocols pim rp static address 10.255.112.160
set protocols pim interface all mode sparse
set protocols pim interface all version 2
set protocols pim interface fxp0.0 disable
set protocols pim reset-tracking-bit
set protocols pim propagation-delay 500
set protocols pim override-interval 4000
Procedure
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
To configure PIM join suppression on a non-RP downstream router in the multicast LAN:
[edit]
user@host# edit protocols pim
[edit protocols pim]
user@host# set rp static address 10.255.112.160
[edit protocols pim]
user@host# set interface all mode sparse
[edit protocols pim]
user@host# set interface all version 2
[edit protocols pim]
user@host# set interface fxp0.0 disable
Results
From configuration mode, confirm your configuration by entering the show protocols command. If the
output does not display the intended configuration, repeat the instructions in this example to correct
the configuration.
pim {
rp {
static {
address 10.255.112.160;
}
}
interface all {
mode sparse;
version 2;
}
interface fxp0.0 {
disable;
}
reset-tracking-bit;
propagation-delay 500;
override-interval 4000;
}
Verification
To verify the configuration, run the following commands on the upstream and downstream routers:
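The specific command list is not preserved here; based on the commands referenced earlier in this topic, a likely set is:

```
show pim join extensive     # join state and upstream/downstream neighbors
show pim neighbors detail   # neighbor timers and per-neighbor join counts
show pim interfaces         # per-interface PIM mode, version, and state
```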
SEE ALSO
The tunnel endpoints do not need to be the same platform type. For example, the device on one end of
the tunnel can be a JCS1200 router, while the device on the other end can be a standalone T Series
router. The two routers that are the tunnel endpoints can be in the same autonomous system or in
different autonomous systems.
In the configuration shown in this example, OSPF is configured between the tunnel endpoints. In Figure
41 on page 327, the tunnel endpoints are R0 and R1. The network that contains the multicast source is
connected to R0. The network that contains the multicast receivers is connected to R1. R1 serves as the
statically configured rendezvous point (RP).
[edit interfaces]
user@host# set ge-0/1/1 description "incoming interface"
user@host# set ge-0/1/1 unit 0 family inet address 10.20.0.1/30
[edit interfaces]
user@host# set ge-0/0/7 description "outgoing interface"
user@host# set ge-0/0/7 unit 0 family inet address 10.10.1.1/30
3. On R0, configure unit 0 on the sp- interface. The Junos OS uses unit 0 for service logging and other
communication from the services PIC.
[edit interfaces]
user@host# set sp-0/2/0 unit 0 family inet
4. On R0, configure the logical interfaces that participate in the IPsec services. In this example, unit 1
is the inward-facing interface. Unit 1001 is the interface that faces the remote IPsec site.
[edit interfaces]
user@host# set sp-0/2/0 unit 1 family inet
user@host# set sp-0/2/0 unit 1 service-domain inside
user@host# set sp-0/2/0 unit 1001 family inet
user@host# set sp-0/2/0 unit 1001 service-domain outside
6. On R0, configure PIM sparse mode. This example uses static RP configuration. Because R0 is a non-
RP router, configure the address of the RP router, which is the routable address assigned to the
loopback interface on R1.
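A sketch of this step, using R1's loopback address (10.255.0.156, configured later in this example) as the RP address; the interface all statement is an assumption:

```
# R0 is a non-RP router; point it at the RP (R1's loopback address).
set protocols pim rp static address 10.255.0.156
set protocols pim interface all mode sparse
```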
7. On R0, create a rule for a bidirectional dynamic IKE security association (SA) that references the IKE
policy and the IPsec policy.
8. On R0, configure the IPsec proposal. This example uses the Authentication Header (AH) Protocol.
12. On R0, create a service set that defines IPsec-specific information. The first command associates
the IKE SA rule with IPsec. The second command defines the address of the local end of the IPsec
security tunnel. The last two commands configure the logical interfaces that participate in the IPsec
services. Unit 1 is for the IPsec inward-facing traffic. Unit 1001 is for the IPsec outward-facing
traffic.
[edit interfaces]
user@host# set ge-2/0/1 description "incoming interface"
user@host# set ge-2/0/1 unit 0 family inet address 10.10.1.2/30
[edit interfaces]
user@host# set ge-2/0/0 description "outgoing interface"
user@host# set ge-2/0/0 unit 0 family inet address 10.20.0.5/30
[edit interfaces]
user@host# set lo0 unit 0 family inet address 10.255.0.156
16. On R1, configure unit 0 on the sp- interface. The Junos OS uses unit 0 for service logging and other
communication from the services PIC.
[edit interfaces]
user@host# set sp-2/1/0 unit 0 family inet
17. On R1, configure the logical interfaces that participate in the IPsec services. In this example, unit 1
is the inward-facing interface. Unit 1001 is the interface that faces the remote IPsec site.
[edit interfaces]
user@host# set sp-2/1/0 unit 1 family inet
user@host# set sp-2/1/0 unit 1 service-domain inside
user@host# set sp-2/1/0 unit 1001 family inet
user@host# set sp-2/1/0 unit 1001 service-domain outside
19. On R1, configure PIM sparse mode. R1 is an RP router. When you configure the local RP address,
use the shared address, which is the address of R1’s loopback interface.
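A sketch of this step, using R1's loopback address from earlier in this example; the interface all statement is an assumption:

```
# R1 is the RP; its loopback address serves as the local RP address.
set protocols pim rp local address 10.255.0.156
set protocols pim interface all mode sparse
```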
20. On R1, create a rule for a bidirectional dynamic Internet Key Exchange (IKE) security association
(SA) that references the IKE policy and the IPsec policy.
21. On R1, define the IPsec proposal for the dynamic SA.
25. On R1, create a service set that defines IPsec-specific information. The first command associates
the IKE SA rule with IPsec. The second command defines the address of the local end of the IPsec
security tunnel. The last two commands configure the logical interfaces that participate in the IPsec
services. Unit 1 is for the IPsec inward-facing traffic. Unit 1001 is for the IPsec outward-facing
traffic.
SEE ALSO
IN THIS SECTION
Requirements | 334
Overview | 334
Configuration | 335
Verification | 340
A virtual router is a type of simplified routing instance that has a single routing table. This example
shows how to configure PIM in a virtual router.
Requirements
Before you begin, configure an interior gateway protocol or static routing. See the Junos OS Routing
Protocols Library for Routing Devices.
Overview
IN THIS SECTION
Topology | 335
You can configure PIM for the virtual-router instance type as well as for the vrf instance type. The
virtual-router instance type is similar to the vrf instance type used with Layer 3 VPNs, except that it is
used for non-VPN-related applications.
The virtual-router instance type has no VPN routing and forwarding (VRF) import, VRF export, VRF
target, or route distinguisher requirements. The virtual-router instance type is used for non-Layer 3 VPN
situations.
When PIM is configured under the virtual-router instance type, the VPN configuration is not based on
RFC 2547, BGP/MPLS VPNs, so PIM operation does not comply with the Internet draft draft-rosen-
vpn-mcast-07.txt, Multicast in MPLS/BGP VPNs. In the virtual-router instance type, PIM operates in a
routing instance by itself, forming adjacencies with PIM neighbors over the routing instance interfaces
as the other routing protocols do with neighbors in the routing instance.
1. On R1, configure a virtual router instance with three interfaces (ge-0/0/0.0, ge-0/1/0.0, and
ge-0/1/1.0).
After you configure this example, you should be able to send multicast traffic from R2 through ge-0/0/0
on R1 to the static group and verify that the traffic egresses from ge-0/1/0.0 and ge-0/1/1.0.
NOTE: Do not include the group-address statement for the virtual-router instance type.
Topology
Configuration
IN THIS SECTION
Procedure | 336
Results | 338
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
[edit]
set interfaces ge-0/0/0 unit 0 family inet6 address 2001:4:4:4::1/64
set interfaces ge-0/1/0 unit 0 family inet6 address 2001:24:24:24::1/64
set interfaces ge-0/1/1 unit 0 family inet6 address 2001:7:7:7::1/64
set protocols mld interface ge-0/1/0.0 static group ff0e::10
set protocols mld interface ge-0/1/1.0 static group ff0e::10
set routing-instances mvrf1 instance-type virtual-router
set routing-instances mvrf1 interface ge-0/0/0.0
set routing-instances mvrf1 interface ge-0/1/0.0
set routing-instances mvrf1 interface ge-0/1/1.0
set routing-instances mvrf1 protocols pim rp local family inet6 address 2001:1:1:1::1
set routing-instances mvrf1 protocols pim interface ge-0/0/0.0
set routing-instances mvrf1 protocols pim interface ge-0/1/0.0
set routing-instances mvrf1 protocols pim interface ge-0/1/1.0
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
[edit]
user@host# edit interfaces
[edit interfaces]
user@host# set ge-0/0/0 unit 0 family inet6 address 2001:4:4:4::1/64
[edit interfaces]
user@host# set ge-0/1/0 unit 0 family inet6 address 2001:24:24:24::1/64
[edit interfaces]
user@host# set ge-0/1/1 unit 0 family inet6 address 2001:7:7:7::1/64
[edit]
user@host# edit routing-instances
[edit routing-instances]
user@host# set mvrf1 instance-type virtual-router
[edit routing-instances]
user@host# set mvrf1 interface ge-0/0/0.0
[edit routing-instances]
user@host# set mvrf1 interface ge-0/1/0.0
[edit routing-instances]
user@host# set mvrf1 interface ge-0/1/1.0
[edit routing-instances]
user@host# set mvrf1 protocols pim rp local family inet6 address 2001:1:1:1::1
[edit routing-instances]
user@host# set mvrf1 protocols pim interface ge-0/0/0.0
[edit routing-instances]
user@host# set mvrf1 protocols pim interface ge-0/1/0.0
[edit routing-instances]
user@host# set mvrf1 protocols pim interface ge-0/1/1.0
[edit routing-instances]
user@host# exit
[edit]
user@host# edit protocols mld
[edit protocols mld]
user@host# set interface ge-0/1/0.0 static group ff0e::10
[edit protocols mld]
user@host# set interface ge-0/1/1.0 static group ff0e::10
[edit protocols mld]
user@host# commit
Results
Confirm your configuration by entering the show interfaces, show routing-instances, and show protocols
commands.
Verification
SEE ALSO
Release Description
16.1 Starting in Junos OS Release 16.1, PIM is disabled by default. When you enable PIM, it operates in
sparse mode by default.
RELATED DOCUMENTATION
Configuring Static RP
IN THIS SECTION
Configuring the Static PIM RP Address on the Non-RP Routing Device | 349
Understanding Static RP
Protocol Independent Multicast (PIM) sparse mode is the most common multicast protocol used on the
Internet. PIM sparse mode is the default mode whenever PIM is configured on any interface of the
device. However, because PIM must not be configured on the network management interface, you must
disable it on that interface.
Each any-source multicast (ASM) group has a shared tree through which receivers learn about new
multicast sources and new receivers learn about all multicast sources. The rendezvous point (RP) router
is the root of this shared tree and receives the multicast traffic from the source. To receive multicast
traffic from the groups served by the RP, the device must determine the IP address of the RP for the
source.
You can configure a static rendezvous point (RP) configuration that is similar to static routes. A static
configuration has the benefit of operating in PIM version 1 or version 2. When you configure the static
RP, the RP address that you select for a particular group must be consistent across all routers in a
multicast domain.
Starting in Junos OS Release 15.2, the static configuration uses PIM version 2 by default, which is the
only version supported in that release and beyond.
One common way for the device to locate RPs is by static configuration of the IP address of the RP. A
static configuration is simple and convenient. However, if the statically defined RP router becomes
unreachable, there is no automatic failover to another RP router. To remedy this problem, you can use
anycast RP.
SEE ALSO
You can configure a local RP globally or for a routing instance. This example shows how to configure a
local RP in a routing instance for IPv4 or IPv6.
By default, PIM operates in sparse mode on an interface. If you explicitly configure sparse mode, PIM
uses this setting for all IPv6 multicast groups. However, if you configure sparse-dense mode, PIM
does not accept IPv6 multicast groups as dense groups and operates in sparse mode over them.
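A sketch of a local RP configured in a routing instance for both address families; the routing-instance name and addresses are hypothetical:

```
# Local RP inside a routing instance, for IPv4 and IPv6
# (VPN-A and all addresses are hypothetical examples).
set routing-instances VPN-A protocols pim rp local family inet address 10.255.245.6
set routing-instances VPN-A protocols pim rp local family inet6 address 2001:db8::1
set routing-instances VPN-A protocols pim interface all mode sparse
```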
NOTE: The priority statement is not supported for IPv6, but is included here for informational
purposes. The routing device’s priority value for becoming the RP is included in the bootstrap
messages that the routing device sends. Use a smaller number to increase the likelihood that
the routing device becomes the RP for local multicast groups. Each PIM routing device uses
the priority value and other factors to determine the candidate RPs for a particular group
range. After the set of candidate RPs is distributed, each routing device determines
algorithmically the RP from the candidate RP set using a hash function. By default, the priority
value is set to 1. If this value is set to 0, the bootstrap router can override the group range
being advertised by the candidate RP.
4. Configure the groups for which the routing device is the RP.
By default, a routing device running PIM is eligible to be the RP for all IPv4 or IPv6 groups
(224.0.0.0/4 or FF70::/12 to FFF0::/12). The following example limits the groups for which this
routing device can be the RP.
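A sketch of such a limit; the address and group range are hypothetical:

```
# Limit the groups for which this routing device can be the RP
# (address and group range are hypothetical examples).
set protocols pim rp local address 10.255.245.6
set protocols pim rp local group-ranges 224.1.0.0/16
```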
5. (Optional) Change the hold time. If the bootstrap router does not receive a candidate RP
advertisement from an RP within the hold time, it removes that routing device from its list of
candidate RPs. The default hold time is 150 seconds.
If you exclude this statement from the configuration and you use both static and dynamic RP
mechanisms for different group ranges within the same routing instance, the dynamic RP mapping
takes precedence over the static RP mapping, even if static RP is defined for a specific group range.
7. Monitor the operation of PIM by running the show pim commands. Run show pim ? to display the
supported commands.
SEE ALSO
PIM Overview
Understanding MLD
IN THIS SECTION
Requirements | 345
Overview | 345
Configuration | 345
Verification | 347
This example shows how to configure PIM sparse mode and RP static IP addresses.
Requirements
1. Determine whether the router is directly attached to any multicast sources. Receivers must be able
to locate these sources.
2. Determine whether the router is directly attached to any multicast group receivers. If receivers are
present, IGMP is needed.
3. Determine whether to configure multicast to use sparse, dense, or sparse-dense mode. Each mode
has different configuration considerations.
5. Determine whether to locate the RP with the static configuration, BSR, or auto-RP method.
6. Determine whether to configure multicast to use its own RPF routing table when configuring PIM in
sparse, dense, or sparse-dense mode.
7. Configure the SAP and SDP protocols to listen for multicast session announcements.
8. Configure IGMP.
Overview
In this example, you set the interface value to all and disable the ge-0/0/0 interface. Then you configure
the IP address of the RP as 192.168.14.27.
Configuration
IN THIS SECTION
Procedure | 346
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level and then enter commit from configuration mode.
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
instructions on how to do that, see Using the CLI Editor in Configuration Mode in the Junos OS CLI User
Guide.
1. Configure PIM.
[edit]
user@host# edit protocols pim
4. Configure RP.
[edit]
user@host# edit protocols pim rp
[edit]
user@host# set static address 192.168.14.27
Results
From configuration mode, confirm your configuration by entering the show protocols command. If the
output does not display the intended configuration, repeat the configuration instructions in this example
to correct it.
[edit]
user@host# show protocols
pim {
rp {
static {
address 192.168.14.27;
}
}
interface all;
interface ge-0/0/0.0 {
disable;
}
}
If you are done configuring the device, enter commit from configuration mode.
Verification
IN THIS SECTION
Purpose
Verify that SAP and SDP are configured to listen on the correct group addresses and ports.
Action
Purpose
Action
Purpose
Action
SEE ALSO
You configure a static RP address on the non-RP routing device. This enables the non-RP routing device
to recognize the local statically defined RP. For example, if R0 is a non-RP router and R1 is the local RP
router, you configure R0 with the static RP address of R1. The static IP address is the routable address
assigned to the loopback interface on R1. In the following example, the loopback address of the RP is
2001:db8:85a3::8a2e:370:7334.
Starting in Junos OS Release 15.2, the default PIM version is version 2, and version 1 is not supported.
For Junos OS Release 15.1 and earlier, the default PIM version can be version 1 or version 2, depending
on the mode you are configuring. PIM version 1 is the default for RP mode ([edit pim rp static address
address]). PIM version 2 is the default for interface mode ([edit pim interface interface-name]). An
explicitly configured PIM version will override the default setting.
You can configure a static RP address globally or for a routing instance. This example shows how to
configure a static RP address in a routing instance for IPv6.
1. On a non-RP routing device, configure the routing instance to point to the routable address assigned
to the loopback interface of the RP.
NOTE: Logical systems are also supported. You can configure a static RP address in a logical
system only if the logical system is not directly connected to a source.
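For example, assuming a routing instance named VPN-A (the instance name is illustrative), the static IPv6 RP configuration on the non-RP routing device might look like this, using the RP loopback address given above:

[edit]
user@host# edit routing-instances VPN-A protocols pim rp
[edit routing-instances VPN-A protocols pim rp]
user@host# set static address 2001:db8:85a3::8a2e:370:7334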
The RP that you select for a particular group must be consistent across all routers in a multicast
domain.
4. (Optional) Override dynamic RP for the specified group address range.
If you configure both static RP mapping and dynamic RP mapping (such as auto-RP) in a single
routing instance, allow the static mapping to take precedence for the given static RP group range,
and allow dynamic RP mapping for all other groups.
If you exclude this statement from the configuration and you use both static and dynamic RP
mechanisms for different group ranges within the same routing instance, the dynamic RP mapping
takes precedence over the static RP mapping, even if static RP is defined for a specific group range.
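As a sketch, you can give the static mapping precedence for its group range by adding the override option to the static RP address (shown here with the RP address used earlier in this example):

[edit protocols pim rp]
user@host# set static address 192.168.14.27 override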
5. Monitor the operation of PIM by running the show pim commands. Run show pim ? to display the
supported commands.
SEE ALSO
PIM Overview
Understanding MLD
Release Description
15.2 Starting in Junos OS Release 15.2, the static configuration uses PIM version 2 by default, which is the
only version supported in that release and beyond.
15.2 Starting in Junos OS Release 15.2, the default PIM version is version 2, and version 1 is not supported.
15.1 For Junos OS Release 15.1 and earlier, the default PIM version can be version 1 or version 2, depending
on the mode you are configuring. PIM version 1 is the default for RP mode ([edit pim rp static address
address]). PIM version 2 is the default for interface mode ([edit pim interface interface-name]). An
explicitly configured PIM version will override the default setting.
RELATED DOCUMENTATION
idle, and convergence is slow when the resource fails. In multicast specifically, there might be closer RPs
on the shared tree, so the use of a single RP is suboptimal.
For the purposes of load balancing and redundancy, you can configure anycast RP. You can use anycast
RP within a domain to provide redundancy and RP load sharing. When an RP fails, sources and receivers
are taken to a new RP by means of unicast routing. When you configure anycast RP, you bypass the
restriction of having one active RP per multicast group, and instead deploy multiple RPs for the same
group range. The RP routers share one unicast IP address. Sources from one RP are known to other RPs
that use the Multicast Source Discovery Protocol (MSDP). Sources and receivers use the closest RP, as
determined by the interior gateway protocol (IGP).
Anycast means that multiple RP routers share the same unicast IP address. Anycast addresses are
advertised by the routing protocols. Packets sent to the anycast address are sent to the nearest RP with
this address. Anycast addressing is a generic concept and is used in PIM sparse mode to add load
balancing and service reliability to RPs.
Anycast RP is defined in RFC 3446, Anycast RP Mechanism Using PIM and MSDP, which can be found at https://www.ietf.org/rfc/rfc3446.txt.
SEE ALSO
IN THIS SECTION
Requirements | 353
Overview | 353
Configuration | 353
Verification | 356
This example shows how to configure anycast RP on each RP router in the PIM-SM domain. With this
configuration you can deploy more than one RP for a single group range. This enables load balancing and
redundancy.
Requirements
• Configure an interior gateway protocol or static routing. See the Junos OS Routing Protocols Library
for Routing Devices.
• Configure PIM Sparse Mode on the interfaces. See Enabling PIM Sparse Mode.
Overview
When you configure anycast RP, the RP routers in the PIM-SM domain use a shared address. In this
example, the shared address is 10.1.1.2/32. Anycast RP uses Multicast Source Discovery Protocol
(MSDP) to discover and maintain a consistent view of the active sources. Anycast RP also requires an RP
selection method, such as static, auto-RP, or bootstrap RP. This example uses static RP and shows only
one RP router configuration.
Configuration
IN THIS SECTION
Procedure | 354
Results | 355
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
RP Routers
Non-RP Routers
Procedure
Step-by-Step Procedure
The following example requires that you navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
1. On each RP router in the domain, configure the shared anycast address on the router’s loopback
address.
[edit interfaces]
user@host# set lo0 unit 0 family inet address 10.1.1.2/32
2. On each RP router in the domain, make sure that the router’s regular loopback address is the primary
address for the interface, and set the router ID.
[edit interfaces]
user@host# set lo0 unit 0 family inet address 192.168.132.1/32 primary
[edit routing-options]
user@host# set router-id 192.168.132.1
3. On each RP router in the domain, configure the local RP address, using the shared address.
4. On each RP router in the domain, create MSDP sessions to the other RPs in the domain.
5. On each non-RP router in the domain, configure a static RP address using the shared address.
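As a sketch, Steps 3 through 5 might be entered as follows. The MSDP peer address 192.168.132.2 is an assumed loopback address of another RP router in the domain; substitute the actual addresses in your network.

On each RP router:
[edit protocols pim rp]
user@host# set local address 10.1.1.2
[edit protocols msdp]
user@host# set peer 192.168.132.2 local-address 192.168.132.1

On each non-RP router:
[edit protocols pim rp]
user@host# set static address 10.1.1.2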
user@host# commit
Results
From configuration mode, confirm your configuration by entering the show interfaces, show protocols,
and show routing-options commands. If the output does not display the intended configuration, repeat
the instructions in this example to correct the configuration.
On the RP routers:
Verification
To verify the configuration, run the show pim rps extensive inet command.
SEE ALSO
You can use anycast RP within a domain to provide redundancy and RP load sharing. When an RP stops
operating, sources and receivers are taken to a new RP by means of unicast routing.
You can configure anycast RP to use PIM and MSDP for IPv4, or PIM alone for both IPv4 and IPv6
scenarios. Both are discussed in this section.
We recommend a static RP mapping with anycast RP over a bootstrap router and auto-RP configuration
because it provides all the benefits of a bootstrap router and auto-RP without the complexity of the BSR
and auto-RP mechanisms.
Starting in Junos OS Release 16.1, all systems on a subnet must run the same version of PIM.
The default PIM version can be version 1 or version 2, depending on the mode you are configuring.
PIMv1 is the default for RP mode (at the [edit protocols pim rp static address address] hierarchy level).
However, PIMv2 is the default for interface mode (at the [edit protocols pim interface interface-name]
hierarchy level). Explicitly configured versions override the defaults. This example explicitly configures
PIMv2 on the interfaces.
The following example shows an anycast RP configuration for the RP routers, first with MSDP and then
using PIM alone, and for non-RP routers.
1. For a network using an RP with MSDP, configure the RP using the lo0 loopback interface, which is
always up. Include the address statement and specify the unique and routable router ID and the RP
address at the [edit interfaces lo0 unit 0 family inet] hierarchy level. In this example, the router ID is
198.51.100.254 and the shared RP address is 198.51.100.253. Include the primary statement for the
first address. Including the primary statement selects the router’s primary address from all the
preferred addresses on all interfaces.
interfaces {
lo0 {
description "PIM RP";
unit 0 {
family inet {
address 198.51.100.254/32;
primary;
address 198.51.100.253/32;
}
}
}
}
2. Specify the RP address. Include the address statement at the [edit protocols pim rp local] hierarchy
level (the same address as the secondary lo0 interface).
For all interfaces, include the mode statement to set the mode to sparse and the version statement
to specify PIM version 2 at the [edit protocols pim rp local interface all] hierarchy level. When
configuring all interfaces, exclude the fxp0.0 management interface by including the disable
statement for that interface.
protocols {
pim {
rp {
local {
family inet {
address 198.51.100.253;
}
}
interface all {
mode sparse;
version 2;
}
interface fxp0.0 {
disable;
}
}
}
}
3. Configure MSDP peering. Include the peer statement to configure the address of the MSDP peer at
the [edit protocols msdp] hierarchy level. For MSDP peering, use the unique, primary addresses
instead of the anycast address. To specify the local address for MSDP peering, include the local-
address statement at the [edit protocols msdp peer] hierarchy level.
protocols {
msdp {
peer 198.51.100.250 {
local-address 198.51.100.254;
}
}
}
NOTE: If you need to configure a PIM RP for both IPv4 and IPv6 scenarios, perform Step 4 and Step 5. Otherwise, go to Step 6.
4. Configure an RP using the lo0 loopback interface, which is always up. Include the address statement
to specify the unique and routable router address and the RP address at the [edit interfaces lo0 unit
0 family inet] hierarchy level. In this example, the router ID is 198.51.100.254 and the shared RP
address is 198.51.100.253. Include the primary statement on the first address. Including the primary
statement selects the router’s primary address from all the preferred addresses on all interfaces.
interfaces {
lo0 {
description "PIM RP";
unit 0 {
family inet {
address 198.51.100.254/32 {
primary;
}
address 198.51.100.253/32;
}
}
}
}
5. Include the address statement at the [edit protocols pim rp local] hierarchy level to specify the RP
address (the same address as the secondary lo0 interface).
For all interfaces, include the mode statement to set the mode to sparse, and the version statement
to specify PIM version 2 at the [edit protocols pim rp local interface all] hierarchy level. When
configuring all interfaces, exclude the fxp0.0 management interface by including the disable
statement for that interface.
Include the anycast-pim statement to configure anycast RP without MSDP (for example, if IPv6 is
used for multicasting). The other RP routers that share the same IP address are configured using the
rp-set statement. There is one entry for each RP, and the maximum that can be configured is 15. For
each RP, specify the routable IP address of the router and whether MSDP source active (SA)
messages are forwarded to the RP.
MSDP configuration is not necessary for this type of IPv4 anycast RP configuration.
protocols {
pim {
rp {
local {
family inet {
address 198.51.100.253;
anycast-pim {
rp-set {
address 198.51.100.240;
address 198.51.100.241 forward-msdp-sa;
}
local-address 198.51.100.254; # If not configured, use lo0 primary
}
}
}
}
interface all {
mode sparse;
version 2;
}
interface fxp0.0 {
disable;
}
}
}
6. Configure the non-RP routers. The anycast RP configuration for a non-RP router is the same whether
MSDP is used or not. Specify a static RP by adding the address at the [edit protocols pim rp static]
hierarchy level. Include the version statement at the [edit protocols pim rp static address] hierarchy
level to specify PIM version 2.
protocols {
pim {
rp {
static {
address 198.51.100.253 {
version 2;
}
}
}
}
}
7. Include the mode statement at the [edit protocols pim interface all] hierarchy level to specify sparse mode on all interfaces. Then include the version statement at the same hierarchy level to configure all interfaces for PIM version 2. When configuring all interfaces, exclude the fxp0.0 management interface by including the disable statement for that interface.
protocols {
pim {
interface all {
mode sparse;
version 2;
}
interface fxp0.0 {
disable;
}
}
}
interfaces {
lo0 {
description "PIM RP";
unit 0 {
family inet {
address 198.51.100.254/32 {
primary;
}
address 198.51.100.253/32;
}
}
}
}
Add the address statement at the [edit protocols pim rp local] hierarchy level to specify the RP address
(the same address as the secondary lo0 interface).
For all interfaces, use the mode statement to set the mode to sparse, and include the version statement
to specify PIM version 2 at the [edit protocols pim rp local interface all] hierarchy level. When
configuring all interfaces, exclude the fxp0.0 management interface by adding the disable statement for
that interface.
Use the anycast-pim statement to configure anycast RP without MSDP (for example, if IPv6 is used for
multicasting). The other RP routers that share the same IP address are configured using the rp-set
statement. There is one entry for each RP, and the maximum that can be configured is 15. For each RP,
specify the routable IP address of the router and whether MSDP source active (SA) messages are
forwarded to the RP.
protocols {
pim {
rp {
local {
family inet {
address 198.51.100.253;
anycast-pim {
rp-set {
address 198.51.100.240;
address 198.51.100.241 forward-msdp-sa;
}
local-address 198.51.100.254; # If not configured, use lo0 primary
}
}
}
}
interface all {
mode sparse;
version 2;
}
interface fxp0.0 {
disable;
}
}
}
MSDP configuration is not necessary for this type of IPv4 anycast RP configuration.
SEE ALSO
Release Description
16.1 Starting in Junos OS Release 16.1, all systems on a subnet must run the same version of PIM.
RELATED DOCUMENTATION
IN THIS SECTION
Example: Rejecting PIM Bootstrap Messages at the Boundary of a PIM Domain | 368
SEE ALSO
NOTE: For legacy configuration purposes, there are two sections that describe the configuration
of bootstrap routers: one section for both IPv4 and IPv6, and this section, which is for IPv4 only.
The method described in Configuring PIM Bootstrap Properties for IPv4 or IPv6 is
recommended. A commit error occurs if the same IPv4 bootstrap statements are included in both
the IPv4-only and the IPv4-and-IPv6 sections of the hierarchy. The error message is “duplicate
IPv4 bootstrap configuration.”
To determine which routing device is the RP, all routing devices within a PIM domain collect bootstrap
messages. A PIM domain is a contiguous set of routing devices that implement PIM. All are configured
to operate within a common boundary. The domain's bootstrap router initiates bootstrap messages,
which are sent hop by hop within the domain. The routing devices use bootstrap messages to distribute
RP information dynamically and to elect a bootstrap router when necessary.
You can configure bootstrap properties globally or for a routing instance. This example shows the global
configuration.
with the highest IP address is elected to be the bootstrap router. A simple bootstrap configuration
assigns a bootstrap priority value to a routing device.
2. (Optional) Create import and export policies to control the flow of IPv4 bootstrap messages to and
from the RP, and apply the policies to PIM. Import and export policies are useful when some of the
routing devices in your PIM domain have interfaces that connect to other PIM domains. Configuring
a policy prevents bootstrap messages from crossing domain boundaries. The bootstrap-import
statement prevents messages from being imported into the RP. The bootstrap-export statement
prevents messages from being exported from the RP.
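A minimal IPv4-only bootstrap configuration along these lines might look as follows. The policy names pim-import and pim-export are placeholders for policies that you define under [edit policy-options]:

[edit protocols pim rp]
user@host# set bootstrap-priority 1
user@host# set bootstrap-import pim-import
user@host# set bootstrap-export pim-export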
4. Monitor the operation of PIM bootstrap routing devices by running the show pim bootstrap
command.
SEE ALSO
NOTE: For legacy configuration purposes, there are two sections that describe the configuration
of bootstrap routers: one section for IPv4 only, and this section, which is for both IPv4 and IPv6.
The method described in this section is recommended. A commit error occurs if the same IPv4
bootstrap statements are included in both the IPv4-only and the IPv4-and-IPv6 sections of the
hierarchy. The error message is “duplicate IPv4 bootstrap configuration.”
To determine which routing device is the RP, all routing devices within a PIM domain collect bootstrap
messages. A PIM domain is a contiguous set of routing devices that implement PIM. All devices are
configured to operate within a common boundary. The domain's bootstrap router initiates bootstrap
messages, which are sent hop by hop within the domain. The routing devices use bootstrap messages to
distribute RP information dynamically and to elect a bootstrap router when necessary.
You can configure bootstrap properties globally or for a routing instance. This example shows the global
configuration.
priority field. To disable the bootstrap function in the IPv4 and IPv6 configuration, delete the
bootstrap statement.
2. (Optional) Create import and export policies to control the flow of bootstrap messages to and from
the RP, and apply the policies to PIM. Import and export policies are useful when some of the routing
devices in your PIM domain have interfaces that connect to other PIM domains. Configuring a policy
prevents bootstrap messages from crossing domain boundaries. The import statement prevents
messages from being imported into the RP. The export statement prevents messages from being
exported from the RP.
4. Monitor the operation of PIM bootstrap routing devices by running the show pim bootstrap
command.
SEE ALSO
protocols {
pim {
rp {
bootstrap {
family inet {
priority 1;
import pim-import;
export pim-export;
}
family inet6 {
priority 1;
import pim-import;
export pim-export;
}
}
}
}
}
policy-options {
policy-statement pim-import {
from interface so-0/1/0;
then reject;
}
policy-statement pim-export {
to interface so-0/1/0;
then reject;
}
}
protocols {
pim {
rp {
bootstrap-import no-bsr;
bootstrap-export no-bsr;
}
}
}
policy-options {
policy-statement no-bsr {
then reject;
}
}
RELATED DOCUMENTATION
You can configure a more dynamic way of assigning rendezvous points (RPs) in a multicast network by
means of auto-RP. When you configure auto-RP for a router, the router learns the address of the RP in
the network automatically and has the added advantage of operating in PIM version 1 and version 2.
Although auto-RP is a nonstandard (non-RFC-based) function that typically uses dense mode PIM to
advertise control traffic, it provides an important failover advantage that simple static RP assignment
does not. You can configure multiple routers as RP candidates. If the elected RP fails, one of the other
preconfigured routers takes over the RP functions. This capability is controlled by the auto-RP mapping
agent.
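As a sketch, the auto-RP roles are configured with the auto-rp statement at the [edit protocols pim rp] hierarchy level: the mapping agent uses the mapping option, candidate RPs use announce, and all other routers use discovery to listen for mapping announcements.

On the mapping agent:
[edit protocols pim rp]
user@host# set auto-rp mapping

On candidate RP routers that are not the mapping agent:
[edit protocols pim rp]
user@host# set auto-rp announce

On all other routers:
[edit protocols pim rp]
user@host# set auto-rp discovery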
RELATED DOCUMENTATION
Use the mode statement at the [edit protocols pim interface all] hierarchy level to specify sparse mode on all interfaces. Then add the version statement at the same hierarchy level to configure all interfaces for PIM version 2. When configuring all interfaces, exclude the fxp0.0 management interface by adding the disable statement for that interface.
protocols {
pim {
interface all {
mode sparse;
version 2;
}
interface fxp0.0 {
disable;
}
}
}
Add the address statement at the [edit protocols pim rp local] hierarchy level to specify the RP address
(the same address as the secondary lo0 interface).
For all interfaces, use the mode statement to set the mode to sparse and the version statement to
specify PIM version 2 at the [edit protocols pim rp local interface all] hierarchy level. When configuring
all interfaces, exclude the fxp0.0 management interface by adding the disable statement for that
interface.
protocols {
pim {
rp {
local {
family inet {
address 198.51.100.253;
}
}
interface all {
mode sparse;
version 2;
}
interface fxp0.0 {
disable;
}
}
}
}
To configure MSDP peering, add the peer statement to configure the address of the MSDP peer at the
[edit protocols msdp] hierarchy level. For MSDP peering, use the unique, primary addresses instead of
the anycast address. To specify the local address for MSDP peering, add the local-address statement at
the [edit protocols msdp peer] hierarchy level.
protocols {
msdp {
peer 198.51.100.250 {
local-address 198.51.100.254;
}
}
}
Configuring Embedded RP
resolve this issue without requiring SSM. This feature embeds the RP address in an IPv6 multicast
address.
All IPv6 multicast addresses begin with 8 1-bits (1111 1111) followed by a 4-bit flag field normally set to
0011. The flag field is set to 0111 when embedded RP is used. Then the low-order bits of the normally
reserved field in the IPv6 multicast address carry the 4-bit RP interface identifier (RIID).
When the IPv6 address of the RP is embedded in a unicast-prefix-based any-source multicast (ASM)
address, all of the following conditions must be true:
• The address must be an IPv6 multicast address and have 0111 in the flags field (that is, the address is
part of the prefix FF70::/12).
• The 8-bit prefix length (plen) field must not be all 0. An all 0 plen field implies that SSM is in use.
• The 8-bit prefix length field value must not be greater than 64, which is the length of the network
prefix field in unicast-prefix-based ASM addresses.
The routing platform derives the value of the interdomain RP by copying the prefix length field number
of bits from the 64-bit network prefix field in the received IPv6 multicast address to an empty 128-bit
IPv6 address structure and copying the last bits from the 4-bit RIID. For example, if the prefix length
field bits have the value 32, then the routing platform copies the first 32 bits of the IPv6 multicast
address network prefix field to an all-0 IPv6 address and appends the last four bits determined by the
RIID. See Figure 43 on page 372 for an illustration of this process.
For example, the administrator of IPv6 network 2001:DB8::/32 sets up an RP for the
2001:DB8:BEEF:FEED::/96 subnet. In that case, the received embedded RP IPv6 ASM address has the
form:
FF70:y40:2001:DB8:BEEF:FEED::/96
2001:DB8:BEEF:FEED::y
When configured, the routing platform checks for embedded RP information in every PIM join request
received for IPv6. The use of embedded RP does not change the processing of IPv6 multicast and RPs in
any way, except that the embedded RP address is used if available and selected for use. There is no need
to specify the IPv6 address family for embedded RP configuration because the information can be used
only if IPv6 multicast is properly configured on the routing platform.
The following receive events trigger extraction of an IPv6 embedded RP address on the routing
platform:
• Multicast Listener Discovery (MLD) report for an embedded RP multicast group address
The embedded RP node discovered through these events is added if it does not already exist on the
routing platform. The routing platform chooses the embedded RP as the RP for a multicast group before
choosing an RP learned through BSRs or a statically configured RP. The embedded RP is removed
whenever all PIM join states using this RP are removed or the configuration changes to remove the
embedded RP feature.
concept of an embedded RP to resolve this issue without requiring SSM. Thus, embedded RP enables you to deploy IPv6 with any-source multicast (ASM).
When you configure embedded RP for IPv6, embedded RPs are preferred over RPs discovered in any other way. You configure embedded RP independently of any other IPv6 multicast properties. This feature is applied only when IPv6 multicast is properly configured.
You can configure embedded RP globally or for a routing instance. This example shows the routing
instance configuration.
1. Define which multicast addresses or prefixes can embed RP address information. If messages within
a group range contain embedded RP information and the group range is not configured, the
embedded RP in that group range is ignored. Any valid unicast-prefix-based ASM address can be
used as a group range. The default group range is FF70::/12 to FFF0::/12. Messages with embedded
RP information that do not match any configured group ranges are treated as normal multicast
addresses.
If the derived RP address is not a valid IPv6 unicast address, it is treated as any other multicast group
address and is not used for RP information. Verification fails if the extracted RP address is a local
interface, unless the routing device is configured as an RP and the extracted RP address matches the
configured RP address. Then the local RP determines whether it is configured to act as an RP for the
embedded RP multicast address.
2. Limit the number of embedded RPs created in a specific routing instance. The range is from 1
through 500. The default is 100.
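For example, in a routing instance named VPN-A (an illustrative name), you might configure a group range and an RP limit as follows. The group range shown is only an example of a valid unicast-prefix-based ASM range; substitute a range appropriate for your network.

[edit routing-instances VPN-A protocols pim rp embedded-rp]
user@host# set group-ranges ff7e:340:2001:db8::/96
user@host# set maximum-rps 200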
3. Monitor the operation by running the show pim rps and show pim statistics commands.
SEE ALSO
RELATED DOCUMENTATION
You can configure multicast filtering to control the sending and receiving of multicast control messages.
To prevent unauthorized groups and sources from registering with an RP router, you can define a routing
policy to reject PIM register messages from specific groups and sources and configure the policy on the
designated router or the RP router.
• If you configure the reject policy on an RP router, it rejects incoming PIM register messages from the
specified groups and sources. The RP router also sends a register stop message by means of unicast
to the designated router. On receiving the register stop message, the designated router sends
periodic null register messages for the specified groups and sources to the RP router.
• If you configure the reject policy on a designated router, it stops sending PIM register messages for
the specified groups and sources to the RP router.
NOTE: If you have configured the reject policy on an RP router, we recommend that you
configure the same policy on all the RP routers in your multicast network.
NOTE: If you delete a group and source address from the reject policy configured on an RP
router and commit the configuration, the RP router will register the group and source only when
the designated router sends a null register message.
SEE ALSO
Register messages that are filtered at a DR are not sent to the RP, but the sources are available to local
users. Register messages that are filtered at an RP arrive from source DRs, but are ignored by the router.
You can limit or direct the sources of multicast group traffic by using RP or DR register message filtering, alone or together.
If the action of the register filter policy is to discard the register message, the router needs to send a
register-stop message to the DR. Register-stop messages are throttled to prevent malicious users from
triggering them on purpose to disrupt the routing process.
Multicast group and source information is encapsulated inside unicast IP packets. This feature allows the
router to inspect the multicast group and source information before sending or accepting the PIM
register message.
Incoming register messages to an RP are passed through the configured register message filtering policy
before any further processing. If the register message is rejected, the RP router sends a register-stop
message to the DR. When the DR receives the register-stop message, the DR stops sending register
messages for the filtered groups and sources to the RP. Two fields are used for register message filtering:
• Multicast group address
• Source address
The syntax of the existing policy statements is used to configure the filtering on these two fields. The
route-filter statement is useful for multicast group address filtering, and the source-address-filter
statement is useful for source address filtering. In most cases, the action is to reject the register
messages, but more complex filtering policies are possible.
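As a sketch, a register filtering policy that rejects a particular group and source might be structured as follows, with the policy applied on the RP router using the rp-register-policy statement. The policy name and addresses are placeholders:

[edit policy-options]
user@host# set policy-statement reject-register from route-filter 224.2.2.2/32 exact
user@host# set policy-statement reject-register from source-address-filter 20.20.20.1/32 exact
user@host# set policy-statement reject-register then reject
[edit protocols pim]
user@host# set rp-register-policy reject-register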
Filtering cannot be performed on other header fields, such as DR address, protocol, or port. In some
configurations, an RP might not send register-stop messages when the policy action is to discard the
register messages. This has no effect on the operation of the feature, but the router will continue to
receive register messages.
When anycast RP is configured, register messages can be sent or received by the RP. All the RPs in the
anycast RP set need to be configured with the same RP register message filtering policies. Otherwise, it
might be possible to circumvent the filtering policy.
SEE ALSO
If you did not use multicast scoping to create boundary filters for all customer-facing interfaces, you
might want to use PIM join filters. Multicast scopes prevent the actual multicast data packets from
flowing in or out of an interface. PIM join filters prevent PIM sparse-mode state from being created in
the first place. Since PIM join filters apply only to the PIM sparse-mode state, it might be more beneficial
to use multicast scoping to filter the actual data.
NOTE: When you apply firewall filters, firewall action modifiers, such as log, sample, and count,
work only when you apply the filter on an inbound interface. The modifiers do not work on an
outbound interface.
SEE ALSO
If you configure a PIM neighbor policy after PIM has already established a neighbor adjacency to an
unwanted PIM neighbor, the adjacency remains intact until the neighbor hold time expires. When the
unwanted neighbor sends another hello message to update its adjacency, the router recognizes the
unwanted address and rejects the neighbor.
1. Configure the policy. The neighbor policy must be a properly structured policy statement that uses a
prefix list (or a route filter) containing the neighbor primary address (or any secondary IP addresses) in
a prefix list, and the reject option to reject the unwanted address.
[edit policy-options]
user@host# set prefix-list nbrGroup1 20.20.20.1/32
user@host# set policy-statement nbr-policy from prefix-list nbrGroup1
user@host# set policy-statement nbr-policy then reject
2. Configure the interface globally or in the routing instance. This example shows the configuration for
the routing instance.
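For example, assuming a routing instance named VPN-A and an interface ge-0/0/1.0 (both names are illustrative), you can apply the policy with the neighbor-policy statement:

[edit routing-instances VPN-A protocols pim]
user@host# set interface ge-0/0/1.0 neighbor-policy nbr-policy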
3. Verify the configuration by checking the Hello dropped on neighbor policy field in the output of the
show pim statistics command.
SEE ALSO
When the core of your network is using a mix of IP and MPLS, you might want to filter certain PIM join
and prune messages at the upstream egress interface of the CE routers.
You can filter PIM sparse mode (PIM-SM) join and prune messages at the egress interfaces for IPv4 and
IPv6 in the upstream direction. The messages can be filtered based on the group address, source
address, outgoing interface, PIM neighbor, or a combination of these values. If the filter is removed, the
join is sent after the PIM periodic join timer expires.
To filter PIM sparse mode join and prune messages at the egress interfaces, create a policy rejecting the
group address, source address, outgoing interface, or PIM neighbor, and then apply the policy.
The following example filters PIM join and prune messages for group addresses 224.0.1.2 and 225.1.1.1.
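As a sketch, the reject policy itself might be defined as follows. The policy name block-groups is illustrative; how you then apply the policy to PIM depends on your configuration and Junos OS release.

[edit policy-options]
user@host# set policy-statement block-groups term t1 from route-filter 224.0.1.2/32 exact
user@host# set policy-statement block-groups term t1 from route-filter 225.1.1.1/32 exact
user@host# set policy-statement block-groups term t1 then reject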
4. After the configuration is committed, use the show pim statistics command to verify that outgoing
PIM join and prune messages are being filtered.
Rx Joins/Prunes filtered 0
SEE ALSO
IN THIS SECTION
Requirements | 381
Overview | 382
Configuration | 382
Verification | 384
This example shows how to stop outgoing PIM register messages on a designated router.
Requirements
1. Determine whether the router is directly attached to any multicast sources. Receivers must be able
to locate these sources.
2. Determine whether the router is directly attached to any multicast group receivers. If receivers are
present, IGMP is needed.
3. Determine whether to configure multicast to use sparse, dense, or sparse-dense mode. Each mode
has different configuration considerations.
5. Determine whether to locate the RP with the static configuration, BSR, or auto-RP method.
6. Determine whether to configure multicast to use its own RPF routing table when configuring PIM
in sparse, dense, or sparse-dense mode.
7. Configure the SAP and SDP protocols to listen for multicast session announcements.
8. Configure IGMP.
10. Filter PIM register messages from unauthorized groups and sources. See Example: Rejecting
Incoming PIM Register Messages on RP Routers.
Overview
In this example, you configure the group address as 224.2.2.2/32 and the source address in the group as
20.20.20.1/32. You set the match action to not send PIM register messages for the group and source
address. Then you apply the stop-pim-register-msg-dr policy on the designated router.
Configuration
IN THIS SECTION
Procedure | 382
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, and then copy and paste
the commands into the CLI at the [edit] hierarchy level.
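The block of set commands appears to be missing from this copy. Reconstructed from the step-by-step procedure that follows; the placement of dr-register-policy at the [edit protocols pim] hierarchy level is assumed:

```
set policy-options policy-statement stop-pim-register-msg-dr from route-filter 224.2.2.2/32 exact
set policy-options policy-statement stop-pim-register-msg-dr from source-address-filter 20.20.20.1/32 exact
set policy-options policy-statement stop-pim-register-msg-dr then reject
set protocols pim dr-register-policy stop-pim-register-msg-dr
```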
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
instructions on how to do that, see Using the CLI Editor in Configuration Mode in the Junos OS CLI User
Guide.
[edit]
user@host# edit policy-options
[edit policy-options]
user@host# set policy-statement stop-pim-register-msg-dr from route-filter 224.2.2.2/32 exact
[edit policy-options]
user@host# set policy-statement stop-pim-register-msg-dr from source-address-filter 20.20.20.1/32 exact
[edit policy-options]
user@host# set policy-statement stop-pim-register-msg-dr then reject
[edit]
user@host# set protocols pim dr-register-policy stop-pim-register-msg-dr
Results
From configuration mode, confirm your configuration by entering the show policy-options and show
protocols commands. If the output does not display the intended configuration, repeat the configuration
instructions in this example to correct it.
[edit]
user@host# show policy-options
policy-statement stop-pim-register-msg-dr {
    from {
        route-filter 224.2.2.2/32 exact;
        source-address-filter 20.20.20.1/32 exact;
    }
    then reject;
}
If you are done configuring the device, enter commit from configuration mode.
Verification
IN THIS SECTION
Purpose
Verify that SAP and SDP are configured to listen on the correct group addresses and ports.
Action
Purpose
Action
Purpose
Action
Purpose
Verify that the PIM RP is statically configured with the correct IP address.
Action
SEE ALSO
network and the dropping of packets at a scope at the edge of the network. Also, PIM join filters reduce
the potential for denial-of-service (DoS) attacks and PIM state explosion—large numbers of PIM join
messages forwarded to each router on the rendezvous-point tree (RPT), resulting in memory
consumption.
To use PIM join filters to efficiently restrict multicast traffic from certain source addresses, create and
apply the routing policy across all routers in the network.
neighbor Neighbor address (the source address in the IP header of the join and prune
message)
route-filter Multicast group address embedded in the join and prune message
source-address-filter Multicast source address embedded in the join and prune message
The following example shows how to create a PIM join filter. The filter is composed of a route filter and
a source address filter—bad-groups and bad-sources, respectively. The bad-groups filter prevents (*,G) or
(S,G) join messages from being received for all groups listed. The bad-sources filter prevents (S,G) join
messages from being received for all sources listed. The bad-groups filter and bad-sources filter are in
two different terms. If route filters and source address filters are in the same term, they are logically
ANDed.
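The example configuration itself appears to be missing from this copy. A minimal sketch of the two-term join filter described above; the policy name and addresses are hypothetical, and the import statement at the [edit protocols pim] hierarchy level is assumed to be the apply point:

```
set policy-options policy-statement pim-join-filter term bad-groups from route-filter 224.2.2.2/32 exact
set policy-options policy-statement pim-join-filter term bad-groups then reject
set policy-options policy-statement pim-join-filter term bad-sources from source-address-filter 10.10.10.1/32 exact
set policy-options policy-statement pim-join-filter term bad-sources then reject
set protocols pim import pim-join-filter
```

Because the route filter and the source address filter sit in separate terms, either match alone rejects the join; placing both in one term would logically AND them, as the text above notes.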
2. Apply one or more policies to routes being imported into the routing table from PIM.
3. Verify the configuration by checking the output of the show pim join and show policy commands.
SEE ALSO
IN THIS SECTION
Requirements | 388
Overview | 388
Configuration | 389
Verification | 391
This example shows how to reject incoming PIM register messages on RP routers.
Requirements
1. Determine whether the router is directly attached to any multicast sources. Receivers must be able
to locate these sources.
2. Determine whether the router is directly attached to any multicast group receivers. If receivers are
present, IGMP is needed.
3. Determine whether to configure multicast to use sparse, dense, or sparse-dense mode. Each mode
has different configuration considerations.
5. Determine whether to locate the RP with the static configuration, BSR, or auto-RP method.
6. Determine whether to configure multicast to use its own RPF routing table when configuring PIM in
sparse, dense, or sparse-dense mode.
7. Configure the SAP and SDP protocols to listen for multicast session announcements. See Configuring
the Session Announcement Protocol.
Overview
In this example, you configure the group address as 224.1.1.1/32 and the source address in the group as
10.10.10.1/32. You set the match action to reject PIM register messages and assign reject-pim-register-
msg-rp as the policy on the RP.
Configuration
IN THIS SECTION
Procedure | 389
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level and then enter commit from configuration mode.
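The block of set commands appears to be missing from this copy. Reconstructed from the step-by-step procedure that follows; the placement of rp-register-policy at the [edit protocols pim] hierarchy level is assumed:

```
set policy-options policy-statement reject-pim-register-msg-rp from route-filter 224.1.1.1/32 exact
set policy-options policy-statement reject-pim-register-msg-rp from source-address-filter 10.10.10.1/32 exact
set policy-options policy-statement reject-pim-register-msg-rp then reject
set protocols pim rp-register-policy reject-pim-register-msg-rp
```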
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
instructions on how to do that, see Using the CLI Editor in Configuration Mode in the Junos OS CLI User
Guide.
[edit]
user@host# edit policy-options
[edit policy-options]
user@host# set policy-statement reject-pim-register-msg-rp from route-filter 224.1.1.1/32 exact
[edit policy-options]
user@host# set policy-statement reject-pim-register-msg-rp from source-address-filter 10.10.10.1/32 exact
[edit policy-options]
user@host# set policy-statement reject-pim-register-msg-rp then reject
[edit]
user@host# edit protocols pim
[edit protocols pim]
user@host# set rp-register-policy reject-pim-register-msg-rp
Results
From configuration mode, confirm your configuration by entering the show policy-options and show
protocols pim commands. If the output does not display the intended configuration, repeat the
configuration instructions in this example to correct it.
[edit]
user@host# show policy-options
policy-statement reject-pim-register-msg-rp {
    from {
        route-filter 224.1.1.1/32 exact;
        source-address-filter 10.10.10.1/32 exact;
    }
    then reject;
}
If you are done configuring the device, enter commit from configuration mode.
Verification
IN THIS SECTION
Purpose
Verify that SAP and SDP are configured to listen on the correct group addresses and ports.
Action
Purpose
Action
Purpose
Action
Purpose
Action
From configuration mode, enter the show policy-options and show protocols pim commands.
SEE ALSO
• Deliver the initial multicast packets sent by the source to the RP for delivery down the shortest-path
tree (SPT).
The PIM RP keeps track of all active sources in a single PIM sparse mode domain. In some cases, you
want more control over which sources an RP discovers, or which sources a DR notifies other RPs about.
A high degree of control over PIM register messages is provided by RP or DR register message filtering.
Message filtering prevents unauthorized groups and sources from registering with an RP router.
You configure RP or DR register message filtering to control the number and location of multicast
sources that an RP discovers. You can apply register message filters on a DR to control outgoing register
messages, or apply them on an RP to control incoming register messages.
When anycast RP is configured, all RPs in the anycast RP set need to be configured with the same
register message filtering policy.
You can configure message filtering globally or for a routing instance. These examples show the global
configuration.
To configure an RP filter to drop the register packets for multicast group range 224.1.1.0/24 from source
address 10.10.94.2:
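The configuration itself appears to be missing from this copy. A minimal sketch, with a hypothetical policy name and the rp-register-policy apply statement assumed:

```
set policy-options policy-statement rp-reg-filter from route-filter 224.1.1.0/24 orlonger
set policy-options policy-statement rp-reg-filter from source-address-filter 10.10.94.2/32 exact
set policy-options policy-statement rp-reg-filter then reject
set protocols pim rp-register-policy rp-reg-filter
```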
To configure a DR filter to prevent sending register packets for group range 224.1.1.0/24 and source
address 10.10.10.1/32:
The static address is the address of the RP to which you do not want the DR to send the filtered
register messages.
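The configuration itself appears to be missing from this copy. A minimal sketch, with a hypothetical policy name and the dr-register-policy apply statement assumed; the match on the static RP address described in the note above is omitted here:

```
set policy-options policy-statement dr-reg-filter from route-filter 224.1.1.0/24 orlonger
set policy-options policy-statement dr-reg-filter from source-address-filter 10.10.10.1/32 exact
set policy-options policy-statement dr-reg-filter then reject
set protocols pim dr-register-policy dr-reg-filter
```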
To configure a policy expression to accept register messages for multicast group 224.1.1.5 but reject
those for 224.1.1.1:
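The configuration itself appears to be missing from this copy. A two-term policy that achieves the described result (the original may have combined separate policies in a Junos policy expression; the policy and term names are hypothetical):

```
set policy-options policy-statement register-policy term accept-group from route-filter 224.1.1.5/32 exact
set policy-options policy-statement register-policy term accept-group then accept
set policy-options policy-statement register-policy term reject-group from route-filter 224.1.1.1/32 exact
set policy-options policy-statement register-policy term reject-group then reject
```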
To monitor the operation of the filters, run the show pim statistics command. The command output
contains the following fields related to filtering:
• RP Filtered Source
• Rx Joins/Prunes filtered
• Tx Joins/Prunes filtered
SEE ALSO
RELATED DOCUMENTATION
IN THIS SECTION
Understanding Multicast Rendezvous Points, Shared Trees, and Rendezvous-Point Trees | 396
In the RP model, other routers do not need to know the addresses of the sources for every multicast
group. All they need to know is the IP address of the RP router. The RP router discovers the sources for
all multicast groups.
The RP model shifts the burden of finding sources of multicast content from each router (the (S,G)
notation) to the network (the (*,G) notation, in which only the RP is known). Exactly how the RP finds
the unicast IP address of the source varies, but there must be some method to determine the proper
source of multicast content for a particular group.
Consider a set of multicast routers without any active multicast traffic for a certain group. When a
router learns that an interested receiver for that group is on one of its directly connected subnets, the
router attempts to join the distribution tree for that group back to the RP, not to the actual source of the
content.
To join the shared tree, or the (*,G) tree as it is called in PIM sparse mode, the router must do the following:
• Determine the IP address of the RP for that group. Determining the address can be as simple as
static configuration in the router, or as complex as a set of nested protocols.
• Build the shared tree for that group. The router executes an RPF check on the RP address in its
routing table, which produces the interface closest to the RP. The router now detects that multicast
packets from this RP for this group need to flow into the router on this RPF interface.
• Send a join message out on this interface using the proper multicast protocol (probably PIM sparse
mode) to inform the upstream router that it wants to join the shared tree for that group. This
message is a (*,G) join message because S is not known. Only the RP is known, and the RP is not
actually the source of the multicast packets. The router receiving the (*,G) join message adds the
interface on which the message was received to its outgoing interface list (OIL) for the group and also
performs an RPF check on the RP address. The upstream router then sends a (*,G) join message out
from the RPF interface toward the source, informing the upstream router that it also wants to join
the group.
Each upstream router repeats this process, propagating join messages from the RPF interface, building
the shared tree as it goes. The process stops when the join message reaches one of the following:
• A router along the RPT that already has a multicast forwarding state for the group that is being
joined
In either case, the branch is created, and packets can flow from the source to the RP and from the RP to
the receiver. Note that there is no guarantee that the shared tree (RPT) is the shortest path tree to the
source. Most likely it is not. However, there are ways to “migrate” a shared tree to an SPT once the flow
of packets begins. In other words, the forwarding state can transition from (*,G) to (S,G). The formation
of both types of tree depends heavily on the operation of the RPF check and the RPF table. For more
information about the RPF table, see Understanding Multicast Reverse Path Forwarding.
1. A receiver sends a request to join group (G) in an Internet Group Management Protocol (IGMP) host
membership report. A PIM sparse-mode router, the receiver’s DR, receives the report on a directly
attached subnet and creates an RPT branch for the multicast group of interest.
2. The receiver’s DR sends a PIM join message to its RPF neighbor, the next-hop address in the RPF
table, or the unicast routing table.
3. The PIM join message travels up the tree and is multicast to the ALL-PIM-ROUTERS group
(224.0.0.13). Each router in the tree finds its RPF neighbor by using either the RPF table or the
unicast routing table. This is done until the message reaches the RP and forms the RPT. Routers along
the path set up the multicast forwarding state to forward requested multicast traffic back down the
RPT to the receiver.
1. The source becomes active, sending out multicast packets on the LAN to which it is attached. The
source’s DR receives the packets and encapsulates them in a PIM register message, which it sends to
the RP router (see Figure 45 on page 399).
2. When the RP router receives the PIM register message from the source, it sends a PIM join message
back to the source.
Figure 45: PIM Register Message and PIM Join Message Exchanged
3. The source’s DR receives the PIM join message and begins sending traffic down the SPT toward the
RP router (see Figure 46 on page 400).
4. Once traffic is received by the RP router, it sends a register stop message to the source’s DR to stop
the register process.
5. The RP router sends the multicast traffic down the RPT toward the receiver (see Figure 47 on page
401).
Figure 47: Traffic Sent from the RP Router Toward the Receiver
To join the distribution tree, the router determines the unicast IP address of the source for that group.
Determining this address can be as simple as a static configuration on the router, or as complex as a set of protocols.
To build the SPT for that group, the router executes a reverse path forwarding (RPF) check on the
source address in its routing table. The RPF check produces the interface closest to the source, which is
where multicast packets from this source for this group need to flow into the router.
The router next sends a join message out on this interface using the proper multicast protocol to inform
the upstream router that it wants to join the distribution tree for that group. This message is an (S,G) join
message because both S and G are known. The router receiving the (S,G) join message adds the interface
on which the message was received to its output interface list (OIL) for the group and also performs an
RPF check on the source address. The upstream router then sends an (S,G) join message out on the RPF
interface toward the source, informing the upstream router that it also wants to join the group.
Each upstream router repeats this process, propagating joins out on the RPF interface, building the SPT
as it goes. The process stops when the join message does one of two things:
• Reaches the router directly connected to the host that is the source.
• Reaches a router that already has multicast forwarding state for this source-group pair.
In either case, the branch is created, each of the routers has multicast forwarding state for the source-
group pair, and packets can flow down the distribution tree from source to receiver. The RPF check at
each router makes sure that the tree is an SPT.
SPTs are always the shortest path, but they are not necessarily short. That is, sources and receivers tend
to be on the periphery of a router network, not on the backbone, and multicast distribution trees have a
tendency to sprawl across almost every router in the network. Because multicast traffic can overwhelm
a slow interface, and one packet can easily become a hundred or a thousand on the opposite side of the
backbone, it makes sense to provide a shared tree as a distribution tree so that the multicast source can
be located more centrally in the network, on the backbone. This sharing of distribution trees with roots
in the core network is accomplished by a multicast rendezvous point. For more information about RPs,
see Understanding Multicast Rendezvous Points, Shared Trees, and Rendezvous-Point Trees.
SPT Cutover
Instead of continuing to use the SPT to the RP and the RPT toward the receiver, a direct SPT is created
between the source and the receiver in the following way:
1. Once the receiver’s DR receives the first multicast packet from the source, the DR sends a PIM join
message to its RPF neighbor (see Figure 48 on page 403).
2. The source’s DR receives the PIM join message, and an additional (S,G) state is created to form the
SPT.
3. Multicast packets from that particular source begin coming from the source's DR and flowing down
the new SPT to the receiver’s DR. The receiver’s DR is now receiving two copies of each multicast
packet sent by the source—one from the RPT and one from the new SPT.
4. To stop duplicate multicast packets, the receiver’s DR sends a PIM prune message toward the RP
router, letting it know that the multicast packets from this particular source coming in from the RPT
are no longer needed (see Figure 49 on page 404).
Figure 49: PIM Prune Message Is Sent from the Receiver’s DR Toward the RP Router
5. The PIM prune message is received by the RP router, and it stops sending multicast packets down to
the receiver’s DR. The receiver’s DR is getting multicast packets only for this particular source over
the new SPT. However, multicast packets from the source are still arriving from the source’s DR
toward the RP router (see Figure 50 on page 405).
6. To stop the unneeded multicast packets from this particular source, the RP router sends a PIM prune
message to the source’s DR (see Figure 51 on page 406).
7. The receiver’s DR now receives multicast packets only for the particular source from the SPT (see
Figure 52 on page 407).
Figure 52: Source’s DR Stops Sending Duplicate Multicast Packets Toward the RP Router
In these cases, you configure an SPT threshold policy on the last-hop router to control the transition to a
direct SPT. An SPT cutover threshold of infinity applied to a source-group address pair means the last-
hop router will never transition to a direct SPT. For all other source-group address pairs, the last-hop
router transitions immediately to a direct SPT rooted at the source DR.
IN THIS SECTION
Requirements | 408
Overview | 408
Configuration | 410
This example shows how to configure the timeout period for a PIM assert forwarder.
Requirements
• Configure an interior gateway protocol or static routing. See the Junos OS Routing Protocols Library
for Routing Devices.
• Configure PIM Sparse Mode on the interfaces. See Enabling PIM Sparse Mode.
Overview
IN THIS SECTION
Topology | 410
The role of PIM assert messages is to determine the forwarder on a network with multiple routers. The
forwarder is the router that forwards multicast packets to a network with multicast group members. The
forwarder is generally the same as the PIM DR.
A router sends an assert message when it receives a multicast packet on an interface that is listed in the
outgoing interface list of the matching routing entry. Receiving a multicast packet on an outgoing
interface is an indication that more than one router is forwarding the same multicast packets to the network.
In Figure 53 on page 410, both routing devices R1 and R2 forward multicast packets for the same (S,G)
entry on a network. Both devices detect this situation and both devices send assert messages on the
Ethernet network. An assert message contains, in addition to a source address and group address, a
unicast cost metric for sending packets to the source, and a preference metric for the unicast cost. The
preference metric expresses a preference between unicast routing protocols. The routing device with
the smallest preference metric becomes the forwarder (also called the assert winner). If the preference
metrics are equal, the device that sent the lowest unicast cost metric becomes the forwarder. If the
unicast metrics are also equal, the routing device with the highest IP address becomes the forwarder.
After the transmission of assert messages, only the forwarder continues to forward messages on the
network.
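The three-step tiebreak above can be sketched as a small comparison function. This is an illustration of the election rule only, not Junos code, and the candidate addresses are hypothetical:

```python
import ipaddress

def assert_winner(candidates):
    """Pick the PIM assert winner from (ip, preference, metric) tuples.

    The lowest preference metric wins; ties fall to the lowest unicast
    cost metric; remaining ties fall to the highest IP address.
    """
    return min(
        candidates,
        key=lambda c: (c[1], c[2], -int(ipaddress.ip_address(c[0]))),
    )
```

For example, with equal preferences and metrics, `assert_winner([("10.0.0.1", 100, 10), ("10.0.0.9", 100, 10)])` selects the device at 10.0.0.9.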
When an assert message is received and the RPF neighbor is changed to the assert winner, the assert
timer is set to an assert timeout period. The assert timeout period is restarted every time a subsequent
assert message for the route entry is received on the incoming interface. When the assert timer expires,
the routing device resets its RPF neighbor according to its unicast routing table. Then, if multiple
forwarders still exist, the forwarders reenter the assert message cycle. In effect, the assert timeout
period determines how often multicast routing devices enter a PIM assert message cycle.
The range is from 5 through 210 seconds. The default is 180 seconds.
Assert messages are useful for LANs that connect multiple routing devices and no hosts.
Topology
Configuration
IN THIS SECTION
Procedure | 411
Procedure
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
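The configuration statement itself appears to have been lost in this copy. A minimal sketch, assuming the assert-timeout statement at the [edit protocols pim] hierarchy level and an arbitrary value within the 5-through-210-second range:

```
[edit]
user@host# set protocols pim assert-timeout 100
```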
user@host# commit
SEE ALSO
IN THIS SECTION
Requirements | 412
Overview | 412
Configuration | 414
Verification | 416
This example shows how to apply a policy that suppresses the transition from the rendezvous-point tree
(RPT) rooted at the RP to the shortest-path tree (SPT) rooted at the source.
Requirements
• Configure an interior gateway protocol or static routing. See the Junos OS Routing Protocols Library
for Routing Devices.
• Configure PIM Sparse Mode on the interfaces. See Enabling PIM Sparse Mode.
Overview
IN THIS SECTION
Topology | 414
Multicast routing devices running PIM sparse mode can forward the same stream of multicast packets
onto the same LAN through an RPT rooted at the RP or through an SPT rooted at the source. In some
cases, the last-hop routing device needs to stay on the shared RPT to the RP and not transition to a
direct SPT to the source. Receiving the multicast data traffic on the SPT is optimal but introduces more state
in the network, which might not be desirable in some multicast deployments. Ideally, low-bandwidth
multicast streams can be forwarded on the RPT, and high-bandwidth streams can use the SPT. This
example shows how to configure such a policy.
• spt-threshold—Enables you to configure an SPT threshold policy on the last-hop routing device to
control the transition to a direct SPT. When you include this statement in the main PIM instance, the
PE router stays on the RPT for control traffic.
• infinity—Applies an SPT cutover threshold of infinity to a source-group address pair, so that the last-
hop routing device never transitions to a direct SPT. For all other source-group address pairs, the
last-hop routing device transitions immediately to a direct SPT rooted at the source DR. This
statement must reference a properly configured policy to set the SPT cutover threshold for a
particular source-group pair to infinity. The use of values other than infinity for the SPT threshold is
not supported. You can configure more than one policy.
• policy-statement—Configures the policy. The simplest type of SPT threshold policy uses a route filter
and source address filter to specify the multicast group and source addresses and to set the SPT
threshold for that pair of addresses to infinity. The policy is applied to the main PIM instance.
This example sets the SPT transition value for the source-group pair 10.10.10.1 and 224.1.1.1 to
infinity. When the policy is applied to the last-hop router, multicast traffic from this source-group pair
never transitions to a direct SPT to the source. Traffic will continue to arrive through the RP.
However, traffic for any other source-group address combination at this router transitions to a direct
SPT to the source.
• Configuration changes to the SPT threshold policy affect how the routing device handles the SPT
transition.
• When the policy is configured for the first time, the routing device continues to transition to the
direct SPT for the source-group address pair until the PIM-join state is cleared with the clear pim join
command.
• If you do not clear the PIM-join state when you apply the infinity policy configuration for the first
time, you must apply it before the PE router is brought up.
• When the policy is deleted for a source-group address pair for the first time, the routing device does
not transition to the direct SPT for that source-group address pair until the PIM-join state is cleared
with the clear pim join command.
• When the policy is changed for a source-group address pair for the first time, the routing device does
not use the new policy until the PIM-join state is cleared with the clear pim join command.
Topology
Configuration
IN THIS SECTION
Procedure | 414
Results | 416
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
[edit]
set policy-options policy-statement spt-infinity-policy term one from route-filter 224.1.1.1/32 exact
set policy-options policy-statement spt-infinity-policy term one from source-address-filter 10.10.10.1/32
exact
set policy-options policy-statement spt-infinity-policy term one then accept
set policy-options policy-statement spt-infinity-policy term two then reject
set protocols pim spt-threshold infinity spt-infinity-policy
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see the Junos OS CLI User Guide.
[edit]
user@host# edit protocols pim
[edit protocols pim]
user@host# set spt-threshold infinity spt-infinity-policy
[edit protocols pim]
user@host# exit
[edit]
user@host# edit policy-options policy-statement spt-infinity-policy
[edit policy-options policy-statement spt-infinity-policy]
user@host# set term one from route-filter 224.1.1.1/32 exact
[edit policy-options policy-statement spt-infinity-policy]
user@host# set term one from source-address-filter 10.10.10.1/32 exact
[edit policy-options policy-statement spt-infinity-policy]
user@host# set term one then accept
[edit policy-options policy-statement spt-infinity-policy]
user@host# set term two then reject
[edit policy-options policy-statement spt-infinity-policy]
user@host# exit
[edit]
user@host# commit
4. Clear the PIM join cache to force the configuration to take effect.
[edit]
user@host# run clear pim join
Results
Confirm your configuration by entering the show policy-options command and the show protocols
command from configuration mode. If the output does not display the intended configuration, repeat
the instructions in this example to correct the configuration.
Verification
SEE ALSO
RELATED DOCUMENTATION
Disabling PIM
IN THIS SECTION
By default, when you enable the PIM protocol it applies to the specified interface only. To enable PIM
for all interfaces, include the all parameter (for example, set protocols pim interface all). You can disable
PIM at the protocol, interface, or family hierarchy levels.
The hierarchy in which you configure PIM is critical. In general, the most specific configuration takes
precedence. However, if PIM is disabled at the protocol level, then any disable statements with respect
to an interface or family are ignored.
For example, the order of precedence for disabling PIM on a particular interface family is:
1. If PIM is disabled at the [edit protocols pim interface interface-name family] hierarchy level, then
PIM is disabled for that interface family.
2. If PIM is not configured at the [edit protocols pim interface interface-name family] hierarchy level,
but is disabled at the [edit protocols pim interface interface-name] hierarchy level, then PIM is
disabled for all families on the specified interface.
3. If PIM is not configured at either the [edit protocols pim interface interface-name family] hierarchy
level or the [edit protocols pim interface interface-name] hierarchy level, but is disabled at the [edit
protocols pim] hierarchy level, then the PIM protocol is disabled globally for all interfaces and all
families.
The following sections describe how to disable PIM at the various hierarchy levels.
[edit protocols]
pim {
disable;
}
2. (Optional) Verify your configuration settings before committing them by using the show protocols
pim command.
SEE ALSO
[edit protocols]
pim {
interface interface-name {
disable;
}
}
2. (Optional) Verify your configuration settings before committing them by using the show protocols
pim command.
SEE ALSO
[edit protocols]
pim {
family inet {
disable;
}
family inet6 {
disable;
}
}
2. (Optional) Verify your configuration settings before committing them by using the show protocols
pim command.
SEE ALSO
[edit protocols]
pim {
rp {
local {
family inet {
disable;
}
family inet6 {
disable;
}
}
}
}
2. (Optional) Verify your configuration settings before committing them by using the show protocols
pim command.
SEE ALSO
CHAPTER 10
IN THIS CHAPTER
In a PIM sparse mode (PIM-SM) domain, there are two types of designated routers (DRs) to consider:
• The receiver DR sends PIM join and PIM prune messages from the receiver network toward the RP.
• The source DR sends PIM register messages from the source network to the RP.
Neighboring PIM routers multicast periodic PIM hello messages to each other every 30 seconds (the
default). The PIM hello message usually includes a holdtime value for the neighbor to use, but this is not
a requirement. If the PIM hello message does not include a holdtime value, a default timeout value (in
Junos OS, 105 seconds) is used. On receipt of a PIM hello message, a router stores the IP address and
priority for that neighbor. If the DR priorities match, the router with the highest IP address is selected as
the DR.
If a DR fails, a new one is selected using the same process of comparing IP addresses.
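The election rule above (highest priority wins, with the highest IP address breaking ties) can be sketched as a small comparison function. This is an illustration only, not Junos code, and the neighbor addresses are hypothetical:

```python
import ipaddress

def elect_dr(neighbors):
    """Pick the PIM DR from (ip_address, dr_priority) tuples.

    The highest DR priority wins; if priorities tie, the highest
    IP address wins.
    """
    return max(
        neighbors,
        key=lambda n: (n[1], int(ipaddress.ip_address(n[0]))),
    )

# Equal priorities, so the highest address is elected.
nbrs = [("192.168.1.1", 1), ("192.168.1.9", 1), ("192.168.1.5", 1)]
print(elect_dr(nbrs))  # -> ('192.168.1.9', 1)
```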
NOTE: DR priority is specific to PIM sparse mode. As described in RFC 3973, DR priority cannot be
configured explicitly in PIM dense mode (PIM-DM); PIM-DM supports DRs only for IGMPv1, not
IGMPv2.
IN THIS SECTION
By default, every PIM interface has an equal probability (priority 1) of being selected as the DR, but you
can change the value to increase or decrease the chances of a given DR being elected. A higher value
corresponds to a higher priority, that is, greater chance of being elected. Configuring the interface DR
priority helps ensure that changing an IP address does not alter your forwarding model.
NOTE: DR priority is specific to PIM sparse mode. As described in RFC 3973, DR priority cannot be
configured explicitly in PIM dense mode (PIM-DM); PIM-DM supports DRs only for IGMPv1, not
IGMPv2.
1. Configure the interface globally or in the routing instance. This example shows the configuration for
the routing instance.
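The configuration for step 1 appears to be missing from this copy. A minimal sketch that matches the DR priority of 5 shown in the sample output below; the routing-instance name is hypothetical:

```
[edit routing-instances VPN-A protocols pim]
user@host# set interface ge-0/0/0.0 priority 5
```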
2. Verify the configuration by checking the Hello Option DR Priority field in the output of the show pim
neighbors detail command.
Instance: PIM.master
Interface: ge-0/0/0.0
Address: 192.168.195.37, IPv4, PIM v2, Mode: Sparse
Hello Option Holdtime: 65535 seconds
Hello Option DR Priority: 5
Hello Option LAN Prune Delay: delay 500 ms override 2000 ms
Join Suppression supported
Rx Join: Group Source Timeout
225.1.1.1 192.168.195.78 0
225.1.1.1 0
Interface: lo0.0
Address: 10.255.245.91, IPv4, PIM v2, Mode: Sparse
Hello Option Holdtime: 65535 seconds
Hello Option DR Priority: 1
Hello Option LAN Prune Delay: delay 500 ms override 2000 ms
Join Suppression supported
Interface: pd-6/0/0.32768
Address: 0.0.0.0, IPv4, PIM v2, Mode: Sparse
Hello Option Holdtime: 65535 seconds
Hello Option DR Priority: 0
Hello Option LAN Prune Delay: delay 500 ms override 2000 ms
Join Suppression supported
SEE ALSO
Configuring PIM and the Bidirectional Forwarding Detection (BFD) Protocol | 499
Configuring PIM Auto-RP
Configuring PIM Filtering | 375
1. On both point-to-point link routers, configure the router globally or in the routing instance. This
example shows the configuration for the routing instance.
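A sketch of this step uses the dr-election-on-p2p statement (the instance and interface names are
illustrative); apply the same statement on the router at the other end of the link:

[edit]
user@host# set routing-instances VPN-A protocols pim interface so-0/1/0.0 dr-election-on-p2p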
2. Verify the configuration by checking the State field in the output of the show pim interfaces
command. The possible values for the State field are DR, NotDR, and P2P. When a point-to-point link
interface is elected to be the DR, the interface state becomes DR instead of P2P.
3. If the show pim interfaces command continues to report the P2P state, consider running the restart
routing command on both routers on the point-to-point link. Then recheck the state.
[edit]
user@host# run restart routing
SEE ALSO
A designated router (DR) sends periodic join messages and prune messages toward a group-specific
rendezvous point (RP) for each group for which it has active members. When a Protocol Independent
Multicast (PIM) router learns about a source, it originates a Multicast Source Discovery Protocol (MSDP)
source-active message if it is the DR on the upstream interface.
By default, every PIM interface has an equal probability (priority 1) of being selected as the DR, but you
can change the value to increase or decrease the chances of a given DR being elected. A higher value
corresponds to a higher priority, that is, greater chance of being elected. Configuring the interface DR
priority helps ensure that changing an IP address does not alter your forwarding model.
NOTE: DR priority is specific to PIM sparse mode; as per RFC 3973, PIM DR priority cannot be
configured explicitly in PIM Dense Mode (PIM-DM) with IGMPv2. PIM-DM supports DRs only with
IGMPv1.
1. Configure the interface globally or in the routing instance. This example shows the configuration for
the routing instance.
2. Verify the configuration by checking the Hello Option DR Priority field in the output of the show pim
neighbors detail command.
Instance: PIM.master
Interface: ge-0/0/0.0
Address: 192.168.195.37, IPv4, PIM v2, Mode: Sparse
Hello Option Holdtime: 65535 seconds
Interface: lo0.0
Address: 10.255.245.91, IPv4, PIM v2, Mode: Sparse
Hello Option Holdtime: 65535 seconds
Hello Option DR Priority: 1
Hello Option LAN Prune Delay: delay 500 ms override 2000 ms
Join Suppression supported
Interface: pd-6/0/0.32768
Address: 0.0.0.0, IPv4, PIM v2, Mode: Sparse
Hello Option Holdtime: 65535 seconds
Hello Option DR Priority: 0
Hello Option LAN Prune Delay: delay 500 ms override 2000 ms
Join Suppression supported
RELATED DOCUMENTATION
In PIM sparse mode, enable designated router (DR) election on all PIM interfaces, including point-to-
point (P2P) interfaces. (DR election is enabled by default on all other interfaces.) One of the two routers
might join a multicast group on its P2P link interface. The DR on that link is responsible for initiating the
relevant join messages. (DR priority is specific to PIM sparse mode; as per RFC 3973, PIM DR priority
cannot be configured explicitly in PIM Dense Mode (PIM-DM) with IGMPv2. PIM-DM supports DRs
only with IGMPv1.)
1. On both point-to-point link routers, configure the router globally or in the routing instance. This
example shows the configuration for the routing instance.
2. Verify the configuration by checking the State field in the output of the show pim interfaces
command. The possible values for the State field are DR, NotDR, and P2P. When a point-to-point link
interface is elected to be the DR, the interface state becomes DR instead of P2P.
3. If the show pim interfaces command continues to report the P2P state, consider running the restart
routing command on both routers on the point-to-point link. Then recheck the state.
[edit]
user@host# run restart routing
RELATED DOCUMENTATION
CHAPTER 11
IN THIS CHAPTER
Example: Configuring SSM Maps for Different Groups to Different Sources | 464
IN THIS SECTION
PIM source-specific multicast (SSM) uses a subset of PIM sparse mode and IGMP version 3 (IGMPv3) to
allow a client to receive multicast traffic directly from the source. PIM SSM uses the PIM sparse-mode
functionality to create an SPT between the receiver and the source, but builds the SPT without the help
of an RP.
RFC 1112, the original multicast RFC, supported both many-to-many and one-to-many models. These
came to be known collectively as any-source multicast (ASM) because ASM allowed one or many
sources for a multicast group's traffic. However, an ASM network must be able to determine the
locations of all sources for a particular multicast group whenever there are interested listeners, no
matter where the sources might be located in the network. In ASM, the key function of source
discovery is a required function of the network itself.
Multicast source discovery appears to be an easy process, but in sparse mode it is not. In dense mode, it
is simple enough to flood traffic to every router in the whole network so that every router learns the
source address of the content for that multicast group. However, the flooding presents scalability and
network resource use issues and is not a viable option in sparse mode.
PIM sparse mode (like any sparse mode protocol) achieves the required source discovery functionality
without flooding at the cost of a considerable amount of complexity. RP routers must be added and
must know all multicast sources, and complicated shared distribution trees must be built to the RPs.
PIM SSM is simpler than PIM sparse mode because only the one-to-many model is supported. Initial
commercial multicast Internet applications are likely to be available to subscribers (that is, receivers that
issue join messages) from only a single source (a special case of SSM covers the need for a backup source). PIM
SSM therefore forms a subset of PIM sparse mode. PIM SSM builds shortest-path trees (SPTs) rooted at
the source immediately because in SSM, the router closest to the interested receiver host is informed of
the unicast IP address of the source for the multicast traffic. That is, PIM SSM bypasses the RP
connection stage through shared distribution trees, as in PIM sparse mode, and goes directly to the
source-based distribution tree.
In an environment where many sources come and go, such as for a videoconferencing service, ASM is
appropriate. However, by ignoring the many-to-many model and focusing attention on the one-to-many
source-specific multicast (SSM) model, several commercially promising multicast applications, such as
television channel distribution over the Internet, might be brought to the Internet much more quickly
and efficiently than if full ASM functionality were required of the network.
An SSM-configured network has distinct advantages over a traditionally configured PIM sparse-mode
network. There is no need for shared trees or RP mapping (no RP is required), or for RP-to-RP source
discovery through MSDP.
PIM Terminology
PIM SSM introduces new terms for many of the concepts in PIM sparse mode. PIM SSM can technically
be used in the entire 224/4 multicast address range, although PIM SSM operation is guaranteed only in
the 232/8 range (232.0.0/24 is reserved). The new SSM terms are appropriate for Internet video
applications and are summarized in Table 13 on page 431.
Group address range: 224/4 excluding 232/8 for ASM; 224/4 for SSM (guaranteed only for 232/8).
Although PIM SSM describes receiver operations as subscribe and unsubscribe, the same PIM sparse
mode join and leave messages are used by both forms of the protocol. The terminology change
distinguishes ASM from SSM even though the receiver messages are identical.
PIM source-specific multicast (SSM) uses a subset of PIM sparse mode and IGMP version 3 (IGMPv3) to
allow a client to receive multicast traffic directly from the source. PIM SSM uses the PIM sparse-mode
functionality to create an SPT between the receiver and the source, but builds the SPT without the help
of an RP.
By default, the SSM group multicast address is limited to the IP address range from 232.0.0.0 through
232.255.255.255. However, you can extend SSM operations into another Class D range by including the
ssm-groups statement at the [edit routing-options multicast] hierarchy level. The default SSM address
range from 232.0.0.0 through 232.255.255.255 cannot be used in the ssm-groups statement. This
statement is for adding other multicast addresses to the default SSM group addresses. This statement
does not override the default SSM group address range.
In a PIM SSM-configured network, a host subscribes to an SSM channel (by means of IGMPv3),
announcing a desire to join group G and source S (see Figure 54 on page 432). The directly connected
PIM sparse-mode router, the receiver's DR, sends an (S,G) join message to its RPF neighbor for the
source. Notice in Figure 54 on page 432 that the RP is not contacted in this process by the receiver, as
would be the case in normal PIM sparse-mode operations.
The (S,G) join message initiates the source tree and then builds it out hop by hop until it reaches the
source. In Figure 55 on page 433, the source tree is built across the network to Router 3, the last-hop
router connected to the source.
Using the source tree, multicast traffic is delivered to the subscribing host (see Figure 56 on page 433).
Figure 56: (S,G) State Is Built Between the Source and the Receiver
You can configure Junos OS to accept any-source multicast (ASM) join messages (*,G) for group
addresses that are within the default or configured range of source-specific multicast (SSM) groups. This
allows you to support a mix of any-source and source-specific multicast groups simultaneously.
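To enable this behavior, include the asm-override-ssm statement, as the configuration example later in
this chapter does:

[edit routing-options]
user@host# set multicast asm-override-ssm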
Deploying SSM is easy. You need to configure PIM sparse mode on all router interfaces and issue the
necessary SSM commands, including specifying IGMPv3 on the receiver's LAN. If PIM sparse mode is
not explicitly configured on both the source and group member interfaces, multicast packets are not
forwarded. Source lists, supported in IGMPv3, are used in PIM SSM. As sources become active and start
sending multicast packets, interested receivers in the SSM group receive the multicast packets.
To configure additional SSM groups, include the ssm-groups statement at the [edit routing-options
multicast] hierarchy level.
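For example (the added group range 239.0.0.0/8 is illustrative):

[edit routing-options]
user@host# set multicast ssm-groups 239.0.0.0/8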
RELATED DOCUMENTATION
IN THIS SECTION
PIM source-specific multicast (SSM) uses a subset of PIM sparse mode and IGMP version 3 (IGMPv3) to
allow a client to receive multicast traffic directly from the source. PIM SSM uses the PIM sparse-mode
functionality to create an SPT between the receiver and the source, but builds the SPT without the help
of an RP.
RFC 1112, the original multicast RFC, supported both many-to-many and one-to-many models. These
came to be known collectively as any-source multicast (ASM) because ASM allowed one or many
sources for a multicast group's traffic. However, an ASM network must be able to determine the
locations of all sources for a particular multicast group whenever there are interested listeners, no
matter where the sources might be located in the network. In ASM, the key function of source
discovery is a required function of the network itself.
Multicast source discovery appears to be an easy process, but in sparse mode it is not. In dense mode, it
is simple enough to flood traffic to every router in the whole network so that every router learns the
source address of the content for that multicast group. However, the flooding presents scalability and
network resource use issues and is not a viable option in sparse mode.
PIM sparse mode (like any sparse mode protocol) achieves the required source discovery functionality
without flooding at the cost of a considerable amount of complexity. RP routers must be added and
must know all multicast sources, and complicated shared distribution trees must be built to the RPs.
PIM SSM is simpler than PIM sparse mode because only the one-to-many model is supported. Initial
commercial multicast Internet applications are likely to be available to subscribers (that is, receivers that
issue join messages) from only a single source (a special case of SSM covers the need for a backup source). PIM
SSM therefore forms a subset of PIM sparse mode. PIM SSM builds shortest-path trees (SPTs) rooted at
the source immediately because in SSM, the router closest to the interested receiver host is informed of
the unicast IP address of the source for the multicast traffic. That is, PIM SSM bypasses the RP
connection stage through shared distribution trees, as in PIM sparse mode, and goes directly to the
source-based distribution tree.
In an environment where many sources come and go, such as for a videoconferencing service, ASM is
appropriate. However, by ignoring the many-to-many model and focusing attention on the one-to-many
source-specific multicast (SSM) model, several commercially promising multicast applications, such as
television channel distribution over the Internet, might be brought to the Internet much more quickly
and efficiently than if full ASM functionality were required of the network.
An SSM-configured network has distinct advantages over a traditionally configured PIM sparse-mode
network. There is no need for shared trees or RP mapping (no RP is required), or for RP-to-RP source
discovery through MSDP.
PIM Terminology
PIM SSM introduces new terms for many of the concepts in PIM sparse mode. PIM SSM can technically
be used in the entire 224/4 multicast address range, although PIM SSM operation is guaranteed only in
the 232/8 range (232.0.0/24 is reserved). The new SSM terms are appropriate for Internet video
applications and are summarized in Table 14 on page 436.
Group address range: 224/4 excluding 232/8 for ASM; 224/4 for SSM (guaranteed only for 232/8).
Although PIM SSM describes receiver operations as subscribe and unsubscribe, the same PIM sparse
mode join and leave messages are used by both forms of the protocol. The terminology change
distinguishes ASM from SSM even though the receiver messages are identical.
PIM source-specific multicast (SSM) uses a subset of PIM sparse mode and IGMP version 3 (IGMPv3) to
allow a client to receive multicast traffic directly from the source. PIM SSM uses the PIM sparse-mode
functionality to create an SPT between the receiver and the source, but builds the SPT without the help
of an RP.
By default, the SSM group multicast address is limited to the IP address range from 232.0.0.0 through
232.255.255.255. However, you can extend SSM operations into another Class D range by including the
ssm-groups statement at the [edit routing-options multicast] hierarchy level. The default SSM address
range from 232.0.0.0 through 232.255.255.255 cannot be used in the ssm-groups statement. This
statement is for adding other multicast addresses to the default SSM group addresses. This statement
does not override the default SSM group address range.
In a PIM SSM-configured network, a host subscribes to an SSM channel (by means of IGMPv3),
announcing a desire to join group G and source S (see Figure 57 on page 437). The directly connected
PIM sparse-mode router, the receiver's DR, sends an (S,G) join message to its RPF neighbor for the
source. Notice in Figure 57 on page 437 that the RP is not contacted in this process by the receiver, as
would be the case in normal PIM sparse-mode operations.
The (S,G) join message initiates the source tree and then builds it out hop by hop until it reaches the
source. In Figure 58 on page 437, the source tree is built across the network to Router 3, the last-hop
router connected to the source.
Using the source tree, multicast traffic is delivered to the subscribing host (see Figure 59 on page 438).
Figure 59: (S,G) State Is Built Between the Source and the Receiver
You can configure Junos OS to accept any-source multicast (ASM) join messages (*,G) for group
addresses that are within the default or configured range of source-specific multicast (SSM) groups. This
allows you to support a mix of any-source and source-specific multicast groups simultaneously.
Deploying SSM is easy. You need to configure PIM sparse mode on all router interfaces and issue the
necessary SSM commands, including specifying IGMPv3 on the receiver's LAN. If PIM sparse mode is
not explicitly configured on both the source and group member interfaces, multicast packets are not
forwarded. Source lists, supported in IGMPv3, are used in PIM SSM. As sources become active and start
sending multicast packets, interested receivers in the SSM group receive the multicast packets.
To configure additional SSM groups, include the ssm-groups statement at the [edit routing-options
multicast] hierarchy level.
SEE ALSO
SSM identifies session traffic by both source and group address. The SSM (S,G) pairs are called
channels to differentiate them from any-source multicast
(ASM) groups. Although ASM supports both one-to-many and many-to-many communications, ASM's
complexity is in its method of source discovery. For example, if you click a link in a browser, the receiver
is notified about the group information, but not the source information. With SSM, the client receives
both source and group information.
SSM is ideal for one-to-many multicast services such as network entertainment channels. However,
many-to-many multicast services might require ASM.
To deploy SSM successfully, you need an end-to-end multicast-enabled network and applications that
use an Internet Group Management Protocol version 3 (IGMPv3) or Multicast Listener Discovery version
2 (MLDv2) stack, or you need to configure SSM mapping from IGMPv1 or IGMPv2 to IGMPv3. An
IGMPv3 stack provides the capability of a host operating system to use the IGMPv3 protocol. IGMPv3 is
available for Windows XP, Windows Vista, and most UNIX operating systems.
SSM mapping allows operators to support an SSM network without requiring all hosts to support
IGMPv3. This support exists in static (S,G) configurations, but SSM mapping also supports dynamic per-
source group state information, which changes as hosts join and leave the group using IGMP.
SSM is typically supported with a subset of IGMPv3 and PIM sparse mode known as PIM SSM. Using
SSM, a client can receive multicast traffic directly from the source. PIM SSM uses the PIM sparse-mode
functionality to create an SPT between the client and the source, but builds the SPT without the help of
an RP.
An SSM-configured network has distinct advantages over a traditionally configured PIM sparse-mode
network. There is no need for shared trees or RP mapping (no RP is required), or for RP-to-RP source
discovery through the Multicast Source Discovery Protocol (MSDP).
IN THIS SECTION
Requirements | 440
Overview | 440
Configuration | 442
Verification | 444
This example shows how to extend source-specific multicast (SSM) group operations beyond the default
IP address range of 232.0.0.0 through 232.255.255.255. This example also shows how to accept any-
source multicast (ASM) join messages (*,G) for group addresses that are within the default or configured
range of SSM groups. This allows you to support a mix of any-source and source-specific multicast
groups simultaneously.
Requirements
Overview
IN THIS SECTION
Topology | 442
To deploy SSM, configure PIM sparse mode on all routing device interfaces and issue the necessary SSM
commands, including specifying IGMPv3 or MLDv2 on the receiver's LAN. If PIM sparse mode is not
explicitly configured on both the source and group members interfaces, multicast packets are not
forwarded. Source lists, supported in IGMPv3 and MLDv2, are used in PIM SSM. Only sources that are
specified send traffic to the SSM group.
In a PIM SSM-configured network, a host subscribes to an SSM channel (by means of IGMPv3 or
MLDv2) to join group G and source S (see Figure 60 on page 440). The directly connected PIM sparse-
mode router, the receiver's designated router (DR), sends an (S,G) join message to its reverse-path
forwarding (RPF) neighbor for the source. Notice in Figure 60 on page 440 that the RP is not contacted
in this process by the receiver, as would be the case in normal PIM sparse-mode operations.
The (S,G) join message initiates the source tree and then builds it out hop by hop until it reaches the
source. In Figure 61 on page 441, the source tree is built across the network to Router 3, the last-hop
router connected to the source.
Using the source tree, multicast traffic is delivered to the subscribing host (see Figure 62 on page 441).
Figure 62: (S,G) State Is Built Between the Source and the Receiver
SSM can operate in include mode or in exclude mode. In exclude mode, the receiver specifies a list of
sources from which it does not want to receive the multicast group's traffic. The routing device then
forwards traffic to the receiver from any source except the sources specified in the exclusion list.
Topology
This example works with the simple RPF topology shown in Figure 63 on page 442.
Configuration
IN THIS SECTION
Procedure | 442
Results | 444
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
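Based on the steps that follow, the multicast routing options portion of the quick configuration is as
shown below (the OSPF, PIM, and IGMP commands for the underlying topology are not reproduced
here):

set routing-options multicast ssm-groups [ 232.0.0.0/8 239.0.0.0/8 ]
set routing-options multicast asm-override-ssm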
Step-by-Step Procedure
The following example requires that you navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
1. Configure OSPF.
[edit routing-options]
user@host# set multicast ssm-groups [ 232.0.0.0/8 239.0.0.0/8 ]
4. Configure the RP to accept ASM join messages for groups within the SSM address range.
[edit routing-options]
user@host# set multicast asm-override-ssm
user@host# commit
Results
Confirm your configuration by entering the show protocols and show routing-options commands.
Verification
SEE ALSO
[edit]
protocols {
pim {
interface all {
mode sparse;
version 2;
}
interface fxp0.0 {
disable;
}
}
igmp {
interface fe-0/1/2 {
version 3;
}
}
}
This example shows how to configure the IGMP version to IGMPv3 on all receiving host interfaces.
1. Enable IGMPv3 on all host-facing interfaces, and disable IGMP on the fxp0.0 interface on Router 1.
NOTE: When you configure IGMPv3 on a router, hosts on interfaces configured with IGMPv2
cannot join the source tree.
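A sketch of this step, applying IGMPv3 to all interfaces and then disabling IGMP on the management
interface:

[edit]
user@host# set protocols igmp interface all version 3
user@host# set protocols igmp interface fxp0.0 disable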
2. After the configuration is committed, use the show configuration protocol igmp command to verify
the IGMP protocol configuration.
3. Use the show igmp interface command to verify that IGMP interfaces are configured.
4. Use the show pim join extensive command to verify the PIM join state on Router 2 and Router 3 (the
upstream routers).
5. Use the show pim join extensive command to verify the PIM join state on Router 1 (the router
connected to the receiver).
Interface: fe-0/2/3.0
10.3.1.1 State: Join Flags: S Timeout: Infinity
NOTE: IP version 6 (IPv6) multicast routers use the Multicast Listener Discovery (MLD) Protocol
to manage the membership of hosts and routers in multicast groups and to learn which groups
have interested listeners for each attached physical network. Each routing device maintains a
list of host multicast addresses that have listeners for each subnetwork, as well as a timer for
each address. However, the routing device does not need to know the address of each listener—
just the address of each host. The routing device provides addresses to the multicast routing
protocol it uses, which ensures that multicast packets are delivered to all subnetworks where
there are interested listeners. In this way, MLD is used as the transport for the Protocol
Independent Multicast (PIM) Protocol. MLD is an integral part of IPv6 and must be enabled on all
IPv6 routing devices and hosts that need to receive IP multicast traffic. The Junos OS supports
MLD versions 1 and 2. Version 2 is supported for source-specific multicast (SSM) include and
exclude modes.
SEE ALSO
SSM mapping applies to all group addresses that match the policy, not just those that conform to SSM
addressing conventions (232/8 for IPv4, ff30::/32 through ff3F::/32 for IPv6).
We recommend separate SSM maps for IPv4 and IPv6 if both address families require SSM support. If
you apply an SSM map containing both IPv4 and IPv6 addresses to an interface in an IPv4 context (using
IGMP), only the IPv4 addresses in the list are used. If there are no such addresses, no action is taken.
Similarly, if you apply an SSM map containing both IPv4 and IPv6 addresses to an interface in an IPv6
context (using MLD), only the IPv6 addresses in the list are used. If there are no such addresses, no
action is taken.
In this example, you create a policy to match the group addresses that you want to translate to IGMPv3.
Then you define the SSM map that associates the policy with the source addresses where these group
addresses are found. Finally, you apply the SSM map to one or more IGMP (for IPv4) or MLD (for IPv6)
interfaces.
1. Create an SSM policy named ssm-policy-example. The policy terms match the IPv4 SSM group
address 232.1.1.1/32 and the IPv6 SSM group address ff35::1/128. All other addresses are rejected.
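A sketch of the equivalent set commands (matching the configuration shown in the next step):

[edit]
user@host# set policy-options policy-statement ssm-policy-example term A from route-filter 232.1.1.1/32 exact
user@host# set policy-options policy-statement ssm-policy-example term A then accept
user@host# set policy-options policy-statement ssm-policy-example term B from route-filter ff35::1/128 exact
user@host# set policy-options policy-statement ssm-policy-example term B then accept
user@host# set policy-options policy-statement ssm-policy-example then reject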
2. After the configuration is committed, use the show configuration policy-options command to verify
the policy configuration.
[edit policy-options]
policy-statement ssm-policy-example {
term A {
from {
route-filter 232.1.1.1/32 exact;
}
then accept;
}
term B {
from {
route-filter ff35::1/128 exact;
}
then accept;
}
then reject;
}
The group addresses must match the configured policy for SSM mapping to occur.
3. Define two SSM maps, one called ssm-map-ipv6-example and one called ssm-map-ipv4-example, by
applying the policy and configuring the source addresses as a multicast routing option.
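The set commands for this step can be sketched as follows (they correspond to the configuration
verified in the next step):

[edit routing-options]
user@host# set multicast ssm-map ssm-map-ipv6-example policy ssm-policy-example
user@host# set multicast ssm-map ssm-map-ipv6-example source [ fec0::1 fec0::12 ]
user@host# set multicast ssm-map ssm-map-ipv4-example policy ssm-policy-example
user@host# set multicast ssm-map ssm-map-ipv4-example source [ 10.10.10.4 192.168.43.66 ]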
4. After the configuration is committed, use the show configuration routing-options command to verify
the policy configuration.
[edit routing-options]
multicast {
ssm-map ssm-map-ipv6-example {
policy ssm-policy-example;
source [ fec0::1 fec0::12 ];
}
ssm-map ssm-map-ipv4-example {
policy ssm-policy-example;
source [ 10.10.10.4 192.168.43.66 ];
}
}
5. Apply SSM maps for IPv4-to-IGMP interfaces and SSM maps for IPv6-to-MLD interfaces:
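For example, using the interfaces from the configuration verified in the next step:

[edit protocols]
user@host# set igmp interface fe-0/1/0.0 ssm-map ssm-map-ipv4-example
user@host# set mld interface fe-0/1/1.0 ssm-map ssm-map-ipv6-example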
6. After the configuration is committed, use the show configuration protocol command to verify the
IGMP and MLD protocol configuration.
[edit protocols]
igmp {
interface fe-0/1/0.0 {
ssm-map ssm-map-ipv4-example;
}
}
mld {
interface fe-0/1/1.0 {
ssm-map ssm-map-ipv6-example;
}
}
7. Use the show igmp interface and the show mld interface commands to verify that the SSM maps are
applied to the interfaces.
RELATED DOCUMENTATION
The following example shows how PIM SSM is configured between a receiver and a source in the
network illustrated in Figure 65 on page 452.
This example shows how to configure the IGMP version to IGMPv3 on all receiving host interfaces.
1. Enable IGMPv3 on all host-facing interfaces, and disable IGMP on the fxp0.0 interface on Router 1.
NOTE: When you configure IGMPv3 on a router, hosts on interfaces configured with IGMPv2
cannot join the source tree.
2. After the configuration is committed, use the show configuration protocol igmp command to verify
the IGMP protocol configuration.
3. Use the show igmp interface command to verify that IGMP interfaces are configured.
4. Use the show pim join extensive command to verify the PIM join state on Router 2 and Router 3 (the
upstream routers).
5. Use the show pim join extensive command to verify the PIM join state on Router 1 (the router
connected to the receiver).
Interface: fe-0/2/3.0
10.3.1.1 State: Join Flags: S Timeout: Infinity
NOTE: IP version 6 (IPv6) multicast routers use the Multicast Listener Discovery (MLD) Protocol
to manage the membership of hosts and routers in multicast groups and to learn which groups
have interested listeners for each attached physical network. Each routing device maintains a
list of host multicast addresses that have listeners for each subnetwork, as well as a timer for
each address. However, the routing device does not need to know the address of each listener—
just the address of each host. The routing device provides addresses to the multicast routing
protocol it uses, which ensures that multicast packets are delivered to all subnetworks where
there are interested listeners. In this way, MLD is used as the transport for the Protocol
Independent Multicast (PIM) Protocol. MLD is an integral part of IPv6 and must be enabled on all
IPv6 routing devices and hosts that need to receive IP multicast traffic. The Junos OS supports
MLD versions 1 and 2. Version 2 is supported for source-specific multicast (SSM) include and
exclude modes.
RELATED DOCUMENTATION
Deploying an SSM-only domain is much simpler than deploying an ASM domain because it only requires
a few configuration steps. Enable PIM sparse mode on all interfaces by adding the mode statement at
the [edit protocols pim interface all] hierarchy level. When configuring all interfaces, exclude the fxp0.0
management interface by adding the disable statement for that interface. Then configure IGMPv3 on all
host-facing interfaces by adding the version statement at the [edit protocols igmp interface interface-
name] hierarchy level.
[edit]
protocols {
pim {
interface all {
mode sparse;
version 2;
}
interface fxp0.0 {
disable;
}
}
igmp {
interface fe-0/1/2 {
version 3;
}
}
}
SSM mapping does not require that all hosts support IGMPv3. SSM mapping translates IGMPv1 or
IGMPv2 membership reports to an IGMPv3 report. This enables hosts running IGMPv1 or IGMPv2 to
participate in SSM until the hosts transition to IGMPv3.
SSM mapping applies to all group addresses that match the policy, not just those that conform to SSM
addressing conventions (232/8 for IPv4, ff30::/32 through ff3F::/32 for IPv6).
We recommend separate SSM maps for IPv4 and IPv6 if both address families require SSM support. If
you apply an SSM map containing both IPv4 and IPv6 addresses to an interface in an IPv4 context (using
IGMP), only the IPv4 addresses in the list are used. If there are no such addresses, no action is taken.
Similarly, if you apply an SSM map containing both IPv4 and IPv6 addresses to an interface in an IPv6
context (using MLD), only the IPv6 addresses in the list are used. If there are no such addresses, no
action is taken.
In this example, you create a policy to match the group addresses that you want to translate to IGMPv3.
Then you define the SSM map that associates the policy with the source addresses where these group
addresses are found. Finally, you apply the SSM map to one or more IGMP (for IPv4) or MLD (for IPv6)
interfaces.
1. Create an SSM policy named ssm-policy-example. The policy terms match the IPv4 SSM group
address 232.1.1.1/32 and the IPv6 SSM group address ff35::1/128. All other addresses are rejected.
[edit]
user@router1# set policy-options policy-statement ssm-policy-example term A from route-filter 232.1.1.1/32 exact
user@router1# set policy-options policy-statement ssm-policy-example term A then accept
user@router1# set policy-options policy-statement ssm-policy-example term B from route-filter ff35::1/128 exact
user@router1# set policy-options policy-statement ssm-policy-example term B then accept
user@router1# set policy-options policy-statement ssm-policy-example then reject
2. After the configuration is committed, use the show configuration policy-options command to verify
the policy configuration.
[edit policy-options]
policy-statement ssm-policy-example {
term A {
from {
route-filter 232.1.1.1/32 exact;
}
then accept;
}
term B {
from {
route-filter ff35::1/128 exact;
}
then accept;
}
then reject;
}
The group addresses must match the configured policy for SSM mapping to occur.
3. Define two SSM maps, one called ssm-map-ipv6-example and one called ssm-map-ipv4-example, by
applying the policy and configuring the source addresses as a multicast routing option.
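These SSM maps can be defined with set commands such as the following (a sketch consistent with the committed configuration verified in the next step):

```
[edit]
user@router1# set routing-options multicast ssm-map ssm-map-ipv6-example policy ssm-policy-example
user@router1# set routing-options multicast ssm-map ssm-map-ipv6-example source [ fec0::1 fec0::12 ]
user@router1# set routing-options multicast ssm-map ssm-map-ipv4-example policy ssm-policy-example
user@router1# set routing-options multicast ssm-map ssm-map-ipv4-example source [ 10.10.10.4 192.168.43.66 ]
```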
4. After the configuration is committed, use the show configuration routing-options command to verify
the policy configuration.
[edit routing-options]
multicast {
ssm-map ssm-map-ipv6-example {
policy ssm-policy-example;
source [ fec0::1 fec0::12 ];
}
ssm-map ssm-map-ipv4-example {
policy ssm-policy-example;
source [ 10.10.10.4 192.168.43.66 ];
}
}
5. Apply the IPv4 SSM map to IGMP interfaces and the IPv6 SSM map to MLD interfaces.
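A sketch of the set commands for this step, matching the committed configuration verified in the next step:

```
[edit]
user@router1# set protocols igmp interface fe-0/1/0.0 ssm-map ssm-map-ipv4-example
user@router1# set protocols mld interface fe-0/1/1.0 ssm-map ssm-map-ipv6-example
```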
6. After the configuration is committed, use the show configuration protocols command to verify the
IGMP and MLD protocol configuration.
[edit protocols]
igmp {
interface fe-0/1/0.0 {
ssm-map ssm-map-ipv4-example;
}
}
mld {
interface fe-0/1/1.0 {
ssm-map ssm-map-ipv6-example;
}
}
7. Use the show igmp interface and the show mld interface commands to verify that the SSM maps are
applied to the interfaces.
IN THIS SECTION
Requirements | 459
Overview | 459
Configuration | 461
Verification | 463
This example shows how to extend source-specific multicast (SSM) group operations beyond the default
IP address range of 232.0.0.0 through 232.255.255.255. This example also shows how to accept any-
source multicast (ASM) join messages (*,G) for group addresses that are within the default or configured
range of SSM groups. This allows you to support a mix of any-source and source-specific multicast
groups simultaneously.
Requirements
Before you begin, configure the router interfaces.
Overview
IN THIS SECTION
Topology | 461
To deploy SSM, configure PIM sparse mode on all routing device interfaces and issue the necessary SSM
commands, including specifying IGMPv3 or MLDv2 on the receiver's LAN. If PIM sparse mode is not
explicitly configured on both the source's and the group members' interfaces, multicast packets are not
forwarded. Source lists, supported in IGMPv3 and MLDv2, are used in PIM SSM. Only the specified
sources send traffic to the SSM group.
In a PIM SSM-configured network, a host subscribes to an SSM channel (by means of IGMPv3 or
MLDv2) to join group G and source S (see Figure 66 on page 459). The directly connected PIM sparse-
mode router, the receiver's designated router (DR), sends an (S,G) join message to its reverse-path
forwarding (RPF) neighbor for the source. Notice in Figure 66 on page 459 that the RP is not contacted
in this process by the receiver, as would be the case in normal PIM sparse-mode operations.
The (S,G) join message initiates the source tree and then builds it out hop by hop until it reaches the
source. In Figure 67 on page 460, the source tree is built across the network to Router 3, the last-hop
router connected to the source.
Using the source tree, multicast traffic is delivered to the subscribing host (see Figure 68 on page 460).
Figure 68: (S,G) State Is Built Between the Source and the Receiver
SSM can operate in include mode or in exclude mode. In exclude mode, the receiver specifies a list of
sources from which it does not want to receive the multicast group traffic. The routing device then
forwards the group's traffic to the receiver from any source except the sources specified in the
exclusion list.
Topology
This example works with the simple RPF topology shown in Figure 69 on page 461.
Configuration
IN THIS SECTION
Procedure | 461
Results | 463
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
Step-by-Step Procedure
The following example requires that you navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
1. Configure OSPF.
2. Configure PIM sparse mode on the routing device interfaces.
3. Extend the SSM group address range beyond the default 232.0.0.0/8.
[edit routing-options]
user@host# set ssm-groups [ 232.0.0.0/8 239.0.0.0/8 ]
4. Configure the RP to accept ASM join messages for groups within the SSM address range.
[edit routing-options]
user@host# set multicast asm-override-ssm
user@host# commit
Results
Confirm your configuration by entering the show protocols and show routing-options commands.
Verification
To verify the configuration, run the following commands:
IN THIS SECTION
Requirements | 465
Overview | 465
Configuration | 465
Verification | 468
This example shows how to assign more than one SSM map to an IGMP interface.
Requirements
Overview
In this example, you configure a routing policy, POLICY-ipv4-example1, that maps multicast group join
messages over an IGMP logical interface to IPv4 multicast source addresses based on destination IP
address as follows:
Routing Policy Name    Route Filter (Destination Address)    Multicast Source Addresses
POLICY-ipv4-example1   232.1.1.1                             192.168.43.66
                       232.1.1.2                             192.168.43.67
Configuration
IN THIS SECTION
Procedure | 466
The following example requires that you navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see the Junos OS CLI User Guide.
To quickly configure this example, copy the following configuration commands into a text file, remove
any line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
Procedure
Step-by-Step Procedure
1. Configure protocol-independent routing options for route filter 232.1.1.1, and specify the multicast
source addresses to which matching multicast groups are to be mapped.
2. Configure protocol-independent routing options for route filter 232.1.1.2, and specify the multicast
source addresses to which matching multicast groups are to be mapped.
Results
After the configuration is committed, confirm the configuration by entering the show policy-options and
show protocols configuration mode commands. If the command output does not display the intended
configuration, repeat the instructions in this procedure to correct the configuration.
Verification
Purpose
Verify that the SSM map policy POLICY-ipv4-example1 is applied to logical interface fe-0/1/0.0.
Action
Use the show igmp interface operational mode command for the IGMP logical interface to which you
applied the SSM map policy.
Configured Parameters:
IGMP Query Interval: 125.0
IGMP Query Response Interval: 10.0
IGMP Last Member Query Interval: 1.0
IGMP Robustness Count: 2
Derived Parameters:
IGMP Membership Timeout: 260.0
IGMP Other Querier Present Timeout: 255.0
The command output displays the name of the IGMP logical interface (fe-0/1/0.0) and the address of
the routing device that has been elected to send membership queries and group information.
Purpose
Verify the Protocol Independent Multicast (PIM) source and group pair (S,G) entries.
Action
Use the show pim join extensive 232.1.1.1 operational mode command to display the PIM source and
group pair (S,G) entries for the 232.1.1.1 group.
Purpose
Verify that the IP multicast forwarding table displays the multicast route state.
Action
Use the show multicast route extensive operational mode command to display the entries in the IP
multicast forwarding table to verify that the Route state is active and that the Forwarding state is
forwarding.
CHAPTER 12
Bidirectional PIM (PIM-Bidir) is specified by the IETF in RFC 5015, Bidirectional Protocol Independent
Multicast (BIDIR-PIM). It provides an alternative to other PIM modes, such as PIM sparse mode (PIM-
SM), PIM dense mode (PIM-DM), and PIM source-specific multicast (SSM). In bidirectional PIM,
multicast groups are carried across the network over bidirectional shared trees. This type of tree
minimizes the amount of PIM routing state information that must be maintained, which is especially
important in networks with numerous and dispersed senders and receivers. For example, one important
application for bidirectional PIM is distributed inventory polling. In many-to-many applications, a
multicast query from one station generates multicast responses from many stations. For each multicast
group, such an application generates a large number of (S,G) routes for each station in PIM-SM, PIM-
DM, or SSM. The problem is even worse in applications that use bursty sources, resulting in frequently
changing multicast tables and, therefore, performance problems in routers.
Figure 70 on page 472 shows the traffic flows generated to deliver traffic for one group to and from
three stations in a PIM-SM network.
Bidirectional PIM solves this problem by building only group-specific (*,G) state. Thus, only a single (*,G)
route is needed for each group to deliver traffic to and from all the sources.
Figure 71 on page 473 shows the traffic flows generated to deliver traffic for one group to and from
three stations in a bidirectional PIM network.
Bidirectional PIM builds bidirectional shared trees that are rooted at a rendezvous point (RP) address.
Bidirectional traffic does not switch to shortest path trees (SPTs) as in PIM-SM and is therefore
optimized for routing state size instead of path length. Bidirectional PIM routes are always wildcard-
source (*,G) routes. The protocol eliminates the need for (S,G) routes and data-triggered events. The
bidirectional (*,G) group trees carry traffic both upstream from senders toward the RP, and downstream
from the RP to receivers. As a consequence, the strict reverse path forwarding (RPF)-based rules found
in other PIM modes do not apply to bidirectional PIM. Instead, bidirectional PIM routes forward traffic
from all sources and the RP. Thus, bidirectional PIM routers must have the ability to accept traffic on
many potential incoming interfaces.
To prevent forwarding loops, only one router on each link or subnet (including point-to-point links) is a
designated forwarder (DF). The responsibilities of the DF are to forward downstream traffic onto the
link toward the receivers and to forward upstream traffic from the link toward the RP address.
Bidirectional PIM relies on a process called DF election to choose the DF router for each interface and
for each RP address. Each bidirectional PIM router in a subnet advertises its interior gateway protocol
(IGP) unicast route to the RP address. The router with the best IGP unicast route to the RP address wins
the DF election. Each router advertises its IGP route metrics in DF Offer, Winner, Backoff, and Pass
messages.
Junos OS implements the DF election procedures as stated in RFC 5015, except that Junos OS checks
RP unicast reachability before accepting incoming DF messages. DF messages for unreachable
rendezvous points are ignored.
In the Junos OS implementation, there are two modes for bidirectional PIM: bidirectional-sparse and
bidirectional-sparse-dense. The differences between bidirectional-sparse and bidirectional-sparse-dense
modes are the same as the differences between sparse mode and sparse-dense mode. Sparse-dense
mode allows the interface to operate on a per-group basis in either sparse or dense mode. A group
specified as “dense” is not mapped to an RP. Use bidirectional-sparse-dense mode when you have a mix
of bidirectional groups, sparse groups, and dense groups in your network. One typical scenario for this is
the use of auto-RP, which uses dense-mode flooding to bootstrap itself for sparse mode or bidirectional
mode. In general, the dense groups could be for any flows that the network design requires to be
flooded.
Each group-to-RP mapping is controlled by the RP group-ranges statement and the ssm-groups
statement.
The choice of PIM mode is closely tied to controlling how groups are mapped to PIM modes, as follows:
• bidirectional-sparse—Use if all multicast groups are operating in bidirectional, sparse, or SSM mode.
• bidirectional-sparse-dense—Use if multicast groups, except those that are specified in the dense-
groups statement, are operating in bidirectional, sparse, or SSM mode.
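As a sketch, selecting one of these modes for all PIM interfaces looks like the following (substitute bidirectional-sparse-dense if the network also carries dense groups; the interface scope is illustrative):

```
[edit protocols pim]
interface all {
    mode bidirectional-sparse;
}
```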
You can configure group-range-to-RP mappings network-wide statically, or only on routers connected to
the RP addresses and advertise them dynamically. Unlike rendezvous points for PIM-SM, which must
de-encapsulate PIM Register messages and perform other specific protocol actions, bidirectional PIM
rendezvous points implement no specific functionality. RP addresses are simply locations in the network
to rendezvous toward. In fact, RP addresses need not be loopback interface addresses or even be
addresses configured on any router, as long as they are covered by a subnet that is connected to a
bidirectional PIM-capable router and advertised to the network.
Thus, for bidirectional PIM, there is no meaningful distinction between static and local RP addresses.
Therefore, bidirectional PIM rendezvous points are configured at the [edit protocols pim rp
bidirectional] hierarchy level, not under static or local.
The settings at the [edit protocols pim rp bidirectional] hierarchy level function like the settings at the
[edit protocols pim rp local] hierarchy level, except that they create bidirectional PIM RP state instead of
PIM-SM RP state.
Where only a single local RP can be configured, multiple bidirectional rendezvous points can be
configured having group ranges that are the same, different, or overlapping. It is also permissible for a
group range or RP address to be configured as bidirectional and either static or local for sparse-mode.
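A minimal sketch of a bidirectional RP definition with explicit group ranges (the address and ranges are illustrative, borrowed from the configuration example later in this chapter):

```
[edit protocols pim rp]
bidirectional {
    address 10.10.1.3 {
        group-ranges {
            224.1.3.0/24;
            225.1.3.0/24;
        }
    }
}
```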
If a bidirectional PIM RP is configured without a group range, the default group range is 224/4 for IPv4.
For IPv6, the default is ff00::/8. You can configure a bidirectional PIM RP group range to cover an SSM
group range, but in that case the SSM or DM group range takes precedence over the bidirectional PIM
RP configuration for those groups. In other words, because SSM always takes precedence, it is not
permitted to have a bidirectional group range equal to or more specific than an SSM or DM group range.
Group ranges for the specified RP address are flagged by PIM as bidirectional PIM group-to-RP
mappings and, if configured, are advertised using PIM bootstrap or auto-RP. Dynamic advertisement of
bidirectional PIM-flagged group-to-RP mappings using PIM bootstrap and auto-RP is controlled as
normal using the bootstrap and auto-rp statements.
Bidirectional PIM RP addresses configured at the [edit protocols pim rp bidirectional address] hierarchy
level are advertised by auto-RP or PIM bootstrap if the following prerequisites are met:
• The routing instance must be configured to advertise candidate rendezvous points by way of auto-RP
or PIM bootstrap, and an auto-RP mapping agent or bootstrap router, respectively, must be elected.
• The RP address must either be configured locally on an interface in the routing instance, or the RP
address must belong to a subnet connected to an interface in the routing instance.
Internet Group Management Protocol (IGMP) version 1, version 2, and version 3 are supported with
bidirectional PIM. Multicast Listener Discovery (MLD) version 1 and version 2 are supported with
bidirectional PIM. However, in all cases, only any-source multicast (ASM) state is supported for
bidirectional PIM membership.
• IGMP and MLD (*,G) membership reports trigger the PIM DF to originate bidirectional PIM (*,G) join
messages.
• IGMP and MLD (S,G) membership reports do not trigger the PIM DF to originate bidirectional PIM
(*,G) join messages.
Bidirectional PIM accepts packets for a bidirectional route on multiple interfaces. This means that some
topologies might develop multicast routing loops if all PIM neighbors are not synchronized with regard
to the identity of the designated forwarder (DF) on each link. If one router is forwarding without actively
participating in DF elections, particularly after unicast routing changes, multicast routing loops might
occur.
If graceful restart for PIM is enabled and bidirectional PIM is enabled, the default graceful restart
behavior is to continue forwarding packets on bidirectional routes. If the gracefully restarting router was
serving as a DF for some interfaces to rendezvous points, the restarting router sends a DF Winner
message with a metric of 0 on each of these RP interfaces. This ensures that a neighbor router does not
become the DF due to unicast topology changes that might occur during the graceful restart period.
Sending a DF Winner message with a metric of 0 prevents another PIM neighbor from assuming the DF
role until after graceful restart completes. When graceful restart completes, the gracefully restarted
router sends another DF Winner message with the actual converged unicast metric.
The no-bidirectional-mode statement at the [edit protocols pim graceful-restart] hierarchy level
overrides the default behavior and disables forwarding for bidirectional PIM routes during graceful
restart recovery, both in cases of simple routing protocol process (rpd) restart and graceful Routing
Engine switchover. This configuration statement provides a very conservative alternative to the default
graceful restart behavior for bidirectional PIM routes. The reason to discontinue forwarding of packets
on bidirectional routes is that the continuation of forwarding might lead to short-duration multicast
loops in rare double-failure circumstances.
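As a sketch, this conservative behavior is enabled at the hierarchy level named above:

```
[edit protocols pim]
graceful-restart {
    no-bidirectional-mode;
}
```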
In addition to the functionality specified in RFC 5015, the following functions are included in the Junos
OS implementation of bidirectional PIM:
• Support for both IPv4 and IPv6 domain and multicast addresses
The following caveats are applicable for the bidirectional PIM configuration on the PTX5000:
• PTX5000 routers can be configured both as a bidirectional PIM rendezvous point and the source
node.
• For PTX5000 routers, you can configure the auto-rp statement at the [edit protocols pim rp] or the
[edit routing-instances routing-instance-name protocols pim rp] hierarchy level with the mapping
option, but not the announce option.
The Junos OS implementation of bidirectional PIM does not support the following functionality:
• Embedded RP
• Anycast RP
• Forwarding on bidirectional routes during a graceful Routing Engine switchover. Graceful Routing
Engine switchover is configurable with bidirectional PIM enabled, but bidirectional routes do not
forward packets during the switchover.
Starting with Release 12.2, Junos OS extends the nonstop active routing PIM support to draft-rosen
MVPNs. Nonstop active routing PIM support for draft-rosen MVPNs enables devices that have nonstop
active routing enabled to preserve draft-rosen MVPN-related information, such as default and data
MDT states, across switchovers. PTX5000 routers do not support nonstop active routing or in-service
software upgrade (ISSU) in Junos OS Release 13.3.
IN THIS SECTION
Requirements | 478
Overview | 478
Configuration | 482
Verification | 489
This example shows how to configure bidirectional PIM, as specified in RFC 5015, Bidirectional Protocol
Independent Multicast (BIDIR-PIM).
Requirements
• Eight Juniper Networks routers that can be M120, M320, MX Series, or T Series platforms. To
support bidirectional PIM, M Series platforms must have I-chip FPCs. M7i, M10i, M40e, and other
older M Series routers do not support bidirectional PIM.
Overview
Compared to PIM sparse mode, bidirectional PIM requires less PIM router state information. Because
less state information is required, bidirectional PIM scales well and is useful in deployments with many
dispersed sources and receivers.
In this example, two rendezvous points are configured statically. One RP is configured as a phantom RP.
A phantom RP is an RP address that is a valid address on a subnet, but is not assigned to a PIM router
interface. The subnet must be reachable by the bidirectional PIM routers in the network. For the other
(non-phantom) RP in this example, the RP address is assigned to a PIM router interface. It can be
assigned to either the loopback interface or any physical interface on the router. In this example, it is
assigned to a physical interface.
OSPF is used as the interior gateway protocol (IGP) in this example. The OSPF metric determines the
designated forwarder (DF) election process. In bidirectional PIM, the DF establishes a loop-free
shortest-path tree that is rooted at the RP. On every network segment and point-to-point link, all PIM
routers participate in DF election. The procedure selects one router as the DF for every RP of
bidirectional groups. This router forwards multicast packets received on that network upstream to the
RP. The DF election uses the same tie-break rules used by PIM assert processes.
This example uses the default DF election parameters. Optionally, at the [edit protocols pim interface
(interface-name | all) bidirectional] hierarchy level, you can configure the following parameters related to
the DF election:
• The robustness-count is the minimum number of DF election messages that must be lost for election
to fail.
• The offer period is the interval to wait between repeated DF Offer and Winner messages.
• The backoff period is the period that the acting DF waits between receiving a better DF Offer and
sending the Pass message to transfer DF responsibility.
This example uses bidirectional-sparse-dense mode on the interfaces. The choice of PIM mode is closely
tied to controlling how groups are mapped to PIM modes, as follows:
• bidirectional-sparse—Use if all multicast groups are operating in bidirectional, sparse, or SSM mode.
• bidirectional-sparse-dense—Use if multicast groups, except those that are specified in the dense-
groups statement, are operating in bidirectional, sparse, or SSM mode.
Topology Diagram
Configuration
IN THIS SECTION
Router R1 | 486
Results | 487
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
Router R1
Router R2
Router R3
Router R4
Router R5
Router R6
Router R7
Router R8
Router R1
Step-by-Step Procedure
[edit interfaces]
user@R1# set ge-0/0/1 unit 0 family inet address 10.10.1.1/24
user@R1# set xe-2/1/0 unit 0 family inet address 10.10.2.1/24
user@R1# set lo0 unit 0 family inet address 10.255.11.11/32
The RP represented by IP address 10.10.1.3 is a phantom RP. The 10.10.1.3 address is not assigned
to any interface on any of the routers in the topology. It is, however, a reachable address. It is in the
subnet between Routers R1 and R2.
The RP represented by address 10.10.13.2 is assigned to the ge-2/0/0 interface on Router R6.
Results
From configuration mode, confirm your configuration by entering the show interfaces and show
protocols commands. If the output does not display the intended configuration, repeat the instructions
in this example to correct the configuration.
If you are done configuring the router, enter commit from configuration mode.
Repeat the procedure for every Juniper Networks router in the bidirectional PIM network, using the
appropriate interface names and addresses for each router.
Verification
Purpose
Action
Verifying Messages
Purpose
Check the number of DF election messages sent and received, and check bidirectional join and prune
error statistics.
Action
Global Statistics
...
Rx Bidir Join/Prune on non-Bidir if 0
Rx Bidir Join/Prune on non-DF if 0
Purpose
Action
Group: 224.1.1.0
Bidirectional group prefix length: 24
Source: *
RP: 10.10.13.2
Flags: bidirectional,rptree,wildcard
Upstream interface: ge-0/0/1.0
Upstream neighbor: 10.10.1.2
Upstream state: None
Bidirectional accepting interfaces:
Interface: ge-0/0/1.0 (RPF)
Interface: lo0.0 (DF Winner)
Group: 224.1.3.0
Bidirectional group prefix length: 24
Source: *
RP: 10.10.1.3
Flags: bidirectional,rptree,wildcard
Upstream interface: ge-0/0/1.0 (RP Link)
Upstream neighbor: Direct
Upstream state: Local RP
Bidirectional accepting interfaces:
Interface: ge-0/0/1.0 (RPF)
Interface: lo0.0 (DF Winner)
Interface: xe-2/1/0.0 (DF Winner)
Group: 225.1.1.0
Bidirectional group prefix length: 24
Source: *
RP: 10.10.13.2
Flags: bidirectional,rptree,wildcard
Upstream interface: ge-0/0/1.0
Upstream neighbor: 10.10.1.2
Upstream state: None
Bidirectional accepting interfaces:
Interface: ge-0/0/1.0 (RPF)
Interface: lo0.0 (DF Winner)
Group: 225.1.3.0
Bidirectional group prefix length: 24
Source: *
RP: 10.10.1.3
Flags: bidirectional,rptree,wildcard
Upstream interface: ge-0/0/1.0 (RP Link)
Meaning
The output shows a (*,G-range) entry for each active bidirectional RP group range. These entries provide
a hierarchy from which the individual (*,G) routes inherit RP-derived state (upstream information and
accepting interfaces). These entries also provide the control plane basis for the (*, G-range) forwarding
routes that implement the sender-only branches of the tree.
Purpose
Action
RPA: 10.10.1.3
Group ranges: 224.1.3.0/24, 225.1.3.0/24
Interfaces:
ge-0/0/1.0 (RPL) DF: none
lo0.0 (Win) DF: 10.255.179.246
xe-2/1/0.0 (Win) DF: 10.10.2.1
RPA: 10.10.13.2
Group ranges: 224.1.1.0/24, 225.1.1.0/24
Interfaces:
ge-0/0/1.0 (Lose) DF: 10.10.1.2
lo0.0 (Win) DF: 10.255.179.246
xe-2/1/0.0 (Lose) DF: 10.10.2.2
Purpose
Verify that the PIM interfaces have bidirectional-sparse-dense (SDB) mode assigned.
Action
Purpose
Check that the router detects that its neighbors are enabled for bidirectional PIM by verifying that the B
option is displayed.
Action
Purpose
Action
Purpose
For bidirectional PIM, the show multicast route extensive command shows the (*, G/prefix) forwarding
routes and the list of interfaces that accept bidirectional PIM traffic.
Action
Group: 224.0.0.0/4
Source: *
Incoming interface list:
lo0.0 ge-0/0/1.0 xe-4/1/0.0
Downstream interface list:
ge-0/0/1.0
Session description: zeroconfaddr
Statistics: 0 kBps, 0 pps, 0 packets
Next-hop ID: 2097157
Incoming interface list ID: 559
Upstream protocol: PIM
Route state: Active
Forwarding state: Forwarding
Cache lifetime/timeout: forever
Wrong incoming interface notifications: 0
Group: 224.1.1.0/24
Source: *
Incoming interface list:
lo0.0 ge-0/0/1.0
Downstream interface list:
ge-0/0/1.0
Session description: NOB Cross media facilities
Statistics: 0 kBps, 0 pps, 0 packets
Next-hop ID: 2097157
Incoming interface list ID: 579
Upstream protocol: PIM
Route state: Active
Forwarding state: Forwarding
Cache lifetime/timeout: forever
Wrong incoming interface notifications: 0
Group: 224.1.3.0/24
Source: *
Incoming interface list:
lo0.0 ge-0/0/1.0 xe-4/1/0.0
Downstream interface list:
ge-0/0/1.0
Session description: NOB Cross media facilities
Statistics: 0 kBps, 0 pps, 0 packets
Next-hop ID: 2097157
Incoming interface list ID: 556
Group: 225.1.1.0/24
Source: *
Incoming interface list:
lo0.0 ge-0/0/1.0
Downstream interface list:
ge-0/0/1.0
Session description: Unknown
Statistics: 0 kBps, 0 pps, 0 packets
Next-hop ID: 2097157
Incoming interface list ID: 579
Upstream protocol: PIM
Route state: Active
Forwarding state: Forwarding
Cache lifetime/timeout: forever
Wrong incoming interface notifications: 0
Group: 225.1.3.0/24
Source: *
Incoming interface list:
lo0.0 ge-0/0/1.0 xe-4/1/0.0
Downstream interface list:
ge-0/0/1.0
Session description: Unknown
Statistics: 0 kBps, 0 pps, 0 packets
Next-hop ID: 2097157
Incoming interface list ID: 556
Upstream protocol: PIM
Route state: Active
Forwarding state: Forwarding
Cache lifetime/timeout: forever
Wrong incoming interface notifications: 0
Meaning
For information about how the incoming and outgoing interface lists are derived, see the forwarding
rules in RFC 5015.
Purpose
Verify that the correct accepting interfaces are shown in the incoming interface list.
Action
Meaning
The nexthop IDs for the outgoing and incoming next hops are referenced directly in the show multicast
route extensive command.
Release Description
13.3 PTX5000 routers do not support nonstop active routing or in-service software upgrade (ISSU) in Junos
OS Release 13.3.
12.2 Starting with Release 12.2, Junos OS extends the nonstop active routing PIM support to draft-rosen
MVPNs.
CHAPTER 13
IN THIS CHAPTER
Configuring PIM and the Bidirectional Forwarding Detection (BFD) Protocol | 499
Bidirectional Forwarding Detection (BFD) enables rapid detection of communication failures between
adjacent systems. By default, authentication for BFD sessions is disabled. However, when you run BFD
over Network Layer protocols, the risk of service attacks can be significant. We strongly recommend
using authentication if you are running BFD over multiple hops or through insecure tunnels.
Beginning with Junos OS Release 9.6, Junos OS supports authentication for BFD sessions running over
PIM. BFD authentication is only supported in the Canada and United States version of the Junos OS
image and is not available in the export version.
You authenticate BFD sessions by specifying an authentication algorithm and keychain, and then
associating that configuration information with a security authentication keychain using the keychain
name.
The following sections describe the supported authentication algorithms, security keychains, and level
of authentication that can be configured:
• simple-password—Plain-text password. One to 16 bytes of plain text are used to authenticate the
BFD session. One or more passwords can be configured. This method is the least secure and should
be used only when BFD sessions are not subject to packet interception.
• keyed-md5—Keyed Message Digest 5 hash algorithm for sessions with transmit and receive intervals
greater than 100 ms. To authenticate the BFD session, keyed MD5 uses one or more secret keys
(generated by the algorithm) and a sequence number that is updated periodically. With this method,
packets are accepted at the receiving end of the session if one of the keys matches and the sequence
number is greater than or equal to the last sequence number received. Although more secure than a
simple password, this method is vulnerable to replay attacks. Increasing the rate at which the
sequence number is updated can reduce this risk.
• keyed-sha-1—Keyed Secure Hash Algorithm I for sessions with transmit and receive intervals greater
than 100 ms. To authenticate the BFD session, keyed SHA uses one or more secret keys (generated
by the algorithm) and a sequence number that is updated periodically. The key is not carried within
the packets. With this method, packets are accepted at the receiving end of the session if one of the
keys matches and the sequence number is greater than the last sequence number received.
• meticulous-keyed-md5—Meticulous keyed Message Digest 5 hash algorithm. This method works in the
same manner as keyed MD5, but the sequence number is updated with every packet. Although more
secure than keyed MD5 and simple passwords, this method might take additional time to
authenticate the session.
• meticulous-keyed-sha-1—Meticulous keyed Secure Hash Algorithm I. This method works in the same
manner as keyed SHA, but the sequence number is updated with every packet. Although more
secure than keyed SHA and simple passwords, this method might take additional time to
authenticate the session.
NOTE: Nonstop active routing (NSR) is not supported with meticulous-keyed-md5 and
meticulous-keyed-sha-1 authentication algorithms. BFD sessions using these algorithms might
go down after a switchover.
The security authentication keychain defines the authentication attributes used for authentication key
updates. When the security authentication keychain is configured and associated with a protocol
through the keychain name, authentication key updates can occur without interrupting routing and
signaling protocols.
The authentication keychain contains one or more keychains. Each keychain contains one or more keys.
Each key holds the secret data and the time at which the key becomes valid. The algorithm and keychain
must be configured on both ends of the BFD session, and they must match. Any mismatch in
configuration prevents the BFD session from being created.
BFD allows multiple clients per session, and each client can have its own keychain and algorithm
defined. To avoid confusion, we recommend specifying only one security authentication keychain.
By default, strict authentication is enabled, and authentication is checked at both ends of each BFD
session. Optionally, to smooth migration from nonauthenticated sessions to authenticated sessions, you
can configure loose checking. When loose checking is configured, packets are accepted without
authentication being checked at each end of the session. This feature is intended for transitional periods
only.
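Loose checking is enabled with the loose-check statement under the BFD authentication hierarchy. The following is a minimal sketch; the interface name is illustrative:
[edit protocols pim]
user@host# set interface ge-0/1/5.0 family inet bfd-liveness-detection authentication loose-check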
SEE ALSO
The BFD failure detection timers are adaptive and can be adjusted to be faster or slower. The lower the
BFD failure detection timer value, the faster the failure detection and vice versa. For example, the
timers can adapt to a higher value if the adjacency fails (that is, the timer detects failures more slowly).
Or a neighbor can negotiate a higher value for a timer than the configured value. The timers adapt to a
higher value when a BFD session flap occurs more than three times in a span of 15 seconds. A back-off
algorithm increases the receive (Rx) interval by two if the local BFD instance is the reason for the session
flap. The transmission (Tx) interval is increased by two if the remote BFD instance is the reason for the
session flap. You can use the clear bfd adaptation command to return BFD interval timers to their
configured values. The clear bfd adaptation command is hitless, meaning that the command does not
affect traffic flow on the routing device.
You must specify the minimum transmit and minimum receive intervals to enable BFD on PIM.
3. Configure the minimum interval after which the routing device expects to receive a reply from a
neighbor with which it has established a BFD session.
Specifying an interval smaller than 300 ms can cause undesired BFD flapping.
5. Configure the threshold for the adaptation of the BFD session detection time.
When the detection time adapts to a value equal to or greater than the threshold, a single trap and a
single system log message are sent.
6. Configure the number of hello packets not received by a neighbor that causes the originating
interface to be declared down.
8. Specify that BFD sessions should not adapt to changing network conditions.
We recommend that you not disable BFD adaptation unless your network specifically requires BFD
sessions to run with fixed, nonadaptive timer values.
9. Verify the configuration by checking the output of the show bfd session command.
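Taken together, the steps above correspond to statements like the following under the PIM interface hierarchy. This is a sketch only; the interface name and timer values are illustrative:
[edit protocols pim]
user@host# set interface ge-0/1/5.0 family inet bfd-liveness-detection minimum-interval 300
user@host# set interface ge-0/1/5.0 family inet bfd-liveness-detection minimum-receive-interval 300
user@host# set interface ge-0/1/5.0 family inet bfd-liveness-detection detection-time threshold 800
user@host# set interface ge-0/1/5.0 family inet bfd-liveness-detection multiplier 3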
SEE ALSO
IN THIS SECTION
Beginning with Junos OS Release 9.6, you can configure authentication for Bidirectional Forwarding
Detection (BFD) sessions running over Protocol Independent Multicast (PIM). Routing instances are also
supported.
The following sections provide instructions for configuring and viewing BFD authentication on PIM:
BFD authentication is only supported in the Canada and United States version of the Junos OS image
and is not available in the export version.
NOTE: Nonstop active routing (NSR) is not supported with the meticulous-keyed-md5 and
meticulous-keyed-sha-1 authentication algorithms. BFD sessions using these algorithms
might go down after a switchover.
2. Specify the keychain to be used to associate BFD sessions on the specified PIM route or routing
instance with the unique security authentication keychain attributes.
The keychain you specify must match the keychain name configured at the [edit security
authentication-key-chains] hierarchy level.
NOTE: The algorithm and keychain must be configured on both ends of the BFD session, and
they must match. Any mismatch in configuration prevents the BFD session from being
created.
• At least one key, a unique integer between 0 and 63. Creating multiple keys allows multiple clients
to use the BFD session.
• The time at which the authentication key becomes active, in the format yyyy-mm-dd.hh:mm:ss.
[edit security]
user@host# set authentication-key-chains key-chain bfd-pim key 53 secret $ABC123$/ start-time 2009-06-14.10:00:00
4. (Optional) Specify loose authentication checking if you are transitioning from nonauthenticated
sessions to authenticated sessions.
5. (Optional) View your configuration by using the show bfd session detail or show bfd session
extensive command.
6. Repeat these steps to configure the other end of the BFD session.
You can view the existing BFD authentication configuration by using the show bfd session detail and
show bfd session extensive commands.
The following example shows BFD authentication configured for the ge-0/1/5 interface. It specifies the
keyed SHA-1 authentication algorithm and a keychain name of bfd-pim. The authentication keychain is
configured with two keys. Key 1 contains the secret data “$ABC123/” and a start time of June 1, 2009,
at 9:46:02 AM PST. Key 2 contains the secret data “$ABC123/” and a start time of June 1, 2009, at
3:29:20 PM PST.
}
}
}
If you commit these updates to your configuration, you see output similar to the following example. In
the output for the show bfd session detail command, Authenticate is displayed to indicate that BFD
authentication is configured. For more information about the configuration, use the show bfd session
extensive command. The output for this command provides the keychain name, the authentication
algorithm and mode for each client in the session, and the overall BFD authentication configuration
status, keychain name, and authentication algorithm and mode.
Detect Transmit
Address State Interface Time Interval Multiplier
192.0.2.2 Up ge-0/1/5.0 0.900 0.300 3
Client PIM, TX interval 0.300, RX interval 0.300, Authenticate
Session up time 3d 00:34
Local diagnostic None, remote diagnostic NbrSignal
Remote state Up, version 1
Replicated
RELATED DOCUMENTATION
IN THIS SECTION
Requirements | 509
Overview | 509
Configuration | 510
Verification | 515
This example shows how to configure Bidirectional Forwarding Detection (BFD) liveness detection for
IPv6 interfaces configured for the Protocol Independent Multicast (PIM) topology. BFD is a simple hello
mechanism that detects failures in a network.
4. Configure PIM, associating the authentication keychain with the desired protocol.
NOTE: You must perform these steps on both ends of the BFD session.
Requirements
Overview
IN THIS SECTION
Topology | 509
In this example, Device R1 and Device R2 are peers. Each router runs PIM, and the two routers are
connected over a common medium.
Topology
Assume that the routers initialize. No BFD session is yet established. For each router, PIM informs the
BFD process to monitor the IPv6 address of the neighbor that is configured in the routing protocol.
Addresses are not learned dynamically and must be configured.
Configure the IPv6 address and BFD liveness detection at the [edit protocols pim] hierarchy level for
each router.
Configure BFD liveness detection for the routing instance at the [edit routing-instances instance-name
protocols pim interface all family inet6] hierarchy level (here, the instance-name is instance1):
You will also configure the authentication algorithm and authentication keychain values for BFD.
In a BFD-configured network, when a client launches a BFD session with a peer, BFD begins sending
slow, periodic BFD control packets that contain the interval values that you specified when you
configured the BFD peers. This is known as the initialization state. BFD does not generate any up or
down notifications in this state. When another BFD interface acknowledges the BFD control packets,
the session moves into an up state and begins to more rapidly send periodic control packets. If a data
path failure occurs and BFD does not receive a control packet within the configured amount of time, the
data path is declared down and BFD notifies the BFD client. The BFD client can then perform the
necessary actions to reroute traffic. This process can be different for different BFD clients.
Configuration
IN THIS SECTION
Procedure | 512
Results | 513
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, and then copy and paste
the commands into the CLI at the [edit] hierarchy level.
Device R1
Device R2
Procedure
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
To configure BFD liveness detection for PIM IPv6 interfaces on Device R1:
NOTE: This procedure is for Device R1. Repeat this procedure for Device R2, after modifying the
appropriate interface names, addresses, and any other parameters.
1. Configure the interface, using the inet6 statement to specify that this is an IPv6 address.
[edit interfaces]
user@R1# set ge-0/1/5 unit 0 description toRouter2
user@R1# set ge-0/1/5 unit 0 family inet6 address fe80::21b:c0ff:fed5:e4dd
2. Specify the BFD authentication algorithm and keychain for the PIM protocol.
The keychain is used to associate BFD sessions on the specified PIM route or routing instance with
the unique security authentication keychain attributes. This keychain name should match the
keychain name configured at the [edit security authentication-key-chains] hierarchy level.
[edit protocols]
user@R1# set pim interface ge-0/1/5.0 family inet6 bfd-liveness-detection authentication algorithm keyed-sha-1
user@R1# set pim interface ge-0/1/5.0 family inet6 bfd-liveness-detection authentication key-chain bfd-pim
NOTE: The algorithm and keychain must be configured on both ends of the BFD session, and
they must match. Any mismatch in configuration prevents the BFD session from being
created.
3. Configure a routing instance (here, instance1), specifying BFD authentication and associating the
security authentication algorithm and keychain.
[edit routing-instances]
user@R1# set instance1 protocols pim interface all family inet6 bfd-liveness-detection authentication algorithm keyed-sha-1
user@R1# set instance1 protocols pim interface all family inet6 bfd-liveness-detection authentication key-chain bfd-pim
• At least one key, a unique integer between 0 and 63. Creating multiple keys allows multiple clients
to use the BFD session.
• The time at which the authentication key becomes active, in the format YYYY-MM-DD.hh:mm:ss.
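A matching keychain for this configuration might be defined under [edit security] as follows. This is a sketch; the key numbers, secret data, and start times mirror the verification example later in this section and should be treated as placeholders:
[edit security]
user@R1# set authentication-key-chains key-chain bfd-pim key 1 secret "$ABC123/" start-time 2009-06-01.09:46:02
user@R1# set authentication-key-chains key-chain bfd-pim key 2 secret "$ABC123/" start-time 2009-06-01.15:29:20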
Results
Confirm your configuration by issuing the show interfaces, show protocols, show routing-instances, and
show security commands. If the output does not display the intended configuration, repeat the
instructions in this example to correct the configuration.
}
}
Verification
IN THIS SECTION
Purpose
Action
Instance: PIM.master
Interface: ge-0/1/5.0
Meaning
The display from the show pim neighbors detail command shows BFD: Enabled, Operational state: Up,
indicating that BFD is operating between the two PIM neighbors. For additional information about the
BFD session (including the session ID number), use the show bfd session extensive command.
SEE ALSO
authentication-key-chains
bfd-liveness-detection (Protocols PIM) | 1399
show bfd session
9.6 Beginning with Junos OS Release 9.6, Junos OS supports authentication for BFD sessions running over
PIM. BFD authentication is only supported in the Canada and United States version of the Junos OS
image and is not available in the export version.
9.6 Beginning with Junos OS Release 9.6, you can configure authentication for Bidirectional Forwarding
Detection (BFD) sessions running over Protocol Independent Multicast (PIM). Routing instances are also
supported.
RELATED DOCUMENTATION
CHAPTER 14
IN THIS CHAPTER
IN THIS SECTION
• Neighbor relationships
• RP-set information
• Synchronization between routes and next hops and the forwarding state between the two Routing
Engines
The PIM control state is maintained on the backup Routing Engine by the replication of state
information from the primary to the backup Routing Engine and having the backup Routing Engine react
to route installation and modification in the [instance].inet.1 routing table on the primary Routing
Engine. The backup Routing Engine does not send or receive PIM protocol packets directly. In addition,
the backup Routing Engine uses the dynamic interfaces created by the primary Routing Engine. These
dynamic interfaces include PIM encapsulation, de-encapsulation, and multicast tunnel interfaces.
NOTE: The clear pim join, clear pim register, and clear pim statistics operational mode commands
are not supported on the backup Routing Engine when nonstop active routing is enabled.
To enable nonstop active routing for PIM (in addition to the PIM configuration on the primary Routing
Engine), you must include the following statements at the [edit] hierarchy level:
• routing-options nonstop-routing
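In set form, this is a sketch of the required statement; the commit synchronize statement is also commonly configured so that changes are synchronized to the backup Routing Engine (shown here as an assumption; see the example later in this chapter):
[edit]
user@host# set routing-options nonstop-routing
user@host# set system commit synchronize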
SEE ALSO
IN THIS SECTION
Requirements | 519
Overview | 519
Configuration | 521
Verification | 534
This example shows how to configure nonstop active routing for PIM-based multicast IPv4 and IPv6
traffic.
Requirements
For nonstop active routing for PIM-based multicast traffic to work with IPv6, the routing device must be
running Junos OS Release 10.4 or above.
• Configure the router interfaces. See the Network Interfaces Configuration Guide.
• Configure an interior gateway protocol or static routing. See the Routing Protocols Configuration
Guide.
• Configure a multicast group membership protocol (IGMP or MLD). See Understanding IGMP and
Understanding MLD.
Overview
IN THIS SECTION
Topology | 521
• Dense mode
• Sparse mode
• SSM
• Static RP
• Bootstrap router
• BFD support
• Draft Rosen Multicast VPNs and BGP Multicast VPNs (use the advertise-from-main-vpn-tables
option at the [edit protocols bgp] hierarchy level, to synchronize MVPN routes, cmcast, provider-
tunnel and forwarding information between the primary and the backup Routing Engines).
• Policy features such as neighbor policy, bootstrap router export and import policies, scope policy,
flow maps, and reverse path forwarding (RPF) check policies
In Junos OS Release 13.3, multicast VPNs are not supported with nonstop active routing. Policy-based
features (such as neighbor policy, join policy, BSR policy, scope policy, flow maps, and RPF check policy)
are not supported with nonstop active routing.
This example uses static RP. The interfaces are configured to receive both IPv4 and IPv6 traffic. R2
provides RP services as the local RP. Note that nonstop active routing is not supported on the RP router.
The configuration shown in this example is on R1.
Topology
Configuration
IN THIS SECTION
Procedure | 524
Results | 529
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
R1
Procedure
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
[edit]
user@host# edit system
[edit system]
user@host# set commit synchronize
user@host# exit
[edit]
user@host# set chassis redundancy graceful-switchover
[edit]
user@host# edit interfaces
[edit interfaces]
user@host# set so-0/0/1 unit 0 description "to R0 so-0/0/1.0"
user@host# set so-0/0/1 unit 0 family inet address 10.210.1.2/30
user@host# set so-0/0/1 unit 0 family inet6 address FDCA:9E34:50CE:0001::2/126
user@host# set fe-0/1/3 unit 0 description "to R2 fe-0/1/3.0"
user@host# set fe-0/1/3 unit 0 family inet address 10.210.12.1/30
user@host# set fe-0/1/3 unit 0 family inet6 address FDCA:9E34:50CE:0012::1/126
user@host# set fe-1/1/0 unit 0 description "to H1"
user@host# set fe-1/1/0 unit 0 family inet address 10.240.0.250/30
user@host# set fe-1/1/0 unit 0 family inet6 address ::10.240.0.250/126
user@host# set lo0 unit 0 description "R1 Loopback"
user@host# set lo0 unit 0 family inet address 10.210.255.201/32 primary
user@host# set lo0 unit 0 family iso address 47.0005.80ff.f800.0000.0108.0001.0102.1025.5201.00
user@host# set lo0 unit 0 family inet6 address abcd::10:210:255:201/128
user@host# exit
[edit]
user@host# edit protocols ospf
[edit protocols ospf]
[edit]
user@host# edit protocols ospf3
[edit protocols ospf3]
user@host# set area 0.0.0.0 interface fe-1/1/0.0 passive
user@host# set area 0.0.0.0 interface fe-1/1/0.0 metric 1
user@host# set area 0.0.0.0 interface lo0.0 passive
user@host# set area 0.0.0.0 interface so-0/0/1.0 metric 1
user@host# set area 0.0.0.0 interface fe-0/1/3.0 metric 1
6. Configure PIM on R1. The PIM static address points to the RP router (R2).
[edit]
user@host# edit protocols pim
[edit protocols pim]
user@host# set rp static address 10.210.255.202
user@host# set rp static address abcd::10:210:255:202
user@host# set interface lo0.0
user@host# set interface fe-0/1/3.0 mode sparse
user@host# set interface fe-0/1/3.0 version 2
user@host# set interface so-0/0/1.0 mode sparse
user@host# set interface so-0/0/1.0 version 2
user@host# set interface fe-1/1/0.0 mode sparse
user@host# set interface fe-1/1/0.0 version 2
[edit]
user@host# edit policy-options policy-statement load-balance
[edit]
user@host# set routing-options forwarding-table export load-balance
[edit]
user@host# set routing-options nonstop-routing
user@host# set routing-options router-id 10.210.255.201
Step-by-Step Procedure
[edit]
user@host# set system syslog archive size 10m
user@host# set system syslog file messages any info
[edit]
user@host# set interfaces traceoptions file dcd-trace
user@host# set interfaces traceoptions file size 10m
user@host# set interfaces traceoptions file files 10
user@host# set interfaces traceoptions flag all
[edit]
user@host# set protocols ospf traceoptions file r1-nsr-ospf2
user@host# set protocols ospf traceoptions file size 10m
[edit]
user@host# set protocols ospf3 traceoptions file r1-nsr-ospf3
user@host# set protocols ospf3 traceoptions file size 10m
user@host# set protocols ospf3 traceoptions file world-readable
user@host# set protocols ospf3 traceoptions flag lsa-update detail
user@host# set protocols ospf3 traceoptions flag flooding detail
user@host# set protocols ospf3 traceoptions flag lsa-request detail
user@host# set protocols ospf3 traceoptions flag state detail
user@host# set protocols ospf3 traceoptions flag event detail
user@host# set protocols ospf3 traceoptions flag hello detail
user@host# set protocols ospf3 traceoptions flag nsr-synchronization detail
[edit]
user@host# set protocols pim traceoptions file r1-nsr-pim
user@host# set protocols pim traceoptions file size 10m
user@host# set protocols pim traceoptions file files 10
user@host# set protocols pim traceoptions file world-readable
user@host# set protocols pim traceoptions flag mdt detail
user@host# set protocols pim traceoptions flag rp detail
user@host# set protocols pim traceoptions flag register detail
user@host# set protocols pim traceoptions flag packets detail
user@host# set protocols pim traceoptions flag autorp detail
user@host# set protocols pim traceoptions flag join detail
user@host# set protocols pim traceoptions flag hello detail
[edit]
user@host# set routing-options traceoptions file r1-nsr-sync
user@host# set routing-options traceoptions file size 10m
user@host# set routing-options traceoptions flag nsr-synchronization
user@host# set routing-options traceoptions flag commit-synchronize
[edit]
user@host# set routing-options forwarding-table traceoptions file r1-nsr-krt
user@host# set routing-options forwarding-table traceoptions file size 10m
user@host# set routing-options forwarding-table traceoptions file world-readable
user@host# set routing-options forwarding-table traceoptions flag queue
user@host# set routing-options forwarding-table traceoptions flag route
user@host# set routing-options forwarding-table traceoptions flag routes
user@host# set routing-options forwarding-table traceoptions flag synchronous
user@host# set routing-options forwarding-table traceoptions flag state
user@host# set routing-options forwarding-table traceoptions flag asynchronous
user@host# set routing-options forwarding-table traceoptions flag consistency-checking
[edit]
user@host# commit
Results
From configuration mode, confirm your configuration by entering the show chassis, show interfaces,
show policy-options, show protocols, show routing-options, and show system commands. If the output
does not display the intended configuration, repeat the configuration instructions in this example to
correct it.
family inet6 {
address ::10.240.0.250/126;
}
}
}
lo0 {
unit 0 {
description "R1 Loopback";
family inet {
address 10.210.255.201/32 {
primary;
}
}
family iso {
address 47.0005.80ff.f800.0000.0108.0001.0102.1025.5201.00;
}
family inet6 {
address abcd::10:210:255:201/128;
}
}
}
interface fe-0/1/3.0 {
metric 1;
}
}
}
pim {
traceoptions {
file r1-nsr-pim size 10m files 10 world-readable;
flag mdt detail;
flag rp detail;
flag register detail;
flag packets detail;
flag autorp detail;
flag join detail;
flag hello detail;
flag assert detail;
flag normal detail;
flag state detail;
flag nsr-synchronization;
}
rp {
static {
address 10.210.255.202;
address abcd::10:210:255:202;
}
}
interface lo0.0;
interface fe-0/1/3.0 {
mode sparse;
version 2;
}
interface so-0/0/1.0 {
mode sparse;
version 2;
}
interface fe-1/1/0.0 {
mode sparse;
version 2;
}
}
Verification
SEE ALSO
The routing platform does not forward new streams until after the restart is complete. After restart, the
routing platform refreshes the forwarding state with any updates that were received from neighbors
during the restart period. For example, the routing platform relearns the join and prune states of
neighbors during the restart, but it does not apply the changes to the forwarding table until after the
restart.
When PIM sparse mode is enabled, the routing platform generates a unique 32-bit random number
called a generation identifier. Generation identifiers are included by default in PIM hello messages, as
specified in the Internet draft draft-ietf-pim-sm-v2-new-10.txt. When a routing platform receives PIM
hello messages containing generation identifiers on a point-to-point interface, the Junos OS activates an
algorithm that optimizes graceful restart.
Before PIM sparse mode graceful restart occurs, each routing platform creates a generation identifier
and sends it to its multicast neighbors. If a routing platform with PIM sparse mode restarts, it creates a
new generation identifier and sends it to neighbors. When a neighbor receives the new identifier, it
resends multicast updates to the restarting router to allow it to exit graceful restart efficiently. The
restart phase is complete when the restart duration timer expires.
Multicast forwarding can be interrupted in two ways. First, if the underlying routing protocol is unstable,
multicast RPF checks can fail and cause an interruption. Second, because the forwarding table is not
updated during the graceful restart period, new multicast streams are not forwarded until graceful
restart is complete.
You can configure graceful restart globally or for a routing instance. This example shows how to
configure graceful restart globally.
2. (Optional) Configure the amount of time the routing device waits (in seconds) to complete PIM
sparse mode graceful restart. By default, the router allows 60 seconds. The range is from 30 through
300 seconds. After this restart time, the Routing Engine resumes normal multicast operation.
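In set form, the restart duration is configured as follows. This is a sketch; 120 seconds is an illustrative value within the stated range:
[edit protocols pim]
user@host# set graceful-restart restart-duration 120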
3. Monitor the operation of PIM graceful restart by running the show pim neighbors command. In the
command output, look for the G flag in the Option field. The G flag stands for generation identifier.
Also run the show task replication command to verify the status of GRES and NSR.
SEE ALSO
Release Description
13.3 In Junos OS Release 13.3, multicast VPNs are not supported with nonstop active routing. Policy-based
features (such as neighbor policy, join policy, BSR policy, scope policy, flow maps, and RPF check policy)
are not supported with nonstop active routing.
10.4 For nonstop active routing for PIM-based multicast traffic to work with IPv6, the routing device must be
running Junos OS Release 10.4 or above.
RELATED DOCUMENTATION
IN THIS SECTION
In some network configurations, customers are unable to run PIM between the customer edge-facing
PIM domain and the core-facing PIM domain, even though PIM is running in sparse mode within each of
these domains. Because PIM is not running between the domains, customers with this configuration
cannot use PIM to forward multicast traffic across the domains. Instead, they might want to use IGMP to
forward IPv4 multicast traffic, or MLD to forward IPv6 multicast traffic across the domains.
To enable the use of IGMP or MLD to forward multicast traffic across the PIM domains in such
topologies, you can configure the rendezvous point (RP) router that resides between the edge domain
and core domain to translate PIM join or prune messages received from PIM neighbors on downstream
interfaces into corresponding IGMP or MLD report or leave messages. The router then transmits the
report or leave messages by proxying them to one or two upstream interfaces that you configure on the
RP router. As a result, this feature is sometimes referred to as PIM-to-IGMP proxy or PIM-to-MLD
proxy.
To configure the RP router to translate PIM join or prune messages into IGMP report or leave messages,
include the pim-to-igmp-proxy statement at the [edit routing-options multicast] hierarchy level.
Similarly, to configure the RP router to translate PIM join or prune messages into MLD report or leave
messages, include the pim-to-mld-proxy statement at the [edit routing-options multicast] hierarchy
level. As part of the configuration, you must specify the full name of at least one, but not more than two,
upstream interfaces on which to enable the PIM-to-IGMP proxy or PIM-to-MLD proxy feature.
The following guidelines apply when you configure PIM-to-IGMP or PIM-to-MLD message translation:
• Make sure that the router connecting the PIM edge domain and the PIM core domain is the static or
elected RP router.
• Make sure that the RP router is using the PIM sparse mode (PIM-SM) multicast routing protocol.
• When you configure an upstream interface, use the full logical interface specification (for example,
ge-0/0/1.0) and not just the physical interface specification (ge-0/0/1).
• When you configure two upstream interfaces, the RP router transmits the same IGMP or MLD report
messages and multicast traffic on both upstream interfaces. As a result, make sure that reverse-path
forwarding (RPF) is running in the PIM-SM core domain to verify that multicast packets are received
on the correct incoming interface and to avoid sending duplicate packets.
• The router transmits IGMP or MLD report messages on one or both upstream interfaces only for the
first PIM join message that it receives among all of the downstream interfaces. Similarly, the router
transmits IGMP or MLD leave messages on one or both upstream interfaces only if it receives a PIM
prune message for the last downstream interface.
• Multicast traffic received from an upstream interface is accepted as if it came from a host.
SEE ALSO
Enabling the routing device to perform PIM-to-IGMP message translation, also referred to as PIM-to-
IGMP proxy, is useful when you want to use IGMP to forward IPv4 multicast traffic between a PIM
sparse mode edge domain and a PIM sparse mode core domain in certain network topologies.
• Make sure that the routing device connecting the PIM edge domain and the PIM core domain is
the static or elected RP routing device.
• Make sure that the PIM sparse mode (PIM-SM) routing protocol is running on the RP routing device.
• If you plan to configure two upstream interfaces, make sure that reverse-path forwarding (RPF) is
running in the PIM-SM core domain. Because the RP router transmits the same IGMP messages and
multicast traffic on both upstream interfaces, you need to run RPF to verify that multicast packets
are received on the correct incoming interface and to avoid sending duplicate packets.
To configure the RP routing device to translate PIM join or prune messages into corresponding IGMP
report or leave messages:
1. Include the pim-to-igmp-proxy statement, specifying the names of one or two logical interfaces to
function as the upstream interfaces on which the routing device transmits IGMP report or leave
messages.
The following example configures PIM-to-IGMP message translation on a single upstream interface,
ge-0/1/0.1.
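A sketch of that configuration, in set form (the exact statement form is an assumption based on the hierarchy level named above):
[edit routing-options multicast]
user@host# set pim-to-igmp-proxy upstream-interface ge-0/1/0.1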
The following example configures PIM-to-IGMP message translation on two upstream interfaces,
ge-0/1/0.1 and ge-0/1/0.2. You must include the logical interface names within square brackets ( [ ] )
when you configure a set of two upstream interfaces.
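A sketch of the two-interface configuration, in set form (the exact statement form is an assumption based on the hierarchy level named above):
[edit routing-options multicast]
user@host# set pim-to-igmp-proxy upstream-interface [ ge-0/1/0.1 ge-0/1/0.2 ]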
2. Use the show multicast pim-to-igmp-proxy command to display the PIM-to-IGMP proxy state
(enabled or disabled) and the name or names of the configured upstream interfaces.
SEE ALSO
Enabling the routing device to perform PIM-to-MLD message translation, also referred to as PIM-to-
MLD proxy, is useful when you want to use MLD to forward IPv6 multicast traffic between a PIM sparse
mode edge domain and a PIM sparse mode core domain in certain network topologies.
• Make sure that the routing device connecting the PIM edge domain and the PIM core domain is
the static or elected RP routing device.
• Make sure that the PIM sparse mode (PIM-SM) routing protocol is running on the RP routing device.
• If you plan to configure two upstream interfaces, make sure that reverse-path forwarding (RPF) is
running in the PIM-SM core domain. Because the RP routing device transmits the same MLD
messages and multicast traffic on both upstream interfaces, you need to run RPF to verify that
multicast packets are received on the correct incoming interface and to avoid sending duplicate
packets.
To configure the RP routing device to translate PIM join or prune messages into corresponding MLD
report or leave messages:
1. Include the pim-to-mld-proxy statement, specifying the names of one or two logical interfaces to
function as the upstream interfaces on which the router transmits MLD report or leave messages.
The following example configures PIM-to-MLD message translation on a single upstream interface,
ge-0/5/0.1.
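A sketch of that configuration, in set form (the exact statement form is an assumption based on the hierarchy level named above):
[edit routing-options multicast]
user@host# set pim-to-mld-proxy upstream-interface ge-0/5/0.1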
The following example configures PIM-to-MLD message translation on two upstream interfaces,
ge-0/5/0.1 and ge-0/5/0.2. You must include the logical interface names within square brackets ( [ ] )
when you configure a set of two upstream interfaces.
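A sketch of the two-interface form, using the bracketed list syntax described above:

[edit protocols pim]
user@host# set pim-to-mld-proxy upstream-interface [ ge-0/5/0.1 ge-0/5/0.2 ]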
2. Use the show multicast pim-to-mld-proxy command to display the PIM-to-MLD proxy state (enabled
or disabled) and the name or names of the configured upstream interfaces.
RELATED DOCUMENTATION
Configuring IGMP | 25
Configuring MLD | 60
CHAPTER 15
IN THIS CHAPTER
IN THIS SECTION
Purpose | 542
Action | 542
Meaning | 543
Purpose
Action
Sample Output
command-name
Meaning
The output shows a list of the interfaces that are configured for PIM. Verify the following information:
IN THIS SECTION
Purpose | 543
Action | 543
Meaning | 544
Purpose
Verify that the PIM RP is statically configured with the correct IP address.
Action
Sample Output
command-name
Meaning
The output shows a list of the RP addresses that are configured for PIM. At least one RP must be
configured. Verify the following information:
IN THIS SECTION
Purpose | 544
Action | 544
Meaning | 545
Purpose
Action
Sample Output
command-name
Meaning
The output shows the multicast RPF table that is configured for PIM. If no multicast RPF routing table is
configured, RPF checks use inet.0. Verify the following information:
CHAPTER 16
IN THIS CHAPTER
IN THIS SECTION
Example: Configuring MSDP with Active Source Limits and Mesh Groups | 562
Understanding MSDP
The Multicast Source Discovery Protocol (MSDP) is used to connect multicast routing domains. It
typically runs on the same router as the Protocol Independent Multicast (PIM) sparse-mode rendezvous
point (RP). Each MSDP router establishes adjacencies with internal and external MSDP peers similar to
the way BGP establishes peers. These peer routers inform each other about active sources within the
domain. When they detect active sources, the routers can send PIM sparse-mode explicit join messages
to the active source.
The peer with the higher IP address passively listens to a well-known port number and waits for the side
with the lower IP address to establish a Transmission Control Protocol (TCP) connection. When a PIM
sparse-mode RP that is running MSDP becomes aware of a new local source, it sends source-active
type, length, and values (TLVs) to its MSDP peers. When a source-active TLV is received, a peer-reverse-
path-forwarding (peer-RPF) check (not the same as a multicast RPF check) is done to make sure that this
peer is in the path that leads back to the originating RP. If not, the source-active TLV is dropped. This
TLV is counted as a “rejected” source-active message.
The MSDP peer-RPF check is different from the normal RPF checks done by non-MSDP multicast
routers. The goal of the peer-RPF check is to stop source-active messages from looping. Router R
accepts source-active messages originated by Router S only from neighbor Router N or an MSDP mesh
group member.
1. S ------------------> N ------------------> R
Router R (the router that accepts or rejects active-source messages) locates its MSDP peer-RPF
neighbor (Router N) deterministically. A series of rules is applied in a particular order to received source-
active messages, and the first rule that applies determines the peer-RPF neighbor. All source-active
messages from other routers are rejected.
The six rules applied to source-active messages originating at Router S received at Router R from Router
N are as follows:
1. If Router N originated the source-active message (Router N is Router S), then Router N is also the
peer-RPF neighbor, and its source-active messages are accepted.
2. If Router N is a member of the Router R mesh group, or is the configured peer, then Router N is the
peer-RPF neighbor, and its source-active messages are accepted.
3. If Router N is the BGP next hop of the active multicast RPF route toward Router S (Router N installed
the route on Router R), then Router N is the peer-RPF neighbor, and its source-active messages are
accepted.
4. If Router N is an external BGP (EBGP) or internal BGP (IBGP) peer of Router R, and the last
autonomous system (AS) number in the BGP AS-path to Router S is the same as Router N's AS
number, then Router N is the peer-RPF neighbor, and its source-active messages are accepted.
5. If Router N uses the same next hop as the next hop to Router S, then Router N is the peer-RPF
neighbor, and its source-active messages are accepted.
6. If Router N fits none of these criteria, then Router N is not an MSDP peer-RPF neighbor, and its
source-active messages are rejected.
The MSDP peers that receive source-active TLVs can be constrained by BGP reachability information. If
the AS path of the network layer reachability information (NLRI) contains the receiving peer's AS
number prepended second to last, the sending peer is using the receiving peer as a next hop for this
source. If the split horizon information is not being received, the peer can be pruned from the source-
active TLV distribution list.
For information about configuring MSDP mesh groups, see Example: Configuring MSDP with Active
Source Limits and Mesh Groups.
SEE ALSO
Configuring MSDP
Configuring MSDP
To configure the Multicast Source Discovery Protocol (MSDP), include the msdp statement:
msdp {
disable;
active-source-limit {
maximum number;
threshold number;
}
data-encapsulation (disable | enable);
export [ policy-names ];
group group-name {
... group-configuration ...
}
hold-time seconds;
import [ policy-names ];
local-address address;
keep-alive seconds;
peer address {
... peer-configuration ...
}
rib-group group-name;
source ip-prefix</prefix-length> {
active-source-limit {
maximum number;
threshold number;
}
}
sa-hold-time seconds;
traceoptions {
        file filename <files number> <size size> <world-readable | no-world-readable>;
        flag flag <flag-modifier> <disable>;
}
group group-name {
disable;
export [ policy-names ];
import [ policy-names ];
local-address address;
mode (mesh-group | standard);
peer address {
... same statements as at the [edit protocols msdp peer address]
hierarchy level shown just following ...
}
traceoptions {
        file filename <files number> <size size> <world-readable | no-world-readable>;
flag flag <flag-modifier> <disable>;
}
}
peer address {
disable;
active-source-limit {
maximum number;
threshold number;
}
authentication-key peer-key;
default-peer;
export [ policy-names ];
import [ policy-names ];
local-address address;
traceoptions {
        file filename <files number> <size size> <world-readable | no-world-readable>;
flag flag <flag-modifier> <disable>;
}
}
}
• [edit protocols]
SEE ALSO
IN THIS SECTION
Requirements | 551
Overview | 552
Configuration | 555
Verification | 560
Requirements
• Configure an interior gateway protocol or static routing. See the Junos OS Routing Protocols Library
for Routing Devices.
Overview
IN THIS SECTION
Topology | 554
• Forwarding
• No forwarding
• Virtual router
• VPLS
• VRF
The main use of MSDP in a routing instance is to support anycast RPs in the network, which allows you
to configure redundant RPs. Anycast RP addressing requires MSDP support to synchronize the active
sources between RPs.
• authentication-key—By default, multicast routers accept and process any properly formatted MSDP
messages from the configured peer address. This default behavior might violate the security policies
in many organizations because MSDP messages by definition come from another routing domain
beyond the control of the security practices of the multicast router's organization.
The router can authenticate MSDP messages using the TCP message digest 5 (MD5) signature
option for MSDP peering sessions. This authentication provides protection against spoofed packets
being introduced into an MSDP peering session. Two organizations implementing MSDP
authentication must decide on a human-readable key on both peers. This key is included in the MD5
signature computation for each MSDP segment sent between the two peers.
You configure an MSDP authentication key on a per-peer basis, whether the MSDP peer is defined in
a group or individually. If you configure different authentication keys for the same peer, one in a
group and one individually, the individual key is used.
The peer key can be a text string up to 16 letters and digits long. Strings can include any ASCII
characters with the exception of (,), &, and [. If you include spaces in an MSDP authentication key,
enclose all characters in quotation marks (“ ”).
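As an illustration of the per-peer key described above (the peer address and key string here are hypothetical), a key containing a space must be quoted:

[edit protocols msdp]
user@host# set peer 192.168.5.2 authentication-key "msdp key 1"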
Adding, removing, or changing an MSDP authentication key in a peering session resets the existing
MSDP session and establishes a new session between the affected MSDP peers. This immediate
session termination prevents excessive retransmissions and eventual session timeouts due to
mismatched keys.
• import and export—All routing protocols use the routing table to store the routes that they learn and
to determine which routes they advertise in their protocol packets. Routing policy allows you to
control which routes the routing protocols store in, and retrieve from, the routing table.
You can configure routing policy globally, for a group, or for an individual peer. This example shows
how to configure the policy for an individual peer.
If you configure routing policy at the group level, each peer in a group inherits the group's routing
policy.
The import statement applies policies to source-active messages being imported into the source-
active cache from MSDP. The export statement applies policies to source-active messages being
exported from the source-active cache into MSDP. If you specify more than one policy, they are
evaluated in the order specified, from first to last, and the first matching policy is applied to the
route. If no match is found for the import policy, MSDP shares with the routing table only those
routes that were learned from MSDP routers. If no match is found for the export policy, the default
MSDP export policy is applied to entries in the source-active cache. See Table 15 on page 553 for a
list of match conditions.
neighbor — Neighbor address (the source address in the IP header of the source-active message)
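A sketch of a per-peer import policy built on the neighbor match condition listed above (the policy name and addresses are hypothetical); it rejects source-active messages received from one neighbor and is then applied to that peer:

[edit policy-options]
user@host# set policy-statement reject-sa-from-peer term t1 from neighbor 192.168.5.2
user@host# set policy-statement reject-sa-from-peer term t1 then reject

[edit protocols msdp]
user@host# set peer 192.168.5.2 import reject-sa-from-peer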
• local-address—Identifies the address of the router you are configuring as an MSDP router (the local
router). When you configure MSDP, the local-address statement is required. The router must also be
a Protocol Independent Multicast (PIM) sparse-mode rendezvous point (RP).
• peer—An MSDP router must know which routers are its peers. You define the peer relationships
explicitly by configuring the neighboring routers that are the MSDP peers of the local router. After
peer relationships are established, the MSDP peers exchange messages to advertise active multicast
sources. You must configure at least one peer for MSDP to function. When you configure MSDP, the
peer statement is required. The router must also be a Protocol Independent Multicast (PIM) sparse-
mode rendezvous point (RP).
You can arrange MSDP peers into groups. Each group must contain at least one peer. Arranging peers
into groups is useful if you want to block sources from some peers and accept them from others, or
set tracing options on one group and not others. This example shows how to configure the MSDP
peers in groups. If you configure MSDP peers in a group, each peer in a group inherits all group-level
options.
Topology
Configuration
IN THIS SECTION
Procedure | 555
Results | 558
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
[edit policy-options]
user@host# set policy-statement bgp-to-ospf term 1 from protocol bgp
user@host# set policy-statement bgp-to-ospf term 1 then accept
2. Configure a policy that filters out certain source and group addresses and accepts all other source
and group addresses.
[edit policy-options]
user@host# set policy-statement sa-filter term bad-groups from route-filter 224.0.1.2/32 exact
user@host# set policy-statement sa-filter term bad-groups from route-filter 224.77.0.0/16 orlonger
user@host# set policy-statement sa-filter term bad-groups then reject
user@host# set policy-statement sa-filter term bad-sources from source-address-filter 10.0.0.0/8
orlonger
user@host# set policy-statement sa-filter term bad-sources from source-address-filter 127.0.0.0/8
orlonger
user@host# set policy-statement sa-filter term bad-sources then reject
user@host# set policy-statement sa-filter term accept-everything-else then accept
[edit routing-instances]
user@host# set VPN-100 instance-type vrf
user@host# set VPN-100 interface ge-0/0/0.100
user@host# set VPN-100 interface lo0.100
[edit routing-instances]
user@host# set VPN-100 route-distinguisher 10.255.120.36:100
user@host# set VPN-100 vrf-target target:100:1
[edit routing-instances]
user@host# set VPN-100 protocols ospf export bgp-to-ospf
user@host# set VPN-100 protocols ospf area 0.0.0.0 interface lo0.100
user@host# set VPN-100 protocols ospf area 0.0.0.0 interface ge-0/0/0.100
[edit routing-instances]
user@host# set VPN-100 protocols pim rp static address 11.11.47.100
user@host# set VPN-100 protocols pim interface lo0.100 mode sparse-dense
user@host# set VPN-100 protocols pim interface lo0.100 version 2
user@host# set VPN-100 protocols pim interface ge-0/0/0.100 mode sparse-dense
user@host# set VPN-100 protocols pim interface ge-0/0/0.100 version 2
[edit routing-instances]
user@host# set VPN-100 protocols msdp export sa-filter
user@host# set VPN-100 protocols msdp import sa-filter
user@host# set VPN-100 protocols msdp group 100 local-address 10.10.47.100
user@host# set VPN-100 protocols msdp group 100 peer 10.255.120.39 authentication-key "New York"
[edit routing-instances]
user@host# set VPN-100 protocols msdp group to_pe local-address 10.10.47.100
[edit routing-instances]
user@host# set VPN-100 protocols msdp group to_pe peer 11.11.47.100
[edit routing-instances]
user@host# commit
Results
Confirm your configuration by entering the show policy-options command and the show routing-instances
command from configuration mode. If the output does not display the intended configuration,
repeat the instructions in this example to correct the configuration.
}
}
}
}
group to_pe {
local-address 10.10.47.100;
peer 11.11.47.100;
}
}
}
}
Verification
SEE ALSO
The figure shows such a topology, where R2 connects to the R1 source on one subnet, and to the incoming
interface on R3 (ge-1/3/0.0 in the figure) on another subnet.
In this topology R2 is a pass-through device not running PIM, so R3 is the first hop router for multicast
packets sent from R1. Because R1 and R3 are in different subnets, the default behavior of R3 is to
disregard R1 as a remote source. You can have R3 accept multicast traffic from R1, however, by enabling
accept-remote-source on the target interface.
1. Identify the router and physical interface that you want to receive multicast traffic from the remote
source.
2. Configure the interface to accept traffic from the remote source.
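A minimal sketch of this step, assuming the accept-remote-source statement named above is configured under the PIM interface (the interface name is taken from the figure description):

[edit protocols pim]
user@host# set interface ge-1/3/0.0 accept-remote-source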
NOTE: If the interface you identified is not the only path from the remote source, you need to
ensure that it is the best path. For example you can configure a static route on the receiver
side PE router to the source, or you can prepend the AS path on the other possible routes:
4. Confirm that the interface you configured accepts traffic from the remote source.
SEE ALSO
Example: Configuring MSDP with Active Source Limits and Mesh Groups
IN THIS SECTION
Requirements | 562
Overview | 563
Configuration | 567
Verification | 569
This example shows how to configure MSDP to filter source-active messages and limit the flooding of
source-active messages.
Requirements
• Configure an interior gateway protocol or static routing. See the Junos OS Routing Protocols Library
for Routing Devices.
• Configure the router as a PIM sparse-mode RP. See Configuring Local PIM RPs.
Overview
IN THIS SECTION
Topology | 566
A router interested in MSDP messages, such as an RP, might have to process a large number of MSDP
messages, especially source-active messages, arriving from other routers. Because of the potential need
for a router to examine, process, and create state tables for many MSDP packets, there is a possibility of
an MSDP-based denial-of-service (DoS) attack on a router running MSDP. To minimize this possibility,
you can configure the router to limit the number of source-active messages the router accepts. Also, you
can configure a threshold for applying random early detection (RED) to drop some, but not all, MSDP
source-active messages.
By default, the router accepts 25,000 source-active messages before ignoring the rest. The limit can be
from 1 through 1,000,000. The limit is applied to both the number of messages and the number of
MSDP peers.
By default, the router accepts 24,000 source-active messages before applying the RED profile to
prevent a possible DoS attack. This number can also range from 1 through 1,000,000. The next 1000
messages are screened by the RED profile and the accepted messages processed. If you configure no
drop profiles (as this example does not), RED is still in effect and functions as the primary mechanism for
managing congestion. In the default RED drop profile, when the packet queue fill-level is 0 percent, the
drop probability is 0 percent. When the fill-level is 100 percent, the drop probability is 100 percent.
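A sketch of the limit and threshold statements using the default values described above (these are configured at the [edit protocols msdp] hierarchy level; the same statements also exist per group, per peer, and per source):

[edit protocols msdp]
user@host# set active-source-limit maximum 25000
user@host# set active-source-limit threshold 24000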
NOTE: The router ignores source-active messages with encapsulated TCP packets. Multicast
does not use TCP; segments inside source-active messages are most likely the result of worm
activity.
The number configured for the threshold must be less than the number configured for the maximum
number of active MSDP sources.
You can configure an active source limit globally, for a group, or for a peer. If active source limits are
configured at multiple levels of the hierarchy (as shown in this example), all are applied.
You can configure an active source limit for an address range as well as for a specific peer. A per-source
active source limit uses an IP prefix and prefix length instead of a specific address. You can configure
more than one per-source active source limit. The longest match determines the limit.
Per-source active source limits can be combined with active source limits at the peer, group, and global
(instance) hierarchy level. Per-source limits are applied before any other type of active source limit.
Limits are tested in the following order:
• Per-source
• Per-peer or group
• Per-instance
An active source message must “pass” all limits established before being accepted. For example, if a
source is configured with an active source limit of 10,000 active multicast groups and the instance is
configured with a limit of 5000 (and there are no other sources or limits configured), only 5000 active
source messages are accepted from this source.
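The scenario above could be sketched as follows (the source prefix is hypothetical); because all configured limits must pass, the instance-level maximum of 5000 is the effective limit:

[edit protocols msdp]
user@host# set source 10.4.0.0/16 active-source-limit maximum 10000
user@host# set active-source-limit maximum 5000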
MSDP mesh groups are groups of peers configured in a full-mesh topology that limits the flooding of
source-active messages to neighboring peers. Every mesh group member must have a peer connection
with every other mesh group member. When a source-active message is received from a mesh group
member, the source-active message is always accepted but is not flooded to other members of the same
mesh group. However, the source-active message is flooded to non-mesh group peers or members of
other mesh groups. By default, standard flooding rules apply if mesh-group is not specified.
CAUTION: When configuring MSDP mesh groups, you must configure all members the
same way. If you do not configure a full mesh, excessive flooding of source-active
messages can occur.
A common application for MSDP mesh groups is peer-reverse-path-forwarding (peer-RPF) check bypass.
For example, if there are two MSDP peers inside an autonomous system (AS), and only one of them has
an external MSDP session to another AS, the internal MSDP peer often rejects incoming source-active
messages relayed by the peer with the external link. Rejection occurs because the external MSDP peer
must be reachable by the internal MSDP peer through the next hop toward the source in another AS,
and this next-hop condition is not certain. To prevent rejections, configure an MSDP mesh group on the
internal MSDP peer so it always accepts source-active messages.
NOTE: An alternative way to bypass the peer-RPF check is to configure a default peer. In
networks with only one MSDP peer, especially stub networks, the source-active message always
needs to be accepted. An MSDP default peer is an MSDP peer from which all source-active
messages are accepted without performing the peer-RPF check. You can establish a default peer
at the peer or group level by including the default-peer statement.
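For example, at the peer level (the peer address is hypothetical):

[edit protocols msdp]
user@host# set peer 10.9.9.9 default-peer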
Table 16 on page 565 explains how flooding is handled by peers in this example.
Peer 11    Peer 21, Peer 22, Peer 31, Peer 32, Peer 12, Peer 13
Figure 77 on page 565 illustrates source-active message flooding between different mesh groups and
peers within the same mesh group.
• active-source-limit maximum 10000—Applies a limit of 10,000 active sources to all other peers.
MSDP data encapsulation mainly concerns bursty sources of multicast traffic. Sources that send only
one packet every few minutes have trouble with the timeout of state relationships between sources
and their multicast groups (S,G). Routers lose data while they attempt to reestablish (S,G) state tables.
As a result, multicast register messages contain data, and this data encapsulation in MSDP source-
active messages can be turned on or off through configuration.
By default, MSDP data encapsulation is enabled. An RP running MSDP takes the data packets
arriving in the source's register message and encapsulates the data inside an MSDP source-active
message.
However, data encapsulation creates both a multicast forwarding cache entry in the inet.1 table (this
is also the forwarding table) and a routing table entry in the inet.4 table. Without data encapsulation,
MSDP creates only a routing table entry in the inet.4 table. In some circumstances, such as the
presence of Internet worms or other forms of DoS attack, the router's forwarding table might fill up
with these entries. To prevent the forwarding table from filling up with MSDP entries, you can
configure the router not to use MSDP data encapsulation. However, if you disable data
encapsulation, the router ignores and discards the encapsulated data. Without data encapsulation,
multicast applications with bursty sources having transmit intervals greater than about 3 minutes
might not work well.
• group MSDP-group local-address 10.1.2.3—Specifies the address of the local router (this router).
• group MSDP-group mode mesh-group—Specifies that all peers belonging to the MSDP-group group
are mesh group members.
• source 10.1.0.0/16 active-source-limit maximum 500—Applies a limit of 500 active sources to any
source on the 10.1.0.0/16 network.
Topology
Configuration
IN THIS SECTION
Procedure | 567
Results | 568
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
Step-by-Step Procedure
The following example requires that you navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
3. (Optional) Configure the threshold at which warning messages are logged and the amount of time
between log messages.
[edit routing-instances]
user@host# commit
Results
peer 10.0.0.1 {
active-source-limit {
maximum 5000;
threshold 4000;
}
}
source 10.1.0.0/16 {
active-source-limit {
maximum 500;
}
}
group MSDP-group {
mode mesh-group;
local-address 10.1.2.3;
peer 10.10.10.10 {
active-source-limit {
maximum 7500;
}
}
}
}
Verification
SEE ALSO
You can configure MSDP tracing for all peers, for all peers in a particular group, or for a particular peer.
In the following example, tracing is enabled for all routing protocol packets. Then tracing is narrowed to
focus only on MSDP peers in a particular group. To configure tracing operations for MSDP:
1. (Optional) Configure tracing by including the traceoptions statement at the [edit routing-options]
hierarchy level and set the all-packets-trace and all flags to trace all protocol packets.
6. Configure tracing flags. Suppose you are troubleshooting issues with the source-active cache for
groupa. The following example shows how to trace messages associated with the group address.
SEE ALSO
Understanding MSDP
Tracing and Logging Junos OS Operations
Junos OS Administration Library for Routing Devices
Disabling MSDP
To disable MSDP on the router, include the disable statement:
disable;
You can disable MSDP globally for all peers, for all peers in a group, or for an individual peer.
If you disable MSDP at the group level, each peer in the group is disabled.
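For example (the group name and peer address match the sample configuration that follows; adjust to your own group and peer names):

[edit protocols msdp]
user@host# set group lab disable
user@host# set peer 192.168.6.18 disable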
SEE ALSO
[edit]
routing-options {
interface-routes {
rib-group ifrg;
}
rib-groups {
ifrg {
import-rib [inet.0 inet.2];
}
mcrg {
export-rib inet.2;
import-rib inet.2;
}
}
}
protocols {
bgp {
group lab {
type internal;
family any;
neighbor 192.168.6.18 {
local-address 192.168.6.17;
}
}
}
pim {
dense-groups {
224.0.1.39/32;
224.0.1.40/32;
}
rib-group mcrg;
rp {
local {
address 192.168.1.1;
}
}
interface all {
mode sparse-dense;
version 1;
}
}
msdp {
rib-group mcrg;
group lab {
peer 192.168.6.18 {
local-address 192.168.6.17;
}
}
}
}
RELATED DOCUMENTATION
MSDP instances are supported for VRF instance types. For QFX5100, QFX5110, QFX5200, and
EX9200 switches, MSDP instances are also supported for default and virtual router instance types. You
can configure multiple instances of MSDP to support multicast over VPNs.
routing-instances {
routing-instance-name {
interface interface-name;
instance-type vrf;
route-distinguisher (as-number:number | ip-address:number);
vrf-import [ policy-names ];
vrf-export [ policy-names ];
protocols {
msdp {
... msdp-configuration ...
}
}
}
}
RELATED DOCUMENTATION
CHAPTER 17
IN THIS CHAPTER
IN THIS SECTION
SDP is a session directory protocol that is used for multimedia sessions. It helps advertise multimedia
conference sessions and communicates setup information to participants who want to join the session.
SDP simply formats the session description. It does not incorporate a transport protocol. A client
commonly uses SDP to announce a conference session by periodically multicasting an announcement
packet to a well-known multicast address and port using SAP.
SAP is a session directory announcement protocol that SDP uses as its transport protocol.
For information about supported standards for SAP and SDP, see Supported IP Multicast Protocol
Standards.
1. Determine whether the router is directly attached to any multicast sources. Receivers must be able
to locate these sources.
2. Determine whether the router is directly attached to any multicast group receivers. If receivers are
present, IGMP is needed.
3. Determine whether to configure multicast to use sparse, dense, or sparse-dense mode. Each mode
has different configuration considerations.
5. Determine whether to locate the RP with the static configuration, BSR, or auto-RP method.
6. Determine whether to configure multicast to use its own RPF routing table when configuring PIM in
sparse, dense, or sparse-dense mode.
The SAP and SDP protocols associate multicast session names with multicast traffic addresses. Only SAP
has configuration parameters that users can change. Enabling SAP allows the router to receive
announcements about multimedia and other multicast sessions.
To enable SAP and the receipt of session announcements, include the sap statement:
sap {
disable;
listen address <port port>;
}
• [edit protocols]
By default, SAP listens to the address and port 224.2.127.254:9875 for session advertisements. To add
other addresses or pairs of address and port, include one or more listen statements.
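For example, to listen on an additional address and port beyond the default (the address here is hypothetical):

[edit protocols sap]
user@host# set listen 224.111.250.56 port 9875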
Sessions established by SDP, SAP's higher-layer protocol, time out after 60 minutes.
SEE ALSO
IN THIS SECTION
Purpose | 578
Action | 578
Meaning | 579
Purpose
Verify that SAP and SDP are configured to listen on the correct group addresses and ports.
Action
Sample Output
command-name
Meaning
The output shows a list of the group addresses and ports that SAP and SDP listen on. Verify the
following information:
CHAPTER 18
IN THIS CHAPTER
IN THIS SECTION
Understanding AMT
Automatic Multicast Tunneling (AMT) facilitates dynamic multicast connectivity between multicast-
enabled networks across islands of unicast-only networks. Such connectivity enables service providers,
content providers, and their customers to participate in delivering multicast traffic even if they lack end-
to-end multicast connectivity.
AMT is supported on MX Series Ethernet Services Routers with Modular Port Concentrators (MPCs)
that are running Junos 13.2 or later. AMT is also supported on i-chip based MPCs. AMT supports
graceful restart (GR) but does not support graceful Routing Engine switchover (GRES).
The AMT protocol provides discovery and handshaking between relays and gateways to establish
tunnels dynamically without requiring explicit per-tunnel configuration.
AMT relays are typically routers with native IP multicast connectivity that aggregate a potentially large
number of AMT tunnels.
• Prevention of denial-of-service attacks by quickly discarding multicast packets that are sourced
through a gateway.
Multicast sources located behind AMT gateways are not supported. See Example: Configuring the AMT
Protocol.
AMT supports PIM sparse mode. AMT does not support dense mode operation.
SEE ALSO
AMT Applications
AMT Applications
Transit service providers have a challenge in the Internet because many local service providers are not
multicast-enabled. The challenge is how to entice content owners to transmit video and other multicast
traffic across their backbones. The cost model for the content owners might be prohibitively high if they
have to pay for unicast streams for the majority of their subscribers.
Until more local providers are multicast-enabled, there is a transition strategy proposed by the Internet
Engineering Task Force (IETF) and implemented in open source software. This strategy is called
Automatic IP Multicast Without Explicit Tunnels (AMT). AMT involves setting up relays at peering points
in multicast networks that can be reached from gateways installed on hosts connected to unicast
networks.
Without AMT, when a user who is connected to a unicast-only network wants to receive multicast
content, the content owner can allow the user to join through unicast. However, the content owner
incurs an added cost because the owner needs extra bandwidth to support the unicast subscribers.
AMT allows any host to receive multicast. On the client end is an AMT gateway that is a single host.
Once the gateway has located an AMT relay, which might be a host but is more typically a router, the
gateway periodically sends Internet Group Management Protocol (IGMP) messages over a dynamically
created UDP tunnel to the relay. AMT relays and gateways cooperate to transmit multicast traffic
sourced within the multicast network to end-user sites. AMT relays receive the traffic natively and
unicast-encapsulate it to gateways. This allows anyone on the Internet to create a dynamic tunnel to
download multicast data streams.
With AMT, a multicast-enabled service provider can offer multicast services to a content owner. When a
customer of the unicast-only local provider wants to receive the content and subscribes using an AMT
join, the multicast-enabled transit provider can then efficiently transport the content to the unicast-only
local provider, which sends it on to the end user.
AMT is an excellent way for transit service providers (who can get access to the content, but do not
have many end users) to provide multicast service to content owners, where it would not otherwise be
economically feasible. It is also a useful transition strategy for local service providers who do not yet
have multicast support on all downstream equipment.
AMT is also useful for connecting two multicast-enabled service providers that are separated by a
unicast-only service provider.
Similarly, AMT can be used by local service providers whose networks are multicast-enabled to tunnel
multicast traffic over legacy edge devices such as digital subscriber line access multiplexers (DSLAMs)
that have limited multicast capabilities.
• A three-way handshake is used to join groups from unicast receivers to prevent spoofing and denial-
of-service (DoS) attacks.
• An AMT relay acting as a replication server joins the multicast group and translates multicast traffic
into multiple unicast streams.
• The discovery mechanism uses anycast, enabling the discovery of the relay that is closest to the
gateway in the network topology.
• An AMT gateway acting as a client is a host that joins the multicast group.
• Tunnel count limits on relays can limit bandwidth usage and avoid degradation of service.
SEE ALSO
AMT Operation
AMT is used to create multicast tunnels dynamically between multicast-enabled networks across islands
of unicast-only networks. To do this, several steps occur sequentially.
1. The AMT relay (typically a router) advertises an anycast address prefix and route into the unicast
routing infrastructure.
2. The AMT gateway (a host) sends AMT relay discovery messages to the nearest AMT relay
reachable across the unicast-only infrastructure. To reduce the possibility of replay attacks or
dictionary attacks, the relay discovery messages contain a cryptographic nonce. A cryptographic
nonce is a random number used only once.
3. The closest relay in the topology receives the AMT relay discovery message and returns the nonce
from the discovery message in an AMT relay advertisement message. This enables the gateway to
learn the relay's unique IP address. The AMT gateway now has an address to use for all subsequent
(S,G) entries it will join.
4. The AMT gateway sends an AMT request message to the AMT relay's unique IP address to begin
the process of joining the (S,G).
5. The AMT relay sends an AMT membership query back to the gateway.
6. The AMT gateway receives the AMT query message and sends an AMT membership update
message containing the IGMP join messages.
7. The AMT relay sends a join message toward the source to build a native multicast tree in the native
multicast infrastructure.
8. As packets are received from the source, the AMT relay replicates the packets to all interfaces in
the outgoing interface list, including the AMT tunnel. The multicast traffic is then encapsulated in
unicast AMT multicast data messages.
9. To maintain state in the AMT relay, the AMT gateway sends periodic AMT membership updates.
10. After the tunnel is established, the AMT tunnel state is refreshed with each membership update
message sent. The timeout for the refresh messages is 240 seconds.
11. When the AMT gateway leaves the group, the AMT relay can free resources associated with the
tunnel.
• The AMT relay creates an AMT pseudo interface (tunnel interface). AMT tunnel interfaces are
implemented as generic UDP encapsulation (ud) logical interfaces. These logical interfaces have the
identifier format ud-fpc/pic/port.unit.
• All multicast packets (data and control) are encapsulated in unicast packets. UDP encapsulation is
used for all AMT control and data packets using the IANA reserved UDP port number (2268) for
AMT.
• The AMT relay maintains a receiver list for each multicast session. The relay maintains the multicast
state for each gateway that has joined a particular group or (S,G) pair.
SEE ALSO
AMT Applications
Example: Configuring the AMT Protocol
amt {
relay {
accounting;
family {
inet {
anycast-prefix ip-prefix/prefix-length;
local-address ip-address;
}
}
secret-key-timeout minutes;
tunnel-limit number;
}
traceoptions {
file filename <files number> <size size> <world-readable | no-world-readable>;
flag flag <flag-modifier> <disable>;
}
}
• [edit protocols]
NOTE: In the following example, only the [edit protocols] hierarchy is identified.
The minimum configuration to enable AMT is to specify the AMT local address and the AMT
anycast prefix.
1. To enable the MX Series router to create the UDP encapsulation (ud) logical interfaces, include the
bandwidth statement and specify the bandwidth in gigabits per second.
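For example (the FPC and PIC numbers are placeholders for the slot that provides tunnel services on your router):

[edit chassis]
user@host# set fpc 0 pic 0 tunnel-services bandwidth 1g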
2. Specify the local address by including the local-address statement at the [edit protocols amt relay
family inet] hierarchy level.
The local address is used as the IP source of AMT control messages and the source of AMT data
tunnel encapsulation. The local address can be configured on any active interface. Typically, the IP
address of the router’s lo0.0 loopback interface is used for configuring the AMT local address in the
default routing instance, and the IP address of the router’s lo0.n loopback interface is used for
configuring the AMT local address in VPN routing instances.
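For example (the 192.0.2.1 address is a placeholder for the router's lo0.0 loopback address):

[edit protocols amt relay family inet]
user@host# set local-address 192.0.2.1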
3. Specify the AMT anycast address by including the anycast-prefix statement at the [edit protocols
amt relay family inet] hierarchy level.
The AMT anycast prefix is advertised by unicast routing protocols to route AMT discovery messages
to the router from nearby AMT gateways. Typically, the router’s lo0.0 interface loopback address is
used for configuring the AMT anycast prefix in the default routing instance, and the router’s lo0.n
loopback address is used for configuring the AMT anycast prefix in VPN routing instances. However,
the anycast address can be either the primary or secondary lo0.0 loopback address.
Ensure that your unicast routing protocol advertises the AMT anycast prefix in the route
advertisements. If the AMT anycast prefix is advertised by BGP, ensure that the local autonomous
system (AS) number for the AMT relay router is in the AS path leading to the AMT anycast prefix.
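For example (the 192.0.2.1/32 prefix is a placeholder for the loopback address used as the anycast prefix):

[edit protocols amt relay family inet]
user@host# set anycast-prefix 192.0.2.1/32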
4. (Optional) Enable AMT accounting.
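To enable AMT accounting, include the accounting statement:

[edit protocols amt relay]
user@host# set accounting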
5. (Optional) Specify the AMT secret key timeout by including the secret-key-timeout statement at the
[edit protocols amt relay] hierarchy level. In the following example, the secret key timeout is
configured to be 120 minutes.
The secret key is used to generate the AMT Message Authentication Code (MAC). Setting the secret
key timeout shorter might improve security, but it consumes more CPU resources. The default is 60
minutes.
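For example, to set the secret key timeout to 120 minutes:

[edit protocols amt relay]
user@host# set secret-key-timeout 120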
6. (Optional) Specify an AMT tunnel device by including the tunnel-devices statement at the [edit
protocols amt relay] hierarchy level.
7. (Optional) Specify an AMT tunnel limit by including the tunnel-limit statement at the [edit protocols
amt relay] hierarchy level. In the following example, the AMT tunnel limit is 12.
The tunnel limit configures the static upper limit to the number of AMT tunnels that can be
established. When the limit is reached, new AMT relay discovery messages are ignored.
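For example, to limit the relay to 12 tunnels:

[edit protocols amt relay]
user@host# set tunnel-limit 12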
8. Trace AMT protocol traffic by specifying options to the traceoptions statement at the [edit protocols
amt] hierarchy level. Options applied at the AMT protocol level trace only AMT traffic. In the
following example, all AMT packets are logged to the file amt-log.
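A sketch of this tracing configuration (the flag name shown is representative; see the traceoptions statement for the flags available in your release):

[edit protocols amt]
user@host# set traceoptions file amt-log
user@host# set traceoptions flag packets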
NOTE: For AMT operation, configure the PIM rendezvous point address as the primary
loopback address of the AMT relay.
SEE ALSO
AMT Applications
Example: Configuring the AMT Protocol
CLI Explorer
amt {
relay {
defaults {
(accounting | no-accounting);
group-policy [ policy-names ];
query-interval seconds;
query-response-interval seconds;
robust-count number;
ssm-map ssm-map-name;
version version;
}
}
}
The IGMP statements included at the [edit protocols igmp amt relay defaults] hierarchy level have the
same syntax and purpose as IGMP statements included at the [edit protocols igmp] or [edit protocols
igmp interface interface-name] hierarchy levels. These statements are as follows:
• You can collect IGMP join and leave event statistics. To enable the collection of IGMP join and leave
event statistics for all AMT interfaces, include the accounting statement:
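For example:

[edit protocols igmp amt relay defaults]
user@host# set accounting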
• After enabling IGMP accounting, you must configure the router to filter the recorded information to
a file or display it to a terminal. You can archive the events file.
• To disable the collection of IGMP join and leave event statistics for all AMT interfaces, include the
no-accounting statement:
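For example:

[edit protocols igmp amt relay defaults]
user@host# set no-accounting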
• You can filter unwanted IGMP reports at the interface level. To filter unwanted IGMP reports, define
a policy to match only IGMP group addresses (for IGMPv2) by using the policy's route-filter
statement to match the group address. Define the policy to match IGMP (S,G) addresses (for
IGMPv3) by using the policy's route-filter statement to match the group address and the policy's
source-address-filter statement to match the source address. In the following example, the
amt_reject policy is created to match both the group and source addresses.
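A sketch of such a policy (the group and source addresses shown are placeholders):

[edit policy-options]
user@host# set policy-statement amt_reject from route-filter 233.252.0.1/32 exact
user@host# set policy-statement amt_reject from source-address-filter 192.0.2.66/32 exact
user@host# set policy-statement amt_reject then reject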
• To apply the IGMP report filtering on the interface where you prefer not to receive specific group or
(S,G) reports, include the group-policy statement. The following example applies the amt_reject
policy to all AMT interfaces.
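For example:

[edit protocols igmp amt relay defaults]
user@host# set group-policy amt_reject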
• You can change the IGMP query interval for all AMT interfaces to reduce or increase the number of
host query messages sent. In AMT, host query messages are sent in response to membership request
messages from the gateway. The query interval configured on the relay must be compatible with the
membership request timer configured on the gateway. To modify this interval, include the query-
interval statement. The following example sets the host query interval to 250 seconds.
The IGMP querier router periodically sends general host-query messages. These messages solicit
group membership information and are sent to the all-systems multicast group address, 224.0.0.1.
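For example, to set the host query interval to 250 seconds:

[edit protocols igmp amt relay defaults]
user@host# set query-interval 250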
• You can change the IGMP query response interval. The query response interval multiplied by the
robust count is the maximum amount of time that can elapse between the sending of a host query
message by the querier router and the receipt of a response from a host. Varying this interval allows
you to adjust the number of IGMP messages on the AMT interfaces. To modify this interval, include
the query-response-interval statement. The following example configures the query response
interval to 20 seconds.
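For example, to set the query response interval to 20 seconds:

[edit protocols igmp amt relay defaults]
user@host# set query-response-interval 20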
• You can change the IGMP robust count. The robust count is used to adjust for the expected packet
loss on the AMT interfaces. Increasing the robust count allows for more packet loss but increases the
leave latency of the subnetwork. To modify the robust count, include the robust-count statement.
The following example configures the robust count to 3.
The robust count automatically changes certain IGMP message intervals for IGMPv2 and IGMPv3.
• On a shared network running IGMPv2, when the query router receives an IGMP leave message, it
must send an IGMP group query message for a specified number of times. The number of IGMP
group query messages sent is determined by the robust count. The interval between query
messages is determined by the last member query interval. Also, the IGMPv2 query response
interval is multiplied by the robust count to determine the maximum amount of time between the
sending of a host query message and receipt of a response from a host.
For more information about the IGMPv2 robust count, see RFC 2236, Internet Group
Management Protocol, Version 2.
• In IGMPv3 a change of interface state causes the system to immediately transmit a state-change
report from that interface. If the state-change report is missed by one or more multicast routers, it
is retransmitted. The number of times it is retransmitted is the robust count minus one. In IGMPv3
the robust count is also a factor in determining the group membership interval, the older version
querier interval, and the other querier present interval.
For more information about the IGMPv3 robust count, see RFC 3376, Internet Group
Management Protocol, Version 3.
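For example, to set the robust count to 3 as described above:

[edit protocols igmp amt relay defaults]
user@host# set robust-count 3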
• You can apply a source-specific multicast (SSM) map to an AMT interface. SSM mapping translates
IGMPv1 or IGMPv2 membership reports to an IGMPv3 report, which allows hosts running IGMPv1
or IGMPv2 to participate in SSM until the hosts transition to IGMPv3.
SSM mapping applies to all group addresses that match the policy, not just those that conform to
SSM addressing conventions (232/8 for IPv4).
In this example, you create a policy to match the 232.1.1.1/32 group address for translation to
IGMPv3. Then you define the SSM map that associates the policy with the 192.168.43.66 source
address where these group addresses are found. Finally, you apply the SSM map to all AMT
interfaces.
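A sketch of this configuration (the policy and map names are placeholders; the group and source addresses are taken from the description above):

[edit policy-options]
user@host# set policy-statement ssm-policy-example from route-filter 232.1.1.1/32 exact
user@host# set policy-statement ssm-policy-example then accept
[edit routing-options multicast]
user@host# set ssm-map ssm-map-example policy ssm-policy-example
user@host# set ssm-map ssm-map-example source 192.168.43.66
[edit protocols igmp amt relay defaults]
user@host# set ssm-map ssm-map-example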
SEE ALSO
AMT Applications
Example: Configuring the AMT Protocol
Specifying Log File Size, Number, and Archiving Properties
Junos OS Administration Library for Routing Devices
IN THIS SECTION
Requirements | 591
Overview | 592
Configuration | 593
Verification | 596
This example shows how to configure the Automatic Multicast Tunneling (AMT) Protocol to facilitate
dynamic multicast connectivity between multicast-enabled networks across islands of unicast-only
networks.
Requirements
• Configure an interior gateway protocol or static routing. See the Junos OS Routing Protocols Library.
• Configure a multicast group membership protocol (IGMP or MLD). See Understanding IGMP and
Understanding MLD.
Overview
IN THIS SECTION
Topology | 593
In this example, Host 0 and Host 2 are multicast receivers in a unicast cloud. Their default gateway
devices are AMT gateways. R0 and R4 are configured with unicast protocols only. R1, R2, R3, and R5 are
configured with PIM multicast. Host 1 is a source in a multicast cloud. R0 and R5 are configured to
perform AMT relay. Host 3 and Host 4 are multicast receivers (or sources that are directly connected to
receivers). This example shows R1 configured with an AMT relay local address and an anycast prefix as
its own loopback address. The example also shows R0 configured with tunnel services enabled.
Topology
Configuration
IN THIS SECTION
Procedure | 594
Results | 595
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
Procedure
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
[edit chassis]
set fpc 0 pic 0 tunnel-services bandwidth 1g
user@host# commit
Results
From configuration mode, confirm your configuration by entering the show chassis and show protocols
commands. If the output does not display the intended configuration, repeat the instructions in this
example to correct the configuration.
Verification
SEE ALSO
RELATED DOCUMENTATION
CHAPTER 19
IN THIS CHAPTER
IN THIS SECTION
Understanding DVMRP
Distance Vector Multicast Routing Protocol (DVMRP) was deprecated in Junos OS Release 16.1.
Although DVMRP commands continue to be available and configurable in the CLI, they are no longer
supported and are scheduled for removal in a subsequent release.
The Distance Vector Multicast Routing Protocol (DVMRP) is a distance-vector routing protocol that
provides connectionless datagram delivery to a group of hosts across an internetwork. DVMRP is a
distributed protocol that dynamically generates IP multicast delivery trees by using a technique called
reverse-path multicasting (RPM) to forward multicast traffic to downstream interfaces. These
mechanisms allow the formation of shortest-path trees, which are used to reach all group members from
each network source of multicast traffic.
DVMRP is designed to be used as an interior gateway protocol (IGP) within a multicast domain.
Because not all IP routers support native multicast routing, DVMRP includes direct support for tunneling
IP multicast datagrams through routers. The IP multicast datagrams are encapsulated in unicast IP
packets and addressed to the routers that do support native multicast routing. DVMRP treats tunnel
interfaces and physical network interfaces the same way.
DVMRP routers dynamically discover their neighbors by sending neighbor probe messages periodically
to an IP multicast group address that is reserved for all DVMRP routers.
SEE ALSO
Configuring DVMRP
Configuring DVMRP
Distance Vector Multicast Routing Protocol (DVMRP) was deprecated in Junos OS Release 16.1.
Although DVMRP commands continue to be available and configurable in the CLI, they are no longer
supported and are scheduled for removal in a subsequent release.
Distance Vector Multicast Routing Protocol (DVMRP) is the first of the multicast routing protocols and
has a number of limitations that make this method unattractive for large-scale Internet use. DVMRP is a
dense-mode-only protocol, and uses the flood-and-prune or implicit join method to deliver traffic
everywhere and then determine where the uninterested receivers are. DVMRP uses source-based
distribution trees in the form (S,G).
To configure the Distance Vector Multicast Routing Protocol (DVMRP), include the dvmrp statement:
dvmrp {
disable;
export [ policy-names ];
import [ policy-names ];
interface interface-name {
disable;
hold-time seconds;
metric metric;
mode (forwarding | unicast-routing);
}
rib-group group-name;
traceoptions {
file filename <files number> <size size> <world-readable | no-world-readable>;
flag flag <flag-modifier> <disable>;
}
}
• [edit protocols]
SEE ALSO
IN THIS SECTION
Requirements | 600
Overview | 601
Configuration | 602
Verification | 604
This example shows how to use DVMRP to announce routes used for multicast routing as well as
multicast data forwarding.
Distance Vector Multicast Routing Protocol (DVMRP) was deprecated in Junos OS Release 16.1.
Although DVMRP commands continue to be available and configurable in the CLI, they are no longer
supported and are scheduled for removal in a subsequent release.
Requirements
• Configure an interior gateway protocol or static routing. See the Junos OS Routing Protocols Library
for Routing Devices.
Overview
DVMRP is a distance vector protocol for multicast. It is similar to RIP, in that both RIP and DVMRP have
issues with scalability and robustness. PIM domains are more commonly used than DVMRP domains. In
some environments, you might need to configure interoperability with DVMRP.
• protocols dvmrp rib-group—Associates the dvmrp-rib routing table group with the DVMRP protocol
to enable multicast RPF lookup.
• protocols dvmrp interface—Configures the DVMRP interface. The interface of a DVMRP router can
be either a physical interface to a directly attached subnetwork or a tunnel interface to another
multicast-capable area of the Multicast Backbone (MBone).
• protocols dvmrp interface hold-time—The DVMRP hold-time period is the amount of time that a
neighbor is to consider the sending router (this router) to be operative (up). The default hold-time
period is 35 seconds.
• protocols dvmrp interface metric—All interfaces can be configured with a metric specifying cost for
receiving packets on a given interface. The default metric is 1.
For each source network reported, a route metric is associated with the unicast route being reported.
The metric is the sum of the interface metrics between the router originating the report and the
source network. A metric of 32 marks the source network as unreachable, thus limiting the breadth
of the DVMRP network and placing an upper bound on the DVMRP convergence time.
• routing-options rib-groups—Enables DVMRP to access route information from the unicast routing
table, inet.0, and from a separate routing table that is reserved for DVMRP. In this example, the first
routing table group named ifrg contains local interface routes. This ensures that local interface routes
get added to both the inet.0 table for use by unicast protocols and the inet.2 table for multicast RPF
check. The second routing table group named dvmrp-rib contains inet.2 routes.
DVMRP needs to access route information from the unicast routing table, inet.0, and from a separate
routing table that is reserved for DVMRP. You need to create the routing table for DVMRP and to
create groups of routing tables so that the routing protocol process imports and exports routes
properly. We recommend that you use routing table inet.2 for DVMRP routing information.
• routing-options interface-routes— After defining the ifrg routing table group, use the interface-
routes statement to insert interface routes into the ifrg group—in other words, into both inet.0 and
inet.2. By default, interface routes are imported into routing table inet.0 only.
• sap—Enables the Session Directory Announcement Protocol (SAP) and the Session Directory
Protocol (SDP). Enabling SAP allows the router to receive announcements about multimedia and
other multicast sessions.
SAP always listens to the address and port 224.2.127.254:9875 for session advertisements. To add
other addresses or pairs of address and port, include one or more listen statements.
Sessions learned by SDP, SAP's higher-layer protocol, time out after 60 minutes.
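For example, to listen on an additional address and port (the values shown are placeholders):

[edit protocols sap]
user@host# set listen 224.2.127.253 port 9876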
Configuration
IN THIS SECTION
Procedure | 602
Results | 604
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, and then copy and paste
the commands into the CLI at the [edit] hierarchy level.
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
[edit routing-options]
user@host# set interface-routes rib-group inet ifrg
user@host# set rib-groups ifrg import-rib [ inet.0 inet.2 ]
user@host# set rib-groups dvmrp-rib import-rib inet.2
user@host# set rib-groups dvmrp-rib export-rib inet.2
[edit protocols]
user@host# set sap
3. Enable DVMRP on the router and associate the dvmrp-rib routing table group with DVMRP to
enable multicast RPF checks.
[edit protocols]
user@host# set dvmrp rib-group dvmrp-rib
4. Configure the DVMRP interface with a hold-time value and a metric. This example shows an IP-over-
IP encapsulation tunnel interface.
[edit protocols]
user@host# set dvmrp interface ip-0/0/0.0
user@host# set dvmrp interface ip-0/0/0.0 hold-time 40
user@host# set dvmrp interface ip-0/0/0.0 metric 5
user@host# commit
Results
Confirm your configuration by entering the show routing-options command and the show protocols
command from configuration mode. If the output does not display the intended configuration, repeat
the instructions in this example to correct the configuration.
Verification
SEE ALSO
Understanding DVMRP
Example: Configuring DVMRP to Announce Unicast Routes
IN THIS SECTION
Requirements | 605
Overview | 605
Configuration | 607
Verification | 610
Distance Vector Multicast Routing Protocol (DVMRP) was deprecated in Junos OS Release 16.1.
Although DVMRP commands continue to be available and configurable in the CLI, they are no longer
supported and are scheduled for removal in a subsequent release.
This example shows how to use DVMRP to announce unicast routes used solely for multicast reverse-
path forwarding (RPF) to set up the multicast control plane.
Requirements
• Configure an interior gateway protocol or static routing. See the Junos OS Routing Protocols Library
for Routing Devices.
Overview
IN THIS SECTION
Topology | 607
DVMRP has two modes. Forwarding mode is the default mode. In forwarding mode, DVMRP is
responsible for the multicast control plane and multicast data forwarding. In the nondefault mode (which
is shown in this example), DVMRP does not forward multicast data traffic. This mode is called unicast
routing mode because in this mode DVMRP is only responsible for announcing unicast routes used for
multicast RPF—in other words, for establishing the control plane. To forward multicast data, enable
Protocol Independent Multicast (PIM) on the interface. If you have configured PIM on the interface, as
shown in this example, you can configure DVMRP in unicast-routing mode only. You cannot configure
PIM and DVMRP in forwarding mode at the same time.
• protocols dvmrp export dvmrp-export—Associates the dvmrp-export policy with the DVMRP
protocol.
All routing protocols use the routing table to store the routes that they learn and to determine which
routes they advertise in their protocol packets. Routing policy allows you to control which routes the
routing protocols store in and retrieve from the routing table. Import and export policies are always
from the point of view of the routing table. So the dvmrp-export policy exports static default routes
from the routing table and accepts them into DVMRP.
• protocols dvmrp interface all mode unicast-routing—Enables all interfaces to announce unicast routes
used solely for multicast RPF.
• protocols dvmrp rib-group inet dvmrp-rg—Associates the dvmrp-rib routing table group with the
DVMRP protocol to enable multicast RPF checks.
• protocols pim rib-group inet pim-rg—Associates the pim-rg routing table group with the PIM protocol
to enable multicast RPF checks.
• routing-options rib inet.2 static route 0.0.0.0/0 discard—Redistributes static routes to all DVMRP
neighbors. The inet.2 routing table stores unicast IPv4 routes for multicast RPF lookup. The discard
statement silently drops packets without notice.
• routing-options rib-groups dvmrp-rg import-rib inet.2—Creates the routing table for DVMRP to
ensure that the routing protocol process imports routes properly.
• routing-options rib-groups dvmrp-rg export-rib inet.2—Creates the routing table for DVMRP to
ensure that the routing protocol process exports routes properly.
• routing-options rib-groups pim-rg import-rib inet.2—Enables access to route information from the
routing table that stores unicast IPv4 routes for multicast RPF lookup. In this example, the first
routing table group named pim-rg contains local interface routes. This ensures that local interface
routes get added to the inet.2 table.
Topology
Configuration
IN THIS SECTION
Procedure | 607
Results | 609
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
[edit routing-options]
user@host# set rib inet.2 static route 0.0.0.0/0 discard
user@host# set rib-groups pim-rg import-rib inet.2
user@host# set rib-groups dvmrp-rg import-rib inet.2
user@host# set rib-groups dvmrp-rg export-rib inet.2
2. Configure DVMRP.
[edit protocols]
user@host# set dvmrp rib-group inet dvmrp-rg
user@host# set dvmrp export dvmrp-export
user@host# set dvmrp interface all mode unicast-routing
user@host# set dvmrp interface fxp0 disable
[edit protocols]
user@host# set pim rib-group inet pim-rg
user@host# set pim interface all
user@host# commit
Results
Confirm your configuration by entering the show policy-options command, the show protocols
command, and the show routing-options command from configuration mode. If the output does not
display the intended configuration, repeat the instructions in this example to correct the configuration.
Verification
SEE ALSO
Understanding DVMRP
Example: Configuring DVMRP
Tracing operations record detailed messages about the operation of routing protocols, such as the
various types of routing protocol packets sent and received, and routing policy actions. You can specify
which trace operations are logged by including specific tracing flags. The following table describes the
flags that you can include.
In the following example, tracing is enabled for all routing protocol packets. Then tracing is narrowed to
focus only on DVMRP packets of a particular type. To configure tracing operations for DVMRP:
1. (Optional) Configure tracing at the routing options level to trace all protocol packets.
6. Configure tracing flags. Suppose you are troubleshooting issues with a particular DVMRP neighbor.
The following example shows how to trace neighbor probe packets that match the neighbor’s IP
address.
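A sketch of these two steps (the file names and the flag shown are representative; see the traceoptions statement for the flags available in your release):

[edit routing-options]
user@host# set traceoptions file routing-log
user@host# set traceoptions flag all
[edit protocols dvmrp]
user@host# set traceoptions file dvmrp-log
user@host# set traceoptions flag probe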
SEE ALSO
Understanding DVMRP | 0
Tracing and Logging Junos OS Operations
Junos OS Administration Library for Routing Devices
Distance Vector Multicast Routing Protocol (DVMRP) was deprecated in Junos OS Release 16.1.
Although DVMRP commands continue to be available and configurable in the CLI, they are no longer
supported and are scheduled for removal in a subsequent release.
RELATED DOCUMENTATION
CHAPTER 20
IN THIS CHAPTER
Example: Configuring a Specific Tunnel for IPv4 Multicast VPN Traffic (Using Draft-Rosen MVPNs) | 636
Example: Configuring Data MDTs and Provider Tunnels Operating in Any-Source Multicast Mode | 690
Example: Configuring Data MDTs and Provider Tunnels Operating in Source-Specific Multicast Mode | 696
• Draft-rosen multicast VPNs with service provider tunnels operating in any-source multicast (ASM)
mode (also referred to as rosen 6 Layer 3 VPN multicast)—Described in RFC 4364, BGP/MPLS IP
Virtual Private Networks (VPNs) and based on Section 2 of the IETF Internet draft draft-rosen-vpn-
mcast-06.txt, Multicast in MPLS/BGP VPNs (expired April 2004).
• Draft-rosen multicast VPNs with service provider tunnels operating in source-specific multicast
(SSM) mode (also referred to as rosen 7 Layer 3 VPN multicast)—Described in RFC 4364, BGP/MPLS
IP Virtual Private Networks (VPNs) and based on the IETF Internet draft draft-rosen-vpn-
mcast-07.txt, Multicast in MPLS/BGP IP VPNs. Draft-rosen multicast VPNs with service provider
tunnels operating in SSM mode do not require that the provider (P) routers maintain any VPN-
specific Protocol-Independent Multicast (PIM) information.
NOTE: Draft-rosen multicast VPNs are not supported in a logical system environment even
though the configuration statements can be configured under the logical-systems hierarchy.
In a draft-rosen Layer 3 multicast virtual private network (MVPN) configured with service provider
tunnels, the VPN is multicast-enabled and configured to use the Protocol Independent Multicast (PIM)
protocol within the VPN and within the service provider (SP) network. A multicast-enabled VPN routing
and forwarding (VRF) instance corresponds to a multicast domain (MD), and a PE router attached to a
particular VRF instance is said to belong to the corresponding MD. For each MD there is a default
multicast distribution tree (MDT) through the SP backbone, which connects all of the PE routers
belonging to that MD. Any PE router configured with a default MDT group address can be the multicast
source of one default MDT.
Draft-rosen MVPNs with service provider tunnels start by sending all multicast traffic over a default
MDT, as described in section 2 of the IETF Internet draft draft-rosen-vpn-mcast-06.txt and section 7 of
the IETF Internet draft draft-rosen-vpn-mcast-07.txt. This default mapping results in the delivery of
packets to each provider edge (PE) router attached to the VPN, even if the PE router has no
receivers for the multicast group in that VPN. Each PE router processes the encapsulated VPN traffic
even if the multicast packets are then discarded.
RELATED DOCUMENTATION
IN THIS SECTION
An ASM network must be able to determine the locations of all sources for a particular multicast group
whenever there are interested listeners, no matter where the sources might be located in the network.
In ASM, source discovery is a required function of the network itself.
In an environment where many sources come and go, such as for a video conferencing service, ASM is
appropriate. Multicast source discovery appears to be an easy process, but in sparse mode it is not. In
dense mode, it is simple enough to flood traffic to every router in the network so that every router
learns the source address of the content for that multicast group.
However, in PIM sparse mode, the flooding presents scalability and network resource use issues and is
not a viable option.
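In PIM sparse mode, source discovery is instead handled by a rendezvous point (RP): sources register with the RP, and every other router needs to know only the RP address. As a rough sketch (the address is illustrative), the router acting as the RP is configured with rp local:

```
[edit protocols pim]
user@host# set rp local address 10.255.71.47
```

and all other routers point at it with rp static:

```
[edit protocols pim]
user@host# set rp static address 10.255.71.47
user@host# set interface all mode sparse
```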
SEE ALSO
IN THIS SECTION
Requirements | 618
Overview | 618
Configuration | 621
Verification | 630
This example shows how to configure an any-source multicast VPN (MVPN) using a dual PIM
configuration with a customer RP and a provider RP, mapping the multicast routes from customer to
provider (known as draft-rosen). The Junos OS complies with RFC 4364 and Internet draft draft-rosen-
vpn-mcast-07.txt, Multicast in MPLS/BGP IP VPNs.
Requirements
• Configure the router interfaces. See the Junos OS Network Interfaces Library for Routing Devices.
• Configure an interior gateway protocol or static routing. See the Junos OS Routing Protocols Library
for Routing Devices.
• Configure the VPN. See the Junos OS VPNs Library for Routing Devices.
• Configure the VPN import and VPN export policies. See Configuring Policies for the VRF Table on PE
Routers in VPNs in the Junos OS VPNs Library for Routing Devices.
• Make sure that the routing devices support multicast tunnel (mt) interfaces for encapsulating and de-
encapsulating data packets into tunnels. See Tunnel Services PICs and Multicast and Load Balancing
Multicast Tunnel Interfaces Among Available PICs.
For multicast to work on draft-rosen Layer 3 VPNs, each of the following routers must have tunnel
interfaces:
• Any customer edge (CE) router that is acting as a source's DR or as an RP. A receiver's designated
router does not need a Tunnel Services PIC.
Overview
IN THIS SECTION
Topology | 621
Draft-rosen multicast virtual private networks (MVPNs) can be configured to support service provider
tunnels operating in any-source multicast (ASM) mode or source-specific multicast (SSM) mode.
In this example, the term multicast Layer 3 VPNs is used to refer to draft-rosen MVPNs.
• interface lo0.1—Configures an additional unit on the loopback interface of the PE router. For the
lo0.1 interface, assign an address from the VPN address space. Add the lo0.1 interface to the
following places in the configuration:
• IGP and BGP policies to advertise the interface in the VPN address space
In multicast Layer 3 VPNs, the multicast PE routers must use the primary loopback address (or router
ID) for sessions with their internal BGP peers. If the PE routers use a route reflector and the next hop
is configured as self, Layer 3 multicast over VPN will not work, because PIM cannot transmit
upstream interface information for multicast sources behind remote PEs into the network core.
Multicast Layer 3 VPNs require that the BGP next-hop address of the VPN route match the BGP
next-hop address of the loopback VRF instance address.
• protocols pim interface—Configures the interfaces between each provider router and the PE routers.
On all CE routers, include this statement on the interfaces facing toward the provider router acting as
the RP.
• protocols pim mode sparse—Enables PIM sparse mode on the lo0 interface of all PE routers. You can
either configure that specific interface or configure all interfaces with the interface all statement. On
CE routers, you can configure sparse mode or sparse-dense mode.
• protocols pim rp local—On all routers acting as the RP, configure the address of the local lo0
interface. The P router acts as the RP router in this example.
• protocols pim rp static—On all PE and CE routers, configure the address of the router acting as the
RP.
It is possible for a PE router to be configured as the VPN customer RP (C-RP) router. A PE router can
also act as the DR. This type of PE configuration can simplify configuration of customer DRs and
VPN C-RPs for multicast VPNs. This example does not discuss the use of the PE as the VPN C-RP.
Figure 80 on page 619 shows multicast connectivity on the customer edge. In the figure, CE2 is the
RP router. However, the RP router can be anywhere in the customer network.
• protocols pim version 2—Enables PIM version 2 on the lo0 interface of all PE routers and CE routers.
You can either configure that specific interface or configure all interfaces with the interface all
statement.
• group-address—In a routing instance, configure multicast connectivity for the VPN on the PE routers.
Configure a VPN group address on the interfaces facing toward the router acting as the RP.
The PIM configuration in the VPN routing and forwarding (VRF) instance on the PE routers needs to
match the master PIM instance on the CE router. Therefore, the PE router contains both a master
PIM instance (to communicate with the provider core) and the VRF instance (to communicate with
the CE routers).
VRF instances that are part of the same VPN share the same VPN group address. For example, all PE
routers containing multicast-enabled routing instance VPN-A share the same VPN group address
configuration. In Figure 81 on page 620, the shared VPN group address configuration is 239.1.1.1.
• routing-instances instance-name protocols pim rib-group—Adds the routing group to the VPN's VRF
instance.
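Taken together, these statements give the VRF PIM configuration on a PE router roughly the following shape. This is a sketch assembled from the values used later in this example (instance VPN-A, VPN C-RP 10.255.245.91, VPN group address 239.1.1.1, and rib group VPNA-mcast-rib):

```
[edit routing-instances VPN-A]
user@host# set protocols pim rib-group inet VPNA-mcast-rib
user@host# set protocols pim rp static address 10.255.245.91
user@host# set protocols pim interface lo0.1 mode sparse
user@host# set provider-tunnel pim-asm group-address 239.1.1.1
```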
Topology
This example describes how to configure multicast in PIM sparse mode for a range of multicast
addresses for VPN-A as shown in Figure 82 on page 621.
Configuration
IN THIS SECTION
Procedure | 621
Results | 628
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, and then copy and paste
the commands into the CLI at the [edit] hierarchy level.
PE1
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
1. Configure PIM on the P router, which acts as the RP for the provider core (10.255.71.47).
[edit]
user@host# edit protocols pim
[edit protocols pim]
user@host# set dense-groups 224.0.1.39/32
[edit protocols pim]
user@host# set dense-groups 224.0.1.40/32
[edit protocols pim]
user@host# set rp local address 10.255.71.47
[edit protocols pim]
2. Configure PIM on the PE1 and PE2 routers. Specify a static RP—the P router (10.255.71.47).
[edit]
user@host# edit protocols pim
[edit protocols pim]
user@host# set rp static address 10.255.71.47
[edit protocols pim]
user@host# set interface all mode sparse
[edit protocols pim]
user@host# set interface all version 2
[edit protocols pim]
user@host# set interface fxp0.0 disable
[edit protocols pim]
user@host# exit
3. Configure PIM on CE1. Specify the RP address for the VPN RP—Router CE2 (10.255.245.91).
[edit]
user@host# edit protocols pim
[edit protocols pim]
user@host# set rp static address 10.255.245.91
[edit protocols pim]
user@host# set interface all mode sparse
[edit protocols pim]
user@host# set interface all version 2
[edit protocols pim]
user@host# set interface fxp0.0 disable
[edit protocols pim]
user@host# exit
4. Configure PIM on CE2, which acts as the VPN RP. Specify CE2's address (10.255.245.91).
[edit]
user@host# edit protocols pim
[edit protocols pim]
user@host# set rp local address 10.255.245.91
[edit protocols pim]
user@host# set interface all mode sparse
[edit protocols pim]
user@host# set interface all version 2
[edit protocols pim]
user@host# set interface fxp0.0 disable
[edit protocols pim]
user@host# exit
5. On PE1, configure the routing instance (VPN-A) for the Layer 3 VPN.
[edit]
user@host# edit routing-instances VPN-A
[edit routing-instances VPN-A]
user@host# set instance-type vrf
[edit routing-instances VPN-A]
user@host# set interface t1-1/0/0:0.0
[edit routing-instances VPN-A]
user@host# set interface lo0.1
[edit routing-instances VPN-A]
user@host# set route-distinguisher 10.255.71.46:100
[edit routing-instances VPN-A]
user@host# set vrf-import VPNA-import
[edit routing-instances VPN-A]
user@host# set vrf-export VPNA-export
6. On PE1, configure the IGP policy to advertise the interfaces in the VPN address space.
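The commands for this step are not shown here. A minimal sketch of such a policy, assuming the bgp-to-ospf policy name that appears later in this example:

```
[edit policy-options]
user@host# set policy-statement bgp-to-ospf term 1 from protocol bgp
user@host# set policy-statement bgp-to-ospf term 1 then accept
[edit]
user@host# set routing-instances VPN-A protocols ospf export bgp-to-ospf
```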
7. On PE1, set the RP configuration for the VRF instance. The RP configuration within the VRF
instance provides explicit knowledge of the RP address, so that the (*,G) state can be forwarded.
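The commands for this step are not shown here. Based on the equivalent PE2 configuration in step 9, they would be similar to:

```
[edit]
user@host# edit routing-instances VPN-A
[edit routing-instances VPN-A]
user@host# set protocols pim rp static address 10.255.245.91
```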
8. On PE1, configure the loopback interface addresses.
[edit]
user@host# edit interfaces lo0
[edit interfaces lo0]
user@host# set unit 0 family inet address 192.168.27.13/32 primary
[edit interfaces lo0]
user@host# set unit 0 family inet address 127.0.0.1/32
[edit interfaces lo0]
user@host# set unit 1 family inet address 10.10.47.101/32
[edit interfaces lo0]
user@host# exit
9. As you did for the PE1 router, configure the PE2 router.
[edit]
user@host# edit routing-instances VPN-A
[edit routing-instances VPN-A]
user@host# set instance-type vrf
[edit routing-instances VPN-A]
user@host# set interface t1-2/0/0:0.0
[edit routing-instances VPN-A]
user@host# set interface lo0.1
[edit routing-instances VPN-A]
user@host# set route-distinguisher 10.255.71.51:100
[edit routing-instances VPN-A]
user@host# set vrf-import VPNA-import
[edit routing-instances VPN-A]
user@host# set vrf-export VPNA-export
[edit routing-instances VPN-A]
user@host# set protocols ospf export bgp-to-ospf
[edit routing-instances VPN-A]
user@host# set protocols ospf area 0.0.0.0 interface t1-2/0/0:0.0
[edit routing-instances VPN-A]
user@host# set protocols ospf area 0.0.0.0 interface lo0.1
[edit routing-instances VPN-A]
user@host# set protocols pim rp static address 10.255.245.91
[edit routing-instances VPN-A]
user@host# set protocols pim mvpn
[edit routing-instances VPN-A]
user@host# set protocols pim interface t1-2/0/0:0.0 mode sparse
[edit routing-instances VPN-A]
user@host# set protocols pim interface lo0.1 mode sparse
[edit routing-instances VPN-A]
user@host# set protocols pim interface lo0.1 version 2
[edit routing-instances VPN-A]
user@host# set provider-tunnel pim-asm group-address 239.1.1.1
user@host# exit
[edit]
user@host# edit interfaces lo0
[edit interfaces lo0]
user@host# set unit 0 family inet address 192.168.27.14/32 primary
[edit interfaces lo0]
user@host# set unit 0 family inet address 127.0.0.1/32
10. When one of the PE routers is running Cisco Systems IOS software, you must configure the Juniper
Networks PE router to support this multicast interoperability requirement. The Juniper Networks
PE router must have the lo0.0 interface in the master routing instance and the lo0.1 interface
assigned to the VPN routing instance. You must configure the lo0.1 interface with the same IP
address that the lo0.0 interface uses for BGP peering in the provider core in the master routing
instance.
Configure the same IP address on the lo0.0 and lo0.1 loopback interfaces of the Juniper Networks
PE router at the [edit interfaces lo0] hierarchy level, and assign the address used for BGP peering in
the provider core in the master routing instance. In this alternate example, unit 0 and unit 1 are
configured for Cisco IOS interoperability.
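For example, assuming the PE1 peering address 192.168.27.13 used earlier in this example, the alternate configuration might look like this sketch (the address must match the one your network uses for BGP peering):

```
[edit interfaces lo0]
user@host# set unit 0 family inet address 192.168.27.13/32 primary
user@host# set unit 1 family inet address 192.168.27.13/32
```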
11. Configure the multicast routing table group. This group accesses inet.2 when doing RPF checks.
However, if you are using inet.0 for multicast RPF checks, skip this step, because configuring the
rib group prevents your multicast configuration from working.
[edit]
user@host# edit routing-options
[edit routing-options]
user@host# set interface-routes rib-group inet VPNA-mcast-rib
[edit routing-options]
user@host# set rib-groups VPNA-mcast-rib export-rib VPN-A.inet.2
[edit routing-options]
user@host# set rib-groups VPNA-mcast-rib import-rib VPN-A.inet.2
[edit routing-options]
user@host# exit
12. Activate the multicast routing table group in the VPN's VRF instance.
[edit]
user@host# edit routing-instances VPN-A
[edit routing-instances VPN-A]
user@host# set protocols pim rib-group inet VPNA-mcast-rib
13. If you are done configuring the device, commit the configuration.
Results
Confirm your configuration by entering the show interfaces, show protocols, show routing-instances,
and show routing-options commands from configuration mode. If the output does not display the
intended configuration, repeat the instructions in this example to correct the configuration. This output
shows the configuration on PE1.
rp {
static {
address 10.255.71.47;
}
}
interface fxp0.0 {
disable;
}
interface all {
mode sparse;
version 2;
}
}
}
}
interface t1-1/0/0:0.0 {
mode sparse;
version 2;
}
interface lo0.1 {
mode sparse;
version 2;
}
}
}
}
Verification
1. Display multicast tunnel information and the number of neighbors by using the show pim
interfaces instance instance-name command from the PE1 or PE2 router. When issued from the
PE1 router, the output display is:
You can also display all PE tunnel interfaces by using the show pim join command from the
provider router acting as the RP.
2. Display multicast tunnel interface information, DR information, and the PIM neighbor status between
VRF instances on the PE1 and PE2 routers by using the show pim neighbors instance instance-
name command from either PE router. When issued from the PE1 router, the output is as follows:
SEE ALSO
To generate multicast tunnel interfaces, a routing device must have one or more of the following tunnel-
capable PICs:
• On MX Series routers, a PIC created with the tunnel-services statement at the [edit chassis fpc slot-
number pic number] hierarchy level
If a routing device has multiple such PICs, it might be important in your implementation to load balance
the tunnel interfaces across the available tunnel-capable PICs.
The multicast tunnel interface that is used for encapsulation, mt-[xxxxx], is in the range from 32,768
through 49,151. The interface mt-[yyyyy], used for de-encapsulation, is in the range from 1,081,344
through 1,107,827. PIM runs only on the encapsulation interface. The de-encapsulation interface
populates downstream interface information. For the default MDT, an instance’s de-encapsulation and
encapsulation interfaces are always created on the same PIC.
For each VPN, the PE routers build a multicast distribution tree within the service provider core
network. After the tree is created, each PE router encapsulates all multicast traffic (data and control
messages) from the attached VPN and sends the encapsulated traffic to the VPN group address.
Because all the PE routers are members of the outgoing interface list in the multicast distribution tree
for the VPN group address, they all receive the encapsulated traffic. When the PE routers receive the
encapsulated traffic, they de-encapsulate the messages and send the data and control messages to the
CE routers.
If a routing device has multiple tunnel-capable PICs (for example, two Tunnel Services PICs), the routing
device load balances the creation of tunnel interfaces among the available PICs. However, in some cases
(for example, after a reboot), a single PIC might be selected for all of the tunnel interfaces. This causes
one PIC to have a heavy load, while other available PICs are underutilized. To prevent this, you can
manually configure load balancing. Thus, you can configure and distribute the load uniformly across the
available PICs.
The definition of a balanced state is determined by you and by the requirements of your Layer 3 VPN
implementation. You might want all of the instances to be evenly distributed across the available PICs or
across a configured list of PICs. You might want all of the encapsulation interfaces from all of the
instances to be evenly distributed across the available PICs or across a configured list of PICs. If the
bandwidth of each tunnel encapsulation interface is considered, you might choose a different
distribution. You can design your load-balancing configuration based on each instance or on each
routing device.
NOTE: In a Layer 3 VPN, each of the following routing devices must have at least one tunnel-
capable PIC:
• Any customer edge (CE) router that is acting as a source's DR or as an RP. A receiver's
designated router does not need a tunnel-capable PIC.
1. On an M Series or T Series router or on an EX Series switch, install more than one tunnel-capable
PIC. (In some implementations, only one PIC is required. Load balancing is based on the assumption
that a routing device has more than one tunnel-capable PIC.)
3. Configure Layer 3 VPNs as described in Example: Configuring Any-Source Multicast for Draft-Rosen
VPNs.
The physical position of the PIC in the routing device determines the multicast tunnel interface
name. For example, if you have an Adaptive Services PIC installed in FPC slot 0 and PIC slot 0, the
corresponding multicast tunnel interface name is mt-0/0/0. The same is true for Tunnel Services
PICs, Multiservices PICs, and Multiservices DPCs.
In the tunnel-devices statement, the order of the PIC list that you specify does not impact how the
interfaces are allocated. An instance uses all of the listed PICs to create default encapsulation and
de-encapsulation interfaces, and data MDT encapsulation interfaces. The instance uses a round-robin
approach to distributing the tunnel interfaces (default and data MDT) across the PIC list (or across
the available PICs, in the absence of a PIC list).
For the first tunnel, the round-robin algorithm starts with the lowest-numbered PIC. The second
tunnel is created on the next-lowest-numbered PIC, and so on, round and round. The selection
algorithm works routing device-wide. The round robin does not restart at the lowest-numbered PIC
for each new instance. This applies to both the default and data MDT tunnel interfaces.
If one PIC in the list fails, new tunnel interfaces are created on the remaining PICs in the list using the
round-robin algorithm. If all the PICs in the list go down, all tunnel interfaces are deleted and no new
tunnel interfaces are created. If a PIC in the list comes up from the down state and the restored PIC
is the only PIC that is up, the interfaces are reassigned to the restored PIC. If a PIC in the list comes
up from the down state and other PICs are already up, an interface reassignment is not done.
However, when a new tunnel interface needs to be created, the restored PIC is available for the
selection process. If you include in the PIC list a PIC that is not installed on the routing device, the
PIC is treated as if it is present but in the down state.
To balance the interfaces among the instances, you can assign one PIC to each instance. For example,
if you have vpn1-10 and you have three PICs—for example, mt-1/1/0, mt-1/2/0, mt-2/0/0—you can
configure vpn1-4 to only use mt-1/1/0, vpn5-7 to use mt-1/2/0, and vpn8-10 to use mt-2/0/0.
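The per-instance assignment just described is made with the tunnel-devices statement. As a sketch (the exact hierarchy shown here is an assumption; adjust the instance and PIC names for your network):

```
[edit]
user@host# set routing-instances vpn1 protocols pim tunnel-devices mt-1/1/0
user@host# set routing-instances vpn5 protocols pim tunnel-devices mt-1/2/0
user@host# set routing-instances vpn8 protocols pim tunnel-devices mt-2/0/0
```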
5. Commit the configuration.
user@host# commit
When you commit a new PIC list configuration, all the multicast tunnel interfaces for the routing
instance are deleted and re-created using the new PIC list.
6. If you reboot the routing device, some PICs come up faster than others. The difference can be
minutes. Therefore, when the tunnel interfaces are created, the known PIC list might not be the same
as when the routing device is fully rebooted. This causes the tunnel interfaces to be created on some
but not all available and configured PICs. To remedy this situation, you can manually rebalance the
PIC load.
Check to determine if a load rebalance is necessary.
The output shows that mt-1/1/0 has only one tunnel encapsulation interface, while mt-1/2/0 has
three tunnel encapsulation interfaces. In a case like this, you might decide to rebalance the interfaces.
As stated previously, encapsulation interfaces are in the range from 32,768 through 49,151. In
determining whether a rebalance is necessary, look at the encapsulation interfaces only, because the
default MDT de-encapsulation interface always resides on the same PIC with the default MDT
encapsulation interface.
7. (Optional) Rebalance the PIC load.
This command re-creates and rebalances all tunnel interfaces for a specific instance.
This command re-creates and rebalances all tunnel interfaces for all routing instances.
8. Verify that the PIC load is balanced.
The output shows that mt-1/1/0 has two encapsulation interfaces, and mt-1/2/0 also has two
encapsulation interfaces.
SEE ALSO
RELATED DOCUMENTATION
IN THIS SECTION
Requirements | 636
Overview | 636
Verification | 650
This example shows how to configure different provider tunnels to carry IPv4 customer traffic in a
multicast VPN network.
Requirements
This example uses the following hardware and software components:
• The PE routers can be M Series Multiservice Edge Routers, MX Series Ethernet Services Routers, or T
Series Core Routers.
• The CE devices can be switches (such as EX Series Ethernet Switches), or they can be routers (such
as M Series, MX Series, or T Series platforms).
Overview
IN THIS SECTION
A multicast tunnel is a mechanism to deliver control and data traffic across the provider core in a
multicast VPN. Control and data packets are transmitted over the multicast distribution tree in the
provider core. When a service provider carries both IPv4 and IPv6 traffic from a single customer, it is
sometimes useful to separate the IPv4 and IPv6 traffic onto different multicast tunnels within the
customer VRF routing instance. Putting customer IPv4 and IPv6 traffic on two different tunnels provides
flexibility and control. For example, it helps the service provider to charge appropriately, to manage and
measure traffic patterns, and to have an improved capability to make decisions when deploying new
services.
A draft-rosen 7 multicast VPN control plane is configured in this example. The control plane is
configured to use source-specific multicast (SSM) mode. The provider tunnel is used for the draft-rosen
7 control traffic and IPv4 customer traffic.
This example uses the following statements to configure the draft-rosen 7 control plane and specify
IPv4 traffic to be carried in the provider tunnel:
• Junos OS does not support more than two provider tunnels in a routing instance. For example, you
cannot configure an RSVP-TE provider tunnel plus two MVPN provider tunnels.
• In a routing instance, you cannot configure both an any-source multicast (ASM) tunnel and an SSM
tunnel.
Topology Diagram
Figure 83: Different Provider Tunnels for IPv4 Multicast VPN Traffic
PE Router Configuration
IN THIS SECTION
Results | 643
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, and then copy and paste
the commands into the CLI at the [edit] hierarchy level.
Router PE1
Router PE2
Router PE1
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see the Junos OS CLI User Guide.
[edit interfaces]
user@PE1# set so-0/0/3 unit 0 family inet address 10.111.10.1/30
user@PE1# set so-0/0/3 unit 0 family mpls
user@PE1# set fe-1/1/2 unit 0 family inet address 10.10.10.1/30
user@PE1# set lo0 unit 0 family inet address 10.255.182.133/32 primary
user@PE1# set lo0 unit 1 family inet address 10.10.47.100/32
2. Configure a routing policy to export BGP routes from the routing table into OSPF.
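The commands for this step are not shown here. A minimal sketch, assuming the bgp-to-ospf policy name that the routing instance applies later in this example:

```
[edit policy-options]
user@PE1# set policy-statement bgp-to-ospf term 1 from protocol bgp
user@PE1# set policy-statement bgp-to-ospf term 1 then accept
```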
3. Configure the router ID, route distinguisher, and autonomous system number.
[edit routing-options]
user@PE1# set router-id 10.255.182.133
user@PE1# set route-distinguisher-id 10.255.182.133
user@PE1# set autonomous-system 100
4. Configure the protocols that need to run in the main routing instance to enable MPLS, BGP, the IGP,
VPNs, and PIM sparse mode.
[edit protocols]
user@PE1# set mpls interface all
user@PE1# set mpls interface fxp0.0 disable
user@PE1# set bgp group ibgp type internal
user@PE1# set bgp group ibgp local-address 10.255.182.133
user@PE1# set bgp group ibgp family inet-vpn unicast
user@PE1# set bgp group ibgp neighbor 10.255.182.142
user@PE1# set ospf traffic-engineering
user@PE1# set ospf area 0.0.0.0 interface all
user@PE1# set ospf area 0.0.0.0 interface fxp0.0 disable
user@PE1# set ldp interface all
user@PE1# set pim rp local address 10.255.182.133
user@PE1# set pim interface all mode sparse
user@PE1# set pim interface all version 2
user@PE1# set pim interface fxp0.0 disable
6. Configure the draft-rosen 7 control plane, and specify IPv4 traffic to be carried in the provider tunnel.
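The commands for this step are not shown here. Based on the Results output that follows (the mvpn autodiscovery, group-range, and tunnel-limit values), the configuration would be similar to this sketch; the exact provider-tunnel hierarchy is an assumption:

```
[edit routing-instances VPN-A]
user@PE1# set protocols pim mvpn family inet autodiscovery inet-mdt
user@PE1# set protocols pim mvpn family inet6 disable
user@PE1# set provider-tunnel family inet pim-ssm group-range 232.1.1.3/32
user@PE1# set provider-tunnel family inet pim-ssm tunnel-limit 20
```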
Results
From configuration mode, confirm your configuration by entering the show interfaces, show policy-
options, show protocols, show routing-instances, and show routing-options commands. If the output
does not display the intended configuration, repeat the instructions in this example to correct the
configuration.
}
fe-1/1/2 {
unit 0 {
family inet {
address 10.10.10.1/30;
}
}
}
interface fxp0.0 {
disable;
}
}
}
ldp {
interface all;
}
pim {
rp {
local {
address 10.255.182.133;
}
}
interface all {
mode sparse;
version 2;
}
interface fxp0.0 {
disable;
}
}
}
}
}
tunnel-limit 20;
group-range 232.1.1.3/32;
}
}
vrf-target target:100:10;
vrf-table-label;
protocols {
ospf {
export bgp-to-ospf;
area 0.0.0.0 {
interface all;
}
}
pim {
mvpn {
family {
inet {
autodiscovery {
inet-mdt;
}
}
inet6 {
disable;
}
}
}
rp {
static {
address 10.255.182.144;
}
}
interface lo0.1 {
mode sparse-dense;
}
interface fe-1/1/2.0 {
mode sparse-dense;
}
}
mvpn {
family {
inet {
autodiscovery-only {
intra-as {
inclusive;
}
}
}
}
}
}
}
If you are done configuring the router, enter commit from configuration mode.
Repeat the procedure for Router PE2, using the appropriate interface names and IP addresses.
CE Device Configuration
IN THIS SECTION
Results | 649
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, and then copy and paste
the commands into the CLI at the [edit] hierarchy level.
Device CE1
Device CE2
Device CE1
Step-by-Step Procedure
[edit interfaces]
user@CE1# set fe-0/1/0 unit 0 family inet address 10.10.10.2/30
user@CE1# set lo0 unit 0 family inet address 10.255.182.144/32 primary
[edit routing-options]
user@CE1# set router-id 10.255.182.144
3. Configure the protocols that need to run on the CE device to enable OSPF (for IPv4) and PIM sparse-
dense mode.
[edit protocols]
user@CE1# set ospf area 0.0.0.0 interface all
user@CE1# set ospf area 0.0.0.0 interface fxp0.0 disable
user@CE1# set pim rp local address 10.255.182.144
user@CE1# set pim interface all mode sparse-dense
user@CE1# set pim interface fxp0.0 disable
Results
From configuration mode, confirm your configuration by entering the show interfaces, show protocols,
and show routing-options commands. If the output does not display the intended configuration, repeat
the configuration instructions in this example to correct it.
address 10.255.182.144/32 {
primary;
}
}
}
}
If you are done configuring the router, enter commit from configuration mode.
Repeat the procedure for Device CE2, using the appropriate interface names and IP addresses.
Verification
IN THIS SECTION
Purpose
Verify that PIM multicast tunnel (mt) encapsulation and de-encapsulation interfaces come up.
Action
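The command itself is not shown in this excerpt; using the VPN-A instance from this example, it would be:

```
user@PE1> show pim interfaces instance VPN-A
```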
Meaning
The multicast tunnel interface that is used for encapsulation, mt-[xxxxx], is in the range from 32,768
through 49,151. The interface mt-[yyyyy], used for de-encapsulation, is in the range from 1,081,344
through 1,107,827. PIM runs only on the encapsulation interface. The de-encapsulation interface
populates downstream interface information.
Purpose
Verify that PIM neighborship is established over the multicast tunnel interface.
Action
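The command is not shown in this excerpt; using the VPN-A instance from this example, it would be:

```
user@PE1> show pim neighbors instance VPN-A
```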
Meaning
When the neighbor address is listed and the uptime is incrementing, it means that PIM neighborship is
established over the multicast tunnel interface.
Purpose
Confirm that the provider tunnel and control-plane protocols are correct.
Action
Meaning
Checking Routes
Purpose
Action
Group: 224.1.1.1
Source: 10.240.0.242/32
Upstream interface: fe-1/1/2.0
Downstream interface list:
mt-1/2/0.32768
Session description: NOB Cross media facilities
Statistics: 92 kBps, 1001 pps, 1869820 packets
Next-hop ID: 1048581
Upstream protocol: PIM
Route state: Active
Forwarding state: Forwarding
Cache lifetime/timeout: 360 seconds
Wrong incoming interface notifications: 0
Meaning
Purpose
Verify that both default and data MDT tunnels are correct.
Action
Instance: PIM.VPN-A
Tunnel direction: Incoming
Tunnel mode: PIM-SSM
Default group address: 232.1.1.1
Default source address: 10.255.182.142
Default tunnel interface: mt-1/2/0.1081345
Default tunnel source: 0.0.0.0
RELATED DOCUMENTATION
IN THIS SECTION
An ASM network must be able to determine the locations of all sources for a particular multicast group
whenever there are interested listeners, no matter where the sources might be located in the network.
In ASM, source discovery is a required function of the network itself.
In an environment where many sources come and go, such as for a video conferencing service, ASM is
appropriate. Multicast source discovery appears to be an easy process, but in sparse mode it is not. In
dense mode, it is simple enough to flood traffic to every router in the network so that every router
learns the source address of the content for that multicast group.
However, in PIM sparse mode, the flooding presents scalability and network resource use issues and is
not a viable option.
SEE ALSO
IN THIS SECTION
Requirements | 656
Overview | 656
Configuration | 659
Verification | 668
This example shows how to configure an any-source multicast VPN (MVPN) using a dual PIM
configuration with a customer RP and a provider RP, mapping the multicast routes from customer to
provider (known as draft-rosen). The Junos OS complies with RFC 4364 and Internet draft draft-rosen-
vpn-mcast-07.txt, Multicast in MPLS/BGP IP VPNs.
Requirements
• Configure the router interfaces. See the Junos OS Network Interfaces Library for Routing Devices.
• Configure an interior gateway protocol or static routing. See the Junos OS Routing Protocols Library
for Routing Devices.
• Configure the VPN. See the Junos OS VPNs Library for Routing Devices.
• Configure the VPN import and VPN export policies. See Configuring Policies for the VRF Table on PE
Routers in VPNs in the Junos OS VPNs Library for Routing Devices.
• Make sure that the routing devices support multicast tunnel (mt) interfaces for encapsulating and de-
encapsulating data packets into tunnels. See Tunnel Services PICs and Multicast and Load Balancing
Multicast Tunnel Interfaces Among Available PICs.
For multicast to work on draft-rosen Layer 3 VPNs, each of the following routers must have tunnel
interfaces:
• Any customer edge (CE) router that is acting as a source's DR or as an RP. A receiver's designated
router does not need a Tunnel Services PIC.
Overview
IN THIS SECTION
Topology | 659
Draft-rosen multicast virtual private networks (MVPNs) can be configured to support service provider
tunnels operating in any-source multicast (ASM) mode or source-specific multicast (SSM) mode.
In this example, the term multicast Layer 3 VPNs is used to refer to draft-rosen MVPNs.
• interface lo0.1—Configures an additional unit on the loopback interface of the PE router. For the
lo0.1 interface, assign an address from the VPN address space. Add the lo0.1 interface to the
following places in the configuration:
• IGP and BGP policies to advertise the interface in the VPN address space
In multicast Layer 3 VPNs, the multicast PE routers must use the primary loopback address (or router
ID) for sessions with their internal BGP peers. If the PE routers use a route reflector and the next hop
is configured as self, Layer 3 multicast over VPN will not work, because PIM cannot transmit
upstream interface information for multicast sources behind remote PEs into the network core.
Multicast Layer 3 VPNs require that the BGP next-hop address of the VPN route match the BGP
next-hop address of the loopback VRF instance address.
• protocols pim interface—Configures the interfaces between each provider router and the PE routers.
On all CE routers, include this statement on the interfaces facing toward the provider router acting as
the RP.
• protocols pim mode sparse—Enables PIM sparse mode on the lo0 interface of all PE routers. You can
either configure that specific interface or configure all interfaces with the interface all statement. On
CE routers, you can configure sparse mode or sparse-dense mode.
• protocols pim rp local—On all routers acting as the RP, configure the address of the local lo0
interface. The P router acts as the RP router in this example.
• protocols pim rp static—On all PE and CE routers, configure the address of the router acting as the
RP.
It is possible for a PE router to be configured as the VPN customer RP (C-RP) router. A PE router can
also act as the DR. This type of PE configuration can simplify configuration of customer DRs and
VPN C-RPs for multicast VPNs. This example does not discuss the use of the PE as the VPN C-RP.
Figure 80 on page 619 shows multicast connectivity on the customer edge. In the figure, CE2 is the
RP router. However, the RP router can be anywhere in the customer network.
• protocols pim version 2—Enables PIM version 2 on the lo0 interface of all PE routers and CE routers.
You can either configure that specific interface or configure all interfaces with the interface all
statement.
• group-address—In a routing instance, configure multicast connectivity for the VPN on the PE routers.
Configure a VPN group address on the interfaces facing toward the router acting as the RP.
The PIM configuration in the VPN routing and forwarding (VRF) instance on the PE routers needs to
match the master PIM instance on the CE router. Therefore, the PE router contains both a master
PIM instance (to communicate with the provider core) and the VRF instance (to communicate with
the CE routers).
VRF instances that are part of the same VPN share the same VPN group address. For example, all PE
routers containing multicast-enabled routing instance VPN-A share the same VPN group address
configuration. In Figure 81 on page 620, the shared VPN group address configuration is 239.1.1.1.
• routing-instances instance-name protocols pim rib-group—Adds the routing group to the VPN's VRF
instance.
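Taken together, and drawing on the step-by-step procedure later in this example, the options above
correspond to statements such as the following on a PE router:

```
[edit]
user@host# set interfaces lo0 unit 1 family inet address 10.10.47.101/32
user@host# set protocols pim rp static address 10.255.71.47
user@host# set protocols pim interface all mode sparse
user@host# set protocols pim interface all version 2
user@host# set routing-instances VPN-A protocols pim interface lo0.1 mode sparse
user@host# set routing-instances VPN-A provider-tunnel pim-asm group-address 239.1.1.1
user@host# set routing-instances VPN-A protocols pim rib-group inet VPNA-mcast-rib
```

This is a condensed sketch, not a complete configuration; the instance name VPN-A, the rib group
VPNA-mcast-rib, and the addresses are the ones used in this example.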
Topology
This example describes how to configure multicast in PIM sparse mode for a range of multicast
addresses for VPN-A as shown in Figure 82 on page 621.
Configuration
IN THIS SECTION
Procedure | 659
Results | 666
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, and then copy and paste
the commands into the CLI at the [edit] hierarchy level.
PE1
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
1. Configure PIM on the P router, which acts as the RP (10.255.71.47) in this example.
[edit]
user@host# edit protocols pim
[edit protocols pim]
user@host# set dense-groups 224.0.1.39/32
[edit protocols pim]
user@host# set dense-groups 224.0.1.40/32
[edit protocols pim]
user@host# set rp local address 10.255.71.47
[edit protocols pim]
2. Configure PIM on the PE1 and PE2 routers. Specify a static RP—the P router (10.255.71.47).
[edit]
user@host# edit protocols pim
[edit protocols pim]
user@host# set rp static address 10.255.71.47
[edit protocols pim]
user@host# set interface all mode sparse
[edit protocols pim]
user@host# set interface all version 2
[edit protocols pim]
user@host# set interface fxp0.0 disable
[edit protocols pim]
user@host# exit
3. Configure PIM on CE1. Specify the RP address for the VPN RP—Router CE2 (10.255.245.91).
[edit]
user@host# edit protocols pim
[edit protocols pim]
user@host# set rp static address 10.255.245.91
[edit protocols pim]
user@host# set interface all mode sparse
[edit protocols pim]
user@host# set interface all version 2
[edit protocols pim]
user@host# set interface fxp0.0 disable
[edit protocols pim]
user@host# exit
4. Configure PIM on CE2, which acts as the VPN RP. Specify CE2's address (10.255.245.91).
[edit]
user@host# edit protocols pim
[edit protocols pim]
user@host# set rp local address 10.255.245.91
[edit protocols pim]
user@host# set interface all mode sparse
[edit protocols pim]
user@host# set interface all version 2
[edit protocols pim]
user@host# set interface fxp0.0 disable
[edit protocols pim]
user@host# exit
5. On PE1, configure the routing instance (VPN-A) for the Layer 3 VPN.
[edit]
user@host# edit routing-instances VPN-A
[edit routing-instances VPN-A]
user@host# set instance-type vrf
[edit routing-instances VPN-A]
user@host# set interface t1-1/0/0:0.0
[edit routing-instances VPN-A]
user@host# set interface lo0.1
[edit routing-instances VPN-A]
user@host# set route-distinguisher 10.255.71.46:100
[edit routing-instances VPN-A]
user@host# set vrf-import VPNA-import
[edit routing-instances VPN-A]
user@host# set vrf-export VPNA-export
6. On PE1, configure the IGP policy to advertise the interfaces in the VPN address space.
7. On PE1, set the RP configuration for the VRF instance. The RP configuration within the VRF
instance provides explicit knowledge of the RP address, so that the (*,G) state can be forwarded.
8. On PE1, configure the loopback interfaces. Unit 0 belongs to the master routing instance, and
unit 1 belongs to the VRF instance.
[edit]
user@host# edit interface lo0
[edit interface lo0]
user@host# set unit 0 family inet address 192.168.27.13/32 primary
[edit interface lo0]
user@host# set unit 0 family inet address 127.0.0.1/32
[edit interface lo0]
user@host# set unit 1 family inet address 10.10.47.101/32
[edit interface lo0]
user@host# exit
9. As you did for the PE1 router, configure the PE2 router.
[edit]
user@host# edit routing-instances VPN-A
[edit routing-instances VPN-A]
user@host# set instance-type vrf
[edit routing-instances VPN-A]
user@host# set interface t1-2/0/0:0.0
[edit routing-instances VPN-A]
user@host# set interface lo0.1
[edit routing-instances VPN-A]
user@host# set route-distinguisher 10.255.71.51:100
[edit routing-instances VPN-A]
user@host# set vrf-import VPNA-import
[edit routing-instances VPN-A]
user@host# set vrf-export VPNA-export
[edit routing-instances VPN-A]
user@host# set protocols ospf export bgp-to-ospf
[edit routing-instances VPN-A]
user@host# set protocols ospf area 0.0.0.0 interface t1-2/0/0:0.0
[edit routing-instances VPN-A]
user@host# set protocols ospf area 0.0.0.0 interface lo0.1
[edit routing-instances VPN-A]
user@host# set protocols pim rp static address 10.255.245.91
[edit routing-instances VPN-A]
user@host# set protocols pim mvpn
[edit routing-instances VPN-A]
user@host# set protocols pim interface t1-2/0/0:0.0 mode sparse
[edit routing-instances VPN-A]
user@host# set protocols pim interface lo0.1 mode sparse
[edit routing-instances VPN-A]
user@host# set protocols pim interface lo0.1 version 2
[edit routing-instances VPN-A]
user@host# set provider-tunnel pim-asm group-address 239.1.1.1
user@host# exit
[edit]
user@host# edit interface lo0
[edit interface lo0]
user@host# set unit 0 family inet address 192.168.27.14/32 primary
[edit interface lo0]
user@host# set unit 0 family inet address 127.0.0.1/32
10. When one of the PE routers is running Cisco Systems IOS software, you must configure the Juniper
Networks PE router to support this multicast interoperability requirement. The Juniper Networks
PE router must have the lo0.0 interface in the master routing instance and the lo0.1 interface
assigned to the VPN routing instance. You must configure the lo0.1 interface with the same IP
address that the lo0.0 interface uses for BGP peering in the provider core in the master routing
instance.
Configure the same IP address on the lo0.0 and lo0.1 loopback interfaces of the Juniper Networks
PE router at the [edit interfaces lo0] hierarchy level, and assign the address used for BGP peering in
the provider core in the master routing instance. In this alternate example, unit 0 and unit 1 are
configured for Cisco IOS interoperability.
11. Configure the multicast routing table group. This group accesses inet.2 when doing RPF checks.
However, if you are using inet.0 for multicast RPF checks, skip this step; configuring the routing
table group would prevent your multicast configuration from working.
[edit]
user@host# edit routing-options
[edit routing-options]
user@host# set interface-routes rib-group inet VPNA-mcast-rib
[edit routing-options]
user@host# set rib-groups VPNA-mcast-rib export-rib VPN-A.inet.2
[edit routing-options]
user@host# set rib-groups VPNA-mcast-rib import-rib VPN-A.inet.2
[edit routing-options]
user@host# exit
12. Activate the multicast routing table group in the VPN's VRF instance.
[edit]
user@host# edit routing-instances VPN-A
[edit routing-instances VPN-A]
user@host# set protocols pim rib-group inet VPNA-mcast-rib
13. If you are done configuring the device, commit the configuration.
Results
Confirm your configuration by entering the show interfaces, show protocols, show routing-instances,
and show routing-options commands from configuration mode. If the output does not display the
intended configuration, repeat the instructions in this example to correct the configuration. This output
shows the configuration on PE1.
protocols {
    pim {
        rp {
            static {
                address 10.255.71.47;
            }
        }
        interface fxp0.0 {
            disable;
        }
        interface all {
            mode sparse;
            version 2;
        }
    }
}
routing-instances {
    VPN-A {
        protocols {
            pim {
                interface t1-1/0/0:0.0 {
                    mode sparse;
                    version 2;
                }
                interface lo0.1 {
                    mode sparse;
                    version 2;
                }
            }
        }
    }
}
Verification
1. Display multicast tunnel information and the number of neighbors by using the show pim
interfaces instance instance-name command from the PE1 or PE2 router. When issued from the
PE1 router, the output display is:
You can also display all PE tunnel interfaces by using the show pim join command from the
provider router acting as the RP.
2. Display multicast tunnel interface information, DR information, and the PIM neighbor status between
VRF instances on the PE1 and PE2 routers by using the show pim neighbors instance instance-
name command from either PE router. When issued from the PE1 router, the output is as follows:
Load Balancing Multicast Tunnel Interfaces Among Available PICs
To generate multicast tunnel interfaces, a routing device must have one or more of the following tunnel-
capable PICs:
• On MX Series routers, a PIC created with the tunnel-services statement at the [edit chassis fpc slot-
number pic number] hierarchy level
If a routing device has multiple such PICs, it might be important in your implementation to load balance
the tunnel interfaces across the available tunnel-capable PICs.
The multicast tunnel interface that is used for encapsulation, mt-[xxxxx], is in the range from 32,768
through 49,151. The interface mt-[yyyyy], used for de-encapsulation, is in the range from 1,081,344
through 1,107,827. PIM runs only on the encapsulation interface. The de-encapsulation interface
populates downstream interface information. For the default MDT, an instance’s de-encapsulation and
encapsulation interfaces are always created on the same PIC.
For each VPN, the PE routers build a multicast distribution tree within the service provider core
network. After the tree is created, each PE router encapsulates all multicast traffic (data and control
messages) from the attached VPN and sends the encapsulated traffic to the VPN group address.
Because all the PE routers are members of the outgoing interface list in the multicast distribution tree
for the VPN group address, they all receive the encapsulated traffic. When the PE routers receive the
encapsulated traffic, they de-encapsulate the messages and send the data and control messages to the
CE routers.
If a routing device has multiple tunnel-capable PICs (for example, two Tunnel Services PICs), the routing
device load balances the creation of tunnel interfaces among the available PICs. However, in some cases
(for example, after a reboot), a single PIC might be selected for all of the tunnel interfaces. This causes
one PIC to have a heavy load, while other available PICs are underutilized. To prevent this, you can
manually configure load balancing. Thus, you can configure and distribute the load uniformly across the
available PICs.
The definition of a balanced state is determined by you and by the requirements of your Layer 3 VPN
implementation. You might want all of the instances to be evenly distributed across the available PICs or
across a configured list of PICs. You might want all of the encapsulation interfaces from all of the
instances to be evenly distributed across the available PICs or across a configured list of PICs. If the
bandwidth of each tunnel encapsulation interface is considered, you might choose a different
distribution. You can design your load-balancing configuration based on each instance or on each
routing device.
NOTE: In a Layer 3 VPN, each of the following routing devices must have at least one tunnel-
capable PIC:
• Any customer edge (CE) router that is acting as a source's DR or as an RP. A receiver's
designated router does not need a tunnel-capable PIC.
1. On an M Series or T Series router or on an EX Series switch, install more than one tunnel-capable
PIC. (In some implementations, only one PIC is required. Load balancing is based on the assumption
that a routing device has more than one tunnel-capable PIC.)
3. Configure Layer 3 VPNs as described in Example: Configuring Any-Source Multicast for Draft-Rosen
VPNs.
The physical position of the PIC in the routing device determines the multicast tunnel interface
name. For example, if you have an Adaptive Services PIC installed in FPC slot 0 and PIC slot 0, the
corresponding multicast tunnel interface name is mt-0/0/0. The same is true for Tunnel Services
PICs, Multiservices PICs, and Multiservices DPCs.
In the tunnel-devices statement, the order of the PIC list that you specify does not impact how the
interfaces are allocated. An instance uses all of the listed PICs to create default encapsulation and
de-encapsulation interfaces, and data MDT encapsulation interfaces. The instance uses a round-robin
approach to distributing the tunnel interfaces (default and data MDT) across the PIC list (or across
the available PICs, in the absence of a PIC list).
For the first tunnel, the round-robin algorithm starts with the lowest-numbered PIC. The second
tunnel is created on the next-lowest-numbered PIC, and so on, round and round. The selection
algorithm works routing device-wide. The round robin does not restart at the lowest-numbered PIC
for each new instance. This applies to both the default and data MDT tunnel interfaces.
If one PIC in the list fails, new tunnel interfaces are created on the remaining PICs in the list using the
round-robin algorithm. If all the PICs in the list go down, all tunnel interfaces are deleted and no new
tunnel interfaces are created. If a PIC in the list comes up from the down state and the restored PIC
is the only PIC that is up, the interfaces are reassigned to the restored PIC. If a PIC in the list comes
up from the down state and other PICs are already up, an interface reassignment is not done.
However, when a new tunnel interface needs to be created, the restored PIC is available for the
selection process. If you include in the PIC list a PIC that is not installed on the routing device, the
PIC is treated as if it is present but in the down state.
To balance the interfaces among the instances, you can assign one PIC to each instance. For example,
if you have vpn1-10 and you have three PICs—for example, mt-1/1/0, mt-1/2/0, mt-2/0/0—you can
configure vpn1-4 to only use mt-1/1/0, vpn5-7 to use mt-1/2/0, and vpn8-10 to use mt-2/0/0.
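Assuming the tunnel-devices statement is configured at the [edit routing-instances instance-name
protocols pim] hierarchy level, such a per-instance assignment might look like the following sketch
(the instance names and PIC positions are illustrative, not taken from a specific example):

```
[edit]
user@host# set routing-instances vpn1 protocols pim tunnel-devices mt-1/1/0
user@host# set routing-instances vpn5 protocols pim tunnel-devices mt-1/2/0
user@host# set routing-instances vpn8 protocols pim tunnel-devices mt-2/0/0
```

Repeat the appropriate statement for each instance in the range (for example, vpn2 through vpn4
would also list mt-1/1/0).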
5. Commit the configuration.
user@host# commit
When you commit a new PIC list configuration, all the multicast tunnel interfaces for the routing
instance are deleted and re-created using the new PIC list.
6. If you reboot the routing device, some PICs come up faster than others. The difference can be
minutes. Therefore, when the tunnel interfaces are created, the known PIC list might not be the same
as when the routing device is fully rebooted. This causes the tunnel interfaces to be created on some
but not all available and configured PICs. To remedy this situation, you can manually rebalance the
PIC load.
Check to determine if a load rebalance is necessary.
The output shows that mt-1/1/0 has only one tunnel encapsulation interface, while mt-1/2/0 has
three tunnel encapsulation interfaces. In a case like this, you might decide to rebalance the interfaces.
As stated previously, encapsulation interfaces are in the range from 32,768 through 49,151. In
determining whether a rebalance is necessary, look at the encapsulation interfaces only, because the
default MDT de-encapsulation interface always resides on the same PIC with the default MDT
encapsulation interface.
7. (Optional) Rebalance the PIC load.
This command re-creates and rebalances all tunnel interfaces for a specific instance.
This command re-creates and rebalances all tunnel interfaces for all routing instances.
8. Verify that the PIC load is balanced.
The output shows that mt-1/1/0 has two encapsulation interfaces, and mt-1/2/0 also has two
encapsulation interfaces.
Each PE sends an MDT subsequent address family identifier (MDT-SAFI) BGP network layer reachability
information (NLRI) advertisement. The advertisement contains the following information:
• Route distinguisher
• Unicast address of the PE router to which the source site is attached (usually the loopback)
Each remote PE router imports the MDT-SAFI advertisements from each of the other PE routers if the
route target matches. Each PE router then joins the (S,G) tree rooted at each of the other PE routers.
After a PE router discovers the other PE routers, the source and group are bound to the VPN routing
and forwarding (VRF) through the multicast tunnel de-encapsulation interface.
A draft-rosen MVPN with service provider tunnels operating in any-source multicast sparse-mode uses
a shared tree and rendezvous point (RP) for autodiscovery of the PE routers. The PE that is the source of
the multicast group encapsulates multicast data packets into a PIM register message and sends them by
means of unicast to the RP router. The RP then builds a shortest-path tree (SPT) toward the source PE.
The remote PE that acts as a receiver for the MDT multicast group sends (*,G) join messages toward the
RP and joins the distribution tree for that group.
After the PE routers are discovered, PIM is notified of the multicast source and group addresses. PIM
binds the (S,G) state to the multicast tunnel (mt) interface and sends a join message for that group.
Autodiscovery for a draft-rosen MVPN with service provider tunnels operating in SSM mode uses some
of the facilities of the BGP-based MVPN control plane software module. Therefore, the BGP-based
MVPN control plane must be enabled. The BGP-based MVPN control plane can be enabled for
autodiscovery only.
IN THIS SECTION
Requirements | 675
Overview | 676
Configuration | 680
Verification | 688
This example shows how to configure a draft-rosen Layer 3 VPN operating in source-specific multicast
(SSM) mode. This example is based on the Junos OS implementation of the IETF Internet draft draft-
rosen-vpn-mcast-07.txt, Multicast in MPLS/BGP VPNs.
Requirements
• Make sure that the routing devices support multicast tunnel (mt) interfaces.
A tunnel-capable PIC supports a maximum of 512 multicast tunnel interfaces. Both default and data
MDTs contribute to this total. The default MDT uses two multicast tunnel interfaces (one for
encapsulation and one for de-encapsulation). To enable an M Series or T Series router to support
more than 512 multicast tunnel interfaces, another tunnel-capable PIC is required. See Tunnel
Services PICs and Multicast and Load Balancing Multicast Tunnel Interfaces Among Available PICs.
NOTE: In Junos OS Release 17.3R1, the pim-ssm hierarchy was moved from provider-tunnel to
the provider-tunnel family inet and provider-tunnel family inet6 hierarchies as part of an
upgrade to add IPv6 support for default MDT in Rosen 7, and data MDT for Rosen 6 and Rosen
7.
Overview
IN THIS SECTION
Topology | 678
The IETF Internet draft draft-rosen-vpn-mcast-07.txt introduced the ability to configure the provider
network to operate in SSM mode. When a draft-rosen multicast VPN is used over an SSM provider core,
there are no PIM RPs to provide rendezvous and autodiscovery between PE routers. Therefore, draft-
rosen-vpn-mcast-07 specifies the use of a BGP network layer reachability information (NLRI) type, the
MDT subsequent address family identifier (MDT-SAFI), to facilitate autodiscovery of PEs by other
PEs. MDT-SAFI updates are BGP messages distributed between intra-AS internal BGP peer PEs. Thus,
receipt of an MDT-SAFI update enables a PE to autodiscover the identity of other PEs with sites for a
given VPN and the default MDT (S,G) routes to join for each. Autodiscovery provides the next-hop
address of each PE, and the VPN group address for the tunnel rooted at that PE for the given route
distinguisher (RD) and route-target extended community attribute.
This example includes the following configuration options to enable draft-rosen SSM:
• protocols bgp group group-name family inet-mdt signaling—Enables MDT-SAFI signaling in BGP.
• routing-instance instance-name protocols pim mvpn—Specifies the SSM control plane. When pim
mvpn is configured for a VRF, the VPN group address must be specified with the provider-tunnel
pim-ssm group-address statement.
• routing-instances ce1 vrf-target target:100:1—Configures the VRF export policy. When you configure
draft-rosen multicast VPNs with provider tunnels operating in source-specific mode and using the
vrf-target statement, the VRF export policy is automatically generated and automatically accepts
routes from the vrf-name.mdt.0 routing table.
NOTE: When you configure draft-rosen multicast VPNs with provider tunnels operating in
source-specific mode and using the vrf-export statement to specify the export policy, the
policy must have a term that accepts routes from the vrf-name.mdt.0 routing table. This term
ensures proper PE autodiscovery using the inet-mdt address family.
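Taken together, the options above amount to statements such as the following on a PE router. This is
a hedged sketch: the BGP group name ibgp and the SSM group address 232.1.1.1 are placeholders, and
(per the note earlier in this example) the pim-ssm hierarchy moved under provider-tunnel family inet
in Junos OS Release 17.3R1:

```
[edit]
user@host# set protocols bgp group ibgp family inet-mdt signaling
user@host# set routing-instances ce1 protocols pim mvpn
user@host# set routing-instances ce1 provider-tunnel pim-ssm group-address 232.1.1.1
user@host# set routing-instances ce1 vrf-target target:100:1
```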
Topology
Configuration
IN THIS SECTION
Procedure | 680
BGP | 684
PIM | 686
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, and then copy and paste
the commands into the CLI at the [edit] hierarchy level.
Interface Configuration
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
Step-by-Step Procedure
Step-by-Step Procedure
1. Configure RSVP signaling among this PE router (PE1), the other PE router (PE2), and the provider
router (P1).
BGP
Step-by-Step Procedure
To configure BGP:
1. Configure the AS number. In this example, both of the PE routers and the provider router are in AS
200.
[edit]
user@host# set routing-options autonomous-system 200
2. Configure the internal BGP full mesh with the PE2 and P1 routers.
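A minimal sketch of such a full mesh on PE1, assuming the group name ibgp (the PE loopback addresses
are taken from this example; the P1 peer address 192.168.27.15 is a placeholder):

```
[edit]
user@host# set protocols bgp group ibgp type internal
user@host# set protocols bgp group ibgp local-address 192.168.27.13
user@host# set protocols bgp group ibgp neighbor 192.168.27.14
user@host# set protocols bgp group ibgp neighbor 192.168.27.15
```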
4. Enable BGP to carry Layer 3 VPN NLRI for the IPv4 address family.
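A minimal sketch, again assuming the group name ibgp; the inet-vpn family carries the Layer 3 VPN
NLRI, and the inet-mdt family (described in the Overview) carries the MDT-SAFI advertisements:

```
[edit]
user@host# set protocols bgp group ibgp family inet-vpn unicast
user@host# set protocols bgp group ibgp family inet-mdt signaling
```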
[edit policy-options]
user@host# set policy-statement bgp_ospf term 1 from protocol bgp
user@host# set policy-statement bgp_ospf term 1 then accept
Step-by-Step Procedure
PIM
Step-by-Step Procedure
To configure PIM:
1. Configure timeout periods and the RP. Local RP configuration makes PE1 a statically defined RP.
Routing Instance
Step-by-Step Procedure
5. Configure draft-rosen VPN autodiscovery for provider tunnels operating in SSM mode.
6. Configure the BGP-based MVPN control plane to provide signaling only for autodiscovery and not
for PIM operations.
Verification
You can monitor the operation of the routing instance by running the show route table ce1.mdt.0
command.
You can manage the group-instance mapping for local SSM tunnel roots by running the show pim mvpn
command.
The show pim mdt command shows the tunnel type and source PE address for each outgoing and
incoming MDT. In addition, because each PE might have its own default MDT group address, one
incoming entry is shown for each remote PE. Outgoing data MDTs are shown after the outgoing default
MDT. Incoming data MDTs are shown after all incoming default MDTs.
For troubleshooting, you can configure tracing operations for all of the protocols.
In a draft-rosen Layer 3 multicast virtual private network (MVPN) configured with service provider
tunnels, the VPN is multicast-enabled and configured to use the Protocol Independent Multicast (PIM)
protocol within the VPN and within the service provider (SP) network. A multicast-enabled VPN routing
and forwarding (VRF) instance corresponds to a multicast domain (MD), and a PE router attached to a
particular VRF instance is said to belong to the corresponding MD. For each MD there is a default
multicast distribution tree (MDT) through the SP backbone, which connects all of the PE routers
belonging to that MD. Any PE router configured with a default MDT group address can be the multicast
source of one default MDT.
To provide optimal multicast routing, you can configure the PE routers so that when the multicast source
within a site exceeds a traffic rate threshold, the PE router to which the source site is attached creates a
new data MDT and advertises the new MDT group address. An advertisement of a new MDT group
address is sent in a User Datagram Protocol (UDP) type-length-value (TLV) packet called an MDT join
TLV. The MDT join TLV identifies the source and group pair (S,G) in the VRF instance as well as the new
data MDT group address used in the provider space. The PE router to which the source site is attached
sends the MDT join TLV over the default MDT for that VRF instance every 60 seconds as long as the
source is active.
All PE routers in the VRF instance receive the MDT join TLV because it is sent over the default MDT, but
not all the PE routers join the new data MDT group:
• PE routers connected to receivers in the VRF instance for the current multicast group cache the
contents of the MDT join TLV, adding a 180-second timeout value to the cache entry, and also join
the new data MDT group.
• PE routers not connected to receivers listed in the VRF instance for the current multicast group also
cache the contents of the MDT join TLV, adding a 180-second timeout value to the cache entry, but
do not join the new data MDT group at this time.
After the source PE stops sending the multicast traffic stream over the default MDT and uses the new
MDT instead, only the PE routers that join the new group receive the multicast traffic for that group.
When a remote PE router joins the new data MDT group, it sends a PIM (S,G) join message for the new
group directly toward the source PE router.
If a PE router that has not yet joined the new data MDT group receives a PIM join message for a new
receiver for which (S,G) traffic is already flowing over the data MDT in the provider core, then that PE
router can obtain the new group address from its cache and can join the data MDT immediately without
waiting up to 59 seconds for the next data MDT advertisement.
When the PE router to which the source site is attached sends a subsequent MDT join TLV for the VRF
instance over the default MDT, any existing cache entries for that VRF instance are simply refreshed
with a timeout value of 180 seconds.
To display the information cached from MDT join TLV packets received by all PE routers in a PIM-
enabled VRF instance, use the show pim mdt data-mdt-joins operational mode command.
The source PE router starts encapsulating the multicast traffic for the VRF instance using the new data
MDT group after 3 seconds, allowing time for the remote PE routers to join the new group. The source
PE router then halts the flow of multicast packets over the default MDT, and the packet flow for the
VRF instance source shifts to the newly created data MDT.
The PE router monitors the traffic rate during its periodic statistics-collection cycles. If the traffic rate
drops below the threshold or the source stops sending multicast traffic, the PE router to which the
source site is attached stops announcing the MDT join TLVs and switches back to sending on the default
MDT for that VRF instance.
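The switchover behavior described above is driven by the data MDT configuration under the PIM
protocol in the VRF instance. A hedged sketch, with placeholder group, source, and rate values (the
rate threshold is assumed to be in kbps):

```
[edit routing-instances VPN-A protocols pim]
user@host# set mdt threshold group 224.1.1.1/32 source 10.1.1.1/32 rate 10
user@host# set mdt group-range 239.2.2.0/24
```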
IN THIS SECTION
Requirements | 690
Overview | 691
Configuration | 694
Verification | 695
This example shows how to configure data multicast distribution trees (MDTs) in a draft-rosen Layer 3
VPN operating in any-source multicast (ASM) mode. This example is based on the Junos OS
implementation of RFC 4364, BGP/MPLS IP Virtual Private Networks (VPNs) and on section 2 of the
IETF Internet draft draft-rosen-vpn-mcast-06.txt, Multicast in MPLS/BGP VPNs (expired April 2004).
Requirements
Before you begin:
• Make sure that the routing devices support multicast tunnel (mt) interfaces.
A tunnel-capable PIC supports a maximum of 512 multicast tunnel interfaces. Both default and data
MDTs contribute to this total. The default MDT uses two multicast tunnel interfaces (one for
encapsulation and one for de-encapsulation). To enable an M Series or T Series router to support
more than 512 multicast tunnel interfaces, another tunnel-capable PIC is required. See "Tunnel
Services PICs and Multicast" and "Load Balancing Multicast Tunnel Interfaces Among Available PICs".
Overview
IN THIS SECTION
Topology | 693
By using data multicast distribution trees (MDTs) in a Layer 3 VPN, you can prevent multicast packets
from being flooded unnecessarily to specified provider edge (PE) routers within a VPN group. This
option is primarily useful for PE routers in your Layer 3 VPN multicast network that have no receivers
for the multicast traffic from a particular source.
When a PE router that is directly connected to the multicast source (also called the source PE) receives
Layer 3 VPN multicast traffic that exceeds a configured threshold, a new data MDT tunnel is established
between the PE router connected to the source site and its remote PE router neighbors.
The source PE advertises the new data MDT group as long as the source is active. The periodic
announcement is sent over the default MDT for the VRF. Because the data MDT announcement is sent
over the default tunnel, all the PE routers receive the announcement.
Neighbors that do not have receivers for the multicast traffic cache the advertisement of the new data
MDT group but ignore the new tunnel. Neighbors that do have receivers for the multicast traffic cache
the advertisement of the new data MDT group and also send a PIM join message for the new group.
The source PE encapsulates the VRF multicast traffic using the new data MDT group and stops the
packet flow over the default multicast tree. If the multicast traffic level drops back below the threshold,
the data MDT is torn down automatically and traffic flows back across the default multicast tree.
If a PE router that has not yet joined the new data MDT group receives a PIM join message for a new
receiver for which (S,G) traffic is already flowing over the data MDT in the provider core, then that PE
router can obtain the new group address from its cache and can join the data MDT immediately without
waiting up to 59 seconds for the next data MDT advertisement.
For a rosen 6 MVPN—a draft-rosen multicast VPN with provider tunnels operating in ASM mode—you
configure data MDT creation for a tunnel multicast group by including statements under the PIM
protocol configuration for the VRF instance associated with the multicast group. Because data MDTs
apply to VPNs and VRF routing instances, you cannot configure MDT statements in the master routing
instance.
• group—Specifies the multicast group address to which the threshold applies. This could be a well-
known address for a certain type of multicast traffic.
The group address can be explicit (all 32 bits of the address specified) or a prefix (network address
and prefix length specified). Explicit and prefix address forms can be combined if they do not overlap.
Overlapping configurations, in which prefix and more explicit address forms are used for the same
source or group address, are not supported.
• group-range—Specifies the multicast group IP address range used when a new data MDT needs to be
initiated on the PE router. For each new data MDT, one address is automatically selected from the
configured group range.
The PE router implementing data MDTs for a local multicast source must be configured with a range
of multicast group addresses. Group addresses that fall within the configured range are used in the
join messages for the data MDTs created in this VRF instance. Any multicast address range can be
used as the multicast prefix. However, the group address range cannot overlap the default MDT
group address configured for any VPN on the router. If you configure overlapping group addresses,
the configuration commit operation fails.
• pim—Supports data MDTs for service provider tunnels operating in any-source multicast mode.
• rate—Specifies the data rate that initiates the creation of data MDTs. When the source traffic in the
VRF exceeds the configured data rate, a new tunnel is created. The range is from 10 kilobits per
second (Kbps), the default, to 1 gigabit per second (Gbps, equivalent to 1,000,000 Kbps).
• source—Specifies the unicast address of the source of the multicast traffic. It can be a source locally
attached to or reached through the PE router. A group can have more than one source.
The source address can be explicit (all 32 bits of the address specified) or a prefix (network address
and prefix length specified). Explicit and prefix address forms can be combined if they do not overlap.
Overlapping configurations, in which prefix and more explicit address forms are used for the same
source or group address, are not supported.
• threshold—Associates a rate with a group and a source. The PE router implementing data MDTs for a
local multicast source must establish a data MDT-creation threshold for a multicast group and source.
When the traffic stops or the rate falls below the threshold value, the source PE router switches back
to the default MDT.
• tunnel-limit—Specifies the maximum number of data MDTs that can be created for a single routing
instance. The PE router implementing a data MDT for a local multicast source must establish a limit
for the number of data MDTs created in this VRF instance. If the limit is 0 (the default), then no data
MDTs are created for this VRF instance.
If the number of data MDT tunnels exceeds the maximum configured tunnel limit for the VRF, then
no new tunnels are created. Traffic that exceeds the configured threshold is sent on the default MDT.
The valid range is from 0 through 1024 for a VRF instance. There is a limit of 8000 tunnels for all
data MDTs in all VRF instances on a PE router.
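Taken together, these statements fit into the PIM configuration of a VRF instance as follows (a sketch
using the values from the example that follows):

[edit routing-instances vpn-A protocols pim]
mdt {
    group-range 227.0.0.0/8;
    threshold {
        group 224.4.4.4/32 {
            source 10.10.20.43/32 {
                rate 10;
            }
        }
    }
    tunnel-limit 10;
}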
Topology
Configuration
IN THIS SECTION
Procedure | 694
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, and then copy and paste
the commands into the CLI at the [edit] hierarchy level.
[edit]
set routing-instances vpn-A protocols pim mdt group-range 227.0.0.0/8
set routing-instances vpn-A protocols pim mdt threshold group 224.4.4.4/32 source 10.10.20.43/32 rate 10
set routing-instances vpn-A protocols pim mdt tunnel-limit 10
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
To configure a PE router attached to the VRF instance vpn-A in a PIM-ASM multicast VPN to initiate
new data MDTs and provider tunnels for that VRF:
[edit]
user@host# edit routing-instances vpn-A protocols pim mdt
[edit routing-instances vpn-A protocols pim mdt]
user@host# set group-range 227.0.0.0/8
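The threshold and tunnel limit from the quick configuration are entered at the same hierarchy level:

[edit routing-instances vpn-A protocols pim mdt]
user@host# set threshold group 224.4.4.4/32 source 10.10.20.43/32 rate 10
user@host# set tunnel-limit 10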
Verification
To display information about the default MDT and any data MDTs for the VRF instance vpn-A, use the
show pim mdt instance vpn-A detail operational mode command. This command displays either the
outgoing tunnels (the tunnels initiated by the local PE router), the incoming tunnels (tunnels initiated by
the remote PE routers), or both.
To display the data MDT group addresses cached by PE routers that participate in the VRF instance vpn-
A, use the show pim mdt data-mdt-joins instance vpn-A operational mode command. The command
displays the information cached from MDT join TLV packets received by all PE routers participating in
the specified VRF instance.
You can trace the operation of data MDTs by including the mdt detail flag in the [edit protocols pim
traceoptions] configuration. When this flag is set, all the mt interface-related activity is logged in trace
files.
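For example, a traceoptions configuration of this form (the trace file name here is arbitrary) logs
detailed data MDT activity:

[edit protocols pim]
user@host# set traceoptions file trace-pim-mdt size 1m files 5 world-readable
user@host# set traceoptions flag mdt detail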
RELATED DOCUMENTATION
IN THIS SECTION
Requirements | 696
Overview | 697
Configuration | 704
Verification | 709
This example shows how to configure data multicast distribution trees (MDTs) for a provider edge (PE)
router attached to a VPN routing and forwarding (VRF) instance in a draft-rosen Layer 3 multicast VPN
operating in source-specific multicast (SSM) mode. The example is based on the Junos OS
implementation of RFC 4364, BGP/MPLS IP Virtual Private Networks (VPNs) and on section 7 of the
IETF Internet draft draft-rosen-vpn-mcast-07.txt, Multicast in MPLS/BGP IP VPNs.
Requirements
Before you begin:
• Make sure that the routing devices support multicast tunnel (mt) interfaces.
A tunnel-capable PIC supports a maximum of 512 multicast tunnel interfaces. Both default and data
MDTs contribute to this total. The default MDT uses two multicast tunnel interfaces (one for
encapsulation and one for de-encapsulation). To enable an M Series or T Series router to support
more than 512 multicast tunnel interfaces, another tunnel-capable PIC is required. See "Tunnel
Services PICs and Multicast" and "Load Balancing Multicast Tunnel Interfaces Among Available
PICs" in the Multicast Protocols User Guide.
• Make sure that the PE router has been configured for a draft-rosen Layer 3 multicast VPN operating
in SSM mode in the provider core.
In this type of multicast VPN, PE routers discover one another by sending MDT subsequent address
family identifier (MDT-SAFI) BGP network layer reachability information (NLRI) advertisements. Key
configuration statements for the master instance are highlighted in Table 17 on page 698. Key
configuration statements for the VRF instance to which your PE router is attached are highlighted in
Table 18 on page 699. For complete configuration details, see "Example: Configuring Source-
Specific Multicast for Draft-Rosen Multicast VPNs" in the Multicast Protocols User Guide.
Overview
IN THIS SECTION
Topology | 704
By using data MDTs in a Layer 3 VPN, you can prevent multicast packets from being flooded
unnecessarily to specified provider edge (PE) routers within a VPN group. This option is primarily useful
for PE routers in your Layer 3 VPN multicast network that have no receivers for the multicast traffic
from a particular source.
• When a PE router that is directly connected to the multicast source (also called the source PE)
receives Layer 3 VPN multicast traffic that exceeds a configured threshold, a new data MDT tunnel is
established between the PE router connected to the source site and its remote PE router neighbors.
• The source PE advertises the new data MDT group as long as the source is active. The periodic
announcement is sent over the default MDT for the VRF. Because the data MDT announcement is
sent over the default tunnel, all the PE routers receive the announcement.
• Neighbors that do not have receivers for the multicast traffic cache the advertisement of the new
data MDT group but ignore the new tunnel. Neighbors that do have receivers for the multicast traffic
cache the advertisement of the new data MDT group and also send a PIM join message for the new
group.
• The source PE encapsulates the VRF multicast traffic using the new data MDT group and stops the
packet flow over the default multicast tree. If the multicast traffic level drops back below the
threshold, the data MDT is torn down automatically and traffic flows back across the default
multicast tree.
• If a PE router that has not yet joined the new data MDT group receives a PIM join message for a new
receiver for which (S,G) traffic is already flowing over the data MDT in the provider core, then that PE
router can obtain the new group address from its cache and can join the data MDT immediately
without waiting up to 59 seconds for the next data MDT advertisement.
The following sections summarize the data MDT configuration statements used in this example and in
the prerequisite configuration for this example:
• In the master instance, the PE router’s prerequisite draft-rosen PIM-SSM multicast configuration
includes statements that directly support the data MDT configuration you will enable in this example.
Table 17 on page 698 highlights some of these statements†.
Statement
[edit routing-options]
autonomous-system autonomous-system;
† This table contains only a partial list of the PE router configuration statements for a draft-rosen
multicast VPN operating in SSM mode in the provider core. For complete configuration
information about this prerequisite, see "Example: Configuring Source-Specific Multicast for
Draft-Rosen Multicast VPNs" in the Multicast Protocols User Guide.
• In the VRF instance to which the PE router is attached—at the [edit routing-instances name]
hierarchy level—the PE router’s prerequisite draft-rosen PIM-SSM multicast configuration includes
statements that directly support the data MDT configuration you will enable in this example. Table
18 on page 699 highlights some of these statements‡.
Statement Description
Creates a VRF routing table (instance-name.mdt.inet.0) automatically and ensures proper handling of
routes with the inet-mdt address family.
To verify the configuration of the default MDT tunnel for the VRF instance to which the PE router is
attached, use the show pim mvpn operational mode command.
‡ This table contains only a partial list of the PE router configuration statements for a draft-rosen
multicast VPN operating in SSM mode in the provider core. For complete configuration information
about this prerequisite, see "Example: Configuring Source-Specific Multicast for Draft-Rosen
Multicast VPNs" in the Multicast Protocols User Guide.
• For a rosen 7 MVPN—a draft-rosen multicast VPN with provider tunnels operating in SSM mode—
you configure data MDT creation for a tunnel multicast group by including statements under the
PIM-SSM provider tunnel configuration for the VRF instance associated with the multicast group.
Because data MDTs are specific to VPNs and VRF routing instances, you cannot configure MDT
statements in the primary routing instance. Table 19 on page 701 summarizes the data MDT
configuration statements for PIM-SSM provider tunnels.
Table 19: Data MDTs for PIM-SSM Provider Tunnels in a Draft-Rosen MVPN

Statement:

[edit routing-instances name]
provider-tunnel {
    family (inet | inet6) {
        mdt {
            group-range multicast-prefix;
        }
    }
}

Description: Configures the IP group range used when a new data MDT needs to be created in the VRF
instance on the PE router. This address range cannot overlap the default MDT addresses of any other
VPNs on the router. If you configure overlapping group ranges, the configuration commit fails.
This statement has no default value. If you do not set the multicast-prefix to a valid, nonreserved
multicast address range, then no data MDTs are created for this VRF instance.

Statement:

[edit routing-instances name]
provider-tunnel {
    family (inet | inet6) {
        mdt {
            threshold {
                group group-address {
                    source source-address {
                        rate threshold-rate;
                    }
                }
            }
        }
    }
}

Description: Configures a data rate for the multicast source of a default MDT. When the source traffic
in the VRF instance exceeds the configured data rate, a new tunnel is created.
• group group-address—Multicast group address of the default MDT that corresponds to a VRF
instance to which the PE router is attached. The group-address can be explicit (all 32 bits of the
address specified) or a prefix (network address and prefix length specified). This is typically a
well-known address for a certain type of multicast traffic.
• source source-address—Unicast IP prefix of one or more multicast sources in the specified default
MDT group.
Topology
Configuration
IN THIS SECTION
Enabling Data MDTs and PIM-SSM Provider Tunnels on the Local PE Router Attached to a VRF | 705
(Optional) Enabling Logging of Detailed Trace Information for Multicast Tunnel Interfaces on the Local
PE Router | 707
Results | 708
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see the Junos OS CLI User Guide.
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level and then enter commit from configuration mode.
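As a sketch, a set-command rendering of the configuration shown in the Results section of this example
would look like this:

[edit]
set routing-instances ce1 provider-tunnel pim-ssm group-address 239.1.1.1
set routing-instances ce1 provider-tunnel mdt group-range 239.10.10.0/24
set routing-instances ce1 provider-tunnel mdt threshold group 224.0.9.0/32 source 10.1.1.2/32 rate 10
set routing-instances ce1 provider-tunnel mdt tunnel-limit 10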
Enabling Data MDTs and PIM-SSM Provider Tunnels on the Local PE Router Attached to a VRF
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
To configure the local PE router attached to the VRF instance ce1 in a PIM-SSM multicast VPN to
initiate new data MDTs and provider tunnels for that VRF:
[edit]
user@host# edit routing-instances ce1 provider-tunnel
3. Configure the maximum number of data MDTs for this VRF instance.
4. Configure the data MDT-creation threshold for a multicast group and source.
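For reference, commands of this form (values taken from the Results section) complete these two
steps:

[edit routing-instances ce1 provider-tunnel]
user@host# set mdt tunnel-limit 10
user@host# set mdt threshold group 224.0.9.0/32 source 10.1.1.2/32 rate 10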
[edit]
user@host# commit
Results
Confirm the configuration of data MDTs for PIM-SSM provider tunnels by entering the show routing-
instances command from configuration mode. If the output does not display the intended configuration,
repeat the instructions in this procedure to correct the configuration.
[edit]
user@host# show routing-instances
ce1 {
instance-type vrf;
vrf-target target:100:1;
...
provider-tunnel {
pim-ssm {
group-address 239.1.1.1;
}
mdt {
threshold {
group 224.0.9.0/32 {
source 10.1.1.2/32 {
rate 10;
}
}
}
tunnel-limit 10;
group-range 239.10.10.0/24;
}
}
protocols {
...
pim {
mvpn {
family {
inet {
autodiscovery {
inet-mdt;
}
}
}
}
}
}
}
}
NOTE: The show routing-instances command output above does not show the complete
configuration of a VRF instance in a draft-rosen MVPN operating in SSM mode in the provider
core.
(Optional) Enabling Logging of Detailed Trace Information for Multicast Tunnel Interfaces on the Local
PE Router
Step-by-Step Procedure
To enable logging of detailed trace information for all multicast tunnel interfaces on the local PE router:
[edit]
user@host# edit protocols pim traceoptions
2. Configure the trace file name, maximum number of trace files, maximum size of each trace file, and
file access type.
3. Specify that messages related to multicast data tunnel operations are logged.
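For example (file name and sizes as shown in the Results section):

[edit protocols pim traceoptions]
user@host# set file trace-pim-mdt size 1m files 5 world-readable
user@host# set flag mdt detail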
[edit]
user@host# commit
Results
Confirm the configuration of multicast tunnel logging by entering the show protocols command from
configuration mode. If the output does not display the intended configuration, repeat the instructions in
this procedure to correct the configuration.
[edit]
user@host# show protocols
pim {
traceoptions {
file trace-pim-mdt size 1m files 5 world-readable;
flag mdt detail;
}
interface lo0.0;
...
}
Verification
IN THIS SECTION
Monitor Data MDT Group Addresses Cached by All PE Routers in the Multicast Group | 710
(Optional) View the Trace Log for Multicast Tunnel Interfaces | 710
To verify that the local PE router is managing data MDTs and PIM-SSM provider tunnels properly,
perform the following tasks:
Purpose
For the VRF instance ce1, check the incoming and outgoing tunnels established by the local PE router
for the default MDT and monitor the data MDTs initiated by the local PE router.
Action
Use the show pim mdt instance ce1 detail operational mode command.
For the default MDT, the command displays details about the incoming and outgoing tunnels established
by the local PE router for specific multicast source addresses in the multicast group using the default
MDT and identifies the tunnel mode as PIM-SSM.
For the data MDTs initiated by the local PE router, the command identifies the multicast source using
the data MDT, the multicast tunnel logical interface set up for the data MDT tunnel, the configured
threshold rate, and current statistics.
Monitor Data MDT Group Addresses Cached by All PE Routers in the Multicast Group
Purpose
For the VRF instance ce1, check the data MDT group addresses cached by all PE routers that participate
in the VRF.
Action
Use the show pim mdt data-mdt-joins instance ce1 operational mode command. The command output
displays the information cached from MDT join TLV packets received by all PE routers participating in
the specified VRF instance, including the current timeout value of each entry.
Purpose
If you configured logging of trace information for multicast tunnel interfaces, you can trace the creation
and tear-down of data MDTs on the local router through the mt interface-related activity in the log.
Action
To view the trace file, use the file show /var/log/trace-pim-mdt operational mode command.
RELATED DOCUMENTATION
IN THIS SECTION
Example: Configuring Data MDTs and Provider Tunnels Operating in Source-Specific Multicast Mode | 713
Example: Configuring Data MDTs and Provider Tunnels Operating in Any-Source Multicast Mode | 728
To provide optimal multicast routing, you can configure the PE routers so that when the multicast source
within a site exceeds a traffic rate threshold, the PE router to which the source site is attached creates a
new data MDT and advertises the new MDT group address. An advertisement of a new MDT group
address is sent in a User Datagram Protocol (UDP) type-length-value (TLV) packet called an MDT join
TLV. The MDT join TLV identifies the source and group pair (S,G) in the VRF instance as well as the new
data MDT group address used in the provider space. The PE router to which the source site is attached
sends the MDT join TLV over the default MDT for that VRF instance every 60 seconds as long as the
source is active.
All PE routers in the VRF instance receive the MDT join TLV because it is sent over the default MDT, but
not all the PE routers join the new data MDT group:
• PE routers connected to receivers in the VRF instance for the current multicast group cache the
contents of the MDT join TLV, adding a 180-second timeout value to the cache entry, and also join
the new data MDT group.
• PE routers not connected to receivers listed in the VRF instance for the current multicast group also
cache the contents of the MDT join TLV, adding a 180-second timeout value to the cache entry, but
do not join the new data MDT group at this time.
After the source PE stops sending the multicast traffic stream over the default MDT and uses the new
MDT instead, only the PE routers that join the new group receive the multicast traffic for that group.
When a remote PE router joins the new data MDT group, it sends a PIM (S,G) join message for the new
group directly to the source PE router.
If a PE router that has not yet joined the new data MDT group receives a PIM join message for a new
receiver for which (S,G) traffic is already flowing over the data MDT in the provider core, then that PE
router can obtain the new group address from its cache and can join the data MDT immediately without
waiting up to 59 seconds for the next data MDT advertisement.
When the PE router to which the source site is attached sends a subsequent MDT join TLV for the VRF
instance over the default MDT, any existing cache entries for that VRF instance are simply refreshed
with a timeout value of 180 seconds.
To display the information cached from MDT join TLV packets received by all PE routers in a PIM-
enabled VRF instance, use the show pim mdt data-mdt-joins operational mode command.
The source PE router starts encapsulating the multicast traffic for the VRF instance using the new data
MDT group after 3 seconds, allowing time for the remote PE routers to join the new group. The source
PE router then halts the flow of multicast packets over the default MDT, and the packet flow for the
VRF instance source shifts to the newly created data MDT.
The PE router monitors the traffic rate during its periodic statistics-collection cycles. If the traffic rate
drops below the threshold or the source stops sending multicast traffic, the PE router to which the
source site is attached stops announcing the MDT join TLVs and switches back to sending on the default
MDT for that VRF instance.
SEE ALSO
The default MDT uses multicast tunnel (mt-) logical interfaces. Data MDTs also use multicast tunnel
logical interfaces. If you administratively disable the physical interface that the multicast tunnel logical
interfaces are configured on, the multicast tunnel logical interfaces are moved to a different physical
interface that is up. In this case the traffic is sent over the default MDT until new data MDTs are created.
The maximum number of data MDTs for all VPNs on a PE router is 1024, and the maximum number of
data MDTs for a VRF instance is 1024. The configuration of a VRF instance can limit the number of
MDTs possible. No new MDTs can be created after the 1024 MDT limit is reached in the VRF instance,
and all traffic for other sources that exceed the configured limit is sent on the default MDT.
Tear-down of data MDTs depends on the monitoring of the multicast source data rate. This rate is
checked once per minute, so if the source data rate falls below the configured value, data MDT deletion
can be delayed for up to 1 minute until the next statistics-monitoring collection cycle.
Changes to the configured data MDT limit value do not affect existing tunnels that exceed the new limit.
Data MDTs that are already active remain in place until the threshold conditions are no longer met.
In a draft-rosen MVPN in which PE routers are already configured to create data MDTs in response to
exceeded multicast source traffic rate thresholds, you can change the group range used for creating data
MDTs in a VRF instance. To remove any active data MDTs created using the previous group range, you
must restart the PIM routing process. This restart clears all remnants of the former group addresses but
disrupts routing and therefore requires a maintenance window for the change.
Multicast tunnel (mt) interfaces created because of exceeded thresholds are not re-created if the routing
process crashes. Therefore, graceful restart does not automatically reinstate the data MDT state.
However, as soon as the periodic statistics collection reveals that the threshold condition is still
exceeded, the tunnels are quickly re-created.
Data MDTs are supported for customer traffic with PIM sparse mode, dense mode, and sparse-dense
mode. Note that the provider core does not support PIM dense mode.
IN THIS SECTION
Requirements | 714
Overview | 714
Configuration | 721
Verification | 726
This example shows how to configure data multicast distribution trees (MDTs) for a provider edge (PE)
router attached to a VPN routing and forwarding (VRF) instance in a draft-rosen Layer 3 multicast VPN
operating in source-specific multicast (SSM) mode. The example is based on the Junos OS
implementation of RFC 4364, BGP/MPLS IP Virtual Private Networks (VPNs) and on section 7 of the
IETF Internet draft draft-rosen-vpn-mcast-07.txt, Multicast in MPLS/BGP IP VPNs.
Requirements
• Make sure that the routing devices support multicast tunnel (mt) interfaces.
A tunnel-capable PIC supports a maximum of 512 multicast tunnel interfaces. Both default and data
MDTs contribute to this total. The default MDT uses two multicast tunnel interfaces (one for
encapsulation and one for de-encapsulation). To enable an M Series or T Series router to support
more than 512 multicast tunnel interfaces, another tunnel-capable PIC is required. See "Tunnel
Services PICs and Multicast" and "Load Balancing Multicast Tunnel Interfaces Among Available
PICs" in the Multicast Protocols User Guide.
• Make sure that the PE router has been configured for a draft-rosen Layer 3 multicast VPN operating
in SSM mode in the provider core.
In this type of multicast VPN, PE routers discover one another by sending MDT subsequent address
family identifier (MDT-SAFI) BGP network layer reachability information (NLRI) advertisements. Key
configuration statements for the master instance are highlighted in Table 20 on page 715. Key
configuration statements for the VRF instance to which your PE router is attached are highlighted in
Table 21 on page 717. For complete configuration details, see "Example: Configuring Source-Specific
Multicast for Draft-Rosen Multicast VPNs" in the Multicast Protocols User Guide.
Overview
IN THIS SECTION
Topology | 721
By using data MDTs in a Layer 3 VPN, you can prevent multicast packets from being flooded
unnecessarily to specified provider edge (PE) routers within a VPN group. This option is primarily useful
for PE routers in your Layer 3 VPN multicast network that have no receivers for the multicast traffic
from a particular source.
• When a PE router that is directly connected to the multicast source (also called the source PE)
receives Layer 3 VPN multicast traffic that exceeds a configured threshold, a new data MDT tunnel is
established between the PE router connected to the source site and its remote PE router neighbors.
• The source PE advertises the new data MDT group as long as the source is active. The periodic
announcement is sent over the default MDT for the VRF. Because the data MDT announcement is
sent over the default tunnel, all the PE routers receive the announcement.
• Neighbors that do not have receivers for the multicast traffic cache the advertisement of the new
data MDT group but ignore the new tunnel. Neighbors that do have receivers for the multicast traffic
cache the advertisement of the new data MDT group and also send a PIM join message for the new
group.
• The source PE encapsulates the VRF multicast traffic using the new data MDT group and stops the
packet flow over the default multicast tree. If the multicast traffic level drops back below the
threshold, the data MDT is torn down automatically and traffic flows back across the default
multicast tree.
• If a PE router that has not yet joined the new data MDT group receives a PIM join message for a new
receiver for which (S,G) traffic is already flowing over the data MDT in the provider core, then that PE
router can obtain the new group address from its cache and can join the data MDT immediately
without waiting up to 59 seconds for the next data MDT advertisement.
The following sections summarize the data MDT configuration statements used in this example and in
the prerequisite configuration for this example:
• In the master instance, the PE router’s prerequisite draft-rosen PIM-SSM multicast configuration
includes statements that directly support the data MDT configuration you will enable in this example.
Table 20 on page 715 highlights some of these statements†.
Statement
[edit routing-options]
autonomous-system autonomous-system;
† This table contains only a partial list of the PE router configuration statements for a draft-rosen
multicast VPN operating in SSM mode in the provider core. For complete configuration
information about this prerequisite, see "Example: Configuring Source-Specific Multicast for
Draft-Rosen Multicast VPNs" in the Multicast Protocols User Guide.
• In the VRF instance to which the PE router is attached—at the [edit routing-instances name]
hierarchy level—the PE router’s prerequisite draft-rosen PIM-SSM multicast configuration includes
statements that directly support the data MDT configuration you will enable in this example. Table
21 on page 717 highlights some of these statements‡.
Statement Description
Creates a VRF routing table (instance-name.mdt.inet.0) automatically and ensures proper handling of
routes with the inet-mdt address family.
To verify the configuration of the default MDT tunnel for the VRF instance to which the PE router is
attached, use the show pim mvpn operational mode command.
‡ This table contains only a partial list of the PE router configuration statements for a draft-rosen
multicast VPN operating in SSM mode in the provider core. For complete configuration information
about this prerequisite, see "Example: Configuring Source-Specific Multicast for Draft-Rosen
Multicast VPNs" in the Multicast Protocols User Guide.
• For a rosen 7 MVPN—a draft-rosen multicast VPN with provider tunnels operating in SSM mode—
you configure data MDT creation for a tunnel multicast group by including statements under the
PIM-SSM provider tunnel configuration for the VRF instance associated with the multicast group.
Because data MDTs are specific to VPNs and VRF routing instances, you cannot configure MDT
statements in the primary routing instance. Table 22 on page 719 summarizes the data MDT
configuration statements for PIM-SSM provider tunnels.
Table 22: Data MDTs for PIM-SSM Provider Tunnels in a Draft-Rosen MVPN

Statement:

[edit routing-instances name]
provider-tunnel {
    family (inet | inet6) {
        mdt {
            group-range multicast-prefix;
        }
    }
}

Description: Configures the IP group range used when a new data MDT needs to be created in the VRF
instance on the PE router. This address range cannot overlap the default MDT addresses of any other
VPNs on the router. If you configure overlapping group ranges, the configuration commit fails.
This statement has no default value. If you do not set the multicast-prefix to a valid, nonreserved
multicast address range, then no data MDTs are created for this VRF instance.

Statement:

[edit routing-instances name]
provider-tunnel {
    family (inet | inet6) {
        mdt {
            threshold {
                group group-address {
                    source source-address {
                        rate threshold-rate;
                    }
                }
            }
        }
    }
}

Description: Configures a data rate for the multicast source of a default MDT. When the source traffic
in the VRF instance exceeds the configured data rate, a new tunnel is created.
• group group-address—Multicast group address of the default MDT that corresponds to a VRF
instance to which the PE router is attached. The group-address can be explicit (all 32 bits of the
address specified) or a prefix (network address and prefix length specified). This is typically a
well-known address for a certain type of multicast traffic.
• source source-address—Unicast IP prefix of one or more multicast sources in the specified default
MDT group.
Topology
Configuration
IN THIS SECTION
Enabling Data MDTs and PIM-SSM Provider Tunnels on the Local PE Router Attached to a VRF | 722
(Optional) Enabling Logging of Detailed Trace Information for Multicast Tunnel Interfaces on the Local
PE Router | 724
Results | 725
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see the Junos OS CLI User Guide.
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level and then enter commit from configuration mode.
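The command listing for this quick configuration is not reproduced at this point in the text. The following sketch is reconstructed from the Results section of this example, so treat it as illustrative rather than authoritative and verify it against the Results output:

```
set routing-instances ce1 provider-tunnel pim-ssm group-address 239.1.1.1
set routing-instances ce1 provider-tunnel mdt group-range 239.10.10.0/24
set routing-instances ce1 provider-tunnel mdt threshold group 224.0.9.0/32 source 10.1.1.2/32 rate 10
set routing-instances ce1 provider-tunnel mdt tunnel-limit 10
set routing-instances ce1 protocols pim mvpn family inet autodiscovery inet-mdt
```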
Enabling Data MDTs and PIM-SSM Provider Tunnels on the Local PE Router Attached to a VRF
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
To configure the local PE router attached to the VRF instance ce1 in a PIM-SSM multicast VPN to
initiate new data MDTs and provider tunnels for that VRF:
[edit]
user@host# edit routing-instances ce1 provider-tunnel
3. Configure the maximum number of data MDTs for this VRF instance.
4. Configure the data MDT-creation threshold for a multicast group and source.
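The CLI commands for steps 3 and 4 do not appear in this extract. Judging from the Results section that follows, they would be along these lines (all values are taken from the sample configuration):

```
[edit routing-instances ce1 provider-tunnel]
user@host# set mdt tunnel-limit 10

[edit routing-instances ce1 provider-tunnel]
user@host# set mdt threshold group 224.0.9.0/32 source 10.1.1.2/32 rate 10
```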
[edit]
user@host# commit
Results
Confirm the configuration of data MDTs for PIM-SSM provider tunnels by entering the show routing-
instances command from configuration mode. If the output does not display the intended configuration,
repeat the instructions in this procedure to correct the configuration.
[edit]
user@host# show routing-instances
ce1 {
instance-type vrf;
vrf-target target:100:1;
...
provider-tunnel {
pim-ssm {
group-address 239.1.1.1;
}
mdt {
threshold {
group 224.0.9.0/32 {
source 10.1.1.2/32 {
rate 10;
}
}
}
tunnel-limit 10;
group-range 239.10.10.0/24;
}
}
protocols {
...
pim {
mvpn {
family {
inet {
autodiscovery {
inet-mdt;
}
}
}
}
}
}
}
}
NOTE: The show routing-instances command output above does not show the complete
configuration of a VRF instance in a draft-rosen MVPN operating in SSM mode in the provider
core.
(Optional) Enabling Logging of Detailed Trace Information for Multicast Tunnel Interfaces on the Local
PE Router
Step-by-Step Procedure
To enable logging of detailed trace information for all multicast tunnel interfaces on the local PE router:
[edit]
user@host# edit protocols pim traceoptions
2. Configure the trace file name, maximum number of trace files, maximum size of each trace file, and
file access type.
3. Specify that messages related to multicast data tunnel operations are logged.
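The commands for these trace configuration steps are not shown in this extract. Based on the Results output that follows, they would resemble:

```
[edit protocols pim traceoptions]
user@host# set file trace-pim-mdt size 1m files 5 world-readable

[edit protocols pim traceoptions]
user@host# set flag mdt detail
```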
[edit]
user@host# commit
Results
Confirm the configuration of multicast tunnel logging by entering the show protocols command from
configuration mode. If the output does not display the intended configuration, repeat the instructions in
this procedure to correct the configuration.
[edit]
user@host# show protocols
pim {
traceoptions {
file trace-pim-mdt size 1m files 5 world-readable;
flag mdt detail;
}
interface lo0.0;
...
}
Verification
IN THIS SECTION
Monitor Data MDT Group Addresses Cached by All PE Routers in the Multicast Group | 727
(Optional) View the Trace Log for Multicast Tunnel Interfaces | 727
To verify that the local PE router is managing data MDTs and PIM-SSM provider tunnels properly,
perform the following tasks:
Purpose
For the VRF instance ce1, check the incoming and outgoing tunnels established by the local PE router
for the default MDT and monitor the data MDTs initiated by the local PE router.
Action
Use the show pim mdt instance ce1 detail operational mode command.
For the default MDT, the command displays details about the incoming and outgoing tunnels established
by the local PE router for specific multicast source addresses in the multicast group using the default
MDT and identifies the tunnel mode as PIM-SSM.
For the data MDTs initiated by the local PE router, the command identifies the multicast source using
the data MDT, the multicast tunnel logical interface set up for the data MDT tunnel, the configured
threshold rate, and current statistics.
Monitor Data MDT Group Addresses Cached by All PE Routers in the Multicast Group
Purpose
For the VRF instance ce1, check the data MDT group addresses cached by all PE routers that participate
in the VRF.
Action
Use the show pim mdt data-mdt-joins instance ce1 operational mode command. The command output
displays the information cached from MDT join TLV packets received by all PE routers participating in
the specified VRF instance, including the current timeout value of each entry.
(Optional) View the Trace Log for Multicast Tunnel Interfaces
Purpose
If you configured logging of trace information for multicast tunnel interfaces, you can trace the creation
and teardown of data MDTs on the local router through the mt interface-related activity in the log.
Action
To view the trace file, use the file show /var/log/trace-pim-mdt operational mode command.
SEE ALSO
IN THIS SECTION
Requirements | 728
Overview | 728
Configuration | 731
Verification | 733
This example shows how to configure data multicast distribution trees (MDTs) in a draft-rosen Layer 3
VPN operating in any-source multicast (ASM) mode. This example is based on the Junos OS
implementation of RFC 4364, BGP/MPLS IP Virtual Private Networks (VPNs) and on section 2 of the
IETF Internet draft draft-rosen-vpn-mcast-06.txt, Multicast in MPLS/BGP VPNs (expired April 2004).
Requirements
• Make sure that the routing devices support multicast tunnel (mt) interfaces.
A tunnel-capable PIC supports a maximum of 512 multicast tunnel interfaces. Both default and data
MDTs contribute to this total. The default MDT uses two multicast tunnel interfaces (one for
encapsulation and one for de-encapsulation). To enable an M Series or T Series router to support
more than 512 multicast tunnel interfaces, another tunnel-capable PIC is required. See "Tunnel
Services PICs and Multicast" and "Load Balancing Multicast Tunnel Interfaces Among Available PICs".
Overview
IN THIS SECTION
Topology | 731
By using data multicast distribution trees (MDTs) in a Layer 3 VPN, you can prevent multicast packets
from being flooded unnecessarily to specified provider edge (PE) routers within a VPN group. This
option is primarily useful for PE routers in your Layer 3 VPN multicast network that have no receivers
for the multicast traffic from a particular source.
When a PE router that is directly connected to the multicast source (also called the source PE) receives
Layer 3 VPN multicast traffic that exceeds a configured threshold, a new data MDT tunnel is established
between the PE router connected to the source site and its remote PE router neighbors.
The source PE advertises the new data MDT group as long as the source is active. The periodic
announcement is sent over the default MDT for the VRF. Because the data MDT announcement is sent
over the default tunnel, all the PE routers receive the announcement.
Neighbors that do not have receivers for the multicast traffic cache the advertisement of the new data
MDT group but ignore the new tunnel. Neighbors that do have receivers for the multicast traffic cache
the advertisement of the new data MDT group and also send a PIM join message for the new group.
The source PE encapsulates the VRF multicast traffic using the new data MDT group and stops the
packet flow over the default multicast tree. If the multicast traffic level drops back below the threshold,
the data MDT is torn down automatically and traffic flows back across the default multicast tree.
If a PE router that has not yet joined the new data MDT group receives a PIM join message for a new
receiver for which (S,G) traffic is already flowing over the data MDT in the provider core, then that PE
router can obtain the new group address from its cache and can join the data MDT immediately, without
waiting up to 59 seconds for the next data MDT advertisement.
For a rosen 6 MVPN—a draft-rosen multicast VPN with provider tunnels operating in ASM mode—you
configure data MDT creation for a tunnel multicast group by including statements under the PIM
protocol configuration for the VRF instance associated with the multicast group. Because data MDTs
apply to VPNs and VRF routing instances, you cannot configure MDT statements in the master routing
instance.
• group—Specifies the multicast group address to which the threshold applies. This could be a well-
known address for a certain type of multicast traffic.
The group address can be explicit (all 32 bits of the address specified) or a prefix (network address
and prefix length specified). Explicit and prefix address forms can be combined if they do not overlap.
Overlapping configurations, in which prefix and more explicit address forms are used for the same
source or group address, are not supported.
• group-range—Specifies the multicast group IP address range used when a new data MDT needs to be
initiated on the PE router. For each new data MDT, one address is automatically selected from the
configured group range.
The PE router implementing data MDTs for a local multicast source must be configured with a range
of multicast group addresses. Group addresses that fall within the configured range are used in the
join messages for the data MDTs created in this VRF instance. Any multicast address range can be
used as the multicast prefix. However, the group address range cannot overlap the default MDT
group address configured for any VPN on the router. If you configure overlapping group addresses,
the configuration commit operation fails.
• pim—Supports data MDTs for service provider tunnels operating in any-source multicast mode.
• rate—Specifies the data rate that initiates the creation of data MDTs. When the source traffic in the
VRF exceeds the configured data rate, a new tunnel is created. The range is from 10 kilobits per
second (Kbps), the default, to 1 gigabit per second (Gbps, equivalent to 1,000,000 Kbps).
• source—Specifies the unicast address of the source of the multicast traffic. It can be a source locally
attached to or reached through the PE router. A group can have more than one source.
The source address can be explicit (all 32 bits of the address specified) or a prefix (network address
and prefix length specified). Explicit and prefix address forms can be combined if they do not overlap.
Overlapping configurations, in which prefix and more explicit address forms are used for the same
source or group address, are not supported.
• threshold—Associates a rate with a group and a source. The PE router implementing data MDTs for a
local multicast source must establish a data MDT-creation threshold for a multicast group and source.
When the traffic stops or the rate falls below the threshold value, the source PE router switches back
to the default MDT.
• tunnel-limit—Specifies the maximum number of data MDTs that can be created for a single routing
instance. The PE router implementing a data MDT for a local multicast source must establish a limit
for the number of data MDTs created in this VRF instance. If the limit is 0 (the default), then no data
MDTs are created for this VRF instance.
If the number of data MDT tunnels exceeds the maximum configured tunnel limit for the VRF, then
no new tunnels are created. Traffic that exceeds the configured threshold is sent on the default MDT.
The valid range is from 0 through 1024 for a VRF instance. There is a limit of 8000 tunnels for all
data MDTs in all VRF instances on a PE router.
Topology
Configuration
IN THIS SECTION
Procedure | 732
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, and then copy and paste
the commands into the CLI at the [edit] hierarchy level.
[edit]
set routing-instances vpn-A protocols pim mdt group-range 227.0.0.0/8
set routing-instances vpn-A protocols pim mdt threshold group 224.4.4.4/32 source 10.10.20.43/32 rate 10
set routing-instances vpn-A protocols pim mdt tunnel-limit 10
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
To configure a PE router attached to the VRF instance vpn-A in a PIM-ASM multicast VPN to initiate
new data MDTs and provider tunnels for that VRF:
[edit]
user@host# edit routing-instances vpn-A protocols pim mdt
[edit routing-instances vpn-A protocols pim mdt]
user@host# set group-range 227.0.0.0/8
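The remaining steps of this procedure, which configure the data MDT-creation threshold and the tunnel limit, are not shown in this extract. The equivalent commands, taken from the quick-configuration listing above, are:

```
[edit routing-instances vpn-A protocols pim mdt]
user@host# set threshold group 224.4.4.4/32 source 10.10.20.43/32 rate 10

[edit routing-instances vpn-A protocols pim mdt]
user@host# set tunnel-limit 10
```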
Verification
To display information about the default MDT and any data MDTs for the VRF instance vpn-A, use the
show pim mdt instance vpn-A detail operational mode command. This command displays either the
outgoing tunnels (the tunnels initiated by the local PE router), the incoming tunnels (tunnels initiated by
the remote PE routers), or both.
To display the data MDT group addresses cached by PE routers that participate in the VRF instance vpn-
A, use the show pim mdt data-mdt-joins instance vpn-A operational mode command. The command
displays the information cached from MDT join TLV packets received by all PE routers participating in
the specified VRF instance.
You can trace the operation of data MDTs by including the mdt detail flag in the [edit protocols pim
traceoptions] configuration. When this flag is set, all the mt interface-related activity is logged in trace
files.
SEE ALSO
IN THIS SECTION
Requirements | 734
Overview | 734
Configuration | 735
Verification | 743
This example describes how to enable dynamic reuse of data multicast distribution tree (MDT) group
addresses.
Requirements
• Configure the router interfaces. See the Junos OS Network Interfaces Library for Routing Devices.
• Configure an interior gateway protocol or static routing. See the Junos OS Routing Protocols Library
for Routing Devices.
• Configure PIM Sparse Mode on the interfaces. See Enabling PIM Sparse Mode.
Overview
IN THIS SECTION
Topology | 735
A limited number of multicast group addresses are available for use in data MDT tunnels. By default,
when the available multicast group addresses are all used, no new data MDTs can be created.
You can enable dynamic reuse of data MDT group addresses. Dynamic reuse of data MDT group
addresses allows multiple multicast streams to share a single MDT and multicast provider group address.
For example, three streams can use the same provider group address and MDT tunnel.
The streams are assigned to a particular MDT in a round-robin fashion. Since a provider tunnel might be
used by multiple customer streams, this can result in egress routers receiving customer traffic that is not
destined for their attached customer sites. This example shows the plain PIM scenario, without the
MVPN provider tunnel.
Topology
Configuration
IN THIS SECTION
Procedure | 737
Results | 740
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, and then copy and paste
the commands into the CLI at the [edit] hierarchy level.
set routing-instances VPN-A protocols pim mdt threshold group 224.1.1.1/32 source 192.168.255.245/32 rate 20
set routing-instances VPN-A protocols pim mdt threshold group 224.1.1.2/32 source 192.168.255.245/32 rate 20
set routing-instances VPN-A protocols pim mdt threshold group 224.1.1.3/32 source 192.168.255.245/32 rate 20
set routing-instances VPN-A protocols pim mdt data-mdt-reuse
set routing-instances VPN-A protocols pim mdt tunnel-limit 2
set routing-instances VPN-A protocols pim mdt group-range 239.1.1.0/30
Procedure
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
[edit]
user@host# edit protocols
[edit protocols]
user@host# set mpls interface all
[edit protocols]
user@host# set ldp interface all
[edit protocols]
user@host# set bgp local-as 65520
[edit protocols]
user@host# set bgp group ibgp type internal
[edit protocols]
user@host# set bgp group ibgp local-address 10.255.38.17
[edit protocols]
user@host# set bgp group ibgp family inet-vpn unicast
[edit protocols]
user@host# set bgp group ibgp neighbor 10.255.38.21
[edit protocols]
user@host# set bgp group ibgp neighbor 10.255.38.15
[edit protocols]
user@host# set ospf traffic-engineering
[edit protocols]
user@host# set ospf area 0.0.0.0 interface all
[edit protocols]
user@host# set ospf area 0.0.0.0 interface fxp0.0 disable
[edit protocols]
user@host# set pim rp static address 10.255.38.21
[edit protocols]
user@host# set pim interface all mode sparse
[edit protocols]
user@host# set pim interface all version 2
[edit protocols]
user@host# set pim interface fxp0.0 disable
[edit protocols]
user@host# exit
3. Configure the routing instance, and apply the bgp-to-ospf export policy.
[edit]
user@host# edit routing-instances VPN-A
[edit routing-instances VPN-A]
user@host# set instance-type vrf
[edit routing-instances VPN-A]
user@host# set interface ge-1/1/2.0
[edit routing-instances VPN-A]
user@host# set interface lo0.1
[edit routing-instances VPN-A]
user@host# set route-distinguisher 10.0.0.10:04
[edit routing-instances VPN-A]
user@host# set vrf-target target:100:10
[edit routing-instances VPN-A]
user@host# set protocols ospf export bgp-to-ospf
5. Configure the groups that operate in dense mode and the group address on which to encapsulate
multicast traffic from the routing instance.
6. Configure the address of the RP and the interfaces operating in sparse-dense mode.
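The commands for steps 4 through 6 are not shown in this extract. Reconstructed from the quick-configuration listing and the Results output of this example, they would resemble:

```
[edit routing-instances VPN-A protocols pim]
user@host# set mdt threshold group 224.1.1.1/32 source 192.168.255.245/32 rate 20
user@host# set mdt threshold group 224.1.1.2/32 source 192.168.255.245/32 rate 20
user@host# set mdt threshold group 224.1.1.3/32 source 192.168.255.245/32 rate 20
user@host# set mdt data-mdt-reuse
user@host# set mdt tunnel-limit 2
user@host# set mdt group-range 239.1.1.0/30
user@host# set dense-groups 224.0.1.39/32
user@host# set dense-groups 224.0.1.40/32
user@host# set dense-groups 229.0.0.0/8
user@host# set vpn-group-address 239.1.0.0
user@host# set rp static address 10.255.38.15
user@host# set interface lo0.1 mode sparse-dense
user@host# set interface ge-1/1/2.0 mode sparse-dense
```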
Results
From configuration mode, confirm your configuration by entering the show policy-options, show
protocols, and show routing-instances commands. If the output does not display the intended
configuration, repeat the instructions in this example to correct the configuration.
family inet-vpn {
unicast;
}
neighbor 10.255.38.21;
neighbor 10.255.38.15;
}
}
ospf {
traffic-engineering;
area 0.0.0.0 {
interface all;
interface fxp0.0 {
disable;
}
}
}
ldp {
interface all;
}
pim {
rp {
static {
address 10.255.38.21;
}
}
interface all {
mode sparse;
version 2;
}
interface fxp0.0 {
disable;
}
}
protocols {
ospf {
export bgp-to-ospf;
area 0.0.0.0 {
interface all;
}
}
pim {
traceoptions {
file pim-VPN-A.log size 5m;
flag mdt detail;
}
dense-groups {
224.0.1.39/32;
224.0.1.40/32;
229.0.0.0/8;
}
vpn-group-address 239.1.0.0;
rp {
static {
address 10.255.38.15;
}
}
interface lo0.1 {
mode sparse-dense;
}
interface ge-1/1/2.0 {
mode sparse-dense;
}
mdt {
threshold {
group 224.1.1.1/32 {
source 192.168.255.245/32 {
rate 20;
}
}
group 224.1.1.2/32 {
source 192.168.255.245/32 {
rate 20;
}
}
group 224.1.1.3/32 {
source 192.168.255.245/32 {
rate 20;
}
}
}
data-mdt-reuse;
tunnel-limit 2;
group-range 239.1.1.0/30;
}
}
}
}
Verification
SEE ALSO
Example: Configuring Data MDTs and Provider Tunnels Operating in Any-Source Multicast Mode |
690
RELATED DOCUMENTATION
CHAPTER 21
IN THIS CHAPTER
Generating Next-Generation MVPN VRF Import and Export Policies Overview | 765
Example: Configuring Sender-Based RPF in a BGP MVPN with RSVP-TE Point-to-Multipoint Provider
Tunnels | 966
Example: Configuring Sender-Based RPF in a BGP MVPN with MLDP Point-to-Multipoint Provider
Tunnels | 1003
Anti-spoofing support for MPLS labels in BGP/MPLS IP VPNs (Inter-AS Option B) | 1086
Layer 3 BGP-MPLS virtual private networks (VPNs) are widely deployed in today’s networks worldwide.
Multicast applications, such as IPTV, are rapidly gaining popularity as is the number of networks with
multiple, media-rich services merging over a shared Multiprotocol Label Switching (MPLS) infrastructure.
The demand for delivering multicast service across a BGP-MPLS infrastructure in a scalable and reliable
way is also increasing.
RFC 4364 describes protocols and procedures for building unicast BGP-MPLS VPNs. However, the RFC
specifies no framework for provisioning multicast VPN (MVPN) services. In the past, MVPN traffic was
overlaid on top of a BGP-MPLS network using a virtual LAN model based on Draft Rosen. Using the
Draft Rosen approach, service providers were faced with the control and data plane scaling issues of an
overlay model and the maintenance of two routing and forwarding mechanisms: one for VPN unicast
service and one for VPN multicast service. For more information about the limitations of Draft Rosen,
see draft-rekhter-mboned-mvpn-deploy.
As a result, the IETF Layer 3 VPN working group published an Internet draft, draft-ietf-l3vpn-2547bis-
mcast-10.txt, Multicast in MPLS/BGP IP VPNs, that outlines a different architecture for next-generation
MVPNs, as well as an accompanying draft, draft-ietf-l3vpn-2547bis-mcast-bgp-08.txt, that proposes a
BGP control plane for MVPNs. In turn, Juniper Networks delivered the industry’s first implementation of
BGP next-generation MVPNs in 2007.
All examples in this document refer to the network topology shown in Figure 97 on page 746:
• The service provider in this example offers VPN unicast and multicast services to Customer A (vpna).
• The VPN multicast source is connected to Site 1 and transmits data to groups 232.1.1.1 and
224.1.1.1.
• The provider edge router 1 (Router PE1) VRF table acts as the C-RP (using address 10.12.53.1) for C-
PIM-SM ASM groups.
• The service provider uses RSVP-TE point-to-multipoint LSPs for transmitting VPN multicast data
across the network.
RELATED DOCUMENTATION
IN THIS SECTION
This section includes background material about how next-generation MVPNs work.
Route distinguisher and VPN routing and forwarding (VRF) route target extended communities are an
integral part of unicast BGP-MPLS virtual private networks (VPNs). Route distinguisher and route target
are often confused in terms of their purpose in BGP-MPLS networks. As they play an important role in
BGP next-generation MVPNs, it is important to understand what they are and how they are used as
described in RFC 4364.
“A VPN-IPv4 address is a 12-byte quantity, beginning with an 8-byte Route Distinguisher (RD) and
ending with a 4-byte IPv4 address. If several VPNs use the same IPv4 address prefix, the PEs translate
these into unique VPN-IPv4 address prefixes. This ensures that if the same address is used in several
different VPNs, it is possible for BGP to carry several completely different routes to that address, one for
each VPN.”
Typically, each VRF table on a provider edge (PE) router is configured with a unique route distinguisher.
Depending on the routing design, the route distinguisher can be unique or the same for a given VRF on
other PE routers. A route distinguisher is an 8-byte number with two fields. The first field can be either
an AS number (2 or 4 bytes) or an IP address (4 bytes). The second field is assigned by the user.
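To make the two forms concrete, here is a hypothetical sketch in Junos configuration syntax (the instance name vpna and the values are illustrative only and are not part of this example):

```
set routing-instances vpna route-distinguisher 65000:1
set routing-instances vpna route-distinguisher 10.255.1.1:1
```

The first form uses a 2-byte AS number in the first field; the second uses an IPv4 address. A VRF has a single route distinguisher, so in practice you would choose one form; the second command here would simply replace the first.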
RFC 4364 describes the purpose of a VRF route target extended community as the following:
“Every VRF is associated with one or more Route Target (RT) attributes.
When a VPN-IPv4 route is created (from an IPv4 route that the PE router has learned from a CE) by a PE
router, it is associated with one or more route target attributes. These are carried in BGP as attributes of
the route.
Any route associated with Route Target T must be distributed to every PE router that has a VRF
associated with Route Target T. When such a route is received by a PE router, it is eligible to be installed
in those of the PE’s VRFs that are associated with Route Target T.”
The route target also contains two fields and is structured similarly to a route distinguisher. The first field
of the route target is either an AS number (2 or 4 bytes) or an IP address (4 bytes), and the second field
is assigned by the user. Each PE router advertises its VPN-IPv4 routes with the route target (as one of
the BGP path attributes) configured for the VRF table. The route target attached to the advertised route
is referred to as the export route target. On the receiving PE router, the route target attached to the
route is compared to the route target configured for the local VRF tables. The locally configured route
target that is used in deciding whether a VPN-IPv4 route should be installed in a VRF table is referred to
as the import route target.
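As a hypothetical illustration of the export and import roles (the instance name and target value are not part of this example):

```
set routing-instances vpna vrf-target export target:65000:100
set routing-instances vpna vrf-target import target:65000:100
```

When the export and import route targets are identical, the shorthand form vrf-target target:65000:100 configures both at once.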
C-Multicast Routing
Customer multicast (C-multicast) routing information exchange refers to the distribution of customer
PIM (C-PIM) join/prune messages received from local customer edge (CE) routers to other PE routers
(toward the VPN multicast source).
BGP MVPNs
BGP MVPNs use BGP as the control plane protocol between PE routers for MVPNs, including the
exchange of C-multicast routing information. The support of BGP as a PE-PE protocol for exchanging C-
multicast routes is mandated by Internet draft draft-ietf-l3vpn-mvpn-considerations-06.txt, Mandatory
Features in a Layer 3 Multicast BGP/MPLS VPN Solution. The use of BGP for distributing C-multicast
routing information is closely modeled after its highly successful counterpart of VPN unicast route
distribution. Using BGP as the control plane protocol allows service providers to take advantage of this
widely deployed, feature-rich protocol. It also enables service providers to leverage their knowledge and
investment in managing BGP-MPLS VPN unicast service to offer VPN multicast services.
A PE router can be a sender, a receiver, or both a sender and a receiver, depending on the configuration:
• A sender site set includes PE routers with local VPN multicast sources (VPN customer multicast
sources either directly connected or connected via a CE router). A PE router that is in the sender site
set is the sender PE router.
• A receiver site set includes PE routers that have local VPN multicast receivers. A PE router that is in
the receiver site set is the receiver PE router.
Provider Tunnels
In BGP MVPNs, the sender PE router distributes information about the provider tunnel in a BGP
attribute called provider multicast service interface (PMSI). By default, all receiver PE routers join and
become the leaves of the provider tunnel rooted at the sender PE router.
• An inclusive provider tunnel (I-PMSI provider tunnel) enables a PE router that is in the sender site set
of an MVPN to transmit multicast data to all PE routers that are members of that MVPN.
• A selective provider tunnel (S-PMSI provider tunnel) enables a PE router that is in the sender site set
of an MVPN to transmit multicast data to a subset of the PE routers.
RELATED DOCUMENTATION
IN THIS SECTION
The BGP next-generation multicast virtual private network (MVPN) control plane, as specified in
Internet draft draft-ietf-l3vpn-2547bis-mcast-10.txt and Internet draft draft-ietf-l3vpn-2547bis-mcast-
bgp-08.txt, distributes all the necessary information to enable end-to-end C-multicast routing exchange
via BGP. The main tasks of the control plane (Table 23 on page 750) include MVPN autodiscovery,
distribution of provider tunnel information, and PE-PE C-multicast route exchange.
• MVPN autodiscovery—A provider edge (PE) router discovers the identity of the other PE routers that
participate in the same MVPN.
• Distribution of provider tunnel information—A sender PE router advertises the type and identifier of
the provider tunnel that it will use to transmit VPN multicast packets.
• PE-PE C-multicast route exchange—A receiver PE router propagates C-multicast join messages (C-joins)
received over its VPN interface toward the VPN multicast sources.
A PE router that participates in a BGP-based next-generation MVPN network is required to send a BGP
update message that contains MCAST-VPN network layer reachability information (NLRI). An MCAST-VPN
NLRI contains route type, length, and variable fields. The value of each variable field depends on
the route type.
Seven types of next-generation MVPN BGP routes (also referred to as routes in this topic) are specified
(Table 24 on page 751). The first five route types are called autodiscovery MVPN routes. This topic also
refers to Type 1-5 routes as non-C-multicast MVPN routes. Type 6 and Type 7 routes are called C-
multicast MVPN routes.
• Type 4 (leaf autodiscovery route)—Used by a sender PE router to discover the leaves of a selective
provider tunnel.
• Type 5 (source active autodiscovery route)—Used by PE routers to learn the identity of active VPN
multicast sources.
• Type 6 (shared tree join route)—Originated when a PE router receives a shared tree C-join (C-*, C-G)
through its PE-CE interface.
• Type 7 (source tree join route)—Originated when a PE router receives a source tree C-join (C-S, C-G),
or originated by a PE router that already has a Type 6 route and receives a Type 5 route.
All next-generation MVPN PE routers create and advertise a Type 1 intra-AS autodiscovery route
(Figure 98 on page 753) for each MVPN to which they are connected. Table 25 on page 753
describes the format of each MVPN Type 1 intra-AS autodiscovery route.
Table 25: Type 1 Intra-AS Autodiscovery Route MVPN Format Descriptions

• Route Distinguisher—Set to the route distinguisher configured for the VPN.
• Originating Router’s IP Address—Set to the IP address of the router originating this route. The address
is typically the primary loopback address of the PE router.
Type 2 routes are used for membership discovery between PE routers that belong to different
autonomous systems (ASs). Their use is not covered in this topic.
A sender PE router that initiates a selective provider tunnel is required to originate a Type 3 intra-AS S-
PMSI autodiscovery route with the appropriate PMSI attribute.
A receiver PE router responds to a Type 3 route by originating a Type 4 leaf autodiscovery route if it has
local receivers interested in the traffic transmitted on the selective provider tunnel. Type 4 routes inform
the sender PE router of the leaf PE routers.
Type 5 routes carry information about active VPN sources and the groups to which they are transmitting
data. These routes can be generated by any PE router that becomes aware of an active source. Type 5
routes apply only for PIM-SM (ASM) when intersite source-tree-only mode is being used.
The C-multicast route exchange between PE routers refers to the propagation of C-joins from receiver
PE routers to the sender PE routers.
In a next-generation MVPN, C-joins are translated into (or encoded as) BGP C-multicast MVPN routes
and advertised via the BGP MCAST-VPN address family toward the sender PE routers.
• Type 6 C-multicast routes are used in representing information contained in a shared tree (C-*, C-G)
join.
• Type 7 C-multicast routes are used in representing information contained in a source tree (C-S, C-G)
join.
PMSI Attribute
The provider multicast service interface (PMSI) attribute (Figure 99 on page 755) carries information
about the provider tunnel. In a next-generation MVPN network, the sender PE router sets up the
provider tunnel, and therefore is responsible for originating the PMSI attribute. The PMSI attribute can
be attached to Type 1, Type 2, or Type 3 routes. Table 26 on page 755 describes each PMSI attribute
format.
• Flags—Currently has only one flag specified: Leaf Information Required. This flag is used for S-PMSI
provider tunnel setup.
• Tunnel Type—Identifies the tunnel technology used by the sender. Currently there are seven types of
tunnels supported.
• MPLS Label—Used when the sender PE router allocates the MPLS labels (also called upstream label
allocation). This technique is described in RFC 5331 and is outside the scope of this topic.
• Tunnel Identifier—Uniquely identifies the tunnel. Its value depends on the value set in the tunnel type
field.
Two extended communities are specified to support next-generation MVPNs: source AS (src-as) and
VRF route import extended communities.
The source AS extended community is an AS-specific extended community that identifies the AS from
which a route originates. This community is mostly used for inter-AS operations, which is not covered in
this topic.
The VPN routing and forwarding (VRF) route import extended community is an IP-address-specific
extended community that is used for importing C-multicast routes in the VRF table of the active sender
PE router to which the source is attached.
Each PE router creates a unique route target import and src-as community for each VPN and attaches
them to the VPN-IPv4 routes.
RELATED DOCUMENTATION
IN THIS SECTION
Selective Provider Tunnels (S-PMSI Autodiscovery/Type 3 and Leaf Autodiscovery/Type 4 Routes) | 759
A next-generation multicast virtual private network (MVPN) data plane is composed of provider tunnels
originated by and rooted at the sender provider edge (PE) routers and the receiver PE routers as the
leaves of the provider tunnel.
A provider tunnel can carry data for one or more VPNs. Those provider tunnels that carry data for more
than one VPN are called aggregate provider tunnels and are outside the scope of this topic. Here, we
assume that a provider tunnel carries data for only one VPN.
This topic covers two types of tunnel technologies: IP generic routing encapsulation (GRE) provider
tunnels signaled by Protocol Independent Multicast-Sparse Mode (PIM-SM) any-source multicast (ASM)
and MPLS provider tunnels signaled by RSVP-Traffic Engineering (RSVP-TE).
When a provider tunnel is signaled by PIM, the sender PE router runs another instance of the PIM
protocol on the provider’s network (P-PIM) that signals a provider tunnel for that VPN. When a provider
tunnel is signaled by RSVP-TE, the sender PE router initiates a point-to-multipoint label-switched path
(LSP) toward receiver PE routers by using point-to-multipoint RSVP-TE protocol messages. In either
case, the sender PE router advertises the tunnel signaling protocol and the tunnel ID to other PE routers
via BGP by attaching the provider multicast service interface (PMSI) attribute to either the Type 1 intra-
AS autodiscovery routes (inclusive provider tunnels) or Type 3 S-PMSI autodiscovery routes (selective
provider tunnels).
NOTE: The sender PE router goes through two steps when setting up the data plane. First, using
the PMSI attribute, it advertises the provider tunnel it is using via BGP. Second, it actually signals
the tunnel using whatever tunnel signaling protocol is configured for that VPN. This allows
receiver PE routers to bind the tunnel that is being signaled to the VPN that imported the Type 1
intra-AS autodiscovery route. Binding a provider tunnel to a VRF table enables a receiver PE
router to map the incoming traffic from the core network on the provider tunnel to the local
target VRF table.
The PMSI attribute contains the provider tunnel type and an identifier. The value of the provider tunnel
identifier depends on the tunnel type. Table 27 on page 757 identifies the tunnel types specified in
Internet draft draft-ietf-l3vpn-2547bis-mcast-bgp-08.txt.
3 PIM-SSM tree
4 PIM-SM tree
5 PIM-Bidir tree
6 Ingress replication
This section describes various types of provider tunnels and attributes of provider tunnels.
When the Tunnel Type field of the PMSI attribute is set to 4 (PIM-SM Tree), the tunnel identifier field
contains <Sender Address, P-Multicast Group Address>. The Sender Address field is set to the router
ID of the sender PE router. The P-multicast group address is set to a multicast group address from the
service provider’s P-multicast address space and uniquely identifies the VPN. A receiver PE router that
receives an intra-AS autodiscovery route with a PMSI attribute whose tunnel type is PIM-SM is required
to join the provider tunnel.
For example, if the service provider deploys PIM-SM provider tunnels (instead of RSVP-TE provider
tunnels), Router PE1 advertises the following PMSI attribute:
When the tunnel type field of the PMSI attribute is set to 1 (RSVP-TE point-to-multipoint LSP), the
tunnel identifier field contains an RSVP-TE point-to-multipoint session object as described in RFC 4875.
The session object contains the <Extended Tunnel ID, Reserved, Tunnel ID, P2MP ID> associated with
the point-to-multipoint LSPs.
The PE router that originates the PMSI attribute is required to signal an RSVP-TE point-to-multipoint
LSP and the sub-LSPs. A PE router that receives this PMSI attribute must establish the appropriate state
to properly handle the traffic received over the sub-LSP.
A selective provider tunnel is used for mapping a specific C-multicast flow (a (C-S, C-G) pair) onto a
specific provider tunnel. There are a variety of situations in which selective provider tunnels can be
useful. For example, they can be used for putting high-bandwidth VPN multicast data traffic onto a
separate provider tunnel rather than the default inclusive provider tunnel, thus restricting the
distribution of traffic to only those PE routers with active receivers.
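As an illustration, the following sketch maps a single high-bandwidth (C-S, C-G) flow onto a selective RSVP-TE provider tunnel once its rate exceeds a threshold. The instance name vpna, the group and source addresses, the rate value, and the choice of RSVP-TE as the tunnel technology are all placeholders, not taken from this topic's example network:

```
routing-instances {
    vpna {
        provider-tunnel {
            selective {
                group 224.1.1.1/32 {
                    source 10.10.10.1/32 {
                        rsvp-te {
                            label-switched-path-template {
                                default-template;
                            }
                        }
                        threshold-rate 10; /* move the flow to the S-PMSI tunnel above this rate */
                    }
                }
            }
        }
    }
}
```

With this configuration, the sender PE router originates the Type 3 S-PMSI autodiscovery route for the matching flow only after the threshold is crossed.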
In BGP next-generation multicast virtual private networks (MVPNs), selective provider tunnels are
signaled using Type 3 Selective-PMSI (S-PMSI) autodiscovery routes. See Figure 100 on page 759 and
Table 28 on page 760 for details. The sender PE router sends a Type 3 route to signal that it is sending
traffic for a particular (C-S, C-G) flow using an S-PMSI provider tunnel.
Figure 100: S-PMSI Autodiscovery Route Type Multicast (MCAST)-VPN Network Layer Reachability
Information (NLRI) Format
Route Distinguisher: Set to the route distinguisher configured on the router originating this route.
Multicast Source Length: Set to 32 for IPv4 and to 128 for IPv6 C-S IP addresses.
Multicast Group Length: Set to 32 for IPv4 and to 128 for IPv6 C-G addresses.
The S-PMSI autodiscovery (Type 3) route carries a PMSI attribute similar to the PMSI attribute carried
with intra-AS autodiscovery (Type 1) routes. The Flags field of the PMSI attribute carried by the S-PMSI
autodiscovery route is set to the leaf information required. This flag signals receiver PE routers to
originate a Type 4 leaf autodiscovery route (Figure 101 on page 760) to join the selective provider
tunnel if they have active receivers. See Table 29 on page 760 for details of leaf autodiscovery route
type MCAST-VPN NLRI format descriptions.
Table 29: Leaf Autodiscovery Route Type MCAST-VPN NLRI Format Descriptions
Originating Router’s IP Address: Set to the IP address of the PE router originating the leaf autodiscovery route. This is typically the primary loopback address.
RELATED DOCUMENTATION
Juniper Networks introduced the industry’s first implementation of BGP next-generation multicast
virtual private networks (MVPNs). See Figure 102 on page 762 for a summary of a Junos OS next-
generation MVPN routing flow.
Next-generation MVPN services are configured on top of BGP-MPLS unicast VPN services.
You can configure a Juniper Networks PE router that is already providing unicast BGP-MPLS VPN
connectivity to support multicast VPN connectivity in three steps:
1. Configure the provider edge (PE) routers to support the BGP multicast VPN address family by
including the signaling statement at the [edit protocols bgp group group-name family inet-mvpn]
hierarchy level. This address family enables PE routers to exchange MVPN routes.
2. Configure the PE routers to support the MVPN control plane tasks by including the mvpn statement
at the [edit routing-instances routing-instance-name protocols] hierarchy level. This statement
signals PE routers to initialize the MVPN module that is responsible for the majority of next-
generation MVPN control plane tasks.
3. Configure the sender PE router to signal a provider tunnel by including the provider-tunnel
statement at the [edit routing-instances routing-instance-name] hierarchy level. You must also
enable the tunnel signaling protocol (RSVP-TE or P-PIM) if it is not part of the unicast VPN service
configuration. To enable the tunnel signaling protocol, include the rsvp-te or pim-asm statements at
the [edit routing-instances routing-instance-name provider-tunnel] hierarchy level.
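A minimal sketch combining the three statements follows. The instance name vpna, the BGP group name ibgp, and the choice of RSVP-TE as the tunnel signaling protocol are placeholders:

```
protocols {
    bgp {
        group ibgp {
            family inet-mvpn {
                signaling;    /* step 1: enable the MCAST-VPN address family */
            }
        }
    }
}
routing-instances {
    vpna {
        provider-tunnel {
            rsvp-te {         /* step 3: signal the provider tunnel */
                label-switched-path-template {
                    default-template;
                }
            }
        }
        protocols {
            mvpn;             /* step 2: initialize the MVPN control plane */
        }
    }
}
```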
After these three statements are configured and each PE router has established internal BGP (IBGP)
sessions using both INET-VPN and MCAST-VPN address families, four routing tables are automatically
created. These tables are bgp.l3vpn.0, bgp.mvpn.0, <routing-instance-name>.inet.0, and <routing-instance-name>.mvpn.0. See Table 30 on page 763.
RELATED DOCUMENTATION
IN THIS SECTION
In Junos OS, the policy module is responsible for VPN routing and forwarding (VRF) route import and
export decisions. You can configure these policies explicitly, or Junos OS can generate them internally
for you to reduce user-configured statements and simplify configuration. Junos OS generates all
necessary policies for supporting next-generation multicast virtual private network (MVPN) import and
export decisions. Some of these policies affect normal VPN unicast routes.
The system gives a name to each internal policy it creates. The name of an internal policy starts and
ends with a “__” notation. Also, the keyword internal is added at the end of each internal policy name.
You can display these internal policies using the show policy command.
A Juniper Networks provider edge (PE) router requires a vrf-import and a vrf-export policy to control
unicast VPN route import and export decisions for a VRF. You can configure these policies explicitly at the [edit routing-instances routing-instance-name vrf-import import_policy_name] and [edit routing-instances routing-instance-name vrf-export export_policy_name] hierarchy levels. Alternatively, you can configure only the route target for the VRF at the [edit routing-instances routing-instance-name vrf-target] hierarchy level, and Junos OS then generates these policies automatically for you. Routers
referenced in this topic are shown in "Understanding Next-Generation MVPN Network Topology" on
page 745.
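For instance, configuring only the route target lets Junos OS generate the vrf-import and vrf-export policies internally. The instance name vpna is a placeholder; target:10:1 is the route target value used elsewhere in this topic:

```
routing-instances {
    vpna {
        instance-type vrf;
        vrf-target target:10:1;  /* Junos OS derives the import and export policies from this value */
    }
}
```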
The following list identifies the automatically generated policy names and where they are applied:
Policy: vrf-import
Policy: vrf-export
Use the show policy __vrf-import-vpna-internal__ command to verify that Router PE1 has created the
following vrf-import and vrf-export policies based on a vrf-target of target:10:1. In this example, we see
that the vrf-import policy is constructed to accept a route if the route target of the route matches
target:10:1. Similarly, a route is exported with a route target of target:10:1.
• RT value: target:10:1
When you configure the mvpn statement at the [edit routing-instances routing-instance-name
protocols] hierarchy level, Junos OS automatically creates three new internal policies: one for export,
one for import, and one for handling Type 4 routes. Routers referenced in this topic are shown in
"Understanding Next-Generation MVPN Network Topology" on page 745.
The following list identifies the automatically generated policy names and where they are applied:
Policy 1: This policy is used to attach rt-import and src-as extended communities to VPN-IPv4 routes.
Use the show policy __vrf-mvpn-export-inet-vpna-internal__ command to verify that the following
export policy is created on Router PE1. Router PE1 adds rt-import:10.1.1.1:64 and src-as:65000:0
communities to unicast VPN routes through this policy.
Policy 2: This policy is used to import C-multicast routes from the bgp.mvpn.0 table to the <routing-instance-name>.mvpn.0 table.
Use the show policy __vrf-mvpn-import-cmcast-vpna-internal__ command to verify that the following
import policy is created on Router PE1. The policy accepts those C-multicast MVPN routes carrying a
route target of target:10.1.1.1:64 and installs them in the vpna.mvpn.0 table.
Policy 3: This policy is used for importing Type 4 routes and is created by default even if a selective
provider tunnel is not configured. The policy affects only Type 4 routes received from receiver PE
routers.
RELATED DOCUMENTATION
IN THIS SECTION
Comparison of Draft Rosen Multicast VPNs and Next-Generation Multiprotocol BGP Multicast VPNs | 769
PIM Sparse Mode, PIM Dense Mode, Auto-RP, and BSR for MBGP MVPNs | 771
• Video transport applications for wholesale IPTV and multiple content providers attached to the same
network
There are two ways to implement Layer 3 MVPNs. They are often referred to as dual PIM MVPNs (also
known as “draft-rosen”) and multiprotocol BGP (MBGP)-based MVPNs (the “next generation” method of
MVPN configuration). Both methods are supported and equally effective. The main difference is that the
MBGP-based MVPN method does not require multicast configuration on the service provider backbone.
Multiprotocol BGP multicast VPNs employ the intra-autonomous system (AS) next-generation BGP
control plane and PIM sparse mode as the data plane. The PIM state information is maintained between
the PE routers using the same architecture that is used for unicast VPNs. The main advantage of
deploying MVPNs with MBGP is simplicity of configuration and operation because multicast is not
needed on the service provider VPN backbone connecting the PE routers.
Using the draft-rosen approach, service providers might experience control and data plane scaling issues
associated with the maintenance of two routing and forwarding mechanisms: one for VPN unicast and
one for VPN multicast. For more information on the limitations of Draft Rosen, see draft-rekhter-
mboned-mvpn-deploy.
SEE ALSO
• They extend Layer 3 VPN service (RFC 4364) to support IP multicast for Layer 3 VPN service
providers.
• They follow the same architecture as specified by RFC 4364 for unicast VPNs. Specifically, BGP is
used as the provider edge (PE) router-to-PE router control plane for multicast VPN.
• They eliminate the requirement for the virtual router (VR) model (as specified in Internet draft draft-
rosen-vpn-mcast, Multicast in MPLS/BGP VPNs) for multicast VPNs and the RFC 4364 model for
unicast VPNs.
• They rely on RFC 4364-based unicast with extensions for intra-AS and inter-AS communication.
An MBGP MVPN defines two types of site sets, a sender site set and a receiver site set. These sites
have the following properties:
• Hosts within the sender site set can originate multicast traffic for receivers in the receiver site set.
• Receivers outside the receiver site set should not be able to receive this traffic.
• Hosts within the receiver site set can receive multicast traffic originated by any host in the sender
site set.
• Hosts within the receiver site set should not be able to receive multicast traffic originated by any
host that is not in the sender site set.
A site can be in both the sender site set and the receiver site set, so hosts within such a site can both
originate and receive multicast traffic. For example, the sender site set could be the same as the receiver
site set, in which case all sites could both originate and receive multicast traffic from one another.
Sites within a given MBGP MVPN might be within the same organization or in different organizations,
which means that an MBGP MVPN can be either an intranet or an extranet. A given site can be in more
than one MBGP MVPN, so MBGP MVPNs might overlap. Not all sites of a given MBGP MVPN have to
be connected to the same service provider, meaning that an MBGP MVPN can span multiple service
providers.
Feature parity for the MVPN extranet functionality or overlapping MVPNs on the Junos Trio chipset is
supported in Junos OS Releases 11.1R2, 11.2R2, and 11.4.
Another way to look at an MBGP MVPN is to say that an MBGP MVPN is defined by a set of
administrative policies. These policies determine both the sender site set and the receiver site set. These
policies are established by MBGP MVPN customers, but implemented by service providers using the
existing BGP and MPLS VPN infrastructure.
SEE ALSO
PIM Sparse Mode, PIM Dense Mode, Auto-RP, and BSR for MBGP MVPNs
You can configure PIM sparse mode, PIM dense mode, auto-RP, and bootstrap router (BSR) for MBGP
MVPN networks:
• PIM sparse mode—Allows a router to use any unicast routing protocol and performs reverse-path
forwarding (RPF) checks using the unicast routing table. PIM sparse mode includes an explicit join
message, so routers determine where the interested receivers are and send join messages upstream
to their neighbors, building trees from the receivers to the rendezvous point (RP).
• PIM dense mode—Allows a router to use any unicast routing protocol and performs reverse-path
forwarding (RPF) checks using the unicast routing table. Packets are forwarded to all interfaces
except the incoming interface. Unlike PIM sparse mode, where explicit joins are required for packets
to be transmitted downstream, packets are flooded to all routers in the routing instance in PIM dense
mode.
• Auto-RP—Uses PIM dense mode to propagate control messages and establish RP mapping. You can
configure an auto-RP node in one of three different modes: discovery mode, announce mode, and
mapping mode.
• BSR—Establishes RPs. A selected router in a network acts as a BSR, which selects a unique RP for
different group ranges. BSR messages are flooded using a data tunnel between PE routers.
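As a hedged sketch, an auto-RP mapping node inside an MVPN routing instance might be configured as follows. The instance name vpna and the RP address are placeholders, not part of this guide's example network:

```
routing-instances {
    vpna {
        protocols {
            pim {
                rp {
                    local {
                        address 10.255.1.1;  /* this router also acts as the RP */
                    }
                    auto-rp mapping;         /* announce candidate RPs and send mapping messages */
                }
                interface all;
            }
        }
    }
}
```

Discovery-only nodes would use auto-rp discovery instead, and announce-only candidates would use auto-rp announce.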
SEE ALSO
Inclusive tree: A single multicast distribution tree in the backbone carrying all the multicast traffic from a specified set of one or more MVPNs. An inclusive tree carrying the traffic of more than one MVPN is an aggregate inclusive tree. All the PEs that attach to MVPN receiver sites using the tree belong to that inclusive tree.
Selective tree: A single multicast distribution tree in the backbone carrying traffic for a specified set of one or more multicast groups. When multicast groups belonging to more than one MVPN are on the tree, it is called an aggregate selective tree.
By default, traffic from most multicast groups can be carried by an inclusive tree, while traffic from some
groups (for example, high bandwidth groups) can be carried by one of the selective trees. Selective trees,
if they contain only those PEs that need to receive multicast data from one or more groups assigned to
the tree, can provide more optimal routing than inclusive trees alone, although this requires more state
information in the P routers.
An MPLS-based VPN running BGP with autodiscovery is used as the basis for a next-generation MVPN.
The autodiscovered route information is carried in MBGP network layer reachability information (NLRI)
updates for multicast VPNs (MCAST-VPNs). These MCAST-VPN NLRIs are handled in the same way as
IPv4 routes: route distinguishers are used to distinguish between different VPNs in the network. These
NLRIs are imported and exported based on the route target extended communities, just as IPv4 unicast
routes. In other words, existing BGP mechanisms are used to distribute multicast information on the
provider backbone without requiring multicast directly.
For example, consider a customer running Protocol-Independent Multicast (PIM) sparse mode in source-
specific multicast (SSM) mode. Only source tree join customer multicast (c-multicast) routes are
required. (PIM sparse mode in any-source multicast (ASM) mode can be supported with a few
enhancements to SSM mode.)
The customer multicast route carrying a particular multicast source S needs to be imported only into the
VPN routing and forwarding (VRF) table on the PE router connected to the site that contains the source
S and not into any other VRF, even for the same MVPN. To do this, each VRF on a particular PE has a
distinct VRF route import extended community associated with it. This community consists of the PE
router's IP address and local PE number. Different MVPNs on a particular PE have different route
imports, and for a particular MVPN, the VRF instances on different PE routers have different route
imports. This VRF route import is auto-configured and not controlled by the user.
Also, all the VRFs within a particular MVPN will have information about VRF route imports for each VRF.
This is accomplished by “piggybacking” the VRF route import extended community onto the unicast
VPN IPv4 routes. To make sure a customer multicast route carrying multicast source S is imported only into the VRF on the PE router connected to the site containing the source S, it is necessary to find the unicast VPN IPv4 route to S and set the route target of the customer multicast route to the VRF route import community carried by the VPN IPv4 route just found.
The process of originating customer multicast routes in an MBGP-based MVPN is shown in Figure 103
on page 775.
In the figure, an MVPN has three receiver sites (R1, R2, and R3) and one source site (S). The site routers
are connected to four PE routers, and PIM is running between the PE routers and the site routers.
However, only BGP runs between the PE routers on the provider's network.
When router PE-1 receives a PIM join message for (S,G) from site router R1, this means that site R1 has
one or more receivers for a given source and multicast group (S,G) combination. In that case, router PE-1
constructs and originates a customer multicast route after doing three things:
1. Finding the unicast VPN IPv4 route to source S
2. Extracting the route distinguisher and VRF route import from this route
3. Putting the (S,G) information from the PIM join, the route distinguisher from the VPN IPv4 route, and the route target from the VRF route import of the VPN IPv4 route into an MBGP update
The update is distributed around the VPN through normal BGP mechanisms such as route reflectors.
What happens when the source site S receives the MBGP information is shown in Figure 104 on page
778. In the figure, the customer multicast route information is distributed by the BGP route reflector as
an MBGP update.
1. Receive the customer multicast route originated by the PE routers and aggregated by the route
reflector.
2. Accept the customer multicast route into the VRF for the correct MVPN (because the VRF route
import matches the route target carried in the customer multicast route information).
3. Create the proper (S,G) state in the VRF and propagate the information to the customer routers of
source site S using PIM.
SEE ALSO
Release Description
11.1R2 Feature parity for the MVPN extranet functionality or overlapping MVPNs on the Junos Trio chipset is
supported in Junos OS Releases 11.1R2, 11.2R2, and 11.4.
RELATED DOCUMENTATION
IN THIS SECTION
Example: Configuring Point-to-Multipoint LDP LSPs as the Data Plane for Intra-AS MBGP MVPNs | 781
Example: Configuring Ingress Replication for IP Multicast Using MBGP MVPNs | 789
Example: Configuring BGP Route Flap Damping Based on the MBGP MVPN Address Family | 851
IN THIS SECTION
Multiprotocol BGP-based multicast VPNs (also referred to as next-generation Layer 3 VPN multicast)
constitute the next evolution after dual multicast VPNs (draft-rosen) and provide a simpler solution for
administrators who want to configure multicast over Layer 3 VPNs.
• They extend Layer 3 VPN service (RFC 2547) to support IP multicast for Layer 3 VPN service
providers.
• They follow the same architecture as specified by RFC 2547 for unicast VPNs. Specifically, BGP is
used as the control plane.
• They eliminate the requirement for the virtual router (VR) model, which is specified in Internet draft
draft-rosen-vpn-mcast, Multicast in MPLS/BGP VPNs, for multicast VPNs.
• They rely on RFC 2547-based unicast with extensions for intra-AS and inter-AS communication.
Multiprotocol BGP-based VPNs are defined by two sets of sites: a sender set and a receiver set. Hosts
within a receiver site set can receive multicast traffic and hosts within a sender site set can send
multicast traffic. A site set can be both receiver and sender, which means that hosts within such a site
can both send and receive multicast traffic. Multiprotocol BGP-based VPNs can span organizations (so
the sites can be intranets or extranets), can span service providers, and can overlap.
Site administrators configure multiprotocol BGP-based VPNs based on customer requirements and the
existing BGP and MPLS VPN infrastructure.
BGP-based multicast VPN (MVPN) customer multicast routes are aggregated by route reflectors. A
route reflector (RR) might receive a customer multicast route with the same NLRI from more than one
provider edge (PE) router, but the RR readvertises only one such NLRI. If the set of PE routers that
advertise this NLRI changes, the RR does not update the route. This minimizes route churn. To achieve
this, the RR sets the next hop to self. In addition, the RR sets the originator ID to itself. The RR avoids
unnecessary best-path computation if it receives a subsequent customer multicast route for an NLRI
that the RR is already advertising. This allows aggregation of source active and customer multicast
routes with the same MVPN NLRI.
SEE ALSO
Example: Configuring Point-to-Multipoint LDP LSPs as the Data Plane for Intra-AS MBGP MVPNs
Example: Configuring Point-to-Multipoint LDP LSPs as the Data Plane for Intra-AS
MBGP MVPNs
IN THIS SECTION
Requirements | 781
Overview | 783
Configuration | 786
Verification | 788
This example shows how to configure point-to-multipoint (P2MP) LDP label-switched paths (LSPs) as
the data plane for intra-autonomous system (AS) multiprotocol BGP (MBGP) multicast VPNs (MVPNs).
This feature is well suited for service providers who are already running LDP in the MPLS backbone and
need MBGP MVPN functionality.
Requirements
• Configure the router interfaces. See the Junos OS Network Interfaces Library for Routing Devices.
• Configure an interior gateway protocol or static routing. See the Junos OS Routing Protocols Library
for Routing Devices.
• Configure a BGP-MVPN control plane. See MBGP-Based Multicast VPN Trees in the Multicast
Protocols User Guide.
• Configure LDP as the signaling protocol on all P2MP provider and provider-edge routers. See LDP
Operation in the Junos OS MPLS Applications User Guide.
• Configure P2MP LDP LSPs as the provider tunnel technology on each PE router in the MVPN that
belongs to the sender site set. See the Junos OS MPLS Applications User Guide.
• Configure either a virtual loopback tunnel interface (requires a Tunnel PIC) or the vrf-table-label
statement in the MVPN routing instance. If you configure the vrf-table-label statement, you can
configure an optional virtual loopback tunnel interface as well.
• In an extranet scenario when the egress PE router belongs to multiple MVPN instances, all of which
need to receive a specific multicast stream, a virtual loopback tunnel interface (and a Tunnel PIC) is
required on the egress PE router. See Configuring Virtual Loopback Tunnels for VRF Table Lookup in
the Junos OS Services Interfaces Library for Routing Devices.
• If the egress PE router is also a transit router for the point-to-multipoint LSP, a virtual loopback
tunnel interface (and a Tunnel PIC) is required on the egress PE router. See Configuring Virtual
Loopback Tunnels for VRF Table Lookup in the Multicast Protocols User Guide.
• Some extranet configurations of MBGP MVPNs with point-to-multipoint LDP LSPs as the data plane
require a virtual loopback tunnel interface (and a Tunnel PIC) on egress PE routers. When an egress
PE router belongs to multiple MVPN instances, all of which need to receive a specific multicast
stream, the vrf-table-label statement cannot be used. In Figure 105 on page 783, the CE1 and CE2 routers belong to
different MVPNs. However, they want to receive a multicast stream being sent by Source. If the vrf-
table-label statement is configured on Router PE2, the packet cannot be forwarded to both CE1 and
CE2. This causes packet loss. The packet is forwarded to both Routers CE1 and CE2 if a virtual
loopback tunnel interface is used in both MVPN routing instances on Router PE2. Thus, you need to
set up a virtual loopback tunnel interface if you are using an extranet scenario wherein the egress PE
router belongs to multiple MVPN instances that receive a specific multicast stream, or if you are
using the egress PE router as a transit router for the point-to-multipoint LSP.
NOTE: Starting in Junos OS Release 15.1X49-D50 and Junos OS Release 17.3R1, the vrf-
table-label statement allows mapping of the inner label to a specific Virtual Routing and
Forwarding (VRF). This mapping allows examination of the encapsulated IP header at an
egress VPN router. For SRX Series devices, the vrf-table-label statement is currently
Figure 105: Extranet Configuration of MBGP MVPN with P2MP LDP LSPs as Data Plane
See Configuring Virtual Loopback Tunnels for VRF Table Lookup for more information.
Overview
IN THIS SECTION
Topology | 785
This topic describes how P2MP LDP LSPs can be configured as the data plane for intra-AS selective
provider tunnels. Selective P2MP LSPs are triggered only based on the bandwidth threshold of a
particular customer’s multicast stream. A separate P2MP LDP LSP is set up for a given customer source
and customer group pair (C-S, C-G) by a PE router. The C-S is behind the PE router that belongs in the
sender site set. Aggregation of intra-AS selective provider tunnels across MVPNs is not supported.
When you configure selective provider tunnels, leaves discover the P2MP LSP root as follows. A PE
router with a receiver for a customer multicast stream behind it needs to discover the identity of the PE
router (and the provider tunnel information) with the source of the customer multicast stream behind it.
This information is auto-discovered dynamically using the S-PMSI AD routes originated by the PE router
with the C-S behind it.
The Junos OS also supports P2MP LDP LSPs as the data plane for intra-AS inclusive provider tunnels.
These tunnels are triggered based on the MVPN configuration. A separate P2MP LDP LSP is set up for a
given MVPN by a PE router that belongs in the sender site set. This PE router is the root of the P2MP
LSP. Aggregation of intra-AS inclusive provider tunnels across MVPNs is not supported.
When you configure inclusive provider tunnels, leaves discover the P2MP LSP root as follows. A PE
router with a receiver site for a given MVPN needs to discover the identities of PE routers (and the
provider tunnel information) with sender sites for that MVPN. This information is auto-discovered
dynamically using the intra-AS auto-discovery routes originated by the PE routers with sender sites.
Topology
Figure 106 on page 785 shows the topology used in this example.
Figure 106: P2MP LDP LSPs as the Data Plane for Intra-AS MBGP MVPNs
In Figure 106 on page 785, the routers perform the following functions:
• Router R0 serves both green and red CE routers in separate routing instances.
• Router R5 is connected to overlapping green and red CE routers in a single routing instance.
• Router R4 is connected to overlapping green and red CE routers in a single routing instance.
• Routers R0, R3, R4, and R5 are client internal BGP (IBGP) peers.
Configuration
IN THIS SECTION
Procedure | 787
Results | 788
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, and then copy and paste
the commands into the CLI at the [edit] hierarchy level.
Procedure
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
To configure P2MP LDP LSPs as the data plane for intra-AS MBGP MVPNs:
user@host# commit
Results
From configuration mode, confirm your configuration by entering the show protocols and show routing-
instances commands. If the output does not display the intended configuration, repeat the configuration
instructions in this example to correct it.
Verification
• ping mpls ldp p2mp to ping the end points of a P2MP LSP.
• show ldp database to display LDP P2MP label bindings and to ensure that the LDP P2MP LSP is
signaled.
• show ldp session detail to display the LDP capabilities exchanged with the peer. The Capabilities
advertised and Capabilities received fields should include p2mp.
• show ldp traffic-statistics p2mp to display the data traffic statistics for the P2MP LSP.
• show mvpn instance, show mvpn neighbor, and show mvpn c-multicast to display multicast VPN
routing instance information and to ensure that the LDP P2MP LSP is associated with the MVPN as
the S-PMSI.
• show multicast route instance detail on PE routers to ensure that traffic is received by all the hosts
and to display statistics on the receivers.
• show route label label detail to display the P2MP forwarding equivalence class (FEC) if the label is an
input label for an LDP P2MP LSP.
SEE ALSO
IN THIS SECTION
Requirements | 789
Overview | 790
Configuration | 792
Verification | 798
Requirements
The routers used in this example are Juniper Networks M Series Multiservice Edge Routers, T Series
Core Routers, or MX Series 5G Universal Routing Platforms. When using ingress replication for IP
multicast, each participating router must be configured with BGP for control plane procedures and with
ingress replication for the data provider tunnel, which forms a full mesh of MPLS point-to-point LSPs.
The ingress replication tunnel can be selective or inclusive, depending on the configuration of the
provider tunnel in the routing instance.
Overview
IN THIS SECTION
Topology | 790
The ingress-replication provider tunnel type uses unicast tunnels between routers to create a multicast
distribution tree.
The mpls-internet-multicast routing instance type uses ingress replication provider tunnels to carry
IP multicast data between routers through an MPLS cloud, using MBGP (or Next Gen) MVPN. Ingress
replication can also be configured when using MVPN to carry multicast data between PE routers.
The mpls-internet-multicast routing instance is a non-forwarding instance used only for control
plane procedures. It does not support any interface configurations. Only one mpls-internet-
multicast routing instance can be defined for a logical system. All multicast and unicast routes used for
IP multicast are associated only with the default routing instance (inet.0), not with a configured routing
instance. The mpls-internet-multicast routing instance type is configured for the default master
instance on each router, and is also included at the [edit protocols pim] hierarchy level in the default
instance.
When a new destination needs to be added to the ingress replication provider tunnel, the resulting
behavior differs depending on how the ingress replication provider tunnel is configured:
Topology
The IP topology consists of routers on the edge of the IP multicast domain. Each router has a set of IP
interfaces configured toward the MPLS cloud and a set of interfaces configured toward the IP routers.
See Figure 107 on page 791. Internet multicast traffic is carried between the IP routers, through the
MPLS cloud, using ingress replication tunnels for the data plane and a full-mesh IBGP session for the
control plane.
Configuration
IN THIS SECTION
Procedure | 792
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, and then copy and paste
the commands into the CLI at the [edit] hierarchy level.
Border Router C
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the CLI User
Guide.
The following example shows how to configure ingress replication on an IP multicast instance with the
routing instance type mpls-internet-multicast. Additionally, this example shows how to configure a
selective provider tunnel that selects a new unicast tunnel each time a new destination needs to be
added to the multicast distribution tree.
This example shows the configuration of the link between Border Router C and edge IP Router C, from
which Border Router C receives PIM join messages.
1. Enable MPLS.
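A minimal sketch of this step. The Results section confirms LDP on all interfaces; enabling MPLS on all interfaces in the same way is an assumption here, and production configurations typically list core-facing interfaces explicitly:

```
[edit]
user@Border_Router_C# set protocols mpls interface all
user@Border_Router_C# set protocols ldp interface all
```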
4. Configure the multiprotocol BGP-related settings so that the BGP sessions carry the necessary NLRI.
This example shows a dual stacking configuration with OSPF and OSPF version 3 configured on the
interfaces.
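Reconstructed from the BGP portion of the Results section (only the first of the four neighbors is shown), the session carries unicast, VPN, and MVPN NLRI for both IPv4 and IPv6:

```
[edit protocols bgp group ibgp]
user@Border_Router_C# set type internal
user@Border_Router_C# set local-address 10.255.10.61
user@Border_Router_C# set family inet unicast
user@Border_Router_C# set family inet-vpn any
user@Border_Router_C# set family inet6 unicast
user@Border_Router_C# set family inet6-vpn any
user@Border_Router_C# set family inet-mvpn signaling
user@Border_Router_C# set family inet6-mvpn signaling
user@Border_Router_C# set neighbor 10.255.10.97
```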
6. Configure a global PIM instance on the interface facing the edge device.
7. Configure the ingress replication provider tunnel to create a new unicast tunnel each time a
destination needs to be added to the multicast distribution tree.
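The statement that implements this per-destination behavior is create-new-ucast-tunnel. A minimal sketch, assuming the routing instance is named internet-mcast (the instance name does not appear in this example's output):

```
[edit routing-instances internet-mcast]
user@Border_Router_C# set provider-tunnel ingress-replication create-new-ucast-tunnel
user@Border_Router_C# set provider-tunnel ingress-replication label-switched-path label-switched-path-template default-template
```

Without create-new-ucast-tunnel, the router reuses an existing unicast tunnel when a new destination is added to the multicast distribution tree.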
user@Border_Router_C# commit
Results
From configuration mode, confirm your configuration by issuing the show protocols and show routing-
instances commands. If the output does not display the intended configuration, repeat the instructions in
this example to correct the configuration.
ipv6-tunneling;
interface all;
}
bgp {
group ibgp {
type internal;
local-address 10.255.10.61;
family inet {
unicast;
}
family inet-vpn {
any;
}
family inet6 {
unicast;
}
family inet6-vpn {
any;
}
family inet-mvpn {
signaling;
}
family inet6-mvpn {
signaling;
}
export to-bgp; ## 'to-bgp' is not defined
neighbor 10.255.10.97;
neighbor 10.255.10.55;
neighbor 10.255.10.57;
neighbor 10.255.10.59;
}
}
ospf {
traffic-engineering;
area 0.0.0.0 {
interface fxp0.0 {
disable;
}
interface lo0.0;
interface so-1/3/1.0;
interface so-0/3/0.0;
}
}
ospf3 {
area 0.0.0.0 {
interface lo0.0;
interface so-1/3/1.0;
interface so-0/3/0.0;
}
}
ldp {
interface all;
}
pim {
rp {
static {
address 192.0.2.2;
address 2::192.0.2.2;
}
}
interface fe-0/1/0.0;
mpls-internet-multicast;
}
Verification
IN THIS SECTION
Checking the Routing Table for the MVPN Routing Instance on Border Router C | 799
Checking the Routing Table for the MVPN Routing Instance on Border Router B | 803
Confirm that the configuration is working properly. The following operational output is for LDP ingress
replication SPT-only mode. The multicast source is behind IP Router B. The multicast receiver is behind IP
Router C.
Purpose
Use the show ingress-replication mvpn command to check the ingress replication status.
Action
Meaning
Checking the Routing Table for the MVPN Routing Instance on Border Router C
Purpose
Use the show route table command to check the route status.
Action
1:0:0:10.255.10.61/240
*[BGP/170] 00:45:55, localpref 100, from 10.255.10.61
AS path: I, validation-state: unverified
> via so-2/0/1.0
1:0:0:10.255.10.97/240
*[MVPN/70] 00:47:19, metric2 1
Indirect
5:0:0:32:192.168.195.106:32:198.51.100.1/240
*[PIM/105] 00:06:35
Multicast (IPv4) Composite
[BGP/170] 00:06:35, localpref 100, from 10.255.10.61
AS path: I, validation-state: unverified
> via so-2/0/1.0
6:0:0:1000:32:192.0.2.2:32:198.51.100.1/240
*[PIM/105] 00:07:03
Multicast (IPv4) Composite
7:0:0:1000:32:192.168.195.106:32:198.51.100.1/240
*[MVPN/70] 00:06:35, metric2 1
Multicast (IPv4) Composite
[PIM/105] 00:05:35
Multicast (IPv4) Composite
1:0:0:10.255.10.61/432
*[BGP/170] 00:45:55, localpref 100, from 10.255.10.61
AS path: I, validation-state: unverified
> via so-2/0/1.0
1:0:0:10.255.10.97/432
*[MVPN/70] 00:47:19, metric2 1
Indirect
Meaning
Purpose
Use the show mvpn neighbor command to check the neighbor status.
Action
MVPN instance:
Legend for provider tunnel
S- Selective provider tunnel
Instance : test
MVPN Mode : SPT-ONLY
Neighbor Inclusive Provider Tunnel
10.255.10.61 INGRESS-REPLICATION:MPLS Label
16:10.255.10.61
MVPN instance:
Legend for provider tunnel
S- Selective provider tunnel
Instance : test
MVPN Mode : SPT-ONLY
Neighbor Inclusive Provider Tunnel
10.255.10.61 INGRESS-REPLICATION:MPLS Label
16:10.255.10.61
Purpose
Use the show pim join extensive command to check the PIM join status.
Action
Group: 198.51.100.1
Source: *
RP: 192.0.2.2
Flags: sparse,rptree,wildcard
Upstream interface: Local
Upstream neighbor: Local
Upstream state: Local RP
Uptime: 00:07:49
Downstream neighbors:
Interface: ge-3/0/6.0
192.0.2.2 State: Join Flags: SRW Timeout: Infinity
Uptime: 00:07:49 Time since last Join: 00:07:49
Number of downstream interfaces: 1
Group: 198.51.100.1
Source: 192.168.195.106
Flags: sparse
Upstream protocol: BGP
Purpose
Use the show multicast route extensive command to check the multicast route status.
Action
Group: 198.51.100.1
Source: 192.168.195.106/32
Upstream interface: lsi.0
Downstream interface list:
ge-3/0/6.0
Number of outgoing interfaces: 1
Session description: NOB Cross media facilities
Statistics: 18 kBps, 200 pps, 88907 packets
Next-hop ID: 1048577
Upstream protocol: MVPN
Route state: Active
Forwarding state: Forwarding
Cache lifetime/timeout: forever
Wrong incoming interface notifications: 0
Uptime: 00:07:25
Purpose
Use the show ingress-replication mvpn command to check the ingress replication status.
Action
Meaning
Checking the Routing Table for the MVPN Routing Instance on Border Router B
Purpose
Use the show route table command to check the route status.
Action
1:0:0:10.255.10.61/240
*[MVPN/70] 00:49:26, metric2 1
Indirect
1:0:0:10.255.10.97/240
*[BGP/170] 00:48:22, localpref 100, from 10.255.10.97
AS path: I, validation-state: unverified
1:0:0:10.255.10.61/432
*[MVPN/70] 00:49:26, metric2 1
Indirect
1:0:0:10.255.10.97/432
*[BGP/170] 00:48:22, localpref 100, from 10.255.10.97
AS path: I, validation-state: unverified
> via so-1/3/1.0
Meaning
Purpose
Use the show mvpn neighbor command to check the neighbor status.
Action
MVPN instance:
Legend for provider tunnel
Instance : test
MVPN Mode : SPT-ONLY
Neighbor Inclusive Provider Tunnel
10.255.10.97 INGRESS-REPLICATION:MPLS Label
16:10.255.10.97
MVPN instance:
Legend for provider tunnel
S- Selective provider tunnel
Instance : test
MVPN Mode : SPT-ONLY
Neighbor Inclusive Provider Tunnel
10.255.10.97 INGRESS-REPLICATION:MPLS Label
16:10.255.10.97
Purpose
Use the show pim join extensive command to check the PIM join status.
Action
Group: 198.51.100.1
Source: 192.168.195.106
Flags: sparse,spt
Purpose
Use the show multicast route extensive command to check the multicast route status.
Action
Group: 198.51.100.1
Source: 192.168.195.106/32
Upstream interface: fe-0/1/0.0
Downstream interface list:
so-1/3/1.0
Number of outgoing interfaces: 1
Session description: NOB Cross media facilities
Statistics: 18 kBps, 200 pps, 116531 packets
Next-hop ID: 1048580
Upstream protocol: MVPN
Route state: Active
Forwarding state: Forwarding
Cache lifetime/timeout: forever
Wrong incoming interface notifications: 0
Uptime: 00:09:43
SEE ALSO
IN THIS SECTION
Requirements | 807
Configuration | 809
This example provides a step-by-step procedure to configure multicast services across a multiprotocol
BGP (MBGP) Layer 3 virtual private network, also referred to as a next-generation Layer 3 multicast
VPN.
Requirements
• One host system capable of sending multicast traffic and supporting the Internet Group Management
Protocol (IGMP)
• One host system capable of receiving multicast traffic and supporting IGMP
Depending on the devices you are using, you might be required to configure static routes to:
• The Fast Ethernet interface to which the sender is connected on the multicast receiver
• The Fast Ethernet interface to which the receiver is connected on the multicast sender
IN THIS SECTION
Topology | 809
• IPv4
• BGP
• OSPF
• RSVP
• MPLS
• Static RP
Topology
Configuration
IN THIS SECTION
Results | 820
NOTE: In any configuration session, it is a good practice to periodically verify that the
configuration can be committed using the commit check command.
In this example, the router being configured is identified using the following command prompts:
To configure MBGP multicast VPNs for the network shown in Figure 1, perform the following steps:
Configuring Interfaces
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the CLI User
Guide.
[edit interfaces]
user@CE1# set lo0 unit 0 family inet address 192.168.6.1/32 primary
Use the show interfaces terse command to verify that the IP address is correct on the loopback
logical interface.
2. On the PE and CE routers, configure the IP address and protocol family on the Fast Ethernet
interfaces. Specify the inet protocol family type.
[edit interfaces]
user@CE1# set fe-1/3/0 unit 0 family inet address 10.10.12.1/24
user@CE1# set fe-0/1/0 unit 0 family inet address 10.0.67.13/30
[edit interfaces]
user@PE1# set fe-0/1/0 unit 0 family inet address 10.0.67.14/30
[edit interfaces]
user@PE2# set fe-0/1/0 unit 0 family inet address 10.0.90.13/30
[edit interfaces]
user@CE2# set fe-0/1/0 unit 0 family inet address 10.0.90.14/30
user@CE2# set fe-1/3/0 unit 0 family inet address 10.10.11.1/24
Use the show interfaces terse command to verify that the IP address is correct on the Fast Ethernet
interfaces.
3. On the PE and P routers, configure the ATM interfaces' VPI and maximum virtual circuits. If the
default PIC type is different on directly connected ATM interfaces, configure the PIC type to be the
same. Configure the logical interface VCI, protocol family, local IP address, and destination IP
address.
[edit interfaces]
user@PE1# set at-0/2/0 atm-options pic-type atm1
user@PE1# set at-0/2/0 atm-options vpi 0 maximum-vcs 256
user@PE1# set at-0/2/0 unit 0 vci 0.128
user@PE1# set at-0/2/0 unit 0 family inet address 10.0.78.5/32 destination 10.0.78.6
[edit interfaces]
user@P# set at-0/2/0 atm-options pic-type atm1
user@P# set at-0/2/0 atm-options vpi 0 maximum-vcs 256
user@P# set at-0/2/0 unit 0 vci 0.128
user@P# set at-0/2/0 unit 0 family inet address 10.0.78.6/32 destination 10.0.78.5
user@P# set at-0/2/1 atm-options pic-type atm1
user@P# set at-0/2/1 atm-options vpi 0 maximum-vcs 256
user@P# set at-0/2/1 unit 0 vci 0.128
user@P# set at-0/2/1 unit 0 family inet address 10.0.89.5/32 destination 10.0.89.6
[edit interfaces]
user@PE2# set at-0/2/1 atm-options pic-type atm1
user@PE2# set at-0/2/1 atm-options vpi 0 maximum-vcs 256
user@PE2# set at-0/2/1 unit 0 vci 0.128
user@PE2# set at-0/2/1 unit 0 family inet address 10.0.89.6/32 destination 10.0.89.5
Use the show configuration interfaces command to verify that the ATM interfaces' VPI and
maximum VCs are correct and that the logical interface VCI, protocol family, local IP address, and
destination IP address are correct.
Configuring OSPF
Step-by-Step Procedure
1. On the P and PE routers, configure the provider instance of OSPF. Specify the lo0.0 and ATM core-
facing logical interfaces. The provider instance of OSPF on the PE router forms adjacencies with the
OSPF neighbors on the other PE router and Router P.
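For PE1, this step corresponds to the following commands (interface names are taken from the Results section; Router P and PE2 follow the same pattern with their own ATM interfaces):

```
[edit protocols ospf]
user@PE1# set area 0.0.0.0 interface lo0.0
user@PE1# set area 0.0.0.0 interface at-0/2/0.0
```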
Use the show ospf interfaces command to verify that the lo0.0 and ATM core-facing logical
interfaces are configured for OSPF.
2. On the CE routers, configure the customer instance of OSPF. Specify the loopback and Fast Ethernet
logical interfaces. The customer instance of OSPF on the CE routers forms adjacencies with the
neighbors within the VPN routing instance of OSPF on the PE routers.
Use the show ospf interfaces command to verify that the correct loopback and Fast Ethernet logical
interfaces have been added to the OSPF protocol.
3. On the P and PE routers, configure OSPF traffic engineering support for the provider instance of
OSPF.
The shortcuts statement enables the master instance of OSPF to use a label-switched path as the
next hop.
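This step reduces to a single statement per router, as confirmed by the Results section:

```
[edit protocols ospf]
user@PE1# set traffic-engineering shortcuts
```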
Use the show ospf overview or show configuration protocols ospf command to verify that traffic
engineering support is enabled.
Configuring BGP
Step-by-Step Procedure
1. On Router P, configure BGP for the VPN. The local address is the local lo0.0 address. The neighbor
addresses are the PE routers' lo0.0 addresses.
The unicast statement enables the router to use BGP to advertise network layer reachability
information (NLRI). The signaling statement enables the router to use BGP as the signaling protocol
for the VPN.
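Reconstructed from the Router P portion of the Results section:

```
[edit protocols bgp group group-mvpn]
user@P# set type internal
user@P# set local-address 192.168.8.1
user@P# set family inet unicast
user@P# set family inet-mvpn signaling
user@P# set neighbor 192.168.7.1
user@P# set neighbor 192.168.9.1
```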
Use the show configuration protocols bgp command to verify that the router has been configured to
use BGP to advertise NLRI.
2. On the PE and P routers, configure the BGP local autonomous system number.
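The Results section shows the autonomous system number configured in AS-dot notation:

```
[edit routing-options]
user@PE1# set autonomous-system 0.65010
```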
Use the show configuration routing-options command to verify that the BGP local autonomous
system number is correct.
3. On the PE routers, configure BGP for the VPN. Configure the local address as the local lo0.0 address.
The neighbor addresses are the lo0.0 addresses of Router P and the other PE router, PE2.
Use the show bgp group command to verify that the BGP configuration is correct.
4. On the PE routers, configure a policy to export the BGP routes into OSPF.
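The policy, as it appears in the Results section, accepts all BGP routes for export into OSPF:

```
[edit policy-options]
user@PE1# set policy-statement bgp-to-ospf from protocol bgp
user@PE1# set policy-statement bgp-to-ospf then accept
```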
Use the show policy bgp-to-ospf command to verify that the policy is correct.
Configuring RSVP
Step-by-Step Procedure
1. On the PE routers, enable RSVP on the interfaces that participate in the LSP. Configure the Fast
Ethernet and ATM logical interfaces.
2. On Router P, enable RSVP on the interfaces that participate in the LSP. Configure the ATM logical
interfaces.
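Reconstructed from the Results section, the RSVP interface lists for PE1 and Router P are:

```
[edit protocols rsvp]
user@PE1# set interface fe-0/1/0.0
user@PE1# set interface at-0/2/0.0

[edit protocols rsvp]
user@P# set interface at-0/2/0.0
user@P# set interface at-0/2/1.0
```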
Use the show configuration protocols rsvp command to verify that the RSVP configuration is correct.
Configuring MPLS
Step-by-Step Procedure
1. On the PE routers, configure an MPLS LSP to the PE router that is the LSP egress point. Specify the
IP address of the lo0.0 interface on the router at the other end of the LSP. Configure MPLS on the
ATM, Fast Ethernet, and lo0.0 interfaces.
To help identify each LSP when troubleshooting, configure a different LSP name on each PE router. In
this example, we use the name to-pe2 as the name for the LSP configured on PE1 and to-pe1 as the
name for the LSP configured on PE2.
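On PE1, for example (addresses and names taken from the Results section; PE2 mirrors this with LSP name to-pe1 and egress address 192.168.7.1):

```
[edit protocols mpls]
user@PE1# set label-switched-path to-pe2 to 192.168.9.1
user@PE1# set interface fe-0/1/0.0
user@PE1# set interface at-0/2/0.0
user@PE1# set interface lo0.0
```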
Use the show configuration protocols mpls and show route label-switched-path to-pe1 commands
to verify that the MPLS and LSP configuration is correct.
After the configuration is committed, use the show mpls lsp name to-pe1 and show mpls lsp name
to-pe2 commands to verify that the LSP is operational.
2. On Router P, enable MPLS. Specify the ATM interfaces connected to the PE routers.
Use the show mpls interface command to verify that MPLS is enabled on the ATM interfaces.
3. On the PE and P routers, configure the protocol family on the ATM interfaces associated with the
LSP. Specify the mpls protocol family type.
Use the show mpls interface command to verify that the MPLS protocol family is enabled on the
ATM interfaces associated with the LSP.
Step-by-Step Procedure
1. On the PE routers, configure a routing instance for the VPN and specify the vrf instance type. Add
the Fast Ethernet and lo0.1 customer-facing interfaces. Configure the VPN instance of OSPF and
include the BGP-to-OSPF export policy.
user@PE1# set routing-instances vpn-a protocols ospf area 0.0.0.0 interface all
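The remaining statements described in this step, reconstructed from the Results section:

```
[edit routing-instances vpn-a]
user@PE1# set instance-type vrf
user@PE1# set interface lo0.1
user@PE1# set interface fe-0/1/0.0
user@PE1# set protocols ospf export bgp-to-ospf
```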
Use the show configuration routing-instances vpn-a command to verify that the routing instance
configuration is correct.
2. On the PE routers, configure a route distinguisher for the routing instance. A route distinguisher
allows the router to distinguish between two identical IP prefixes used as VPN routes. Configure a
different route distinguisher on each PE router. This example uses 65010:1 on PE1 and 65010:2 on
PE2.
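As shown in the Results section:

```
[edit]
user@PE1# set routing-instances vpn-a route-distinguisher 65010:1
user@PE2# set routing-instances vpn-a route-distinguisher 65010:2
```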
Use the show configuration routing-instances vpn-a command to verify that the route distinguisher
is correct.
3. On the PE routers, configure default VRF import and export policies. Based on this configuration,
BGP automatically generates local routes corresponding to the route target referenced in the VRF
import policies. This example uses 2:1 as the route target.
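The vrf-target statement generates the default VRF import and export policies; the Results section shows the same statement on both PE routers:

```
[edit routing-instances vpn-a]
user@PE1# set vrf-target target:2:1
```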
NOTE: You must configure the same route target on each PE router for a given VPN routing
instance.
Use the show configuration routing-instances vpn-a command to verify that the route target is
correct.
4. On the PE routers, configure the VPN routing instance for multicast support.
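Multicast support is enabled with the mvpn statement, visible at the end of the vpn-a instance in the Results section:

```
[edit routing-instances vpn-a]
user@PE1# set protocols mvpn
```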
Use the show configuration routing-instances vpn-a command to verify that the VPN routing
instance has been configured for multicast support.
5. On the PE routers, configure an IP address on loopback logical interface 1 (lo0.1) used in the
customer routing instance VPN.
Use the show interfaces terse command to verify that the IP address on the loopback interface is
correct.
Configuring PIM
Step-by-Step Procedure
1. On the PE routers, enable PIM. Configure the lo0.1 and the customer-facing Fast Ethernet interface.
Specify the mode as sparse and the version as 2.
user@PE1# set routing-instances vpn-a protocols pim interface lo0.1 mode sparse
user@PE1# set routing-instances vpn-a protocols pim interface lo0.1 version 2
user@PE1# set routing-instances vpn-a protocols pim interface fe-0/1/0.0 mode sparse
user@PE1# set routing-instances vpn-a protocols pim interface fe-0/1/0.0 version 2
user@PE2# set routing-instances vpn-a protocols pim interface lo0.1 mode sparse
user@PE2# set routing-instances vpn-a protocols pim interface lo0.1 version 2
user@PE2# set routing-instances vpn-a protocols pim interface fe-0/1/0.0 mode sparse
user@PE2# set routing-instances vpn-a protocols pim interface fe-0/1/0.0 version 2
Use the show pim interfaces instance vpn-a command to verify that PIM sparse-mode is enabled on
the lo0.1 interface and the customer-facing Fast Ethernet interface.
2. On the CE routers, enable PIM. In this example, we configure all interfaces. Specify the mode as
sparse and the version as 2.
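On CE2, for example, this matches the Results section:

```
[edit protocols pim]
user@CE2# set interface all mode sparse
user@CE2# set interface all version 2
```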
Use the show pim interfaces command to verify that PIM sparse mode is enabled on all interfaces.
Step-by-Step Procedure
1. On Router PE1, configure the provider tunnel. Specify the multicast address to be used.
The provider-tunnel statement instructs the router to send multicast traffic across a tunnel.
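Based on the Results section, the provider tunnel on PE1 uses the default RSVP-TE LSP template:

```
[edit routing-instances vpn-a]
user@PE1# set provider-tunnel rsvp-te label-switched-path-template default-template
```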
Use the show configuration routing-instances vpn-a command to verify that the provider tunnel is
configured to use the default LSP template.
2. On Router PE2, configure the provider tunnel. Specify the multicast address to be used.
Use the show configuration routing-instances vpn-a command to verify that the provider tunnel is
configured to use the default LSP template.
Step-by-Step Procedure
1. Configure Router PE1 to be the rendezvous point. Specify the lo0.1 address of Router PE1. Specify
the multicast address to be used.
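From the Results section, PE1 is configured as the local RP for the group range used in this example:

```
[edit routing-instances vpn-a protocols pim]
user@PE1# set rp local address 10.10.47.101
user@PE1# set rp local group-ranges 224.1.1.1/32
```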
Use the show pim rps instance vpn-a command to verify that the correct local IP address is
configured for the RP.
2. On Router PE2, configure the static rendezvous point. Specify the lo0.1 address of Router PE1.
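From the PE2 portion of the Results section:

```
[edit routing-instances vpn-a protocols pim]
user@PE2# set rp static address 10.10.47.101
```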
Use the show pim rps instance vpn-a command to verify that the correct static IP address is
configured for the RP.
3. On the CE routers, configure the static rendezvous point. Specify the lo0.1 address of Router PE1.
Use the show pim rps command to verify that the correct static IP address is configured for the RP.
4. Use the commit check command to verify that the configuration can be successfully committed. If
the configuration passes the check, commit the configuration.
8. Use show commands to verify the routing, VPN, and multicast operation.
Results
The configuration and verification parts of this example have been completed. The following section is
for your reference.
Router CE1
interfaces {
lo0 {
unit 0 {
family inet {
address 192.168.6.1/32 {
primary;
}
}
}
}
fe-0/1/0 {
unit 0 {
family inet {
address 10.0.67.13/30;
}
}
}
fe-1/3/0 {
unit 0 {
family inet {
address 10.10.12.1/24;
}
}
}
}
protocols {
ospf {
area 0.0.0.0 {
interface fe-0/1/0.0;
interface lo0.0;
interface fe-1/3/0.0;
}
}
pim {
rp {
static {
address 10.10.47.101 {
version 2;
}
}
}
interface all;
}
}
Router PE1
interfaces {
lo0 {
unit 0 {
family inet {
address 192.168.7.1/32 {
primary;
}
}
}
}
fe-0/1/0 {
unit 0 {
family inet {
address 10.0.67.14/30;
}
}
}
at-0/2/0 {
atm-options {
pic-type atm1;
vpi 0 {
maximum-vcs 256;
}
}
unit 0 {
vci 0.128;
family inet {
address 10.0.78.5/32 {
destination 10.0.78.6;
}
}
family mpls;
}
}
lo0 {
unit 1 {
family inet {
address 10.10.47.101/32;
}
}
}
}
routing-options {
autonomous-system 0.65010;
}
protocols {
rsvp {
interface fe-0/1/0.0;
interface at-0/2/0.0;
}
mpls {
label-switched-path to-pe2 {
to 192.168.9.1;
}
interface fe-0/1/0.0;
interface at-0/2/0.0;
interface lo0.0;
}
bgp {
group group-mvpn {
type internal;
local-address 192.168.7.1;
family inet-vpn {
unicast;
}
family inet-mvpn {
signaling;
}
neighbor 192.168.9.1;
neighbor 192.168.8.1;
}
}
ospf {
traffic-engineering {
shortcuts;
}
area 0.0.0.0 {
interface at-0/2/0.0;
interface lo0.0;
}
}
}
policy-options {
policy-statement bgp-to-ospf {
from protocol bgp;
then accept;
}
}
routing-instances {
vpn-a {
instance-type vrf;
interface lo0.1;
interface fe-0/1/0.0;
route-distinguisher 65010:1;
provider-tunnel {
rsvp-te {
label-switched-path-template {
default-template;
}
}
}
vrf-target target:2:1;
protocols {
ospf {
export bgp-to-ospf;
area 0.0.0.0 {
interface all;
}
}
pim {
rp {
local {
address 10.10.47.101;
group-ranges {
224.1.1.1/32;
}
}
}
interface lo0.1 {
mode sparse;
version 2;
}
interface fe-0/1/0.0 {
mode sparse;
version 2;
}
}
mvpn;
}
}
}
Router P
interfaces {
lo0 {
unit 0 {
family inet {
address 192.168.8.1/32 {
primary;
}
}
}
}
at-0/2/0 {
atm-options {
pic-type atm1;
vpi 0 {
maximum-vcs 256;
}
}
unit 0 {
vci 0.128;
family inet {
address 10.0.78.6/32 {
destination 10.0.78.5;
}
}
family mpls;
}
}
at-0/2/1 {
atm-options {
pic-type atm1;
vpi 0 {
maximum-vcs 256;
}
}
unit 0 {
vci 0.128;
family inet {
address 10.0.89.5/32 {
destination 10.0.89.6;
}
}
family mpls;
}
}
}
routing-options {
autonomous-system 0.65010;
}
protocols {
rsvp {
interface at-0/2/0.0;
interface at-0/2/1.0;
}
mpls {
interface at-0/2/0.0;
interface at-0/2/1.0;
}
bgp {
group group-mvpn {
type internal;
local-address 192.168.8.1;
family inet {
unicast;
}
family inet-mvpn {
signaling;
}
neighbor 192.168.9.1;
neighbor 192.168.7.1;
}
}
ospf {
traffic-engineering {
shortcuts;
}
area 0.0.0.0 {
interface lo0.0;
interface all;
interface fxp0.0 {
disable;
}
}
}
}
Router PE2
interfaces {
lo0 {
unit 0 {
family inet {
address 192.168.9.1/32 {
primary;
}
}
}
}
fe-0/1/0 {
unit 0 {
family inet {
address 10.0.90.13/30;
}
}
}
at-0/2/1 {
atm-options {
pic-type atm1;
vpi 0 {
maximum-vcs 256;
}
}
unit 0 {
vci 0.128;
family inet {
address 10.0.89.6/32 {
destination 10.0.89.5;
}
}
family mpls;
}
}
lo0 {
unit 1 {
family inet {
address 10.10.47.100/32;
}
}
}
}
routing-options {
autonomous-system 0.65010;
}
protocols {
rsvp {
interface fe-0/1/0.0;
interface at-0/2/1.0;
}
mpls {
label-switched-path to-pe1 {
to 192.168.7.1;
}
interface lo0.0;
interface fe-0/1/0.0;
interface at-0/2/1.0;
}
bgp {
group group-mvpn {
type internal;
local-address 192.168.9.1;
family inet-vpn {
unicast;
}
family inet-mvpn {
signaling;
}
neighbor 192.168.7.1;
neighbor 192.168.8.1;
}
}
ospf {
traffic-engineering {
shortcuts;
}
area 0.0.0.0 {
interface lo0.0;
interface at-0/2/1.0;
}
}
}
policy-options {
policy-statement bgp-to-ospf {
from protocol bgp;
then accept;
}
}
routing-instances {
vpn-a {
instance-type vrf;
interface fe-0/1/0.0;
interface lo0.1;
route-distinguisher 65010:2;
provider-tunnel {
rsvp-te {
label-switched-path-template {
default-template;
}
}
}
vrf-target target:2:1;
protocols {
ospf {
export bgp-to-ospf;
area 0.0.0.0 {
interface all;
}
}
pim {
rp {
static {
address 10.10.47.101;
}
}
interface fe-0/1/0.0 {
mode sparse;
version 2;
}
interface lo0.1 {
mode sparse;
version 2;
}
}
mvpn;
}
}
}
Router CE2
interfaces {
lo0 {
unit 0 {
family inet {
address 192.168.0.1/32 {
primary;
}
}
}
}
fe-0/1/0 {
unit 0 {
family inet {
address 10.0.90.14/30;
}
}
}
fe-1/3/0 {
unit 0 {
family inet {
address 10.10.11.1/24;
}
family inet6 {
address fe80::205:85ff:fe88:ccdb/64;
}
}
}
}
protocols {
ospf {
area 0.0.0.0 {
interface fe-0/1/0.0;
interface lo0.0;
interface fe-1/3/0.0;
}
}
pim {
rp {
static {
address 10.10.47.101 {
version 2;
}
}
}
interface all {
mode sparse;
version 2;
}
}
}
IN THIS SECTION
Requirements | 832
Overview | 832
Configuration | 834
Verification | 844
This example shows how to configure a PIM-SSM provider tunnel for an MBGP MVPN. The
configuration enables service providers to carry customer data in the core. This example shows how to
configure PIM-SSM tunnels as inclusive PMSI and uses the unicast routing preference as the metric for
determining the single forwarder (instead of the default metric, which is the IP address from the global
administrator field in the route-import community).
Requirements
• Configure the router interfaces. See the Junos OS Network Interfaces Library for Routing Devices.
• Configure the BGP-to-OSPF routing policy. See the Routing Policies, Firewall Filters, and Traffic
Policers User Guide.
Overview
IN THIS SECTION
Topology | 833
When a PE receives a customer join or prune message from a CE, the message identifies a particular
multicast flow as belonging either to a source-specific tree (S,G) or to a shared tree (*,G). If the route to
the multicast source or RP is across the VPN backbone, then the PE needs to identify the upstream
multicast hop (UMH) for the (S,G) or (*,G) flow. Normally the UMH is determined by the unicast route to
the multicast source or RP.
833
However, in some cases, the CEs might be distributing to the PEs a special set of routes that are to be
used exclusively for the purpose of upstream multicast hop selection using the route-import community.
More than one route might be eligible, and the PE needs to elect a single forwarder from the eligible
UMHs.
The default metric for the single forwarder election is the IP address from the global administrator field
in the route-import community. You can configure a router to use the unicast route preference to
determine the single forwarder election.
• provider-tunnel family inet pim-ssm group-address—Specifies a valid SSM VPN group address. The
SSM VPN group address and the source address are advertised by the type-1 autodiscovery route.
On receiving an autodiscovery route with the SSM VPN group address and the source address, a PE
router sends an (S,G) join in the provider space to the PE advertising the autodiscovery route. All PE
routers exchange their PIM-SSM VPN group address to complete the inclusive provider multicast
service interface (I-PMSI). Unlike a PIM-ASM provider tunnel, the PE routers can choose a different
VPN group address because the (S,G) joins are sent directly toward the source PE.
NOTE: Similar to a PIM-ASM provider tunnel, PIM must be configured in the default master
instance.
• unicast-umh-election—Specifies that the PE router uses the unicast route preference to determine
the single-forwarder election.
Topology
Figure 109 on page 833 shows the topology used in this example.
Configuration
IN THIS SECTION
Procedure | 834
Results | 839
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, and then copy and paste
the commands into the CLI at the [edit] hierarchy level.
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
1. Configure the interfaces in the master routing instance on the PE routers. This example shows the
interfaces for one PE router.
[edit interfaces]
user@host# set fe-0/2/0 unit 0 family inet address 192.168.195.109/30
user@host# set fe-0/2/1 unit 0 family inet address 192.168.195.5/27
user@host# set fe-0/2/2 unit 0 family inet address 20.10.1.1/30
user@host# set fe-0/2/2 unit 0 family iso
user@host# set fe-0/2/2 unit 0 family mpls
user@host# set lo0 unit 1 family inet address 10.10.47.100/32
user@host# set lo0 unit 2 family inet address 10.10.48.100/32
2. Configure the autonomous system number in the global routing options. This is required in MBGP
MVPNs.
[edit routing-options]
user@host# set autonomous-system 100
3. Configure the routing protocols in the master routing instance on the PE routers.
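The exact commands depend on your IGP and signaling protocols. One possible sketch for this step,
assuming OSPF as the IGP and using the core-facing interface (fe-0/2/2) and loopback address from
Step 1 (the BGP neighbor addresses are omitted and must match your network):
[edit protocols]
user@host# set ospf area 0.0.0.0 interface fe-0/2/2.0
user@host# set ospf area 0.0.0.0 interface lo0.1 passive
user@host# set bgp group ibgp type internal
user@host# set bgp group ibgp local-address 10.10.47.100
user@host# set bgp group ibgp family inet-vpn unicast
user@host# set bgp group ibgp family inet-mvpn signaling
user@host# set mpls interface fe-0/2/2.0
user@host# set rsvp interface fe-0/2/2.0
user@host# set ldp interface fe-0/2/2.0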
6. Configure the topology such that the BGP route to the source advertised by PE1 has a higher
preference than the BGP route to the source advertised by PE2.
7. Configure a higher primary loopback address on PE2 than on PE1. This ensures that PE2 is the
MBGP MVPN single-forwarder election winner.
[edit]
user@host# set interface lo0 unit 1 family inet address 1.1.1.1/32 primary
[edit]
user@host# set routing-instances VPN-A protocols mvpn unicast-umh-election
user@host# set routing-instances VPN-B protocols mvpn unicast-umh-election
user@host# commit
Results
Confirm your configuration by entering the show interfaces, show protocols, show routing-instances,
and show routing-options commands from configuration mode. If the output does not display the
intended configuration, repeat the instructions in this example to correct the configuration.
}
}
}
rp {
static {
address 10.255.112.155;
}
}
interface all {
mode sparse-dense;
version 2;
}
interface fxp0.0 {
disable;
}
}
static {
address 10.10.47.101;
}
}
interface lo0.1 {
mode sparse-dense;
version 2;
}
interface fe-0/2/1.0 {
mode sparse-dense;
version 2;
}
}
mvpn {
unicast-umh-election;
}
}
}
VPN-B {
instance-type vrf;
interface fe-0/2/0.0;
interface lo0.2;
route-distinguisher 10.255.112.199:200;
provider-tunnel {
family inet {
pim-ssm {
group-address 232.2.2.2;
}
}
}
vrf-target target:200:200;
vrf-table-label;
routing-options {
auto-export;
}
protocols {
ospf {
export bgp-to-ospf;
area 0.0.0.0 {
interface lo0.2;
interface fe-0/2/0.0;
}
}
pim {
rp {
static {
address 10.10.48.101;
}
}
interface lo0.2 {
mode sparse-dense;
version 2;
}
interface fe-0/2/0.0 {
mode sparse-dense;
version 2;
}
}
mvpn {
unicast-umh-election;
}
}
}
fe-0/2/0 {
unit 0 {
family inet {
address 192.168.195.109/30;
}
}
}
fe-0/2/1 {
unit 0 {
family inet {
address 192.168.195.5/27;
}
}
}
Verification
To verify the configuration, start the receivers and the source. PE3 should create type-7 customer
multicast routes from the local joins. Verify the source-tree customer multicast entries on all PE routers.
PE3 should choose PE1 as the upstream PE toward the source. PE1 receives the customer multicast
route from the egress PEs and forwards data on the PMSI to PE3.
SEE ALSO
IN THIS SECTION
Requirements | 844
Overview | 845
Configuration | 847
Verification | 851
This example shows how to configure an MBGP MVPN that allows remote sources, even when there is
no PIM neighborship toward the upstream router.
Requirements
• Configure the router interfaces. See the Junos OS Network Interfaces Library for Routing Devices.
• Configure an interior gateway protocol or static routing. See the Junos OS Routing Protocols Library
for Routing Devices.
• Configure the point-to-multipoint static LSP. See Configuring Point-to-Multipoint LSPs for an MBGP
MVPN.
Overview
IN THIS SECTION
Topology | 846
In this example, a remote CE router is the multicast source. In an MBGP MVPN, a PE router has the PIM
interface hello interval set to zero, thereby creating no PIM neighborship. The PIM upstream state is
None. In this scenario, directly connected receivers receive traffic in the MBGP MVPN only if you
configure the ingress PE’s upstream logical interface to accept remote sources. If you do not configure
the ingress PE’s logical interface to accept remote sources, the multicast route is deleted and the local
receivers are no longer attached to the flood next hop.
This example shows the configuration on the ingress PE router. A static LSP is used to receive traffic
from the remote source.
Topology
Figure 110 on page 846 shows the topology used in this example.
Configuration
IN THIS SECTION
Procedure | 848
Results | 849
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, and then copy and paste
the commands into the CLI at the [edit] hierarchy level.
Procedure
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
2. Configure the autonomous system number in the global routing options. This is required in MBGP
MVPNs.
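For example (the AS number 65000 is an assumption based on the vrf-target target:65000:04 shown in
the Results section; substitute your own AS number):
[edit routing-options]
user@host# set autonomous-system 65000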
6. Configure PIM in the routing instance, including the accept-remote-source statement on the
incoming logical interface.
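For example, using the PIM configuration shown in the Results section (the routing instance name
vpn-1 is a placeholder; substitute your own instance name):
[edit routing-instances vpn-1 protocols pim]
user@host# set interface all hello-interval 0
user@host# set interface ge-1/0/2.0 accept-remote-source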
user@host# commit
Results
From configuration mode, confirm your configuration by entering the show routing-instances and show
routing-options commands. If the output does not display the intended configuration, repeat the
instructions in this example to correct the configuration.
interface ge-1/0/2.0;
interface ge-1/0/7.0;
route-distinguisher 10.0.0.10:04;
provider-tunnel {
rsvp-te {
label-switched-path-template {
mvpn-dynamic;
}
}
selective {
group 224.0.9.0/32 {
source 10.1.1.2/32 {
rsvp-te {
static-lsp mvpn-static;
}
}
}
}
}
vrf-target target:65000:04;
protocols {
bgp {
group 1a {
type external;
peer-as 65213;
neighbor 10.2.213.9;
}
}
pim {
interface all {
hello-interval 0;
}
interface ge-1/0/2.0 {
accept-remote-source;
}
}
mvpn;
}
}
Verification
SEE ALSO
Example: Configuring BGP Route Flap Damping Based on the MBGP MVPN Address
Family
IN THIS SECTION
Requirements | 852
Overview | 852
Configuration | 853
Verification | 865
This example shows how to configure a multiprotocol BGP multicast VPN (also called Next-Generation
MVPN) with BGP route flap damping.
Requirements
This example uses Junos OS Release 12.2. Support for BGP route flap damping for MBGP MVPN
specifically, and on an address-family basis in general, was introduced in Junos OS Release 12.2.
Overview
IN THIS SECTION
Topology | 853
BGP route flap damping helps to diminish route instability caused by routes being repeatedly withdrawn
and readvertised when a link is intermittently failing.
This example uses the default damping parameters and demonstrates an MBGP MVPN scenario with
three provider edge (PE) routing devices, three customer edge (CE) routing devices, and one provider (P)
routing device.
Topology
Figure 111 on page 853 shows the topology used in this example.
On PE Device R4, BGP route flap damping is configured for address family inet-mvpn. A routing policy
called dampPolicy uses the nlri-route-type match condition to damp only MVPN route types 3, 4, and 5.
All other MVPN route types are not damped.
This example shows the full configuration on all devices in the "CLI Quick Configuration" section. The
"Configuring Device R4" section shows the step-by-step configuration for PE Device R4.
Configuration
IN THIS SECTION
Results | 861
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, and then copy and paste
the commands into the CLI at the [edit] hierarchy level.
Device R1
Device R2
Device R3
Device R4
Device R5
Device R6
Device R7
Configuring Device R4
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
[edit interfaces]
user@R4# set ge-1/2/0 unit 10 family inet address 10.1.1.10/30
user@R4# set ge-1/2/0 unit 10 family mpls
user@R4# set ge-1/2/1 unit 17 family inet address 10.1.1.17/30
user@R4# set ge-1/2/1 unit 17 family mpls
user@R4# set vt-1/2/0 unit 4 family inet
user@R4# set lo0 unit 4 family inet address 172.16.1.4/32
user@R4# set lo0 unit 104 family inet address 172.16.100.4/32
[edit protocols]
user@R4# set mpls interface all
user@R4# set mpls interface ge-1/2/0.10
user@R4# set rsvp interface all aggregate
user@R4# set ldp interface ge-1/2/0.10
user@R4# set ldp p2mp
3. Configure BGP.
The BGP configuration enables BGP route flap damping for the inet-mvpn address family. The BGP
configuration also imports into the routing table the routing policy called dampPolicy. This policy is
applied to neighbor PE Device R2.
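A sketch of the corresponding set commands, matching the BGP configuration shown in the Results
section:
[edit protocols bgp group ibgp]
user@R4# set type internal
user@R4# set local-address 172.16.1.4
user@R4# set family inet-mvpn signaling damping
user@R4# set neighbor 172.16.1.2 import dampPolicy
user@R4# set neighbor 172.16.1.5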
5. Configure a damping policy that uses the nlri-route-type match condition to damp only MVPN
route types 3, 4, and 5.
The no-damp policy (damping no-damp disable) causes any damping state that is present in the
routing table to be deleted. The then damping no-damp statement applies the no-damp policy as
an action and has no from match conditions. Therefore, all routes that are not matched by term1
are matched by this term, with the result that all other MVPN route types are not damped.
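One way to express this policy is sketched below. The no-damp policy and the nlri-route-type match
condition come from the explanation above; the exact term structure is an assumption:
[edit policy-options]
user@R4# set policy-statement dampPolicy term term1 from family inet-mvpn
user@R4# set policy-statement dampPolicy term term1 from nlri-route-type 3
user@R4# set policy-statement dampPolicy term term1 from nlri-route-type 4
user@R4# set policy-statement dampPolicy term term1 from nlri-route-type 5
user@R4# set policy-statement dampPolicy term term1 then accept
user@R4# set policy-statement dampPolicy then damping no-damp
user@R4# set damping no-damp disable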
7. Configure the parent_vpn_routes to accept all other BGP routes that are not from the inet-mvpn
address family.
[edit routing-options]
user@R4# set router-id 172.16.1.4
user@R4# set autonomous-system 1001
10. If you are done configuring the device, commit the configuration.
user@R4# commit
Results
From configuration mode, confirm your configuration by entering the show interfaces, show protocols,
show policy-options, show routing-instances, and show routing-options commands. If the output does
not display the intended configuration, repeat the instructions in this example to correct the
configuration.
aggregate;
}
}
mpls {
interface all;
interface ge-1/2/0.10;
}
bgp {
group ibgp {
type internal;
local-address 172.16.1.4;
family inet-vpn {
unicast;
any;
}
family inet-mvpn {
signaling {
damping;
}
}
neighbor 172.16.1.2 {
import dampPolicy;
}
neighbor 172.16.1.5;
}
}
ospf {
traffic-engineering;
area 0.0.0.0 {
interface all;
interface lo0.4 {
passive;
}
interface ge-1/2/0.10;
}
}
ldp {
interface ge-1/2/0.10;
p2mp;
}
interface ge-1/2/1.17;
}
}
pim {
rp {
static {
address 172.16.100.2;
}
}
interface ge-1/2/1.17 {
mode sparse;
}
}
mvpn;
}
}
Verification
IN THIS SECTION
Purpose
Verify the presence of the no-damp policy, which disables damping for MVPN route types other than 3,
4, and 5.
Action
Meaning
The output shows that the default damping parameters are in effect and that the no-damp policy is also
in effect for the specified route types.
Purpose
Action
State|#Active/Received/Accepted/Damped...
172.16.1.2 1001 3159 3155 0 0 23:43:47
Establ
bgp.l3vpn.0: 3/3/3/0
bgp.l3vpn.2: 0/0/0/0
bgp.mvpn.0: 1/1/1/0
vpn-1.inet.0: 3/3/3/0
vpn-1.mvpn.0: 1/1/1/0
172.16.1.5 1001 3157 3154 0 0 23:43:40
Establ
bgp.l3vpn.0: 3/3/3/0
bgp.l3vpn.2: 0/0/0/0
bgp.mvpn.0: 1/1/1/0
vpn-1.inet.0: 3/3/3/0
vpn-1.mvpn.0: 1/1/1/0
Meaning
The Damp State field shows that zero routes in the bgp.mvpn.0 routing table have been damped.
Further down, the last number in the State field shows that zero routes have been damped for BGP peer
172.16.1.2.
SEE ALSO
IN THIS SECTION
Requirements | 868
Configuring Sender-Only and Receiver-Only Sites Using PIM ASM Provider Tunnels | 874
This section describes how to configure multicast virtual private networks (MVPNs) using multiprotocol
BGP (MBGP) (next-generation MVPNs).
Requirements
To implement multiprotocol BGP-based multicast VPNs with auto-RP, bootstrap router (BSR) RP, and
PIM dense mode, you need Junos OS Release 9.2 or later.
To implement multiprotocol BGP-based multicast VPNs with sender-only sites and receiver-only sites,
you need Junos OS Release 8.4 or later.
You can configure PIM auto-RP, bootstrap router (BSR) RP, PIM dense mode, and mtrace for next
generation multicast VPN networks. Auto-RP uses PIM dense mode to propagate control messages and
establish RP mapping. You can configure an auto-RP node in one of three different modes: discovery
mode, announce mode, and mapping mode. BSR is the IETF standard for RP establishment. A selected
router in a network acts as a BSR, which selects a unique RP for different group ranges. BSR messages
are flooded using the data tunnel between PE routers. When you enable PIM dense mode, data packets
are forwarded to all interfaces except the incoming interface. Unlike PIM sparse mode, where explicit
joins are required for data packets to be transmitted downstream, data packets are flooded to all routers
in the routing instance in PIM dense mode.
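As an illustration, an auto-RP mapping node with PIM sparse-dense mode (required so that auto-RP
control messages can be propagated in dense mode) might be configured along these lines; the instance
name is a placeholder:
[edit routing-instances VPN-A protocols pim]
user@host# set rp auto-rp mapping
user@host# set interface all mode sparse-dense
user@host# set interface all version 2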
This section shows you how to configure an MVPN using MBGP. If you have multicast VPNs based on
draft-rosen, they will continue to work as before and are not affected by the configuration of MVPNs
using MBGP.
The network configuration used for most of the examples in this section is shown in Figure 112 on page
870.
In the figure, two VPNs, VPN A and VPN B, are serviced by the same provider at several sites, two of
which have CE routers for both VPN A and VPN B (site 2 is not shown). The PE routers are shown with
VRF tables for the VPN CEs for which they have routing information. It is important to note that no
multicast protocols are required between the PE routers on the network. The multicast routing
information is carried by MBGP between the PE routers. There may be one or more BGP route
reflectors in the network. Both VPNs operate independently and are configured separately.
Both the PE and CE routers run PIM sparse mode and maintain forwarding state information about
customer source (C-S) and customer group (C-G) multicast components. CE routers still send a
customer's PIM join messages (PIM C-Join) from CE to PE, and from PE to CE, as shown in the figure.
But on the provider's backbone network, all multicast information is carried by MBGP. The only addition
over and above the unicast VPN configuration normally used is the use of a special provider tunnel
(provider-tunnel) for carrying PIM sparse mode message content between provider nodes on the
network.
There are several scenarios for MVPN configuration using MBGP, depending on whether a customer site
has senders (sources) of multicast traffic, has receivers of multicast traffic, or a mixture of senders and
receivers. MVPNs can be:
• A full mesh (each MVPN site has both senders and receivers)
• A hub and spoke (two interfaces between hub PE and hub CE, and all spokes are sender-receiver
sites)
Each type of MVPN differs more in the VPN configuration statements than in the provider tunnel
configuration. For information about configuring VPNs, see the Junos OS VPNs Library for Routing
Devices.
IN THIS SECTION
Configuration Steps
Step-by-Step Procedure
In this example, PE-1 connects to VPN A and VPN B at site 1, PE-4 connects to VPN A at site 4, and
PE-2 connects to VPN B at site 3. To configure a full mesh MVPN for VPN A and VPN B, perform the
following steps:
[edit]
routing-instances {
VPN-A {
instance-type vrf;
interface so-6/0/0.0;
interface so-6/0/1.0;
provider-tunnel {
pim-asm {
group-address 224.1.1.1;
}
}
protocols {
mvpn;
}
route-distinguisher 65535:0;
vrf-target target:1:1;
}
VPN-B {
instance-type vrf;
interface ge-0/3/0.0;
provider-tunnel {
pim-asm {
group-address 224.1.1.2;
}
}
protocols {
mvpn;
}
route-distinguisher 65535:1;
vrf-target target:1:2;
}
[edit]
routing-instances {
VPN-A {
instance-type vrf;
interface so-1/0/0.0;
provider-tunnel {
pim-asm {
group-address 224.1.1.1;
}
}
protocols {
mvpn;
}
route-distinguisher 65535:4;
vrf-target target:1:1;
[edit]
routing-instances {
VPN-B {
instance-type vrf;
interface ge-1/3/0.0;
provider-tunnel {
pim-asm {
group-address 224.1.1.2;
}
}
protocols {
mvpn;
}
route-distinguisher 65535:3;
vrf-target target:1:2;
}
Configuring Sender-Only and Receiver-Only Sites Using PIM ASM Provider Tunnels
IN THIS SECTION
This example describes how to configure an MBGP MVPN with a mixture of sender-only and receiver-
only sites using PIM-ASM provider tunnels.
Configuration Steps
Step-by-Step Procedure
In this example, PE-1 connects to VPN A (sender-only) and VPN B (receiver-only) at site 1, PE-4
connects to VPN A (receiver-only) at site 4, and PE-2 connects to VPN A (receiver-only) and VPN B
(sender-only) at site 3.
To configure an MVPN for a mixture of sender-only and receiver-only sites on VPN A and VPN B,
perform the following steps:
[edit]
routing-instances {
VPN-A {
instance-type vrf;
interface so-6/0/0.0;
interface so-6/0/1.0;
provider-tunnel {
pim-asm {
group-address 224.1.1.1;
}
}
protocols {
mvpn {
sender-site;
route-target {
export-target unicast;
import-target target target:1:4;
}
}
route-distinguisher 65535:0;
vrf-target target:1:1;
routing-options {
auto-export;
}
}
VPN-B {
instance-type vrf;
interface ge-0/3/0.0;
provider-tunnel {
pim-asm {
group-address 224.1.1.2;
}
}
protocols {
mvpn {
receiver-site;
route-target {
export-target target target:1:5;
import-target unicast;
}
}
}
route-distinguisher 65535:1;
vrf-target target:1:2;
routing-options {
auto-export;
}
}
[edit]
routing-instances {
VPN-A {
instance-type vrf;
interface so-1/0/0.0;
provider-tunnel {
pim-asm {
group-address 224.1.1.1;
}
}
protocols {
mvpn {
receiver-site;
route-target {
export-target target target:1:4;
import-target unicast;
}
}
}
route-distinguisher 65535:2;
vrf-target target:1:1;
routing-options {
auto-export;
}
}
[edit]
routing-instances {
VPN-A {
instance-type vrf;
interface so-2/0/1.0;
provider-tunnel {
pim-asm {
group-address 224.1.1.1;
}
}
protocols {
mvpn {
receiver-site;
route-target {
export-target target target:1:4;
import-target unicast;
}
}
}
route-distinguisher 65535:3;
vrf-target target:1:1;
routing-options {
auto-export;
}
}
VPN-B {
instance-type vrf;
interface ge-1/3/0.0;
provider-tunnel {
pim-asm {
group-address 224.1.1.2;
}
}
protocols {
mvpn {
sender-site;
route-target {
export-target unicast;
import-target target target:1:5;
}
}
}
route-distinguisher 65535:4;
vrf-target target:1:2;
routing-options {
auto-export;
}
}
IN THIS SECTION
This example describes how to configure an MBGP MVPN with a mixture of sender-only, receiver-only,
and sender-receiver sites.
Configuration Steps
Step-by-Step Procedure
In this example, PE-1 connects to VPN A (sender-receiver) and VPN B (receiver-only) at site 1, PE-4
connects to VPN A (receiver-only) at site 4, and PE-2 connects to VPN A (sender-only) and VPN B
(sender-only) at site 3. To configure an MVPN for a mixture of sender-only, receiver-only, and sender-
receiver sites for VPN A and VPN B, perform the following steps:
[edit]
routing-instances {
VPN-A {
instance-type vrf;
interface so-6/0/0.0;
interface so-6/0/1.0;
provider-tunnel {
pim-asm {
group-address 224.1.1.1;
}
}
protocols {
mvpn {
route-target {
export-target unicast target target:1:4;
import-target unicast target target:1:4 receiver;
}
}
}
route-distinguisher 65535:0;
vrf-target target:1:1;
routing-options {
auto-export;
}
}
VPN-B {
instance-type vrf;
interface ge-0/3/0.0;
provider-tunnel {
pim-asm {
group-address 224.1.1.2;
}
}
protocols {
mvpn {
receiver-site;
route-target {
export-target target target:1:5;
import-target unicast;
}
}
}
route-distinguisher 65535:1;
vrf-target target:1:2;
routing-options {
auto-export;
}
}
[edit]
routing-instances {
VPN-A {
instance-type vrf;
interface so-1/0/0.0;
provider-tunnel {
pim-asm {
group-address 224.1.1.1;
}
}
protocols {
mvpn {
receiver-site;
route-target {
export-target target target:1:4;
import-target unicast;
}
}
}
route-distinguisher 65535:2;
vrf-target target:1:1;
routing-options {
auto-export;
}
}
[edit]
routing-instances {
VPN-A {
instance-type vrf;
interface so-2/0/1.0;
provider-tunnel {
pim-asm {
group-address 224.1.1.1;
}
}
protocols {
mvpn {
receiver-site;
route-target {
export-target target target:1:4;
import-target unicast;
}
}
}
route-distinguisher 65535:3;
vrf-target target:1:1;
routing-options {
auto-export;
}
}
VPN-B {
instance-type vrf;
interface ge-1/3/0.0;
provider-tunnel {
pim-asm {
group-address 224.1.1.2;
}
}
protocols {
mvpn {
sender-site;
route-target {
export-target unicast;
import-target target target:1:5;
}
}
}
route-distinguisher 65535:4;
vrf-target target:1:2;
routing-options {
auto-export;
}
}
IN THIS SECTION
This example describes how to configure an MBGP MVPN in a hub and spoke topology.
Configuration Steps
Step-by-Step Procedure
In this example, which only configures VPN A, PE-1 connects to VPN A (spoke site) at site 1, PE-4
connects to VPN A (hub site) at site 4, and PE-2 connects to VPN A (spoke site) at site 3. Current
support is limited to the case where there are two interfaces between the hub site CE and PE. To
configure a hub-and-spoke MVPN for VPN A, perform the following steps:
[edit]
routing-instances {
VPN-A {
instance-type vrf;
interface so-6/0/0.0;
interface so-6/0/1.0;
provider-tunnel {
rsvp-te {
label-switched-path-template {
default-template;
}
}
}
protocols {
mvpn {
route-target {
export-target unicast;
import-target unicast target target:1:4;
}
}
}
route-distinguisher 65535:0;
vrf-target {
import target:1:1;
export target:1:3;
}
routing-options {
auto-export;
}
}
[edit]
routing-instances {
VPN-A-spoke-to-hub {
instance-type vrf;
interface so-1/0/0.0; #receives data and joins from the CE
protocols {
mvpn {
receiver-site;
route-target {
export-target target target:1:4;
import-target unicast;
}
}
ospf {
export redistribute-vpn; #redistributes VPN routes to CE
area 0.0.0.0 {
interface so-1/0/0;
}
}
}
route-distinguisher 65535:2;
vrf-target {
import target:1:3;
}
routing-options {
auto-export;
}
}
VPN-A-hub-to-spoke {
instance-type vrf;
interface so-2/0/0.0; #receives data and joins from the CE
provider-tunnel {
rsvp-te {
label-switched-path-template {
default-template;
}
}
}
protocols {
mvpn {
sender-site;
route-target {
import-target target target:1:3;
export-target unicast;
}
}
ospf {
export redistribute-vpn; #redistributes VPN routes to CE
area 0.0.0.0 {
interface so-2/0/0;
}
}
}
route-distinguisher 65535:2;
vrf-target {
import target:1:1;
}
routing-options {
auto-export;
}
}
[edit]
routing-instances {
VPN-A {
instance-type vrf;
interface so-2/0/1.0;
provider-tunnel {
rsvp-te {
label-switched-path-template {
default-template;
}
}
}
protocols {
mvpn {
route-target {
import-target target target:1:4;
export-target unicast;
}
}
}
route-distinguisher 65535:3;
vrf-target {
import target:1:1;
export target:1:3;
}
routing-options {
auto-export;
}
}
LDP. Enabling nonstop active routing (NSR) for BGP MVPN requires that NSR support is enabled for all
these protocols.
• Configure an interior gateway protocol or static routing. See the Junos OS Routing Protocols Library.
• Configure a multicast group membership protocol (IGMP or MLD). See Understanding IGMP and
Understanding MLD.
• For this feature to work with IPv6, the routing device must be running Junos OS Release 10.4 or
later.
The state maintained by MVPN includes MVPN routes, customer multicast (C-multicast) state, provider-tunnel state, and forwarding
information. BGP MVPN NSR synchronizes this MVPN state between the primary and backup Routing
Engines. While some of the state on the backup Routing Engine is locally built based on the
configuration, most of it is built based on triggers from other protocols that MVPN interacts with. The
triggers from these protocols are in turn the result of state replication performed by these modules. This
includes route change notifications by unicast protocols, join and prune triggers from PIM, remote
MVPN route notification by BGP, and provider-tunnel related notifications from RSVP and LDP.
NSR and unified in-service software upgrade (ISSU) support for the BGP MVPN protocol covers
features such as various provider tunnel types, different MVPN modes (source tree, shared tree), and
PIM features. As a result, at the ingress PE, replication is turned on for dynamic LSPs. Thus,
when NSR is configured, the state for dynamic LSPs is also replicated to the backup Routing Engine.
After the state is resolved on the backup Routing Engine, RSVP sends required notifications to MVPN.
Nonstop active routing configurations include two Routing Engines that share information so that
routing is not interrupted during Routing Engine failover. When NSR is configured on a dual Routing
Engine platform, the PIM control state is replicated on both Routing Engines.
• Neighbor relationships
• RP-set information
• Synchronization between routes and next hops and the forwarding state between the two Routing
Engines
• Dense mode
• Sparse mode
• SSM
• Static RP
• Bootstrap router
• BFD support
• Policy features such as neighbor policy, bootstrap router export and import policies, scope policy,
flow maps, and reverse path forwarding (RPF) check policies
1. NSR requires you to configure graceful Routing Engine switchover (GRES). To enable GRES, include
the graceful-switchover statement at the [edit chassis redundancy] hierarchy level.
[edit]
user@host# set chassis redundancy graceful-switchover
2. Include the synchronize statement at the [edit system] hierarchy level so that configuration changes
are synchronized on both Routing Engines.
[edit system]
user@host# set synchronize
user@host# exit
3. Configure PIM settings on the designated router with sparse mode and version, and a static address
pointing to the rendezvous point.
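For example (the RP address 10.255.0.1 is a placeholder; use the address of your rendezvous point):
[edit protocols pim]
user@host# set rp static address 10.255.0.1
user@host# set interface all mode sparse
user@host# set interface all version 2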
[edit]
user@host# set routing-options forwarding-table export load-balance
[edit]
user@host# set routing-options nonstop-routing
user@host# set routing-options router-id address
For example, to set the router ID on the designated router with address 10.210.255.201:
[edit]
user@host# set routing-options router-id 10.210.255.201
SEE ALSO
Release Description
15.1X49-D50 Starting in Junos OS Release 15.1X49-D50 and Junos OS Release 17.3R1, the vrf-table-label
statement allows mapping of the inner label to a specific Virtual Routing and Forwarding (VRF).
This mapping allows examination of the encapsulated IP header at an egress VPN router. For SRX
Series devices, the vrf-table-label statement is currently supported only on physical interfaces. As a
workaround, deactivate vrf-table-label or use physical interfaces.
RELATED DOCUMENTATION
This topic provides an overview of Junos OS support for Inter-Autonomous System (AS) Option B, which is
achieved by extending Border Gateway Protocol Multicast Virtual Private Network (BGP-MVPN) to
support Inter-AS scenarios using segmented provider tunnels (p-tunnels). Junos OS also supports Option
A and Option C unicast with non-segmented p-tunnels, support for which was introduced in Junos OS
Release 12.1. See the links below for more information on these options.
Inter-AS support for multicast traffic is required when an L3VPN results in two or more ASes that are
using BGP-MVPN. The ASes may be administered by the same authority or by different authorities.
When using BGP-MVPN Inter-AS Option B with segmented p-tunnels, the p-tunnel segmentation is
performed at the autonomous system border routers (ASBRs). The ASBRs also perform BGP-MVPN
signaling and form the data plane.
Setting up Inter-AS Option B with segmented p-tunnels can be complex, but the configuration does
provide the following advantages:
• Independence. Different administrative authorities can choose whether or not to allow topology
discovery of their AS by the other ASes. That is, each AS can be separately controlled by a different
independent authority.
• Heterogeneity. Different p-tunnel technologies can be used within a given AS (as might be the case
when working with heterogeneous networks that now must be combined).
• Scale. Inter-AS Option B with segmented p-tunnels avoids the potential for ASBR bottleneck that can
happen when Intra-AS p-tunnels are set up across ASes using non-segmented p-tunnels. (Unicast
branch LSPs with inclusive p-tunnels can all have to transit through the ASBRs. In this case, for IR,
the pinch point becomes data-plane scale. For RSVP-TE it becomes P2MP control-plane scale, due to
the high number of RSVP refresh messages passing through the ASBRs).
The supported Junos implementation of Option B uses RSVP-TE p-tunnels for all segments, and MVPN
Inter-AS signaling procedures. Multicast traffic is forwarded across AS boundaries over a single-hop
labeled LSP. Inter-AS p-tunnels have two segments: an ASBR-to-ASBR segment, called the Inter-AS
segment, and an ASBR-to-PE segment, called the Intra-AS segment. (Static RSVP-TE, IR, PIM-ASM, and
PIM-SSM p-tunnels are not supported.)
MVPN Intra-AS AD routes are not propagated across the AS boundary. The Intra-AS inclusive p-tunnels
advertised in Type-1 routes are terminated at the ASBRs within each AS. Route learning for both unicast
and multicast traffic occurs only through Option B.
The ASBR originates an Inter-AS AD (Type-2) route into eBGP, which may include tunnel attributes for
an Inter-AS p-tunnel (called an Inter-AS, or ASBR-ASBR p-tunnel segment). The Type-2 route contains
the ASBR's route distinguisher (RD), which is unique per VPN and per ASBR, and its AS number. The
tunnel is set up between two directly connected ASBRs in neighboring ASes, and it is always a single-hop
point-to-point (P2P) LSP.
An ASBR in the originating AS forwards all multicast traffic received over the inclusive p-tunnel into the
Inter-AS p-tunnel. An ASBR in the adjacent AS propagates the received Inter-AS route into its own AS
over iBGP, but only after rewriting the Provider Multicast Service Interface (PMSI) tunnel attributes and
modifying the next-hop of the Multiprotocol Reachable NLRI (MP_REACH_NLRI) attribute with a reachable
address of the ASBR (next-hop self rewrite). When an ASBR propagates the Type-2 route over iBGP, it
can choose any p-tunnel type supported within its AS, although the supported Junos implementation of
Option B uses RSVP-TE p-tunnels only for all segments.
At the ASBRs, traffic received over the upstream p-tunnel segment is forwarded over the downstream
p-tunnel segment. This process is repeated at each AS boundary. The resulting Inter-AS p-tunnel
consists of alternating Inter-AS and Intra-AS p-tunnel segments (thus the name, “segmented p-tunnel”).
• The ASBRs distribute both VPN routes and routes in the master instance. They may thus become a
bottleneck.
• With a large number of VPNs, the ASBR can run out of labels because each unicast VPN route
requires one.
• Unless route-targets are rewritten at the AS boundaries, the different service providers must agree
on VPN route-targets (this is the same as for Option C).
• The ASBRs must be capable of MVPN signaling and support Inter-AS MVPN procedures.
RELATED DOCUMENTATION
IN THIS SECTION
A multicast VPN (MVPN) extranet enables service providers to forward IP multicast traffic originating in
one VPN routing and forwarding (VRF) instance to receivers in a different VRF instance. This capability
is also known as overlapping MVPNs.
• A receiver in one VRF can receive multicast traffic from a source connected to a different router in a
different VRF.
• A receiver in one VRF can receive multicast traffic from a source connected to the same router in a
different VRF.
• A receiver in one VRF can receive multicast traffic from a source connected to a different router in
the same VRF.
• A receiver in one VRF can be prevented from receiving multicast traffic from a specific source in a
different VRF.
An MVPN extranet is useful when there are business partnerships between different enterprise VPN
customers that require them to be able to communicate with one another. For example, a wholesale
company might want to broadcast inventory to its contractors and resellers. An MVPN extranet is also
useful when companies merge and one set of VPN sites needs to receive content from another VPN.
The enterprises involved in the merger are different VPN customers from the service provider point of
view. The MVPN extranet makes the connectivity possible.
Video Distribution
Another use for MVPN extranets is video multicast distribution from a video headend to receiving sites.
Sites within a given multicast VPN might be in different organizations. The receivers can subscribe to
content from a specific content provider.
The PE routers on the MVPN provider network learn about the sources and receivers using MVPN
mechanisms. These PE routers can use selective trees as the multicast distribution mechanism in the
backbone. The network carries traffic belonging only to a specified set of one or more multicast groups,
from one or more multicast VPNs. As a result, this model facilitates the distribution of content from
multiple providers on a selective basis if desired.
Financial Services
A third use for MVPN extranets is enterprise and financial services infrastructures. The delivery of
financial data, such as financial market updates, stock ticker values, and financial TV channels, is an
example of an application that must deliver the same data stream to hundreds and potentially thousands
of end users. The content distribution mechanisms largely rely on multicast within the financial provider
network. In this case, there could also be an extensive multicast topology within brokerage firms and
banks networks to enable further distribution of content and for trading applications. Financial service
providers require traffic separation between customers accessing the content, and MVPN extranets
provide this separation.
• If there is more than one VRF routing instance on a provider edge (PE) router that has receivers
interested in receiving multicast traffic from the same source, virtual tunnel (VT) interfaces must be
configured on all instances.
• For auto-RP operation, the mapping agent must be configured on at least two PEs in the extranet
network.
• For asymmetrically configured extranets using auto-RP, when one VRF instance is the only instance
that imports routes from all other extranet instances, the mapping agent must be configured in the
VRF that can receive all RP discovery messages from all VRF instances, and mapping-agent election
should be disabled.
• For bootstrap router (BSR) operation, the candidate and elected BSRs can be on PE, CE, or C routers.
The PE router that connects the BSR to the MVPN extranet must have provider tunnels or other
interfaces configured in the routing instance. The only unsupported case is a BSR on a CE or C router
connected to a PE routing instance that is part of an extranet but has no configured provider tunnels
and no interfaces other than the one connecting to the CE router.
• PIM dense mode is not supported in MVPN extranet VRF instances.
This example provides a step-by-step procedure to configure multicast VPN extranets using static
rendezvous points.
Requirements
• One Adaptive Services PIC or Multiservices PIC in each of the T Series routers acting as PE routers
• One host system capable of sending multicast traffic and supporting the Internet Group Management
Protocol (IGMP)
• Three host systems capable of receiving multicast traffic and supporting IGMP
• The multicast traffic originating at source H1 can be received by host H4 connected to router CE2 in
the green VPN.
• The multicast traffic originating at source H1 can be received by host H3 connected to router CE3 in
the blue VPN.
• The multicast traffic originating at source H1 can be received by host H2 directly connected to router
PE1 in the red VPN.
Topology
Configuration
NOTE: In any configuration session, it is good practice to verify periodically that the
configuration can be committed using the commit check command.
In this example, the router being configured is identified using the following command prompts:
Configuring Interfaces
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
1. On the PE and CE routers, configure an IP address on the lo0.0 loopback interface. Specify the
address as the primary address of the interface.
user@CE1# set interfaces lo0 unit 0 family inet address 192.168.6.1/32 primary
user@PE1# set interfaces lo0 unit 0 family inet address 192.168.1.1/32 primary
user@PE2# set interfaces lo0 unit 0 family inet address 192.168.2.1/32 primary
user@CE2# set interfaces lo0 unit 0 family inet address 192.168.4.1/32 primary
user@PE3# set interfaces lo0 unit 0 family inet address 192.168.7.1/32 primary
user@CE3# set interfaces lo0 unit 0 family inet address 192.168.9.1/32 primary
Use the show interfaces terse command to verify that the correct IP address is configured on the
loopback interface.
2. On the PE and CE routers, configure the IP address and protocol family on the Fast Ethernet and
Gigabit Ethernet interfaces. Specify the inet address family type.
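For example, on Router PE1 the Ethernet interfaces are configured as follows (the addresses correspond
to the full configurations shown in the Results section):
user@PE1# set interfaces fe-0/1/0 unit 0 description "to H2"
user@PE1# set interfaces fe-0/1/0 unit 0 family inet address 10.2.11.2/30
user@PE1# set interfaces fe-0/1/1 unit 0 family inet address 10.0.17.13/30
user@PE1# set interfaces ge-0/3/0 unit 0 family inet address 10.0.12.9/30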
Use the show interfaces terse command to verify that the correct IP address and address family type
are configured on the interfaces.
3. On the PE and CE routers, configure the SONET interfaces. Specify the inet address family type, and
local IP address.
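For example, the SONET link between Router CE1 and Router PE1 uses the following addresses, taken
from the Results section:
user@CE1# set interfaces so-0/0/3 unit 0 family inet address 10.0.16.1/30
user@PE1# set interfaces so-0/0/3 unit 0 family inet address 10.0.16.2/30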
Use the show configuration interfaces command to verify that the correct IP address and address
family type are configured on the interfaces.
user@host> commit
commit complete
Step-by-Step Procedure
On the PE routers, configure an interior gateway protocol such as OSPF or IS-IS. This example shows
how to configure OSPF.
user@PE1# set protocols ospf area 0.0.0.0 interface ge-0/3/0.0 metric 100
user@PE1# set protocols ospf area 0.0.0.0 interface fe-0/1/1.0 metric 100
user@PE1# set protocols ospf area 0.0.0.0 interface lo0.0 passive
user@PE1# set protocols ospf area 0.0.0.0 interface fxp0.0 disable
user@PE2# set protocols ospf area 0.0.0.0 interface fe-0/1/3.0 metric 100
user@PE2# set protocols ospf area 0.0.0.0 interface ge-1/3/0.0 metric 100
user@PE2# set protocols ospf area 0.0.0.0 interface lo0.0 passive
user@PE2# set protocols ospf area 0.0.0.0 interface fxp0.0 disable
user@PE3# set protocols ospf area 0.0.0.0 interface lo0.0 passive
user@PE3# set protocols ospf area 0.0.0.0 interface fe-0/1/3.0 metric 100
user@PE3# set protocols ospf area 0.0.0.0 interface fe-0/1/1.0 metric 100
user@PE3# set protocols ospf area 0.0.0.0 interface fxp0.0 disable
Use the show ospf overview and show configuration protocols ospf commands to verify that the
correct interfaces have been configured for the OSPF protocol.
3. On the PE routers, configure OSPF traffic engineering support. Enabling traffic engineering
extensions supports the Constrained Shortest Path First algorithm, which is needed to support
Resource Reservation Protocol - Traffic Engineering (RSVP-TE) point-to-multipoint label-switched
paths (LSPs). If you are configuring IS-IS, traffic engineering is supported without any additional
configuration.
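Traffic engineering is enabled with a single statement on each PE router, as shown in the Results
section:
user@PE1# set protocols ospf traffic-engineering
user@PE2# set protocols ospf traffic-engineering
user@PE3# set protocols ospf traffic-engineering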
Use the show ospf overview and show configuration protocols ospf commands to verify that traffic
engineering support is enabled for the OSPF protocol.
user@host> commit
commit complete
Verify that the neighbor state with the other two PE routers is Full.
Step-by-Step Procedure
1. On the PE routers, configure BGP. Configure the BGP local autonomous system number.
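For example, on Router PE1 (all three PE routers use autonomous system 65000 in this example):
user@PE1# set routing-options autonomous-system 65000
user@PE1# set routing-options router-id 192.168.1.1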
2. Configure the BGP peer groups. Configure the local address as the lo0.0 address on the router. The
neighbor addresses are the lo0.0 addresses of the other PE routers.
The unicast statement enables the router to use BGP to advertise network layer reachability
information (NLRI). The signaling statement enables the router to use BGP as the signaling protocol
for the VPN.
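For example, the BGP group configured on Router PE1, matching the Results section, is:
user@PE1# set protocols bgp group group-mvpn type internal
user@PE1# set protocols bgp group group-mvpn local-address 192.168.1.1
user@PE1# set protocols bgp group group-mvpn family inet-vpn unicast
user@PE1# set protocols bgp group group-mvpn family inet-mvpn signaling
user@PE1# set protocols bgp group group-mvpn neighbor 192.168.2.1
user@PE1# set protocols bgp group group-mvpn neighbor 192.168.7.1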
user@host> commit
commit complete
4. On the PE routers, verify that the BGP neighbors form a peer session.
Verify that the peer state for the other two PE routers is Established and that the lo0.0 addresses of
the other PE routers are shown as peers.
Configuring LDP
Step-by-Step Procedure
1. On the PE routers, configure LDP to support unicast traffic. Specify the core-facing Fast Ethernet and
Gigabit Ethernet interfaces between the PE routers. Also configure LDP specifying the lo0.0
interface. As a best practice, disable LDP on the fxp0 interface.
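For example, on Router PE1 (the deaggregate statement matches the configuration in the Results
section):
user@PE1# set protocols ldp deaggregate
user@PE1# set protocols ldp interface ge-0/3/0.0
user@PE1# set protocols ldp interface fe-0/1/1.0
user@PE1# set protocols ldp interface lo0.0
user@PE1# set protocols ldp interface fxp0.0 disable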
user@host> commit
commit complete
3. On the PE routers, use the show ldp route command to verify the LDP route.
Verify that a next-hop interface and next-hop address have been established for each remote
destination in the core network. Notice that local destinations do not have next-hop interfaces, and
remote destinations outside the core do not have next-hop addresses.
Configuring RSVP
Step-by-Step Procedure
1. On the PE routers, configure RSVP. Specify the core-facing Fast Ethernet and Gigabit Ethernet
interfaces that participate in the LSP. Also specify the lo0.0 interface. As a best practice, disable
RSVP on the fxp0 interface.
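For example, on Router PE1:
user@PE1# set protocols rsvp interface ge-0/3/0.0
user@PE1# set protocols rsvp interface fe-0/1/1.0
user@PE1# set protocols rsvp interface lo0.0
user@PE1# set protocols rsvp interface fxp0.0 disable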
user@host> commit
commit complete
Verify these steps using the show configuration protocols rsvp command. You can verify the
operation of RSVP only after the LSP is established.
Configuring MPLS
Step-by-Step Procedure
1. On the PE routers, configure MPLS. Specify the core-facing Fast Ethernet and Gigabit Ethernet
interfaces that participate in the LSP. As a best practice, disable MPLS on the fxp0 interface.
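For example, on Router PE1:
user@PE1# set protocols mpls interface ge-0/3/0.0
user@PE1# set protocols mpls interface fe-0/1/1.0
user@PE1# set protocols mpls interface fxp0.0 disable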
Use the show configuration protocols mpls command to verify that the core-facing Fast Ethernet
and Gigabit Ethernet interfaces are configured for MPLS.
2. On the PE routers, configure the core-facing interfaces associated with the LSP. Specify the mpls
address family type.
Use the show mpls interface command to verify that the core-facing interfaces have the MPLS
address family configured.
user@host> commit
commit complete
You can verify the operation of MPLS after the LSP is established.
Step-by-Step Procedure
1. On Router PE1, configure the routing instance for the green and red VPNs. Specify the vrf instance
type and specify the customer-facing SONET interfaces.
Configure a virtual tunnel (VT) interface on all MVPN routing instances on each PE where hosts in
different instances need to receive multicast traffic from the same source.
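The equivalent set commands for Router PE1, matching the Results section, are:
user@PE1# set routing-instances green instance-type vrf
user@PE1# set routing-instances green interface so-0/0/3.0
user@PE1# set routing-instances green interface vt-1/2/0.1 multicast
user@PE1# set routing-instances green interface lo0.1
user@PE1# set routing-instances red instance-type vrf
user@PE1# set routing-instances red interface fe-0/1/0.0
user@PE1# set routing-instances red interface vt-1/2/0.2
user@PE1# set routing-instances red interface lo0.2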
Use the show configuration routing-instances green and show configuration routing-instances red
commands to verify that the virtual tunnel interfaces have been correctly configured.
2. On Router PE2, configure the routing instance for the green VPN. Specify the vrf instance type and
specify the customer-facing SONET interfaces.
3. On Router PE3, configure the routing instance for the blue VPN. Specify the vrf instance type and
specify the customer-facing SONET interfaces.
Use the show configuration routing-instances blue command to verify that the instance type has
been configured correctly and that the correct interfaces have been configured in the routing
instance.
4. On Router PE1, configure a route distinguisher for the green and red routing instances. A route
distinguisher allows the router to distinguish between two identical IP prefixes used as VPN routes.
TIP: To help in troubleshooting, this example shows how to configure the route distinguisher
to match the router ID. This allows you to associate a route with the router that advertised
it.
5. On Router PE2, configure a route distinguisher for the green routing instance.
6. On Router PE3, configure a route distinguisher for the blue routing instance.
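The route distinguishers used in this example match each router ID, as shown in the Results section:
user@PE1# set routing-instances green route-distinguisher 192.168.1.1:1
user@PE1# set routing-instances red route-distinguisher 192.168.1.1:2
user@PE2# set routing-instances green route-distinguisher 192.168.2.1:1
user@PE3# set routing-instances blue route-distinguisher 192.168.7.1:3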
7. On the PE routers, configure the VPN routing instance for multicast support.
Use the show configuration routing-instances command to verify that the route distinguisher is
configured correctly and that the MVPN protocol is enabled in the routing instance.
8. On the PE routers, configure an IP address on additional loopback logical interfaces. These logical
interfaces are used as the loopback addresses for the VPNs.
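The VRF loopback addresses used in this example, taken from the Results section, are:
user@PE1# set interfaces lo0 unit 1 family inet address 10.10.1.1/32
user@PE1# set interfaces lo0 unit 2 family inet address 10.2.1.1/32
user@PE2# set interfaces lo0 unit 1 family inet address 10.10.22.2/32
user@PE3# set interfaces lo0 unit 1 family inet address 10.3.33.3/32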
Use the show interfaces terse command to verify that the loopback logical interfaces are correctly
configured.
9. On the PE routers, configure virtual tunnel interfaces. These interfaces are used in VRF instances
where multicast traffic arriving on a provider tunnel needs to be forwarded to multiple VPNs.
user@PE1# set interfaces vt-1/2/0 unit 1 description "green VRF multicast vt"
user@PE1# set interfaces vt-1/2/0 unit 1 family inet
user@PE1# set interfaces vt-1/2/0 unit 2 description "red VRF unicast and multicast vt"
user@PE1# set interfaces vt-1/2/0 unit 2 family inet
user@PE1# set interfaces vt-1/2/0 unit 3 description "blue VRF multicast vt"
user@PE1# set interfaces vt-1/2/0 unit 3 family inet
user@PE2# set interfaces vt-1/2/0 unit 1 description "green VRF unicast and multicast vt"
user@PE2# set interfaces vt-1/2/0 unit 1 family inet
user@PE2# set interfaces vt-1/2/0 unit 3 description "blue VRF unicast and multicast vt"
user@PE2# set interfaces vt-1/2/0 unit 3 family inet
user@PE3# set interfaces vt-1/2/0 unit 3 description "blue VRF unicast and multicast vt"
user@PE3# set interfaces vt-1/2/0 unit 3 family inet
Use the show interfaces terse command to verify that the virtual tunnel interfaces have the correct
address family type configured.
Use the show configuration routing-instances command to verify that the provider tunnel is
configured to use the default LSP template.
NOTE: You cannot commit the configuration for the VRF instance until you configure the
VRF target in the next section.
Step-by-Step Procedure
1. On the PE routers, define the VPN community name for the route targets for each VPN. The
community names are used in the VPN import and export policies.
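In this example, each PE router defines the same three communities, as shown in the Results section:
user@PE1# set policy-options community green-com members target:65000:1
user@PE1# set policy-options community red-com members target:65000:2
user@PE1# set policy-options community blue-com members target:65000:3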
Use the show policy-options command to verify that the correct VPN community name and route
target are configured.
2. On the PE routers, configure the VPN import policy. Include the community name of the route
targets that you want to accept. Do not include the community name of the route targets that you
do not want to accept. For example, omit the community name for routes from the VPN of a
multicast sender from which you do not want to receive multicast traffic.
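In this example, all three PE routers import all three route targets; the policy shown in the Results
section is expressed as set commands as follows:
user@PE1# set policy-options policy-statement green-red-blue-import term t1 from community [ green-com red-com blue-com ]
user@PE1# set policy-options policy-statement green-red-blue-import term t1 then accept
user@PE1# set policy-options policy-statement green-red-blue-import term t2 then reject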
Use the show policy green-red-blue-import command to verify that the VPN import policy is
correctly configured.
3. On the PE routers, apply the VRF import policy. In this example, the policy is defined in a
policy-statement named green-red-blue-import, and the target communities are defined at the [edit
policy-options] hierarchy level.
Use the show configuration routing-instances command to verify that the correct VRF import
policy has been applied.
4. On the PE routers, configure VRF export targets. The vrf-target statement and export option cause
the routes being advertised to be labeled with the target community.
For Router PE3, the vrf-target statement is included without specifying the export option. If you do
not specify the import or export options, default VRF import and export policies are generated that
accept imported routes and tag exported routes with the specified target community.
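The export targets used in this example, matching the Results section, are:
user@PE1# set routing-instances green vrf-target export target:65000:1
user@PE1# set routing-instances red vrf-target export target:65000:2
user@PE2# set routing-instances green vrf-target export target:65000:1
user@PE3# set routing-instances blue vrf-target target:65000:3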
NOTE: You must configure the same route target on each PE router for a given VPN routing
instance.
Use the show configuration routing-instances command to verify that the correct VRF export
targets have been configured.
5. On the PE routers, configure automatic exporting of routes between VRF instances. When you
include the auto-export statement, the vrf-import and vrf-export policies are compared across all
VRF instances. If there is a common route target community between the instances, the routes are
shared. In this example, the auto-export statement must be included under all instances that need
to send traffic to and receive traffic from another instance located on the same router.
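For example, on Router PE1, which hosts both the green and red instances:
user@PE1# set routing-instances green routing-options auto-export
user@PE1# set routing-instances red routing-options auto-export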
6. On the PE routers, configure the load balance policy statement. While load balancing leads to
better utilization of the available links, it is not required for MVPN extranets. It is included here as a
best practice.
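The policy and its application to the forwarding table, as shown in the Results section, are:
user@PE1# set policy-options policy-statement load-balance then load-balance per-packet
user@PE1# set routing-options forwarding-table export load-balance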
Use the show policy-options command to verify that the load balance policy statement has been
correctly configured.
user@host> commit
commit complete
9. On the PE routers, use the show rsvp neighbor command to verify that the RSVP neighbors are
established.
192.168.7.1 192.168.1.1 Up 0 *
192.168.7.1:192.168.1.1:1:mvpn:green
P2MP name: 192.168.1.1:2:mvpn:red, P2MP branch count: 2
To From State Rt P ActivePath LSPname
192.168.2.1 192.168.1.1 Up 0 *
192.168.2.1:192.168.1.1:2:mvpn:red
192.168.7.1 192.168.1.1 Up 0 *
192.168.7.1:192.168.1.1:2:mvpn:red
Total 4 displayed, Up 4, Down 0
In this display from Router PE1, notice that there are two ingress LSPs for the green VPN and two
for the red VPN configured on this router. Verify that the state of each ingress LSP is up. Also
notice that there is one egress LSP for each of the green and blue VPNs. Verify that the state of
each egress LSP is up.
TIP: The LSP name displayed in the show mpls lsp p2mp command output can be used in
the ping mpls rsvp <lsp-name> multipath command.
Step-by-Step Procedure
1. On the PE routers, configure the BGP export policy. The BGP export policy is used to allow static
routes and routes that originated from directly attached interfaces to be exported to BGP.
Use the show policy BGP-export command to verify that the BGP export policy is correctly
configured.
2. On the PE routers, configure the PE-to-CE BGP session. Use the IP address of the SONET interface as
the neighbor address. Specify the autonomous system number for the VPN network of the attached
CE router.
user@PE1# set routing-instances green protocols bgp group PE-CE export BGP-export
user@PE1# set routing-instances green protocols bgp group PE-CE neighbor 10.0.16.1 peer-as 65001
user@PE2# set routing-instances green protocols bgp group PE-CE export BGP-export
user@PE2# set routing-instances green protocols bgp group PE-CE neighbor 10.0.24.2 peer-as 65009
user@PE3# set routing-instances blue protocols bgp group PE-CE export BGP-export
user@PE3# set routing-instances blue protocols bgp group PE-CE neighbor 10.0.79.2 peer-as 65003
4. On the CE routers, configure the BGP export policy. The BGP export policy is used to allow static
routes and routes that originated from directly attached interfaces to be exported to BGP.
Use the show policy BGP-export command to verify that the BGP export policy is correctly
configured.
5. On the CE routers, configure the CE-to-PE BGP session. Use the IP address of the SONET interface
as the neighbor address. Specify the autonomous system number of the core network. Apply the
BGP export policy.
user@host> commit
commit complete
7. On the PE routers, use the show bgp group PE-CE command to verify that the BGP neighbors form a
peer session.
Verify that the peer state for the CE routers is Established and that the IP address configured on the
peer SONET interface is shown as the peer.
Step-by-Step Procedure
1. On the PE routers, enable an instance of PIM in each VPN. Configure the lo0.1, lo0.2, and
customer-facing SONET and Fast Ethernet interfaces. Specify the mode as sparse.
user@PE1# set routing-instances green protocols pim interface lo0.1 mode sparse
user@PE1# set routing-instances green protocols pim interface so-0/0/3.0 mode sparse
user@PE1# set routing-instances red protocols pim interface lo0.2 mode sparse
user@PE1# set routing-instances red protocols pim interface fe-0/1/0.0 mode sparse
user@PE2# set routing-instances green protocols pim interface lo0.1 mode sparse
user@PE2# set routing-instances green protocols pim interface so-0/0/1.0 mode sparse
user@PE3# set routing-instances blue protocols pim interface lo0.1 mode sparse
user@PE3# set routing-instances blue protocols pim interface so-0/0/1.0 mode sparse
user@host> commit
commit complete
3. On the PE routers, use the show pim interfaces instance green command and substitute the
appropriate VRF instance name to verify that the PIM interfaces are up.
Also notice that the normal mode for the virtual tunnel interface and label-switched interface is
SparseDense.
Step-by-Step Procedure
1. On the CE routers, configure the customer-facing and core-facing interfaces for PIM. Specify the
mode as sparse.
Use the show pim interfaces command to verify that the PIM interfaces have been configured to use
sparse mode.
user@host> commit
commit complete
3. On the CE routers, use the show pim interfaces command to verify that the PIM interface status is
up.
Step-by-Step Procedure
1. Configure Router PE1 to be the rendezvous point for the red VPN instance of PIM. Specify the local
lo0.2 address.
2. Configure Router PE2 to be the rendezvous point for the green VPN instance of PIM. Specify the
lo0.1 address of Router PE2.
3. Configure Router PE3 to be the rendezvous point for the blue VPN instance of PIM. Specify the
local lo0.1 address.
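The local RP addresses used in this example are the VRF loopback addresses shown in the Results
section:
user@PE1# set routing-instances red protocols pim rp local address 10.2.1.1
user@PE2# set routing-instances green protocols pim rp local address 10.10.22.2
user@PE3# set routing-instances blue protocols pim rp local address 10.3.33.3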
4. On the PE1, CE1, and CE2 routers, configure the static rendezvous point for the green VPN
instance of PIM. Specify the lo0.1 address of Router PE2.
5. On Router CE3, configure the static rendezvous point for the blue VPN instance of PIM. Specify the
lo0.1 address of Router PE3.
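For example, the static RP statements corresponding to these steps are as follows (the CE routers
configure PIM in the main routing instance):
user@PE1# set routing-instances green protocols pim rp static address 10.10.22.2
user@CE1# set protocols pim rp static address 10.10.22.2
user@CE2# set protocols pim rp static address 10.10.22.2
user@CE3# set protocols pim rp static address 10.3.33.3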
user@host> commit
commit complete
7. On the PE routers, use the show pim rps instance <instance-name> command and substitute the
appropriate VRF instance name to verify that the RPs have been correctly configured.
8. On the CE routers, use the show pim rps command to verify that the RP has been correctly
configured.
9. On Router PE1, use the show route table green.mvpn.0 | find 1 command to verify that the type-1
routes have been received from the PE2 and PE3 routers.
1:192.168.1.1:1:192.168.1.1/240
*[MVPN/70] 03:38:09, metric2 1
Indirect
1:192.168.1.1:2:192.168.1.1/240
*[MVPN/70] 03:38:05, metric2 1
Indirect
1:192.168.2.1:1:192.168.2.1/240
*[BGP/170] 03:12:18, localpref 100, from 192.168.2.1
AS path: I
> to 10.0.12.10 via ge-0/3/0.0
1:192.168.7.1:3:192.168.7.1/240
*[BGP/170] 03:12:18, localpref 100, from 192.168.7.1
AS path: I
> to 10.0.17.14 via fe-0/1/1.0
10. On Router PE1, use the show route table green.mvpn.0 | find 5 command to verify that the type-5
routes have been received from Router PE2.
A designated router (DR) sends periodic join messages and prune messages toward a group-specific
rendezvous point (RP) for each group for which it has active members. When a PIM router learns
about a source, it originates a Multicast Source Discovery Protocol (MSDP) source-address message
if it is the DR on the upstream interface. If an MBGP MVPN is also configured, the PE device
originates a type-5 MVPN route.
11. On Router PE1, use the show route table green.mvpn.0 | find 7 command to verify that the type-7
routes have been received from Router PE2.
12. On Router PE1, use the show route advertising-protocol bgp 192.168.2.1 table green.mvpn.0
detail command to verify that the routes advertised by Router PE2 use the PMSI attribute set to
RSVP-TE.
Step-by-Step Procedure
4. On Router PE1, display the provider tunnel to multicast group mapping by using the show mvpn
c-multicast command.
5. On Router PE2, use the show route table green.mvpn.0 | find 6 command to verify that the type-6
routes have been created as a result of receiving PIM join messages.
NOTE: The multicast address 239.255.255.250 shown in the preceding step is not related
to this example. This address is sent by some host machines.
8. On Router PE2, use the show route table green.mvpn.0 | find 6 command to verify that the type-6
routes have been created as a result of receiving PIM join messages from the multicast receiver
device connected to Router CE3.
11. On Router PE1, use the show route table green.mvpn.0 | find 6 command to verify that the type-6
routes have been created as a result of receiving PIM join messages from the directly connected
multicast receiver device.
NOTE: The multicast address 239.255.255.250 shown in the preceding step is not related to
this example.
Results
The configuration and verification parts of this example have been completed. The following section is
for your reference.
Router CE1
interfaces {
so-0/0/3 {
unit 0 {
description "to PE1 so-0/0/3.0";
family inet {
address 10.0.16.1/30;
}
}
}
fe-1/3/0 {
unit 0 {
family inet {
address 10.10.12.1/24;
}
}
}
lo0 {
unit 0 {
description "CE1 Loopback";
family inet {
address 192.168.6.1/32 {
primary;
}
address 127.0.0.1/32;
}
}
}
}
routing-options {
autonomous-system 65001;
router-id 192.168.6.1;
forwarding-table {
export load-balance;
}
}
protocols {
bgp {
group PE-CE {
export BGP-export;
neighbor 10.0.16.2 {
peer-as 65000;
}
}
}
pim {
rp {
static {
address 10.10.22.2;
}
}
interface fe-1/3/0.0 {
mode sparse;
}
interface so-0/0/3.0 {
mode sparse;
}
}
}
policy-options {
policy-statement BGP-export {
term t1 {
from protocol direct;
then accept;
}
term t2 {
from protocol static;
then accept;
}
}
policy-statement load-balance {
then {
load-balance per-packet;
}
}
}
Router PE1
interfaces {
so-0/0/3 {
unit 0 {
description "to CE1 so-0/0/3.0";
family inet {
address 10.0.16.2/30;
}
}
}
fe-0/1/0 {
unit 0 {
description "to H2";
family inet {
address 10.2.11.2/30;
}
}
}
fe-0/1/1 {
unit 0 {
description "to PE3 fe-0/1/1.0";
family inet {
address 10.0.17.13/30;
}
family mpls;
}
}
ge-0/3/0 {
unit 0 {
description "to PE2 ge-1/3/0.0";
family inet {
address 10.0.12.9/30;
}
family mpls;
}
}
vt-1/2/0 {
unit 1 {
description "green VRF multicast vt";
family inet;
}
unit 2 {
description "red VRF unicast and multicast vt";
family inet;
}
unit 3 {
description "blue VRF multicast vt";
family inet;
}
}
lo0 {
unit 0 {
description "PE1 Loopback";
family inet {
address 192.168.1.1/32 {
primary;
}
address 127.0.0.1/32;
}
}
unit 1 {
description "green VRF loopback";
family inet {
address 10.10.1.1/32;
}
}
unit 2 {
description "red VRF loopback";
family inet {
address 10.2.1.1/32;
}
}
}
}
routing-options {
autonomous-system 65000;
router-id 192.168.1.1;
forwarding-table {
export load-balance;
}
}
protocols {
rsvp {
interface ge-0/3/0.0;
interface fe-0/1/1.0;
interface lo0.0;
interface fxp0.0 {
disable;
}
}
mpls {
interface ge-0/3/0.0;
interface fe-0/1/1.0;
interface fxp0.0 {
disable;
}
}
bgp {
group group-mvpn {
type internal;
local-address 192.168.1.1;
family inet-vpn {
unicast;
}
family inet-mvpn {
signaling;
}
neighbor 192.168.2.1;
neighbor 192.168.7.1;
}
}
ospf {
traffic-engineering;
area 0.0.0.0 {
interface ge-0/3/0.0 {
metric 100;
}
interface fe-0/1/1.0 {
metric 100;
}
interface lo0.0 {
passive;
}
interface fxp0.0 {
disable;
}
}
}
ldp {
deaggregate;
interface ge-0/3/0.0;
interface fe-0/1/1.0;
interface fxp0.0 {
disable;
}
interface lo0.0;
}
}
policy-options {
policy-statement BGP-export {
term t1 {
from protocol direct;
then accept;
}
term t2 {
from protocol static;
then accept;
}
}
policy-statement green-red-blue-import {
term t1 {
from community [ green-com red-com blue-com ];
then accept;
}
term t2 {
then reject;
}
}
policy-statement load-balance {
then {
load-balance per-packet;
}
}
community green-com members target:65000:1;
community red-com members target:65000:2;
community blue-com members target:65000:3;
}
routing-instances {
green {
instance-type vrf;
interface so-0/0/3.0;
interface vt-1/2/0.1 {
multicast;
}
interface lo0.1;
route-distinguisher 192.168.1.1:1;
provider-tunnel {
rsvp-te {
label-switched-path-template {
default-template;
}
}
}
vrf-import green-red-blue-import;
vrf-target export target:65000:1;
vrf-table-label;
routing-options {
auto-export;
}
protocols {
bgp {
group PE-CE {
export BGP-export;
neighbor 10.0.16.1 {
peer-as 65001;
}
}
}
pim {
rp {
static {
address 10.10.22.2;
}
}
interface so-0/0/3.0 {
mode sparse;
}
interface lo0.1 {
mode sparse;
}
}
mvpn;
}
}
red {
instance-type vrf;
interface fe-0/1/0.0;
interface vt-1/2/0.2;
interface lo0.2;
route-distinguisher 192.168.1.1:2;
provider-tunnel {
rsvp-te {
label-switched-path-template {
default-template;
}
}
}
vrf-import green-red-blue-import;
vrf-target export target:65000:2;
routing-options {
auto-export;
}
protocols {
pim {
rp {
local {
address 10.2.1.1;
}
}
interface fe-0/1/0.0 {
mode sparse;
}
interface lo0.2 {
mode sparse;
}
}
mvpn;
}
}
}
Router PE2
interfaces {
so-0/0/1 {
unit 0 {
description "to CE2 so-0/0/1:0.0";
family inet {
address 10.0.24.1/30;
}
}
}
fe-0/1/3 {
unit 0 {
description "to PE3 fe-0/1/3.0";
family inet {
address 10.0.27.13/30;
}
family mpls;
}
}
vt-1/2/0 {
unit 1 {
description "green VRF unicast and multicast vt";
family inet;
}
unit 3 {
description "blue VRF unicast and multicast vt";
family inet;
}
}
ge-1/3/0 {
unit 0 {
description "to PE1 ge-0/3/0.0";
family inet {
address 10.0.12.10/30;
}
family mpls;
}
}
lo0 {
unit 0 {
description "PE2 Loopback";
family inet {
address 192.168.2.1/32 {
primary;
}
address 127.0.0.1/32;
}
}
unit 1 {
description "green VRF loopback";
family inet {
address 10.10.22.2/32;
}
}
}
}
routing-options {
router-id 192.168.2.1;
autonomous-system 65000;
forwarding-table {
export load-balance;
}
}
protocols {
rsvp {
interface fe-0/1/3.0;
interface ge-1/3/0.0;
interface lo0.0;
interface fxp0.0 {
disable;
}
}
mpls {
interface fe-0/1/3.0;
interface ge-1/3/0.0;
interface fxp0.0 {
disable;
}
}
bgp {
group group-mvpn {
type internal;
local-address 192.168.2.1;
family inet-vpn {
unicast;
}
family inet-mvpn {
signaling;
}
neighbor 192.168.1.1;
neighbor 192.168.7.1;
}
}
ospf {
traffic-engineering;
area 0.0.0.0 {
interface fe-0/1/3.0 {
metric 100;
}
interface ge-1/3/0.0 {
metric 100;
}
interface lo0.0 {
passive;
}
interface fxp0.0 {
disable;
}
}
}
ldp {
deaggregate;
interface fe-0/1/3.0;
interface ge-1/3/0.0;
interface fxp0.0 {
disable;
}
interface lo0.0;
}
}
policy-options {
policy-statement BGP-export {
term t1 {
from protocol direct;
then accept;
}
term t2 {
from protocol static;
then accept;
}
}
policy-statement green-red-blue-import {
term t1 {
from community [ green-com red-com blue-com ];
then accept;
}
term t2 {
then reject;
}
}
policy-statement load-balance {
then {
load-balance per-packet;
}
}
community green-com members target:65000:1;
community red-com members target:65000:2;
community blue-com members target:65000:3;
}
routing-instances {
green {
instance-type vrf;
interface so-0/0/1.0;
interface vt-1/2/0.1;
interface lo0.1;
route-distinguisher 192.168.2.1:1;
provider-tunnel {
rsvp-te {
label-switched-path-template {
default-template;
}
}
}
vrf-import green-red-blue-import;
vrf-target export target:65000:1;
routing-options {
auto-export;
}
protocols {
bgp {
group PE-CE {
export BGP-export;
neighbor 10.0.24.2 {
peer-as 65009;
}
}
}
pim {
rp {
local {
address 10.10.22.2;
}
}
interface so-0/0/1.0 {
mode sparse;
}
interface lo0.1 {
mode sparse;
}
}
mvpn;
}
}
}
Router CE2
interfaces {
fe-0/1/1 {
unit 0 {
description "to H4";
family inet {
address 10.10.11.2/24;
}
}
}
so-0/0/1 {
unit 0 {
description "to PE2 so-0/0/1";
family inet {
address 10.0.24.2/30;
}
}
}
lo0 {
unit 0 {
description "CE2 Loopback";
family inet {
address 192.168.4.1/32 {
primary;
}
address 127.0.0.1/32;
}
}
}
}
routing-options {
router-id 192.168.4.1;
autonomous-system 65009;
forwarding-table {
export load-balance;
}
}
protocols {
bgp {
group PE-CE {
export BGP-export;
neighbor 10.0.24.1 {
peer-as 65000;
}
}
}
pim {
rp {
static {
address 10.10.22.2;
}
}
interface so-0/0/1.0 {
mode sparse;
}
interface fe-0/1/1.0 {
mode sparse;
}
}
}
policy-options {
policy-statement BGP-export {
term t1 {
from protocol direct;
then accept;
}
term t2 {
from protocol static;
then accept;
}
}
policy-statement load-balance {
then {
load-balance per-packet;
}
}
}
Router PE3
interfaces {
so-0/0/1 {
unit 0 {
description "to CE3 so-0/0/1.0";
family inet {
address 10.0.79.1/30;
}
}
}
fe-0/1/1 {
unit 0 {
description "to PE1 fe-0/1/1.0";
family inet {
address 10.0.17.14/30;
}
family mpls;
}
}
fe-0/1/3 {
unit 0 {
description "to PE2 fe-0/1/3.0";
family inet {
address 10.0.27.14/30;
}
family mpls;
}
}
vt-1/2/0 {
unit 3 {
description "blue VRF unicast and multicast vt";
family inet;
}
}
lo0 {
unit 0 {
description "PE3 Loopback";
family inet {
address 192.168.7.1/32 {
primary;
}
address 127.0.0.1/32;
}
}
unit 1 {
description "blue VRF loopback";
family inet {
address 10.3.33.3/32;
}
}
}
}
routing-options {
router-id 192.168.7.1;
autonomous-system 65000;
forwarding-table {
export load-balance;
}
}
protocols {
rsvp {
interface fe-0/1/3.0;
interface fe-0/1/1.0;
interface lo0.0;
interface fxp0.0 {
disable;
}
}
mpls {
interface fe-0/1/3.0;
interface fe-0/1/1.0;
interface fxp0.0 {
disable;
}
}
bgp {
group group-mvpn {
type internal;
local-address 192.168.7.1;
family inet-vpn {
unicast;
}
family inet-mvpn {
signaling;
}
neighbor 192.168.1.1;
neighbor 192.168.2.1;
}
}
ospf {
traffic-engineering;
area 0.0.0.0 {
interface fe-0/1/3.0 {
metric 100;
}
interface fe-0/1/1.0 {
metric 100;
}
interface lo0.0 {
passive;
}
interface fxp0.0 {
disable;
}
}
}
ldp {
deaggregate;
interface fe-0/1/3.0;
interface fe-0/1/1.0;
interface fxp0.0 {
disable;
}
interface lo0.0;
}
}
policy-options {
policy-statement BGP-export {
term t1 {
from protocol direct;
then accept;
}
term t2 {
from protocol static;
then accept;
}
}
policy-statement green-red-blue-import {
term t1 {
from community [ green-com red-com blue-com ];
then accept;
}
term t2 {
then reject;
}
}
policy-statement load-balance {
then {
load-balance per-packet;
}
}
community green-com members target:65000:1;
community red-com members target:65000:2;
community blue-com members target:65000:3;
}
routing-instances {
blue {
instance-type vrf;
interface vt-1/2/0.3;
interface so-0/0/1.0;
interface lo0.1;
route-distinguisher 192.168.7.1:3;
provider-tunnel {
rsvp-te {
label-switched-path-template {
default-template;
}
}
}
vrf-import green-red-blue-import;
vrf-target target:65000:3;
routing-options {
auto-export;
}
protocols {
bgp {
group PE-CE {
export BGP-export;
neighbor 10.0.79.2 {
peer-as 65003;
}
}
}
pim {
rp {
local {
address 10.3.33.3;
}
}
interface so-0/0/1.0 {
mode sparse;
}
interface lo0.1 {
mode sparse;
}
}
mvpn;
}
}
}
Router CE3
interfaces {
so-0/0/1 {
unit 0 {
description "to PE3";
family inet {
address 10.0.79.2/30;
}
}
}
fe-0/1/0 {
unit 0 {
description "to H3";
family inet {
address 10.3.11.3/24;
}
}
}
lo0 {
unit 0 {
description "CE3 loopback";
family inet {
address 192.168.9.1/32 {
primary;
}
address 127.0.0.1/32;
}
}
}
}
routing-options {
router-id 192.168.9.1;
autonomous-system 65003;
forwarding-table {
export load-balance;
}
}
protocols {
bgp {
group PE-CE {
export BGP-export;
neighbor 10.0.79.1 {
peer-as 65000;
}
}
}
pim {
rp {
static {
address 10.3.33.3;
}
}
interface so-0/0/1.0 {
mode sparse;
}
interface fe-0/1/0.0 {
mode sparse;
}
}
}
policy-options {
policy-statement BGP-export {
term t1 {
from protocol direct;
then accept;
}
term t2 {
from protocol static;
then accept;
}
}
policy-statement load-balance {
then {
load-balance per-packet;
}
}
}
In multiprotocol BGP (MBGP) multicast VPNs (MVPNs), VT interfaces are needed for multicast traffic on
routing devices that function as combined provider edge (PE) and provider core (P) routers to optimize
bandwidth usage on core links. VT interfaces prevent traffic replication when a P router also acts as a PE
router (an exit point for multicast traffic).
Starting in Junos OS Release 12.3, you can configure up to eight VT interfaces in a routing instance, thus
providing Tunnel PIC redundancy inside the same multicast VPN routing instance. When the active VT
interface fails, the secondary one takes over, and you can continue managing multicast traffic with no
duplication.
Redundant VT interfaces are supported with RSVP point-to-multipoint provider tunnels as well as
multicast LDP provider tunnels. This feature also works for extranets.
You can configure one of the VT interfaces to be the primary interface. If a VT interface is configured as
the primary, it becomes the next hop that is used for traffic coming in from the core on the label-
switched path (LSP) into the routing instance. When a VT interface is configured to be primary and the
VT interface is used for both unicast and multicast traffic, only the multicast traffic is affected.
If no VT interface is configured to be the primary or if the primary VT interface is unusable, one of the
usable configured VT interfaces is chosen to be the next hop that is used for traffic coming in from the
core on the LSP into the routing instance. If the VT interface in use goes down for any reason, another
usable configured VT interface in the routing instance is chosen. When the VT interface in use changes,
all multicast routes in the instance also switch their reverse-path forwarding (RPF) interface to the new
VT interface to allow the traffic to be received.
To realize the full benefit of redundancy, we recommend that when you configure multiple VT interfaces,
at least one of the VT interfaces be on a different Tunnel PIC from the other VT interfaces. However,
Junos OS does not enforce this.
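For example, a routing instance with redundant VT interfaces might look like the following sketch. The instance and interface names are placeholders, and the exact placement of the primary statement should be verified against your Junos OS release:

```
routing-instances {
    vpn-1 {
        instance-type vrf;
        interface vt-1/1/0.0 {
            primary;        # preferred next hop for traffic arriving from the core
        }
        interface vt-1/2/1.0;   # backup VT interface, ideally on a different Tunnel PIC
    }
}
```

If vt-1/1/0.0 becomes unusable, vt-1/2/1.0 is chosen as the next hop and multicast routes switch their RPF interface to it.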
Release Description
12.3 Starting in Junos OS Release 12.3, you can configure up to eight VT interfaces in a routing instance, thus
providing Tunnel PIC redundancy inside the same multicast VPN routing instance.
IN THIS SECTION
Requirements | 947
Overview | 947
Configuration | 948
Verification | 959
This example shows how to configure redundant virtual tunnel (VT) interfaces in multiprotocol BGP
(MBGP) multicast VPNs (MVPNs). To configure this feature, include multiple VT interfaces in the routing instance
and, optionally, apply the primary statement to one of the VT interfaces.
Requirements
The routing device that has redundant VT interfaces configured must be running Junos OS Release 12.3
or later.
Overview
In this example, Device PE2 has redundant VT interfaces configured in a multicast LDP routing instance,
and one of the VT interfaces is assigned to be the primary interface.
Figure 114 on page 948 shows the topology used in this example.
The following example shows the configuration for the customer edge (CE), provider (P), and provider
edge (PE) devices in Figure 114 on page 948. The section "Step-by-Step Procedure" on page 953
describes the steps on Device PE2.
Configuration
IN THIS SECTION
Procedure | 948
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, and then copy and paste
the commands into the CLI at the [edit] hierarchy level.
Device CE1
Device CE2
Device CE3
Device P
Device PE1
Device PE2
Device PE3
set routing-instances vpn-1 protocols ospf area 0.0.0.0 interface lo0.1 passive
set routing-instances vpn-1 protocols ospf area 0.0.0.0 interface ge-1/2/1.0
set routing-instances vpn-1 protocols pim rp static address 198.51.100.0
set routing-instances vpn-1 protocols pim interface ge-1/2/1.0 mode sparse
set routing-instances vpn-1 protocols mvpn
set routing-options router-id 192.0.2.5
set routing-options autonomous-system 1001
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the CLI User
Guide.
[edit interfaces]
user@PE2# set ge-1/2/0 unit 0 family inet address 10.1.1.10/30
user@PE2# set ge-1/2/0 unit 0 family mpls
user@PE2# set ge-1/2/2 unit 0 family inet address 10.1.1.13/30
user@PE2# set ge-1/2/2 unit 0 family mpls
user@PE2# set ge-1/2/1 unit 0 family inet address 10.1.1.17/30
user@PE2# set ge-1/2/1 unit 0 family mpls
user@PE2# set lo0 unit 0 family inet address 192.0.2.4/24
user@PE2# set lo0 unit 1 family inet address 203.0.113.4/24
[edit interfaces]
user@PE2# set vt-1/1/0 unit 0 family inet
user@PE2# set vt-1/2/1 unit 0 family inet
4. Configure BGP.
6. Configure LDP.
[edit routing-options]
user@PE2# set router-id 192.0.2.4
user@PE2# set autonomous-system 1001
Results
From configuration mode, confirm your configuration by entering the show interfaces, show protocols,
show policy-options, show routing-instances, and show routing-options commands. If the output does
not display the intended configuration, repeat the configuration instructions in this example to correct it.
family inet {
address 10.1.1.10/30;
}
family mpls;
}
}
ge-1/2/2 {
unit 0 {
family inet {
address 10.1.1.13/30;
}
family mpls;
}
}
ge-1/2/1 {
unit 0 {
family inet {
address 10.1.1.17/30;
}
family mpls;
}
}
vt-1/1/0 {
unit 0 {
family inet;
}
}
vt-1/2/1 {
unit 0 {
family inet;
}
}
lo0 {
unit 0 {
family inet {
address 192.0.2.4/24;
}
}
unit 1 {
family inet {
address 203.0.113.4/24;
}
}
}
then accept;
}
}
}
If you are done configuring the device, enter commit from configuration mode.
Verification
IN THIS SECTION
NOTE: The show multicast route extensive instance instance-name command also displays
the VT interface in the multicast forwarding table when multicast traffic is transmitted across the
VPN.
Purpose
Action
1. From operational mode, enter the show route table mpls command.
2. From configuration mode, change the primary VT interface by removing the primary statement from
the vt-1/1/0.0 interface and adding it to the vt-1/2/1.0 interface.
3. From operational mode, enter the show route table mpls command.
Meaning
With the original configuration, the output shows the vt-1/1/0.0 interface. If you change the primary
interface to vt-1/2/1.0, the output shows the vt-1/2/1.0 interface.
IN THIS SECTION
In a BGP multicast VPN (MVPN) (also called a multiprotocol BGP next-generation multicast VPN),
sender-based reverse-path forwarding (RPF) helps to prevent multiple provider edge (PE) routers from
sending traffic into the core, thus preventing duplicate traffic from being sent to a customer. In the following
diagram, sender-based RPF configured on egress Device PE3 and Device PE4 prevents duplicate traffic
from being sent to the customers.
Sender-based RPF is supported on MX Series platforms with MPC line cards. As a prerequisite, the
router must be set to network-services enhanced-ip mode.
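This prerequisite is a single chassis-level statement, shown here as a sketch (on most MX Series routers, changing the network-services mode typically requires a reboot):

```
set chassis network-services enhanced-ip
```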
Sender-based RPF and hot-root standby are supported only for MPLS BGP MVPNs with RSVP point-to-multipoint provider tunnels. Both SPT-only and RPT-SPT MVPN modes are supported.
Sender-based RPF does not work when point-to-multipoint provider tunnels are used with label-
switched interfaces (LSI). Junos OS only allocates a single LSI label for each VRF, and uses this label for
all point-to-multipoint tunnels. Therefore, the label that the egress receives does not indicate the
sending PE router. LSI labels currently cannot scale to create a unique label for each point-to-multipoint
tunnel. As such, virtual tunnel interfaces (vt) must be used for sender-based RPF functionality with
point-to-multipoint provider tunnels.
Optionally, LSI interfaces can continue to be used for unicast purposes, and virtual tunnel interfaces can
be configured to be used for multicast only.
In general, it is important to avoid (or recover from) having multiple PE routers send duplicate traffic into the core, because this can result in duplicate traffic being sent to the customer. Sender-based RPF addresses this, but its use case is limited to BGP MVPNs, for the following reasons:
• A traditional RPF check for native PIM is based on the incoming interface. This RPF check prevents
loops but does not prevent multiple forwarders on a LAN. The traditional RPF has been used because
current multicast protocols either avoid duplicates on a LAN or have data-driven events to resolve
the duplicates once they are detected.
• In PIM sparse mode, duplicates can occur on a LAN in normal protocol operation. The protocol has a
data-driven mechanism (PIM assert messages) to detect duplication when it happens and resolve it.
• In PIM bidirectional mode, a designated forwarder (DF) election is performed on all LANs to avoid
duplication.
• Draft Rosen MVPNs use the PIM assert mechanism because with Draft Rosen MVPNs the core
network is analogous to a LAN.
Sender-based RPF is designed for use with BGP MVPNs because BGP MVPNs use an alternative to data-driven-event solutions and bidirectional-mode DF election; the core network is not a LAN. In an MVPN scenario, it is possible to determine which PE router sent the traffic, and Junos OS uses this information to forward the traffic only if it was sent from the correct PE router. With sender-based RPF, the RPF check is enhanced to verify that data arrived on the correct incoming virtual tunnel (vt-) interface and was sent from the correct upstream PE router.
More specifically, the data must arrive with the correct MPLS label in the outer header used to
encapsulate data through the core. The label identifies the tunnel and, if the tunnel is point-to-
multipoint, the upstream PE router.
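In configuration terms, this enhanced check is enabled per VRF under the MVPN protocol. A minimal sketch, mirroring the stanza used in the example later in this chapter (instance name is a placeholder):

```
routing-instances {
    vpn-1 {
        protocols {
            mvpn {
                mvpn-mode {
                    rpt-spt;
                }
                sender-based-rpf;   # accept traffic only from the selected upstream PE
            }
        }
    }
}
```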
Sender-based RPF is not a replacement for single-forwarder election, but is a complementary feature.
Configuring a higher primary loopback address (or router ID) on one PE device (PE1) than on another
(PE2) ensures that PE1 is the single-forwarder election winner. The unicast-umh-election statement
causes the unicast route preference to determine the single-forwarder election. If single-forwarder
election is not used or if it is not sufficient to prevent duplicates in the core, sender-based RPF is
recommended.
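A hedged sketch of the election-related statement described above (instance name is a placeholder; see the unicast-umh-election statement reference for details):

```
routing-instances {
    vpn-1 {
        protocols {
            mvpn {
                unicast-umh-election;   # let unicast route preference drive single-forwarder election
            }
        }
    }
}
```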
For RSVP point-to-multipoint provider tunnels, the transport label identifies the sending PE router
because it is a requirement that penultimate hop popping (PHP) is disabled when using point-to-
multipoint provider tunnels with MVPNs. PHP is disabled by default when you configure the MVPN
protocol in a routing instance. The label identifies the tunnel, and (because the RSVP-TE tunnel is point-
to-multipoint) the sending PE router.
The sender-based RPF mechanism is described in section 9.1.1 of RFC 6513, Multicast in MPLS/BGP IP VPNs.
Sender-based RPF prevents duplicates from being sent to the customer even if there is duplication in
the provider network. Duplication could exist in the provider because of a hot-root standby
configuration or if the single-forwarder election is not sufficient to prevent duplicates. Single-forwarder
election is used to prevent duplicates to the core network, while sender-based RPF prevents duplicates
to the customer even if there are duplicates in the core. There are cases in which single-forwarder
election cannot prevent duplicate traffic from arriving at the egress PE router. One example of this
(outlined in section 9.3.1 of RFC 6513) is when PIM sparse mode is configured in the customer network
and the MVPN is in RPT-SPT mode with an I-PMSI.
After Junos OS chooses the ingress PE router, the sender-based RPF decision determines whether the
correct ingress PE router is selected. As described in RFC 6513, section 9.1.1, an egress PE router, PE1,
chooses a specific upstream PE router for a given (C-S, C-G). When PE1 receives a (C-S, C-G) packet from a
PMSI, it might be able to identify the PE router that transmitted the packet onto the PMSI. If that
transmitter is other than the PE router selected by PE1 as the upstream PE router, PE1 can drop the
packet. This means that the PE router detects a duplicate, but the duplicate is not forwarded.
When an egress PE router generates a type 7 C-multicast route, it uses the VRF route import extended
community carried in the VPN-IP route toward the source to construct the route target carried by the C-
multicast route. This route target results in the C-multicast route being sent to the upstream PE router,
and being imported into the correct VRF on the upstream PE router. The egress PE router programs the
forwarding entry to only accept traffic from this PE router, and only on a particular tunnel rooted at that
PE router.
When an egress PE router generates a type 6 C-multicast route, it uses the VRF route import extended
community carried in the VPN-IP route toward the rendezvous point (RP) to construct the route target
carried by the C-multicast route.
This route target results in the C-multicast route being sent to the upstream PE router and being
imported into the correct VRF on the upstream PE router. The egress PE router programs the forwarding
entry to accept traffic from this PE router only, and only on a particular tunnel rooted at that PE router.
However, if some other PE routers have switched to SPT mode for (C-S, C-G) and have sent source
active (SA) autodiscovery (A-D) routes (type 5 routes), and if the egress PE router only has (C-*, C-G)
state, the upstream PE router for (C-S, C-G) is not the PE router toward the RP to which it sent a type 6
route, but the PE router that originates an SA A-D route for (C-S, C-G). The traffic for (C-S, C-G) might be
carried over an I-PMSI or an S-PMSI, depending on how it was advertised by the upstream PE router.
Additionally, when an egress PE router has only the (C-*, C-G) state and does not have the (C-S, C-G)
state, the egress PE router might be receiving (C-S, C-G) type 5 SA routes from multiple PE routers, and
chooses the best one, as follows: For every received (C-S, C-G) SA route, the egress PE router finds in its
upstream multicast hop (UMH) route-candidate set for C-S a route with the same route distinguisher
(RD). Among all such routes, the PE router selects the UMH route (based on the UMH selection procedure).
The best (C-S, C-G) SA route is the one whose RD is the same as that of the selected UMH route.
When an egress PE router has only the (C-*, C-G) state and does not have the (C-S, C-G) state, and if
later the egress PE router creates the (C-S, C-G) state (for example, as a result of receiving a PIM join (C-
S, C-G) message from one of its customer edge [CE] neighbors), the upstream PE router for that (C-S, C-
G) is not necessarily going to be the same PE router that originated the already-selected best SA A-D
route for (C-S, C-G). It is possible to have a situation in which the PE router that originated the best SA
A-D route for (C-S, C-G) carries the (C-S, C-G) over an I-PMSI, while some other PE router, that is also
connected to the site that contains C-S, carries (C-S,C-G) over an S-PMSI. In this case, the downstream
PE router would not join the S-PMSI, but continue to receive (C-S, C-G) over the I-PMSI, because the
UMH route for C-S is the one that has been advertised by the PE router that carries (C-S, C-G) over the
I-PMSI. This is expected behavior.
The egress PE router determines the sender of a (C-S, C-G) type 5 SA A-D route by finding in its UMH
route-candidate set for C-S a route whose RD is the same as in the SA A-D route. The VRF route import
extended community of the found route contains the IP address of the sender of the SA A-D route.
RELATED DOCUMENTATION
Example: Configuring Sender-Based RPF in a BGP MVPN with RSVP-TE Point-to-Multipoint Provider
Tunnels | 966
unicast-umh-election | 2007
IN THIS SECTION
Requirements | 966
Overview | 967
Verification | 983
This example shows how to configure sender-based reverse-path forwarding (RPF) in a BGP multicast
VPN (MVPN). Sender-based RPF helps to prevent multiple provider edge (PE) routers from sending
traffic into the core, thus preventing duplicate traffic from being sent to a customer.
Requirements
No special configuration beyond device initialization is required before configuring this example.
Sender-based RPF is supported on MX Series platforms with MPC line cards. As a prerequisite, the
router must be set to network-services enhanced-ip mode.
Sender-based RPF is supported only for MPLS BGP MVPNs with RSVP-TE point-to-multipoint provider
tunnels. Both SPT-only and RPT-SPT MVPN modes are supported.
Sender-based RPF does not work when point-to-multipoint provider tunnels are used with label-
switched interfaces (LSI). Junos OS only allocates a single LSI label for each VRF, and uses this label for
all point-to-multipoint tunnels. Therefore, the label that the egress receives does not indicate the
sending PE router. LSI labels currently cannot scale to create a unique label for each point-to-multipoint
tunnel. As such, virtual tunnel interfaces (vt) must be used for sender-based RPF functionality with
point-to-multipoint provider tunnels.
This example requires Junos OS Release 14.2 or later on the PE router that has sender-based RPF
enabled.
Overview
IN THIS SECTION
Topology | 968
This example shows a single autonomous system (intra-AS scenario) in which one source sends multicast
traffic (group 224.1.1.1) into the VPN (VRF instance vpn-1). Two receivers subscribe to the group. They
are connected to Device CE2 and Device CE3, respectively. RSVP point-to-multipoint LSPs with
inclusive provider tunnels are set up among the PE routers. PIM (C-PIM) is configured on the PE-CE
links.
For MPLS, the signaling control protocol used here is LDP. Optionally, you can use RSVP to signal both
point-to-point and point-to-multipoint tunnels.
OSPF is used for interior gateway protocol (IGP) connectivity, though IS-IS is also a supported option. If
you use OSPF, you must enable OSPF traffic engineering.
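The traffic-engineering extension is a single statement under OSPF, as the device configurations in this example show:

```
set protocols ospf traffic-engineering
```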
For testing purposes, routers are used to simulate the source and the receivers. Device PE2 and Device
PE3 are configured to statically join the 224.1.1.1 group by using the set protocols igmp interface
interface-name static group 224.1.1.1 command. This static IGMP configuration is useful when, as in
this example, no real multicast receiver host is available. On the CE devices attached to the receivers,
the example uses set protocols sap listen 224.1.1.1 to make them listen to the multicast group address.
A ping command is used to send multicast traffic into the BGP MVPN.
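Collected into one sketch, the test-harness commands look like this; the interface name and the ttl value are placeholders for your topology:

```
set protocols igmp interface ge-1/2/15.0 static group 224.1.1.1    # on the receiver-facing PE
set protocols sap listen 224.1.1.1                                 # on the CE attached to a receiver
ping 224.1.1.1 ttl 10                                              # operational mode, from the source side
```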
Topology
"Set Commands for All Devices in the Topology" on page 968 shows the configuration for all of the
devices in Figure 116 on page 968.
The section "Configuring Device PE2" on page 974 describes the steps on Device PE2.
IN THIS SECTION
Procedure | 974
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, and then copy and paste
the commands into the CLI at the [edit] hierarchy level.
Device CE1
Device CE2
Device CE3
Device P
Device PE1
Device PE2
Device PE3
set routing-instances vpn-1 provider-tunnel selective group 225.0.1.0/24 source 0.0.0.0/0 rsvp-te label-switched-path-template p2mp-template
set routing-instances vpn-1 provider-tunnel selective group 225.0.1.0/24 source 0.0.0.0/0 threshold-rate 0
set routing-instances vpn-1 vrf-target target:100:10
set routing-instances vpn-1 protocols ospf export parent_vpn_routes
set routing-instances vpn-1 protocols ospf area 0.0.0.0 interface lo0.105 passive
set routing-instances vpn-1 protocols ospf area 0.0.0.0 interface ge-1/2/15.0
set routing-instances vpn-1 protocols pim rp static address 100.1.1.2
set routing-instances vpn-1 protocols pim interface ge-1/2/15.0 mode sparse
set routing-instances vpn-1 protocols mvpn mvpn-mode rpt-spt
set routing-options router-id 1.1.1.5
set routing-options route-distinguisher-id 1.1.1.5
set routing-options autonomous-system 1001
IN THIS SECTION
Procedure | 974
Procedure
Step-by-Step Procedure
The following example requires that you navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
[edit chassis]
user@PE2# set network-services enhanced-ip
[edit interfaces]
user@PE2# set ge-1/2/12 unit 0 family inet address 10.1.1.10/30
user@PE2# set ge-1/2/12 unit 0 family mpls
user@PE2# set ge-1/2/14 unit 0 family inet address 10.1.1.17/30
user@PE2# set ge-1/2/14 unit 0 family mpls
user@PE2# set vt-1/2/10 unit 4 family inet
user@PE2# set lo0 unit 0 family inet address 1.1.1.4/32
user@PE2# set lo0 unit 104 family inet address 100.1.1.4/32
4. (Optional) Force the PE device to join the multicast group with a static configuration.
Normally, this would happen dynamically in a setup with real sources and receivers.
6. Configure MPLS.
The policy is used for exporting the BGP into the PE-CE IGP session.
In the context of unicast IPv4 routes, choosing vrf-target has two implications. First, every locally
learned (in this case, direct and static) route at the VRF is exported to BGP with the specified route
target (RT). Also, every received inet-vpn BGP route with that RT value is imported into the VRF
vpn-1. This has the advantage of a simpler configuration, and the drawback of less flexibility in
selecting and modifying the exported and imported routes. It also implies that the VPN is full mesh
and all the PE routers get routes from each other, so complex configurations like hub-and-spoke or
extranet are not feasible. If any of these features are required, it is necessary to use vrf-import and
vrf-export instead.
[edit]
user@PE2# set routing-instances vpn-1 vrf-target target:100:10
18. Configure the router ID, the route distinguisher, and the AS number.
[edit routing-options]
user@PE2# set router-id 1.1.1.4
user@PE2# set route-distinguisher-id 1.1.1.4
user@PE2# set autonomous-system 1001
Results
From configuration mode, confirm your configuration by entering the show chassis, show interfaces,
show protocols, show policy-options, show routing-instances, and show routing-options commands. If
the output does not display the intended configuration, repeat the instructions in this example to
correct the configuration.
}
}
area 0.0.0.0 {
interface lo0.0 {
passive;
}
interface ge-1/2/13.0;
}
}
ldp {
interface ge-1/2/13.0;
p2mp;
}
}
}
}
vrf-target target:100:10;
protocols {
ospf {
export parent_vpn_routes;
area 0.0.0.0 {
interface lo0.105 {
passive;
}
interface ge-1/2/15.0;
}
}
pim {
rp {
static {
address 100.1.1.2;
}
}
interface ge-1/2/15.0 {
mode sparse;
}
}
mvpn {
mvpn-mode {
rpt-spt;
}
sender-based-rpf;
}
}
}
If you are done configuring the device, enter commit from configuration mode.
Verification
IN THIS SECTION
Purpose
Action
MVPN instance:
Legend for provider tunnel
S- Selective provider tunnel
Instance : vpn-1
MVPN Mode : RPT-SPT
Sender-Based RPF: Enabled.
Hot Root Standby: Disabled. Reason: Not enabled by configuration.
Provider tunnel: I-P-tnl:RSVP-TE P2MP:1.1.1.4, 32647,1.1.1.4
MVPN instance:
Legend for provider tunnel
S- Selective provider tunnel
Instance : vpn-1
MVPN Mode : RPT-SPT
Sender-Based RPF: Enabled.
Hot Root Standby: Disabled. Reason: Not enabled by configuration.
Provider tunnel: I-P-tnl:RSVP-TE P2MP:1.1.1.4, 32647,1.1.1.4
Purpose
Make sure the expected BGP routes are being added to the routing tables on the PE devices.
Action
1.1.1.4:32767:1.1.1.6/32
*[BGP/170] 1d 04:23:24, MED 1, localpref 100, from 1.1.1.4
AS path: I, validation-state: unverified
> via ge-1/2/11.0, Push 299776, Push 299792(top)
1.1.1.4:32767:10.1.1.16/30
*[BGP/170] 1d 04:23:24, localpref 100, from 1.1.1.4
AS path: I, validation-state: unverified
> via ge-1/2/11.0, Push 299776, Push 299792(top)
1.1.1.4:32767:100.1.1.4/32
*[BGP/170] 1d 04:23:24, localpref 100, from 1.1.1.4
AS path: I, validation-state: unverified
> via ge-1/2/11.0, Push 299776, Push 299792(top)
1.1.1.5:32767:1.1.1.7/32
*[BGP/170] 1d 04:23:23, MED 1, localpref 100, from 1.1.1.5
AS path: I, validation-state: unverified
> via ge-1/2/11.0, Push 299776, Push 299776(top)
1.1.1.5:32767:10.1.1.20/30
*[BGP/170] 1d 04:23:23, localpref 100, from 1.1.1.5
AS path: I, validation-state: unverified
> via ge-1/2/11.0, Push 299776, Push 299776(top)
1.1.1.5:32767:100.1.1.5/32
*[BGP/170] 1d 04:23:23, localpref 100, from 1.1.1.5
AS path: I, validation-state: unverified
> via ge-1/2/11.0, Push 299776, Push 299776(top)
1:1.1.1.4:32767:1.1.1.4/240
*[BGP/170] 1d 04:23:24, localpref 100, from 1.1.1.4
AS path: I, validation-state: unverified
> via ge-1/2/11.0, Push 299792
1:1.1.1.5:32767:1.1.1.5/240
*[BGP/170] 1d 04:23:23, localpref 100, from 1.1.1.5
AS path: I, validation-state: unverified
> via ge-1/2/11.0, Push 299776
6:1.1.1.2:32767:1001:32:100.1.1.2:32:224.1.1.1/240
*[BGP/170] 1d 04:17:25, MED 0, localpref 100, from 1.1.1.5
AS path: I, validation-state: unverified
> via ge-1/2/11.0, Push 299776
[BGP/170] 1d 04:17:24, MED 0, localpref 100, from 1.1.1.4
AS path: I, validation-state: unverified
> via ge-1/2/11.0, Push 299792
6:1.1.1.2:32767:1001:32:100.1.1.2:32:224.2.127.254/240
*[BGP/170] 1d 04:17:25, MED 0, localpref 100, from 1.1.1.5
AS path: I, validation-state: unverified
> via ge-1/2/11.0, Push 299776
[BGP/170] 1d 04:17:23, MED 0, localpref 100, from 1.1.1.4
AS path: I, validation-state: unverified
> via ge-1/2/11.0, Push 299792
7:1.1.1.2:32767:1001:32:10.1.1.1:32:224.1.1.1/240
*[BGP/170] 20:34:47, localpref 100, from 1.1.1.5
AS path: I, validation-state: unverified
> via ge-1/2/11.0, Push 299776
[BGP/170] 20:34:47, localpref 100, from 1.1.1.4
AS path: I, validation-state: unverified
> via ge-1/2/11.0, Push 299792
1:1.1.1.4:32767:1.1.1.4/240
1.1.1.2:32767:1.1.1.1/32
*[BGP/170] 1d 04:23:24, MED 1, localpref 100, from 1.1.1.2
AS path: I, validation-state: unverified
> via ge-1/2/12.0, Push 299776, Push 299808(top)
1.1.1.2:32767:10.1.1.0/30
*[BGP/170] 1d 04:23:24, localpref 100, from 1.1.1.2
AS path: I, validation-state: unverified
> via ge-1/2/12.0, Push 299776, Push 299808(top)
1.1.1.2:32767:100.1.1.2/32
*[BGP/170] 1d 04:23:24, localpref 100, from 1.1.1.2
AS path: I, validation-state: unverified
> via ge-1/2/12.0, Push 299776, Push 299808(top)
1.1.1.5:32767:1.1.1.7/32
*[BGP/170] 1d 04:23:20, MED 1, localpref 100, from 1.1.1.5
AS path: I, validation-state: unverified
> via ge-1/2/12.0, Push 299776, Push 299776(top)
1.1.1.5:32767:10.1.1.20/30
*[BGP/170] 1d 04:23:20, localpref 100, from 1.1.1.5
AS path: I, validation-state: unverified
1:1.1.1.2:32767:1.1.1.2/240
*[BGP/170] 1d 04:23:24, localpref 100, from 1.1.1.2
AS path: I, validation-state: unverified
> via ge-1/2/12.0, Push 299808
1:1.1.1.5:32767:1.1.1.5/240
*[BGP/170] 1d 04:23:20, localpref 100, from 1.1.1.5
AS path: I, validation-state: unverified
> via ge-1/2/12.0, Push 299776
5:1.1.1.2:32767:32:10.1.1.1:32:224.1.1.1/240
*[BGP/170] 20:34:47, localpref 100, from 1.1.1.2
AS path: I, validation-state: unverified
> via ge-1/2/12.0, Push 299808
1:1.1.1.2:32767:1.1.1.2/240
*[BGP/170] 1d 04:23:24, localpref 100, from 1.1.1.2
AS path: I, validation-state: unverified
> via ge-1/2/12.0, Push 299808
1:1.1.1.5:32767:1.1.1.5/240
*[BGP/170] 1d 04:23:20, localpref 100, from 1.1.1.5
AS path: I, validation-state: unverified
> via ge-1/2/12.0, Push 299776
5:1.1.1.2:32767:32:10.1.1.1:32:224.1.1.1/240
*[BGP/170] 20:34:47, localpref 100, from 1.1.1.2
AS path: I, validation-state: unverified
1.1.1.2:32767:1.1.1.1/32
*[BGP/170] 1d 04:23:23, MED 1, localpref 100, from 1.1.1.2
AS path: I, validation-state: unverified
1:1.1.1.2:32767:1.1.1.2/240
*[BGP/170] 1d 04:23:23, localpref 100, from 1.1.1.2
AS path: I, validation-state: unverified
> via ge-1/2/13.0, Push 299808
1:1.1.1.4:32767:1.1.1.4/240
*[BGP/170] 1d 04:23:20, localpref 100, from 1.1.1.4
AS path: I, validation-state: unverified
> via ge-1/2/13.0, Push 299792
5:1.1.1.2:32767:32:10.1.1.1:32:224.1.1.1/240
*[BGP/170] 20:34:47, localpref 100, from 1.1.1.2
AS path: I, validation-state: unverified
> via ge-1/2/13.0, Push 299808
1:1.1.1.2:32767:1.1.1.2/240
*[BGP/170] 1d 04:23:23, localpref 100, from 1.1.1.2
Purpose
Make sure that the expected join messages are being sent.
Action
Group: 224.1.1.1
Source: *
RP: 100.1.1.2
Flags: sparse,rptree,wildcard
Upstream interface: ge-1/2/14.0
Group: 224.2.127.254
Source: *
RP: 100.1.1.2
Flags: sparse,rptree,wildcard
Upstream interface: ge-1/2/14.0
Group: 224.1.1.1
Source: *
RP: 100.1.1.2
Flags: sparse,rptree,wildcard
Upstream interface: ge-1/2/15.0
Group: 224.2.127.254
Source: *
RP: 100.1.1.2
Flags: sparse,rptree,wildcard
Upstream interface: ge-1/2/15.0
Meaning
Both Device CE2 and Device CE3 send C-Join packets upstream to their neighboring PE routers, which are their unicast next hops toward the C-Source.
Purpose
Make sure that the expected join messages are being sent.
Action
Group: 224.1.1.1
Source: *
RP: 100.1.1.2
Flags: sparse,rptree,wildcard
Upstream interface: Local
Group: 224.1.1.1
Source: 10.1.1.1
Flags: sparse,spt
Upstream interface: ge-1/2/10.0
Group: 224.2.127.254
Source: *
RP: 100.1.1.2
Flags: sparse,rptree,wildcard
Upstream interface: Local
Group: 224.1.1.1
Source: *
RP: 100.1.1.2
Flags: sparse,rptree,wildcard
Upstream protocol: BGP
Upstream interface: Through BGP
Group: 224.1.1.1
Source: 10.1.1.1
Flags: sparse,spt
Upstream protocol: BGP
Upstream interface: Through BGP
Group: 224.2.127.254
Source: *
RP: 100.1.1.2
Flags: sparse,rptree,wildcard
Upstream protocol: BGP
Group: 224.1.1.1
Source: *
RP: 100.1.1.2
Flags: sparse,rptree,wildcard
Upstream protocol: BGP
Upstream interface: Through BGP
Group: 224.1.1.1
Source: 10.1.1.1
Flags: sparse,spt
Upstream protocol: BGP
Upstream interface: Through BGP
Group: 224.2.127.254
Source: *
RP: 100.1.1.2
Flags: sparse,rptree,wildcard
Upstream protocol: BGP
Upstream interface: Through BGP
Meaning
Both Device CE2 and Device CE3 send C-Join packets upstream to their neighboring PE routers, which
are their unicast next hops toward the C-Source.
The C-Join state points to BGP as the upstream interface because there is no PIM neighbor relationship
between the PEs. The downstream PE converts the C-PIM (C-S, C-G) state into a Type 7 source-tree join
BGP route and sends it to the upstream PE router toward the C-Source.
Purpose
Make sure that the C-Multicast flow is integrated into MVPN vpn-1 and sent by Device PE1 into the
provider tunnel.
Action
Group: 224.1.1.1/32
Source: *
Upstream interface: local
Downstream interface list:
ge-1/2/11.0
Group: 224.1.1.1
Source: 10.1.1.1/32
Upstream interface: ge-1/2/10.0
Downstream interface list:
ge-1/2/11.0
Group: 224.2.127.254/32
Source: *
Upstream interface: local
Downstream interface list:
ge-1/2/11.0
Group: 224.1.1.1/32
Source: *
Upstream rpf interface list:
vt-1/2/10.4 (P)
Sender Id: Label 299840
Downstream interface list:
ge-1/2/14.0
Group: 224.1.1.1
Source: 10.1.1.1/32
Upstream rpf interface list:
vt-1/2/10.4 (P)
Sender Id: Label 299840
Group: 224.2.127.254/32
Source: *
Upstream rpf interface list:
vt-1/2/10.4 (P)
Sender Id: Label 299840
Downstream interface list:
ge-1/2/14.0
Group: 224.1.1.1/32
Source: *
Upstream interface: vt-1/2/10.5
Downstream interface list:
ge-1/2/15.0
Group: 224.1.1.1
Source: 10.1.1.1/32
Upstream interface: vt-1/2/10.5
Group: 224.2.127.254/32
Source: *
Upstream interface: vt-1/2/10.5
Downstream interface list:
ge-1/2/15.0
Meaning
The output shows that, unlike the other PE devices, Device PE2 is using sender-based RPF. The output
on Device PE2 includes the upstream RPF sender. The Sender Id field is only shown when sender-based
RPF is enabled.
Purpose
Action
MVPN instance:
Legend for provider tunnel
S- Selective provider tunnel
Instance : vpn-1
MVPN Mode : RPT-SPT
C-mcast IPv4 (S:G) Provider Tunnel St
0.0.0.0/0:224.1.1.1/32 RSVP-TE P2MP:1.1.1.2, 33314,1.1.1.2 RM
10.1.1.1/32:224.1.1.1/32 RSVP-TE P2MP:1.1.1.2, 33314,1.1.1.2 RM
0.0.0.0/0:224.2.127.254/32 RSVP-TE P2MP:1.1.1.2, 33314,1.1.1.2 RM
...
Instance : vpn-1
MVPN Mode : RPT-SPT
C-mcast IPv4 (S:G) Provider Tunnel St
...
MVPN instance:
Legend for provider tunnel
S- Selective provider tunnel
Instance : vpn-1
MVPN Mode : RPT-SPT
C-mcast IPv4 (S:G) Provider Tunnel St
0.0.0.0/0:224.1.1.1/32 RSVP-TE P2MP:1.1.1.2, 33314,1.1.1.2
10.1.1.1/32:224.1.1.1/32 RSVP-TE P2MP:1.1.1.2, 33314,1.1.1.2
0.0.0.0/0:224.2.127.254/32 RSVP-TE P2MP:1.1.1.2, 33314,1.1.1.2
...
Meaning
Purpose
Action
Instance : vpn-1
MVPN Mode : RPT-SPT
Family : INET
C-Multicast route address :0.0.0.0/0:224.1.1.1/32
MVPN Source-PE1:
extended-community: no-advertise target:1.1.1.2:72
Route Distinguisher: 1.1.1.2:32767
Autonomous system number: 1001
Interface: lo0.102 Index: -1610691384
PIM Source-PE1:
extended-community: target:1.1.1.2:72
Route Distinguisher: 1.1.1.2:32767
Autonomous system number: 1001
Interface: lo0.102 Index: -1610691384
C-Multicast route address :10.1.1.1/32:224.1.1.1/32
MVPN Source-PE1:
extended-community: no-advertise target:1.1.1.2:72
Route Distinguisher: 1.1.1.2:32767
Autonomous system number: 1001
Interface: ge-1/2/10.0 Index: -1610691384
PIM Source-PE1:
extended-community: target:1.1.1.2:72
Route Distinguisher: 1.1.1.2:32767
Autonomous system number: 1001
Interface: ge-1/2/10.0 Index: -1610691384
C-Multicast route address :0.0.0.0/0:224.2.127.254/32
MVPN Source-PE1:
extended-community: no-advertise target:1.1.1.2:72
Route Distinguisher: 1.1.1.2:32767
Autonomous system number: 1001
Interface: lo0.102 Index: -1610691384
PIM Source-PE1:
extended-community: target:1.1.1.2:72
Route Distinguisher: 1.1.1.2:32767
Autonomous system number: 1001
Interface: lo0.102 Index: -1610691384
Instance : vpn-1
MVPN Mode : RPT-SPT
Family : INET
C-Multicast route address :0.0.0.0/0:224.1.1.1/32
MVPN Source-PE1:
extended-community: target:1.1.1.2:72
Route Distinguisher: 1.1.1.2:32767
Autonomous system number: 1001
Interface: (Null)
PIM Source-PE1:
extended-community: target:1.1.1.2:72
Route Distinguisher: 1.1.1.2:32767
Autonomous system number: 1001
Interface: (Null)
C-Multicast route address :10.1.1.1/32:224.1.1.1/32
MVPN Source-PE1:
extended-community: target:1.1.1.2:72
Route Distinguisher: 1.1.1.2:32767
Autonomous system number: 1001
Interface: (Null)
PIM Source-PE1:
extended-community: target:1.1.1.2:72
Route Distinguisher: 1.1.1.2:32767
Autonomous system number: 1001
Interface: (Null)
C-Multicast route address :0.0.0.0/0:224.2.127.254/32
MVPN Source-PE1:
extended-community: target:1.1.1.2:72
Route Distinguisher: 1.1.1.2:32767
Autonomous system number: 1001
Interface: (Null)
PIM Source-PE1:
extended-community: target:1.1.1.2:72
Route Distinguisher: 1.1.1.2:32767
Autonomous system number: 1001
Interface: (Null)
...
Meaning
RELATED DOCUMENTATION
IN THIS SECTION
Requirements | 1003
Overview | 1004
Verification | 1019
This example shows how to configure sender-based reverse-path forwarding (RPF) in a BGP multicast
VPN (MVPN). Sender-based RPF helps to prevent multiple provider edge (PE) routers from sending
traffic into the core, thus preventing duplicate traffic being sent to a customer.
Requirements
No special configuration beyond device initialization is required before configuring this example.
Sender-based RPF is supported on MX Series platforms with MPC line cards. As a prerequisite, the
router must be set to network-services enhanced-ip mode.
Sender-based RPF is supported only for MPLS BGP MVPNs with RSVP-TE point-to-multipoint provider
tunnels. Both SPT-only and SPT-RPT MVPN modes are supported.
Sender-based RPF does not work when point-to-multipoint provider tunnels are used with label-
switched interfaces (LSI). Junos OS allocates only a single LSI label for each VRF and uses this label for
all point-to-multipoint tunnels, so the label that the egress receives does not identify the sending PE
router. LSI labels currently cannot scale to provide a unique label for each point-to-multipoint tunnel.
Therefore, virtual tunnel (vt) interfaces must be used for sender-based RPF functionality with
point-to-multipoint provider tunnels.
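Taken together, the prerequisites above amount to the following configuration sketch. The vt interface location and instance name follow this example's topology; treat them as placeholders for your hardware:

```
set chassis network-services enhanced-ip
set interfaces vt-1/2/10 unit 4 family inet
set routing-instances vpn-1 interface vt-1/2/10.4
set routing-instances vpn-1 protocols mvpn sender-based-rpf
```

The enhanced-ip statement requires a reboot to take effect, and the vt interface must be added to the VRF so the egress PE can identify the sending PE by tunnel.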
This example requires Junos OS Release 21.1R1 or later on the PE router that has sender-based RPF
enabled.
Overview
IN THIS SECTION
Topology | 1005
This example shows a single autonomous system (intra-AS scenario) in which one source sends multicast
traffic (group 224.1.1.1) into the VPN (VRF instance vpn-1). Two receivers subscribe to the group. They
are connected to Device CE2 and Device CE3, respectively. MLDP point-to-multipoint LSPs with
inclusive provider tunnels are set up among the PE routers. PIM (C-PIM) is configured on the PE-CE
links.
For MPLS, the signaling control protocol used here is LDP. Optionally, you can use RSVP to signal both
point-to-point and point-to-multipoint tunnels.
OSPF is used for interior gateway protocol (IGP) connectivity, though IS-IS is also a supported option. If
you use OSPF, you must enable OSPF traffic engineering.
For testing purposes, routers are used to simulate the source and the receivers. Device PE2 and Device
PE3 are configured to statically join the 224.1.1.1 group by using the set protocols igmp interface
interface-name static group 224.1.1.1 command. This static IGMP configuration is useful when a real
multicast receiver host is not available, as in this example. To make the CE devices attached to the
receivers listen to the multicast group address, the example uses set protocols sap listen
224.1.1.1. A ping command is used to send multicast traffic into the BGP MVPN.
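For reference, the receiver-simulation statements described above look like this when applied; the interface name here is illustrative, not taken from the example topology:

```
set protocols igmp interface ge-1/2/14.0 static group 224.1.1.1
set protocols sap listen 224.1.1.1
```

The static IGMP join goes on the PE devices, and the sap listen statement goes on the CE devices attached to the receivers.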
Topology
"Set Commands for All Devices in the Topology" on page 1005 shows the configuration for all of the
devices in Figure 117 on page 1005.
The section "Configuring Device PE2" on page 1011 describes the steps on Device PE2.
IN THIS SECTION
Procedure | 1011
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, and then copy and paste
the commands into the CLI at the [edit] hierarchy level.
Device CE1
Device CE2
Device CE3
Device P
Device PE1
Device PE2
Device PE3
Procedure
Step-by-Step Procedure
The following example requires that you navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
[edit chassis]
user@PE2# set network-services enhanced-ip
[edit interfaces]
user@PE2# set ge-1/2/12 unit 0 family inet address 10.1.1.10/30
user@PE2# set ge-1/2/12 unit 0 family mpls
user@PE2# set ge-1/2/14 unit 0 family inet address 10.1.1.17/30
user@PE2# set ge-1/2/14 unit 0 family mpls
user@PE2# set vt-1/2/10 unit 4 family inet
user@PE2# set lo0 unit 0 family inet address 1.1.1.4/32
user@PE2# set lo0 unit 104 family inet address 100.1.1.4/32
4. (Optional) Force the PE device to join the multicast group with a static configuration.
Normally, this would happen dynamically in a setup with real sources and receivers.
6. Configure MPLS.
The policy is used for exporting the BGP into the PE-CE IGP session.
In the context of unicast IPv4 routes, choosing vrf-target has two implications. First, every locally
learned (in this case, direct and static) route in the VRF is exported to BGP with the specified route
target (RT). Second, every received inet-vpn BGP route with that RT value is imported into the VRF
vpn-1. This has the advantage of a simpler configuration and the drawback of less flexibility in
selecting and modifying the exported and imported routes. It also implies that the VPN is a full mesh
in which all the PE routers receive routes from each other, so complex topologies such as hub-and-spoke
or extranet are not feasible. If any of these features are required, it is necessary to use vrf-import
and vrf-export instead.
[edit]
user@PE2# set routing-instances vpn-1 vrf-target target:100:10
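If hub-and-spoke or extranet behavior were required instead, a vrf-import/vrf-export configuration along these lines could replace the vrf-target statement. This is a minimal sketch; the policy and community names are hypothetical:

```
set policy-options community vpn1-comm members target:100:10
set policy-options policy-statement vpn1-import from community vpn1-comm
set policy-options policy-statement vpn1-import then accept
set policy-options policy-statement vpn1-export then community add vpn1-comm
set policy-options policy-statement vpn1-export then accept
set routing-instances vpn-1 vrf-import vpn1-import
set routing-instances vpn-1 vrf-export vpn1-export
```

Explicit import and export policies let you select and modify routes per neighbor site, which the single vrf-target statement cannot do.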
18. Configure the router ID, the router distinguisher, and the AS number.
[edit routing-options]
user@PE2# set router-id 1.1.1.4
user@PE2# set route-distinguisher-id 1.1.1.4
user@PE2# set autonomous-system 1001
Results
From configuration mode, confirm your configuration by entering the show chassis, show interfaces,
show protocols, show policy-options, show routing-instances, and show routing-options commands. If
the output does not display the intended configuration, repeat the instructions in this example to
correct the configuration.
}
ge-1/2/14 {
unit 0 {
family inet {
address 10.1.1.17/30;
}
family mpls;
}
}
vt-1/2/10 {
unit 5 {
family inet;
}
}
lo0 {
unit 0 {
family inet {
address 1.1.1.5/32;
}
}
unit 105 {
family inet {
address 100.1.1.5/32;
}
}
}
template;
p2mp;
}
interface ge-1/2/13.0;
}
bgp {
group ibgp {
type internal;
local-address 1.1.1.5;
family inet {
unicast;
}
family inet-vpn {
any;
}
family inet-mvpn {
signaling;
}
neighbor 1.1.1.2;
neighbor 1.1.1.4;
}
}
ospf {
traffic-engineering;
area 0.0.0.0 {
interface lo0.0 {
passive;
}
interface ge-1/2/13.0;
}
}
ldp {
interface ge-1/2/13.0;
p2mp;
}
then accept;
}
selective {
group 225.0.1.0/24 {
source 0.0.0.0/0 {
ldp-p2mp;
threshold-rate 0;
}
}
}
vrf-target target:100:10;
protocols {
ospf {
export parent_vpn_routes;
area 0.0.0.0 {
interface lo0.105 {
passive;
}
interface ge-1/2/15.0;
}
}
pim {
rp {
static {
address 100.1.1.2;
}
}
interface ge-1/2/15.0 {
mode sparse;
}
}
mvpn {
mvpn-mode {
rpt-spt;
}
sender-based-rpf;
}
}
}
If you are done configuring the device, enter commit from configuration mode.
Verification
IN THIS SECTION
Purpose
Action
MVPN instance:
Legend for provider tunnel
S- Selective provider tunnel
Instance : vpn-1
MVPN Mode : RPT-SPT
Sender-Based RPF: Enabled.
Hot Root Standby: Disabled. Reason: Not enabled by configuration.
Provider tunnel: I-P-tnl:LDP-P2MP:1.1.1.4, lsp-id 16777217
Neighbor Inclusive Provider Tunnel
1.1.1.2 LDP-P2MP:1.1.1.2, lsp-id 16777219
1.1.1.5 LDP-P2MP:1.1.1.5, lsp-id 16777210
C-mcast IPv4 (S:G) Provider Tunnel St
0.0.0.0/0:224.1.1.1/32 LDP-P2MP:1.1.1.2, lsp-id 16777219
0.0.0.0/0:224.2.127.254/32 LDP-P2MP:1.1.1.3, lsp-id 16777210
MVPN instance:
Legend for provider tunnel
S- Selective provider tunnel
Instance : vpn-1
MVPN Mode : RPT-SPT
Sender-Based RPF: Enabled.
Purpose
Make sure the expected BGP routes are being added to the routing tables on the PE devices.
Action
1.1.1.4:32767:1.1.1.6/32
*[BGP/170] 1d 04:23:24, MED 1, localpref 100, from 1.1.1.4
AS path: I, validation-state: unverified
> via ge-1/2/11.0, Push 299776, Push 299792(top)
1.1.1.4:32767:10.1.1.16/30
*[BGP/170] 1d 04:23:24, localpref 100, from 1.1.1.4
AS path: I, validation-state: unverified
> via ge-1/2/11.0, Push 299776, Push 299792(top)
1.1.1.4:32767:100.1.1.4/32
*[BGP/170] 1d 04:23:24, localpref 100, from 1.1.1.4
AS path: I, validation-state: unverified
> via ge-1/2/11.0, Push 299776, Push 299792(top)
1.1.1.5:32767:1.1.1.7/32
*[BGP/170] 1d 04:23:23, MED 1, localpref 100, from 1.1.1.5
AS path: I, validation-state: unverified
> via ge-1/2/11.0, Push 299776, Push 299776(top)
1.1.1.5:32767:10.1.1.20/30
*[BGP/170] 1d 04:23:23, localpref 100, from 1.1.1.5
AS path: I, validation-state: unverified
> via ge-1/2/11.0, Push 299776, Push 299776(top)
1.1.1.5:32767:100.1.1.5/32
*[BGP/170] 1d 04:23:23, localpref 100, from 1.1.1.5
AS path: I, validation-state: unverified
> via ge-1/2/11.0, Push 299776, Push 299776(top)
1:1.1.1.4:32767:1.1.1.4/240
*[BGP/170] 1d 04:23:24, localpref 100, from 1.1.1.4
AS path: I, validation-state: unverified
> via ge-1/2/11.0, Push 299792
1:1.1.1.5:32767:1.1.1.5/240
*[BGP/170] 1d 04:23:23, localpref 100, from 1.1.1.5
AS path: I, validation-state: unverified
> via ge-1/2/11.0, Push 299776
6:1.1.1.2:32767:1001:32:100.1.1.2:32:224.1.1.1/240
1:1.1.1.4:32767:1.1.1.4/240
*[BGP/170] 1d 04:23:24, localpref 100, from 1.1.1.4
AS path: I, validation-state: unverified
> via ge-1/2/11.0, Push 299792
1:1.1.1.5:32767:1.1.1.5/240
*[BGP/170] 1d 04:23:23, localpref 100, from 1.1.1.5
AS path: I, validation-state: unverified
> via ge-1/2/11.0, Push 299776
6:1.1.1.2:32767:1001:32:100.1.1.2:32:224.1.1.1/240
[BGP/170] 1d 04:17:25, MED 0, localpref 100, from 1.1.1.5
AS path: I, validation-state: unverified
> via ge-1/2/11.0, Push 299776
[BGP/170] 1d 04:17:24, MED 0, localpref 100, from 1.1.1.4
AS path: I, validation-state: unverified
> via ge-1/2/11.0, Push 299792
6:1.1.1.2:32767:1001:32:100.1.1.2:32:224.2.127.254/240
[BGP/170] 1d 04:17:25, MED 0, localpref 100, from 1.1.1.5
AS path: I, validation-state: unverified
> via ge-1/2/11.0, Push 299776
1.1.1.2:32767:1.1.1.1/32
*[BGP/170] 1d 04:23:24, MED 1, localpref 100, from 1.1.1.2
AS path: I, validation-state: unverified
> via ge-1/2/12.0, Push 299776, Push 299808(top)
1.1.1.2:32767:10.1.1.0/30
*[BGP/170] 1d 04:23:24, localpref 100, from 1.1.1.2
AS path: I, validation-state: unverified
> via ge-1/2/12.0, Push 299776, Push 299808(top)
1.1.1.2:32767:100.1.1.2/32
*[BGP/170] 1d 04:23:24, localpref 100, from 1.1.1.2
AS path: I, validation-state: unverified
> via ge-1/2/12.0, Push 299776, Push 299808(top)
1.1.1.5:32767:1.1.1.7/32
*[BGP/170] 1d 04:23:20, MED 1, localpref 100, from 1.1.1.5
AS path: I, validation-state: unverified
> via ge-1/2/12.0, Push 299776, Push 299776(top)
1.1.1.5:32767:10.1.1.20/30
*[BGP/170] 1d 04:23:20, localpref 100, from 1.1.1.5
AS path: I, validation-state: unverified
> via ge-1/2/12.0, Push 299776, Push 299776(top)
1.1.1.5:32767:100.1.1.5/32
*[BGP/170] 1d 04:23:20, localpref 100, from 1.1.1.5
AS path: I, validation-state: unverified
> via ge-1/2/12.0, Push 299776, Push 299776(top)
1:1.1.1.2:32767:1.1.1.2/240
*[BGP/170] 1d 04:23:24, localpref 100, from 1.1.1.2
AS path: I, validation-state: unverified
> via ge-1/2/12.0, Push 299808
1:1.1.1.5:32767:1.1.1.5/240
*[BGP/170] 1d 04:23:20, localpref 100, from 1.1.1.5
AS path: I, validation-state: unverified
> via ge-1/2/12.0, Push 299776
5:1.1.1.2:32767:32:10.1.1.1:32:224.1.1.1/240
*[BGP/170] 20:34:47, localpref 100, from 1.1.1.2
AS path: I, validation-state: unverified
> via ge-1/2/12.0, Push 299808
1:1.1.1.2:32767:1.1.1.2/240
*[BGP/170] 1d 04:23:24, localpref 100, from 1.1.1.2
AS path: I, validation-state: unverified
> via ge-1/2/12.0, Push 299808
1:1.1.1.5:32767:1.1.1.5/240
*[BGP/170] 1d 04:23:20, localpref 100, from 1.1.1.5
AS path: I, validation-state: unverified
> via ge-1/2/12.0, Push 299776
5:1.1.1.2:32767:32:10.1.1.1:32:224.1.1.1/240
*[BGP/170] 20:34:47, localpref 100, from 1.1.1.2
AS path: I, validation-state: unverified
> via ge-1/2/12.0, Push 299808
1.1.1.2:32767:1.1.1.1/32
*[BGP/170] 1d 04:23:23, MED 1, localpref 100, from 1.1.1.2
AS path: I, validation-state: unverified
> via ge-1/2/13.0, Push 299776, Push 299808(top)
1.1.1.2:32767:10.1.1.0/30
*[BGP/170] 1d 04:23:23, localpref 100, from 1.1.1.2
AS path: I, validation-state: unverified
> via ge-1/2/13.0, Push 299776, Push 299808(top)
1.1.1.2:32767:100.1.1.2/32
*[BGP/170] 1d 04:23:23, localpref 100, from 1.1.1.2
AS path: I, validation-state: unverified
> via ge-1/2/13.0, Push 299776, Push 299808(top)
1.1.1.4:32767:1.1.1.6/32
*[BGP/170] 1d 04:23:20, MED 1, localpref 100, from 1.1.1.4
AS path: I, validation-state: unverified
> via ge-1/2/13.0, Push 299776, Push 299792(top)
1.1.1.4:32767:10.1.1.16/30
*[BGP/170] 1d 04:23:20, localpref 100, from 1.1.1.4
AS path: I, validation-state: unverified
> via ge-1/2/13.0, Push 299776, Push 299792(top)
1.1.1.4:32767:100.1.1.4/32
*[BGP/170] 1d 04:23:20, localpref 100, from 1.1.1.4
AS path: I, validation-state: unverified
> via ge-1/2/13.0, Push 299776, Push 299792(top)
1:1.1.1.2:32767:1.1.1.2/240
*[BGP/170] 1d 04:23:23, localpref 100, from 1.1.1.2
AS path: I, validation-state: unverified
> via ge-1/2/13.0, Push 299808
1:1.1.1.4:32767:1.1.1.4/240
*[BGP/170] 1d 04:23:20, localpref 100, from 1.1.1.4
AS path: I, validation-state: unverified
> via ge-1/2/13.0, Push 299792
5:1.1.1.2:32767:32:10.1.1.1:32:224.1.1.1/240
*[BGP/170] 20:34:47, localpref 100, from 1.1.1.2
AS path: I, validation-state: unverified
> via ge-1/2/13.0, Push 299808
1:1.1.1.2:32767:1.1.1.2/240
*[BGP/170] 1d 04:23:23, localpref 100, from 1.1.1.2
AS path: I, validation-state: unverified
> via ge-1/2/13.0, Push 299808
1:1.1.1.4:32767:1.1.1.4/240
*[BGP/170] 1d 04:23:20, localpref 100, from 1.1.1.4
AS path: I, validation-state: unverified
> via ge-1/2/13.0, Push 299792
5:1.1.1.2:32767:32:10.1.1.1:32:224.1.1.1/240
*[BGP/170] 20:34:47, localpref 100, from 1.1.1.2
AS path: I, validation-state: unverified
> via ge-1/2/13.0, Push 299808
Purpose
Make sure that the expected join messages are being sent.
Action
Group: 224.1.1.1
Source: *
RP: 100.1.1.2
Flags: sparse,rptree,wildcard
Upstream interface: ge-1/2/14.0
Group: 224.2.127.254
Source: *
RP: 100.1.1.2
Flags: sparse,rptree,wildcard
Upstream interface: ge-1/2/14.0
Group: 224.1.1.1
Source: *
RP: 100.1.1.2
Flags: sparse,rptree,wildcard
Upstream interface: ge-1/2/15.0
Group: 224.2.127.254
Source: *
RP: 100.1.1.2
Flags: sparse,rptree,wildcard
Upstream interface: ge-1/2/15.0
Meaning
Both Device CE2 and Device CE3 send C-Join packets upstream to their neighboring PE routers, which
are their unicast next hops toward the C-Source.
Purpose
Make sure that the expected join messages are being sent.
Action
Group: 224.1.1.1
Source: *
RP: 100.1.1.2
Flags: sparse,rptree,wildcard
Upstream interface: Local
Group: 224.1.1.1
Source: 10.1.1.1
Flags: sparse,spt
Upstream interface: ge-1/2/10.0
Group: 224.2.127.254
Source: *
RP: 100.1.1.2
Flags: sparse,rptree,wildcard
Upstream interface: Local
Group: 224.1.1.1
Source: *
RP: 100.1.1.2
Flags: sparse,rptree,wildcard
Upstream protocol: BGP
Upstream interface: Through BGP
Group: 224.1.1.1
Source: 10.1.1.1
Flags: sparse,spt
Upstream protocol: BGP
Upstream interface: Through BGP
Group: 224.2.127.254
Source: *
RP: 100.1.1.2
Flags: sparse,rptree,wildcard
Upstream protocol: BGP
Upstream interface: Through BGP
Group: 224.1.1.1
Source: *
RP: 100.1.1.2
Flags: sparse,rptree,wildcard
Upstream protocol: BGP
Upstream interface: Through BGP
Group: 224.1.1.1
Source: 10.1.1.1
Flags: sparse,spt
Upstream protocol: BGP
Upstream interface: Through BGP
Group: 224.2.127.254
Source: *
RP: 100.1.1.2
Flags: sparse,rptree,wildcard
Meaning
Both Device CE2 and Device CE3 send C-Join packets upstream to their neighboring PE routers, which
are their unicast next hops toward the C-Source.
The C-Join state points to BGP as the upstream interface because there is no PIM neighbor relationship
between the PEs. The downstream PE converts the C-PIM (C-S, C-G) state into a Type 7 source-tree join
BGP route and sends it to the upstream PE router toward the C-Source.
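One way to observe this conversion is to inspect the MVPN routing tables on the PE routers; this is a sketch using the vpn-1 instance from this example, with output omitted:

```
user@PE1> show route table vpn-1.mvpn.0
user@PE1> show route table bgp.mvpn.0
```

Source-tree join routes appear with a route prefix that begins with 7:, alongside the Type 1 and Type 5 routes shown in the earlier output.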
Purpose
Make sure that the C-Multicast flow is integrated into MVPN vpn-1 and sent by Device PE1 into the
provider tunnel.
Action
Group: 224.1.1.1/32
Source: *
Upstream interface: local
Downstream interface list:
ge-1/2/11.0
Group: 224.1.1.1
Source: 10.1.1.1/32
Upstream interface: ge-1/2/10.0
Downstream interface list:
ge-1/2/11.0
Group: 224.2.127.254/32
Source: *
Upstream interface: local
Group: 224.1.1.1/32
Source: *
Upstream rpf interface list:
vt-1/2/10.4 (P)
Sender Id: Label 299840
Downstream interface list:
ge-1/2/14.0
Group: 224.1.1.1
Source: 10.1.1.1/32
Upstream rpf interface list:
vt-1/2/10.4 (P)
Sender Id: Label 299840
Group: 224.2.127.254/32
Source: *
Upstream rpf interface list:
vt-1/2/10.4 (P)
Sender Id: Label 299840
Downstream interface list:
ge-1/2/14.0
Group: 224.1.1.1/32
Source: *
Upstream interface: vt-1/2/10.5
Downstream interface list:
ge-1/2/15.0
Group: 224.1.1.1
Source: 10.1.1.1/32
Group: 224.2.127.254/32
Source: *
Upstream interface: vt-1/2/10.5
Downstream interface list:
ge-1/2/15.0
Meaning
The output shows that, unlike the other PE devices, Device PE2 is using sender-based RPF. The output
on Device PE2 includes the upstream RPF sender. The Sender Id field is only shown when sender-based
RPF is enabled.
Purpose
Action
MVPN instance:
Legend for provider tunnel
S- Selective provider tunnel
Instance : vpn-1
MVPN Mode : RPT-SPT
C-mcast IPv4 (S:G) Provider Tunnel St
0.0.0.0/0:224.1.1.1/32 I-P-tnl:LDP-P2MP:1.1.1.3, lsp-id
16777217 RM
10.1.1.1/32:224.1.1.1/32 I-P-tnl:LDP-P2MP:1.1.1.3, lsp-id
16777217 RM
0.0.0.0/0:224.2.127.254/32 I-P-tnl:LDP-P2MP:1.1.1.3, lsp-id
16777217 RM
...
Instance : vpn-1
MVPN Mode : RPT-SPT
C-mcast IPv4 (S:G) Provider Tunnel St
0.0.0.0/0:224.1.1.1/32 I-P-tnl:LDP-P2MP:1.1.1.2, lsp-id 16777217
10.1.1.1/32:224.1.1.1/32 I-P-tnl:LDP-P2MP:1.1.1.2, lsp-id 16777217
0.0.0.0/0:224.2.127.254/32 I-P-tnl:LDP-P2MP:1.1.1.2, lsp-id 16777217
...
MVPN instance:
Legend for provider tunnel
S- Selective provider tunnel
Instance : vpn-1
MVPN Mode : RPT-SPT
C-mcast IPv4 (S:G) Provider Tunnel St
0.0.0.0/0:224.1.1.1/32 I-P-tnl:LDP-P2MP:1.1.1.2, lsp-id 16777217
10.1.1.1/32:224.1.1.1/32 I-P-tnl:LDP-P2MP:1.1.1.2, lsp-id 16777217
0.0.0.0/0:224.2.127.254/32 I-P-tnl:LDP-P2MP:1.1.1.2, lsp-id 16777217
...
Meaning
Purpose
Action
Instance : vpn-1
MVPN Mode : RPT-SPT
Family : INET
C-Multicast route address :0.0.0.0/0:224.1.1.1/32
MVPN Source-PE1:
extended-community: no-advertise target:1.1.1.2:72
Route Distinguisher: 1.1.1.2:32767
Autonomous system number: 1001
Interface: lo0.102 Index: -1610691384
PIM Source-PE1:
extended-community: target:1.1.1.2:72
Route Distinguisher: 1.1.1.2:32767
Autonomous system number: 1001
Interface: lo0.102 Index: -1610691384
C-Multicast route address :10.1.1.1/32:224.1.1.1/32
MVPN Source-PE1:
extended-community: no-advertise target:1.1.1.2:72
Route Distinguisher: 1.1.1.2:32767
Autonomous system number: 1001
Interface: ge-1/2/10.0 Index: -1610691384
PIM Source-PE1:
extended-community: target:1.1.1.2:72
Route Distinguisher: 1.1.1.2:32767
Autonomous system number: 1001
Interface: ge-1/2/10.0 Index: -1610691384
C-Multicast route address :0.0.0.0/0:224.2.127.254/32
MVPN Source-PE1:
Interface: (Null)
PIM Source-PE1:
extended-community: target:1.1.1.2:72
Route Distinguisher: 1.1.1.2:32767
Autonomous system number: 1001
Interface: (Null)
Instance : vpn-1
MVPN Mode : RPT-SPT
Family : INET
C-Multicast route address :0.0.0.0/0:224.1.1.1/32
MVPN Source-PE1:
extended-community: target:1.1.1.2:72
Route Distinguisher: 1.1.1.2:32767
Autonomous system number: 1001
Interface: (Null)
PIM Source-PE1:
extended-community: target:1.1.1.2:72
Route Distinguisher: 1.1.1.2:32767
Autonomous system number: 1001
Interface: (Null)
C-Multicast route address :10.1.1.1/32:224.1.1.1/32
MVPN Source-PE1:
extended-community: target:1.1.1.2:72
Route Distinguisher: 1.1.1.2:32767
Autonomous system number: 1001
Interface: (Null)
PIM Source-PE1:
extended-community: target:1.1.1.2:72
Route Distinguisher: 1.1.1.2:32767
Autonomous system number: 1001
Interface: (Null)
C-Multicast route address :0.0.0.0/0:224.2.127.254/32
MVPN Source-PE1:
extended-community: target:1.1.1.2:72
Route Distinguisher: 1.1.1.2:32767
Autonomous system number: 1001
Interface: (Null)
PIM Source-PE1:
extended-community: target:1.1.1.2:72
Route Distinguisher: 1.1.1.2:32767
Autonomous system number: 1001
Interface: (Null)
...
Meaning
RELATED DOCUMENTATION
unicast-umh-election | 2007
IN THIS SECTION
Understanding Wildcards to Configure Selective Point-to-Multipoint LSPs for an MBGP MVPN | 1039
IN THIS SECTION
Selective LSPs are also referred to as selective provider tunnels. Selective provider tunnels carry traffic
from some multicast groups in a VPN and extend only to the PE routers that have receivers for these
groups. You can configure a selective provider tunnel for group prefixes and source prefixes, or you can
use wildcards for the group and source, as described in the Internet draft draft-rekhter-mvpn-wildcard-
spmsi-01.txt, Use of Wildcard in S-PMSI Auto-Discovery Routes.
The following sections describe the scenarios and special considerations when you use wildcards for
selective provider tunnels.
About S-PMSI
The provider multicast service interface (PMSI) is a BGP tunnel attribute that contains the tunnel ID
used by the PE router for transmitting traffic through the core of the provider network. A selective PMSI
(S-PMSI) autodiscovery route advertises binding of a given MVPN customer multicast flow to a
particular provider tunnel. The S-PMSI autodiscovery route advertised by the ingress PE router
contains /32 IPv4 or /128 IPv6 addresses for the customer source and the customer group derived from
the source-tree customer multicast route.
Figure 118 on page 1041 shows a simple MVPN topology. The ingress router, PE1, originates the S-
PMSI autodiscovery route. The egress routers, PE2 and PE3, have join state as a result of receiving join
messages from CE devices that are not shown in the topology. In response to the S-PMSI autodiscovery
route advertisement sent by PE1, PE2 and PE3 elect whether to join the tunnel based on their join
state. The selective provider tunnel is configured in a VRF instance on PE1.
NOTE: The MVPN mode configuration (RPT-SPT or SPT-only) is configured on all three PE
routers for all VRFs that make up the VPN. If you omit the MVPN mode configuration, the
default mode is SPT-only.
A wildcard S-PMSI has the source or the group (or both the source and the group) field set to the
wildcard value of 0.0.0.0/0 and advertises binding of multiple customer multicast flows to a single
provider tunnel in a single S-PMSI autodiscovery route.
The scenarios under which you might configure a wildcard S-PMSI are as follows:
• When the customer multicast flows are PIM-SM in ASM-mode flows. In this case, a PE router
connected to an MVPN customer's site that contains the customer's RP (C-RP) could bind all the
customer multicast flows traveling along a customer's RPT tree to a single provider tunnel.
• When a PE router is connected to an MVPN customer’s site that contains multiple sources, all
sending to the same group.
• When the customer multicast flows are PIM-bidirectional flows. In this case, a PE router could bind
to a single provider tunnel all the customer multicast flows for the same group that have been
originated within the sites of a given MVPN connected to that PE, and advertise such binding in a
single S-PMSI autodiscovery route.
• When the customer multicast flows are PIM-SM in SSM-mode flows. In this case, a PE router could
bind to a single provider tunnel all the customer multicast flows coming from a given source located
in a site connected to that PE router.
• When you want to carry in the provider tunnel all the customer multicast flows originated within the
sites of a given MVPN connected to a given PE router.
• A (*,G) S-PMSI matches all customer multicast routes that have the group address. The customer
source address in the customer multicast route can be any address, including 0.0.0.0/0 for shared-
tree customer multicast routes. A (*, C-G) S-PMSI autodiscovery route is advertised with the source
field set to 0 and the source address length set to 0. The multicast group address for the S-PMSI
autodiscovery route is derived from the customer multicast joins.
• A (*,*) S-PMSI matches all customer multicast routes. Any customer source address and any customer
group address in a customer multicast route can be bound to the (*,*) S-PMSI. The S-PMSI
autodiscovery route is advertised with the source address and length set to 0 and the group address
and length set to 0. The remaining fields in the S-PMSI autodiscovery route follow the same rule as the
(C-S, C-G) S-PMSI, as described in section 12.1 of the BGP-MVPN draft (draft-ietf-l3vpn-2547bis-mcast-
bgp-00.txt).
For dynamic provider tunnels, each customer multicast stream is bound to a separate provider tunnel,
and each tunnel is advertised by a separate S-PMSI autodiscovery route. For static LSPs, multiple
customer multicast flows are bound to a single provider tunnel by having multiple S-PMSI autodiscovery
routes advertise the same provider tunnel.
When you configure a wildcard (*,G) or (*,*) S-PMSI, one or more matching customer multicast routes
share a single S-PMSI. All customer multicast routes that have a matching source and group address are
bound to the same (*,G) or (*,*) S-PMSI and share the same tunnel. The (*,G) or (*,*) S-PMSI is
established when the first matching remote customer multicast join message is received in the ingress
PE router, and deleted when the last remote customer multicast join is withdrawn from the ingress PE
router. Sharing a single S-PMSI autodiscovery route improves control plane scalability.
For (S,G) and (*,G) S-PMSI autodiscovery routes in PIM dense mode (PIM-DM), all downstream PE
routers receive PIM-DM traffic. If a downstream PE router does not have receivers that are interested in
the group address, the PE router instantiates prune state and stops receiving traffic from the tunnel.
Now consider what happens for (*,*) S-PMSI autodiscovery routes. If the PIM-DM traffic is not bound by
a longer matching (S,G) or (*,G) S-PMSI, it is bound to the (*,*) S-PMSI. As is always true for dense mode,
PIM-DM traffic is flooded to downstream PE routers over the provider tunnel regardless of the
customer multicast join state. Because there is no group information in the (*,*) S-PMSI autodiscovery
route, egress PE routers join a (*,*) S-PMSI tunnel if there is any configuration on the egress PE router
indicating interest in PIM-DM traffic.
Interest in PIM-DM traffic is indicated if the egress PE router has one of the following configurations in
the VRF instance that corresponds to the instance that imports the S-PMSI autodiscovery route:
• At least one interface is configured in dense mode at the [edit routing-instances instance-name
protocols pim interface] hierarchy level.
• At least one group is configured as a dense-mode group at the [edit routing-instances instance-name
protocols pim dense-groups group-address] hierarchy level.
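For illustration, either of the following stanzas would indicate interest in PIM-DM traffic on an egress PE router. This is a sketch only; the instance name vpna, interface ge-0/0/0.0, and group address 224.2.2.2 are hypothetical, and either statement alone is sufficient:

```
routing-instances {
    vpna {
        protocols {
            pim {
                dense-groups {
                    224.2.2.2;          ## a group configured in dense mode
                }
                interface ge-0/0/0.0 {
                    mode dense;         ## an interface configured in dense mode
                }
            }
        }
    }
}
```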
For (S,G) and (*,G) S-PMSI autodiscovery routes in PIM bootstrap router (PIM-BSR) mode, an ingress PE
router floods the PIM bootstrap message (BSM) packets over the provider tunnel to all egress PE
routers. An egress PE router does not join the tunnel unless the S-PMSI autodiscovery route carries the
ALL-PIM-ROUTERS group. If the route carries this group, the egress PE router joins the tunnel regardless
of the join state; the group field in the S-PMSI autodiscovery route determines the presence or absence
of the ALL-PIM-ROUTERS address.
Now consider what would happen for (*,*) S-PMSI autodiscovery routes used with PIM-BSR mode. If the
PIM BSM packets are not bound by a longer matching (S,G) or (*,G) S-PMSI, they are bound to the (*,*)
S-PMSI. As is always true for PIM-BSR, BSM packets are flooded to downstream PE routers over the
provider tunnel to the ALL-PIM-ROUTERS destination group. Because there is no group information in
the (*,*) S-PMSI autodiscovery route, egress PE routers always join a (*,*) S-PMSI tunnel. Unlike PIM-
DM, the egress PE routers might have no configuration suggesting use of PIM-BSR as the RP discovery
mechanism in the VRF instance. To prevent all egress PE routers from always joining the (*,*) S-PMSI
tunnel, the (*,*) wildcard group configuration must be ignored.
This means that if you configure PIM-BSR, a wildcard-group S-PMSI can be configured for all other
group addresses. The (*,*) S-PMSI is not used for PIM-BSR traffic. Either a matching (*,G) or (S,G) S-PMSI
(where the group address is the ALL-PIM-ROUTERS group) or an inclusive provider tunnel is needed to
transmit data over the provider core. For PIM-BSR, the longest-match lookup is (S,G), (*,G), and the
inclusive provider tunnel, in that order. If you do not configure an inclusive tunnel for the routing
instance, you must configure a (*,G) or (S,G) selective tunnel. Otherwise, the data is dropped. This is
because PIM-BSR functions like PIM-DM, in that traffic is flooded to downstream PE routers over the
provider tunnel regardless of the customer multicast join state. However, unlike PIM-DM, the egress PE
routers might have no configuration to indicate interest or noninterest in PIM-BSR traffic.
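As a sketch, a (*,G) selective tunnel matching the ALL-PIM-ROUTERS group (224.0.0.13) might look as follows. The instance name vpna and the template name sptnl-bsr are hypothetical:

```
routing-instances {
    vpna {
        provider-tunnel {
            selective {
                group 224.0.0.13/32 {      ## ALL-PIM-ROUTERS group for BSM packets
                    wildcard-source {
                        rsvp-te {
                            label-switched-path-template {
                                sptnl-bsr;  ## hypothetical P2MP LSP template name
                            }
                        }
                    }
                }
            }
        }
    }
}
```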
You can configure a 0.0.0.0/0 source prefix and a wildcard source under the same group prefix in a
selective provider tunnel. For example, the configuration might look as follows:
routing-instances {
vpna {
provider-tunnel {
selective {
group 203.0.113.0/24 {
source 0.0.0.0/0 {
rsvp-te {
label-switched-path-template {
sptnl3;
}
}
}
wildcard-source {
rsvp-te {
label-switched-path-template {
sptnl2;
}
static-lsp point-to-multipoint-lsp-name;
}
threshold-rate kbps;
}
}
}
}
}
}
The functions of the source 0.0.0.0/0 and wildcard-source configuration statements are different. The
0.0.0.0/0 source prefix only matches (C-S, C-G) customer multicast join messages and triggers (C-S, C-G)
S-PMSI autodiscovery routes derived from the customer multicast address. Because all (C-S, C-G) join
messages are matched by the 0.0.0.0/0 source prefix in the matching group, the wildcard source S-PMSI
is used only for (*,C-G) customer multicast join messages. In the absence of a configured 0.0.0.0/0
source prefix, the wildcard source matches (C-S, C-G) and (*,C-G) customer multicast join messages. In
the example, a join message for (10.0.1.0/24, 203.0.113.0/24) is bound to sptnl3. A join message for (*,
203.0.113.0/24) is bound to sptnl2.
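The label-switched-path-template statements in the example refer to point-to-multipoint LSP templates defined under [edit protocols mpls]. A minimal sketch for the template names used in the example might look as follows:

```
protocols {
    mpls {
        label-switched-path sptnl2 {
            template;    ## marks this LSP as a template
            p2mp;        ## template is for point-to-multipoint LSPs
        }
        label-switched-path sptnl3 {
            template;
            p2mp;
        }
    }
}
```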
Sharing a single route improves control plane scalability because it reduces the number of S-PMSI
autodiscovery routes.
1. Configure a wildcard group matching any group IPv4 address and a wildcard source for (*,*) join
messages.
2. Configure a wildcard group matching any group IPv6 address and a wildcard source for (*,*) join
messages.
3. Configure an IP prefix of a multicast group and a wildcard source for (*,G) join messages.
routing-instances {
vpna {
provider-tunnel {
selective {
wildcard-group-inet {
wildcard-source {
rsvp-te {
label-switched-path-template {
sptnl1;
}
}
}
}
group 203.0.113.0/24 {
wildcard-source {
rsvp-te {
label-switched-path-template {
sptnl2;
}
}
}
source 10.1.1/24 {
rsvp-te {
label-switched-path-template {
sptnl3;
}
}
}
}
}
}
}
}
• A customer multicast (10.1.1.1, 203.0.113.1) join message is bound to the sptnl3 S-PMSI
autodiscovery route.
• A customer multicast (10.2.1.1, 203.0.113.1) join message is bound to the sptnl2 S-PMSI
autodiscovery route.
• A customer multicast (10.1.1.1, 203.1.113.1) join message is bound to the sptnl1 S-PMSI
autodiscovery route.
When more than one customer multicast route is bound to the same wildcard S-PMSI, only one S-PMSI
autodiscovery route is created. An egress PE router always uses the same matching rules as the ingress
PE router that advertises the S-PMSI autodiscovery route. This ensures consistent customer multicast
mapping on the ingress and the egress PE routers.
RELATED DOCUMENTATION
IN THIS SECTION
Eliminating PE-PE Distribution of (C-*, C-G) State Using Source Active Autodiscovery Routes | 1052
While non-C-multicast multicast virtual private network (MVPN) routes (Type 1 – Type 5) are generally
used by all provider edge (PE) routers in the network, C-multicast MVPN routes (Type 6 and Type 7) are
only useful to the PE router connected to the active C-S or candidate rendezvous point (RP). Therefore,
C-multicast routes need to be installed only in the VPN routing and forwarding (VRF) table on the active
sender PE router for a given C-G. To accomplish this, Internet draft draft-ietf-l3vpn-2547bis-
mcast-10.txt specifies attaching a special and dynamic route target to C-multicast MVPN routes (Figure
119 on page 1049).
Figure 119: Attaching a Special and Dynamic Route Target to C-Multicast MVPN Routes
The route target attached to C-multicast routes is also referred to as the C-multicast import route target
and should not be confused with route target import (Table 31 on page 1049). Note that C-multicast
MVPN routes differ from other MVPN routes in one essential way: they carry a dynamic route target
whose value depends on the identity of the active sender PE router at a given time and can change if
the active PE router changes.
Table 31: Distinction Between Route Target Import Attached to VPN-IPv4 Routes and Route Target
Attached to C-Multicast MVPN Routes
Route Target Import Attached to VPN-IPv4 Routes:
• Value generated by the originating PE router. Must be unique per VRF table.
• Static. Created upon configuration to help identify to which PE router and to which VPN the VPN
unicast routes belong.
Route Target Attached to C-Multicast MVPN Routes:
• Value depends on the identity of the active PE router.
• Dynamic, because if the active sender PE router changes, the route target attached to the C-multicast
routes must change to target the new sender PE router. For example, a new VPN source attached to a
different PE router becomes active and preferred.
A PE router that receives a local C-join determines the identity of the active sender PE router by
performing a unicast route lookup for the C-S or candidate rendezvous point (candidate RP) in
the unicast VRF table. If there is more than one route, the receiver PE router chooses a single forwarder
PE router. The procedures used for choosing a single forwarder are outlined in Internet draft draft-ietf-
l3vpn-2547bis-mcast-bgp-08.txt and are not covered in this topic.
After the active sender (upstream) PE router is selected, the receiver PE router constructs the C-
multicast MVPN route corresponding to the local C-join.
After the C-multicast route is constructed, the receiver PE router needs to attach the correct route
target to this route targeting the active sender PE router. As mentioned, each PE router creates a unique
VRF route target import community and attaches it to the VPN-IPv4 routes. When the receiver PE
router does a route lookup for C-S or candidate RP, it can extract the value of the route target import
associated with this route and set the value of the C-import route target to the value of the route target
import.
On the active sender PE router, C-multicast routes are imported only if they carry the route target
whose value is the same as the route target import that the sender PE router generated.
A PE router originates a C-multicast MVPN route in response to receiving a C-join through its PE-CE
interface. See Figure 120 on page 1051 for the format of the C-multicast route encoded in MCAST-
VPN NLRI. Table 32 on page 1051 describes each field.
Table 32: C-Multicast Route Type MCAST-VPN NLRI Format Descriptions
Field Description
Route Distinguisher Set to the route distinguisher of the C-S or candidate RP (the route
distinguisher associated with the upstream PE router).
Source AS Set to the value found in the src-as community of the C-S or candidate RP.
Multicast Source Length Set to 32 for IPv4 and to 128 for IPv6 C-S or candidate RP IP addresses.
Multicast Group Length Set to 32 for IPv4 and to 128 for IPv6 C-G addresses.
This same structure is used for encoding both Type 6 and Type 7 routes with two differences:
• The first difference is the value used for the multicast source field. For Type 6 routes, this field is set
to the IP address of the candidate RP configured. For Type 7 routes, this field is set to the IP address
of the C-S contained in the (C-S, C-G) message.
• The second difference is the value used for the route distinguisher. For Type 6 routes, this field is set
to the route distinguisher that is attached to the IP address of the candidate RP. For Type 7 routes,
this field is set to the route distinguisher that is attached to the IP address of the C-S.
Eliminating PE-PE Distribution of (C-*, C-G) State Using Source Active Autodiscovery
Routes
PE routers must maintain additional state when the C-multicast routing protocol is Protocol
Independent Multicast-Sparse Mode (PIM-SM) in any-source multicast (ASM). This is a requirement
because with ASM, the receivers first join the shared tree rooted at the candidate RP (called a candidate
RP tree or candidate RPT). However, as the VPN multicast sources become active, receivers learn the
identity of the sources and join the tree rooted at the source (called a customer shortest-path tree or C-
SPT). The receivers then send a prune message toward the candidate RP to stop the traffic coming
through the shared tree for the group that they have joined on the C-SPT. The switch from the candidate
RPT to the C-SPT is a complicated process requiring additional state.
In this approach, a PE router that receives a local (C-*, C-G) join creates a Type 6 route, but does not
advertise the route to the remote PE routers until it receives information about an active source. The PE
router acting as the candidate RP (or that learns about active sources via MSDP) is responsible for
originating a Type 5 route. A Type 5 route carries information about the active source and the group
addresses. The information contained in a Type 5 route is enough for receiver PE routers to join the C-
SPT by originating a Type 7 route toward the sender PE router, completely skipping the advertisement
of the Type 6 route that is created when a C-join is received. Figure 121 on page 1053 shows the format
of a source active (SA) autodiscovery route. Table 33 on page 1053 describes each field.
Figure 121: Source Active Autodiscovery Route Type MCAST-VPN NLRI Format
Table 33: Source Active Autodiscovery Route Type MCAST-VPN NLRI Format Descriptions
Field Description
Route Distinguisher Set to the route distinguisher configured on the router originating the SA
autodiscovery route.
Multicast Source Length Set to 32 for IPv4 and to 128 for IPv6 C-S IP addresses.
Multicast Source Set to the IP address of the C-S that is actively transmitting data to C-G.
Multicast Group Length Set to 32 for IPv4 and to 128 for IPv6 C-G addresses.
Multicast Group Set to the IP address of the C-G to which C-S is transmitting data.
The sender PE router imports C-multicast routes into the VRF table based on the route target of the
route. If the route target attached to the C-multicast MVPN route matches the route target import
community originated by this router, the C-multicast MVPN route is imported into the VRF table. If not,
it is discarded.
Once the C-multicast MVPN routes are imported, they are translated back to C-joins and passed on to
the VRF C-PIM protocol for further processing per normal PIM procedures.
RELATED DOCUMENTATION
IN THIS SECTION
This section describes PE-PE distribution of Type 7 routes discussed in "Signaling Provider Tunnels and
Data Plane Setup" on page 1069.
In source-tree-only mode, a receiver provider edge (PE) router generates and installs a Type 6 route in its
<routing-instance-name>.mvpn.0 table in response to receiving a (C-*, C-G) message from a local
receiver, but does not advertise this route to other PE routers via BGP. The receiver PE router waits for a
Type 5 route corresponding to the C-join.
Type 5 routes carry information about active sources and can be advertised by any PE router. In Junos
OS, a PE router originates a Type 5 route if one of the following conditions occurs:
• PE router starts receiving multicast data directly from a VPN multicast source.
• PE router is the candidate rendezvous point (candidate RP) and starts receiving C-PIM
register messages.
• PE router has a Multicast Source Discovery Protocol (MSDP) session with the candidate RP and
starts receiving MSDP Source Active routes.
Once both Type 6 and Type 5 routes are installed in the <routing-instance-name>.mvpn.0 table, the
receiver PE router is ready to originate a Type 7 route.
If the C-join received over a VPN interface is a source tree join (C-S, C-G), then the receiver PE router
simply originates a Type 7 route (Step 7 in the following procedure). If the C-join is a shared tree join (C-
*, C-G), then the receiver PE router needs to go through a few steps (Steps 1-7) before originating a
Type 7 route.
Note that Router PE1 is the candidate RP, which is conveniently located on the same router as the
sender PE router. If the sender PE router and the PE router acting as (or MSDP peering with) the candidate RP
are different, then the VPN multicast register messages first need to be delivered to the PE router acting
as the candidate RP that is responsible for originating the Type 5 route. Routers referenced in this topic
are shown in "Understanding Next-Generation MVPN Network Topology" on page 745.
1. A PE router that receives a (C-*, C-G) join message processes the message using normal C-PIM
procedures and updates its C-PIM database accordingly.
Enter the show pim join extensive instance vpna 224.1.1.1 command on Router PE3 to verify that
Router PE3 creates the C-PIM database after receiving the (*, 224.1.1.1) C-join message from Router
CE3:
Group: 224.1.1.1
Source: *
RP: 10.12.53.1
Flags: sparse,rptree,wildcard
Upstream protocol: BGP
Upstream interface: Through BGP
Upstream neighbor: Through MVPN
Upstream state: Join to RP
Downstream neighbors:
Interface: so-0/2/0.0
10.12.87.1 State: Join Flags: SRW Timeout: Infinity
2. The (C-*, C-G) entry in the C-PIM database triggers the generation of a Type 6 route that is then
installed in the <routing-instance-name>.mvpn.0 table by C-PIM. The Type 6 route uses the
candidate RP IP address as the source.
Enter the show route table vpna.mvpn.0 detail | find 6:10.1.1.1 command on Router PE3 to verify
that Router PE3 installs the following Type 6 route in the vpna.mvpn.0 table:
3. The route distinguisher and route target attached to the Type 6 route are learned from a route
lookup in the <routing-instance-name>.inet.0 table for the IP address of the candidate RP.
Enter the show route table vpna.inet.0 10.12.53.1 detail command on Router PE3 to verify that
Router PE3 has the following entry for C-RP 10.12.53.1 in the vpna.inet.0 table:
4. After the VPN source starts transmitting data, the first PE router that becomes aware of the active
source (either by receiving register messages or the MSDP source-active routes) installs a Type 5
route in its VRF mvpn table.
Enter the show route table vpna.mvpn.0 detail | find 5:10.1.1.1 command on Router PE1 to verify
that Router PE1 installs the following entry in the vpna.mvpn.0 table when it starts receiving C-PIM
register messages from Router CE1:
5. Type 5 routes that are installed in the <routing-instance-name>.mvpn.0 table are picked up by BGP
and advertised to remote PE routers.
Enter the show route advertising-protocol bgp 10.1.1.3 detail table vpna.mvpn.0 | find 5: command
on Router PE1 to verify that Router PE1 advertises the following Type 5 route to remote PE routers:
6. The receiver PE router that has both a Type 5 and Type 6 route for (C-*, C-G) is now ready to
originate a Type 7 route.
Enter the show route table vpna.mvpn.0 detail command on Router PE3 to verify that Router PE3
has the following Type 5, 6, and 7 routes in the vpna.mvpn.0 table.
The Type 6 route is installed by C-PIM in Step 2. The Type 5 route is learned via BGP in Step 5. The
Type 7 route is originated by the MVPN module in response to having both Type 5 and Type 6 routes
for the same (C-*, C-G). The route target of the Type 7 route is the same as the route target of the
Type 6 route because both routes (the IP address of the candidate RP [10.12.53.1] and the address of the
VPN multicast source [192.168.1.2]) are reachable through the same router (PE1). Therefore, 10.12.53.1
and 192.168.1.2 carry the same route target import (10.1.1.1:64) community.
7. The Type 7 route installed in the VRF MVPN table is picked up by BGP and advertised to remote PE
routers.
Enter the show route advertising-protocol bgp 10.1.1.1 detail table vpna.mvpn.0 | find 7:10.1.1.1
command on Router PE3 to verify that Router PE3 advertises the following Type 7 route:
8. If the C-join is a source tree join, then the Type 7 route is originated immediately (without waiting for
a Type 5 route).
Enter the show route table vpna.mvpn.0 detail | find 7:10.1.1.1 command on Router PE2 to verify
that Router PE2 originates the following Type 7 route in response to receiving a (192.168.1.2,
232.1.1.1) C-join:
A sender PE router imports a Type 7 route if the route is carrying a route target that matches the locally
originated route target import community. All Type 7 routes must pass the __vrf-mvpn-import-cmcast-
<routing-instance-name>-internal__ policy in order to be installed in the <routing-instance-
name>.mvpn.0 table.
When a sender PE router receives a Type 7 route via BGP, this route is installed in the <routing-
instance-name>.mvpn.0 table. The BGP route is then translated back into a normal C-join inside the
VRF table, and the C-join is installed in the local C-PIM database of the sender PE router. A new C-join
added to the C-PIM database triggers C-PIM to originate a Type 6 or Type 7 route. The C-PIM on the
sender PE router creates its own version of the same Type 7 route received via BGP.
Use the show route table vpna.mvpn.0 detail | find 7:10.1.1.1 command to verify that Router PE1
contains the following entries for a Type 7 route in the vpna.mvpn.0 table corresponding to a
(192.168.1.2, 224.1.1.1) join message. There are two entries; one entry is installed by PIM and the other
entry is installed by BGP. This example also shows the Type 7 route corresponding to the (192.168.1.2,
232.1.1.1) join.
Remote C-joins (Type 7 routes learned via BGP translated back to normal C-joins) are installed in the
VRF C-PIM database on the sender PE router and are processed based on regular C-PIM procedures.
This process completes the end-to-end C-multicast routing exchange.
Use the show pim join extensive instance vpna command to verify that Router PE1 has installed the
following entries in the C-PIM database:
Group: 224.1.1.1
Source: 192.168.1.2
Flags: sparse,spt
Upstream interface: fe-0/2/0.0
Upstream neighbor: 10.12.97.2
Upstream state: Local RP, Join to Source
Keepalive timeout: 201
Downstream neighbors:
Interface: Pseudo-MVPN
Group: 232.1.1.1
Source: 192.168.1.2
Flags: sparse,spt
Upstream interface: fe-0/2/0.0
Upstream neighbor: 10.12.97.2
Upstream state: Local RP, Join to Source
Keepalive timeout:
Downstream neighbors:
Interface: Pseudo-MVPN
RELATED DOCUMENTATION
Both route target import (rt-import) and source autonomous system (src-as) communities contain two
fields (following their respective keywords). In Junos OS, a provider edge (PE) router constructs the
route target import community using its router ID in the first field and a per-VRF unique number in the
second field. The router ID is normally set to the primary loopback IP address of the PE router. The
unique number used in the second field is an internal number derived from the routing-instance table
index. The combination of the two numbers creates a route target import community that is unique to
the originating PE router and unique to the VPN routing and forwarding (VRF) instance from which it is
created.
For example, Router PE1 creates the following route target import community: rt-import:10.1.1.1:64.
Since the route target import community is constructed using the primary loopback address and the
routing-instance table index of the PE router, any event that causes either number to change triggers a
change in the value of the route target import community. This in turn requires VPN-IPv4 routes to be
re-advertised with the new route target import community. Under normal circumstances, the primary
loopback address and the routing-instance table index numbers do not change. If they do change, Junos
OS updates all related internal policies and re-advertises VPN-IPv4 routes with the new rt-import and
src-as values per those policies.
To ensure that the route target import community generated by a PE router is unique across VRF tables,
the Junos OS Policy module restricts the use of primary loopback addresses to next-generation
multicast virtual private network (MVPN) internal policies only. You are not permitted to configure a
route target for any VRF table (MVPN or otherwise) using the primary loopback address. The commit
fails with an error if the system finds a user-configured route target that contains the IP address used in
constructing the route target import community.
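For example, if 10.1.1.1 is the primary loopback address of the PE router, a configuration along the following lines would fail at commit. The instance name vpnb and the local administrator value 100 are hypothetical:

```
routing-instances {
    vpnb {
        ## fails at commit: the route target uses the primary loopback
        ## address reserved for the route target import community
        vrf-target target:10.1.1.1:100;
    }
}
```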
The global administrator field of the src-as community is set to the local AS number of the PE router
originating the community, and the local administrator field is set to 0. This community is used for inter-
AS operations but needs to be carried along with all VPN-IPv4 routes.
For example, Router PE1 creates an src-as community with a value of src-as:65000:0.
RELATED DOCUMENTATION
IN THIS SECTION
Every provider edge (PE) router that is participating in the next-generation multicast virtual private
network (MVPN) is required to originate a Type 1 intra-AS autodiscovery route. In Junos OS, the MVPN
module is responsible for installing the intra-AS autodiscovery route in the local <routing-instance-
name>.mvpn.0 table. All PE routers advertise their local Type 1 routes to each other. Routers referenced
in this topic are shown in "Understanding Next-Generation MVPN Network Topology" on page 745.
Use the show route table vpna.mvpn.0 command to verify that Router PE1 has installed intra-AS AD
routes in the vpna.mvpn.0 table. The route is installed by the MVPN protocol (meaning it is the MVPN
module that originated the route), and the mask for the entire route is /240.
1:10.1.1.1:1:10.1.1.1/240
Intra-AS AD routes are picked up by the BGP protocol from the <routing-instance-name>.mvpn.0 table
and advertised to the remote PE routers via the MCAST-VPN address family. By default, intra-AS
autodiscovery routes carry the same route target community that is attached to the unicast VPN-IPv4
routes. If the unicast and multicast network topologies are not congruent, then you can configure a
different set of import route target and export route target communities for non-C-multicast MVPN
routes (C-multicast MVPN routes always carry a dynamic import route target).
Multicast route targets are configured by including the import-target and export-target statements at
the [edit routing-instances routing-instance-name protocols mvpn route-target] hierarchy level.
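For example, matching the target:10:2 value referenced in this topic, the configuration might look as follows. The instance name vpna is assumed:

```
routing-instances {
    vpna {
        protocols {
            mvpn {
                route-target {
                    import-target {
                        target target:10:2;   ## route target for importing non-C-multicast MVPN routes
                    }
                    export-target {
                        target target:10:2;   ## route target attached to exported non-C-multicast MVPN routes
                    }
                }
            }
        }
    }
}
```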
Junos OS creates two additional internal policies in response to configuring multicast route targets.
These policies are applied to non-C-multicast MVPN routes during import and export decisions.
Multicast VPN routing and forwarding (VRF) internal import and export policies follow a naming
convention similar to unicast VRF import and export policies. The contents of these policies are also
similar to policies applied to unicast VPN routes.
The following list identifies the default policy names and where they are applied:
Use the show policy __vrf-mvpn-import-target-vpna-internal__ command on Router PE1 to verify that
Router PE1 has created the following internal MVPN policies if import-target and export-target are
configured to be target:10:2:
The provider multicast service interface (PMSI) attribute is originated and attached to Type 1 intra-AS
autodiscovery routes by the sender PE routers when the provider-tunnel statement is included at the
[edit routing-instances routing-instance-name] hierarchy level. Since provider tunnels are signaled by
the sender PE routers, this statement is not necessary on the PE routers that are known to have VPN
multicast receivers only.
If the provider tunnel configured is Protocol Independent Multicast-Sparse Mode (PIM-SM) any-source
multicast (ASM), then the PMSI attribute carries the IP address of the sender PE router and the provider
tunnel group address. The provider tunnel group address is assigned by the service provider (through
configuration) from the provider’s multicast address space and is not to be confused with the multicast
addresses used by the VPN customer.
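As a sketch, a PIM-SM (ASM) provider tunnel could be configured as follows. The instance name vpna is assumed, and 239.1.1.1 stands in for a group address assigned from the provider’s multicast address space:

```
routing-instances {
    vpna {
        provider-tunnel {
            pim-asm {
                group-address 239.1.1.1;   ## P-group address from the provider's address space
            }
        }
    }
}
```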
If the provider tunnel configured is the RSVP-Traffic Engineering (RSVP-TE) type, then the PMSI attribute
carries the RSVP-TE point-to-multipoint session object. This point-to-multipoint session object is used
as the identifier for the parent point-to-multipoint label-switched path (LSP) and contains the fields
shown in Figure 122 on page 1066.
In Junos OS, the P2MP ID and Extended Tunnel ID fields are set to the router ID of the sender PE
router. The Tunnel ID is set to the port number used for the point-to-multipoint RSVP session, which
remains unique for the life of the RSVP session.
Use the show rsvp session p2mp detail command to verify that Router PE1 signals the following RSVP
sessions to Router PE2 and Router PE3 (using port number 6574). In this example, Router PE1 is
signaling a point-to-multipoint LSP named 10.1.1.1:65535:mvpn:vpna with two sub-LSPs. Both sub-
LSPs 10.1.1.3:10.1.1.1:65535:mvpn:vpna and 10.1.1.2:10.1.1.1:65535:mvpn:vpna use the same RSVP
port number (6574) as the parent point-to-multipoint LSP.
10.1.1.3
From: 10.1.1.1, LSPstate: Up, ActiveRoute: 0
LSPname: 10.1.1.3:10.1.1.1:65535:mvpn:vpna, LSPpath: Primary
P2MP LSPname: 10.1.1.1:65535:mvpn:vpna
Suggested label received: -, Suggested label sent: -
Recovery label received: -, Recovery label sent: 299968
Resv style: 1 SE, Label in: -, Label out: 299968
Time left: -, Since: Wed May 27 07:36:22 2009
Tspec: rate 0bps size 0bps peak Infbps m 20 M 1500
Port number: sender 1 receiver 6574 protocol 0
PATH rcvfrom: localclient
Adspec: sent MTU 1500
Path MTU: received 1500
PATH sentto: 10.12.100.6 (fe-0/2/3.0) 27 pkts
RESV rcvfrom: 10.12.100.6 (fe-0/2/3.0) 27 pkts
Explct route: 10.12.100.6 10.12.100.22
Record route: <self> 10.12.100.6 10.12.100.22
10.1.1.2
From: 10.1.1.1, LSPstate: Up, ActiveRoute: 0
LSPname: 10.1.1.2:10.1.1.1:65535:mvpn:vpna, LSPpath: Primary
P2MP LSPname: 10.1.1.1:65535:mvpn:vpna
Suggested label received: -, Suggested label sent: -
Recovery label received: -, Recovery label sent: 299968
Resv style: 1 SE, Label in: -, Label out: 299968
Time left: -, Since: Wed May 27 07:36:22 2009
Tspec: rate 0bps size 0bps peak Infbps m 20 M 1500
Port number: sender 1 receiver 6574 protocol 0
In Junos OS, you can configure a PE router to be a sender-site only or a receiver-site only. These options
are enabled by including the sender-site and receiver-site statements at the [edit routing-instances
routing-instance-name protocols mvpn] hierarchy level.
• A sender-site only PE router does not join the provider tunnels advertised by remote PE routers.
The commit fails if you include the receiver-site and provider-tunnel statements in the same VPN.
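For a receiver-only VPN site, the configuration might look like the following sketch. Note the absence of the provider-tunnel statement; the instance name vpna is assumed:

```
routing-instances {
    vpna {
        protocols {
            mvpn {
                receiver-site;   ## this PE router has VPN multicast receivers only
            }
        }
    }
}
```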
RELATED DOCUMENTATION
IN THIS SECTION
A sender provider edge (PE) router configured to use an inclusive PIM-sparse mode (PIM-SM) any-
source multicast (ASM) provider tunnel for a VPN creates a multicast tree (using the P-group address
configured) in the service provider network. This tree is rooted at the sender PE router and has the
receiver PE routers as the leaves. VPN multicast packets received from the local VPN source are
encapsulated by the sender PE router with a multicast generic routing encapsulation (GRE) header
containing the P-group address configured for the VPN. These packets are then forwarded on the
service provider network as normal IP multicast packets per normal P-PIM procedures. At the leaf
nodes, the GRE header is stripped and the packets are passed on to the local VRF C-PIM protocol for
further processing.
In Junos OS, a logical interface called multicast tunnel (MT) is used for GRE encapsulation and de-
encapsulation of VPN multicast packets. The multicast tunnel interface is created automatically if a
Tunnel PIC is present.
The multicast tunnel subinterfaces act as pseudo upstream or downstream interfaces between C-PIM
and P-PIM.
In the following two examples, assume that the network uses PIM-SM (ASM) signaled GRE tunnels as
the tunneling technology. Routers referenced in this topic are shown in "Understanding Next-
Generation MVPN Network Topology" on page 745.
Use the show interfaces mt-0/1/0 terse command to verify that Router PE1 has created the following
multicast tunnel subinterface. The logical interface number is 32768, indicating that this sub-unit is used
for GRE encapsulation.
Use the show interfaces mt-0/1/0 terse command to verify that Router PE2 has created the following
multicast tunnel subinterface. The logical interface number is 49152, indicating that this sub-unit is used
for GRE de-encapsulation.
The sender PE router installs a local join entry in its P-PIM database for each VRF table configured to
use PIM as the provider tunnel. The outgoing interface list (OIL) of this entry points to the core-facing
interface. Since the P-PIM entry is installed as Local, the sender PE router sets the source address to its
primary loopback IP address.
Use the show pim join extensive command to verify that Router PE1 has installed the following state in
its P-PIM database.
Group: 239.1.1.1
Source: 10.1.1.1
Flags: sparse,spt
Upstream interface: Local
Upstream neighbor: Local
Upstream state: Local Source
Keepalive timeout: 339
Downstream neighbors:
Interface: fe-0/2/3.0
10.12.100.6 State: Join Flags: S Timeout: 195
On the VRF side of the sender PE router, C-PIM installs a Local Source entry in its C-PIM database for
the active local VPN source. The OIL of this entry points to Pseudo-MVPN, indicating that the
downstream interface points to the receivers in the next-generation MVPN network. Routers referenced
in this topic are shown in "Understanding Next-Generation MVPN Network Topology" on page 745.
Use the show pim join extensive instance vpna 224.1.1.1 command to verify that Router PE1 has
installed the following entry in its C-PIM database.
Group: 224.1.1.1
Source: 192.168.1.2
Flags: sparse,spt
Upstream interface: fe-0/2/0.0
Upstream neighbor: 10.12.97.2
Upstream state: Local RP, Join to Source
Keepalive timeout: 0
Downstream neighbors:
Interface: Pseudo-MVPN
The forwarding entry corresponding to the C-PIM Local Source (or Local RP) on the sender PE router
points to the multicast tunnel encapsulation subinterface as the downstream interface. This indicates
that the local multicast data packets are encapsulated as they are passed on to the P-PIM protocol.
Use the show multicast route extensive instance vpna group 224.1.1.1 command to verify that Router
PE1 has the following multicast forwarding entry for group 224.1.1.1. The upstream interface is the PE-
CE interface and the downstream interface is the multicast tunnel encapsulation subinterface:
Group: 224.1.1.1
Source: 192.168.1.2/32
Upstream interface: fe-0/2/0.0
Downstream interface list:
mt-0/1/0.32768
Session description: ST Multicast Groups
Statistics: 7 kBps, 79 pps, 719738 packets
Next-hop ID: 262144
Upstream protocol: MVPN
Route state: Active
Forwarding state: Forwarding
Cache lifetime/timeout: forever
Wrong incoming interface notifications: 0
On the receiver PE router, multicast data packets received from the network are de-encapsulated as
they are passed through the multicast tunnel de-encapsulation interface.
The P-PIM database on the receiver PE router contains two P-joins. One is for P-RP, and the other is for
the sender PE router. For both entries, the OIL contains the multicast tunnel de-encapsulation interface
from which the GRE header is stripped. The upstream interface for P-joins is the core-facing interface
that faces towards the sender PE router.
Use the show pim join extensive command to verify that Router PE3 has the following state in its P-PIM
database. The downstream neighbor interface points to the GRE de-encapsulation subinterface:
Group: 239.1.1.1
Source: *
RP: 10.1.1.10
Flags: sparse,rptree,wildcard
Upstream interface: so-0/0/3.0
Upstream neighbor: 10.12.100.21
Upstream state: Join to RP
Downstream neighbors:
Interface: mt-1/2/0.49152
10.12.53.13 State: Join Flags: SRW Timeout: Infinity
Group: 239.1.1.1
Source: 10.1.1.1
Flags: sparse,spt
Upstream interface: so-0/0/3.0
Upstream neighbor: 10.12.100.21
Upstream state: Join to Source
Keepalive timeout: 351
Downstream neighbors:
Interface: mt-1/2/0.49152
10.12.53.13 State: Join Flags: S Timeout: Infinity
On the VRF side of the receiver PE router, C-PIM installs a join entry in its C-PIM database. The OIL of
this entry points to the local VPN interface, indicating active local receivers. The upstream protocol,
interface, and neighbor of this entry point to the next-generation-MVPN network. Routers referenced in
this topic are shown in "Understanding Next-Generation MVPN Network Topology" on page 745.
Use the show pim join extensive instance vpna 224.1.1.1 command to verify that Router PE3 has the
following state in its C-PIM database:
Group: 224.1.1.1
Source: *
RP: 10.12.53.1
Flags: sparse,rptree,wildcard
Upstream protocol: BGP
Upstream interface: Through BGP
Group: 224.1.1.1
Source: 192.168.1.2
Flags: sparse
Upstream protocol: BGP
Upstream interface: Through BGP
Upstream neighbor: Through MVPN
Upstream state: Join to Source
Keepalive timeout:
Downstream neighbors:
Interface: so-0/2/0.0
10.12.87.1 State: Join Flags: S Timeout: 195
The forwarding entry corresponding to the C-PIM entry on the receiver PE router uses the multicast
tunnel de-encapsulation subinterface as the upstream interface.
Use the show multicast route extensive instance vpna group 224.1.1.1 command to verify that Router
PE3 has installed the following multicast forwarding entry for the local receiver:
Group: 224.1.1.1
Source: 192.168.1.2/32
Upstream interface: mt-1/2/0.49152
Downstream interface list:
so-0/2/0.0
Session description: ST Multicast Groups
Statistics: 1 kBps, 10 pps, 149 packets
Next-hop ID: 262144
Upstream protocol: MVPN
Route state: Active
Forwarding state: Forwarding
Junos OS supports signaling both inclusive and selective provider tunnels by RSVP-TE point-to-
multipoint label-switched paths (LSPs). You can configure a combination of inclusive and selective
provider tunnels per VPN.
• If you configure a VPN to use an inclusive provider tunnel, the sender PE router signals one point-to-
multipoint LSP for the VPN.
• If you configure a VPN to use selective provider tunnels, the sender PE router signals a point-to-
multipoint LSP for each selective tunnel configured.
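As a sketch, an inclusive RSVP-TE provider tunnel might be configured as follows (the instance name vpna and the use of the built-in default template are assumptions, not requirements):

```
[edit routing-instances vpna]
provider-tunnel {
    rsvp-te {
        label-switched-path-template {
            default-template;
        }
    }
}
```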
Sender (ingress) PE routers and receiver (egress) PE routers play different roles in the point-to-multipoint
LSP setup. Sender PE routers are mainly responsible for initiating the parent point-to-multipoint LSP and
the sub-LSPs associated with it. Receiver PE routers are responsible for setting up state such that they
can forward packets received over a sub-LSP to the correct VRF table (binding a provider tunnel to the
VRF).
The point-to-multipoint LSP and associated sub-LSPs are signaled by the ingress PE router. The
information about the point-to-multipoint LSP is advertised to egress PE routers in the PMSI attribute
via BGP.
The ingress PE router signals point-to-multipoint sub-LSPs by originating point-to-multipoint RSVP path
messages toward egress PE routers. The ingress PE router learns the identity of the egress PE routers
from Type 1 routes installed in its <routing-instance-name>.mvpn.0 table. Each RSVP path message
carries an S2L_Sub_LSP object along with the point-to-multipoint session object. The S2L_Sub_LSP
object carries a 4-byte sub-LSP destination (egress) IP address.
In Junos OS, sub-LSPs associated with a point-to-multipoint LSP can be signaled automatically by the
system or via a static sub-LSP configuration. When they are automatically signaled, the system chooses
a name for the point-to-multipoint LSP and each sub-LSP associated with it using the following naming
convention.
Use the show mpls lsp p2mp command to verify that the following LSPs have been created by Router
PE1:
An egress PE router responds to an RSVP path message by originating an RSVP reservation (RESV)
message per normal RSVP procedures. The RESV message contains the MPLS label allocated by the
egress PE router for this sub-LSP and is forwarded hop by hop toward the ingress PE router, thus setting
up state on the network. Routers referenced in this topic are shown in "Understanding Next-Generation
MVPN Network Topology" on page 745.
Use the show rsvp session command to verify that Router PE2 has assigned label 299840 for the sub-
LSP 10.1.1.2:10.1.1.1:65535:mvpn:vpna:
Use the show mpls lsp p2mp command to verify that Router PE3 has assigned label 16 for the sub-LSP
10.1.1.3:10.1.1.1:65535:mvpn:vpna:
The egress PE router installs a forwarding entry in its mpls table for the label it allocated for the sub-LSP.
The MPLS label is installed with a pop operation (a pop operation removes the top MPLS label), and the
packet is passed on to the VRF table for a second route lookup. The second lookup on the egress PE
router is necessary for the VPN multicast data packets to be processed inside the VRF table using
normal C-PIM procedures.
Use the show route table mpls label 16 command to verify that Router PE3 has installed the following
label entry in its MPLS forwarding table:
16 *[VPN/0] 03:03:17
to table vpna.inet.0, Pop
In Junos OS, VPN multicast routing entries are stored in the <routing-instance-name>.inet.1 table,
which is where the second route lookup occurs. In the example above, even though vpna.inet.0 is listed
as the routing table where the second lookup happens after the pop operation, internally the lookup is
pointed to the vpna.inet.1 table. Routers referenced in this topic are shown in "Understanding Next-
Generation MVPN Network Topology" on page 745.
Use the show route table vpna.inet.1 command to verify that Router PE3 contains the following entry in
its VPN multicast routing table:
224.1.1.1,192.168.1.2/32*[MVPN/70] 00:04:10
Multicast (IPv4)
Use the show multicast route extensive instance vpna command to verify that Router PE3 contains the
following VPN multicast forwarding entry corresponding to the multicast routing entry for the local
join. The upstream interface points to lsi.0 and the downstream interface (OIL) points to the so-0/2/0.0
interface (toward local receivers). The Upstream protocol value is MVPN because the VPN multicast
source is reachable via the next-generation MVPN network. The lsi.0 interface is similar to the multicast
tunnel interface used with PIM-based provider tunnels and is used here for removing the top MPLS
header.
Group: 224.1.1.1
Source: 192.168.1.2/32
Upstream interface: lsi.0
Downstream interface list:
so-0/2/0.0
Session description: ST Multicast Groups
Statistics: 1 kBps, 10 pps, 3472 packets
Next-hop ID: 262144
Upstream protocol: MVPN
Route state: Active
Forwarding state: Forwarding
Cache lifetime/timeout: forever
Wrong incoming interface notifications: 0
Family: INET6
The need for a double route lookup on the VPN packet header imposes two additional configuration
requirements on the egress PE routers when provider tunnels are signaled by RSVP-TE.
First, since the top MPLS label used for the point-to-multipoint sub-LSP is actually tied to the VRF table
on the egress PE routers, the penultimate-hop popping (PHP) operation is not used for next-generation
MVPNs. Only ultimate-hop popping is used. PHP allows the penultimate router (router before the
egress PE router) to remove the top MPLS label. PHP works well for VPN unicast data packets because
they typically carry two MPLS labels: one for the VPN and one for the transport LSP.
After the LSP label is removed, unicast VPN packets still have a VPN label that can be used for
determining the VPN to which the packets belong. VPN multicast data packets, on the other hand, carry
only one MPLS label that is directly tied to the VPN. Therefore, the MPLS label carried by VPN multicast
packets must be preserved until the packets reach the egress PE router. Normally, PHP must be disabled
through manual configuration.
To simplify the configuration, PHP is disabled by default on Juniper Networks PE routers when you
include the mvpn statement at the [edit routing-instances routing-instance-name protocols] hierarchy
level. PHP is also disabled by default when you include the vrf-table-label statement at the [edit
routing-instances routing-instance-name] hierarchy level.
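A minimal sketch of the two statements mentioned above, using the assumed instance name vpna:

```
[edit routing-instances vpna]
vrf-table-label;
protocols {
    mvpn;
}
```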
Second, in Junos OS, VPN labels associated with a VRF table can be allocated in two ways.
• Allocate a unique label for each VPN next hop (PE-CE interface). This is the default behavior.
• Allocate one label for the entire VRF table, which requires additional configuration. Only allocating a
label for the entire VRF table allows a second lookup on the VPN packet’s header. Therefore, PE
routers supporting next-generation-MVPN services must be configured to allocate labels for the VRF
table. There are two ways to do this as shown in Figure 123 on page 1080.
• One is by including a virtual tunnel interface named vt at the [edit routing-instances routing-
instance-name interfaces] hierarchy level, which requires a Tunnel PIC.
• The other is by including the vrf-table-label statement at the [edit routing-instances routing-
instance-name] hierarchy level.
Both of these options enable an egress PE router to perform two route lookups. However, there are
some differences in the way in which the second lookup is done.
If the vt interface is used, the allocated label is installed in the mpls table with a pop operation and a
forwarding next hop pointing to the vt interface.
Use the show route table mpls label 299840 command to verify that Router PE2 has installed the
following entry and uses a vt interface in the mpls table. The label associated with the point-to-
multipoint sub-LSP (299840) is installed with a pop and a forward operation with the vt-0/1/0.0
interface being the next hop. VPN multicast packets received from the core exit the vt-0/1/0.0 interface
without their MPLS header, and the egress Router PE2 does a second lookup on the packet header in
the vpna.inet.1 table.
If the vrf-table-label is configured, the allocated label is installed in the mpls table with a pop operation,
and the forwarding entry points to the <routing-instance-name>.inet.0 table (which internally triggers
the second lookup to be done in the <routing-instance-name>.inet.1 table).
Use the show route table mpls label 16 command to verify that Router PE3 has installed the following
entry in its mpls table and uses the vrf-table-label statement:
16 *[VPN/0] 03:03:17
to table vpna.inet.0, Pop
Configuring label allocation for each VRF table affects both unicast VPN and MVPN routes. However, if
per-VRF allocation is configured via the vt interface, you can restrict per-VRF label allocation to MVPN
routes only. This behavior is configured via the multicast and unicast keywords at the [edit routing-
instances routing-instance-name interface vt-x/y/z.0] hierarchy level.
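As a sketch (the vt FPC/PIC/port numbers are assumptions), restricting per-VRF label allocation on a vt interface to multicast routes looks like this:

```
[edit routing-instances vpna]
interface vt-0/1/0.0 {
    multicast;
}
```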
Note that including the vrf-table-label statement enables per-VRF label allocation for both unicast and
MVPN routes and cannot be turned off for either type of routes (it is either on or off for both).
If a PE router is a bud router, meaning it has local receivers and also forwards MPLS packets received
over a point-to-multipoint LSP downstream to other P and PE routers, then there is a difference in how
the vrf-table-label and vt statements work. When the vrf-table-label statement is included, the bud PE
router receives two copies of the packet from the penultimate router: one to be forwarded to local
receivers and the other to be forwarded to downstream P and PE routers. When the vt statement is
included, the PE router receives a single copy of the packet.
On the ingress PE router, local VPN data packets are encapsulated with the MPLS label received from
the network for sub-LSPs.
Use the show rsvp session command to verify that on the ingress Router PE1, VPN multicast data
packets are encapsulated with MPLS label 300016 (advertised by Router P1 per normal RSVP RESV
procedures) and forwarded toward Router P1 down the sub-LSPs 10.1.1.3:10.1.1.1:65535:mvpn:vpna
and 10.1.1.2:10.1.1.1:65535:mvpn:vpna.
RFC 4875 describes a branch node as “an LSR that replicates the incoming data on to one or more
outgoing interfaces.” On a branch router, the incoming data carrying an MPLS label is replicated onto
one or more outgoing interfaces that can use different MPLS labels. Branch nodes keep track of
incoming and outgoing labels associated with point-to-multipoint LSPs. Routers referenced in this topic
are shown in "Understanding Next-Generation MVPN Network Topology" on page 745.
Use the show rsvp session command to verify that branch node P1 has the incoming label 300016 and
outgoing labels 16 for sub-LSP 10.1.1.3:10.1.1.1:65535:mvpn:vpna (to Router PE3) and 299840 for sub-
LSP 10.1.1.2:10.1.1.1:65535:mvpn:vpna (to Router PE2).
Use the show route table mpls label 300016 command to verify that the corresponding forwarding
entry on Router P1 shows that the packets coming in with one MPLS label (300016) are swapped with
labels 16 and 299840 and forwarded out through their respective interfaces (so-0/0/3.0 and so-0/0/1.0
respectively toward Router PE2 and Router PE3).
Selective Tunnels: Type 3 S-PMSI Autodiscovery and Type 4 Leaf Autodiscovery Routes
Selective provider tunnels are configured by including the selective statement at the [edit routing-
instances routing-instance-name provider-tunnel] hierarchy level. You can configure a threshold to
trigger the signaling of a selective provider tunnel. Including the selective statement triggers the
following events.
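A sketch of such a selective tunnel configuration, using the (192.168.1.2, 224.1.1.1) pair from this example (the threshold value and the use of the default template are assumptions):

```
[edit routing-instances vpna provider-tunnel]
selective {
    group 224.1.1.1/32 {
        source 192.168.1.2/32 {
            rsvp-te {
                label-switched-path-template {
                    default-template;
                }
            }
            threshold-rate 10;
        }
    }
}
```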
First, the ingress PE router originates a Type 3 S-PMSI autodiscovery route. The S-PMSI autodiscovery
route contains the route distinguisher of the VPN where the tunnel is configured and the (C-S, C-G) pair
that uses the selective provider tunnel.
In this section assume that Router PE1 is signaling a selective tunnel for (192.168.1.2, 224.1.1.1) and
Router PE3 has an active receiver.
Use the show route table vpna.mvpn.0 | find 3: command to verify that Router PE1 has installed the
following Type 3 route after the selective provider tunnel is configured:
Second, the ingress PE router attaches a PMSI attribute to a Type 3 route. This PMSI attribute is similar
to the PMSI attribute advertised for inclusive provider tunnels with one difference: the PMSI attribute
carried with Type 3 routes has its Flags bit set to Leaf Information Required. This means that the sender
PE router is requesting receiver PE routers to send a Type 4 route if they have active receivers for the
(C-S, C-G) carried in the Type 3 route. Also, remember that for each selective provider tunnel, a new
point-to-multipoint LSP and associated sub-LSPs are signaled. The PMSI attribute of a Type 3 route carries
information about the new point-to-multipoint LSP.
Use the show route advertising-protocol bgp 10.1.1.3 detail table vpna.mvpn | find 3: command to
verify that Router PE1 advertises the following Type 3 route and the PMSI attribute. The point-to-
multipoint session object included in the PMSI attribute has a different port number (29499) than the
one used for the inclusive tunnel (6574), indicating that this is a new point-to-multipoint tunnel.
Egress PE routers with active receivers should respond to a Type 3 route by originating a Type 4 leaf
autodiscovery route. A leaf autodiscovery route contains a route key and the originating router’s IP
address fields. The Route Key field of the leaf autodiscovery route contains the original Type 3 route
that is received. The originating router’s IP address field is set to the router ID of the PE router
originating the leaf autodiscovery route.
The ingress PE router adds each egress PE router that originated the leaf autodiscovery route as a leaf
(destination of the sub-LSP for the selective point-to-multipoint LSP). Similarly, the egress PE router that
originated the leaf autodiscovery route sets up forwarding state to start receiving data through the
selective provider tunnel.
Egress PE routers advertise Type 4 routes with a route target that is specific to the PE router signaling
the selective provider tunnel. This route target is in the form of target:<rid of the sender PE>:0. The
sender PE router (the PE router signaling the selective provider tunnel) applies a special internal import
policy to Type 4 routes that looks for a route target with its own router ID. Routers referenced in this
topic are shown in "Understanding Next-Generation MVPN Network Topology" on page 745.
Use the show route table vpna.mvpn | find 4:3: command to verify that Router PE3 originates the
following Type 4 route. The local Type 4 route is installed by the MVPN module.
Use the show route advertising-protocol bgp 10.1.1.1 table vpna.mvpn detail | find 4:3: command to
verify that Router PE3 has advertised the local Type 4 route with the following route target community.
This route target carries the IP address of the sender PE router (10.1.1.1) followed by a 0.
* 4:3:10.1.1.1:1:32:192.168.1.2:32:224.1.1.1:10.1.1.1:10.1.1.3/240 (1 entry, 1
announced)
BGP group int type Internal
Nexthop: Self
Flags: Nexthop Change
Localpref: 100
AS path: [65000] I
Communities: target:10.1.1.1:0
For each selective provider tunnel configured, a Type 3 route is advertised and a new point-to-
multipoint LSP is signaled. Point-to-multipoint LSPs created by Junos OS for selective provider tunnels
are named using the following naming conventions:
Use the show mpls lsp p2mp command to verify that Router PE1 signals point-to-multipoint LSP
10.1.1.1:65535:mv5:vpna with one sub-LSP 10.1.1.3:10.1.1.1:65535:mv5:vpna. The first point-to-
multipoint LSP 10.1.1.1:65535:mvpn:vpna is the LSP created for the inclusive tunnel.
RELATED DOCUMENTATION
Service providers have traditionally adopted Option A VPN deployment scenarios instead of Option B
because Option B is unable to ensure that the provider network is protected in the event of incorrect
route distinguisher (RD) advertisements or spoofed MPLS labels.
Inter-AS Option B, however, can provide VPN services built using BGP-based Layer 3 VPNs. It is more
scalable than the Option A alternative because inter-autonomous system (AS) VPN routes are stored
only in the BGP RIBs, whereas Option A results in AS boundary routers (ASBRs) creating multiple VRF
tables, each of which includes all IP routes.
Inter-AS Option B is described in RFC 4364, BGP/MPLS IP Virtual Private Networks.
Junos OS Release 16.1 and later address the security shortcomings attributed to Option B. New features
provide policy-based RD filtering and protection against MPLS label spoofing, ensuring that only RDs
generated within the service provider domain are accepted. At the same time, the filtering can be used
to filter loopback VPN-IPv4 addresses generated by PIM Rosen implementations from Cisco PEs, which
can cause routing issues and traffic loss if imported into customer Virtual Routing and Forwarding (VRF)
tables. These features are supported on M, MX, and T Series routers when using MPC1, MPC2, and
MPC3D MPCs.
Inter-AS Option B uses BGP to signal VPN labels between ASBRs. The base MPLS tunnels are local to
each AS, and stacked tunnels run end to end between PE routers in the different ASs for VPN routes.
The Junos OS anti-spoofing support for Option B implementations works by creating distinct MPLS
forwarding table contexts. A separate mpls.0 table is created for each set of VPN ASBR peers. As such,
each MPLS forwarding table contains only the relevant labels advertised to the group of inter-AS
Option B peers. Packets received with a different MPLS label are dropped. Option B peers are reachable
through local interfaces that have been configured as part of the MFI (a new type of routing instance
created for inter-AS BGP neighbors that require MPLS spoof-protection), so MPLS packets arriving from
the Option B peers are resolved in the instance-specific MPLS forwarding table.
To enable anti-spoofing support for MPLS labels, configure separate instances of the new routing
instance type, mpls-forwarding, on all MPLS-enabled Inter-AS links (which must be running a supported
MPC). Then configure each Option B peer to use this routing instance as its forwarding-context under
BGP. This forms the transport session with the peers and performs forwarding functions for traffic from
peers. Spoof checking occurs between any peers with different mpls-forwarding MFIs. For peers with
the same forwarding-context, spoof checking is not necessary because the peers share the same MFI
and its mpls.0 table.
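A hedged sketch of this configuration (the instance name mfi1, the interface, and the BGP group and neighbor values are illustrative assumptions, not values from this topic):

```
[edit routing-instances mfi1]
instance-type mpls-forwarding;
interface so-0/0/0.0;

[edit protocols bgp group ebgp-option-b]
neighbor 192.0.2.1 {
    forwarding-context mfi1;
}
```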
Note that anti-spoofing support for MPLS labels is also supported on mixed networks, that is, those that
include Juniper network devices that are not running a supported MPC, as long as the MPLS-enabled
Inter-AS link is on a supported MPC. Any existing label-switched interface (LSI) features in the network,
such as vrf-table-label, will continue to work as usual.
Inter-AS Option B supports graceful RE switchover (GRES), nonstop active routing (NSR), and in service
software upgrades (unified ISSU).
RELATED DOCUMENTATION
instance-type
forwarding-context
CHAPTER 22
IN THIS CHAPTER
Example: Configuring PIM Join Load Balancing on Draft-Rosen Multicast VPN | 1098
Example: Configuring PIM Join Load Balancing on Next-Generation Multicast VPN | 1110
Large-scale service providers often have to meet the dynamic requirements of rapidly growing,
worldwide virtual private network (VPN) markets. Service providers use the VPN infrastructure to
deliver sophisticated services, such as video and voice conferencing, over highly secure, resilient
networks. These services are usually loss-sensitive or delay-sensitive, and their data packets need to be
delivered over a large-scale IP network in real time. The use of IP Multicast bandwidth-conserving
technology has enabled service providers to exceed the most stringent service-level agreements (SLAs)
and resiliency requirements.
IP multicast enables service providers to optimize network utilization while offering new revenue-
generating value-added services, such as voice, video, and collaboration-based applications. IP multicast
applications are becoming increasingly popular among enterprises, and as new applications start using
multicast to deploy high-bandwidth and mission-critical services, new challenges arise for deploying IP
multicast in the network.
The multipath PIM join load-balancing feature provides bandwidth efficiency by utilizing unequal paths
toward a destination, improves scalability for large service providers, and minimizes service disruption.
The large-scale demands of service providers for IP access require Layer 3 VPN composite next hops
along with external and internal BGP (EIBGP) VPN load balancing. The multipath PIM join load-balancing
feature meets the large-scale requirements of enterprises by enabling l3vpn-composite-nh to be turned
on along with EIBGP load balancing.
When the service provider network does not have the multipath PIM join load-balancing feature
enabled on the provider edge (PE) routers, a hash-based algorithm is used to determine the best route to
transmit multicast datagrams throughout the network. With hash-based join load balancing, adding new
PE routers to the candidate upstream toward the destination results in PIM join messages being
redistributed to new upstream paths. If the number of join messages is large, network performance is
impacted because join messages are being sent to the new reverse path forwarding (RPF) neighbor and
prune messages are being sent to the old RPF neighbor. In next-generation multicast virtual private
network (MVPN), this results in multicast data messages being withdrawn from old upstream paths and
advertised on new upstream paths, impacting network performance.
RELATED DOCUMENTATION
By default, PIM join messages are sent toward a source based on the RPF routing table check. If there is
more than one equal-cost path toward the source, then one upstream interface is chosen to send the
join message. This interface is also used for all downstream traffic, so even though there are alternative
interfaces available, the multicast load is concentrated on one upstream interface and routing device.
For PIM sparse mode, you can configure PIM join load balancing to spread join messages and traffic
across equal-cost upstream paths (interfaces and routing devices) provided by unicast routing toward a
source. PIM join load balancing is only supported for PIM sparse mode configurations.
PIM join load balancing is supported on draft-rosen multicast VPNs (also referred to as dual PIM
multicast VPNs) and multiprotocol BGP-based multicast VPNs (also referred to as next-generation
Layer 3 VPN multicast). When PIM join load balancing is enabled in a draft-rosen Layer 3 VPN scenario,
the load balancing is achieved based on the join counts for the far-end PE routing devices, not for any
intermediate P routing devices.
If an internal BGP (IBGP) multipath forwarding VPN route is available, the Junos OS uses the multipath
forwarding VPN route to send join messages to the remote PE routers to achieve load balancing over
the VPN.
By default, when multiple PIM joins are received for different groups, all joins are sent to the same
upstream gateway chosen by the unicast routing protocol. Even if there are multiple equal-cost paths
available, these alternative paths are not utilized to distribute multicast traffic from the source to the
various groups.
When PIM join load balancing is configured, the PIM joins are distributed equally among all equal-cost
upstream interfaces and neighbors. Every new join triggers the selection of the least-loaded upstream
interface and neighbor. If there are multiple neighbors on the same interface (for example, on a LAN),
join load balancing maintains a value for each of the neighbors and distributes multicast joins (and
downstream traffic) among these as well.
Join counts for interfaces and neighbors are maintained globally, not on a per-source basis. Therefore,
there is no guarantee that joins for a particular source are load-balanced. However, the joins for all
sources and all groups known to the routing device are load-balanced. There is also no way to
administratively give preference to one neighbor over another: all equal-cost paths are treated the same
way.
You can configure PIM join load balancing globally or for a routing instance. This example shows the
global configuration.
You configure PIM join load balancing on the non-RP routers in the PIM domain.
1. Determine if there are multiple paths available for a source (for example, an RP) with the output of
the show pim join extensive or show pim source commands.
Group: 224.1.1.1
Source: *
RP: 10.255.245.6
Flags: sparse,rptree,wildcard
Upstream interface: t1-0/2/3.0
Upstream neighbor: 192.168.38.57
Upstream state: Join to RP
Downstream neighbors:
Interface: t1-0/2/1.0
192.168.38.16 State: Join Flags: SRW Timeout: 164
Group: 224.2.127.254
Source: *
RP: 10.255.245.6
Flags: sparse,rptree,wildcard
Upstream interface: so-0/3/0.0
Upstream neighbor: 192.168.38.47
Upstream state: Join to RP
Downstream neighbors:
Interface: t1-0/2/3.0
192.168.38.16 State: Join Flags: SRW Timeout: 164
Note that for this router, the RP at IP address 10.255.245.6 is the source for two multicast groups:
224.1.1.1 and 224.2.127.254. This router has two equal-cost paths through two different upstream
interfaces (t1-0/2/3.0 and so-0/3/0.0) with two different neighbors (192.168.38.57 and
192.168.38.47). This router is a good candidate for PIM join load balancing.
2. On the non-RP router, configure PIM sparse mode and join load balancing.
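For example, the global configuration looks like the following sketch (the interface all shorthand is an assumption; you can instead list specific PIM interfaces):

```
[edit protocols pim]
interface all {
    mode sparse;
}
join-load-balance;
```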
Note that the two equal-cost paths shown by the show pim interfaces command now have nonzero
join counts. If the counts differ by more than one and were zero (0) when load balancing commenced,
this indicates an error (joins that existed before load balancing was enabled are not redistributed). The join count also appears in the
show pim neighbors detail output:
Interface: t1-0/2/3.0
Note that the join count is nonzero on the two load-balanced interfaces toward the upstream
neighbors.
PIM join load balancing only takes effect when the feature is configured. Prior joins are not
redistributed to achieve perfect load balancing. In addition, if an interface or neighbor fails, the new
joins are redistributed among remaining active interfaces and neighbors. However, when the
interface or neighbor is restored, prior joins are not redistributed. The clear pim join-distribution
command redistributes the existing flows to new or restored upstream neighbors. Redistributing the
existing flows causes traffic to be disrupted, so we recommend that you perform PIM join
redistribution during a maintenance window.
RELATED DOCUMENTATION
A multicast virtual private network (MVPN) is a technology to deploy the multicast service in an existing
MPLS/BGP VPN.
Next-generation MVPNs constitute the next evolution after the Draft-Rosen MVPN and provide a
simpler solution for administrators who want to configure multicast over Layer 3 VPNs. A Draft-Rosen
MVPN uses Protocol Independent Multicast (PIM) for customer multicast (C-multicast) signaling, and a
next-generation MVPN uses BGP for C-multicast signaling.
Multipath routing in an MVPN is applied to make data forwarding more robust against network failures
and to minimize shared backup capacities when resilience against network failures is required.
By default, PIM join messages are sent toward a source based on the reverse path forwarding (RPF)
routing table check. If there is more than one equal-cost path toward the source [S, G] or rendezvous
point (RP) [*, G], then one upstream interface is used to send the join messages. The upstream path can
be:
• A single active external BGP (EBGP) path when both EBGP and internal BGP (IBGP) paths are
present.
With the introduction of the multipath PIM join load-balancing feature, customer PIM (C-PIM) join
messages are load-balanced in the following ways:
• In the case of a Draft-Rosen MVPN, unequal EBGP and IBGP paths are utilized.
• In the case of a next-generation MVPN, available IBGP paths are utilized when no EBGP path is
present, and available EBGP paths are utilized when both EBGP and IBGP paths are present.
This feature is applicable to IPv4 C-PIM join messages over the Layer 3 MVPN service.
By default, a customer source (C-S) or a customer RP (C-RP) is considered remote if the active rt_entry is
a secondary route and the primary route is present in a different routing instance. This determination is
made without taking into consideration the (C-*,G) or (C-S,G) state for which the check is being
performed. The multipath PIM join load-balancing feature determines whether a source (or RP) is remote
by taking into account the associated (C-*,G) or (C-S,G) state.
When the provider network does not have provider edge (PE) routers with the multipath PIM join load-
balancing feature enabled, hash-based join load balancing is used. Although the decision to configure
this feature does not impact PIM or overall system performance, network performance can be affected
temporarily if the feature is not enabled.
With hash-based join load balancing, adding new PE routers to the candidate upstream toward the C-S
or C-RP results in C-PIM join messages being redistributed to new upstream paths. If the number of join
messages is large, network performance is impacted because of join messages being sent to the new
RPF neighbor and prune messages being sent to the old RPF neighbor. In next-generation MVPN, this
results in BGP C-multicast data messages being withdrawn from old upstream paths and advertised on
new upstream paths, impacting network performance.
In Figure 124 on page 1096, PE1 and PE2 are the upstream PE routers. Router PE1 learns the route
toward the source from both an EBGP peer, the customer edge router CE1, and an IBGP peer, the
PE2 router.
• If the PE routers run the Draft-Rosen MVPN, the PE1 router distributes C-PIM join messages
between the EBGP path to the CE1 router and the IBGP path to the PE2 router. The join messages
on the IBGP path are sent over a multicast tunnel interface through which the PE routers establish C-
PIM adjacency with each other.
If a PE router loses one or all EBGP paths toward the source (or RP), the C-PIM join messages that
were previously using the EBGP path are moved to a multicast tunnel interface, and the RPF
neighbor on the multicast tunnel interface is selected based on a hash mechanism.
On discovering the first EBGP path toward the source (or RP), only new join messages get load-
balanced across EBGP and IBGP paths, whereas the existing join messages on the multicast tunnel
interface remain unaffected.
• If the PE routers run the next-generation MVPN, the PE1 router sends C-PIM join messages directly
to the CE1 router over the EBGP path. There is no C-PIM adjacency between the PE1 and PE2
routers. Router PE3 distributes the C-PIM join messages between the two IBGP paths to PE1 and
PE2. The Bytewise-XOR hash algorithm is used to send the C-multicast data according to Internet
draft draft-ietf-l3vpn-2547bis-mcast-bgp, BGP Encodings and Procedures for Multicast in
MPLS/BGP IP VPNs.
Because the multipath PIM join load-balancing feature in a Draft-Rosen MVPN utilizes unequal EBGP
and IBGP paths to the destination, loops can be created when forwarding unicast packets to the
destination. To avoid or break such loops:
• Traffic arriving from a core or master instance should not be forwarded back to the core-facing
interfaces.
• A single multicast tunnel interface should be selected as either the upstream interface or the
downstream interface, but not both.
As a result of the loop avoidance mechanism, join messages arriving from an EBGP path get load-
balanced across EIBGP paths as expected, whereas join messages from an IBGP path are constrained to
choose the EBGP path only.
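This constraint can be sketched in Python (a minimal illustration only; the function and path labels are
hypothetical and not part of Junos OS):

```python
def candidate_upstreams(arrived_on_ibgp, ebgp_paths, ibgp_paths):
    """Candidate upstream paths for a C-PIM join under the Draft-Rosen
    loop-avoidance rule: joins that arrived over an EBGP path may be
    balanced across all EIBGP paths, while joins that arrived over an
    IBGP (multicast tunnel) path are restricted to EBGP paths only."""
    if arrived_on_ibgp:
        return list(ebgp_paths)
    return list(ebgp_paths) + list(ibgp_paths)
```

Because at least one direction is always forced onto the EBGP path, two PE routers cannot each pick the
other as the upstream neighbor for the same flow.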
In Figure 124 on page 1096, if the CE2 host sends unicast data traffic to the CE1 host, the PE1 router
could send the traffic to the PE2 router over the MPLS core due to unicast traffic load balancing. A data
forwarding loop is prevented by ensuring that PE2 does not forward traffic back on the MPLS core
because of the load-balancing algorithm.
In the case of C-PIM join messages, assuming that both the CE2 host and the CE3 host are interested in
receiving traffic from the source (S, G), and if both PE1 and PE2 choose each other as the RPF neighbor
toward the source, then a multicast tree cannot be formed completely. This feature implements
mechanisms to prevent such join loops in the multicast control plane in a Draft-Rosen MVPN scenario.
NOTE: Disruption of multicast traffic or creation of join loops can occur, resulting in a multicast
distribution tree (MDT) not being formed properly due to one of the following reasons:
• During a graceful Routing Engine switchover (GRES), the EIBGP path selection for C-PIM join
messages can vary, because the upstream interface selection is performed again for the new
Routing Engine based on the join messages it receives from the CE and PE neighbors. This can
lead to disruption of multicast traffic depending on the number of join messages received and
the load on the network at the time of the graceful restart. However, nonstop active routing
(NSR) is not supported and has no impact on the multicast traffic in a Draft-Rosen MVPN
scenario.
• Any PE router in the provider network is running another vendor’s implementation that does
not apply the same hashing algorithm implemented in this feature.
• The multipath PIM join load-balancing feature has not been configured properly.
IN THIS SECTION
Requirements | 1099
Configuration | 1104
Verification | 1108
This example shows how to configure multipath routing for external and internal virtual private network
(VPN) routes with unequal interior gateway protocol (IGP) metrics, and Protocol Independent Multicast
(PIM) join load balancing on provider edge (PE) routers running Draft-Rosen multicast VPN (MVPN). This
feature allows customer PIM (C-PIM) join messages to be load-balanced across external and internal
BGP (EIBGP) upstream paths when the PE router has both external BGP (EBGP) and internal BGP (IBGP)
paths toward the source or rendezvous point (RP).
Requirements
This example requires the following hardware and software components:
• Three routers that can be a combination of M Series Multiservice Edge Routers, MX Series 5G
Universal Routing Platforms, or T Series Core Routers.
• OSPF
• MPLS
• LDP
• PIM
• BGP
During load balancing, if a PE router loses one or more EBGP paths toward the source (or RP), the C-PIM
join messages that were previously using the EBGP path are moved to a multicast tunnel interface, and
the reverse path forwarding (RPF) neighbor on the multicast tunnel interface is selected based on a hash
mechanism.
On discovering the first EBGP path toward the source (or RP), only the new join messages get load-
balanced across EIBGP paths, whereas the existing join messages on the multicast tunnel interface
remain unaffected.
Though the primary goal for multipath PIM join load balancing is to utilize unequal EIBGP paths for
multicast traffic, potential join loops can be avoided if a PE router chooses only the EBGP path when
there are one or more join messages for different groups from a remote PE router. If the remote PE
router’s join message arrives after the PE router has already chosen IBGP as the upstream path, then the
potential loops can be broken by changing the selected upstream path to EBGP.
NOTE: During a graceful Routing Engine switchover (GRES), the EIBGP path selection for C-PIM
join messages can vary, because the upstream interface selection is performed again for the new
Routing Engine based on the join messages it receives from the CE and PE neighbors. This can
lead to disruption of multicast traffic depending on the number of join messages received and
the load on the network at the time of the graceful restart. However, the nonstop active routing
feature is not supported and has no impact on the multicast traffic in a Draft-Rosen MVPN
scenario.
In this example, PE1 and PE2 are the upstream PE routers for which the multipath PIM join load-
balancing feature is configured. Routers PE1 and PE2 have one EBGP path and one IBGP path each
toward the source. The Source and Receiver attached to customer edge (CE) routers are Free BSD hosts.
On PE routers that have EIBGP paths toward the source (or RP), such as PE1 and PE2, PIM join load
balancing is performed as follows:
1. The existing join-count-based load balancing is performed such that the algorithm first selects the
least loaded C-PIM interface. If there is equal or no load on all the C-PIM interfaces, the join
messages get distributed equally across the available upstream interfaces.
In Figure 125 on page 1103, if the PE1 router receives PIM join messages from the CE2 router, and if
there is equal or no load on both the EBGP and IBGP paths toward the source, the join messages get
load-balanced on the EIBGP paths.
2. If the selected least loaded interface is a multicast tunnel interface, then there can be a potential join
loop if the downstream list of the customer join (C-join) message already contains the multicast
tunnel interface. In such a case, the least loaded interface among EBGP paths is selected as the
upstream interface for the C-join message.
Assuming that the IBGP path is the least loaded, the PE1 router sends the join messages to PE2 using
the IBGP path. If PIM join messages from the PE3 router arrive on PE1, then the downstream list of
the C-join messages for PE3 already contains a multicast tunnel interface, which can lead to a
potential join loop, because both the upstream and downstream interfaces are multicast tunnel
interfaces. In this case, PE1 uses only the EBGP path to send the join messages.
3. If the selected least loaded interface is a multicast tunnel interface and the multicast tunnel interface
is not present in the downstream list of the C-join messages, the loop prevention mechanism is not
necessary. If any PE router has already advertised data multicast distribution tree (MDT) type, length,
and values (TLVs), that PE router is selected as the upstream neighbor.
When the PE1 router sends the join messages to PE2 using the least loaded IBGP path, and if PE3
sends its join messages to PE2, no join loop is created.
4. If no data MDT TLV corresponds to the C-join message, the least loaded neighbor on a multicast
tunnel interface is selected as the upstream interface.
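Steps 1 and 2 of this selection can be sketched in Python (an illustrative sketch only; the helper is not a
Junos OS API, and multicast tunnel interfaces are recognized here by the conventional mt- name prefix):

```python
def select_upstream(join_counts, downstream_ifaces):
    """Pick an upstream interface for a C-join.

    join_counts maps each candidate upstream interface to its current
    join count; downstream_ifaces is the join's downstream list. Assumes
    at least one non-tunnel (EBGP-facing) candidate exists."""
    # Step 1: select the least-loaded candidate interface.
    least = min(join_counts, key=join_counts.get)
    # Step 2: if the winner is a multicast tunnel interface that already
    # appears in the downstream list, a join loop is possible, so fall
    # back to the least-loaded non-tunnel (EBGP-facing) interface.
    if least.startswith("mt-") and least in downstream_ifaces:
        ebgp_only = {i: c for i, c in join_counts.items()
                     if not i.startswith("mt-")}
        least = min(ebgp_only, key=ebgp_only.get)
    return least
```

Steps 3 and 4 (preferring a PE router that has already advertised data MDT TLVs) would refine the
neighbor choice on the multicast tunnel interface and are omitted from this sketch.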
On PE routers that have only IBGP paths toward the source (or RP), such as PE3, PIM join load balancing
is performed as follows:
1. The PE router only finds a multicast tunnel interface as the RPF interface, and load balancing is done
across the C-PIM neighbors on a multicast tunnel interface.
Router PE3 load-balances PIM join messages received from the CE4 router across the IBGP paths to
the PE1 and PE2 routers.
2. If any PE router has already advertised data MDT TLVs corresponding to the C-join messages, that PE
router is selected as the RPF neighbor.
For a particular C-multicast flow, at least one of the PE routers having EIBGP paths toward the source
(or RP) must use only the EBGP path to avoid or break join loops. As a result of the loop avoidance
mechanism, a PE router is constrained to choose among EBGP paths when a multicast tunnel interface
is already present in the downstream list.
In Figure 125 on page 1103, assuming that the CE2 host is interested in receiving traffic from the Source
and CE2 initiates multiple PIM join messages for different groups (Group 1 with group address
203.0.113.1, and Group 2 with group address 203.0.113.2), the join messages for both groups arrive on
the PE1 router.
Router PE1 then equally distributes the join messages between the EIBGP paths toward the Source.
Assuming that Group 1 join messages are sent to the CE1 router directly using the EBGP path, and
Group 2 join messages are sent to the PE2 router using the IBGP path, PE1 and PE2 become the RPF
neighbors for Group 1 and Group 2 join messages, respectively.
When the CE3 router initiates Group 1 and Group 2 PIM join messages, the join messages for both
groups arrive on the PE2 router. Router PE2 then equally distributes the join messages between the
EIBGP paths toward the Source. Since PE2 is the RPF neighbor for Group 2 join messages, it sends the
Group 2 join messages directly to the CE1 router using the EBGP path. Group 1 join messages are sent
to the PE1 router using the IBGP path.
However, if the CE4 router initiates multiple Group 1 and Group 2 PIM join messages, there is no
control over how these join messages received on the PE3 router get distributed to reach the Source.
The selection of the RPF neighbor by PE3 can affect PIM join load balancing on EIBGP paths.
• If PE3 sends Group 1 join messages to PE1 and Group 2 join messages to PE2, there is no change in
RPF neighbor. As a result, no join loops are created.
• If PE3 sends Group 1 join messages to PE2 and Group 2 join messages to PE1, there is a change in
the RPF neighbor for the different groups resulting in the creation of join loops. To avoid potential
join loops, PE1 and PE2 do not consider IBGP paths to send the join messages received from the PE3
router. Instead, the join messages are sent directly to the CE1 router using only the EBGP path.
The loop avoidance mechanism in a Draft-Rosen MVPN has the following limitations:
• Because the timing of arrival of join messages on remote PE routers determines the distribution of
join messages, the distribution could be sub-optimal in terms of join count.
• Because join loops cannot always be avoided and can occur due to the timing of join messages, the
subsequent RPF interface change leads to loss of multicast traffic. This can be avoided by
implementing the PIM make-before-break feature.
The PIM make-before-break feature is an approach to detect and break C-PIM join loops in a Draft-
Rosen MVPN. The C-PIM join messages are sent to the new RPF neighbor after establishing the PIM
neighbor relationship, but before updating the related multicast forwarding entry. Though the
upstream RPF neighbor would have updated its multicast forwarding entry and started sending the
multicast traffic downstream, the downstream router does not forward the multicast traffic (because
of RPF check failure) until the multicast forwarding entry is updated with the new RPF neighbor. This
helps to ensure that the multicast traffic is available on the new path before switching the RPF
interface of the multicast forwarding entry.
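The make-before-break ordering can be sketched as follows (a minimal sketch; the function and
callbacks are hypothetical, not Junos OS APIs):

```python
def mbb_switchover(entry, new_rpf, send_join, traffic_arrived):
    """Make-before-break: join the new path first, but keep the existing
    RPF interface in the forwarding entry until traffic is actually seen
    on the new path, so the RPF check never leaves a forwarding gap."""
    send_join(new_rpf)            # make: join toward the new RPF neighbor
    if traffic_arrived(new_rpf):  # traffic confirmed on the new path
        entry["rpf"] = new_rpf    # break: only now switch the RPF interface
    return entry["rpf"]
```

Until the forwarding entry is updated, traffic arriving on the new path fails the RPF check and is
dropped, which is exactly the behavior the surrounding text describes.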
Configuration
IN THIS SECTION
Procedure | 1105
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, and then copy and paste
the commands into the CLI at the [edit] hierarchy level.
PE1
PE2
Procedure
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode. To configure the
PE1 router:
NOTE: Repeat this procedure for every Juniper Networks router in the MVPN domain, after
modifying the appropriate interface names, addresses, and any other parameters for each router.
Results
From configuration mode, confirm your configuration by entering the show routing-instances command.
If the output does not display the intended configuration, repeat the instructions in this example to
correct the configuration.
routing-instances {
vpn1 {
instance-type vrf;
interface ge-5/0/4.0;
interface ge-5/2/0.0;
interface lo0.1;
route-distinguisher 1:1;
vrf-target target:1:1;
routing-options {
multipath {
vpn-unequal-cost equal-external-internal;
}
}
protocols {
bgp {
export direct;
group bgp {
type external;
local-address 192.0.2.4;
family inet {
unicast;
}
neighbor 192.0.2.5 {
peer-as 3;
}
}
group bgp1 {
type external;
local-address 192.0.2.1;
family inet {
unicast;
}
neighbor 192.0.2.2 {
peer-as 4;
}
}
}
pim {
group-address 198.51.100.1;
rp {
static {
address 10.255.8.168;
}
}
interface all;
join-load-balance;
}
}
}
}
If you are done configuring the device, enter commit from configuration mode.
Verification
IN THIS SECTION
Verifying PIM Join Load Balancing for Different Groups of Join Messages | 1108
Verifying PIM Join Load Balancing for Different Groups of Join Messages
Purpose
Verify PIM join load balancing for the different groups of join messages received on the PE1 router.
Action
From operational mode, run the show pim join instance extensive command.
Group: 203.0.113.1
Source: *
RP: 10.255.8.168
Flags: sparse,rptree,wildcard
Upstream interface: ge-5/2/0.1
Upstream neighbor: 10.10.10.2
Upstream state: Join to RP
Downstream neighbors:
Interface: ge-5/0/4.0
10.40.10.2 State: Join Flags: SRW Timeout: 207
Group: 203.0.113.2
Source: *
RP: 10.255.8.168
Flags: sparse,rptree,wildcard
Upstream interface: mt-5/0/10.32768
Upstream neighbor: 19.19.19.19
Upstream state: Join to RP
Downstream neighbors:
Interface: ge-5/0/4.0
10.40.10.2 State: Join Flags: SRW Timeout: 207
Group: 203.0.113.3
Source: *
RP: 10.255.8.168
Flags: sparse,rptree,wildcard
Upstream interface: ge-5/2/0.1
Upstream neighbor: 10.10.10.2
Upstream state: Join to RP
Downstream neighbors:
Interface: ge-5/0/4.0
10.40.10.2 State: Join Flags: SRW Timeout: 207
Group: 203.0.113.4
Source: *
RP: 10.255.8.168
Flags: sparse,rptree,wildcard
Upstream interface: mt-5/0/10.32768
Upstream neighbor: 19.19.19.19
Upstream state: Join to RP
Downstream neighbors:
Interface: ge-5/0/4.0
10.40.10.2 State: Join Flags: SRW Timeout: 207
Meaning
The output shows how the PE1 router has load-balanced the C-PIM join messages for four different
groups.
• For Group 1 (group address: 203.0.113.1) and Group 3 (group address: 203.0.113.3) join messages,
the PE1 router has selected the EBGP path toward the CE1 router to send the join messages.
• For Group 2 (group address: 203.0.113.2) and Group 4 (group address: 203.0.113.4) join messages,
the PE1 router has selected the IBGP path toward the PE2 router to send the join messages.
IN THIS SECTION
Requirements | 1111
Configuration | 1114
Verification | 1120
This example shows how to configure multipath routing for external and internal virtual private network
(VPN) routes with unequal interior gateway protocol (IGP) metrics and Protocol Independent Multicast
(PIM) join load balancing on provider edge (PE) routers running next-generation multicast VPN (MVPN).
This feature allows customer PIM (C-PIM) join messages to be load-balanced across available internal
BGP (IBGP) upstream paths when there is no external BGP (EBGP) path present, and across available
EBGP upstream paths when external and internal BGP (EIBGP) paths are present toward the source or
rendezvous point (RP).
Requirements
This example uses the following hardware and software components:
• OSPF
• MPLS
• LDP
• PIM
• BGP
By default, only one active IBGP path is used to send the C-PIM join messages for a PE router having
only IBGP paths toward the source (or RP). When there are EIBGP upstream paths present, only one
active EBGP path is used to send the join messages.
In a next-generation MVPN, C-PIM join messages are translated into (or encoded as) BGP customer
multicast (C-multicast) MVPN routes and advertised with the BGP MCAST-VPN address family toward
the sender PE routers. A PE router originates a C-multicast MVPN route in response to receiving a C-
PIM join message through its PE-CE router interface. The two types of C-multicast MVPN routes are:
• Shared tree join route: originated when a PE router receives a shared tree C-PIM join message
through its PE-CE router interface.
• Source tree join route: originated when a PE router receives a source tree C-PIM join message
(C-S, C-G), or originated by a PE router that already has a shared tree join route and receives a
source active autodiscovery route.
The upstream path in a next-generation MVPN is selected using the Bytewise-XOR hash algorithm as
specified in Internet draft draft-ietf-l3vpn-2547bis-mcast, Multicast in MPLS/BGP IP VPNs. The hash
algorithm is performed as follows:
1. The PE routers in the candidate set are numbered from lower to higher IP address, starting from
0.
2. A bytewise exclusive-or of all the bytes is performed on the C-root (source) and the C-G (group)
address.
3. The result is taken modulo n, where n is the number of PE routers in the candidate set. The result,
N, identifies the candidate PE router (as numbered in Step 1) that is selected as the upstream PE
router.
During load balancing, if a PE router with one or more upstream IBGP paths toward the source (or RP)
discovers a new IBGP path toward the same source (or RP), the C-PIM join messages distributed among
previously existing IBGP paths get redistributed due to the change in the candidate PE router set.
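The hash steps above can be sketched in Python using the standard ipaddress module (an illustrative
sketch; the function name and the addresses used in any example are assumptions, not values from a
real deployment):

```python
import ipaddress

def bytewise_xor_upstream(candidate_pes, c_root, c_group):
    """Bytewise-XOR upstream PE selection: number the candidate PE
    routers from lower to higher IP address starting at 0, XOR every
    byte of the C-root and C-group addresses, and take the result
    modulo the number of candidates to pick the upstream PE."""
    ordered = sorted(candidate_pes, key=lambda a: int(ipaddress.ip_address(a)))
    data = (ipaddress.ip_address(c_root).packed +
            ipaddress.ip_address(c_group).packed)
    h = 0
    for byte in data:
        h ^= byte
    return ordered[h % len(ordered)]
```

Because the result is taken modulo the number of candidates, adding or removing a candidate PE router
can change the selection for existing (C-root, C-group) pairs, which is why join messages are
redistributed when the candidate set changes.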
In this example, PE1, PE2, and PE3 are the PE routers that have the multipath PIM join load-balancing
feature configured. Router PE1 has two EBGP paths and one IBGP upstream path, PE2 has one EBGP
path and one IBGP upstream path, and PE3 has two IBGP upstream paths toward the Source. Router
CE4 is the customer edge (CE) router attached to PE3. Source and Receiver are the Free BSD hosts.
On PE routers that have EIBGP paths toward the source (or RP), such as PE1 and PE2, PIM join load
balancing is performed as follows:
1. The C-PIM join messages are sent using EBGP paths only. IBGP paths are not used to propagate the
join messages.
In Figure 126 on page 1114, the PE1 router distributes the join messages between the two EBGP
paths to the CE1 router, and PE2 uses the EBGP path to CE1 to send the join messages.
2. If a PE router loses one or more EBGP paths toward the source (or RP), the RPF neighbor on the
multicast tunnel interface is selected based on a hash mechanism.
On discovering the first EBGP path, only new join messages get load-balanced across available EBGP
paths, whereas the existing join messages on the multicast tunnel interface are not redistributed.
If the EBGP path from the PE2 router to the CE1 router goes down, PE2 sends the join messages to
PE1 using the IBGP path. When the EBGP path to CE1 is restored, only new join messages that
arrive on PE2 use the restored EBGP path, whereas join messages already sent on the IBGP path are
not redistributed.
On PE routers that have only IBGP paths toward the source (or RP), such as the PE3 router, PIM join
load balancing is performed as follows:
1. The C-PIM join messages from CE routers get load-balanced only as BGP C-multicast data messages
among IBGP paths.
In Figure 126 on page 1114, assuming that the CE4 host is interested in receiving traffic from the
Source, and CE4 initiates source join messages for different groups (Group 1 [C-S,C-G1] and Group 2
[C-S,C-G2]), the source join messages arrive on the PE3 router.
Router PE3 then uses the Bytewise-XOR hash algorithm to select the upstream PE router to send the
C-multicast data for each group. The algorithm first numbers the upstream PE routers from lower to
higher IP address starting from 0.
Assuming that Router PE1 is numbered 0 and Router PE2 is numbered 1, and the hash result for Group 1
and Group 2 join messages is 0 and 1, respectively, the PE3 router selects PE1 as the upstream PE
router to send Group 1 join messages, and PE2 as the upstream PE router to send the Group 2 join
messages to the Source.
2. The shared join messages for different groups [C-*,C-G] are also treated in a similar way to reach the
destination.
Configuration
IN THIS SECTION
Procedure | 1117
Results | 1119
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
PE1
PE2
PE3
Procedure
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode. To configure the
PE1 router:
NOTE: Repeat this procedure for every Juniper Networks router in the MVPN domain, after
modifying the appropriate interface names, addresses, and any other parameters for each router.
7. Configure the mode for C-PIM join messages to use rendezvous-point trees, and switch to the
shortest-path tree after the source is known.
Results
From configuration mode, confirm your configuration by entering the show routing-instances command.
If the output does not display the intended configuration, repeat the instructions in this example to
correct the configuration.
type external;
local-address 10.10.10.1;
family inet {
unicast;
}
neighbor 10.10.10.2 {
peer-as 3;
}
}
}
pim {
rp {
static {
address 10.255.10.119;
}
}
interface all;
join-load-balance;
}
mvpn {
mvpn-mode {
rpt-spt;
}
mvpn-join-load-balance {
bytewise-xor-hash;
}
}
}
}
If you are done configuring the device, enter commit from configuration mode.
Verification
IN THIS SECTION
Verifying MVPN C-Multicast Route Information for Different Groups of Join Messages | 1121
Verifying MVPN C-Multicast Route Information for Different Groups of Join Messages
Purpose
Verify MVPN C-multicast route information for different groups of join messages received on the PE3
router.
Action
user@PE3>
MVPN instance:
Legend for provider tunnel
I-P-tnl -- inclusive provider tunnel S-P-tnl -- selective provider tunnel
Legend for c-multicast routes properties (Pr)
DS -- derived from (*, c-g) RM -- remote VPN route
Family : INET
Instance : vpn1
MVPN Mode : RPT-SPT
C-mcast IPv4 (S:G) Ptnl St
0.0.0.0/0:203.0.113.1/24 RSVP-TE P2MP:10.255.10.2, 5834,10.255.10.2
192.0.2.2/24:203.0.113.1/24 RSVP-TE P2MP:10.255.10.2, 5834,10.255.10.2
0.0.0.0/0:203.0.113.2/24 RSVP-TE P2MP:10.255.10.14, 47575,10.255.10.14
192.0.2.2/24:203.0.113.2/24 RSVP-TE P2MP:10.255.10.14, 47575,10.255.10.14
Meaning
The output shows how the PE3 router has load-balanced the C-multicast data for the different groups.
• 192.0.2.2/24:203.0.113.1/24 (S,G1) toward the PE1 router (10.255.10.2 is the loopback address
of Router PE1).
• 192.0.2.2/24:203.0.113.2/24 (S,G2) toward the PE2 router (10.255.10.14 is the loopback address
of Router PE2).
• 0.0.0.0/0:203.0.113.1/24 (*,G1) toward the PE1 router (10.255.10.2 is the loopback address of
Router PE1).
• 0.0.0.0/0:203.0.113.2/24 (*,G2) toward the PE2 router (10.255.10.14 is the loopback address of
Router PE2).
The existing PIM join load-balancing feature enables distribution of joins across ECMP links. In the case
of a link failure, the joins are redistributed among the remaining ECMP links, which causes momentary
traffic loss. The addition of an interface causes no change to this distribution of joins unless the clear
pim join-distribution command is used to load-balance the existing joins to the new interface. If the
PIM automatic MBB join load-balancing feature is configured, this process takes place automatically.
The feature can be enabled by using the automatic statement at the [edit protocols pim join-load-
balance] hierarchy level. When a new neighbor is available, the time taken to create a path to the
neighbor (standby path) can be configured by using the standby-path-creation-delay seconds statement
at the [edit protocols pim] hierarchy level. In the absence of this statement, the standby path is created
immediately, and the joins are redistributed as soon as the new neighbor is added to the network. For a
join to be moved to the standby path in the absence of traffic, the idle-standby-path-switchover-delay
seconds statement is configured at the [edit protocols pim] hierarchy level. In the absence of this
statement, the join is not moved until traffic is received on the standby path.
protocols {
pim {
join-load-balance {
automatic;
}
standby-path-creation-delay seconds;
idle-standby-path-switchover-delay seconds;
}
}
IN THIS SECTION
Requirements | 1123
Overview | 1124
Configuration | 1125
Verification | 1131
This example shows how to configure the PIM make-before-break (MBB) join load-balancing feature.
Requirements
• Three routers that can be a combination of M Series Multiservice Edge Routers (M120 and M320
only), MX Series 5G Universal Routing Platforms, or T Series Core Routers (TX Matrix and TX Matrix
Plus only).
• An interior gateway protocol (IGP) configured for both IPv4 and IPv6 routes on the devices (for
example, OSPF and OSPFv3).
• Multiple ECMP interfaces (logical tunnels) configured using VLANs on any two routers (for example,
Routers R1 and R2).
Overview
IN THIS SECTION
Topology | 1124
Junos OS provides a PIM automatic MBB join load-balancing feature to ensure that PIM joins are evenly
redistributed to all upstream PIM neighbors on an equal-cost multipath (ECMP) path. When an interface
is added to an ECMP path, MBB provides a switchover to an alternate path with minimal traffic
disruption.
Topology
In this example, three routers are connected in a linear manner between the source and the receiver. An
IGP and PIM sparse mode are configured on all three routers. The source is connected to Router
R0, and five interfaces are configured between Routers R1 and R2. The receiver is connected to Router
R2, and PIM automatic MBB join load balancing is configured on Router R2.
Figure 127 on page 1124 shows the topology used in this example.
Configuration
IN THIS SECTION
Results | 1127
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, and then copy and paste
the commands into the CLI at the [edit] hierarchy level.
Router R0 (Source)
Router R1 (RP)
Router R2 (Receiver)
Step-by-Step Procedure
The following example requires that you navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
4. Configure the Multicast Listener Discovery (MLD) group for ECMP interfaces on Router R2.
5. Configure the PIM MBB join load-balancing feature on the receiver router (Router R2).
Results
From configuration mode, confirm your configuration by entering the show protocols command. If the
output does not display the intended configuration, repeat the instructions in this example to correct
the configuration.
}
interface all {
mode sparse;
version 2;
}
interface fxp0.0 {
disable;
}
interface ge-0/0/3.1;
interface ge-0/0/3.2;
interface ge-0/0/3.3;
interface ge-0/0/3.4;
interface ge-0/0/3.5;
}
}
family inet6 {
address abcd::10:255:12:34;
}
}
}
interface all {
mode sparse;
version 2;
}
interface fxp0.0 {
disable;
}
interface ge-0/0/3.1;
interface ge-0/0/3.2;
interface ge-0/0/3.3;
interface ge-0/0/3.4;
interface ge-0/0/3.5;
}
ospf3 {
area 0.0.0.0 {
interface lo0.0;
interface ge-1/0/7.1;
interface ge-1/0/7.2;
interface ge-1/0/7.3;
interface ge-1/0/7.4;
interface ge-1/0/7.5;
interface ge-0/0/3.1;
}
}
pim {
rp {
static {
address 10.255.12.34;
address abcd::10:255:12:34;
}
}
interface all {
mode sparse;
version 2;
}
interface fxp0.0 {
disable;
}
interface ge-1/0/7.1;
interface ge-1/0/7.2;
interface ge-1/0/7.3;
interface ge-1/0/7.4;
interface ge-1/0/7.5;
interface ge-0/0/3.1;
join-load-balance {
automatic;
}
standby-path-creation-delay 5;
idle-standby-path-switchover-delay 10;
}
Verification
IN THIS SECTION
Purpose
Action
Send 100 (S,G) joins from the receiver to Router R2. From the operational mode of Router R2, run the
show pim interfaces command.
The output lists all the interfaces configured for use with the PIM protocol. The Stat field indicates the
current status of the interface. The DR address field lists the configured IP addresses. All the interfaces
are operational. If the output does not indicate that the interfaces are operational, reconfigure the
interfaces before proceeding.
Meaning
Verifying PIM
Purpose
Action
Global Statistics
The V2 Hello field lists the number of PIM hello messages sent and received. The V2 Join Prune field
lists the number of join messages sent before the join-prune-timeout value is reached. If both values are
nonzero, PIM is functional.
Meaning
Purpose
Verify that the PIM automatic MBB join load-balancing feature works as configured.
Action
1. Run the show pim interfaces operational mode command before disabling an interface.
The JoinCnt(sg/*g) field shows that the 100 joins are equally distributed among the five interfaces.
2. Disable one of the equal-cost interfaces on Router R2.
[edit]
user@R2# set interfaces ge-1/0/7.5 disable
user@R2# commit
3. Run the show pim interfaces command to check if load balancing of joins is taking place.
The JoinCnt(sg/*g) field shows that the 100 joins are equally redistributed among the four active
interfaces.
4. Reenable the interface.
[edit]
user@R2# delete interfaces ge-1/0/7.5 disable
user@R2# commit
5. Run the show pim interfaces command to check if load balancing of joins is taking place after
enabling the inactive interface.
The JoinCnt(sg/*g) field shows that the 100 joins are equally distributed among the five interfaces.
Meaning
SEE ALSO
Configuring MLD | 60
join-load-balance | 1607
IN THIS SECTION
IN THIS SECTION
A service provider network must protect itself from potential attacks from misconfigured or misbehaving
customer edge (CE) devices and their associated VPN routing and forwarding (VRF) routing instances.
Misbehaving CE devices can potentially advertise a large number of multicast routes toward a provider
edge (PE) device, thereby consuming memory on the PE device and using other system resources in the
network that are reserved for routes belonging to other VPNs.
To protect against potential misbehaving CE devices and VRF routing instances for specific multicast
VPNs (MVPNs), you can control the following Protocol Independent Multicast (PIM) resources:
• Limit the number of accepted PIM join messages for any-source groups (*,G) and source-specific
groups (S,G).
• Limit the number of PIM register messages received for a specific VRF routing instance. Use this
configuration if the device is configured as a rendezvous point (RP) or has the potential to become an
RP. When a source in a multicast network becomes active, the source’s designated router (DR)
encapsulates multicast data packets into a PIM register message and sends them by means of unicast
to the RP router.
• Each unique (S,G) join received by the RP counts as one group toward the configured register
messages limit.
• Periodic register messages sent by the DR for existing or already known (S,G) entries do not count
toward the configured register messages limit.
• Register messages are accepted until either the PIM register limit or the PIM join limit (if
configured) is exceeded. Once either limit is reached, any new requests are dropped.
• Limit the number of group-to-RP mappings allowed in a specific VRF routing instance. Use this
configuration if the device is configured as an RP or has the potential to become an RP. This
configuration can apply to devices configured for automatic RP announce and discovery (Auto-RP) or
as a PIM bootstrap router. Every multicast device within a PIM domain must be able to map a
particular multicast group address to the same RP. Both Auto-RP and the bootstrap router
functionality are the mechanisms used to learn the set of group-to-RP mappings. Auto-RP is typically
used in a PIM dense-mode deployment, and the bootstrap router is typically used in a PIM sparse-
mode deployment.
NOTE: The group-to-RP mappings limit does not apply to static RP or embedded RP
configurations.
Some important things to note about how the device counts group-to-RP mappings:
• One group prefix mapped to five RPs counts as five group-to-RP mappings.
• Five distinct group prefixes mapped to one RP count as five group-to-RP mappings.
Once the configured limits are reached, no new PIM join messages, PIM register messages, or group-to-
RP mappings are accepted unless one of the following occurs:
• You clear the current PIM join states by using the clear pim join command. If you use this
command on an RP configured for PIM register message limits, the register limit count is also
restarted because the PIM join messages are unknown by the RP.
NOTE: On the RP, you can also use the clear pim register command to clear all of the
PIM registers. This command is useful if the current PIM register count is greater than the
newly configured PIM register limit. After you clear the PIM registers, new PIM register
messages are received up to the configured limit.
• The traffic responsible for the excess PIM join messages and PIM register messages stops and is no
longer present.
• You restart the PIM routing process on the device. This restart clears all of the configured limits but
disrupts routing and therefore requires a maintenance window for the change.
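From operational mode, the first two of these states can be cleared per routing instance. The following is a hedged sketch; vpn-1 is the instance name used in the example that follows, and the instance option is assumed to be available on your Junos OS release:

```
user@RP> clear pim join instance vpn-1
user@RP> clear pim register instance vpn-1
```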
You can optionally configure a system log warning threshold for each of the PIM resources. With this
configuration, you can generate and review system log messages to detect if an excessive number of
PIM join messages, PIM register messages, or group-to-RP mappings have been received on the device.
The system log warning thresholds are configured per PIM resource and are a percentage of the
configured maximum limits of the PIM join messages, PIM register messages, and group-to-RP
mappings. You can further specify a log interval for each configured PIM resource, which is the amount
of time (in seconds) between the log messages.
The log messages convey when the configured limits have been exceeded, when the configured warning
thresholds have been exceeded, and when the configured limits drop below the configured warning
threshold. Table 34 on page 1138 describes the different types of PIM system messages that you might
see depending on your system log warning and log interval configurations.
IN THIS SECTION
Requirements | 1140
Overview | 1140
Configuration | 1141
Verification | 1152
This example shows how to set limits on the Protocol Independent Multicast (PIM) state information so
that a service provider network can protect itself from potential attacks from misconfigured or
misbehaving customer edge (CE) devices and their associated VPN routing and forwarding (VRF) routing
instances.
Requirements
No special configuration beyond device initialization is required before configuring this example.
Overview
In this example, a multiprotocol BGP-based multicast VPN (next-generation MBGP MVPN) is configured
with limits on the PIM state resources.
The sglimit maximum statement sets a limit for the number of accepted (*,G) and (S,G) PIM join states
received for the vpn-1 routing instance.
The rp register-limit maximum statement configures a limit for the number of PIM register messages
received for the vpn-1 routing instance. You configure this statement on the rendezvous point (RP) or on
all the devices that might become the RP.
The group-rp-mapping maximum statement configures a limit for the number of group-to-RP mappings
allowed in the vpn-1 routing instance.
For each configured PIM resource, the threshold statement sets a percentage of the maximum limit at
which to start generating warning messages in the PIM log file.
For each configured PIM resource, the log-interval statement is an amount of time (in seconds) between
system log message generation.
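Taken together, the statements described above nest under the PIM configuration of the routing instance. The following is a minimal sketch with placeholder values, assuming the family inet hierarchy shown in the Results output of this example:

```
[edit routing-instances vpn-1 protocols pim]
sglimit {
    family inet {
        maximum 100;
        threshold 80;
        log-interval 10;
    }
}
rp {
    register-limit {
        family inet {
            maximum 100;
            threshold 80;
            log-interval 10;
        }
    }
}
group-rp-mapping {
    family inet {
        maximum 100;
        threshold 80;
        log-interval 10;
    }
}
```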
Figure 128 on page 1141 shows the topology used in this example.
"CLI Quick Configuration" shows the configuration for all of the devices in Figure 128 on page 1141.
The section that follows describes the steps on Device PE1.
Configuration
IN THIS SECTION
Procedure | 1141
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, and then copy and paste
the commands into the CLI at the [edit] hierarchy level.
Device CE1
Device PE1
Device P
Device PE2
Device PE3
Device CE2
Device CE3
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the CLI User
Guide.
[edit interfaces]
user@PE1# set ge-1/2/0 unit 2 family inet address 10.1.1.2/30
user@PE1# set ge-1/2/0 unit 2 family mpls
user@PE1# set ge-1/2/1 unit 5 family inet address 10.1.1.5/30
user@PE1# set ge-1/2/1 unit 5 family mpls
user@PE1# set vt-1/2/0 unit 2 family inet
The customer-facing interfaces and the BGP export policy are referenced in the routing instance.
[edit routing-options]
user@PE1# set router-id 192.0.2.2
user@PE1# set autonomous-system 1001
Results
From configuration mode, confirm your configuration by entering the show interfaces, show protocols,
show policy-options, show routing-instances, and show routing-options commands. If the output does
not display the intended configuration, repeat the configuration instructions in this example to correct it.
}
}
then accept;
}
family inet {
maximum 100;
threshold 80;
log-interval 10;
}
}
static {
address 203.0.113.1;
}
}
interface ge-1/2/0.2 {
mode sparse;
}
}
mvpn;
}
}
If you are done configuring the device, enter commit from configuration mode.
Verification
IN THIS SECTION
Purpose
Verify that the counters are set as expected and are not exceeding the configured limits.
Action
Meaning
The V4 (S,G) Maximum field shows the maximum number of (S,G) IPv4 multicast routes accepted for the
VPN routing instance. If this number is met, additional (S,G) entries are not accepted.
The V4 (S,G) Accepted field shows the number of accepted (S,G) IPv4 multicast routes.
The V4 (S,G) Threshold field shows the threshold at which a warning message is logged (percentage of
the maximum number of (S,G) IPv4 multicast routes accepted by the device).
The V4 (S,G) Log Interval field shows the time (in seconds) between consecutive log messages.
The V4 (grp-prefix, RP) Maximum field shows the maximum number of group-to-rendezvous point (RP)
IPv4 multicast mappings accepted for the VRF routing instance. If this number is met, additional
mappings are not accepted.
The V4 (grp-prefix, RP) Accepted field shows the number of accepted group-to-RP IPv4 multicast
mappings.
The V4 (grp-prefix, RP) Threshold field shows the threshold at which a warning message is logged
(percentage of the maximum number of group-to-RP IPv4 multicast mappings accepted by the device).
The V4 (grp-prefix, RP) Log Interval field shows the time (in seconds) between consecutive log messages.
The V4 Register Maximum field shows the maximum number of IPv4 PIM registers accepted for the VRF
routing instance. If this number is met, additional PIM registers are not accepted. You configure the
register limits on the RP.
The V4 Register Accepted field shows the number of accepted IPv4 PIM registers.
The V4 Register Threshold field shows the threshold at which a warning message is logged (percentage
of the maximum number of IPv4 PIM registers accepted by the device).
The V4 Register Log Interval field shows the time (in seconds) between consecutive log messages.
RELATED DOCUMENTATION
Use Multicast-Only Fast Reroute (MoFRR) to Minimize Packet Loss During Link
Failures | 1180
Enable Multicast Between Layer 2 and Layer 3 Devices Using Snooping | 1239
CHAPTER 23
IN THIS CHAPTER
IN THIS SECTION
IN THIS SECTION
Unicast forwarding decisions are typically based on the destination address of the packet arriving at a
router. The unicast routing table is organized by destination subnet and mainly set up to forward the
packet toward the destination.
In multicast, the router forwards the packet away from the source to make progress along the
distribution tree and prevent routing loops. The router's multicast forwarding state is organized
more logically around the reverse path, from the receiver back to the root of the distribution
tree. This process is known as reverse path forwarding (RPF).
The router adds a branch to a distribution tree depending on whether the request for traffic from a
multicast group passes the reverse-path-forwarding check (RPF check). Every multicast packet received
must pass an RPF check before it is eligible to be replicated or forwarded on any interface.
The RPF check is essential for every router's multicast implementation. When a multicast packet is
received on an interface, the router interprets the source address in the multicast IP packet as the
destination address for a unicast IP packet. The source multicast address is found in the unicast routing
table, and the outgoing interface is determined. If the outgoing interface found in the unicast routing
table is the same as the interface that the multicast packet was received on, the packet passes the RPF
check. Multicast packets that fail the RPF check are dropped because the incoming interface is not on
the path back to the source.
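You can observe the result of this lookup for a given source on a Junos device with the show multicast rpf command; the source address here is a placeholder:

```
user@host> show multicast rpf 192.168.2.1
```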
Figure 129 on page 1157 shows how multicast routers can use the unicast routing table to perform an
RPF check and how the results obtained at each router determine where join messages are sent.
Routers can build and maintain separate tables for RPF purposes. The router must have some way to
determine its RPF interface for the group, which is the interface topologically closest to the root. For
greatest efficiency, the distribution tree follows the shortest-path tree topology. The RPF check helps to
construct this tree.
RPF Table
The RPF table plays the key role in the multicast router. The RPF table is consulted for every RPF check,
which is performed at intervals on multicast packets entering the multicast router. Distribution trees of
all types rely on the RPF table to form properly, and the multicast forwarding state also depends on the
RPF table.
RPF checks are performed only on unicast addresses to find the upstream interface for the multicast
source or RP.
The routing table used for RPF checks can be the same routing table used to forward unicast IP packets,
or it can be a separate routing table used only for multicast RPF checks. In either case, the RPF table
contains only unicast routes, because the RPF check is performed on the source address of the multicast
packet, not the multicast group destination address, and a multicast address is forbidden from appearing
in the source address field of an IP packet header. The unicast address can be used for RPF checks
because there is only one source host for a particular stream of IP multicast content for a multicast
group address, although the same content could be available from multiple sources.
If the same routing table used to forward unicast packets is also used for the RPF checks, the routing
table is populated and maintained by the traditional unicast routing protocols such as BGP, IS-IS, OSPF,
and the Routing Information Protocol (RIP). If a dedicated multicast RPF table is used, this table must be
populated by some other method. Some multicast routing protocols (such as the Distance Vector
Multicast Routing Protocol [DVMRP]) essentially duplicate the operation of a unicast routing protocol
and populate a dedicated RPF table. Others, such as PIM, do not duplicate routing protocol functions
and must rely on some other routing protocol to set up this table, which is why PIM is protocol
independent.
Some traditional routing protocols such as BGP and IS-IS now have extensions to differentiate between
different sets of routing information sent between routers for unicast and multicast. For example, there
is multiprotocol BGP (MBGP) and multitopology routing in IS-IS (M-IS-IS). IS-IS routes can be added to
the RPF table even when special features such as traffic engineering and “shortcuts” are turned on.
Multicast Open Shortest Path First (MOSPF) also extends OSPF for multicast use, but goes further than
MBGP or M-IS-IS and makes MOSPF into a complete multicast routing protocol on its own. When these
routing protocols are used, routes can be tagged as multicast RPF routes and used by the receiving
router differently than the unicast routing information.
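For example, MBGP carries a separate multicast NLRI whose routes can be used for RPF. The following is a hedged sketch of enabling that NLRI on a BGP group; the group name is hypothetical, and family inet unicast is restated because configuring a family replaces the default:

```
[edit protocols bgp group internal-peers]
family inet {
    unicast;
    multicast;
}
```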
Using the main unicast routing table for RPF checks provides simplicity. A dedicated routing table for
RPF checks allows a network administrator to set up separate paths and routing policies for unicast and
multicast traffic, allowing the multicast network to function more independently of the unicast network.
In general, a router forwards a multicast packet only if the packet arrives on the interface closest (as defined
by a unicast routing protocol) to the origin of the packet, whether source host or rendezvous point (RP).
In other words, if a unicast packet would be sent to the “destination” (the reverse path) on the interface
that the multicast packet arrived on, the packet passes the RPF check and is processed. Multicast (or
unicast) packets that fail the RPF check are not forwarded (this is the default behavior). For an overview
of how a Juniper Networks router implements RPF checks with tables, see Understanding Multicast
Reverse Path Forwarding.
However, there are network router configurations where multicast packets that fail the RPF check need
to be forwarded. For example, when point-to-multipoint label-switched paths (LSPs) are used for
distributing multicast traffic to PIM “islands” downstream from the egress router, the interface on which
the multicast traffic arrives is not always the RPF interface. This is because LSPs do not follow the
normal next-hop rules of independent packet routing.
In cases such as these, you can configure policies on the PE router to decide which multicast groups and
sources are exempt from the default RPF check.
SEE ALSO
IN THIS SECTION
Requirements | 1159
Overview | 1160
Configuration | 1161
This example explains how to configure a dedicated Protocol Independent Multicast (PIM) reverse path
forwarding (RPF) routing table.
Requirements
• Configure the router interfaces. See the Interfaces User Guide for Security Devices.
Overview
By default, PIM uses the inet.0 routing table as its RPF routing table. PIM uses an RPF routing table to
resolve its RPF neighbor for a particular multicast source address and to resolve the RPF neighbor for
the rendezvous point (RP) address. PIM can optionally use inet.2 as its RPF routing table. The inet.2
routing table is dedicated to this purpose.
PIM uses a single routing table for its RPF check. This ensures that the route with the longest matching
prefix is chosen as the RPF route.
If multicast routes are exchanged by Multiprotocol Border Gateway Protocol (MP-BGP) or multitopology
IS-IS, they are placed in inet.2 by default.
Using inet.2 as the RPF routing table enables you to have a control plane for multicast, which is
independent of the normal unicast routing table. You might want to use inet.2 as the RPF routing table
for any of the following reasons:
• If you use traffic engineering or have an interior gateway protocol (IGP) configured for shortcuts, the
router has label-switched paths (LSPs) installed as the next hops in inet.2. By applying policy, you can
have the router install the routes with non-MPLS next-hops in the inet.2 routing table.
• If you have an MPLS network that does not support multicast traffic over LSP tunnels, you need to
configure the router to use a routing table other than inet.0. You can have the inet.2 routing table
populated with native IGP, BGP, and interface routes that can be used for RPF.
To populate the PIM RPF table, you use rib groups. A rib group is defined with the rib-groups statement
at the [edit routing-options] hierarchy level. The rib group is applied to the PIM protocol by including
the rib-group statement at the [edit pim] hierarchy level. A rib group is most frequently used to place
routes in multiple routing tables.
When you configure rib groups for PIM, keep the following in mind:
• The import-rib statement copies routes from the protocol to the routing table.
• Only the first rib routing table specified in the import-rib statement is used by PIM for RPF checks.
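These two statements can be sketched together as follows, using the rib group name from the procedure later in this topic:

```
[edit routing-options]
rib-groups {
    mcast-rpf-rib {
        import-rib inet.2;
    }
}

[edit protocols pim]
rib-group inet mcast-rpf-rib;
```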
You can also configure IS-IS or OSPF to populate inet.2 with routes that have regular IP next hops. This
allows RPF to work properly even when MPLS is configured for traffic engineering, or when IS-IS or
OSPF are configured to use “shortcuts” for local traffic.
You can also configure the PIM protocol to use a rib group for RPF checks under a virtual private
network (VPN) routing instance. In this case the rib group is still defined at the [edit routing-options]
hierarchy level.
Configuration
IN THIS SECTION
Configuring a PIM RPF Routing Table Group Using Interface Routes | 1161
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
Step-by-Step Procedure
In this example, the network administrator has decided to use the inet.2 routing table for RPF checks. In
this process, local routes are copied into this table by using an interface rib group.
To define an interface routing table group and use it to populate inet.2 for RPF checks:
1. Use the show multicast rpf command to verify that the multicast RPF table is not populated with
routes.
Each routing table group must contain one or more routing tables that Junos OS uses when
importing routes (specified in the import-rib statement).
Include the import-rib statement and specify the inet.2 routing table at the [edit routing-options rib-
groups] hierarchy level.
The rib group for PIM can be applied globally or in a routing instance. In this example, the global
configuration is shown.
Include the rib-group statement and specify the mcast-rpf-rib rib group at the [edit protocols pim]
hierarchy level.
Include the rib-group statement and specify the inet address family at the [edit routing-options
interface-routes] hierarchy level.
5. Configure the if-rib rib group to import routes from the inet.0 and inet.2 routing tables.
Include the import-rib statement and specify the inet.0 and inet.2 routing tables at the [edit routing-
options rib-groups] hierarchy level.
user@host# commit
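The preceding steps can be consolidated into set commands — a sketch that uses the rib group names from this procedure and mirrors the [ inet.2 inet.0 ] import ordering shown in this guide's related example:

```
set routing-options rib-groups mcast-rpf-rib import-rib inet.2
set protocols pim rib-group inet mcast-rpf-rib
set routing-options interface-routes rib-group inet if-rib
set routing-options rib-groups if-rib import-rib [ inet.2 inet.0 ]
```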
Purpose
Verify that the multicast RPF table is now populated with routes.
Action
10.0.24.12/30
Protocol: Direct
Interface: fe-0/1/2.0
10.0.24.13/32
Protocol: Local
10.0.27.12/30
Protocol: Direct
Interface: fe-0/1/3.0
10.0.27.13/32
Protocol: Local
10.0.224.8/30
Protocol: Direct
Interface: ge-1/3/3.0
10.0.224.9/32
Protocol: Local
127.0.0.1/32
Inactive
192.168.2.1/32
Protocol: Direct
Interface: lo0.0
192.168.187.0/25
Protocol: Direct
Interface: fxp0.0
192.168.187.12/32
Protocol: Local
Meaning
The first line of the sample output shows that the inet.2 table is being used and that there are 10 routes
in the table. The remainder of the sample output lists the routes that populate the inet.2 routing table.
SEE ALSO
IN THIS SECTION
Requirements | 1165
Overview | 1165
Configuration | 1165
Verification | 1168
This example shows how to configure and apply a PIM RPF routing table.
Requirements
1. Determine whether the router is directly attached to any multicast sources. Receivers must be able
to locate these sources.
2. Determine whether the router is directly attached to any multicast group receivers. If receivers are
present, IGMP is needed.
3. Determine whether to configure multicast to use sparse, dense, or sparse-dense mode. Each mode
has different configuration considerations.
5. Determine whether to locate the RP with the static configuration, BSR, or auto-RP method.
6. Determine whether to configure multicast to use its RPF routing table when configuring PIM in
sparse, dense, or sparse-dense mode.
7. Configure the SAP and SDP protocols to listen for multicast session announcements. See
Configuring the Session Announcement Protocol.
10. Filter PIM register messages from unauthorized groups and sources. See Example: Rejecting
Incoming PIM Register Messages on RP Routers and Example: Stopping Outgoing PIM Register
Messages on a Designated Router.
Overview
In this example, you name the new RPF routing table group multicast-rpf-rib and use inet.2 for its export
as well as its import routing table. Then you create a routing table group for the interface routes and
name it if-rib. Finally, you use inet.2 and inet.0 for its import routing tables, and add the new
interface routing table group to the interface routes.
Configuration
IN THIS SECTION
Procedure | 1166
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, and then copy and paste
the commands into the CLI at the [edit] hierarchy level.
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
instructions on how to do that, see Using the CLI Editor in Configuration Mode in the Junos OS CLI User
Guide.
[edit]
user@host# edit routing-options rib-groups
2. Configure a name.
[edit]
user@host# edit routing-options rib-groups
Results
From configuration mode, confirm your configuration by entering the show protocols and show routing-
options commands. If the output does not display the intended configuration, repeat the configuration
instructions in this example to correct it.
[edit]
user@host# show protocols
pim {
rib-group inet multicast-rpf-rib;
}
[edit]
user@host# show routing-options
interface-routes {
rib-group inet if-rib;
}
static {
route 0.0.0.0/0 next-hop 10.100.37.1;
}
rib-groups {
multicast-rpf-rib {
export-rib inet.2;
import-rib inet.2;
}
if-rib {
import-rib [ inet.2 inet.0 ];
}
}
If you are done configuring the device, enter commit from configuration mode.
Verification
IN THIS SECTION
Purpose
Verify that SAP and SDP are configured to listen on the correct group addresses and ports.
Action
Purpose
Action
Configured Parameters:
IGMP Query Interval: 125.0
IGMP Query Response Interval: 10.0
IGMP Last Member Query Interval: 1.0
IGMP Robustness Count: 2
Derived Parameters:
IGMP Membership Timeout: 260.0
IGMP Other Querier Present Timeout: 255.0
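The derived values follow arithmetically from the configured parameters. Per the IGMPv2 specification (RFC 2236), the membership timeout is (robustness count × query interval) + query response interval, and the other querier present timeout is (robustness count × query interval) + (query response interval ÷ 2):

```
IGMP Membership Timeout             = 2 × 125.0 + 10.0     = 260.0 seconds
IGMP Other Querier Present Timeout  = 2 × 125.0 + 10.0 / 2 = 255.0 seconds
```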
Purpose
Action
Purpose
Verify that the PIM RP is statically configured with the correct IP address.
Action
Purpose
Action
SEE ALSO
IN THIS SECTION
Requirements | 1171
Overview | 1171
Configuration | 1172
Verification | 1174
A multicast RPF policy disables RPF checks for a particular multicast (S,G) pair. You usually disable RPF
checks on egress routing devices of a point-to-multipoint label-switched path (LSP), because the
interface receiving the multicast traffic on a point-to-multipoint LSP egress router might not always be
the RPF interface.
This example shows how to configure an RPF check policy named disable-RPF-on-PE. The disable-RPF-
on-PE policy disables RPF checks on packets arriving for group 228.0.0.0/8 or from source address
192.168.25.6.
Requirements
• Configure an interior gateway protocol or static routing. See the Junos OS Routing Protocols Library
for Routing Devices.
Overview
An RPF policy behaves like an import policy. If no policy term matches the input packet, the default
action is to accept (that is, to perform the RPF check). The route-filter statement filters group addresses,
and the source-address-filter statement filters source addresses.
This example shows how to configure each condition as a separate policy and references both policies in
the rpf-check-policy statement. This allows you to associate groups in one policy and sources in the
other.
NOTE: Be careful when disabling RPF checks on multicast traffic. If you disable RPF checks in
some configurations, multicast loops can result.
• If the policy name is changed, the new policy takes effect immediately and any packets no longer
filtered are subjected to the RPF check.
• If the policy is deleted, all packets formerly filtered are subjected to the RPF check.
• If the underlying policy is changed, but retains the same name, the new conditions take effect
immediately and any packets no longer filtered are subjected to the RPF check.
Configuration
IN THIS SECTION
Procedure | 1172
Results | 1173
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
set policy-options policy-statement disable-RPF-from-group term first from route-filter 228.0.0.0/8 orlonger
set policy-options policy-statement disable-RPF-from-group term first then reject
set policy-options policy-statement disable-RPF-from-source term first from source-address-filter
192.168.25.6/32 exact
set policy-options policy-statement disable-RPF-from-source term first then reject
set routing-options multicast rpf-check-policy [ disable-RPF-from-group disable-RPF-from-source ]
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
[edit policy-options]
user@host# set policy-statement disable-RPF-from-group term first from route-filter 228.0.0.0/8
orlonger
user@host# set policy-statement disable-RPF-from-group term first then reject
[edit policy-options]
user@host# set policy-statement disable-RPF-from-source term first from source-address-filter
192.168.25.6/32 exact
user@host# set policy-statement disable-RPF-from-source term first then reject
[edit routing-options]
user@host# set multicast rpf-check-policy [ disable-RPF-from-group disable-RPF-from-source ]
user@host# commit
Results
Confirm your configuration by entering the show policy-options and show routing-options commands.
}
}
Verification
SEE ALSO
IN THIS SECTION
Requirements | 1174
Overview | 1175
Configuration | 1176
Verification | 1179
This example shows how to configure and verify the multicast PIM RPF next-hop neighbor selection for
a group or (S,G) pair.
Requirements
• Configure an interior gateway protocol or static routing. See the Junos OS Routing Protocols Library
for Routing Devices.
• Make sure that the RPF next-hop neighbor you want to specify is operating.
Overview
IN THIS SECTION
Topology | 1176
Multicast PIM RPF neighbor selection allows you to specify the RPF neighbor (next hop) and source
address for a single group or multiple groups using a prefix list. RPF neighbor selection can only be
configured for VPN routing and forwarding (VRF) instances.
If you have multiple service VRFs through which a receiver VRF can learn the same source or rendezvous
point (RP) address, PIM RPF checks typically choose the best path determined by the unicast protocol
for all multicast flows. However, if RPF neighbor selection is configured, RPF checks are based on your
configuration instead of the unicast routing protocols.
You can use this static RPF selection as a building block for particular applications, such as an
extranet. Suppose you want to split the multicast flows among parallel PIM links or assign one multicast
flow to a specific PIM link. With static RPF selection configured, the router sends join and prune
messages based on the configuration.
You can use wildcards to designate the source address. Whether or not you use wildcards affects how
the PIM joins work:
• If you configure only a source prefix for a group, all (*,G) joins are sent to the next-hop neighbor
selected by the unicast protocol, while (S,G) joins are sent to the next-hop neighbor specified for the
source.
• If you configure only a wildcard source for a group, all (*,G) and (S,G) joins are sent to the upstream
interface pointing to the wildcard source next-hop neighbor.
• If you configure both a source prefix and a wildcard source for a group, all (S,G) joins are sent to the
next-hop neighbor defined for the source prefix, while (*,G) joins are sent to the next-hop neighbor
specified for the wildcard source.
Topology
Figure 130 on page 1176 shows the topology used in this example.
In this example, the RPF selection is configured on the receiver provider edge router (PE2).
Configuration
IN THIS SECTION
Procedure | 1177
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
set routing-instances vpn-a protocols pim rpf-selection group 225.5.0.0/16 wildcard-source next-hop 10.12.5.2
set routing-instances vpn-a protocols pim rpf-selection prefix-list group12 wildcard-source next-hop 10.12.31.2
set routing-instances vpn-a protocols pim rpf-selection prefix-list group34 source 22.1.12.0/24 next-hop 10.12.32.2
set policy-options prefix-list group12 225.1.1.0/24
set policy-options prefix-list group12 225.2.0.0/16
set policy-options prefix-list group34 225.3.3.3/32
set policy-options prefix-list group34 225.4.4.0/24
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
[edit policy-options]
set prefix-list group12 225.1.1.0/24
user@host# commit
Results
From configuration mode, confirm your configuration by entering the show policy-options and show
routing-instances commands. If the output does not display the intended configuration, repeat the
instructions in this example to correct the configuration.
prefix-list group34 {
source 22.1.12.0/24 {
next-hop 10.12.32.2;
}
}
}
}
}
}
Verification
To verify the configuration, run the following commands, checking the upstream interface and the
upstream neighbor:
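For example (the commands shown here are an assumption for this example; the vpn-a instance name follows the configuration shown earlier, so adjust it to match your network):

user@host> show pim join extensive instance vpn-a
user@host> show multicast route instance vpn-a extensive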
SEE ALSO
RELATED DOCUMENTATION
CHAPTER 24
IN THIS CHAPTER
IN THIS SECTION
Multicast-only fast reroute (MoFRR) minimizes packet loss for traffic in a multicast distribution tree
when link failures occur, enhancing multicast routing protocols like Protocol Independent Multicast
(PIM) and multipoint Label Distribution Protocol (multipoint LDP) on devices that support these
features.
NOTE: On switches, MoFRR with MPLS label-switched paths and multipoint LDP is not
supported.
For MX Series routers, MoFRR is supported only on MX Series routers with MPC line cards. As a
prerequisite, you must configure the router into network-services enhanced-ip mode, and all
the line cards in the router must be MPCs.
With MoFRR enabled, devices send join messages on primary and backup upstream paths toward a
multicast source. Devices receive data packets from both the primary and backup paths, and discard the
redundant packets based on priority (weights that are assigned to the primary and backup paths). When
a device detects a failure on the primary path, it immediately starts accepting packets from the
secondary interface (the backup path). The fast switchover greatly improves convergence times upon
primary path link failures.
One application for MoFRR is streaming IPTV. IPTV streams are multicast as UDP streams, so any lost
packets are not retransmitted, leading to a less-than-satisfactory user experience. MoFRR can improve
the situation.
MoFRR Overview
With fast reroute on unicast streams, an upstream routing device preestablishes MPLS label-switched
paths (LSPs) or precomputes an IP loop-free alternate (LFA) fast reroute backup path to handle failure of
a segment in the downstream path.
In multicast routing, the receiving side usually originates the traffic distribution graphs. This is unlike
unicast routing, which generally establishes the path from the source to the receiver. PIM (for IP),
multipoint LDP (for MPLS), and RSVP-TE (for MPLS) are protocols that are capable of establishing
multicast distribution graphs. Of these, PIM and multipoint LDP receivers initiate the distribution graph
setup, so MoFRR can work with these two multicast protocols where they are supported.
In a multicast tree, if the device detects a network component failure, it takes some time to perform a
reactive repair, leading to significant traffic loss while setting up an alternate path. MoFRR reduces
traffic loss in a multicast distribution tree when a network component fails. With MoFRR, one of the
downstream routing devices sets up an alternative path toward the source to receive a backup live
stream of the same multicast traffic. When a failure happens along the primary stream, the MoFRR
routing device can quickly switch to the backup stream.
With MoFRR enabled, for each (S,G) entry, the device uses two of the available upstream interfaces to
send a join message and to receive multicast traffic. The protocol attempts to select two disjoint paths if
two such paths are available. If disjoint paths are not available, the protocol selects two non-disjoint
paths. If only one path is available, the device selects a primary path with no backup. MoFRR prioritizes
selecting a disjoint backup path over load balancing across the available paths.
Figure 131 on page 1182 shows two paths from the multicast receiver routing device (also referred to as
the egress provider edge (PE) device) to the multicast source routing device (also referred to as the
ingress PE device).
With MoFRR enabled, the egress (receiver side) routing device sets up two multicast trees, a primary
path and a backup path, toward the multicast source for each (S,G). In other words, the egress routing
device propagates the same (S,G) join messages toward two different upstream neighbors, thus creating
two multicast trees.
One of the multicast trees goes through plane 1 and the other through plane 2, as shown in Figure 131
on page 1182. For each (S,G), the egress routing device forwards traffic received on the primary path
and drops traffic received on the backup path.
MoFRR is supported on both equal-cost multipath (ECMP) paths and non-ECMP paths. To support
MoFRR on non-ECMP paths, you must enable unicast loop-free alternate (LFA) routes. You
enable LFA routes using the link-protection statement in the interior gateway protocol (IGP)
configuration. When you enable link protection on an OSPF or IS-IS interface, the device creates a
backup LFA path to the primary next hop for all destination routes that traverse the protected interface.
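For example, the following sketch enables link protection on an OSPF interface (the area and interface names are placeholders; substitute the values for your network):

[edit protocols ospf area 0.0.0.0]
user@host# set interface ge-0/0/0.0 link-protection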
Junos OS implements MoFRR in the IP network for IP MoFRR and at the MPLS label-edge routing
device (LER) for multipoint LDP MoFRR.
Multipoint LDP MoFRR is used at the egress device of an MPLS network, where the packets are
forwarded to an IP network. With multipoint LDP MoFRR, the device establishes two paths toward the
upstream PE routing device for receiving two streams of MPLS packets at the LER. The device accepts
one of the streams (the primary) and drops the other (the backup) at the LER. If the primary
path fails, the device accepts the backup stream instead. Inband signaling support is a prerequisite for
MoFRR with multipoint LDP (see Understanding Multipoint LDP Inband Signaling for Point-to-
Multipoint LSPs).
PIM Functionality
Junos OS supports MoFRR for shortest-path tree (SPT) joins in PIM source-specific multicast (SSM) and
any-source multicast (ASM). MoFRR is supported for both SSM and ASM ranges. To enable MoFRR for
(*,G) joins, include the mofrr-asm-starg configuration statement at the [edit routing-options multicast
stream-protection] hierarchy. For each group G, MoFRR will operate for either (S,G) or (*,G), but not
both. (S,G) always takes precedence over (*,G).
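For example, a minimal sketch that enables MoFRR for (*,G) joins:

[edit routing-options multicast]
user@host# set stream-protection mofrr-asm-starg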
With MoFRR enabled, a PIM routing device propagates join messages on two upstream reverse-path
forwarding (RPF) interfaces to receive multicast traffic on both links for the same join request. MoFRR
gives preference to two paths that do not converge to the same immediate upstream routing device.
PIM installs appropriate multicast routes with upstream RPF next hops with two interfaces (for the
primary and backup paths).
When the primary path fails, the backup path is upgraded to primary status, and the device forwards
traffic accordingly. If there are alternate paths available, MoFRR calculates a new backup path and
updates or installs the appropriate multicast route.
You can enable MoFRR with PIM join load balancing (see the join-load-balance automatic
statement). However, in that case the distribution of join messages among the links might not be even.
When a new ECMP link is added, join messages on the primary path are redistributed and load-
balanced. The join messages on the backup path might still follow the same path and might not be
evenly redistributed.
You enable MoFRR using the stream-protection configuration statement at the [edit routing-options
multicast] hierarchy. MoFRR is managed by a set of filter policies.
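For example, a minimal sketch (the policy name is illustrative, and applying a policy is optional):

[edit routing-options multicast]
user@host# set stream-protection policy mofrr-select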
When an egress PIM routing device receives a join message or an IGMP report, it checks for an MoFRR
configuration and proceeds as follows:
• If the MoFRR configuration is not present, PIM sends a join message upstream toward one upstream
neighbor (for example, plane 2 in Figure 131 on page 1182).
• If the MoFRR configuration is present, the device checks for a policy configuration.
• If a policy is not present, the device checks for primary and backup paths (upstream interfaces), and
proceeds as follows:
• If primary and backup paths are not available—PIM sends a join message upstream toward one
upstream neighbor (for example, plane 2 in Figure 131 on page 1182).
• If primary and backup paths are available—PIM sends the join message upstream toward two of
the available upstream neighbors. Junos OS sets up primary and secondary multicast paths to
receive multicast traffic (for example, plane 1 in Figure 131 on page 1182).
• If a policy is present, the device checks whether the policy allows MoFRR for this (S,G), and proceeds
as follows:
• If this policy check fails—PIM sends a join message upstream toward one upstream neighbor (for
example, plane 2 in Figure 131 on page 1182).
• If this policy check passes—The device checks for primary and backup paths (upstream interfaces).
• If the primary and backup paths are not available, PIM sends a join message upstream toward
one upstream neighbor (for example, plane 2 in Figure 131 on page 1182).
• If the primary and backup paths are available, PIM sends the join message upstream toward
two of the available upstream neighbors. The device sets up primary and secondary multicast
paths to receive multicast traffic (for example, plane 1 in Figure 131 on page 1182).
Multipoint LDP Functionality
To avoid MPLS traffic duplication, multipoint LDP usually selects only one upstream path. (See section
2.4.1.1. Determining One's 'upstream LSR' in RFC 6388, Label Distribution Protocol Extensions for
Point-to-Multipoint and Multipoint-to-Multipoint Label Switched Paths.)
For multipoint LDP with MoFRR, the multipoint LDP device selects two separate upstream peers and
sends two separate labels, one to each upstream peer. The device uses the same algorithm described in
RFC 6388 to select the primary upstream path. The device uses the same algorithm to select the backup
upstream path but excludes the primary upstream LSR as a candidate. The two different upstream peers
send two streams of MPLS traffic to the egress routing device. The device selects only one of the
upstream neighbor paths as the primary path from which to accept the MPLS traffic. The other path
becomes the backup path, and the device drops that traffic. When the primary upstream path fails, the
device starts accepting traffic from the backup path. The multipoint LDP device selects the two
upstream paths based on the interior gateway protocol (IGP) root device next hop.
A forwarding equivalency class (FEC) is a group of IP packets that are forwarded in the same manner,
over the same path, and with the same forwarding treatment. Normally, the label that is put on a
particular packet represents the FEC to which that packet is assigned. In MoFRR, two routes are placed
into the mpls.0 table for each FEC—one route for the primary label and the other route for the backup
label.
If there are parallel links toward the same immediate upstream device, the device considers both parallel
links to be the primary. At any point in time, the upstream device sends traffic on only one of the
multiple parallel links.
A bud node is an LSR that is an egress LSR, but also has one or more directly connected downstream
LSRs. For a bud node, the traffic from the primary upstream path is forwarded to a downstream LSR. If
the primary upstream path fails, the MPLS traffic from the backup upstream path is forwarded to the
downstream LSR. This means that the downstream LSR next hop is added to both MPLS routes along
with the egress next hop.
As with PIM, you enable MoFRR with multipoint LDP using the stream-protection configuration
statement at the [edit routing-options multicast] hierarchy, and it’s managed by a set of filter policies.
If you have enabled the multipoint LDP point-to-multipoint FEC for MoFRR, the device factors the
following considerations into selecting the upstream path:
• The targeted LDP sessions are skipped if there is a nontargeted LDP session. If there is a single
targeted LDP session, the targeted LDP session is selected, but the corresponding point-to-
multipoint FEC loses the MoFRR capability because there is no interface associated with the targeted
LDP session.
• All interfaces that belong to the same upstream LSR are considered to be the primary path.
• For any root-node route updates, the upstream path is changed based on the latest next hops from
the IGP. If a better path is available, multipoint LDP attempts to switch to the better path.
Packet Forwarding
For either PIM or multipoint LDP, the device performs multicast source stream selection at the ingress
interface, which preserves fabric bandwidth and maximizes forwarding performance.
For PIM, each IP multicast stream contains the same destination address. Regardless of the interface on
which the packets arrive, the packets have the same route. The device checks the interface upon which
each packet arrives and forwards only those that are from the primary interface. If the interface matches
a backup stream interface, the device drops the packets. If the interface doesn’t match either the
primary or backup stream interface, the device handles the packets as exceptions in the control plane.
Figure 132 on page 1186 shows this process with sample primary and backup interfaces for routers with
PIM. Figure 133 on page 1186 shows this similarly for switches with PIM.
Figure 132: MoFRR IP Route Lookup in the Packet Forwarding Engine on Routers
Figure 133: MoFRR IP Route Handling in the Packet Forwarding Engine on Switches
For MoFRR with multipoint LDP on routers, the device uses multiple MPLS labels to control MoFRR
stream selection. Each label represents a separate route, but each references the same interface list
check. The device only forwards the primary label, and drops all others. Multiple interfaces can receive
packets using the same label.
Figure 134 on page 1187 shows this process for routers with multipoint LDP.
Figure 134: MoFRR MPLS Route Lookup in the Packet Forwarding Engine
MoFRR has the following limitations and caveats on routing and switching devices:
• MoFRR failure detection is supported for immediate link protection of the routing device on which
MoFRR is enabled and not on all the links (end-to-end) in the multicast traffic path.
• MoFRR supports fast reroute on two selected disjoint paths toward the source. The two selected
upstream neighbors cannot be on the same interface; in other words, two upstream neighbors on a
LAN segment are not supported. The same is true if the upstream interface is a multicast tunnel interface.
• Detection of the maximum end-to-end disjoint upstream paths is not supported. The receiver side
(egress) routing device only makes sure that there is a disjoint upstream device (the immediate
previous hop). PIM and multipoint LDP do not support the equivalent of explicit route objects (EROs).
Hence, disjoint upstream path detection is limited to control over the immediately previous hop
device. Because of this limitation, the path to the upstream device of the previous hop selected as
primary and backup might be shared.
• MoFRR can be enabled or disabled on the egress device while an active traffic stream is flowing.
• PIM join load balancing for join messages on backup paths is not supported.
• For a multicast group G, MoFRR is not allowed for both (S,G) and (*,G) join messages. (S,G) join
messages have precedence over (*,G).
• MoFRR is not supported for multicast traffic streams that use two different multicast groups. Each
(S,G) combination is treated as a unique multicast traffic stream.
• Multicast statistics for the backup traffic stream are not maintained by PIM and therefore are not
available in the operational output of show commands.
• MoFRR is not supported when the upstream interface is an integrated routing and bridging (IRB)
interface, which impacts other multicast features such as Internet Group Management Protocol
version 3 (IGMPv3) snooping.
• Packet replication and multicast lookups while forwarding multicast traffic can cause packets to
recirculate through PFEs multiple times. As a result, displayed values for multicast packet counts
from the show pfe statistics traffic command might show higher numbers than expected in output
fields such as Input packets and Output packets. You might notice this behavior more frequently in
MoFRR scenarios because duplicate primary and backup streams increase the traffic flow in general.
MoFRR has the following limitations and caveats on routers when used with multipoint LDP:
• MoFRR does not apply to multipoint LDP traffic received on an RSVP tunnel because the RSVP
tunnel is not associated with any interface.
• Mixed upstream MoFRR is not supported. This refers to PIM multipoint LDP in-band signaling,
wherein one upstream path is through multipoint LDP and the second upstream path is through PIM.
• If the source is reachable through multiple ingress (source-side) provider edge (PE) routing devices,
multipoint LDP MoFRR is not supported.
• Targeted LDP upstream sessions are not selected as the upstream device for MoFRR.
• Multipoint LDP link protection on the backup path is not supported because there is no support for
MoFRR inner labels.
You can configure multicast-only fast reroute (MoFRR) to minimize packet loss in a network when there
is a link failure.
When fast reroute is applied to unicast streams, an upstream router preestablishes MPLS label-switched
paths (LSPs) or precomputes an IP loop-free alternate (LFA) fast reroute backup path to handle failure of
a segment in the downstream path.
In multicast routing, the traffic distribution graphs are usually originated by the receiver. This is unlike
unicast routing, which usually establishes the path from the source to the receiver. Protocols that are
capable of establishing multicast distribution graphs are PIM (for IP), multipoint LDP (for MPLS) and
RSVP-TE (for MPLS). Of these, PIM and multipoint LDP receivers initiate the distribution graph setup,
and therefore:
• On the MX Series and SRX Series, MoFRR is supported in PIM and multipoint LDP domains.
The configuration steps are the same for enabling MoFRR for PIM on all devices that support this
feature, unless otherwise indicated. Configuration steps that are not applicable to multipoint LDP
MoFRR are also indicated.
(For MX Series routers only) MoFRR is supported on MX Series routers with MPC line cards. As a
prerequisite, all the line cards in the router must be MPCs.
1. (For MX Series and SRX Series routers only) Set the router to enhanced IP mode.
[edit chassis]
user@host# set network-services enhanced-ip
2. Enable MoFRR.
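For example, using the stream-protection statement described earlier:

[edit routing-options multicast]
user@host# set stream-protection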
3. (Optional) Configure a routing policy that filters for a restricted set of multicast streams to be
affected by your MoFRR configuration.
You can apply filters that are based on source or group addresses.
For example:
[edit policy-options]
policy-statement mofrr-select {
term A {
from {
source-address-filter 225.1.1.1/32 exact;
}
then {
accept;
}
}
term B {
from {
source-address-filter 226.0.0.0/8 orlonger;
}
then {
accept;
}
}
term C {
from {
source-address-filter 227.1.1.0/24 orlonger;
source-address-filter 227.4.1.0/24 orlonger;
source-address-filter 227.16.1.0/24 orlonger;
}
then {
accept;
}
}
term D {
from {
source-address-filter 227.1.1.1/32 exact;
}
then {
reject; #MoFRR disabled
}
}
...
}
4. (Optional) If you configured a routing policy to filter the set of multicast groups to be affected by
your MoFRR configuration, apply the policy for MoFRR stream protection.
For example:
routing-options {
multicast {
stream-protection {
policy mofrr-select;
}
}
}
5. (Optional) In a PIM domain with MoFRR, allow MoFRR to be applied to any-source multicast (ASM)
(*,G) joins.
This is not supported for multipoint LDP MoFRR.
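For example:

[edit routing-options multicast]
user@host# set stream-protection mofrr-asm-starg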
6. (Optional) In a PIM domain with MoFRR, allow only a disjoint RPF (an RPF on a separate plane) to be
selected as the backup RPF path.
This is not supported for multipoint LDP MoFRR. In a multipoint LDP MoFRR domain, the same label
is shared between parallel links to the same upstream neighbor. This is not the case in a PIM domain,
where each link forms a neighbor. The mofrr-disjoint-upstream-only statement does not allow a
backup RPF path to be selected if the path goes to the same upstream neighbor as that of the
primary RPF path. This ensures that MoFRR is triggered only on a topology that has multiple RPF
upstream neighbors.
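For example:

[edit routing-options multicast]
user@host# set stream-protection mofrr-disjoint-upstream-only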
7. (Optional) In a PIM domain with MoFRR, prevent sending join messages on the backup path, but
retain all other MoFRR functionality.
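For example:

[edit routing-options multicast]
user@host# set stream-protection mofrr-no-backup-join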
8. (Optional) In a PIM domain with MoFRR, allow new primary path selection to be based on the unicast
gateway selection for the unicast route to the source and to change when there is a change in the
unicast selection, rather than having the backup path be promoted as primary. This ensures that the
primary RPF hop is always on the best path.
When you include the mofrr-primary-path-selection-by-routing statement, the backup path is not
guaranteed to be promoted to the new primary path when the primary path goes down.
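For example:

[edit routing-options multicast]
user@host# set stream-protection mofrr-primary-path-selection-by-routing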
IN THIS SECTION
Requirements | 1193
Overview | 1193
Verification | 1201
This example shows how to configure multicast-only fast reroute (MoFRR) to minimize packet loss in a
network when there is a link failure. It works by enhancing the multicast routing protocol, Protocol
Independent Multicast (PIM).
MoFRR transmits a multicast join message from a receiver toward a source on a primary path, while also
transmitting a secondary multicast join message from the receiver toward the source on a backup path.
Data packets are received from both the primary and backup paths. The redundant packets are
discarded at topology merge points, based on priority (weights assigned to primary and backup paths).
When a failure is detected on the primary path, the repair is made by changing the interface on which
packets are accepted to the secondary interface. Because the repair is local, it is fast—greatly improving
convergence times in the event of a link failure on the primary path.
Requirements
No special configuration beyond device initialization is required before configuring this example.
In this example, only the egress provider edge (PE) router has MoFRR enabled, although MoFRR in a
PIM domain can be enabled on any of the routers.
MoFRR is supported on MX Series platforms with MPC line cards. As a prerequisite, the router must be
set to network-services enhanced-ip mode, and all the line cards in the platform must be MPCs.
This example requires Junos OS Release 14.1 or later on the egress PE router.
Overview
IN THIS SECTION
Topology | 1194
In this example, Device R3 is the egress edge router. MoFRR is enabled on this device only.
OSPF or IS-IS is used for connectivity, though any interior gateway protocol (IGP) or static routes can be
used.
PIM sparse mode version 2 is enabled on all devices in the PIM domain. Device R1 serves as the
rendezvous point (RP).
Device R3, in addition to MoFRR, also has PIM join load balancing enabled.
For testing purposes, routers are used to simulate the source and the receiver. Device R3 is configured
to statically join the desired group by using the set protocols igmp interface fe-1/2/15.0 static group
225.1.1.1 command. It is just joining, not listening. The fe-1/2/15.0 interface is the Device R3 interface
facing the receiver. In the case when a real multicast receiver host is not available, as in this example,
this static IGMP configuration is useful. On the receiver, to make it listen to the multicast group address,
this example uses set protocols sap listen 225.1.1.1. To make the source send multicast traffic, a
multicast ping is issued from the source router. The ping command is ping 225.1.1.1 bypass-routing
interface fe-1/2/10.0 ttl 10 count 1000000000. The fe-1/2/10.0 interface is the source interface facing
Device R1.
MoFRR configuration includes multiple options that are not shown in this example, but are explained
separately. The options are as follows:
stream-protection {
mofrr-asm-starg;
mofrr-disjoint-upstream-only;
mofrr-no-backup-join;
mofrr-primary-path-selection-by-routing;
policy policy-name;
}
Topology
"CLI Quick Configuration" on page 1195 shows the configuration for all of the devices in Figure 135 on
page 1194.
The section "Step-by-Step Configuration" on page 1197 describes the steps on Device R3.
IN THIS SECTION
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, and then copy and paste
the commands into the CLI at the [edit] hierarchy level.
Device R1
Device R2
Device R3
Device R6
Device Source
Device Receiver
Step-by-Step Configuration
IN THIS SECTION
Procedure | 1197
Procedure
Step-by-Step Procedure
The following example requires that you navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
[edit chassis]
user@R3# set network-services enhanced-ip
[edit interfaces]
user@R3# set fe-1/2/13 unit 0 family inet address 10.0.0.10/30
user@R3# set fe-1/2/15 unit 0 family inet address 10.0.0.13/30
user@R3# set fe-1/2/14 unit 0 family inet address 10.0.0.22/30
user@R3# set lo0 unit 0 family inet address 192.168.0.3/32
3. For testing purposes only, on the interface facing Device Receiver, simulate IGMP joins.
If your test environment has receiver hosts, this step is not necessary.
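For example, using the static join described in the overview:

[edit protocols igmp]
user@R3# set interface fe-1/2/15.0 static group 225.1.1.1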
5. Configure PIM.
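For example, a sparse-mode sketch based on the overview (the static RP address 192.168.0.1 for Device R1 is an assumption inferred from the loopback addressing pattern in this example):

[edit protocols pim]
user@R3# set rp static address 192.168.0.1
user@R3# set interface all mode sparse version 2
user@R3# set join-load-balance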
8. Enable MoFRR.
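For example:

[edit routing-options multicast]
user@R3# set stream-protection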
Results
From configuration mode, confirm your configuration by entering the show chassis, show interfaces,
show protocols, show policy-options, and show routing-options commands. If the output does not
display the intended configuration, repeat the instructions in this example to correct the configuration.
address 192.168.0.3/32;
}
}
}
then {
load-balance per-packet;
}
}
If you are done configuring the device, enter commit from configuration mode.
Verification
IN THIS SECTION
Purpose
Action
Meaning
The interface on Device Source, facing Device R1, is fe-1/2/10.0. Keep in mind that multicast pings have
a TTL of 1 by default, so you must use the ttl option.
Purpose
Make sure that the egress device has two upstream interfaces for the multicast group join.
Action
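From operational mode, output like the following is produced by the show pim join extensive command (the exact command form shown here is an assumption for this example):

user@R3> show pim join extensive 225.1.1.1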
Group: 225.1.1.1
Source: 10.0.0.1
Flags: sparse,spt
Active upstream interface: fe-1/2/13.0
Active upstream neighbor: 10.0.0.9
MoFRR Backup upstream interface: fe-1/2/14.0
MoFRR Backup upstream neighbor: 10.0.0.21
Upstream state: Join to Source, No Prune to RP
Keepalive timeout: 354
Uptime: 00:00:06
Downstream neighbors:
Interface: fe-1/2/15.0
10.0.0.13 State: Join Flags: S Timeout: Infinity
Uptime: 00:00:06 Time since last Join: 00:00:06
Number of downstream interfaces: 1
Meaning
The output shows an active upstream interface and neighbor, and also an MoFRR backup upstream
interface and neighbor.
Purpose
Examine the IP multicast forwarding table to make sure that there is an upstream RPF interface list, with
a primary and a backup interface.
Action
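From operational mode, output like the following is produced by the show multicast route extensive command (the exact command form shown here is an assumption for this example):

user@R3> show multicast route extensive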
Group: 225.1.1.1
Source: 10.0.0.1/32
Upstream rpf interface list:
fe-1/2/13.0 (P) fe-1/2/14.0 (B)
Downstream interface list:
fe-1/2/15.0
Session description: Unknown
Forwarding statistics are not available
RPF Next-hop ID: 836
Next-hop ID: 1048585
Upstream protocol: PIM
Route state: Active
Forwarding state: Forwarding
Cache lifetime/timeout: 171 seconds
Wrong incoming interface notifications: 0
Uptime: 00:03:09
Meaning
The output shows an upstream RPF interface list, with a primary and a backup interface.
RELATED DOCUMENTATION
IN THIS SECTION
Requirements | 1204
Overview | 1205
Verification | 1212
This example shows how to configure multicast-only fast reroute (MoFRR) to minimize packet loss in a
network when there is a link failure. It works by enhancing the multicast routing protocol, Protocol
Independent Multicast (PIM).
MoFRR transmits a multicast join message from a receiver toward a source on a primary path, while also
transmitting a secondary multicast join message from the receiver toward the source on a backup path.
Data packets are received from both the primary and backup paths. The redundant packets are
discarded at topology merge points, based on priority (weights assigned to primary and backup paths).
When a failure is detected on the primary path, the repair is made by changing the interface on which
packets are accepted to the secondary interface. Because the repair is local, it is fast—greatly improving
convergence times in the event of a link failure on the primary path.
Requirements
No special configuration beyond device initialization is required before configuring this example.
This example uses QFX Series switches, and only the egress provider edge (PE) device has MoFRR
enabled. This topology might alternatively include MX Series routers for the other devices where
MoFRR is not enabled; in that case, substitute the corresponding interfaces for MX Series device ports
used for the primary or backup multicast traffic streams.
This example requires Junos OS Release 17.4R1 or later on the device running MoFRR.
Overview
IN THIS SECTION
Topology | 1206
In this example, Device R3 is the egress edge device. MoFRR is enabled on this device only.
OSPF or IS-IS is used for connectivity, though any interior gateway protocol (IGP) or static routes can be
used.
PIM sparse mode version 2 is enabled on all devices in the PIM domain. Device R1 serves as the
rendezvous point (RP).
Device R3, in addition to MoFRR, also has PIM join load balancing enabled.
For testing purposes, routing or switching devices are used to simulate the multicast source and the
receiver. Device R3 is configured to statically join the desired group by using the set protocols igmp
interface xe-0/0/15.0 static group 225.1.1.1 command. (The static join adds group state on the
interface; the device itself does not listen to the traffic.) The xe-0/0/15.0 interface is the Device R3
interface facing the receiver. This static IGMP configuration is useful when, as in this example, a real
multicast receiver host is not available. On the receiver, to listen to the multicast group address, this
example uses set protocols sap listen 225.1.1.1. To send multicast traffic from the source, a multicast
ping is issued from the source device: ping 225.1.1.1 bypass-routing interface xe-0/0/10.0 ttl 10
count 1000000000. The xe-0/0/10.0 interface is the source interface facing Device R1.
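The test-harness statements described above can be summarized as follows (the interface names and the group address are the ones used in this example; adjust them for your topology):

```
Device R3 (egress PE): statically join the group toward the receiver
  set protocols igmp interface xe-0/0/15.0 static group 225.1.1.1

Device Receiver: listen to the multicast group address
  set protocols sap listen 225.1.1.1

Device Source: generate multicast traffic toward Device R1
  ping 225.1.1.1 bypass-routing interface xe-0/0/10.0 ttl 10 count 1000000000
```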
MoFRR configuration includes multiple options that are not shown in this example, but are explained
separately. The options are as follows:
stream-protection {
mofrr-asm-starg;
mofrr-disjoint-upstream-only;
mofrr-no-backup-join;
mofrr-primary-path-selection-by-routing;
policy policy-name;
}
Topology
"CLI Quick Configuration" on page 1206 shows the configuration for all of the devices in Figure 136 on
page 1206.
The section "Step-by-Step Configuration" on page 1208 describes the steps on Device R3.
IN THIS SECTION
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, and then copy and paste
the commands into the CLI at the [edit] hierarchy level.
Device R1
Device R2
Device R3
Device R6
Device Source
Device Receiver
Step-by-Step Configuration
IN THIS SECTION
Procedure | 1209
Procedure
Step-by-Step Procedure
The following example requires that you navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
[edit interfaces]
user@R3# set xe-0/0/13 unit 0 family inet address 10.0.0.10/30
user@R3# set xe-0/0/15 unit 0 family inet address 10.0.0.13/30
user@R3# set xe-0/0/14 unit 0 family inet address 10.0.0.22/30
user@R3# set lo0 unit 0 family inet address 192.168.0.3/32
2. For testing purposes only, on the interface facing the device labeled Receiver, simulate IGMP joins.
If your test environment has receiver hosts, this step is not necessary.
4. Configure PIM.
7. Enable MoFRR.
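The command body for this step is not shown in this extract. On devices that support PIM MoFRR, the feature is typically enabled with the stream-protection statement under the [edit routing-options multicast] hierarchy; verify the exact statement against your platform and Junos release:

```
[edit routing-options multicast]
user@R3# set stream-protection
```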
Results
From configuration mode, confirm your configuration by entering the show interfaces, show protocols,
show policy-options, and show routing-options commands. If the output does not display the intended
configuration, repeat the instructions in this example to correct the configuration.
xe-0/0/15 {
unit 0 {
family inet {
address 10.0.0.13/30;
}
}
}
lo0 {
unit 0 {
family inet {
address 192.168.0.3/32;
}
}
}
version 2;
}
join-load-balance {
automatic;
}
}
If you are done configuring the device, enter commit from configuration mode.
Verification
IN THIS SECTION
Purpose
Action
Meaning
The interface on Device Source, facing Device R1, is xe-0/0/10.0. Keep in mind that multicast pings
have a TTL of 1 by default, so you must use the ttl option.
Purpose
Make sure that the egress device has two upstream interfaces for the multicast group join.
Action
Group: 225.1.1.1
Source: 10.0.0.1
Flags: sparse,spt
Active upstream interface: xe-0/0/13.0
Active upstream neighbor: 10.0.0.9
MoFRR Backup upstream interface: xe-0/0/14.0
Meaning
The output shows an active upstream interface and neighbor, and also an MoFRR backup upstream
interface and neighbor.
Purpose
Examine the IP multicast forwarding table to make sure that there is an upstream RPF interface list, with
a primary and a backup interface.
Action
Group: 225.1.1.1
Source: 10.0.0.1/32
Upstream rpf interface list:
xe-0/0/13.0 (P) xe-0/0/14.0 (B)
Downstream interface list:
xe-0/0/15.0
Session description: Unknown
Forwarding statistics are not available
RPF Next-hop ID: 836
Next-hop ID: 1048585
Upstream protocol: PIM
Route state: Active
Forwarding state: Forwarding
Meaning
The output shows an upstream RPF interface list, with a primary and a backup interface.
RELATED DOCUMENTATION
IN THIS SECTION
Requirements | 1216
Overview | 1216
Configuration | 1226
Verification | 1233
This example shows how to configure multicast-only fast reroute (MoFRR) to minimize packet loss in a
network when there is a link failure.
Multipoint LDP MoFRR is used at the egress node of an MPLS network, where the packets are
forwarded to an IP network. With multipoint LDP MoFRR, two paths toward the upstream provider
edge (PE) router are established so that the label-edge router (LER) receives two streams of MPLS
packets. One stream (the primary) is accepted, and the other (the backup) is dropped at the LER. The
backup stream is accepted if the primary path fails.
Requirements
No special configuration beyond device initialization is required before configuring this example.
In a multipoint LDP domain, for MoFRR to work, only the egress PE router needs to have MoFRR
enabled. The other routers do not need to support MoFRR.
MoFRR is supported on MX Series platforms with MPC line cards. As a prerequisite, the router must be
set to network-services enhanced-ip mode, and all the line cards in the router must be MPCs.
This example requires Junos OS Release 14.1 or later on the egress PE router.
Overview
IN THIS SECTION
Topology | 1217
In this example, Device R3 is the egress edge router. MoFRR is enabled on this device only.
OSPF is used for connectivity, though any interior gateway protocol (IGP) or static routes can be used.
For testing purposes, routers are used to simulate the source and the receiver. Device R4 and Device R8
are configured to statically join the desired group by using the set protocols igmp interface interface-
name static group group command. This static IGMP configuration is useful when, as in this example, a
real multicast receiver host is not available. To make the receivers listen to the multicast group address,
this example uses set protocols sap listen group.
MoFRR configuration includes a policy option that is not shown in this example, but is explained
separately. The option is configured as follows:
stream-protection {
policy policy-name;
}
Topology
"CLI Quick Configuration" on page 1217 shows the configuration for all of the devices in Figure 137 on
page 1217.
The section "Configuration" on page 1226 describes the steps on Device R3.
IN THIS SECTION
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, and then copy and paste
the commands into the CLI at the [edit] hierarchy level.
Device src1
Device src2
Device R1
Device R2
Device R3
Device R4
Device R5
Device R6
Device R7
Device R8
Configuration
IN THIS SECTION
Procedure | 1226
Procedure
Step-by-Step Procedure
The following example requires that you navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
[edit chassis]
user@R3# set network-services enhanced-ip
[edit interfaces]
user@R3# set ge-1/2/14 unit 0 description R3-to-R2
user@R3# set ge-1/2/14 unit 0 family inet address 1.2.3.2/30
user@R3# set ge-1/2/14 unit 0 family mpls
user@R3# set ge-1/2/18 unit 0 description R3-to-R4
user@R3# set ge-1/2/18 unit 0 family inet address 1.3.4.1/30
user@R3# set ge-1/2/18 unit 0 family mpls
user@R3# set ge-1/2/19 unit 0 description R3-to-R6
5. Configure PIM.
6. Configure LDP.
Results
From configuration mode, confirm your configuration by entering the show chassis, show interfaces,
show protocols, show policy-options, and show routing-options commands. If the output does not
display the intended configuration, repeat the instructions in this example to correct the configuration.
address 1.3.7.1/30;
}
family mpls;
}
}
ge-1/2/22 {
unit 0 {
description R3-to-R8;
family inet {
address 1.3.8.1/30;
}
family mpls;
}
}
ge-1/2/15 {
unit 0 {
description R3-to-R2;
family inet {
address 1.2.94.2/30;
}
family mpls;
}
}
ge-1/2/20 {
unit 0 {
description R3-to-R6;
family inet {
address 1.2.96.2/30;
}
family mpls;
}
}
lo0 {
unit 0 {
family inet {
address 192.168.15.1/32;
address 1.1.1.3/32 {
primary;
}
}
}
}
interface ge-1/2/22.0;
}
If you are done configuring the device, enter commit from configuration mode.
Verification
IN THIS SECTION
Purpose
Make sure that MoFRR is enabled, and determine which labels are being used.
Action
Meaning
The output shows that MoFRR is enabled, and it shows that the labels 301568 and 301600 are being
used for the two multipoint LDP point-to-multipoint LSPs.
Purpose
Make sure that the egress device has two upstream interfaces for the multicast group join.
Action
RPF Nexthops :
ge-1/2/20.0, 1.2.96.1, Label: 301584, weight: 0xfffe
ge-1/2/19.0, 1.3.6.1, Label: 301584, weight: 0xfffe
Meaning
The output shows the primary upstream paths and the backup upstream paths. It also shows the RPF
next hops.
Purpose
Examine the IP multicast forwarding table to make sure that there is an upstream RPF interface list, with
a primary and a backup interface.
Action
Interface ge-1/2/22.0
RPF Nexthops: Interface ge-1/2/15.0, 1.2.94.1, 301616, 65534
Interface ge-1/2/20.0, 1.2.96.1, 301600, 1
Interface ge-1/2/14.0, 1.2.3.1, 301616, 65534
Interface ge-1/2/19.0, 1.3.6.1, 301600, 1
Attached FECs: P2MP root-addr 1.1.1.1, grp: 232.1.1.2, src: 192.168.219.11
(Active)
P2MP path type: Transit/Egress
Output Session (label): 1.1.1.2:0 (301616) (Backup)
Egress Nexthops: Interface ge-1/2/18.0
Interface ge-1/2/22.0
RPF Nexthops: Interface ge-1/2/15.0, 1.2.94.1, 301616, 65534
Interface ge-1/2/20.0, 1.2.96.1, 301600, 1
Interface ge-1/2/14.0, 1.2.3.1, 301616, 65534
Interface ge-1/2/19.0, 1.3.6.1, 301600, 1
Attached FECs: P2MP root-addr 1.1.1.1, grp: 232.1.1.2, src: 192.168.219.11
(Active)
Meaning
The output shows primary and backup sessions, and RPF next hops.
Purpose
Make sure that both primary and backup statistics are listed.
Action
Meaning
The output shows both primary and backup routes with the labels.
CHAPTER 25
IN THIS CHAPTER
Configuring Multicast Snooping to Ignore Spanning Tree Topology Change Messages | 1253
Because MX Series routers can support both Layer 3 and Layer 2 functions at the same time, you can
configure the Layer 3 multicast protocols Protocol Independent Multicast (PIM) and the Internet Group
Membership Protocol (IGMP) as well as Layer 2 VLANs on an MX Series router.
Normal encapsulation rules restrict Layer 2 processing to accessing information in the frame header and
Layer 3 processing to accessing information in the packet header. However, in some cases, an interface
running a Layer 2 protocol needs information available only at Layer 3. In multicast applications, the
VLANs need the group membership information and multicast tree information available to the Layer 3
IGMP and PIM protocols. In these cases, the Layer 3 configurations can use PIM or IGMP snooping to
provide the needed information at the VLAN level.
For information about configuring multicast snooping for the operational details of a Layer 3 protocol on
behalf of a Layer 2 spanning-tree protocol process, see "Understanding Multicast Snooping and VPLS
Root Protection" on page 1241.
Snooping configuration statements and examples are not included in the Junos OS Layer 2 Switching
and Bridging Library for Routing Devices. For more information about configuring PIM and IGMP
snooping, see the Junos OS Multicast Protocols User Guide.
RELATED DOCUMENTATION
IN THIS SECTION
Enabling Multicast Snooping for Multichassis Link Aggregation Group Interfaces | 1251
Routers can handle both Layer 2 and Layer 3 addressing information because the frame and its
addresses must be processed to access the encapsulated packet inside. Routers can run Layer 3
multicast protocols such as PIM or IGMP and determine where to forward multicast content or when a
host on an interface joins or leaves a group. However, bridges and LAN switches, as Layer 2 devices, are
not supposed to have access to the multicast information inside the packets that their frames carry.
How then are bridges and other Layer 2 devices to determine when a device on an interface joins or
leaves a multicast tree, or whether a host on an attached LAN wants to receive the content of a
particular multicast group?
The answer is for the Layer 2 device to implement multicast snooping. Multicast snooping is a general
term and applies to the process of a Layer 2 device “snooping” at the Layer 3 packet content to
determine which actions are taken to process or forward a frame. There are more specific forms of
snooping, such as IGMP snooping or PIM snooping. In all cases, snooping involves a device configured to
function at Layer 2 having access to normally “forbidden” Layer 3 (packet) information. Snooping makes
multicasting more efficient in these devices.
SEE ALSO
VPLS root protection is a spanning-tree protocol process in which only one interface in a multihomed
environment actively forwards spanning-tree protocol frames. This protects the root of the spanning
tree against bridging loops, but it also prevents both devices in the multihomed topology from receiving
snooped information, such as IGMP membership reports.
For example, consider a collection of multicast-capable hosts connected to two customer edge (CE)
routers (CE1 and CE2) which are connected to each other (a CE1–CE2 link is configured) and
multihomed to two provider edge (PE) routers (PE1 and PE2, respectively). The active PE only receives
forwarded spanning-tree protocol information on the active PE-CE link, due to root protection
operation. As long as the CE1–CE2 link is operational, this is not a problem. However, if the link
between CE1 and CE2 fails, and the other PE becomes the active spanning-tree protocol link, no
multicast snooping information is available on the new active PE. The new active PE will not forward
multicast traffic to the CE and the hosts serviced by this CE router.
The service outage is corrected once the hosts send new group membership IGMP reports to the CE
routers. However, the service outage can be avoided if multicast snooping information is available to
both PEs in spite of normal spanning-tree protocol root protection operation.
You can configure multicast snooping to ignore spanning-tree topology change messages on bridge
domains in virtual switches and on bridge domains in the default routing instance. Use the ignore-stp-
topology-change statement to ignore these messages.
SEE ALSO
multicast-snooping-options {
flood-groups [ ip-addresses ];
forwarding-cache {
threshold suppress value <reuse value>;
}
graceful-restart <restart-duration seconds>;
ignore-stp-topology-change;
multichassis-lag-replicate-state;
nexthop-hold-time milliseconds;
traceoptions {
file filename <files number> <size size> <world-readable | no-world-readable>;
flag flag <flag-modifier> <disable>;
}
}
By default, multicast snooping is disabled. You can enable multicast snooping in VPLS or virtual switch
instance types in the instance hierarchy.
If there are multiple bridge domains configured under a VPLS or virtual switch instance, the multicast
snooping options configured at the instance level apply to all the bridge domains.
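For example (the instance name vs1 is a placeholder, not part of a specific example in this guide), instance-level options configured under a virtual switch apply to every bridge domain in that instance:

```
routing-instances {
    vs1 {
        instance-type virtual-switch;
        multicast-snooping-options {
            graceful-restart;
        }
    }
}
```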
SEE ALSO
IN THIS SECTION
Requirements | 1243
Configuration | 1246
Verification | 1249
This example shows how to configure multicast snooping in a bridge or VPLS routing-instance scenario.
Requirements
• Configure an interior gateway protocol. See the Junos OS Routing Protocols Library for Routing
Devices.
• Configure a multicast protocol. This feature works with the following multicast protocols:
• DVMRP
• PIM-DM
• PIM-SM
• PIM-SSM
IN THIS SECTION
Topology | 1246
IGMP snooping prevents Layer 2 devices from indiscriminately flooding multicast traffic out all
interfaces. The settings that you configure for multicast snooping help manage the behavior of IGMP
snooping.
You can configure multicast snooping options on the default master instance and on individual bridge or
VPLS instances. The default master instance configuration is global and applies to all individual bridge or
VPLS instances in the logical router. The configuration for the individual instances overrides the global
configuration.
• flood-groups—Enables you to list multicast group addresses for which traffic must be flooded. This
setting is useful for ensuring that IGMP snooping does not prevent necessary multicast flooding.
The block of multicast addresses from 224.0.0.1 through 224.0.0.255 is reserved for local wire use.
Groups in this range are assigned for various uses, including routing protocols and local discovery
mechanisms. For example, OSPF uses 224.0.0.5 for all OSPF routers.
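For example, to always flood traffic for the well-known OSPF groups (224.0.0.5, AllSPFRouters, and 224.0.0.6, AllDRouters), you might configure the following; the group list shown is illustrative:

```
multicast-snooping-options {
    flood-groups [ 224.0.0.5 224.0.0.6 ];
}
```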
• forwarding-cache—Specifies how forwarding entries are aged out and how the number of entries is
controlled.
You can configure threshold values on the forwarding cache to suppress (suspend) snooping when
the cache entries reach a certain maximum and reuse the cache when the number falls to another
threshold value. By default, no threshold values are enabled on the router.
The suppress threshold suppresses new multicast forwarding cache entries. An optional reuse
threshold specifies the point at which the router begins to create new multicast forwarding cache
entries. The range for both thresholds is from 1 through 200,000. If configured, the reuse value must
be less than the suppression value. The suppression value is mandatory. If you do not specify the
optional reuse value, then the number of multicast forwarding cache entries is limited to the
suppression value. A new entry is created as soon as the number of multicast forwarding cache
entries falls below the suppression value.
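A sketch with illustrative threshold values (both values must fall within the 1 through 200,000 range, and the reuse value must be less than the suppress value):

```
multicast-snooping-options {
    forwarding-cache {
        threshold suppress 10000 reuse 8000;
    }
}
```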
• graceful-restart—Configures the time after which routes learned before a restart are replaced with
routes relearned. If graceful restart for multicast snooping is disabled, snooping information is lost
after a Routing Engine restart.
By default, the graceful restart duration is 180 seconds (3 minutes). You can set this value between 0
and 300 seconds. If you set the duration to 0, graceful restart is effectively disabled. Set this value
slightly larger than the IGMP query response interval.
By default the IGMP snooping process on an MX Series router detects interface state changes made
by any of the spanning tree protocols (STPs).
In a VPLS multihoming environment where two PE routers are connected to two interconnected CE
routers and STP root protection is enabled on the PE routers, one of the PE router interfaces is in
forwarding state and the other is in blocking state.
If the link interconnecting the two CE routers fails, the PE router interface in blocking state
transitions to the forwarding state.
The PE router interface does not wait to receive membership reports in response to the next general
or group-specific query. Instead, the IGMP snooping process sends a general query message toward
the CE router. The hosts connected to the CE router reply with reports for all groups they are
interested in.
When the link interconnecting the two CE routers is restored, the original spanning-tree state on
both PE routers is restored. The forwarding PE receives a spanning-tree topology change message
and sends a general query message toward the CE router to immediately reconstruct the group
membership state.
Topology
Figure 138 on page 1246 shows a VPLS multihoming topology in which a customer network has two CE
devices with a link between them. Each CE is connected to one PE.
Configuration
IN THIS SECTION
Procedure | 1247
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see the Junos OS CLI User Guide.
5. Configure the router to ignore messages about spanning-tree topology state changes.
user@host# commit
Results
Confirm your configuration by entering the show bridge-domains and show routing-instances
commands.
}
}
Verification
SEE ALSO
In this example, you configure a hold-time of 20 milliseconds for instance-type virtual-switch, using the
nexthop-hold-time statement:
2. Use the show multicast snooping route command to verify that the bulk updates feature is turned
on.
You can include the nexthop-hold-time statement only for routing-instance types of virtual-switch or
vpls at the following hierarchy level.
If the nexthop-hold-time statement is deleted from the router configuration, bulk updates are disabled.
SEE ALSO
multicast-snooping-options | 1703
nexthop-hold-time | 1723
[edit]
multicast-snooping-options {
multichassis-lag-replicate-state;
}
Replicating join and leave messages between links of a dual-link MC-LAG interface enables faster
recovery of membership information for MC-LAG interfaces that experience service interruption.
Without state replication, if a dual-link MC-LAG interface experiences a service interruption (for
example, if an active link switches to standby), the membership information for the interface is
recovered by generating an IGMP query to the network. This method can take from 1 through 10
seconds to complete, which might be too long for some applications.
When state replication is provided for MC-LAG interfaces, IGMP join or leave messages received on an
MC-LAG device are replicated from the active MC-LAG link to the standby link through an Interchassis
Communication Protocol (ICCP) connection. The standby link processes the messages as if they were
received from the corresponding active MC-LAG link, except it does not add itself as a next hop and it
does not flood the message to the network. After a failover, the multicast membership status of the link
can be recovered within a few seconds or less by retrieving the replicated messages.
After you commit the configuration, multicast snooping automatically identifies the active link during
initialization or after failover, and replicates data between the active and standby links without
administrator intervention.
2. Use the show igmp snooping interface command to display the state for MC-LAG interfaces.
Learning-Domain: default
Interface: ae0.1
State: Up Groups: 1
NOTE: You can use the show igmp snooping membership command to display group
membership information for the links of MC-LAG interfaces.
SEE ALSO
multichassis-lag-replicate-state | 1707
Configuring Multicast Snooping
This example configures the multicast snooping option for a bridge domain named Ignore-STP in a
virtual switch routing instance named vs_routing_instance_multihomed_CEs:
[edit]
routing-instances {
vs_routing_instance_multihomed_CEs {
instance-type virtual-switch;
bridge-domains {
bd_ignore_STP {
multicast-snooping-options {
ignore-stp-topology-change;
}
}
}
}
}
RELATED DOCUMENTATION
You can configure the multicast snooping process for a virtual switch to ignore VPLS root protection
topology change messages.
1. Configure the spanning-tree protocol. For configuration details, see one of the following topics:
2. Configure VPLS root protection. For configuration details, see one of the following topics:
• Configuring VPLS Root Protection Topology Change Actions to Control Individual VLAN
Spanning-Tree Behavior
1. Configure a virtual-switch routing instance to isolate a LAN segment with its VSTP instance.
[edit]
user@host# edit routing-instances routing-instance-name
user@host# set instance-type virtual-switch
You can configure multicast snooping to ignore messages about spanning tree topology changes
for the virtual-switch routing-instance type only.
c. Configure the logical interfaces for the bridge domain in the virtual switch:
d. Configure the VLAN identifiers for the bridge domain in the virtual switch. For detailed
information, see Configuring a Virtual Switch Routing Instance on MX Series Routers.
2. Configure the multicast snooping process to ignore any spanning tree topology change messages
sent to the virtual switch routing instance:
3. Verify the configuration of multicast snooping for the virtual-switch routing instance to ignore
spanning tree topology change messages:
routing-instance-name {
instance-type virtual-switch;
bridge-domains {
bridge-domain-name {
domain-type bridge {
interface interface-name;
...VLAN-identifiers-configuration...
multicast-snooping-options {
ignore-stp-topology-change;
}
}
}
}
RELATED DOCUMENTATION
When graceful restart is enabled for multicast snooping, no data traffic is lost during a process restart or
a graceful Routing Engine switchover (GRES). Graceful restart can be configured for multicast snooping
either at the global level or at the level of individual routing instances.
At the global level, graceful restart is enabled by default for multicast snooping. To change this default
setting, you can configure the disable statement at the [edit multicast-snooping-options graceful-
restart] hierarchy level:
multicast-snooping-options {
graceful-restart disable;
}
The range for restart-duration is from 0 through 300 seconds. The default value is 180 seconds.
After this period, the Routing Engine resumes normal multicast operation.
You can also set the graceful-restart statement for an individual routing instance level at the [edit
logical-systems logical-system-name routing-instances routing-instance-name multicast-snooping-
options] hierarchy level.
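For example, for a routing instance named ri1, the instance-level statement takes the same restart-duration value:

```
[edit]
user@host# set routing-instances ri1 multicast-snooping-options graceful-restart restart-duration 200
```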
2. Verify your configuration by using the show multicast-snooping-options command.
[edit]
user@host# show multicast-snooping-options
graceful-restart {
restart-duration 200;
}
[edit]
user@host# commit
To configure graceful restart for multicast snooping for an individual routing instance level:
The range for restart-duration is from 0 through 300 seconds. The default value is 180 seconds.
After this period, the Routing Engine resumes normal multicast operation.
NOTE: You can also set the graceful-restart statement for an individual routing instance level
at the [edit logical-systems logical-system-name routing-instances routing-instance-name
multicast-snooping-options] hierarchy level.
[edit]
user@host# show routing-instances ri1 multicast-snooping-options
graceful-restart {
restart-duration 200;
}
[edit]
user@host# commit
RELATED DOCUMENTATION
IN THIS SECTION
PIM snooping configures a device to examine and operate only on PIM hello and join/prune packets. A
PIM snooping device snoops PIM hello and join/prune packets on each interface to find interested
multicast receivers and populates the multicast forwarding tree with this information. PIM snooping
differs from PIM proxying in that both PIM hello and join/prune packets are transparently flooded in the
VPLS as opposed to the flooding of only hello packets in the case of PIM proxying. PIM snooping is
configured on PE routers connected through pseudowires. PIM snooping ensures that no new PIM
packets are generated in the VPLS, with the exception of PIM messages sent through LDP on
pseudowires.
NOTE: In the VPLS documentation, the word router in terms such as PE router is used to refer to
any device that provides routing functions.
A device that supports PIM snooping snoops hello packets received on attachment circuits. It does not
introduce latency in the VPLS core when it forwards PIM join/prune packets.
To configure PIM snooping on a PE router, use the pim-snooping statement at the [edit routing-
instances instance-name protocols] hierarchy level:
routing-instances {
customer {
instance-type vpls;
...
protocols {
pim-snooping {
traceoptions {
file pim.log size 10m;
flag all;
flag timer disable;
}
}
}
}
}
"Example: Configuring PIM Snooping for VPLS" explains the PIM snooping method. The use of the PIM
proxying method is not discussed here and is outside the scope of this document. For more information
about PIM proxying, see PIM Snooping over VPLS.
SEE ALSO
IN THIS SECTION
Requirements | 1259
Overview | 1259
Configuration | 1261
Verification | 1271
This example shows how to configure PIM snooping in a virtual private LAN service (VPLS) to restrict
multicast traffic to interested devices.
Requirements
• M Series Multiservice Edge Routers (M7i and M10i with Enhanced CFEB, M120, and M320 with E3
FPCs) or MX Series 5G Universal Routing Platforms (MX80, MX240, MX480, and MX960)
Overview
IN THIS SECTION
Topology | 1260
The following example shows how to configure PIM snooping to restrict multicast traffic to interested
devices in a VPLS.
NOTE: This example demonstrates PIM snooping by using a PIM snooping device to restrict
multicast traffic. The PIM proxying method of achieving PIM snooping is outside the scope of this
document and is not yet implemented in Junos OS.
Topology
In this example, two PE routers are connected to each other through a pseudowire connection. Router
PE1 is connected to Routers CE1 and CE2. A multicast receiver is attached to Router CE2. Router PE2 is
connected to Routers CE3 and CE4. A multicast source is connected to Router CE3, and a second
multicast receiver is attached to Router CE4.
PIM snooping is configured on Routers PE1 and PE2. Hence, data sent from the multicast source is
received only by members of the multicast group.
Figure 139 on page 1261 shows the topology used in this example.
Configuration
IN THIS SECTION
Results | 1268
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, and then copy and paste
the commands into the CLI at the [edit] hierarchy level.
Router PE1
Router CE1
Router CE2
Router PE2
Router CE4
Step-by-Step Procedure
The following example requires that you navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the CLI User
Guide.
NOTE: This section includes a step-by-step configuration procedure for one or more routers in
the topology. For comprehensive configurations for all routers, see "CLI Quick Configuration" on
page 1262.
1. Configure the router interfaces forming the links between the routers.
Router PE2
[edit interfaces]
user@PE2# set ge-2/0/0 encapsulation ethernet-vpls
user@PE2# set ge-2/0/0 unit 0 description toCE3
user@PE2# set ge-2/0/1 encapsulation ethernet-vpls
user@PE2# set ge-2/0/1 unit 0 description toCE4
user@PE2# set ge-2/0/2 unit 0 description toPE1
user@PE2# set ge-2/0/2 unit 0 family mpls
user@PE2# set ge-2/0/2 unit 0 family inet address 10.0.0.2/30
user@PE2# set lo0 unit 0 family inet address 10.255.7.7/32
NOTE: ge-2/0/0.0 and ge-2/0/1.0 are configured as VPLS interfaces and connect to Routers
CE3 and CE4. See Virtual Private LAN Service User Guide for more details.
Router CE3
[edit interfaces]
user@CE3# set ge-2/0/0 unit 0 description toPE2
NOTE: The ge-2/0/1.0 interface on Router CE3 connects to the multicast source.
Router CE4
[edit interfaces]
user@CE4# set ge-2/0/0 unit 0 description toPE2
user@CE4# set ge-2/0/0 unit 0 family inet address 10.0.0.22/30
user@CE4# set ge-2/0/1 unit 0 description toReceiver2
user@CE4# set ge-2/0/1 unit 0 family inet address 10.0.0.25/30
user@CE4# set lo0 unit 0 family inet address 10.255.4.4/32
Router PE2
[edit routing-options]
user@PE2# set router-id 10.255.7.7
Router PE2
[edit protocols ospf area 0.0.0.0]
user@PE2# set interface ge-2/0/2.0
user@PE2# set interface lo0.0
Router PE2
[edit protocols]
user@PE2# set ldp interface lo0.0
user@PE2# set mpls interface ge-2/0/2.0
user@PE2# set bgp group toPE1 type internal
user@PE2# set bgp group toPE1 local-address 10.255.7.7
user@PE2# set bgp group toPE1 family l2vpn signaling
user@PE2# set bgp group toPE1 neighbor 10.255.1.1
user@PE2# set ldp interface ge-2/0/2.0
The BGP group is required for interfacing with the other PE router. Similarly, configure Router PE1.
5. Configure PIM on the CE routers. Ensure that Router CE3 is configured as the rendezvous point (RP)
and that the RP address is statically configured on the other CE routers.
Router CE3
[edit protocols pim]
user@CE3# set rp local address 10.255.3.3
user@CE3# set interface all
Router CE4
[edit protocols pim]
user@CE4# set rp static address 10.255.3.3
user@CE4# set interface all
6. Configure trace operations for multicast snooping.
Router PE2
[edit multicast-snooping-options traceoptions]
user@PE2# set file snoop.log size 10m
7. Create a routing instance (titanium), and configure the VPLS on the PE routers.
Router PE2
[edit routing-instances titanium]
user@PE2# set instance-type vpls
user@PE2# set vlan-id none
user@PE2# set interface ge-2/0/0.0
user@PE2# set interface ge-2/0/1.0
user@PE2# set route-distinguisher 101:101
user@PE2# set vrf-target target:201:201
user@PE2# set protocols vpls vpls-id 15
user@PE2# set protocols vpls site pe2 site-identifier 2
8. Enable PIM snooping in the routing instance.
Router PE2
[edit routing-instances titanium]
user@PE2# set protocols pim-snooping
Results
From configuration mode, confirm your configuration by entering the show interfaces, show routing-
options, show protocols, show multicast-snooping-options, and show routing-instances commands.
If the output does not display the intended configuration, repeat the instructions in this example to
correct the configuration.
ge-2/0/0 {
    encapsulation ethernet-vpls;
    unit 0 {
        description toCE3;
    }
}
ge-2/0/1 {
    encapsulation ethernet-vpls;
    unit 0 {
        description toCE4;
    }
}
lo0 {
    unit 0 {
        family inet {
            address 10.255.7.7/32;
        }
    }
}
group toPE1 {
    type internal;
    local-address 10.255.7.7;
    family l2vpn {
        signaling;
    }
    neighbor 10.255.1.1;
}
Similarly, confirm the configuration on all other routers. If you are done configuring the routers, enter
commit from configuration mode.
NOTE: Use the show protocols command on the CE routers to verify the configuration for the
PIM RP.
Verification
IN THIS SECTION
Purpose
Action
To verify that PIM snooping is working as desired, use the following commands:
1. From operational mode on Router PE2, run the show pim snooping interfaces command.
Learning-Domain: default
DR address: 10.0.0.22
DR flooding is ON
The output verifies that PIM snooping is configured on the two interfaces connecting Router PE2 to
Routers CE3 and CE4.
2. From operational mode on Router PE2, run the show pim snooping neighbors detail command.
Interface: ge-2/0/0.0
Address: 10.0.0.18
Uptime: 00:17:06
Hello Option Holdtime: 105 seconds 99 remaining
Hello Option DR Priority: 1
Hello Option Generation ID: 552495559
Hello Option LAN Prune Delay: delay 500 ms override 2000 ms
Tracking is supported
Interface: ge-2/0/1.0
Address: 10.0.0.22
Uptime: 00:15:16
Hello Option Holdtime: 105 seconds 103 remaining
Hello Option DR Priority: 1
Hello Option Generation ID: 1131703485
Hello Option LAN Prune Delay: delay 500 ms override 2000 ms
Tracking is supported
The output verifies that Router PE2 can detect the IP addresses of its PIM snooping neighbors
(10.0.0.18 on CE3 and 10.0.0.22 on CE4).
3. From operational mode on Router PE2, run the show pim snooping statistics command.
Learning-Domain: default
Tx J/P messages 0
RX J/P messages 246
Rx J/P messages -- seen 0
Rx J/P messages -- received 246
Rx Hello messages 1036
Rx Version Unknown 0
Rx Neighbor Unknown 0
Rx Upstream Neighbor Unknown 0
Rx J/P Busy Drop 0
Rx J/P Group Aggregate 0
Rx Malformed Packet 0
Rx No PIM Interface 0
Rx Bad Length 0
Rx Unknown Hello Option 0
Rx Unknown Packet Type 0
Rx Bad TTL 0
Rx Bad Destination Address 0
Rx Bad Checksum 0
Rx Unknown Version 0
The output shows the number of hello and join/prune messages received by Router PE2. This verifies
that PIM sparse mode is operational in the network.
4. Send multicast traffic from the source terminal attached to Router CE3, for the multicast group
203.0.113.1.
5. From operational mode on Router PE2, run the show pim snooping join, show pim snooping join
extensive, and show multicast snooping route extensive instance <instance-name> group <group-
name> commands to verify PIM snooping.
Group: 203.0.113.1
Source: *
Flags: sparse,rptree,wildcard
Upstream neighbor: 10.0.0.18, Port: ge-2/0/0.0
Group: 203.0.113.1
Source: 10.0.0.30
Flags: sparse
Upstream neighbor: 10.0.0.18, Port: ge-2/0/0.0
Group: 203.0.113.1
Source: *
Flags: sparse,rptree,wildcard
Upstream neighbor: 10.0.0.18, Port: ge-2/0/0.0
Downstream port: ge-2/0/1.0
Downstream neighbors:
10.0.0.22 State: Join Flags: SRW Timeout: 180
Group: 203.0.113.1
Source: 10.0.0.30
Flags: sparse
Upstream neighbor: 10.0.0.18, Port: ge-2/0/0.0
Downstream port: ge-2/0/1.0
Downstream neighbors:
10.0.0.22 State: Join Flags: S Timeout: 180
The outputs show that multicast traffic sent for the group 203.0.113.1 is sent to Receiver 2 through
Router CE4 and also display the upstream and downstream neighbor details.
user@PE2> show multicast snooping route extensive instance titanium group 203.0.113.1
Nexthop Bulking: OFF
Family: INET
Group: 203.0.113.1/24
Bridge-domain: titanium
Mesh-group: __all_ces__
Downstream interface list:
ge-2/0/1.0 -(1072)
Statistics: 0 kBps, 0 pps, 0 packets
Next-hop ID: 1048577
Route state: Active
Forwarding state: Forwarding
Group: 203.0.113.1/24
Source: 10.0.0.8
Bridge-domain: titanium
Mesh-group: __all_ces__
Downstream interface list:
ge-2/0/1.0 -(1072)
Statistics: 0 kBps, 0 pps, 0 packets
Next-hop ID: 1048577
Route state: Active
Forwarding state: Forwarding
Meaning
SEE ALSO
CHAPTER 26
IN THIS CHAPTER
IN THIS SECTION
IP multicast implementations can achieve some level of scoping by using the time-to-live (TTL) field in
the IP header. However, TTL scoping has proven difficult to implement reliably, and the resulting
schemes often are complex and difficult to understand.
Administratively scoped IP multicast provides clearer and simpler semantics for multicast scoping.
Packets addressed to administratively scoped multicast addresses do not cross configured administrative
boundaries. Administratively scoped multicast addresses are locally assigned, and hence are not required
to be unique across administrative boundaries.
The administratively scoped IP version 4 (IPv4) multicast address space is the range from 239.0.0.0
through 239.255.255.255.
The structure of the IPv4 administratively scoped multicast space is based loosely on the IP version 6
(IPv6) addressing architecture described in RFC 1884, IP Version 6 Addressing Architecture.
• IPv4 local scope—This scope comprises addresses in the range 239.255.0.0/16. The local scope is the
minimal enclosing scope and is not further divisible. Although the exact extent of a local scope is
site-dependent, locally scoped regions must not span any other scope boundary and must be
contained completely within or be equal to any larger scope. If scope regions overlap in an area, the
area of overlap must be within the local scope.
• IPv4 organization local scope—This scope comprises 239.192.0.0/14. It is the space from which an
organization allocates subranges when defining scopes for private use.
The ranges 239.0.0.0/10, 239.64.0.0/10, and 239.128.0.0/10 are unassigned and available for
expansion of this space.
Two other scope classes already exist in IPv4 multicast space: the statically assigned link-local scope,
which is 224.0.0.0/24, and the static global scope allocations, which contain various addresses.
All scoping is inherently bidirectional in the sense that join messages and data forwarding are controlled
in both directions on the scoped interface.
You can configure multicast scoping either by creating a named scope associated with a set of routing
device interfaces and an address range, or by referencing a scope policy that specifies the interfaces and
configures the address range as a series of filters. You cannot combine the two methods (the commit
operation fails for a configuration that includes both). The methods differ somewhat in their
requirements and result in different output from the show multicast scope command.
Routing loops must be avoided in IP multicast networks. Because multicast routers must replicate
packets for each downstream branch, not only do looping packets not arrive at a destination, but each
pass around the loop multiplies the number of looping packets, eventually overwhelming the network.
Scoping limits the routers and interfaces that can be used to forward a multicast packet. Scoping can use
the TTL field in the IP packet header, but TTL scoping depends on the administrator having a thorough
knowledge of the network topology. This topology can change as links fail and are restored, making TTL
scoping a poor solution for multicast.
Multicast scoping is administrative in the sense that a range of multicast addresses is reserved for
scoping purposes, as described in RFC 2365. Routers at the boundary must be able to filter multicast
packets and make sure that the packets do not stray beyond the established limit.
Administrative scoping is much better than TTL scoping, but in many cases the dropping of
administratively scoped packets is still determined by the network administrator. For example, the
multicast address range 239/8 is defined in RFC 2365 as administratively scoped, and packets using this
range are not to be forwarded beyond a network “boundary,” usually a routing domain. But only the
network administrator knows where the border routers are and can implement the scoping correctly.
Multicast groups used by unicast routing protocols, such as 224.0.0.5 for all OSPF routers, are
administratively scoped for that LAN only. This scoping allows the same multicast address to be used
without conflict on every LAN running OSPF.
SEE ALSO
IN THIS SECTION
Requirements | 1278
Overview | 1279
Configuration | 1279
Verification | 1282
This example shows how to configure multicast scoping with four scopes: local, organization,
engineering, and marketing.
Requirements
• Configure a tunnel interface. See the Junos OS Network Interfaces Library for Routing Devices.
• Configure an interior gateway protocol or static routing. See the Junos OS Routing Protocols Library
for Routing Devices.
Overview
The local scope is configured on a GRE tunnel interface. The organization scope is configured on a GRE
tunnel interface and a SONET/SDH interface. The engineering scope is configured on an IP-IP tunnel
interface and two SONET/SDH interfaces. The marketing scope is configured on a GRE tunnel interface
and two SONET/SDH interfaces. The Junos OS can scope any user-configurable IPv6 or IPv4 group.
To configure multicast scoping by defining a named scope, you must specify a name for the scope, the
set of routing device interfaces on which you are configuring scoping, and the scope's address range.
NOTE: The prefix specified with the prefix statement must be unique for each scope statement.
If multiple scopes contain the same prefix, only the last scope applies to the interfaces. If you
need to scope the same prefix on multiple interfaces, list all of them in the interface statement
for a single scope statement.
When you configure multicast scoping with a named scope, all scope boundaries must include the local
scope. If this scope is not configured, it is added automatically at all scoped interfaces. The local scope
limits the use of the multicast group 239.255.0.0/16 to an attached LAN.
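Based on the overview above, a named-scope configuration for this example might look like the following sketch. The tunnel and SONET/SDH interface names are assumptions; substitute the interfaces in your topology. Note that each scope statement uses a unique prefix, as the preceding note requires:
[edit routing-options multicast]
scope local {
    interface gr-1/1/0.0;                          # GRE tunnel interface
    prefix 239.255.0.0/16;
}
scope organization {
    interface [ gr-1/1/0.0 so-0/0/0.0 ];
    prefix 239.192.0.0/14;
}
scope engineering {
    interface [ ip-1/1/0.0 so-0/0/0.0 so-0/0/1.0 ];  # IP-IP tunnel plus two SONET/SDH interfaces
    prefix 239.192.0.0/16;
}
scope marketing {
    interface [ gr-1/1/0.0 so-0/0/0.0 so-0/0/1.0 ];
    prefix 239.193.0.0/16;
}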
Configuration
IN THIS SECTION
Procedure | 1279
Results | 1281
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, and then copy and paste
the commands into the CLI at the [edit] hierarchy level.
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos
OS CLI User Guide.
user@host# commit
Results
Verification
To verify that group scoping is in effect, issue the show multicast scope command:
When you configure scoping with a named scope, the show multicast scope operational mode
command displays the names of the defined scopes, prefixes, and interfaces.
SEE ALSO
IN THIS SECTION
Requirements | 1282
Overview | 1283
Configuration | 1283
Verification | 1286
This example shows how to configure a multicast scope policy named allow-auto-rp-on-backbone,
allowing packets for auto-RP groups 224.0.1.39/32 and 224.0.1.40/32 on backbone-facing interfaces,
and rejecting all other addresses in the 224.0.1.0/24 and 239.0.0.0/8 address ranges.
Requirements
• Configure an interior gateway protocol or static routing. See the Junos OS Routing Protocols Library
for Routing Devices.
Overview
Each referenced policy must be correctly configured at the [edit policy-options] hierarchy level,
specifying the set of routing device interfaces on which to configure scoping, and defining the scope's
address range as a series of route filters. Only the interface, route-filter, and prefix-list match conditions
are supported for multicast scope policies. All other configured match conditions are ignored. The only
actions supported are accept, reject, and the policy flow actions next-term and next-policy. The reject
action means that joins and multicast forwarding are suppressed in both directions on the configured
interfaces. The accept action allows joins and multicast forwarding in both directions on the interface.
By default, scope policies apply to all interfaces. The default action is accept.
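A sketch of such a policy for the allow-auto-rp-on-backbone example introduced above. The backbone interface name is an assumption; the terms use only the supported match conditions and actions described in this section:
[edit policy-options]
policy-statement allow-auto-rp-on-backbone {
    term allow-auto-rp {
        from {
            interface so-1/0/0.0;                  # assumed backbone-facing interface
            route-filter 224.0.1.39/32 exact;
            route-filter 224.0.1.40/32 exact;
        }
        then accept;
    }
    term reject-other-admin-scoped {
        from {
            route-filter 224.0.1.0/24 orlonger;
            route-filter 239.0.0.0/8 orlonger;
        }
        then reject;
    }
}
The policy is then referenced with the scope-policy statement:
[edit routing-options multicast]
scope-policy allow-auto-rp-on-backbone;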
NOTE: Multicast scoping configured with a scope policy differs in some ways from scoping
configured with a named scope (which uses the scope statement):
• You cannot apply a scope policy to a specific routing instance, because all scope policies apply
to all routing instances. In contrast, a named scope does apply individually to a specific
routing instance.
• In contrast to scoping with a named scope, scoping with a scope policy does not
automatically add the local scope at scope boundaries. You must explicitly configure the local
scope boundaries. The local scope limits the use of the multicast group 239.255.0.0/16 to an
attached LAN.
Configuration
IN THIS SECTION
Procedure | 1284
Results | 1285
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, and then copy and paste
the commands into the CLI at the [edit] hierarchy level.
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
user@host# commit
Results
Confirm your configuration by entering the show policy-options and show routing-options commands.
then reject;
}
}
Verification
To verify that the scope policy is in effect, issue the show multicast scope configuration mode
command:
When you configure multicast scoping with a scope policy, the show multicast scope operational mode
command displays only the name of the scope policy.
SEE ALSO
routing-options {
    multicast {
        scope auto-rp-39 {
            prefix 224.0.1.39/32;
            interface t1-0/0/0.0;
        }
        scope auto-rp-40 {
            prefix 224.0.1.40/32;
            interface t1-0/0/0.0;
        }
        scope scoped-range {
            prefix 239.0.0.0/8;
            interface t1-0/0/0.0;
        }
    }
}
RELATED DOCUMENTATION
IN THIS SECTION
Bandwidth management ensures that multicast traffic oversubscription does not occur on an interface.
When managing multicast bandwidth, you define the maximum amount of multicast bandwidth that an
individual interface can use as well as the bandwidth individual multicast flows use.
For example, the routing software cannot add a flow to an interface if doing so would exceed the allowed
bandwidth for that interface. Under these circumstances, the interface is rejected for that flow. This
rejection, however, does not prevent a multicast protocol (for example, PIM) from sending a join message
upstream. Traffic continues to arrive at the router, even though the router does not send the flow out
the expected outgoing interfaces.
You can configure the flow bandwidth statically by specifying a bandwidth value for the flow in bits per
second, or you can enable the flow bandwidth to be measured and adaptively changed. When using the
adaptive bandwidth option, the routing software queries the statistics for the flows to be measured at
5-second intervals and calculates the bandwidth based on the queries. The routing software uses the
maximum value measured within the last minute (that is, the last 12 measuring points) as the flow
bandwidth.
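As a sketch, the two flow-bandwidth options described above can be expressed through flow maps. The flow-map and policy names are hypothetical, and the exact bandwidth statement syntax should be verified against your Junos release:
[edit routing-options multicast]
flow-map static-feeds {
    policy [ static-feed-groups ];    # hypothetical policy matching these flows
    bandwidth 2m;                     # static value, in bits per second
}
flow-map measured-feeds {
    policy [ measured-feed-groups ];
    bandwidth adaptive;               # measured every 5 seconds; maximum over the
                                      # last 12 measurements (1 minute) is used
}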
If PIM graceful restart is not configured, after the routing process restarts, previously admitted or
rejected interfaces might be rejected or admitted in an unpredictable manner.
SEE ALSO
CLI Explorer
multiple forwarding entries—(s1,g) and (s2,g)—are created after each goes through the admission
process.
With redundant sources, unlike unrelated entries, an OIF that is already admitted for one entry—for
example, (s1,g)—is automatically admitted for other redundancy entries—for example, (s2,g). The
remaining bandwidth on the interface is deducted each time an outbound interface is added, even
though only one sender actively transmits. By measuring bandwidth, the bandwidth deducted for the
inactive entries is credited back when the router detects no traffic is being transmitted.
For more information about defining redundant sources, see Example: Configuring a Multicast Flow
Map.
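A hypothetical flow map defining redundant sources of the kind described above; the flow-map name, policy name, and source addresses are assumptions:
[edit routing-options multicast]
flow-map redundant-feed {
    policy [ redundant-feed-groups ];
    bandwidth 5m adaptive;                    # measured so inactive entries are credited back
    redundant-sources [ 10.0.0.30 10.0.0.31 ];  # s1 and s2 sending to the same group
}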
When displaying interface bandwidth information, a negative available bandwidth value indicates
oversubscription on the interface.
Interface bandwidth can become oversubscribed when the configured maximum bandwidth decreases
or when some flow bandwidths increase because of a configuration change or an actual increase in the
traffic rate.
Interface bandwidth can become available again if one of the following occurs:
• Some flows are no longer transmitted from interfaces, and bandwidth reserves for them are now
available to other flows.
• Some flow bandwidths decrease because of a configuration change or an actual decrease in the
traffic rate.
Interfaces that are rejected for a flow because of insufficient bandwidth are not automatically
readmitted, even when bandwidth becomes available again. Rejected interfaces have an opportunity to
be readmitted when one of the following occurs:
• The multicast routing protocol updates the forwarding entry for the flow after receiving a join, leave,
or prune message or after a topology change occurs.
• The multicast routing protocol updates the forwarding entry for the flow due to configuration
changes.
• You manually reapply bandwidth management to a specific flow or to all flows using the clear
multicast bandwidth-admission operational command.
In addition, even if previously available bandwidth is no longer available, already admitted interfaces are
not removed until one of the following occurs:
• The multicast routing protocol explicitly removes the interfaces after receiving a leave or prune
message or after a topology change occurs.
• You manually reapply bandwidth management to a specific flow or to all flows using the clear
multicast bandwidth-admission operational command.
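For example, to manually reapply bandwidth management as described above (the flow-qualifier option shown on the second line is an assumption; check the command options in your release):
user@host> clear multicast bandwidth-admission
user@host> clear multicast bandwidth-admission group 203.0.113.1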
SEE ALSO
CLI Explorer
IN THIS SECTION
Requirements | 1290
Overview | 1291
Configuration | 1292
Verification | 1294
This example shows you how to configure the maximum bandwidth for a physical or logical interface.
Requirements
• Configure an interior gateway protocol. See the Junos OS Routing Protocols Library for Routing
Devices.
• Configure a multicast protocol. This feature works with the following multicast protocols:
• DVMRP
• PIM-DM
• PIM-SM
• PIM-SSM
Overview
IN THIS SECTION
Topology | 1292
The maximum bandwidth setting applies admission control either against the configured interface
bandwidth or against the native speed of the underlying interface (when there is no configured
bandwidth for the interface).
If you configure several logical interfaces (for example, to support VLANs or PVCs) on the same
underlying physical interface, and no bandwidth is configured for the logical interfaces, it is assumed
that the logical interfaces all have the same bandwidth as the underlying interface. This can cause
oversubscription. To prevent oversubscription, configure bandwidth for the logical interfaces, or
configure admission control at the physical interface level.
You only need to define the maximum bandwidth for an interface on which you want to apply
bandwidth management. An interface that does not have a defined maximum bandwidth transmits all
multicast flows as determined by the multicast protocol that is running on the interface (for example,
PIM).
routing-options {
    multicast {
        interface fe-0/2/0.200 {
            maximum-bandwidth;
        }
    }
}
interfaces {
    fe-0/2/0 {
        unit 200 {
            bandwidth 20m;
        }
    }
}
Topology
Configuration
IN THIS SECTION
Procedure | 1292
Results | 1293
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
1. On a logical interface, configure the interface bandwidth.
[edit interfaces]
user@host# set fe-0/2/0 unit 200 bandwidth 20m
2. Enable admission control on the logical interface. Because no value is specified with maximum-bandwidth, admission control is applied against the configured interface bandwidth.
[edit routing-options]
user@host# set multicast interface fe-0/2/0.200 maximum-bandwidth
3. On a physical interface, enable admission control and set the maximum bandwidth to 60 Mbps.
[edit routing-options]
user@host# set multicast interface fe-0/2/1 maximum-bandwidth 60m
4. For a logical interface on the same physical interface shown in Step "3" on page 1293, set a smaller
maximum bandwidth.
[edit routing-options]
user@host# set multicast interface fe-0/2/1.200 maximum-bandwidth 10m
Results
Confirm your configuration by entering the show interfaces and show routing-options commands.
multicast {
    interface fe-0/2/0.200 {
        maximum-bandwidth;
    }
    interface fe-0/2/1 {
        maximum-bandwidth 60m;
    }
    interface fe-0/2/1.200 {
        maximum-bandwidth 10m;
    }
}
Verification
SEE ALSO
IN THIS SECTION
Requirements | 1294
Configuration | 1299
Verification | 1311
This example shows how to configure an MX Series router to function as a broadband service router
(BSR).
Requirements
• One MX Series router or EX Series switch with a PIC that supports traffic control profile queuing
• One DSLAM
• Configure an interior gateway protocol. See the Junos OS Routing Protocols Library for Routing
Devices.
IN THIS SECTION
Topology | 1298
When multiple BSR interfaces receive IGMP and MLD join and leave requests for the same multicast
stream, the BSR sends a copy of the multicast stream on each interface. Both the multicast control
packets (IGMP and MLD) and the multicast data packets flow on the same BSR interface, along with the
unicast data. Because all per-customer traffic has its own interface on the BSR, per-customer
accounting, call admission control (CAC), and quality-of-service (QoS) adjustment are supported. The
QoS bandwidth used by multicast reduces the unicast bandwidth.
Multiple interfaces on the BSR might connect to a shared device (for example, a DSLAM). The BSR
sends the same multicast stream multiple times to the shared device, thus wasting bandwidth. It is more
efficient to send the multicast stream one time to the DSLAM and replicate the multicast streams in the
DSLAM. There are two approaches that you can use.
The first approach is to continue to send unicast data on the per-customer interfaces, but have the
DSLAM route all the per-customer IGMP and MLD join and leave requests to the BSR on a single
dedicated interface (a multicast VLAN). The DSLAM receives the multicast streams from the BSR on the
dedicated interface with no unnecessary replication and performs the necessary replication to the
customers. Because all multicast control and data packets use only one interface, only one copy of a
stream is sent even if there are multiple requests. This approach is called reverse outgoing interface
(OIF) mapping. Reverse OIF mapping enables the BSR to propagate the multicast state of the shared
interface to the customer interfaces, which enables per-customer accounting and QoS adjustment to
work. When a customer changes the TV channel, the router gateway (RG) sends IGMP or MLD join
and leave messages to the DSLAM. The DSLAM transparently passes the request to the BSR through
the multicast VLAN. The BSR maps the IGMP or MLD request to one of the subscriber VLANs based on
the IP source address or the source MAC address. When the subscriber VLAN is found, QoS adjustment
and accounting are performed on that VLAN or interface.
The second approach is for the DSLAM to continue to send unicast data and all the per-customer IGMP
and MLD join and leave requests to the BSR on the individual customer interfaces, but to have the
multicast streams arrive on a single dedicated interface. If multiple customers request the same
multicast stream, the BSR sends one copy of the data on the dedicated interface. The DSLAM receives
the multicast streams from the BSR on the dedicated interface and performs the necessary replication
to the customers. Because the multicast control packets use many customer interfaces, configuration on
the BSR must specify how to map each customer’s multicast data packets to the single dedicated output
interface. QoS adjustment is supported on the customer interfaces. CAC is supported on the shared
interface. This second approach is called multicast OIF mapping.
OIF mapping and reverse OIF mapping are not supported on the same customer interface or shared
interface. This example shows how to configure the two different approaches. Both approaches support
QoS adjustment, and both approaches support MLD/IPv6. The reverse OIF mapping example focuses on
IGMP/IPv4 and enables QoS adjustment. The OIF mapping example focuses on MLD/IPv6 and disables
QoS adjustment.
The first approach (reverse OIF mapping) includes the following statements:
• flow-map—Defines a flow map that controls the bandwidth for each flow.
• maximum-bandwidth—Enables CAC.
After the subscriber VLAN is identified, the routing device immediately adjusts the QoS (in this case,
the bandwidth) on that VLAN based on the addition or removal of a subscriber.
The routing device uses IGMP and MLD join or leave reports to obtain the subscriber VLAN
information. This means that the connecting equipment (for example, the DSLAM) must forward all
IGMP and MLD reports to the routing device for this feature to function properly. Using report
suppression or an IGMP proxy can result in reverse OIF mapping not working properly.
• subscriber-leave-timer—Introduces a delay to the QoS update. After receiving an IGMP or MLD leave
request, this statement defines a time delay (between 1 and 30 seconds) that the routing device
waits before updating the QoS for the remaining subscriber interfaces. You might use this delay to
decrease how often the routing device adjusts the overall QoS bandwidth on the VLAN when a
subscriber sends rapid leave and join messages (for example, when changing channels in an IPTV
network).
• traffic-control-profile—Configures a shaping rate on the logical interface. The configured shaping rate
must be configured as an absolute value, not as a percentage.
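A minimal sketch combining the statements listed above. The interface names, policy name, and values are hypothetical, and the exact hierarchy levels (in particular for reverse-oif-mapping and subscriber-leave-timer) should be verified against your Junos release:
[edit routing-options multicast]
interface ge-1/0/0.1 {
    reverse-oif-mapping;          # map joins received on the multicast VLAN back to subscriber VLANs
    subscriber-leave-timer 10;    # delay QoS updates for 10 seconds after a leave
}
interface ge-1/0/0.100 {
    maximum-bandwidth 100m;       # CAC on the shared multicast VLAN
}
flow-map iptv-channels {
    policy [ iptv-groups ];       # hypothetical policy matching the IPTV group range
    bandwidth 4m;                 # per-flow bandwidth used for CAC and QoS adjustment
}
[edit class-of-service traffic-control-profiles]
subscriber-tcp {
    shaping-rate 20m;             # must be an absolute value, not a percentage
}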
The OIF map is a routing policy statement that can contain multiple terms. When creating OIF maps,
keep the following in mind:
• If you specify a physical interface (for example, ge-0/0/0), a ".0" is appended to the interface to
create a logical interface (for example, ge-0/0/0.0).
• Configure a routing policy for each logical system. You cannot configure routing policies
dynamically.
• We recommend that you configure policy statements for IGMP and MLD separately.
• Specify either a logical interface or the keyword self. The self keyword specifies that multicast
data packets be sent on the same interface as the control packets and that no mapping occur. If
no term matches, then no multicast data packets are sent.
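Modeled on the g539-v6 map used later in this example, an OIF map policy might look like the following sketch. The IPv6 group ranges are assumptions; the map-to-interface action directs matching multicast data to the named logical interface or, with self, to the interface that received the control packets:
[edit policy-options]
policy-statement g539-v6 {
    term to-mvlan-4000 {
        from route-filter FF35::1:0/112 orlonger;
        then map-to-interface ge-2/3/9.4000;
    }
    term to-mvlan-4001 {
        from route-filter FF35::2:0/112 orlonger;
        then map-to-interface ge-2/3/9.4001;
    }
    term no-mapping {
        then map-to-interface self;    # send data on the same interface as the control packets
    }
}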
QoS adjustment decreases the available bandwidth on the client interface by the amount of
bandwidth consumed by the multicast streams that are mapped from the client interface to the
shared interface. This action always occurs unless it is explicitly disabled.
If you disable QoS adjustment, available bandwidth is not reduced on the customer interface when
multicast streams are added to the shared interface.
NOTE: You can dynamically disable QoS adjustment for IGMP and MLD interfaces using
dynamic profiles.
• oif-map—Associates a map with an IGMP or MLD interface. The OIF map is then applied to all IGMP
or MLD requests received on the configured interface. In this example, subscriber VLANs 1 and 2
have MLD configured, and each VLAN points to an OIF map that directs some traffic to
ge-2/3/9.4000, some traffic to ge-2/3/9.4001, and some traffic to self.
NOTE: You can dynamically associate OIF maps with IGMP interfaces using dynamic profiles.
The OIF map interface should not typically pass IGMP or MLD control traffic and should be
configured as passive. However, the OIF map implementation does support running IGMP or MLD
on an interface (control and data) in addition to mapping data streams to the same interface. In this
case, you should configure IGMP or MLD normally (that is, not in passive mode) on the mapped
interface. In this example, the OIF map interfaces (ge-2/3/9.4000 and ge-2/3/9.4001) are configured
as MLD passive.
By default, specifying the passive statement means that no general queries, group-specific queries, or
group-source-specific queries are sent over the interface and that all received control traffic is
ignored by the interface. However, you can selectively activate up to two out of the three available
options for the passive statement while keeping the other functions passive (inactive).
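For example, to keep an interface passive while still allowing it to receive control traffic and send general queries (the interface name is taken from this example; verify the option keywords in your release):
[edit protocols mld]
user@host# set interface ge-2/3/9.4000 passive allow-receive send-general-query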
Topology
In both approaches, if multiple customers request the same multicast stream, the BSR sends one copy of
the stream on the shared multicast VLAN interface. The DSLAM receives the multicast stream from the
BSR on the shared interface and performs the necessary replication to the customers.
In the first approach (reverse OIF mapping), the DSLAM uses the per-customer subscriber VLANs for
unicast data only. IGMP and MLD join and leave requests are sent on the multicast VLAN.
In the second approach (OIF mapping), the DSLAM uses the per-customer subscriber VLANs for unicast
data and for IGMP and MLD join and leave requests. The multicast VLAN is used only for multicast
streams, not for join and leave requests.
Configuration
IN THIS SECTION
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
4. Configure a policy.
6. Enable OIF mapping on the logical interface that receives subscriber control traffic.
[edit protocols]
user@host# set igmp interface all
user@host# set igmp interface fxp0.0 disable
user@host# set pim rp local address 20.0.0.2
user@host# set pim interface all
user@host# set pim interface fxp0.0 disable
user@host# set pim interface ge-2/2/0.10 disable
8. Configure the hierarchical scheduler by configuring a shaping rate for the physical interface and a
slower shaping rate for the logical interfaces on which QoS adjustments are made.
Results
From configuration mode, confirm your configuration by entering the show class-of-service, show
interfaces, show policy-options, show protocols, and show routing-options commands. If the output
does not display the intended configuration, repeat the instructions in this example to correct the
configuration.
address 50.0.0.2/24;
}
}
unit 51 {
vlan-id 51;
family inet {
address 50.0.1.2/24;
}
}
}
}
}
If you are done configuring the device, enter commit from configuration mode.
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see the Junos OS CLI User Guide.
6. Configure PIM and MLD. Point the MLD subscriber VLANs to the OIF map.
[edit protocols]
user@host# set pim rp local address 20.0.0.4
user@host# set pim rp local family inet6 address C000::1 #C000::1 is the address of lo0
user@host# set pim interface ge-2/3/8.0 mode sparse
user@host# set pim interface ge-2/3/8.0 version 2
user@host# set mld interface fxp0.0 disable
user@host# set interface ge-2/3/9.4000 passive
user@host# set interface ge-2/3/9.4001 passive
user@host# set interface ge-2/3/9.1 version 1
user@host# set interface ge-2/3/9.1 oif-map g539-v6
user@host# set interface ge-2/3/9.2 version 2
user@host# set interface ge-2/3/9.2 oif-map g539-v6
Results
From configuration mode, confirm your configuration by entering the show interfaces, show policy-
options, show protocols, and show routing-options commands. If the output does not display the
intended configuration, repeat the instructions in this example to correct the configuration.
address C400:0201::/24;
}
}
unit 4000 {
vlan-id 4000;
family inet6 {
address C40F:A001::/24;
}
}
unit 4001 {
vlan-id 4001;
family inet6 {
address C40F:A101::/24;
}
}
}
then {
map-to-interface self;
accept;
}
}
}
policy-statement g539-v6-all {
term g539 {
from {
route-filter 0::/0 orlonger;
}
then {
map-to-interface ge-2/3/9.4000;
accept;
}
}
}
address 20.0.0.4;
family inet6 {
address C000::1;
}
}
}
interface ge-2/3/8.0 {
mode sparse;
version 2;
}
}
If you are done configuring the device, enter commit from configuration mode.
Verification
• show policy
SEE ALSO
Depending on the intelligence of the MSAN device, determining which client receives the packet can
occur in an inefficient manner. For example, when it receives IGMP control traffic, an MSAN might
forward the control traffic to all clients instead of only the intended client. In addition, although an
MSAN can use IGMP snooping to determine which hosts reside in a particular group and limit data
streams to that group, it must still send a copy of the data stream to each group member, even if the
stream is intended for only one client in the group.
Various multicast features, when combined, enable you to avoid the inefficiencies mentioned above.
These features include the following:
• The ability to configure the IP demux interface family statement to use inet for either the numbered
or unnumbered primary interface.
• The ability to configure IGMP on the primary interface to send general queries for all clients. The
demux configuration prevents the primary IGMP interface from receiving any client IGMP control
packets. Instead, all IGMP control packets go to the demux interfaces. However, to guarantee that no
joins occur on the primary interface:
• For static IGMP interfaces—Include the passive send-general-query statement in the IGMP
configuration at the [edit protocols igmp interface interface-name] hierarchy level.
• For dynamic IGMP demux interfaces—Include the passive send-general-query statement at the
[edit dynamic-profiles profile-name protocols igmp interface interface-name] hierarchy level.
• The ability to map all multicast groups to the primary interface as follows:
• For static IGMP interfaces—Include the oif-map statement at the [edit protocols igmp interface
interface-name] hierarchy level.
• For dynamic IGMP demux interfaces—Include the oif-map statement at the [edit dynamic-profiles
profile-name protocols igmp interface interface-name] hierarchy level.
Using the oif-map statement, you can map the same IGMP group to the same output interface and
send only one copy of the multicast stream from the interface.
• The ability to configure IGMP on each demux interface. To prevent duplicate general queries:
• For static IGMP interfaces—Include the passive allow-receive send-group-query statement at the
[edit protocols igmp interface interface-name] hierarchy level.
NOTE: To send only one copy of each group, regardless of how many customers join, use the
oif-map statement as previously mentioned.
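Pulling together the static-interface statements above, a minimal sketch might look like the following; the primary interface, demux unit, and map name are assumptions for illustration:

```
[edit protocols igmp]
user@host# set interface ge-1/0/0.0 passive send-general-query
user@host# set interface demux0.1 passive allow-receive send-group-query
user@host# set interface demux0.1 oif-map map-to-primary
```

The primary interface sends only general queries, the demux interface receives client reports without sending duplicate general queries, and the OIF map directs all groups to the primary interface.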
SEE ALSO
On an MX Series router that contains MPCs and MS-DPCs, multicast packets are dropped on the router
and not processed properly if the router contains MLPPP LSQ logical interfaces that function as
multicast receivers and the network services mode is configured as enhanced IP mode. This behavior is
expected with LSQ interfaces in conjunction with enhanced IP mode; if enhanced IP mode is not
configured, multicast works correctly. Multicast also works properly if the router contains redundant
LSQ interfaces and enhanced IP network services mode is configured with FIB localization.
To enable packet classification by the egress interface, you first configure a forwarding class map and
one or more queue numbers for the egress interface at the [edit class-of-service forwarding-class-map
forwarding-class-map-name] hierarchy level:
[edit class-of-service]
forwarding-classes-interface-specific forwarding-class-map-name {
class class-name queue-num queue-number [ restricted-queue queue-number ];
}
For T Series routers that are restricted to only four queues, you can control the queue assignment with
the restricted-queue option, or you can allow the system to determine the queue automatically using
modulo arithmetic. For example, a map assigning packets to queue 6 would map to queue 2 (6 modulo
4) on a four-queue system.
NOTE: If you configure an output forwarding class map associating a forwarding class with a
queue number, this map is not supported on multiservices link services intelligent queuing (lsq-)
interfaces.
Once the forwarding class map has been configured, you apply the map to the logical interface by using
the output-forwarding-class-map statement at the [edit class-of-service interfaces interface-name unit
logical-unit-number ] hierarchy level:
All parameters relating to the queues and forwarding class must be configured as well. For more
information about configuring forwarding classes and queues, see Configuring a Custom Forwarding
Class for Each Queue.
This example shows how to configure an interface-specific forwarding-class map named FCMAP1 that
restricts queues 5 and 6 to different queues on four-queue systems and then applies FCMAP1 to unit 0
of interface ge-6/0/0:
[edit class-of-service]
forwarding-class-map FCMAP1 {
class FC1 queue-num 6 restricted-queue 3;
class FC2 queue-num 5 restricted-queue 2;
class FC3 queue-num 3;
class FC4 queue-num 0;
}
[edit class-of-service]
interfaces {
ge-6/0/0 unit 0 {
output-forwarding-class-map FCMAP1;
}
}
Note that without the restricted-queue option in FCMAP1, the example would assign FC1 and FC2 to
queues 2 and 1, respectively, on a system restricted to four queues.
Use the show class-of-service interface interface-name command to display the forwarding-class maps
(and other information) assigned to a logical interface:
RELATED DOCUMENTATION
IN THIS SECTION
IN THIS SECTION
Requirements | 1317
Overview | 1317
Configuration | 1318
Verification | 1320
When a routing device receives multicast traffic, it places the (S,G) route information in the multicast
forwarding cache, inet.1. This example shows how to configure multicast forwarding cache limits to
prevent the cache from filling up with entries.
Requirements
• Configure an interior gateway protocol. See the Junos OS Routing Protocols Library for Routing
Devices.
• Configure a multicast protocol. This feature works with the following multicast protocols:
• DVMRP
• PIM-DM
• PIM-SM
• PIM-SSM
Overview
IN THIS SECTION
Topology | 1318
• forwarding-cache—Specifies how forwarding entries are aged out and how the number of entries is
controlled.
• timeout—Specifies an idle period after which entries are aged out and removed from inet.1. You can
specify a timeout in the range from 1 through 720 minutes.
• threshold—Enables you to specify threshold values on the forwarding cache to suppress (suspend)
entries from being added when the cache entries reach a certain maximum and begin adding entries
to the cache when the number falls to another threshold value. By default, no threshold values are
enabled on the routing device.
The suppress threshold suspends the addition of new multicast forwarding cache entries. If you do
not specify a suppress value, multicast forwarding cache entries are created as necessary. If you
specify a suppress threshold, you can optionally specify a reuse threshold, which sets the point at
which the device resumes adding new multicast forwarding cache entries. During suspension,
forwarding cache entries time out. After a certain number of entries time out, the reuse threshold is
reached, and new entries are added. The range for both thresholds is from 1 through 200,000. If
configured, the reuse value must be less than the suppression value. If you do not specify a reuse
value, the number of multicast forwarding cache entries is limited to the suppression value. A new
entry is created as soon as the number of multicast forwarding cache entries falls below the
suppression value.
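For instance, a timeout and threshold configuration along these lines (the values shown are illustrative, not requirements):

```
[edit routing-options multicast]
user@host# set forwarding-cache timeout 10
user@host# set forwarding-cache threshold suppress 2000 reuse 1500
```

With this sketch, entries idle for 10 minutes are removed, new entries are suppressed once the cache holds 2000 entries, and entry creation resumes when the count falls to 1500.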
Topology
Configuration
IN THIS SECTION
Procedure | 1318
Results | 1319
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
2. Configure the amount of time (in minutes) entries can remain idle before being removed.
3. Configure the size of the forwarding cache when suppression stops and new entries can be added.
Results
Verification
To verify the configuration, run the show multicast route extensive command.
SEE ALSO
IN THIS SECTION
Requirements | 1321
Overview | 1321
Configuration | 1323
Verification | 1325
This example shows how to configure a flow map to prevent certain forwarding cache entries from aging
out, thus allowing for faster failover from one source to another. Flow maps enable you to configure
bandwidth variables and multicast forwarding cache timeout values for entries defined by the flow map
policy.
Requirements
• Configure an interior gateway protocol. See the Junos OS Routing Protocols Library for Routing
Devices.
• Configure a multicast protocol. This feature works with the following multicast protocols:
• DVMRP
• PIM-DM
• PIM-SM
• PIM-SSM
Overview
Flow maps are typically used for fast multicast source failover when there are multiple sources for the
same group. For example, when one video source is actively sending the traffic, the forwarding states for
other video sources are timed out after a few minutes. Later, when a new source starts sending the
traffic again, it takes time to install a new forwarding state for the new source if the forwarding state is
not already there. This switchover delay is worsened when there are many video streams. Using flow
maps with longer timeout values or permanent cache entries helps reduce this switchover delay.
NOTE: The permanent forwarding state must exist on all routing devices in the path for fast
source switchover to function properly.
• bandwidth—Specifies the bandwidth for each flow that is defined by a flow map to ensure that an
interface is not oversubscribed for multicast traffic. If adding one more flow would cause overall
bandwidth to exceed the allowed bandwidth for the interface, the request is rejected. A rejected
request means that traffic might not be delivered out of some or all of the expected outgoing
interfaces. You can define the bandwidth associated with multicast flows that match a flow map by
specifying a bandwidth in bits per second or by specifying that the bandwidth is measured and
adaptively modified.
When you use the adaptive option, the bandwidth adjusts based on measurements made at 5-
second intervals. The flow uses the maximum bandwidth value from the last 12 measured values (1
minute).
When you configure a bandwidth value with the adaptive option, the bandwidth value acts as the
starting bandwidth for the flow. The bandwidth then changes based on subsequent measured
bandwidth values. If you do not specify a bandwidth value with the adaptive option, the starting
bandwidth defaults to 2 megabits per second (Mbps).
For example, the bandwidth 2m adaptive statement is equivalent to the bandwidth adaptive
statement because they both use the same starting bandwidth (2 Mbps, the default). If the actual
flow bandwidth is 4 Mbps, the measured flow bandwidth changes to 4 Mbps after reaching the first
measuring point (5 seconds). However, if the actual flow bandwidth rate is 1 Mbps, the measured
flow bandwidth remains at 2 Mbps for the first 12 measurement cycles (1 minute) and then changes
to the measured 1 Mbps value.
• flow-map—Defines a flow map that controls the forwarding cache timeout of specified source and
group addresses, controls the bandwidth for each flow, and specifies redundant sources. If a flow can
match multiple flow maps, the first flow map applies.
• policy—Specifies source and group addresses to which the flow map applies.
Configuration
IN THIS SECTION
Procedure | 1323
Results | 1325
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
Step-by-Step Procedure
Multicast flow maps enable you to manage a subset of multicast forwarding table entries. For example,
you can specify that certain forwarding cache entries be permanent or have a different timeout value
from other multicast flows that are not associated with the flow map policy.
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
1. Configure the flow map policy. This step creates a flow map policy called policyForFlow1. The policy
statement matches the source address using the source-address-filter statement, and matches the
group address using the prefix-list-filter. The addresses must match the configured policy for flow
mapping to occur.
[edit policy-options]
user@host# set prefix-list permanentEntries1 232.1.1.0/24
user@host# set policy policyForFlow1 from source-address-filter 11.11.11.11/32 exact
user@host# set policy policyForFlow1 from prefix-list-filter permanentEntries1 orlonger
user@host# set policy policyForFlow1 then accept
2. Define a flow map, flowMap1, that references the flow map policy, policyForFlow1, we just created.
[edit routing-options]
user@host# set multicast flow-map flowMap1 policy policyForFlow1
3. Configure permanent forwarding entries (that is, entries that never time out), and enable entries in
the pruned state to time out.
[edit routing-options]
user@host# set multicast flow-map flowMap1 forwarding-cache timeout never non-discard-entry-only
4. Configure the flow map bandwidth to be adaptive with a default starting bandwidth of 2 Mbps.
[edit routing-options]
user@host# set multicast flow-map flowMap1 bandwidth 2m adaptive
5. Specify the redundant sources for the flow, and commit the configuration.
[edit routing-options]
user@host# set multicast flow-map flowMap1 redundant-sources [ 10.11.11.11 10.11.11.12 ]
user@host# commit
Results
Confirm your configuration by entering the show policy-options and show routing-options commands.
Verification
SEE ALSO
RELATED DOCUMENTATION
IN THIS SECTION
Ingress PE redundancy eliminates the bandwidth duplication requirement by configuring one or more
ingress PEs as a group. Within a group, one PE is designated as the primary PE and one or more others
become backup PEs for the configured traffic stream. The solution depends on a full mesh of point-to-
point (P2P) LSPs among the primary and backup PEs. Also, you must configure a full set of point-to-
multipoint LSPs at the backup PEs, even though these point-to-multipoint LSPs at the backup PEs are
not sending any traffic or using any bandwidth. The P2P LSPs are configured with bidirectional
forwarding detection (BFD). When BFD detects a failure on the primary PE, a new designated forwarder
is elected for the stream.
SEE ALSO
IN THIS SECTION
Requirements | 1327
Overview | 1327
Configuration | 1329
Verification | 1333
This example shows how to configure one PE as part of a backup PE group to enable ingress PE
redundancy for multicast traffic streams.
Requirements
• Configure a full mesh of P2P LSPs between the PEs in the backup group.
Overview
Ingress PE redundancy provides a backup resource when point-to-multipoint LSPs are configured for
multicast distribution. When point-to-multipoint LSPs are used for multicast traffic, the PE device can
become a single point of failure. One way to provide redundancy is by broadcasting duplicate streams
from multiple PEs, thus doubling the bandwidth requirements for each stream. This feature implements
redundancy between two or more PEs by designating a primary and one or more backup PEs for each
configured stream. The solution depends on the configuration of a full mesh of P2P LSPs between the
primary and backup PEs. These LSPs are configured with Bidirectional Forwarding Detection (BFD)
running on top of them. BFD is used on the backup PEs to detect failure on the primary PE routing
device and to elect a new designated forwarder for the stream.
A full mesh is required so that each member of the group can make an independent decision about the
health of the other PEs and determine the designated forwarder for the group. The key concept in a
backup PE group is that of a designated PE. A designated PE is a PE that forwards data on the static
route. All other PEs in the backup PE group do not forward any data on the static route. This allows you
to have one designated forwarder. If the designated forwarder fails, another PE takes over as the
designated forwarder, thus allowing the traffic flow to continue uninterrupted.
Each PE in the backup PE group makes its own local decision regarding the designated forwarder. Thus,
there is no inter-PE communication regarding designated forwarder. A PE computes the designated
forwarder based on the IP address of all PEs and the connectivity status of other PEs. Connectivity
status is determined based on the state of the BFD session on the P2P LSP to a PE.
A PE is elected the designated forwarder when both of the following are true:
• The PE is in the UP state. Either it is the local PE, or the BFD session on the P2P LSP to that PE is in
the UP state.
• The PE has the lowest IP address among all PEs that are in the UP state.
Because all PEs have P2P LSPs to each other, each PE can determine the UP state of each other PE, and
all PEs converge to the same designated forwarder.
If the designated forwarder PE fails, then all other PEs lose connectivity with the designated forwarder,
and their BFD session ends. Consequently, other PEs then choose another designated forwarder. The
new forwarder starts forwarding traffic. Thus, the traffic loss is limited to the failure detection time,
which is the BFD session detection time.
When a PE that was the designated forwarder fails and then resumes operating, all other PEs recognize
this fact, rerun the designated forwarder algorithm, and choose the PE as the designated forwarder.
Consequently, the backup designated forwarder stops forwarding traffic. Thus, traffic switches back to
the most eligible designated forwarder.
• associate-backup-pe-groups—Monitors the health of the routing device at the other end of the LSP.
You can configure multiple backup PE groups that contain the same routing device’s address. Failure
of this LSP indicates to all of these groups that the destination PE routing device is down. So, the
associate-backup-pe-groups statement is not tied to any specific group but applies to all groups that
are monitoring the health of the LSP to the remote address.
If there are multiple LSPs with the associate-backup-pe-groups statement to the same destination
PE, then the local routing device picks the first LSP to that PE for detection purposes.
We do not recommend configuring multiple LSPs to the same destination. If you do, make sure that
the LSP parameters (for example, liveliness detection) are similar to avoid false failure notification
even when the remote PE is up.
• label-switched-path—Configures an LSP. You must configure a full mesh of P2P LSPs between the
primary and backup PEs.
NOTE: We recommend that you configure the P2P LSPs with fast reroute and node link
protection so that link failures do not result in the LSP failure. For the purpose of PE
redundancy, a failure in the P2P LSP is treated as a PE failure. Redundancy in the inter-PE
path is also encouraged.
• static—Applies the backup group to a static route on the PE. This ensures that the static route is
active (installed in the forwarding table) when the local PE is the designated forwarder for the
configured backup PE group.
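A hedged sketch combining these statements follows; the addresses, LSP name, group name, and exact hierarchy placement (which can vary by release) are assumptions rather than values from this example:

```
[edit protocols mpls]
user@host# set label-switched-path to-pe2 to 10.255.2.2
user@host# set label-switched-path to-pe2 oam bfd-liveness-detection minimum-interval 50
user@host# set label-switched-path to-pe2 oam associate-backup-pe-groups
[edit routing-options]
user@host# set multicast backup-pe-group pe-group backups [ 10.255.2.2 10.255.3.3 ]
user@host# set multicast backup-pe-group pe-group local-address 10.255.1.1
user@host# set static route 225.1.1.1/32 backup-pe-group pe-group
```

The P2P LSP to the other PE runs BFD and is associated with the backup PE groups, and the static route for the multicast stream is tied to the group so that it is installed only on the designated forwarder.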
Configuration
IN THIS SECTION
Procedure | 1329
Results | 1332
Procedure
To quickly configure this example, copy the following commands, paste them into a text file, remove any
line breaks, change any details necessary to match your network configuration, copy and paste the
commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.
Step-by-Step Procedure
The following example requires you to navigate various levels in the configuration hierarchy. For
information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the Junos OS
CLI User Guide.
4. Configure the static routes for the point-to-multipoint LSPs backup PE group.
user@host# commit
Results
Confirm your configuration by entering the show policy, show protocols, and show routing-options
commands.
}
}
Verification
SEE ALSO
RELATED DOCUMENTATION
Troubleshooting
CHAPTER 27
Knowledge Base
PART 8
CHAPTER 28
Configuration Statements
IN THIS CHAPTER
accept-remote-source | 1350
active-source-limit | 1360
advertise-from-main-vpn-tables | 1368
algorithm | 1370
anycast-pim | 1377
anycast-prefix | 1379
asm-override-ssm | 1380
assert-timeout | 1382
authentication-key | 1385
auto-rp | 1386
autodiscovery | 1388
autodiscovery-only | 1389
backoff-period | 1391
backup-pe-group | 1393
backups | 1396
bandwidth | 1397
bootstrap | 1403
bootstrap-export | 1405
bootstrap-import | 1406
bootstrap-priority | 1408
cont-stats-collection-interval | 1414
count | 1416
create-new-ucast-tunnel | 1417
dampen | 1419
data-encapsulation | 1420
data-forwarding | 1422
data-mdt-reuse | 1424
default-peer | 1425
default-vpn-source | 1427
defaults | 1428
dense-groups | 1430
df-election | 1433
disable | 1434
distributed-dr | 1450
dr-election-on-p2p | 1453
dr-register-policy | 1454
dvmrp | 1456
embedded-rp | 1458
export-target | 1468
flood-groups | 1479
flow-map | 1480
group-ranges | 1526
group-rp-mapping | 1528
hello-interval | 1533
host-only-interface | 1540
idle-standby-path-switchover-delay | 1545
igmp | 1547
igmp-snooping | 1551
igmp-snooping-options | 1557
ignore-stp-topology-change | 1558
immediate-leave | 1559
import-target | 1568
inclusive | 1570
infinity | 1571
ingress-replication | 1572
inet-mdt | 1576
interface | 1593
interface-name | 1600
interval | 1602
intra-as | 1605
join-load-balance | 1607
join-prune-timeout | 1608
l2-querier | 1613
ldp-p2mp | 1617
listen | 1623
local | 1624
loose-check | 1643
mapping-agent-election | 1644
maximum-bandwidth | 1649
maximum-rps | 1651
mdt | 1655
min-rate | 1661
minimum-receive-interval | 1665
mld | 1667
mld-snooping | 1669
mpls-internet-multicast | 1689
msdp | 1690
multicast | 1693
multicast-replication | 1697
multicast-snooping-options | 1703
multichassis-lag-replicate-state | 1707
multiplier | 1708
multiple-triggered-joins | 1710
mvpn | 1713
mvpn-iana-rt-import | 1716
mvpn-mode | 1720
neighbor-policy | 1721
nexthop-hold-time | 1723
no-bidirectional-mode | 1727
no-qos-adjust | 1730
offer-period | 1731
omit-wildcard-address | 1735
override-interval | 1738
pim | 1747
pim-asm | 1754
pim-snooping | 1755
pim-to-igmp-proxy | 1760
pim-to-mld-proxy | 1761
prefix | 1771
process-non-null-as-null-register | 1782
propagation-delay | 1784
provider-tunnel | 1787
proxy | 1793
qualified-vlan | 1797
receiver | 1817
redundant-sources | 1820
register-limit | 1822
register-probe-time | 1824
reset-tracking-bit | 1828
restart-duration | 1831
reverse-oif-mapping | 1832
robustness-count | 1846
rp | 1850
rp-register-policy | 1853
rp-set | 1855
rpf-selection | 1858
rpt-spt | 1861
sap | 1866
scope | 1868
scope-policy | 1869
secret-key-timeout | 1871
selective | 1872
sglimit | 1877
signaling | 1879
snoop-pseudowires | 1881
source-active-advertisement | 1882
source-address | 1899
spt-only | 1908
spt-threshold | 1909
ssm-groups | 1911
standby-path-creation-delay | 1921
static-lsp | 1932
stickydr | 1935
subscriber-leave-timer | 1939
threshold-rate | 1954
tunnel-source | 2001
unicast-umh-election | 2007
upstream-interface | 2008
use-p2mp-lsp | 2010
vrf-advertise-selective | 2019
vpn-group-address | 2031
wildcard-group-inet | 2032
wildcard-group-inet6 | 2034
accept-remote-source
IN THIS SECTION
Syntax | 1351
Description | 1351
Syntax
accept-remote-source;
Hierarchy Level
Description
You can configure an incoming interface to accept multicast traffic from a remote source. A remote
source is a source that is not on the same subnet as the incoming interface. Figure 141 on page 1351
shows such a topology: R2 connects to the R1 source on one subnet, and to the incoming interface on
R3 (ge-1/3/0.0 in the figure) on another subnet.
In this topology R2 is a pass-through device not running PIM, so R3 is the first hop router for multicast
packets sent from R1. Because R1 and R3 are in different subnets, the default behavior of R3 is to
disregard R1 as a remote source. You can have R3 accept multicast traffic from R1, however, by enabling
accept-remote-source on the target interface.
NOTE: If the interface you identified is not the only path from the remote source, be sure it is the
best path. For example, you can configure a static route on the receiver-side PE router to the
source, or you can prepend the AS path on the other possible routes. That said, do not use
accept-remote-source to receive multicast traffic over multiple upstream interfaces, because this
use case is not supported.
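For the topology described above, enabling the feature on the incoming interface of R3 is a single statement (shown as a sketch; ge-1/3/0.0 is the interface from the figure):

```
[edit protocols pim]
user@host# set interface ge-1/3/0.0 accept-remote-source
```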
Commit the configuration changes, and then to confirm that the interface you configured is
accepting traffic from the remote source, run the following command:
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1353
Description | 1353
Syntax
accounting;
Hierarchy Level
Description
Enable the collection of MLD join and leave event statistics on the system.
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1354
Description | 1354
Syntax
(accounting | no-accounting);
Hierarchy Level
Description
Enable or disable the collection of MLD join and leave event statistics for an interface.
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1355
Description | 1355
Syntax
(accounting | no-accounting);
Hierarchy Level
Description
Enable or disable the collection of IGMP join and leave event statistics for an interface.
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1356
Description | 1357
Default | 1357
Syntax
(accounting | no-accounting);
Hierarchy Level
Description
Enable or disable the collection of IGMP join and leave event statistics for an Automatic Multicast
Tunneling (AMT) interface.
Default
Disabled
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1358
Description | 1358
Syntax
accounting;
Hierarchy Level
Description
Enable the collection of IGMP join and leave event statistics on the system.
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1359
Description | 1359
Default | 1360
Syntax
accounting;
Hierarchy Level
Description
Enable the collection of statistics for an Automatic Multicast Tunneling (AMT) interface.
Default
Disabled
Release Information
RELATED DOCUMENTATION
active-source-limit
IN THIS SECTION
Syntax | 1361
Description | 1361
Default | 1362
Options | 1362
Syntax
active-source-limit {
log-interval seconds;
log-warning value;
maximum number;
threshold number;
}
Hierarchy Level
Description
Limit the number of active source messages the routing device accepts.
Default
If you do not include this statement, the router accepts any number of MSDP active source messages.
Options
Release Information
RELATED DOCUMENTATION
Example: Configuring MSDP with Active Source Limits and Mesh Groups | 562
IN THIS SECTION
Syntax | 1363
Description | 1363
Options | 1363
Syntax
address address;
Hierarchy Level
Description
Options
address—Local RP address.
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1364
Description | 1364
Options | 1364
Syntax
Hierarchy Level
Description
Configure the anycast rendezvous point (RP) addresses in the RP set. Multiple addresses can be
configured in an RP set. If the RP has peer Multicast Source Discovery Protocol (MSDP) connections,
then the RP must forward MSDP source active (SA) messages.
Options
Release Information
IN THIS SECTION
Syntax | 1365
Description | 1366
Options | 1366
Syntax
address address {
group-ranges {
destination-ip-prefix</prefix-length>;
}
hold-time seconds;
priority number;
}
Hierarchy Level
Description
Configure bidirectional rendezvous point (RP) addresses. The address can be a loopback interface
address, an address of a link interface, or an address that is not assigned to an interface but belongs to a
subnet that is reachable by the bidirectional PIM routers in the network.
Options
address—Bidirectional RP address.
• Default: 232.0.0.0/8
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1367
Description | 1367
Options | 1368
Syntax
address address {
group-ranges {
destination-ip-prefix</prefix-length>;
}
override;
version version;
}
Hierarchy Level
Description
Configure static rendezvous point (RP) addresses. You can configure a static RP in a logical system only if
the logical system is not directly connected to a source.
For each static RP address, you can optionally specify the PIM version and the groups for which this
address can be the RP. The default PIM version is version 1.
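As a brief hedged example (the RP address and group range are placeholders, not values from this guide):

```
[edit protocols pim rp static]
user@host# set address 192.168.100.1 version 2
user@host# set address 192.168.100.1 group-ranges 224.1.0.0/16
```

This declares the static RP, overrides the default PIM version, and limits the groups for which this address acts as the RP.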
Options
address—Static RP address.
• Default: 224.0.0.0/4
Release Information
RELATED DOCUMENTATION
advertise-from-main-vpn-tables
IN THIS SECTION
Syntax | 1369
Description | 1369
Default | 1369
Syntax
advertise-from-main-vpn-tables;
Hierarchy Level
Description
Advertise VPN routes from the main VPN tables in the master routing instance (for example,
bgp.l3vpn.0, bgp.mvpn.0) instead of advertising VPN routes from the tables in the VPN routing
instances (for example, instance-name.inet.0, instance-name.mvpn.0). Enable nonstop active routing
(NSR) support for BGP multicast VPN (MVPN).
When this statement is enabled, before advertising a route for a VPN prefix, the path selection
algorithm is run on all routes (local and received) that have the same route distinguisher (RD).
NOTE: Adding or removing this statement causes all BGP sessions that have VPN address
families to be removed and then added again. On the other hand, having this statement in the
configuration prevents BGP sessions from going down when route reflector (RR) or autonomous
system border router (ASBR) functionality is enabled or disabled on a routing device that has
VPN address families configured.
Default
If you do not include this statement, VPN routes are advertised from the tables in the VPN routing
instances.
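A minimal sketch, assuming the statement is configured at the [edit protocols bgp] hierarchy level (an assumption; verify the hierarchy for your platform). Per the note above, expect VPN-family BGP sessions to be reset when the statement is added:

```
protocols {
    bgp {
        advertise-from-main-vpn-tables;
    }
}
```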
Release Information
RELATED DOCUMENTATION
algorithm
IN THIS SECTION
Syntax | 1370
Description | 1371
Options | 1371
Syntax
algorithm algorithm-name;
Hierarchy Level
Description
Options
• simple-password—Plain-text password. One to 16 bytes of plain text. One or more passwords can be
configured.
• keyed-md5—Keyed Message Digest 5 hash algorithm for sessions with transmit and receive intervals
greater than 100 ms.
• keyed-sha-1—Keyed Secure Hash Algorithm I for sessions with transmit and receive intervals greater
than 100 ms.
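A sketch of selecting an authentication algorithm for BFD sessions on a PIM interface, assuming the [edit protocols pim interface interface-name bfd-liveness-detection authentication] hierarchy; the interface name and keychain name are hypothetical:

```
protocols {
    pim {
        interface ge-0/0/0.0 {
            bfd-liveness-detection {
                authentication {
                    algorithm keyed-sha-1;
                    key-chain bfd-pim-keys;
                }
            }
        }
    }
}
```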
Release Information
RELATED DOCUMENTATION
allow-maximum (Multicast)
IN THIS SECTION
Syntax | 1372
Description | 1372
Default | 1373
Syntax
allow-maximum;
Hierarchy Level
Description
Allow the larger of global and family-level threshold values to take effect.
This statement is optional when you configure a forwarding cache or PIM state limits. When this
statement is included in the configuration and both a family-specific and a global configuration are
present, the higher limits take precedence.
This statement can be useful on single-stack devices where either IPv4 or IPv6 traffic is expected, but
not both.
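For instance, a sketch in which a global forwarding-cache threshold and an IPv6 family-specific threshold are configured together, with allow-maximum letting the larger value take effect (the hierarchy and threshold values here are illustrative assumptions):

```
routing-options {
    multicast {
        forwarding-cache {
            allow-maximum;
            threshold suppress 2000;
            family inet6 {
                threshold suppress 5000;
            }
        }
    }
}
```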
Default
When this statement is omitted from the configuration, a family-specific forwarding cache configuration
and a global forwarding cache configuration cannot be configured together. Either the global
configuration or the family-specific configuration is allowed, but not both.
Release Information
RELATED DOCUMENTATION
amt (IGMP)
IN THIS SECTION
Syntax | 1374
Description | 1375
Syntax
amt {
relay {
defaults {
(accounting | no-accounting);
group-policy [ policy-names ];
query-interval seconds;
query-response-interval seconds;
robust-count number;
ssm-map ssm-map-name;
version version;
}
}
}
Hierarchy Level
Description
Release Information
RELATED DOCUMENTATION
amt (Protocols)
IN THIS SECTION
Syntax | 1376
Description | 1377
Syntax
amt {
relay {
accounting;
family {
inet {
anycast-prefix ip-prefix/<prefix-length>;
local-address ip-address;
}
}
secret-key-timeout minutes;
tunnel-limit number;
}
traceoptions {
file filename <files number> <size size> <world-readable | no-world-readable>;
flag flag <flag-modifier> <disable>;
}
}
Hierarchy Level
Description
Enable Automatic Multicast Tunneling (AMT) on the router or switch. You must also configure the local
address and anycast prefix for AMT to function.
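A minimal AMT relay sketch that includes the required local address and anycast prefix, following the syntax above; the addresses are examples from documentation ranges and the tunnel limit is hypothetical:

```
protocols {
    amt {
        relay {
            family {
                inet {
                    anycast-prefix 203.0.113.1/32;
                    local-address 203.0.113.1;
                }
            }
            tunnel-limit 100;
        }
    }
}
```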
Release Information
RELATED DOCUMENTATION
anycast-pim
IN THIS SECTION
Syntax | 1378
Description | 1378
Syntax
anycast-pim {
rp-set {
address address <forward-msdp-sa>;
}
}
Hierarchy Level
Description
Release Information
RELATED DOCUMENTATION
anycast-prefix
IN THIS SECTION
Syntax | 1379
Description | 1379
Default | 1380
Options | 1380
Syntax
anycast-prefix ip-prefix/<prefix-length>;
Hierarchy Level
Description
Specify an IP address prefix to use for the Automatic Multicast Tunneling (AMT) relay anycast address.
The prefix is advertised by unicast routing protocols to route AMT discovery messages to the router
from nearby AMT gateways. The IP address that the prefix is derived from can be configured on any
interface in the system. Typically, the router’s lo0.0 loopback address prefix is used for configuring the
AMT anycast prefix in the default routing instance, and the router’s lo0.n loopback address prefix is used
for configuring the AMT anycast prefix in VPN routing instances. However, the anycast address can be
either the primary or secondary lo0.0 loopback address.
Default
Options
Release Information
RELATED DOCUMENTATION
asm-override-ssm
IN THIS SECTION
Syntax | 1381
Description | 1381
Syntax
asm-override-ssm;
Hierarchy Level
Description
Enable the routing device to accept any-source multicast join messages (*,G) for group addresses that
are within the default or configured range of source-specific multicast groups.
Release Information
RELATED DOCUMENTATION
assert-timeout
IN THIS SECTION
Syntax | 1382
Description | 1382
Options | 1382
Syntax
assert-timeout seconds;
Hierarchy Level
Description
Multicast routing devices running PIM sparse mode often forward the same stream of multicast packets
onto the same LAN through the rendezvous-point tree (RPT) and shortest-path tree (SPT). PIM assert
messages help routing devices determine which routing device forwards the traffic and prunes the RPT
for this group. By default, routing devices enter an assert cycle every 180 seconds. You can configure
this assert timeout to be between 5 and 210 seconds.
Options
seconds—Time for routing device to wait before another assert message cycle.
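A sketch that shortens the assert cycle from the 180-second default, assuming the statement is configured at the [edit protocols pim] hierarchy level (an assumption):

```
protocols {
    pim {
        assert-timeout 60;
    }
}
```

The configured value must fall in the 5-through-210-second range described above.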
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1383
Description | 1384
Options | 1384
Syntax
authentication {
algorithm algorithm-name;
key-chain key-chain-name;
loose-check;
}
Hierarchy Level
Description
Configure the algorithm, security keychain, and level of authentication for BFD sessions running on PIM
interfaces.
Options
Release Information
RELATED DOCUMENTATION
loose-check | 1643
authentication-key
IN THIS SECTION
Syntax | 1385
Description | 1386
Default | 1386
Options | 1386
Syntax
authentication-key peer-key;
Hierarchy Level
Description
Associate a Message Digest 5 (MD5) signature option authentication key with an MSDP peering session.
Default
If you do not include this statement, the routing device accepts any valid MSDP messages from the peer
address.
Options
peer-key—MD5 authentication key. The peer key can be a text string up to 16 letters and digits long.
Strings can include any ASCII characters with the exception of (, ), &, and [. If you include spaces in an
MSDP authentication key, enclose all characters in quotation marks (“ ”).
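A sketch of an MSDP peering session with an MD5 key, assuming the [edit protocols msdp peer address] hierarchy; the peer address and key are hypothetical. The key is quoted because it contains a space:

```
protocols {
    msdp {
        peer 192.0.2.2 {
            authentication-key "msdp key1";
        }
    }
}
```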
Release Information
RELATED DOCUMENTATION
auto-rp
IN THIS SECTION
Syntax | 1387
Description | 1387
Options | 1387
Syntax
auto-rp {
(announce | discovery | mapping);
(mapping-agent-election | no-mapping-agent-election);
}
Hierarchy Level
Description
Options
announce—Configure the routing device to listen only for mapping packets and also to advertise itself if
it is an RP.
mapping—Configure the routing device to announce, listen for, and generate mapping packets, and to
announce that the routing device is eligible to be an RP.
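A sketch that makes the routing device a mapping agent, assuming the [edit protocols pim rp] hierarchy (the local RP address is a hypothetical example):

```
protocols {
    pim {
        rp {
            local {
                address 10.0.0.1;
            }
            auto-rp mapping;
        }
    }
}
```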
Release Information
The auto-rp options announce and mapping are not supported on QFX5220-32CD devices running
Junos OS Evolved Release 19.3R1, 19.4R1, or 20.1R1.
RELATED DOCUMENTATION
autodiscovery
IN THIS SECTION
Syntax | 1388
Description | 1389
Options | 1389
Syntax
autodiscovery {
inet-mdt;
}
Hierarchy Level
Description
For draft-rosen 7, enable the PE routers in the VPN to discover one another automatically.
Options
Release Information
Statement moved to the [edit protocols pim mvpn family inet] hierarchy level from [edit protocols pim
mvpn] in Junos OS Release 13.3.
RELATED DOCUMENTATION
autodiscovery-only
IN THIS SECTION
Syntax | 1390
Description | 1390
Syntax
autodiscovery-only {
intra-as {
inclusive;
}
}
Hierarchy Level
Description
Enable the Rosen multicast VPN to use the MDT-SAFI autodiscovery NLRI.
Release Information
Statement moved to the [edit protocols pim mvpn family inet] hierarchy level from [edit protocols
mvpn] in Junos OS Release 13.3.
RELATED DOCUMENTATION
backoff-period
IN THIS SECTION
Syntax | 1391
Description | 1392
Options | 1392
Syntax
backoff-period milliseconds;
Hierarchy Level
Description
Configure the designated forwarder (DF) election backoff period for bidirectional PIM. The backoff-
period statement configures the period that the acting DF waits between receiving a better DF Offer
and sending the Pass message to transfer DF responsibility.
NOTE: Junos OS checks rendezvous point (RP) unicast reachability before accepting incoming
DF messages. DF messages for unreachable rendezvous points are ignored. This is needed to
prevent the following example scenario. Routers A and B are downstream routers on the same
LAN, and both are supposed to send DF election messages with an infinite metric on their
upstream interfaces (reverse-path forwarding [RPF] interfaces). Router A has a higher IP address
than Router B. When both routers lose the path to the RP, both send an Offer message with the
infinite metric onto the LAN. Router A wins the election because it has a higher IP address, and
Router B backs off as a result. After three Offer messages, according to RFC 5015, Router A
looks up the RP and finds no path to the RP. As a result, Router A transitions to the Lose state
and sends nothing. On the other hand, after backing off for an interval of 3 x the Offer period,
Router B does not receive any messages, and resumes the DF election by sending a new Offer
message. Hence, the pattern repeats indefinitely.
Options
milliseconds—Period that the acting DF waits between receiving a better DF Offer and sending the Pass
message to transfer DF responsibility.
• Default: 1000
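A sketch that doubles the default backoff period, using the interface-level [edit protocols pim interface interface-name bidirectional df-election] hierarchy shown under bidirectional (Interface); the interface name is an example:

```
protocols {
    pim {
        interface ge-0/0/0.0 {
            bidirectional {
                df-election {
                    backoff-period 2000;
                }
            }
        }
    }
}
```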
Release Information
RELATED DOCUMENTATION
backup-pe-group
IN THIS SECTION
Syntax | 1393
Description | 1394
Options | 1394
Syntax
backup-pe-group group-name {
backups [ addresses ];
local-address address;
}
Hierarchy Level
Description
Configure a backup provider edge (PE) group for ingress PE redundancy when point-to-multipoint label-
switched paths (LSPs) are used for multicast distribution.
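A sketch of a backup PE group, assuming the [edit routing-options multicast] hierarchy; the group name and all addresses are hypothetical:

```
routing-options {
    multicast {
        backup-pe-group pe-group-1 {
            backups [ 10.255.1.1 10.255.1.2 ];
            local-address 10.255.1.3;
        }
    }
}
```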
Options
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1395
Description | 1395
Options | 1395
Syntax
backup address;
Hierarchy Level
Description
Define a backup upstream multicast hop (UMH) for type 7 (S,G) routes.
If the primary UMH is unavailable, the backup is used. If neither UMH is available, no UMH is selected.
Options
Release Information
RELATED DOCUMENTATION
backups
IN THIS SECTION
Syntax | 1396
Description | 1396
Options | 1397
Syntax
backups [ addresses ];
Hierarchy Level
Description
Configure the address of backup PEs for ingress PE redundancy when point-to-multipoint label-
switched paths (LSPs) are used for multicast distribution.
Options
Release Information
RELATED DOCUMENTATION
bandwidth
IN THIS SECTION
Syntax | 1397
Description | 1398
Options | 1398
Syntax
Hierarchy Level
Description
Options
adaptive—Specify that the bandwidth is measured for the flows that are matched by the flow map.
• Default: 2 Mbps
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1399
Description | 1400
Syntax
bfd-liveness-detection {
authentication {
algorithm algorithm-name;
key-chain key-chain-name;
loose-check;
}
detection-time {
threshold milliseconds;
}
minimum-interval milliseconds;
minimum-receive-interval milliseconds;
multiplier number;
no-adaptation;
transmit-interval {
minimum-interval milliseconds;
threshold milliseconds;
}
version (0 | 1 | automatic);
}
Hierarchy Level
Description
Configure bidirectional forwarding detection (BFD) timers and authentication for PIM.
Release Information
RELATED DOCUMENTATION
bidirectional (Interface)
IN THIS SECTION
Syntax | 1401
Description | 1401
Syntax
bidirectional {
df-election {
backoff-period milliseconds;
offer-period milliseconds;
robustness-count number;
}
}
Hierarchy Level
Description
Release Information
RELATED DOCUMENTATION
bidirectional (RP)
IN THIS SECTION
Syntax | 1402
Description | 1403
Syntax
bidirectional {
address address {
group-ranges {
destination-ip-prefix/<prefix-length>;
}
hold-time seconds;
priority number;
}
}
Hierarchy Level
Description
Configure the routing device’s rendezvous-point (RP) properties for bidirectional PIM.
Release Information
RELATED DOCUMENTATION
bootstrap
IN THIS SECTION
Syntax | 1404
Description | 1404
Syntax
bootstrap {
family (inet | inet6) {
export [ policy-names ];
import [ policy-names ];
priority number;
}
}
Hierarchy Level
Description
Release Information
RELATED DOCUMENTATION
bootstrap-export
IN THIS SECTION
Syntax | 1405
Description | 1406
Options | 1406
Syntax
bootstrap-export [ policy-names ];
Hierarchy Level
Description
Apply one or more export policies to control outgoing PIM bootstrap messages.
Options
Release Information
RELATED DOCUMENTATION
bootstrap-import
IN THIS SECTION
Syntax | 1407
Description | 1407
Options | 1407
Syntax
bootstrap-import [ policy-names ];
Hierarchy Level
Description
Apply one or more import policies to control incoming PIM bootstrap messages.
Options
Release Information
RELATED DOCUMENTATION
bootstrap-priority
IN THIS SECTION
Syntax | 1408
Description | 1408
Options | 1408
Syntax
bootstrap-priority number;
Hierarchy Level
Description
Configure whether this routing device is eligible to be a bootstrap router. In the case of a tie, the routing
device with the highest IP address is elected to be the bootstrap router.
Options
number—Priority for becoming the bootstrap router. A value of 0 means that the routing device is not
eligible to be the bootstrap router.
• Default: 0
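A sketch that makes the device eligible for bootstrap router election, assuming the [edit protocols pim rp] hierarchy (the priority value is an example; any nonzero value makes the device eligible):

```
protocols {
    pim {
        rp {
            bootstrap-priority 100;
        }
    }
}
```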
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1409
Description | 1410
Default | 1411
Options | 1411
Syntax
cmcast-joins-limit-inet number;
Hierarchy Level
Description
The cmcast-joins-limit-inet statement limits the number of Type-6 and Type-7 routes. These routes
contain customer-route control information.
You can configure the cmcast-joins-limit-inet statement only when the MVPN mode is rpt-spt.
The cmcast-joins-limit-inet statement is applicable on the egress PE router. It limits the customer
multicast entries created in response to PIM (*,G) and (S,G) join messages. This statement is applicable
to both type-6 and type-7 routes because the intention is to limit the egress forwarding entries, and in
rpt-spt mode, an MVPN creates forwarding entries for both of these route types (in other words, for
both (*,G) and (S,G) entries). However, this statement does not block BGP-created customer multicast
entries because the purpose of this statement is to prevent the creation of forwarding entries on the
egress PE router only and only for non-remote receivers. If remote-side customer multicast entries or
forwarding entries need to be limited, you can use forwarding-cache threshold on the ingress routers, in
which case this statement is not required.
By placing a limit on the customer multicast entries, you can ensure that when the limit is reached or the
maximum forwarding state is created, all further local join messages will be blocked by the egress PE
router. This ensures that traffic is flowing for only those multicast entries that are permitted.
If another PE router is interested in the traffic, it might pull the traffic from the ingress PE router by
sending type-6 and type-7 routes. To prevent forwarding in this case, you can configure the leaf tunnel
limit (leaf-tunnel-limit-inet). By preventing type-4 routes from being sent in response to type-3 routes,
the formation of selective tunnels is blocked when the tunnel limit is reached. This ensures that traffic
flows only for the routes within the tunnel limit. For all other routes, traffic flows only to the PE routers
that have not reached the configured limit.
Setting the cmcast-joins-limit-inet statement or reducing the value of the limit does not alter or delete
the already existing and installed routes. If needed, you can run the clear pim join command to force the
limit to take effect. Those routes that cannot be processed because of the limit are added to a queue,
and this queue is processed when the limit is removed or increased and when existing routes are
deleted.
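A sketch that caps customer multicast join state on an egress PE, assuming the statement is configured at the [edit routing-instances instance-name protocols mvpn] hierarchy together with rpt-spt mode (the instance name and limit value are hypothetical):

```
routing-instances {
    vpn-a {
        protocols {
            mvpn {
                mvpn-mode {
                    rpt-spt;
                }
                cmcast-joins-limit-inet 500;
            }
        }
    }
}
```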
Default
Unlimited
Options
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1412
Description | 1412
Default | 1413
Options | 1413
Syntax
cmcast-joins-limit-inet6 number;
Hierarchy Level
Description
The cmcast-joins-limit-inet6 statement limits the number of Type-6 and Type-7 routes. These routes
contain customer-route control information.
You can configure the cmcast-joins-limit-inet6 statement only when the MVPN mode is rpt-spt.
The cmcast-joins-limit-inet6 statement is applicable on the egress PE router. It limits the customer
multicast entries created in response to PIM (*,G) and (S,G) join messages. This statement is applicable
to both type-6 and type-7 routes because the intention is to limit the egress forwarding entries, and in
rpt-spt mode, an MVPN creates forwarding entries for both of these route types (in other words, for
both (*,G) and (S,G) entries). However, this statement does not block BGP-created customer multicast
entries because the purpose of this statement is to prevent the creation of forwarding entries on the
egress PE router only and only for non-remote receivers. If remote-side customer multicast entries or
forwarding entries need to be limited, you can use forwarding-cache threshold on the ingress routers, in
which case this statement is not required.
By placing a limit on the customer multicast entries, you can ensure that when the limit is reached or the
maximum forwarding state is created, all further local join messages will be blocked by the egress PE
router. This ensures that traffic is flowing for only those multicast entries that are permitted.
If another PE router is interested in the traffic, it might pull the traffic from the ingress PE router by
sending type-6 and type-7 routes. To prevent forwarding in this case, you can configure the leaf tunnel
limit (leaf-tunnel-limit-inet6). By preventing type-4 routes from being sent in response to type-3 routes,
the formation of selective tunnels is blocked when the tunnel limit is reached. This ensures that traffic
flows only for the routes within the tunnel limit. For all other routes, traffic flows only to the PE routers
that have not reached the configured limit.
Setting the cmcast-joins-limit-inet6 statement or reducing the value of the limit does not alter or delete
the already existing and installed routes. If needed, you can run the clear pim join command to force the
limit to take effect. Those routes that cannot be processed because of the limit are added to a queue,
and this queue is processed when the limit is removed or increased and when existing routes are
deleted.
Default
Unlimited
Options
Release Information
RELATED DOCUMENTATION
cont-stats-collection-interval
IN THIS SECTION
Syntax | 1414
Description | 1415
Default | 1415
Options | 1415
Syntax
cont-stats-collection-interval interval;
Hierarchy Level
Description
Change the default interval (in seconds) at which continuous, persistent IGMP and MLD statistics are
stored on devices that support continuous statistics collection.
Junos OS multicast devices collect statistics of received and transmitted IGMP and MLD control packets
for active subscribers. Devices that support continuous IGMP and MLD statistics collection also
maintain persistent, continuous statistics of IGMP and MLD messages for past and currently active
subscribers. The device preserves these continuous statistics across routing daemon restarts, graceful
Routing Engine switchovers, ISSU, or line card reboot operations. Junos OS stores continuous statistics
in a shared database and copies them to the backup Routing Engine at this configured interval to avoid too
much processing overhead on the Routing Engine.
The show igmp statistics and show mld statistics CLI commands display currently active subscriber
IGMP or MLD statistics by default, or you can include the continuous option with either of those
commands to display the continuous statistics instead.
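The operational commands mentioned above can be used as follows; the continuous option switches the display from active-subscriber statistics to the persistent statistics:

```
user@host> show igmp statistics
user@host> show igmp statistics continuous
user@host> show mld statistics continuous
```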
Default
Options
interval—Interval in seconds at which you want the device to store collected continuous IGMP and MLD
statistics.
Release Information
RELATED DOCUMENTATION
count
IN THIS SECTION
Syntax | 1416
Description | 1416
Syntax
count number;
Hierarchy Level
Description
Specify the count for the number of triggered joins to be sent between PIM neighbors through the PIM
interface. Optionally, you can configure the count number using the count statement at the [edit
protocols pim interface interface-name multiple-triggered-joins] hierarchy level.
• Range: 5 through 15
• Default: 5
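A sketch using the interface-level hierarchy named in the description above; the interface name and count value are examples:

```
protocols {
    pim {
        interface ge-0/0/0.0 {
            multiple-triggered-joins {
                count 10;
            }
        }
    }
}
```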
Release Information
RELATED DOCUMENTATION
interface | 1593
multiple-triggered-joins | 1710
create-new-ucast-tunnel
IN THIS SECTION
Syntax | 1417
Description | 1418
Syntax
create-new-ucast-tunnel;
Hierarchy Level
Description
One of two modes for building unicast tunnels when ingress replication is configured for the provider
tunnel. When this statement is configured, each time a new destination is added to the multicast
distribution tree, a new unicast tunnel to the destination is created in the ingress replication tunnel. The
new tunnel is deleted if the destination is no longer needed. Use this mode for RSVP LSPs using ingress
replication.
Release Information
RELATED DOCUMENTATION
dampen
IN THIS SECTION
Syntax | 1419
Description | 1419
Syntax
dampen minutes;
Hierarchy Level
Description
Time to wait before re-advertising the source-active route (1 to 30 minutes). After traffic on the ingress
PE falls below the threshold set for "min-rate" on page 1664, this is the length of time that resuming traffic
must continue to exceed the min-rate before the ingress PE can start re-advertising Source-Active A-D
routes.
To verify that the value is set as expected, you can check whether the Type 5 (Source-Active route) has
been advertised using the show route table vrf.mvpn.0 command. It may take several minutes before
you can see the changes in the Source-Active A-D route advertisement after making changes to the
min-rate.
Release Information
RELATED DOCUMENTATION
data-encapsulation
IN THIS SECTION
Syntax | 1421
Description | 1421
Default | 1421
Options | 1421
Syntax
Hierarchy Level
Description
Configure a rendezvous point (RP) using MSDP to encapsulate multicast data received in MSDP register
messages inside forwarded MSDP source-active messages.
Default
Options
• Default: enable
Release Information
RELATED DOCUMENTATION
Example: Configuring MSDP with Active Source Limits and Mesh Groups | 562
data-forwarding
IN THIS SECTION
Syntax | 1422
Description | 1423
Default | 1423
Syntax
data-forwarding {
receiver {
install;
mode (proxy | transparent);
(source-list | source-vlans) vlan-list;
translate;
}
source {
groups group-prefix;
}
}
Hierarchy Level
Description
Configure a data-forwarding VLAN as a multicast source VLAN (MVLAN) or a receiver VLAN using the
multicast VLAN registration (MVR) feature.
You can configure a data-forwarding VLAN as either a multicast source VLAN (an MVLAN) or a multicast
receiver VLAN (an MVR receiver VLAN), but not both.
• When you configure an MVR receiver VLAN, you must also configure the MVLANs you list as source
VLANs for that MVR receiver VLAN.
• When you configure a source MVLAN, you aren’t required to set up MVR receiver VLANs at the
same time; you can configure those later.
NOTE: The mode, source-list, and translate statements are only applicable to MVR configuration
on EX Series switches that support the Enhanced Layer 2 Software (ELS) configuration style. The
source-vlans statement is applicable only to EX Series switches that do not support ELS, and is
equivalent to the ELS source-list statement.
The receiver, source, and mode statements and options are explained separately. See CLI Explorer.
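A sketch pairing an MVLAN with an MVR receiver VLAN on an ELS switch, assuming the [edit protocols igmp-snooping vlan vlan-name data-forwarding] hierarchy; the VLAN names and the group prefix are hypothetical:

```
protocols {
    igmp-snooping {
        vlan mvlan100 {
            data-forwarding {
                source {
                    groups 225.100.0.0/16;
                }
            }
        }
        vlan data200 {
            data-forwarding {
                receiver {
                    source-list mvlan100;
                    mode transparent;
                }
            }
        }
    }
}
```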
Default
Disabled
Release Information
RELATED DOCUMENTATION
data-mdt-reuse
IN THIS SECTION
Syntax | 1424
Description | 1424
Syntax
data-mdt-reuse;
Hierarchy Level
Description
Release Information
Statement introduced in Junos OS Release 10.0. In Junos OS Release 17.3R1, the mdt hierarchy was
moved from provider-tunnel to the provider-tunnel family inet and provider-tunnel family inet6
hierarchies as part of an upgrade to add IPv6 support for default MDT in Rosen 7, and data MDT for
Rosen 6 and Rosen 7. The provider-tunnel mdt hierarchy is now hidden for backward compatibility with
existing scripts.
RELATED DOCUMENTATION
default-peer
IN THIS SECTION
Syntax | 1425
Description | 1426
Syntax
default-peer;
Hierarchy Level
Description
Establish this peer as the default MSDP peer and accept source-active messages from the peer without
the usual peer-reverse-path-forwarding (peer-RPF) check.
Release Information
RELATED DOCUMENTATION
Example: Configuring MSDP with Active Source Limits and Mesh Groups | 562
default-vpn-source
IN THIS SECTION
Syntax | 1427
Description | 1427
Default | 1428
Syntax
default-vpn-source {
interface-name interface-name;
}
Hierarchy Level
Description
Enable the router to use the primary loopback address configured in the default routing instance as the
source address when PIM hello messages, join messages, and prune messages are sent over multicast
tunnel interfaces for interoperability with other vendors’ routers.
Default
By default, the router uses the loopback address configured in the VRF routing instance as the source
address when sending PIM hello messages, join messages, and prune messages over multicast tunnel
interfaces.
Release Information
RELATED DOCUMENTATION
interface-name | 1600
defaults
IN THIS SECTION
Syntax | 1428
Description | 1429
Syntax
defaults {
(accounting | no-accounting);
group-policy [ policy-names ];
query-interval seconds;
query-response-interval seconds;
robust-count number;
ssm-map ssm-map-name;
version version;
}
Hierarchy Level
Description
Configure default IGMP attributes for all Automatic Multicast Tunneling (AMT) interfaces.
Release Information
RELATED DOCUMENTATION
dense-groups
IN THIS SECTION
Syntax | 1430
Description | 1430
Options | 1430
Syntax
dense-groups {
addresses;
}
Hierarchy Level
Description
Options
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1431
Description | 1432
Syntax
detection-time {
threshold milliseconds;
}
Hierarchy Level
Description
Enable BFD failure detection. The BFD failure detection timers are adaptive and can be adjusted to be
faster or slower. The lower the BFD failure detection timer value, the faster the failure detection and
vice versa. For example, the timers can adapt to a higher value if the adjacency fails (that is, the timer
detects failures more slowly). Or a neighbor can negotiate a higher value for a timer than the configured
value. The timers adapt to a higher value when a BFD session flap occurs more than three times in a
span of 15 seconds. A back-off algorithm increases the receive (Rx) interval by two if the local BFD
instance is the reason for the session flap. The transmission (Tx) interval is increased by two if the
remote BFD instance is the reason for the session flap. You can use the clear bfd adaptation command
to return BFD interval timers to their configured values. The clear bfd adaptation command is hitless,
meaning that the command does not affect traffic flow on the routing device.
Release Information
RELATED DOCUMENTATION
df-election
IN THIS SECTION
Syntax | 1433
Description | 1433
Syntax
df-election {
backoff-period milliseconds;
offer-period milliseconds;
robustness-count number;
}
Hierarchy Level
Description
Optionally, configure the designated forwarder (DF) election parameters for bidirectional PIM.
Release Information
RELATED DOCUMENTATION
disable
IN THIS SECTION
Syntax | 1435
Description | 1438
Default | 1438
Syntax
disable;
Description
disable (PIM Graceful Restart)—Explicitly disable PIM sparse mode graceful restart.
disable (PIM)—Explicitly disable PIM at the protocol, interface, or family hierarchy levels.
disable (Protocols MLD Snooping)—Disable MLD snooping on the VLAN. Multicast traffic will be flooded
to all interfaces in the VLAN except the source interface.
disable (IGMP Snooping)—Disable IGMP snooping on the VLAN. Multicast traffic will be flooded to all
interfaces on the VLAN except the source interface.
Default
If you do not include this statement, MLD snooping is enabled on all interfaces in the VLAN.
If you do not include this statement in the configuration for a VLAN, IGMP snooping is enabled on the
VLAN.
Release Information
address (Local RPs), disable (Protocols IGMP), disable (Protocols SAP), disable (PIM), disable (Protocols
MLD), and disable (Protocols MSDP) introduced before Junos OS Release 7.4.
address (Local RPs) and disable (Protocols IGMP) introduced in Junos OS Release 9.0 for EX Series
switches.
disable statement extended to the [family] hierarchy level of disable (PIM) in Junos OS Release 9.6.
disable (IGMP Snooping) introduced in Junos OS Release 11.1 for the QFX Series.
disable (MLD Snooping) introduced in Junos OS Release 18.1R1 for the SRX1500 devices.
address (Local RPs) introduced in Junos OS Release 11.3 for the QFX Series.
disable (Protocols IGMP), disable (Protocols MLD Snooping), and disable (Protocols MSDP) introduced
in Junos OS Release 12.1 for the QFX Series.
disable (Protocols MLD Snooping) introduced in Junos OS Release 12.1 for EX Series switches.
address (Local RPs) and disable (Protocols MSDP) introduced in Junos OS Release 14.1X53-D20 for the
OCX Series.
NOTE: Distance Vector Multicast Routing Protocol (DVMRP) was deprecated in Junos OS
Release 16.1. Although DVMRP commands continue to be available and configurable in the CLI,
they are no longer visible and are scheduled for removal in a subsequent release.
RELATED DOCUMENTATION
mld-snooping | 1669
Disabling IGMP | 57
Disabling MLD | 91
Disabling PIM | 417
family (Protocols PIM) | 1477
Configuring the Session Announcement Protocol | 577
Example: Configuring Nonstop Active Routing for PIM | 517
Example: Configuring Multicast Snooping | 1240
Configuring MLD Snooping on an EX Series Switch VLAN (CLI Procedure) | 186
show mld-snooping vlans | 2259
IN THIS SECTION
Syntax | 1440
Description | 1441
Syntax
disable;
Hierarchy Level
Description
Disable IGMP snooping on the VLAN. Without IGMP snooping, multicast traffic will be flooded to all
interfaces on the VLAN except the source interface.
This statement is available only on legacy switches that do not support the Enhanced Layer 2 Software
(ELS) configuration style. On these switches, IGMP snooping is enabled by default on all VLANs, and you
include this statement if you want to disable IGMP snooping selectively on some VLANs or to disable it
on all VLANs.
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1442
Description | 1442
Default | 1442
Syntax
disable;
Hierarchy Level
Description
Disable MLD snooping on the VLAN. Multicast traffic will be flooded to all interfaces in the VLAN
except the source interface.
Default
If you do not include this statement, MLD snooping is enabled on all interfaces in the VLAN.
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1443
Description | 1443
Syntax
disable;
Hierarchy Level
Description
Release Information
RELATED DOCUMENTATION
disable (PIM)
IN THIS SECTION
Syntax | 1444
Description | 1445
Syntax
disable;
Hierarchy Level
Description
Release Information
disable statement extended to the [family] hierarchy level in Junos OS Release 9.6.
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1446
Description | 1446
Syntax
disable;
Hierarchy Level
Description
Release Information
RELATED DOCUMENTATION
Disabling MLD | 91
IN THIS SECTION
Syntax | 1447
Description | 1448
Syntax
disable;
Hierarchy Level
Description
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1449
Description | 1449
Syntax
disable;
Hierarchy Level
Description
Release Information
RELATED DOCUMENTATION
distributed-dr
IN THIS SECTION
Syntax | 1450
Description | 1450
Syntax
distributed-dr;
Hierarchy Level
Description
Enable PIM distributed designated router (DR) functionality on IRB interfaces associated with EVPN
virtual LANs (VLANs) that have been configured with IGMP snooping or MLD snooping. By effectively
disabling certain PIM features that are not required in this scenario, this statement supports using PIM
to perform intersubnet, that is, inter-VLAN, multicast routing more efficiently.
When you configure this statement on an interface on a device, PIM ignores the DR status of the
interface when processing IGMP reports received on the interface. When the interface receives the
IGMP or MLD report, the device sends PIM upstream join messages to pull the multicast stream and
forward it to the interface regardless of the DR status of the interface. This setting also disables the PIM
assert mechanism on the interface.
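A hedged sketch of enabling distributed DR functionality on a PIM-enabled IRB interface (the interface name irb.100 is hypothetical):

```
protocols {
    pim {
        interface irb.100 {
            distributed-dr;
        }
    }
}
```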
Release Information
RELATED DOCUMENTATION
distributed (IGMP)
IN THIS SECTION
Syntax | 1451
Description | 1452
Syntax
distributed;
Hierarchy Level
Description
Enable distributed IGMP by moving IGMP processing from the Routing Engine to the Packet Forwarding
Engine. Distributed IGMP reduces the join and leave latency of IGMP memberships.
NOTE: When you enable distributed IGMP, the following interface options are not supported on
the Packet Forwarding Engine: oif-map, group-limit, ssm-map, and static. However, the ssm-
map-policy option is supported on distributed IGMP interfaces. The traceoptions and
accounting statements can only be enabled for IGMP operations still performed on the Routing
Engine; they are not supported on the Packet Forwarding Engine. The clear igmp membership
command is not supported when distributed IGMP is enabled.
When the distributed statement is enabled in conjunction with mldp-inband-signalling (so that PIM acts
as a multipoint LDP in-band edge router), it supports interconnecting separate PIM domains across an
MPLS-based core.
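As a minimal sketch, distributed IGMP is enabled per interface (the interface name is an example):

```
protocols {
    igmp {
        interface ge-0/0/0.0 {
            distributed;
        }
    }
}
```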
Release Information
Support added in Junos OS Release 18.2R1 for using distributed IGMP in conjunction with Multipoint
LDP (mLDP) in-band signalling.
RELATED DOCUMENTATION
dr-election-on-p2p
IN THIS SECTION
Syntax | 1453
Description | 1453
Default | 1454
Syntax
dr-election-on-p2p;
Hierarchy Level
Description
Default
Release Information
RELATED DOCUMENTATION
dr-register-policy
IN THIS SECTION
Syntax | 1454
Description | 1455
Options | 1455
Syntax
dr-register-policy [ policy-names ];
Hierarchy Level
Description
Options
Release Information
RELATED DOCUMENTATION
dvmrp
IN THIS SECTION
Syntax | 1456
Description | 1457
Default | 1457
Options | 1457
Syntax
dvmrp {
disable;
export [ policy-names ];
import [ policy-names ];
interface interface-name {
disable;
hold-time seconds;
metric metric;
mode (forwarding | unicast-routing);
}
rib-group group-name;
traceoptions {
file filename <files number> <size size> <world-readable | no-world-readable>;
flag flag <flag-modifier> <disable>;
}
}
Hierarchy Level
Description
Default
Options
Release Information
NOTE: Distance Vector Multicast Routing Protocol (DVMRP) was deprecated in Junos OS
Release 16.1. Although DVMRP commands continue to be available and configurable in the CLI,
they are no longer visible and are scheduled for removal in a subsequent release.
RELATED DOCUMENTATION
embedded-rp
IN THIS SECTION
Syntax | 1458
Description | 1458
Syntax
embedded-rp {
group-ranges {
destination-ip-prefix</prefix-length>;
}
maximum-rps limit;
}
Hierarchy Level
Description
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1459
Description | 1460
Syntax
exclude;
Hierarchy Level
Description
Configure the static group to operate in exclude mode. In exclude mode all sources except the address
configured are accepted for the group. If this statement is not included, the group operates in include
mode.
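For illustration, a static IGMPv3 group in exclude mode that accepts traffic from all sources except the configured one (interface and addresses are hypothetical):

```
protocols {
    igmp {
        interface ge-0/0/1.0 {
            static {
                group 233.252.0.1 {
                    exclude;
                    source 192.0.2.10;
                }
            }
        }
    }
}
```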
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1461
Description | 1461
Syntax
exclude;
Hierarchy Level
Description
Configure the static group to operate in exclude mode. In exclude mode all sources except the address
configured are accepted for the group. By default, the group operates in include mode.
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1462
Description | 1462
Syntax
export [ policy-names ];
Hierarchy Level
Description
Apply one or more export policies to control outgoing PIM join and prune messages. PIM join and prune
filters can be applied to PIM-SM and PIM-SSM messages. PIM join and prune filters cannot be applied
to PIM-DM messages.
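A hedged sketch of filtering outgoing joins for a group range with a PIM export policy (the policy name block-groups and the prefix are illustrative assumptions):

```
policy-options {
    policy-statement block-groups {
        from {
            route-filter 224.0.1.0/24 orlonger;
        }
        then reject;
    }
}
protocols {
    pim {
        export block-groups;
    }
}
```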
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1463
Description | 1464
Options | 1464
Syntax
export [ policy-names ];
Hierarchy Level
Description
Apply one or more policies to routes being exported from the routing table into DVMRP. If you specify
more than one policy, they are evaluated in the order specified, from first to last, and the first matching
policy is applied to the route. If no match is found, the routing table exports into DVMRP only the routes
that it learned from DVMRP and direct routes.
Options
Release Information
NOTE: Distance Vector Multicast Routing Protocol (DVMRP) was deprecated in Junos OS
Release 16.1. Although DVMRP commands continue to be available and configurable in the CLI,
they are no longer visible and are scheduled for removal in a subsequent release.
RELATED DOCUMENTATION
import
Example: Configuring DVMRP to Announce Unicast Routes | 605
IN THIS SECTION
Syntax | 1465
Description | 1466
Options | 1466
Syntax
export [ policy-names ];
Hierarchy Level
Description
Apply one or more policies to routes being exported from the routing table into MSDP.
Options
Release Information
RELATED DOCUMENTATION
export (Bootstrap)
IN THIS SECTION
Syntax | 1467
Description | 1467
Options | 1467
Syntax
export [ policy-names ];
Hierarchy Level
Description
Apply one or more export policies to control outgoing PIM bootstrap messages.
Options
Release Information
RELATED DOCUMENTATION
export-target
IN THIS SECTION
Syntax | 1468
Description | 1468
Options | 1468
Syntax
export-target {
target target-community;
unicast;
}
Hierarchy Level
Description
Enables you to override the Layer 3 VPN import and export route targets used for importing and
exporting routes for the MBGP MVPN network layer reachability information (NLRI).
Options
Release Information
IN THIS SECTION
Syntax | 1469
Description | 1470
Options | 1470
Syntax
hold-time seconds;
override;
priority number;
}
Hierarchy Level
Description
Options
Release Information
RELATED DOCUMENTATION
family (Bootstrap)
IN THIS SECTION
Syntax | 1471
Description | 1471
Options | 1471
Syntax
Hierarchy Level
Description
Options
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1472
Description | 1473
Syntax
family {
inet {
anycast-prefix ip-prefix/<prefix-length>;
local-address ip-address;
}
}
Hierarchy Level
Description
Configure the protocol address family for Automatic Multicast Tunneling (AMT) relay functions. Only the
inet family for IPv4 protocol addresses is supported.
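An illustrative sketch of the AMT relay family configuration (the anycast prefix and local address are example values):

```
protocols {
    amt {
        relay {
            family {
                inet {
                    anycast-prefix 203.0.113.0/24;
                    local-address 203.0.113.1;
                }
            }
        }
    }
}
```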
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1474
Description | 1475
Options | 1475
Syntax
Hierarchy Level
Description
Configure one of the following PIM protocol settings for the specified family on the specified interface:
• Disable PIM
Options
inet—Enable the PIM protocol for the IP version 4 (IPv4) address family.
inet6—Enable the PIM protocol for the IP version 6 (IPv6) address family.
Release Information
Support for the Bidirectional Forwarding Detection (BFD) Protocol statements was introduced in Junos
OS Release 12.2.
RELATED DOCUMENTATION
Configuring PIM and the Bidirectional Forwarding Detection (BFD) Protocol | 499
Disabling PIM | 417
IN THIS SECTION
Syntax | 1476
Description | 1476
Syntax
family {
inet-mvpn;
inet6-mvpn;
}
Hierarchy Level
Description
Explicitly enable IPv4 or IPv6 MVPN routes to be advertised from the VRF instance while preventing all
other route types from being advertised.
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1477
Description | 1478
Options | 1478
Syntax
Hierarchy Level
Description
Options
inet—Disable the PIM protocol for the IP version 4 (IPv4) address family.
inet6—Disable the PIM protocol for the IP version 6 (IPv6) address family.
Release Information
RELATED DOCUMENTATION
flood-groups
IN THIS SECTION
Syntax | 1479
Description | 1479
Options | 1479
Syntax
flood-groups [ ip-addresses ];
Hierarchy Level
Description
Options
Release Information
RELATED DOCUMENTATION
flow-map
IN THIS SECTION
Syntax | 1480
Description | 1481
Options | 1481
Syntax
flow-map flow-map-name {
bandwidth (bps | adaptive);
forwarding-cache {
timeout (never non-discard-entry-only | minutes);
}
policy [ policy-names ];
redundant-sources [ addresses ];
}
Hierarchy Level
Description
Options
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1482
Description | 1482
Syntax
forwarding-cache {
timeout (minutes | never non-discard-entry-only );
}
Hierarchy Level
Description
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1483
Description | 1484
Options | 1484
Syntax
forwarding-cache {
threshold suppress value <reuse value>;
}
Hierarchy Level
Description
Options
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1485
Description | 1485
Syntax
graceful-restart {
disable;
no-bidirectional-mode;
restart-duration seconds;
}
Hierarchy Level
Description
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1486
Description | 1486
Default | 1487
Syntax
graceful-restart {
disable;
restart-duration seconds;
}
Hierarchy Level
[edit multicast-snooping-options]
Description
Establish the graceful restart duration for multicast snooping. You can set this value between 0 and 300
seconds. If you set the duration to 0, graceful restart is effectively disabled. Set this value slightly larger
than the IGMP query response interval.
Default
180 seconds
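For example, a sketch that raises the restart duration to 200 seconds:

```
multicast-snooping-options {
    graceful-restart {
        restart-duration 200;
    }
}
```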
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1488
Description | 1488
Options | 1488
Syntax
group ip-address {
source ip-address;
}
Hierarchy Level
Description
Configure the IGMP multicast group address that receives data on an interface and (optionally) a source
address for certain packets.
Options
ip-address—Group address.
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1489
Description | 1489
Options | 1489
Syntax
group multicast-group-address {
<distributed>;
source source-address <distributed>;
}
Hierarchy Level
Description
Specify the multicast group address for the multicast group that is statically configured on an interface.
Options
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1491
Description | 1491
Options | 1491
Syntax
group ip-address;
Hierarchy Level
Description
Options
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1492
Description | 1492
Options | 1493
Syntax
group group-address {
source source-address {
rate threshold-rate;
}
}
Hierarchy Level
Description
Specify the explicit or prefix multicast group address to which the threshold limits apply. This is typically
a well-known address for a certain type of multicast traffic.
Options
Release Information
Statement introduced before Junos OS Release 7.4. In Junos OS Release 17.3R1, the mdt hierarchy was
moved from provider-tunnel to the provider-tunnel family inet and provider-tunnel family inet6
hierarchies as part of an upgrade to add IPv6 support for default MDT in Rosen 7, and data MDT for
Rosen 6 and Rosen 7. The provider-tunnel mdt hierarchy is now hidden for backward compatibility with
existing scripts.
RELATED DOCUMENTATION
Example: Configuring Data MDTs and Provider Tunnels Operating in Source-Specific Multicast
Mode | 696
Example: Configuring Data MDTs and Provider Tunnels Operating in Any-Source Multicast Mode |
690
IN THIS SECTION
Syntax | 1494
Description | 1495
Options | 1495
Syntax
group group-name {
disable;
export [ policy-names ];
import [ policy-names ];
local-address address;
mode (mesh-group | standard);
traceoptions {
file filename <files number> <size size> <world-readable | no-world-readable>;
flag flag <flag-modifier> <disable>;
}
peer address; {
disable;
active-source-limit {
maximum number;
threshold number;
}
authentication-key peer-key;
default-peer;
export [ policy-names ];
import [ policy-names ];
local-address address;
traceoptions {
file filename <files number> <size size> <world-readable | no-world-readable>;
flag flag <flag-modifier> <disable>;
}
}
}
Hierarchy Level
Description
Define an MSDP peer group. MSDP peers within groups share common tracing options, if present and
not overridden for an individual peer with the "peer" on page 1745 statement. To configure multiple
MSDP groups, include multiple group statements.
By default, the group's options are identical to the global MSDP options. To override the global options,
include group-specific options within the group statement.
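A hedged sketch of an MSDP mesh group with two peers (the group name rp-mesh and the addresses are hypothetical):

```
protocols {
    msdp {
        group rp-mesh {
            mode mesh-group;
            local-address 192.0.2.1;
            peer 192.0.2.2;
            peer 192.0.2.3;
        }
    }
}
```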
Options
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1496
Description | 1496
Options | 1497
Syntax
group multicast-group-address {
exclude;
group-count number;
group-increment increment;
source ip-address {
source-count number;
source-increment increment;
}
}
Hierarchy Level
Description
The MLD multicast group address and (optionally) the source address for the multicast group being
statically configured on an interface.
Options
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1498
Description | 1498
Syntax
group multicast-group-address {
exclude;
group-count number;
group-increment increment;
source ip-address {
source-count number;
source-increment increment;
}
}
Hierarchy Level
Description
Specify the IGMP multicast group address and (optionally) the source address for the multicast group
being statically configured on an interface.
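An illustrative sketch that uses the group-count and group-increment options to create three consecutive static groups, 233.252.0.1 through 233.252.0.3 (interface and addresses are examples):

```
protocols {
    igmp {
        interface ge-0/0/2.0 {
            static {
                group 233.252.0.1 {
                    group-count 3;
                    group-increment 0.0.0.1;
                }
            }
        }
    }
}
```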
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1499
Description | 1499
Options | 1500
Syntax
group multicast-group-address {
source ip-address;
}
Hierarchy Level
Description
Configure a static multicast group on an interface and (optionally) the source address for the multicast
group.
Options
source ip-address—Valid IP multicast address for the source of the multicast group.
Release Information
Support at the [edit routing-instances instance-name protocols mld-snooping vlan vlan-name interface
interface-name static] hierarchy level introduced in Junos OS Release 13.3 for EX Series switches.
Support for the source statement introduced in Junos OS Release 13.3 for EX Series switches.
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1501
Description | 1502
Options | 1502
Syntax
group address {
source source-address {
inter-region-segmented {
fan-out fan-out value;
threshold rate-value;
}
ldp-p2mp;
pim-ssm {
group-range multicast-prefix;
}
rsvp-te {
label-switched-path-template {
(default-template | lsp-template-name);
}
static-lsp lsp-name;
}
threshold-rate number;
}
wildcard-source {
inter-region-segmented {
fan-out fan-out value;
}
ldp-p2mp;
pim-ssm {
group-range multicast-prefix;
}
rsvp-te {
label-switched-path-template {
(default-template | lsp-template-name);
}
static-lsp lsp-name;
}
}
Hierarchy Level
Description
Specify the IP address for the multicast group configured for point-to-multipoint label-switched paths
(LSPs) and PIM-SSM GRE selective provider tunnels.
Options
address—Specify the IP address for the multicast group. This address must be a valid multicast group
address.
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1503
Description | 1503
Default | 1504
Options | 1504
Syntax
group group-address {
source source-address {
next-hop next-hop-address;
}
wildcard-source {
next-hop next-hop-address;
}
}
Hierarchy Level
Description
Configure the PIM group address for which you configure RPF selection.
Default
Options
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1505
Description | 1505
Syntax
group-address address;
Hierarchy Level
Description
Configure the PIM-ASM (Rosen 6) or PIM-SSM (Rosen 7) provider tunnel group address. Each MDT is
linked to a group address in the provider space.
Release Information
In Junos OS Release 17.3R1, the pim-ssm hierarchy was moved from provider-tunnel to the provider-
tunnel family inet and provider-tunnel family inet6 hierarchies as part of an upgrade to add IPv6
support for default multicast distribution tree (MDT) in Rosen 7, and data MDT for Rosen 6 and Rosen 7.
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1506
Description | 1507
Options | 1507
Syntax
group-address address;
Hierarchy Level
Description
Specify a group address on which to encapsulate multicast traffic from a virtual private network (VPN)
instance.
NOTE: IPv6 provider tunnels are not currently supported for draft-rosen MVPNs. They are
supported for MBGP MVPNs.
Options
address—For IPv4, IP address whose high-order bits are 1110, giving an address range from 224.0.0.0
through 239.255.255.255, or simply 224.0.0.0/4. For IPv6, IP address whose high-order bits are FF00
(FF00::/8).
Release Information
Starting with Junos OS Release 11.4, to provide consistency with draft-rosen 7 and next-generation
BGP-based multicast VPNs, configure the provider tunnels for draft-rosen 6 any-source multicast VPNs
at the [edit routing-instances routing-instance-name provider-tunnel] hierarchy level. The mdt, vpn-
tunnel-source, and vpn-group-address statements are deprecated at the [edit routing-instances
routing-instance-name protocols pim] hierarchy level. Use group-address in place of vpn-group-
address.
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1508
Description | 1508
Options | 1508
Syntax
group-count number;
Hierarchy Level
Description
Options
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1509
Description | 1510
Options | 1510
Syntax
group-count number;
Hierarchy Level
Description
Options
• Default: 1
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1511
Description | 1511
Options | 1511
Syntax
group-increment increment;
Hierarchy Level
Description
Configure the amount by which the address is incremented for each static group created. The
increment is specified in dotted decimal notation, similar to an IPv4 address.
Options
• Default: 0.0.0.1
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1512
Description | 1513
Options | 1513
Syntax
group-increment number;
Hierarchy Level
Description
Configure the amount by which the address is incremented for each static group created. The
increment is specified in a format similar to an IPv6 address.
Options
• Default: ::1
Release Information
RELATED DOCUMENTATION
group-limit (IGMP)
IN THIS SECTION
Syntax | 1514
Description | 1514
Default | 1514
Options | 1515
Syntax
group-limit limit;
Hierarchy Level
Description
Configure a limit for the number of multicast groups (or [S,G] channels in IGMPv3) allowed on an
interface. After this limit is reached, new reports are ignored and all related flows are not flooded on the
interface.
To confirm the configured group limit on the interface, use the show igmp interface command.
Default
By default, there is no limit to the number of multicast groups that can join the interface.
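A minimal sketch limiting an interface to 100 groups (the interface name is an example):

```
protocols {
    igmp {
        interface ge-0/0/0.0 {
            group-limit 100;
        }
    }
}
```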
Options
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1516
Description | 1516
Default | 1516
Options | 1516
Syntax
group-limit limit;
Hierarchy Level
Description
Configure a limit for the number of multicast groups (or [S,G] channels in IGMPv3) allowed on an
interface. After this limit is reached, new reports are ignored and all related flows are not flooded on the
interface.
Default
Options
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1517
Description | 1517
Default | 1518
Options | 1518
Syntax
group-limit limit;
Hierarchy Level
Description
Configure a limit for the number of multicast groups (or [S,G] channels in MLDv2) allowed on a logical
interface. After this limit is reached, new reports are ignored and all related flows are not flooded on the
interface.
Default
By default, there is no limit to the number of multicast groups that can join the interface.
Options
Release Information
RELATED DOCUMENTATION
Configuring MLD | 60
IN THIS SECTION
Syntax | 1519
Description | 1519
Syntax
group-policy [ policy-names ];
Hierarchy Level
Description
When this statement is enabled on a router running IGMP version 2 (IGMPv2) or version 3 (IGMPv3),
after the router receives an IGMP report, the router compares the group against the specified group
policy and performs the action configured in that policy (for example, rejects the report).
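A hedged sketch that rejects IGMP reports for an administratively scoped range (the policy name and prefix are illustrative assumptions):

```
policy-options {
    policy-statement reject-admin-scoped {
        from {
            route-filter 239.0.0.0/8 orlonger;
        }
        then reject;
    }
}
protocols {
    igmp {
        interface ge-0/0/0.0 {
            group-policy reject-admin-scoped;
        }
    }
}
```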
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1520
Description | 1520
Options | 1520
Syntax
group-policy [ policy-names ];
Hierarchy Level
Description
When this statement is enabled on the Automatic Multicast Tunneling (AMT) interfaces running IGMP
version 2 (IGMPv2) or version 3 (IGMPv3), after the router receives an IGMP report, the router
compares the group against the specified group policy and performs the action configured in that policy
(for example, rejects the report).
Options
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1521
Description | 1522
Syntax
group-policy [ policy-names ];
Hierarchy Level
Description
When a routing device running MLD version 1 or version 2 (MLDv1 or MLDv2) receives an MLD report,
the routing device compares the group against the specified group policy and performs the action
configured in that policy (for example, rejects the report).
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1523
Description | 1523
Options | 1523
Syntax
group-range multicast-prefix;
Hierarchy Level
Description
Establish the group range to use for data MDTs created in this VRF instance. Only IPv4 addresses are valid
for the group range. This address range cannot overlap the default MDT addresses of any other VPNs on the
router, nor can the group range specified under the inet and inet6 hierarchies overlap. If you configure
overlapping group ranges, the configuration commit fails. Up to 8000 MDT group ranges are supported
for IPv4 and IPv6.
Options
• Default: None (No data MDTs are created for this VRF instance.)
Release Information
Statement introduced before Junos OS Release 7.4. In Junos OS Release 17.3R1, the mdt hierarchy was
moved from provider-tunnel to the provider-tunnel family inet and provider-tunnel family inet6
hierarchies as part of an upgrade to add IPv6 support for default MDT in Rosen 7, and data MDT for
Rosen 6 and Rosen 7. The provider-tunnel mdt hierarchy is now hidden for backward compatibility with
existing scripts.
RELATED DOCUMENTATION
Example: Configuring Data MDTs and Provider Tunnels Operating in Source-Specific Multicast
Mode | 696
Example: Configuring Data MDTs and Provider Tunnels Operating in Any-Source Multicast Mode |
690
IN THIS SECTION
Syntax | 1524
Description | 1525
Options | 1525
Syntax
group-range multicast-prefix;
Hierarchy Level
Description
Establish the multicast group address range to use for creating MBGP MVPN source-specific multicast
selective PMSI tunnels.
Options
• Default: None
Release Information
group-ranges
IN THIS SECTION
Syntax | 1526
Description | 1527
Default | 1527
Options | 1527
Syntax
group-ranges {
destination-ip-prefix</prefix-length>;
}
Hierarchy Level
Description
Configure the address ranges of the multicast groups for which this routing device can be a rendezvous
point (RP).
Default
The routing device is eligible to be the RP for all IPv4 or IPv6 groups (224.0.0.0/4 or FF70::/12 to
FFF0::/12).
Options
Release Information
RELATED DOCUMENTATION
group-rp-mapping
IN THIS SECTION
Syntax | 1528
Description | 1529
Options | 1529
Syntax
group-rp-mapping {
family (inet | inet6) {
log-interval seconds;
maximum limit;
threshold value;
}
log-interval seconds;
maximum limit;
threshold value;
}
Hierarchy Level
Description
NOTE: The maximum limit settings that you configure with the maximum and the family (inet |
inet6) maximum statements are mutually exclusive. For example, if you configure a global
maximum group-to-RP mapping limit, you cannot configure a limit at the family level for IPv4 or
IPv6. If you attempt to configure a limit at both the global level and the family level, the device
will not accept the configuration.
Options
family (inet | inet6)—(Optional) Specify either IPv4 or IPv6 messages to be counted towards the
configured group-to-RP mapping limit.
• Default: Both IPv4 and IPv6 messages are counted towards the configured group-to-RP limit.
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1530
Description | 1530
Default | 1531
Options | 1531
Syntax
group-threshold value;
Hierarchy Level
Description
Specify the threshold at which a warning message is logged for the multicast groups received on a
logical interface. The threshold is a percentage of the maximum number of multicast groups allowed on
a logical interface.
For example, if you configure a maximum number of 1,000 incoming multicast groups, and you configure
a threshold value of 90 percent, warning messages are logged in the system log when the interface
receives 900 groups.
To confirm the configured group threshold on the interface, use the show igmp interface command.
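Following the 1,000-group example above, a sketch pairing group-limit with group-threshold so that warnings begin at 900 groups:

```
protocols {
    igmp {
        interface ge-0/0/0.0 {
            group-limit 1000;
            group-threshold 90;
        }
    }
}
```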
Default
Options
value—Percentage of the group-limit value at which warning messages are triggered. You must
explicitly configure group-limit before you can configure a threshold value.
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1532
Description | 1532
Default | 1532
Options | 1532
Syntax
group-threshold value;
Hierarchy Level
Description
Specify the threshold at which a warning message is logged for the multicast groups received on a
logical interface. The threshold is a percentage of the maximum number of multicast groups allowed on
a logical interface.
For example, if you configure a maximum number of 1,000 incoming multicast groups, and you configure
a threshold value of 90 percent, warning messages are logged in the system log when the interface
receives 900 groups.
To confirm the configured group threshold on the interface, use the show mld interface command.
Default
Options
value—Percentage of the group-limit value at which warning messages are triggered. You must
explicitly configure group-limit before you can configure a threshold value.
Release Information
RELATED DOCUMENTATION
hello-interval
IN THIS SECTION
Syntax | 1533
Description | 1534
Options | 1534
Syntax
hello-interval seconds;
Hierarchy Level
Description
Specify how often the routing device sends PIM hello packets out of an interface.
Options
• Default: 30 seconds
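A one-line sketch raising the hello interval on a PIM interface (the interface name is an example):

```
protocols {
    pim {
        interface ge-0/0/0.0 {
            hello-interval 45;
        }
    }
}
```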
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1535
Description | 1535
Options | 1535
Syntax
hold-time seconds;
Hierarchy Level
Description
Specify the time period for which a neighbor is to consider the sending router (this router) to be
operative (up).
Options
seconds—Hold time.
• Default: 35 seconds
Release Information
NOTE: Distance Vector Multicast Routing Protocol (DVMRP) was deprecated in Junos OS
Release 16.1. Although DVMRP commands continue to be available and configurable in the CLI,
they are no longer visible and are scheduled for removal in a subsequent release.
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1537
Description | 1537
Default | 1537
Options | 1538
Syntax
hold-time seconds;
Hierarchy Level
Description
Specify the hold-time period to use when maintaining a connection with the MSDP peer. If a keepalive
message is not received for the hold-time period, the MSDP peer connection is terminated. According to
the RFC 3618, Multicast Source Discovery Protocol (MSDP), the recommended value for the hold-time
period is 75 seconds.
You might want to change the hold-time period and keepalive timer for consistency in a multi-vendor
environment.
Default
In Junos OS, the default hold-time period is 75 seconds, and the default keepalive interval is 60 seconds.
Options
seconds—Hold time.
• Default: 75 seconds
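A sketch setting a 90-second hold time for all MSDP peers at the global level:

```
protocols {
    msdp {
        hold-time 90;
    }
}
```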
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1539
Description | 1539
Options | 1539
Syntax
hold-time seconds;
Hierarchy Level
Description
Specify the time period for which a neighbor is to consider the sending routing device (this routing
device) to be operative (up).
Options
seconds—Hold time.
Release Information
RELATED DOCUMENTATION
host-only-interface
IN THIS SECTION
Syntax | 1540
Description | 1541
Default | 1541
Syntax
host-only-interface;
Hierarchy Level
Description
Configure an interface as a host-facing interface. IGMP queries received on these interfaces are
dropped.
Default
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1542
Description | 1542
Options | 1542
Syntax
host-outbound-traffic {
forwarding-class class-name;
dot1p number;
}
Hierarchy Level
[edit multicast-snooping-options],
[edit bridge-domains bridge-domain-name multicast-snooping-options],
[edit routing-instances routing-instance-name multicast-snooping-options],
[edit routing-instances routing-instance-name bridge-domains bridge-domain-name]
Description
On an MX Series router in a network enabled for CET service and IGMP snooping, configure the
multicast forwarding class and the IEEE 802.1p rewrite value for self-generated IGMP packets.
Options
• Range: 0 through 7
• Default: 0
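A sketch assigning self-generated IGMP packets to a forwarding class with an 802.1p value of 6 (the forwarding-class name network-control is an assumption):

```
multicast-snooping-options {
    host-outbound-traffic {
        forwarding-class network-control;
        dot1p 6;
    }
}
```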
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1543
Description | 1544
Syntax
hot-root-standby {
min-rate <rate>;
source-tree;
}
Hierarchy Level
Description
In a BGP multicast VPN (MVPN) with either RSVP-TE point-to-multipoint or MLDP point-to-multipoint
provider tunnels, configure hot-root standby, as defined in Multicast VPN fast upstream failover, draft-
morin-l3vpn-mvpn-fast-failover-05.
Starting in Junos OS Release 21.1R1, you can configure an MLDP point-to-multipoint provider tunnel on
MX Series routers.
Hot-root standby enables an egress PE router to select two upstream PE routers for an (S,G) and send C-
multicast joins to both the PE routers. Multiple ingress PE routers then receive traffic from the source
and forward it into the core. The egress PE router uses sender-based RPF to forward only the stream
received from the primary upstream PE router.
When hot-root-standby is configured, based on local policy, as soon as the PE router receives this
standby BGP customer multicast route, the PE can install the VRF PIM state corresponding to this BGP
source-tree join route. The result is that join messages are sent to the CE device toward the customer
source (C-S), and the PE router receives (C-S, C-G) traffic. Also, based on local policy, as soon as the PE
router receives this standby BGP customer multicast route, the PE router can forward (C-S, C-G) traffic
to other PE routers through a P-tunnel independently of the reachability of the C-S through some other
PE router.
The receivers must join the source tree (SPT) to establish a hot-root standby. Customer multicast join
messages continue to be sent to a single upstream provider edge (PE) router for shared-tree state, and
duplicate data does not flow through the core in this case.
Section 4 of Draft Morin specifies that hot-root standby is limited to the case where the site that
contains the C-S is connected to exactly two PE routers. In the case that there are more than two PE
routers multihomed to the source, the backup PE router is the PE router chosen with the highest IP
address (not including the primary upstream PE router). This is a local decision that is not specified in the
specification.
There is no limitation in Junos OS on which upstream multicast hop (UMH) selection method is used.
For example, you can use static-umh (MBGP MVPN) or unicast-umh-election.
Hot-root standby is supported for RSVP point-to-multipoint and mLDP point-to-multipoint provider
tunnels. Other provider tunnels are not supported. A commit error results if hot-root-standby is
configured and the provider-tunnel is not either RSVP point-to-multipoint or mLDP point-to-multipoint.
Fast failover (sub-50-ms) is supported for C-multicast streams within NG-MVPNs in hot-root-standby mode.
The threshold to trigger fast failover must be set. See "min-rate" on page 1661 for information on fast
failover.
When you configure hot-root-standby on MPC10 or MPC11 line cards, the failover process can take up to
150 milliseconds.
Cold-root standby and warm-root standby, as specified in draft Morin, are not supported.
The backup attribute is not sent in the customer multicast routes, because it is needed only for
warm-root and cold-root standby.
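As a minimal sketch, hot-root standby might be enabled in an MVPN routing instance as follows. The instance name, tunnel type, and threshold value are placeholders, and the placement of min-rate under hot-root-standby is an assumption based on the fast-failover note above:

```
set routing-instances VPN-A provider-tunnel ldp-p2mp
set routing-instances VPN-A protocols mvpn hot-root-standby min-rate 10
```

The provider tunnel must be RSVP point-to-multipoint or mLDP point-to-multipoint; any other tunnel type causes a commit error.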
Release Information
Support for the MLDP point-to-multipoint provider tunnel was introduced in Junos OS Release 21.1R1 for
MX Series routers.
RELATED DOCUMENTATION
idle-standby-path-switchover-delay
IN THIS SECTION
Syntax | 1546
Description | 1546
Options | 1546
Syntax
idle-standby-path-switchover-delay <seconds>;
Hierarchy Level
Description
Configure the time interval after which an ECMP join is moved to the standby path in the absence of
traffic on the path.
In the absence of this statement, ECMP joins are not moved to the standby path until traffic is detected
on the path.
Options
seconds—Time interval after which an ECMP join is moved to the standby RPF path in the absence of
traffic on the path.
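For example, to move idle ECMP joins to the standby path after 30 seconds. The hierarchy level is not shown above; [edit protocols pim] is an assumption here, and the value is a placeholder:

```
set protocols pim idle-standby-path-switchover-delay 30
```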
Release Information
RELATED DOCUMENTATION
igmp
IN THIS SECTION
Syntax | 1547
Description | 1549
Default | 1549
Syntax
igmp {
accounting;
interface interface-name {
(accounting | no-accounting);
disable;
distributed;
group-limit limit;
group-policy [ policy-names ];
group-threshold
immediate-leave;
log-interval
oif-map map-name;
passive;
promiscuous-mode;
ssm-map ssm-map-name;
ssm-map-policy ssm-map-policy-name;
static {
group multicast-group-address {
exclude;
group-count number;
group-increment increment;
source ip-address {
source-count number;
source-increment increment;
}
}
}
version version;
}
query-interval seconds;
query-last-member-interval seconds;
query-response-interval seconds;
robust-count number;
traceoptions {
file filename <files number> <size size> <world-readable | no-world-readable>;
flag flag <flag-modifier> <disable>;
}
}
Hierarchy Level
Description
Enable IGMP on the router or switch. IGMP must be enabled for the router or switch to receive
multicast packets.
Default
IGMP is disabled on the router or switch. IGMP is automatically enabled on all broadcast interfaces
when you configure Protocol Independent Multicast (PIM) or Distance Vector Multicast Routing
Protocol (DVMRP).
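For example, a sketch that enables IGMP explicitly on one interface and adjusts the query interval; the interface name and values are placeholders:

```
set protocols igmp interface ge-0/0/0.0 version 3
set protocols igmp query-interval 125
```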
Release Information
RELATED DOCUMENTATION
Enabling IGMP | 31
IN THIS SECTION
Syntax | 1550
Description | 1550
Options | 1550
Syntax
igmp-querier {
source-address source-address;
}
Hierarchy Level
Description
Configure a QFabric Node device to be an IGMP querier. If there are any multicast routers on the same
local network, make sure that the source address for the IGMP querier is lower (a smaller number) than
the IP addresses of those routers. This ensures that the Node device is always the IGMP querier on
the network.
Options
source-address source-address—The address that the switch uses as the source address in the IGMP queries that it sends.
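A sketch of the statement in use; the VLAN name and address are placeholders, and the [edit protocols igmp-snooping vlan vlan-name] hierarchy is an assumption. The low source address helps ensure that the Node device wins the querier election:

```
set protocols igmp-snooping vlan v10 igmp-querier source-address 10.0.0.1
```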
Release Information
RELATED DOCUMENTATION
igmp-snooping
IN THIS SECTION
Description | 1555
Default | 1556
Options | 1556
igmp-snooping {
traceoptions {
file filename <files number> <size size> <world-readable | no-world-readable> <match regex>;
flag flag (detail | disable | receive | send);
}
vlan (vlan-name | all) {
data-forwarding {
receiver {
install;
mode (proxy | transparent);
igmp-snooping {
evpn-ssm-reports-only;
immediate-leave;
interface interface-name {
group-limit limit;
host-only-interface;
immediate-leave;
multicast-router-interface;
static {
group ip-address {
source ip-address;
}
}
}
proxy {
source-address ip-address;
}
query-interval seconds;
query-last-member-interval seconds;
query-response-interval seconds;
robust-count number;
vlan vlan-id {
immediate-leave;
interface interface-name {
group-limit limit;
host-only-interface;
immediate-leave;
multicast-router-interface;
static {
group ip-address {
source ip-address;
}
}
}
proxy {
source-address ip-address;
}
query-interval seconds;
query-last-member-interval seconds;
query-response-interval seconds;
robust-count number;
}
}
igmp-snooping {
traceoptions {
igmp-snooping {
vlan (all | vlan-name) {
immediate-leave;
interface interface-name {
group-limit range;
host-only-interface;
multicast-router-interface;
immediate-leave;
static {
group multicast-ip-address {
source ip-address;
}
}
}
l2-querier {
source-address ip-address;
}
proxy {
source-address ip-address;
}
qualified-vlan vlan-id;
query-interval number;
query-last-member-interval number;
query-response-interval number;
robust-count number;
traceoptions {
file filename <files number> <size size> <world-readable | no-world-readable>;
flag flag <flag-modifier>;
}
}
}
Hierarchy Level
Description
Configure IGMP snooping, which constrains multicast traffic to only the ports that have receivers
attached.
IGMP snooping enables the device to selectively send out multicast packets on only the ports that need
them. Without IGMP snooping, the device floods the packets on every port. The device listens to the
IGMP messages exchanged between multicast routers and end hosts, and in this way builds an IGMP
snooping table that lists all the ports that have requested a particular multicast group.
You can also configure IGMP proxy, IGMP querier, and multicast VLAN registration (MVR) functions on
VLANs at this hierarchy level.
NOTE: IGMP snooping must be disabled on the device before running an ISSU operation.
Default
For most devices, IGMP snooping is disabled on the device by default, and you must configure IGMP
snooping parameters in this statement hierarchy to enable it on one or more VLANs.
On legacy switches that do not support the Enhanced Layer 2 Software (ELS) configuration style, IGMP
snooping is enabled by default on all VLANs, and the vlan statement includes a disable option if you
want to disable IGMP snooping selectively on some VLANs or disable it on all VLANs.
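For example, on an ELS device, the following sketch enables IGMP snooping on one VLAN, marks one interface as a static multicast-router port, and adds a static group member; the VLAN name, interface names, and group address are placeholders:

```
set protocols igmp-snooping vlan v100
set protocols igmp-snooping vlan v100 interface ge-0/0/5.0 multicast-router-interface
set protocols igmp-snooping vlan v100 interface ge-0/0/7.0 static group 233.252.0.1
```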
Options
Release Information
RELATED DOCUMENTATION
igmp-snooping-options
IN THIS SECTION
Syntax | 1557
Description | 1557
Options | 1557
Syntax
igmp-snooping-options {
snoop-pseudowires
use-p2mp-lsp
}
Hierarchy Level
Description
Support the use-p2mp-lsp and snoop-pseudowires options for independent routing instances and for
routing instances in a logical system.
Options
Release Information
RELATED DOCUMENTATION
instance-type
Example: Configuring IGMP Snooping | 0
ignore-stp-topology-change
IN THIS SECTION
Syntax | 1558
Description | 1559
Syntax
ignore-stp-topology-change;
Hierarchy Level
Description
Ignore messages about spanning tree topology changes. This statement is supported for the virtual-
switch routing instance type only.
Release Information
RELATED DOCUMENTATION
immediate-leave
IN THIS SECTION
Syntax | 1560
Description | 1560
Default | 1561
Syntax
immediate-leave;
Hierarchy Level
Description
Enable host tracking to allow the device to track the hosts that send membership reports, determine
when the last host sends a leave message for the multicast group, and immediately stop forwarding
traffic for the multicast group after the last host leaves the group. This setting helps to minimize IGMP
or MLD membership leave latency—it reduces the amount of time it takes for the switch to stop sending
multicast traffic to an interface when the last host leaves the group.
NOTE: EVPN-VXLAN multicast uses special IGMP group leave processing to handle multihomed
sources and receivers, so we don’t support the immediate-leave option in EVPN-VXLAN
networks.
IGMPv2, IGMPv3, MLDv1, and MLDv2 all have immediate leave disabled by default. In this state, the
device does not track host memberships. When the device receives a leave report from a host, it sends
out a group-specific query to all hosts. If no receiver responds with a membership report within a set
interval, the device removes all hosts on the interface from the multicast group and stops forwarding
multicast traffic to the interface.
With immediate leave enabled, the device removes an interface from the forwarding-table entry
immediately without first sending IGMP group-specific queries out of the interface and waiting for a
response. The device prunes the interface from the multicast tree for the multicast group specified in
the IGMP leave message. The immediate leave setting ensures optimal bandwidth management for
hosts on a switched network, even when multiple multicast groups are active simultaneously.
Immediate leave is supported for IGMPv2, IGMPv3, MLDv1 and MLDv2 on devices that support these
protocols.
NOTE: We recommend that you configure immediate leave with IGMPv2 and MLDv1 only when
there is only one host on an interface. With IGMPv2 and MLDv1, only one host on an interface
sends a membership report in response to a general query—any other interested hosts suppress
their reports. Report suppression avoids a flood of reports for the same group, but it also
interferes with host tracking because the device knows only about one interested host on the
interface at any given time.
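For example, to enable immediate leave for IGMP on a port that has a single directly attached host; the interface name is a placeholder:

```
set protocols igmp interface ge-0/0/0.0 immediate-leave
```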
Default
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1562
Description | 1563
Options | 1563
Syntax
import [ policy-names ];
Hierarchy Level
Description
Apply one or more policies to routes being imported into the routing table from DVMRP. If you specify
more than one policy, they are evaluated in the order specified, from first to last, and the first matching
policy is applied to the route. If no match is found, DVMRP shares with the routing table only those
routes that were learned from DVMRP routers.
Options
Release Information
NOTE: Distance Vector Multicast Routing Protocol (DVMRP) was deprecated in Junos OS
Release 16.1. Although DVMRP commands continue to be available and configurable in the CLI,
they are no longer visible and are scheduled for removal in a subsequent release.
RELATED DOCUMENTATION
export
Example: Configuring DVMRP to Announce Unicast Routes | 605
IN THIS SECTION
Syntax | 1564
Description | 1565
Options | 1565
Syntax
import [ policy-names ];
Hierarchy Level
Description
Apply one or more policies to routes being imported into the routing table from MSDP.
Options
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1566
Description | 1566
Options | 1566
Syntax
import [ policy-names ];
Hierarchy Level
Description
Apply one or more policies to routes being imported into the routing table from PIM. Use the import
statement to filter PIM join messages and prevent them from entering the network.
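A sketch of a join filter: the policy below rejects PIM join messages for one group range and is then applied with the import statement. The policy name and group range are placeholders:

```
set policy-options policy-statement block-joins term t1 from route-filter 233.252.0.0/24 orlonger
set policy-options policy-statement block-joins term t1 then reject
set protocols pim import block-joins
```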
Options
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1567
Description | 1567
Options | 1568
Syntax
import [ policy-names ];
Hierarchy Level
Description
Apply one or more import policies to control incoming PIM bootstrap messages.
Options
Release Information
RELATED DOCUMENTATION
import-target
IN THIS SECTION
Syntax | 1569
Description | 1569
Options | 1569
Syntax
import-target {
target {
target-value;
receiver target-value;
sender target-value;
}
unicast {
receiver;
sender;
}
}
Hierarchy Level
Description
Override the Layer 3 VPN import and export route targets used for importing and exporting routes for
the MBGP MVPN NLRI.
Options
Release Information
inclusive
IN THIS SECTION
Syntax | 1570
Description | 1570
Syntax
inclusive;
Hierarchy Level
Description
For Rosen 7, enable the MVPN control plane for autodiscovery only, using intra-AS autodiscovery routes
over an inclusive provider multicast service interface (PMSI).
Release Information
Statement moved to [..protocols mvpn family inet] from [.. protocols mvpn] in Junos OS Release 13.3.
RELATED DOCUMENTATION
infinity
IN THIS SECTION
Syntax | 1571
Description | 1572
Options | 1572
Syntax
infinity [ policy-names ];
Hierarchy Level
Description
Apply one or more policies to set the SPT threshold to infinity for a source-group address pair. Use the
infinity statement to prevent the last-hop routing device from transitioning from the RPT rooted at the
RP to an SPT rooted at the source for that source-group address pair.
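For example, a sketch that keeps traffic for one group on the RP tree by matching it in a policy applied with the infinity statement; the [edit protocols pim spt-threshold] hierarchy is assumed, and the policy name and group address are placeholders:

```
set policy-options policy-statement stay-on-rpt term t1 from route-filter 233.252.0.1/32 exact
set policy-options policy-statement stay-on-rpt term t1 then accept
set protocols pim spt-threshold infinity stay-on-rpt
```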
Options
Release Information
RELATED DOCUMENTATION
ingress-replication
IN THIS SECTION
Syntax | 1573
Description | 1573
Options | 1574
Syntax
ingress-replication {
create-new-ucast-tunnel;
label-switched-path {
label-switched-path-template {
(template-name | default-template);
}
}
}
Hierarchy Level
Description
A provider tunnel type used for passing multicast traffic between routers through the MPLS cloud, or
between PE routers when using MVPN. The ingress replication provider tunnel uses MPLS point-to-
point LSPs to create the multicast distribution tree.
Optionally, you can specify a label-switched path template. If you configure ingress-replication label-
switched-path and do not include label-switched-path-template, ingress replication works with existing
LDP or RSVP tunnels. If you include label-switched-path-template, the tunnels must be RSVP.
Options
create-new-ucast-tunnel—When specified, a new unicast tunnel to the destination is created and used
for ingress replication. The unicast tunnel is deleted later if the destination is no longer included in the
multicast distribution tree.
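A sketch that configures an ingress replication provider tunnel in an MVPN routing instance, creating new unicast tunnels and using the default LSP template; the instance name is a placeholder:

```
set routing-instances VPN-A provider-tunnel ingress-replication create-new-ucast-tunnel
set routing-instances VPN-A provider-tunnel ingress-replication label-switched-path label-switched-path-template default-template
```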
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1575
Description | 1575
Syntax
inet {
anycast-prefix ip-prefix/<prefix-length>;
local-address ip-address;
}
Hierarchy Level
Description
Specify the IPv4 local address and anycast prefix for Automatic Multicast Tunneling (AMT) relay
functions.
Release Information
RELATED DOCUMENTATION
inet-mdt
IN THIS SECTION
Syntax | 1576
Description | 1576
Syntax
inet-mdt;
Hierarchy Level
Description
For Rosen 7, configure the PE router in a VPN to use an SSM multicast distribution tree (MDT)
subsequent address family identifier (SAFI) NLRI.
Release Information
Statement moved to [..protocols pim mvpn family inet] from [.. protocols mvpn] in Junos OS Release
13.3.
RELATED DOCUMENTATION
inet-mvpn (BGP)
IN THIS SECTION
Syntax | 1577
Description | 1578
Syntax
inet-mvpn {
signaling {
accepted-prefix-limit {
maximum number;
teardown percentage {
idle-timeout (forever | minutes);
}
}
damping;
loops number;
prefix-limit {
maximum number;
teardown percentage {
idle-timeout (forever | minutes);
}
}
}
}
Hierarchy Level
Description
Release Information
IN THIS SECTION
Syntax | 1579
Description | 1579
Syntax
inet-mvpn;
Hierarchy Level
Description
Release Information
RELATED DOCUMENTATION
inet6-mvpn (BGP)
IN THIS SECTION
Syntax | 1580
Description | 1581
Syntax
inet6-mvpn {
signaling {
accepted-prefix-limit {
maximum number;
teardown percentage {
idle-timeout (forever | minutes);
}
}
loops number
prefix-limit {
maximum number;
teardown percentage {
idle-timeout (forever | minutes);
}
}
}
}
Hierarchy Level
Description
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1582
Description | 1582
Syntax
inet6-mvpn;
Hierarchy Level
Description
Release Information
IN THIS SECTION
Syntax | 1583
Description | 1583
Options | 1583
Syntax
interface interface-name {
group-limit limit;
host-only-interface;
static {
group ip-address {
source ip-address;
}
}
}
Hierarchy Level
Description
Options
interface-name—Name of the interface. Specify the full interface name, including the physical and logical
address components. To configure all interfaces, you can specify all.
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1584
Description | 1585
Options | 1585
Syntax
interface interface-name {
group-limit limit;
host-only-interface;
immediate-leave;
multicast-router-interface;
static {
group multicast-group-address {
source ip-address;
}
}
}
Hierarchy Level
Description
For IGMP snooping, configure an interface as either a multicast-router interface or as a static member of
a multicast group with optional interface-specific properties.
Options
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1586
Description | 1586
Options | 1587
Syntax
Hierarchy Level
Description
For MLD snooping, configure an interface as a static multicast-router interface, a host-side interface, or
a static member of a multicast group.
Options
all—(All EX Series switches except EX9200) All interfaces in the VLAN.
Release Information
Support for the group-limit, host-only-interface, and immediate-leave statements introduced in
Junos OS Release 13.3 for EX Series switches.
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1588
Description | 1588
Options | 1588
Syntax
interface interface-name {
disable;
hold-time seconds;
metric metric;
mode (forwarding | unicast-routing);
}
Hierarchy Level
Description
Options
interface-name—Name of the interface. Specify the full interface name, including the physical and logical
address components. To configure all interfaces, you can specify all.
Release Information
NOTE: Distance Vector Multicast Routing Protocol (DVMRP) was deprecated in Junos OS
Release 16.1. Although DVMRP commands continue to be available and configurable in the CLI,
they are no longer visible and are scheduled for removal in a subsequent release.
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1589
Description | 1590
Options | 1590
Syntax
interface interface-name {
(accounting | no-accounting);
disable;
distributed;
group-limit limit;
group-policy [ policy-names ];
immediate-leave;
oif-map map-name;
passive;
promiscuous-mode;
ssm-map ssm-map-name;
ssm-map-policy ssm-map-policy-name;
static {
group multicast-group-address {
exclude;
group-count number;
group-increment increment;
source ip-address {
source-count number;
source-increment increment;
}
}
}
version version;
}
Hierarchy Level
Description
Options
interface-name—Name of the interface. Specify the full interface name, including the physical and logical
address components. To configure all interfaces, you can specify all.
Release Information
RELATED DOCUMENTATION
Enabling IGMP | 31
IN THIS SECTION
Syntax | 1591
Description | 1592
Options | 1592
Syntax
interface interface-name {
(accounting | no-accounting);
disable;
distributed;
group-limit limit;
group-policy [ policy-names ];
group-threshold value;
immediate-leave;
log-interval seconds;
oif-map [ map-names ];
passive;
ssm-map ssm-map-name;
ssm-map-policy ssm-map-policy-name;
static {
group multicast-group-address {
exclude;
group-count number
group-increment increment
source ip-address {
source-count number;
source-increment increment;
}
}
}
version version;
}
Hierarchy Level
Description
Options
interface-name—Name of the interface. Specify the full interface name, including the physical and logical
address components. To configure all interfaces, you can specify all.
Release Information
RELATED DOCUMENTATION
Enabling MLD | 65
interface
IN THIS SECTION
Syntax | 1593
Description | 1595
Options | 1595
Syntax
detection-time {
threshold milliseconds;
}
minimum-interval milliseconds;
minimum-receive-interval milliseconds;
multiplier number;
no-adaptation;
transmit-interval {
minimum-interval milliseconds;
threshold milliseconds;
}
version (0 | 1 | automatic);
}
bidirectional {
df-election {
backoff-period milliseconds;
offer-period milliseconds;
robustness-count number;
}
}
family (inet | inet6) {
disable;
}
hello-interval seconds;
mode (bidirectional-sparse | bidirectional-sparse-dense | dense | sparse |
sparse-dense);
neighbor-policy [ policy-names ];
override-interval milliseconds;
priority number;
propagation-delay milliseconds;
reset-tracking-bit;
version version;
}
Hierarchy Level
Description
Options
interface-name—Name of the interface. Specify the full interface name, including the physical and logical
address components. To configure all interfaces, you can specify all.
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1596
Description | 1596
Options | 1596
Syntax
interface interface-names {
maximum-bandwidth bps;
no-qos-adjust;
reverse-oif-mapping {
no-qos-adjust;
}
subscriber-leave-timer seconds;
}
Hierarchy Level
Description
TIP: You cannot enable multicast traffic on an interface by using the routing-options multicast
interface statement and configure PIM on the interface.
Options
Release Information
RELATED DOCUMENTATION
interface (Scoping)
IN THIS SECTION
Syntax | 1597
Description | 1598
Options | 1598
Syntax
interface [ interface-names ];
Hierarchy Level
Description
Options
interface-names—Names of the interfaces to scope. Specify the full interface name, including the
physical and logical address components. To configure all interfaces, you can specify all.
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1599
Description | 1599
Options | 1600
Syntax
interface vt-fpc/pic/port.unit-number {
multicast;
primary;
unicast;
}
Hierarchy Level
Description
In a multiprotocol BGP (MBGP) multicast VPN (MVPN), configure a virtual tunnel (VT) interface.
VT interfaces are needed for multicast traffic on routing devices that function as combined provider
edge (PE) and provider core (P) routers to optimize bandwidth usage on core links. VT interfaces prevent
traffic replication when a P router also acts as a PE router (an exit point for multicast traffic).
In an MBGP MVPN extranet, if there is more than one VRF routing instance on a PE router that has
receivers interested in receiving multicast traffic from the same source, VT interfaces must be configured
on all instances.
Starting in Junos OS Release 12.3, you can configure multiple VT interfaces in each routing instance.
This provides redundancy. A VT interface can be used in only one routing instance.
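For example, a sketch that creates a VT interface and adds it to an MVPN routing instance for multicast use; the FPC/PIC/port numbers and instance name are placeholders:

```
set interfaces vt-0/16/0 unit 0 family inet
set routing-instances VPN-A interface vt-0/16/0.0 multicast
```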
Options
Release Information
RELATED DOCUMENTATION
interface-name
IN THIS SECTION
Syntax | 1601
Description | 1601
Options | 1601
Syntax
interface-name interface-name;
Hierarchy Level
Description
Specify the primary loopback address configured in the default routing instance to use as the source
address when PIM hello messages, join messages, and prune messages are sent over multicast tunnel
interfaces for interoperability with other vendors’ routers.
Options
interface-name—Primary loopback address configured in the default routing instance to use as the
source address when PIM control messages are sent. Typically, the lo0.0 interface is specified for this
purpose.
Release Information
interval
IN THIS SECTION
Syntax | 1602
Description | 1602
Options | 1602
Syntax
interval milliseconds;
Hierarchy Level
Description
Specify the interval between triggered join messages that are sent to PIM neighbors through the PIM interface.
Options
• Default: 100
Release Information
RELATED DOCUMENTATION
interface | 1593
multiple-triggered-joins | 1710
IN THIS SECTION
Syntax | 1603
Description | 1604
Options | 1604
Syntax
inter-as {
ingress-replication {
create-new-ucast-tunnel;
label-switched-path-template {
(default-template | lsp-template-name);
}
}
inter-region-segmented {
fan-out number;
threshold kilobits;
}
ldp-p2mp;
rsvp-te {
label-switched-path-template {
(default-template | lsp-template-name);
}
}
}
Hierarchy Level
Description
These statements add Junos OS support for segmented RSVP-TE provider tunnels with next-generation
Layer 3 multicast VPNs (MVPNs), that is, Inter-AS Option B. Inter-AS (autonomous system) support is
required when an L3VPN spans multiple ASs, which can be under the same or different administrative
authority (such as in an inter-provider scenario). Provider-tunnel (p-tunnel) segmentation occurs at the
autonomous system border routers (ASBRs). The ASBRs are actively involved in BGP-MVPN signaling as
well as data-plane setup.
In addition to creating the intra-AS p-tunnel segment, these inter-AS configurations are also used by
ASBRs to originate the inter-AS autodiscovery (AD) route into external BGP (EBGP).
Options
inter-region-segmented—Select whether inter-region segmented LSPs are triggered by threshold rate, fan-out, or both. Inter-region segmented LSPs are supported for PIM-SSM and PIM-ASM groups; inter-region-segmented cannot be set for PIM provider tunnels.
• Choose fan-out and then specify the number (from 1 through 10,000) of remote Leaf-AD
routes to use as a trigger point for segmentation.
• Choose threshold and then specify a data threshold rate (from 0 through 1,000,000
kilobits per second) to use as a trigger point for segmentation.
ldp-p2mp—Select to use an LDP point-to-multipoint LSP for flooding; LDP P2MP must be configured
in the master routing instance.
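A sketch of segmented inter-AS provider tunnels using LDP point-to-multipoint for the intra-AS segment, with a data-rate trigger for inter-region segmentation. The instance name and threshold value are placeholders, and LDP P2MP is assumed to already be enabled in the master routing instance:

```
set routing-instances VPN-A provider-tunnel inter-as ldp-p2mp
set routing-instances VPN-A provider-tunnel inter-as inter-region-segmented threshold 100
```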
Release Information
RELATED DOCUMENTATION
intra-as
IN THIS SECTION
Syntax | 1606
Description | 1606
Syntax
intra-as {
inclusive;
}
Hierarchy Level
Description
For Rosen 7, enable the MVPN control plane for autodiscovery only, using intra-AS autodiscovery
routes.
Release Information
Statement moved to [..protocols mvpn family inet] from [.. protocols mvpn] in Junos OS Release 13.3.
RELATED DOCUMENTATION
join-load-balance
IN THIS SECTION
Syntax | 1607
Description | 1607
Options | 1608
Syntax
join-load-balance {
automatic;
}
Hierarchy Level
Description
Enable load balancing of PIM join messages across interfaces and routing devices.
Options
automatic Enables automatic load balancing of PIM join messages. When a new interface or neighbor
is introduced into the network, ECMP joins are redistributed with minimal disruption to
traffic.
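For example, to enable automatic load balancing of PIM join messages:

```
set protocols pim join-load-balance automatic
```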
Release Information
RELATED DOCUMENTATION
join-prune-timeout
IN THIS SECTION
Syntax | 1609
Description | 1609
Options | 1609
Syntax
join-prune-timeout seconds;
Hierarchy Level
Description
Configure the timeout for the join state. If the periodic join refresh message is not received before the
timeout expires, the join state is removed.
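For example, a sketch that sets the join-state timeout to 210 seconds; the [edit protocols pim] hierarchy is assumed, and the value is a placeholder:

```
set protocols pim join-prune-timeout 210
```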
Options
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1610
Description | 1611
Default | 1611
Options | 1611
Syntax
keep-alive seconds;
Hierarchy Level
address]
[edit routing-instances instance-name protocols msdp peer address],
Description
Specify the keepalive interval used to maintain a connection with the MSDP peer. If a keepalive
message is not received within the hold-time period, the MSDP peer connection is terminated. According to
RFC 3618, Multicast Source Discovery Protocol (MSDP), the recommended value for the keepalive
timer is 60 seconds.
You might want to change the keepalive interval and hold-time period for consistency in a multivendor
environment.
Default
In Junos OS, the default hold-time period is 75 seconds, and the default keepalive interval is 60 seconds.
Options
seconds—Keepalive interval.
• Default: 60 seconds
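A sketch that pairs a shorter keepalive interval with a larger hold time for an MSDP peer, as might be needed in a multivendor environment; the peer address and timer values are placeholders, and the availability of a hold-time statement at the same peer hierarchy is an assumption:

```
set protocols msdp peer 192.0.2.2 keep-alive 30
set protocols msdp peer 192.0.2.2 hold-time 90
```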
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1612
Description | 1612
Options | 1613
Syntax
key-chain key-chain-name;
Hierarchy Level
Description
Options
key-chain-name—Name of the security keychain to use for BFD authentication. The name is a unique
integer between 0 and 63. This must match one of the keychains in the authentication-key-chains
statement at the [edit security] hierarchy level.
Release Information
Statement modified in Junos OS Release 12.2 to include family in the hierarchy level.
RELATED DOCUMENTATION
l2-querier
IN THIS SECTION
Syntax | 1614
Description | 1614
Options | 1614
Syntax
l2-querier {
source-address ip-address;
}
Hierarchy Level
Description
Configure the device to be an IGMP querier. IGMP querier enables the device to act as a proxy for a
multicast router and send out periodic IGMP queries in the network. This action causes the other devices
in the network to consider it a multicast router and to define their respective multicast-router ports as
the interface on which they received this IGMP query. Use the source-address statement to configure
the source address to use for IGMP snooping queries.
Options
source-address ip-address—Source address for the device to use in the IGMP queries that it sends.
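For example, a sketch that makes the device the IGMP querier for one VLAN; the VLAN name and address are placeholders:

```
set protocols igmp-snooping vlan v10 l2-querier source-address 10.1.1.1
```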
Release Information
RELATED DOCUMENTATION
label-switched-path-template (Multicast)
IN THIS SECTION
Syntax | 1615
Description | 1616
Options | 1616
Syntax
label-switched-path-template {
(default-template | lsp-template-name);
}
Hierarchy Level
name rsvp-te],
[edit protocols mvpn inter-region-template template template-name all-regions
ingress-replication label-switched-path],
[edit protocols mvpn inter-region-template template template-name all-regions
rsvp-te],
[edit routing-instances routing-instance-name provider-tunnel ingress-
replication label-switched-path],
[edit routing-instances routing-instance-name provider-tunnel rsvp-te],
[edit routing-instances routing-instance-name provider-tunnel selective group
address source source-address rsvp-te],
[edit routing-options dynamic-tunnels tunnel-name rsvp-te entry-name]
[edit routing-instances instance-name provider-tunnel]
Description
Specify the LSP template. An LSP template is used as the basis for other dynamically generated LSPs.
This feature can be used for a number of applications, including point-to-multipoint LSPs, flooding VPLS
traffic, configuring ingress replication for IP multicast using MBGP MVPNs, and enabling RSVP
automatic mesh. There is no default setting for the label-switched-path-template statement, so you
must configure either the default-template using the default-template option, or you must specify the
name of your preconfigured LSP template.
Options
default-template—Specify that the default LSP template be used for the dynamically generated LSPs.
lsp-template-name—Specify the name of an LSP to be used as a template for the dynamically generated
LSPs.
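A sketch that defines a point-to-multipoint LSP template and references it for an MVPN provider tunnel. The template and instance names are placeholders, and the template LSP definition shown under [edit protocols mpls] is an assumption:

```
set protocols mpls label-switched-path p2mp-template template
set protocols mpls label-switched-path p2mp-template p2mp
set routing-instances VPN-A provider-tunnel rsvp-te label-switched-path-template p2mp-template
```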
Release Information
RELATED DOCUMENTATION
ldp-p2mp
IN THIS SECTION
Syntax | 1617
Description | 1618
Syntax
ldp-p2mp;
Hierarchy Level
Description
Specify a point-to-multipoint provider tunnel with LDP signaling for an MBGP MVPN.
Release Information
RELATED DOCUMENTATION
Example: Configuring Point-to-Multipoint LDP LSPs as the Data Plane for Intra-AS MBGP MVPNs |
781
IN THIS SECTION
Syntax | 1619
Description | 1619
Default | 1620
Options | 1620
Syntax
leaf-tunnel-limit-inet number;
Hierarchy Level
Description
Configure the maximum number of selective leaf tunnels for IPv4 control-plane routes.
The leaf-tunnel-limit-inet statement limits the number of Type-4 leaf autodiscovery (AD) route
messages that can be originated by receiver provider edge (PE) routers in response to receiving from the
sender PE router S-PMSI AD routes with the leaf-information-required flag set. Thus, this statement
limits the number of leaf nodes that are created when a selective tunnel is formed.
You can configure the statement only when the MVPN mode is rpt-spt.
Setting the leaf-tunnel-limit-inet statement or reducing the value of the limit does not alter or delete
the already existing and installed routes. If needed, you can run the clear pim join command to force the
limit to take effect. Those routes that cannot be processed because of the limit are added to a queue,
and this queue is processed when the limit is removed or increased and when existing routes are
deleted.
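For example, a sketch that caps the number of IPv4 leaf AD routes at 500 in an instance running in rpt-spt mode; the instance name and limit are placeholders:

```
set routing-instances VPN-A protocols mvpn mvpn-mode rpt-spt
set routing-instances VPN-A protocols mvpn leaf-tunnel-limit-inet 500
```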
Default
Unlimited
Options
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1621
Description | 1621
Default | 1622
Options | 1622
Syntax
leaf-tunnel-limit-inet6 number;
Hierarchy Level
Description
Configure the maximum number of selective leaf tunnels for IPv6 control-plane routes.
The leaf-tunnel-limit-inet6 statement limits the number of Type-4 leaf autodiscovery (AD) route
messages that can be originated by receiver provider edge (PE) routers in response to receiving from the
sender PE router S-PMSI AD routes with the leaf-information-required flag set. Thus, this statement
limits the number of leaf nodes that are created when a selective tunnel is formed.
You can configure the statement only when the MVPN mode is rpt-spt.
Setting the leaf-tunnel-limit-inet6 statement or reducing the value of the limit does not alter or delete
the already existing and installed routes. If needed, you can run the clear pim join command to force the
limit to take effect. Those routes that cannot be processed because of the limit are added to a queue,
and this queue is processed when the limit is removed or increased and when existing routes are
deleted.
Default
Unlimited
Options
Release Information
RELATED DOCUMENTATION
listen
IN THIS SECTION
Syntax | 1623
Description | 1623
Options | 1623
Syntax
Hierarchy Level
Description
Specify an address and optionally a port on which SAP and SDP listen, in addition to the default SAP
address and port on which they always listen, 224.2.127.254:9875. To specify multiple additional
addresses or pairs of address and port, include multiple listen statements.
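For example, the following illustrative configuration adds one extra address and port for SAP and SDP to listen on (the address and port values are assumptions, not defaults):
set protocols sap listen 224.2.130.15 port 9876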
Options
• Default: 224.2.127.254
• Default: 9875
Release Information
RELATED DOCUMENTATION
local
IN THIS SECTION
Syntax | 1624
Description | 1625
Syntax
local {
    address address;
    disable;
    family (inet | inet6) anycast-pim;
    group-ranges {
        destination-ip-prefix</prefix-length>;
    }
    hold-time seconds;
    override;
    priority number;
    process-non-null-as-null-register;
}
Hierarchy Level
Description
Release Information
RELATED DOCUMENTATION
local-address (AMT)
IN THIS SECTION
Syntax | 1626
Description | 1626
Default | 1627
Options | 1627
Syntax
local-address ip-address;
Hierarchy Level
Description
Specify the local unique IP address to send in Automatic Multicast Tunneling (AMT) relay advertisement
messages, for use as the IP source of AMT control messages, and as the source of the data tunnel
encapsulation. The address can be configured on any interface in the system. Typically, the router’s lo0.0
loopback address is used for configuring the AMT local address in the default routing instance, and the
router’s lo0.n loopback address is used for configuring the AMT local address in VPN routing instances.
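For example, the following sketch uses the default-instance loopback address as the AMT local address (the address is illustrative):
set interfaces lo0 unit 0 family inet address 10.255.1.1/32
set protocols amt relay local-address 10.255.1.1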
Default
Options
Release Information
RELATED DOCUMENTATION
local-address (MSDP)
IN THIS SECTION
Syntax | 1628
Description | 1628
Options | 1628
Syntax
local-address address;
Hierarchy Level
Description
Configure the local end of an MSDP session. You must configure at least one peer for MSDP to function.
When configuring a peer, you must include this statement. This address is used to accept incoming
connections to the peer and to establish connections to the remote peer.
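For example, a minimal MSDP peering sketch (both addresses are illustrative):
set protocols msdp local-address 192.168.10.1
set protocols msdp peer 192.168.10.2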
Options
Release Information
RELATED DOCUMENTATION
local-address (PIM Anycast RP)
IN THIS SECTION
Syntax | 1629
Description | 1630
Options | 1630
Syntax
local-address address;
Hierarchy Level
Description
Configure the routing device local address for the anycast rendezvous point (RP). If this statement is
omitted, the router ID is used as this address.
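For example, the following sketch sets the anycast RP local address explicitly rather than relying on the router ID (the addresses and the rp-set entry are illustrative):
set protocols pim rp local family inet anycast-pim local-address 10.0.0.100
set protocols pim rp local family inet anycast-pim rp-set address 10.0.0.101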
Options
Release Information
RELATED DOCUMENTATION
local-address (Ingress PE Redundancy)
IN THIS SECTION
Syntax | 1631
Description | 1631
Options | 1631
Syntax
local-address address;
Hierarchy Level
Description
Configure the address of the local PE for ingress PE redundancy when point-to-multipoint LSPs are used
for multicast distribution.
Options
Release Information
RELATED DOCUMENTATION
log-interval (PIM)
IN THIS SECTION
Syntax | 1632
Description | 1633
Options | 1634
Syntax
log-interval value;
Hierarchy Level
Description
Options
seconds—Minimum time interval (in seconds) between log messages. To configure the time interval, you
must explicitly configure the maximum number of entries received with the maximum statement. You
can apply the log interval to incoming PIM join messages, PIM register messages, and group-to-RP
mappings.
Release Information
RELATED DOCUMENTATION
log-interval (IGMP)
IN THIS SECTION
Syntax | 1635
Description | 1635
Default | 1635
Options | 1635
Syntax
log-interval seconds;
Hierarchy Level
Description
Specify the minimum time interval (in seconds) between sending consecutive log messages to the
system log for multicast groups on static or dynamic IGMP interfaces. To configure the time interval, you
must specify the maximum number of multicast groups allowed on the interface. You must configure the
group-limit statement before you configure the log-interval statement.
To confirm the configured log interval on the interface, use the show igmp interface command.
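For example, the following illustrative configuration logs at most one message every 60 seconds once the group limit is reached on an interface (the interface name and values are assumptions; group-limit must be present for log-interval to commit):
set protocols igmp interface ge-0/0/0.0 group-limit 100
set protocols igmp interface ge-0/0/0.0 log-interval 60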
Default
Options
seconds—Minimum time interval (in seconds) between log messages. You must explicitly configure the
group-limit to configure a time interval to send log messages.
Release Information
RELATED DOCUMENTATION
log-interval (MLD)
IN THIS SECTION
Syntax | 1636
Description | 1637
Default | 1637
Options | 1637
Syntax
log-interval seconds;
Hierarchy Level
[edit logical-systems logical-system-name protocols mld interface interface-name],
[edit protocols mld interface interface-name]
Description
Specify the minimum time interval (in seconds) between sending consecutive log messages to the
system log for multicast groups on static or dynamic MLD interfaces. To configure the time interval, you
must specify the maximum number of multicast groups allowed on the interface.
To confirm the configured log interval on the interface, use the show mld interface command.
Default
Options
seconds—Minimum time interval (in seconds) between log messages. You must explicitly configure the
group-limit to configure a time interval to send log messages.
Release Information
RELATED DOCUMENTATION
log-interval (MSDP Active Source Messages)
IN THIS SECTION
Syntax | 1638
Description | 1638
Options | 1639
Syntax
log-interval seconds;
Hierarchy Level
Description
Specify the minimum time interval (in seconds) between sending consecutive log messages to the
system log for MSDP active source messages. To configure the time interval, you must specify the
maximum number of MSDP active source messages received by the device.
To confirm the configured log interval, use the show msdp source-active command.
Options
seconds—Minimum time interval (in seconds) between log messages. You must explicitly configure the
maximum value to configure a time interval to send log messages.
Release Information
RELATED DOCUMENTATION
Example: Configuring MSDP with Active Source Limits and Mesh Groups | 562
maximum (MSDP Active Source Messages) | 1645
log-warning (MSDP Active Source Messages)
IN THIS SECTION
Syntax | 1640
Description | 1640
Options | 1640
Syntax
log-warning value;
Hierarchy Level
Description
Specify the threshold at which the device logs a warning message in the system log for received MSDP
active source messages. This threshold is a percentage of the maximum number of MSDP active source
messages received by the device.
To confirm the configured warning threshold, use the show msdp source-active command.
Options
value—Percentage of the maximum number of active source messages at which the device starts
triggering the warnings. You must explicitly configure the maximum value to configure a warning
threshold value.
Release Information
RELATED DOCUMENTATION
Example: Configuring MSDP with Active Source Limits and Mesh Groups | 562
maximum (MSDP Active Source Messages) | 1645
log-warning (Multicast Forwarding Cache)
IN THIS SECTION
Syntax | 1641
Description | 1642
Options | 1642
Syntax
log-warning value;
Hierarchy Level
Description
Specify the threshold at which the device logs a warning message in the system log for multicast
forwarding cache entries. This threshold is a percentage of the maximum number of multicast
forwarding cache entries received by the device. Configuring the threshold statement globally for the
multicast forwarding cache and including the family statement to configure separate thresholds for the
IPv4 and IPv6 multicast forwarding caches are mutually exclusive.
To confirm the configured warning threshold, use the show multicast forwarding-cache
statistics command.
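For example, the following sketch triggers a warning when the cache reaches 80 percent of a suppress threshold of 10,000 entries (the values are illustrative):
set routing-options multicast forwarding-cache threshold suppress 10000
set routing-options multicast forwarding-cache threshold log-warning 80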
Options
value—Percentage of the maximum number of multicast forwarding cache entries at which the device
starts triggering the warning. You must explicitly configure the suppress value to configure a warning
threshold value.
Release Information
RELATED DOCUMENTATION
loose-check
IN THIS SECTION
Syntax | 1643
Description | 1643
Syntax
loose-check;
Hierarchy Level
Description
Specify loose authentication checking on the BFD session. Use loose authentication for transitional
periods only when authentication might not be configured at both ends of the BFD session.
By default, strict authentication is enabled and authentication is checked at both ends of each BFD
session. Optionally, to smooth migration from nonauthenticated sessions to authenticated sessions, you
can configure loose checking. When loose checking is configured, packets are accepted without
authentication being checked at each end of the session.
Release Information
RELATED DOCUMENTATION
mapping-agent-election
IN THIS SECTION
Syntax | 1644
Description | 1645
Options | 1645
Syntax
(mapping-agent-election | no-mapping-agent-election);
Hierarchy Level
Description
Options
• Default: mapping-agent-election
Release Information
RELATED DOCUMENTATION
maximum (MSDP Active Source Messages)
IN THIS SECTION
Syntax | 1646
Description | 1646
Options | 1646
Syntax
maximum number;
Hierarchy Level
Description
Configure the maximum number of MSDP active source messages the router accepts.
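For example, the following sketch raises the limit and sets a matching threshold; the values are illustrative, and the statements are assumed to sit under the MSDP active-source-limit hierarchy:
set protocols msdp active-source-limit maximum 30000
set protocols msdp active-source-limit threshold 27000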
Options
• Default: 25,000
Release Information
RELATED DOCUMENTATION
Example: Configuring MSDP with Active Source Limits and Mesh Groups | 562
threshold (MSDP Active Source Messages) | 1943
maximum (PIM)
IN THIS SECTION
Syntax | 1647
Description | 1648
Options | 1649
Syntax
maximum limit;
Hierarchy Level
Description
Configure the maximum number of specified PIM entries received by the device. If the device reaches
the configured limit, no new entries are received.
NOTE: The maximum limit settings that you configure with the maximum and the family (inet |
inet6) maximum statements are mutually exclusive. For example, if you configure a global
maximum PIM join state limit, you cannot configure a limit at the family level for IPv4 or IPv6
joins. If you attempt to configure a limit at both the global level and the family level, the device
will not accept the configuration.
Options
limit—Maximum number of PIM entries received by the device. If you configure both the log-interval and
the maximum statements, a warning is triggered when the maximum limit is reached.
Depending on your configuration, this limit specifies the maximum number of PIM joins, PIM register
messages, or group-to-RP mappings received by the device.
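For example, the following sketch caps PIM (S,G) join state globally; the sglimit hierarchy and values shown are assumptions for illustration:
set protocols pim sglimit maximum 20000
set protocols pim sglimit log-interval 60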
Release Information
RELATED DOCUMENTATION
maximum-bandwidth
IN THIS SECTION
Syntax | 1650
Description | 1650
Options | 1650
Syntax
maximum-bandwidth bps;
Hierarchy Level
Description
Options
Release Information
RELATED DOCUMENTATION
maximum-rps
IN THIS SECTION
Syntax | 1651
Description | 1651
Options | 1651
Syntax
maximum-rps limit;
Hierarchy Level
Description
Options
limit—Number of RPs.
• Default: 100
Release Information
RELATED DOCUMENTATION
maximum-transmit-rate (IGMP)
IN THIS SECTION
Syntax | 1652
Description | 1653
Options | 1653
Syntax
maximum-transmit-rate packets-per-second;
Hierarchy Level
Description
Options
Release Information
RELATED DOCUMENTATION
maximum-transmit-rate (MLD)
IN THIS SECTION
Syntax | 1654
Description | 1654
Options | 1654
Syntax
maximum-transmit-rate packets-per-second;
Hierarchy Level
Description
Options
Release Information
RELATED DOCUMENTATION
mdt
IN THIS SECTION
Syntax | 1655
Description | 1656
Syntax
mdt {
data-mdt-reuse;
group-range multicast-prefix;
threshold {
group group-address {
source source-address {
rate threshold-rate;
}
}
tunnel-limit limit;
}
}
Hierarchy Level
Description
Establish the group address range for data MDTs, the threshold for the creation of data MDTs, and
tunnel limits for a multicast group and source. A multicast group can have more than one source of
traffic.
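For example, a data MDT sketch for a VPN routing instance (the instance name, group ranges, addresses, and rate are illustrative):
set routing-instances VPN-A provider-tunnel family inet mdt group-range 239.100.0.0/16
set routing-instances VPN-A provider-tunnel family inet mdt threshold group 224.1.1.0/24 source 10.10.1.1/32 rate 50
set routing-instances VPN-A provider-tunnel family inet mdt tunnel-limit 20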
Release Information
Statement introduced before Junos OS Release 7.4. In Junos OS Release 17.3R1, the mdt hierarchy was
moved from provider-tunnel to the provider-tunnel family inet and provider-tunnel family inet6
hierarchies as part of an upgrade to add IPv6 support for default MDT in Rosen 7, and data MDT for
Rosen 6 and Rosen 7. The provider-tunnel mdt hierarchy is now hidden for backward compatibility with
existing scripts.
RELATED DOCUMENTATION
Example: Configuring Data MDTs and Provider Tunnels Operating in Source-Specific Multicast
Mode | 696
Example: Configuring Data MDTs and Provider Tunnels Operating in Any-Source Multicast Mode |
690
metric
IN THIS SECTION
Syntax | 1657
Description | 1657
Options | 1658
Syntax
metric metric;
Hierarchy Level
Description
Options
metric—Metric value.
• Range: 1 through 31
• Default: 1
Release Information
NOTE: Distance Vector Multicast Routing Protocol (DVMRP) was deprecated in Junos OS
Release 16.1. Although DVMRP commands continue to be available and configurable in the CLI,
they are no longer visible and are scheduled for removal in a subsequent release.
RELATED DOCUMENTATION
minimum-interval (PIM BFD Liveness Detection)
IN THIS SECTION
Syntax | 1659
Description | 1659
Options | 1659
Syntax
minimum-interval milliseconds;
Hierarchy Level
Description
Configure the minimum interval after which the local routing device transmits hello packets and then
expects to receive a reply from a neighbor with which it has established a BFD session. Optionally,
instead of using this statement, you can specify the minimum transmit and receive intervals separately
using the transmit-interval minimum-interval and minimum-receive-interval statements.
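For example, the following sketch sets a combined 300-millisecond minimum transmit and receive interval for a PIM BFD session (the interface name is illustrative):
set protocols pim interface ge-0/0/0.0 bfd-liveness-detection minimum-interval 300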
Options
Release Information
RELATED DOCUMENTATION
minimum-interval (PIM BFD Transmit Interval)
IN THIS SECTION
Syntax | 1660
Description | 1660
Options | 1661
Syntax
minimum-interval milliseconds;
Hierarchy Level
Description
Configure the minimum interval after which the local routing device transmits hello packets to a
neighbor with which it has established a BFD session. Optionally, instead of using this statement, you
can configure the minimum transmit interval using the minimum-interval statement at the [edit
protocols pim interface interface-name bfd-liveness-detection] hierarchy level.
Options
NOTE: The threshold value specified in the threshold statement must be greater than the value
specified in the minimum-interval statement for the transmit-interval statement.
Release Information
RELATED DOCUMENTATION
min-rate
IN THIS SECTION
Syntax | 1662
Description | 1662
Options | 1663
Syntax
min-rate {
rate bps;
revert-delay seconds;
}
Hierarchy Level
Description
Fast failover (that is, sub-50-ms switchover for C-multicast streams as defined in Draft Morin L3VPN
Fast Failover 05) is supported on MPC cards operating in enhanced-ip mode that are running
next-generation (NG) MVPNs with hot-root-standby enabled.
Live-live NG MVPN traffic is available by enabling both sender-based reverse path forwarding (RPF) and
hot-root standby. In this scenario, any upstream failure in the network can be repaired locally at the
egress PE, and fast failover is triggered if the flow rate of monitored traffic falls below the threshold
configured for min-rate.
On the egress PE, redundant multicast streams are received from a source that has been multihomed to
two or more senders (upstream PEs). Only one stream is forwarded to the customer network, however,
because the sender-based RPF running on the egress PE prevents any duplication.
Note that fast failover only supports VRF configured with a virtual tunnel (VT) interface, that is,
anchored to a tunnel PIC to provide upstream tunnel termination. Label switched interfaces (LSI) are not
supported.
NOTE: min-rate is not strictly supported for MPC3 and MPC4 line cards (these cards have
multiple lookup chips and an aggregate value is not calculated across chips). So, when setting the
rate, choose a value that is high enough to ensure that lookup will be triggered at least once on
each chip every 10 milliseconds or less. As a result, for line cards with multiple lookup chips, a
small percentage of duplicate multicast packets might be observed leaking to the
egress interface. This is normal behavior. The reroute is triggered when the traffic rate on the
primary tunnel hits zero. Likewise, if no packets are detected on any of the lookup chips during
the configured interval, the tunnel will go down.
Options
rate—Specify a rate to represent the typical flow rate of aggregate multicast traffic from the provider
tunnel (P tunnel). Aggregate multicast traffic from the P tunnel is monitored, and if it falls below the
threshold set here a failover to the hot-root standby is triggered.
revert-delay seconds—Use the specified interval to allow time for the network to converge when and if
the original link comes back online. You can specify a time, in seconds, for the router to wait before
updating its multicast routes. For example, if the original link goes down and triggers the switchover to
an alternative link, and then the original link comes back up, the update of multicast routes reflecting the
new path can be delayed to accommodate the time it may take for the network to converge back on
the original link.
Release Information
RELATED DOCUMENTATION
Example: Configuring Sender-Based RPF in a BGP MVPN with RSVP-TE Point-to-Multipoint Provider
Tunnels | 966
hot-root-standby (MBGP MVPN) | 1543
min-rate (source-active-advertisement)
IN THIS SECTION
Syntax | 1664
Description | 1664
Syntax
min-rate bps;
Hierarchy Level
Description
Specify the minimum traffic rate required to advertise a Source-Active route (1 through 1,000,000 bits
per second), set on the ingress PEs.
Use this statement, for example, to ensure that the egress PEs only receive Source-Active A-D route
advertisements from ingress PEs that are receiving traffic at or above a minimum rate, regardless of how
many ingress PEs there may be. Only one of the ingress PEs is chosen as the upstream multicast hop
(UMH). Traffic flow continues because the egress PE removes its Type 7 advertisements to the old UMH
and re-advertises a Type 7 to the new UMH.
The min-rate statement works by polling traffic statistics to determine the traffic rate of each flow on the
ingress PE. Rather than advertising the Source-Active A-D route immediately upon learning of the S,G,
the ingress PE waits until the traffic rate reaches the threshold set for min-rate before sending the
Source-Active A-D route. If the rate then drops below the threshold, the Source-Active A-D route is
withdrawn.
To verify that the value is set as expected, you can check whether the Type 5 (Source-Active route) has
been advertised using the show route table vrf.mvpn.0 command. It may take several minutes before
you can see the changes in the Source-Active A-D route advertisement after making changes to the
min-rate.
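For example, the following sketch requires a flow to reach 1000 bps on the ingress PE before its Source-Active A-D route is advertised. The instance name is illustrative, and the placement under the mvpn source-active-advertisement hierarchy is an assumption based on this statement's name:
set routing-instances VPN-A protocols mvpn source-active-advertisement min-rate 1000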
Release Information
RELATED DOCUMENTATION
minimum-receive-interval
IN THIS SECTION
Syntax | 1666
Description | 1666
Options | 1666
Syntax
minimum-receive-interval milliseconds;
Hierarchy Level
Description
Configure the minimum interval after which the local routing device must receive a reply from a
neighbor with which it has established a BFD session. Optionally, instead of using this statement, you
can configure the minimum receive interval using the minimum-interval statement at the [edit protocols
pim interface interface-name bfd-liveness-detection] hierarchy level.
Options
Release Information
RELATED DOCUMENTATION
mld
IN THIS SECTION
Syntax | 1667
Description | 1668
Default | 1668
Options | 1668
Syntax
mld {
accounting;
interface interface-name {
(accounting | no-accounting);
disable;
distributed;
group-limit limit;
group-policy [ policy-names ];
immediate-leave;
oif-map [ map-names ];
passive;
ssm-map ssm-map-name;
ssm-map-policy ssm-map-policy-name;
static {
group multicast-group-address {
exclude;
group-count number;
group-increment increment;
source ip-address {
source-count number;
source-increment increment;
}
}
}
version version;
}
maximum-transmit-rate packets-per-second;
query-interval seconds;
query-last-member-interval seconds;
query-response-interval seconds;
robust-count number;
}
Hierarchy Level
Description
Enable MLD on the router. MLD must be enabled for the router to receive multicast packets.
Default
MLD is disabled on the router. MLD is automatically enabled on all broadcast interfaces when you
configure Protocol Independent Multicast (PIM) or Distance Vector Multicast Routing Protocol
(DVMRP).
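For example, the following minimal sketch enables MLD explicitly on one interface and selects MLDv2 (the interface name is illustrative):
set protocols mld interface ge-0/0/0.0
set protocols mld interface ge-0/0/0.0 version 2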
Options
Release Information
RELATED DOCUMENTATION
Enabling MLD | 65
show mld group
show mld interface
show mld statistics | 2237
clear mld membership
clear mld statistics | 2064
mld-snooping
IN THIS SECTION
Description | 1673
Default | 1673
mld-snooping {
evpn-ssm-reports-only;
immediate-leave;
interface interface-name {
group-limit limit;
host-only-interface;
immediate-leave;
multicast-router-interface;
static {
group ip-address {
source ip-address;
}
}
}
proxy {
source-address ip-address;
}
query-interval seconds;
query-last-member-interval seconds;
query-response-interval seconds;
robust-count number;
}
mld-snooping {
vlan vlan-id {
immediate-leave;
interface interface-name {
group-limit limit;
host-only-interface;
immediate-leave;
multicast-router-interface;
static {
group ip-address {
source ip-address;
}
}
}
proxy {
source-address ip-address;
}
query-interval seconds;
query-last-member-interval seconds;
query-response-interval seconds;
robust-count number;
}
}
mld-snooping {
vlan (all | vlan-name) {
immediate-leave;
interface interface-name {
group-limit;
host-only-interface;
immediate-leave;
multicast-router-interface;
static {
group ip-address {
source ip-address;
}
}
}
qualified-vlan vlan-id;
query-interval;
query-last-member-interval;
query-response-interval;
robust-count number;
traceoptions {
file (files | no-world-readable | size | world-readable);
flag (all | client-notification | general | group | host-notification | leave | normal | packets | policy | query | report | route | state | task | timer);
}
}
}
mld-snooping {
vlan (vlan-name) {
evpn-ssm-reports-only;
immediate-leave;
interface (all | interface-name) {
group-limit limit;
host-only-interface;
immediate-leave;
multicast-router-interface;
static {
group ip-address {
source ip-address;
}
}
}
proxy {
source-address ip-address;
}
qualified-vlan vlan-id;
query-interval seconds;
query-last-member-interval seconds;
query-response-interval seconds;
robust-count number;
traceoptions {
file filename <files number> <size size> <world-readable | no-world-
readable>;
flag flag <flag-modifier>;
}
}
}
Hierarchy Level
Description
Enable and configure Multicast Listener Discovery (MLD) snooping. MLD snooping constrains IPv6
multicast traffic at Layer 2 by configuring Layer 2 LAN ports dynamically to forward IPv6 multicast
traffic only to those ports that want to receive it.
MLD is a protocol built on ICMPv6 and used by IPv6 routers and hosts to discover and indicate interest
in a multicast group, similar to how IGMP manages multicast group membership for IPv4 multicast
traffic. There are two versions, MLDv1 (RFC 2710), which is equivalent to IGMP version 2 (IGMPv2), and
MLDv2 (RFC 3810), which is equivalent to IGMP version 3 (IGMPv3). Like IGMP, both MLDv1 and
MLDv2 support Query, Report and Done messages. MLDv2 further supports source-specific Query
messages (reports) and multi-record reports. MLD configuration options are similar to those for IGMP
snooping.
MLD restricts forwarding IPv6 multicast traffic to only those interfaces in a bridge-domain, VLAN, or
VPLS that have interested listeners, rather than flooding the traffic to all interfaces in the bridge-domain,
VLAN, or VPLS. The device finds the interfaces with interested listeners using the following steps:
• The device snoops Query messages and floods them to all ports.
• The device snoops Report and Done messages and selectively forwards them only to multicast router
ports.
NOTE: For MX Series devices, MLD snooping is not supported on DPC linecards. The operational
commands for MLD snooping, including defaults, functionality, logging, and tracing are similar to
those for IGMP snooping.
Default
Release Information
RELATED DOCUMENTATION
mode (MVR)
IN THIS SECTION
Syntax | 1674
Description | 1675
Default | 1675
Options | 1676
Syntax
Hierarchy Level
Description
Configure the operating mode for a Multicast VLAN Registration (MVR) receiver VLAN.
A multicast VLAN (MVLAN) forwards multicast streams to interfaces on other VLANs that are
configured as MVR receiver VLANs for that MVLAN, and can operate in either of two modes,
transparent or proxy. The mode setting affects how IGMP reports are sent to the upstream multicast
router. In transparent mode, the device sends IGMP reports out of the MVR receiver VLAN, and in proxy
mode, the device sends IGMP reports out of the MVLAN.
We recommend that you configure proxy mode on devices that are closest to the upstream multicast
router, because in transparent mode, IGMP reports are only sent out on the MVR receiver VLAN. As a
result, MVR receiver ports receiving an IGMP query from an upstream router on the MVLAN will only
reply on MVR receiver VLAN multicast router ports; the upstream router will not receive the replies and
will not continue to forward traffic. In proxy mode, IGMP reports are sent out on
the MVLAN for its MVR receiver VLANs, so the upstream multicast router receives IGMP replies on the
MVLAN and continues to forward the multicast traffic on the MVLAN.
In either mode, the device forms multicast group memberships on the MVLAN, and IGMP queries and
forwards multicast traffic received on the MVLAN to subscribers in MVR receiver VLANs tagged with
the MVLAN tag by default. If you also configure the translate option at the [edit protocols igmp-
snooping vlans vlan-name data-forwarding receiver] hierarchy level for hosts on trunk ports in MVR
receiver VLANs, then upon egress, the device translates MVLAN tags into the MVR receiver VLAN tags
instead.
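For example, the following sketch places an MVR receiver VLAN in proxy mode. The VLAN names are illustrative, and the source-vlans association shown is an assumption based on typical MVR configurations:
set protocols igmp-snooping vlan mvr-recv data-forwarding receiver source-vlans mvlan-100
set protocols igmp-snooping vlan mvr-recv data-forwarding receiver mode proxy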
NOTE: This statement is available to configure the MVR mode only on devices that support the
Enhanced Layer 2 Software (ELS) configuration style. Devices with software that does not
support ELS operate in transparent mode by default, or operate in proxy mode if you configure
the proxy statement at the [edit protocols igmp-snooping vlan vlan-name] hierarchy level for a
VLAN configured as a data-forwarding VLAN.
Default
Transparent mode
Options
transparent MVR operates in transparent mode if this option is configured (and is also the default if no
mode is configured). In transparent mode, IGMP reports are sent out from the device in
the context of the MVR receiver VLAN. IGMP join and leave messages received on MVR
receiver VLAN interfaces are forwarded to the multicast router ports on the MVR receiver
VLAN. IGMP queries received on the MVR receiver VLAN are forwarded to all MVR
receiver ports. IGMP queries received on the MVLAN are forwarded to the MVR receiver
ports that are in the receiver VLANs belonging to the MVLAN, even though those ports
might not be on the MVLAN itself.
When a host on an MVR receiver VLAN joins a multicast group, the device installs a
bridging entry on the MVLAN and forwards MVLAN traffic for that group to the host,
even though the host is not in the MVLAN. You can also configure the device to install the
bridging entries on the MVR receiver VLAN (see the install option at the [edit protocols
igmp-snooping vlans vlan-name data-forwarding receiver] hierarchy level).
proxy When you configure proxy mode for an MVR receiver VLAN, the device acts as a proxy to
the IGMP multicast router for MVR group membership requests received on MVR receiver
VLANs. The device forwards IGMP reports from hosts on MVR receiver VLANs in the
context of the MVLAN and forwards them to the multicast router ports on the MVLAN
only, so the multicast router receives IGMP reports only on the MVLAN for those MVR
receiver hosts. IGMP queries are handled in the same way as in transparent mode; IGMP
queries received on either the MVR receiver VLAN or the MVLAN are forwarded to all
MVR receiver ports in receiver VLANs belonging to the MVLAN (even though those ports
are not on the MVLAN itself).
When a host on an MVR receiver VLAN joins a multicast group, the device installs a
bridging entry on the MVLAN, and subsequently forwards MVLAN traffic for that group to
the host although the host is not in the MVLAN. You cannot configure the install option to
install the bridging entries on the MVR receiver VLAN for a data-forwarding MVR receiver
VLAN that is configured in proxy mode.
Release Information
Support added in Junos OS Release 18.4R1 for EX2300 and EX3400 switches.
RELATED DOCUMENTATION
mode (DVMRP)
IN THIS SECTION
Syntax | 1677
Description | 1677
Options | 1678
Syntax
Hierarchy Level
Description
Options
unicast-routing—DVMRP performs unicast routing only. To forward multicast data, you must configure
Protocol Independent Multicast (PIM) on the interface.
Release Information
NOTE: Distance Vector Multicast Routing Protocol (DVMRP) was deprecated in Junos OS
Release 16.1. Although DVMRP commands continue to be available and configurable in the CLI,
they are no longer visible and are scheduled for removal in a subsequent release.
RELATED DOCUMENTATION
mode (MSDP Mesh Group)
IN THIS SECTION
Syntax | 1679
Description | 1679
Default | 1679
Options | 1679
Syntax
Hierarchy Level
Description
Configure groups of peers in a full mesh topology to limit excessive flooding of source-active messages
to neighboring peers. The default flooding mode is standard.
Default
Options
• Default: standard
Release Information
RELATED DOCUMENTATION
Example: Configuring MSDP with Active Source Limits and Mesh Groups | 562
mode (PIM)
IN THIS SECTION
Syntax | 1680
Description | 1681
Options | 1681
Syntax
Hierarchy Level
Description
Options
The choice of PIM mode is closely tied to controlling how groups are mapped to PIM modes, as follows:
• bidirectional-sparse—Use if all multicast groups are operating in bidirectional, sparse, or SSM mode.
• bidirectional-sparse-dense—Use if multicast groups, except those that are specified in the dense-
groups statement, are operating in bidirectional, sparse, or SSM mode.
• sparse—Use if all multicast groups are operating in sparse mode or SSM mode.
• sparse-dense—Use if multicast groups, except those that are specified in the dense-groups
statement, are operating in sparse mode or SSM mode.
Release Information
RELATED DOCUMENTATION
mofrr-asm-starg
IN THIS SECTION
Syntax | 1682
Description | 1682
Syntax
mofrr-asm-starg;
Hierarchy Level
Description
Enable mofrr-asm-starg to include any-source multicast (ASM) for (*,G) joins in the Multicast-only fast
reroute (MoFRR).
NOTE: mofrr-asm-starg applies to IP-PIM only. When enabled for group G, (*,G) undergoes
MoFRR as long as there is no (S,G) state for group G. In other words, (*,G) MoFRR will cease and any
old states will be torn down when (S,G) state is created. Note, too, that mofrr-asm-starg is not
supported for mLDP (because mLDP itself does not support (*,G)).
In a PIM domain with MoFRR enabled, the default for stream-protection is (S,G) routes only.
Context: Multicast-only fast reroute (MoFRR) can be used to reduce traffic loss in a multicast
distribution tree in the event of link down. To employ MoFRR, a downstream router is configured with
an alternative path back towards the source, over which it receives a backup live stream of the same
multicast traffic. That router propagates the same (S,G) join toward both upstream neighbors in order to
create duplicate multicast trees. If a failure is detected on the primary tree, the router switches to the
backup tree to prevent packet loss.
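For example, the following sketch enables MoFRR and extends it to (*,G) joins; the placement under the routing-options multicast stream-protection hierarchy is an assumption:
set routing-options multicast stream-protection mofrr-asm-starg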
Release Information
RELATED DOCUMENTATION
mofrr-disjoint-upstream-only
IN THIS SECTION
Syntax | 1684
Description | 1684
Syntax
mofrr-disjoint-upstream-only;
Hierarchy Level
Description
When you configure multicast-only fast reroute (MoFRR) in a PIM domain, allow only a disjoint RPF (an
RPF on a separate plane) to be selected as the backup RPF path.
In a multipoint LDP MoFRR domain, the same label is shared between parallel links to the same
upstream neighbor. This is not the case in a PIM domain, where each link forms a neighbor. The mofrr-
disjoint-upstream-only statement does not allow a backup RPF path to be selected if the path goes to
the same upstream neighbor as that of the primary RPF path. This ensures that MoFRR is triggered only
on a topology that has multiple RPF upstream neighbors.
Release Information
RELATED DOCUMENTATION
mofrr-no-backup-join
IN THIS SECTION
Syntax | 1685
Description | 1686
Syntax
mofrr-no-backup-join;
Hierarchy Level
Description
When you configure multicast-only fast reroute (MoFRR) in a PIM domain, prevent sending join
messages on the backup path, but retain all other MoFRR functionality.
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1687
Description | 1687
Default | 1688
Syntax
mofrr-primary-path-selection-by-routing;
Hierarchy Level
Description
MoFRR is supported on both equal-cost multipath (ECMP) paths and non-ECMP paths. Unicast loop-
free alternate (LFA) routes need to be enabled to support MoFRR on non-ECMP paths. LFA routes are
enabled with the link-protection statement in the interior gateway protocol (IGP) configuration. When
you enable link protection on an OSPF or IS-IS interface, Junos OS creates a backup LFA path to the
primary next hop for all destination routes that traverse the protected interface.
In the context of load balancing, MoFRR prioritizes the disjoint backup in favor of load balancing the
available paths.
In Junos OS releases before 15.1R7, the default MoFRR behavior for both ECMP and non-ECMP
scenarios was sticky: if the active link went down, active path selection gave preference to the
backup path for the transition, and the active path did not follow the unicast selected gateway.
Starting in Junos OS Release 15.1R7, however, the default behavior for non-ECMP scenarios is
nonsticky: active path selection strictly follows the unicast selected gateway. MoFRR no longer
promotes a unicast LFA path to become the MoFRR active path; a unicast LFA path can be selected
only to become the MoFRR backup.
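For example, a sketch combining unicast LFA with MoFRR for a non-ECMP topology; the IS-IS interface name and the placement of stream-protection under [edit routing-options multicast] are illustrative:

protocols {
    isis {
        interface ge-0/0/0.0 {
            link-protection;
        }
    }
}
routing-options {
    multicast {
        stream-protection {
            mofrr-primary-path-selection-by-routing;
        }
    }
}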
Default
By default, the backup path gets promoted to be the primary path when MoFRR is configured in a PIM
domain.
Release Information
RELATED DOCUMENTATION
mpls-internet-multicast
IN THIS SECTION
Syntax | 1689
Description | 1689
Syntax
mpls-internet-multicast;
Hierarchy Level
Description
A nonforwarding routing instance type that supports Internet multicast over an MPLS network for the
default master instance. No interfaces can be configured for it. Only one mpls-internet-multicast
instance can be configured for each logical system.
The mpls-internet-multicast configuration statement is also explicitly required under PIM in the master
instance.
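For example, a sketch of the two required pieces of configuration; the instance name internet-mcast is illustrative:

routing-instances {
    internet-mcast {
        instance-type mpls-internet-multicast;
    }
}
protocols {
    pim {
        mpls-internet-multicast;
    }
}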
Release Information
RELATED DOCUMENTATION
msdp
IN THIS SECTION
Syntax | 1690
Description | 1692
Default | 1692
Options | 1692
Syntax
msdp {
disable;
active-source-limit {
log-interval seconds;
log-warning value;
maximum number;
threshold number;
}
data-encapsulation (disable | enable);
export [ policy-names ];
group group-name {
... group-configuration ...
}
hold-time seconds;
import [ policy-names ];
local-address address;
keep-alive seconds;
peer address {
... peer-configuration ...
}
rib-group group-name;
source ip-prefix</prefix-length> {
active-source-limit {
maximum number;
threshold number;
}
}
sa-hold-time seconds;
traceoptions {
file filename <files number> <size size> <world-readable | no-world-readable>;
flag flag <flag-modifier> <disable>;
}
group group-name {
disable;
export [ policy-names ];
import [ policy-names ];
local-address address;
mode (mesh-group | standard);
peer address {
... same statements as at the [edit protocols msdp peer address]
hierarchy level shown just following ...
}
traceoptions {
file filename <files number> <size size> <world-readable | no-world-readable>;
flag flag <flag-modifier> <disable>;
}
}
peer address {
disable;
active-source-limit {
maximum number;
threshold number;
}
authentication-key peer-key;
default-peer;
export [ policy-names ];
import [ policy-names ];
local-address address;
traceoptions {
file filename <files number> <size size> <world-readable | no-world-readable>;
flag flag <flag-modifier> <disable>;
}
}
}
Hierarchy Level
Description
Enable MSDP on the router or switch. You must also configure at least one peer for MSDP to function.
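For example, a minimal working sketch with a single peer (the addresses are illustrative):

protocols {
    msdp {
        local-address 192.0.2.1;
        peer 192.0.2.2;
    }
}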
Default
Options
Release Information
RELATED DOCUMENTATION
multicast
IN THIS SECTION
Syntax | 1693
Description | 1695
Syntax
multicast {
asm-override-ssm;
backup-pe-group group-name {
backups [ addresses ];
local-address address;
}
cont-stats-collection-interval interval;
flow-map flow-map-name {
bandwidth (bps | adaptive);
forwarding-cache {
timeout (never non-discard-entry-only | minutes);
}
policy [ policy-names ];
redundant-sources [ addresses ];
}
forwarding-cache {
threshold suppress value <reuse value>;
timeout minutes;
}
interface interface-name {
enable;
maximum-bandwidth bps;
no-qos-adjust;
reverse-oif-mapping {
no-qos-adjust;
}
subscriber-leave-timer seconds;
}
local-address address;
omit-wildcard-address;
pim-to-igmp-proxy {
upstream-interface [ interface-names ];
}
pim-to-mld-proxy {
upstream-interface [ interface-names ];
}
rpf-check-policy [ policy-names ];
scope scope-name {
interface [ interface-names ];
prefix destination-prefix;
}
scope-policy [ policy-names ];
ssm-groups [ addresses ];
ssm-map ssm-map-name {
policy [ policy-names ];
source [ addresses ];
}
traceoptions {
file filename <files number> <size size> <world-readable | no-world-readable>;
flag flag <disable>;
}
}
Hierarchy Level
Description
Configure multicast routing options properties. Note that you cannot apply a scope policy to a specific
routing instance. That is, all scoping policies are applied to all routing instances. However, the scope
statement does apply individually to a specific routing instance.
Release Information
interface and maximum-bandwidth statements introduced in Junos OS Release 9.0 for EX Series
switches.
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1696
Description | 1696
Default | 1697
Syntax
multicast;
Hierarchy Level
Description
In a multiprotocol BGP (MBGP) multicast VPN (MVPN), configure the virtual tunnel (VT) interface to be
used for multicast traffic only.
Default
If you omit this statement, the VT interface can be used for both multicast and unicast traffic.
Release Information
RELATED DOCUMENTATION
multicast-replication
IN THIS SECTION
Syntax | 1698
Description | 1698
Default | 1698
Options | 1698
Syntax
multicast-replication {
evpn {
irb (local-only | local-remote);
smet-nexthop-limit smet-nexthop-limit;
}
ingress;
local-latency-fairness;
}
Hierarchy Level
[edit forwarding-options]
Description
Configure the mode of multicast replication that helps to optimize multicast latency.
NOTE: The multicast-replication statement is supported only on platforms with the enhanced-ip
mode enabled.
Default
Options
NOTE: The ingress and local-latency-fairness options do not apply to EVPN configurations.
ingress—Complete ingress replication of the multicast data packets, where all the egress Packet
Forwarding Engines receive packets directly from the ingress Packet Forwarding Engines.
evpn irb local-remote—Enables IPv4 inter-VLAN multicast forwarding in an EVPN-VXLAN network with a
two-layer IP fabric, which is also known as a centrally-routed bridging overlay.
smet-nexthop-limit smet-nexthop-limit—Configures a limit on the number of SMET next hops for selective
multicast forwarding. An SMET next hop is a list of outgoing interfaces used by a PE device in
selectively replicating and forwarding multicast traffic. When this limit is reached, no new SMET
next hop is created and the PE device sends the new multicast group traffic to all egress devices.
• Default: 10,000
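For example, a sketch of enabling EVPN inter-VLAN replication with a lowered SMET next-hop limit (the limit value is illustrative):

forwarding-options {
    multicast-replication {
        evpn {
            irb local-remote;
            smet-nexthop-limit 5000;
        }
    }
}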
Release Information
evpn stanza introduced in Junos OS Release 17.3R3 for QFX Series switches.
RELATED DOCUMENTATION
forwarding-options
IPv4 Inter-VLAN Multicast Forwarding Modes for EVPN-VXLAN Overlay Networks
IN THIS SECTION
Syntax | 1700
Description | 1700
Default | 1701
Syntax
multicast-router-interface;
Hierarchy Level
Description
Statically configure the interface as an IGMP snooping multicast-router interface—that is, an interface
that faces toward a multicast router or other IGMP querier.
NOTE: If the specified interface is a trunk port, the interface becomes a multicast-routing device
interface for all VLANs configured on the trunk port. In addition, all unregistered multicast
packets, whether they are IPv4 or IPv6 packets, are forwarded to the multicast routing device
interface, even if the interface is configured as a multicast routing device interface only for IGMP
snooping.
Configure an interface as a bridge interface toward other multicast routing devices.
Default
Disabled. If this statement is disabled, the interface drops IGMP messages it receives.
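For example, a sketch of statically marking a multicast-router interface for IGMP snooping; the VLAN and interface names are illustrative:

protocols {
    igmp-snooping {
        vlan v100 {
            interface ge-0/0/10.0 {
                multicast-router-interface;
            }
        }
    }
}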
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1702
Description | 1702
Syntax
multicast-router-interface;
Hierarchy Level
Description
Statically configure the interface as a multicast-router interface—that is, an interface that faces towards
a multicast router or other MLD querier.
NOTE: If the specified interface is a trunk port, the interface becomes a multicast-router
interface for all VLANs configured on the trunk port. In addition, all unregistered multicast
packets, whether they are IPv4 or IPv6 packets, are forwarded to the multicast router interface,
even if the interface is configured as a multicast-router interface only for MLD snooping.
Release Information
Support at the [edit routing-instances instance-name protocols mld-snooping vlan vlan-name interface
interface-name] hierarchy level introduced in Junos OS Release 13.3 for EX Series switches.
RELATED DOCUMENTATION
multicast-snooping-options
IN THIS SECTION
Syntax | 1703
Description | 1704
Options | 1704
Syntax
multicast-snooping-options {
flood-groups [ ip-addresses ];
forwarding-cache {
threshold suppress value <reuse value>;
}
host-outbound-traffic (Multicast Snooping) {
forwarding-class class-name;
dot1p number;
}
graceful-restart <restart-duration seconds>;
ignore-stp-topology-change;
multichassis-lag-replicate-state;
nexthop-hold-time milliseconds;
traceoptions {
file filename <files number> <size size> <world-readable | no-world-readable>;
flag flag <flag-modifier> <disable>;
}
}
Hierarchy Level
Description
Options
Release Information
RELATED DOCUMENTATION
multicast-statistics (packet-forwarding-options)
IN THIS SECTION
Syntax | 1705
Description | 1705
Syntax
multicast-statistics;
Hierarchy Level
Description
Counts packets and checks the bandwidth of IPv4 and IPv6 multicast traffic received from a host and
group in a routing instance by using firewall filters.
With multicast-statistics enabled, route statistics are updated by a firewall counter for the next 512
multicast routes. Statistics are attached and collected on a first-come, first-served basis. To count the
packets and bandwidth, the switch uses ingress filters to match on the source IP, destination IP and VRF
ID fields. These filters reside in an ingress filter processor (IFP) group that contains a list of routes and
their corresponding filter IDs.
• The multicast statistic group is the group with the least priority. If there’s a rule conflict in another
group, the action for the group with the higher priority takes effect.
• Each route takes up one entry in the IFP ternary content-addressable memory (TCAM). If no TCAM
space is available, the filter installation fails.
• If you delete this command, any installed firewall rules for multicast statistics are deleted. If you
delete a route, the corresponding filter entry is also deleted. When you delete the last entry, the
group is automatically removed.
To check the rate and bandwidth per route, enter the "show multicast route" on page 2336 extensive
command. To see how many filters are on the switch, enter the VTY command show filter hw groups. To
clear the route counters, enter the "clear multicast statistics" on page 2077 command.
Release Information
RELATED DOCUMENTATION
multichassis-lag-replicate-state
IN THIS SECTION
Syntax | 1707
Description | 1707
Default | 1707
Syntax
multichassis-lag-replicate-state;
Hierarchy Level
Description
Provide multicast snooping for multichassis link aggregation group interfaces. Replicate IGMP join and
leave messages from the active link to the standby link of a dual-link multichassis link aggregation group
interface, enabling faster recovery of membership information after failover.
Default
If not included, membership information is recovered using a standard IGMP network query.
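For example, a minimal sketch (whether this is configured globally or within a routing instance depends on your topology):

multicast-snooping-options {
    multichassis-lag-replicate-state;
}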
Release Information
RELATED DOCUMENTATION
multiplier
IN THIS SECTION
Syntax | 1708
Description | 1709
Options | 1709
Syntax
multiplier number;
Hierarchy Level
Description
Configure the number of hello packets not received by a neighbor that causes the originating interface
to be declared down.
Options
• Default: 3
Release Information
RELATED DOCUMENTATION
multiple-triggered-joins
IN THIS SECTION
Syntax | 1710
Description | 1710
Options | 1710
Syntax
multiple-triggered-joins {
count number;
interval milliseconds;
}
Hierarchy Level
Description
Enable PIM to send multiple triggered join messages between PIM neighbors at configured or default
short intervals.
Options
interface-name—Name of the interface. Specify the full interface name, including the physical and logical
address components. To configure all interfaces, you can specify all.
• Range: 5 through 15
• Default: 5
• Default: 100
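For example, a sketch that sends seven triggered joins 150 milliseconds apart; the interface and values are illustrative, with count inside its 5 through 15 range:

protocols {
    pim {
        interface all {
            multiple-triggered-joins {
                count 7;
                interval 150;
            }
        }
    }
}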
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1712
Description | 1712
Options | 1712
Syntax
mvpn {
family {
inet {
autodiscovery {
inet-mdt;
}
disable;
}
inet6 {
disable;
}
}
}
Hierarchy Level
Description
Configure the control plane to be used for PE routers in the VPN to discover one another automatically.
You can also disable IPv6 draft-rosen multicast VPN by including the disable statement at the
[edit protocols pim mvpn family inet6] hierarchy level.
Options
Release Information
The autodiscovery statement was moved from the [edit protocols pim mvpn] hierarchy level to the
[edit protocols pim mvpn family inet] hierarchy level in Junos OS Release 13.3.
RELATED DOCUMENTATION
mvpn
IN THIS SECTION
Syntax | 1713
Description | 1715
Options | 1715
Syntax
mvpn {
inter-region-template {
template template-name {
all-regions {
incoming;
ingress-replication {
create-new-ucast-tunnel;
label-switched-path {
label-switched-path-template (Multicast) {
(default-template | lsp-template-name);
}
}
}
ldp-p2mp;
rsvp-te {
label-switched-path-template (Multicast) {
(default-template | lsp-template-name);
}
static-lsp static-lsp;
region region-name {
incoming;
ingress-replication {
create-new-ucast-tunnel;
label-switched-path {
label-switched-path-template (Multicast){
(default-template | lsp-template-name);
}
}
}
ldp-p2mp;
rsvp-te {
label-switched-path-template (Multicast) {
(default-template | lsp-template-name);
}
static-lsp static-lsp;
}
}
}
}
mvpn-mode (rpt-spt | spt-only);
receiver-site;
sender-site;
route-target {
export-target {
target target-community;
unicast;
}
import-target {
target {
target-value;
receiver target-value;
sender target-value;
}
unicast {
receiver;
sender;
}
}
}
}
}
Hierarchy Level
Description
Options
Release Information
Support for the traceoptions statement at the [edit protocols mvpn] hierarchy level introduced in Junos
OS Release 13.3.
Support for the inter-region-template statement at the [edit protocols mvpn] hierarchy level introduced
in Junos OS Release 15.1.
RELATED DOCUMENTATION
mvpn-iana-rt-import
IN THIS SECTION
Syntax | 1716
Description | 1717
Default | 1717
Syntax
mvpn-iana-rt-import;
Hierarchy Level
Description
Enables the use of IANA-assigned rt-import type values (0x010b) for multicast VPNs. You can configure
this statement on ingress PE routers only.
NOTE: If you configure the mvpn-iana-rt-import statement in Junos OS Release 10.4R2 and later,
the Juniper Networks router can interoperate with other vendors' routers for multicast VPNs.
However, the Juniper Networks router cannot interoperate with Juniper Networks routers
running Junos OS Release 10.4R1 and earlier.
If you do not configure the mvpn-iana-rt-import statement in Junos OS Release 10.4R2 and later,
the Juniper Networks router cannot interoperate with other vendors' routers for multicast VPNs.
However, the Juniper Networks router can interoperate with Juniper Networks routers running
Junos OS Release 10.4R1 and earlier.
Default
Release Information
Statement deprecated in Junos OS release 17.3, which means it no longer appears in the CLI but can be
accessed by scripts or by typing the command name until it is finally removed.
mvpn (NG-MVPN)
IN THIS SECTION
Syntax | 1718
Description | 1719
Syntax
mvpn {
autodiscovery-only {
intra-as {
inclusive;
}
}
receiver-site;
route-target {
export-target {
target target-community;
unicast;
}
import-target {
target {
target <target:number:number> <receiver | sender>;
unicast <receiver | sender>;
}
unicast {
receiver;
sender;
}
}
}
sender-site;
traceoptions {
Hierarchy Level
Description
Release Information
RELATED DOCUMENTATION
mvpn-mode
IN THIS SECTION
Syntax | 1720
Description | 1720
Default | 1720
Syntax
Hierarchy Level
Description
Configure the mode for customer PIM (C-PIM) join messages. Mixing MVPN modes within the same
VPN is not supported. For example, you cannot have spt-only mode on a source PE and rpt-spt mode on
the receiver PE.
Default
spt-only
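For example, a sketch of setting rpt-spt mode in a routing instance; the instance name vpn-a is illustrative, and the same mode must be used on all PE routers in the VPN:

routing-instances {
    vpn-a {
        protocols {
            mvpn {
                mvpn-mode rpt-spt;
            }
        }
    }
}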
Release Information
RELATED DOCUMENTATION
Configuring Shared-Tree Data Distribution Across Provider Cores for Providers of MBGP MVPNs
Configuring SPT-Only Mode for Multiprotocol BGP-Based Multicast VPNs
neighbor-policy
IN THIS SECTION
Syntax | 1721
Description | 1722
Options | 1722
Syntax
neighbor-policy [ policy-names ];
Hierarchy Level
Description
Options
Release Information
RELATED DOCUMENTATION
nexthop-hold-time
IN THIS SECTION
Syntax | 1723
Description | 1723
Options | 1723
Syntax
nexthop-hold-time milliseconds;
Hierarchy Level
Description
Accumulate outgoing interface changes in order to perform bulk updates to the forwarding table and the
routing table. Delete the statement to turn off bulk updates.
Options
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1724
Description | 1725
Options | 1725
Syntax
next-hop next-hop-address;
Hierarchy Level
Description
Configure the specific next-hop address for the PIM group source.
Options
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1726
Description | 1726
Syntax
no-adaptation;
Hierarchy Level
Description
Configure BFD sessions not to adapt to changing network conditions. We recommend that you not
disable BFD adaptation unless your network specifically requires fixed BFD timer values.
Release Information
RELATED DOCUMENTATION
no-bidirectional-mode
IN THIS SECTION
Syntax | 1727
Description | 1727
Default | 1728
Syntax
no-bidirectional-mode;
Hierarchy Level
Description
Disable forwarding for bidirectional PIM routes during graceful restart recovery, both in cases of a
routing protocol process (rpd) restart and graceful Routing Engine switchover.
Bidirectional PIM accepts packets for a bidirectional route on multiple interfaces. This means that some
topologies might develop multicast routing loops if all PIM neighbors are not synchronized with regard
to the identity of the designated forwarder (DF) on each link. If one router is forwarding without actively
participating in DF elections, particularly after unicast routing changes, multicast routing loops might
occur.
1728
If graceful restart for PIM is enabled and the forwarding of packets on bidirectional routes is disallowed
(by including the no-bidirectional-mode statement in the configuration), PIM behaves conservatively to
avoid multicast routing loops during the recovery period. When the routing protocol process (rpd)
restarts, all bidirectional routes are deleted. After graceful restart has completed, the routes are re-
added, based on the converged unicast and bidirectional PIM state. While graceful restart is active,
bidirectional multicast flows drop packets.
Default
If graceful restart for PIM is enabled and the bidirectional PIM is enabled, the default graceful restart
behavior is to continue forwarding packets on bidirectional routes. If the gracefully restarting router was
serving as a DF for some interfaces to rendezvous points, the restarting router sends a DF Winner
message with a metric of 0 on each of these RP interfaces. This ensures that a neighbor router does not
become the DF due to unicast topology changes that might occur during the graceful restart period.
Sending a DF Winner message with a metric of 0 prevents another PIM neighbor from assuming the DF
role until after graceful restart completes. When graceful restart completes, the gracefully restarted
router sends another DF Winner message with the actual converged unicast metric.
NOTE: Graceful Routing Engine switchover operates independently of the graceful restart
behavior. If graceful Routing Engine switchover is configured without graceful restart, all PIM
routes for all modes are deleted when the rpd process restarts. If graceful Routing Engine
switchover is configured with graceful restart, the behavior is the same as described here, except
that the recovery happens on the Routing Engine that assumes primary role.
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1729
Description | 1729
Syntax
no-dr-flood;
Hierarchy Level
Description
Disable default flooding of multicast data on the PIM designated router port.
Release Information
no-qos-adjust
IN THIS SECTION
Syntax | 1730
Description | 1731
Syntax
no-qos-adjust;
Hierarchy Level
Description
Disable hierarchical bandwidth adjustment for all subscriber interfaces that are identified by their MLD
or IGMP request from a specific multicast interface.
Release Information
RELATED DOCUMENTATION
offer-period
IN THIS SECTION
Syntax | 1732
Description | 1732
Options | 1732
Syntax
offer-period milliseconds;
Hierarchy Level
Description
Configure the designated forwarder (DF) election offer period for bidirectional PIM. When a DF election
Offer or Winner message fails to be received, the message is retransmitted. The offer-period statement
modifies the interval between repeated DF election messages. The robustness-count statement
determines the minimum number of DF election messages that must fail to be received for DF election
to fail. To prevent routing loops, all routing devices on the link must have a consistent view of the DF.
When the DF election fails because DF election messages are not received, forwarding on bidirectional
PIM routes is suspended.
If a router receives from a neighbor a better offer than its own, the router stops participating in the
election for a period of robustness-count * offer-period. Eventually, all routers except the best
candidate stop sending Offer messages.
Options
• Default: 100
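For example, a sketch of tuning DF election timers on a bidirectional PIM interface; the interface name and values are illustrative:

protocols {
    pim {
        interface ge-0/0/0.0 {
            bidirectional {
                df-election {
                    offer-period 200;
                    robustness-count 4;
                }
            }
        }
    }
}

With these values, a router that hears a better offer stops participating in the election for robustness-count * offer-period (here, 800 milliseconds).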
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1733
Description | 1734
Syntax
oif-map map-name;
Hierarchy Level
Description
Associate an outgoing interface (OIF) map with the IGMP interface. The OIF map is a routing policy
statement that can contain multiple terms.
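For example, a sketch of attaching a map to an IGMP interface; my-oif-map must already be defined as a policy statement at the [edit policy-options] hierarchy level, and the names here are illustrative:

protocols {
    igmp {
        interface ge-1/0/0.0 {
            oif-map my-oif-map;
        }
    }
}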
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1735
Description | 1735
Syntax
oif-map map-name;
Hierarchy Level
Description
Associate an outgoing interface (OIF) map to an MLD logical interface. The OIF map is a routing policy
statement that can contain multiple terms.
Release Information
RELATED DOCUMENTATION
omit-wildcard-address
IN THIS SECTION
Syntax | 1736
Description | 1736
Syntax
omit-wildcard-address;
Hierarchy Level
Description
[none specified]
Release Information
IN THIS SECTION
Syntax | 1737
Description | 1738
Default | 1738
Syntax
override;
Hierarchy Level
Description
When you configure both static RP mapping and dynamic RP mapping (such as auto-RP) in a single
routing instance, allow the static mapping to take precedence for a given group range, and allow
dynamic RP mapping for all other groups.
Default
Release Information
RELATED DOCUMENTATION
override-interval
IN THIS SECTION
Syntax | 1739
Description | 1739
Options | 1739
Syntax
override-interval milliseconds;
Hierarchy Level
Description
Set the maximum time in milliseconds to delay sending override join messages for a multicast network
that has join suppression enabled. When a router or switch sees a prune message for a join it is currently
suppressing, it waits for the interval specified by the override timer before it sends an override join
message.
Options
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1740
Description | 1741
Options | 1741
Syntax
p2mp {
no-rsvp-tunneling;
recursive;
root-address root-address;
}
Hierarchy Level
Description
Options
no-rsvp-tunneling—(Optional) Disable LDP point-to-multipoint LSPs from using RSVP-TE LSPs for
tunneling, and use LDP paths instead.
Starting in Junos OS Release 12.3R1, Junos OS provides support for Multipoint LDP (M-
LDP) for Targeted LDP (T-LDP) sessions with unicast replication, in addition to link
sessions. As a result, the default behavior of M-LDP over RSVP tunneling is similar to
unicast LDP. However, because T-LDP is chosen over LDP and link sessions to signal
point-to-multipoint LSPs, the no-rsvp-tunneling option enables LDP natively throughout
the network.
Release Information
RELATED DOCUMENTATION
Example: Configuring Point-to-Multipoint LDP LSPs as the Data Plane for Intra-AS MBGP MVPNs |
781
Point-to-Multipoint LSPs Overview
passive (IGMP)
IN THIS SECTION
Syntax | 1742
Description | 1743
Options | 1743
Syntax
Hierarchy Level
Description
When configured for passive IGMP mode, IGMP runs on the interface but does not send or receive
IGMP control traffic such as reports, queries, and leave messages. You can, however, configure
exceptions that allow the interface to send or receive certain control traffic.
NOTE: When an interface is configured for IGMP passive mode, Junos OS no longer processes static
IGMP group membership on the interface.
Options
You can selectively activate up to two out of the three available options for the passive statement while
keeping the other functions passive (inactive). Activating all three options would be equivalent to not
using the passive statement.
Release Information
allow-receive, send-general-query, and send-group-query options were added in Junos OS Release 10.0.
RELATED DOCUMENTATION
passive (MLD)
IN THIS SECTION
Syntax | 1744
Description | 1744
Options | 1745
Syntax
Hierarchy Level
Description
Specify that MLD runs on the interface but either does not send and receive control traffic or
selectively sends and receives control traffic such as MLD reports, queries, and leave messages.
NOTE: You can selectively activate up to two out of the three available options for the passive
statement while keeping the other functions passive (inactive). Activating all three options is
equivalent to not using the passive statement.
Options
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1746
Description | 1746
Options | 1747
Syntax
peer address {
disable;
active-source-limit {
maximum number;
threshold number;
}
authentication-key peer-key;
default-peer;
export [ policy-names ];
import [ policy-names ];
local-address address;
traceoptions {
file filename <files number> <size size> <world-readable | no-world-readable>;
flag flag <flag-modifier> <disable>;
}
}
Hierarchy Level
Description
Define an MSDP peering relationship. An MSDP routing device must know which routing devices are its
peers. You define the peer relationships explicitly by configuring the neighboring routing devices that are
the MSDP peers of the local routing device. After peer relationships are established, the MSDP peers
exchange messages to advertise active multicast sources. To configure multiple MSDP peers, include
multiple peer statements.
By default, the peer's options are identical to the global or group-level MSDP options. To override the
global or group-level options, include peer-specific options within the "peer (Protocols MSDP)" on page
1745 statement.
At least one peer must be configured for MSDP to function. You must configure address and
local-address.
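For example, a sketch with two peers, one of which overrides the global options with default-peer; all addresses are illustrative:

protocols {
    msdp {
        local-address 192.0.2.1;
        peer 192.0.2.2;
        peer 198.51.100.2 {
            default-peer;
        }
    }
}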
Options
Release Information
RELATED DOCUMENTATION
pim
IN THIS SECTION
Syntax | 1748
Description | 1753
Default | 1753
Syntax
pim {
disable;
assert-timeout seconds;
dense-groups {
addresses;
}
dr-election-on-p2p;
export;
family (inet | inet6) {
disable;
}
graceful-restart {
disable;
no-bidirectional-mode;
restart-duration seconds;
}
import [ policy-names ];
interface (Protocols PIM) interface-name {
family (inet | inet6) {
disable;
}
bfd-liveness-detection {
authentication {
algorithm algorithm-name;
key-chain key-chain-name;
loose-check;
}
detection-time {
threshold milliseconds;
}
minimum-interval milliseconds;
minimum-receive-interval milliseconds;
multiplier number;
no-adaptation;
transmit-interval {
minimum-interval milliseconds;
threshold milliseconds;
}
version (0 | 1 | automatic);
}
accept-remote-source;
disable;
bidirectional {
df-election {
backoff-period milliseconds;
offer-period milliseconds;
robustness-count number;
}
}
family (inet | inet6) {
disable;
}
hello-interval seconds;
mode (bidirectional-sparse | bidirectional-sparse-dense | dense |
sparse | sparse-dense);
neighbor-policy [ policy-names ];
override-interval milliseconds;
priority number;
propagation-delay milliseconds;
reset-tracking-bit;
version version;
}
join-load-balance;
join-prune-timeout;
mdt {
data-mdt-reuse;
group-range multicast-prefix;
threshold {
group group-address {
source source-address {
rate threshold-rate;
}
}
tunnel-limit limit;
}
}
mvpn {
autodiscovery {
inet-mdt;
}
}
nonstop-routing;
override-interval milliseconds;
propagation-delay milliseconds;
reset-tracking-bit;
rib-group group-name;
rp {
auto-rp {
(announce | discovery | mapping);
(mapping-agent-election | no-mapping-agent-election);
}
bidirectional {
address address {
group-ranges {
destination-ip-prefix</prefix-length>;
}
hold-time seconds;
priority number;
}
}
bootstrap {
family (inet | inet6) {
export [ policy-names ];
import [ policy-names ];
priority number;
}
}
bootstrap-import [ policy-names ];
bootstrap-export [ policy-names ];
bootstrap-priority number;
dr-register-policy [ policy-names ];
embedded-rp {
group-ranges {
destination-ip-prefix</prefix-length>;
}
maximum-rps limit;
}
group-rp-mapping {
family (inet | inet6) {
log-interval seconds;
maximum limit;
threshold value;
}
}
log-interval seconds;
maximum limit;
threshold value;
}
}
local {
family (inet | inet6) {
address address;
anycast-pim {
rp-set {
address address <forward-msdp-sa>;
}
disable;
local-address address;
}
group-ranges {
destination-ip-prefix</prefix-length>;
}
hold-time seconds;
override;
priority number;
}
}
register-limit {
family (inet | inet6) {
log-interval seconds;
maximum limit;
threshold value;
}
}
log-interval seconds;
maximum limit;
threshold value;
}
}
rp-register-policy [ policy-names ];
spt-threshold {
infinity [ policy-names ];
}
static {
address address {
override;
version version;
group-ranges {
destination-ip-prefix</prefix-length>;
}
}
}
}
rpf-selection {
group group-address {
source source-address {
next-hop next-hop-address;
}
wildcard-source {
next-hop next-hop-address;
}
}
prefix-list prefix-list-addresses {
source source-address {
next-hop next-hop-address;
}
wildcard-source {
next-hop next-hop-address;
}
}
sglimit {
family (inet | inet6) {
log-interval seconds;
maximum limit;
threshold value;
}
}
log-interval seconds;
maximum limit;
threshold value;
}
}
traceoptions {
file filename <files number> <size size> <world-readable | no-world-readable>;
Hierarchy Level
Description
Default
Release Information
RELATED DOCUMENTATION
Example: Configuring Data MDTs and Provider Tunnels Operating in Any-Source Multicast Mode |
690
pim-asm
IN THIS SECTION
Syntax | 1754
Description | 1754
Syntax
pim-asm {
group-address (Routing Instances) address;
}
Hierarchy Level
Description
Specify a Protocol Independent Multicast (PIM) sparse mode provider tunnel for an MBGP MVPN or for
a draft-rosen MVPN.
Release Information
pim-snooping
IN THIS SECTION
Syntax | 1755
Description | 1756
Default | 1756
Options | 1756
Syntax
pim-snooping {
no-dr-flood;
traceoptions{
file [filename files | no-world-readable | size | world-readable];
flag [all | general | hello | join | normal | packets | policy |
prune | route | state | task | timer];
}
vlan <vlan-id> {
no-dr-flood;
}
}
Hierarchy Level
Description
PIM snooping snoops PIM hello and join/prune packets on each interface to find interested multicast
receivers and then populates the multicast forwarding tree with the information. PIM snooping is
configured on PE routers connected using pseudowires and ensures that no new PIM packets are
generated in the VPLS (with the exception of PIM messages sent through LDP on pseudowires). PIM
snooping differs from PIM proxying in that PIM snooping floods both the PIM hello and join/prune
packets in the VPLS, whereas PIM proxying only floods hello packets.
Default
Options
no-dr-flood Disable default flooding of multicast data on the PIM-designated router port.
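For illustration only (the routing-instance name is a placeholder, not from this reference page), PIM snooping with DR-port flooding disabled might be configured in a VPLS instance as follows:

routing-instances {
    customer-vpls {
        protocols {
            pim-snooping {
                no-dr-flood;
            }
        }
    }
}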
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1757
Description | 1758
Syntax
pim-ssm {
group-address (Routing Instances) address;
tunnel-source address;
}
Hierarchy Level
Description
Configure the PIM source-specific multicast (SSM) provider tunnel. Use family inet6 pim-ssm for Rosen
7 running on IPv6. For Rosen 7 on IPv4, use family inet pim-ssm. The customer data MDT can be
configured on IPv4 or IPv6, but not both (the provider space always runs on IPv4). Enable Rosen IPv4
before enabling Rosen IPv6.
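As a sketch only (the instance name and addresses are hypothetical, and the provider-space addresses are shown as IPv4 per the note above), enabling the SSM provider tunnel for Rosen 7 on IPv6 might look like this:

routing-instances {
    vpn-a {
        provider-tunnel {
            family inet6 {
                pim-ssm {
                    group-address 232.1.1.1;
                    tunnel-source 192.0.2.1;
                }
            }
        }
    }
}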
Release Information
In Junos OS Release 17.3R1, the pim-ssm hierarchy was moved from provider-tunnel to the provider-
tunnel family inet and provider-tunnel family inet6 hierarchies as part of an upgrade to add IPv6
support for default multicast distribution tree (MDT) in Rosen 7, and data MDT for Rosen 6 and Rosen 7.
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1759
Description | 1759
Syntax
pim-ssm {
group-range multicast-prefix;
}
Hierarchy Level
Description
Establish the multicast group address range to use for creating MBGP MVPN source-specific multicast
selective PMSI tunnels.
Release Information
pim-to-igmp-proxy
IN THIS SECTION
Syntax | 1760
Description | 1760
Syntax
pim-to-igmp-proxy {
upstream-interface [ interface-names ];
}
Hierarchy Level
Description
Use the pim-to-igmp-proxy statement to have Internet Group Management Protocol (IGMP) forward
IPv4 multicast traffic across Protocol Independent Multicast (PIM) sparse mode domains.
Configure the rendezvous point (RP) routing device that resides between a customer edge-facing PIM
domain and a core-facing PIM domain to translate PIM join or prune messages into corresponding IGMP
report or leave messages. The routing device then transmits the report or leave messages by proxying
them to one or two upstream interfaces that you configure on the RP routing device.
On the IGMP upstream interface(s) used to send proxied PIM traffic, set the IP address to be the lowest
IP address on the network to ensure that the proxying router is always the IGMP querier. Also, do not
enable PIM on the IGMP upstream interface(s).
The pim-to-igmp-proxy statement is not supported for routing instances configured with multicast
VPNs.
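A minimal sketch of this configuration (the interface names are placeholders, and placement under the PIM rp hierarchy on the RP routing device is an assumption consistent with the description above):

protocols {
    pim {
        rp {
            pim-to-igmp-proxy {
                upstream-interface [ ge-0/0/1.0 ge-0/0/2.0 ];
            }
        }
    }
}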
Release Information
RELATED DOCUMENTATION
pim-to-mld-proxy
IN THIS SECTION
Syntax | 1761
Description | 1762
Syntax
pim-to-mld-proxy {
upstream-interface [ interface-names ];
}
Hierarchy Level
Description
Configure the rendezvous point (RP) routing device that resides between a customer edge–facing
Protocol Independent Multicast (PIM) domain and a core-facing PIM domain to translate PIM join or
prune messages into corresponding Multicast Listener Discovery (MLD) report or leave messages. The
routing device then transmits the report or leave messages by proxying them to one or two upstream
interfaces that you configure on the RP routing device. Including the pim-to-mld-proxy statement
enables you to use MLD to forward IPv6 multicast traffic across the PIM sparse mode domains.
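A parallel sketch for IPv6 (the interface name is a placeholder, and placement under the PIM rp hierarchy on the RP routing device is an assumption consistent with the description above):

protocols {
    pim {
        rp {
            pim-to-mld-proxy {
                upstream-interface [ ge-0/0/1.0 ];
            }
        }
    }
}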
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1763
Description | 1763
Options | 1763
Syntax
policy [ policy-names ];
Hierarchy Level
Description
Options
Release Information
IN THIS SECTION
Syntax | 1764
Description | 1765
Syntax
policy policy-name;
Hierarchy Level
Description
When you configure multicast-only fast reroute (MoFRR), apply a routing policy that filters for a
restricted set of multicast streams to be affected by your MoFRR configuration. You can apply filters
that are based on source or group addresses.
For example:
routing-options {
multicast {
stream-protection {
policy mofrr-select;
}
}
}
policy-statement mofrr-select {
term A {
from {
source-address-filter 225.1.1.1/32 exact;
}
then {
accept;
}
}
term B {
from {
source-address-filter 226.0.0.0/8 orlonger;
}
then {
accept;
}
}
term C {
from {
source-address-filter 227.1.1.0/24 orlonger;
source-address-filter 227.4.1.0/24 orlonger;
source-address-filter 227.16.1.0/24 orlonger;
}
then {
accept;
}
}
term D {
from {
source-address-filter 227.1.1.1/32 exact;
}
then {
reject; #MoFRR disabled
}
}
term E {
from {
route-filter 227.1.1.0/24 orlonger;
}
then accept;
}
...
}
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1767
Description | 1767
Syntax
policy [policy-name];
Hierarchy Level
Description
Create a filter policy. The configured device checks the policy configuration to determine whether or not
to apply "rpf-vector" on page 1860 to (S,G).
This example policy shows terms that match on both source and group, on source only, and on group only.
policy-statement pim-rpf-vector-example {
term A {
from {
source-address-filter <filter A>;
}
then {
accept;
}
}
term B {
from {
source-address-filter <filter A>;
route-filter <filter D>;
}
then {
p2mp-lsp-root {
address root address;
}
accept;
}
}
term C {
from {
route-filter <filter D>;
}
then {
accept;
}
}
...
}
set policy-options policy-statement rpf-vector-policy term 1 then accept
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1770
Description | 1770
Options | 1770
Syntax
policy [ policy-names ];
Hierarchy Level
Description
Options
Release Information
RELATED DOCUMENTATION
prefix
IN THIS SECTION
Syntax | 1771
Description | 1772
Options | 1772
Syntax
prefix destination-prefix;
Hierarchy Level
Description
Options
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1773
Description | 1773
Options | 1773
Syntax
prefix-list prefix-list-addresses {
source source-address {
next-hop next-hop-address;
}
wildcard-source {
next-hop next-hop-address;
}
}
Hierarchy Level
Description
Options
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1774
Description | 1775
Default | 1775
Syntax
primary;
Hierarchy Level
Description
In a multiprotocol BGP (MBGP) multicast VPN (MVPN), configure the virtual tunnel (VT) interface to be
used as the primary interface for multicast traffic.
Junos OS supports up to eight VT interfaces configured for multicast in a routing instance to provide
redundancy for MBGP (next-generation) MVPNs. This support is for RSVP point-to-multipoint provider
tunnels as well as multicast Label Distribution Protocol (MLDP) provider tunnels. This feature works for
extranets as well.
This statement allows you to configure one of the VT interfaces to be the primary interface, which is
always used if it is operational. If a VT interface is configured as the primary, it becomes the nexthop
that is used for traffic coming in from the core on the label-switched path (LSP) into the routing instance.
When a VT interface is configured to be primary and the VT interface is used for both unicast and
multicast traffic, only the multicast traffic is affected.
If no VT interface is configured to be the primary or if the primary VT interface is unusable, one of the
usable configured VT interfaces is chosen to be the nexthop that is used for traffic coming in from the
core on the LSP into the routing instance. If the VT interface in use goes down for any reason, another
usable configured VT interface in the routing instance is chosen. When the VT interface in use changes,
all multicast routes in the instance also switch their reverse-path forwarding (RPF) interface to the new
VT interface to allow the traffic to be received.
To realize the full benefit of redundancy, we recommend that when you configure multiple VT interfaces,
at least one of the VT interfaces be on a different Tunnel PIC from the other VT interfaces. However,
Junos OS does not enforce this.
Default
If you omit this statement, Junos OS chooses a VT interface to be the active interface for multicast
traffic.
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1776
Description | 1776
Options | 1777
Syntax
primary address;
Hierarchy Level
Description
Statically set the primary upstream multicast hop (UMH) for type 7 (S,G) routes.
Options
Release Information
RELATED DOCUMENTATION
priority (Bootstrap)
IN THIS SECTION
Syntax | 1778
Description | 1778
Options | 1778
Syntax
priority number;
Hierarchy Level
Description
Options
number—Routing device’s priority for becoming the bootstrap router. A higher value corresponds to a
higher priority.
• Default: 0 (The routing device has the least likelihood of becoming the bootstrap router and sends
packets with a priority of 0.)
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1779
Description | 1780
Options | 1780
Syntax
priority number;
Hierarchy Level
Description
Configure the routing device’s likelihood of being elected as the designated router (DR). DR priority is
specific to PIM sparse mode; per RFC 3973, DR priority cannot be configured explicitly in PIM dense
mode (PIM-DM) with IGMPv2, and PIM-DM supports DRs only with IGMPv1.
Options
number—Routing device’s priority for becoming the designated router. A higher value corresponds to a
higher priority.
• Default: 1 (Each routing device has an equal probability of becoming the DR.)
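For example (the interface name and priority value are illustrative only), raising an interface's DR priority so it is preferred in the DR election might look like this:

protocols {
    pim {
        interface ge-0/0/0.0 {
            priority 200;
        }
    }
}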
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1781
Description | 1781
Options | 1781
Syntax
priority number;
Hierarchy Level
Description
For PIM-SM, configure this routing device’s priority for becoming an RP.
For bidirectional PIM, configure this RP address’ priority for becoming an RP.
The bootstrap router uses this field when selecting the list of candidate rendezvous points to send in the
bootstrap message. A smaller number increases the likelihood that the routing device or RP address
becomes the RP. A priority value of 0 means that the bootstrap router can override the group range
being advertised by the candidate RP.
Options
• Default: 1
Release Information
RELATED DOCUMENTATION
process-non-null-as-null-register
IN THIS SECTION
Syntax | 1782
Description | 1783
Syntax
process-non-null-as-null-register;
Hierarchy Level
Description
More Information
In typical operation, for PIM any-source multicast (ASM), all *,G PIM joins travel hop-by-hop towards the
RP, where they ultimately end. When the FHR receives its first traffic, it forms a register state with the
RP in the network for the corresponding S,G. It does this by sending a PIM non-null register to form a
multicast route with the downstream encapsulation interface. The RP decapsulates the non-null register
and forms a multicast route with the upstream decapsulation device. In this way, multicast data traffic
flows across the encapsulation/decapsulation tunnel interface, from the FHR to the RP, to all the
downstream receivers until the RP has formed the S,G multicast tree in the direction of the source.
Without process-non-null-as-null-register enabled, for PIM ASM, PTX10003 devices can act only as a
PIM transit router or last-hop router. These devices can receive a PIM join from downstream interfaces
and propagate the join towards the RP, or they can receive an IGMP/MLD join and propagate it
towards a PIM RP, but they cannot act as a PIM RP themselves, nor can they form a register state
machine with the PIM FHR in the network.
Release Information
RELATED DOCUMENTATION
propagation-delay
IN THIS SECTION
Syntax | 1784
Description | 1784
Options | 1785
Syntax
propagation-delay milliseconds;
Hierarchy Level
Description
Set a delay for implementing a PIM prune message on the upstream routing device on a multicast
network for which join suppression has been enabled. The routing device waits for the prune pending
period to detect whether a join message is currently being suppressed by another routing device.
Options
milliseconds—Interval for the prune pending timer, which is the sum of the propagation-delay value and
the override-interval value.
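For example (the interface name and value are illustrative, and interface-level placement is an assumption consistent with per-interface LAN prune delay settings), a 500-millisecond propagation delay might be set as follows:

protocols {
    pim {
        interface ge-0/0/0.0 {
            propagation-delay 500;
        }
    }
}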
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1786
Description | 1786
Syntax
promiscuous-mode;
Hierarchy Level
Description
Specify that the interface accepts IGMP reports from hosts on any subnetwork. When you enable
promiscuous mode, all routing devices on the Ethernet segment must be configured with the
promiscuous-mode statement. Otherwise, only the interface configured with the lowest IPv4 address
acts as the IGMP querier for the Ethernet segment.
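A minimal sketch (the interface name is a placeholder):

protocols {
    igmp {
        interface ge-0/0/0.0 {
            promiscuous-mode;
        }
    }
}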
Release Information
RELATED DOCUMENTATION
provider-tunnel
IN THIS SECTION
Syntax | 1787
Description | 1791
Options | 1792
Syntax
provider-tunnel {
external-controller pccd;
family {
inet {
ingress-replication {
create-new-ucast-tunnel;
label-switched-path-template {
(default-template | lsp-template-name);
}
}
ldp-p2mp;
mdt {
data-mdt-reuse;
group-range multicast-prefix;
threshold {
group group-address {
source source-address {
rate threshold-rate;
}
}
}
tunnel-limit limit;
}
pim-asm {
static-lsp lsp-name;
}
ingress-replication {
create-new-ucast-tunnel;
label-switched-path-template {
(default-template | lsp-template-name);
}
}
inter-as{
ingress-replication {
create-new-ucast-tunnel;
label-switched-path-template {
(default-template | lsp-template-name);
}
}
inter-region-segmented {
fan-out number;
threshold kilobits;
}
ldp-p2mp;
rsvp-te {
label-switched-path-template {
(default-template | lsp-template-name);
}
}
}
ldp-p2mp;
pim-asm {
group-address (Routing Instances) address;
}
pim-ssm {
group-address (Routing Instances) address;
}
rsvp-te {
label-switched-path-template {
(default-template | lsp-template-name);
}
static-lsp lsp-name;
}
selective {
group multicast-prefix/prefix-length {
source ip-prefix/prefix-length {
ldp-p2mp;
ingress-replication {
create-new-ucast-tunnel;
label-switched-path-template {
(default-template | lsp-template-name);
}
}
pim-ssm {
group-range multicast-prefix;
}
rsvp-te {
label-switched-path-template {
(default-template | lsp-template-name);
}
static-lsp point-to-multipoint-lsp-name;
}
threshold-rate kbps;
}
wildcard-source {
pim-ssm {
group-range multicast-prefix;
}
rsvp-te {
label-switched-path-template {
(default-template | lsp-template-name);
}
static-lsp point-to-multipoint-lsp-name;
}
threshold-rate kbps;
}
}
tunnel-limit number;
wildcard-group-inet {
wildcard-source {
pim-ssm {
group-range multicast-prefix;
}
rsvp-te {
label-switched-path-template {
(default-template | lsp-template-name);
}
static-lsp lsp-name;
}
threshold-rate number;
}
}
wildcard-group-inet6 {
wildcard-source {
pim-ssm {
group-range multicast-prefix;
}
rsvp-te {
label-switched-path-template {
(default-template | lsp-template-name);
}
static-lsp lsp-name;
}
threshold-rate number;
}
}
}
}
Hierarchy Level
Description
Configure virtual private LAN service (VPLS) flooding of unknown unicast, broadcast, and multicast
traffic using point-to-multipoint LSPs. Also configure point-to-multipoint LSPs for MBGP MVPNs.
Starting in Junos OS Release 21.1R1, the following provider tunnel types are supported on QFX10002,
QFX10008, and QFX10016 switches:
• Ingress Replication
A point-to-multipoint (P2MP) LSP is an MPLS LSP with a single source and multiple destinations. By
taking advantage of the MPLS packet replication capability of the network, point-to-multipoint LSPs
avoid unnecessary packet replication at the ingress router. Packet replication takes place only when
packets are forwarded to two or more different destinations requiring different network paths.
• A P2MP LSP enables the use of MPLS for point-to-multipoint data distribution. This functionality is
similar to that provided by IP multicast.
• Branch LSPs can be added and removed without disrupting traffic.
• A node can be configured as both a transit and an egress router for different branch LSPs of the same
point-to-multipoint LSP.
• LSPs can be configured statically, dynamically, or as a combination of both static and dynamic LSPs.
P2MP LSPs are used to carry IP unicast and multicast traffic.
The following tunnel types are not supported on QFX10002, QFX10008, and QFX10016 switches:
• PIM-SSM tree
• PIM-SM tree
• PIM-Bidir tree
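As a sketch only (the instance name, prefixes, and rate are hypothetical), a selective RSVP-TE provider tunnel that moves a group onto a dedicated tunnel above a traffic threshold might be configured as:

routing-instances {
    vpn-a {
        provider-tunnel {
            selective {
                group 232.1.1.0/24 {
                    source 10.0.0.0/8 {
                        rsvp-te {
                            label-switched-path-template {
                                default-template;
                            }
                        }
                        threshold-rate 100;
                    }
                }
            }
        }
    }
}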
Options
external- (Optional) Specifies that point-to-multipoint LSP and (S,G) for MVPN can be provided by
controller an external controller.
pccd
This option enables an external controller to dynamically configure (S,G) and point-to-
multipoint LSP for MVPN. This is for only selective types. When not configured for a
particular MVPN routing-instance, the external controller is not allowed to configure
(S,G) and map point-to-multipoint LSP to that (S,G).
Release Information
In Junos OS Release 17.3R1, the mdt hierarchy was moved from provider-tunnel to the provider-tunnel
family inet and provider-tunnel family inet6 hierarchies as part of an upgrade to add IPv6 support for
default MDT in Rosen 7, and data MDT for Rosen 6 and Rosen 7. The provider-tunnel mdt hierarchy is
now hidden for backward compatibility with existing scripts.
The inter-as statement and its substatements were added in Junos OS Release 19.1R1 to support next
generation MVPN inter-AS option B.
RELATED DOCUMENTATION
proxy
IN THIS SECTION
Syntax | 1794
Description | 1794
Default | 1794
Syntax
proxy {
source-address ip-address;
}
Hierarchy Level
Description
Configure proxy mode and options, including source address. All the queries generated by IGMP
snooping are sent using 0.0.0.0 as the source address in order to avoid participating in IGMP querier
election. Also, all reports generated by IGMP snooping are sent with 0.0.0.0 as the source address
unless there is a configured source address to use.
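For example (the VLAN name and source address are placeholders, and the igmp-snooping vlan hierarchy is an assumption), configuring a source address for proxy-generated queries and reports might look like this:

protocols {
    igmp-snooping {
        vlan v100 {
            proxy {
                source-address 10.1.1.1;
            }
        }
    }
}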
Default
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1795
Description | 1795
Default | 1796
Options | 1796
Syntax
Hierarchy Level
Description
On EX Series switches that do not use the Enhanced Layer 2 Software (ELS) configuration style, this
statement is used only to set proxy mode for multicast VLAN registration (MVR) on a VLAN acting as a
data-forwarding source (an MVLAN).
On ELS EX Series switches, this statement is available to enable IGMP snooping proxy mode either with
or without MVR configuration. When you configure this option for a VLAN without MVR, the switch
acts as an IGMP proxy to the multicast router for ports in that VLAN. When you configure this option
with MVR on an MVLAN, the switch acts as an IGMP proxy between the multicast router and hosts in
any MVR receiver VLANs associated with the MVLAN. This mode is configured on the MVLAN only, not
on MVR receiver VLANs.
NOTE: ELS switches also support MVR proxy mode, which is configured on individual MVR
receiver VLANs associated with an MVLAN rather than on an MVLAN (unlike IGMP snooping
proxy mode). To enable MVR proxy mode on an MVR receiver VLAN on ELS switches, use the
"mode" on page 1674 statement with the proxy option.
See "Understanding Multicast VLAN Registration" on page 243 for details on MVR modes.
Default
Disabled
Options
Release Information
RELATED DOCUMENTATION
qualified-vlan
IN THIS SECTION
Syntax | 1797
Description | 1797
Options | 1797
Syntax
qualified-vlan vlan-id;
Hierarchy Level
Description
Options
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1798
Description | 1799
Options | 1799
Syntax
query-interval seconds;
Hierarchy Level
Description
Options
seconds—Time interval. This value must be greater than the interval set for query-response-interval.
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1800
Description | 1801
Options | 1801
Syntax
query-interval seconds;
Hierarchy Level
Description
Specify how often the querier routing device sends general host-query messages.
Options
seconds—Time interval.
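For instance (values are illustrative, and the protocols igmp hierarchy is an assumption), the query interval must exceed the query response interval:

protocols {
    igmp {
        query-interval 125;
        query-response-interval 10;
    }
}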
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1802
Description | 1802
Options | 1802
Syntax
query-interval seconds;
Hierarchy Level
Description
Specify how often the querier router sends IGMP general host-query messages through an Automatic
Multicast Tunneling (AMT) interface.
Options
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1803
Description | 1803
Options | 1804
Syntax
query-interval seconds;
Hierarchy Level
Description
Specify how often the querier router sends general host-query messages.
Options
seconds—Time interval.
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1805
Description | 1805
Options | 1805
Syntax
query-last-member-interval seconds;
Hierarchy Level
Description
Options
• Default: 1 second
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1806
Description | 1807
Options | 1807
Syntax
query-last-member-interval seconds;
Hierarchy Level
Description
Specify how often the querier routing device sends group-specific query messages.
Options
• Default: 1 second
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1808
Description | 1808
Options | 1808
Syntax
query-last-member-interval seconds;
Hierarchy Level
Description
Specify how often the querier routing device sends group-specific query messages.
Options
• Range: 0.1 through 0.9, then in 1-second intervals from 1 through 1024
• Default: 1 second
Release Information
Support at the [edit protocols mld-snooping vlan vlan-id] and the [edit routing-instances instance-name
protocols mld-snooping vlan vlan-id] hierarchy levels introduced in Junos OS Release 13.3 for EX Series
switches.
Support at the [edit protocols mld-snooping vlan vlan-id] hierarchy level introduced in Junos OS Release
18.1R1 for the SRX1500 devices.
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1810
Description | 1810
Options | 1810
Syntax
query-response-interval seconds;
Hierarchy Level
Description
Specify how long to wait to receive a response to a specific query message from a host.
Options
seconds—Time interval. This interval should be less than the host-query interval.
• Default: 10 seconds
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1812
Description | 1812
Options | 1812
Syntax
query-response-interval seconds;
Hierarchy Level
Description
Specify how long the querier routing device waits to receive a response to a host-query message from a
host.
Options
seconds—The query response interval must be less than the query interval.
• Default: 10 seconds
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1813
Description | 1813
Options | 1813
Syntax
query-response-interval seconds;
Hierarchy Level
Description
Specify how long the IGMP querier router waits to receive a response to a host query message from a
host through an Automatic Multicast Tunneling (AMT) interface.
Options
seconds—Time to wait to receive a response to a host query message. The query response interval must
be less than the query interval.
• Default: 10 seconds
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1814
Description | 1815
Options | 1815
Syntax
query-response-interval seconds;
Hierarchy Level
Description
Specify how long the querier routing device waits to receive a response to a host-query message from a
host.
Options
seconds—Time interval.
• Default: 10 seconds
Release Information
Support at the [edit protocols mld-snooping vlan vlan-id] and the [edit routing-instances instance-name
protocols mld-snooping vlan vlan-id] hierarchy levels introduced in Junos OS Release 13.3 for EX Series
switches.
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1816
Description | 1816
Options | 1816
Syntax
rate threshold-rate;
Hierarchy Level
Description
Options
• Default: 10 Kbps
Release Information
Statement introduced before Junos OS Release 7.4. In Junos OS Release 17.3R1, the mdt hierarchy was
moved from provider-tunnel to the provider-tunnel family inet and provider-tunnel family inet6
hierarchies as part of an upgrade to add IPv6 support for default MDT in Rosen 7, and data MDT for
Rosen 6 and Rosen 7. The provider-tunnel mdt hierarchy is now hidden for backward compatibility with
existing scripts.
RELATED DOCUMENTATION
Example: Configuring Data MDTs and Provider Tunnels Operating in Source-Specific Multicast Mode | 696
Example: Configuring Data MDTs and Provider Tunnels Operating in Any-Source Multicast Mode | 690
receiver
IN THIS SECTION
Syntax | 1818
Description | 1818
Default | 1818
Options | 1819
Syntax
receiver {
install;
mode (proxy | transparent);
(source-list | source-vlans) vlan-list;
translate;
}
Hierarchy Level
Description
Configure a VLAN as a multicast receiver VLAN of a multicast source VLAN (MVLAN) using the
multicast VLAN registration (MVR) feature.
You must associate an MVR receiver VLAN with at least one data-forwarding source MVLAN. You can
configure an MVR receiver VLAN with multiple source MVLANs using the source-list or source-vlans
statement.
NOTE: The mode, source-list, and translate statements are only applicable to MVR configuration
on EX Series switches that support the Enhanced Layer 2 Software (ELS) configuration style.
The source-vlans statement is applicable only to EX Series switches that do not support ELS, and
is equivalent to the ELS source-list statement.
Default
Options
install—Install forwarding table entries (also called bridging entries) on the MVR receiver VLAN when
MVR is enabled. By default, MVR installs bridging entries only on the source MVLAN for a group
address.
You cannot configure the install option for a data-forwarding receiver VLAN that is configured in proxy
mode (see the MVR "mode" on page 1674 option). In MVR transparent mode, by default, the device
installs bridging entries only on the MVLAN for a multicast group, so upon receiving MVR receiver
VLAN traffic for that group, the switch doesn’t forward the traffic to receiver ports on the MVR receiver
VLAN that sent the join message for that group. The traffic is forwarded only on the MVLAN to MVR
receiver interfaces. Configure this option in transparent mode to enable MVR receiver VLAN ports to
receive traffic forwarded on the MVR receiver VLAN.
mode (proxy | transparent)—(ELS devices only) Set proxy or transparent mode for an MVR receiver
VLAN. This statement is explained separately. The mode is transparent by default.
source-list vlan-list—(ELS devices only) Specify a list of multicast source VLANs (MVLANs) from which
a multicast receiver VLAN receives multicast traffic when multicast VLAN registration (MVR) is
configured. This option is available only on ELS devices. (Use the source-vlans option for the same
function on non-ELS switches.)
source-vlans vlan-list—(Non-ELS switches only) Specify a list of MVLANs for MVR operation from
which the MVR receiver VLAN receives multicast traffic when MVR is configured. Either all of these
MVLANs must be in proxy mode or none of them can be in proxy mode (see "proxy" on page 1795).
This option is available only on non-ELS switches. (Use the source-list option for the same function on
ELS devices.)
translate—(ELS devices only) Translate VLAN tags in multicast VLAN (MVLAN) packets from the
MVLAN tag to the multicast receiver VLAN tag on an MVR receiver VLAN. Without this option, tagged
traffic has the MVLAN ID by default.
We recommend you set this option for MVR receiver VLANs with trunk ports, so hosts on the trunk
interfaces receive multicast traffic tagged with the expected VLAN ID (the MVR receiver VLAN ID).
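A sketch of MVR on an ELS switch (the VLAN names and group range are hypothetical, and the data-forwarding hierarchy under igmp-snooping vlan is an assumption), associating a receiver VLAN with its MVLAN and translating tags for trunk ports:

protocols {
    igmp-snooping {
        vlan mvlan100 {
            data-forwarding {
                source {
                    groups 239.1.1.0/24;
                }
            }
        }
        vlan receiver200 {
            data-forwarding {
                receiver {
                    source-list mvlan100;
                    translate;
                }
            }
        }
    }
}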
Release Information
Statement and mode, source-list, and translate options introduced in Junos OS Release 18.3R1 for
EX4300 switches (ELS switches).
Statement and mode, source-list, and translate options added in Junos OS Release 18.4R1 for EX2300
and EX3400 switches (ELS switches).
RELATED DOCUMENTATION
redundant-sources
IN THIS SECTION
Syntax | 1820
Description | 1821
Options | 1821
Syntax
redundant-sources [ addresses ];
Hierarchy Level
Description
Configure a list of redundant sources for multicast flows defined by a flow map.
Options
addresses—List of IPv4 or IPv6 addresses for use as redundant (backup) sources for multicast flows
defined by a flow map.
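For example (the map name, prefix, and addresses are placeholders, and the flow-map hierarchy under routing-options multicast is an assumption), backup sources for a flow map might be listed as:

routing-options {
    multicast {
        flow-map fm-video {
            prefix [ 239.1.1.0/24 ];
            redundant-sources [ 10.1.1.1 10.1.1.2 ];
        }
    }
}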
Release Information
RELATED DOCUMENTATION
register-limit
IN THIS SECTION
Syntax | 1822
Description | 1823
Options | 1823
Syntax
register-limit {
family (inet | inet6) {
log-interval seconds;
maximum limit;
threshold value;
}
log-interval seconds;
maximum limit;
threshold value;
}
Hierarchy Level
Description
NOTE: The maximum limit settings that you configure with the maximum and the family (inet |
inet6) maximum statements are mutually exclusive. For example, if you configure a global
maximum PIM register message limit, you cannot configure a limit at the family level for IPv4 or
IPv6. If you attempt to configure a limit at both the global level and the family level, the device
will not accept the configuration.
Options
family (inet | inet6)—(Optional) Specify either IPv4 or IPv6 messages to be counted towards the
configured register message limit.
• Default: Both IPv4 and IPv6 messages are counted towards the configured register message limit.
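As a sketch (values are illustrative, and placement under the PIM rp hierarchy follows the rp syntax listing earlier in this section), limiting IPv4 register messages only — note that configuring the family-level maximum precludes a global maximum, per the note above:

protocols {
    pim {
        rp {
            register-limit {
                family inet {
                    log-interval 60;
                    maximum 10000;
                    threshold 80;
                }
            }
        }
    }
}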
Release Information
RELATED DOCUMENTATION
register-probe-time
IN THIS SECTION
Syntax | 1824
Description | 1824
Options | 1824
Syntax
register-probe-time register-probe-time;
Hierarchy Level
Description
Specify the amount of time before the register suppression time (RST) expires when a designated switch
can send a NULL-Register to the rendezvous point (RP).
Options
• Default: 5 seconds
• Range: 5 to 60 seconds
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1825
Description | 1826
Syntax
relay {
accounting;
family {
inet {
anycast-prefix ip-prefix/<prefix-length>;
local-address ip-address;
}
}
secret-key-timeout minutes;
tunnel-devices value;
tunnel-limit number;
unicast-stream-limit number;
}
Hierarchy Level
Description
Configure the protocol address family, secret key timeout, and tunnel limit for Automatic Multicast
Tunneling (AMT) relay functions.
Release Information
RELATED DOCUMENTATION
relay (IGMP)
IN THIS SECTION
Syntax | 1827
Description | 1828
Syntax
relay {
defaults {
(accounting | no-accounting);
group-policy [ policy-names ];
query-interval seconds;
query-response-interval seconds;
robust-count number;
ssm-map ssm-map-name;
version version;
}
}
Hierarchy Level
Description
Release Information
RELATED DOCUMENTATION
reset-tracking-bit
IN THIS SECTION
Syntax | 1828
Description | 1829
Syntax
reset-tracking-bit;
Hierarchy Level
Description
Change the value of the tracking bit (T-bit) field in the LAN prune delay hello option from the default of
1 to 0, which enables join suppression for a multicast interface. When the network starts receiving
multiple identical join messages, join suppression triggers a random timer with a value of 66 through 84
seconds (1.1 × periodic through 1.4 × periodic, where periodic is 60 seconds). This creates an interval
during which no identical join messages are sent; eventually, only one of the identical messages is sent.
Join suppression is triggered each time identical messages are sent for the same join.
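A minimal sketch (the interface name is a placeholder, and the interface-level PIM hierarchy is an assumption based on the per-interface LAN prune delay behavior described above):

protocols {
    pim {
        interface ge-0/0/0.0 {
            reset-tracking-bit;
        }
    }
}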
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1830
Description | 1830
Options | 1830
Syntax
restart-duration seconds;
Hierarchy Level
Description
Options
• Default: 180
Release Information
RELATED DOCUMENTATION
restart-duration
IN THIS SECTION
Syntax | 1831
Description | 1832
Options | 1832
Syntax
restart-duration seconds;
Hierarchy Level
Description
Options
seconds—Time that the routing device waits (in seconds) to complete PIM sparse mode graceful restart.
• Default: 60
Release Information
RELATED DOCUMENTATION
reverse-oif-mapping
IN THIS SECTION
Syntax | 1833
Description | 1833
Syntax
reverse-oif-mapping {
no-qos-adjust;
}
Hierarchy Level
Description
Enable the routing device to identify a subscriber VLAN or interface based on an IGMP or MLD request
it receives over the multicast VLAN.
Release Information
The no-qos-adjust statement was introduced in Junos OS Release 9.5 for EX Series switches.
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1834
Description | 1834
Options | 1835
Syntax
rib-group group-name;
Hierarchy Level
Description
Options
group-name—Name of the routing table group. The name must be one that you defined with the rib-
groups statement at the [edit routing-options] hierarchy level.
Release Information
NOTE: Distance Vector Multicast Routing Protocol (DVMRP) was deprecated in Junos OS
Release 16.1. Although DVMRP commands continue to be available and configurable in the CLI,
they are no longer visible and are scheduled for removal in a subsequent release.
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1836
Description | 1836
Options | 1836
Syntax
rib-group group-name;
Hierarchy Level
Description
Options
group-name—Name of the routing table group. The name must be one that you defined with the rib-
groups statement at the [edit routing-options] hierarchy level.
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1837
Description | 1837
Options | 1837
Syntax
rib-group {
inet group-name;
inet6 group-name;
}
Hierarchy Level
Description
Options
group-name—Name of the routing table group. The name must be one that you defined with the rib-
groups statement at the [edit routing-options] hierarchy level.
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1838
Description | 1839
Options | 1839
Syntax
robust-count number;
Hierarchy Level
Description
Configure the number of queries a device sends before removing a multicast group from the multicast
forwarding table. We recommend that the robust count be set to the same value on all multicast routers
and switches in the VLAN.
This option provides fine-tuning to allow for expected packet loss on a subnet. Increase the robust
count if subnet packet loss is high and IGMP report messages might be lost.
Options
number—Number of intervals the switch waits before timing out a multicast group.
• Range: 2 through 10
• Default: 2
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1840
Description | 1840
Options | 1841
Syntax
robust-count number;
Hierarchy Level
Description
Tune the expected packet loss on a subnet. This factor is used to calculate the group member interval,
other querier present interval, and last-member query count.
Options
number—Robustness variable.
• Range: 2 through 10
• Default: 2
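For example, a sketch of allowing for a lossy subnet by waiting three intervals on one IGMP interface (the interface name is illustrative):

[edit protocols igmp]
interface ge-0/0/1.0 {
robust-count 3;
}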
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1842
Description | 1842
Options | 1842
Syntax
robust-count number;
Hierarchy Level
Description
Configure the expected IGMP packet loss on an Automatic Multicast Tunneling (AMT) tunnel. If a tunnel
is expected to have packet loss, increase the robust count.
Options
number—Number of packets that can be lost before the AMT protocol deletes the multicast state.
• Range: 2 through 10
• Default: 2
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1843
Description | 1843
Options | 1843
Syntax
robust-count number;
Hierarchy Level
Description
Options
number—Time interval. This interval must be less than the interval between general host-query
messages.
• Range: 2 through 10
• Default: 2
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1844
Description | 1845
Default | 1845
Options | 1845
Syntax
robust-count number;
Hierarchy Level
Description
Configure the number of queries the switch sends before removing a multicast group from the multicast
forwarding table. We recommend that the robust count be set to the same value on all multicast routers
and switches in the VLAN.
Default
The default is the value of the robust-count statement configured for MLD. The default for the MLD
robust-count statement is 2.
Options
number—Number of queries the switch sends before timing out a multicast group.
• Range: 2 through 10
Release Information
RELATED DOCUMENTATION
robustness-count
IN THIS SECTION
Syntax | 1846
Description | 1847
Options | 1847
Syntax
robustness-count number;
Hierarchy Level
Description
Configure the designated forwarder (DF) election robustness count for bidirectional PIM. When a DF
election Offer or Winner message fails to be received, the message is retransmitted. The robustness-
count statement sets the minimum number of DF election messages that must fail to be received for DF
election to fail. To prevent routing loops, all routers on the link must have a consistent view of the DF.
When the DF election fails because DF election messages are not received, forwarding on bidirectional
PIM routes is suspended.
If a router receives from a neighbor a better offer than its own, the router stops participating in the
election for a period of robustness-count * offer-period. Eventually, all routers except the best candidate
stop sending Offer messages.
Options
• Range: 1 through 10
• Default: 3
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1848
Description | 1849
Default | 1849
Options | 1849
Syntax
route-target {
export-target {
target target-community;
unicast;
}
import-target {
target {
target-value;
receiver target-value;
sender target-value;
}
unicast {
receiver;
sender;
}
}
}
Hierarchy Level
Description
Override the Layer 3 VPN import and export route targets used for importing and
exporting routes for the MBGP MVPN NLRI.
Default
The multicast VPN routing instance uses the import and export route targets configured for the Layer 3
VPN.
Options
Release Information
RELATED DOCUMENTATION
Configuring VRF Route Targets for Routing Instances for an MBGP MVPN
1850
rp
IN THIS SECTION
Syntax | 1850
Description | 1852
Default | 1852
Syntax
rp {
auto-rp {
(announce | discovery | mapping);
(mapping-agent-election | no-mapping-agent-election);
}
bidirectional {
address address {
group-ranges {
destination-ip-prefix</prefix-length>;
}
hold-time seconds;
priority number;
}
}
bootstrap {
family (inet | inet6) {
export [ policy-names ];
import [ policy-names ];
priority number;
}
}
bootstrap-export [ policy-names ];
bootstrap-import [ policy-names ];
bootstrap-priority number;
dr-register-policy [ policy-names ];
embedded-rp {
group-ranges {
destination-ip-prefix</prefix-length>;
}
maximum-rps limit;
}
group-rp-mapping {
family (inet | inet6) {
log-interval seconds;
maximum limit;
threshold value;
}
log-interval seconds;
maximum limit;
threshold value;
}
local {
family (inet | inet6) {
disable;
address address;
anycast-pim {
local-address address;
rp-set {
address address <forward-msdp-sa>;
}
}
group-ranges {
destination-ip-prefix</prefix-length>;
}
hold-time seconds;
override;
priority number;
}
}
register-limit {
family (inet | inet6) {
log-interval seconds;
maximum limit;
threshold value;
}
log-interval seconds;
maximum limit;
threshold value;
}
register-probe-time seconds;
rp-register-policy [ policy-names ];
static {
address address {
override;
version version;
group-ranges {
destination-ip-prefix</prefix-length>;
}
}
}
}
Hierarchy Level
Description
Configure the routing device as an actual or potential RP. A routing device can be an RP for more than
one group.
Default
If you do not include the rp statement, the routing device can never become the RP.
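For example, a minimal sketch that makes the routing device a statically configured local RP for all IPv4 groups (the address is illustrative):

[edit protocols pim]
rp {
local {
address 10.255.1.1;
}
}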
Release Information
RELATED DOCUMENTATION
rp-register-policy
IN THIS SECTION
Syntax | 1853
Description | 1854
Options | 1854
Syntax
rp-register-policy [ policy-names ];
Hierarchy Level
Description
Options
Release Information
RELATED DOCUMENTATION
rp-set
IN THIS SECTION
Syntax | 1855
Description | 1855
Syntax
rp-set {
address address <forward-msdp-sa>;
}
Hierarchy Level
Description
Configure a set of rendezvous point (RP) addresses for anycast RP. You can configure up to 15 RPs.
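For example, a sketch of one anycast-RP member advertising the shared RP address and listing its peers (all addresses are illustrative):

[edit protocols pim rp local family inet]
address 10.10.10.1;
anycast-pim {
local-address 10.255.1.1;
rp-set {
address 10.255.1.2 forward-msdp-sa;
address 10.255.1.3;
}
}

Here 10.10.10.1 is the shared anycast RP address, local-address is this device's unique loopback address, and the rp-set entries are the other RPs in the set.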
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1856
Description | 1857
Options | 1857
Syntax
rpf-check-policy [ policy-names ];
Hierarchy Level
Description
Apply policies for disabling RPF checks on arriving multicast packets. The policies must be correctly
configured.
Options
Release Information
RELATED DOCUMENTATION
rpf-selection
IN THIS SECTION
Syntax | 1858
Description | 1859
Default | 1859
Options | 1859
Syntax
rpf-selection {
group group-address {
source source-address {
next-hop next-hop-address;
}
wildcard-source {
next-hop next-hop-address;
}
}
prefix-list prefix-list-addresses {
source source-address {
next-hop next-hop-address;
}
wildcard-source {
next-hop next-hop-address;
}
}
}
Hierarchy Level
Description
Configure the PIM RPF next-hop neighbor for a specific group and source for a VRF routing instance.
NOTE: Starting in Junos OS Release 17.4R1, you can configure the rpf-selection statement at the
[edit protocols pim] hierarchy level.
Default
If you omit the rpf-selection statement, PIM RPF checks typically choose the best path determined by
the unicast protocol for all multicast flows.
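For example, a sketch that forces the RPF neighbor for one source and group while using a wildcard for other sources (all addresses and the instance name are illustrative):

[edit routing-instances vpn-a protocols pim]
rpf-selection {
group 224.1.1.1 {
source 10.1.1.2 {
next-hop 192.168.2.2;
}
wildcard-source {
next-hop 192.168.3.2;
}
}
}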
Options
Release Information
RELATED DOCUMENTATION
rpf-vector (PIM)
IN THIS SECTION
Syntax | 1860
Description | 1860
Options | 1861
Syntax
rpf-vector {
policy [ policy-names ];
}
Hierarchy Level
Description
This feature provides a way for PIM source-specific multicast (SSM) to resolve the Reverse Path
Forwarding (RPF) Vector type-length-value (TLV) for multicast in seamless Multiprotocol Label
Switching (MPLS) networks. In other words, it enables PIM to build multicast trees through an MPLS
core. The rpf-vector statement implements RFC 5496, The Reverse Path Forwarding (RPF) Vector TLV.
When rpf-vector is enabled on an edge router that sends PIM join messages into the core, the join
message includes a vector specifying the IP address of the next edge router along the path to the root of
the multicast distribution tree (MDT). The core routers can then process the join message by sending it
towards the specified edge router (i.e., toward the Vector). The address of the edge router serves as the
RPF vector in the PIM join message so routers in the core can resolve the next-hop towards the source
without the need for BGP in the core.
Options
routing
Release Information
RELATED DOCUMENTATION
rpt-spt
IN THIS SECTION
Syntax | 1862
Description | 1862
Syntax
rpt-spt;
Hierarchy Level
Description
Use rendezvous-point trees for customer PIM (C-PIM) join messages, and switch to the shortest-path
tree after the source is known.
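For example, assuming the statement sits under the MVPN mode hierarchy of a routing instance (the instance name is illustrative):

[edit routing-instances vpn-a protocols mvpn]
mvpn-mode {
rpt-spt;
}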
Release Information
IN THIS SECTION
Syntax | 1863
Description | 1863
Syntax
rsvp-te {
label-switched-path-template {
(default-template | lsp-template-name);
}
static-lsp lsp-name;
}
Hierarchy Level
Description
Configure the properties of the RSVP traffic-engineered point-to-multipoint LSP for MBGP MVPNs.
NOTE: Junos OS Release 11.2 and earlier do not support point-to-multipoint LSPs with next-
generation multicast VPNs on MX80 routers.
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1865
Description | 1865
Options | 1865
Syntax
sa-hold-time seconds;
Hierarchy Level
Description
Specify the source-active (SA) message hold time to use when maintaining a connection with the
MSDP peer. Each entry in an SA cache has an associated hold time. The hold timer is started when an
SA message is received by an MSDP peer. The timer is reset when another SA message is received
before the timer expires. If another SA message is not received during the SA message hold-time period,
the SA message is removed from the cache.
You might want to change the SA message hold time for consistency in a multi-vendor environment.
Options
• Default: 75 seconds
Release Information
RELATED DOCUMENTATION
sap
IN THIS SECTION
Syntax | 1866
Description | 1867
Options | 1867
Syntax
sap {
disable;
listen address <port port>;
}
Hierarchy Level
Description
Enable the router to listen to session directory announcements for multimedia and other multicast
sessions.
SAP and SDP always listen on the default SAP address and port, 224.2.127.254:9875. To have SAP
listen on additional addresses or pairs of address and port, include a listen statement for each address or
pair.
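For example, a sketch of listening on one additional session directory group and port (the values are illustrative):

[edit protocols sap]
listen 224.2.2.2 port 9875;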
Options
Release Information
RELATED DOCUMENTATION
scope
IN THIS SECTION
Syntax | 1868
Description | 1868
Options | 1868
Syntax
scope scope-name {
interface [ interface-names ];
prefix destination-prefix;
}
Hierarchy Level
Description
Options
Release Information
RELATED DOCUMENTATION
scope-policy
IN THIS SECTION
Syntax | 1869
Description | 1870
Options | 1870
Syntax
scope-policy [ policy-names ];
Hierarchy Level
NOTE: You can configure a scope policy at these two hierarchy levels only. You cannot apply a
scope policy to a specific routing instance, because all scoping policies are applied to all routing
instances. However, you can apply the scope statement to a specific routing instance at the [edit
routing-instances routing-instance-name routing-options multicast] or [edit logical-systems
logical-system-name routing-instances routing-instance-name routing-options multicast]
hierarchy level.
Description
Apply policies for scoping. The policy must be correctly configured at the [edit policy-options
policy-statement] hierarchy level.
Options
Release Information
RELATED DOCUMENTATION
scope
secret-key-timeout
IN THIS SECTION
Syntax | 1871
Description | 1871
Default | 1871
Options | 1872
Syntax
secret-key-timeout minutes;
Hierarchy Level
Description
Specify the period in minutes after which the local opaque secret key used in the Automatic Multicast
Tunneling (AMT) Message Authentication Code (MAC) times out and is regenerated.
Default
60 minutes
Options
minutes—Number of minutes to wait before generating a new MAC opaque secret key.
Release Information
RELATED DOCUMENTATION
selective
IN THIS SECTION
Syntax | 1872
Description | 1874
Syntax
selective {
group multicast-prefix/prefix-length {
source ip-prefix/prefix-length {
ingress-replication {
create-new-ucast-tunnel;
label-switched-path-template {
(default-template | lsp-template-name);
}
}
ldp-p2mp;
pim-ssm {
group-range multicast-prefix;
}
rsvp-te {
label-switched-path-template {
(default-template | lsp-template-name);
}
static-lsp point-to-multipoint-lsp-name;
}
threshold-rate kbps;
}
wildcard-source {
ldp-p2mp;
pim-ssm {
group-range multicast-prefix;
}
rsvp-te {
label-switched-path-template {
(default-template | lsp-template-name);
}
static-lsp point-to-multipoint-lsp-name;
}
threshold-rate kbps;
}
}
tunnel-limit number;
wildcard-group-inet {
wildcard-source {
ldp-p2mp;
pim-ssm {
group-range multicast-prefix;
}
rsvp-te {
label-switched-path-template {
(default-template | lsp-template-name);
}
static-lsp lsp-name;
}
threshold-rate number;
}
}
wildcard-group-inet6 {
wildcard-source {
ldp-p2mp;
pim-ssm {
group-range multicast-prefix;
}
rsvp-te {
label-switched-path-template {
(default-template | lsp-template-name);
}
static-lsp lsp-name;
}
threshold-rate number;
}
}
}
Hierarchy Level
Description
Configure selective point-to-multipoint LSPs for an MBGP MVPN. Selective point-to-multipoint LSPs
send traffic only to the receivers configured for the MBGP MVPNs, helping to minimize flooding in the
service provider's network.
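For example, a sketch of a selective tunnel that moves a single customer flow to an RSVP-TE point-to-multipoint LSP once it exceeds a rate threshold (the prefixes, rate, and instance name are illustrative):

[edit routing-instances vpn-a provider-tunnel]
selective {
group 232.1.1.0/24 {
source 10.1.1.0/24 {
rsvp-te {
label-switched-path-template {
default-template;
}
}
threshold-rate 10;
}
}
}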
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1875
Description | 1876
Syntax
sender-based-rpf;
Hierarchy Level
Description
In a BGP multicast VPN (MVPN) with either RSVP-TE point-to-multipoint or MLDP point-to-multipoint
provider tunnels, configure a downstream provider edge (PE) router to forward multicast traffic only
from a selected upstream sender PE router.
Starting in Junos OS Release 21.1R1, you can configure MLDP point-to-multipoint provider tunnels on
MX Series routers.
BGP MVPNs use an alternative to data-driven-event solutions and bidirectional-mode DF election
because the core network is not a LAN. In an MVPN scenario it is possible to determine which PE
router sent the traffic, so Junos OS forwards the traffic only if it was sent from the correct PE router.
With sender-based RPF, the RPF check is enhanced to verify that data arrived on the correct incoming
virtual tunnel (vt-) interface and was sent from the correct upstream PE router.
More specifically, the data must arrive with the correct MPLS label in the outer header used to
encapsulate data through the core. The label identifies the tunnel and, if the tunnel is point-to-
multipoint, the upstream PE router.
Sender-based RPF is not a replacement for single-forwarder election, but is a complementary feature.
Configuring a higher primary loopback address (or router ID) on one PE device (PE1) than on another
(PE2) ensures that PE1 is the single-forwarder election winner. The unicast-umh-election statement
causes the unicast route preference to determine the single-forwarder election. If single-forwarder
election is not used or if it is not sufficient to prevent duplicates in the core, sender-based RPF is
recommended.
For RSVP point-to-multipoint provider tunnels, the transport label identifies the sending PE router
because it is a requirement that penultimate hop popping (PHP) is disabled when using point-to-
multipoint provider tunnels with MVPNs. PHP is disabled by default when you configure the MVPN
protocol in a routing instance. The label identifies the tunnel, and (because the RSVP-TE tunnel is point-
to-multipoint) the sending PE router.
The sender-based RPF mechanism is described in RFC 6513, Multicast in MPLS/BGP IP VPNs in section
9.1.1.
Sender-based RPF prevents duplicates from being sent to the customer even if there is duplication in
the provider network. Duplication could exist in the provider because of a hot-root standby
configuration or if the single-forwarder election is not sufficient to prevent duplicates. Single-forwarder
election is used to prevent duplicates to the core network, while sender-based RPF prevents duplicates
to the customer even if there are duplicates in the core. There are cases in which single-forwarder
election cannot prevent duplicate traffic from arriving at the egress PE router. One example of this
(outlined in section 9.3.1 of RFC 6513) is when PIM sparse mode is configured in the customer network
and the MVPN is in RPT-SPT mode with an I-PMSI.
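For example, on an egress PE routing instance (the instance name is illustrative):

[edit routing-instances vpn-a protocols mvpn]
sender-based-rpf;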
Release Information
Support for MLDP point-to-multipoint provider tunnels was introduced in Junos OS Release 21.1R1 for
MX Series routers.
RELATED DOCUMENTATION
sglimit
IN THIS SECTION
Syntax | 1878
Description | 1878
Options | 1878
Syntax
sglimit {
family (inet | inet6) {
log-interval seconds;
maximum limit;
threshold value;
}
log-interval seconds;
maximum limit;
threshold value;
}
Hierarchy Level
Description
Configure a limit for the number of accepted (*,G) and (S,G) PIM join states.
NOTE: The maximum limit settings that you configure with the maximum and the family (inet |
inet6) maximum statements are mutually exclusive. For example, if you configure a global
maximum PIM join state limit, you cannot configure a limit at the family level for IPv4 or IPv6
joins. If you attempt to configure a limit at both the global level and the family level, the device
will not accept the configuration.
Options
family (inet | inet6)—(Optional) Specify either IPv4 or IPv6 join states to be counted towards the
configured join state limit.
• Default: Both IPv4 and IPv6 join states are counted towards the configured join state limit.
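For example, a sketch that caps IPv4 join states and logs a warning as the limit is approached (the values are illustrative):

[edit protocols pim]
sglimit {
family inet {
maximum 8000;
threshold 6000;
log-interval 60;
}
}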
Release Information
RELATED DOCUMENTATION
signaling
IN THIS SECTION
Syntax | 1879
Description | 1880
Syntax
signaling;
Hierarchy Level
Description
Enable signaling in BGP. For multicast distribution tree (MDT) subaddress family identifier (SAFI) NLRI
signaling, configure signaling under the inet-mdt family. For multiprotocol BGP (MBGP) intra-AS NLRI
signaling, configure signaling under the inet-mvpn family.
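For example, a sketch of enabling MBGP MVPN intra-AS NLRI signaling on an internal BGP group (the group name is illustrative):

[edit protocols bgp group ibgp]
family inet-mvpn {
signaling;
}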
Release Information
RELATED DOCUMENTATION
snoop-pseudowires
IN THIS SECTION
Syntax | 1881
Description | 1881
Syntax
snoop-pseudowires;
Hierarchy Level
Description
The default IGMP snooping implementation for a VPLS instance adds each pseudowire interface to its
oif list, so traffic from the ingress PE is sent to every egress PE even if there is no interest. The
snoop-pseudowires option prevents multicast traffic from traversing the pseudowire (to egress PEs)
unless there are IGMP receivers for the traffic. In other words, multicast traffic is forwarded only to
VPLS core interfaces that are router interfaces or that have IGMP receivers. In addition to the benefit
of sending traffic only to interested PEs, snoop-pseudowires also optimizes a common path between
PE-P routers wherever possible (if two PEs connect through the same P router, only one copy of the
packet is sent; the packet is replicated only on P routers where the path diverges).
NOTE: This option can be enabled only when instance-type is vpls. The snoop-pseudowires option
cannot be enabled if use-p2mp-lsp is enabled for igmp-snooping-options.
Release Information
RELATED DOCUMENTATION
instance-type
Example: Configuring IGMP Snooping | 144
source-active-advertisement
IN THIS SECTION
Syntax | 1883
Description | 1883
Syntax
source-active-advertisement {
dampen minutes;
min-rate seconds;
}
Hierarchy Level
Description
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1884
Description | 1884
Options | 1884
Syntax
source ip-address;
Hierarchy Level
Description
Options
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1885
Description | 1886
Options | 1886
Syntax
Hierarchy Level
Description
Specify an IP unicast source address for a multicast group being statically configured on an interface.
Options
distributed—(Optional) Enable a static join for multiple multicast address groups so that all Packet
Forwarding Engines receive traffic, but preprovision only one multicast group.
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1887
Description | 1887
Default | 1888
Options | 1888
Syntax
source {
groups group-prefix;
}
Hierarchy Level
Description
Configure a VLAN to be a multicast source VLAN (MVLAN), and specify the IP address range of the
multicast source groups.
To configure a data-forwarding VLAN as an MVLAN, you also configure one or more multicast receiver
VLANs (MVR receiver VLANs) with hosts that might be interested in receiving traffic on the MVLAN for
the specified multicast groups. You can configure a VLAN as either an MVLAN or MVR receiver VLAN,
but not both at the same time.
NOTE: On EX4300 and EX4300 multigigabit switches, you can configure up to 10 MVLANs, and
up to a total of 4K MVR receiver VLANs and MVLANs together. On EX2300 and EX3400, you
can configure up to 5 MVLANs and the remaining configurable VLANs can be MVR receiver
VLANs.
Default
Disabled
Options
groups group-prefix—IP address range of the source groups. Each MVLAN must have exactly one
groups statement. If there are multiple MVLANs on the switch, their group ranges must be unique.
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1889
Description | 1889
Options | 1889
Syntax
source source-address {
next-hop next-hop-address;
}
Hierarchy Level
Description
Options
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1890
Description | 1890
Options | 1891
Syntax
source ip-address {
source-count number;
source-increment increment;
}
Hierarchy Level
Description
Specify the IP version 4 (IPv4) unicast source address for the multicast group being statically configured
on an interface.
Options
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1892
Description | 1892
Options | 1892
Syntax
source ip-address {
source-count number;
source-increment increment;
}
Hierarchy Level
Description
Specify the IP version 6 (IPv6) unicast source address for the multicast group being statically
configured on an interface.
Options
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1893
Description | 1893
Default | 1894
Options | 1894
Syntax
source ip-address</prefix-length> {
active-source-limit {
maximum number;
threshold number;
}
}
Hierarchy Level
Description
Limit the number of active source messages the routing device accepts from sources in this address
range.
Default
If you do not include this statement, the routing device accepts any number of MSDP active source
messages.
Options
Release Information
RELATED DOCUMENTATION
Example: Configuring MSDP with Active Source Limits and Mesh Groups | 562
IN THIS SECTION
Syntax | 1895
Description | 1895
Options | 1895
Syntax
source source-address {
rate threshold-rate;
}
Hierarchy Level
Description
Establish a threshold to trigger the automatic creation of a data MDT for the specified unicast address or
prefix of the source of multicast information.
Options
Release Information
Statement introduced before Junos OS Release 7.4. In Junos OS Release 17.3R1, the mdt hierarchy was
moved from provider-tunnel to the provider-tunnel family inet and provider-tunnel family inet6
hierarchies as part of an upgrade to add IPv6 support for default MDT in Rosen 7, and data MDT for
Rosen 6 and Rosen 7. The provider-tunnel mdt hierarchy is now hidden for backward compatibility with
existing scripts.
RELATED DOCUMENTATION
Example: Configuring Data MDTs and Provider Tunnels Operating in Source-Specific Multicast
Mode | 696
Example: Configuring Data MDTs and Provider Tunnels Operating in Any-Source Multicast Mode |
690
IN THIS SECTION
Syntax | 1896
Description | 1897
Options | 1897
Syntax
source source-address {
ldp-p2mp;
pim-ssm {
group-range multicast-prefix;
}
rsvp-te {
label-switched-path-template {
(default-template | lsp-template-name);
}
static-lsp lsp-name;
}
threshold-rate number;
}
Hierarchy Level
Description
Specify the IP address for the multicast source. This statement is a part of the point-to-multipoint LSP
and PIM-SSM GRE selective provider tunnel configuration for MBGP MVPNs.
Options
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1898
Description | 1898
Options | 1898
Syntax
source [ addresses ];
Hierarchy Level
Description
Options
Release Information
RELATED DOCUMENTATION
source-address
IN THIS SECTION
Syntax | 1899
Description | 1900
Options | 1900
Syntax
source-address ip-address;
Hierarchy Level
Description
Specify the IP address to use as the source for IGMP snooping or MLD snooping reports in proxy mode.
Reports are sent with address 0.0.0.0 as the source address unless there is a source address configured.
You can also use this statement to configure the source address to use for IGMP snooping or MLD
snooping queries.
Options
ip-address—IP address to use as the source for proxy-mode IGMP snooping or MLD snooping reports.
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1901
Description | 1901
Options | 1901
Syntax
source-count number;
Hierarchy Level
Description
Configure the number of multicast source addresses that should be accepted for each static group
created.
Options
• Default: 1
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1902
Description | 1903
Options | 1903
Syntax
source-count number;
Hierarchy Level
Description
Configure the number of multicast source addresses that should be accepted for each static group
created.
Options
• Default: 1
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1904
Description | 1904
Options | 1904
Syntax
source-increment number;
Hierarchy Level
Description
Configure the number of times the multicast source address should be incremented for each static
group created. The increment is specified in dotted decimal notation similar to an IPv4 address.
Options
• Default: 0.0.0.1
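For example, combined with source-count, the following static IGMP join (the addresses and interface name are illustrative) creates state for sources 10.1.1.1, 10.1.1.2, and 10.1.1.3:

[edit protocols igmp interface ge-0/0/0.0]
static {
group 225.1.1.1 {
source 10.1.1.1 {
source-count 3;
source-increment 0.0.0.1;
}
}
}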
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1905
Description | 1906
Options | 1906
Syntax
source-increment number;
Hierarchy Level
Description
Configure the number of times the address should be incremented for each static group created. The
increment is specified in a format similar to an IPv6 address.
Options
• Default: ::1
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1907
Description | 1907
Syntax
source-tree;
Hierarchy Level
Description
Specify that a statically selected upstream multicast hop (UMH) only affects type 7 (S,G) routes.
The source-tree option is mandatory. Type 6 routes are sent toward the rendezvous point (RP), and use
the dynamic UMH selection that is configured with the unicast-umh-election statement, or the
default method of highest IP address is used if unicast-umh-election is not configured.
Release Information
RELATED DOCUMENTATION
spt-only
IN THIS SECTION
Syntax | 1908
Description | 1909
Syntax
spt-only;
Hierarchy Level
Description
Set the MVPN mode to learn about active multicast sources using multicast VPN source-active routes.
This is the default mode.
Release Information
RELATED DOCUMENTATION
spt-threshold
IN THIS SECTION
Syntax | 1910
Description | 1910
Syntax
spt-threshold {
infinity [ policy-names ];
}
Hierarchy Level
Description
Set the SPT threshold to infinity for a source-group address pair. Last-hop multicast routing devices
running PIM sparse mode can forward the same stream of multicast packets onto the same LAN through
an RPT rooted at the RP or an SPT rooted at the source. By default, last-hop routing devices transition
to a direct SPT to the source. You can configure this routing device to set the SPT transition value to
infinity to prevent this transition for any source-group address pair.
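For example, a sketch that keeps traffic for one source-group pair on the RPT (the policy name and addresses are illustrative):

[edit policy-options]
policy-statement stay-on-rpt {
from {
route-filter 224.1.1.1/32 exact;
source-address-filter 10.1.1.1/32 exact;
}
then accept;
}

[edit protocols pim]
spt-threshold {
infinity stay-on-rpt;
}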
Release Information
RELATED DOCUMENTATION
ssm-groups
IN THIS SECTION
Syntax | 1911
Description | 1911
Options | 1912
Syntax
ssm-groups [ ip-addresses ];
Hierarchy Level
Description
By default, the SSM group multicast address is limited to the IP address range from 232.0.0.0 through
232.255.255.255. However, you can extend SSM operations into another Class D range by including the
ssm-groups statement in the configuration. The default SSM address range from 232.0.0.0 through
232.255.255.255 cannot be used in the ssm-groups statement. This statement is for adding other
multicast addresses to the default SSM group addresses. This statement does not override the default
SSM group address range.
IGMPv3 supports SSM groups. With inclusion lists, only the specified sources can send to the SSM
group.
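For example, a sketch of extending SSM treatment to an additional group range (the range is illustrative), assuming the statement is configured at the [edit routing-options multicast] hierarchy level:

[edit routing-options multicast]
ssm-groups 239.1.1.0/24;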
Options
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1913
Description | 1913
Options | 1913
Syntax
ssm-map ssm-map-name;
Hierarchy Level
Description
Options
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1914
Description | 1914
Options | 1914
Syntax
ssm-map ssm-map-name;
Hierarchy Level
Description
Apply a source-specific multicast (SSM) map to all Automatic Multicast Tunneling (AMT) interfaces.
Options
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1915
Description | 1916
Options | 1916
Syntax
ssm-map ssm-map-name;
Hierarchy Level
Description
Options
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1917
Description | 1917
Options | 1917
Syntax
ssm-map ssm-map-name {
policy [ policy-names ];
source [ addresses ];
}
Hierarchy Level
Description
Options
Release Information
RELATED DOCUMENTATION
ssm-map-policy (MLD)
IN THIS SECTION
Syntax | 1918
Description | 1919
Options | 1919
Syntax
ssm-map-policy ssm-map-policy-name;
Hierarchy Level
Description
For dynamically configured MLD interfaces, use the ssm-map-policy (Dynamic MLD Interface)
statement.
Options
Release Information
RELATED DOCUMENTATION
Example: Configuring SSM Maps for Different Groups to Different Sources | 464
ssm-map-policy (IGMP)
IN THIS SECTION
Syntax | 1920
Description | 1920
Options | 1920
Syntax
ssm-map-policy ssm-map-policy-name;
Hierarchy Level
Description
For dynamically configured IGMP interfaces, use the ssm-map-policy (Dynamic IGMP Interface)
statement.
Options
Release Information
RELATED DOCUMENTATION
Example: Configuring SSM Maps for Different Groups to Different Sources | 464
standby-path-creation-delay
IN THIS SECTION
Syntax | 1921
Description | 1921
Options | 1922
Syntax
standby-path-creation-delay <seconds>;
Hierarchy Level
Description
Configure the time interval after which a standby path is created when a new ECMP interface or
neighbor is added to the network.
In the absence of this statement, ECMP joins are redistributed as soon as a new ECMP interface or
neighbor is added to the network.
Options
<seconds> Time interval after which a standby path is created, when a new ECMP interface or
neighbor is added to the network. Range is from 1 through 300.
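For example, the following sketch (the 60-second value is illustrative) delays standby-path creation for one minute after a new ECMP interface or neighbor appears, configured at this statement's documented hierarchy level:

```
standby-path-creation-delay 60;
```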
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1923
Description | 1923
Syntax
static {
group multicast-group-address {
source ip-address;
}
}
Hierarchy Level
Description
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1924
Description | 1924
Options | 1925
Syntax
static {
<distributed>;
group multicast-group-address {
<distributed>;
source source-address <distributed>;
}
}
Hierarchy Level
Description
Configure static source and group (S, G) addresses when distributed IGMP is enabled. Reduces the first
join delay time and brings multicast traffic to the last-hop router. Specified (S, G) addresses join statically
without waiting for the first join.
Options
distributed (Optional) Enable static joins for specified (S,G) addresses and preprovision all of them so
that all distributed IGMP Packet Forwarding Engines receive traffic.
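For illustration (the group and source addresses are placeholders), the following stanza statically joins one (S,G) pair and preprovisions it on all distributed IGMP Packet Forwarding Engines:

```
static {
    distributed;
    group 232.1.1.1 {
        distributed;
        source 10.1.1.1 distributed;
    }
}
```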
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1926
Description | 1926
Default | 1926
Syntax
static {
group ip-address;
}
Hierarchy Level
Description
Default
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1927
Description | 1927
Syntax
static {
group multicast-group-address {
exclude;
group-count number;
group-increment increment;
source ip-address {
source-count number;
source-increment increment;
}
}
}
Hierarchy Level
Description
The static statement simulates IGMP joins on a routing device statically on an interface without any
IGMP hosts. It is supported for both IGMPv2 and IGMPv3 joins. This statement is especially useful for
testing multicast forwarding on an interface without a receiver host.
NOTE: To prevent joining too many groups accidentally, the static statement is not supported
with the interface all statement.
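For example, the following configuration (interface name and addresses are illustrative) simulates three IGMPv3 joins for consecutive group addresses from a single source:

```
protocols {
    igmp {
        interface ge-0/0/0.0 {
            static {
                group 233.252.0.1 {
                    group-count 3;
                    group-increment 0.0.0.1;
                    source 192.0.2.1;
                }
            }
        }
    }
}
```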
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1929
Description | 1929
Syntax
static {
group multicast-group-address {
exclude;
group-count number;
group-increment increment;
source ip-address {
source-count number;
source-increment increment;
}
}
}
Hierarchy Level
Description
The static statement simulates MLD joins on a routing device statically on an interface without any MLD
hosts. It is supported for both MLDv1 and MLDv2 joins. This statement is especially useful for testing
multicast forwarding on an interface without a receiver host.
NOTE: To prevent joining too many groups accidentally, the static statement is not supported
with the interface all statement.
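A comparable sketch for MLD (interface name and IPv6 addresses are illustrative) simulates an MLDv2 join on a single interface:

```
protocols {
    mld {
        interface ge-0/0/0.0 {
            static {
                group ff35::1 {
                    source 2001:db8::1;
                }
            }
        }
    }
}
```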
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1930
Description | 1931
Syntax
static {
address address {
group-ranges {
destination-ip-prefix</prefix-length>;
}
override;
version version;
}
}
Hierarchy Level
Description
Configure static RP addresses. The default static RP address is 224.0.0.0/4. To configure other
addresses, include one or more address statements. You can configure a static RP in a logical system
only if the logical system is not directly connected to a source.
For each static RP address, you can optionally specify the PIM version and the groups for which this
address can be the RP. The default PIM version is version 1.
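For example (the addresses are illustrative), the following stanza configures a PIM version 2 static RP for a restricted group range at the [edit protocols pim rp] hierarchy level:

```
protocols {
    pim {
        rp {
            static {
                address 192.168.1.1 {
                    version 2;
                    group-ranges {
                        224.1.0.0/16;
                    }
                }
            }
        }
    }
}
```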
Release Information
RELATED DOCUMENTATION
static-lsp
IN THIS SECTION
Syntax | 1932
Description | 1933
Syntax
static-lsp lsp-name;
Hierarchy Level
Description
Specify the name of the static point-to-multipoint (P2MP) LSP used for a specific MBGP MVPN; a static
P2MP LSP cannot be shared by multiple VPNs. Use this statement to specify the static LSP for both
inclusive and selective point-to-multipoint LSPs.
Use a static P2MP LSP when you know all the egress PE router endpoints (receiver nodes) and you want
to avoid the setup delay incurred by dynamically created P2MP LSPs (configured with the label-
switched-path-template). These static LSPs are signaled before the MVPN requires or uses them,
consequently avoiding any signaling latency and minimizing traffic loss due to latency.
If you add new endpoints after the static P2MP LSP is established, you must update the configuration
on the ingress PE router. In contrast, a dynamic P2MP LSP learns new endpoints without any
configuration changes.
BEST PRACTICE: Multiple multicast flows can share the same static P2MP LSP; this is the
preferred configuration when the set of egress PE router endpoints on the LSP are all interested
in the same set of multicast flows. When the set of relevant flows is different between
endpoints, we recommend that you create a new static P2MP LSP to associate endpoints with
flows of interest.
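A minimal sketch, assuming the statement is applied under the routing instance's provider-tunnel hierarchy (the LSP name is a placeholder and must match a preconfigured static P2MP LSP):

```
provider-tunnel {
    rsvp-te {
        static-lsp p2mp-to-egress-pes;
    }
}
```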
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1934
Description | 1934
Syntax
static-umh {
primary address;
backup address;
source-tree;
}
Hierarchy Level
Description
In a BGP multicast VPN (MVPN) with RSVP-TE point-to-multipoint provider tunnels, statically set the
upstream multicast hop (UMH), instead of using one of the dynamic methods to choose the UMH
routers, such as that described in unicast-umh-election.
The static-umh statement causes all type 7 (S,G) routes to use the configured primary and backup
upstream multicast hops. If these UMHs are not available, no UMH is selected. If the primary is not
available, but the backup UMH is available, the backup is used as the UMH.
The static-umh statement only affects type 7 (S,G) routes. Type 6 routes are sent toward the rendezvous
point (RP), and use the dynamic UMH selection that is configured with the unicast-umh-election
statement, or the default method of highest IP address is used if unicast-umh-election is not configured.
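As a sketch (the loopback addresses are placeholders), statically pinning the primary and backup UMH routers looks like this:

```
static-umh {
    primary 10.255.1.1;
    backup 10.255.1.2;
}
```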
Release Information
RELATED DOCUMENTATION
stickydr
IN THIS SECTION
Syntax | 1936
Description | 1936
Syntax
stickydr;
Hierarchy Level
Description
The stickydr feature protects against the traffic loss that can occur when the designated router (DR)
changes, for example when a new router joins the LAN, after an interface down event, or during a device
upgrade. Set stickydr on all the last-hop devices in the LAN; it assigns one DR a special priority (that is,
0xfffffffe, the second-highest priority) irrespective of the existing DR election logic (DR priority and IP
address of PIM neighbors). The sticky DR priority remains with the device until it is explicitly transferred
to another eligible device on the LAN.
This feature is especially useful for countering DR election cases in which a new interface appears on
the LAN, immediately wins the DR election, and starts pulling traffic from the upstream router even
before it has received an IGMP join from a host.
Consider the example of a new device with a higher DR priority or IP address that joins the LAN.
Instead of immediately ceding DR status to the new interface, an existing device with a lower IP address
or lower priority can remain the DR, receive IGMP joins, and send PIM joins upstream. When the
new device (with the higher priority or IP address) appears, it detects the sticky DR and joins as a non-DR.
No traffic is lost because of a DR transition.
Another example is when a DR interface goes down. If the devices in the LAN are configured for
stickydr, a new DR election among the remaining PIM routers takes place as usual, per the RFC, but the
election winner inherits the “sticky” property of the down DR. The sticky status persists even if another
device with a higher priority joins the LAN. Later, when the previous DR comes back up, its DR status is
not resumed.
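A minimal sketch, assuming stickydr is applied per PIM interface (the interface name is illustrative); configure it on every last-hop device in the LAN:

```
protocols {
    pim {
        interface ge-0/0/1.0 {
            stickydr;
        }
    }
}
```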
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1937
Description | 1938
Syntax
stream-protection {
mofrr-asm-starg;
mofrr-disjoint-upstream-only;
mofrr-no-backup-join;
mofrr-primary-path-selection-by-routing;
policy policy-name;
}
Hierarchy Level
Description
Enable multicast-only fast reroute (MoFRR) on a routing or switching device. MoFRR minimizes packet
loss in a network when there is a link failure.
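For example (the policy name is a placeholder), enabling MoFRR for the flows matched by a policy at the [edit routing-options multicast] hierarchy level:

```
routing-options {
    multicast {
        stream-protection {
            policy mofrr-flows;
        }
    }
}
```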
Release Information
RELATED DOCUMENTATION
subscriber-leave-timer
IN THIS SECTION
Syntax | 1939
Description | 1939
Options | 1940
Syntax
subscriber-leave-timer seconds;
Hierarchy Level
Description
Length of time before the multicast VLAN updates QoS data (for example, available bandwidth) for
subscriber interfaces after it receives an IGMP leave message.
Options
seconds—Length of time before the multicast VLAN updates QoS data (for example, available
bandwidth) for subscriber interfaces after it receives an IGMP leave message. Specifying a value of 0
results in an immediate update. This is the same as if the statement were not configured.
• Range: 0 through 30
• Default: 0 seconds
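For illustration, the following sketch delays the QoS update for 15 seconds after an IGMP leave message is received:

```
subscriber-leave-timer 15;
```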
Release Information
IN THIS SECTION
Syntax | 1940
Description | 1941
Options | 1941
Syntax
target target-value {
receiver target-value;
sender target-value;
}
Hierarchy Level
Description
Specify the target value when importing sender and receiver site routes.
Options
target-value—Specify the target value when importing sender and receiver site routes.
receiver—Specify the target community used when importing receiver site routes.
sender—Specify the target community used when importing sender site routes.
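A sketch with placeholder route-target communities:

```
target target:65000:100 {
    receiver target:65000:101;
    sender target:65000:102;
}
```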
Release Information
RELATED DOCUMENTATION
Configuring VRF Route Targets for Routing Instances for an MBGP MVPN
IN THIS SECTION
Syntax | 1942
Description | 1942
Options | 1943
Syntax
Hierarchy Level
Description
Configure the suppression and reuse thresholds for multicast snooping forwarding cache limits.
Options
suppress value—Value to begin suppressing new multicast forwarding cache entries. This value is
mandatory. This number must be greater than the reuse value.
reuse value—(Optional) Value to begin creating new multicast forwarding cache entries. If configured,
this number must be less than the suppress value.
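For example (the values are illustrative), the following sketch suppresses new multicast snooping forwarding-cache entries at 500 and resumes entry creation once the count falls back to 400:

```
threshold {
    suppress 500;
    reuse 400;
}
```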
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1944
Description | 1944
Options | 1944
Syntax
threshold number;
Hierarchy Level
Description
Configure the random early detection (RED) threshold for MSDP active source messages. This number
must be less than the configured or default maximum.
Options
• Default: 24,000
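For example, assuming this statement sits alongside the maximum under MSDP's active-source-limit (values are illustrative), random early detection begins dropping new active source messages above the threshold, before the hard maximum is reached:

```
active-source-limit {
    maximum 25000;
    threshold 24000;
}
```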
Release Information
RELATED DOCUMENTATION
Example: Configuring MSDP with Active Source Limits and Mesh Groups | 562
IN THIS SECTION
Syntax | 1945
Description | 1946
Options | 1946
Syntax
threshold {
log-warning value;
suppress value;
reuse value;
mvpn-rpt-suppress value;
mvpn-rpt-reuse value;
}
Hierarchy Level
Description
Configure the suppression, reuse, and warning log message thresholds for multicast forwarding cache
limits. You can configure the thresholds globally for the multicast forwarding cache or individually for the
IPv4 and IPv6 multicast forwarding caches. Configuring the threshold statement globally for the
multicast forwarding cache or including the family statement to configure the thresholds for the IPv4
and IPv6 multicast forwarding caches are mutually exclusive.
When MVPN RPT suppression is active, on all PE routers whose entry count exceeds the threshold
(including RP PEs), MVPN does not add new (*,G) forwarding entries to the forwarding-cache. Changes
become visible once the entries in the current forwarding-cache time out or are deleted.
To use mvpn-rpt-suppress or mvpn-rpt-reuse, you must first configure the general suppress threshold. If
suppress is configured but mvpn-rpt-suppress is not, both mvpn-rpt-suppress and mvpn-rpt-reuse
inherit the value set for the general suppress.
Options
reuse value or mvpn-rpt-reuse value—(Optional) Value at which to begin creating new multicast
forwarding cache entries. If configured, this number must be less than the corresponding suppress value.
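For example (values are illustrative), general suppression at 1,000 entries with separate, lower MVPN RPT thresholds:

```
threshold {
    suppress 1000;
    reuse 900;
    mvpn-rpt-suppress 600;
    mvpn-rpt-reuse 500;
}
```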
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1947
Description | 1948
Options | 1948
Syntax
threshold milliseconds;
Hierarchy Level
Description
Specify the threshold for the adaptation of the BFD session detection time. When the detection time
adapts to a value equal to or greater than the threshold, a single trap and a single system log message
are sent.
NOTE: The threshold value must be equal to or greater than the transmit interval.
The threshold time must be equal to or greater than the value specified in the minimum-interval
or the minimum-receive-interval statement.
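As a sketch in a generic BFD stanza (values are illustrative milliseconds; the 300 ms threshold is greater than the 100 ms minimum interval, as required):

```
bfd-liveness-detection {
    minimum-interval 100;
    detection-time {
        threshold 300;
    }
}
```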
Options
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1949
Description | 1949
Options | 1949
Syntax
threshold milliseconds;
Hierarchy Level
Description
Specify the threshold for the adaptation of the BFD session transmit interval. When the transmit
interval adapts to a value greater than the threshold, a single trap and a single system message are sent.
Options
NOTE: The threshold value specified in the threshold statement must be greater than the
value specified in the minimum-interval statement for the transmit-interval statement.
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1951
Description | 1952
Options | 1952
Syntax
threshold value;
Hierarchy Level
Description
Configure a threshold at which a warning message is logged when a certain number of PIM entries have
been received by the device.
Options
value—Threshold at which a warning message is logged. This is a percentage of the maximum number of
entries accepted by the device as defined with the maximum statement. You can apply this threshold to
incoming PIM join messages, PIM register messages, and group-to-RP mappings.
For example, if you configure a maximum number of 1,000 incoming group-to-RP mappings, and you
configure a threshold value of 90 percent, warning messages are logged in the system log when the
device receives 900 group-to-RP mappings. The same formula applies to incoming PIM join messages
and PIM register messages if configured with both the maximum limit and the threshold value
statements.
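The worked example above, expressed as configuration (assuming the group-to-RP mapping limit hierarchy; values are illustrative):

```
group-rp-mapping {
    maximum 1000;
    threshold 90;
}
```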
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1953
Description | 1953
Syntax
threshold {
group group-address {
source source-address {
rate threshold-rate;
}
}
}
Hierarchy Level
Description
Release Information
Statement introduced before Junos OS Release 7.4. In Junos OS Release 17.3R1, the mdt hierarchy was
moved from provider-tunnel to the provider-tunnel family inet and provider-tunnel family inet6
hierarchies as part of an upgrade to add IPv6 support for default MDT in Rosen 7, and data MDT for
Rosen 6 and Rosen 7. The provider-tunnel mdt hierarchy is now hidden for backward compatibility with
existing scripts.
RELATED DOCUMENTATION
Example: Configuring Data MDTs and Provider Tunnels Operating in Source-Specific Multicast
Mode | 696
Example: Configuring Data MDTs and Provider Tunnels Operating in Any-Source Multicast Mode |
690
threshold-rate
IN THIS SECTION
Syntax | 1954
Description | 1955
Options | 1955
Syntax
threshold-rate kbps;
Hierarchy Level
Description
Specify the data threshold required before a new tunnel is created for a dynamic selective point-to-
multipoint LSP. This statement is part of the configuration for point-to-multipoint LSPs for MBGP
MVPNs and PIM-SSM GRE or RSVP-TE selective provider tunnels.
Options
• Range: 0 through 1,000,000 kilobits per second. Specifying 0 is equivalent to not including the
statement.
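For example (the prefixes are illustrative), assuming the selective provider-tunnel hierarchy, a new tunnel is created once a matching flow exceeds 10 Kbps:

```
selective {
    group 232.1.1.0/24 {
        source 10.1.1.0/24 {
            threshold-rate 10;
        }
    }
}
```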
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1956
Description | 1956
Options | 1957
Syntax
Hierarchy Level
Description
Configure the timeout value for multicast forwarding cache entries associated with the flow map.
Options
never non-discard-entry-only—Specify that forwarding cache entries always remain active. If you omit
the non-discard-entry-only option, all multicast forwarding entries, including those in forwarding and
pruned states, are kept forever. If you include the non-discard-entry-only option, entries with forwarding
states are kept forever, and entries with pruned states time out.
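For example (a sketch based on the option described above), keeping forwarding entries for the flow map alive indefinitely, except those in pruned states:

```
timeout never non-discard-entry-only;
```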
Release Information
timeout (Multicast)
IN THIS SECTION
Syntax | 1957
Description | 1958
Options | 1958
Syntax
Hierarchy Level
Description
Configure the timeout value for multicast forwarding cache entries. In general, you should regularly
refresh the forwarding cache so that it does not fill up with old entries that prevent newer, higher-
priority entries from being added.
Options
family (inet | inet6)—(Optional) Apply the configured timeout to either IPv4 or IPv6 multicast forwarding
cache entries. Configuring the timeout statement globally for the multicast forwarding cache or
including the family statement to configure the timeout value for the IPv4 and IPv6 multicast forwarding
caches are mutually exclusive.
• Default: Six minutes. By default, the configured timeout applies to both IPv4 and IPv6 multicast
forwarding cache entries.
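For example, raising the cache timeout from the six-minute default to 10 minutes for all multicast forwarding cache entries (the timeout is expressed in minutes):

```
routing-options {
    multicast {
        forwarding-cache {
            timeout 10;
        }
    }
}
```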
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1959
Description | 1959
Default | 1960
Options | 1960
Syntax
traceoptions {
file filename <files number> <no-stamp> <replace> <size size> <world-readable | no-world-readable>;
flag flag <flag-modifier>;
}
Hierarchy Level
Description
Default
Options
file filename—Name of the file to receive the output of the tracing operation. Enclose the name within
quotation marks. All files are placed in the directory /var/log.
files number—(Optional) Maximum number of trace files, including the active trace file. When a trace file
reaches its maximum size, its contents are archived into a compressed file named filename.0 and the
trace file is emptied. When the trace file reaches its maximum size again, the filename.0 archive file is
renamed filename.1 and a new filename.0 archive file is created from the contents of the trace file. This
process continues until the maximum number of trace files is reached, at which point the system starts
overwriting the oldest archive file each time the trace file is archived. If you specify a maximum number
of files, you also must specify a maximum file size with the size option.
• Default: 10 files
flag flag—Tracing operation to perform. To specify more than one tracing operation, include multiple flag
statements. You can include the following flags:
• normal—Trace normal IGMP snooping protocol events. If you do not specify this flag, only unusual or
abnormal operations are traced.
flag-modifier—(Optional) Modifier for the tracing flag. You can specify one or more of these modifiers
per flag:
• disable—Disable the tracing operation. You can use this option to disable a single operation when
you have defined a broad group of tracing operations, such as all.
no-stamp—(Optional) Omit the timestamp at the beginning of each line in the trace file.
no-world-readable—(Optional) Restrict file access to the user who created the file.
replace—(Optional) Replace an existing trace file if there is one. If you do not include this option, tracing
output is appended to an existing trace file.
size size —(Optional) Maximum size of each trace file, in bytes, kilobytes (KB), megabytes (MB), or
gigabytes (GB). When a trace file named trace-file reaches its maximum size, it is zipped and renamed
trace-file.0, then trace-file.1, and so on, until the maximum number of trace files is reached. Then the
oldest trace file is overwritten. If you specify a maximum size, you also must specify a maximum number
of files with the files option.
• Default: 128 KB
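For example (the filename is illustrative), tracing normal IGMP snooping events to a capped set of rotating files readable only by the creating user:

```
traceoptions {
    file igmp-snoop-trace size 1m files 5 no-world-readable;
    flag normal;
}
```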
Release Information
IN THIS SECTION
Syntax | 1962
Description | 1962
Default | 1963
Options | 1963
Syntax
traceoptions {
file filename <files number> <size size> <world-readable | no-world-readable>;
flag flag <disable>;
}
Hierarchy Level
[edit multicast-snooping-options]
Description
Default
Options
disable—(Optional) Disable the tracing operation. One use of this option is to disable a single operation
when you have defined a broad group of tracing operations, such as all.
file name—Name of the file to receive the output of the tracing operation. Enclose the name in
quotation marks. We recommend that you place multicast snooping tracing output in the file /var/log/
multicast-snooping-log.
files number—(Optional) Maximum number of trace files. When a trace file named trace-file reaches its
maximum size, it is renamed trace-file.0, then trace-file.1, and so on, until the maximum number of trace
files is reached. Then, the oldest trace file is overwritten.
If you specify a maximum number of files, you must also specify a maximum file size with the size
option.
flag—Tracing operation to perform. To specify more than one tracing operation, include multiple flag
statements.
• Default: If you do not specify this option, only unusual or abnormal operations are traced.
size size—(Optional) Maximum size of each trace file, in kilobytes (KB) or megabytes (MB). When a trace
file named trace-file reaches this size, it is renamed trace-file.0. When the trace-file again reaches its
maximum size, trace-file.0 is renamed trace-file.1 and trace-file is renamed trace-file.0. This renaming
scheme continues until the maximum number of trace files is reached. Then the oldest trace file is
overwritten.
If you specify a maximum file size, you must also specify a maximum number of trace files with the files
option.
• Default: 1 MB
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1965
Description | 1965
Default | 1965
Options | 1966
Syntax
traceoptions {
file filename <files number> <size size> <world-readable | no-world-readable>;
flag flag <flag-modifier> <disable>;
}
Hierarchy Level
Description
Default
The default PIM trace options are those inherited from the routing protocol's traceoptions statement
included at the [edit routing-options] hierarchy level.
Options
file filename—Name of the file to receive the output of the tracing operation. Enclose the name within
quotation marks. All files are placed in the directory /var/log.
flag flag—Tracing operation to perform. To specify more than one tracing operation, include multiple flag
statements.
• normal—Trace normal PIM snooping events. If you do not specify this flag, only unusual or abnormal
operations are traced.
flag-modifier—(Optional) Modifier for the tracing flag. You can specify one or more of these modifiers
per flag:
• disable—Disable the tracing operation. You can use this option to disable a single operation when
you have defined a broad group of tracing operations, such as all.
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1967
Description | 1968
Options | 1968
Syntax
traceoptions {
file filename <files number> <size size> <world-readable | no-world-readable>;
flag flag <flag-modifier> <disable>;
}
Hierarchy Level
Description
To specify more than one tracing operation, include multiple flag statements.
Options
disable—(Optional) Disable the tracing operation. You can use this option to disable a single operation
when you have defined a broad group of tracing operations, such as all.
file filename—Name of the file to receive the output of the tracing operation. Enclose the name within
quotation marks. All files are placed in the directory /var/log. We recommend that you place tracing
output in the file igmp-log.
files number—(Optional) Maximum number of trace files. When a trace file named trace-file reaches its
maximum size, it is renamed trace-file.0, then trace-file.1, and so on, until the maximum number of trace
files is reached. Then the oldest trace file is overwritten.
If you specify a maximum number of files, you must also include the size statement to specify the
maximum file size.
• Default: 2 files
flag—Tracing operation to perform. To specify more than one tracing operation, include multiple flag
statements.
• Default: If you do not specify this option, only unusual or abnormal operations are traced.
• state—State transitions
• timer—Timer usage
flag-modifier—(Optional) Modifier for the tracing flag. You can specify one or more of these modifiers:
no-stamp—(Optional) Do not place timestamp information at the beginning of each line in the trace file.
• Default: If you omit this option, timestamp information is placed at the beginning of each line of the
tracing output.
• Default: If you do not include this option, tracing output is appended to an existing trace file.
size size—(Optional) Maximum size of each trace file, in kilobytes (KB), megabytes (MB), or gigabytes
(GB). When a trace file named trace-file reaches this size, it is renamed trace-file.0. When trace-file again
reaches this size, trace-file.0 is renamed trace-file.1 and trace-file is renamed trace-file.0. This renaming
scheme continues until the maximum number of trace files is reached. Then the oldest trace file is
overwritten.
If you specify a maximum file size, you must also include the files statement to specify the maximum
number of trace files.
• Default: 1 MB
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1970
Description | 1971
Default | 1971
Options | 1971
Syntax
traceoptions {
file filename <files number> <size size> <world-readable | no-world-readable>;
flag flag <flag-modifier> <disable>;
}
Hierarchy Level
Description
To specify more than one tracing operation, include multiple flag statements.
Default
The default DVMRP trace options are those inherited from the routing protocols traceoptions
statement included at the [edit routing-options] hierarchy level.
Options
disable—(Optional) Disable the tracing operation. You can use this option to disable a single operation
when you have defined a broad group of tracing operations, such as all.
file filename—Name of the file to receive the output of the tracing operation. Enclose the name within
quotation marks. All files are placed in the directory /var/log. We recommend that you place tracing
output in the dvmrp-log file.
files number—(Optional) Maximum number of trace files. When a trace file named trace-file reaches its
maximum size, it is renamed trace-file.0, then trace-file.1, and so on, until the maximum number of trace
files is reached. Then the oldest trace file is overwritten.
If you specify a maximum number of files, you must also include the size statement to specify the
maximum file size.
• Default: 2 files
flag—Tracing operation to perform. To specify more than one tracing operation, include multiple flag
statements.
• graft—Graft messages
• Default: If you do not specify this option, only unusual or abnormal operations are traced.
• poison—Poison-route-reverse packets
• probe—Probe packets
• prune—Prune messages
• state—State transitions
• timer—Timer usage
flag-modifier—(Optional) Modifier for the tracing flag. You can specify one or more of these modifiers:
no-stamp—(Optional) Do not place timestamp information at the beginning of each line in the trace file.
• Default: If you omit this option, timestamp information is placed at the beginning of each line of the
tracing output.
• Default: If you do not include this option, tracing output is appended to an existing trace file.
size size—(Optional) Maximum size of each trace file, in kilobytes (KB), megabytes (MB), or gigabytes
(GB). When a trace file named trace-file reaches this size, it is renamed trace-file.0. When trace-file again
reaches this size, trace-file.0 is renamed trace-file.1 and trace-file is renamed trace-file.0. This renaming
scheme continues until the maximum number of trace files is reached. Then the oldest trace file is
overwritten.
If you specify a maximum file size, you must also include the files statement to specify the maximum
number of trace files.
• Default: 1 MB
Release Information
NOTE: Distance Vector Multicast Routing Protocol (DVMRP) was deprecated in Junos OS
Release 16.1. Although DVMRP commands continue to be available and configurable in the CLI,
they are no longer visible and are scheduled for removal in a subsequent release.
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1974
Description | 1974
Default | 1975
Options | 1975
Syntax
traceoptions {
file filename <files number> <size size> <world-readable | no-world-readable>;
flag flag <flag-modifier> <disable>;
}
Hierarchy Level
Description
To specify more than one tracing operation, include multiple flag statements.
Default
The default IGMP trace options are those inherited from the routing protocols traceoptions statement
included at the [edit routing-options] hierarchy level.
Options
disable—(Optional) Disable the tracing operation. You can use this option to disable a single operation
when you have defined a broad group of tracing operations, such as all.
file filename—Name of the file to receive the output of the tracing operation. Enclose the name within
quotation marks. All files are placed in the directory /var/log. We recommend that you place tracing
output in the file igmp-log.
files number—(Optional) Maximum number of trace files. When a trace file named trace-file reaches its
maximum size, it is renamed trace-file.0, then trace-file.1, and so on, until the maximum number of trace
files is reached. Then the oldest trace file is overwritten.
If you specify a maximum number of files, you must also include the size statement to specify the
maximum file size.
• Default: 2 files
flag—Tracing operation to perform. To specify more than one tracing operation, include multiple flag
statements.
• Default: If you do not specify this option, only unusual or abnormal operations are traced.
• state—State transitions
• timer—Timer usage
flag-modifier—(Optional) Modifier for the tracing flag. You can specify one or more of these modifiers:
no-stamp—(Optional) Do not place timestamp information at the beginning of each line in the trace file.
• Default: If you omit this option, timestamp information is placed at the beginning of each line of the
tracing output.
• Default: If you do not include this option, tracing output is appended to an existing trace file.
size size—(Optional) Maximum size of each trace file, in kilobytes (KB), megabytes (MB), or gigabytes
(GB). When a trace file named trace-file reaches this size, it is renamed trace-file.0. When trace-file again
reaches this size, trace-file.0 is renamed trace-file.1 and trace-file is renamed trace-file.0. This renaming
scheme continues until the maximum number of trace files is reached. Then the oldest trace file is
overwritten.
If you specify a maximum file size, you must also include the files statement to specify the maximum
number of trace files.
• Default: 1 MB
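For example, tracing IGMP state transitions and timer usage into the recommended log file with rotation:

```
traceoptions {
    file igmp-log size 5m files 3;
    flag state;
    flag timer;
}
```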
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1977
Description | 1978
Default | 1978
Options | 1978
Syntax
traceoptions {
file filename <files number> <size size> <world-readable | no-world-readable>;
flag flag (detail | disable | receive | send);
}
Hierarchy Level
Description
Default
Options
file filename—Name of the file to receive the output of the tracing operation. All files are placed in the
directory /var/log.
files number—(Optional) Maximum number of trace files. When a trace file named trace-file reaches its
maximum size, it is renamed trace-file.0, then trace-file.1, and so on, until the maximum number of trace
files is reached, at which point the oldest trace file is overwritten. If you specify a maximum number of
files, you also must specify a maximum file size with the size option (xk to specify KB, xm to specify MB,
or xg to specify gigabytes).
• Default: 3 files
flag flag —Tracing operation to perform. To specify more than one tracing operation, include multiple flag
statements. You can include the following flags:
• client-notification—Trace notifications.
no-world-readable—(Optional) Restrict file access to the user who created the file.
size size—(Optional) Maximum size of each trace file, in kilobytes (KB), megabytes (MB), or gigabytes
(GB). When a trace file named trace-file reaches its maximum size, it is renamed trace-file.0, then trace-
file.1, and so on, until the maximum number of trace files is reached. Then the oldest trace file is
overwritten. If you specify a maximum number of files, you also must specify a maximum file size with
the files option.
• Default: 128 KB
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1980
Description | 1981
Default | 1981
Options | 1981
Syntax
traceoptions {
file filename <files number> <size size> <world-readable | no-world-readable>;
flag flag <flag-modifier> <disable>;
}
Hierarchy Level
Description
To specify more than one tracing operation, include multiple flag statements.
Default
The default MSDP trace options are those inherited from the routing protocol's traceoptions statement
included at the [edit routing-options] hierarchy level.
Options
disable—(Optional) Disable the tracing operation. You can use this option to disable a single operation
when you have defined a broad group of tracing operations, such as all.
file filename—Name of the file to receive the output of the tracing operation. Enclose the name within
quotation marks. All files are placed in the directory /var/log. We recommend that you place tracing
output in the msdp-log file.
files number—(Optional) Maximum number of trace files. When a trace file named trace-file reaches its
maximum size, it is renamed trace-file.0, then trace-file.1, and so on, until the maximum number of trace
files is reached. Then the oldest trace file is overwritten.
If you specify a maximum number of files, you must also include the size statement to specify the
maximum file size.
• Default: 2 files
flag flag—Tracing operation to perform. To specify more than one tracing operation, include multiple flag
statements.
• keepalive—Keepalive messages
• source-active—Source-active packets
• state—State transitions
• timer—Timer usage
• Default: If you do not specify this option, only unusual or abnormal operations are traced.
flag-modifier—(Optional) Modifier for the tracing flag. You can specify one or more of these modifiers:
no-stamp—(Optional) Do not place timestamp information at the beginning of each line in the trace file.
• Default: If you omit this option, timestamp information is placed at the beginning of each line of the
tracing output.
replace—(Optional) Replace an existing trace file if there is one.
• Default: If you do not include this option, tracing output is appended to an existing trace file.
size size—(Optional) Maximum size of each trace file, in kilobytes (KB), megabytes (MB), or gigabytes
(GB). When a trace file named trace-file reaches this size, it is renamed trace-file.0. When trace-file again
reaches this size, trace-file.0 is renamed trace-file.1 and trace-file is renamed trace-file.0. This renaming
scheme continues until the maximum number of trace files is reached. Then the oldest trace file is
overwritten.
If you specify a maximum file size, you must also include the files statement to specify the maximum
number of trace files.
• Default: 1 MB
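A sketch of how these options combine for MSDP, using the recommended msdp-log file name (the size, file count, and flag choices are illustrative):

```
protocols {
    msdp {
        traceoptions {
            file msdp-log size 2m files 4;
            flag source-active detail;
            flag keepalive;
        }
    }
}
```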
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1984
Description | 1985
Options | 1985
Syntax
traceoptions {
file filename <files number> <size size> <world-readable | no-world-readable>;
flag flag <flag-modifier> <disable>;
}
Hierarchy Level
Description
Options
disable—(Optional) Disable the tracing operation. You can use this option to disable a single operation
when you have defined a broad group of tracing operations, such as all.
file filename—Name of the file to receive the output of the tracing operation. Enclose the name in
quotation marks (" ").
files number—(Optional) Maximum number of trace files. When a trace file named trace-file reaches its
maximum size, it is renamed trace-file.0. When trace-file again reaches its maximum size, trace-file.0 is renamed
trace-file.1 and trace-file is renamed trace-file.0. This renaming scheme continues until the maximum
number of trace files is reached. Then the oldest trace file is overwritten.
If you specify a maximum number of files, you also must specify a maximum file size with the size
option.
• Default: 2 files
flag flag—Tracing operation to perform. To specify more than one tracing operation, include multiple flag
statements. You can specify any of the following flags:
• error—Error conditions
• general—General events
• normal—Normal events
• policy—Policy processing
• route—Routing information
• state—State transitions
flag-modifier—(Optional) Modifier for the tracing flag. You can specify the following modifiers:
size size—(Optional) Maximum size of each trace file, in kilobytes (KB), megabytes (MB), or gigabytes
(GB). When a trace file named trace-file reaches this size, it is renamed trace-file.0. When trace-file
again reaches its maximum size, trace-file.0 is renamed trace-file.1 and trace-file is renamed trace-file.0.
This renaming scheme continues until the maximum number of trace files is reached. Then the oldest
trace file is overwritten.
If you specify a maximum file size, you also must specify a maximum number of trace files with the files
option.
• Default: 1 MB
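A sketch of these options at the [edit protocols mvpn] hierarchy level (the file name and flag choices are illustrative):

```
protocols {
    mvpn {
        traceoptions {
            file mvpn-trace size 1m files 3 world-readable;
            flag error;
            flag route detail;
        }
    }
}
```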
Release Information
Support at the [edit protocols mvpn] hierarchy level introduced in Junos OS Release 13.3.
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1987
Description | 1988
Default | 1988
Options | 1988
Syntax
traceoptions {
file filename <files number> <size size> <world-readable | no-world-readable>;
flag flag <flag-modifier> <disable>;
}
Hierarchy Level
Description
To specify more than one tracing operation, include multiple flag statements.
Default
The default PIM trace options are those inherited from the routing protocol's traceoptions statement
included at the [edit routing-options] hierarchy level.
Options
disable—(Optional) Disable the tracing operation. You can use this option to disable a single operation
when you have defined a broad group of tracing operations, such as all.
file filename—Name of the file to receive the output of the tracing operation. Enclose the name within
quotation marks. All files are placed in the directory /var/log. We recommend that you place tracing
output in the pim-log file.
files number—(Optional) Maximum number of trace files. When a trace file named trace-file reaches its
maximum size, it is renamed trace-file.0, then trace-file.1, and so on, until the maximum number of trace
files is reached. Then the oldest trace file is overwritten.
If you specify a maximum number of files, you must also include the size statement to specify the
maximum file size.
• Default: 2 files
flag flag—Tracing operation to perform. To specify more than one tracing operation, include multiple flag
statements.
• assert—Assert messages
• bootstrap—Bootstrap messages
• hello—Hello packets
• join—Join messages
• prune—Prune messages
• rp—Candidate RP advertisements
• state—State transitions
• timer—Timer usage
• Default: If you do not specify this option, only unusual or abnormal operations are traced.
flag-modifier—(Optional) Modifier for the tracing flag. You can specify one or more of these modifiers:
no-stamp—(Optional) Do not place timestamp information at the beginning of each line in the trace file.
• Default: If you omit this option, timestamp information is placed at the beginning of each line of the
tracing output.
replace—(Optional) Replace an existing trace file if there is one.
• Default: If you do not include this option, tracing output is appended to an existing trace file.
size size—(Optional) Maximum size of each trace file, in kilobytes (KB), megabytes (MB), or gigabytes
(GB). When a trace file named trace-file reaches this size, it is renamed trace-file.0. When trace-file again
reaches this size, trace-file.0 is renamed trace-file.1 and trace-file is renamed trace-file.0. This renaming
scheme continues until the maximum number of trace files is reached. Then the oldest trace file is
overwritten.
If you specify a maximum file size, you must also include the files statement to specify the maximum
number of trace files.
• Default: 1 MB
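A sketch of how these options combine for PIM, using the recommended pim-log file name (the size, file count, and flags are illustrative):

```
protocols {
    pim {
        traceoptions {
            file pim-log size 2m files 4;
            flag hello;
            flag join detail;
        }
    }
}
```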
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1991
Description | 1991
Syntax
transmit-interval {
minimum-interval milliseconds;
threshold milliseconds;
}
Hierarchy Level
Description
Specify the transmit interval for the bfd-liveness-detection statement. The negotiated transmit interval
for a peer is the interval between the sending of BFD packets to peers. The receive interval for a peer is
the minimum interval between receiving packets sent from its peer; the receive interval is not negotiated
between peers. To determine the transmit interval, each peer compares its configured minimum transmit
interval with its peer's minimum receive interval. The larger of the two numbers is accepted as the
transmit interval for that peer.
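To make the negotiation concrete: if the local peer configures a minimum transmit interval of 300 milliseconds and the remote peer advertises a minimum receive interval of 500 milliseconds, the negotiated transmit interval is 500 milliseconds, the larger of the two. A minimal sketch with illustrative values:

```
bfd-liveness-detection {
    transmit-interval {
        minimum-interval 300;
        threshold 500;
    }
}
```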
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1993
Description | 1993
Default | 1993
Options | 1993
Syntax
tunnel-devices [ ud-fpc/pic/port ];
Hierarchy Level
Description
List one or more tunnel-capable Automatic Multicast Tunneling (AMT) PICs to be used for creating
multicast tunnel (ud) interfaces. Creating an AMT PIC list enables you to control the load-balancing
implementation.
The physical position of the PIC in the routing device determines the multicast tunnel interface name.
Default
Multicast tunnel interfaces are created on all available tunnel-capable AMT PICs, based on a round-robin
algorithm.
Options
NOTE: Each tunnel-devices statement keyword is optional. By default, all configured tunnel
devices are used. The keyword selects the subset of configured tunnel devices.
Tunnel devices must be configured on MX Series routers. They are not automatically available as they
are on M Series routers, which have dedicated PICs. On MX Series routers, the tunnel device port is
the next highest number after the physical ports on a PIC created with the tunnel-services
statement at the [edit chassis fpc slot-number pic number] hierarchy level.
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1995
Description | 1995
Default | 1995
Options | 1995
Syntax
tunnel-devices [ mt-fpc/pic/port ];
Hierarchy Level
Description
List one or more tunnel-capable PICs to be used for creating multicast tunnel (mt) interfaces. Creating a
PIC list enables you to control the load-balancing implementation.
• On MX Series routers, a PIC created with the tunnel-services statement at the [edit chassis fpc slot-
number pic number] hierarchy level.
The physical position of the PIC in the routing device determines the multicast tunnel interface name.
For example, if you have an Adaptive Services PIC installed in FPC slot 0 and PIC slot 0, the
corresponding multicast tunnel interface name is mt-0/0/0. The same is true for Tunnel Services PICs,
Multiservices PICs, and Multiservices DPCs.
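Following the naming rule above, a PIC list that restricts multicast tunnel interfaces to two specific PICs might look like this sketch (the FPC/PIC/port positions are illustrative):

```
tunnel-devices [ mt-0/0/0 mt-1/2/0 ];
```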
Default
Multicast tunnel interfaces are created on all available tunnel-capable PICs, based on a round-robin
algorithm.
Options
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1996
Description | 1997
Options | 1997
Syntax
tunnel-limit number;
Hierarchy Level
Description
Limit the number of Automatic Multicast Tunneling (AMT) data tunnels created. The system might reach
a dynamic upper limit of tunnels of all types before the static AMT limit is reached.
Options
• Default: 1 tunnel
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 1998
Description | 1998
Options | 1998
Syntax
tunnel-limit limit;
Hierarchy Level
Description
Limit the number of data MDTs created in this VRF instance. If the limit is 0, then no data MDTs are
created for this VRF instance.
Options
• Default: 0 (No data MDTs are created for this VRF instance.)
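A sketch of limiting data MDTs in a VRF instance, assuming the post-17.3R1 provider-tunnel family inet mdt hierarchy described in the release information below (the instance name vpn-a and the limit value are hypothetical):

```
routing-instances {
    vpn-a {
        provider-tunnel {
            family inet {
                mdt {
                    tunnel-limit 16;
                }
            }
        }
    }
}
```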
Release Information
Statement introduced before Junos OS Release 7.4. In Junos OS Release 17.3R1, the mdt hierarchy was
moved from provider-tunnel to the provider-tunnel family inet and provider-tunnel family inet6
hierarchies as part of an upgrade to add IPv6 support for default MDT in Rosen 7, and data MDT for
Rosen 6 and Rosen 7. The provider-tunnel mdt hierarchy is now hidden for backward compatibility with
existing scripts.
RELATED DOCUMENTATION
Example: Configuring Data MDTs and Provider Tunnels Operating in Source-Specific Multicast
Mode | 696
Example: Configuring Data MDTs and Provider Tunnels Operating in Any-Source Multicast Mode |
690
IN THIS SECTION
Syntax | 2000
Description | 2000
Options | 2000
Syntax
tunnel-limit number;
Hierarchy Level
Description
Specify a limit on the number of selective tunnels that can be created for an LSP. This limit can be
applied to the following types of selective tunnels:
• LDP-signaled LSP
• RSVP-signaled LSP
Options
Release Information
RELATED DOCUMENTATION
tunnel-source
IN THIS SECTION
Syntax | 2001
Description | 2002
Syntax
tunnel-source address;
Hierarchy Level
Description
Configure the source address for the provider space multipoint generic routing encapsulation (mGRE)
tunnel. This statement enables a VPN tunnel source for Rosen 6 or Rosen 7 multicast VPNs.
Release Information
In Junos OS Release 17.3R1, the pim-ssm hierarchy was moved from provider-tunnel to the provider-
tunnel family inet and provider-tunnel family inet6 hierarchies as part of an upgrade to add IPv6
support for default multicast distribution tree (MDT) in Rosen 7, and data MDT for Rosen 6 and Rosen 7.
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2003
Description | 2003
Options | 2003
Syntax
unicast {
receiver;
sender;
}
Hierarchy Level
Description
Options
receiver—Specify the unicast target community used when importing receiver site routes.
sender—Specify the unicast target community used when importing sender site routes.
Release Information
RELATED DOCUMENTATION
Configuring VRF Route Targets for Routing Instances for an MBGP MVPN
IN THIS SECTION
Syntax | 2004
Description | 2004
Default | 2004
Syntax
unicast;
Hierarchy Level
Description
In a multiprotocol BGP (MBGP) multicast VPN (MVPN), configure the virtual tunnel (VT) interface to be
used for unicast traffic only.
Default
If you omit this statement, the VT interface can be used for both multicast and unicast traffic.
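A minimal sketch, assuming the statement is applied to a VT interface within the routing instance (the instance and interface names are hypothetical):

```
routing-instances {
    vpn-a {
        interface vt-0/1/0.0 {
            unicast;
        }
    }
}
```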
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2005
Description | 2006
Options | 2006
Syntax
unicast-stream-limit number;
Hierarchy Level
Description
Options
number—Maximum number of data unicast streams that can be created on the system.
• Default: 1
Release Information
RELATED DOCUMENTATION
unicast-umh-election
IN THIS SECTION
Syntax | 2007
Description | 2007
Syntax
unicast-umh-election;
Hierarchy Level
Description
Configure a router to use the unicast route preference to determine the single forwarder election.
Release Information
RELATED DOCUMENTATION
upstream-interface
IN THIS SECTION
Syntax | 2008
Description | 2009
Options | 2009
Syntax
upstream-interface [ interface-names ];
Hierarchy Level
Description
Configure at least one, but not more than two, upstream interfaces on the rendezvous point (RP) routing
device that resides between a customer edge–facing Protocol Independent Multicast (PIM) domain and
a core-facing PIM domain. The RP routing device translates PIM join or prune messages into
corresponding IGMP report or leave messages (if you include the pim-to-igmp-proxy statement), or into
corresponding MLD report or leave messages (if you include the pim-to-mld-proxy statement). The
routing device then proxies the IGMP or MLD report or leave messages to one or both upstream
interfaces to forward IPv4 multicast traffic (for IGMP) or IPv6 multicast traffic (for MLD) across the PIM
domains.
Options
interface-names—Names of one or two upstream interfaces to which the RP routing device proxies
IGMP or MLD report or leave messages for transmission of multicast traffic across PIM domains. You
can specify a maximum of two upstream interfaces on the RP routing device. To configure a set of two
upstream interfaces, specify the full interface names, including all physical and logical address
components, within square brackets ( [ ] ).
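A sketch of a two-interface set, assuming the statement is configured under the pim-to-igmp-proxy statement on the RP routing device (the interface names are placeholders):

```
protocols {
    pim {
        rp {
            pim-to-igmp-proxy {
                upstream-interface [ ge-0/0/1.0 ge-0/0/2.0 ];
            }
        }
    }
}
```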
Release Information
RELATED DOCUMENTATION
use-p2mp-lsp
IN THIS SECTION
Syntax | 2010
Description | 2010
Syntax
igmp-snooping-options {
use-p2mp-lsp;
}
Hierarchy Level
Description
Point-to-multipoint LSP for IGMP snooping enables multicast data traffic in the core to take the point-
to-multipoint path. The effect is a reduction in the amount of traffic generated on the PE router when
sending multicast packets for multiple VPLS sessions, because it avoids the need to send multiple parallel
streams when forwarding multicast traffic to PE routers participating in the VPLS. Note that the options
configured for IGMP snooping are applied per routing instance, so all IGMP snooping routes in the
same instance use the same mode, either point-to-multipoint or pseudowire.
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2011
Description | 2012
Options | 2012
Syntax
version (0 | 1 | automatic);
Hierarchy Level
Description
Specify the bidirectional forwarding detection (BFD) protocol version that you want to detect.
Options
Configure the BFD version to detect: 1 (BFD version 1) or automatic (autodetect the BFD version).
• Default: automatic
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2013
Description | 2013
Options | 2013
Syntax
version version;
Hierarchy Level
Description
Starting in Junos OS Release 16.1, it is no longer necessary to specify a PIM version. PIMv1 has been
deprecated, so the version choice is no longer meaningful.
Options
• Default: PIMv2 for both rendezvous point (RP) mode (at the [edit protocols pim rp static address
address] hierarchy level) and interface mode (at the [edit protocols pim interface interface-name]
hierarchy level).
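On releases that still accept the statement, an explicit interface-mode setting would look like this sketch (the interface name is illustrative):

```
protocols {
    pim {
        interface ge-0/0/0.0 {
            version 2;
        }
    }
}
```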
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2015
Description | 2015
Options | 2015
Syntax
version version;
Hierarchy Level
Description
Options
• Range: 1, 2, or 3
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2016
Description | 2016
Options | 2016
Syntax
version version;
Hierarchy Level
Description
Specify the version of IGMP used through an Automatic Multicast Tunneling (AMT) interface.
Options
• Range: 1, 2, or 3
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2017
Description | 2018
Options | 2018
Syntax
version version;
Hierarchy Level
Description
Configure the MLD version explicitly. MLD version 2 (MLDv2) is used only to support source-specific
multicast (SSM).
Options
• Range: 1 or 2
• Default: 1 (MLDv1)
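For example, to enable MLDv2 on an interface for SSM support, a sketch such as the following could be used (the interface name is illustrative):

```
protocols {
    mld {
        interface ge-0/0/0.0 {
            version 2;
        }
    }
}
```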
Release Information
RELATED DOCUMENTATION
vrf-advertise-selective
IN THIS SECTION
Syntax | 2019
Description | 2019
Syntax
vrf-advertise-selective {
family {
inet-mvpn;
inet6-mvpn;
}
}
Hierarchy Level
Description
Explicitly enable IPv4 or IPv6 MVPN routes to be advertised from the VRF instance while preventing all
other route types from being advertised.
If you configure the vrf-advertise-selective statement without any of its options, the router or switch
has the same behavior as if you configured the no-vrf-advertise statement. All VPN routes are
prevented from being advertised from a VRF routing instance to the remote PE routers. This behavior is
useful for hub-and-spoke configurations, enabling you to configure a PE router to not advertise VPN
routes from the primary (hub) instance. Instead, these routes are advertised from the secondary
(downstream) instance.
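A sketch that advertises only IPv4 MVPN routes from a VRF instance while suppressing all other route types (the instance name vpn-a is hypothetical):

```
routing-instances {
    vpn-a {
        vrf-advertise-selective {
            family {
                inet-mvpn;
            }
        }
    }
}
```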
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2021
Description | 2021
Default | 2021
Options | 2022
Syntax
vlan (all | vlan-id) {
immediate-leave;
interface interface-name {
group-limit limit;
host-only-interface;
multicast-router-interface;
static {
group multicast-group-address {
source ip-address;
}
}
}
proxy {
source-address ip-address;
}
query-interval seconds;
query-last-member-interval seconds;
query-response-interval seconds;
robust-count number;
}
Hierarchy Level
Description
Default
Options
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax (EX4600, NFX Series, QFabric Systems, and QFX Series) | 2023
Description | 2024
Default | 2025
Options | 2025
Syntax (EX4600, NFX Series, QFabric Systems, and QFX Series)
vlan vlan-name {
immediate-leave;
interface interface-name {
group-limit limit;
host-only-interface;
multicast-router-interface;
static {
group multicast-group-address {
source ip-address;
}
}
}
(l2-querier | igmp-querier (QFabric Systems only)) {
source-address ip-address;
}
qualified-vlan;
proxy {
source-address ip-address;
}
query-interval seconds;
query-last-member-interval seconds;
query-response-interval seconds;
robust-count number;
}
Hierarchy Level
Description
Configure IGMP snooping parameters for a VLAN (or all VLANs if you use the all option, where
supported).
On legacy EX Series switches, which do not support the Enhanced Layer 2 Software (ELS) configuration
style, IGMP snooping is enabled by default on all VLANs, and this statement includes a disable option if
you want to disable IGMP snooping selectively on some VLANs or disable it on all VLANs. Otherwise,
IGMP snooping is enabled on the specified VLANs if you configure any statements and options in this
hierarchy.
NOTE: You cannot configure IGMP snooping on a secondary (private) VLAN (PVLAN). However,
starting in Junos OS Release 18.3R1 on EX4300 switches and EX4300 Virtual Chassis, and Junos
OS Release 19.2R1 on EX4300 multigigabit switches, enabling IGMP snooping on a primary
VLAN implicitly enables IGMP snooping on its secondary VLANs. See "IGMP Snooping on
Private VLANs (PVLANs)" on page 98 for details.
TIP: To display a list of all configured VLANs on the system, including VLANs that are configured
but not committed, type ? after vlan or vlans on the command line in configuration mode. Note
that only one VLAN is displayed for a VLAN range, and for IGMP snooping, secondary private
VLANs are not listed.
Default
On devices that support the all option, by default, IGMP snooping options apply to all VLANs. For all
other devices, you must specify the vlan statement with a VLAN name to enable IGMP snooping.
Options
• all—All VLANs on the switch. This option is available only on EX Series switches that do not support
the ELS configuration style.
• disable—Disable IGMP snooping on all or specified VLANs. This option is available only on EX Series
switches that do not support the ELS configuration style.
• vlan-name—Name of a VLAN. A VLAN name must be provided on switches that support ELS to
enable IGMP snooping.
TIP: On devices that support the all option, when you configure IGMP snooping parameters
using the vlan all statement, any VLAN that is not individually configured for IGMP snooping
inherits the vlan all configuration. Any VLAN that is individually configured for IGMP snooping,
on the other hand, inherits none of its configuration from vlan all. Any parameters that are not
explicitly defined for the individual VLAN assume their default values, not the values specified in
the vlan all configuration.
For example, in the following configuration:
protocols {
igmp-snooping {
vlan all {
robust-count 8;
}
vlan employee {
interface ge-0/0/8.0 {
static {
group 239.0.10.3;
}
}
}
}
}
all VLANs, except employee, have a robust count of 8. Because employee has been individually
configured, its robust count value is not determined by the value set under vlan all. Instead, its
robust count is the default value of 2.
Release Information
Statement updated with enhanced ? (CLI completion feature) functionality in Junos OS Release 9.5 for
EX Series switches.
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2027
Description | 2028
Default | 2028
Options | 2028
Syntax
Hierarchy Level
Description
When the vlan configuration statement is used without the disable statement, MLD snooping is enabled
on the specified VLAN or on all VLANs.
Default
If the vlan statement is not included in the configuration, MLD snooping is disabled.
Options
all—(All EX Series switches except EX9200) Configure MLD snooping parameters for all VLANs
on the switch.
TIP: When you configure MLD snooping parameters using the vlan all statement, any VLAN that
is not individually configured for MLD snooping inherits the vlan all configuration. Any VLAN
that is individually configured for MLD snooping, on the other hand, inherits none of its
configuration from vlan all. Any parameters that are not explicitly defined for the individual
VLAN assume their default values, not the values specified in the vlan all configuration.
For example, in the following configuration:
protocols {
mld-snooping {
vlan all {
robust-count 8;
}
vlan employee {
interface ge-0/0/8.0 {
static {
group ff1e::1;
}
}
}
}
}
all VLANs, except employee, have a robust count of 8. Because employee has been individually
configured, its robust count value is not determined by the value set under vlan all. Instead, its
robust count is the default value of 2.
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2030
Description | 2030
Syntax
vlan <vlan-id> {
no-dr-flood;
}
Hierarchy Level
Description
Release Information
RELATED DOCUMENTATION
vpn-group-address
IN THIS SECTION
Syntax | 2031
Description | 2032
Options | 2032
Syntax
vpn-group-address address;
Hierarchy Level
Description
Configure the group address for the Layer 3 VPN in the service provider’s network.
Options
Release Information
Starting with Junos OS Release 11.4, to provide consistency with draft-rosen 7 and next-generation
BGP-based multicast VPNs, configure the provider tunnels for draft-rosen 6 any-source multicast VPNs
at the [edit routing-instances routing-instance-name provider-tunnel] hierarchy level. The mdt, vpn-
tunnel-source, and vpn-group-address statements are deprecated at the [edit routing-instances
routing-instance-name protocols pim] hierarchy level.
RELATED DOCUMENTATION
wildcard-group-inet
IN THIS SECTION
Syntax | 2033
Description | 2033
Syntax
wildcard-group-inet {
wildcard-source {
inter-region-segmented {
fan-out fan-out-value;
}
ldp-p2mp;
pim-ssm {
group-range multicast-prefix;
}
rsvp-te {
label-switched-path-template {
(default-template | lsp-template-name);
}
static-lsp lsp-name;
}
threshold-rate number;
}
}
Hierarchy Level
Description
Release Information
RELATED DOCUMENTATION
wildcard-group-inet6 | 2034
Example: Configuring Selective Provider Tunnels Using Wildcards
Understanding Wildcards to Configure Selective Point-to-Multipoint LSPs for an MBGP MVPN
Configuring a Selective Provider Tunnel Using Wildcards
wildcard-group-inet6
IN THIS SECTION
Syntax | 2034
Description | 2035
Syntax
wildcard-group-inet6 {
wildcard-source {
inter-region-segmented {
fan-out fan-out-value;
}
}
}
Hierarchy Level
Description
Release Information
RELATED DOCUMENTATION
wildcard-group-inet | 2032
Example: Configuring Selective Provider Tunnels Using Wildcards
Understanding Wildcards to Configure Selective Point-to-Multipoint LSPs for an MBGP MVPN
Configuring a Selective Provider Tunnel Using Wildcards
IN THIS SECTION
Syntax | 2036
Description | 2037
Syntax
wildcard-source {
next-hop next-hop-address;
}
Hierarchy Level
Description
Use a wildcard for the multicast source instead of (or in addition to) a specific multicast source.
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2037
Description | 2038
Syntax
wildcard-source {
inter-region-segmented {
fan-out fan-out-value;
}
}
Hierarchy Level
Description
Configure a selective provider tunnel for a shared tree using a wildcard source.
Release Information
RELATED DOCUMENTATION
wildcard-group-inet | 2032
wildcard-group-inet6 | 2034
Example: Configuring Selective Provider Tunnels Using Wildcards
Understanding Wildcards to Configure Selective Point-to-Multipoint LSPs for an MBGP MVPN
Configuring a Selective Provider Tunnel Using Wildcards
CHAPTER 29
Operational Commands
IN THIS CHAPTER
mtrace | 2096
IN THIS SECTION
Syntax | 2044
Description | 2044
Options | 2044
Syntax
Description
Options
none—Clear the multicast statistics for all AMT tunnel interfaces.
instance instance-name—(Optional) Clear AMT multicast statistics for the specified instance.
logical-system (all | logical-system-name)—(Optional) Perform this operation on all logical systems or on
a particular logical system.
clear
Output Fields
When you enter this command, you are provided feedback on the status of your request.
Sample Output
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2045
Description | 2045
Options | 2045
Syntax
Description
Clear the Automatic Multicast Tunneling (AMT) multicast state. Optionally, clear AMT protocol statistics.
Options
gateway gateway-ip-addr port port-number—(Optional) Clear the AMT multicast state for the specified
gateway address. If no port is specified, clear the AMT multicast state for all AMT gateways with the
given IP address.
instance instance-name—(Optional) Clear the AMT multicast state for the specified instance.
logical-system (all | logical-system-name)—(Optional) Perform this operation on all logical systems or on
a particular logical system.
statistics—(Optional) Clear multicast statistics for all AMT tunnels or for specified tunnels.
tunnel-interface interface-name—(Optional) Clear the AMT multicast state for the specified AMT tunnel
interface.
clear
Output Fields
When you enter this command, you are provided feedback on the status of your request.
Sample Output
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2047
Description | 2048
Options | 2048
Syntax
Description
Options
all—Clear IGMP members for groups and interfaces in the master instance.
group address-range—(Optional) Clear all IGMP members that are in a particular address range. An
example of a range is 233.252/16. If you omit the destination prefix length, the default is /32.
logical-system (all | logical-system-name)—(Optional) Perform this operation on all logical systems or on
a particular logical system.
clear
Output Fields
Sample Output
The following sample output displays IGMP group information before and after the clear igmp
membership command is entered:
The following sample output displays IGMP group information before and after the clear igmp
membership interface command is issued:
The following sample output displays IGMP group information before and after the clear igmp
membership group command is entered:
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2051
Description | 2052
Options | 2052
Syntax
Description
Clear IGMP snooping dynamic membership information from the multicast forwarding table.
Options
vlan vlan-name—(Optional) Clear dynamic membership information for the specified VLAN.
group | source address—(Optional) Clear IGMP snooping membership for the specified multicast group
or source address.
instance instance-name—(Optional) Clear IGMP snooping membership for the specified instance.
clear
Output Fields
When you enter this command, you are provided feedback on the status of your request.
Sample Output
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2054
Description | 2054
Options | 2054
Syntax
Description
Options
none—Clear IGMP snooping statistics for all supported address families on all interfaces.
instance instance-name—(Optional) Clear IGMP snooping statistics for the specified instance.
learning-domain (all | learning-domain-name)—(Optional) Perform this operation on all learning domains
or on a particular learning domain.
logical-system logical-system-name—(Optional) Delete the IGMP snooping statistics for a given logical
system or for all logical systems.
clear
Output Fields
When you enter this command, you are provided feedback on the status of your request.
Sample Output
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2056
Description | 2056
Options | 2057
Syntax
Description
Clear Internet Group Management Protocol (IGMP) statistics. Clearing IGMP statistics zeros the
statistics counters as if you rebooted the device.
By default, Junos OS multicast devices collect statistics of received and transmitted IGMP control
messages that reflect currently active multicast group subscribers. Some devices also automatically
maintain continuous IGMP statistics globally on the device in addition to the default active subscriber
statistics—these are persistent, continuous statistics of received and transmitted IGMP control packets
that account for both past and current multicast group subscriptions processed on the device. The
device maintains continuous statistics across events or operations such as routing daemon restarts,
graceful Routing Engine switchovers (GRES), in-service software upgrades (ISSU), or line card reboots.
The default active subscriber-only statistics are not preserved in these cases.
Run this command to clear the currently active subscriber statistics. On devices that support continuous
statistics, run this command with the continuous option to clear the continuous statistics. You must run
these commands separately to clear both types of statistics because the device maintains and clears the
two types of statistics separately.
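Because the device maintains the two sets of counters separately, clearing both requires two separate commands on devices that support continuous statistics, for example:

user@host> clear igmp statistics
user@host> clear igmp statistics continuous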
Options
none Clear IGMP statistics on all interfaces. This form of the command clears
statistics for currently active subscribers only.
continuous Clear only the continuous IGMP statistics that account for both past and
current multicast group subscribers instead of the default statistics that
only reflect currently active subscribers. This option is not available with
the interface option for interface-specific statistics.
interface interface-name (Optional) Clear IGMP statistics for the specified interface only. This option
is not available with the continuous option.
logical-system (all | logical-system-name) (Optional) Perform this operation on all logical systems or on a particular logical system.
clear
Output Fields
See "show igmp statistics" on page 2207 for an explanation of output fields.
Sample Output
The following sample output displays IGMP statistics information before and after the clear igmp
statistics command is entered:
Mtrace Response 0 0 0
Mtrace Request 0 0 0
Domain Wide Report 0 0 0
V3 Membership Report 0 0 0
Other Unknown types 0
IGMP v3 unsupported type 0
IGMP v3 source required for SSM 0
IGMP v3 mode not applicable for SSM 0
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2059
Description | 2059
Options | 2060
Syntax
Description
Options
all Clear MLD memberships for groups and interfaces in the master
instance.
group group-name (Optional) Clear MLD membership for the specified group.
interface interface-name (Optional) Clear MLD group membership for the specified interface.
logical-system (all | logical-system-name) (Optional) Perform this operation on all logical systems or on a particular logical system.
view
Output Fields
When you enter this command, you are provided feedback on the status of your request.
Sample Output
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2061
Description | 2061
Options | 2061
Syntax
Description
Clear MLD snooping dynamic membership information from the multicast forwarding table.
Options
vlan vlan-name (Optional) Clear dynamic membership information for the specified VLAN.
view
Sample Output
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2063
Description | 2063
Syntax
Description
view
Sample Output
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2064
Description | 2064
Options | 2065
Syntax
Description
Clear Multicast Listener Discovery (MLD) statistics. Clearing MLD statistics zeros the statistics counters
as if you rebooted the device.
By default, Junos OS multicast devices collect statistics of received and transmitted MLD control
messages that reflect currently active multicast group subscribers. Some devices also automatically
maintain continuous MLD statistics globally on the device in addition to the default active subscriber
statistics—these are persistent, continuous statistics of received and transmitted MLD control packets
that account for both past and current multicast group subscriptions processed on the device. The
device maintains continuous statistics across events or operations such as routing daemon restarts,
graceful Routing Engine switchovers (GRES), in-service software upgrades (ISSU), or line card reboots.
The default active subscriber-only statistics are not preserved in these cases.
Run this command to clear the currently active subscriber statistics. On devices that support continuous
statistics, run this command with the continuous option to clear the continuous statistics. You must run
these commands separately to clear both types of statistics because the device maintains and clears the
two types of statistics separately.
Options
none (Same as logical-system all) Clear MLD statistics for all interfaces. This form
of the command clears statistics for currently active subscribers only.
continuous Clear only the continuous MLD statistics that account for both past and
current multicast group subscribers instead of the default statistics that only
reflect currently active subscribers. This option is not available with the
interface option for interface-specific statistics.
interface interface-name (Optional) Clear MLD statistics for the specified interface. This option is not
available with the continuous option.
logical-system (all | logical-system-name) (Optional) Perform this operation on all logical systems or on a particular logical system.
clear
Output Fields
When you enter this command, you are provided feedback on the status of your request.
Sample Output
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2066
Description | 2067
Options | 2067
Syntax
Description
Clear the entries in the Multicast Source Discovery Protocol (MSDP) source-active cache.
Options
all Clear all MSDP source-active cache entries in the master instance.
logical-system (all | logical-system-name) (Optional) Perform this operation on all logical systems or on a particular logical system.
peer peer-address (Optional) Clear the MSDP source-active cache entries learned from a
specific peer.
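For example, to flush only the source-active cache entries learned from a single peer (the peer address is illustrative):

user@host> clear msdp cache peer 192.0.2.10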
clear
Output Fields
When you enter this command, you are provided feedback on the status of your request.
Sample Output
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2068
Description | 2068
Options | 2068
Syntax
Description
Options
logical-system (all | logical-system-name) (Optional) Perform this operation on all logical systems or on a particular logical system.
peer peer-address (Optional) Clear the statistics for the specified peer.
clear
Output Fields
When you enter this command, you are provided feedback on the status of your request.
Sample Output
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2070
Description | 2070
Options | 2070
Syntax
Description
Options
none Reapply multicast bandwidth admissions for all IPv4 forwarding entries in the
master routing instance.
group group-address (Optional) Reapply multicast bandwidth admissions for the specified group.
inet (Optional) Reapply multicast bandwidth admission settings for IPv4 flows.
inet6 (Optional) Reapply multicast bandwidth admission settings for IPv6 flows.
instance instance-name (Optional) Reapply multicast bandwidth admission settings for the specified
instance. If you do not specify an instance, the command applies to the
master routing instance.
interface interface-name (Optional) Examines the corresponding outbound interface in the relevant entries and acts as follows:
source source-address (Optional) Use with the group option to reapply multicast bandwidth
admission settings for the specified (source, group) entry.
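For example, a hypothetical invocation that reapplies bandwidth admission settings for a single (source, group) entry in a named routing instance (the addresses and instance name are illustrative):

user@host> clear multicast bandwidth-admission group 233.252.0.1 source 192.0.2.1 instance VPN-A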
clear
Output Fields
When you enter this command, you are provided feedback on the status of your request.
Sample Output
Release Information
inet6 and instance options introduced in Junos OS Release 10.0 for EX Series switches.
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2072
Description | 2072
Options | 2072
Syntax
Description
This command is not supported for next-generation multiprotocol BGP multicast VPNs (MVPNs).
Options
all Clear all multicast forwarding cache entries in the master instance.
inet (Optional) Clear multicast forwarding cache entries for IPv4 family addresses.
inet6 (Optional) Clear multicast forwarding cache entries for IPv6 family addresses.
instance instance-name (Optional) Clear multicast forwarding cache entries on a specific routing instance.
logical-system (all | logical-system-name) (Optional) Perform this operation on all logical systems or on a particular logical system.
clear
Output Fields
When you enter this command, you are provided feedback on the status of your request.
Sample Output
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2074
Description | 2074
Options | 2074
Syntax
Description
Options
inet (Optional) Clear multicast scope statistics for IPv4 family addresses.
inet6 (Optional) Clear multicast scope statistics for IPv6 family addresses.
logical-system (all | logical-system-name) (Optional) Perform this operation on all logical systems or on a particular logical system.
clear
Output Fields
When you enter this command, you are provided feedback on the status of your request.
Sample Output
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2076
Description | 2076
Options | 2076
Syntax
Description
Options
logical-system (all | logical-system-name) (Optional) Perform this operation on all logical systems or on a particular logical system.
regular-expression (Optional) Clear only multicast sessions that contain the specified regular
expression.
clear
Output Fields
When you enter this command, you are provided feedback on the status of your request.
Sample Output
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2078
Description | 2078
Options | 2078
Syntax
Description
Options
none Clear multicast statistics for all supported address families on all
interfaces.
instance instance-name (Optional) Clear multicast statistics for the specified instance.
logical-system (all | logical-system-name) (Optional) Perform this operation on all logical systems or on a particular logical system.
clear
Output Fields
When you enter this command, you get feedback on the status of your request.
Sample Output
Release Information
inet6 and instance options introduced in Junos OS Release 10.0 for EX Series switches.
Syntax added in Junos OS Release 19.2R1 for clearing multicast route statistics (EX4300 switches).
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2080
Description | 2081
Options | 2081
Syntax
<inet | inet6>
<instance instance-name>
<rp ip-address/prefix | source ip-address/prefix>
<sg | star-g>
Description
Clear the Protocol Independent Multicast (PIM) join and prune states.
Options
all Clear the PIM join and prune states for all groups and family addresses in the master instance. You must specify all.
group-address (Optional) Clear the PIM join and prune states for a group address.
bidirectional | dense | (Optional) Clear PIM bidirectional mode, dense mode, or sparse and
sparse source-specific multicast (SSM) mode entries.
exact (Optional) Clear only the group that exactly matches the specified group
address.
inet | inet6 (Optional) Clear the PIM entries for IPv4 or IPv6 family addresses,
respectively.
instance instance-name (Optional) Clear the entries for a specific PIM-enabled routing instance.
logical-system (all | logical-system-name) (Optional) Perform this operation on all logical systems or on a particular logical system.
rp ip-address/prefix | source ip-address/prefix (Optional) Clear the PIM entries with a specified rendezvous point (RP) address and prefix or with a specified source address and prefix. You can omit the prefix.
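For example, to clear only the state that exactly matches one group in a specific routing instance (the group address and instance name are illustrative):

user@host> clear pim join 233.252.0.1 exact instance VPN-A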
Additional Information
The clear pim join command cannot be used to clear the PIM join and prune state on a backup Routing
Engine when nonstop active routing is enabled.
clear
Output Fields
When you enter this command, you are provided feedback on the status of your request.
Sample Output
Release Information
inet6 and instance options introduced in Junos OS Release 10.0 for EX Series switches.
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2083
Description | 2083
Options | 2084
Syntax
Description
Use the show pim source command to find out if there are multiple paths available for a source (for
example, an RP).
When you include the join-load-balance statement in the configuration, the PIM join states are
distributed evenly on available equal-cost multipath links. When an upstream neighbor link fails, Junos
OS redistributes the PIM join states to the remaining links. However, when new links are added or the
failed link is restored, the existing PIM joins are not redistributed to the new link. New flows will be
distributed to the new links. However, in a network without new joins and prunes, the new link is not
used for multicast traffic. The clear pim join-distribution command redistributes the existing flows to
the new upstream neighbors. Redistributing the existing flows causes traffic to be disrupted, so we
recommend that you run the clear pim join-distribution command during a maintenance window.
Options
all (Optional) Clear the PIM join-redistribute states for all groups and family
addresses in the master instance.
instance instance-name (Optional) Redistribute the join states for a specific PIM-enabled routing instance.
logical-system (all | logical-system-name) (Optional) Perform this operation on all logical systems or on a particular logical system.
Additional Information
The clear pim join-distribution command cannot be used to redistribute the PIM join states on a backup
Routing Engine when nonstop active routing is enabled.
clear
Output Fields
When you enter this command, you are provided no feedback on the status of your request. You can
enter the show pim join command before and after distributing the join state to verify the operation.
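A typical maintenance-window sequence, following the guidance above (run the show command before and after to confirm that join states moved to the new upstream neighbors):

user@host> show pim join
user@host> clear pim join-distribution
user@host> show pim join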
Sample Output
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2085
Description | 2086
Options | 2086
Syntax
Description
Options
all Clear the PIM register message counters for all groups and family addresses in the master instance. You must specify all.
inet | inet6 (Optional) Clear PIM register message counters for IPv4 or IPv6 family
addresses, respectively.
instance instance-name (Optional) Clear register message counters for a specific PIM-enabled routing
instance.
interface interface-name (Optional) Clear PIM register message counters for a specific interface.
logical-system (all | logical-system-name) (Optional) Perform this operation on all logical systems or on a particular logical system.
Additional Information
The clear pim register command cannot be used to clear the PIM register state on a backup Routing
Engine when nonstop active routing is enabled.
clear
Output Fields
When you enter this command, you are provided feedback on the status of your request.
Sample Output
Release Information
inet6 and instance options introduced in Junos OS Release 10.0 for EX Series switches.
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2088
Description | 2088
Options | 2088
Syntax
Description
Options
instance instance-name (Optional) Clear PIM snooping join information for the specified routing
instance.
logical-system logical-system-name (Optional) Clear the PIM snooping join information for a given logical system or for all logical systems.
vlan-id vlan-identifier (Optional) Clear PIM snooping join information for the specified VLAN.
view
Output Fields
See show pim snooping join for an explanation of the output fields.
Sample Output
The following sample output displays information about PIM snooping joins before and after the clear
pim snooping join command is entered:
Group: 198.51.100.2
Source: *
Flags: sparse,rptree,wildcard
Upstream state: None
Upstream neighbor: 192.0.2.5, port: ge-1/3/7.20
Downstream port: ge-1/3/1.20
Downstream neighbors:
192.0.2.2 State: Join Flags: SRW Timeout: 185
Group: 198.51.100.3
Source: *
Flags: sparse,rptree,wildcard
Upstream state: None
Upstream neighbor: 192.0.2.4, port: ge-1/3/5.20
Downstream port: ge-1/3/3.20
Downstream neighbors:
192.0.2.3 State: Join Flags: SRW Timeout: 175
user@host> clear pim snooping join
Clearing the Join/Prune state for 203.0.113.0/24
Clearing the Join/Prune state for 203.0.113.0/24
user@host> show pim snooping join extensive
Instance: vpls1
Learning-Domain: vlan-id 10
Learning-Domain: vlan-id 20
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2090
Description | 2090
Options | 2090
Syntax
Description
Options
none Clear PIM snooping statistics for all family addresses, instances, and
interfaces.
interface interface-name (Optional) Clear PIM snooping statistics for a specific interface.
logical-system logical-system-name (Optional) Clear the PIM snooping statistics for a given logical system or for all logical systems.
vlan-id vlan-identifier (Optional) Clear PIM snooping statistics information for the specified
VLAN.
clear
Output Fields
See show pim snooping statistics for an explanation of the output fields.
Sample Output
The following sample output displays PIM snooping statistics before and after the clear pim snooping
statistics command is entered:
Tx J/P messages 0
RX J/P messages 660
Rx J/P messages -- seen 0
Rx J/P messages -- received 660
Rx Hello messages 1396
Rx Version Unknown 0
Rx Neighbor Unknown 0
Rx Upstream Neighbor Unknown 0
Rx Bad Length 0
Rx J/P Busy Drop 0
Rx J/P Group Aggregate 0
Rx Malformed Packet 0
Learning-Domain: vlan-id 20
user@host> clear pim snooping statistics
user@host> show pim snooping statistics
Instance: vpls1
Learning-Domain: vlan-id 10
Tx J/P messages 0
RX J/P messages 0
Rx J/P messages -- seen 0
Rx J/P messages -- received 0
Rx Hello messages 0
Rx Version Unknown 0
Rx Neighbor Unknown 0
Rx Upstream Neighbor Unknown 0
Rx Bad Length 0
Rx J/P Busy Drop 0
Rx J/P Group Aggregate 0
Rx Malformed Packet 0
Learning-Domain: vlan-id 20
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2093
Description | 2093
Options | 2093
Syntax
Description
Options
none Clear PIM statistics for all family addresses, instances, and
interfaces.
inet | inet6 (Optional) Clear PIM statistics for IPv4 or IPv6 family addresses,
respectively.
logical-system (all | logical-system-name) (Optional) Perform this operation on all logical systems or on a particular logical system.
Additional Information
The clear pim statistics command cannot be used to clear the PIM statistics on a backup Routing Engine
when nonstop active routing is enabled.
clear
Output Fields
See "show pim statistics" on page 2492 for an explanation of output fields.
Sample Output
The following sample output displays PIM statistics before and after the clear pim statistics command is
entered:
Graft 0 0 0
Graft Ack 0 0 0
Candidate RP 0 0 0
V1 Query 2111 4222 0
V1 Register 0 0 0
V1 Register Stop 0 0 0
V1 Join Prune 14200 13115 0
V1 RP Reachability 0 0 0
V1 Assert 0 0 0
V1 Graft 0 0 0
V1 Graft Ack 0 0 0
PIM statistics summary for all interfaces:
Unknown type 0
V1 Unknown type 0
Unknown Version 0
Neighbor unknown 0
Bad Length 0
Bad Checksum 0
Bad Receive If 0
Rx Intf disabled 2007
Rx V1 Require V2 0
Rx Register not RP 0
RP Filtered Source 0
Unknown Reg Stop 0
Rx Join/Prune no state 1040
Rx Graft/Graft Ack no state 0
...
V1 Register 0 0 0
...
Release Information
inet6 and instance options introduced in Junos OS Release 10.0 for EX Series switches.
RELATED DOCUMENTATION
mtrace
IN THIS SECTION
Syntax | 2096
Description | 2097
Options | 2097
Syntax
mtrace source
<logical-system logical-system-name>
<routing-instance routing-instance-name>
Description
Options
Additional Information
The mtrace command for multicast traffic is similar to the traceroute command used for unicast traffic.
Unlike traceroute, mtrace traces traffic backwards, from the receiver to the source.
view
Output Fields
Table 35 on page 2097 describes the output fields for the mtrace command. Output fields are listed in
the approximate order in which they appear.
Querying full reverse path Indicates the full reverse path query has begun.
number-of-hops Number of hops from the source to the named router or switch.
Sample Output
mtrace source
Release Information
mtrace from-source
IN THIS SECTION
Syntax | 2099
Description | 2099
Options | 2100
Syntax
Description
Display trace information about an IP multicast path from a source to this router or switch. If you specify
a group address with this command, Junos OS returns additional information, such as packet rates and
losses.
Options
extra-hops extra-hops (Optional) Number of hops to take after reaching a nonresponsive router.
You can specify a number between 0 and 255.
group group (Optional) Group address for which to trace the path. The default group
address is 0.0.0.0.
interval interval (Optional) Number of seconds to wait before gathering statistics again.
The default value is 10 seconds.
max-hops max-hops (Optional) Maximum hops to trace toward the source. The range of values
is 0 through 255. The default value is 32 hops.
max-queries max-queries (Optional) Maximum number of query attempts for any hop. The range of
values is 1 through 32. The default is 3.
ttl ttl (Optional) IP time-to-live (TTL) value. You can specify a number between
0 and 255. Local queries to the multicast group use a value of 1.
Otherwise, the default value is 127.
wait-time wait-time (Optional) Number of seconds to wait for a response. The default value is
3.
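For example, to trace the path from a source toward this router for a specific group, with a shortened per-query response wait (the addresses are illustrative):

user@host> mtrace from-source source 192.0.2.1 group 233.252.0.1 wait-time 5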
view
Output Fields
Table 36 on page 2101 describes the output fields for the mtrace from-source command. Output fields
are listed in the approximate order in which they appear.
Querying full reverse path Indicates the full reverse path query has begun.
number-of-hops Number of hops from the source to the named router or switch.
Packet Statistics for Traffic Number of packets lost, number of packets sent, percentage of packets
From lost, and average packet rate at each hop.
Sample Output
mtrace from-source
192.168.2.2 routerB.lab.mycompany.net
v \__ ttl 3 ?/0 0 pps
192.168.1.2 192.168.1.2
Receiver Query Source
Release Information
mtrace monitor
IN THIS SECTION
Syntax | 2103
Description | 2103
Options | 2103
Syntax
mtrace monitor
Description
Listen passively for IP multicast responses. To exit the mtrace monitor command, type Ctrl+c.
Options
view
Output Fields
Table 37 on page 2104 describes the output fields for the mtrace monitor command. Output fields are
listed in the approximate order in which they appear.
packet from...to IP address of the query source and default group destination.
Sample Output
mtrace monitor
Release Information
mtrace to-gateway
IN THIS SECTION
Syntax | 2105
Description | 2106
Options | 2106
Syntax
<extra-hops extra-hops>
<group group>
<interface interface-name>
<interval interval>
<loop>
<max-hops max-hops>
<max-queries max-queries>
<multicast-response | unicast-response>
<no-resolve>
<no-router-alert>
<response response>
<routing-instance routing-instance-name>
<ttl ttl>
<unicast-response>
<wait-time wait-time>
Description
Display trace information about a multicast path from this router or switch to a gateway router or
switch.
Options
extra-hops extra-hops (Optional) Number of hops to take after reaching a nonresponsive router
or switch. You can specify a number between 0 and 255.
group group (Optional) Group address for which to trace the path. The default group
address is 0.0.0.0.
interface interface-name (Optional) Source address for sending the trace query.
interval interval (Optional) Number of seconds to wait before gathering statistics again.
The default value is 10.
max-hops max-hops (Optional) Maximum hops to trace toward the source. You can specify a number between 0 and 255. The default value is 32.
max-queries max-queries (Optional) Maximum number of query attempts for any hop. You can
specify a number between 0 and 255. The default value is 3.
wait-time wait-time (Optional) Number of seconds to wait for a response. The default value is
3.
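For example, a hypothetical trace toward the gateway for a specific group, limited to 10 hops:

user@host> mtrace to-gateway group 233.252.0.1 max-hops 10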
view
Output Fields
Table 38 on page 2107 describes the output fields for the mtrace to-gateway command. Output fields
are listed in the approximate order in which they appear.
Querying full reverse path Indicates the full reverse path query has begun.
number-of-hops Number of hops from the source to the named router or switch.
Sample Output
mtrace to-gateway
Release Information
IN THIS SECTION
Syntax | 2109
Description | 2109
Options | 2110
Syntax
Description
Rebalance the assignment of multicast tunnel encapsulation interfaces across available tunnel-capable
PICs or across a configured list of tunnel-capable PICs. You can determine whether a rebalance is
necessary by running the show pim interfaces instance instance-name command.
Options
none Re-create and rebalance all tunnel interfaces for all routing instances.
instance instance-name Re-create and rebalance all tunnel interfaces for a specific instance.
logical-system (all | (Optional) Perform this operation on all logical systems or on a particular
logical-system-name) logical system.
maintenance
Output Fields
This command produces no output. To verify the operation of the command, run the show pim interfaces instance instance-name command before and after running the request pim multicast-tunnel rebalance command.
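For example, for a single routing instance (the instance name is illustrative):

user@host> show pim interfaces instance VPN-A
user@host> request pim multicast-tunnel rebalance instance VPN-A
user@host> show pim interfaces instance VPN-A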
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2111
Description | 2111
Options | 2111
Syntax
Description
Display information about the Automatic Multicast Tunneling (AMT) protocol tunnel statistics.
Options
instance instance-name (Optional) Display information for the specified instance only.
logical-system (all | logical-system-name) (Optional) Perform this operation on all logical systems or on a particular logical system.
view
Output Fields
Table 39 on page 2112 describes the output fields for the show amt statistics command. Output fields
are listed in the approximate order in which they appear.
AMT receive message count Summary of AMT statistics for messages received on all interfaces.
• AMT relay discovery—Number of AMT relay discovery messages received.
AMT send message count Summary of AMT statistics for messages sent on all interfaces.
• AMT relay advertisement—Number of AMT relay advertisement messages sent.
AMT error message count Summary of AMT statistics for error messages received on all interfaces.
• AMT incomplete packet—Number of messages received with length errors so severe that further classification could not occur.
Sample Output
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2115
Description | 2115
Options | 2115
Syntax
Description
Display summary information about the Automatic Multicast Tunneling (AMT) protocol.
Options
instance instance-name (Optional) Display information for the specified instance only.
logical-system (all | logical-system-name) (Optional) Perform this operation on all logical systems or on a particular logical system.
view
Output Fields
Table 40 on page 2116 describes the output fields for the show amt summary command. Output fields
are listed in the approximate order in which they appear.
AMT anycast prefix Prefix advertised by unicast routing protocols to route AMT discovery messages to the router from nearby AMT gateways. All levels
AMT anycast address Anycast address configured from which the anycast prefix is derived. All levels
AMT local address Local unique AMT relay IP address configured. Used to send AMT relay advertisement messages, it is the IP source address of AMT control messages and the source address of the data tunnel encapsulation. All levels
AMT tunnel limit Maximum number of AMT tunnels that can be created. All levels
Sample Output
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2117
Description | 2118
Options | 2118
Syntax
Description
Display information about the Automatic Multicast Tunneling (AMT) dynamic tunnels.
Options
gateway-address gateway-ip-address port port-number (Optional) Display information for the specified AMT gateway only. If no port is specified, display information for all AMT gateways with the given IP address.
instance instance-name (Optional) Display information for the specified instance only.
logical-system (all | logical-system-name) (Optional) Perform this operation on all logical systems or on a particular logical system.
tunnel-interface interface-name (Optional) Display information for the specified AMT tunnel
interface only.
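For example, to display the tunnel state for one gateway address and port (the address and port values are illustrative):

user@host> show amt tunnel gateway-address 198.51.100.7 port 2268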
view
Output Fields
Table 41 on page 2118 describes the output fields for the show amt tunnel command. Output fields are
listed in the approximate order in which they appear.
AMT gateway address Address of the AMT gateway that is being connected by the AMT tunnel. All levels
AMT tunnel interface Dynamically created AMT logical interfaces used by the AMT tunnel in the format ud-FPC/PIC/Port.unit. All levels
AMT tunnel state State of the AMT tunnel. The state is normally Active. All levels
• Active—The tunnel is active.
AMT tunnel inactivity timeout Number of seconds since the most recent control message was received from an AMT gateway. If no message is received before the AMT tunnel inactivity timer expires, the tunnel is deleted. All levels
Include Source Multicast source address for each IGMPv3 group using the tunnel. detail
Sample Output
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2123
Description | 2123
Options | 2123
Syntax
Description
Options
exact-instance instance-name (Optional) Display information for the specified instance only.
instance instance-name (Optional) Display information about BGP groups for all routing
instances whose name begins with this string (for example, cust1,
cust11, and cust111 are all displayed when you run the show bgp group
instance cust1 command). The instance name can be primary for the
main instance, or any valid configured instance name or its prefix.
logical-system (all | logical-system-name) (Optional) Perform this operation on all logical systems or on a particular logical system.
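For example, with routing instances named cust1, cust11, and cust111 configured, the following command displays the BGP groups for all three instances, because the instance name is treated as a prefix:

user@host> show bgp group instance cust1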
view
Output Fields
Table 42 on page 2124 describes the output fields for the show bgp group command. Output fields are
listed in the approximate order in which they appear.
Group Type or Group Type of BGP group: Internal or External. All levels
group-index Index number for the BGP peer group. The index number differentiates between groups when a single BGP group is split because of different configuration options at the group and peer levels. rtf detail
Flags Flags associated with the BGP group. This field is used by Juniper Networks customer support. brief, detail, none
BGP-Static Advertisement Policy Policies configured for the BGP group with the advertise-bgp-static policy statement. brief, none
Export Export policies configured for the BGP group with the export statement. brief, detail, none
Optimal Route Reflection Client nodes (primary and backup) configured in the BGP group. brief, detail, none
MED tracks IGP metric update delay Time, in seconds, that updates to the multiple exit discriminator (MED) are delayed. Also displays the time remaining before the interval is set to expire. All levels
Traffic Statistics Interval Time between sample periods for labeled-unicast traffic statistics, in seconds. brief, detail, none
Established Number of peers in the group that are in the established state. All levels
ip-addresses List of peers that are members of the group. The address is followed by the peer's port number. All levels
Route Queue Timer Number of seconds until queued routes are sent. If this time has already elapsed, this field displays the number of seconds by which the updates are delayed. detail
Route Queue Number of prefixes that are queued up for sending to the peers in the group. detail
Damp State Number of active routes with a figure of merit greater than zero, but lower than the threshold at which suppression occurs. brief, none
Receive mask Mask of the received target included in the advertised route. detail
Mask Mask specifying that the peer receives routes with the given route target. detail
Sample Output
0 0 0 0 0 0
vpn-1.inet.2
2 2 0 0 0 0
vpn-1.inet6.0
0 0 0 0 0 0
vpn-1.mdt.0
0 0 0 0 0 0
Internals suppressed: 0
RIB State: BGP restart is complete
RIB State: VPN restart is complete
Release Information
From Junos OS Release 18.4 onward, show bgp group group-name performs an exact match and displays only groups whose names match the specified group-name exactly. In all Junos OS releases preceding 18.4, the implementation used prefix matching (for example, if two groups grp1 and grp2 existed and the CLI command show bgp group grp was issued, both grp1 and grp2 were displayed).
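For example, with two hypothetical groups grp1 and grp2 configured, on Junos OS Release 18.4 or later only an exact name matches:

```
user@host> show bgp group grp1
```

On releases before 18.4, show bgp group grp would have matched and displayed both grp1 and grp2 because of prefix matching.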
IN THIS SECTION
Syntax | 2136
Description | 2136
Options | 2136
Syntax
Description
Display information about Distance Vector Multicast Routing Protocol (DVMRP)–enabled interfaces.
Options
logical-system (all | logical-system-name)  (Optional) Perform this operation on all logical systems or on a particular logical system.
2137
Required Privilege Level
view
Output Fields
Table 43 on page 2137 describes the output fields for the show dvmrp interfaces command. Output
fields are listed in the approximate order in which they appear.
Leaf Whether the interface is a leaf (that is, whether it has no neighbors) or
whether it has neighbors.
Sample Output
Release Information
NOTE: Distance Vector Multicast Routing Protocol (DVMRP) was deprecated in Junos OS Release 16.1. Although DVMRP commands remain available and configurable, they are hidden from the CLI listings and are scheduled for removal in a subsequent release.
IN THIS SECTION
Syntax | 2139
Description | 2139
Options | 2139
Syntax
Description
Display information about Distance Vector Multicast Routing Protocol (DVMRP) neighbors.
Options
logical-system (all | logical-system-name)  (Optional) Perform this operation on all logical systems or on a particular logical system.
Required Privilege Level
view
Output Fields
Table 44 on page 2139 describes the output fields for the show dvmrp neighbors command. Output
fields are listed in the approximate order in which they appear.
Version Version of DVMRP that the neighbor is running, in the format major.minor.
2140
• 1—One way. The local router has seen the neighbor, but the neighbor has not
seen the local router.
Timeout How long until the DVMRP neighbor information times out, in seconds.
Transitions Number of generation ID changes that have occurred since the local router learned
about the neighbor.
Sample Output
Release Information
NOTE: Distance Vector Multicast Routing Protocol (DVMRP) was deprecated in Junos OS Release 16.1. Although DVMRP commands remain available and configurable, they are hidden from the CLI listings and are scheduled for removal in a subsequent release.
IN THIS SECTION
Syntax | 2141
Description | 2141
Options | 2142
Syntax
Description
Display information about Distance Vector Multicast Routing Protocol (DVMRP) prefixes.
2142
Options
logical-system (all | logical-system-name)  (Optional) Perform this operation on all logical systems or on a particular logical system.
Required Privilege Level
view
Output Fields
Table 45 on page 2142 describes the output fields for the show dvmrp prefix command. Output fields
are listed in the approximate order in which they appear.
Next hop Next hop from which the route was learned. All levels
Age Last time that the route was refreshed. All levels
Prunes sent Number of prune messages sent to the multicast group. detail
Cache lifetime Lifetime of the group in the multicast cache, in seconds. detail
Prune lifetime Lifetime remaining and total lifetime of prune messages, in seconds. detail
Sample Output
The output for the show dvmrp prefix brief command is identical to that for the show dvmrp prefix
command.
Release Information
NOTE: Distance Vector Multicast Routing Protocol (DVMRP) was deprecated in Junos OS Release 16.1. Although DVMRP commands remain available and configurable, they are hidden from the CLI listings and are scheduled for removal in a subsequent release.
IN THIS SECTION
Syntax | 2145
Description | 2145
Options | 2145
Syntax
Description
Display information about active Distance Vector Multicast Routing Protocol (DVMRP) prune messages.
Options
all (Optional) Display information about all received and transmitted prune
messages.
logical-system (all | logical-system-name)  (Optional) Perform this operation on all logical systems or on a particular logical system.
Required Privilege Level
view
Output Fields
Table 46 on page 2145 describes the output fields for the show dvmrp prunes command. Output fields
are listed in the approximate order in which they appear.
Neighbor Neighbor to which the prune was sent or from which the prune was
received.
Sample Output
Release Information
NOTE: Distance Vector Multicast Routing Protocol (DVMRP) was deprecated in Junos OS Release 16.1. Although DVMRP commands remain available and configurable, they are hidden from the CLI listings and are scheduled for removal in a subsequent release.
IN THIS SECTION
Syntax | 2147
Description | 2147
Options | 2148
Syntax
Description
Options
logical-system (all | logical-system-name)  (Optional) Perform this operation on all logical systems or on a particular logical system.
Required Privilege Level
view
Output Fields
Table 47 on page 2148 describes the output fields for the show igmp interface command. Output fields
are listed in the approximate order in which they appear.
Querier: Address of the routing device that has been elected to send membership queries. (All levels)
SSM Map Policy: Name of the source-specific multicast (SSM) map policy that has been applied to the IGMP interface. (All levels)
Timeout: How long until the IGMP querier is declared to be unreachable, in seconds. (All levels)
Group limit: Maximum number of groups allowed on the interface. Any joins requested after the limit is reached are rejected. (All levels)
Group log-interval: Time (in seconds) between consecutive log messages. (All levels)
• Off—Indicates that the router can accept IGMP reports only from subnetworks that are associated with its interfaces.
Distributed: State of IGMP, which, by default, takes place on the Routing Engine for MX Series routers but can be distributed to the Packet Forwarding Engine to provide faster processing of join and leave events. (All levels)
• On—Indicates that the router can run IGMP on the interface but not send or receive control traffic such as IGMP reports, queries, and leaves.
• Off—Indicates that the router can run IGMP on the interface and send or receive control traffic such as IGMP reports, queries, and leaves.
OIF map: Name of the OIF map (if configured) associated with the interface. (All levels)
SSM map: Name of the source-specific multicast (SSM) map (if configured) used on the interface. (All levels)
• IGMP Last Member Query Interval—Time (in seconds) that the router
waits for a report in response to a group-specific query.
Sample Output
Interface: so-1/0/1.0
Querier: 203.0.113.21
State: Up Timeout: None Version: 2 Groups: 4
SSM Map Policy: ssm-policy-C
Immediate Leave: On
Promiscuous Mode: Off
Passive: Off
Distributed: On
Configured Parameters:
Derived Parameters:
IGMP Membership Timeout: 260.0
IGMP Other Querier Present Timeout: 255.0
The output for the show igmp interface brief command is identical to that for the show igmp interface
command. For sample output, see "show igmp interface" on page 2151.
The output for the show igmp interface detail command is identical to that for the show igmp interface
command. For sample output, see "show igmp interface" on page 2151.
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2153
Description | 2154
Options | 2154
Syntax
Description
Options
none Display standard information about membership for all IGMP groups.
group-name (Optional) Display group membership for the specified IP address only.
logical-system (all | logical-system-name)  (Optional) Perform this operation on all logical systems or on a particular logical system.
Required Privilege Level
view
Output Fields
Table 48 on page 2154 describes the output fields for the show igmp group command. Output fields are
listed in the approximate order in which they appear.
Interface Name of the interface that received the IGMP membership All levels
report. A name of local indicates that the local routing device
joined the group itself.
Group Mode Mode the SSM group is operating in: Include or Exclude. All levels
Source timeout Time remaining until the group traffic is no longer forwarded. detail
The timer is refreshed when a listener in include mode sends a
report. A group in exclude mode or configured as a static group
displays a zero timer.
Last reported Address of the host that last reported membership in this group. All levels
by
Timeout: Time remaining until the group membership is removed. (Levels: brief, none)
Group timeout Time remaining until a group in exclude mode moves to include detail
mode. The timer is refreshed when a listener in exclude mode
sends a report. A group in include mode or configured as a static
group displays a zero timer.
• Static—Membership is configured.
Sample Output
The output for the show igmp group brief command is identical to that for the show igmp group
command.
Source: 203.0.113.4
Source timeout: 12
Last reported by: 203.0.113.52
Group timeout: 0 Type: Dynamic
Group: 198.51.100.2
Group mode: Include
Source: 203.0.113.4
Source timeout: 12
Last reported by: 203.0.113.52
Group timeout: 0 Type: Dynamic
Interface: t1-0/1/1.0
Interface: ge-0/2/2.0
Interface: ge-0/2/0.0
Interface: local
Group: 198.51.100.12
Group mode: Exclude
Source: 0.0.0.0
Source timeout: 0
Last reported by: Local
Group timeout: 0 Type: Dynamic
Group: 198.51.100.22
Group mode: Exclude
Source: 0.0.0.0
Source timeout: 0
Last reported by: Local
Group timeout: 0 Type: Dynamic
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2159
Description | 2159
Options | 2159
Syntax
Description
Display multicast source VLAN (MVLAN) and data-forwarding receiver VLAN associations and related
information when you configure multicast VLAN registration (MVR) in a routing instance.
Options
vlan vlan-name (Optional) Display configured MVR information about a particular VLAN only.
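For example, to display MVR associations for all VLANs or restrict the output to a single VLAN (the VLAN name v1 matches the one used in the sample output):

```
user@host> show igmp snooping data-forwarding
user@host> show igmp snooping data-forwarding vlan v1
```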
Required Privilege Level
view
Output Fields
Table 49 on page 2160 lists the output fields for the show igmp snooping data-forwarding command.
Output fields are listed in the approximate order in which they appear.
Vlan VLAN names of the multicast source and receiver VLANs configured in the routing
instance.
Learning Domain Learning domain for snooping and MVR data forwarding.
Type MVR VLAN type configured for the listed VLAN, either MVR Receiver Vlan or
MVR Source Vlan.
Group subnet Group subnet address for the multicast source VLAN in the MVR configuration
(the MVLAN).
Receiver vlans Multicast receiver VLANs associated with the MVLAN. When you configure a
source MVLAN, you associate one or more MVR receiver VLANs with it.
Mode MVR operating mode configured for the listed receiver VLAN:
Egress translate VLAN tag translation setting for an MVR receiver VLAN:
• FALSE—The translate option for VLAN tag translation is not configured for the
MVR receiver VLAN. MVLAN traffic is forwarded with the MVLAN tag for
receivers on trunk ports or untagged for hosts on access ports.
Install route If TRUE, the device installs forwarding entries for the MVR receiver VLAN as well
as for the MVLAN. If FALSE, only MVLAN forwarding entries are stored.
Source vlans One or more source MVLANs associated with the listed MVR receiver VLAN.
Sample Output
Vlan: v2
Learning-Domain : default
Type : MVR Source Vlan
Group subnet : 225.0.0.0/24
Receiver vlans:
vlan: v1
vlan: v3
Vlan: v1
Learning-Domain : default
Type : MVR Receiver Vlan
Mode : PROXY
Egress translate : FALSE
Install route : FALSE
Source vlans:
vlan: v2
Vlan: v3
Learning-Domain : default
Type : MVR Receiver Vlan
Mode : TRANSPARENT
Egress translate : FALSE
Install route : TRUE
Source vlans:
vlan: v2
Vlan: v1
Learning-Domain : default
Type : MVR Receiver Vlan
Mode : PROXY
Egress translate : FALSE
Install route : FALSE
Source vlans:
vlan: v2
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2163
Description | 2163
Options | 2163
Syntax
Description
Options
brief | detail  (Optional) When applicable, this option lets you choose how much detail to display.
Required Privilege Level
view
Output Fields
Table 50 on page 2164 lists the output fields for the show igmp snooping interface command. Output
fields are listed in the approximate order in which they appear.
Bridge Domain or Vlan: Bridge domain or VLAN for which IGMP snooping is enabled. (All levels)
interface Interfaces that are being snooped in this learning domain. All levels
Up Groups Number of active multicast groups attached to the logical interface. All levels
router-interface: Router interfaces that are part of this learning domain. (All levels)
Group limit Maximum number of (source,group) pairs allowed per interface. All levels
When a group limit is not configured, this field is not shown.
Data-forwarding receiver: yes: VLAN associated with the interface is configured as a data-forwarding multicast receiver VLAN using multicast VLAN registration (MVR) on EX Series switches with Enhanced Layer 2 Software (ELS). (All levels)
IGMP Query Interval: Frequency (in seconds) with which this router sends membership queries when it is the querier. (All levels)
IGMP Query Response Interval: Time (in seconds) that the router waits for a response to a general query. (All levels)
IGMP Last Member Query Interval: Time (in seconds) that the router waits for a report in response to a group-specific query. (All levels)
IGMP Membership Timeout: Timeout for group membership. If no report is received for these groups before the timeout expires, the group membership is removed. (All levels)
IGMP Other Querier Present Timeout: Time that the router waits for the IGMP querier to send a query. (All levels)
Sample Output
Bridge-Domain: sample
Learning-Domain: default
Interface: ge-0/1/4.0
State: Up Groups: 0
Immediate leave: Off
Router interface: no
Configured Parameters:
IGMP Query Interval: 125.0
IGMP Query Response Interval: 10.0
IGMP Last Member Query Interval: 1.0
IGMP Robustness Count: 2
Derived Parameters:
IGMP Membership Timeout: 260.0
IGMP Other Querier Present Timeout: 255.0
Instance: VPLS-6
Learning-Domain: default
Interface: ge-0/2/2.601
State: Up Groups: 10
Immediate leave: Off
Router interface: no
Configured Parameters:
IGMP Query Interval: 125.0
IGMP Query Response Interval: 10.0
IGMP Last Member Query Interval: 1.0
IGMP Robustness Count: 2
Instance: VS-4
Bridge-Domain: VS-4-BD-1
Learning-Domain: vlan-id 1041
Interface: ae2.3
State: Up Groups: 0
Immediate leave: Off
Router interface: no
Interface: ge-0/2/2.1041
State: Up Groups: 20
Immediate leave: Off
Router interface: no
Configured Parameters:
IGMP Query Interval: 125.0
IGMP Query Response Interval: 10.0
IGMP Last Member Query Interval: 1.0
IGMP Robustness Count: 2
Instance: default-switch
Bridge-Domain: bd-200
Learning-Domain: default
Interface: ge-0/2/2.100
State: Up Groups: 20
Immediate leave: Off
Router interface: no
Configured Parameters:
IGMP Query Interval: 125.0
IGMP Query Response Interval: 10.0
IGMP Last Member Query Interval: 1.0
Bridge-Domain: bd0
Learning-Domain: default
Interface: ae0.0
State: Up Groups: 0
Immediate leave: Off
Router interface: yes
Interface: ae1.0
State: Up Groups: 0
Immediate leave: Off
Router interface: no
Interface: ge-0/2/2.0
State: Up Groups: 32
Immediate leave: Off
Router interface: no
Configured Parameters:
IGMP Query Interval: 125.0
IGMP Query Response Interval: 10.0
IGMP Last Member Query Interval: 1.0
IGMP Robustness Count: 2
Instance: VPLS-1
Learning-Domain: default
Interface: ge-0/2/2.502
State: Up Groups: 11
Immediate leave: Off
Router interface: no
Configured Parameters:
IGMP Query Interval: 125.0
IGMP Query Response Interval: 10.0
IGMP Last Member Query Interval: 1.0
IGMP Robustness Count: 2
Instance: VS-1
Bridge-Domain: VS-BD-1
Learning-Domain: default
Interface: ae2.0
State: Up Groups: 0
Immediate leave: Off
Router interface: no
Interface: ge-0/2/2.1010
State: Up Groups: 20
Immediate leave: Off
Router interface: no
Configured Parameters:
IGMP Query Interval: 125.0
IGMP Query Response Interval: 10.0
IGMP Last Member Query Interval: 1.0
IGMP Robustness Count: 2
Bridge-Domain: VS-BD-2
Learning-Domain: default
Interface: ae2.0
State: Up Groups: 0
Immediate leave: Off
Router interface: no
Interface: ge-0/2/2.1011
State: Up Groups: 20
Immediate leave: Off
Router interface: no
Configured Parameters:
IGMP Query Interval: 125.0
IGMP Query Response Interval: 10.0
IGMP Last Member Query Interval: 1.0
IGMP Robustness Count: 2
Instance: VPLS-p2mp
Learning-Domain: default
Interface: ge-0/2/2.3001
State: Up Groups: 0
Immediate leave: Off
Router interface: no
Configured Parameters:
IGMP Query Interval: 125.0
IGMP Query Response Interval: 10.0
IGMP Last Member Query Interval: 1.0
IGMP Robustness Count: 2
Learning-Domain: default
Interface: ge-1/3/9.0
State: Up Groups: 0
Immediate leave: Off
Router interface: yes
Interface: ge-1/3/8.0
State: Up Groups: 0
Immediate leave: Off
Router interface: yes
Group limit: 1000
Configured Parameters:
IGMP Query Interval: 125.0
IGMP Query Response Interval: 10.0
IGMP Last Member Query Interval: 1.0
IGMP Robustness Count: 2
show igmp snooping interface (ELS EX Series switches with MVR configured)
Vlan: v2
Learning-Domain: default
Interface: ge-0/0/0.0
State: Up Groups: 0
Immediate leave: Off
Router interface: no
Group limit: 3
Data-forwarding receiver: yes
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2171
Description | 2172
Options | 2172
Syntax
Description
Options
none  Display the multicast group membership information about all VLANs on which IGMP snooping is enabled.
brief | detail  (Optional) Display the specified level of output. The default is brief.
instance routing-instance-name  (Optional) Display the multicast group membership information about the specified routing instance.
interface interface-name  (Optional) Display the multicast group membership information about the specified interface.
vlan (vlan-id | vlan-name)  (Optional) Display the multicast group membership for the specified VLAN.
logical-system logical-system-name  (Optional) Display information about a particular logical system, or type 'all'.
virtual-switch virtual-switch-name  (Optional) Display information about a particular virtual switch.
vlan-id vlan-identifier  (Optional) Display information about a particular VLAN.
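For example (the instance and VLAN names match those used in the sample output later in this section):

```
user@host> show igmp snooping membership
user@host> show igmp snooping membership instance vpls1 detail
user@host> show igmp snooping membership vlan v1
```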
Required Privilege Level
view
Output Fields
Table 51 on page 2173 lists the output fields for the show igmp snooping membership command.
Output fields are listed in the approximate order in which they appear.
Data-forwarding receiver: yes: (EX Series switches with Enhanced Layer 2 Software (ELS) only) VLAN associated with the interface is configured as a data-forwarding multicast receiver VLAN using multicast VLAN registration (MVR). (All levels)
Up Groups or Groups: Number of active multicast groups attached to the logical interface. (All levels)
Group Mode: Mode the SSM group is operating in: Include or Exclude. (All levels)
Last reported by: Address of the source last replying to the query. (All levels)
Group Timeout Time remaining until a group in exclude mode moves to include All levels
mode. The timer is refreshed when a listener in exclude mode
sends a report. A group in include mode or configured as a static
group displays a zero timer.
Timeout Length of time (in seconds) left until the entry is purged. detail
Type Way that the group membership information was learned: All levels
Sample Output
Learning-Domain: vlan-id 2
Interface: ge-3/0/0.2
Up Groups: 0
Interface: ge-3/1/0.2
Up Groups: 0
Interface: ge-3/1/5.2
Up Groups: 0
Instance: vpls1
Learning-Domain: vlan-id 1
Interface: ge-3/0/0.1
Up Groups: 0
Interface: ge-3/1/0.1
Up Groups: 0
Interface: ge-3/1/5.1
Up Groups: 1
Group: 233.252.0.99
Group mode: Exclude
Source: 0.0.0.0
Last reported by: 233.252.0.87
Group timeout: 173 Type: Dynamic
Vlan: v1
Learning-Domain: default
Interface: ge-0/0/3.0, Groups: 1
Group: 233.252.0.100
Group mode: Exclude
Source: 0.0.0.0
Last reported by: Local
Group timeout: 0 Type: Static
Learning-Domain: vlan-id 2
Interface: ge-3/0/0.2
Up Groups: 0
Interface: ge-3/1/0.2
Up Groups: 0
Interface: ge-3/1/5.2
Up Groups: 0
Instance: vpls1
Learning-Domain: vlan-id 1
Interface: ge-3/0/0.1
Up Groups: 0
Interface: ge-3/1/0.1
Up Groups: 0
Interface: ge-3/1/5.1
Up Groups: 1
Group: 233.252.0.99
Group mode: Exclude
Source: 0.0.0.0
Last reported by: 233.252.0.87
Group timeout: 173 Type: Dynamic
Learning-Domain: default
Interface: ge-0/1/2.200
Group: 233.252.0.1
Source: 0.0.0.0
Timeout: 391 Type: Static
Group: 232.1.1.1
Source: 192.128.1.1
Timeout: 0 Type: Static
Instance: vpls1
Learning-Domain: vlan-id 1
Interface: ge-3/0/0.1
Up Groups: 0
Interface: ge-3/1/0.1
Up Groups: 0
Interface: ge-3/1/5.1
Up Groups: 1
Group: 233.252.0.1
Group mode: Exclude
Source: 0.0.0.0
Last reported by: 233.252.0.82
Group timeout: 209 Type: Dynamic
Vlan: v2
Learning-Domain: default
Interface: ge-0/0/0.0, Groups: 0
Data-forwarding receiver: yes
Learning-Domain: default
Interface: ge-0/0/12.0, Groups: 1
Group: 233.252.0.1
Group mode: Exclude
Source: 0.0.0.0
show igmp snooping membership <detail> (QFX5100 switches—same output with or without
detail option)
Vlan: v100
Learning-Domain: default
Interface: xe-0/0/51:0.0, Groups: 1
Group: 233.252.0.1
Group mode: Exclude
Source: 0.0.0.0
Last reported by: 233.252.0.82
Group timeout: 251 Type: Dynamic
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2180
Description | 2180
Options | 2180
Syntax
Description
Display the operational status of point-to-multipoint LSPs for IGMP snooping routes.
Options
brief | detail Display the specified level of output per routing instance. The default is
brief.
logical-system logical- (Optional) Display information about a particular logical system, or type ’all’.
system-name
Required Privilege Level
view
Sample Output
Instance: master
P2MP LSP in use: no
Instance: default-switch
P2MP LSP in use: no
Instance: name
P2MP LSP in use: yes
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2182
Description | 2182
Options | 2182
Syntax
Description
Options
Required Privilege Level
view
Output Fields
Table 52 on page 2183 lists the output fields for the show igmp snooping statistics command. Output
fields are listed in the approximate order in which they appear.
IGMP packet Heading for IGMP snooping statistics for all interfaces or for the All levels
statistics specified interface.
IGMP Global Summary of IGMP snooping statistics for all interfaces. All levels
Statistics
• Bad Length—Number of messages received with length errors
so severe that further classification could not occur.
Sample Output
Routing-instance bar
V1 Membership Report 0 0 0
DVMRP 0 0 0
PIM V1 0 0 0
Cisco Trace 0 0 0
V2 Membership Report 0 0 0
Group Leave 0 0 0
Mtrace Response 0 0 0
Mtrace Request 0 0 0
Domain Wide Report 0 0 0
V3 Membership Report 0 0 0
Other Unknown types 0
logical-system: default
Bridge: VPLS-6
IGMP Message type Received Sent Rx errors
Membership Query 0 4 0
V1 Membership Report 0 0 0
DVMRP 0 0 0
PIM V1 0 0 0
Cisco Trace 0 0 0
V2 Membership Report 0 0 0
Group Leave 0 0 0
Mtrace Response 0 0 0
Mtrace Request 0 0 0
Domain Wide Report 0 0 0
V3 Membership Report 0 0 0
Other Unknown types 0
Mtrace Response 0 0 0
Mtrace Request 0 0 0
Domain Wide Report 0 0 0
V3 Membership Report 0 0 0
Other Unknown types 0
Bridge: VPLS-p2mp
IGMP Message type Received Sent Rx errors
Membership Query 0 2 0
V1 Membership Report 0 0 0
DVMRP 0 0 0
PIM V1 0 0 0
Cisco Trace 0 0 0
V2 Membership Report 0 0 0
Group Leave 0 0 0
Mtrace Response 0 0 0
Mtrace Request 0 0 0
Domain Wide Report 0 0 0
V3 Membership Report 0 0 0
Other Unknown types 0
Bridge: VS-BD-1
IGMP Message type Received Sent Rx errors
Membership Query 0 6 0
V1 Membership Report 0 0 0
DVMRP 0 0 0
PIM V1 0 0 0
Cisco Trace 0 0 0
V2 Membership Report 0 0 0
Group Leave 0 0 0
Mtrace Response 0 0 0
Mtrace Request 0 0 0
Domain Wide Report 0 0 0
V3 Membership Report 0 0 0
Other Unknown types 0
Bridge: bridge-domain1
IGMP interface packet statistics for ge-2/0/8.0
IGMP Message type Received Sent Rx errors
Membership Query 0 2 0
V1 Membership Report 0 0 0
DVMRP 0 0 0
PIM V1 0 0 0
Cisco Trace 0 0 0
V2 Membership Report 0 0 0
Group Leave 0 0 0
Mtrace Response 0 0 0
Mtrace Request 0 0 0
Domain Wide Report 0 0 0
V3 Membership Report 0 0 0
Other Unknown types 0
Bridge: bridge-domain2
IGMP interface packet statistics for ge-2/0/8.0
IGMP Message type Received Sent Rx errors
Membership Query 0 2 0
V1 Membership Report 0 0 0
DVMRP 0 0 0
PIM V1 0 0 0
Cisco Trace 0 0 0
V2 Membership Report 0 0 0
Group Leave 0 0 0
Mtrace Response 0 0 0
Mtrace Request 0 0 0
Domain Wide Report 0 0 0
V3 Membership Report 0 0 0
Other Unknown types 0
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2190
Description | 2190
Options | 2191
Syntax
Description
NOTE: To display similar information on routing devices or switches that support the Enhanced
Layer 2 Software (ELS) configuration style, use the equivalent command "show igmp snooping
membership" on page 2171.
Options
interface interface-name (Optional) Display IGMP snooping information for the specified interface.
vlan vlan-id | vlan-name (Optional) Display IGMP snooping information for the specified VLAN.
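For example, to display snooping membership for a single VLAN or a single interface (the VLAN name v1 and interface ge-0/0/0.0 are hypothetical):

```
user@host> show igmp-snooping membership vlan v1
user@host> show igmp-snooping membership interface ge-0/0/0.0
```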
Required Privilege Level
view
Output Fields
Table 53 on page 2191 lists the output fields for the show igmp-snooping membership command.
Output fields are listed in the approximate order in which they appear.
• static or dynamic—Whether the multicast router interface is statically or dynamically assigned.
• Uptime—For static interfaces, the amount of time since the interface was configured as a multicast-router interface or since the interface last flapped. For dynamic interfaces, the amount of time since the first query was received on the interface or since the interface last flapped.
• timeout—Query timeout in seconds.
• Receiver count—Number of hosts on the interface that are members of the multicast group (field appears only if immediate-leave is configured on the VLAN), or number of interfaces that have membership in a multicast group.
• Uptime—Length of time (in hours, minutes, and seconds) a multicast group has been active on the interface.
• timeout—Time (in seconds) left until the entry for the multicast group is removed if no membership reports are received on the interface.
• Flags—The lowest IGMP version in use by a host that is a member of the group on the interface.
• Include source—Source addresses from which multicast streams are allowed based on IGMPv3 reports.
Sample Output
Release Information
IGMPv3 output introduced in Junos OS Release 12.1 for the QFX Series.
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2197
Description | 2197
Options | 2197
Syntax
Description
NOTE: This command is only available on switches that do not support the Enhanced Layer 2
Software (ELS) configuration style.
Options
none Display general route information for all VLANs on which IGMP snooping is
enabled.
brief | detail (Optional) Display the specified level of output. The default is brief.
ethernet-switching (Optional) Display information on Layer 2 multicast routes. This is the default.
vlan vlan-name (Optional) Display route information for the specified VLAN.
Required Privilege Level
view
Output Fields
Table 54 on page 2198 lists the output fields for the show igmp-snooping route command. Output fields
are listed in the approximate order in which they appear. Some output fields are not displayed by this
command on some devices.
Interface or Interfaces Name of the interface or interfaces in the VLAN associated with
the multicast group.
Sample Output
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2201
Description | 2201
Syntax
Description
NOTE: To display similar information on routing devices or switches that support the Enhanced
Layer 2 Software (ELS) configuration style, use the equivalent command "show igmp snooping
statistics" on page 2181.
Required Privilege Level
view
Output Fields
Table 55 on page 2201 lists the output fields for the show igmp-snooping statistics command. Output
fields are listed in the approximate order in which they appear.
Not local Number of packets received from senders that are not local, or 0
if not used (on some devices).
Timed out Number of timeouts for all multicast groups, or 0 if not used (on
some devices).
Recv Errors Number of general receive errors, for packets received that did
not conform to IGMP version 1 (IGMPv1), IGMPv2, or IGMPv3
standards.
Sample Output
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2203
Description | 2203
Options | 2204
Syntax
Description
NOTE: To display similar information on routing devices or switches that support the Enhanced
Layer 2 Software (ELS) configuration style, use equivalent commands such as "show igmp
snooping interface" on page 2163.
Options
none Display general IGMP snooping information for all VLANs on which IGMP
snooping is enabled.
brief | detail (Optional) Display the specified level of output. The default is brief.
vlan vlan-id | vlan vlan-number  (Optional) Display VLAN information for the specified VLAN.
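For example, to display general IGMP snooping information for all VLANs or detailed information for one VLAN (the VLAN ID 100 is hypothetical):

```
user@host> show igmp-snooping vlans
user@host> show igmp-snooping vlans vlan 100 detail
```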
Required Privilege Level
view
Output Fields
Table 56 on page 2204 lists the output fields for the show igmp-snooping vlans command. Output fields
are listed in the approximate order in which they appear. Some output fields are not displayed by this
command on some devices.
IGMP-L2-Querier: Source address for IGMP snooping queries (if the switch is an IGMP querier). (All levels)
Groups Number of groups in the VLAN to which the interface All levels
belongs.
MRouters Number of multicast routers associated with the VLAN. All levels
Receivers Number of host receivers in the VLAN. Indicates how many All levels
VLAN interfaces would receive data because of IGMP
membership.
tagged | untagged Interface accepts tagged (802.1Q) packets for trunk mode detail
and tagged-access mode ports, or untagged (native VLAN)
packets for access mode ports.
Querier timeout Maximum length of time the switch waits to take over as detail
IGMP querier if no query is received.
Reporters Number of hosts on the interface that are current members detail
of multicast groups. This field appears only when
immediate-leave is configured on the VLAN.
Sample Output
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2208
Description | 2208
Options | 2209
Syntax
Description
By default, Junos OS multicast devices collect statistics of received and transmitted IGMP control
messages that reflect currently active multicast group subscribers.
Some devices also automatically maintain continuous IGMP statistics globally on the device in addition
to the default active subscriber statistics—these are persistent, continuous statistics of received and
transmitted IGMP control packets that account for both past and current multicast group subscriptions
processed on the device. With continuous statistics, you can see the total count of IGMP control
packets the device processed since the last device reboot or clear igmp statistics continuous command.
The device collects and displays continuous statistics only for the fields shown in the IGMP packet
statistics output section of this command, and does not display the IGMP Global statistics section.
Devices that support continuous statistics maintain this information in a shared database and copy it to the backup Routing Engine at a configurable interval to avoid excessive processing overhead on the Routing Engine. These actions preserve continuous statistics counts across events and operations that reset the default active subscriber statistics.
You can change the default interval (300 seconds) using the cont-stats-collection-interval
configuration statement at the [edit routing-options multicast] hierarchy level.
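For example, on a device that supports continuous statistics, you might lower the copy interval from the default 300 seconds to 120 seconds (the value is illustrative):

```
[edit routing-options multicast]
user@host# set cont-stats-collection-interval 120
user@host# commit
```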
You can display either the default currently active subscriber statistics or continuous subscriber
statistics (if supported), but not both at the same time. Include the continuous option to display
continuous statistics, otherwise the command displays the statistics only for active subscribers.
Run the clear igmp statistics command to clear the currently active subscriber statistics. On devices that
support continuous statistics, run the clear command with the continuous option to clear all continuous
statistics. You must run these commands separately to clear both types of statistics because the device
maintains and clears the two types of statistics separately.
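For example, on a device that supports continuous statistics, you might use the following commands (a sketch; exact output fields vary by platform and release):

    user@host> show igmp statistics continuous
    user@host> clear igmp statistics continuous
    user@host# set routing-options multicast cont-stats-collection-interval 600

The first two commands display and clear only the continuous counters; the configuration statement changes the backup Routing Engine copy interval from the default 300 seconds to 600 seconds.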
Options
none Display IGMP statistics for all interfaces. These statistics represent
currently active subscribers.
continuous (Optional) Display continuous IGMP statistics that account for both past
and current multicast group subscribers instead of the default statistics
that only reflect currently active subscribers. This option is not available
with the interface option for interface-specific statistics.
interface interface-name (Optional) Display IGMP statistics about the specified interface only. This
option is not available with the continuous option.
logical-system (all | logical-system-name) (Optional) Perform this operation on all logical systems or
on a particular logical system.
view
Output Fields
Table 57 on page 2210 describes the output fields for the show igmp statistics command. Output fields
are listed in the approximate order in which they appear.
IGMP packet statistics: Heading for IGMP packet statistics for all interfaces or for the specified
interface name.
Max Rx rate (pps): Maximum number of IGMP packets received during a 1-second interval.
Sample Output
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2215
Description | 2215
Syntax
Description
Display the state and configuration of the ingress replication tunnels created for the MVPN application
when using the mpls-internet-multicast routing instance type.
View
Output Fields
Table 58 on page 2215 lists the output fields for the show ingress-replication mvpn command. Output
fields are listed in the approximate order in which they appear.
Mode Indicates whether the tunnel was created as a new tunnel for the ingress
replication, or if an existing tunnel was used.
Sample Output
Release Information
IN THIS SECTION
Syntax | 2217
Description | 2217
Options | 2218
Syntax
Description
Display status information about the specified multicast tunnel interface and its logical encapsulation
and de-encapsulation interfaces.
Options
snmp-index snmp-index (Optional) Display information for the specified SNMP index of the
interface.
Additional Information
The multicast tunnel interface has two logical interfaces: encapsulation and de-encapsulation. These
interfaces are automatically created by the Junos OS for every multicast-enabled VPN routing and
forwarding (VRF) instance. The encapsulation interface carries multicast traffic traveling from the edge
interface to the core interface. The de-encapsulation interface carries traffic coming from the core
interface to the edge interface.
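To inspect these logical interfaces, run the command against the multicast tunnel interface; for example (mt-0/0/0 is a hypothetical interface name):

    user@host> show interfaces mt-0/0/0 detail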
view
Output Fields
Table 59 on page 2218 lists the output fields for the show interfaces (Multicast Tunnel) command.
Output fields are listed in the approximate order in which they appear.
Physical Interface
Enabled (all levels): State of the interface. Possible values are described in the “Enabled Field”
section under Common Output Fields Description.
Interface index (detail, extensive, none): Physical interface's index number, which reflects its
initialization sequence.
SNMP ifIndex (detail, extensive, none): SNMP index number for the physical interface.
Generation (detail, extensive): Unique number for use by Juniper Networks technical support only.
Device flags (all levels): Information about the physical device. Possible values are described in the
“Device Flags” section under Common Output Fields Description.
Interface flags (all levels): Information about the interface. Possible values are described in the
“Interface Flags” section under Common Output Fields Description.
Input Rate (none specified): Input rate in bits per second (bps) and packets per second (pps).
Statistics last cleared (detail, extensive): Time when the statistics for the interface were last set
to zero.
Traffic statistics (all levels): Number and rate of bytes and packets received and transmitted on the
physical interface.
Sample Output
Traffic statistics:
Input bytes : 246132
Output bytes : 355524
Input packets: 4558
Output packets: 4558
IPv6 transit statistics:
Input bytes : 0
Output bytes : 0
Input packets: 0
Output packets: 0
Local statistics:
Input bytes : 246132
Output bytes : 0
Input packets: 4558
Output packets: 0
Transit statistics:
Input bytes : 0 0 bps
Output bytes : 355524 0 bps
Input packets: 0 0 pps
Output packets: 4558 0 pps
IPv6 transit statistics:
Input bytes : 0
Output bytes : 0
Input packets: 0
Output packets: 0
Protocol inet, MTU: Unlimited, Generation: 184, Route table: 4
Flags: None
Protocol inet6, MTU: Unlimited, Generation: 185, Route table: 4
Flags: None
Release Information
IN THIS SECTION
Syntax | 2225
Description | 2225
Options | 2225
Syntax
Description
Options
logical-system (all | logical-system-name) (Optional) Perform this operation on all logical systems or
on a particular logical system.
view
Output Fields
Table 60 on page 2225 describes the output fields for the show mld group command. Output fields are
listed in the approximate order in which they appear.
Interface (all levels): Name of the interface that received the MLD membership report; local means
that the local router joined the group itself.
Group Mode (all levels): Mode the SSM group is operating in: Include or Exclude.
Last reported by (all levels): Address of the host that last reported membership in this group.
Source timeout (detail): Time remaining until the group traffic is no longer forwarded. The timer is
refreshed when a listener in include mode sends a report. A group in exclude mode or configured as a
static group displays a zero timer.
Timeout (brief, none): Time remaining until the group membership is removed.
Group timeout (detail): Time remaining until a group in exclude mode moves to include mode. The timer
is refreshed when a listener in exclude mode sends a report. A group in include mode or configured as
a static group displays a zero timer.
Type (all levels): How the group membership was learned:
• Dynamic—Membership was reported by a host.
• Static—Membership is configured.
Sample Output
Interface: ge-0/2/0.0
Group: ff02::6
Source: ::
Last reported by: fe80::21f:12ff:feb6:4b3a
Timeout: 245 Type: Dynamic
Group: ff02::16
Source: ::
Last reported by: fe80::21f:12ff:feb6:4b3a
Timeout: 28 Type: Dynamic
Interface: local
Group: ff02::2
Source: ::
Last reported by: Local
Timeout: 0 Type: Dynamic
Group: ff02::16
Source: ::
Last reported by: Local
Timeout: 0 Type: Dynamic
The output for the show mld group brief command is identical to that for the show mld group
command. For sample output, see "show mld group (Include Mode)" on page 2227 and "show mld group
(Exclude Mode)" on page 2227.
Source: ::
Last reported by: fe80::2e0:81ff:fe05:1a67
Timeout: 223 Type: Dynamic
Group: ff05::2
Group mode: Include
Source: ::
Last reported by: fe80::2e0:81ff:fe05:1a67
Timeout: 223 Type: Dynamic
Interface: so-1/0/1.0
Group: ff02::2
Group mode: Include
Source: ::
Last reported by: fe80::280:42ff:fe15:f445
Timeout: 258 Type: Dynamic
Interface: local
Group: ff02::2
Group mode: Include
Source: ::
Last reported by: Local
Timeout: 0 Type: Dynamic
Group: ff02::16
Source: ::
Last reported by: Local
Timeout: 0 Type: Dynamic
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2231
Description | 2231
Options | 2231
Syntax
Description
Options
logical-system (all | logical-system-name) (Optional) Perform this operation on all logical systems or
on a particular logical system.
view
Output Fields
Table 61 on page 2231 describes the output fields for the show mld interface command. Output fields
are listed in the approximate order in which they appear.
Querier (all levels): Address of the router that has been elected to send membership queries.
SSM Map Policy (all levels): Name of the source-specific multicast (SSM) map policy that has been
applied to the MLD interface.
Timeout (all levels): How long until the MLD querier is declared to be unreachable, in seconds.
OIF map (all levels): Name of the OIF map associated with the interface.
SSM map (all levels): Name of the source-specific multicast (SSM) map used on the interface, if
configured.
Group limit (all levels): Maximum number of groups allowed on the interface. Any memberships
requested after the limit is reached are rejected.
Group log-interval (all levels): Time (in seconds) between consecutive log messages.
Distributed (all levels): State of MLD, which, by default, takes place on the Routing Engine for MX
Series routers but can be distributed to the Packet Forwarding Engine to provide faster processing of
join and leave events.
Sample Output
Configured Parameters:
MLD Query Interval (.1 secs): 1250
Derived Parameters:
MLD Membership Timeout (.1secs): 2600
MLD Other Querier Present Timeout (.1 secs): 2550
The output for the show mld interface brief command is identical to that for the show mld interface
command. For sample output, see "show mld interface" on page 2235.
The output for the show mld interface detail command is identical to that for the show mld interface
command. For sample output, see "show mld interface" on page 2235.
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2237
Description | 2237
Options | 2238
Syntax
Description
By default, Junos OS multicast devices collect statistics of received and transmitted MLD control
messages that reflect currently active multicast group subscribers.
Some devices also automatically maintain continuous MLD statistics globally on the device in addition
to the default active subscriber statistics—these are persistent, continuous statistics of received and
transmitted MLD control packets that account for both past and current multicast group subscriptions
processed on the device. With continuous statistics, you can see the total count of MLD control packets
the device processed since the last device reboot or clear mld statistics continuous command. The
device collects and displays continuous statistics only for the fields shown in the MLD packet statistics
output section of this command, and does not display the MLD Global statistics section.
Devices that support continuous statistics maintain this information in a shared database and copy it to
the backup Routing Engine at a configurable interval to avoid too much processing overhead on the
Routing Engine. These actions preserve statistics counts across events such as a Routing Engine
switchover, which does not happen for the default active subscriber statistics.
You can change the default interval (300 seconds) using the cont-stats-collection-interval
configuration statement at the [edit routing-options multicast] hierarchy level.
You can display either the default currently active subscriber statistics or continuous subscriber
statistics (if supported), but not both at the same time. Include the continuous option to display
continuous statistics; otherwise, the command displays the statistics only for currently active
subscribers.
Run the clear mld statistics command to clear the currently active subscriber statistics. On devices that
support continuous statistics, run the clear command with the continuous option to clear all continuous
statistics. You must run these commands separately to clear both types of statistics because the device
maintains and clears the two types of statistics separately.
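As with IGMP, the two kinds of MLD statistics are displayed and cleared separately; for example (a sketch):

    user@host> show mld statistics continuous
    user@host> clear mld statistics
    user@host> clear mld statistics continuous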
Options
none Display MLD statistics for all interfaces. These statistics represent
currently active subscribers.
continuous (Optional) Display continuous MLD statistics that account for both past
and current multicast group subscribers instead of the default statistics
that only reflect currently active subscribers. This option is not available
with the interface option for interface-specific statistics.
interface interface-name (Optional) Display statistics about the specified interface. This option is
not available with the continuous option.
logical-system (all | logical-system-name) (Optional) Perform this operation on all logical systems or
on a particular logical system.
view
Output Fields
Table 62 on page 2239 describes the output fields for the show mld statistics command. Output fields
are listed in the approximate order in which they appear.
MLD Packet Statistics: Heading for MLD packet statistics for all interfaces or for the specified
interface name.
NOTE: These statistics are not supported or displayed with the continuous
option.
Sample Output
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2244
Description | 2244
Options | 2244
Syntax
Description
Options
none Display MLD snooping information for all interfaces on which MLD
snooping is enabled.
brief | detail (Optional) Display the specified level of output. The default is brief.
instance routing-instance (Optional) Display MLD snooping information for the specified routing
instance.
interface-name (Optional) Display MLD snooping information for the specified interface.
qualified-vlan vlan-name (Optional) Display MLD snooping information for the specified qualified
VLAN.
vlan vlan-name (Optional) Display MLD snooping information for the specified VLAN.
view
Output Fields
Table 63 on page 2245 lists the output fields for the show mld snooping interface command. Output
fields are listed in the approximate order in which they appear. Details may differ for EX switches and
MX routers.
Vlan (all levels): Name of the VLAN for which MLD snooping is enabled.
Router interface (detail): Indicates whether the interface is a multicast router interface: Yes or No.
Sample Output
Vlan: v100
Learning-Domain: default
Interface: ge-0/0/1.0
State: Up Groups: 1
Immediate leave: Off
Router interface: no
Interface: ge-0/0/2.0
State: Up Groups: 0
Immediate leave: Off
Router interface: no
Configured Parameters:
MLD Query Interval: 125.0
Vlan: v100
Learning-Domain: default
Interface: ge-0/0/2.0
State: Up Groups: 0
Immediate leave: Off
Router interface: no
Configured Parameters:
MLD Query Interval: 125.0
MLD Query Response Interval: 10.0
MLD Last Member Query Interval: 1.0
MLD Robustness Count: 2
Vlan: v1
Learning-Domain: default
Interface: ge-0/0/1.0
Interface: ge-0/0/2.0
Configured Parameters:
MLD Query Interval: 125.0
MLD Query Response Interval: 10.0
MLD Last Member Query Interval: 1.0
MLD Robustness Count: 2
The output for the show mld snooping interface detail command is identical to that for the show mld
snooping interface command. For sample output, see "show mld snooping interface" on page 2246.
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2248
Description | 2249
Options | 2249
Syntax
Description
Options
none Display the multicast group membership information for all VLANs on which
MLD snooping is enabled.
brief | detail (Optional) Display the specified level of output. The default is brief.
interface interface-name (Optional) Display the multicast group membership information for the
specified interface.
vlan (vlan-id | vlan-name) (Optional) Display the multicast group membership for the specified VLAN.
view
Output Fields
Table 64 on page 2249 lists the output fields for the show mld snooping membership command. Output
fields are listed in the approximate order in which they appear.
Interfaces (brief): Interfaces that are members of the listed multicast group.
Sample Output
2001:db8:ff1e::2011
Interfaces: ge-1/0/30.0
2001:db8:ff1e::2012
Interfaces: ge-1/0/30.0
2001:db8:ff1e::2013
Interfaces: ge-1/0/30.0
2001:db8:ff1e::2014
Interfaces: ge-1/0/30.0
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2253
Description | 2253
Options | 2253
Syntax
Description
Options
none Display route information for all VLANs on which MLD snooping is enabled.
brief | detail (Optional) Display the specified level of output. The default is brief.
ethernet-switching (Optional) Display information on Layer 2 IPv6 multicast routes. This is the
default.
vlan (vlan-id | vlan-name) (Optional) Display route information for the specified VLAN.
view
Output Fields
Table 65 on page 2254 lists the output fields for the show mld-snooping route command. Output fields
are listed in the approximate order in which they appear.
Group Multicast IPv6 group address. Only the last 32 bits of the address
are shown. The switch uses only these bits in determining
multicast routes.
Interface or Interfaces Name of the interface or interfaces in the VLAN associated with
the multicast group.
Sample Output
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2257
Description | 2257
Syntax
Description
view
Output Fields
Table 66 on page 2257 lists the output fields for the show mld snooping statistics command. Output
fields are listed in the approximate order in which they appear.
Recv Errors Number of packets received that did not conform to the MLD version
1 (MLDv1) or MLDv2 standards.
Sample Output
Leaves: 0 0 0
Other: 0 0 0
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2260
Description | 2260
Options | 2260
Syntax
Description
Options
none Display MLD snooping information for all VLANs on which MLD snooping is enabled.
brief | detail (Optional) Display the specified level of output. The default is brief.
vlan vlan-name (Optional) Display MLD snooping information for the specified VLAN.
view
Output Fields
Table 67 on page 2260 lists the output fields for the show mld-snooping vlans command. Output fields
are listed in the approximate order in which they appear.
Receivers (brief): Number of interfaces in the VLAN with a receiver for any group. Indicates how many
interfaces might receive data because of MLD group membership.
vlan-interface (detail): The Layer 3 interface, if any, associated with the VLAN.
Sample Output
v10 1 0 0 0
v11 1 0 0 0
v180 3 0 1 0
v181 3 0 0 0
v182 3 0 0 0
Release Information
RELATED DOCUMENTATION
mld-snooping | 1669
show mld snooping membership | 2248
show mld-snooping route | 2253
show mld snooping statistics | 2257
Verifying MLD Snooping on EX Series Switches (CLI Procedure) | 232
Configuring MLD Snooping on an EX Series Switch VLAN (CLI Procedure) | 186
IN THIS SECTION
Syntax | 2263
Description | 2264
Options | 2264
Syntax
<externally-controlled>
<externally-provisioned>
<instance routing-instance-name>
<locally-provisioned>
<logical-system (all | logical-system-name)>
<lsp-type>
<name name>
<p2mp>
<reverse-statistics>
<segment>
<statistics>
<transit>
Description
Display information about configured and active dynamic Multiprotocol Label Switching (MPLS) label-
switched paths (LSPs).
Options
none Display standard information about all configured and active dynamic MPLS
LSPs.
brief | detail | extensive | terse (Optional) Display the specified level of output. The extensive
option displays the same information as the detail option, but covers the most recent 50 events.
For example:
• All timestamps
• Timestamp deltas
descriptions (Optional) Display the MPLS label-switched path (LSP) descriptions. To view this
information, you must configure the description statement at the [edit protocols
mpls lsp] hierarchy level. Only LSPs with a description are displayed. This
command is only valid for the ingress routing device, because the description is
not propagated in RSVP messages.
down | up (Optional) Display only LSPs that are inactive or active, respectively.
externally-controlled (Optional) Display the LSPs that are under the control of an external Path
Computation Element (PCE).
externally-provisioned (Optional) Display the LSPs that are generated dynamically and provisioned by
an external Path Computation Element (PCE).
instance instance-name (Optional) Display MPLS LSP information for the specified instance. If
instance-name is omitted, MPLS LSP information is displayed for the master instance.
locally-provisioned (Optional) Display LSPs that have been provisioned locally by the Path
Computation Client (PCC).
logical-system (all | logical-system-name) (Optional) Perform this operation on all logical systems or
on a particular logical system.
lsp-type (Optional) Display information about a particular LSP type.
name name (Optional) Display information about the specified LSP or group of LSPs.
statistics (Optional) (Ingress and transit routers only) Display accounting information about
LSPs. Statistics are not available for LSPs on the egress routing device, because
the penultimate routing device in the LSP sets the label to 0. Also, as the packet
arrives at the egress routing device, the hardware removes its MPLS header and
the packet reverts to being an IPv4 packet. Therefore, it is counted as an IPv4
packet, not an MPLS packet.
NOTE: If a bypass LSP is configured for the primary static LSP, this command displays
cumulative statistics for packets traversing the protected LSP and the bypass LSP after
traffic is reoptimized when the protected LSP link is restored. (Bypass LSPs are not
supported on QFX Series switches.)
When used with the bypass option (show mpls lsp bypass statistics),
display statistics for the traffic that flows only through the bypass LSP.
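Two of these options lend themselves to quick checks. To compare the cumulative counts against the traffic flowing only through a bypass LSP:

    user@host> show mpls lsp statistics
    user@host> show mpls lsp bypass statistics

For the descriptions option to show anything, the LSP must carry a description on the ingress device; a minimal sketch (to-chicago is a hypothetical LSP name):

    [edit protocols mpls]
    label-switched-path to-chicago {
        to 192.168.0.4;
        description "primary path to Chicago POP";
    }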
view
Output Fields
Table 68 on page 2267 describes the output fields for the show mpls lsp command. Output fields are
listed in the approximate order in which they appear.
Ingress LSP (all levels): Information about LSPs on the ingress routing device. Each session has one
line of output.
Egress LSP (all levels): Information about the LSPs on the egress routing device. MPLS learns this
information by querying RSVP, which holds all the transit and egress session information. Each session
has one line of output.
Transit LSP (all levels): Number of LSPs on the transit routing devices and the state of these paths.
MPLS learns this information by querying RSVP, which holds all the transit and egress session
information.
P2MP name (all levels): Name of the point-to-multipoint LSP. Dynamically generated P2MP LSPs used for
VPLS flooding use dynamically generated P2MP LSP names. The name uses the format
identifier:vpls:router-id:routing-instance-name. The identifier is automatically generated by Junos OS.
P2MP branch count (all levels): Number of destination LSPs the point-to-multipoint LSP is
transmitting to.
P (all levels): An asterisk (*) under this heading indicates that the LSP is a primary path.
address (detail, extensive): Destination (egress routing device) of the LSP.
State (brief, detail): State of the LSP handled by this RSVP session: Up, Dn (down), or Restart.
Active Route (detail, extensive): Number of active routes (prefixes) installed in the forwarding
table. For ingress LSPs, the forwarding table is the primary IPv4 table (inet.0). For transit and
egress RSVP sessions, the forwarding table is the primary MPLS table (mpls.0).
P (brief): Path. An asterisk (*) underneath this column indicates that the LSP is a primary path.
ActivePath (detail, extensive): (Ingress LSP) Name of the active path: Primary or Secondary.
Statistics (extensive): Displays the number of packets and the number of bytes transmitted over the
LSP. These counters are reset to zero whenever the LSP path is optimized (for example, during an
automatic bandwidth allocation).
Aggregate statistics (extensive): Displays the number of packets and the number of bytes transmitted
over the LSP. These counters continue to iterate even if the LSP path is optimized. You can reset
these counters to zero using the clear mpls lsp statistics command.
Packets (brief, extensive): Displays the number of packets transmitted over the LSP.
Bytes (brief, extensive): Displays the number of bytes transmitted over the LSP.
LSPtype (detail, extensive): Type of LSP: Static configured or Dynamic configured.
Bypass (all levels): (Bypass LSP) Destination address (egress routing device) for the bypass LSP.
LSPpath (detail): Indicates whether the RSVP session is for the primary or secondary LSP path.
LSPpath can be either primary or secondary and can be displayed on the ingress, egress, and transit
routing devices.
Bidir (all levels): (GMPLS) The LSP allows data to travel in both directions between GMPLS devices.
Bidirectional (all levels): (GMPLS) The LSP allows data to travel both ways between GMPLS devices.
FastReroute desired (detail): Fast reroute has been requested by the ingress routing device.
Link protection desired (detail): Link protection has been requested by the ingress routing device.
Node/Link protection desired (detail): Node or link protection has been requested by the ingress
routing device.
External Path CSPF status (extensive): (PCE-controlled LSPs) Status of the PCE-controlled LSP with
per path attributes:
• Local
• External
flap counter (extensive): Counts the number of times an LSP flaps down or up.
LoadBalance (detail, extensive): (Ingress LSP) CSPF load-balancing rule that was configured to select
the LSP's path among equal-cost paths: Most-fill, Least-fill, or Random.
Signal type (all levels): Signal type for GMPLS LSPs. The signal type determines the peak data rate
for the LSP: DS0, DS3, STS-1, STM-1, or STM-4.
Encoding type (all levels): LSP encoding type: Packet, Ethernet, PDH, SDH/SONET, Lambda, or Fiber.
Switching type (all levels): Type of switching on the links needed for the LSP: Fiber, Lambda,
Packet, TDM, or PSC-1.
GPID (all levels): Generalized Payload Identifier (identifier of the payload carried by an LSP):
HDLC, Ethernet, IPv4, PPP, or Unknown.
Protection (all levels): Configured protection capability desired for the LSP: Extra, Enhanced, none,
One plus one, One to one, or Shared.
Upstream label in (all levels): (Bidirectional LSPs) Incoming label for reverse direction traffic for
this LSP.
Upstream label out (all levels): (Bidirectional LSPs) Outgoing label for reverse direction traffic for
this LSP.
Suggested label received (all levels): (Bidirectional LSPs) Label the upstream interface suggests to
use in the Resv message that is sent.
Suggested label sent (all levels): (Bidirectional LSPs) Label the downstream node suggests to use in
the Resv message that is returned.
Autobandwidth (detail, extensive): (Ingress LSP) The LSP is performing autobandwidth allocation.
Mbb counter (extensive): Counts the number of times an LSP incurs MBB.
MinBW (detail, extensive): (Ingress LSP) Configured minimum value of the LSP, in bps.
MaxBW (detail, extensive): (Ingress LSP) Configured maximum value of the LSP, in bps.
Dynamic MinBW (detail, extensive): (Ingress LSP) Displays the current dynamically specified minimum
bandwidth allocation for the LSP, in bps.
Dynamic MaxBW (detail, extensive): (Ingress LSP) Displays the current dynamically specified maximum
bandwidth allocation for the LSP, in bps.
AdjustTimer (detail, extensive): (Ingress LSP) Configured value for the adjust-timer statement,
indicating the total amount of time allowed before bandwidth adjustment will take place, in seconds.
Adjustment Threshold (detail, extensive): (Ingress LSP) Configured value for the adjust-threshold
statement. Specifies how sensitive the automatic bandwidth adjustment for an LSP is to changes in
bandwidth utilization.
Time for Next Adjustment (detail, extensive): (Ingress LSP) Time in seconds until the next automatic
bandwidth adjustment sample is taken.
Time of Last Adjustment (detail, extensive): (Ingress LSP) Date and time when the last automatic
bandwidth adjustment was completed.
MaxAvgBW util (detail, extensive): (Ingress LSP) Current value of the actual maximum average
bandwidth utilization, in bps.
Overflow limit (detail, extensive): (Ingress LSP) Configured value of the threshold overflow limit.
Overflow sample count (detail, extensive): (Ingress LSP) Current value for the overflow sample count.
Bandwidth Adjustment in nnn second(s) (detail, extensive): (Ingress LSP) Current value of the
bandwidth adjustment timer, indicating the amount of time remaining until the bandwidth adjustment
will take place, in seconds.
In-place Update Count (detail, extensive): Current value of the in-place LSP bandwidth update counter,
indicating the number of times an LSP-ID is reused when LSP-ID re-use is enabled for an LSP.
Underflow limit (detail, extensive): (Ingress LSP) Configured value of the threshold underflow limit.
Underflow sample count (detail, extensive): (Ingress LSP) Current value for the underflow sample
count.
Underflow Max AvgBW (detail, extensive): (Ingress LSP) The highest sample bandwidth among the
underflow samples currently recorded. This is the signaling bandwidth if an adjustment occurs because
of an underflow.
Active path indicator (detail, extensive): (Ingress LSP) A value of * indicates that the path is
active. The absence of * indicates that the path is not active. In the following example, “long” is
the active path:
*Primary long
Standby short
Standby (detail, extensive): (Ingress LSP) Name of the path in standby mode.
Bandwidth per class (detail, extensive): (Ingress LSP) Active bandwidth for the LSP path for each
MPLS class type, in bps.
Priorities (detail, extensive): (Ingress LSP) Configured values of the setup priority and the hold
priority, respectively (the setup priority is displayed first), where 0 is the highest priority and 7
is the lowest priority. If you have not explicitly configured these values, the default values are
displayed (7 for the setup priority and 0 for the hold priority).
OptimizeTimer (detail, extensive): (Ingress LSP) Configured value of the optimize timer, indicating
the total amount of time allowed before path reoptimization, in seconds.
SmartOptimizeTimer (detail, extensive): (Ingress LSP) Configured value of the smart optimize timer,
indicating the total amount of time allowed before path reoptimization, in seconds.
Reoptimization in xxx seconds (detail, extensive): (Ingress LSP) Current value of the optimize timer,
indicating the amount of time remaining until the path will be reoptimized, in seconds.
Computed ERO (S [L] denotes strict [loose] hops) (detail, extensive): (Ingress LSP) Computed explicit
route. A series of hops, each with an address followed by a hop indicator. The value of the hop
indicator can be strict (S) or loose (L).
CSPF metric (detail, extensive): (Ingress LSP) Constrained Shortest Path First metric for this path.
Received RRO (detail, extensive): (Ingress LSP) Received record route. A series of hops, each with an
address followed by a flag. (In most cases, the received record route is the same as the computed
explicit route. If Received RRO is different from Computed ERO, there is a topology change in the
network, and the route is taking a detour.) The following flags identify the protection capability
and status of the downstream node:
• P—Pop labels.
• D—Delegation labels.
Index number (extensive): (Ingress LSP) Log entry number of each LSP path event. The numbers are in
chronological descending order, with a maximum of 50 index numbers displayed.
Created (extensive): (Ingress LSP) Date and time the LSP was created.
Resv style (brief, detail, extensive): (Bypass) RSVP reservation style. This field consists of two
parts. The first is the number of active reservations. The second is the reservation style, which can
be FF (fixed filter), SE (shared explicit), or WF (wildcard filter).
Time left (detail): Number of seconds remaining in the lifetime of the reservation.
Since (detail): Date and time when the RSVP session was initiated.
Tspec (detail): Sender's traffic specification, which describes the sender's traffic parameters.
Port number (detail): Protocol ID and sender or receiver port used in this RSVP session.
PATH rcvfrom (detail): Address of the previous-hop (upstream) routing device or client, the interface
the neighbor used to reach this router, and the number of packets received from the upstream
neighbor.
PATH sentto (detail): Address of the next-hop (downstream) routing device or client, the interface
used to reach this neighbor, and the number of packets sent to the downstream routing device.
RESV rcvfrom (detail): Address of the previous-hop (upstream) routing device or client, the interface
the neighbor used to reach this routing device, and the number of packets received from the upstream
neighbor. The output in this field, which is consistent with that in the PATH rcvfrom field,
indicates that the RSVP negotiation is complete.
Record route (detail): Recorded route for the session, taken from the record route object.
ETLD In: Number of transport labels that the LSP-Hop can potentially receive from its upstream hop. It is recorded as Effective Transport Label Depth (ETLD) at the transit and egress devices. (Level of output: extensive)
ETLD Out: Number of transport labels the LSP-Hop can potentially send to its downstream hop. It is recorded as ETLD at the transit and ingress devices. (Level of output: extensive)
Delegation hop: Specifies whether the transit hop is selected as a delegation label: (Level of output: extensive)
• Yes
• No
Soft preempt: Number of soft preemptions that occurred on a path and when the last soft preemption occurred. Only successful soft preemptions are counted (those that actually resulted in a new path being used). (Level of output: detail)
Soft preemption pending: Path is in the process of being soft preempted. This display is removed once the ingress router has calculated a new path. (Level of output: detail)
MPLS-TE LSP Defaults: Default settings for MPLS traffic engineered LSPs: (Level of output: defaults)
• LSP Holding Priority—Determines the degree to which an LSP holds on to its session reservation after the LSP has been set up successfully.
The XML tag name of the bandwidth tag under the auto-bandwidth tag has been updated to maximum-average-bandwidth. You can see the new tag when you issue the show mpls lsp extensive command with the | display xml pipe option. If you have any scripts that use the bandwidth tag, ensure that they are updated to use maximum-average-bandwidth.
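As a sketch of the kind of script update this requires, the fragment below prefers the new maximum-average-bandwidth element and falls back to the legacy bandwidth element. The XML shown is an illustrative stand-in, not verbatim | display xml output:

```python
# Prefer the renamed <maximum-average-bandwidth> element, falling back to
# the legacy <bandwidth> element for output from older releases. The XML
# fragments are illustrative stand-ins for the auto-bandwidth subtree.
import xml.etree.ElementTree as ET

NEW_XML = "<auto-bandwidth><maximum-average-bandwidth>963739</maximum-average-bandwidth></auto-bandwidth>"
OLD_XML = "<auto-bandwidth><bandwidth>963739</bandwidth></auto-bandwidth>"

def auto_bandwidth(fragment):
    """Return the bandwidth value text, handling both tag names."""
    root = ET.fromstring(fragment)
    elem = root.find("maximum-average-bandwidth")
    if elem is None:               # older release: legacy tag name
        elem = root.find("bandwidth")
    return None if elem is None else elem.text

print(auto_bandwidth(NEW_XML))  # 963739
print(auto_bandwidth(OLD_XML))  # 963739
```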
Sample Output
192.168.0.4
From: 192.168.0.5, State: Up, ActiveRoute: 0, LSPname: E-D
ActivePath: (primary)
LSPtype: Static Configured, Penultimate hop popping
LoadBalance: Random
Encoding type: Packet, Switching type: Packet, GPID: IPv4
*Primary State: Up
Priorities: 7 0
SmartOptimizeTimer: 180
Computed ERO (S [L] denotes strict [loose] hops): (CSPF metric: 30)
10.0.0.18 S 10.0.0.22 S
Received RRO (ProtectionFlag 1=Available 2=InUse 4=B/W 8=Node 10=SoftPreempt
20=Node-ID):
10.0.0.18 10.0.0.22
Total 1 displayed, Up 1, Down 0
192.168.0.5
From: 192.168.0.4, LSPstate: Up, ActiveRoute: 0
LSPname: E-D, LSPpath: Primary
Suggested label received: -, Suggested label sent: -
Recovery label received: -, Recovery label sent: -
Resv style: 1 FF, Label in: 3, Label out: -
Time left: 157, Since: Wed Jul 18 17:55:12 2012
Tspec: rate 0bps size 0bps peak Infbps m 20 M 1500
192.168.0.4
From: 192.168.0.5, State: Up, ActiveRoute: 0, LSPname: E-D
ActivePath: (primary)
LSPtype: Static Configured, Ultimate hop popping
LoadBalance: Random
Encoding type: Packet, Switching type: Packet, GPID: IPv4
*Primary State: Up
Priorities: 7 0
SmartOptimizeTimer: 180
Computed ERO (S [L] denotes strict [loose] hops): (CSPF metric: 30)
10.0.0.18 S 10.0.0.22 S
Received RRO (ProtectionFlag 1=Available 2=InUse 4=B/W 8=Node 10=SoftPreempt
20=Node-ID):
10.0.0.18 10.0.0.22
11 Sep 20 15:54:35.032 Make-before-break: Switched to new instance
10 Sep 20 15:54:34.029 Record Route: 10.0.0.18 10.0.0.22
9 Sep 20 15:54:34.029 Up
8 Sep 20 15:54:20.271 Originate make-before-break call
7 Sep 20 15:54:20.271 CSPF: computation result accepted 10.0.0.18 10.0.0.22
6 Sep 20 15:52:10.247 Selected as active path
5 Sep 20 15:52:10.246 Record Route: 10.0.0.18 10.0.0.22
4 Sep 20 15:52:10.243 Up
3 Sep 20 15:52:09.745 Originate Call
2 Sep 20 15:52:09.745 CSPF: computation result accepted 10.0.0.18 10.0.0.22
1 Sep 20 15:51:39.903 CSPF failed: no route toward 192.168.0.4
192.168.0.5
From: 192.168.0.4, LSPstate: Up, ActiveRoute: 0
LSPname: E-D, LSPpath: Primary
Suggested label received: -, Suggested label sent: -
Recovery label received: -, Recovery label sent: -
Resv style: 1 FF, Label in: 3, Label out: -
Time left: 148, Since: Thu Sep 20 15:52:10 2012
Tspec: rate 0bps size 0bps peak Infbps m 20 M 1500
Port number: sender 1 receiver 49601 protocol 0
PATH rcvfrom: 10.0.0.18 (lt-1/2/0.17) 27 pkts
Adspec: received MTU 1500
PATH sentto: localclient
RESV rcvfrom: localclient
Record route: 10.0.0.22 10.0.0.18 <self>
Total 1 displayed, Up 1, Down 0
show mpls lsp detail (When Egress Protection Is in Effect During a Local Repair)
192.168.0.4
From: 192.168.0.5, State: Up, ActiveRoute: 0, LSPname: E-D
ActivePath: (primary)
LSPtype: Static Configured, Penultimate hop popping
LoadBalance: Random
Encoding type: Packet, Switching type: Packet, GPID: IPv4
*Primary State: Up
Priorities: 7 0
SmartOptimizeTimer: 180
Computed ERO (S [L] denotes strict [loose] hops): (CSPF metric: 30)
10.0.0.18 S 10.0.0.22 S
Received RRO (ProtectionFlag 1=Available 2=InUse 4=B/W 8=Node 10=SoftPreempt
20=Node-ID):
10.0.0.18 10.0.0.22
Total 1 displayed, Up 1, Down 0
192.168.0.5
From: 192.168.0.4, LSPstate: Down, ActiveRoute: 0
LSPname: E-D, LSPpath: Primary
Suggested label received: -, Suggested label sent: -
Recovery label received: -, Recovery label sent: -
Resv style: 1 FF, Label in: 3, Label out: -
Time left: 157, Since: Wed Jul 18 17:55:12 2012
Tspec: rate 0bps size 0bps peak Infbps m 20 M 1500
Port number: sender 1 receiver 46128 protocol 0
Egress protection PLR as protector: In Use
PATH rcvfrom: 10.0.0.18 (lt-1/2/0.17) 3 pkts
Adspec: received MTU 1500
PATH sentto: localclient
RESV rcvfrom: localclient
Record route: 10.0.0.22 10.0.0.18 <self>
Total 1 displayed, Up 1, Down 0
192.168.0.4
From: 192.168.0.5, State: Up, ActiveRoute: 0, LSPname: E-D
ActivePath: (primary)
LSPtype: Static Configured, Ultimate hop popping
LSP Control Status: Externally controlled
LoadBalance: Random
Metric: 10
Encoding type: Packet, Switching type: Packet, GPID: IPv4
*Primary State: Up
Priorities: 7 0
192.168.0.5
From: 192.168.0.4, LSPstate: Up, ActiveRoute: 0
LSPname: E-D, LSPpath: Primary
Suggested label received: -, Suggested label sent: -
Recovery label received: -, Recovery label sent: -
Resv style: 1 FF, Label in: 3, Label out: -
Time left: 148, Since: Thu Sep 20 15:52:10 2012
50.0.0.1
From: 10.0.0.1, State: Up, ActiveRoute: 0, LSPname: test
ActivePath: (primary)
LSPtype: Static Pop-and-forward Configured, Penultimate hop popping
LoadBalance: Random
Encoding type: Packet, Switching type: Packet, GPID: IPv4
*Primary State: Up
Priorities: 7 0
OptimizeTimer: 300
SmartOptimizeTimer: 180
Reoptimization in 240 second(s).
Computed ERO (S [L] denotes strict [loose] hops): (CSPF metric: 3)
1.1.1.2 S 4.4.4.1 S 5.5.5.2 S
Received RRO (ProtectionFlag 1=Available 2=InUse 4=B/W 8=Node 10=SoftPreempt
20=Node-ID):
(Labels: P=Pop D=Delegation)
80.1.1.2(Label=18 P) 50.1.1.2(Label=17 P) 70.1.1.2(Label=16 P)
92.1.1.1(Label=16 D) 93.1.1.2(Label=16 P) 99.1.1.1(Label=16 P)
99.2.1.1(Label=16 P) 99.3.1.2(Label=3)
17 Aug 3 13:17:33.601 CSPF: computation result ignored, new path less avail
bw[3 times]
16 Aug 3 13:02:51.283 CSPF: computation result ignored, new path no
benefit[2 times]
15 Aug 3 12:54:36.678 Selected as active path
14 Aug 3 12:54:36.676 Record Route: 1.1.1.2 4.4.4.1 5.5.5.2
13 Aug 3 12:54:36.676 Up
12 Aug 3 12:54:33.924 Deselected as active
192.168.0.4
From: 192.168.0.5, State: Up, ActiveRoute: 0, LSPname: E-D
ActivePath: (primary)
Node/Link protection desired
LSPtype: Static Configured, Penultimate hop popping
LoadBalance: Random
Autobandwidth
MinBW: 300bps, MaxBW: 1000bps, Dynamic MinBW: 1000bps
Adjustment Timer: 300 secs AdjustThreshold: 25%
Max AvgBW util: 963.739bps, Bandwidth Adjustment in 0 second(s).
Min BW Adjust Interval: 1000, MinBW Adjust Threshold (in %): 50
Overflow limit: 0, Overflow sample count: 0
Underflow limit: 0, Underflow sample count: 9, Underflow Max AvgBW: 614.421bps
Encoding type: Packet, Switching type: Packet, GPID: IPv4
*Primary State: Up
Priorities: 7 0
Bandwidth: 1000bps
SmartOptimizeTimer: 180
Computed ERO (S [L] denotes strict [loose] hops): (CSPF metric: 30)
10.0.0.18 S 10.0.0.22 S
10.2.5.2
From: 192.168.255.1, State: Up, ActiveRoute: 0, LSPname: R1-to-R4-1
ActivePath: path-R2-R3 (primary)
Link protection desired
LSPtype: Static Configured, Penultimate hop popping
LoadBalance: Random
Follow destination IGP metric
Encoding type: Packet, Switching type: Packet, GPID: IPv4
10.2.5.2
From: 192.168.255.1, State: Up, ActiveRoute: 0, LSPname: R1-to-R4-1
ActivePath: path-R2-R3 (primary)
Link protection desired
LSPtype: Static Configured, Penultimate hop popping
LoadBalance: Random
Follow destination IGP metric
Encoding type: Packet, Switching type: Packet, GPID: IPv4
LSP Self-ping Status : Enabled
*Primary path-R2-R3 State: Up
Priorities: 7 0
Bandwidth: 100Mbps
SmartOptimizeTimer: 180
Flap Count: 1
MBB Count: 4
In-place Update Count: 2
48 Mar 3 16:43:40.438 In-place LSP Update successful
47 Mar 3 16:43:40.477 Record Route: 192.168.255.2(flag=0x21)
10.1.2.2(flag=1 Label=415072) 192.168.255.3(flag=0x21) 10.2.3.3(flag=1
Label=418192) 192.168.255.4(flag=0x20) 10.3.4.4(Label=
3)
46 Mar 3 16:43:39.617 CSPF: ERO retrace was successful 10.1.2.2 10.2.3.3
10.3.4.4
45 Mar 3 16:43:39.617 Originate In-place LSP Update call
44 Mar 3 16:42:28.263 LSP-ID: 1 deleted
43 Mar 3 16:42:28.263 Make-before-break: Cleaned up old instance: Hold dead
expiry
2.2.2.2
From: 1.1.1.1, LSPstate: Up, ActiveRoute: 0
LSPname: Bypass->1.1.2.2
LSPtype: Static Configured
Suggested label received: -, Suggested label sent: -
Recovery label received: -, Recovery label sent: 300032
Resv style: 1 SE, Label in: -, Label out: 300032
Time left: -, Since: Tue Dec 3 15:19:49 2013
Tspec: rate 0bps size 0bps peak Infbps m 20 M 1500
Port number: sender 1 receiver 55750 protocol 0
Type: Bypass LSP
Number of data route tunnel through: 1
Number of RSVP session tunnel through: 0
PATH rcvfrom: localclient
Adspec: sent MTU 1500
10.255.245.51
10.255.245.51
From: 10.255.245.50, State: Up, ActiveRoute: 0, LSPname: p2mp-st-br1
ActivePath: path1 (primary)
P2MP name: p2mp-lsp2
LoadBalance: Random
Encoding type: Packet, Switching type: Packet, GPID: IPv4
*Primary path1 State: Up
Computed ERO (S [L] denotes strict [loose] hops): (CSPF metric: 25)
192.168.208.17 S
Received RRO (ProtectionFlag 1=Available 2=InUse 4=B/W 8=Node
10=SoftPreempt):
192.168.208.17
Total 2 displayed, Up 2, Down 0
213.119.192.2
From: 156.154.162.128, State: Up, ActiveRoute: 1, LSPname: to-lahore
ActivePath: (primary)
LSPtype: Static Configured
LoadBalance: Random
Autobandwidth
MinBW: 5Mbps MaxBW: 250Mbps
AdjustTimer: 300 secs
Max AvgBW util: 0bps, Bandwidth Adjustment in 102 second(s).
192.168.0.4
From: 192.168.0.5, State: Up, ActiveRoute: 0, LSPname: E-D
Statistics: Packets 302, Bytes 28992
Aggregate statistics: Packets 302, Bytes 28992
ActivePath: (primary)
LSPtype: Static Configured, Penultimate hop popping
LoadBalance: Random
Encoding type: Packet, Switching type: Packet, GPID: IPv4
*Primary State: Up
Priorities: 7 0
SmartOptimizeTimer: 180
Computed ERO (S [L] denotes strict [loose] hops): (CSPF metric: 30)
10.0.0.18 S 10.0.0.22 S
Received RRO (ProtectionFlag 1=Available 2=InUse 4=B/W 8=Node 10=SoftPreempt
20=Node-ID):
10.0.0.18 10.0.0.22
6 Oct 3 11:18:28.281 Selected as active path
Release Information
RELATED DOCUMENTATION
show msdp
IN THIS SECTION
Syntax | 2296
Description | 2296
Options | 2296
Syntax
show msdp
<brief | detail>
<instance instance-name>
<logical-system (all | logical-system-name)>
<peer peer-address>
Description
Options
instance instance-name (Optional) Display information for the specified instance only.
logical-system (all | logical-system-name) (Optional) Perform this operation on all logical systems or on a particular logical system.
peer peer-address (Optional) Display information about the specified peer only.
Required Privilege Level: view
Output Fields
Table 69 on page 2297 describes the output fields for the show msdp command. Output fields are listed
in the approximate order in which they appear.
State: Status of the MSDP connection: Listen, Established, or Inactive. (Level of output: All levels)
Last up/down: Time at which the most recent peer-state change occurred. (Level of output: All levels)
SA Count: Number of source-active cache entries advertised by each peer that were accepted, compared to the number that were received, in the format number-accepted/number-received. (Level of output: All levels)
State timer expires: Number of seconds before another message is sent to a peer. (Level of output: detail)
Peer Times out: Number of seconds to wait for a response from the peer before the peer is declared unavailable. (Level of output: detail)
SA accepted: Number of entries in the source-active cache accepted from the peer. (Level of output: detail)
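Since the SA Count column packs two counters into the number-accepted/number-received format, a script consuming this output has to split the field; a minimal sketch (the function name is ours):

```python
# Split an MSDP SA Count field of the form "number-accepted/number-received"
# into its two integer counters.
def parse_sa_count(field):
    accepted, received = field.split("/")
    return int(accepted), int(received)

print(parse_sa_count("150/200"))  # (150, 200)
```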
Sample Output
show msdp
The output for the show msdp brief command is identical to that for the show msdp command. For
sample output, see "show msdp" on page 2298.
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2299
Description | 2299
Options | 2299
Syntax
Description
Display multicast sources learned from Multicast Source Discovery Protocol (MSDP).
Options
none Display standard MSDP source information for all routing instances.
instance instance-name (Optional) Display information for the specified instance only.
logical-system (all | logical-system-name) (Optional) Perform this operation on all logical systems or on a particular logical system.
Required Privilege Level: view
Output Fields
Table 70 on page 2300 describes the output fields for the show msdp source command. Output fields
are listed in the approximate order in which they appear.
Sample Output
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2302
Description | 2302
Options | 2302
Syntax
Description
Options
group group (Optional) Display source-active cache information for the specified
group.
logical-system (all | logical-system-name) (Optional) Perform this operation on all logical systems or on a particular logical system.
originator originator (Optional) Display information about the peer that originated the
source-active cache entries.
peer peer-address (Optional) Display the source-active cache of the specified peer.
source source-address (Optional) Display the source-active cache of the specified source.
Required Privilege Level: view
Output Fields
Table 71 on page 2303 describes the output fields for the show msdp source-active command. Output
fields are listed in the approximate order in which they appear.
Global active source limit exceeded: Number of times all peers have exceeded configured active source limits.
Global active source limit maximum: Configured number of active source messages accepted by the device.
Global active source limit threshold: Configured threshold for applying random early discard (RED) to drop some but not all MSDP active source messages.
Global active source limit log-warning: Threshold at which a warning message is logged (percentage of the number of active source messages accepted by the device).
Originator: Router ID configured on the source of the rendezvous point (RP) that originated the message, or the loopback address when the router ID is not configured.
Sample Output
The output for the show msdp source-active brief command is identical to that for the show msdp
source-active command. For sample output, see "show msdp source-active" on page 2304.
The output for the show msdp source-active detail command is identical to that for the show msdp
source-active command. For sample output, see "show msdp source-active" on page 2304.
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2306
Description | 2306
Options | 2306
Syntax
Description
Options
none Display statistics about all MSDP peers for all routing instances.
logical-system (all | logical-system-name) (Optional) Perform this operation on all logical systems or on a particular logical system.
Required Privilege Level: view
Output Fields
Table 72 on page 2307 describes the output fields for the show msdp statistics command. Output fields
are listed in the approximate order in which they appear.
Global active source limit exceeded: Number of times all peers have exceeded configured active source limits.
Global active source limit maximum: Configured number of active source messages accepted by the device.
Global active source limit threshold: Configured threshold for applying random early discard (RED) to drop some but not all MSDP active source messages.
Global active source limit log-warning: Threshold at which a warning message is logged (percentage of the number of active source messages accepted by the device).
Global active source limit log interval: Time (in seconds) between consecutive log messages.
Last State Change: How long ago the peer state changed.
Last message received from the peer: How long ago the last message was received from the peer.
SA messages with zero Entry Count received: The Entry Count is a field within the SA message that defines how many source/group tuples are present in the SA message. The counter is incremented each time an SA message with an Entry Count of zero is received.
Active source exceeded: Number of times this peer has exceeded configured source-active limits.
Active source Maximum: Configured number of active source messages accepted by this peer.
Active source threshold: Configured threshold on this peer for applying random early discard (RED) to drop some but not all MSDP active source messages.
Active source log-warning: Configured threshold on this peer at which a warning message is logged (percentage of the number of active source messages accepted by the device).
Active source log-interval: Time (in seconds) between consecutive log messages on this peer.
Sample Output
Peer: 10.255.245.39
Last State Change: 11:54:49 (00:24:59)
Last message received from peer: 11:53:32 (00:26:16)
RPF Failures: 0
Remote Closes: 0
Peer Timeouts: 0
SA messages sent: 376
SA messages received: 459
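The per-peer block above is a flat list of Field: value lines, so it can be collected into a dictionary keyed on the text before the first colon; a sketch (the sample text mirrors the output above):

```python
# Collect the per-peer counters of "show msdp statistics" output into a
# dictionary. Each line is keyed on the text before its first colon, so
# timestamp values that themselves contain colons are preserved intact.
SAMPLE = """\
Peer: 10.255.245.39
  Last State Change: 11:54:49 (00:24:59)
  RPF Failures: 0
  Remote Closes: 0
  Peer Timeouts: 0
  SA messages sent: 376
  SA messages received: 459
"""

def parse_peer_stats(text):
    stats = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            stats[key.strip()] = value.strip()
    return stats

stats = parse_peer_stats(SAMPLE)
print(stats["SA messages sent"])  # 376
print(stats["Peer"])              # 10.255.245.39
```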
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2311
Description | 2311
Options | 2312
Syntax
Description
Display backup PE router group information when ingress PE redundancy is configured. Ingress PE
redundancy provides a backup resource when point-to-multipoint LSPs are configured for multicast
distribution.
Options
address pe-address (Optional) Display the groups that a PE address is associated with.
group group (Optional) Display the backup PE group information for a particular
group.
instance instance-name (Optional) Display backup PE group information for a specific multicast
instance.
logical-system (all | logical-system-name) (Optional) Perform this operation on all logical systems or on a particular logical system.
Required Privilege Level: view
Output Fields
Table 73 on page 2312 describes the output fields for the show multicast backup-pe-groups command.
Output fields are listed in the approximate order in which they appear.
Designated PE: Primary PE router. Address of the PE router that is currently forwarding traffic on the static route.
Transitions: Number of times that the designated PE router has transitioned from the most eligible PE router to a backup PE router and back again to the most eligible PE router.
Backup PE List: List of PE routers that are configured to be backups for the group.
Sample Output
Backup PE group: b1
Designated PE: 10.255.165.7
Transitions: 1
Last Transition: 03:15:01
Local Address: 10.255.165.7
Backup PE List:
10.255.165.8
Backup PE group: b2
Designated PE: 10.255.165.7
Transitions: 2
Last Transition: 02:58:20
Local Address: 10.255.165.7
Backup PE List:
10.255.165.9
10.255.165.8
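The sample output above groups its fields under each Backup PE group header, with the Backup PE List entries indented beneath their label; a hypothetical parser sketch for that layout (function name and structure are ours):

```python
# Parse "show multicast backup-pe-groups" sample output into per-group
# records. Assumes the layout shown above: a "Backup PE group:" header,
# indented "Field: value" lines, and backup addresses listed under
# "Backup PE List:".
def parse_backup_pe_groups(text):
    groups, current, in_backup_list = {}, None, False
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith("Backup PE group:"):
            name = stripped.split(":", 1)[1].strip()
            current = groups[name] = {"Backup PE List": []}
            in_backup_list = False
        elif stripped == "Backup PE List:":
            in_backup_list = True
        elif current is not None and in_backup_list and stripped:
            current["Backup PE List"].append(stripped)
        elif current is not None and ":" in stripped:
            key, _, value = stripped.partition(":")
            current[key.strip()] = value.strip()
    return groups

SAMPLE = """\
Backup PE group: b1
  Designated PE: 10.255.165.7
  Transitions: 1
  Backup PE List:
    10.255.165.8
"""
groups = parse_backup_pe_groups(SAMPLE)
print(groups["b1"]["Designated PE"])   # 10.255.165.7
print(groups["b1"]["Backup PE List"])  # ['10.255.165.8']
```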
Release Information
IN THIS SECTION
Syntax | 2314
Description | 2314
Options | 2315
Syntax
Description
Options
logical-system (all | logical-system-name) (Optional) Perform this operation on all logical systems or on a particular logical system.
Required Privilege Level: view
Output Fields
Table 74 on page 2315 describes the output fields for the show multicast flow-map command. Output
fields are listed in the approximate order in which they appear.
Policy: Name of the policy associated with the flow map. (Level of output: All levels)
Cache-timeout: Cache timeout value assigned to the flow map. (Level of output: All levels)
Bandwidth: Bandwidth setting associated with the flow map. (Level of output: All levels)
Adaptive: Whether or not adaptive mode is enabled for the flow map. (Level of output: none)
Adaptive Bandwidth: Whether or not adaptive mode is enabled for the flow map. (Level of output: detail)
Redundant Sources: Redundant sources defined for the same destination group. (Level of output: detail)
Sample Output
Sample Output
Release Information
IN THIS SECTION
Syntax | 2317
Description | 2317
Options | 2317
Syntax
Description
Options
none Display multicast forwarding cache statistics for all supported address
families for all routing instances.
inet | inet6 (Optional) Display multicast forwarding cache statistics for IPv4 or
IPv6 family addresses, respectively.
instance instance-name (Optional) Display multicast forwarding cache statistics for a specific
routing instance.
logical-system (all | logical-system-name) (Optional) Perform this operation on all logical systems or on a particular logical system.
Required Privilege Level: view
Output Fields
Table 75 on page 2318 describes the output fields for the show multicast forwarding-cache statistics
command. Output fields are listed in the approximate order in which they appear.
Instance: Name of the routing instance for which multicast forwarding cache statistics are displayed.
Family: Protocol family for which multicast forwarding cache statistics are displayed: ALL, INET, or INET6.
General (or MVPN RPT) Entries Used: Number of currently used multicast forwarding cache entries.
General (or MVPN RPT) Suppress Threshold: Maximum number of multicast forwarding cache entries that can be added to the cache. When the number of entries reaches the configured threshold, the device suspends adding new multicast forwarding cache entries.
General (or MVPN RPT) Reuse Value: Number of multicast forwarding cache entries that must be reached before the device creates new multicast forwarding cache entries. When the total number of multicast forwarding cache entries is below the reuse value, the device resumes adding new multicast forwarding cache entries.
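The Suppress Threshold and Reuse Value fields describe a hysteresis: entry creation stops once the entry count reaches the suppress threshold and resumes only after it falls below the reuse value. A minimal model of that behavior (class and method names are ours, not a Junos API):

```python
# Model the suppress/reuse hysteresis of the multicast forwarding cache:
# once the entry count reaches the suppress threshold, new entries are
# refused; creation resumes only after the count drops below the reuse
# value.
class ForwardingCacheGate:
    def __init__(self, suppress_threshold, reuse_value):
        self.suppress_threshold = suppress_threshold
        self.reuse_value = reuse_value
        self.suppressed = False

    def may_add(self, entries_used):
        if not self.suppressed and entries_used >= self.suppress_threshold:
            self.suppressed = True          # stop adding new entries
        elif self.suppressed and entries_used < self.reuse_value:
            self.suppressed = False         # resume adding new entries
        return not self.suppressed

gate = ForwardingCacheGate(suppress_threshold=100, reuse_value=80)
print(gate.may_add(99))   # True  (below threshold)
print(gate.may_add(100))  # False (threshold reached, suppressed)
print(gate.may_add(90))   # False (still at or above reuse value)
print(gate.may_add(79))   # True  (dropped below reuse value)
```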
Sample Output
Release Information
Starting in Junos OS Release 16.1, output includes general and rendezvous-point tree (RPT) suppression
states.
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2320
Description | 2320
Options | 2320
Syntax
Description
Options
logical-system (all | logical-system-name) (Optional) Perform this operation on all logical systems or on a particular logical system.
Required Privilege Level: view
Output Fields
Table 76 on page 2321 describes the output fields for the show multicast interface command. Output
fields are listed in the approximate order in which they appear.
Maximum bandwidth (bps): Maximum bandwidth setting, in bits per second, for this interface.
Remaining bandwidth (bps): Amount of bandwidth, in bits per second, remaining on the interface.
Mapped bandwidth deduction (bps): Amount of bandwidth, in bits per second, used by any flows that are mapped to the interface.
Local bandwidth deduction (bps): Amount of bandwidth, in bits per second, used by any mapped flows that are traversing the interface.
Reverse OIF mapping: State of the reverse OIF mapping feature (on or off). NOTE: This field does not appear in the output when the no QoS adjustment feature is disabled.
Reverse OIF mapping no QoS adjustment: State of the no QoS adjustment feature (on or off) for interfaces that are using reverse OIF mapping. NOTE: This field does not appear in the output when the no QoS adjustment feature is disabled.
Leave timer: Amount of time a mapped interface remains active after the last mapping ends. NOTE: This field does not appear in the output when the no QoS adjustment feature is disabled.
No QoS adjustment: State (on) of the no QoS adjustment feature when this feature is enabled. NOTE: This field does not appear in the output when the no QoS adjustment feature is disabled.
Sample Output
Release Information
IN THIS SECTION
Syntax | 2323
Description | 2323
Options | 2323
Syntax
Description
Display configuration information about IP multicast networks, including neighboring multicast router
addresses.
Options
host (Optional) Display configuration information about a particular host. Replace host with a
hostname or IP address.
Required Privilege Level: view
Output Fields
Table 77 on page 2324 describes the output fields for the show multicast mrinfo command. Output
fields are listed in the approximate order in which they appear.
source-address: Query address, hostname (DNS name or IP address of the source address), and multicast protocol version or the software version of another vendor.
ip-address-1--->ip-address-2: Queried router interface address and directly attached neighbor interface address, respectively.
Sample Output
Release Information
IN THIS SECTION
Syntax | 2326
Description | 2326
Options | 2327
Syntax
Description
Options
none Display standard information about all entries in the multicast next-hop table for all supported address families.
brief | detail | terse (Optional) Display the specified level of output. Use terse to display the total number of outgoing interfaces (as opposed to listing them). When you include the detail option on M Series and T Series routers and EX Series switches, the downstream interface name includes the next-hop ID number in parentheses, in the form fe-0/1/2.0-(1048574), where 1048574 is the next-hop ID number. Starting in Junos OS Release 16.1, the show multicast next-hops statement shows the hierarchical next hops contained in the top-level next hop.
identifier-number (Optional) Show a particular next hop by ID number. The range of values is 1 through 65,535.
inet | inet6 (Optional) Display entries for IPv4 or IPv6 family addresses, respectively.
logical-system (all | logical-system-name) (Optional) Perform this operation on all logical systems or on a particular logical system.
Required Privilege Level: view
Output Fields
Table 78 on page 2327 describes the output fields for the show multicast next-hops command. Output
fields are listed in the approximate order in which they appear.
Refcount: Number of cache entries that are using this next hop.
Incoming interface list: List of interfaces that accept incoming traffic. Only shown for routes that do not use strict RPF-based forwarding, for example for bidirectional PIM.
Sample Output
show multicast next-hops (Ingress Router, Multipoint LDP Inband Signaling for Point-to-
Multipoint LSPs)
(0x600e844) 1 0 (0x600e764)
1048582 2 1 1048578
(0x600df84) 1 0 1048586
(0x600e684) 1 0 (0x600e5a4)
1048581 2 1 1048577
(0x600ddc4) 1 0 1048585
(0x600ebc4) 1 0 (0x600eae4)
show multicast next-hops (Egress Router, Multipoint LDP Inband Signaling for Point-to-
Multipoint LSPs)
Family: INET6
ID Refcount KRefcount Downstream interface
2097157 2 1 ge-0/0/1.0
544 1 0 lo0.0
xe-4/1/0.0
The output for the show multicast next-hops brief command is identical to that for the show multicast
next-hops command. For sample output, see "show multicast next-hops" on page 2328.
Family: INET6
ID Refcount KRefcount Downstream interface Addr
1048586 4 2 1048585
1048583
Flags 0x20c type 0x19 members 0/0/2/0/0
Address 0xb1842e4
1048583 14 4 ge-1/1/9.0-(1048582)
Flags 0x200 type 0x19 members 0/0/0/1/0
Address 0xb183ef4
1048592 4 2 1048583
1048591
Flags 0x20c type 0x19 members 0/0/2/0/0
Address 0xb184644
Release Information
detail option display of next-hop ID number introduced in Junos OS Release 11.1 for M Series and
T Series routers and EX Series switches.
IN THIS SECTION
Syntax | 2332
Description | 2332
Options | 2332
Syntax
Description
Display configuration information about PIM-to-IGMP message translation, also known as PIM-to-IGMP
proxy.
Options
logical-system (all | logical-system-name) (Optional) Perform this operation on all logical systems or on a particular logical system.
Required Privilege Level: view
Output Fields
Table 79 on page 2333 describes the output fields for the show multicast pim-to-igmp-proxy command.
Output fields are listed in the order in which they appear.
interface-name: Name of upstream interface (no more than two allowed) on which PIM-to-IGMP message translation is configured.
Sample Output
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2334
Description | 2335
Options | 2335
Syntax
Description
Display configuration information about PIM-to-MLD message translation, also known as PIM-to-MLD
proxy.
Options
logical-system (all | logical-system-name) (Optional) Perform this operation on all logical systems or on a particular logical system.
Required Privilege Level: view
Output Fields
Table 80 on page 2335 describes the output fields for the show multicast pim-to-mld-proxy command.
Output fields are listed in the order in which they appear.
interface-name: Name of upstream interface (no more than two allowed) on which PIM-to-MLD message translation is configured.
Sample Output
Release Information
IN THIS SECTION
Syntax | 2337
Description | 2337
Options | 2338
Syntax
Description
Display the entries in the IP multicast forwarding table. You can display similar information with the
show route table inet.1 command.
NOTE: On all SRX Series devices, when a multicast route is not available, pending sessions are not torn down, and subsequent packets are queued. If no multicast route resolution result comes back, the traffic flow has to wait for the pending session to time out; new packets can then trigger creation of a new pending session and another route resolution.
Options
group group (Optional) Display the cache entries for a particular group.
inet | inet6 (Optional) Display multicast forwarding table entries for IPv4 or IPv6
family addresses, respectively.
instance instance-name (Optional) Display entries in the multicast forwarding table for a
specific multicast instance.
logical-system (all | logical-system-name) (Optional) Perform this operation on all logical systems or on a particular logical system.
source-prefix source-prefix (Optional) Display the cache entries for a particular source prefix.
Required Privilege Level: view
Output Fields
Table 81 on page 2339 describes the output fields for the show multicast route command. Output fields
are listed in the approximate order in which they appear.
family: IPv4 address family (INET) or IPv6 address family (INET6). (Level of output: All levels)
Source: Prefix and length of the source as it appears in the multicast forwarding table. (Level of output: All levels)
Incoming interface list: List of interfaces that accept incoming traffic. Only shown for routes that do not use strict RPF-based forwarding, for example for bidirectional PIM. (Level of output: All levels)
Upstream interface: Name of the interface on which a packet with this source prefix is expected to arrive. (Level of output: All levels)
Upstream rpf interface list: When multicast-only fast reroute (MoFRR) is enabled, a PIM router propagates join messages on two upstream RPF interfaces to receive multicast traffic on both links for the same join request. (Level of output: All levels)
Downstream interface list: List of interface names to which a packet with this source prefix is forwarded. (Level of output: All levels)
Number of outgoing interfaces: Total number of outgoing interfaces for each (S,G) entry. (Level of output: extensive)
Statistics: Rate at which packets are being forwarded for this source and group entry (in kBps and pps), and the number of packets that have been forwarded to this prefix. If one or more of the kilobits-per-second packet forwarding statistic queries fails or times out, the Statistics field displays "Forwarding statistics are not available". (Level of output: detail extensive)
Next-hop ID: Next-hop identifier of the prefix. The identifier is returned by the routing device’s Packet Forwarding Engine and is also displayed in the output of the show multicast next-hops command. (Level of output: detail extensive)
Incoming interface list ID: For bidirectional PIM, the incoming interface list identifier: identifiers for interfaces that accept incoming traffic. Only shown for routes that do not use strict RPF-based forwarding, for example for bidirectional PIM. (Level of output: detail extensive)
Upstream protocol: The protocol that maintains the active multicast forwarding route for this group or source. (Level of output: detail extensive)
Route type: Type of multicast route. Values can be (S,G) or (*,G). (Level of output: summary)
Cache lifetime/timeout: Number of seconds until the prefix is removed from the multicast forwarding table. A value of never indicates a permanent forwarding entry. A value of forever indicates routes that do not have keepalive times. (Level of output: extensive)
Wrong incoming interface notifications: Number of times that the upstream interface was not available. (Level of output: extensive)
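The Statistics field packs the forwarding rate and packet total into one line, such as "Statistics: 46 kBps, 1000 pps, 921077 packets" in the sample output; a hypothetical extraction sketch (function name is ours):

```python
# Extract the numeric values from a "show multicast route detail" Statistics
# line. When the line does not match (for example when the device reports
# "Forwarding statistics are not available"), raise instead of guessing.
import re

def parse_statistics(line):
    match = re.search(
        r"Statistics:\s*(\d+)\s*kBps,\s*(\d+)\s*pps,\s*(\d+)\s*packets", line
    )
    if match is None:
        raise ValueError("forwarding statistics are not available")
    kbps, pps, packets = (int(g) for g in match.groups())
    return {"kBps": kbps, "pps": pps, "packets": packets}

print(parse_statistics("Statistics: 46 kBps, 1000 pps, 921077 packets"))
# {'kBps': 46, 'pps': 1000, 'packets': 921077}
```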
Sample Output
Starting in Junos OS Release 16.1, show multicast route displays the top-level hierarchical next hop.
Group: 233.252.0.0
Source: 10.255.14.144/32
Upstream interface: local
Downstream interface list:
so-1/0/0.0
Group: 233.252.0.1
Source: 10.255.14.144/32
Upstream interface: local
Downstream interface list:
so-1/0/0.0
Group: 233.252.0.1
Source: 10.255.70.15/32
Upstream interface: so-1/0/0.0
Downstream interface list:
mt-1/1/0.1081344
Family: INET6
Group: 233.252.0.1/24
Source: *
Incoming interface list:
lo0.0 ge-0/0/1.0
Downstream interface list:
ge-0/0/1.0
Group: 233.252.0.3/24
Source: *
Incoming interface list:
lo0.0 ge-0/0/1.0 xe-4/1/0.0
Downstream interface list:
ge-0/0/1.0
Group: 233.252.0.11/24
Source: *
Incoming interface list:
lo0.0 ge-0/0/1.0
Downstream interface list:
ge-0/0/1.0
Group: 233.252.0.13/24
Source: *
Incoming interface list:
lo0.0 ge-0/0/1.0 xe-4/1/0.0
Downstream interface list:
ge-0/0/1.0
Family: INET6
The output for the show multicast route brief command is identical to that for the show multicast route
command. For sample output, see "show multicast route" on page 2341 or "show multicast route
(Bidirectional PIM)" on page 2342.
Group: 233.252.0.0
Source: 10.255.14.144/32
Upstream interface: local
Downstream interface list:
so-1/0/0.0
Group: 233.252.0.1
Source: 10.255.14.144/32
Upstream interface: local
Downstream interface list:
so-1/0/0.0
Session description: Administratively Scoped
Statistics: 0 kBps, 0 pps, 13404 packets
Next-hop ID: 262142
Upstream protocol: PIM
Group: 233.252.0.1
Source: 10.255.70.15/32
Upstream interface: so-1/0/0.0
Downstream interface list:
mt-1/1/0.1081344
Session description: Administratively Scoped
Statistics: 46 kBps, 1000 pps, 921077 packets
Family: INET6
Group: 233.252.0.1/24
Source: *
Incoming interface list:
lo0.0 ge-0/0/1.0
Downstream interface list:
ge-0/0/1.0
Number of outgoing interfaces: 1
Session description: NOB Cross media facilities
Group: 233.252.0.3/24
Source: *
Incoming interface list:
lo0.0 ge-0/0/1.0 xe-4/1/0.0
Downstream interface list:
ge-0/0/1.0
Number of outgoing interfaces: 1
Session description: NOB Cross media facilities
Statistics: 0 kBps, 0 pps, 0 packets
Next-hop ID: 2097153
Incoming interface list ID: 589
Upstream protocol: PIM
Route state: Active
Forwarding state: Forwarding
Cache lifetime/timeout: forever
Wrong incoming interface notifications: 0
Family: INET6
Group: 225.0.0.1
Source: 192.0.2.0/24
Upstream interface: st0.1
+ Upstream neighbor: 203.0.113.0/24
Downstream interface list:
+ st0.0-198.51.100.0 st0.0-198.51.100.1
Session description: Unknown
Statistics: 0 kBps, 1 pps, 119 packets
Group: 225.0.0.1
Source: 192.0.2.0/24
Upstream interface: ge-3/0/12.0
Downstream interface list:
ge-0/0/18.0 ge-0/0/7.0 ge-2/0/11.0 ge-2/0/7.0 ge-3/0/20.0 ge-3/0/21.0
Number of outgoing interfaces: 6
Session description: Unknown
Statistics: 102 kBps, 801 pps, 5735 packets
Next-hop ID: 131076
Upstream protocol: PIM
Route state: Active
Forwarding state: Forwarding
Cache lifetime/timeout: 360 seconds
Wrong incoming interface notifications: 0
Uptime: 00:03:57
Group: 225.0.0.1
Source: 101.0.0.2/32
Upstream interface: ge-2/2/0.101
Downstream interface list:
distributed-gmp
Number of outgoing interfaces: 1
Session description: Unknown
Statistics: 105 kBps, 2500 pps, 4153361 packets
Next-hop ID: 1048575
Upstream protocol: PIM
Route state: Active
Forwarding state: Forwarding
Cache lifetime/timeout: 360 seconds
Wrong incoming interface notifications: 0
Uptime: 00:31:46
Group: 225.0.0.1
Source: 101.0.0.3/32
Upstream interface: ge-2/2/0.101
Downstream interface list:
distributed-gmp
Number of outgoing interfaces: 1
Session description: Unknown
Statistics: 105 kBps, 2500 pps, 4153289 packets
Next-hop ID: 1048575
Upstream protocol: PIM
Route state: Active
Forwarding state: Forwarding
Cache lifetime/timeout: 360 seconds
show multicast route extensive (PIM NSR support for VXLAN on primary Routing Engine)
Group: 233.252.0.1
Source: 10.3.3.3/32
Upstream interface: ge-3/1/2.0
Downstream interface list:
-(593)
Number of outgoing interfaces: 1
Session description: Organisational Local Scope
Statistics: 0 kBps, 0 pps, 27 packets
Next-hop ID: 1048576
Upstream protocol: PIM
Route state: Active
Forwarding state: Forwarding (Forwarding state is set as 'Forwarding' in
master RE.)
Cache lifetime/timeout: forever
Wrong incoming interface notifications: 0
Uptime: 00:06:38
Group: 233.252.0.1
Source: 10.2.1.4/32
Upstream interface: local
Downstream interface list:
ge-3/1/2.0
Number of outgoing interfaces: 1
Session description: Organisational Local Scope
Statistics: 0 kBps, 0 pps, 86 packets
Next-hop ID: 1048575
Upstream protocol: PIM
Route state: Active
Forwarding state: Forwarding (Forwarding state is set as 'Forwarding' in
master RE.)
Cache lifetime/timeout: forever
Wrong incoming interface notifications: 0
Uptime: 00:07:45
show multicast route extensive (PIM NSR support for VXLAN on backup Routing Engine)
Group: 233.252.0.1
Source: 10.3.3.3/32
Upstream interface: ge-3/1/2.0
Number of outgoing interfaces: 0
Session description: Organisational Local Scope
Forwarding statistics are not available
Next-hop ID: 0
Upstream protocol: PIM
Route state: Active
Forwarding state: Pruned (Forwarding state is set as 'Pruned' in backup RE.)
Cache lifetime/timeout: forever
Wrong incoming interface notifications: 0
Uptime: 00:06:46
Group: 233.252.0.1
Source: 10.2.1.4/32
Upstream interface: local
Number of outgoing interfaces: 0
Session description: Organisational Local Scope
Forwarding statistics are not available
Next-hop ID: 0
Upstream protocol: PIM
Route state: Active
Forwarding state: Pruned (Forwarding state is set as 'Pruned' in backup RE.)
Cache lifetime/timeout: forever
Wrong incoming interface notifications: 0
Uptime: 00:07:54
show multicast route extensive (PIM NSR support for VXLAN on backup Routing Engine)
Group: 233.252.0.1
Source: 10.3.3.3/32
Upstream interface: ge-3/1/2.0
Downstream interface list:
-(593)
Number of outgoing interfaces: 1
Session description: Organisational Local Scope
Statistics: 0 kBps, 0 pps, 0 packets
Next-hop ID: 1048576
Upstream protocol: PIM
Route state: Active
Forwarding state: Forwarding (Forwarding state is set as 'Forwarding' in
backup RE.)
Cache lifetime/timeout: forever
Wrong incoming interface notifications: 0
Uptime: 00:06:38
Group: 233.252.0.1
Source: 10.2.1.4/32
Upstream interface: local
Downstream interface list:
ge-3/1/2.0
Number of outgoing interfaces: 1
Session description: Organisational Local Scope
Statistics: 0 kBps, 0 pps, 0 packets
Next-hop ID: 1048575
Upstream protocol: PIM
Route state: Active
Forwarding state: Forwarding (Forwarding state is set as 'Forwarding' in
backup RE.)
Cache lifetime/timeout: forever
Wrong incoming interface notifications: 0
Uptime: 00:07:45
Group: 232.255.255.100
Source: 10.1.1.2/32
Upstream interface: et-0/0/0:0.0
Downstream interface list:
et-0/0/2:1.0 et-0/0/1:0.0
Number of outgoing interfaces: 2
Session description: Source specific multicast
Statistics: 0 kBps, 0 pps, 0 packets
Next-hop ID: 11066
Upstream protocol: Multicast
Route state: Active
Forwarding state: Forwarding
Cache lifetime/timeout: forever
Wrong incoming interface notifications: 0
Uptime: 14:58:34
Sensor ID: 0xf0000002
Release Information
inet6 and instance options introduced in Junos OS Release 10.0 for EX Series switches.
Support for PIM NSR for VXLAN added in Junos OS Release 16.2.
Support for multicast traffic counters added in Junos OS Release 19.2R1 for EX4300 switches.
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2352
Description | 2352
Options | 2353
Syntax
Description
Options
none: Display RPF calculation information for all supported address families.
inet | inet6: (Optional) Display the RPF calculation information for IPv4 or IPv6 family addresses, respectively.
instance instance-name: (Optional) Display information about multicast RPF calculations for a specific multicast instance.
logical-system (all | logical-system-name): (Optional) Perform this operation on all logical systems or on a particular logical system.
prefix: (Optional) Display the RPF calculation information for the specified prefix.
view
Output Fields
Table 82 on page 2353 describes the output fields for the show multicast rpf command. Output fields
are listed in the approximate order in which they appear.
Source prefix: Prefix and length of the source as it exists in the multicast forwarding table.
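An RPF lookup is essentially a longest-prefix match of the source address against the unicast routing information, returning the route (and its interface) that the router expects traffic from that source to arrive on. The following is a minimal sketch using only the standard library; the table contents are taken from the sample output in this section, and the function name is invented for illustration.

```python
import ipaddress

def rpf_lookup(source, table):
    """Return the (prefix, interface) pair whose prefix is the longest
    match for the source address, emulating an RPF table lookup."""
    src = ipaddress.ip_address(source)
    best = None
    for prefix, interface in table:
        net = ipaddress.ip_network(prefix)
        if src in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, interface)
    return (str(best[0]), best[1]) if best else None

table = [
    ("0.0.0.0/0", None),            # static default route
    ("192.168.14.0/24", "fxp0.0"),  # direct route
    ("192.168.0.0/16", "fxp0.0"),   # static route
]
print(rpf_lookup("192.168.14.7", table))  # ('192.168.14.0/24', 'fxp0.0')
```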
Sample Output
0.0.0.0/0
Protocol: Static
10.255.14.132/32
Protocol: Direct
Interface: lo0.0
10.255.245.91/32
Protocol: IS-IS
Interface: so-1/1/1.0
Neighbor: 192.168.195.21
172.16.0.1/32 Inactive
172.16.0.0/12
Protocol: Static
Interface: fxp0.0
Neighbor: 192.168.14.254
192.168.0.0/16
Protocol: Static
Interface: fxp0.0
Neighbor: 192.168.14.254
192.168.14.0/24
Protocol: Direct
Interface: fxp0.0
192.168.14.132/32
Protocol: Local
192.168.195.20/30
Protocol: Direct
Interface: so-1/1/1.0
192.168.195.22/32
Protocol: Local
192.168.195.36/30
Protocol: IS-IS
Interface: so-1/1/1.0
Neighbor: 192.168.195.21
::10.255.14.132/128
Protocol: Direct
Interface: lo0.0
::10.255.245.91/128
Protocol: IS-IS
Interface: so-1/1/1.0
Neighbor: 2001:db8::2a0:a5ff:fe28:2e8c
::192.168.195.20/126
Protocol: Direct
Interface: so-1/1/1.0
::192.168.195.22/128
Protocol: Local
::192.168.195.36/126
Protocol: IS-IS
Interface: so-1/1/1.0
Neighbor: 2001:db8::2a0:a5ff:fe28:2e8c
::192.168.195.76/126
Protocol: Direct
Interface: fe-2/2/0.0
::192.168.195.77/128
Protocol: Local
2001:db8::/64
Protocol: Direct
Interface: so-1/1/1.0
2001:db8::290:69ff:fe0c:993a/128
Protocol: Local
2001:db8::2a0:a5ff:fe12:84f/128
Protocol: Direct
Interface: lo0.0
2001:db8::2/128
Protocol: PIM
2001:db8::d/128
Protocol: PIM
2001:db8::2/128
Protocol: PIM
2001:db8::d/128
Protocol: PIM
...
Release Information
inet6 and instance options introduced in Junos OS Release 10.0 for EX Series switches.
IN THIS SECTION
Syntax | 2358
Description | 2358
Options | 2358
Syntax
Description
Options
inet | inet6: (Optional) Display scoped multicast information for IPv4 or IPv6 family addresses, respectively.
logical-system (all | logical-system-name): (Optional) Perform this operation on all logical systems or on a particular logical system.
view
Output Fields
Table 83 on page 2359 describes the output fields for the show multicast scope command. Output
fields are listed in the approximate order in which they appear.
Sample Output
Release Information
inet6 and instance options introduced in Junos OS Release 10.0 for EX Series switches.
IN THIS SECTION
Syntax | 2361
Description | 2361
Options | 2361
Syntax
Description
NOTE: On all SRX Series devices, only 100 packets can be queued while an (S,G) route is pending. When multiple multicast sessions enter the route resolution process at the same time, buffer resources might not be sufficient to queue 100 packets for each session.
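The effect of the 100-packet cap can be modeled as a bounded queue per pending (S,G) route. The sketch below only illustrates the documented limit, not SRX internals; the class and its names are invented for this example.

```python
from collections import deque

PENDING_QUEUE_LIMIT = 100  # documented per-route cap on SRX Series

class PendingRouteQueue:
    """Buffer packets for an (S,G) route whose resolution is pending.

    Packets beyond the cap are dropped, mirroring the documented
    behavior while the route resolve is in progress.
    """
    def __init__(self, limit=PENDING_QUEUE_LIMIT):
        self.limit = limit
        self.packets = deque()
        self.dropped = 0

    def enqueue(self, pkt):
        """Queue one packet; return False if it had to be dropped."""
        if len(self.packets) >= self.limit:
            self.dropped += 1
            return False
        self.packets.append(pkt)
        return True

q = PendingRouteQueue()
for i in range(150):          # 150 packets arrive before resolution
    q.enqueue(i)
print(len(q.packets), q.dropped)  # 100 50
```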
Options
none: Display standard information about all multicast sessions for all routing instances.
logical-system (all | logical-system-name): (Optional) Perform this operation on all logical systems or on a particular logical system.
view
Output Fields
Table 84 on page 2362 describes the output fields for the show multicast sessions command. Output
fields are listed in the approximate order in which they appear.
Sample Output
1 matching sessions.
Release Information
IN THIS SECTION
Syntax | 2364
Description | 2365
Options | 2365
Syntax
Description
Options
inet: (Optional) Display information for IPv4 multicast next hops only. If a family is not specified, both IPv4 and IPv6 results are shown.
inet6: (Optional) Display information for IPv6 multicast next hops only. If a family is not specified, both IPv4 and IPv6 results are shown.
logical-system logical-system-name: (Optional) Display information about a particular logical system, or type 'all'.
view
Output Fields
Table 85 on page 2365 describes the output fields for the show multicast snooping next-hops
command. Output fields are listed in the approximate order in which they appear.
Family: Protocol family for which multicast snooping next hops are displayed: INET or INET6.
Refcount: Number of cache entries that are using this next hop.
NOTE: To see the next-hop ID for a given PE mesh group, igmp-snooping must be enabled for the relevant VPLS routing instance. (Junos OS creates default CE and VE mesh groups for each VPLS routing instance. The next hop of the VE mesh group is the set of VE mesh-group interfaces of the remaining PEs in the same VPLS routing instance.)
Sample Output
1048574 4 1 ge-0/1/0.1000-(2000)
1048575
1048576
1048575 2 0 ge-0/1/2.1000-(2001)
ge-0/1/3.1000-(2002)
1048576 2 0 lsi.1048578-(2003)
lsi.1048579-(2004)
In this example, ID 1048585 is the VE next-hop ID created for the VE next hop that holds the VE interfaces for the routing instance. It appears only if IGMP snooping is enabled on the VPLS instance.
Release Information
IN THIS SECTION
Syntax | 2368
Description | 2369
Options | 2369
Syntax
Description
Display the entries in the IP multicast snooping forwarding table. You can display some of this
information with the show route table inet.1 command.
Options
active | all | inactive: (Optional) Display all active entries, all entries, or all inactive entries, respectively, in the multicast snooping table.
bridge-domain bridge-domain: (Optional) Display the entries for a particular bridge domain.
mesh-group mesh-group-name: (Optional) Display the entries for a particular mesh group.
qualified-vlan vlan-id: (Optional) Display the entries for a particular qualified VLAN.
source-prefix source-prefix: (Optional) Display the entries for a particular source prefix.
view
Output Fields
Table 86 on page 2370 describes the output fields for the show multicast snooping route command.
Output fields are listed in the approximate order in which they appear.
Nexthop Bulking: Displays whether next-hop bulk updating is ON or OFF (only for routing instances of type virtual switch or vpls). (All levels)
Family: IPv4 address family (INET) or IPv6 address family (INET6). (All levels)
Source: Prefix and length of the source as it is in the multicast forwarding table. For (*,G) entries, this field is set to *. (All levels)
Routing-instance: Name of the routing instance to which this routing information applies. (Displayed when multicast is configured within a routing instance.) (All levels)
Learning Domain: Name of the learning domain to which this routing information applies. (detail, extensive)
Statistics: Rate at which packets are being forwarded for this source and group entry (in Kbps and pps), and the number of packets that have been forwarded to this prefix. (detail, extensive)
Next-hop ID: Next-hop identifier of the prefix. The identifier is returned by the router's Packet Forwarding Engine and is also displayed in the output of the show multicast nexthops command. (detail, extensive)
Cache lifetime/timeout: Number of seconds until the prefix is removed from the multicast forwarding table. A value of never indicates a permanent forwarding entry. (extensive)
Sample Output
Group: 232.1.1.1
Source: 192.168.3.100/32
Downstream interface list:
ge-0/1/0.200
Statistics: 0 kBps, 0 pps, 1 packets
Next-hop ID: 1048577
Route state: Active
Forwarding state: Forwarding
Cache lifetime/timeout: 240 seconds
Family: INET
Group: 224.0.0.0
Bridge-domain: vsid500
Group: 225.1.0.1
Bridge-domain: vsid500
Downstream interface list: vsid500
ge-0/3/8.500 ge-1/1/9.500 ge1/2/5.500
Family: INET6
Group: ff03::1/128
Source: ::
Bridge-domain: BD-1
Mesh-group: __all_ces__
Downstream interface list:
ae0.1 -(562) 1048576
Statistics: 2697 kBps, 3875 pps, 758819039 packets
Group: ff03::1/128
Source: 6666::2/128
Bridge-domain: BD-1
Mesh-group: __all_ces__
Downstream interface list:
ae0.1 -(562) 1048576
Statistics: 0 kBps, 0 pps, 0 packets
Next-hop ID: 1048605
Route state: Active
Forwarding state: Forwarding
user@host> show multicast snooping route extensive instance evpn-vxlan group 233.252.0.1/
Group: 233.252.0.1/32
Source: *
Vlan: VLAN-100
Mesh-group: __all_ces__
Downstream interface list:
ge-0/0/3.0 -(662)
evpn-core-nh -(131076)
Statistics: 0 kBps, 0 pps, 0 packets
Next-hop ID: 131070
Route state: Active
Forwarding state: Forwarding
Release Information
Support for the control, data, qualified-vlan, and vlan options introduced in Junos OS Release 13.3 for EX Series switches.
IN THIS SECTION
Syntax | 2374
Description | 2374
Options | 2374
Syntax
Description
Options
none: Display multicast statistics for all supported address families for all routing instances.
inet | inet6: (Optional) Display multicast statistics for IPv4 or IPv6 family addresses, respectively.
logical-system (all | logical-system-name): (Optional) Perform this operation on all logical systems or on a particular logical system.
Additional Information
The input and output interface multicast statistics are consistent, but not timely. They are constructed
from the forwarding statistics, which are gathered at 30-second intervals. Therefore, the output from
this command always lags the true count by up to 30 seconds.
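Because the counters are refreshed only every 30 seconds, rate estimates should be derived from two polls spaced at least one refresh interval apart. The following sketch illustrates the arithmetic; the sampling tuples are a stand-in for whatever mechanism collects the In Packets counter from the CLI, and the function name is invented.

```python
def estimate_pps(sample_old, sample_new):
    """Estimate packets per second from two (timestamp, packet_count)
    samples of the In Packets counter.

    Samples taken closer together than the 30-second
    forwarding-statistics refresh interval may reflect the same
    underlying snapshot, so poll at least 30 seconds apart.
    """
    (t0, n0), (t1, n1) = sample_old, sample_new
    if t1 <= t0:
        raise ValueError("samples must be in chronological order")
    return (n1 - n0) / (t1 - t0)

# Two polls 60 seconds apart: the counter advanced from 50454 to 53454.
print(estimate_pps((0, 50454), (60, 53454)))  # 50.0
```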
view
Output Fields
Table 87 on page 2375 describes the output fields for the show multicast statistics command. Output
fields are listed in the approximate order in which they appear.
Family: Protocol family for which multicast statistics are displayed: INET or INET6.
Interface: Name of the interface for which statistics are being reported.
Routing Protocol: Primary multicast protocol on the interface: PIM or DVMRP for INET, or PIM for INET6.
Mismatch: Number of multicast packets that did not arrive on the correct upstream interface.
Kernel Resolve: Number of resolve requests processed by the primary multicast protocol on the interface.
Resolve No Route: Number of resolve requests that were ignored because there was no route to the source.
Resolve Filtered: Number of resolve requests filtered by policy, if any policy is configured.
In Kbytes: Total accumulated incoming traffic (in KB) since the last time the clear multicast statistics command was issued.
Out Kbytes: Total accumulated outgoing traffic (in KB) since the last time the clear multicast statistics command was issued.
Mismatch error: Number of mismatches that were ignored because of internal errors.
Mismatch No Route: Number of mismatches that were ignored because there was no route to the source.
Routing Notify: Number of times that the multicast routing system has been notified of a new multicast source by a multicast routing protocol.
Resolve Error: Number of resolve requests that were ignored because of internal errors.
In Packets: Total number of incoming packets since the last time the clear multicast statistics command was issued.
Out Packets: Total number of outgoing packets since the last time the clear multicast statistics command was issued.
Resolve requests on interfaces not enabled for multicast: Number of resolve requests on interfaces that are not enabled for multicast that have accumulated since the clear multicast statistics command was last issued.
Resolve requests with no route to source: Number of resolve requests with no route to the source that have accumulated since the clear multicast statistics command was last issued.
Routing notifications on interfaces not enabled for multicast: Number of routing notifications on interfaces not enabled for multicast that have accumulated since the clear multicast statistics command was last issued.
Routing notifications with no route to source: Number of routing notifications with no route to the source that have accumulated since the clear multicast statistics command was last issued.
Interface Mismatches on interfaces not enabled for multicast: Number of interface mismatches on interfaces not enabled for multicast that have accumulated since the clear multicast statistics command was last issued.
Group Membership on interfaces not enabled for multicast: Number of group memberships on interfaces not enabled for multicast that have accumulated since the clear multicast statistics command was last issued.
Sample Output
Interface: fe-0/0/0
Routing Protocol: PIM Mismatch error: 0
Mismatch: 0 Mismatch No Route: 0
Kernel Resolve: 10 Routing Notify: 0
Resolve No Route: 0 Resolve Error: 0
In Kbytes: 4641 In Packets: 50454
Out Kbytes: 0 Out Packets: 0
Interface: so-0/1/1.0
Routing Protocol: PIM Mismatch error: 0
Mismatch: 0 Mismatch No Route: 0
Kernel Resolve: 0 Routing Notify: 0
Resolve No Route: 0 Resolve Error: 0
In Kbytes: 0 In Packets: 0
Out Kbytes: 4641 Out Packets: 50454
Interface: st0.0-192.0.2.0
Routing protocol: PIM Mismatch error: 0
Mismatch: 0 Mismatch no route: 0
Kernel resolve: 0 Routing notify: 0
Resolve no route: 0 Resolve error: 0
Resolve filtered: 0 Notify filtered: 0
In kbytes: 0 In packets: 0
Out kbytes: 0 Out packets: 0
Interface: st0.1-198.51.100.0
Routing protocol: PIM Mismatch error: 0
Mismatch: 0 Mismatch no route: 0
Kernel resolve: 0 Routing notify: 0
Resolve no route: 0 Resolve error: 0
Resolve filtered: 0 Notify filtered: 0
In kbytes: 0 In packets: 0
Out kbytes: 0 Out packets: 0
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2381
Description | 2381
Options | 2381
Syntax
Description
Display usage information about the 10 most active Distance Vector Multicast Routing Protocol
(DVMRP) or Protocol Independent Multicast (PIM) groups.
Options
none: Display multicast usage information for all supported address families for all routing instances.
inet | inet6: (Optional) Display usage information for IPv4 or IPv6 family addresses, respectively.
instance instance-name: (Optional) Display information about the most active DVMRP or PIM groups for a specific multicast instance.
logical-system (all | logical-system-name): (Optional) Perform this operation on all logical systems or on a particular logical system.
view
Output Fields
Table 88 on page 2382 describes the output fields for the show multicast usage command. Output
fields are listed in the approximate order in which they appear.
Instance: Name of the routing instance. (Displayed when multicast is configured within a routing instance.)
Packets: Number of packets that have been forwarded to this prefix. If one or more of the packets-forwarded statistics queries fails or times out, the Packets field displays unavailable.
Bytes: Number of bytes that have been forwarded to this prefix. If one or more of the packets-forwarded statistics queries fails or times out, the Bytes field displays unavailable.
Prefix: IP address.
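The "10 most active groups" selection this command performs is, in effect, an ordering of (prefix, bytes) statistics. When post-processing captured counters, the equivalent is a sort; the sketch below uses invented data, and treats failed queries (displayed as unavailable) as ranking last.

```python
def most_active(usage, n=10):
    """Return the n entries with the highest byte counts, mirroring
    how show multicast usage ranks the most active groups.

    Entries whose statistics query failed are represented here as
    None (the CLI displays "unavailable") and sort last.
    """
    return sorted(
        usage,
        key=lambda entry: -1 if entry[1] is None else entry[1],
        reverse=True,
    )[:n]

usage = [("233.252.0.1", 4641), ("233.252.0.3", 0), ("225.0.0.1", None)]
print(most_active(usage, 2))  # [('233.252.0.1', 4641), ('233.252.0.3', 0)]
```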
Sample Output
The output for the show multicast usage brief command is identical to that for the show multicast
usage command. For sample output, see "show multicast usage" on page 2383.
Release Information
inet6 and instance options introduced in Junos OS Release 10.0 for EX Series switches.
IN THIS SECTION
Syntax | 2385
Description | 2385
Options | 2385
Syntax
Description
Options
instance-name instance-name: (Optional) Display output for the specified routing instance.
source-pe: (Optional) Display source-pe output for the specified c-multicast entries.
view
Output Fields
Table 89 on page 2385 lists the output fields for the show mvpn c-multicast command. Output fields are
listed in the approximate order in which they appear.
Ptnl: Provider tunnel attributes: tunnel type, tunnel source, tunnel destination group. (extensive, none)
MVPN instance: Name of the multicast VPN routing instance. (extensive, none)
C-multicast IPv4 route count: Number of customer multicast IPv4 routes associated with the multicast VPN routing instance. (summary)
C-multicast IPv6 route count: Number of customer multicast IPv6 routes associated with the multicast VPN routing instance. (summary)
Sample Output
Instance: mvpn1
C-multicast IPv6 route count: 1
Instance : mvpn1
MVPN Mode : RPT-SPT
C-Multicast route address: ::/0:ff05::1/128
MVPN Source-PE1:
extended-community: no-advertise target:10.1.0.0:9
Route Distinguisher: 10.1.0.0:1
Autonomous system number: 1
Interface: ge-0/0/9.1 Index: 343
PIM Source-PE1:
extended-community: target:10.1.0.0:9
Route Distinguisher: 10.1.0.0:1
Autonomous system number: 1
Interface: ge-0/0/9.1 Index: 343
Release Information
IN THIS SECTION
Syntax | 2389
Description | 2389
Options | 2389
Syntax
Description
Display the multicast VPN routing instance information according to the options specified.
Options
instance-name: (Optional) Display statistics for the specified routing instance, or press Enter without specifying an instance name to show output for all instances.
display-tunnel-name: (Optional) Display the ingress provider tunnel name rather than the attribute.
logical-system: (Optional) Display details for the specified logical system, or type 'all'.
view
Output Fields
Table 90 on page 2390 lists the output fields for the show mvpn instance command. Output fields are
listed in the approximate order in which they appear.
MVPN instance: Name of the multicast VPN routing instance. (extensive, none)
Provider tunnel: Provider tunnel attributes: tunnel type, tunnel source, tunnel destination group. (extensive, none)
Neighbor: Address, type of provider tunnel (I-P-tnl, inclusive provider tunnel, or S-P-tnl, selective provider tunnel), and provider tunnel for each neighbor. (extensive, none)
Ptnl: Provider tunnel attributes: tunnel type, tunnel source, tunnel destination group. (extensive, none)
Neighbor count: Number of neighbors associated with the multicast VPN routing instance. (summary)
C-multicast IPv4 route count: Number of customer multicast IPv4 routes associated with the multicast VPN routing instance. (summary)
C-multicast IPv6 route count: Number of customer multicast IPv6 routes associated with the multicast VPN routing instance. (summary)
Sample Output
Instance: VPN-A
Provider tunnel: I-P-tnl:PIM-SM:10.255.14.144, 198.51.100.1
Neighbor I-P-tnl
10.255.14.160 PIM-SM:10.255.14.160, 198.51.100.1
10.255.70.17 PIM-SM:10.255.70.17, 198.51.100.1
C-mcast IPv4 (S:G) Ptnl St
192.168.195.78/32:203.0.113.0/24 PIM-SM:10.255.14.144, 198.51.100.1 RM
MVPN instance:
Sample Output
Instance: mvpn1
Sender-Based RPF: Disabled. Reason: Not enabled by configuration.
Hot Root Standby: Disabled. Reason: Not enabled by configuration.
Neighbor count: 3
C-multicast IPv6 route count: 1
Sample Output
Instance : vpn_blue
Customer Source: 10.1.1.1
RT-Import Target: 192.168.1.1:100
Route-Distinguisher: 192.168.1.1:100
Source-AS: 65000
Via unicast route: 10.1.0.0/16 in vpn-blue.inet.0
Candidate Source PE Set:
RT-Import 192.168.1.1:100, RD 1111:22222, Source-AS 65000
RT-Import 192.168.2.2:100, RD 1111:22222, Source-AS 65000
RT-Import 192.168.3.3:100, RD 1111:22222, Source-AS 65000
Extensive output shows everything in detail output and adds the list of bound c-multicast routes.
Family : INET
Instance : vpn_blue
Customer Source: 10.1.1.1
RT-Import Target: 192.168.1.1:100
Route-Distinguisher: 192.168.1.1:100
Source-AS: 65000
Via unicast route: 10.1.0.0/16 in vpn-blue.inet.0
Candidate Source PE Set:
RT-Import 192.168.1.1:100, RD 1111:22222, Source-AS 65000
RT-Import 192.168.2.2:100, RD 1111:22222, Source-AS 65000
RT-Import 192.168.3.3:100, RD 1111:22222, Source-AS 65000
Customer-Multicast Routes:
10.1.1.1/32:198.51.100.3/24
10.1.1.1/32:198.51.100.3/24
Release Information
Additional details in output for extensive option introduced in Junos OS Release 15.1.
IN THIS SECTION
Syntax | 2395
Description | 2395
Options | 2395
Syntax
Description
Options
extensive | summary: (Optional) Display the specified level of output for all multicast VPN neighbors.
inet | inet6: (Optional) Display IPv4 or IPv6 information for all multicast VPN neighbors.
instance instance-name | neighbor-address address: (Optional) Display multicast VPN neighbor information for the specified instance or the specified neighbor.
logical-system logical-system-name: (Optional) Display multicast VPN neighbor information for the specified logical system.
view
Output Fields
Table 91 on page 2396 lists the output fields for the show mvpn neighbor command. Output fields are
listed in the approximate order in which they appear.
MVPN instance: Name of the multicast VPN routing instance. (extensive, none)
Neighbor: Address, type of provider tunnel (I-P-tnl, inclusive provider tunnel, or S-P-tnl, selective provider tunnel), and provider tunnel for each neighbor. (extensive, none)
Provider tunnel: Provider tunnel attributes: tunnel type, tunnel source, tunnel destination group. (extensive, none)
Sample Output
Sample Output
Sample Output
Sample Output
Sample Output
Sample Output
MVPN instance:
Sample Output
Sample Output
Instance: mvpn1
Neighbor count: 3
Release Information
IN THIS SECTION
Syntax | 2401
Description | 2401
Options | 2402
Syntax
Description
MVPN maintains a list of suppressed customer-multicast states and the reasons they were suppressed.
Display this list, for example, to help understand the enforcement of forwarding-cache limits.
Options
instance-name: (Optional) Display statistics for the specified routing instance, or press Enter without specifying an instance name to show output for all instances.
general | mvpn-rpt: (Optional) Display suppressed multicast prefixes and the reason they were suppressed.
view
Output Fields
Table 92 on page 2402 lists the output fields for the show mvpn suppressed command. Output fields
are listed in the approximate order in which they appear.
reason: MVPN (*,G) entries are deleted either because they exceed the general forwarding-cache limit or because they exceed the forwarding-cache limit set for MVPN RPT.
Sample Output
Sample Output
Release Information
show policy
IN THIS SECTION
Syntax | 2404
Description | 2404
Options | 2404
Syntax
show policy
<logical-system (all | logical-system-name)>
<policy-name>
<statistics>
show policy
<policy-name>
Description
Options
logical-system (all | logical-system-name): (Optional) Perform this operation on all logical systems or on a particular logical system.
policy-name: (Optional) Show the contents of the specified policy.
statistics: (Optional) Use in conjunction with the test policy command to show the length of time (in microseconds) required to evaluate a given policy and the number of times it has been executed. This information can be used, for example, to help structure a policy so that it is evaluated efficiently. Timers shown are per route; times are not cumulative. Statistics are incremented even when the router is learning (and thus evaluating) routes from peering routers.
view
Output Fields
Table 93 on page 2405 lists the output fields for the show policy command. Output fields are listed in
the approximate order in which they appear.
term: Name of the user-defined policy term. The term name unnamed is used for policy elements that occur outside of user-defined terms.
Sample Output
show policy
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2407
Description | 2407
Options | 2408
Syntax
Description
For bidirectional PIM, display the designated forwarder (DF) election results for each interface grouped
by the rendezvous point addresses (RPAs).
Options
inet | inet6: (Optional) Display DF election results for IPv4 or IPv6 family addresses, respectively.
instance instance-name: (Optional) Display DF election results for a specific routing instance.
logical-system (all | logical-system-name): (Optional) Perform this operation on all logical systems or on a particular logical system.
view
Output Fields
Table 94 on page 2408 describes the output fields for the show pim bidirectional df-election command.
Output fields are listed in the approximate order in which they appear.
Family: IPv4 address family (INET) or IPv6 address family (INET6). (All levels)
Group ranges: Address ranges of the multicast groups mapped to this RP address. (All levels)
Interfaces: Bidirectional PIM interfaces on this routing device. An interface can win the DF election (Win), lose the DF election (Lose), or be the RP link (RPL). The RP link is the interface directly connected to a subnet that contains a phantom RP address. A phantom RP address is an RP address that is not assigned to a routing device interface. (All levels; brief displays the DF election winner only.)
Sample Output
RPA: 10.10.1.3
Group ranges: 224.1.3.0/24, 225.1.3.0/24
Interfaces:
ge-0/0/1.0 (RPL) DF: none
lo0.0 (Win) DF: 10.255.179.246
xe-4/1/0.0 (Win) DF: 10.10.2.1
RPA: 10.10.13.2
Group ranges: 224.1.1.0/24, 225.1.1.0/24
Interfaces:
ge-0/0/1.0 (Lose) DF: 10.10.1.2
lo0.0 (Win) DF: 10.255.179.246
xe-4/1/0.0 (Lose) DF: 10.10.2.2
RPA: fec0::10:10:1:3
Group ranges: ff00::/8
Interfaces:
RPA: fec0::10:10:13:2
Group ranges: ff00::/8
Interfaces:
ge-0/0/1.0 (Lose) DF: fe80::b2c6:9aff:fe95:86fa
lo0.0 (Win) DF: fe80::2a0:a50f:fc64:e661
xe-4/1/0.0 (Win) DF: fe80::226:88ff:fec5:3c37
RPA: 10.10.1.3
Group ranges: 224.1.3.0/24, 225.1.3.0/24
Interfaces:
lo0.0 (Win) DF: 10.255.179.246
xe-4/1/0.0 (Win) DF: 10.10.2.1
RPA: 10.10.13.2
Group ranges: 224.1.1.0/24, 225.1.1.0/24
Interfaces:
lo0.0 (Win) DF: 10.255.179.246
RPA: fec0::10:10:1:3
Group ranges: ff00::/8
Interfaces:
lo0.0 (Win) DF: fe80::2a0:a50f:fc64:e661
xe-4/1/0.0 (Win) DF: fe80::226:88ff:fec5:3c37
RPA: fec0::10:10:13:2
Group ranges: ff00::/8
Interfaces:
lo0.0 (Win) DF: fe80::2a0:a50f:fc64:e661
xe-4/1/0.0 (Win) DF: fe80::226:88ff:fec5:3c37
Release Information
IN THIS SECTION
Syntax | 2411
Description | 2411
Options | 2411
Syntax
Description
For bidirectional PIM, display the default and the configured designated forwarder (DF) election
parameters for each interface.
Options
inet | inet6  (Optional) Display DF election parameters for IPv4 or IPv6 family addresses, respectively.
logical-system (all | logical-system-name)  (Optional) Perform this operation on all logical systems or on a particular logical system.
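Combining these options, a representative invocation that displays the IPv6 DF election parameters is the following (the user@host prompt is a placeholder):

```
user@host> show pim bidirectional df-election interface inet6
```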
Required Privilege Level
view
Output Fields
Table 95 on page 2412 describes the output fields for the show pim bidirectional df-election interface
command. Output fields are listed in the approximate order in which they appear.
Robustness Count  Minimum number of DF election messages that must fail to be received for DF election to fail.
Backoff Period  Period that the acting DF waits between receiving a better DF Offer and sending the Pass message to transfer DF responsibility.
Table 95: show pim bidirectional df-election interface Output Fields (Continued)
RPA  RP address.
State  For each RP address, state of each interface with respect to the DF election: Offer (when the election is in progress), Win, or Lose.
Sample Output
Interface: ge-0/0/1.0
Robustness Count: 3
Offer Period: 100 ms
Backoff Period: 1000 ms
RPA State DF
10.10.1.3 Offer none
10.10.13.2 Lose 10.10.1.2
Interface: lo0.0
Robustness Count: 3
Offer Period: 100 ms
Backoff Period: 1000 ms
RPA State DF
10.10.1.3 Win 10.255.179.246
10.10.13.2 Win 10.255.179.246
Interface: xe-4/1/0.0
Robustness Count: 3
RPA State DF
10.10.1.3 Win 10.10.2.1
10.10.13.2 Lose 10.10.2.2
Interface: ge-0/0/1.0
Robustness Count: 3
Offer Period: 100 ms
Backoff Period: 1000 ms
RPA State DF
fec0::10:10:1:3 Lose fe80::b2c6:9aff:fe95:86fa
fec0::10:10:13:2 Lose fe80::b2c6:9aff:fe95:86fa
Interface: lo0.0
Robustness Count: 3
Offer Period: 100 ms
Backoff Period: 1000 ms
RPA State DF
fec0::10:10:1:3 Win fe80::2a0:a50f:fc64:e661
fec0::10:10:13:2 Win fe80::2a0:a50f:fc64:e661
Interface: xe-4/1/0.0
Robustness Count: 3
Offer Period: 100 ms
Backoff Period: 1000 ms
RPA State DF
fec0::10:10:1:3 Win fe80::226:88ff:fec5:3c37
fec0::10:10:13:2 Win fe80::226:88ff:fec5:3c37
Release Information
IN THIS SECTION
Syntax | 2415
Description | 2415
Options | 2415
Syntax
Description
For sparse mode only, display information about Protocol Independent Multicast (PIM) bootstrap
routers.
Options
none  Display PIM bootstrap router information for all routing instances.
instance instance-name  (Optional) Display information about bootstrap routers for a specific PIM-enabled routing instance.
logical-system (all | logical-system-name)  (Optional) Perform this operation on all logical systems or on a particular logical system.
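For example, to display bootstrap router information for a single routing instance (the user@host prompt and the routing-instance name VPN-A are placeholders):

```
user@host> show pim bootstrap instance VPN-A
```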
Required Privilege Level
view
Output Fields
Table 96 on page 2416 describes the output fields for the show pim bootstrap command. Output fields
are listed in the approximate order in which they appear.
Timeout  How long until the local routing device declares the bootstrap router to be unreachable, in seconds.
Sample Output
Release Information
IN THIS SECTION
Syntax | 2418
Description | 2418
Options | 2418
Syntax
Description
Display information about the interfaces on which Protocol Independent Multicast (PIM) is configured.
Options
none  Display interface information for all family addresses for the main instance.
inet | inet6  (Optional) Display interface information for IPv4 or IPv6 family addresses, respectively.
instance (instance-name | all)  (Optional) Display information about interfaces for a specific PIM-enabled routing instance or for all routing instances.
logical-system (all | logical-system-name)  (Optional) Perform this operation on all logical systems or on a particular logical system.
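For example, to display IPv4 PIM interface information across all routing instances (the user@host prompt is a placeholder):

```
user@host> show pim interfaces inet instance all
```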
Required Privilege Level
view
Output Fields
Table 97 on page 2419 describes the output fields for the show pim interfaces command. Output fields
are listed in the approximate order in which they appear.
State  State of the interface. The state is also displayed in the show interfaces command.
• B—In bidirectional mode, multicast groups are carried across the network over
bidirectional shared trees. This type of tree minimizes PIM routing state, which
is especially important in networks with numerous and dispersed senders and
receivers.
• S—In sparse mode, routing devices must join and leave multicast groups
explicitly. Upstream routing devices do not forward multicast traffic to this
routing device unless this device has sent an explicit request (using a join
message) to receive multicast traffic.
• DR—Designated router.
• P2P—Point-to-point.
JoinCnt(sg) Number of (s,g) join messages that have been seen on the interface.
JoinCnt(*g) Number of (*,g) join messages that have been seen on the interface.
Sample Output
Release Information
inet6 and instance options introduced in Junos OS Release 10.0 for EX Series switches.
Support for the instance all option added in Junos OS Release 12.1.
IN THIS SECTION
Syntax | 2423
Description | 2423
Options | 2424
Syntax
Description
Display information about Protocol Independent Multicast (PIM) groups for all PIM modes.
For bidirectional PIM, display information about PIM group ranges (*,G-range) for each active
bidirectional RP group range, in addition to each of the joined (*,G) routes.
Options
none  Display the standard information about PIM groups for all supported family addresses for all routing instances.
exact  (Optional) Display information about only the group that exactly matches the specified group address.
inet | inet6  (Optional) Display PIM group information for IPv4 or IPv6 family addresses, respectively.
instance instance-name  (Optional) Display information about groups for the specified PIM-enabled routing instance only.
logical-system (all | logical-system-name)  (Optional) Perform this operation on all logical systems or on a particular logical system.
rp ip-address/prefix | source ip-address/prefix  (Optional) Display information about the PIM entries with a specified rendezvous point (RP) address and prefix or with a specified source address and prefix. You can omit the prefix.
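For example, to display extensive join state for one routing instance, a command such as the following could be used (the user@host prompt and the routing-instance name VPN-A are placeholders; extensive is one of the output levels referenced in the output-field table):

```
user@host> show pim join extensive instance VPN-A
```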
Required Privilege Level
view
Output Fields
Table 98 on page 2425 describes the output fields for the show pim join command. Output fields are
listed in the approximate order in which they appear.
Instance Name of the routing instance. brief detail extensive summary none
Family Name of the address family: inet (IPv4) or inet6 (IPv6). brief detail extensive summary none
Route count Number of (S,G) routes and number of (*,G) routes. summary
Bidirectional group prefix length  For bidirectional PIM, length of the IP prefix for RP group ranges.  All levels
• * (wildcard value)
• ipv4-address
• ipv6-address
RP Rendezvous point for the PIM group. brief detail extensive none
Upstream interface  RPF interface toward the source address for the source-specific state (S,G) or toward the rendezvous point (RP) address for the non-source-specific state (*,G).  brief detail extensive none
Upstream rpf-vector  Information about the upstream Reverse Path Forwarding (RPF) vector; appears in conjunction with the rpf-vector command.  extensive
Active upstream neighbor  On the MoFRR primary path, the IP address of the neighbor that is directly connected to the active upstream interface.  extensive
MoFRR Backup upstream interface  The MoFRR upstream interface that is used when the primary path fails. When the primary path fails, the backup path is upgraded to primary, and traffic is forwarded accordingly. If there are alternate paths available, a new backup path is calculated and the appropriate multicast route is updated or installed.  extensive
• Time since last Join—Time since the last join message was received from the downstream interface.
Number of downstream interfaces  Total number of outgoing interfaces for each (S,G) entry.  extensive
Assert Timeout  Length of time between assert cycles on the downstream interface. Not displayed if the assert timer is null.  extensive
Keepalive timeout  Time remaining until the downstream join state is updated (in seconds). If the downstream join state is not updated before this keepalive timer reaches zero, the entry is deleted. If there is a directly connected host, Keepalive timeout is Infinity.  extensive
Uptime  Time since the creation of (S,G) or (*,G) state. The uptime is not refreshed every time a PIM join message is received for an existing (S,G) or (*,G) state.  extensive
Sample Output
Group: 233.252.0.1
Source: *
RP: 10.255.14.144
Flags: sparse,rptree,wildcard
Upstream interface: Local
Group: 233.252.0.1
Source: 10.255.14.144
Flags: sparse,spt
Upstream interface: Local
Group: 233.252.0.1
Source: 10.255.70.15
Flags: sparse,spt
Upstream interface: so-1/0/0.0
Group: 233.252.0.1
Bidirectional group prefix length: 24
Source: *
RP: 10.10.13.2
Flags: bidirectional,rptree,wildcard
Upstream interface: ge-0/0/1.0
Group: 233.252.0.2
Bidirectional group prefix length: 24
Source: *
RP: 10.10.1.3
Flags: bidirectional,rptree,wildcard
Upstream interface: ge-0/0/1.0 (RP Link)
Group: 233.252.0.3
Bidirectional group prefix length: 24
Source: *
RP: 10.10.13.2
Flags: bidirectional,rptree,wildcard
Upstream interface: ge-0/0/1.0
Group: 233.252.0.4
Bidirectional group prefix length: 24
Source: *
RP: 10.10.1.3
Flags: bidirectional,rptree,wildcard
Upstream interface: ge-0/0/1.0 (RP Link)
Group: 2001:db8::e000:101
Source: *
RP: ::46.0.0.13
Flags: sparse,rptree,wildcard
Upstream interface: Local
Group: 2001:db8::e000:101
Source: ::1.1.1.1
Flags: sparse
Upstream interface: unknown (no neighbor)
Group: 2001:db8::e800:101
Source: ::1.1.1.1
Flags: sparse
Upstream interface: unknown (no neighbor)
Group: 2001:db8::e800:101
Source: ::1.1.1.2
Flags: sparse
Upstream interface: unknown (no neighbor)
Group: 2001:db8::e000:101
Source: *
RP: ::46.0.0.13
Flags: sparse,rptree,wildcard
Upstream interface: Local
Group: 233.252.0.2
Source: *
RP: 10.10.47.100
Flags: sparse,rptree,wildcard
Upstream interface: Local
Group: 233.252.0.2
Source: 192.168.195.74
Flags: sparse,spt
Upstream interface: at-0/3/1.0
Group: 233.252.0.2
Source: 192.168.195.169
Flags: sparse
Upstream interface: so-1/0/1.0
Group: 233.252.0.1
Source: *
RP: 10.11.11.6
Flags: sparse,rptree,wildcard
Upstream interface: mt-1/2/10.32813
Number of downstream interfaces: 4
Group: 233.252.0.1
Source: 10.1.1.1
Flags: sparse,spt
Upstream interface: ge-0/0/3.5
Number of downstream interfaces: 5
Group: 233.252.0.1
Source: *
RP: 10.11.11.6
Flags: sparse,rptree,wildcard
Upstream interface: mt-1/2/10.32813
Upstream neighbor: 10.2.2.7 (assert winner)
Upstream state: Join to RP
Uptime: 02:51:41
Group: 233.252.0.1
Source: 10.1.1.1
Flags: sparse,spt
Upstream interface: ge-0/0/3.5
Upstream neighbor: 10.1.1.17
Upstream state: Join to Source, Prune to RP
Keepalive timeout: 0
Uptime: 02:51:42
Number of downstream interfaces: 5
Number of downstream neighbors: 7
Group: 233.252.0.1
Source: *
RP: 10.255.14.144
Flags: sparse,rptree,wildcard
Upstream interface: Local
Group: 233.252.0.1
Source: 10.255.14.144
Flags: sparse,spt
Upstream interface: Local
Group: 233.252.0.1
Source: 10.255.70.15
Flags: sparse,spt
Upstream interface: so-1/0/0.0
show pim join extensive (PIM Resolve TLV for Multicast in Seamless MPLS)
Group: 233.252.0.1
Source: *
RP: 10.255.14.144
Flags: sparse,rptree,wildcard
Upstream interface: Local
Upstream neighbor: Local
Upstream state: Local RP
Uptime: 00:03:49
Downstream neighbors:
Interface: so-1/0/0.0
Group: 233.252.0.1
Source: 10.255.14.144
Flags: sparse,spt
Upstream interface: Local
Upstream neighbor: Local
Upstream state: Local Source, Local RP
Keepalive timeout: 344
Uptime: 00:03:49
Downstream neighbors:
Interface: so-1/0/0.0
10.111.10.2 State: Join Flags: S Timeout: 174
Uptime: 00:03:49 Time since last Prune: 00:01:49
Interface: mt-1/1/0.32768
10.10.47.100 State: Join Flags: S Timeout: Infinity
Uptime: 00:03:49 Time since last Prune: 00:01:49
Number of downstream interfaces: 2
Group: 233.252.0.1
Source: 10.255.70.15
Flags: sparse,spt
Upstream interface: so-1/0/0.0
Upstream neighbor: 10.111.10.2
Upstream state: Local RP, Join to Source
Keepalive timeout: 344
Uptime: 00:03:49
Downstream neighbors:
Interface: Pseudo-GMP
fe-0/0/0.0 fe-0/0/1.0 fe-0/0/3.0
Interface: so-1/0/0.0 (pruned)
10.111.10.2 State: Prune Flags: SR Timeout: 174
Uptime: 00:03:49 Time since last Prune: 00:01:49
Interface: mt-1/1/0.32768
10.10.47.100 State: Join Flags: S Timeout: Infinity
Uptime: 00:03:49 Time since last Prune: 00:01:49
Number of downstream interfaces: 3
Group: 233.252.0.0
Bidirectional group prefix length: 24
Source: *
RP: 10.10.13.2
Flags: bidirectional,rptree,wildcard
Upstream interface: ge-0/0/1.0
Upstream neighbor: 10.10.1.2
Upstream state: None
Uptime: 00:03:49
Bidirectional accepting interfaces:
Interface: ge-0/0/1.0 (RPF)
Interface: lo0.0 (DF Winner)
Number of downstream interfaces: 0
Group: 233.252.0.1
Bidirectional group prefix length: 24
Source: *
RP: 10.10.13.2
Flags: bidirectional,rptree,wildcard
Upstream interface: ge-0/0/1.0
Upstream neighbor: 10.10.1.2
Upstream state: None
Uptime: 00:03:49
Bidirectional accepting interfaces:
Interface: ge-0/0/1.0 (RPF)
Interface: lo0.0 (DF Winner)
Downstream neighbors:
Interface: lt-1/0/10.24
10.0.24.4 State: Join RW Timeout: 185
Interface: lt-1/0/10.23
10.0.23.3 State: Join RW Timeout: 184
Number of downstream interfaces: 2
Group: 233.252.0.2
Bidirectional group prefix length: 24
Source: *
RP: 10.10.1.3
Flags: bidirectional,rptree,wildcard
Upstream interface: ge-0/0/1.0 (RP Link)
Upstream neighbor: Direct
Upstream state: Local RP
Uptime: 00:03:49
Bidirectional accepting interfaces:
Interface: ge-0/0/1.0 (RPF)
Interface: lo0.0 (DF Winner)
Interface: xe-4/1/0.0 (DF Winner)
Number of downstream interfaces: 0
show pim join extensive (Bidirectional PIM with a Directly Connected Phantom RP)
Group: 233.252.0.0
Bidirectional group prefix length: 24
Source: *
RP: 10.10.1.3
Flags: bidirectional,rptree,wildcard
Upstream interface: ge-0/0/1.0 (RP Link)
Upstream neighbor: Direct
Upstream state: Local RP
Uptime: 00:03:49
Bidirectional accepting interfaces:
Interface: ge-0/0/1.0 (RPF)
Interface: lo0.0 (DF Winner)
Interface: xe-4/1/0.0 (DF Winner)
Number of downstream interfaces: 0
Group: 233.252.0.2
Source: *
RP: 10.10.47.100
Flags: sparse,rptree,wildcard
Upstream interface: Local
Upstream neighbor: Local
Upstream state: Local RP
Uptime: 00:03:49
Downstream neighbors:
Interface: mt-1/1/0.32768
10.10.47.101 State: Join Flags: SRW Timeout: 156
Uptime: 00:03:49 Time since last Join: 00:01:49
Number of downstream interfaces: 1
Group: 233.252.0.2
Source: 192.168.195.74
Flags: sparse,spt
Upstream interface: at-0/3/1.0
Upstream neighbor: 10.111.30.2
Upstream state: Local RP, Join to Source
Keepalive timeout: 156
Uptime: 00:14:52
Group: 233.252.0.2
Source: 192.168.195.169
Flags: sparse
Upstream interface: so-1/0/1.0
Upstream neighbor: 10.111.20.2
Upstream state: Local RP, Join to Source
Keepalive timeout: 156
Uptime: 00:14:52
show pim join extensive (Ingress Node with Multipoint LDP Inband Signaling for Point-to-
Multipoint LSPs)
Group: 233.252.0.1
Source: 192.168.219.11
Flags: sparse,spt
Upstream interface: fe-1/3/1.0
Upstream neighbor: Direct
Upstream state: Local Source
Keepalive timeout:
Uptime: 11:27:55
Downstream neighbors:
Interface: Pseudo-MLDP
Interface: lt-1/2/0.25
10.2.5.2 State: Join Flags: S Timeout: Infinity
Uptime: 11:27:55 Time since last Join: 11:27:55
Group: 233.252.0.2
Source: 192.168.219.11
Flags: sparse,spt
Upstream interface: fe-1/3/1.0
Upstream neighbor: Direct
Upstream state: Local Source
Keepalive timeout:
Uptime: 11:27:41
Downstream neighbors:
Interface: Pseudo-MLDP
Group: 233.252.0.3
Source: 192.168.219.11
Flags: sparse,spt
Upstream interface: fe-1/3/1.0
Upstream neighbor: Direct
Upstream state: Local Source
Keepalive timeout:
Uptime: 11:27:41
Downstream neighbors:
Interface: Pseudo-MLDP
Group: 233.252.0.22
Source: 10.2.7.7
Flags: sparse,spt
Upstream interface: lt-1/2/0.27
Upstream neighbor: Direct
Upstream state: Local Source
Keepalive timeout:
Uptime: 11:27:25
Downstream neighbors:
Interface: Pseudo-MLDP
Group: 2001:db8::1:2
Source: 2001:db8::1:2:7:7
Flags: sparse,spt
Upstream interface: lt-1/2/0.27
Upstream neighbor: Direct
Upstream state: Local Source
Keepalive timeout:
Uptime: 11:27:26
Downstream neighbors:
Interface: Pseudo-MLDP
show pim join extensive (Egress Node with Multipoint LDP Inband Signaling for Point-to-
Multipoint LSPs)
Group: 233.252.0.0
Source: *
RP: 10.1.1.1
Flags: sparse,rptree,wildcard
Upstream interface: Local
Upstream neighbor: Local
Upstream state: Local RP
Uptime: 11:31:33
Downstream neighbors:
Interface: fe-1/3/0.0
192.168.209.9 State: Join Flags: SRW Timeout: Infinity
Uptime: 11:31:33 Time since last Join: 11:31:32
Group: 233.252.0.1
Source: 192.168.219.11
Flags: sparse,spt
Upstream protocol: MLDP
Upstream interface: Pseudo MLDP
Upstream neighbor: MLDP LSP root <10.1.1.2>
Upstream state: Join to Source
Keepalive timeout:
Uptime: 11:31:32
Downstream neighbors:
Interface: so-0/1/3.0
192.168.92.9 State: Join Flags: S Timeout: Infinity
Uptime: 11:31:30 Time since last Join: 11:31:30
Downstream neighbors:
Interface: fe-1/3/0.0
192.168.209.9 State: Join Flags: S Timeout: Infinity
Uptime: 11:31:32 Time since last Join: 11:31:32
Group: 233.252.0.2
Source: 192.168.219.11
Flags: sparse,spt
Upstream protocol: MLDP
Upstream interface: Pseudo MLDP
Upstream neighbor: MLDP LSP root <10.1.1.2>
Upstream state: Join to Source
Keepalive timeout:
Uptime: 11:31:32
Downstream neighbors:
Interface: so-0/1/3.0
192.168.92.9 State: Join Flags: S Timeout: Infinity
Uptime: 11:31:30 Time since last Join: 11:31:30
Downstream neighbors:
Interface: lt-1/2/0.14
10.1.4.4 State: Join Flags: S Timeout: 177
Uptime: 11:30:33 Time since last Join: 00:00:33
Downstream neighbors:
Interface: fe-1/3/0.0
192.168.209.9 State: Join Flags: S Timeout: Infinity
Group: 233.252.0.3
Source: 192.168.219.11
Flags: sparse,spt
Upstream protocol: MLDP
Upstream interface: Pseudo MLDP
Upstream neighbor: MLDP LSP root <10.1.1.2>
Upstream state: Join to Source
Keepalive timeout:
Uptime: 11:31:32
Downstream neighbors:
Interface: fe-1/3/0.0
192.168.209.9 State: Join Flags: S Timeout: Infinity
Uptime: 11:31:32 Time since last Join: 11:31:32
Group: 233.252.0.22
Source: 10.2.7.7
Flags: sparse,spt
Upstream protocol: MLDP
Upstream interface: Pseudo MLDP
Upstream neighbor: MLDP LSP root <10.1.1.2>
Upstream state: Join to Source
Keepalive timeout:
Uptime: 11:31:30
Downstream neighbors:
Interface: so-0/1/3.0
192.168.92.9 State: Join Flags: S Timeout: Infinity
Uptime: 11:31:30 Time since last Join: 11:31:30
Group: 2001:db8::1:2
Source: 2001:db8::1:2:7:7
Flags: sparse,spt
Upstream protocol: MLDP
Upstream interface: Pseudo MLDP
Upstream neighbor: MLDP LSP root <10.1.1.2>
Upstream state: Join to Source
Keepalive timeout:
Uptime: 11:31:32
Downstream neighbors:
Interface: fe-1/3/0.0
2001:db8::21f:12ff:fea5:c4db State: Join Flags: S Timeout: Infinity
Uptime: 11:31:32 Time since last Join: 11:31:32
Release Information
inet6 and instance options introduced in Junos OS Release 10.0 for EX Series switches.
Support for PIM NSR with VXLAN added in Junos OS Release 16.2.
Support for RFC 5496 (via rpf-vector) added in Junos OS Release 17.3R1.
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2446
Description | 2446
Options | 2446
Syntax
Description
Options
none  (Same as brief) Display standard information about PIM neighbors for all supported family addresses for the main instance.
inet | inet6  (Optional) Display information about PIM neighbors for IPv4 or IPv6 family addresses, respectively.
instance (instance-name | all)  (Optional) Display information about neighbors for the specified PIM-enabled routing instance or for all routing instances.
logical-system (all | logical-system-name)  (Optional) Perform this operation on all logical systems or on a particular logical system.
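For example, to display the detailed hello option information for each neighbor (the user@host prompt is a placeholder; detail is one of the output levels referenced in the output-field table):

```
user@host> show pim neighbors detail
```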
Required Privilege Level
view
Output Fields
Table 99 on page 2447 describes the output fields for the show pim neighbors command. Output fields
are listed in the approximate order in which they appear.
Neighbor addr  Address of the neighboring PIM routing device.  All levels
Mode  PIM mode of the neighbor: Sparse, Dense, SparseDense, or Unknown. When the neighbor is running PIM version 2, this mode is always Unknown.  All levels
• B—Bidirectional Capable.
• G—Generation Identifier.
• T—Tracking bit.
Uptime  Time the neighbor has been operational since the PIM process was last initialized. Starting in Junos OS Release 17.3R1, uptime is not reset during ISSU. The time format is as follows: dd:hh:mm:ss ago for less than a week and nwnd:hh:mm:ss ago for more than a week.  All levels
Hello Option Holdtime  Time for which the neighbor is available, in seconds. The range of values is 0 through 65,535.  detail
Hello Default Holdtime  Default holdtime and the time remaining if the holdtime option is not in the received hello message.  detail
Hello Option DR Priority  Designated router election priority. The range of values is 0 through 255.  detail
Hello Option Join Attribute  Appears in conjunction with the rpf-vector command. The Join attribute is included in the PIM join messages of PIM routers that can receive type 1 Encoded-Source Address.  detail
Hello Option Generation ID  9-digit or 10-digit number used to tag hello messages.  detail
Hello Option LAN Prune Delay  Time to wait before the neighbor receives prune messages, in the format delay nnn ms override nnnn ms.  detail
Sample Output
Instance: PIM.master
Interface IP V Mode Option Uptime Neighbor addr
ae0.0 4 2 HPLGTA 19:01:24 20.0.0.13
ae1.0 4 2 HPLGTA 19:01:24 20.0.0.149
Address: 20.0.0.149, IPv4, PIM v2, sg Join Count: 0, tsg Join Count: 332
BFD: Disabled
Hello Option Holdtime: 105 seconds 86 remaining
Hello Option DR Priority: 1
Hello Option Generation ID: 853386212
Hello Option LAN Prune Delay: delay 500 ms override 2000 ms
Join Suppression supported
Hello Option Join Attribute supported
Interface: lo0.0
Interface: fe-1/0/1.0
Address: 192.168.12.1, IPv4, PIM v2
BFD: Disabled
Hello Default Holdtime: 105 seconds 80 remaining
Release Information
inet6 and instance options introduced in Junos OS Release 10.0 for EX Series switches.
Support for the instance all option added in Junos OS Release 12.1.
Support for RFC 5496 (via rpf-vector) added in Junos OS Release 17.3R1.
Release Description
17.3R1 Starting in Junos OS release 17.3R1, uptime is not reset during ISSU.
IN THIS SECTION
Syntax | 2453
Description | 2453
Options | 2453
Syntax
Description
Options
instance <instance-name>  (Optional) Display PIM snooping interface information for the specified routing instance.
interface <interface-name>  (Optional) Display PIM snooping information for the specified interface only.
vlan-id <vlan-identifier>  (Optional) Display PIM snooping interface information for the specified VLAN.
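For example, to display PIM snooping interface information for one VLAN (the user@host prompt is a placeholder; vlan-id 10 matches a learning domain shown in the sample output):

```
user@host> show pim snooping interface vlan-id 10
```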
Required Privilege Level
view
Output Fields
Table 100 on page 2454 lists the output fields for the show pim snooping interface command. Output
fields are listed in the approximate order in which they appear.
Name  Router interfaces that are part of this learning domain.  All levels
NbrCnt  Number of neighboring routers connected through the specified interface.  All levels
Sample Output
Learning-Domain: vlan-id 20
Learning-Domain: vlan-id 10
Name State IP-Version NbrCnt
ge-1/3/1.10 Up 4 1
ge-1/3/3.10 Up 4 1
ge-1/3/5.10 Up 4 1
ge-1/3/7.10 Up 4 1
DR address: 192.0.2.5
DR flooding is ON
Learning-Domain: vlan-id 20
Name State IP-Version NbrCnt
ge-1/3/1.20 Up 4 1
ge-1/3/3.20 Up 4 1
ge-1/3/5.20 Up 4 1
ge-1/3/7.20 Up 4 1
DR address: 192.0.2.6
DR flooding is ON
DR address: 192.0.2.5
DR flooding is ON
Learning-Domain: vlan-id 20
DR address: 192.0.2.6
DR flooding is ON
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2457
Description | 2457
Options | 2457
Syntax
Description
Options
instance instance-name  (Optional) Display PIM snooping join information for the specified routing instance.
vlan-id vlan-identifier  (Optional) Display PIM snooping join information for the specified VLAN.
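For example, to display extensive PIM snooping join information for one VLAN (the user@host prompt is a placeholder; vlan-id 10 matches a learning domain shown in the sample output, and extensive is the output level referenced in the output-field table):

```
user@host> show pim snooping join extensive vlan-id 10
```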
Required Privilege Level
view
Output Fields
Table 101 on page 2458 lists the output fields for the show pim snooping join command. Output fields
are listed in the approximate order in which they appear.
• * (wildcard value)
• <ipv4-address>
• <ipv6-address>
NOTE: RP group range entries have None in the Upstream state field because RP group ranges do not trigger actual PIM join messages between routers.
Upstream neighbor  Information about the upstream neighbor: Direct, Local, Unknown, or a specific IP address.  All levels
Upstream port  RPF interface toward the source address for the source-specific state (S,G) or toward the rendezvous point (RP) address for the non-source-specific state (*,G).  All levels
Timeout  Time remaining until the downstream join state is updated (in seconds).  extensive
Sample Output
Learning-Domain: vlan-id 10
Group: 198.51.100.2
Source: *
Flags: sparse,rptree,wildcard
Upstream state: None
Upstream neighbor: 192.0.2.4, port: ge-1/3/5.10
Learning-Domain: vlan-id 20
Group: 198.51.100.3
Source: *
Flags: sparse,rptree,wildcard
Upstream state: None
Upstream neighbor: 203.0.113.4, port: ge-1/3/5.20
Group: 198.51.100.2
Source: *
Flags: sparse,rptree,wildcard
Upstream state: None
Upstream neighbor: 192.0.2.4, port: ge-1/3/5.10
Downstream port: ge-1/3/1.10
Downstream neighbors:
192.0.2.2 State: Join Flags: SRW Timeout: 166
Learning-Domain: vlan-id 20
Group: 198.51.100.3
Source: *
Flags: sparse,rptree,wildcard
Upstream state: None
Upstream neighbor: 203.0.113.4, port: ge-1/3/5.20
Downstream port: ge-1/3/3.20
Downstream neighbors:
203.0.113.3 State: Join Flags: SRW Timeout: 168
Learning-Domain: vlan-id 10
Group: 198.51.100.2
Source: *
Flags: sparse,rptree,wildcard
Upstream state: None
Upstream neighbor: 192.0.2.4, port: ge-1/3/5.10
Learning-Domain: vlan-id 20
Group: 198.51.100.3
Source: *
Flags: sparse,rptree,wildcard
Upstream state: None
Upstream neighbor: 203.0.113.4, port: ge-1/3/5.20
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2463
Description | 2463
Options | 2463
Syntax
Description
Options
instance instance-name  (Optional) Display PIM snooping neighbor information for the specified routing instance.
interface interface-name  (Optional) Display information for the specified PIM snooping neighbor interface.
vlan-id vlan-identifier  (Optional) Display PIM snooping neighbor information for the specified VLAN.
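For example, to display detailed neighbor information for a single interface (the user@host prompt is a placeholder; the interface name matches one shown in the sample output, and detail is the output level referenced in the output-field table):

```
user@host> show pim snooping neighbors detail interface ge-1/3/1.10
```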
Required Privilege Level
view
Output Fields
Table 102 on page 2464 lists the output fields for the show pim snooping neighbors command. Output
fields are listed in the approximate order in which they appear.
Interface  Router interface for which PIM snooping neighbor details are displayed.  All levels
Option  PIM snooping options available on the specified interface:  All levels
• G = Generation Identifier
• T = Tracking Bit
Uptime  Time the neighbor has been operational since the PIM process was last initialized, in the format dd:hh:mm:ss ago for less than a week and nwnd:hh:mm:ss ago for more than a week.  All levels
Neighbor addr  IP address of the PIM snooping neighbor connected through the specified interface.  All levels
Hello Option Holdtime  Time for which the neighbor is available, in seconds. The range of values is 0 through 65,535.  detail
Hello Option DR Priority  Designated router election priority. The range of values is 0 through 4294967295.  detail
Hello Option Generation ID  9-digit or 10-digit number used to tag hello messages.  detail
Hello Option LAN Prune Delay  Time to wait before the neighbor receives prune messages, in the format delay nnn ms override nnnn ms.  detail
Sample Output
Instance: vpls1
Learning-Domain: vlan-id 10
Learning-Domain: vlan-id 20
Interface: ge-1/3/1.10
Address: 192.0.2.2
Uptime: 00:44:51
Hello Option Holdtime: 105 seconds 83 remaining
Hello Option DR Priority: 1
Hello Option Generation ID: 830908833
Hello Option LAN Prune Delay: delay 500 ms override 2000 ms
Tracking is supported
Interface: ge-1/3/3.10
Address: 192.0.2.3
Uptime: 00:44:51
Hello Option Holdtime: 105 seconds 97 remaining
Hello Option DR Priority: 1
Hello Option Generation ID: 2056520742
Hello Option LAN Prune Delay: delay 500 ms override 2000 ms
Tracking is supported
Interface: ge-1/3/5.10
Address: 192.0.2.4
Uptime: 00:44:51
Hello Option Holdtime: 105 seconds 81 remaining
Hello Option DR Priority: 1
Hello Option Generation ID: 1152066227
Hello Option LAN Prune Delay: delay 500 ms override 2000 ms
Tracking is supported
Interface: ge-1/3/7.10
Address: 192.0.2.5
Uptime: 00:44:51
Hello Option Holdtime: 105 seconds 96 remaining
Interface: ge-1/3/1.20
Address: 192.0.2.12
Uptime: 00:44:51
Hello Option Holdtime: 105 seconds 81 remaining
Hello Option DR Priority: 1
Hello Option Generation ID: 963205167
Hello Option LAN Prune Delay: delay 500 ms override 2000 ms
Tracking is supported
Interface: ge-1/3/3.20
Address: 192.0.2.13
Uptime: 00:44:51
Hello Option Holdtime: 105 seconds 104 remaining
Hello Option DR Priority: 1
Hello Option Generation ID: 166921538
Hello Option LAN Prune Delay: delay 500 ms override 2000 ms
Tracking is supported
Interface: ge-1/3/5.20
Address: 192.0.2.14
Uptime: 00:44:51
Hello Option Holdtime: 105 seconds 88 remaining
Hello Option DR Priority: 1
Hello Option Generation ID: 789422835
Hello Option LAN Prune Delay: delay 500 ms override 2000 ms
Tracking is supported
Interface: ge-1/3/7.20
Address: 192.0.2.15
Uptime: 00:44:51
Hello Option Holdtime: 105 seconds 88 remaining
Hello Option DR Priority: 1
Hello Option Generation ID: 1563649680
Hello Option LAN Prune Delay: delay 500 ms override 2000 ms
Tracking is supported
Instance: vpls1
Learning-Domain: vlan-id 10
Learning-Domain: vlan-id 20
Instance: vpls1
Learning-Domain: vlan-id 10
Learning-Domain: vlan-id 20
Instance: vpls1
Learning-Domain: vlan-id 10
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2470
Description | 2470
Options | 2470
Syntax
Description
Options
instance instance-name  (Optional) Display statistics for a specific routing instance enabled for Protocol Independent Multicast (PIM) snooping.
interface interface-name  (Optional) Display statistics about the specified interface for PIM snooping.
vlan-id vlan-identifier  (Optional) Display PIM snooping statistics information for the specified VLAN.
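For example, to display PIM snooping statistics for one VLAN (the user@host prompt is a placeholder; vlan-id 20 matches a learning domain shown in the sample output):

```
user@host> show pim snooping statistics vlan-id 20
```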
Required Privilege Level
view
Output Fields
Table 103 on page 2471 lists the output fields for the show pim snooping statistics command. Output
fields are listed in the approximate order in which they appear.
Rx J/P messages -- seen  Number of join/prune packets seen but not received on the upstream interface.  All levels
Rx J/P messages -- received  Number of join/prune packets received on the downstream interface.  All levels
Rx Version Unknown  Number of packets received with an unknown version number.  All levels
Rx Upstream Neighbor Unknown  Number of packets received with unknown upstream neighbor information.  All levels
Rx Bad Length  Number of packets received containing incorrect length information.  All levels
Rx J/P Busy Drop  Number of join/prune packets dropped while the router is busy.  All levels
Rx J/P Group Aggregate  Number of join/prune packets received containing the aggregate group information.  All levels
Rx No PIM Interface  Number of packets received without the interface information.  All levels
Rx Unknown Hello Option  Number of hello packets received with unknown options.  All levels
Sample Output
Tx J/P messages 0
RX J/P messages 8
Rx J/P messages -- seen 0
Rx J/P messages -- received 8
Rx Hello messages 37
Rx Version Unknown 0
Rx Neighbor Unknown 0
Rx Upstream Neighbor Unknown 0
Rx Bad Length 0
Rx J/P Busy Drop 0
Rx J/P Group Aggregate 0
Rx Malformed Packet 0
Rx No PIM Interface 0
Rx No Upstream Neighbor 0
Rx Bad Length 0
Rx Neighbor Unknown 0
Rx Unknown Hello Option 0
Rx Malformed Packet 0
Learning-Domain: vlan-id 20
Tx J/P messages 0
RX J/P messages 2
Rx J/P messages -- seen 0
Rx J/P messages -- received 2
Rx Hello messages 39
Rx Version Unknown 0
Rx Neighbor Unknown 0
Rx Upstream Neighbor Unknown 0
Rx Bad Length 0
Rx J/P Busy Drop 0
Rx J/P Group Aggregate 0
Rx Malformed Packet 0
Rx No PIM Interface 0
Rx No Upstream Neighbor 0
Rx Bad Length 0
Rx Neighbor Unknown 0
Rx Unknown Hello Option 0
Rx Malformed Packet 0
Tx J/P messages 0
RX J/P messages 9
Rx J/P messages -- seen 0
Rx J/P messages -- received 9
Rx Hello messages 45
Rx Version Unknown 0
Rx Neighbor Unknown 0
Rx Upstream Neighbor Unknown 0
Rx Bad Length 0
Rx J/P Busy Drop 0
Rx J/P Group Aggregate 0
Rx Malformed Packet 0
Rx No PIM Interface 0
Rx No Upstream Neighbor 0
Rx Bad Length 0
Rx Neighbor Unknown 0
Rx Unknown Hello Option 0
Rx Malformed Packet 0
Learning-Domain: vlan-id 20
Tx J/P messages 0
RX J/P messages 3
Rx J/P messages -- seen 0
Rx J/P messages -- received 3
Rx Hello messages 47
Rx Version Unknown 0
Rx Neighbor Unknown 0
Rx Upstream Neighbor Unknown 0
Rx Bad Length 0
Rx J/P Busy Drop 0
Rx J/P Group Aggregate 0
Rx Malformed Packet 0
Rx No PIM Interface 0
Rx No Upstream Neighbor 0
Rx Bad Length 0
Rx Neighbor Unknown 0
Rx Unknown Hello Option 0
Rx Malformed Packet 0
Tx J/P messages 0
RX J/P messages 11
Rx J/P messages -- seen 0
Rx J/P messages -- received 11
Rx Hello messages 64
Rx Version Unknown 0
Rx Neighbor Unknown 0
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2477
Description | 2477
Options | 2477
Syntax
Description
Display information about Protocol Independent Multicast (PIM) rendezvous points (RPs).
Options
none: Display standard information about PIM RPs for all groups and family addresses for all routing instances.
group-address: (Optional) Display the RPs for a particular group. If you specify a group address, the output lists the routing device that is the RP for that group.
inet | inet6: (Optional) Display information for IPv4 or IPv6 family addresses, respectively.
instance instance-name: (Optional) Display information about RPs for a specific PIM-enabled routing instance.
logical-system (all | logical-system-name): (Optional) Perform this operation on all logical systems or on a particular logical system.
view
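For example, to see which routing device is the RP for a single group, pass the group address (233.252.0.1 below is taken from the sample output later in this section; substitute your own group address):

```
user@host> show pim rps 233.252.0.1
```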
Output Fields
Table 104 on page 2478 describes the output fields for the show pim rps command. Output fields are
listed in the approximate order in which they appear.
Family or Address family: Name of the address family: inet (IPv4) or inet6 (IPv6). (All levels)
Holdtime: How long to keep the RP active, with time remaining, in seconds. (All levels)
Timeout: How long until the local routing device determines the RP to be unreachable, in seconds. (All levels)
Group prefixes: Addresses of groups that this RP can span. (brief, none)
Learned via: Address and method by which the RP was learned. (detail, extensive)
Mode: The PIM mode of the RP: bidirectional or sparse. (All levels)
Time Active: How long the RP has been active, in the format hh:mm:ss. (detail, extensive)
Device Index: Index value of the order in which Junos OS finds and initializes the interface. (detail, extensive)
Active groups using RP: Number of groups currently using this RP. (detail, extensive)
total: Total number of active groups for this RP. (detail, extensive)
Anycast-PIM rpset: If anycast RP is configured, the addresses of the RPs in the set. (extensive)
Anycast-PIM local address used: If anycast RP is configured, the local address used by the RP. (extensive)
Anycast-PIM Register State: If anycast RP is configured, the current register state for each group. (extensive)
RP selected: For sparse mode and bidirectional mode, the identity of the RP for the specified group address. (group-address)
Sample Output
Address-family INET
RP address Type Mode Holdtime Timeout Groups Group prefixes
10.100.100.100 auto-rp sparse 150 146 0 233.252.0.0/8
233.252.0.1/24
10.200.200.200 auto-rp sparse 150 146 0 233.252.0.2/4
address-family INET6
The output for the show pim rps brief command is identical to that for the show pim rps command. For
sample output, see "show pim rps" on page 2482.
RP selected: 10.100.100.100
RP selected: 10.100.100.100
233.252.0.0/16
10.4.12.75 (Bidirectional)
RP selected: 10.4.12.75
show pim rps <group-address> (SSM Range With asm-override-ssm Configured and a Sparse-Mode RP)
Source-specific Mode (SSM) active with Sparse Mode ASM override for group
233.252.0.1
233.252.0.0/16
10.4.12.75
RP selected: 10.4.12.75
show pim rps <group-address> (SSM Range With asm-override-ssm Configured and a Bidirectional RP)
Source-specific Mode (SSM) active with Sparse Mode ASM override for group
233.252.0.1
233.252.0.0/16
10.4.12.75 (Bidirectional)
RP selected: (null)
Family: INET
RP: 10.255.245.91
Learned via: static configuration
Time Active: 00:05:48
Holdtime: 45 with 36 remaining
Device Index: 122
Subunit: 32768
Interface: pd-6/0/0.32768
Group Ranges:
233.252.0.0/4, 36s remaining
Active groups using RP:
233.252.0.1
RP: 10.10.1.3
Learned via: static configuration
Mode: Bidirectional
Time Active: 01:58:07
Holdtime: 150
Group Ranges:
233.252.0.0/24
233.252.0.01/24
RP: 10.10.13.2
Learned via: static configuration
Mode: Bidirectional
Time Active: 01:58:07
Holdtime: 150
Group Ranges:
233.252.0.3/24
233.252.0.4/24
Family: INET
RP: 10.10.10.2
Learned via: static configuration
Time Active: 00:54:52
Holdtime: 0
Device Index: 130
Subunit: 32769
Interface: pimd.32769
Group Ranges:
233.252.0.0/4
Active groups using RP:
233.252.0.10
Anycast-PIM rpset:
10.100.111.34
10.100.111.17
10.100.111.55
Anycast-PIM rpset:
ab::1
ab::2
Anycast-PIM local address used: cd::1
Release Information
inet6 and instance options introduced in Junos OS Release 10.0 for EX Series switches.
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2488
Description | 2489
Options | 2489
Syntax
Description
Display information about the Protocol Independent Multicast (PIM) source reverse path forwarding
(RPF) state.
Options
none: Display standard information about the PIM RPF state for all supported family addresses for all routing instances.
inet | inet6: (Optional) Display information for IPv4 or IPv6 family addresses, respectively.
instance instance-name: (Optional) Display information about the RPF state for a specific PIM-enabled routing instance.
logical-system (all | logical-system-name): (Optional) Perform this operation on all logical systems or on a particular logical system.
source-prefix: (Optional) Display the state for source RPF states in the given range.
view
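For example, to display RPF state for IPv4 sources in a single routing instance (the instance name vpn-a is a placeholder):

```
user@host> show pim source inet instance vpn-a
```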
Output Fields
Table 105 on page 2489 describes the output fields for the show pim source command. Output fields
are listed in the approximate order in which they appear.
Prefix/length: Prefix and prefix length for the route used to reach the RPF address.
Upstream Neighbor: Address of the RPF neighbor used to reach the source address. The multipoint LDP (M-LDP) root appears on egress nodes in M-LDP point-to-multipoint LSPs with inband signaling.
Sample Output
Source 10.255.14.144
Prefix 10.255.14.144/32
Upstream interface Local
Upstream neighbor Local
Source 10.255.70.15
Prefix 10.255.70.15/32
Upstream interface so-1/0/0.0
Upstream neighbor 10.111.10.2
The output for the show pim source brief command is identical to that for the show pim source
command. For sample output, see "show pim source" on page 2490.
Source 10.255.14.144
Prefix 10.255.14.144/32
Upstream interface Local
Upstream neighbor Local
Active groups:233.252.0.0
233.252.0.1
233.252.0.1
Source 10.255.70.15
Prefix 10.255.70.15/32
Upstream interface so-1/0/0.0
Upstream neighbor 10.111.10.2
Active groups:233.252.0.1
show pim source (Egress Node with Multipoint LDP Inband Signaling for Point-to-Multipoint LSPs)
Source 10.1.1.1
Prefix 10.1.1.1/32
Upstream interface Local
Upstream neighbor Local
Source 10.2.7.7
Prefix 10.2.7.0/24
Upstream protocol MLDP
Source 192.168.219.11
Prefix 192.168.219.0/28
Upstream protocol MLDP
Upstream interface Pseudo MLDP
Upstream neighbor via MLDP-inband
Upstream interface fe-1/3/0.0
Upstream neighbor 192.168.140.1
Upstream neighbor MLDP LSP root <10.1.1.2>
Release Information
inet6 and instance options introduced in Junos OS Release 10.0 for EX Series switches.
IN THIS SECTION
Syntax | 2493
Description | 2493
Options | 2493
Syntax
Description
Options
instance instance-name: (Optional) Display statistics for a specific routing instance enabled by Protocol Independent Multicast (PIM).
logical-system (all | logical-system-name): (Optional) Perform this operation on all logical systems or on a particular logical system.
view
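For example, to display PIM statistics for a single interface rather than all interfaces (the interface name below is a placeholder):

```
user@host> show pim statistics interface ge-1/3/5.20
```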
Output Fields
Table 106 on page 2494 describes the output fields for the show pim statistics command. Output fields
are listed in the approximate order in which they appear.
Family: Output is for IPv4 or IPv6 PIM statistics. INET indicates IPv4 statistics, and INET6 indicates IPv6 statistics.
PIM statistics: PIM statistics for all interfaces or for the specified interface.
PIM message type: Message type for which statistics are displayed.
V2 State Refresh: PIM version 2 control messages related to PIM dense mode (PIM-DM) state refresh.
V2 DF Election: PIM version 2 send and receive messages associated with bidirectional PIM designated forwarder election.
Hello dropped on neighbor policy: Number of hello packets dropped because of a configured neighbor policy.
Unknown type: Number of PIM control packets received with an unknown type.
V1 Unknown type: Number of PIM version 1 control packets received with an unknown type.
Unknown Version: Number of PIM control packets received with an unknown version. The version is not version 1 or version 2.
Neighbor unknown: Number of PIM control packets received (excluding PIM hello) without first receiving the hello packet.
Bad Length: Number of PIM control packets received for which the packet size does not match the PIM length field in the packet.
Bad Checksum: Number of PIM control packets received for which the calculated checksum does not match the checksum field in the packet.
Bad Receive If: Number of PIM control packets received on an interface that does not have PIM configured.
Rx Bad Data: Number of PIM control packets received that contain data for TCP Bad register packets.
Rx Intf disabled: Number of PIM control packets received on an interface that has PIM disabled.
Rx Register not RP: Number of PIM register packets received when the routing device is not the RP for the group.
Rx Register no route: Number of PIM register packets received when the RP does not have a unicast route back to the source.
Rx Register no decap if: Number of PIM register packets received when the RP does not have a de-encapsulation interface.
RP Filtered Source: Number of PIM packets received when the routing device has a source address filter configured for the RP.
Rx Unknown Reg Stop: Number of register stop messages received with an unknown type.
Rx Join/Prune no state: Number of join and prune messages received for which the routing device has no state.
Rx Join/Prune on upstream if: Number of join and prune messages received on the interface used to reach the upstream routing device, toward the RP.
Rx Join/Prune for invalid group: Number of join or prune messages received for invalid multicast group addresses.
Rx Join/Prune messages dropped: Number of join and prune messages received and dropped.
Rx sparse join for dense group: Number of PIM sparse mode join messages received for a group that is configured for dense mode.
Rx Graft/Graft Ack no state: Number of graft and graft acknowledgment messages received for which the router or switch has no state.
Rx Graft on upstream if: Number of graft messages received on the interface used to reach the upstream routing device, toward the RP.
Rx CRP not BSR: Number of BSR messages received in which the PIM message type is Candidate-RP-Advertisement, not Bootstrap.
Rx BSR when BSR: Number of BSR messages received in which the PIM message type is Bootstrap.
Rx BSR not RPF if: Number of BSR messages received on an interface that is not the RPF interface.
Rx unknown hello opt: Number of PIM hello packets received with options that Junos OS does not support.
Rx data no state: Number of PIM control packets received for which the routing device has no state for the data type.
Rx RP no state: Number of PIM control packets received for which the routing device has no state for the RP.
Rx malformed packet: Number of PIM control packets received with a malformed IP unicast or multicast address family.
No register encap if: Number of PIM register packets received when the first-hop routing device does not have an encapsulation interface.
No route upstream: Number of PIM control packets received when the routing device does not have a unicast route to the interface used to reach the upstream routing device, toward the RP.
Nexthop Unusable: Number of PIM control packets with an unusable nexthop. A path can be unusable if the route is hidden or the link is down.
RP mismatch: Number of PIM control packets received for which the routing device has an RP mismatch.
RPF neighbor unknown: Number of PIM control packets received for which the routing device has an unknown RPF neighbor for the source.
Rx Joins/Prunes filtered: The number of join and prune messages filtered on receipt because of configured route filters and source address filters.
Tx Joins/Prunes filtered: The number of join and prune messages filtered on transmission because of configured route filters and source address filters.
Embedded-RP invalid addr: Number of packets received with an invalid embedded RP address in PIM join messages and other types of messages sent between routing domains.
Embedded-RP limit exceed: Number of times the limit configured with the maximum-rps statement is exceeded. The maximum-rps statement limits the number of embedded RPs created in a specific routing instance. The range is from 1 through 500. The default is 100.
Embedded-RP added: Number of packets in which the embedded RP for IPv6 is added.
Embedded-RP removed: Number of packets in which the embedded RP for IPv6 is removed. The embedded RP is removed whenever all PIM join states using this RP are removed or the configuration changes to remove the embedded RP feature.
Rx Register msgs filtering drop: Number of received register messages dropped because of a filter configured for PIM register messages.
Tx Register msgs filtering drop: Number of register messages dropped because of a filter configured for PIM register messages.
Rx Bidir Join/Prune on non-Bidir if: Error counter for join and prune messages received on non-bidirectional PIM interfaces.
Rx Bidir Join/Prune on non-DF if: Error counter for join and prune messages received on non-designated forwarder interfaces.
V4 (S,G) Maximum: Maximum number of (S,G) IPv4 multicast routes accepted for the VPN routing and forwarding (VRF) routing instance. If this number is met, additional (S,G) entries are not accepted.
V4 (S,G) Log Interval: Time (in seconds) between consecutive log messages.
V6 (S,G) Maximum: Maximum number of (S,G) IPv6 multicast routes accepted for the VPN routing and forwarding (VRF) routing instance. If this number is met, additional (S,G) entries are not accepted.
V6 (S,G) Log Interval: Time (in seconds) between consecutive log messages.
V4 (grp-prefix, RP) Log Interval: Time (in seconds) between consecutive log messages.
V6 (grp-prefix, RP) Log Interval: Time (in seconds) between consecutive log messages.
V4 Register Maximum: Maximum number of IPv4 PIM registers accepted for the VRF routing instance. If this number is met, additional PIM registers are not accepted.
V4 Register Log Interval: Time (in seconds) between consecutive log messages.
V6 Register Maximum: Maximum number of IPv6 PIM registers accepted for the VRF routing instance. If this number is met, additional PIM registers are not accepted.
V6 Register Log Interval: Time (in seconds) between consecutive log messages.
(*,G) Join drop due to SSM range check: PIM join messages that are dropped because the multicast addresses are outside of the SSM address range of 232.0.0.0 through 232.255.255.255. You can extend the accepted SSM address range by configuring the ssm-groups statement.
Sample Output
Global Statistics
Bad Receive If 0
Rx Bad Data 0
Rx Intf disabled 0
Rx V1 Require V2 0
Rx V2 Require V1 0
Rx Register not RP 0
Rx Register no route 0
Rx Register no decap if 0
Null Register Timeout 0
RP Filtered Source 0
Rx Unknown Reg Stop 0
Rx Join/Prune no state 0
Rx Join/Prune on upstream if 0
Rx Join/Prune for invalid group 5
Rx Join/Prune messages dropped 0
Rx sparse join for dense group 0
Rx Graft/Graft Ack no state 0
Rx Graft on upstream if 0
Rx CRP not BSR 0
Rx BSR when BSR 0
Rx BSR not RPF if 0
Rx unknown hello opt 0
Rx data no state 0
Rx RP no state 0
Rx aggregate 0
Rx malformed packet 0
Rx illegal TTL 0
Rx illegal destination address 0
No RP 0
No register encap if 0
No route upstream 0
Nexthop Unusable 0
RP mismatch 0
RP mode mismatch 0
RPF neighbor unknown 0
Rx Joins/Prunes filtered 0
Tx Joins/Prunes filtered 0
Embedded-RP invalid addr 0
Embedded-RP limit exceed 0
Embedded-RP added 0
Embedded-RP removed 0
Rx Register msgs filtering drop 0
Tx Register msgs filtering drop 0
Sample Output
Sample Output
V1 Join Prune 0 0 0
V1 RP Reachability 0 0 0
V1 Assert 0 0 0
V1 Graft 0 0 0
V1 Graft Ack 0 0 0
AutoRP Announce 0 0 0
AutoRP Mapping 0 0 0
AutoRP Unknown type 0
Anycast Register 0 0 0
Anycast Register Stop 0 0 0
Global Statistics
Rx RP no state 0
Rx aggregate 0
Rx malformed packet 0
Rx illegal TTL 0
Rx illegal destination address 0
No RP 0
No register encap if 0
No route upstream 28
Nexthop Unusable 0
RP mismatch 0
RP mode mismatch 0
RPF neighbor unknown 0
Rx Joins/Prunes filtered 0
Tx Joins/Prunes filtered 0
Embedded-RP invalid addr 0
Embedded-RP limit exceed 0
Embedded-RP added 0
Embedded-RP removed 0
Rx Register msgs filtering drop 0
Tx Register msgs filtering drop 0
Rx Bidir Join/Prune on non-Bidir if 0
Rx Bidir Join/Prune on non-DF if 0
V4 (S,G) Maximum 10
V4 (S,G) Accepted 9
V4 (S,G) Threshold 80
V4 (S,G) Log Interval 80
V6 (S,G) Maximum 8
V6 (S,G) Accepted 8
V6 (S,G) Threshold 50
V6 (S,G) Log Interval 100
V4 (grp-prefix, RP) Maximum 100
V4 (grp-prefix, RP) Accepted 5
V4 (grp-prefix, RP) Threshold 80
V4 (grp-prefix, RP) Log Interval 10
V6 (grp-prefix, RP) Maximum 20
V6 (grp-prefix, RP) Accepted 0
V6 (grp-prefix, RP) Threshold 90
V6 (grp-prefix, RP) Log Interval 20
V4 Register Maximum 100
V4 Register Accepted 10
V4 Register Threshold 80
V4 Register Log Interval 10
V6 Register Maximum 20
V6 Register Accepted 0
V6 Register Threshold 90
V6 Register Log Interval 20
(*,G) Join drop due to SSM range check 0
Sample Output
Release Information
inet6 and instance options introduced in Junos OS Release 10.0 for EX Series switches.
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2513
Description | 2513
Options | 2513
Syntax
Description
Display information about Protocol Independent Multicast (PIM) default multicast distribution tree
(MDT) and the data MDTs in a Layer 3 VPN environment for a routing instance.
Options
instance instance-name: Display information about data-MDTs for a specific PIM-enabled routing instance.
logical-system (all | logical-system-name): (Optional) Perform this operation on all logical systems or on a particular logical system.
view
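For example, to display default MDT and data MDT details for one VRF instance (the instance name vpn-a is a placeholder):

```
user@host> show pim mdt instance vpn-a detail
```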
Output Fields
Table 107 on page 2514 describes the output fields for the show pim mdt command. Output fields are
listed in the approximate order in which they appear.
Tunnel direction: Direction the tunnel faces, from the router's perspective: Outgoing or Incoming. (All levels)
Tunnel mode: Mode the tunnel is operating in: PIM-SSM or PIM-ASM. (All levels)
Default group address: Default multicast group address using this tunnel. (All levels)
Default source address: Default multicast source address using this tunnel. (All levels)
Default tunnel source: Address used as the source address for outgoing PIM control messages. (All levels)
C-Group: Customer-facing multicast group address using this tunnel. If you enable dynamic reuse of data MDT group addresses, more than one group address can use the same data MDT. (detail)
C-Source: IP address of the multicast source in the customer's address space. If you enable dynamic reuse of data MDT group addresses, more than one source address can use the same data MDT. (detail)
P-Group: Service provider-facing multicast group address using this tunnel. (detail)
Data tunnel interface: Multicast data tunnel interface that set up the data-MDT tunnel. (detail)
Last known forwarding rate: Last known rate, in kilobits per second, at which the tunnel was forwarding traffic. (detail)
Configured threshold rate: Rate, in kilobits per second, above which a data-MDT tunnel is created and below which it is deleted. (detail)
Tunnel uptime: Time that this data-MDT tunnel has existed. The format is hours:minutes:seconds. (detail)
Sample Output
Use this command to display MDT information for the default MDT and data MDTs for IPv4 and/or IPv6 traffic.
Instance: PIM.VPN-A
Tunnel direction: Incoming
Tunnel mode: PIM-SM
Default group address: 224.1.1.1
Default source address: 0.0.0.0
Default tunnel interface: mt-0/0/0.1081344
Default tunnel source: 0.0.0.0
C-Group: 235.1.1.2
C-Source: 192.168.195.74
P-Group : 228.0.0.0
Data tunnel interface : mt-1/1/0.32769
Last known forwarding rate : 48 kbps (6 kBps)
Configured threshold rate : 10 kbps
Tunnel uptime : 00:00:34
Instance: PIM.VPN-A
Tunnel direction: Incoming
Default group address: 239.1.1.1
Default tunnel interface: mt-1/1/0.1081344
C-Group: 235.1.1.2
C-Source: 192.168.195.74
P-Group : 228.0.0.0
Data tunnel interface : mt-1/1/0.32769
Last known forwarding rate : 48 kbps (6 kBps)
Configured threshold rate : 10 kbps
Tunnel uptime : 00:00:41
Instance: PIM.VPN-A
Tunnel direction: Incoming
Default group address: 239.1.1.1
Default tunnel interface: mt-1/1/0.1081344
Instance: PIM.vpn-a
Tunnel direction: Incoming
Tunnel mode: PIM-SSM
Default group address: 232.1.1.1
Default source address: 10.255.14.217
Default tunnel interface: mt-1/3/0.1081345
Instance: PIM.vpn-a
Tunnel direction: Incoming
Tunnel mode: PIM-SSM
Default group address: 232.1.1.1
Default source address: 10.255.14.218
Default tunnel interface: mt-1/3/0.1081345
Release Information
IN THIS SECTION
Syntax | 2519
Description | 2519
Options | 2519
Syntax
Description
In a draft-rosen Layer 3 multicast virtual private network (MVPN) configured with service provider
tunnels, display the advertisements of new multicast distribution tree (MDT) group addresses cached by
the provider edge (PE) routers in the specified VPN routing and forwarding (VRF) instance that is
configured to use the Protocol Independent Multicast (PIM) protocol.
Options
instance instance-name: Display data MDT join packets cached by PE routers in a specific PIM instance.
logical-system (all | logical-system-name): (Optional) Perform this operation on all logical systems or on a particular logical system.
view
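For example, to display the cached data MDT join advertisements for one PIM instance (the instance name vpn-a is a placeholder):

```
user@host> show pim mdt data-mdt-joins instance vpn-a
```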
Output Fields
Table 108 on page 2520 describes the output fields for the show pim mdt data-mdt-joins command.
Output fields are listed in the approximate order in which they appear.
C-Group: IPv4 group address, in the address space of the customer's VPN-specific PIM-enabled routing instance, of the multicast traffic destination. This 32-bit value is carried in the C-group field of the MDT join TLV packet.
C-Source: IPv4 address, in the address space of the customer's VPN-specific PIM-enabled routing instance, of the multicast traffic source. This 32-bit value is carried in the C-source field of the MDT join TLV packet.
P-Group: IPv4 group address in the service provider's address space of the new data MDT that the PE router will use to encapsulate the VPN multicast traffic flow (C-Source, C-Group). This 32-bit value is carried in the P-group field of the MDT join TLV packet.
Timeout: Timeout, in seconds, remaining for this cache entry. When the cache entry is created, this field is set to 180 seconds. After an entry times out, the PE router deletes the entry from its cache and prunes itself off the data MDT.
Sample Output
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2522
Description | 2522
Options | 2522
Syntax
Description
Display the maximum number configured and the currently active data multicast distribution trees
(MDTs) for a specific VPN routing and forwarding (VRF) instance.
Options
instance instance-name: Display data MDT information for the specified VRF instance.
logical-system (all | logical-system-name): (Optional) Perform this operation on all logical systems or on a particular logical system.
view
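For example, to check the configured maximum and the currently active data MDTs for one VRF instance (the instance name vpn-a is a placeholder):

```
user@host> show pim mdt data-mdt-limit instance vpn-a
```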
Output Fields
Table 109 on page 2522 describes the output fields for the show pim mdt data-mdt-limit command.
Output fields are listed in the approximate order in which they appear.
Maximum Data Tunnels: Maximum number of data MDTs created in this VRF instance. If the number is 0, no data MDTs are created for this VRF instance.
Sample Output
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2524
Description | 2524
Options | 2524
Syntax
Description
Options
logical-system (all | logical-system-name): (Optional) Perform this operation on all logical systems or on a particular logical system.
view
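For example, to display MVPN mode and tunnel information across all logical systems:

```
user@host> show pim mvpn logical-system all
```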
Output Fields
Table 110 on page 2525 describes the output fields for the show pim mvpn command. Output fields are
listed in the approximate order in which they appear.
VPN-Group: Multicast group address configured for the default multicast distribution tree. (All levels)
Mode: Mode the tunnel is operating in: PIM-MVPN, NGEN-MVPN, NGEN-TRANSITION, or None. (All levels)
Tunnel: Type of tunnel: PIM-SSM, PIM-SM, NGEN PMSI, or None (VRF-only). (All levels)
Sample Output
Release Information
IN THIS SECTION
Syntax | 2526
Description | 2527
Options | 2528
Syntax
Description
Display the Routing Engine's forwarding table, including the network-layer prefixes and their next hops. This command is used to help verify that the routing protocol process has relayed the correct information to the forwarding table. The Routing Engine constructs and maintains one or more routing tables. From the routing tables, the Routing Engine derives a table of active routes, called the forwarding table.
NOTE: The Routing Engine copies the forwarding table to the Packet Forwarding Engine, the part
of the router that is responsible for forwarding packets. To display the entries in the Packet
Forwarding Engine's forwarding table, use the show pfe route command.
Options
none: Display the routes in the forwarding tables. By default, the show route forwarding-table command does not display information about private, or internal, forwarding tables.
bridge-domain (all | bridge-domain-name): (MX Series routers only) (Optional) Display route entries for all bridge domains or the specified bridge domain.
ccc interface-name: (Optional) Display route entries for the specified circuit cross-connect interface.
interface-name interface-name: (Optional) Display routing table entries for the specified interface.
label name: (Optional) Display route entries for the specified label.
lcc number: (TX Matrix and TX Matrix Plus routers only) (Optional) On a routing matrix composed of a TX Matrix router and T640 routers, display information for the specified T640 router (or line-card chassis) connected to the TX Matrix router. On a routing matrix composed of the TX Matrix Plus router and T1600 or T4000 routers, display information for the specified router (line-card chassis) connected to the TX Matrix Plus router. Replace number with the appropriate value for the LCC configuration.
learning-vlan-id learning-vlan-id: (MX Series routers only) (Optional) Display learned information for all VLANs or for the specified VLAN.
matching matching: (Optional) Display routing table entries matching the specified prefix or prefix length.
table: (Optional) Display route entries for all the routing tables in the main routing instance or for the specified routing instance. If your device supports logical systems, you can also display route entries for the specified logical system and routing instance. To view the routing instances on your device, use the show route instance command.
vlan (all | vlan-name): (Optional) Display information for all VLANs or for the specified VLAN.
vpn vpn: (Optional) Display routing table entries for a specified VPN.
view
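For example, to look up the forwarding entry for a single prefix in a specific routing instance (the prefix and instance name below are placeholders):

```
user@host> show route forwarding-table matching 192.0.2.0/24 table vpn-a
```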
Output Fields
Table 111 on page 2530 lists the output fields for the show route forwarding-table command. Output
fields are listed in the approximate order in which they appear. Field names might be abbreviated (as
shown in parentheses) when no level of output is specified, or when the detail keyword is used instead
of the extensive keyword.
Logical system: Name of the logical system. This field is displayed if you specify the table logical-system-name/routing-instance-name option on a device that is configured for and supports logical systems. (All levels)
Routing table: Name of the routing table (for example, inet, inet6, mpls). (All levels)
Enabled protocols: The features and protocols that have been enabled for a given routing table. (All levels)
Address family: Address family (for example, IP, IPv6, ISO, MPLS, and VPLS). (All levels)
Route Type (Type): How the route was placed into the forwarding table. When the detail keyword is used, the route type might be abbreviated (as shown in parentheses): (All levels)
• cached: Cache route.
• static: Static route.
Next hop: IP address of the next hop to the destination. (detail, extensive)
Next hop Type (Type): Next-hop type. When the detail keyword is used, the next-hop type might be abbreviated (as indicated in parentheses): (detail, extensive)
• broadcast (bcst): Broadcast.
• deny: Deny.
• receive (recv): Receive.
• unicast (ucst): Unicast.
Index: Software index of the next hop that is used to route the traffic for a given prefix. (detail, extensive, none)
Route interface-index: Logical interface index from which the route is learned. For example, for interface routes, this is the logical interface index of the route itself. For static routes, this field is zero. For routes learned through routing protocols, this is the logical interface index from which the route is learned. (extensive)
Reference (NhRef): Number of routes that refer to this next hop. (detail, extensive, none)
Weight: Value used to distinguish primary, secondary, and fast reroute backup routes. Weight information is available when MPLS label-switched path (LSP) link protection, node-link protection, or fast reroute is enabled, or when the standby state is enabled for secondary paths. A lower weight value is preferred. Among routes with the same weight value, load balancing is possible (see the Balance field description). (extensive)
RPF interface: List of interfaces from which the prefix can be accepted. Reverse path forwarding (RPF) information is displayed only when rpf-check is configured on the interface. (extensive)
Sample Output
...
...
...
...
The next example is based on the following configuration, which enables an RPF check on all routes that
are learned from this interface, including the interface route:
so-1/1/0 {
unit 0 {
family inet {
rpf-check;
address 192.0.2.2/30;
}
}
}
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2541
Description | 2541
Options | 2541
Syntax
Description
Display the routes based on a specified Multiprotocol Label Switching (MPLS) label value.
Options
brief | detail | extensive | terse: (Optional) Display the specified level of output. If you do not specify a level of output, the system defaults to brief.
logical-system (all | logical-system-name): (Optional) Perform this operation on all logical systems or on a particular logical system.
view
Output Fields
For information about output fields, see the output field table for the show route command, the show
route detail command, the show route extensive command, or the show route terse command.
Sample Output
Task: BGP.0.0.0.0+179
Announcement bits (1): 0-KRT
AS path: 100 I
Ref Cnt: 2
show route label detail (Multipoint LDP Inband Signaling for Point-to-Multipoint LSPs)
show route label detail (Multipoint LDP with Multicast-Only Fast Reroute)
The output for the show route label detail command shows the two indirect next hops for an ESI.
The output for the show route label extensive command is identical to that of the show route label
detail command. For sample output, see "show route label detail" on page 2542.
Release Information
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2547
Description | 2548
Options | 2548
Syntax
Description
Display the entries in the routing table that were learned from snooping.
Options
none Display the entries in the routing table that were learned from snooping.
brief | detail | extensive | terse: (Optional) Display the specified level of output. If you do not specify a level of output, the system defaults to brief.
best address/prefix (Optional) Display the longest match for the provided address and optional
prefix.
exact address/prefix (Optional) Display exact matches for the provided address and optional
prefix.
logical-system logical-system-name: (Optional) Display information about a particular logical system, or type 'all'.
range prefix-range (Optional) Display information for the provided address range.
view
Output Fields
For information about output fields, see the output field tables for the show route command, the show
route detail command, the show route extensive command, or the show route terse command.
Sample Output
<snip>
logical-system: default
0.0,0.1,0.0,232.1.1.65,100.1.1.2/112*[Multicast/180] 00:07:36
Multicast (IPv4) Composite
0.0,0.1,0.0,232.1.1.66,100.1.1.2/112*[Multicast/180] 00:07:36
Multicast (IPv4) Composite
0.0,0.1,0.0,232.1.1.67,100.1.1.2/112*[Multicast/180] 00:07:36
<snip>
0.15,0.1,0.0,0.0.0.0,0.0.0.0,2/120*[Multicast/180] 00:08:21
Multicast (IPv4) Composite
0.15,0.1,0.0,0.0.0.0,0.0.0.0,2,17/128*[Multicast/180] 00:08:21
Multicast (IPv4) Composite
<snip>
Release Information
IN THIS SECTION
Syntax | 2551
Description | 2551
Options | 2551
Syntax
Description
Options
logical-system (all | logical-system-name): (Optional) Perform this operation on all logical systems or on a particular logical system. This option is only supported on Junos OS.
routing-table-name Display route entries for all routing tables whose names begin with this
string (for example, inet.0 and inet6.0 are both displayed when you run the
show route table inet command).
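As a sketch of this prefix-matching behavior, the following Python snippet (with an illustrative table list, not actual Junos code) returns every table whose name begins with the given string:

```python
# Hypothetical illustration of how a routing-table name argument
# matches all tables whose names begin with that string.
TABLES = ["inet.0", "inet6.0", "mpls.0", "vpls_1.l2vpn.0"]

def matching_tables(name):
    """Return every table whose name starts with the given string."""
    return [t for t in TABLES if t.startswith(name)]

print(matching_tables("inet"))  # both inet.0 and inet6.0 match
```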
view
Output Fields
Table 112 on page 2552 describes the output fields for the show route table command. Output fields
are listed in the approximate order in which they appear.
Restart complete: All protocols have restarted for this routing table.
Restart state: Restart status of each protocol for this routing table. In the sample output, this indicates that the OSPF, LDP, and VPN protocols did not restart for the LDP.inet.0 routing table, and that all protocols have restarted for the vpls_1.l2vpn.0 routing table.
number destinations: Number of destinations for which there are routes in the routing table.
number routes: Number of routes in the routing table and total number of routes in the following states:
• holddown (routes that are in the pending state before being declared inactive)
route-destination (entry, announced): Route destination (for example: 10.0.0.1/24). The entry value is the number of routes for this destination, and the announced value is the number of routes being announced for this destination. Sometimes the route destination is presented in another format, such as:
• Ethernet tag ID—(4 octets) Identifier of the Ethernet tag. Can be set to 0 or to a valid Ethernet tag value.
label stacking: (Next-to-the-last-hop routing device for MPLS only) Depth of the MPLS label stack, where the label-popping operation is needed to remove one or more labels from the top of the stack. A pair of routes is displayed, because the pop operation is performed only when the stack depth is two or more labels.
• S=0 route indicates that a packet with an incoming label stack depth of 2 or
more exits this routing device with one fewer label (the label-popping operation
is performed).
[protocol, preference]: Protocol from which the route was learned and the preference value for the route.
• +—A plus sign indicates the active route, which is the route installed from the
routing table into the forwarding table.
• *—An asterisk indicates that the route is both the active and the last active
route. An asterisk before a to line indicates the best subpath to the route.
In every routing metric except for the BGP LocalPref attribute, a lesser value is
preferred. In order to use common comparison routines, Junos OS stores the 1's
complement of the LocalPref value in the Preference2 field. For example, if the
LocalPref value for Route 1 is 100, the Preference2 value is -101. If the LocalPref
value for Route 2 is 155, the Preference2 value is -156. Route 2 is preferred
because it has a higher LocalPref value and a lower Preference2 value.
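The Preference2 arithmetic above is simply a bitwise 1's complement, which can be checked with a small sketch (plain Python arithmetic, not Junos code):

```python
def preference2(local_pref):
    """Bitwise NOT (1's complement) of LocalPref, so that a higher
    LocalPref yields a lower Preference2 and the usual
    lower-is-better comparison still applies."""
    return ~local_pref

print(preference2(100))  # -101
print(preference2(155))  # -156
# Route 2 (LocalPref 155) wins: higher LocalPref, lower Preference2.
print(preference2(155) < preference2(100))  # True
```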
Level (IS-IS only). In IS-IS, a single AS can be divided into smaller groups called areas.
Routing between areas is organized hierarchically, allowing a domain to be
administratively divided into smaller areas. This organization is accomplished by
configuring Level 1 and Level 2 intermediate systems. Level 1 systems route within
an area. When the destination is outside an area, they route toward a Level 2
system. Level 2 intermediate systems route between areas and toward other ASs.
Next-hop type Type of next hop. For a description of possible values for this field, see Table 113
on page 2563.
Flood nexthop branches exceed maximum message: Indicates that the number of flood next-hop branches exceeded the system limit of 32 branches, and only a subset of the flood next-hop branches were installed in the kernel.
Next hop Network layer address of the directly reachable neighboring system.
via Interface used to reach the next hop. If there is more than one interface available
to the next hop, the name of the interface that is actually used is followed by the
word Selected. This field can also contain the following information:
Label operation MPLS label and operation occurring at this routing device. The operation can be
pop (where a label is removed from the top of the stack), push (where another label
is added to the label stack), or swap (where a label is replaced by another label).
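The three label operations can be modeled as manipulations of a label stack. This is a hypothetical Python sketch with illustrative label values, not the forwarding-plane implementation:

```python
# Model of the three MPLS label operations described above,
# acting on a stack whose top is the last list element.
def pop(stack):
    """Remove the label at the top of the stack."""
    stack.pop()

def push(stack, label):
    """Add another label to the top of the stack."""
    stack.append(label)

def swap(stack, label):
    """Replace the top label with another label."""
    stack[-1] = label

stack = [100049]
push(stack, 100008)   # stack is now [100049, 100008]
swap(stack, 100010)   # top label replaced: [100049, 100010]
pop(stack)            # back to [100049]
print(stack)
```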
Protocol next hop Network layer address of the remote routing device that advertised the prefix. This
address is used to derive a forwarding next hop.
Indirect next hop Index designation used to specify the mapping between protocol next hops, tags,
kernel export policy, and the forwarding next hops.
State State of the route (a route can be in more than one state). See Table 114 on page
2565.
Metricn Cost value of the indicated route. For routes within an AS, the cost is determined
by IGP and the individual protocol metrics. For external routes, destinations, or
routing domains, the cost is determined by a preference value.
MED-plus-IGP Metric value for BGP path selection to which the IGP cost to the next-hop
destination has been added.
TTL-Action For MPLS LSPs, state of the TTL propagation attribute. Can be enabled or disabled
for all RSVP-signaled and LDP-signaled LSPs or for specific VRF routing instances.
Announcement bits: The number of BGP peers or protocols to which Junos OS has announced this route, followed by the list of the recipients of the announcement. Junos OS can also announce the route to the kernel routing table (KRT) for installing the route into the Packet Forwarding Engine, to a resolve tree, a Layer 2 VC, or even a VPN. For example, n-Resolve inet indicates that the specified route is used for route resolution for next hops found in the routing table.
AS path AS path through which the route was learned. The letters at the end of the AS path
indicate the path origin, providing an indication of the state of the route at the
point at which the AS path originated:
• I—IGP.
• E—EGP.
When AS path numbers are included in the route, the format is as follows:
• [ ]—Brackets enclose the number that precedes the AS path. This number
represents the number of ASs present in the AS path, when calculated as
defined in RFC 4271. This value is used in the AS-path merge process, as
defined in RFC 4893.
• { }—Braces enclose AS sets, which are groups of AS numbers in which the order
does not matter. A set commonly results from route aggregation. The numbers
in each AS set are displayed in ascending order.
NOTE: In Junos OS Release 10.3 and later, the AS path field displays an
unrecognized attribute and associated hexadecimal value if BGP receives attribute
128 (attribute set) and you have not configured an independent domain in any
routing instance.
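As a rough illustration of this notation, the following hypothetical Python formatter (an assumption for illustration, not the Junos implementation) renders an AS path with AS sets in braces, displayed in ascending order:

```python
def format_as_path(path):
    """Render an AS path: plain numbers for the AS sequence, braces
    for AS sets (unordered groups, shown in ascending order)."""
    parts = []
    for elem in path:
        if isinstance(elem, set):
            parts.append("{%s}" % " ".join(str(n) for n in sorted(elem)))
        else:
            parts.append(str(elem))
    return " ".join(parts)

# A path through AS 100 and 200, ending in an aggregated AS set.
print(format_as_path([100, 200, {65003, 65001}]))  # 100 200 {65001 65003}
```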
• Unknown—Indicates that the prefix is not among the prefixes or prefix ranges in
the database.
• Unverified—Indicates that the origin of the prefix is not verified against the database. This occurs either because origin validation is enabled and the database is populated but validation is not invoked in the BGP import policy, or because origin validation is not enabled for the BGP peers.
• Valid—Indicates that the prefix and autonomous system pair are found in the
database.
FECs bound to route: Indicates point-to-multipoint root address, multicast source address, and multicast group address when multipoint LDP (M-LDP) inband signaling is configured.
Primary Upstream When multipoint LDP with multicast-only fast reroute (MoFRR) is configured,
indicates the primary upstream path. MoFRR transmits a multicast join message
from a receiver toward a source on a primary path, while also transmitting a
secondary multicast join message from the receiver toward the source on a backup
path.
RPF Nexthops When multipoint LDP with MoFRR is configured, indicates the reverse-path
forwarding (RPF) next-hop information. Data packets are received from both the
primary path and the secondary paths. The redundant packets are discarded at
topology merge points due to the RPF checks.
Label Multiple MPLS labels are used to control MoFRR stream selection. Each label
represents a separate route, but each references the same interface list check.
Only the primary label is forwarded while all others are dropped. Multiple
interfaces can receive packets using the same label.
weight Value used to distinguish MoFRR primary and backup routes. A lower weight value
is preferred. Among routes with the same weight value, load balancing is possible.
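The weight-based selection described above can be sketched as follows (hypothetical Python with illustrative next hops): the lowest weight wins, and equal-weight survivors remain candidates for load balancing.

```python
# Hypothetical selection among MoFRR routes by weight.
routes = [
    {"nexthop": "10.0.0.1", "weight": 1},    # primary
    {"nexthop": "10.0.0.2", "weight": 100},  # backup
    {"nexthop": "10.0.0.3", "weight": 1},    # equal weight with primary
]

best = min(r["weight"] for r in routes)
active = [r["nexthop"] for r in routes if r["weight"] == best]
print(active)  # both weight-1 next hops can share the load
```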
Prefixes bound to route: Forwarding equivalence class (FEC) bound to this route. Applicable only to routes installed by LDP.
Communities Community path attribute for the route. See Table 115 on page 2568 for all
possible values for this field.
Label-Base, range First label in a block of labels and label block size. A remote PE routing device uses
this first label when sending traffic toward the advertising PE routing device.
status vector Layer 2 VPN and VPLS network layer reachability information (NLRI).
Accepted LongLivedStale: The LongLivedStale flag indicates that the route was marked LLGR-stale by this router, as part of the operation of LLGR receiver mode. Either this flag or the LongLivedStaleImport flag might be displayed for a route. Neither of these flags is displayed at the same time as the Stale (ordinary GR stale) flag.
Accepted LongLivedStaleImport: The LongLivedStaleImport flag indicates that the route was marked LLGR-stale when it was received from a peer, or by import policy. Either this flag or the LongLivedStale flag might be displayed for a route. Neither of these flags is displayed at the same time as the Stale (ordinary GR stale) flag.
ImportAccepted: Accept all received BGP long-lived graceful restart (LLGR) and LLGR stale routes learned from configured neighbors and import them into the inet.0 routing table.
ImportAccepted LongLivedStaleImport: Accept all received BGP LLGR and LLGR stale routes learned from configured neighbors and import them into the inet.0 routing table. The LongLivedStaleImport flag indicates that the route was marked LLGR-stale when it was received from a peer, or by import policy.
Primary Routing Table: In a routing table group, the name of the primary routing table in which the route resides.
Secondary Tables: In a routing table group, the name of one or more secondary tables in which the route resides.
Table 113 on page 2563 describes all possible values for the Next-hop Types output field.
Indirect (indr) Used with applications that have a protocol next hop address
that is remote. You are likely to see this next-hop type for
internal BGP (IBGP) routes when the BGP next hop is a BGP
neighbor that is not directly connected.
Unilist (ulst) List of unicast next hops. A packet sent to this next hop goes
to any next hop in the list.
Table 114 on page 2565 describes all possible values for the State output field. A route can be in more
than one state (for example, <Active NoReadvrt Int Ext>).
Value Description
Always Compare MED Path with a lower multiple exit discriminator (MED) is available.
Cisco Non-deterministic MED selection: Cisco nondeterministic MED is enabled, and a path with a lower MED is available.
Cluster list length Length of cluster list sent by the route reflector.
Ex Exterior route.
IGP metric Path through next hop with lower IGP metric is available.
Inactive reason Flags for this route, which was not selected as best for a
particular destination.
Int Ext BGP route received from an internal BGP peer or a BGP
confederation peer.
Interior > Exterior > Exterior via Interior: Direct, static, IGP, or EBGP path is available.
Next hop address Path with lower metric next hop is available.
NotBest Route not chosen because it does not have the lowest MED.
Not Best in its group Incoming BGP AS is not the best of a group (only one AS can be
the best).
Route Metric or MED comparison Route with a lower metric or MED is available.
Unusable path Path is not usable because of one of the following conditions:
Table 115 on page 2568 describes the possible values for the Communities output field.
Value Description
area-number 4 bytes, encoding a 32-bit area number. For AS-external routes, the value is 0.
A nonzero value identifies the route as internal to the OSPF domain, and as
within the identified area. Area numbers are relative to a particular OSPF
domain.
bandwidth:local-AS-number:link-bandwidth-number: Link-bandwidth community value used for unequal-cost load balancing. When BGP has several candidate paths available for multipath purposes, it does not perform unequal-cost load balancing according to the link-bandwidth community unless all candidate paths have this attribute.
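The all-or-nothing behavior of the link-bandwidth community can be sketched in Python (a simplified model with assumed field names, not the BGP multipath implementation):

```python
# If any candidate path lacks the link-bandwidth attribute, fall back
# to equal-cost balancing; otherwise split traffic in proportion to
# the advertised link bandwidths.
def balance(paths):
    if any(p.get("bandwidth") is None for p in paths):
        share = 1.0 / len(paths)
        return {p["nexthop"]: share for p in paths}
    total = sum(p["bandwidth"] for p in paths)
    return {p["nexthop"]: p["bandwidth"] / total for p in paths}

paths = [{"nexthop": "a", "bandwidth": 100e6},
         {"nexthop": "b", "bandwidth": 300e6}]
print(balance(paths))  # a carries 0.25 of the traffic, b carries 0.75
```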
domain-id-vendor Unique configurable number that further identifies the OSPF domain.
options 1 byte. Currently this is only used if the route type is 5 or 7. Setting the least
significant bit in the field indicates that the route carries a type 2 metric.
origin (Used with VPNs) Identifies where the route came from.
route-type-vendor Displays the area number, OSPF route type, and option of the route. This is
configured using the BGP extended community attribute 0x8000. The format
is area-number:ospf-route-type:options.
rte-type Displays the area number, OSPF route type, and option of the route. This is
configured using the BGP extended community attribute 0x0306. The format
is area-number:ospf-route-type:options.
target Defines which VPN the route participates in; target has the format 32-bit IP
address:16-bit number. For example, 10.19.0.0:100.
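A minimal sketch of parsing that target format, assuming the 32-bit-IP:16-bit-number form shown above (a hypothetical helper, not a Junos API):

```python
# Parse a route target of the form "32-bit IP address:16-bit number",
# e.g. "10.19.0.0:100", validating each component's range.
def parse_target(community):
    addr, num = community.rsplit(":", 1)
    octets = [int(o) for o in addr.split(".")]
    if len(octets) != 4 or not all(0 <= o <= 255 for o in octets):
        raise ValueError("not a 32-bit IP address: %r" % addr)
    number = int(num)
    if not 0 <= number <= 0xFFFF:
        raise ValueError("not a 16-bit number: %r" % num)
    return addr, number

print(parse_target("10.19.0.0:100"))  # ('10.19.0.0', 100)
```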
unknown IANA Incoming IANA codes with a value between 0x1 and 0x7fff. This code of the
BGP extended community attribute is accepted, but it is not recognized.
unknown OSPF vendor community: Incoming IANA codes with a value above 0x8000. This code of the BGP extended community attribute is accepted, but it is not recognized.
evpn-mcast-flags Identifies the value in the multicast flags extended community and whether
snooping is enabled. A value of 0x1 indicates that the route supports IGMP
proxy.
Sample Output
192.168.24.1:1:4:1/96
*[BGP/170] 01:08:58, localpref 100, from 192.168.24.1
AS path: I
> to 10.0.16.2 via fe-0/0/1.0, label-switched-path am
::10.255.245.195/128
*[LDP/9] 00:00:22, metric 1
> via so-1/0/0.0
::10.255.245.196/128
*[LDP/9] 00:00:08, metric 1
> via so-1/0/0.0, Push 100008
10.1.1.195:NoCtrlWord:1:1:Local/96
*[L2CKT/7] 00:50:47
> via so-0/1/2.0, Push 100049
via so-0/1/3.0, Push 100049
10.1.1.195:NoCtrlWord:1:1:Remote/96
*[LDP/9] 00:50:14
Discard
10.1.1.195:CtrlWord:1:2:Local/96
*[L2CKT/7] 00:50:47
> via so-0/1/2.0, Push 100049
via so-0/1/3.0, Push 100049
10.1.1.195:CtrlWord:1:2:Remote/96
*[LDP/9] 00:50:14
Discard
LINK { Local { AS:4 BGP-LS ID:100 IPv4:4.4.4.4 }.{ IPv4:4.4.4.4 } Remote { AS:4
BGP-LS ID:100 IPv4:7.7.7.7 }.{ IPv4:7.7.7.7 } Undefined:0 }/1152
*[BGP-LS-EPE/170] 00:20:56
Fictitious
LINK { Local { AS:4 BGP-LS ID:100 IPv4:4.4.4.4 }.{ IPv4:4.4.4.4 IfIndex:339 }
Remote { AS:4 BGP-LS ID:100 IPv4:7.7.7.7 }.{ IPv4:7.7.7.7 } Undefined:0 }/
1152
*[BGP-LS-EPE/170] 00:20:56
Fictitious
LINK { Local { AS:4 BGP-LS ID:100 IPv4:4.4.4.4 }.{ IPv4:50.1.1.1 } Remote { AS:4
BGP-LS ID:100 IPv4:5.5.5.5 }.{ IPv4:50.1.1.2 } Undefined:0 }/1152
*[BGP-LS-EPE/170] 00:20:56
Fictitious
Receive
2 *[MPLS/0] 00:13:55, metric 1
Receive
1024 *[VPN/0] 00:04:18
to table red.inet.0, Pop
Release Information
The show route table evpn statement was introduced in Junos OS Release 15.1X53-D30 for QFX Series switches.
RELATED DOCUMENTATION
IN THIS SECTION
Syntax | 2576
Description | 2576
Options | 2576
Syntax
Description
Display the addresses that the router is listening to in order to receive multicast Session Announcement
Protocol (SAP) session announcements.
Options
none Display standard information about the addresses that the router is listening to
in order to receive multicast SAP session announcements.
logical-system (all | logical-system-name): (Optional) Perform this operation on all logical systems or on a particular logical system.
view
Output Fields
Table 116 on page 2576 describes the output fields for the show sap listen command. Output fields are
listed in the approximate order in which they appear.
Group address Address of the group that the local router is listening to for SAP messages.
Sample Output
The output for the show sap listen brief command is identical to that for the show sap listen command.
For sample output, see "show sap listen" on page 2577.
The output for the show sap listen detail command is identical to that for the show sap listen command.
For sample output, see "show sap listen" on page 2577.
Release Information
test msdp
IN THIS SECTION
Syntax | 2578
Description | 2578
Options | 2578
Syntax
Description
Options
rpf-peer originator Find the MSDP reverse-path-forwarding (RPF) peer for the
originator.
instance instance-name (Optional) Find MSDP peers for the specified routing instance.
logical-system (all | logical-system-name): (Optional) Perform this operation on all logical systems or on a particular logical system.
view
Output Fields
When you enter this command, you are provided feedback on the status of your request.
Sample Output
Release Information