September 4, 2017 at 08:52

Traffic mirroring on Juniper MX


Tutorial

Today we'll talk about traffic mirroring on the Juniper MX Series routers. After switches made by Cisco, Huawei or Arista, the configuration of SPAN and RSPAN on JunOS will seem very complicated, but this (at first glance) complex configuration hides the huge capabilities of the MX platform in the field of traffic mirroring. Although the Juniper approach looks complicated at first, it becomes simple and understandable if you don't blindly copy-paste configs from one box to another, but understand what is being done and why. JunOS ideology suggests using filter-based forwarding (FBF) for mirroring purposes, which gives us a lot of flexibility in implementing complex traffic mirroring schemes.

So, let's begin. We will look at several examples of mirroring:

1. Local mirroring from port to port


2. Mirroring to two or more consumers
3. Mirroring to a remote host
4. Selective mirroring for two or more consumers
5. Local mirroring of L2 traffic
6. Mirroring L2 traffic to a remote server
7. Using more than two mirroring instances on one FPC

So, let's take them in order.

Local mirroring from port to port.

Our test bench will change constantly - we will add consumers, change their requirements for the copy of traffic they receive, and so on. At the first stage the test bench looks like this:

Note: at first I wanted to use a traffic generator, but then decided that tcp/udp/icmp packets generated with hping3 and caught on a traffic analyzer (the analyzer was just a host with Ubuntu Server 14.04 on board) would be more illustrative than bare port counters in pps (you can, for example, compare what was sent against what was received). Counters are what you should use during load testing to check router performance with this feature enabled. But there is no point in testing performance on a virtual MX - everything will run into the capabilities of the virtualization server anyway.

Suppose that there is some kind of traffic exchange between the servers Server-1 (11.0.0.1) and Server-2 (12.0.0.1). The owners of the Analyzer-1 server want to see what exactly is transferred between these two servers, so we need to configure sending a copy of all traffic between Server-1 and Server-2 to Analyzer-1 - that is, do local mirroring. So let's get started.

In theory, this should work like this: we create a mirroring instance in which we specify the parameters of the incoming traffic (how often to mirror the traffic) and the output parameters - to which port or ports to send the traffic. To direct traffic into the instance we created, we take the interface or interfaces from which we want to get a copy of the traffic and hang on them a special filter that will steer our traffic into the desired instance. That is, this is a classic policy-based routing scheme, or filter-based forwarding in Juniper terms. We've figured out the theory, now let's get to practice - we need to build this mirroring scheme:

First we need to create an instance in the [edit forwarding-options port-mirroring] hierarchy, which we will use to mirror traffic.

[edit forwarding-options port-mirroring]


bormoglotx@RZN-PE-1# show
instance {
SPAN-1 {
input {
rate 1;
run-length 0;
}
family inet {
output {
interface ge-0/0/1.0 {
next-hop 169.254.0.1;
}
}
}
}

The instance configuration consists of two parts. First, let's deal with the input section - as it's not hard to guess, these are the parameters of the incoming traffic that should be mirrored. The rate and run-length parameters are important here. The first parameter is responsible for how often packets are mirrored (how often the trigger fires), the second for how many additional packets will be mirrored after the rate trigger has fired.

In our case, rate is set to 1, that is, each packet will be mirrored. Run-length is set to 0, because with a rate of 1 its presence
does not play any role.

For completeness, let's analyze the meaning of these parameters with a more illustrative example. The rate parameter sets the frequency of traffic mirroring; suppose rate is 5, that is, the trigger will fire on every 5th packet, which means that every 5th packet will be mirrored. Now suppose that run-length is set to 4. This tells us that 4 more packets will be mirrored after every 5th packet. That is, the trigger first fires on the 5th packet - this packet is mirrored - and then the 4 packets following the already mirrored packet are mirrored as well. As a result, we are mirroring every 5th packet plus the 4 packets following it - a total of 100% of the traffic. By changing these parameters you can mirror, for example, every 10 packets out of 100, etc. (though this is needed more for sampling than for mirroring).

If we return to our case, we already mirror every packet, so we simply do not need the run-length parameter and leave it at the default value of zero.

To calculate the percentage of mirrored traffic, you can use the formula % = ((run-length + 1) / rate) * 100. It might seem that with run-length 1 and rate 1 we would get 200% of the traffic mirrored, or, for example, with rate 1 and run-length 4 - 500%. I will grieve or delight you: more than 100% of the traffic is not mirrored - Juniper will not multiply packets, which is more than logical. And I couldn't come up with a scenario where you would need two copies of the same traffic (if anyone knows one, write in the comments).
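For example, a minimal sketch (not taken from the lab above) of an input section that samples roughly every tenth packet - 10% by the formula above - would be:

input {
    rate 10;       /* trigger fires on every 10th packet */
    run-length 0;  /* mirror only the triggering packet: ((0 + 1) / 10) * 100 = 10% */
}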

Another important parameter is maximum-packet-length. This is the maximum packet size that will be mirrored. If you set it, for example, to 128, then when a packet larger than 128 bytes arrives (say, 1514 bytes), only the first 128 bytes will be cut from it and sent to the consumer. The rest of the packet will simply be discarded. This is convenient when you only need to send headers to the server and do not need the payload. It is not recommended to set it below 20 for ipv4.
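As a sketch, if the analyzer only needs the headers, the input section might be extended like this (the value 128 is illustrative):

input {
    rate 1;
    run-length 0;
    maximum-packet-length 128;   /* mirror only the first 128 bytes of each packet */
}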

Now let's move on to the output parameters. In the general case we need to specify the interface to which we are going to mirror the traffic. When it is simply a p2p interface, you don't need to specify anything else - everything will fly. But as we all remember, ethernet is far from p2p (to be exact, it is csma/cd), so in addition to the interface we need to specify the address of the host the traffic is intended for, both IP and MAC (we will get to the MAC part later). I chose an address from the link-local range to avoid any overlap with existing addresses - you can take any addressing, this changes absolutely nothing in how the technology works. In ethernet, in order to send a packet to some host, the router needs to find out the MAC address of this host using ARP. In my case nothing is configured on the recipient server side - it's just an empty interface - so the router would try in vain to resolve the address of the remote host. Naturally, all mirroring would end there. What to do in this situation? Everything ingenious is simple - a static ARP entry is made:

bormoglotx@RZN-PE-1# show interfaces ge-0/0/1


description Analyzer-1;
unit 0 {
family inet {
address 169.254.0.0/31 {
arp 169.254.0.1 mac 02:00:00:00:00:01;
}
}
}

As a result, we will have just such an entry on the interface:

[edit]
bormoglotx@RZN-PE-1# run show arp interface ge-0/0/1.0
MAC Address Address Name Interface Flags
02:00:00:00:00:01 169.254.0.1 169.254.0.1 ge-0/0/1.0 permanent

Here I would like to dwell on this in more detail. Theoretically, you can send traffic to some real address configured on the server, but the simplest and most flexible approach is to invent a fictitious IP address and ARP entry for the traffic consumer - that is, we simply make Juniper think that a host with the specified IP/MAC address sits behind the specified interface, which ultimately makes the box blindly send traffic there without checking whether the specified host really exists - the main thing is that the port is up. Using a static ARP entry for mirroring has a great advantage: a static ARP entry does not expire, so the router will not send ARP requests towards the server (requests that could end up in the dump of the captured traffic, which is not very good).

Now, in order for traffic to be mirrored, we need to somehow steer it into the instance we created. To do this, we use filter-based forwarding. We create a filter and apply it to the interface we are interested in:

[edit]
bormoglotx@RZN-PE-1# show firewall family inet filter MIRROR>>>SPAN-1
term MIRROR {
then port-mirror-instance SPAN-1;
}
[edit]
bormoglotx@RZN-PE-1# show interfaces ge-0/0/3
description Server-1;
unit 0 {
family inet {
filter {
input MIRROR>>>SPAN-1;
output MIRROR>>>SPAN-1;
}
address 11.0.0.254/24;
}
}

Since we need to collect both incoming and outgoing traffic, we hang the filter in both directions.

As practice shows, this filter does not block the traffic between the servers themselves, so there is no need to write an explicit accept action, but it is often added just to be safe.
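For reference, the same filter with an explicit accept would look like this (functionally equivalent to the variant above):

term MIRROR {
    then {
        port-mirror-instance SPAN-1;
        accept;
    }
}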

Now you can check the state of the mirroring session:

bormoglotx@RZN-PE-1> show forwarding-options port-mirroring


Instance Name: SPAN-1
Instance Id: 2
Input parameters:
Rate : 1
Run-length : 0
Maximum-packet-length : 0
Output parameters:
Family State Destination Next-hop
inet up ge-0/0/1.0 169.254.0.1

Apparently mirroring is up and running. Let's now send 5 packets from Server-1 to Server-2 and see what we can catch on Analyzer-1:

bormoglotx@Server-1:~$ sudo hping3 -S -c 5 12.0.0.1 -d 40 -I eth1


HPING 12.0.0.1 (eth1 12.0.0.1): S set, 40 headers + 40 data bytes
len=40 ip=12.0.0.1 ttl=63 DF id=34108 sport=0 flags=RA seq=0 win=0 rtt=3.4 ms
len=40 ip=12.0.0.1 ttl=63 DF id=34121 sport=0 flags=RA seq=1 win=0 rtt=3.5 ms
len=40 ip=12.0.0.1 ttl=63 DF id=34229 sport=0 flags=RA seq=2 win=0 rtt=3.5 ms
len=40 ip=12.0.0.1 ttl=63 DF id=34471 sport=0 flags=RA seq=3 win=0 rtt=3.5 ms
len=40 ip=12.0.0.1 ttl=63 DF id=34635 sport=0 flags=RA seq=4 win=0 rtt=3.5 ms
--- 12.0.0.1 hping statistic ---
5 packets transmitted, 5 packets received, 0% packet loss

Now let's see what we managed to capture on the Analyzer-1 server:

bormoglotx@Analyzer-1:~$ sudo tcpdump -i eth1


tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
^C
0 packets captured
0 packets received by filter
0 packets dropped by kernel

Everything is not so rosy - the output shows that nothing actually works, although Juniper reported in the output above that everything is ok. The thing is that you can create the mirroring instance yourself (which we did) or use the default instance (there is one per box). If we create the instance ourselves, we must associate it with the FPC on which we do the mirroring (if the ports are on several FPCs, associate it with all of them). Let's go back to Juniper and specify the instance we created in the FPC configuration. Why did I focus on this? I myself have run into this a couple of times and could not understand what the catch was - after all, the outputs say that everything is fine.

[edit]
bormoglotx@RZN-PE-1# show | compare
[edit]
+ chassis {
+ fpc 0 {
+ port-mirror-instance SPAN-1;
+ }
+ }

Now check again whether the mirror works:

bormoglotx@Server-1:~$ sudo hping3 -S -c 5 12.0.0.1 -d 40 -I eth1


HPING 12.0.0.1 (eth1 12.0.0.1): S set, 40 headers + 40 data bytes
len=40 ip=12.0.0.1 ttl=63 DF id=43901 sport=0 flags=RA seq=0 win=0 rtt=4.4 ms
len=40 ip=12.0.0.1 ttl=63 DF id=44117 sport=0 flags=RA seq=1 win=0 rtt=3.4 ms
len=40 ip=12.0.0.1 ttl=63 DF id=44217 sport=0 flags=RA seq=2 win=0 rtt=3.4 ms
len=40 ip=12.0.0.1 ttl=63 DF id=44412 sport=0 flags=RA seq=3 win=0 rtt=3.7 ms
len=40 ip=12.0.0.1 ttl=63 DF id=44416 sport=0 flags=RA seq=4 win=0 rtt=3.5 ms
--- 12.0.0.1 hping statistic ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 3.4/3.7/4.4 ms

bormoglotx@Analyzer-1:~$ sudo tcpdump -i eth1 -B 4096


tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
14:48:43.641475 IP 11.0.0.1.2237 > 12.0.0.1.0: Flags [S], seq 1075183755:1075183795, win 512, length 40
14:48:43.642024 IP 12.0.0.1.0 > 11.0.0.1.2237: Flags [R.], seq 0, ack 1075183796, win 0, length 0
14:48:44.641981 IP 11.0.0.1.2238 > 12.0.0.1.0: Flags [S], seq 1410214066:1410214106, win 512, length 40
14:48:44.642818 IP 12.0.0.1.0 > 11.0.0.1.2238: Flags [R.], seq 0, ack 1410214107, win 0, length 0
14:48:45.642022 IP 11.0.0.1.2239 > 12.0.0.1.0: Flags [S], seq 1858880488:1858880528, win 512, length 40
14:48:45.642873 IP 12.0.0.1.0 > 11.0.0.1.2239: Flags [R.], seq 0, ack 1858880529, win 0, length 0
14:48:46.642127 IP 11.0.0.1.2240 > 12.0.0.1.0: Flags [S], seq 1472273281:1472273321, win 512, length 40
14:48:46.642947 IP 12.0.0.1.0 > 11.0.0.1.2240: Flags [R.], seq 0, ack 1472273322, win 0, length 0
14:48:47.642017 IP 11.0.0.1.2241 > 12.0.0.1.0: Flags [S], seq 1810623498:1810623538, win 512, length 40
14:48:47.642601 IP 12.0.0.1.0 > 11.0.0.1.2241: Flags [R.], seq 0, ack 1810623539, win 0, length 0
^C
10 packets captured
10 packets received by filter
0 packets dropped by kernel

As a result, the entire traffic exchange between Server-1 and Server-2 landed on the analyzer, which is what we were after.

Moving on, our scheme has changed and now we have Analyzer-2, which also wants to get all the traffic between Server-1 and Server-2:

Mirroring to two or more consumers.

As a result, we have another task - we need to implement a new mirroring scheme that looks like so:

It seems like nothing complicated - create an interface towards Analyzer-2, add it to the instance, and the job is done.

[edit]
bormoglotx@RZN-PE-1# show interfaces ge-0/0/2
description Analyzer-2;
unit 0 {
family inet {
address 169.254.0.2/31 {
arp 169.254.0.3 mac 02:00:00:00:00:01;
}
}
}
[edit]
bormoglotx@RZN-PE-1# show forwarding-options port-mirroring instance SPAN-1
input {
rate 1;
run-length 0;
}
family inet {
output {
interface ge-0/0/1.0 {
next-hop 169.254.0.1;
}
interface ge-0/0/2.0 {
next-hop 169.254.0.3;
}
}
}

But when we try to add another port to the output hierarchy in the Mirroring instance, we get an error when committing:

[edit]
bormoglotx@RZN-PE-1# commit check
[edit forwarding-options port-mirroring instance SPAN-1 family inet output]
Port-mirroring configuration error
Port-mirroring out of multiple nexthops is not allowed on this platform
error: configuration check-out failed

A scary phrase at first glance - the platform's limitations prevent us from setting two next-hops at once for mirrored traffic. But this restriction is easily bypassed using next-hop groups.

I think it is already clear what a next-hop group is - the name speaks for itself. Juniper MX supports up to 30 next-hop groups, each of which can contain up to 16 next-hops. In addition, inside each next-hop group you can create next-hop subgroups. A next-hop group must contain at least two next-hops, otherwise JunOS will not allow the commit.

Now let's move on to the configuration and create the next-hop group:

[edit]
bormoglotx@RZN-PE-1# show forwarding-options next-hop-group Analyzer-servers
group-type inet;
interface ge-0/0/1.0 {
next-hop 169.254.0.1;
}
interface ge-0/0/2.0 {
next-hop 169.254.0.3;
}

And now we specify this group as the next-hop in output:

[edit]
bormoglotx@RZN-PE-1# show forwarding-options port-mirroring instance SPAN-1
input {
rate 1;
run-length 0;
}
family inet {
output {
next-hop-group Analyzer-servers;
}
}

The rest of the config does not change.


We proceed to verification. First, check what state the next-hop group is in:

bormoglotx@RZN-PE-1> show forwarding-options next-hop-group detail


Next-hop-group: Analyzer-servers
Type: inet
State: up
Number of members configured : 2
Number of members that are up : 2
Number of subgroups configured : 0
Number of subgroups that are up : 0
Members Interfaces: State
ge-0/0/1.0 next-hop 169.254.0.1 up
ge-0/0/2.0 next-hop 169.254.0.3 up

Everything is in order with the group - it is up (a group will be up if it has at least one interface up). Now check the state of the mirroring session:

bormoglotx@RZN-PE-1> show forwarding-options port-mirroring SPAN-1


Instance Name: SPAN-1
Instance Id: 2
Input parameters:
Rate : 1
Run-length : 0
Maximum-packet-length : 0
Output parameters:
Family State Destination Next-hop
inet up Analyzer-servers

Everything is also in order, but as we saw earlier, this does not mean that we did everything right and everything will take off. Therefore, let's check whether traffic is mirrored to both of our servers:

bormoglotx@Server-1:~$ sudo hping3 -S -c 5 12.0.0.1 -d 40 -I eth1


HPING 12.0.0.1 (eth1 12.0.0.1): S set, 40 headers + 40 data bytes
len=40 ip=12.0.0.1 ttl=63 DF id=64150 sport=0 flags=RA seq=0 win=0 rtt=3.4 ms
len=40 ip=12.0.0.1 ttl=63 DF id=64222 sport=0 flags=RA seq=1 win=0 rtt=3.5 ms
len=40 ip=12.0.0.1 ttl=63 DF id=64457 sport=0 flags=RA seq=2 win=0 rtt=3.7 ms
len=40 ip=12.0.0.1 ttl=63 DF id=64593 sport=0 flags=RA seq=3 win=0 rtt=3.5 ms
len=40 ip=12.0.0.1 ttl=63 DF id=64801 sport=0 flags=RA seq=4 win=0 rtt=3.4 ms
--- 12.0.0.1 hping statistic ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 3.4/3.5/3.7 ms

Traffic on Analyzer-1:

bormoglotx@Analyzer-1:~$ sudo tcpdump -i eth1 -B 4096


tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
15:09:36.837983 IP 11.0.0.1.2304 > 12.0.0.1.0: Flags [S], seq 1255230673:1255230713, win 512, length 40
15:09:36.839367 IP 12.0.0.1.0 > 11.0.0.1.2304: Flags [R.], seq 0, ack 1255230714, win 0, length 0
15:09:37.838115 IP 11.0.0.1.2305 > 12.0.0.1.0: Flags [S], seq 2135769685:2135769725, win 512, length 40
15:09:37.839054 IP 12.0.0.1.0 > 11.0.0.1.2305: Flags [R.], seq 0, ack 2135769726, win 0, length 0
15:09:38.838528 IP 11.0.0.1.2306 > 12.0.0.1.0: Flags [S], seq 1139555126:1139555166, win 512, length 40
15:09:38.839369 IP 12.0.0.1.0 > 11.0.0.1.2306: Flags [R.], seq 0, ack 1139555167, win 0, length 0
15:09:39.838328 IP 11.0.0.1.2307 > 12.0.0.1.0: Flags [S], seq 1181209811:1181209851, win 512, length 40
15:09:39.838924 IP 12.0.0.1.0 > 11.0.0.1.2307: Flags [R.], seq 0, ack 1181209852, win 0, length 0
15:09:40.838335 IP 11.0.0.1.2308 > 12.0.0.1.0: Flags [S], seq 1554756347:1554756387, win 512, length 40
15:09:40.838901 IP 12.0.0.1.0 > 11.0.0.1.2308: Flags [R.], seq 0, ack 1554756388, win 0, length 0
^C
10 packets captured
10 packets received by filter
0 packets dropped by kernel

And a similar copy of the traffic on Analyzer-2:

bormoglotx@Analyzer-2:~$ sudo tcpdump -i eth1 -B 4096


tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
15:09:35.125093 IP 11.0.0.1.2304 > 12.0.0.1.0: Flags [S], seq 1255230673:1255230713, win 512, length 40
15:09:35.126394 IP 12.0.0.1.0 > 11.0.0.1.2304: Flags [R.], seq 0, ack 1255230714, win 0, length 0
15:09:36.125044 IP 11.0.0.1.2305 > 12.0.0.1.0: Flags [S], seq 2135769685:2135769725, win 512, length 40
15:09:36.126107 IP 12.0.0.1.0 > 11.0.0.1.2305: Flags [R.], seq 0, ack 2135769726, win 0, length 0
15:09:37.125552 IP 11.0.0.1.2306 > 12.0.0.1.0: Flags [S], seq 1139555126:1139555166, win 512, length 40
15:09:37.126418 IP 12.0.0.1.0 > 11.0.0.1.2306: Flags [R.], seq 0, ack 1139555167, win 0, length 0
15:09:38.125374 IP 11.0.0.1.2307 > 12.0.0.1.0: Flags [S], seq 1181209811:1181209851, win 512, length 40
15:09:38.125930 IP 12.0.0.1.0 > 11.0.0.1.2307: Flags [R.], seq 0, ack 1181209852, win 0, length 0
15:09:39.125320 IP 11.0.0.1.2308 > 12.0.0.1.0: Flags [S], seq 1554756347:1554756387, win 512, length 40
15:09:39.125844 IP 12.0.0.1.0 > 11.0.0.1.2308: Flags [R.], seq 0, ack 1554756388, win 0, length 0
^C
10 packets captured
10 packets received by filter
0 packets dropped by kernel

Excellent - the task is completed, the traffic flows where it is needed - both consumers receive the requested copy of the traffic.

But the network is developing at a frantic pace, and our company doesn't spare money on mirroring and SORM. Now we have another server - Analyzer-3 - which also wants to receive a copy of the traffic. The difficulty is that this server is not connected locally to our RZN-PE-1, but to RZN-PE-2:

Mirroring to a remote host

In light of the foregoing, we need to redo the mirroring scheme again, now it will look like this:

Since the Analyzer-3 server sits behind RZN-PE-2, the methods we used earlier will not solve this problem. Our main problem is not how to mirror the traffic, but how to drag the already mirrored traffic to the Analyzer-3 server living behind RZN-PE-2, and how to make this transparent to the network, otherwise we will get problems (more on which later). For this, a GRE tunnel is customarily used on Juniper equipment. The idea is that we build a tunnel to a remote host, pack all the mirrored traffic into this tunnel, and send it either directly to the server or to the router that terminates the destination server. There are two options for using the GRE tunnel.

Option 1. On the router performing the mirroring, a GRE tunnel is configured, with the address of the server receiving the traffic specified as the destination. Naturally, the network where this server lives (in our case Analyzer-3) must be known via some routing protocol (BGP or IGP - it doesn't matter), otherwise the GRE tunnel simply will not come up. The drawback is that in this scenario the traffic arrives at the server together with the GRE headers. For modern traffic analysis and monitoring systems this should not be a problem - GRE is not IPSec, and the traffic is not encrypted. That is, on one side of the scale is ease of implementation, on the other an extra header. Perhaps in some scenario the extra header will not be acceptable; then you will have to use option 2.
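Before moving on to option 2, here is a minimal sketch of what option 1 boils down to (the addresses anticipate the lab configuration shown later in this article):

gr-0/0/0 {
    unit 0 {
        tunnel {
            source 62.0.0.1;          /* our own loopback */
            destination 192.168.0.1;  /* the address of the receiving server */
        }
        family inet;
    }
}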

Option 2. A GRE tunnel is brought up between the router performing the mirroring and the router terminating the server that receives the traffic (usually between loopbacks). On the side of the router performing the mirroring everything is the same as in option 1, but on the receiver side you need to configure an instance on the router that will mirror the traffic received from the GRE tunnel towards the analyzer. That is, for one mirroring session we end up needing one mirroring instance at the source and a second one at the traffic recipient, which complicates the scheme considerably. On the other hand, in this scenario clean traffic reaches the server, without any extra GRE headers. In addition, when implementing this scheme there is a rule that must be strictly observed: the router terminating the GRE tunnel endpoint must not have a route to the host specified as the destination in the mirrored traffic (that is, to the recipient of the original mirrored packet). If this condition is not met, you will get duplicate packets - the traffic will fly out of the GRE tunnel and, in addition to being mirrored to the port you specified, will also be routed like a regular IP packet. And if the router knows a route to the destination host, the traffic will be sent to it. To avoid this, the GRE interface must be placed into a separate instance of type virtual-router, although there are other methods, described below. If anyone is interested, the configuration, the essence of the problem and how to defeat it are under the spoiler:

Mirroring via gre problem


The configuration of the GRE tunnel on the source side:

bormoglotx@RZN-PE-1# show interfaces gr-0/0/0


description RSPAN;
unit 0 {
tunnel {
source 62.0.0.1;
destination 62.0.0.2;
}
family inet {
address 169.254.100.1/31;
}
}

Only the destination address of the tunnel has changed - it is now the RZN-PE-2 loopback.
On RZN-PE-2, you must first create a GRE tunnel back to RZN-PE-1:

bormoglotx@RZN-PE-2> show configuration interfaces gr-0/0/0


description SPAN;
unit 0 {
tunnel {
source 62.0.0.2;
destination 62.0.0.1;
}
family inet {
filter {
input MIRROR-RSPAN-GE0/0/1;
}
}
}

To send traffic from this interface into the mirroring instance, we hang a filter on it that looks like this:

bormoglotx@RZN-PE-2> show configuration firewall family inet filter MIRROR-RSPAN-GE0/0/1


term MIRROR {
then port-mirror-instance RSAPN;
}

Well, the final touch is to create the instance itself, bind it to the FPC, and create the interface where traffic will be sent:

bormoglotx@RZN-PE-2> show configuration forwarding-options port-mirroring instance RSAPN


input {
rate 1;
}
family inet {
output {
interface ge-0/0/1.0 {
next-hop 169.254.100.1;
}
}
}
bormoglotx@RZN-PE-2> show configuration chassis
fpc 0 {
pic 0 {
tunnel-services {
bandwidth 10g;
}
}
port-mirror-instance RSAPN;
}
bormoglotx@RZN-PE-2> show configuration interfaces ge-0/0/1
description Analyzer-3;
unit 0 {
family inet {
address 169.254.100.0/31 {
arp 169.254.100.1 mac 02:00:00:19:21:68;
}
}
}

Now run a ping between Server-1 and Server-2 and check what gets mirrored:

bormoglotx@Server-1:~$ ping 12.0.0.1 -I eth1


PING 12.0.0.1 (12.0.0.1) from 11.0.0.1 eth1: 56(84) bytes of data.
64 bytes from 12.0.0.1: icmp_seq=1 ttl=63 time=1.44 ms
64 bytes from 12.0.0.1: icmp_seq=1 ttl=60 time=3.24 ms (DUP!)

...
64 bytes from 12.0.0.1: icmp_seq=1 ttl=3 time=34.7 ms (DUP!)
^C
--- 12.0.0.1 ping statistics ---
1 packets transmitted, 1 received, +41 duplicates, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.444/17.916/34.712/9.126 ms

I deleted some of the duplicates from the output, but the scale is clear - one valid packet and 41 duplicates. On the traffic analyzer you will naturally see the same picture:

bormoglotx@Analyzer-3:~$ sudo tcpdump -i eth1 -B 9192


tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
11:52:13.275451 IP 11.0.0.1 > 12.0.0.1: ICMP echo request, id 1601, seq 1, length 64
11:52:13.275462 IP 12.0.0.1 > 11.0.0.1: ICMP echo reply, id 1601, seq 1, length 64
11:52:13.276703 IP 12.0.0.1 > 11.0.0.1: ICMP echo reply, id 1601, seq 1, length 64

In addition to mirroring, the router also forwards the packet received from the GRE tunnel, since it knows the route to the destination address. To fix this, create an instance of type virtual-router and add to it the GRE interface and the interface to which we mirror the traffic:

[edit]
bormoglotx@RZN-PE-2# show routing-instances RSPAN-VR
description "for RSPAN use only";
instance-type virtual-router;
interface gr-0/0/0.0;
interface ge-0/0/1.0;

Run the ping again and check the operation of the scheme. Now no duplicates are seen on the server:

bormoglotx@Server-1:~$ ping 12.0.0.1 -I eth1


PING 12.0.0.1 (12.0.0.1) from 11.0.0.1 eth1: 56(84) bytes of data.
64 bytes from 12.0.0.1: icmp_seq=1 ttl=63 time=2.56 ms
64 bytes from 12.0.0.1: icmp_seq=2 ttl=63 time=8.13 ms
64 bytes from 12.0.0.1: icmp_seq=3 ttl=63 time=1.33 ms
64 bytes from 12.0.0.1: icmp_seq=4 ttl=63 time=2.09 ms
64 bytes from 12.0.0.1: icmp_seq=5 ttl=63 time=2.30 ms
^C
--- 12.0.0.1 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 1.332/3.288/8.137/2.459 ms

Well, the dump on Analyzer-3 confirms the absence of duplicates:

bormoglotx@Analyzer-3:~$ sudo tcpdump -i eth1 -B 9192


tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
11:59:12.605205 IP 11.0.0.1 > 12.0.0.1: ICMP echo request, id 1602, seq 1, length 64
11:59:12.605350 IP 12.0.0.1 > 11.0.0.1: ICMP echo reply, id 1602, seq 1, length 64
11:59:13.611070 IP 11.0.0.1 > 12.0.0.1: ICMP echo request, id 1602, seq 2, length 64
11:59:13.612356 IP 12.0.0.1 > 11.0.0.1: ICMP echo reply, id 1602, seq 2, length 64
11:59:14.606350 IP 11.0.0.1 > 12.0.0.1: ICMP echo request, id 1602, seq 3, length 64
11:59:14.606739 IP 12.0.0.1 > 11.0.0.1: ICMP echo reply, id 1602, seq 3, length 64
11:59:15.612423 IP 11.0.0.1 > 12.0.0.1: ICMP echo request, id 1602, seq 4, length 64
11:59:15.612488 IP 12.0.0.1 > 11.0.0.1: ICMP echo reply, id 1602, seq 4, length 64
11:59:16.614228 IP 11.0.0.1 > 12.0.0.1: ICMP echo request, id 1602, seq 5, length 64
11:59:16.614588 IP 12.0.0.1 > 11.0.0.1: ICMP echo reply, id 1602, seq 5, length 64
^C
10 packets captured
10 packets received by filter
0 packets dropped by kernel

In addition to creating a separate instance on RZN-PE-2, you can use other methods. They are not documented anywhere - I found out about them simply by testing.

You can specify in the filter that all traffic from the tunnel should be discarded (specifically discard, since with reject Juniper would send an ICMP message back saying that the packet was filtered):

bormoglotx@RZN-PE-2# show firewall family inet filter MIRROR-RSPAN-GE0/0/1


term MIRROR {
then {
port-mirror-instance RSAPN;
discard;
}
}

If you believe the tests, everything works with such a filter:

bormoglotx@Server-1:~$ ping 12.0.0.1 -I eth1


PING 12.0.0.1 (12.0.0.1) from 11.0.0.1 eth1: 56(84) bytes of data.
64 bytes from 12.0.0.1: icmp_seq=1 ttl=63 time=2.68 ms
64 bytes from 12.0.0.1: icmp_seq=2 ttl=63 time=1.22 ms
64 bytes from 12.0.0.1: icmp_seq=3 ttl=63 time=1.96 ms
64 bytes from 12.0.0.1: icmp_seq=4 ttl=63 time=2.30 ms
64 bytes from 12.0.0.1: icmp_seq=5 ttl=63 time=1.96 ms
^C
--- 12.0.0.1 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4005ms
rtt min/avg/max/mdev = 1.220/2.028/2.685/0.487 ms

There is traffic on the analyzer:

bormoglotx@Analyzer-3:~$ sudo tcpdump -i eth1 -B 9192


tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
12:03:11.934805 IP 11.0.0.1 > 12.0.0.1: ICMP echo request, id 1604, seq 1, length 64
12:03:11.934834 IP 12.0.0.1 > 11.0.0.1: ICMP echo reply, id 1604, seq 1, length 64
12:03:12.982685 IP 11.0.0.1 > 12.0.0.1: ICMP echo request, id 1604, seq 2, length 64
12:03:12.982716 IP 12.0.0.1 > 11.0.0.1: ICMP echo reply, id 1604, seq 2, length 64
12:03:13.935027 IP 11.0.0.1 > 12.0.0.1: ICMP echo request, id 1604, seq 3, length 64
12:03:13.935607 IP 12.0.0.1 > 11.0.0.1: ICMP echo reply, id 1604, seq 3, length 64
12:03:14.936859 IP 11.0.0.1 > 12.0.0.1: ICMP echo request, id 1604, seq 4, length 64
12:03:14.937654 IP 12.0.0.1 > 11.0.0.1: ICMP echo reply, id 1604, seq 4, length 64
12:03:15.937650 IP 11.0.0.1 > 12.0.0.1: ICMP echo request, id 1604, seq 5, length 64
12:03:15.938375 IP 12.0.0.1 > 11.0.0.1: ICMP echo reply, id 1604, seq 5, length 64
^C
10 packets captured
10 packets received by filter
0 packets dropped by kernel

Another option allows us not to configure a mirroring instance on RZN-PE-2 at all. We need to make a next-hop group (if you have only one interface, like me, you need to add some kind of fake one so that JunOS will accept the commit), and hang a filter on the gre interface that will change the next-hop for incoming traffic to the one we need:

bormoglotx@RZN-PE-2> show configuration interfaces gr-0/0/0


description SPAN;
unit 0 {
tunnel {
source 62.0.0.2;
destination 62.0.0.1;
}
family inet {
filter {
input MIRROR-RSPAN-GE0/0/1;
}
}
}
bormoglotx@RZN-PE-2> show configuration firewall family inet filter MIRROR-RSPAN-GE0/0/1
term MIRROR {
then next-hop-group Analyzer-3;
}

The next-hop group came up:

bormoglotx@RZN-PE-2> show forwarding-options next-hop-group Analyzer-3 detail


Next-hop-group: Analyzer-3
Type: inet
State: up
Number of members configured : 2
Number of members that are up : 1
Number of subgroups configured : 0
Number of subgroups that are up : 0
Members Interfaces: State
ge-0/0/1.0 next-hop 169.254.100.1 up
ge-0/0/100.0 down

And now let's check if we have duplicates and whether mirroring works:

bormoglotx@Server-1:~$ ping 12.0.0.1 -I eth1 -c 5


PING 12.0.0.1 (12.0.0.1) from 11.0.0.1 eth1: 56(84) bytes of data.
64 bytes from 12.0.0.1: icmp_seq=1 ttl=63 time=3.38 ms
64 bytes from 12.0.0.1: icmp_seq=2 ttl=63 time=2.17 ms
64 bytes from 12.0.0.1: icmp_seq=3 ttl=63 time=2.14 ms
64 bytes from 12.0.0.1: icmp_seq=4 ttl=63 time=2.06 ms
64 bytes from 12.0.0.1: icmp_seq=5 ttl=63 time=1.89 ms
--- 12.0.0.1 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4006ms
rtt min/avg/max/mdev = 1.891/2.332/3.387/0.538 ms

There are no duplicates; it remains to check whether anything landed on the analyzer:

bormoglotx@Analyzer-3:~$ sudo tcpdump -i eth1 -B 9192


tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
12:19:28.306816 IP 11.0.0.1 > 12.0.0.1: ICMP echo request, id 1609, seq 1, length 64
12:19:28.306840 IP 12.0.0.1 > 11.0.0.1: ICMP echo reply, id 1609, seq 1, length 64
12:19:29.306887 IP 11.0.0.1 > 12.0.0.1: ICMP echo request, id 1609, seq 2, length 64
12:19:29.307273 IP 12.0.0.1 > 11.0.0.1: ICMP echo reply, id 1609, seq 2, length 64
12:19:30.308323 IP 11.0.0.1 > 12.0.0.1: ICMP echo request, id 1609, seq 3, length 64
12:19:30.308455 IP 12.0.0.1 > 11.0.0.1: ICMP echo reply, id 1609, seq 3, length 64
12:19:31.309897 IP 11.0.0.1 > 12.0.0.1: ICMP echo request, id 1609, seq 4, length 64
12:19:31.310117 IP 12.0.0.1 > 11.0.0.1: ICMP echo reply, id 1609, seq 4, length 64
12:19:32.313234 IP 11.0.0.1 > 12.0.0.1: ICMP echo request, id 1609, seq 5, length 64
12:19:32.313271 IP 12.0.0.1 > 11.0.0.1: ICMP echo reply, id 1609, seq 5, length 64
^C
10 packets captured
10 packets received by filter
0 packets dropped by kernel

Judging by the tests - everything works out as it should. Which option to implement in production is up to you.

We will use the first option. First we need to enable tunnel services so that we have a GRE interface (gr-x/x/x):

bormoglotx@RZN-PE-1# show chassis fpc 0


pic 0 {
tunnel-services {
bandwidth 10g;
}
}
port-mirror-instance SPAN-1;

Here it's worth returning to theory a bit and talking about tunnel interfaces and resource reservation. In this configuration I allocate 10G for tunnel services on PIC 0 of FPC 0. This does not mean that 10G of PFE bandwidth is cut off - it means that tunnel services can use no more than 10G of PFE bandwidth, and the resources not occupied by them can be used for forwarding the traffic of physical ports - that is, the 10G on the PFE is shared between tunnel services and real interfaces. But this is on MPC cards. If you are a "happy" owner of a DPC card (for example, a 4x10G card), then with the above config you will lose one port: the xe port will simply disappear from the system and become inaccessible from the CLI, and a LED will light up near the port, telling us that the port is in tunnel mode.
Secondly, a word about port numbering: if you reserve 1G, the port will be named gr-0/0/10; if you reserve 10G or more, the port will be gr-0/0/0 (the latter option is shown below).

[edit]
bormoglotx@RZN-PE-1# run show interfaces terse | match "^(gr|lt|vt)-"
gr-0/0/0 up up
lt-0/0/0 up up
vt-0/0/0 up up

On line cards with the Trio chipset, the maximum bandwidth that can be reserved for tunnel services is 60G.
Note: I would like to add that lt and vt are different interfaces. lt (logical tunnel) is usually intended for connecting logical systems or routing instances to each other - it lets you pass traffic between them as if these instances or logical systems were connected with a direct patch cord. vt (virtual tunnel), on the other hand, is a virtual loopback that is intended not for tying virtual entities together, but for looping traffic back on the PFE for a second lookup (for example, in vpls).

After creating the tunnel interfaces, we can now configure gr-0/0/0. Since we chose the option in which the remote PE router does not terminate the GRE tunnel but simply sends the traffic on towards the server, we specify our own loopback as the source address of the tunnel on RZN-PE-1 and the address of the server receiving the mirrored traffic as the destination; moreover, this address must be reachable.

As a matter of fact, the server may or may not actually have this address. You can pick it yourself and make a static ARP entry, as shown below:

[edit]
bormoglotx@RZN-PE-2# show | compare
[edit interfaces]
+ ge-0/0/1 {
+ description Analyzer-3;
+ unit 0 {
+ family inet {
+ address 192.168.0.0/31 {
+ arp 192.168.0.1 mac 02:00:00:19:21:68;
+ }
+ }
+ }
+ }
[edit protocols ospf area 0.0.0.0]
interface ge-0/0/0.0 { ... }
+ interface ge-0/0/1.0 {
+ passive;
+ }

Moreover, as can be seen from the presented configuration, the interface is added to OSPF as passive so that RZN-PE-1 learns the route to this network:

[edit]
bormoglotx@RZN-PE-1# run show route 192.168.0.1
inet.0: 20 destinations, 20 routes (20 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
192.168.0.0/31 *[OSPF/10] 00:00:16, metric 3
> to 10.0.0.0 via ge-0/0/0.0

Now create a gre tunnel on RZN-PE-1 and add it to the next-hop group:

[edit]
bormoglotx@RZN-PE-1# show interfaces gr-0/0/0
description RSPAN;
unit 0 {
tunnel {
source 62.0.0.1;
destination 192.168.0.1;
}
family inet {
address 169.254.100.1/31;
}
}
[edit]
bormoglotx@RZN-PE-1# show forwarding-options next-hop-group Analyzer-servers
group-type inet;
interface gr-0/0/0.0;
interface ge-0/0/1.0 {
next-hop 169.254.0.1;
}
interface ge-0/0/2.0 {
next-hop 169.254.0.3;
}

Unlike ge interfaces, the GRE interface is p2p, so there is no point in specifying a next-hop address - the traffic will come out at the other end anyway, although you can specify one if you wish.
Well, then everything is as usual - check the state of the mirroring session:

[edit]
bormoglotx@RZN-PE-1# run show forwarding-options next-hop-group detail
Next-hop-group: Analyzer-servers
Type: inet
State: up
Number of members configured : 3
Number of members that are up : 3
Number of subgroups configured : 0
Number of subgroups that are up : 0
Members Interfaces: State
gr-0/0/0.0 up
ge-0/0/1.0 next-hop 169.254.0.1 up
ge-0/0/2.0 next-hop 169.254.0.3 up

Well, now we check that the traffic reaches the remote server:

bormoglotx@Server-1:~$ sudo hping3 -S -c 5 12.0.0.1 -d 40 -I eth1


HPING 12.0.0.1 (eth1 12.0.0.1): S set, 40 headers + 40 data bytes
len=40 ip=12.0.0.1 ttl=63 DF id=53439 sport=0 flags=RA seq=0 win=0 rtt=8.2 ms
len=40 ip=12.0.0.1 ttl=63 DF id=53515 sport=0 flags=RA seq=1 win=0 rtt=3.5 ms
len=40 ip=12.0.0.1 ttl=63 DF id=53610 sport=0 flags=RA seq=2 win=0 rtt=3.4 ms
len=40 ip=12.0.0.1 ttl=63 DF id=53734 sport=0 flags=RA seq=3 win=0 rtt=3.8 ms
len=40 ip=12.0.0.1 ttl=63 DF id=53897 sport=0 flags=RA seq=4 win=0 rtt=3.3 ms
--- 12.0.0.1 hping statistic ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 3.3/4.4/8.2 ms

bormoglotx@Analyzer-3:~$ sudo tcpdump -i eth1 -B 4096


tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
16:34:34.923370 IP 62.0.0.1 > 192.168.0.1: GREv0, length 84: IP 11.0.0.1.2894 > 12.0.0.1.0: Flags [S], seq 1149405522:11494055
16:34:34.926586 IP 62.0.0.1 > 192.168.0.1: GREv0, length 44: IP 12.0.0.1.0 > 11.0.0.1.2894: Flags [R.], seq 0, ack 1149405563,
16:34:35.923022 IP 62.0.0.1 > 192.168.0.1: GREv0, length 84: IP 11.0.0.1.2895 > 12.0.0.1.0: Flags [S], seq 1598018315:15980183
16:34:35.923855 IP 62.0.0.1 > 192.168.0.1: GREv0, length 44: IP 12.0.0.1.0 > 11.0.0.1.2895: Flags [R.], seq 0, ack 1598018356,
16:34:36.922903 IP 62.0.0.1 > 192.168.0.1: GREv0, length 84: IP 11.0.0.1.2896 > 12.0.0.1.0: Flags [S], seq 592229199:592229239
16:34:36.924048 IP 62.0.0.1 > 192.168.0.1: GREv0, length 44: IP 12.0.0.1.0 > 11.0.0.1.2896: Flags [R.], seq 0, ack 592229240,
16:34:37.923278 IP 62.0.0.1 > 192.168.0.1: GREv0, length 84: IP 11.0.0.1.2897 > 12.0.0.1.0: Flags [S], seq 694611591:694611631
16:34:37.924765 IP 62.0.0.1 > 192.168.0.1: GREv0, length 44: IP 12.0.0.1.0 > 11.0.0.1.2897: Flags [R.], seq 0, ack 694611632,
16:34:38.924275 IP 62.0.0.1 > 192.168.0.1: GREv0, length 84: IP 11.0.0.1.2898 > 12.0.0.1.0: Flags [S], seq 1423363395:14233634
16:34:38.924291 IP 62.0.0.1 > 192.168.0.1: GREv0, length 44: IP 12.0.0.1.0 > 11.0.0.1.2898: Flags [R.], seq 0, ack 1423363436,
^C
10 packets captured
10 packets received by filter
0 packets dropped by kernel

But, as I said, the traffic carries the GRE header - and if that is not a problem for your server, this approach is the simplest and most flexible.
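By the way, if decapsulating on the analyzer itself is acceptable, Linux can strip the GRE header locally with a tunnel interface - a sketch, assuming the address 192.168.0.1 from the lab above is actually configured on the analyzer's eth1:

bormoglotx@Analyzer-3:~$ sudo ip tunnel add gre-span mode gre remote 62.0.0.1 local 192.168.0.1 dev eth1
bormoglotx@Analyzer-3:~$ sudo ip link set gre-span up
bormoglotx@Analyzer-3:~$ sudo tcpdump -i gre-span

tcpdump on gre-span would then show the inner packets with the GRE header already removed.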

But as it turned out, the owners of the servers receiving the mirrored traffic no longer want all of it - there is too much. The Analyzer-1 server needs only TCP traffic, the Analyzer-2 server only UDP traffic, while the Analyzer-3 server needs all the traffic, not limited to TCP/UDP. That is, we now need to implement this scheme:

Selective mirroring for two or more consumers

Here we need the tunnel interface vt-0/0/0 (virtual loopback), or you can use lt-0/0/0 (logical tunnel), but the former is preferable. The idea of selective mirroring is as follows: traffic from the port is first mirrored to the virtual loopback vt port, and then it is scattered from this port into different next-hop groups based on the parameters you select - protocols, ports, etc. For a better understanding of what is happening, let's assemble this scheme. First we change the mirroring instance so that the traffic is mirrored to the virtual loopback:

[edit]
bormoglotx@RZN-PE-1# show forwarding-options port-mirroring instance SPAN-1
input {
rate 1;
run-length 0;
}
family inet {
output {
interface vt-0/0/0.0;
no-filter-check;
}
}

The no-filter-check parameter is very important - this command allows us to attach a filter to the interface into which the traffic is mirrored. By default, filters are not allowed on such interfaces. Now create the vt interface itself:

[edit]
bormoglotx@RZN-PE-1# show interfaces vt-0/0/0
unit 0 {
description SPAN-USE;
family inet;
}

You cannot assign any addresses to this interface, and the set of address families that can be enabled on it is limited.

Now we have the following picture: all traffic from the ge-0/0/3 interface is directed to the vt-0/0/0.0 port. Next we need to mirror this traffic towards the different consumers. To do this, first create next-hop groups that include the necessary consumers:

[edit]
bormoglotx@RZN-PE-1# show forwarding-options next-hop-group Analyzer-TCP
group-type inet;
interface gr-0/0/0.0;
interface ge-0/0/1.0 {
next-hop 169.254.0.1;
}
[edit]
bormoglotx@RZN-PE-1# show forwarding-options next-hop-group Analyzer-UDP
group-type inet;
interface gr-0/0/0.0;
interface ge-0/0/2.0 {
next-hop 169.254.0.3;
}
[edit]
bormoglotx@RZN-PE-1# show forwarding-options next-hop-group Analyzer-default
group-type inet;
interface gr-0/0/0.0;
interface ge-0/0/100.0;

The gr-0/0/0 interface, which mirrors traffic towards Analyzer-3, is added to all three groups. This is because this server wants to receive both TCP and UDP traffic, and you cannot make a separate group for it and then also apply it in the filter - a filter term can point to only one next-hop group. Using the same next-hop in different groups is not prohibited. The Analyzer-default group also contains the ge-0/0/100.0 port - a fake port added to the group so that the configuration can be committed, since a group must have at least two interfaces.

Now we need to create a filter that will match traffic according to the criteria we need and scatter it across the next-hop groups:

[edit]
bormoglotx@RZN-PE-1# show firewall family inet filter MIRROR-SORTED
term MIRROR-TCP {
from {
protocol tcp;
}
then next-hop-group Analyzer-TCP;
}
term MIRROR-UDP {
from {
protocol udp;
}
then next-hop-group Analyzer-UDP;
}
term MIRROR-DEFAUL {
then next-hop-group Analyzer-default;
}

And attach it to the vt interface:

[edit]
bormoglotx@RZN-PE-1# show interfaces vt-0/0/0
unit 0 {
description SPAN-USE;
family inet {
filter {
input MIRROR-SORTED;
}
}
}

Let's check our design. Mirroring into the vt interface is in the up state:

bormoglotx@RZN-PE-1> show forwarding-options port-mirroring SPAN-1


Instance Name: SPAN-1
Instance Id: 2
Input parameters:
Rate : 1
Run-length : 0
Maximum-packet-length : 0
Output parameters:
Family State Destination Next-hop
inet up vt-0/0/0.0

All groups are up (remember that at least one port must be up for a group to come up):

bormoglotx@RZN-PE-1> show forwarding-options next-hop-group detail


Next-hop-group: Analyzer-TCP
Type: inet
State: up
Number of members configured : 2
Number of members that are up : 2
Number of subgroups configured : 0
Number of subgroups that are up : 0
Members Interfaces: State
gr-0/0/0.0 up
ge-0/0/1.0 next-hop 169.254.0.1 up
Next-hop-group: Analyzer-UDP
Type: inet
State: up
Number of members configured : 2
Number of members that are up : 2
Number of subgroups configured : 0
Number of subgroups that are up : 0
Members Interfaces: State
gr-0/0/0.0 up
ge-0/0/2.0 next-hop 169.254.0.3 up
Next-hop-group: Analyzer-default
Type: inet
State: up
Number of members configured : 2
Number of members that are up : 1
Number of subgroups configured : 0
Number of subgroups that are up : 0
Members Interfaces: State
gr-0/0/0.0 up
ge-0/0/100.0 down

Well, now we will generate 5 ICMP, 5 TCP and 5 UDP packets and see what reaches which server. Enable tcpdump on all the analyzer servers at the same time. I used hping3 with the --rand-source switch, so we won't see any return traffic, since traffic is captured only on the port towards Server-1.
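The exact generator commands are not shown here; judging by the flags used earlier and the dumps below, they were presumably along these lines (the ICMP run apparently used an unreachable type):

bormoglotx@Server-1:~$ sudo hping3 -S -c 5 12.0.0.1 -d 40 --rand-source -I eth1
bormoglotx@Server-1:~$ sudo hping3 --udp -c 5 12.0.0.1 -d 40 --rand-source -I eth1
bormoglotx@Server-1:~$ sudo hping3 --icmp --icmptype 3 -c 5 12.0.0.1 --rand-source -I eth1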

So, let's look at what we caught on Analyzer-1 - there should be only TCP:

bormoglotx@Analyzer-1:~$ sudo tcpdump -i eth1 -B 9192


tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
19:58:25.457641 IP 108.215.126.169.1668 > 12.0.0.1.0: Flags [S], seq 1842749676:1842749716, win 512, length 40
19:58:26.458098 IP 230.181.170.188.1669 > 12.0.0.1.0: Flags [S], seq 1810452177:1810452217, win 512, length 40
19:58:27.459245 IP 112.6.155.46.1670 > 12.0.0.1.0: Flags [S], seq 1524555644:1524555684, win 512, length 40
19:58:28.459006 IP 50.45.169.23.1671 > 12.0.0.1.0: Flags [S], seq 1362080290:1362080330, win 512, length 40
19:58:29.459294 IP 135.146.14.177.1672 > 12.0.0.1.0: Flags [S], seq 2122009219:2122009259, win 512, length 40
^C
5 packets captured
5 packets received by filter
0 packets dropped by kernel

Now let's check what landed on Analyzer-2 (there should be only UDP traffic here):

bormoglotx@Analyzer-2:~$ sudo tcpdump -i eth1 -B 9192


tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
19:58:09.340702 IP 132.43.66.243.1121 > 12.0.0.1.0: UDP, length 40
19:58:10.341308 IP 205.172.124.143.1122 > 12.0.0.1.0: UDP, length 40
19:58:11.341239 IP 253.127.33.120.1123 > 12.0.0.1.0: UDP, length 40
19:58:12.341204 IP 246.68.75.25.1124 > 12.0.0.1.0: UDP, length 40
19:58:13.341819 IP 95.89.126.64.1125 > 12.0.0.1.0: UDP, length 40
^C
5 packets captured
5 packets received by filter
0 packets dropped by kernel

That leaves Analyzer-3 - there we catch everything in a row, so the total number of packets should be 15 (5 UDP / 5 TCP / 5 ICMP):

bormoglotx@Analyzer-3:~$ sudo tcpdump -i eth1 -B 9192


tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
19:58:11.782669 IP 62.0.0.1 > 192.168.0.1: GREv0, length 72: IP 132.43.66.243.1121 > 12.0.0.1.0: UDP, length 40
19:58:12.783508 IP 62.0.0.1 > 192.168.0.1: GREv0, length 72: IP 205.172.124.143.1122 > 12.0.0.1.0: UDP, length 40
19:58:13.783166 IP 62.0.0.1 > 192.168.0.1: GREv0, length 72: IP 253.127.33.120.1123 > 12.0.0.1.0: UDP, length 40
19:58:14.782758 IP 62.0.0.1 > 192.168.0.1: GREv0, length 72: IP 246.68.75.25.1124 > 12.0.0.1.0: UDP, length 40
19:58:15.783594 IP 62.0.0.1 > 192.168.0.1: GREv0, length 72: IP 95.89.126.64.1125 > 12.0.0.1.0: UDP, length 40
19:58:18.310249 IP 62.0.0.1 > 192.168.0.1: GREv0, length 100: IP 65.173.140.215 > 12.0.0.1: ICMP net 5.6.7.8 unreachable, leng
19:58:19.311045 IP 62.0.0.1 > 192.168.0.1: GREv0, length 100: IP 171.91.95.222 > 12.0.0.1: ICMP net 5.6.7.8 unreachable, lengt
19:58:20.312496 IP 62.0.0.1 > 192.168.0.1: GREv0, length 100: IP 228.215.127.12 > 12.0.0.1: ICMP net 5.6.7.8 unreachable, leng
19:58:21.311067 IP 62.0.0.1 > 192.168.0.1: GREv0, length 100: IP 214.149.191.71 > 12.0.0.1: ICMP net 5.6.7.8 unreachable, leng
19:58:22.311398 IP 62.0.0.1 > 192.168.0.1: GREv0, length 100: IP 119.130.166.53 > 12.0.0.1: ICMP net 5.6.7.8 unreachable, leng
19:58:26.186528 IP 62.0.0.1 > 192.168.0.1: GREv0, length 84: IP 108.215.126.169.1668 > 12.0.0.1.0: Flags [S], seq 1842749676:1
19:58:27.187385 IP 62.0.0.1 > 192.168.0.1: GREv0, length 84: IP 230.181.170.188.1669 > 12.0.0.1.0: Flags [S], seq 1810452177:1
19:58:28.188726 IP 62.0.0.1 > 192.168.0.1: GREv0, length 84: IP 112.6.155.46.1670 > 12.0.0.1.0: Flags [S], seq 1524555644:1524
19:58:29.188846 IP 62.0.0.1 > 192.168.0.1: GREv0, length 84: IP 50.45.169.23.1671 > 12.0.0.1.0: Flags [S], seq 1362080290:1362
19:58:30.188499 IP 62.0.0.1 > 192.168.0.1: GREv0, length 84: IP 135.146.14.177.1672 > 12.0.0.1.0: Flags [S], seq 2122009219:21
^C
15 packets captured
15 packets received by filter
0 packets dropped by kernel

Well, everything that had to be implemented is done - traffic is mirrored and scattered among the consumers as intended.

Above we mirrored L3 traffic, but the Juniper MX series routers are very flexible devices and allow you to mirror not only IP traffic (the inet/inet6 families) but also L2 traffic, for example vpls or l2ckt (xconnect in Cisco terms).

Local mirroring of L2 traffic

Let us consider the simplest case: you need to spy on what is transmitted inside an L2CKT (this is certainly not a good thing to do, since the client whose traffic you send to the analyzer does not even know about it, and such things should be done only with the customer's consent). The scheme is simple - an L2CKT is stretched between RZN-PE-1 and RZN-PE-2:

That is, we need to implement such a mirroring scheme:

Between RZN-PE-1 and RZN-PE-2 runs the L2CKT that we want to listen to:

bormoglotx@RZN-PE-1> show configuration protocols l2circuit


neighbor 62.0.0.2 {
interface ge-0/0/6.100 {
virtual-circuit-id 100;
}
}
bormoglotx@RZN-PE-1> show configuration interfaces ge-0/0/6.100
encapsulation vlan-ccc;
vlan-id 100;
family ccc {
filter {
input MIRROR-L2CKT-SPAN-1;
output MIRROR-L2CKT-SPAN-1;
}
}

It is logical that the ccc family is enabled on the interface - this is an L2CKT after all. In the configuration a filter is already attached to the interface in both directions, since we want to receive all the traffic that passes through our L2CKT. The filter is essentially the same as before, only the address family is not inet but ccc:

bormoglotx@RZN-PE-1> show configuration firewall family ccc filter MIRROR-L2CKT-SPAN-1


term MIRROR {
then port-mirror-instance SPAN-1;
}

Next, we set up the mirroring instance that we want to use. There are no changes in the input section - everything is as before - but in the output section there are significant differences:

bormoglotx@RZN-PE-1> show configuration forwarding-options port-mirroring instance SPAN-1


input {
rate 1;
run-length 0;
}
family ccc {
output {
interface ge-0/0/1.0;
}
}

The address family has changed - it is now ccc. This entails inevitable changes in the configuration of the interface into which we want to send the traffic. If we try to specify some next-hop address, as we did earlier on a non-p2p interface, we will not succeed:

bormoglotx@RZN-PE-1# set forwarding-options port-mirroring instance SPAN-1 family ccc output interface ge-0/0/1 ?
Possible completions:
<[Enter]> Execute this command
+ apply-groups Groups from which to inherit configuration data
+ apply-groups-except Don't inherit configuration data from these groups
no-filter-check Do not check for filters on port-mirroring interface
| Pipe through a command

We simply have no such option. Therefore, on the interface to which we need to send the traffic, the bridge or ccc family must be enabled:

[edit]
bormoglotx@RZN-PE-1# show interfaces ge-0/0/1
description Analyzer-1;
encapsulation ethernet-ccc;
unit 0 {
family ccc;
}

The ccc family is naturally easier to use, but if you are set on using bridge, do not forget one important nuance: an interface with bridge encapsulation must be placed into a bridge domain (you can use vlan-id none in the domain, so that you do not take real vlan numbers away from the other services just for mirroring).
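A sketch of that bridge variant (interface and domain names are illustrative, not from the lab):

bormoglotx@RZN-PE-1# show interfaces ge-0/0/1
description Analyzer-1;
encapsulation ethernet-bridge;
unit 0 {
    family bridge;
}
bormoglotx@RZN-PE-1# show bridge-domains SPAN-BD
vlan-id none;
interface ge-0/0/1.0;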

Everything is ready; let's check the state of the mirroring session:

bormoglotx@RZN-PE-1> show forwarding-options port-mirroring


Instance Name: SPAN-1
Instance Id: 2
Input parameters:
Rate : 1
Run-length : 0
Maximum-packet-length : 0
Output parameters:
Family State Destination Next-hop
ccc up ge-0/0/1.0

Everything is fine - the session is up. Now run a ping between our hosts and check what arrives on the analyzer:

bormoglotx@TEST-1> ping routing-instance VR-1 10.0.0.2 count 5


PING 10.0.0.2 (10.0.0.2): 56 data bytes
64 bytes from 10.0.0.2: icmp_seq=0 ttl=64 time=10.159 ms
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=11.136 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=9.723 ms
64 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=7.754 ms
64 bytes from 10.0.0.2: icmp_seq=4 ttl=64 time=10.619 ms
--- 10.0.0.2 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 7.754/9.878/11.136/1.162 ms

Here is what we managed to collect:

bormoglotx@Analyzer-1:~$ sudo tcpdump -i eth1 -B 9192


tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
23:44:31.948237 IP 10.0.0.1 > 10.0.0.2: ICMP echo request, id 17420, seq 0, length 64
23:44:31.954408 IP 10.0.0.2 > 10.0.0.1: ICMP echo reply, id 17420, seq 0, length 64
23:44:32.955149 IP 10.0.0.1 > 10.0.0.2: ICMP echo request, id 17420, seq 1, length 64
23:44:32.964115 IP 10.0.0.2 > 10.0.0.1: ICMP echo reply, id 17420, seq 1, length 64
23:44:33.967789 IP 10.0.0.1 > 10.0.0.2: ICMP echo request, id 17420, seq 2, length 64
23:44:33.973388 IP 10.0.0.2 > 10.0.0.1: ICMP echo reply, id 17420, seq 2, length 64
23:44:34.975442 IP 10.0.0.1 > 10.0.0.2: ICMP echo request, id 17420, seq 3, length 64
23:44:34.980370 IP 10.0.0.2 > 10.0.0.1: ICMP echo reply, id 17420, seq 3, length 64
23:44:35.986900 IP 10.0.0.1 > 10.0.0.2: ICMP echo request, id 17420, seq 4, length 64
23:44:35.992213 IP 10.0.0.2 > 10.0.0.1: ICMP echo reply, id 17420, seq 4, length 64
^C
10 packets captured
10 packets received by filter
0 packets dropped by kernel

All of the packets reached the analyzer, which is exactly what we wanted.

Now consider a more complex scheme - we need to configure mirroring for interfaces located in a bridge domain or in a virtual switch, and this time we will not send the copy to some local port, as we did above, but deliver this traffic to a remote box.

Mirroring L2 traffic to a remote server

The first thought is that everything is simple: you can use a gre tunnel. But, unfortunately, gre does not support ccc / tcc / vpls / bridge encapsulation. Still, JunOS has plenty of different tools that let you solve the same problem in different ways, and sometimes something seems impossible to do, yet in the end everything takes off after the Nth amount of time and the Nth number of smoked manuals. It's the same here. Now we will assemble this rather complex
scheme:

I will explain what we do and why. So, we mirror the traffic from the virtual switch (an L2CKT or a bridge domain would work the same way) into the mirroring instance, but the traffic is mirrored not to some physical interface, rather to the virtual tunnel interface lt-0/0/0. There is one such interface per box, and its units are created in pairs called peer-units - one unit is the input end of the tunnel and the second unit is the output. As a result, everything that falls into one unit will fly out of the unit associated with it, and vice versa. On this interface we enable ccc encapsulation and build an L2CKT from it to the remote router that terminates the recipient server - that is, we simply send the L2 traffic through the L2CKT directly to the remote server. For the remote router this will be a plain L2CKT.
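One caveat worth checking in advance: the lt- interface (like gr-) exists on the MX only once tunnel services are enabled on a PIC. A minimal sketch of that knob, with the FPC/PIC numbers and bandwidth depending on your hardware (the same configuration appears again in the chassis section later in this article):

[edit]
bormoglotx@RZN-PE-1# show chassis fpc 0
pic 0 {
tunnel-services {
bandwidth 10g;
}
}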

Now let's move on to the configuration. The server-facing interfaces are in access mode and sit in a virtual switch:

bormoglotx@RZN-PE-1# wildcard range show interfaces ge-0/0/[3-4]


description Server-1;
encapsulation ethernet-bridge;
unit 0 {
family bridge {
filter {
input MIRROR-BRIDGE-vSwitch-1;
}
interface-mode access;
vlan-id 100;
}
}
description Server-2;
encapsulation ethernet-bridge;
unit 0 {
family bridge {
filter {
input MIRROR-BRIDGE-vSwitch-1;
}
interface-mode access;
vlan-id 100;
}
}
[edit]
bormoglotx@RZN-PE-1# show routing-instances vSwitch-1
instance-type virtual-switch;
interface ge-0/0/3.0;
interface ge-0/0/4.0;
bridge-domains {
BD100 {
vlan-id 100;
}
}

Filters are hung on the interfaces to mirror incoming traffic into the SPAN-1 instance. The filter is no different from the ones used earlier, except for the family - in this scenario bridge is used:

[edit]
bormoglotx@RZN-PE-1# show firewall family bridge filter MIRROR-BRIDGE-vSwitch-1
term MIRROR {
then port-mirror-instance SPAN-1;
}

Now create the SPAN-1 instance:

[edit]
bormoglotx@RZN-PE-1# show forwarding-options port-mirroring instance SPAN-1
input {
rate 1;
run-length 0;
}
family vpls {
output {
interface lt-0/0/0.0;
}
}

There is a small nuance. The address family specified is not bridge - you will not even find such a family in the instance configuration - but vpls. This family (vpls) is used to mirror traffic from vpls / bridge domains.

Next, we create the tunnel interface to which we want to send the traffic:

[edit]
bormoglotx@RZN-PE-1# show interfaces lt-0/0/0
unit 0 {
description RSPAN-IN;
encapsulation ethernet-ccc;
peer-unit 1;
family ccc;
}
unit 1 {
description RSPAN-OUT;
encapsulation ethernet-ccc;
peer-unit 0;
family ccc;
}

As I wrote earlier, the lt interface consists of two units - in our case units 0 and 1. Everything that flies into unit 0 will fly out through unit 1. In general, one end of such a pair can be L3 (for example inet) and the other L2 (for example ccc) - and this will work. We have ccc on both ends: on unit 0 because traffic has to be mirrored into an instance with the ccc / bridge / vpls family, and on unit 1 because the L2CKT is built from that unit.
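As an illustration of the mixed case mentioned above, a hypothetical lt- pair with inet on one end and ccc on the other could look roughly like this (the unit numbers, descriptions and address are made up for the example, not part of this lab):

unit 2 {
description L3-SIDE;
encapsulation ethernet;
peer-unit 3;
family inet {
address 169.254.100.1/30;
}
}
unit 3 {
description L2-SIDE;
encapsulation ethernet-ccc;
peer-unit 2;
family ccc;
}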

Next, create an L2CKT between RZN-PE-1 and RZN-PE-2. From the side of RZN-PE-1:

[edit]
bormoglotx@RZN-PE-1# show protocols l2circuit
neighbor 62.0.0.2 {
interface lt-0/0/0.1 {
virtual-circuit-id 1;
encapsulation-type ethernet;
}
}

From the side of RZN-PE-2:

bormoglotx@RZN-PE-2> show configuration protocols l2circuit


neighbor 62.0.0.1 {
interface ge-0/0/1.0 {
virtual-circuit-id 1;
encapsulation-type ethernet;
}
}
bormoglotx@RZN-PE-2> show configuration interfaces ge-0/0/1
description Analyzer-3;
encapsulation ethernet-ccc;
unit 0 {
family ccc;
}

Now let's check whether our Frankenstein works. First, look at the state of the L2CKT:

[edit]
bormoglotx@RZN-PE-1# run show l2circuit connections | find ^Nei
Neighbor: 62.0.0.2
Interface Type St Time last up # Up trans
lt-0/0/0.1(vc 1) rmt Up Sep 2 07:28:05 2017 1
Remote PE: 62.0.0.2, Negotiated control-word: Yes (Null)
Incoming label: 299840, Outgoing label: 299872
Negotiated PW status TLV: No
Local interface: lt-0/0/0.1, Status: Up, Encapsulation: ETHERNET
Flow Label Transmit: No, Flow Label Receive: No

Great, the L2CKT is up. Next, check the state of the mirroring session:

[edit]
bormoglotx@RZN-PE-1# run show forwarding-options port-mirroring SPAN-1
Instance Name: SPAN-1
Instance Id: 2
Input parameters:
Rate : 1
Run-length : 0
Maximum-packet-length : 0
Output parameters:
Family State Destination Next-hop
vpls up lt-0/0/0.0

Everything is fine; now run a ping between Server-1 and Server-2 and see what reaches the traffic analyzer:

bormoglotx@Server-1:~$ ping 11.0.0.2 -I 11.0.0.1 -c 5 -i 0.2


PING 11.0.0.2 (11.0.0.2) from 11.0.0.1 : 56(84) bytes of data.
64 bytes from 11.0.0.2: icmp_seq=1 ttl=64 time=3.86 ms
64 bytes from 11.0.0.2: icmp_seq=2 ttl=64 time=2.34 ms
64 bytes from 11.0.0.2: icmp_seq=3 ttl=64 time=2.30 ms
64 bytes from 11.0.0.2: icmp_seq=4 ttl=64 time=9.56 ms
64 bytes from 11.0.0.2: icmp_seq=5 ttl=64 time=1.43 ms
--- 11.0.0.2 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 803ms
rtt min/avg/max/mdev = 1.436/3.903/9.565/2.937 ms

Now go to Analyzer-3 and see what got into tcpdump:

bormoglotx@Analyzer-3:~$ sudo tcpdump -i eth1 -B 9192


tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
10:48:46.296920 IP 11.0.0.1 > 11.0.0.2: ICMP echo request, id 2000, seq 1, length 64
10:48:46.297969 IP 11.0.0.2 > 11.0.0.1: ICMP echo reply, id 2000, seq 1, length 64
10:48:46.496380 IP 11.0.0.1 > 11.0.0.2: ICMP echo request, id 2000, seq 2, length 64
10:48:46.497647 IP 11.0.0.2 > 11.0.0.1: ICMP echo reply, id 2000, seq 2, length 64
10:48:46.700540 IP 11.0.0.1 > 11.0.0.2: ICMP echo request, id 2000, seq 3, length 64
10:48:46.700547 IP 11.0.0.2 > 11.0.0.1: ICMP echo reply, id 2000, seq 3, length 64
10:48:46.897518 IP 11.0.0.1 > 11.0.0.2: ICMP echo request, id 2000, seq 4, length 64
10:48:46.907024 IP 11.0.0.2 > 11.0.0.1: ICMP echo reply, id 2000, seq 4, length 64
10:48:47.098414 IP 11.0.0.1 > 11.0.0.2: ICMP echo request, id 2000, seq 5, length 64
10:48:47.098799 IP 11.0.0.2 > 11.0.0.1: ICMP echo reply, id 2000, seq 5, length 64
10:48:51.307134 ARP, Request who-has 11.0.0.1 tell 11.0.0.2, length 46
10:48:51.307542 ARP, Reply 11.0.0.1 is-at 00:50:01:00:07:00 (oui Unknown), length 46
^C
12 packets captured
12 packets received by filter
0 packets dropped by kernel

Well, besides our pings, the ARP request and reply also got into the dump, which proves that all the traffic is mirrored - exactly what we need.

Well, in conclusion, recall my earlier note that a maximum of two mirroring instances can be bound to the same FPC. But what if you need to use three instances?

Of course, you can use two user-defined instances plus the default instance (of which there is only one), but firstly this is not the best solution, and secondly, what if the default instance is already taken? Naturally, JunOS allows you to get around this limitation. There is nothing supernatural about it - the principle of operation is the same, and the changes concern only the configuration of the instances.

Using more than two mirroring instances on the same FPC

So, the main point is to create a link between several mirroring instances: a parent instance and the child instances that refer to it. In the parent instance we specify the input parameters - that is, the mirroring/sampling rate and the maximum packet length. In the child instances the output parameters are specified - interfaces or next-hop groups - while the input parameters are inherited from the parent instance named in the configuration. Without configs this is hard to grasp, so let's put together a mirroring scheme like this:

First, create the parent instance; I simply called it SPAN:

bormoglotx@RZN-PE-1# show forwarding-options port-mirroring instance SPAN


input {
rate 1;
run-length 0;
}

Only the incoming mirroring parameters are specified in this instance. Nothing else needs to be indicated here.

Now create three child instances:

[edit]
bormoglotx@RZN-PE-1# show forwarding-options port-mirroring instance SPAN-1
input-parameters-instance SPAN;
family inet {
output {
interface ge-0/0/1.0 {
next-hop 169.254.0.1;
}
}
}
[edit]
bormoglotx@RZN-PE-1# show forwarding-options port-mirroring instance SPAN-2
input-parameters-instance SPAN;
family inet {
output {
interface ge-0/0/2.0 {
next-hop 169.254.0.3;
}
}
}
[edit]
bormoglotx@RZN-PE-1# show forwarding-options port-mirroring instance SPAN-3
input-parameters-instance SPAN;
family inet {
output {
interface gr-0/0/0.0;
}
}

In these we already specify the outgoing mirroring parameters. The link between a parent and a child instance is made with the following command:

input-parameters-instance SPAN;
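In set form (the instance names are the ones I defined above), the binding for the first child instance would be:

set forwarding-options port-mirroring instance SPAN-1 input-parameters-instance SPAN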

As a result, all three instances I created, SPAN-1/2/3, will inherit the input parameters from the SPAN instance. As you remember, we now need to bind the instances to an FPC (or to several FPCs, if the incoming ports are on different cards). As I said earlier, only the parent instance needs to be bound to the FPC:

bormoglotx@RZN-PE-1# show chassis fpc 0


pic 0 {
tunnel-services {
bandwidth 10g;
}
}
port-mirror-instance SPAN;

Well, then everything is the same - we create filters and hang them on the incoming ports:

bormoglotx@RZN-PE-1# wildcard range show interfaces ge-0/0/[3-5]


description Server-1;
unit 0 {
family inet {
filter {
input MIRROR>>>SPAN-3;
output MIRROR>>>SPAN-3;
}
address 11.0.0.254/24;
}
}
description Server-2;
unit 0 {
family inet {
filter {
input MIRROR>>>SPAN-2;
output MIRROR>>>SPAN-2;
}
address 12.0.0.254/24;
}
}
description Server-3;
unit 0 {
family inet {
filter {
input MIRROR>>>SPAN-1;
output MIRROR>>>SPAN-1;
}
address 13.0.0.254/24;
}
}

Please note that the filters reference not the parent instance but the child instances:

[edit]
bormoglotx@RZN-PE-1# wildcard range show firewall family inet filter MIRROR>>>SPAN-[1-3]
term MIRROR {
then port-mirror-instance SPAN-1;
}
term MIRROR {
then port-mirror-instance SPAN-2;
}
term MIRROR {
then port-mirror-instance SPAN-3;
}

Now check the state of the mirroring sessions:

bormoglotx@RZN-PE-1# run show forwarding-options port-mirroring


Instance Name: SPAN-1
Instance Id: 3
Input parameters:
Rate : 1
Run-length : 0
Maximum-packet-length : 0
Output parameters:
Family State Destination Next-hop
inet up gr-0/0/0.0
Instance Name: SPAN-2
Instance Id: 4
Input parameters:
Rate : 1
Run-length : 0
Maximum-packet-length : 0
Output parameters:
Family State Destination Next-hop
inet up ge-0/0/2.0 169.254.0.3
Instance Name: SPAN-3
Instance Id: 5
Input parameters:
Rate : 1
Run-length : 0
Maximum-packet-length : 0
Output parameters:
Family State Destination Next-hop
inet up ge-0/0/1.0 169.254.0.1

This output shows that the traffic mirroring sessions are up and that the incoming traffic processing parameters are inherited from the parent instance. I will not show the verification outputs, to keep the article shorter - I think that after reading this article you will be able to assemble such a scheme yourself and check that it works.

It seems I have written everything I wanted to write. If you have comments or additions - write them.

Thanks for your attention.

Tags: juniper mx mirroring span rspan