Traffic Mirroring On Juniper MX
Today we'll talk about traffic mirroring on the Juniper MX Series routers. After the switches made by Cisco, Huawei or Arista,
the configuration of SPAN and RSPAN on JunOS may seem very complicated, but this (at first glance) complex configuration
hides the huge capabilities of the MX platform in the field of traffic mirroring. Although the Juniper approach looks complicated
at first, it becomes simple and understandable once you stop blindly copy-pasting configs from one box to another
and understand what is being done and why. The JunOS ideology suggests using filter-based forwarding (FBF) for mirroring
purposes, which gives us a lot of flexibility in implementing complex traffic mirroring schemes.
Our test bench will change constantly: we will add consumers, change their requirements for the copy of traffic they
receive, and so on. At the first stage the test bench looks like this:
Note: at first I wanted to use a traffic generator, but then I decided that tcp/udp/icmp packets generated by hping3
and captured on a traffic analyzer (an ordinary host running Ubuntu Server 14.04 served as the analyzer) would be more
illustrative than bare per-port counters in pps (for example, you can compare what was sent against what was received).
Counters should be used during load testing, to check router performance when this feature is enabled. But there is
no point in testing performance on a virtual MX: everything will be limited by the capabilities of the virtualization
server anyway.
Suppose that there is some traffic exchange between the servers Server-1 (11.0.0.1) and Server-2 (12.0.0.1). The
owners of the Analyzer-1 server want to see what exactly is transferred between these two servers, so we need to configure
sending a copy of all traffic between Server-1 and Server-2 to Analyzer-1; that is, do local mirroring. So let's get started.
In theory, this should work like this: we create a mirroring instance in which we specify the parameters of the incoming
traffic (how often to mirror it) and the output parameters (to which port or ports to send the copy). In order
to direct traffic into the instance we created, we hang a special filter on the interface or interfaces from which we want
to take a copy of the traffic, and that filter wraps our traffic into the desired instance. That is, this is a classic
policy-based routing scheme, or filter-based forwarding in Juniper terms. We've figured out the theory, now let's get to
practice: we need to build such a mirroring scheme:
First we need to create an instance in the [edit forwarding-options port-mirroring] hierarchy, which we will use to mirror
traffic.
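A minimal SPAN-1 instance for this first scheme might look like this (a sketch consistent with the outputs shown further on; 169.254.0.1 is the fictitious analyzer address discussed below):
[edit]
bormoglotx@RZN-PE-1# show forwarding-options port-mirroring instance SPAN-1
input {
    rate 1;
    run-length 0;
}
family inet {
    output {
        interface ge-0/0/1.0 {
            next-hop 169.254.0.1;
        }
    }
}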
The instance configuration consists of two parts. First, we'll deal with the input section; as it's not hard to guess, these are
the parameters of the incoming traffic that should be mirrored. The rate and run-length parameters are important here. The
first parameter sets how often packets will be mirrored (how often the trigger fires), the second parameter sets how many
more packets will be mirrored after the rate trigger fires.
In our case, rate is set to 1, that is, every packet will be mirrored. Run-length is set to 0, because with a rate of 1 it
plays no role.
For completeness, let's analyze the meaning of these parameters in a more illustrative example. The rate parameter sets
the frequency of traffic mirroring; suppose rate is 5, that is, the trigger will fire on every 5th packet, which means that every
5th packet will be mirrored. Now suppose that run-length is set to 4. This tells us that 4 more packets will be mirrored after
every 5th packet. That is, the trigger fires on the 5th packet and this packet is mirrored, and then the 4 packets
following it are mirrored as well. As a result, we mirror every 5th packet plus the 4 packets following it, i.e. 100% of
the traffic. By changing these parameters, you can mirror, for example, every 10 packets out of 100, etc. (though this
is more useful for sampling than for mirroring).
If we return to our case, we already mirror every packet, so we simply do not need the run-length parameter and leave it
at the default value of zero.
To calculate the percentage of mirrored traffic, you can use the formula % = ((run-length + 1) / rate) * 100. It follows that
with run-length 1 and rate 1 we would get mirroring of 200% of the traffic, or, for example, with rate 1 and run-length 4,
500% of the traffic. Whether this upsets or pleases you, more than 100% of the traffic is not mirrored: Juniper will not
multiply packets, which is more than logical. And I could not come up with a scenario where you would need two copies of
the same traffic (if anyone knows one, write in the comments).
Another important parameter is maximum-packet-length. This is the maximum packet size that will be mirrored. If you
set it, for example, to 128, then when a packet larger than 128 bytes arrives (say, 1514 bytes), the first 128 bytes
will be cut from it and sent to the consumer; the rest of the packet will simply be discarded. This is convenient when you
only need to send headers to the server and do not need the payload. It is not recommended to set it below 20 for ipv4.
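For example, to mirror only the first 128 bytes of each packet, one could set (a hypothetical knob, not used in our scheme):
[edit]
bormoglotx@RZN-PE-1# set forwarding-options port-mirroring instance SPAN-1 input maximum-packet-length 128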
Now let's move on to the output parameters. In the general case, we need to specify the interface to which we are going to
mirror the traffic. When it is a p2p interface, you don't need to specify anything else; everything will just fly.
But as we all remember, ethernet is far from p2p (to be exact, it is csma/cd), so in addition to the interface we need to
specify the address of the host the traffic is destined for, both IP and MAC (we will get to that later). I chose an address
from the link-local range to avoid any intersections with existing addresses; you can take any addressing, it will
not change the way the technology works at all. In ethernet, in order to send a packet to some host, the
router needs to find out the MAC address of that host via ARP. In my case, nothing is configured on the recipient
server side; it is just an empty interface, so the router would vainly try to resolve the address of the remote
host, and naturally all mirroring would end there. How do we deal with this situation? Everything ingenious is simple: a
static ARP entry is made:
[edit]
bormoglotx@RZN-PE-1# run show arp interface ge-0/0/1.0
MAC Address Address Name Interface Flags
02:00:00:00:00:01 169.254.0.1 169.254.0.1 ge-0/0/1.0 permanent
Here I would like to dwell on this in more detail. Theoretically, you can send traffic to some real address configured on the
server, but the simplest and most flexible approach is to create a fictitious IP address and ARP entry for the traffic
consumer. That is, we simply make Juniper think that a host with the specified IP/MAC address sits behind the specified
interface, which ultimately makes the box blindly send traffic there without checking whether the specified host really
exists; the main thing is that the port is up. Using a static ARP entry for mirroring has a great advantage: a static ARP
entry does not expire, so the router will not send ARP requests toward the analyzer (requests which could otherwise fall
into the dump of the captured traffic, which is not very good).
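The interface configuration behind the ARP entry above might look like this (a sketch; the /31 addressing mirrors the ge-0/0/2 config shown later):
[edit]
bormoglotx@RZN-PE-1# show interfaces ge-0/0/1
description Analyzer-1;
unit 0 {
    family inet {
        address 169.254.0.0/31 {
            arp 169.254.0.1 mac 02:00:00:00:00:01;
        }
    }
}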
Now, in order for traffic to be mirrored, we need to somehow wrap it into the instance we created. To do this, we use
filter-based forwarding: we create a filter and apply it to the interface we are interested in:
[edit]
bormoglotx@RZN-PE-1# show firewall family inet filter MIRROR>>>SPAN-1
term MIRROR {
then port-mirror-instance SPAN-1;
}
[edit]
bormoglotx@RZN-PE-1# show interfaces ge-0/0/3
description Server-1;
unit 0 {
family inet {
filter {
input MIRROR>>>SPAN-1;
output MIRROR>>>SPAN-1;
}
address 11.0.0.254/24;
}
}
Since we need to collect both incoming and outgoing traffic, we hang the filter in both directions.
As practice shows, this filter does not block the traffic between the servers themselves, so there is no need to
write an accept action, although it is often added just to be safe.
It appears mirroring is up and running. Let's now send 5 packets from Server-1 to Server-2 and see what we can catch on
Analyzer-1:
Everything is not so rosy: the output shows that nothing actually works, although Juniper told us in the
output above that everything is ok. The thing is, you can either create a mirroring instance yourself (which we did) or
use the default instance (there is exactly one per box). If we create the instance ourselves, we must associate this
instance with the FPC on which we do the mirroring (if the ports are on several FPCs, we associate it with each of them).
Let's go back to Juniper and specify our instance in the FPC configuration. Why do I focus on this? Because I have run
into this a couple of times myself and could not understand what the catch was; after all, the outputs say that
everything is fine.
[edit]
bormoglotx@RZN-PE-1# show | compare
[edit]
+ chassis {
+ fpc 0 {
+ port-mirror-instance SPAN-1;
+ }
+ }
As a result, the entire traffic exchange between Server-1 and Server-2 landed on the analyzer, which is what we were after.
Moving on, our scheme has changed: now we also have Analyzer-2, which wants to get all the traffic between Server-1
and Server-2 as well:
As a result, we have a new task: we need to implement a new mirroring scheme that looks like this:
It seems there is nothing complicated here: create an interface toward Analyzer-2, add it to the instance, and the job
is done.
[edit]
bormoglotx@RZN-PE-1# show interfaces ge-0/0/2
description Analyzer-2;
unit 0 {
family inet {
address 169.254.0.2/31 {
arp 169.254.0.3 mac 02:00:00:00:00:01;
}
}
}
[edit]
bormoglotx@RZN-PE-1# show forwarding-options port-mirroring instance SPAN-1
input {
rate 1;
run-length 0;
}
family inet {
output {
interface ge-0/0/1.0 {
next-hop 169.254.0.1;
}
interface ge-0/0/2.0 {
next-hop 169.254.0.3;
}
}
}
But when we try to add another port to the output hierarchy of the mirroring instance, we get an error on commit:
[edit]
bormoglotx@RZN-PE-1# commit check
[edit forwarding-options port-mirroring instance SPAN-1 family inet output]
Port-mirroring configuration error
Port-mirroring out of multiple nexthops is not allowed on this platform
error: configuration check-out failed
A scary phrase at first glance: the platform's limitations prevent us from configuring two next-hops at once for mirrored
traffic. But this restriction is very easily bypassed with next-hop groups.
I think it is already clear what a next-hop group is; the name speaks for itself. Juniper MX supports up to 30 next-hop
groups, each of which can hold up to 16 next-hops. In addition, within each next-hop group you can create next-hop
subgroups. A next-hop group must contain at least two next-hops, otherwise JunOS will not let you commit.
Now let's move on to the configuration and create the next-hop group:
[edit]
bormoglotx@RZN-PE-1# show forwarding-options next-hop-group Analyzer-servers
group-type inet;
interface ge-0/0/1.0 {
next-hop 169.254.0.1;
}
interface ge-0/0/2.0 {
next-hop 169.254.0.3;
}
[edit]
bormoglotx@RZN-PE-1# show forwarding-options port-mirroring instance SPAN-1
input {
rate 1;
run-length 0;
}
family inet {
output {
next-hop-group Analyzer-servers;
}
}
Everything is in order with the group; it is up (a group is up if at least one of its interfaces is up). Now check
the state of the mirroring session:
Everything is also in order, but as we saw earlier, that does not mean we did everything right and everything will take
off. Therefore, let's check whether traffic is mirrored to both of our servers.
Traffic on Analyzer-1:
Excellent. The task is completed, the traffic flows where it is needed, and both consumers receive the requested copy of
the traffic.
But the network is developing at a frantic pace, and our company does not spare money on mirroring and SORM. Now we
have another server, Analyzer-3, which also wants to receive a copy of the traffic. The difficulty is that this server is not
connected locally to our RZN-PE-1, but to RZN-PE-2:
In light of the above, we need to redo the mirroring scheme again; now it will look like this:
Since the Analyzer-3 server sits behind RZN-PE-2, the methods we used earlier will not solve this problem. Our main
problem is not how to mirror the traffic, but how to drag the already mirrored traffic to the Analyzer-3
server living behind RZN-PE-2, and to make this transparent to the network, otherwise we will get problems (more on this
later). On Juniper equipment it is customary to use a gre tunnel for this. The idea is that we build a tunnel to a remote host,
pack all mirrored traffic into this tunnel and send it either directly to a server or to a router that terminates the
destination server. There are two options for using a gre tunnel.
Option 1. On the router that performs the mirroring, a gre tunnel is configured, and the address of the server
receiving the traffic is specified as the destination. Naturally, the network where this server lives (in our case Analyzer-3)
must be known via some routing protocol (BGP or IGP, it does not matter), otherwise the gre tunnel simply will not come
up. The catch is that in this scenario traffic arrives at the server together with the gre headers. For modern traffic analysis
and monitoring systems this should not be a problem: gre is not IPSec and the traffic is not encrypted. So on one side of
the scale we have ease of implementation, on the other an extra header. Perhaps in some scenario the extra
header will not be acceptable, and then you will have to use option 2.
Option 2. A gre tunnel is brought up between the router that performs the mirroring and the router that terminates the
server receiving the traffic (usually between loopbacks). On the source side everything is the same as in option 1, but on
the receiver side you need to configure an instance on the router that will mirror the traffic received from the gre tunnel
toward the analyzer. That is, for one mirror we need one mirroring instance at the source and a second one at the
recipient of the traffic, which considerably complicates the scheme. On the other hand, in this scenario clean traffic flows
to the server, without any extra gre headers. In addition, when implementing this scheme there is a rule that must be
strictly observed: the router that terminates the gre tunnel endpoint must not have a route to the host that appears as
the destination in the mirrored traffic (that is, the recipient of the original mirrored packet). If this condition is not met,
you will receive duplicate packets: the traffic will fly out of the gre tunnel and, in addition to being mirrored to the port
you specified, will also be routed like a regular ip packet. And if the router knows the route to the destination host, the
traffic will be sent to it. To avoid this, the gre interface must be placed in a separate instance of the virtual-router type,
although there are other methods, described below. If anyone is interested, the configuration, the essence of the problem
and how to defeat it are under the spoiler:
Only the destination address of the tunnel has changed; it has become the RZN-PE-2 loopback.
On RZN-PE-2, you must first create a gre tunnel toward RZN-PE-1.
To send traffic from this interface into the mirroring instance, we hang a filter on it that wraps everything into the instance.
And the final touch: create the instance itself, bind it to the fpc and create the interface to which the traffic will be sent:
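The listings for these three steps might look roughly as follows. This is a sketch, assuming the loopbacks 62.0.0.1 (RZN-PE-1) and 62.0.0.2 (RZN-PE-2) used elsewhere in the article; the filter and instance names are mine:
[edit]
bormoglotx@RZN-PE-2# show interfaces gr-0/0/0
description "RSPAN from RZN-PE-1";
unit 0 {
    tunnel {
        source 62.0.0.2;
        destination 62.0.0.1;
    }
    family inet {
        filter {
            input MIRROR-FROM-GRE;
        }
    }
}
[edit]
bormoglotx@RZN-PE-2# show firewall family inet filter MIRROR-FROM-GRE
term MIRROR {
    then port-mirror-instance SPAN-RSPAN;
}
[edit]
bormoglotx@RZN-PE-2# show forwarding-options port-mirroring instance SPAN-RSPAN
input {
    rate 1;
}
family inet {
    output {
        interface ge-0/0/1.0 {
            next-hop 192.168.0.1;
        }
    }
}
[edit]
bormoglotx@RZN-PE-2# show chassis fpc 0
port-mirror-instance SPAN-RSPAN;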
Now run a ping between Server-1 and Server-2 and check what we get mirrored:
I deleted some of the duplicates from the output, but their number is visible: one valid packet and 41 duplicates.
On the traffic analyzer you will naturally see the same picture:
In addition to mirroring, the router also forwards the packet received from the gre tunnel, since it knows the route to the
destination address. To fix this, we create an instance of the virtual-router type and add to it the gre interface and the
interface to which we mirror the traffic:
[edit]
bormoglotx@RZN-PE-2# show routing-instances RSPAN-VR
description "for RSPAN use only";
instance-type virtual-router;
interface gr-0/0/0.0;
interface ge-0/0/1.0;
Run the ping again and check the operation of the scheme. Now no duplicates are seen on the server:
And the dump on Analyzer-3 confirms the absence of duplicates:
Besides creating a separate instance on RZN-PE-2, you can use other methods. They are not documented anywhere; I
found out about them simply by testing.
You can specify in the filter that all traffic from the tunnel should be discarded (specifically discard: with reject,
Juniper would send an icmp message back saying the packet was filtered).
Judging by the tests, everything works with such a filter:
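A minimal sketch of such a filter, assuming it is applied as input on gr-0/0/0.0 and the local mirroring instance is called SPAN-RSPAN:
[edit]
bormoglotx@RZN-PE-2# show firewall family inet filter MIRROR-AND-DISCARD
term MIRROR {
    then {
        port-mirror-instance SPAN-RSPAN;
        discard;
    }
}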
Another option allows us not to configure a mirroring instance on RZN-PE-2 at all. We make a next-hop group (if you
have only one interface, like me, you need to add some fake one so that JunOS lets you commit), and hang a filter on
the gre interface that changes the next-hop of the incoming traffic to the one we need:
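A sketch of this variant with assumed names; the fake ge-0/0/100.0 member exists only to satisfy the two-interface requirement, and the filter goes as input on gr-0/0/0.0:
[edit]
bormoglotx@RZN-PE-2# show forwarding-options next-hop-group To-Analyzer-3
group-type inet;
interface ge-0/0/1.0 {
    next-hop 192.168.0.1;
}
interface ge-0/0/100.0;
[edit]
bormoglotx@RZN-PE-2# show firewall family inet filter REDIRECT>>>Analyzer-3
term REDIRECT {
    then next-hop-group To-Analyzer-3;
}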
Now let's check that there are no duplicates and that mirroring works:
There are no duplicates; it remains to check whether anything landed on the analyzer:
Judging by the tests, everything works as it should. Which option to implement in production is up to you.
We will use the first option. First we need to enable tunnel services so that we get a gre interface (gr-X/X/X):
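The chassis part might look like this (a sketch; the 10G reservation on PIC 0 of FPC 0 is exactly what is discussed below):
[edit]
bormoglotx@RZN-PE-1# show chassis fpc 0 pic 0
tunnel-services {
    bandwidth 10g;
}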
Here it is worth returning to theory a little and talking about tunnel interfaces and resource reservation. In this
configuration I allocate 10G for tunnel services on PIC 0 of FPC 0. This does not mean that 10G of pfe bandwidth is cut
off; it means that tunnel services may use at most 10G of pfe bandwidth, and whatever they do not occupy can be used
for forwarding the traffic of the physical ports. That is, the 10G on the pfe is shared between tunnel services and real
interfaces. But this is on MPC cards. If you are a "happy" owner of a DPC card (say, a 4x10GE one), then with the above
config you will lose one port: the xe port will simply disappear from the system and become inaccessible from the cli,
and a light near the port will turn on, telling us that the port is in tunnel mode. Unfortunately, nothing can be done
about that on DPC.
Another point concerns port numbering: if you reserve 1G, the port will be numbered gr-0/0/10; if you reserve
10G or more, the port will be gr-0/0/0 (the latter variant is shown below).
[edit]
bormoglotx@RZN-PE-1# run show interfaces terse | match "^(gr|lt|vt)-"
gr-0/0/0 up up
lt-0/0/0 up up
vt-0/0/0 up up
On line cards with the TRIO chipset, the maximum bandwidth you can reserve for tunnel services is 60G.
Note: I would like to add that lt and vt are different interfaces. lt is a logical tunnel, usually intended
for connecting logical systems or routing instances to each other: it lets you pass traffic between them as if these
instances or logical systems were connected by a direct patch cord. vt, in turn, is a virtual tunnel (a virtual loopback),
intended not for tying together virtual entities but for turning traffic around on the pfe for a re-lookup (for example,
in vpls).
After creating the tunnel interfaces, we can now configure gr-0/0/0. Since we settled on the option in
which the remote PE router does not terminate the gre tunnel but simply hands the traffic to the server, we specify our
own loopback as the tunnel source on RZN-PE-1 and the address of the server receiving the mirrored traffic as the
destination; moreover, this address must be reachable.
As a matter of fact, the server may or may not actually have this address configured. You can pick it yourself and make
a static ARP entry, as shown below:
[edit]
bormoglotx@RZN-PE-2# show | compare
[edit interfaces]
+ ge-0/0/1 {
+ description Analyzer-3;
+ unit 0 {
+ family inet {
+ address 192.168.0.0/31 {
+ arp 192.168.0.1 mac 02:00:00:19:21:68;
+ }
+ }
+ }
+ }
[edit protocols ospf area 0.0.0.0]
interface ge-0/0/0.0 { ... }
+ interface ge-0/0/1.0 {
+ passive;
+ }
Moreover, as can be seen from the presented configuration, the interface is added to ospf as passive so that RZN-PE-1
learns the route to this network:
[edit]
bormoglotx@RZN-PE-1# run show route 192.168.0.1
inet.0: 20 destinations, 20 routes (20 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both
192.168.0.0/31 *[OSPF/10] 00:00:16, metric 3
> to 10.0.0.0 via ge-0/0/0.0
Now create a gre tunnel on RZN-PE-1 and add it to the next-hop group:
[edit]
bormoglotx@RZN-PE-1# show interfaces gr-0/0/0
description RSPAN;
unit 0 {
tunnel {
source 62.0.0.1;
destination 192.168.0.1;
}
family inet {
address 169.254.100.1/31;
}
}
[edit]
bormoglotx@RZN-PE-1# show forwarding-options next-hop-group Analyzer-servers
group-type inet;
interface gr-0/0/0.0;
interface ge-0/0/1.0 {
next-hop 169.254.0.1;
}
interface ge-0/0/2.0 {
next-hop 169.254.0.3;
}
Unlike the ge interfaces, the gre interface is p2p, so there is no point in specifying a next-hop address: the traffic will
come out at the other end anyway, although you may specify one.
Well, then everything is as usual; check the state of the group and of the mirroring session:
[edit]
bormoglotx@RZN-PE-1# run show forwarding-options next-hop-group detail
Next-hop-group: Analyzer-servers
Type: inet
State: up
Number of members configured : 3
Number of members that are up : 3
Number of subgroups configured : 0
Number of subgroups that are up : 0
Members Interfaces: State
gr-0/0/0.0 up
ge-0/0/1.0 next-hop 169.254.0.1 up
ge-0/0/2.0 next-hop 169.254.0.3 up
Well, now we check that the traffic reaches the remote server:
But, as I said, the traffic arrives with a gre header; if that is not a problem for your server, then this approach is the
simplest and most flexible.
But as it turned out, the owners of the servers receiving mirrored traffic no longer want to get all of it: there is too much
now. The Analyzer-1 server needs only TCP traffic, the Analyzer-2 server only UDP traffic, while the
Analyzer-3 server needs all the traffic, not just TCP/UDP. That is, now we need to implement this scheme:
Here we need the tunnel interface vt-0/0/0 (virtual loopback); alternatively you can use lt-0/0/0 (logical tunnel), but the
first is preferable. The idea of selective mirroring is as follows: traffic from the port is first mirrored to the virtual
loopback vt port, and from that port it is scattered into different next-hop groups based on the parameters you
select: protocols, ports, etc. For a better understanding of what is happening, let's now assemble this scheme. First, we
change the mirroring instance so that traffic is mirrored to the virtual loopback:
[edit]
bormoglotx@RZN-PE-1# show forwarding-options port-mirroring instance SPAN-1
input {
rate 1;
run-length 0;
}
family inet {
output {
interface vt-0/0/0.0;
no-filter-check;
}
}
The no-filter-check parameter is very important: this command allows us to attach a filter to the interface into which traffic
is mirrored. By default, filters are not allowed on such mirroring output interfaces. Now create the vt interface itself:
[edit]
bormoglotx@RZN-PE-1# show interfaces vt-0/0/0
unit 0 {
description SPAN-USE;
family inet;
}
You cannot hang any addresses on this interface, and the set of address families that can be enabled on it is limited.
Now we have the following picture: all traffic from the ge-0/0/3 interface is directed to the vt-0/0/0.0 port. Next we need
to mirror this traffic toward the different consumers. To do this, we first create next-hop groups that include the necessary
consumers:
[edit]
bormoglotx@RZN-PE-1# show forwarding-options next-hop-group Analyzer-TCP
group-type inet;
interface gr-0/0/0.0;
interface ge-0/0/1.0 {
next-hop 169.254.0.1;
}
[edit]
bormoglotx@RZN-PE-1# show forwarding-options next-hop-group Analyzer-UDP
group-type inet;
interface gr-0/0/0.0;
interface ge-0/0/2.0 {
next-hop 169.254.0.3;
}
[edit]
bormoglotx@RZN-PE-1# show forwarding-options next-hop-group Analyzer-default
group-type inet;
interface gr-0/0/0.0;
interface ge-0/0/100.0;
The gr-0/0/0 interface, which mirrors traffic toward Analyzer-3, is added to all three groups. This is because that
server wants to receive both TCP and UDP traffic, and you cannot make a separate group for it and then also apply it in
the filter. Using the same next-hop in different groups is not prohibited. The Analyzer-default group also contains the
ge-0/0/100.0 port: this is a fake port, added to the group only to make the configuration commit possible, since a group
must have at least two interfaces.
Now we need to create a filter that will match traffic according to the criteria we need and scatter it across the next-hop
groups:
[edit]
bormoglotx@RZN-PE-1# show firewall family inet filter MIRROR-SORTED
term MIRROR-TCP {
from {
protocol tcp;
}
then next-hop-group Analyzer-TCP;
}
term MIRROR-UDP {
from {
protocol udp;
}
then next-hop-group Analyzer-UDP;
}
term MIRROR-DEFAUL {
then next-hop-group Analyzer-default;
}
[edit]
bormoglotx@RZN-PE-1# show interfaces vt-0/0/0
unit 0 {
description SPAN-USE;
family inet {
filter {
input MIRROR-SORTED;
}
}
}
All groups are in operation (remember that at least one port must be up for a group to go up):
Well, now let's generate 5 icmp, 5 tcp and 5 udp packets and see which server gets what, with tcpdump enabled on all
client servers at the same time. I used hping3 with the --rand-source switch, so we won't see any return traffic, since
traffic is only captured on the port toward Server-1.
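For reference, hping3 invocations along these lines produce such traffic (run toward Server-2 at 12.0.0.1; the destination ports here are arbitrary choices of mine):
# 5 TCP SYN packets with random source addresses
hping3 -c 5 -S -p 80 --rand-source 12.0.0.1
# 5 UDP packets with random source addresses
hping3 -c 5 -2 -p 53 --rand-source 12.0.0.1
# 5 ICMP echo requests with random source addresses
hping3 -c 5 -1 --rand-source 12.0.0.1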
Now let's check what landed on Analyzer-2 (there should be only UDP traffic here):
That leaves Analyzer-3; there we catch everything in a row, so the total number of packets should be 15 (5 UDP / 5 TCP /
5 ICMP):
Well, everything that had to be implemented is done: the traffic is mirrored and scattered among the consumers as intended.
Above we mirrored L3 traffic, but the Juniper MX series routers are very flexible devices, and they allow you to mirror not
only IP traffic (the inet/inet6 families) but also L2 traffic, for example vpls or l2ckt (xconnect in Cisco terms).
Let us consider the simplest case: you need to peek at what is transmitted inside an L2CKT (this is certainly not a good
thing to do, since the client whose traffic you wrap to the analyzer does not even know about it, and such things should
be done only with the consent of the customer). The scheme is simple: some L2CKT is stretched between RZN-PE-1 and RZN-PE-2:
Between RZN-PE-1 and RZN-PE-2 there is an L2CKT that we want to listen to:
It is logical that the ccc family is enabled on the interface; this is an L2CKT after all. In the configuration, a filter is
already hung on both directions of the interface we need, since we want to receive all the traffic that passes through our
L2CKT. The filter is essentially the same as before, only the address family is not inet but ccc:
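A sketch of such a filter, following the same naming convention as before:
[edit]
bormoglotx@RZN-PE-1# show firewall family ccc filter MIRROR-CCC>>>SPAN-1
term MIRROR {
    then port-mirror-instance SPAN-1;
}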
Next we set up the mirroring instance we want to use. There are no changes in the input section, everything is as
before, but in the output section there are significant differences:
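A sketch of the instance with the ccc family (the output interface matches the Analyzer-1 config shown a few lines below):
[edit]
bormoglotx@RZN-PE-1# show forwarding-options port-mirroring instance SPAN-1
input {
    rate 1;
    run-length 0;
}
family ccc {
    output {
        interface ge-0/0/1.0;
    }
}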
Our address family has changed; now it is ccc. This entails inevitable changes in the configuration of the interface to
which we want to send the traffic. If we try to specify some next-hop address, as we did earlier on a non-p2p interface,
nothing will come of it:
bormoglotx@RZN-PE-1# set forwarding-options port-mirroring instance SPAN-1 family ccc output interface ge-0/0/1 ?
Possible completions:
<[Enter]> Execute this command
+ apply-groups Groups from which to inherit configuration data
+ apply-groups-except Don't inherit configuration data from these groups
no-filter-check Do not check for filters on port-mirroring interface
| Pipe through a command
We simply do not have that option. Therefore, on the interface to which we need to send the traffic, we have to enable
the bridge or ccc family:
[edit]
bormoglotx@RZN-PE-1# show interfaces ge-0/0/1
description Analyzer-1;
encapsulation ethernet-ccc;
unit 0 {
family ccc;
}
The ccc family is naturally simpler to use, but if you are set on using bridge, do not forget an important nuance: an
interface with bridge encapsulation must be placed into a bridge domain (you can use vlan number none in the domain,
so that you do not take real vlan numbers away from the other services for the sake of mirroring).
Everything is fine, the session is up. Now run the ping between our hosts and check what arrives at the analyzer:
As expected, all the packets reached the analyzer, which is what we were after.
Now consider a more complex scheme: we need to configure mirroring for interfaces located in a bridge domain or in a
virtual switch, and moreover not send the copy to some local port, as we did above, but deliver this traffic to a remote
box.
The first thought is that everything is simple: we can use the gre tunnel. But, unfortunately, gre does not support
ccc/tcc/vpls/bridge encapsulation. Still, Junos has a lot of different tools that let you solve the same problem in
different ways, and sometimes something seems unrealistic to do, but in the end everything takes off after the Nth
amount of time and the Nth number of manuals smoked through. It is the same here. Now we will assemble this rather
complex scheme:
Let me explain what is going on and why. We mirror the traffic from the virtual switch (an L2CKT or a bridge domain
would work the same way) into the mirroring instance, and the traffic is mirrored not to some physical interface but to
the logical tunnel interface lt-0/0/0. There is one such interface per box, and its units are created in pairs called
peer-units: one unit is the input end of the tunnel, the second is the output. As a result, everything that falls into one
unit flies out of the unit paired with it, and vice versa. On this interface we enable ccc encapsulation and build an L2CKT
from it to the remote router that terminates the recipient server; that is, we simply send the L2 traffic through an L2CKT
directly to the remote server. For the remote router this is an ordinary L2CKT.
Now let's move on to the configuration. The server-facing interfaces are in access mode and sit in a virtual switch:
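A sketch of what this might look like; the vlan number and the second server port are assumptions, while the instance name vSwitch-1 matches the filter name used below:
[edit]
bormoglotx@RZN-PE-1# show interfaces ge-0/0/3
description Server-1;
encapsulation ethernet-bridge;
unit 0 {
    family bridge {
        interface-mode access;
        vlan-id 100;
    }
}
[edit]
bormoglotx@RZN-PE-1# show routing-instances vSwitch-1
instance-type virtual-switch;
interface ge-0/0/3.0;
interface ge-0/0/4.0;
bridge-domains {
    BD-100 {
        vlan-id 100;
    }
}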
Filters are hung on these interfaces to mirror incoming traffic into the SPAN-1 instance. The filter is no different from
the ones used earlier, except for the family: in this scenario it is bridge:
[edit]
bormoglotx@RZN-PE-1# show firewall family bridge filter MIRROR-BRIDGE-vSwitch-1
term MIRROR {
then port-mirror-instance SPAN-1;
}
[edit]
bormoglotx@RZN-PE-1# show forwarding-options port-mirroring instance SPAN-1
input {
rate 1;
run-length 0;
}
family vpls {
output {
interface lt-0/0/0.0;
}
}
There is a small nuance here. The address family specified is not bridge (you will not even find such a family in the
instance configuration) but vpls. The vpls family is used to mirror traffic from vpls and bridge domains alike.
[edit]
bormoglotx@RZN-PE-1# show interfaces lt-0/0/0
unit 0 {
description RSPAN-IN;
encapsulation ethernet-ccc;
peer-unit 1;
family ccc;
}
unit 1 {
description RSPAN-OUT;
encapsulation ethernet-ccc;
peer-unit 0;
family ccc;
}
As I wrote earlier, the lt interface consists of two units, in our case units 0 and 1. Everything that flies into unit 0 will fly
out through unit 1. In general, one end of such a pair can be L3 (for example inet) and the other L2 (for example ccc),
and this will work. We have ccc on both ends: on unit 0 because traffic must be mirrored into an instance with the
ccc/bridge/vpls family, and on unit 1 because the L2CKT is built from that unit.
Next, create an L2CKT between RZN-PE-1 and RZN-PE-2. From the RZN-PE-1 side:
[edit]
bormoglotx@RZN-PE-1# show protocols l2circuit
neighbor 62.0.0.2 {
interface lt-0/0/0.1 {
virtual-circuit-id 1;
encapsulation-type ethernet;
}
}
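The RZN-PE-2 side is symmetrical; a sketch, assuming Analyzer-3 hangs off ge-0/0/1 with ccc encapsulation:
[edit]
bormoglotx@RZN-PE-2# show interfaces ge-0/0/1
description Analyzer-3;
encapsulation ethernet-ccc;
unit 0 {
    family ccc;
}
[edit]
bormoglotx@RZN-PE-2# show protocols l2circuit
neighbor 62.0.0.1 {
    interface ge-0/0/1.0 {
        virtual-circuit-id 1;
        encapsulation-type ethernet;
    }
}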
Now we can check whether our Frankenstein works. First, let's look at the state of the L2CKT:
[edit]
bormoglotx@RZN-PE-1# run show l2circuit connections | find ^Nei
Neighbor: 62.0.0.2
Interface Type St Time last up # Up trans
lt-0/0/0.1(vc 1) rmt Up Sep 2 07:28:05 2017 1
Remote PE: 62.0.0.2, Negotiated control-word: Yes (Null)
Incoming label: 299840, Outgoing label: 299872
Negotiated PW status TLV: No
Local interface: lt-0/0/0.1, Status: Up, Encapsulation: ETHERNET
Flow Label Transmit: No, Flow Label Receive: No
Great, the L2CKT is up. Next, check the state of the mirroring session:
[edit]
bormoglotx@RZN-PE-1# run show forwarding-options port-mirroring SPAN-1
Instance Name: SPAN-1
Instance Id: 2
Input parameters:
Rate : 1
Run-length : 0
Maximum-packet-length : 0
Output parameters:
Family State Destination Next-hop
vpls up lt-0/0/0.0
Everything is fine; now run the ping between the Server-1 and Server-2 servers and see what reaches the traffic analyzer:
Well, in addition to our pings, the arp request and response also got into the dump, which proves that all the traffic is
mirrored, which is exactly what we need.
Well, in conclusion, recall that I wrote that a maximum of two mirroring instances can be bound to the same fpc. But what
if you need to use three instances?
Of course, you can use two user-defined instances plus the default instance (of which there is only one), but firstly this
is not the best solution, and secondly, what if the default instance is already taken? Naturally, JunOS allows you to get
around this limitation. There is nothing supernatural about it: the principle of operation is the same, the changes concern
only the configuration of the instances.
So, the main point is to create a link between several mirroring instances: a parent instance is made, and child instances
refer to it. In the parent instance we specify the input parameters, that is, the mirroring/sampling rate and the maximum
packet size. In the child instances only the output parameters are specified (interfaces or next-hop groups), while the
input parameters are inherited from the parent instance referenced in the configuration. Without configs this is hard to
grasp, so let's put together a mirroring scheme like this:
Only the incoming mirroring parameters are specified in the parent instance; nothing else needs to be indicated here.
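A sketch of such a parent instance, reusing the input parameters from earlier:
[edit]
bormoglotx@RZN-PE-1# show forwarding-options port-mirroring instance SPAN
input {
    rate 1;
    run-length 0;
}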
[edit]
bormoglotx@RZN-PE-1# show forwarding-options port-mirroring instance SPAN-1
input-parameters-instance SPAN;
family inet {
output {
interface ge-0/0/1.0 {
next-hop 169.254.0.1;
}
}
}
[edit]
bormoglotx@RZN-PE-1# show forwarding-options port-mirroring instance SPAN-2
input-parameters-instance SPAN;
family inet {
output {
interface ge-0/0/2.0 {
next-hop 169.254.0.3;
}
}
}
[edit]
bormoglotx@RZN-PE-1# show forwarding-options port-mirroring instance SPAN-3
input-parameters-instance SPAN;
family inet {
output {
interface gr-0/0/0.0 {
}
}
Here we already specify the output mirroring parameters. The link between a parent and a child instance is made with
the following command:
input-parameters-instance SPAN;
As a result, all three instances I created, SPAN-1/2/3, will inherit the input parameters from the SPAN instance. As you
remember, we now need to bind the instances to some FPC (or several, if the mirrored ports sit on different cards). As I
said earlier, only the parent instance needs to be bound to the FPC:
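By analogy with what we did for SPAN-1 at the very beginning, the binding would look like this:
[edit]
bormoglotx@RZN-PE-1# show chassis fpc 0
port-mirror-instance SPAN;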
Well, then everything is the same: we create filters and hang them on the incoming ports:
Please note that the filters reference not the parent instance but the child instances:
[edit]
bormoglotx@RZN-PE-1# wildcard range show firewall family inet filter MIRROR>>>SPAN-[1-3]
term MIRROR {
then port-mirror-instance SPAN-1;
}
term MIRROR {
then port-mirror-instance SPAN-2;
}
term MIRROR {
then port-mirror-instance SPAN-3;
}
The output shows that the traffic mirroring sessions are up and the input processing parameters are inherited from the
parent instance. I will not show the verification outputs here, in order to keep the article shorter: I think that after
reading it you will be able to assemble such a scheme yourself and check that it works.
That seems to be everything I wanted to write about. If you have comments or additions, write them.