IOS Features
load balancing. I've done a lot of NAT in my career, but most of it has been on an ASA. Some of these
features are not so obvious on IOS, and I've sometimes had a hard time producing
specific functionality when I had to do anything beyond a basic NAT or PAT. Here, I have deep-dived
every NAT feature I can find, including use cases. We will start by introducing the easy features and work up to the more obscure ones.
The subnet between R1, R2, R3 and R4 will be 192.168.0.0/24, with the fourth octet being the router
number. The link between R4 and R5 will be 30.0.0.0/24 with the fourth octet being the router number.
Each router will have a loopback of X.X.X.X where X is its router number (e.g. R3 = 3.3.3.3). R4 will be doing all of our NAT.
R1, R2, and R3 all have a default route pointing towards R4. These will be our "inside". R5 doesn't have a
route for anything other than its own loopback and its directly connected 30.0.0.0/24 segment. This will be
our "outside".
R4(config)#int fa0/1
R4(config-if)#ip nat inside
R4(config-if)#int fa0/0
R4(config-if)#ip nat outside
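(Sketch of the static translation in play here - a plain one-to-one static:)
R4(config)#ip nat inside source static 192.168.0.1 30.0.0.20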
..!!!
R5#debug ip icmp
R5#
*Mar 1 00:26:35.911: ICMP: echo reply sent, src 30.0.0.5, dst 30.0.0.20
*Mar 1 00:26:37.899: ICMP: echo reply sent, src 30.0.0.5, dst 30.0.0.20
*Mar 1 00:26:37.979: ICMP: echo reply sent, src 30.0.0.5, dst 30.0.0.20
*Mar 1 00:26:38.027: ICMP: echo reply sent, src 30.0.0.5, dst 30.0.0.20
Really straightforward. This flips the source address from 192.168.0.1 to 30.0.0.20 when moving from
inside to outside. From outside to inside the destination address will be flipped from 30.0.0.20 back to
192.168.0.1. R4 will ARP for 30.0.0.20 on the outside, which we can see via the alias table:
Interface 4.4.4.4
Interface 30.0.0.4
Dynamic 30.0.0.20
Interface 192.168.0.4
If for some reason we don't want R4 to ARP for 30.0.0.20, we could use no-alias:
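(Sketch - the same static with the no-alias keyword appended:)
R4(config)#ip nat inside source static 192.168.0.1 30.0.0.20 no-alias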
R4(config)#do sh ip alias
Interface 4.4.4.4
Interface 30.0.0.4
Interface 192.168.0.4
R5#clear arp
R5#ping 30.0.0.20
.....
R5#ping 30.0.0.20
.!!!!
R1#debug ip icmp
R1#
*Mar 1 00:38:39.007: ICMP: echo reply sent, src 192.168.0.1, dst 30.0.0.5
*Mar 1 00:38:39.067: ICMP: echo reply sent, src 192.168.0.1, dst 30.0.0.5
*Mar 1 00:38:39.103: ICMP: echo reply sent, src 192.168.0.1, dst 30.0.0.5
*Mar 1 00:38:39.175: ICMP: echo reply sent, src 192.168.0.1, dst 30.0.0.5
Let's create some more traffic and check out the NAT table.
R1#telnet 30.0.0.5
There's some interesting stuff here. We see the entry created by our nat statement:
We'll go over this more further down the document. Let's focus on:
Why is this here? I thought this was NAT, not PAT, so we shouldn't need all these port numbers. For that
matter we don't even care about the outside local/global addresses, really.
This is because of a feature activated by ip nat create flow-entries. This is a default-on feature to
accelerate the NAT process. If you want to disable it, you'd use:
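R4(config)#no ip nat create flow-entries ! disables the default flow-entry creation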
R1#telnet 30.0.0.5
That's more what you'd expect to see, even if it is slower. I've now re-enabled ip nat create flow-entries.
Static PATs.
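(Sketch of the static PAT being described - the same statement shows up again below with no-payload appended:)
R4(config)#ip nat inside source static tcp 192.168.0.1 19 30.0.0.20 5000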
This should map port 19 (chargen) on the inside to port 5000 on the outside.
R1(config)#service tcp-small-servers ! enable chargen on R1
R5#telnet 30.0.0.20
!"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefg
!"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefgh
"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghi
What if we were translating some protocol that needed an ALG (Application Layer Gateway)? Turns out
IOS's NAT process has some fixups built-in for applications that contain IP and port information inside the
packet. This process happens by default. If you want to disable it, you'd use:
ip nat inside source static tcp 192.168.0.1 19 30.0.0.20 5000 no-payload
Dynamic NAT.
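(Sketch of the kind of configuration this section describes - the ACL number is an assumption; the "nat-pool" name is referenced again further down:)
R4(config)#access-list 1 permit 192.168.0.0 0.0.0.255
R4(config)#ip nat pool nat-pool 30.0.0.50 30.0.0.70 netmask 255.255.255.0
R4(config)#ip nat inside source list 1 pool nat-pool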
This will perform a 1:1 NAT translation, dynamically, for the first 20 hosts on 192.168.0.0/24 on to 30.0.0.50
through 70.
R1#telnet 30.0.0.5
We see that 192.168.0.1 has translated to 30.0.0.50 as expected. Now that this is set up, we'll see that the translation is reversible from the outside:
R5#telnet 30.0.0.50
R2#ping 30.0.0.5
!!!!!
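(Sketch of the variation being described next - the behavior sounds like a match-host pool; the pool name and range here are assumptions:)
R4(config)#ip nat pool match-pool 30.0.0.1 30.0.0.254 prefix-length 24 type match-host
R4(config)#ip nat inside source list 1 pool match-pool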
This will do a pretty clever thing, and match the fourth octet on a 1:1 basis when generating traffic from
inside -> outside. Outside -> inside is reversible after the inside->outside translation has taken place and is
in the table.
R3#telnet 30.0.0.5
Dynamic PAT
Now, our "nat-pool" pool still references 20 IPs, which is unnecessary - this would work fine with one IP:
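(Sketch - shrinking the pool to a single address and adding overload; 30.0.0.52 matches the address seen in the output below:)
R4(config)#ip nat pool nat-pool 30.0.0.52 30.0.0.52 netmask 255.255.255.0
R4(config)#ip nat inside source list 1 pool nat-pool overload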
R1#telnet 30.0.0.5
R2#ping 30.0.0.5
.!!!!
We see that both our sessions are now sourced dynamically off 30.0.0.52, instead of one IP per device.
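(Sketch of the next variation - PAT directly off the outside interface, with no overload keyword, as noted below:)
R4(config)#ip nat inside source list 1 interface FastEthernet0/0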
We see all the sessions coming off the interface IP, 30.0.0.4. Note I did not use the overload command
above. I could've, but it's implied when you PAT off an interface in this fashion.
Let's say you want a catch-all host behind your PAT. It would get all the traffic not going somewhere
else. This is similar to the "DMZ host" feature that's on a lot of economy routers. Let's make R2 our catch-
all:
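(Sketch - a static NAT of R2 to the outside interface address; the exact syntax here is an assumption:)
R4(config)#ip nat inside source static 192.168.0.2 interface FastEthernet0/0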
R1#ping 30.0.0.5
.!!!!
R5#telnet 30.0.0.4
Password:
R2>
R4(config)#do sh ip nat trans
So far we've been looking at "domain based" NAT - "domains" meaning inside & outside. As we've been seeing, the NAT table for domain-based NAT is viewed by show ip nat translations. But we haven't yet covered:
- ip nat outside
- NAT Virtual Interface (NVI)
I've eliminated all the existing NAT configuration and we're starting from scratch.
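(Sketch of the two statements in play - the .50 addresses match the translations discussed below:)
R4(config)#ip nat outside source static 30.0.0.5 192.168.0.50
R4(config)#ip nat inside source static 192.168.0.1 30.0.0.50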
These are both source NATs. The top one is created by ip nat outside, the bottom one is created by ip
nat inside. I'm going to generate a traffic flow so that we can see the outcome of this better.
(Note I've fixed something behind-the-scenes here so that I can demonstrate this point first. We'll discuss
later.)
SOURCE: 192.168.0.1
DESTINATION: 192.168.0.50
1) A source NAT of 192.168.0.1 to 30.0.0.50, for the inside->outside direction.
2) A reverse source NAT of 192.168.0.50 to 30.0.0.5. The source NAT is for the outside->inside direction
(30.0.0.5 -> 192.168.0.50), and this is the "reversible" method we've been discussing.
That's all fine and dandy, but here's my quicky method for seeing what this all means:
If inside->outside, our pre-translation packet is the inside pair (Inside Local -> Outside Local) or "1 -> 2" (192.168.0.1 -> 192.168.0.50), and our post-translation packet is the outside pair (Inside Global -> Outside Global), i.e. 30.0.0.50 -> 30.0.0.5.
If outside->inside, the original packet is the outside pair (Outside Global -> Inside Global) or "1 -> 2" (30.0.0.5 -> 30.0.0.50), and our post-translation packet is the inside pair, reversed (Outside Local -> Inside Local) or "3 -> 4" (192.168.0.50 -> 192.168.0.1).
As such, we are now able to get by on translations and ARPs, no routing is required.... sort of.
I mentioned I'd "fixed" something undisclosed above; let's look at what would have gone wrong without it. First, I've disabled fast switching on both interfaces so the debugs will show every packet:
R4(config)#int fa0/0
R4(config-if)#no ip route-cache
R4(config-if)#int fa0/1
R4(config-if)#no ip route-cache
IP packet debugging is on
IP NAT debugging is on
R1#telnet 192.168.0.50
R4(config-if)#
*Mar 1 07:41:33.458: IP: s=192.168.0.1 (FastEthernet0/1), d=192.168.0.50 (FastEthernet0/1), len 44, rcvd
*Mar 1 07:41:33.466: IP: tableid=0, s=192.168.0.50 (local), d=192.168.0.1 (FastEthernet0/1), routed via
FIB
*Mar 1 07:41:33.466: IP: s=192.168.0.50 (local), d=192.168.0.1 (FastEthernet0/1), len 40, sending
The issue is on line 1. We're routing from Fa0/1 to Fa0/1. That's because even though R4 answers ARP for 192.168.0.50, the address still falls inside the connected 192.168.0.0/24 network, so R4's best route for it points right back out Fa0/1.
This is where order of operations comes in. Inside->Outside and Outside->Inside NAT are handled
differently.
Inside->Outside "routes first" and NATs second. I put "routes first" in quotes, because it's more like "picks
an interface first" (which I suppose is routing). Outside->Inside NATs first and "routes second".
Problem is, the packet is basically deemed invalid before the NAT even happens. We need a more specific route:
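(Sketch - a host route for the outside local address pointing toward the outside; the next hop is an assumption:)
R4(config)#ip route 192.168.0.50 255.255.255.255 30.0.0.5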
This /32 route will push traffic for 192.168.0.50 on to the outside interface.
R1#telnet 192.168.0.50
R4(config)#
And now we're seeing traffic going from Fa0/1 to Fa0/0, and then the two pre-discussed NATs happening.
The "add-route" command creates the static route towards 192.168.0.50 on Fa0/0 automatically:
I've read a lot of blogs saying NVI-based NAT is the "new NAT method". I don't think this is the case, or at least that isn't how Cisco frames it:
http://www.cisco.com/en/US/tech/tk648/tk361/technologies_q_and_a_item09186a00800e523b.shtml
"NVI stands for NAT Virtual Interface. It allows NAT to translate between two different VRFs."
There are a lot of features that aren't available on NVI-based NAT yet (such as SNAT, and some route-map
configurations), and based on the above statement, I am wondering if they're planned for the future?
NVI-based NAT "double routes". It picks an egress interface, NATs, and then re-picks an egress
interface. This behavior is symmetric for both "inside" and "outside". In fact, as we will see, NVI NAT doesn't use the concepts of "inside" and "outside" at all:
R4(config)#int fa0/0
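R4(config-if)#ip nat enable ! sketch - NVI marks interfaces with "ip nat enable" instead of inside/outside
R4(config-if)#int fa0/1
R4(config-if)#ip nat enable
R4(config-if)#exit
R4(config)#ip nat source static 192.168.0.1 30.0.0.50 ! addressing assumed to mirror the earlier example
R4(config)#ip nat source static 30.0.0.5 192.168.0.50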
R1#ping 192.168.0.50
!!!!!
With NVI, each translation is expressed simply in terms of a source and a destination. This makes the debugging, NAT statements, etc. much easier to figure out.
A couple of catches on NVI NAT: as I mentioned above, SNAT (discussed later) is unsupported, as are some of the route-map options.
Before we push on to policy NATs, let's take a quick look at a way to use IOS NAT as a poor man's load
balancer.
I've enabled telnet on R1, R2 and R3; let's distribute inbound telnet connections from R5 amongst the three
in a round-robin fashion. I've also given all three routers a default route aimed at R4, and I've removed all the previous NAT configuration from R4.
R4(config)#int fa0/0
R4(config-if)#ip nat outside
R4(config-if)#int fa0/1
R4(config-if)#ip nat inside
R4(config-if)#exit
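R4(config)#ip access-list standard ROTARY-ACL ! ACL name assumed - this list holds the virtual address below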
R4(config-std-nacl)#permit 30.0.0.25
R4(config-std-nacl)#exit
R4(config)#ip nat pool server-pool 192.168.0.1 192.168.0.3 netmask 255.255.255.0 type rotary
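(Sketch of the statement that ties the list to the rotary pool - the ACL name is the assumed one from above:)
R4(config)#ip nat inside destination list ROTARY-ACL pool server-pool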
and...
R5#telnet 30.0.0.25
I'm not sure what causes the problem, but sometimes when I set this up, the router refuses to automatically create the alias for 30.0.0.25:
R4(config)#do sh ip alias
Interface 4.4.4.4
Interface 30.0.0.4
Interface 192.168.0.4
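(Judging by the alias that shows up below, the manual fix is an ip alias entry for the virtual address and port:)
R4(config)#ip alias 30.0.0.25 23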
R4(config)#do sh ip alias
Interface 4.4.4.4
Interface 30.0.0.4
Alias 30.0.0.25 23
Interface 192.168.0.4
and now it should work:
R5#telnet 30.0.0.25
Password:
R1>exit
R5#telnet 30.0.0.25
Password:
R2>exit
R5#telnet 30.0.0.25
Password:
R3>
Now that we have that working, what if R1-R3 want to access the outside?
R1#ping 30.0.0.5
.....
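(Sketch of the fix - an ordinary inside source PAT alongside the rotary destination NAT; the ACL number is an assumption:)
R4(config)#access-list 1 permit 192.168.0.0 0.0.0.255
R4(config)#ip nat inside source list 1 interface FastEthernet0/0 overload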
R1#ping 30.0.0.5
!!!!!
If you've ever used a "lesser" router and tried to forward a range of ports (say TCP 10 through 30) from the
outside to an inside address, you probably did it with relative ease. You may have also struggled trying to
get this to work in the Cisco world, which does "port forwards" one at a time via static PAT. There's a trick using rotary NAT that forwards a whole range at once, though.
R4(config)#no ip nat pool server-pool 192.168.0.1 192.168.0.3 netmask 255.255.255.0 type rotary
R4(config)#ip nat pool server-pool 192.168.0.1 192.168.0.1 netmask 255.255.255.0 type rotary
Create an access-list specifying the traffic to "rotary load-balance" to our single server:
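(Sketch of the entries - the ACL name is an assumption, and the range matches the TCP 10-30 example above:)
R4(config)#ip access-list extended PORT-RANGE
R4(config-ext-nacl)#permit tcp any host 30.0.0.4 range 10 30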
R4(config-ext-nacl)#exit
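(Sketch - the same rotary destination NAT as before, now keyed off the extended list:)
R4(config)#ip nat inside destination list PORT-RANGE pool server-pool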
And test:
R5#telnet 30.0.0.4
Password:
R1>exit
R5#telnet 30.0.0.4 19
!"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefg
!"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefgh
The simplest way to create a policy NAT is to just use an extended access list. Up until now, we've been
using standard access-lists, which create a simple logic: If the source is on this list, change it. Now we can
say things like "If source is on this list and you're headed towards a specific IP range, then change it". In
my experience, this is most useful for VPNs, where you want to PAT towards the Internet but dynamic NAT
to another range over the VPN tunnel. I'm not going to build that elaborate of a lab, but now you have a
R5 has the same IPs as before - the interface IPs between R4 and R5 are 30.0.0.0/24
R4(config)#int fa0/1
R4(config-if)#ip nat inside
R4(config-if)#int fa0/0
R4(config-if)#ip nat outside
R4(config-if)#int fa1/0
R4(config-if)#ip nat outside
Now that you can reach either R5 or R6 from R4, we need to NAT differently depending on which direction
we're going.
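(Sketch of one way to do it - two extended ACLs keyed on destination, each feeding a PAT off the matching interface; the ACL names are assumptions:)
R4(config)#ip access-list extended TO-R5
R4(config-ext-nacl)#permit ip 192.168.0.0 0.0.0.255 30.0.0.0 0.0.0.255
R4(config-ext-nacl)#ip access-list extended TO-R6
R4(config-ext-nacl)#permit ip 192.168.0.0 0.0.0.255 30.1.0.0 0.0.0.255
R4(config-ext-nacl)#exit
R4(config)#ip nat inside source list TO-R5 interface FastEthernet0/0 overload
R4(config)#ip nat inside source list TO-R6 interface FastEthernet1/0 overload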
R1#ping 30.0.0.5
!!!!!
R1#ping 30.1.0.6
!!!!!
Now if you're paying attention, you may have already noticed the limitation of this method when used
with my diagram. "Pretend R7 doesn't exist", I said. What if we're trying to reach 7.7.7.7 using the
extended-access list policy NAT method? Both our access-lists would read the same:
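(Sketch - with R7 reachable through either exit point, both TO-R5 and TO-R6 end up carrying the same entry:)
permit ip 192.168.0.0 0.0.0.255 host 7.7.7.7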
That's not going to work. In fact, what if the destination was the Internet? Your access-lists might look like:
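(And for Internet-bound traffic, both would collapse to the same catch-all:)
permit ip 192.168.0.0 0.0.0.255 any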
The way around this is to build a route-map. With policy NAT, a route-map can:
1) match access-lists
2) match interfaces
Some examples will also show them setting interfaces (or next-hops), but I've not seen a functional
difference between using "set interface" and "match interface" with policy NAT. If anyone knows of a difference, I'd love to hear about it.
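(Sketch - route-maps matching on the egress interface, each tied to a PAT statement; the names are assumptions, and the underlying routing failover, e.g. a floating static default route, isn't shown:)
R4(config)#route-map ISP-A permit 10
R4(config-route-map)#match ip address 1
R4(config-route-map)#match interface FastEthernet0/0
R4(config-route-map)#route-map ISP-B permit 10
R4(config-route-map)#match ip address 1
R4(config-route-map)#match interface FastEthernet1/0
R4(config-route-map)#exit
R4(config)#ip nat inside source route-map ISP-A interface FastEthernet0/0 overload
R4(config)#ip nat inside source route-map ISP-B interface FastEthernet1/0 overload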
This would simulate a poor man's redundant Internet solution - different static IPs on different ISPs, routing
out one at a time. If Fa0/0 goes down, Fa1/0 will take over. Let's give it a try:
R1#ping 7.7.7.7
!!!!!
R4#conf t
R4(config)#int fa0/0
R4(config-if)#shut
R1#ping 7.7.7.7
!!!!!
Note that the "match ip address" clause in the route-map is really not necessary in this case, but I included
it to show the functionality. "match interface" is sufficient to make the NAT decision.
We saw earlier that dynamic NAT is typically reversible. Not so much with route-maps for dynamic NAT.
R4(config)#int fa0/0
R4(config-if)#no shut
The reversible keyword is required in order to make this scenario happen with route-maps.
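(Sketch - the keyword goes on the dynamic translation statement, which needs a pool rather than an interface; the names are the assumed ones from above:)
R4(config)#ip nat inside source route-map ISP-A pool nat-pool reversible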
R4(config)#int fa0/1
R4(config-if)#int fa0/0
R4(config-if)#int fa1/0
I showed how to do policy-PAT already, but 1:1 is a whole different story. Let's say this is a server farm,
we have two different ISPs, but we're not running BGP and we have separate IP ranges statically assigned
from both ISPs. How do we do a hot/cold failover but maintain the static NAT?
Let's make 192.168.0.1 our "server" and try to forward traffic from two outside IPs towards it.
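(Sketch of the obvious first attempt - two plain statics for the same inside host:)
R4(config)#ip nat inside source static 192.168.0.1 30.0.0.1
R4(config)#ip nat inside source static 192.168.0.1 30.1.0.1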
You had probably already guessed that that wasn't going to work.
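(Sketch of the approach that does work - static NAT accepts a route-map, so each entry only applies when traffic is leaving via its ISP; route-map names are the assumed ones from earlier:)
R4(config)#ip nat inside source static 192.168.0.1 30.0.0.1 route-map ISP-A
R4(config)#ip nat inside source static 192.168.0.1 30.1.0.1 route-map ISP-B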
Verification -
R5#telnet 30.0.0.1
Password:
R1>
R4(config)#int fa0/0
R4(config-if)#shut
R6#telnet 30.1.0.1
Password:
R1>
R4(config)#int fa0/0
R4(config-if)#no shut
There's another way to accomplish something similar. The extendable command makes sort of a reverse-PAT out of the static entries:
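(Sketch - the same pair of statics, each marked extendable:)
R4(config)#ip nat inside source static 192.168.0.1 30.0.0.1 extendable
R4(config)#ip nat inside source static 192.168.0.1 30.1.0.1 extendable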
R5#telnet 30.0.0.1
Password:
R1>
We'd expect an entry like this based on the default ip nat create flow-entries. However, this time, it's
taken... more literally. The router is doing what I can only describe as a bi-directional PAT.
R6#telnet 30.1.0.1
Password:
R1>
Those entries are there for more than just acceleration, they're actually required now. In fact, I got curious
and disabled ip nat create flow-entries:
Tough luck, you're getting the flow entries anyway, because this process doesn't work without them.
Here's something I was always curious about - NATing to totally arbitrary IPs in IOS.
You've certainly gathered by now that you can NAT to anything, even IPs that aren't on any of your
interfaces. That's generally pretty useless, because other devices aren't aware how to reach those IPs. Let's say we want to translate our inside hosts to 207.50.50.0/24, but I don't want to put 207.50.50.0/24 on an interface. I'm going to use NVI NAT for this example.
R4(config)#interface fa0/0
R4(config-if)#ip nat enable
R4(config-if)#interface fa0/1
R4(config-if)#ip nat enable
You may remember "add-route" from domain NAT; here it is for NVI NAT. Note that this time it's applied on the pool itself:
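(Sketch - the pool name and source ACL are assumptions; add-route hangs off the pool definition:)
R4(config)#ip nat pool PUBLIC-POOL 207.50.50.1 207.50.50.254 prefix-length 24 add-route
R4(config)#ip nat source list 1 pool PUBLIC-POOL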
This static route can now be introduced into our outside routing protocol through redistribution. Or, you
could just use a BGP "network" statement: network 207.50.50.0 mask 255.255.255.0. In our case, the outside is running OSPF, so we'll redistribute the static route:
R4(config)#router ospf 1
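(The redistribution itself would look something like:)
R4(config-router)#redistribute static subnets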
Verify -
R1#ping 30.0.0.5
!!!!!
R5#
*Mar 1 00:36:11.983: ICMP: echo reply sent, src 30.0.0.5, dst 207.50.50.1
*Mar 1 00:36:12.019: ICMP: echo reply sent, src 30.0.0.5, dst 207.50.50.1
*Mar 1 00:36:12.055: ICMP: echo reply sent, src 30.0.0.5, dst 207.50.50.1
*Mar 1 00:36:12.115: ICMP: echo reply sent, src 30.0.0.5, dst 207.50.50.1
*Mar 1 00:36:12.159: ICMP: echo reply sent, src 30.0.0.5, dst 207.50.50.1
Something similar could also be accomplished by creating a static route to null0 - ip route 207.50.50.0 255.255.255.0 null0 - and then redistributing that.
We're still missing one big topic in this article: SNAT, or Stateful NAT. It's a way of sharing NAT tables
across multiple routers, typically via HSRP, for the purpose of hot/hot shared NAT or hot/cold shared
NAT. This method could literally take a blog post to itself... in fact, it did! I had to put it into production a few months back and wrote it up separately:
http://brbccie.blogspot.com/2013/03/stateful-nat-with-asymmetric-routing.html
Jeff Kronlage
06/30/13--10:40: Netflow
This post will be geared towards CCIE lab topics. I will use Solarwinds' freebie Netflow analyzer in some
examples, but the focus, in general, will be on exporting data rather than collecting it.
Let's kick off with a discussion of versions. Anyone who's used Netflow before knows version 5 is the one
typically used, with some newer implementations using version 9. So what's the story on all the "lost
versions"?
v1 - First implementation, still supported on 12.4T, restricted to IPv4 without subnet masks.
v2 through v4 and v6 - Internal versions that were never released.
v5 - The most commonly deployed version; adds subnet masks, BGP AS numbers, and flow sequence numbers.
v7 - A Catalyst-specific implementation.
v8 - Adds router-based aggregation schemes to cut down on export volume.
v9 - Latest Cisco standard, supports IPv6, aggregation, and Flexible Netflow (FNF).
"v10" - aka IPFIX, this is the open standard for Netflow and will presumably replace it eventually. It's called
"v10" because the version header in the packet of IPFIX is "10", and is basically an open standard
implementation of v9.
We will be focusing primarily on v5 and v9, and touching a little bit on v8. There's no good argument for
using v1, and IOS 12.4(15)T only supports v1, v5, v8 (limited) and v9. IPFIX/v10 isn't available in
12.4(15)T. Fortunately - or perhaps unfortunately for those who are looking at this document for more than
academic reasons - the Catalyst 3560 that is on the lab exam doesn't support Netflow at all, so we're not
going to touch on Catalyst Netflow. Of note, more modern 3560s, such as the 3560-X, do
support Netflow.
If you want to know more about the various Netflow versions, here is a fantastic explanation:
http://www.youtube.com/watch?v=rcDQi7M1uo4
On a side note, I mentioned above that the protocol is determined at a high level by protocol number: TCP,
UDP, ICMP, etc. In newer versions of IOS (15.0+), NBAR can be integrated into Netflow for more granular
protocol results. As that is presently outside the scope of the CCIE lab, I will not be discussing it here.
Let's look at some basic Netflow v5 usage. Here is our lab topology:
R7 (Lo0 7.7.7.7) and R8 (Lo0 8.8.8.8) will be communicating with each other, with R1 running Netflow and exporting to the Solarwinds collector:
http://www.solarwinds.com/products/freetools/netflow-analyzer.aspx
We'll enable TCP small servers so that we can utilize chargen to create TCP flows.
R7(config)#service tcp-small-servers
R8(config)#service tcp-small-servers
<ctrl-c>
!"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefg
!"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefgh
<ctrl-c>
Even after terminating the output with ctrl-c, the session is still running in the background:
R1(config)#int fa0/1
R1(config-if)#ip flow ingress
R1(config-if)#int fa1/0
R1(config-if)#ip flow ingress
R1(config-if)#int fa2/0
R1(config-if)#ip flow ingress
R1(config-if)#exit
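(Sketch of the export side - the collector address here is a placeholder; 2055 is a commonly used collector port:)
R1(config)#ip flow-export version 5
R1(config)#ip flow-export destination 192.168.1.100 2055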
It's unlikely you'd be able to start collection in Solarwinds at this point. The freebie Solarwinds will only let
you start collection if it's receiving Netflow packets, and it's unlikely any flows have been sent yet, as the
time-out for ongoing flows is presently set to 1 hour. Let's turn it down:
We'll go ahead and turn down the inactive flow timer as well:
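(Sketch - the values are assumptions; the active timeout is in minutes, the inactive timeout in seconds:)
R1(config)#ip flow-cache timeout active 1
R1(config)#ip flow-cache timeout inactive 15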
You probably noticed my use of ip flow ingress on all interfaces that are passing traffic. This is a new
command with Netflow v9. The old command was ip route-cache flow. It's still supported but it's almost
functionally identical to ip flow ingress. The one small difference is with sub-interfaces. If you apply ip
flow ingress to a main interface, you're going to get the native VLAN traffic reported only. If you want the
entire interface, you apply ip route-cache flow to the main interface, and it basically acts as a macro,
applying ip flow ingress to every sub-interface (even ones created in the future) for you.
One of the most baffling things for me was the ip flow egress command. There are some very important
things to know about its usage. First of all, do not use it unless you are using Netflow v9. Netflow v5
doesn't have a concept of ingress and egress. There's no field in the v5 packet for direction.
So how do you collect egress traffic information on v1 or v5? This is simple. ip flow ingress is applied to
every interface and the collector reverses the information behind the scenes. Logically, if the collector can
see all the ingress flows, it would know about all the egress flows, too (what comes in must go out!). We'll
talk more about ip flow egress when we get to Netflow v9.
As you might imagine, a busy Netflow exporter could not only create a lot of extra CPU and memory usage
for the router, but it could create too much traffic on the wire or even swamp the collector. Sampled
Netflow was created to fix this problem. Sampled Netflow would take every 1 out of X packets and sample
it. The problem with this mechanism is that it may continuously miss flows that happen in between 1
and X. Say you are looking at every 1 in 100 packets, and you continuously have a burst every 50th
packet. Sampled Netflow will never see this burst. Enter random sampled Netflow, which still grabs
1 out of every X packets but introduces a random element so that it's not precisely every 100th packet. Sampled
Netflow isn't supported on any equipment on the CCIE lab, but random sampled Netflow is.
R1(config)#flow-sampler-map NETFLOW-TEST
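R1(config-sampler)#mode random one-out-of 100 ! sampling rate assumed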
R1(config-sampler)#exit
R1(config)#int fa0/1
R1(config-if)#no ip flow ingress
R1(config-if)#flow-sampler NETFLOW-TEST
R1(config-if)#int fa1/0
R1(config-if)#no ip flow ingress
R1(config-if)#flow-sampler NETFLOW-TEST
R1(config-if)#int fa2/0
R1(config-if)#no ip flow ingress
R1(config-if)#flow-sampler NETFLOW-TEST
Note I've turned off ip flow ingress on all interfaces first. ip flow ingress trumps random sampled
Netflow.