Cisco CCIE 400-101 Study Guide

CCIE Routing and Switching Written Exam
Exam code: 400-101

Certifications:

 Cisco Certified Internetworking Expert Routing and Switching Written Exam

The Cisco CCIE® Routing and Switching Written Exam (400-101) version 5.0 is a
two-hour test with 90-110 questions that validates that professionals have the
expertise to configure, validate, and troubleshoot complex enterprise network
infrastructure; understand how infrastructure components interoperate; and
translate functional requirements into specific device configurations. The exam is
closed book, and no outside reference materials are allowed.

About This Study Guide


This Study Guide provides all the information required to pass the 400-101 CCIE
Routing and Switching Written Exam. It is not, however, a complete reference work:
it is organized around the specific skills that are tested in the exam. The
information contained in this Study Guide is therefore specific to the 400-101
written exam rather than to the CCIE Routing and Switching certification as a
whole, which also includes a lab exam. Topics covered in this Study Guide include
networking principles, Layer 2 technologies, Layer 3 technologies, VPN
technologies, infrastructure security, and infrastructure services.

Intended Audience

This Study Guide is targeted at networking experts who want to reach the zenith
of IT networking by taking the industry's ultimate routing and switching exam. A
CCIE in Routing and Switching is a networking expert who oversees the routing and
switching aspects of a large networked establishment and who implements solutions
to, and troubleshoots, routing and switching problems.

Good luck!

TABLE OF CONTENTS

1.0 Network theory-------------------------------------------------------------4


1.2 Network implementation and operation---------------------------------19
1.3 Network troubleshooting--------------------------------------------------35

2.0 Layer 2 Technologies-----------------------------------------------------52


2.1 LAN switching technologies----------------------------------------------52
2.2 Layer 2 multicast-----------------------------------------------------------83
2.3 Layer 2 WAN circuit technologies---------------------------------------87

3.0 Layer 3 Technologies-----------------------------------------------------98


3.1 Addressing technologies---------------------------------------------------98
3.2 Layer 3 multicast----------------------------------------------------------114
3.3 Fundamental routing concepts-------------------------------------------137
3.4 RIP (v2 and v6)------------------------------------------------------------169
3.5 EIGRP (for IPv4 and IPv6)----------------------------------------------176
3.6 OSPF (v2 and v3)---------------------------------------------------------195
3.7 BGP-------------------------------------------------------------------------215
3.8 ISIS (for IPv4 and IPv6)-------------------------------------------------220

4.0 VPN Technologies-------------------------------------------------------226


4.1 Tunneling------------------------------------------------------------------226
4.2 Encryption----------------------------------------------------------------248

5.0 Infrastructure Security-------------------------------------------------252


5.1 Device security------------------------------------------------------------252
5.2 Network security---------------------------------------------------------259

6.0 Infrastructure Services------------------------------------------------268


6.1 System management-----------------------------------------------------268
6.2 Quality of service---------------------------------------------------------272
6.3 Network services---------------------------------------------------------280
6.4 Network optimization----------------------------------------------------289

1.0 Network theory

1.1.a Describe basic software architecture differences between IOS and IOS XE

IOS XE Software Architecture


The IOS XE foundation is a POSIX environment along with Free and Open Source
Software (FOSS) for the common drivers, tools, and utilities needed to manage the
system. In addition to the standard set of off-the-shelf drivers, IOS XE includes
a set of Cisco-specific drivers and associated chassis/platform management modules.

On top of the base operating system and drivers, IOS XE provides a set of
infrastructure modules which define how software is installed, how processes are
started and sequenced, how high-availability and software upgrades are performed
and, lastly, how the applications are managed from an operational perspective. The
core application that runs on top of this new infrastructure is the IOS feature set.
Cisco products immediately reap the benefits of an extensive feature set for routing
and switching platforms that has been built into IOS over the years. Customers can
expect the same features to be available and for them to perform and be managed in
the exact same way as previous products.

Finally, the evolved IOS architecture is specifically designed to accommodate other
applications outside of IOS. These applications can either be tightly integrated with
IOS or run side-by-side with it with very little or no interaction. These
applications can be upgraded or restarted independently of IOS. If an application does
require services from IOS, it integrates with IOS through a set of client libraries called
“service points”. These service points generically extend IOS information and
services to outside applications such that these services are not replicated or managed
separately.

IOS XE Naming Conventions


The naming convention has changed because there are many features that need to be
highlighted. Let us break down the parts that make up the image name below:

cat4500e-universalk9.SPA.03.01.00.SG.150-01.XO
cat4500e: Platform designator
universal: Feature set designator
k9: Crypto designator, present if crypto code is included in the IOSd package
SPA: Indicates the image is digitally signed
03.01.00.SG: Release version number
150-01.XO: IOSd package version number; this allows you to correlate the
version of IOSd to another platform running classic IOS
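As a sketch of how these fields fit together, the following hypothetical Python parser splits an image name into its designators. The field layout follows the convention above; the parser itself is illustrative, not a Cisco tool.

```python
# Hypothetical sketch: split an IOS XE image name into its designator
# fields, following the naming convention described above.

def parse_image_name(name: str) -> dict:
    """Split e.g. 'cat4500e-universalk9.SPA.03.01.00.SG.150-01.XO'."""
    head, spa, *rest = name.split(".")
    platform, feature = head.split("-", 1)
    crypto = feature.endswith("k9")          # crypto designator present?
    if crypto:
        feature = feature[:-2]
    # rest == ['03', '01', '00', 'SG', '150-01', 'XO']
    return {
        "platform": platform,                # cat4500e
        "feature_set": feature,              # universal
        "crypto": crypto,
        "signed": spa == "SPA",              # digitally signed image
        "release": ".".join(rest[:4]),       # 03.01.00.SG
        "iosd_version": ".".join(rest[4:]),  # 150-01.XO
    }

info = parse_image_name("cat4500e-universalk9.SPA.03.01.00.SG.150-01.XO")
```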


IOS XE CLI Differences


IOS XE looks and feels the same as the IOS that we all know. There is almost no
change in the various feature configurations, making the migration and user
experience consistent with IOS. If you know how to operate IOS today, you already
know how to operate IOS XE. The only minor differences, in the CLI and in some
outputs, are due to customization that reflects the process-oriented approach of
IOS XE and the ability to use a multi-core CPU.

For example, the Supervisor 7-E incorporates a Freescale dual-core CPU running at
1.5 GHz. IOSd is a multi-threaded process that uses both cores, allowing it to
load-balance and scale to the CPU's maximum processing capacity.
The “show version” command is changed to reflect the IOS XE naming convention
and licensing information.

The “show process cpu” command is changed to reflect the various processes running
alongside IOSd in the Linux environment. This command now displays a detailed
description of IOS and non-IOS processes across all CPU cores. Adding the
“detailed process iosd” keywords displays the CPU utilization statistics of the
IOSd process across all CPU cores.

Similarly, “show process memory” displays the memory utilization of the entire
system. Adding the “detailed process iosd” keywords displays the memory consumed
by the IOSd process.

The output of “show process cpu | include five” shows the utilization of the
respective cores of the CPU and the allocation of threads to cores: the cores
(0 or 1) are shown on the left, below the “C” column, with the thread IDs (TID)
to their right.

Control plane and Forwarding plane


IOS XE introduces an opportunity for teams to build drivers for new Data Plane
ASICs outside the IOS instance and have them program to a set of standard APIs,
which in turn enforces separation of Control Plane and Data Plane processing.

IOS XE accomplishes Control Plane / Data Plane separation through the introduction
of the Forwarding and Feature Manager (FFM) and its standard interface to the
Forwarding Engine Driver (FED). The FFM provides a set of APIs to Control Plane
processes. In turn, the FFM programs the Data Plane via the FED and maintains
forwarding state for the system. The FED is the instantiation of the hardware
driver for the Data Plane and is provided by the platform.

Specific platform architectures are beyond the scope of this section.

Platform Abstraction
Since, historically, IOS has served as an Operating System as well as providing the
key Routing Infrastructure, there has always been an aspect of Platform Dependent
(PD) and Platform Independent (PI) code within IOS. IOS XE allows the platform
dependent code to be abstracted from a single monolithic image. By moving drivers
outside of IOS, IOS XE enables a more purely PI-focused IOS process. This provides
a more efficient software delivery model for both the core IOS team, as well as
platform developers, since the software can be developed, packaged and released
independently.

Application Integration
Prior to IOS XE, the only way to integrate functionality into an IOS product was to
either port the functionality into the IOS operating system or run the functionality on a
service blade outside of IOS. This model has fundamentally constrained how quickly
Cisco can integrate homegrown features, products through acquisition, or third party
applications.

IOS XE permits the integration of non-IOS applications through the following
mechanisms:

 A standard Linux-based environment for hosting applications
 Extension of IOS functionality into peripheral applications through well-defined
APIs exported via Linux shared client libraries
 A robust management infrastructure called Common Management Enabling Technology
(COMET) that allows for CLI, XML, SNMP, and HTTP-based management of integrated
applications

1.1.b Identify Cisco express forwarding concepts


RIB, FIB, LFIB, Adjacency table
 Routers once had combined control planes and data planes
 The Control Plane creates and maintains the routing table
 The Data Plane is responsible for moving data from ingress to egress
 The Routing Information Base (RIB) operates in software
 The Forwarding Information Base (FIB) takes the RIB's best routes and places
them in its own data construct, which resides in faster hardware resources
 Cisco's implementation of this architecture is known as Cisco Express Forwarding
(CEF)
 Process Switching - the Router/MLS processes every packet to make a forwarding
decision
 Fast Switching - the initial packet's forwarding decision is still derived from
the Route Processor, but that destination is then held in cache for subsequent
forwarding, precluding the processor's involvement
 CEF - takes fast switching a step further by introducing the FIB and Adjacency
tables into the equation
 The FIB is a mirror image of the IP routing table. Changes to the routing table
and next-hop IPs are reflected in the FIB
 The Adjacency table is populated with L2 next-hop addresses for all FIB entries
(ARP, Frame Relay mappings)
 LFIB - Label Forwarding Information Base - used by MPLS to fast-switch labeled
packets

Routers and MLS platforms were once centralized, cache-based systems combining the
control and data planes. The control plane comprises the technologies that create
and maintain the routing table. The data plane comprises the technologies that
move data from ingress to egress.

This architecture has since split into the RIB and FIB (Routing Information Base
and Forwarding Information Base). The RIB operates in software, and the FIB takes
the RIB's best routes and places them in its own data construct, which resides in
faster hardware resources. Cisco's implementation of this architecture is known as
CEF (Cisco Express Forwarding).

Process Switching, Fast Switching and the evolution to CEF


Process Switching requires the Router/MLS to process every packet to make a
forwarding decision.

Fast Switching evolved from Process Switching: the initial packet's forwarding
decision is still derived from the Route Processor, but that destination is then
held in cache for subsequent forwarding, precluding the processor's involvement.

With CEF, Cisco took fast switching a step further by introducing the FIB and
Adjacency tables into the equation.

The FIB is a mirror image of the IP routing table. Changes to the routing table
and next-hop IPs are reflected in the FIB. Fast switching route cache maintenance
is thereby eliminated.

The adjacency table is populated with Layer 2 next-hop addresses for all FIB
entries, hence "adjacency". When an adjacency is established, as through ARP, a
link-layer header for that adjacency is stored in the adjacency table.
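The FIB/adjacency relationship described above can be sketched as a longest-prefix-match lookup followed by an adjacency lookup. The prefixes, next hops, and MAC addresses below are invented for illustration; this is a conceptual model, not IOS internals.

```python
# Conceptual sketch of a CEF-style lookup: a FIB maps prefixes to next
# hops, and an adjacency table maps next hops to precomputed L2 rewrites.
import ipaddress

fib = {  # prefix -> next-hop IP (mirrors the routing table)
    ipaddress.ip_network("10.0.0.0/8"): "192.168.1.1",
    ipaddress.ip_network("10.1.0.0/16"): "192.168.2.1",
    ipaddress.ip_network("0.0.0.0/0"): "192.168.3.1",
}
adjacency = {  # next hop -> precomputed link-layer address (e.g. from ARP)
    "192.168.1.1": "00:11:22:33:44:01",
    "192.168.2.1": "00:11:22:33:44:02",
    "192.168.3.1": "00:11:22:33:44:03",
}

def cef_lookup(dst: str):
    ip = ipaddress.ip_address(dst)
    # Longest-prefix match over the FIB
    best = max((n for n in fib if ip in n), key=lambda n: n.prefixlen)
    nh = fib[best]
    return nh, adjacency[nh]  # next hop plus L2 rewrite, no route cache

nh, mac = cef_lookup("10.1.2.3")  # matches 10.1.0.0/16
```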

Load balancing Hash

Per-destination mode - all packets for a given destination are forwarded along the
same path. This preserves packet order.
Per-packet load balancing - guarantees an equal load across all links; however,
packets may potentially arrive out of order at the destination, as differential
delay may exist within the network.

 Each SIP/DIP session is assigned to an active path
 The session-to-path assignment is done using a hash function that takes the source
and destination IP addresses and a unique hash ID that randomizes the assignment
across the end-to-end path
 Active paths are assigned internally to some of 16 hash buckets. The path-to-
bucket assignment varies with the type of load balancing and the number of active
paths
 The result of the hash function is used to pick one of the enabled buckets, and
thus which path to use for the session
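The bucket mechanism above can be sketched as follows. The hash function, interface names, and unique-ID handling are stand-ins for illustration, not Cisco's actual algorithm.

```python
# Illustrative sketch of per-destination session-to-path hashing: the
# SIP/DIP pair plus a unique ID is hashed into 16 buckets, and buckets
# are distributed over the active paths.
import hashlib

NUM_BUCKETS = 16

def pick_path(src: str, dst: str, unique_id: int, paths: list) -> str:
    key = f"{src}-{dst}-{unique_id}".encode()
    bucket = hashlib.sha256(key).digest()[0] % NUM_BUCKETS
    # Buckets are assigned to active paths round-robin
    return paths[bucket % len(paths)]

paths = ["GigabitEthernet0/1", "GigabitEthernet0/2"]
# The same SIP/DIP pair always maps to the same path (order preserved)
a = pick_path("10.0.0.1", "10.0.0.2", 42, paths)
b = pick_path("10.0.0.1", "10.0.0.2", 42, paths)
```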

How CEF load balancing works


CEF is an advanced Layer 3 switching technology inside a router. Traditionally, a
router uses a route cache to speed up packet forwarding. The route cache is filled
on demand when the first packet for a specific destination needs to be forwarded.
If the destination is on a remote network reachable via a next-hop router, the
entry in the route cache consists of the destination network. If parallel paths
exist, this does not provide load balancing, as only one path would be used.
Therefore the entry in the route cache instead relates to a specific destination
address, or host. If multiple hosts on the destination network are receiving
traffic, a route cache entry is made for each individual host, balancing the hosts
over the available paths. This provides per-destination load balancing. The
problem that arises is that a backbone router carrying traffic for several
thousand destination hosts needs a corresponding number of cache entries. This
consumes memory and makes cache maintenance a demanding task. In addition, the
decision about which path to use is made at the time the route cache is filled,
based on the utilization of the individual links at that point in time. However,
the amount of traffic on individual connections can change over time, possibly
leading to a situation where some links carry mostly idle connections while
others are congested. CEF takes a different approach: it calculates all
information necessary for the forwarding task in advance and decouples the
forwarding information from the next-hop adjacency, which allows for effective
load balancing.

The two main components of CEF operation are:

•Forwarding Information Base
•Adjacency Tables

Forwarding Information Base



CEF uses a Forwarding Information Base (FIB) to make IP destination prefix-based
switching decisions. The FIB is conceptually similar to a routing table or information
base. It maintains a mirror image of the forwarding information contained in the IP
routing table. When routing or topology changes occur in the network, the IP routing
table is updated, and those changes are reflected in the FIB. The FIB maintains next-
hop address information based on the information in the IP routing table. Because
there is a one-to-one correlation between FIB entries and routing table entries, the FIB
contains all known routes and eliminates the need for route cache maintenance that is
associated with earlier switching paths such as fast switching and optimum switching.

Adjacency Tables
Network nodes in the network are said to be adjacent if they can reach each other with
a single hop across a link layer. In addition to the FIB, CEF uses adjacency tables to
prepend Layer 2 addressing information. The adjacency table maintains Layer 2 next-
hop addresses for all FIB entries.

The adjacency table is populated as adjacencies are discovered. Each time an


adjacency entry is created (such as through the ARP protocol), a link-layer header for
that adjacent node is pre-computed and stored in the adjacency table. Once a route is
determined, it points to a next hop and corresponding adjacency entry. It is
subsequently used for encapsulation during CEF switching of packets. A route might
have several paths to a destination prefix, such as when a router is configured for
simultaneous load balancing and redundancy. For each resolved path a pointer is
added for the adjacency corresponding to the next-hop interface for that path. This
mechanism is used for load balancing across several paths. For per-destination
load balancing, a hash is computed from the source and destination IP addresses
and used to select one of the adjacency entries in the adjacency table, ensuring
that the same path is used for all packets with this source/destination address
pair. If per-packet load balancing is used, the packets are distributed round-robin
over the available paths. In either case, the information in the FIB and adjacency
tables provides all the necessary forwarding information, just as for
non-load-balancing operation. The additional task for load balancing is to select
one of the multiple adjacency entries for each forwarded packet.

Polarization concept and avoidance


CEF polarization is the use of the same hash algorithm and the same hash input on
every router in a multi-tier topology, which results in the use of a single
Equal-Cost Multi-Path (ECMP) link for ALL flows downstream of the first choice.
It is avoided by varying the hash input per router, for example by seeding the
hash with a router-specific unique ID (as Cisco's universal load-sharing algorithm
does), so that successive routers make uncorrelated path choices for the same flow.
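A small simulation makes the effect concrete. Assuming a two-tier topology with two equal-cost links per tier, and using an illustrative hash (not Cisco's), the flows that the first tier hashes to link 0 all hash to link 0 again at the second tier unless the second tier salts the hash with its own seed:

```python
# Illustrative sketch of CEF polarization: with no per-router input to the
# hash, a downstream router repeats the upstream router's choice for every
# flow, so half its ECMP links carry no traffic.
import hashlib

def choose(src, dst, seed, n_links):
    h = hashlib.sha256(f"{src}-{dst}-{seed}".encode()).digest()[0]
    return h % n_links

flows = [("10.0.%d.1" % i, "10.1.%d.1" % i) for i in range(50)]

# Polarized: both tiers use the same seed. Tier 2 only ever sees the
# flows tier 1 hashed to link 0, and (same hash, same input) sends them
# all to its own link 0.
tier1_link0 = [f for f in flows if choose(*f, seed=0, n_links=2) == 0]
tier2_links = {choose(*f, seed=0, n_links=2) for f in tier1_link0}

# Avoided: tier 2 salts the hash with its own seed, so those flows can
# split across both of its links again.
tier2_links_fixed = {choose(*f, seed=7, n_links=2) for f in tier1_link0}
```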

1.1.c Explain general network challenges

Unicast flooding
 LAN switches use a CAM table to forward frames
 When there is no entry corresponding to the frame's destination MAC address in
the incoming VLAN, the (unicast) frame is sent to all forwarding ports within the
respective VLAN, which causes flooding

Causes:

 Asymmetric Routing
 Spanning-Tree Topology Changes
 CAM Table Overflow

Out of order packets


 A well-known phenomenon in the Internet in which packets arrive in a different
order than they were sent
 Can be caused by per-packet load-sharing algorithms
 Can affect the network (re-transmissions) and the receiver (slow TCP sessions)
 Causes unnecessary re-transmission: when the TCP receiver gets packets out of
order, it sends duplicate ACKs to trigger the fast re-transmit algorithm at the
sender. These ACKs make the TCP sender infer that a packet has been lost and
re-transmit it
 Limits transmission speed: when fast re-transmission is triggered by duplicate
ACKs, the TCP sender assumes this is an indication of network congestion. It
reduces its congestion window (cwnd) to limit the transmission speed, which then
needs to grow from a "slow start" again. If reordering happens frequently, the
congestion window stays small and can hardly grow larger. As a result, the TCP
connection has to transmit packets at a limited speed and cannot efficiently
utilize the bandwidth
 Reduces the receiver's efficiency: the TCP receiver has to hand data to the upper
layer in order. When reordering happens, TCP has to buffer all the out-of-order
packets until it has all packets in order. Meanwhile, the upper layer receives
data in bursts rather than smoothly, which also reduces the efficiency of the
system as a whole

Asymmetric routing
In asymmetric routing, a packet traverses from a source to a destination along one
path and takes a different path when it returns to the source. This can cause
problems with firewalls, because they maintain state information.

Impact of micro burst


Micro-bursting - rapid bursts of data packets sent in quick succession, leading to
periods of full line-rate transmission that can overflow the packet buffers of the
network stack. It can be mitigated by a network scheduler.

1.1.d Explain IP operations

ICMP Unreachable, redirect


ICMP is a TCP/IP protocol designed to help manage and control the operation of a
TCP/IP network. The ICMP protocol provides a wide variety of information about a
network’s status and is considered part of TCP/IP’s network layer. ICMP can provide
useful information for troubleshooting TCP/IP.

ICMP uses messages to accomplish its tasks. Many of these messages are used in
even the smallest IP network.

ICMP Messages

Destination Unreachable - Informs the source host that there is a problem
delivering a packet.
Time Exceeded - Indicates that the time that it takes a packet to be delivered has
expired and that the packet has been discarded.
Redirect - Indicates that the packet has been redirected to another router that
has a better route to the destination address. The message informs the sender to
use the better route.
Echo - Used by the ping command to verify connectivity.
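For reference, these messages correspond to well-known ICMPv4 type numbers, which can be looked up when decoding the type field of an ICMP header; a minimal sketch:

```python
# Well-known ICMPv4 type numbers for the messages discussed above,
# as a minimal lookup for decoding the ICMP header's type field.

ICMP_TYPES = {
    0: "Echo Reply",
    3: "Destination Unreachable",
    5: "Redirect",
    8: "Echo Request",
    11: "Time Exceeded",
}

def classify(icmp_type: int) -> str:
    return ICMP_TYPES.get(icmp_type, "Other")
```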

IPv4 options, IPv6 extension headers

Issues such as address exhaustion made version 4 of IP (IPv4) inadequate and
required robust solutions. While 32 bits of address space were originally thought
to be adequate, time and growth have proven this not to be the case. Additionally,
IPv4 suffers from a lack of hierarchical structure; while addresses may be
sequentially allocated and summarized, they are not optimized for routing or
allocation.

Designers of IPv6 worked diligently to ensure that the same issues would not be
encountered. Members of the Internet community who were responsible for
developing the protocol carefully scrutinized each new RFC penned for IP.

Next Header Field


The Next Header field tells routers whether any other headers need to be examined
for the packet to be routed according to instruction. This differs drastically
from IPv4, where there is a single header whose length varies with its options.
The IPv6 main header has a fixed length (enabling routers to know beforehand how
much of the packet they need to read), but has built-in functionality to stack
other headers, which provide value-added services, on top of the main header. The
field is 8 bits in length, allowing for 256 possible Next Header values.
Currently, only a small number of Next Header types are defined. Here is a list of
the ones currently defined:

• Hop by Hop Options Header
• Destination Options Header I
• Routing Header
• Fragment Header
• Authentication Header
• Encapsulating Security Payload Header
• Destination Options Header II

The preceding list shows the extension headers that can occur in an IPv6 packet,
listed in the order in which they would appear in a packet utilizing this extra
functionality.
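The chained structure can be sketched as a walk along Next Header values (using the IANA-assigned numbers, e.g. Hop-by-Hop = 0, Routing = 43, Fragment = 44, TCP = 6); the chain contents here are mocked for illustration:

```python
# Conceptual sketch of walking an IPv6 extension-header chain. Each
# header's Next Header field names the header that follows; the walk
# stops at an upper-layer protocol.

NEXT_HEADER = {0: "Hop-by-Hop Options", 43: "Routing", 44: "Fragment",
               51: "Authentication", 50: "ESP", 60: "Destination Options",
               6: "TCP", 17: "UDP"}
UPPER_LAYER = {6, 17}

def walk_chain(first: int, chain: dict) -> list:
    """chain maps a header's value to the Next Header value it carries."""
    seen, cur = [], first
    while True:
        seen.append(NEXT_HEADER[cur])
        if cur in UPPER_LAYER:
            return seen
        cur = chain[cur]

# A packet with Hop-by-Hop -> Routing -> Fragment -> TCP:
headers = walk_chain(0, {0: 43, 43: 44, 44: 6})
```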

IPv4 and IPv6 Fragmentation

Although the maximum length of an IP packet can be over 65,000 bytes, many lower-
layer protocols do not support such large maximum transmission units (MTUs). When
the IP layer receives a packet to send, it first queries the outgoing interface to
get its MTU. If the size of the packet is greater than the MTU of the interface,
the packet is fragmented. When a packet is fragmented, it is not reassembled until
it reaches the IP layer on the destination host.

Furthermore, any router in the path to the destination host can fragment a packet,
and any router in the path can further fragment an already fragmented packet. Each
fragment receives its own IP header and is routed independently from other
packets.

If one or more fragments are lost, the entire packet must be retransmitted.
Retransmission is the responsibility of the higher-layer protocol (such as TCP).

If the Don't Fragment (DF) bit in the IP header's flags field is set (010), the
packet is discarded if the outgoing MTU is smaller than the packet size.
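The fragmentation arithmetic can be sketched as follows: each fragment must fit the MTU after its own 20-byte IPv4 header is added, and fragment offsets are expressed in 8-byte units. The function is an illustrative model, not a stack implementation.

```python
# Illustrative sketch of IPv4 fragmentation arithmetic: payload is split
# so each fragment plus a 20-byte IP header fits the MTU, and offsets are
# expressed in 8-byte units.

IP_HDR = 20

def fragment(payload_len: int, mtu: int):
    """Return a list of (offset_in_8_byte_units, length, more_fragments)."""
    if payload_len + IP_HDR <= mtu:
        return [(0, payload_len, False)]      # no fragmentation needed
    per_frag = (mtu - IP_HDR) // 8 * 8        # must be a multiple of 8
    frags, off = [], 0
    while off < payload_len:
        length = min(per_frag, payload_len - off)
        more = off + length < payload_len
        frags.append((off // 8, length, more))
        off += length
    return frags

# A 4000-byte payload over a 1500-byte MTU link:
frags = fragment(4000, 1500)
```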

TTL
There are two time-to-live (TTL) attributes that are used to ensure efficient
delivery: the TTL counter and the TTL threshold. The TTL counter is decremented by
one every time the packet crosses a router. Once the TTL counter reaches zero, the
packet is discarded. The TTL threshold provides greater granularity and is applied
to specified interfaces of multicast-enabled routers. The router compares the TTL
value of the multicast packet with the value specified in the interface
configuration. If the TTL value of the packet is greater than or equal to the TTL
threshold configured for the interface, the packet is forwarded through that
interface. This allows network administrators to limit the distribution of
multicast packets beyond their boundaries by setting high threshold values on
outbound external interfaces. The maximum value for the TTL threshold is 255.
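As a minimal sketch of the two TTL behaviors just described (hop-count decrement for any packet, and the threshold comparison for multicast forwarding):

```python
# Sketch of the two TTL checks: per-hop decrement with discard at zero,
# and the multicast TTL-threshold comparison on an outbound interface.

def forward_unicast(ttl: int) -> bool:
    """Decrement per hop; the packet is discarded when TTL reaches zero."""
    return ttl - 1 > 0

def forward_multicast(packet_ttl: int, iface_threshold: int) -> bool:
    """Forward only if the packet's TTL >= the interface's threshold."""
    return packet_ttl >= iface_threshold
```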

IP MTU

The Path MTU Discovery (PMTUD) feature is defined in RFC 1191. This feature
determines the MTU over the path between two nodes. This allows the TCP session
to set the maximum possible MSS, improving TCP performance for large data
transfers without causing IP fragmentation.

PMTUD is based on trial and error. The first packet is built to the size of the
MTU of the next-hop interface to the destination. The Don't Fragment (DF) bit is
set, and the IP packet is sent. If the packet reaches the destination, the session
forms. However, if the packet does not reach the destination, the intermediary hop
that discards the packet because of an MTU conflict responds with an ICMP
Fragmentation Needed message (the IPv4 counterpart of Packet Too Big), which
contains the MTU of the link that could not accommodate the packet. The sending
host then issues another packet sized to the MTU in the ICMP message. This process
repeats until the packet reaches the destination.

The MSS value is set to the MTU minus the 40 bytes of IP and TCP overhead. The
40-byte value assumes that additional TCP options are not being used, which is the
default behavior. This yields a 1460-byte MSS for a 1500-byte MTU. It is possible
to have even larger MTU sizes, especially internal to the network. A Packet over
SONET (POS) link has an MTU of 4470. If two BGP peers use PMTUD and are connected
only by POS or ATM links with MTUs of 4470, the MSS could be as large as 4430,
which provides an even greater reduction in update packets and TCP ACK messages.
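The MSS arithmetic above can be computed directly: MSS = MTU minus 20 bytes of IP header and 20 bytes of TCP header, assuming no TCP options.

```python
# MSS derivation from MTU: subtract 20 bytes of IP header and 20 bytes
# of TCP header (no TCP options, the default).

def mss_for(mtu: int) -> int:
    return mtu - 20 - 20

ethernet_mss = mss_for(1500)   # 1460, the common Ethernet value
pos_mss = mss_for(4470)        # 4430, as in the POS example above
```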

1.1.e Explain TCP operations

IPv4 and IPv6 PMTU


Path MTU discovery for IPv6 is similar to that for IPv4, allowing a host to
discover the MTUs a packet will encounter along a data path. The big difference
with IPv6 is that only the source handles fragmentation.

Path MTU Discovery (PMTUD) for IPv6


The main goal of PMTUD is finding out the MTU value along a path when a packet is
sent to a destination to avoid fragmentation. Then the source node can use the
maximum MTU size found to communicate with the destination node. Fragmentation
may occur in intermediary routers when the packet is larger than the link layer's
MTU. Fragmentation is a harmful and costly operation in terms of CPU cycles for
routers. Moreover, in some circumstances, already-fragmented packets might be
fragmented again in several intermediate routers along a delivery path, causing a
decrease in performance.

Intermediary routers do not perform fragmentation in IPv6. The source node may
fragment packets by itself only when the path MTU is smaller than the packets to
deliver. As described in RFC 2460, Internet Protocol, Version 6 (IPv6) Specification,
it is strongly recommended that IPv6 nodes implement PMTUD for IPv6 to avoid
fragmentation.

As defined in RFC 1981, Path MTU Discovery for IP version 6, PMTUD for IPv6
uses ICMPv6 error message Type 2, Packet Too Big, for its operation. Figure 1.1
shows an example of PMTUD for IPv6 used by a source node. First, the source node
that sends the first IPv6 packet to a destination node uses 1500 bytes as the MTU
value (1).

Then, the intermediary Router A replies to the source node using an ICMPv6 message
Type 2, Packet Too Big, and specifies 1400 bytes as the lower MTU value in the
ICMPv6 packet (2). The source node then sends the packet but instead uses 1400
bytes as the MTU value; the packet passes through Router A (3).

However, along the path, intermediary Router B replies to the source node using an
ICMPv6 message Type 2 and specifies 1300 bytes as the MTU value (4). Finally, the
source node resends the packet using 1300 bytes as the MTU value. The packet passes
through both intermediary routers and is delivered to the destination node (5). The
session is now established between source and destination nodes, and all packets sent
between them use 1300 bytes as the MTU value (6).

Figure 1.1. PMTUD Uses ICMPv6 Type 2 Messages



The MTU values found by PMTUD for IPv6 are cached by source nodes. With Cisco
IOS Software technology, you can display the PMTUD values cached per destination
using the command show ipv6 mtu. The syntax of this command follows:

Router#show ipv6 mtu
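The retry loop in Figure 1.1 can be sketched as a small simulation. The link MTUs below are the illustrative 1400- and 1300-byte values from the figure:

```python
# Sketch of the IPv6 PMTUD exchange: the source retries with the MTU
# reported in each Packet Too Big message until the packet fits every
# link on the path.

def pmtud(path_mtus: list, start: int = 1500):
    """Return (final MTU, number of Packet Too Big messages received)."""
    size, too_big = start, 0
    while True:
        bottleneck = next((m for m in path_mtus if m < size), None)
        if bottleneck is None:
            return size, too_big       # packet traversed the whole path
        size = bottleneck              # resend at the reported MTU
        too_big += 1

# Router A's link is 1400 bytes and Router B's is 1300, as in Figure 1.1:
mtu, retries = pmtud([1400, 1300])
```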

MSS
The maximum segment size (MSS) is a parameter of the TCP protocol that specifies
the largest amount of data, in octets, that a computer or communications device
can receive in a single TCP segment. It does not count the TCP header or the IP
header. The IP datagram containing a TCP segment may be self-contained within a
single packet, or it may be reconstructed from several fragmented pieces; either
way, the MSS limit applies to the total amount of data contained in the final,
reconstructed TCP segment.

Latency

Key causes:
 propagation delay
 serialization
 data protocols
 routing and switching
 queueing and buffering

Propagation delay is the primary source of latency. It is a function of how long
it takes information to travel over the communications media at the speed of light
from source to destination.

Serialization is the conversion of bytes of data into a serial bit stream to be
transmitted over the media. For example:

Serialization of a 1500-byte packet on a 100 Mbps LAN will take 120 microseconds.
Data communications protocols at various layers use handshakes to synchronize
between transmitter and receiver, and for error detection/correction. These
handshakes take time and therefore also create latency.
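The 120-microsecond figure follows directly from the arithmetic: bits on the wire divided by the link rate.

```python
# Serialization delay: packet size in bits divided by the link rate.

def serialization_delay_us(packet_bytes: int, link_bps: float) -> float:
    return packet_bytes * 8 / link_bps * 1e6

fast_ethernet = serialization_delay_us(1500, 100e6)   # 120.0 microseconds
```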

Routing and switching latency - routers and switches add approximately 200
microseconds of latency to the link for packet processing. This contributes about
5% of the overall latency of an average Internet link. Queueing and buffer
management can contribute 20 ms to latency. This occurs as packets are necessarily
queued due to over-utilization of a link.

Network latency in a packet-switched network is measured either one-way (the time


from the source sending a packet to the destination receiving it), or round-trip delay
time (the one-way latency from source to destination plus the one-way latency from
the destination back to the source). Round-trip latency is more often quoted, because
it can be measured from a single point. Note that round trip latency excludes the
amount of time that a destination system spends processing the packet. Many software
platforms provide a service called ping that can be used to measure round-trip latency.
Ping performs no packet processing; it merely sends a response back when it receives
a packet (i.e. performs a no-op), thus it is a first rough way of measuring latency. Ping
cannot perform accurate measurements,[2] principally because it uses the ICMP
protocol that is used only for diagnostic or control purposes, and differs from real
communication protocols such as TCP. Furthermore, routers and ISPs might apply
different traffic shaping policies to different protocols.
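On Cisco IOS, round-trip latency can be sampled with the ping command; the repeat and size options vary the sample count and packet size (the target address and option values are illustrative assumptions):

ping 192.0.2.1 repeat 10 size 100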

However, in a non-trivial network, a typical packet will be forwarded over many links
via many gateways, each of which will not begin to forward the packet until it has
been completely received. In such a network, the minimal latency is the sum of the
minimum latency of each link, plus the transmission delay of each link except the
final one, plus the forwarding latency of each gateway.

In practice, queuing and processing delays further augment this minimal latency.
Queuing delay occurs when a gateway receives multiple packets from different
sources heading towards the same destination. Since typically only one packet can be
transmitted at a time, some of the packets must queue for transmission, incurring
additional delay. Processing delays are incurred while a gateway determines what to
do with a newly received packet. A new and emergent behavior called buffer-bloat
can also cause increased latency that is an order of magnitude or more. The
combination of propagation, serialization, queuing, and processing delays often
produces a complex and variable network latency profile.

Windowing

TCP Reliability and Flow Control Features and Protocol Modifications


The main task of the Transmission Control Protocol is simple: packaging and sending
data. Of course, almost every protocol packages and sends data. What distinguishes
TCP from these protocols is the sliding window mechanism that controls the flow of
data between devices. This system not only manages the basic data transfer process; it is also used to ensure that data is sent reliably, and to manage the flow of data between devices so that data is transferred efficiently, without either device
sending data faster than the other can receive it.

Extra “smarts” needed to be given to the protocol to handle potential problems, and
changes to the basic way that devices send data were implemented to avoid
inefficiencies that might otherwise have resulted.

I begin with an explanation of the basic method by which TCP detects lost segments and retransmits them, along with some of the issues associated with TCP's acknowledgment scheme and an optional feature for improving its efficiency. I then describe the system by which TCP adjusts how long it will wait before deciding that a segment is lost. This includes a look at the infamous “Silly Window Syndrome” problem, and the special heuristics that modify the basic sliding windows scheme to address issues related to small window sizes.

TCP Sliding Window Acknowledgment System for Data Transport, Reliability, and Flow Control
A simple “send and forget” protocol like IP is unreliable and includes no flow control
for one main reason: it is an open loop system where the transmitter receives no
feedback from the recipient. A datagram is sent and it may or may not get there, but
the transmitter will never have any way of knowing because no mechanism for
feedback exists. This is shown conceptually in Figure 1.2.

Figure 1.2 Operation Of An Unreliable Protocol

1.1f. Explain UDP operations

Starvation
UDP traffic generally requires less bandwidth than TCP traffic. However, when TCP responds to packets being lost, UDP tends to consume a disproportionate amount of bandwidth: because TCP senders reduce their rate of transmitting packets when congestion takes place, UDP senders can utilize more bandwidth. Remember that UDP senders do not reduce their sending rate when congestion occurs. In addition, UDP applications that consume vast amounts of bandwidth can fill an output queue, which can lead to TCP packets being tail dropped. This process, in which the output queues fill with UDP packets and tail-drop TCP packets, is referred to as TCP starvation.
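Tail drop of TCP traffic by greedy UDP flows can be mitigated with the Modular QoS CLI by capping the bandwidth available to the UDP application. The following is a minimal sketch, not a definitive configuration; the access list, class and policy names, UDP port, interface, and policing rate are illustrative assumptions:

access-list 101 permit udp any any eq 5004
class-map match-all UDP-APP
match access-group 101
policy-map PROTECT-TCP
class UDP-APP
police 256000
class class-default
fair-queue
interface Serial0/0
service-policy output PROTECT-TCP

The police statement caps the UDP class at 256 kbps under congestion, leaving the remaining bandwidth for TCP flows in class-default.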

RTP/RTCP concepts

UDP at the transport layer provides the port numbers (session multiplexing) and header checksum. RTP adds a time stamp and a sequence number to the header information, so that packets can be rearranged in the correct order at the remote end (sequence number) and buffered to handle jitter between packets, ensuring smooth audio output (time stamp).
The payload type field designates the type of RTP payload in use; RTP can carry audio or video. When two devices establish an audio session, an even UDP port number (ranging from 16,384 to 32,767) is chosen for each RTP stream. RTP streams are one-way, so for two-way communication, two RTP streams are used, one in each direction. The chosen port is fixed for the entire audio
session; ports do not change dynamically during the call.

The RTCP protocol delivers statistics related to packet count, packet delay, packet loss, and jitter. RTP carries interactive audio and video using a dynamic port range, which makes it difficult to negotiate firewalls. RTP is commonly used for unicast sessions, but it was primarily designed for multicast sessions; in addition to the sender/receiver roles, it also defines the translator and mixer roles to support multicast sessions. RTP is used in streaming media systems, videoconferencing, and push-to-talk systems. Applications relying on RTP are delay sensitive but less sensitive to packet loss, so the preferred transport is UDP. RTP does not send a retransmission request if a packet is lost during transport. Figure 2 illustrates the role of RTP in VoIP networks.

1.2 Network implementation and operation

1.2.a Evaluate proposed changes to a network

Changes to routing protocol parameters

BGP
The Border Gateway Protocol (BGP) is an inter-autonomous system routing protocol.
An autonomous system is a network or group of networks under a common
administration and with common routing policies. BGP is used to exchange routing
information for the Internet and is the protocol used between Internet Service
Providers (ISP). Customer networks, such as universities and corporations, usually
employ an Interior Gateway Protocol (IGP) such as RIP or OSPF for the exchange of
routing information within their networks. Customers connect to ISPs, and ISPs use
BGP to exchange customer and ISP routes. When BGP is used between Autonomous
Systems (AS), the protocol is referred to as External BGP (EBGP). If a service
provider is using BGP to exchange routes within an AS, then the protocol is referred
to as Interior BGP (IBGP).

BGP is a very robust and scalable routing protocol, as evidenced by the fact that BGP
is the routing protocol employed on the Internet. To achieve scalability at this level,
BGP uses many route parameters, called attributes, to define routing policies and
maintain a stable routing environment. BGP neighbors exchange full routing
information when the TCP connection between neighbors is first established.
When changes to the routing table are detected, the BGP routers send to their
neighbors only those routes that have changed. BGP routers do not send periodic
routing updates, and BGP routing updates advertise only the optimal path to a
destination network.
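The distinction between EBGP and IBGP is visible directly in configuration: a neighbor whose remote AS differs from the local AS forms an EBGP session, while a neighbor with the same AS number forms an IBGP session. A minimal sketch follows (the AS numbers and neighbor addresses are illustrative assumptions):

router bgp 65001
neighbor 192.0.2.1 remote-as 65002
neighbor 10.1.1.2 remote-as 65001
end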

MP-BGP
The Multi-Protocol BGP feature adds capabilities to BGP to enable multicast routing
policy throughout the Internet and to connect multicast topologies within and between
BGP autonomous systems. That is, multi-protocol BGP is an enhanced BGP that
carries IP multicast routes. BGP carries two sets of routes, one set for unicast routing
and one set for multicast routing. The routes associated with multicast routing are
used by Protocol Independent Multicast (PIM) to build data distribution trees.

OSPF
Open Shortest Path First (OSPF) is a routing protocol developed for Internet Protocol
(IP) networks by the Interior Gateway Protocol (IGP) working group of the Internet
Engineering Task Force (IETF). It was derived from several research efforts, including an early version of OSI's Intermediate System-to-Intermediate System (IS-IS) routing protocol.

OSPF has two primary characteristics. The first is that the protocol is open, which
means that its specification is in the public domain (RFC 1247). The second principal
characteristic is that OSPF is based on the Shortest Path First (SPF) algorithm, which
sometimes is referred to as the Dijkstra algorithm, named for the person credited with
its creation.

OSPF is a link-state routing protocol that calls for the sending of Link-State
Advertisements (LSAs) to all other routers within the same hierarchical area.
Information on attached interfaces, metrics used, and other variables are included in
OSPF LSAs. As OSPF routers accumulate link-state information, they use the SPF
algorithm to calculate the shortest path to each node.

Network Topology
Discovery of the Border Gateway Protocol (BGP) neighbor topology is done by comparing BGP router parameters on either side of potential BGP neighbors. In particular, a comparison is made between the local and remote BGP router identifiers and autonomous systems, as well as the connection states on both sides.

Service Alarms
The following alarms are supported for this technology:

• BGP Neighbor Loss/BGP Neighbor Found
• BGP Process Down/BGP Process Up
• BGP Link Down/BGP Link Up

Deploying Layer 3 VPNs Over Multipoint L2TPv3 Tunnels


VPN services have been traditionally deployed over IP core networks by configuring
MPLS or through L2TPv3 tunnels using point-to-point links. This feature introduces
the capability to deploy layer 3 VPN services by configuring multipoint L2TPv3
tunnels over an existing IP core network. This feature is configured on only the PE
routers and requires no configuration on the core routers. The L2TPv3 multipoint
tunnel network allows layer 3 VPN services to be carried through the core without the
configuration of MPLS. L2TPv3 multipoint tunneling supports multiple tunnel end
points, which create a full mesh topology that requires only a single tunnel to be
configured on each PE router. This feature provides the capability for VPN traffic to
be carried from enterprise networks across cooperating service provider core networks
to remote sites.

BGP Advertises Tunnel Type and Tunnel Capabilities between PE Routers


Border Gateway Protocol (BGP) is used to advertise the tunnel endpoints and the sub-
address family identifier (SAFI) specific attributes (which contains the tunnel type,
and tunnel capabilities). This feature introduces the tunnel SAFI and the BGP SAFI-
Specific Attribute (SSA) attribute. The tunnel SAFI defines the tunnel endpoint and
carries the endpoint IPv4 address and next hop. The SAFI number 64 identifies the
tunnel SAFI. The BGP SSA carries the BGP preference and BGP flags. It also carries
the tunnel cookie, tunnel cookie length, and session ID. Attribute number 19 identifies
the BGP SSA.

These attributes allow BGP to distribute tunnel encapsulation information between PE routers. VPNv4 traffic is routed through these tunnels. The next hop, advertised in
BGP VPNv4 updates, determines which tunnel to use for routing tunnel traffic.

Configuring the PE Routers and Managing Address Space


A single multipoint L2TPv3 tunnel is configured on each PE router. Configuring a
unique Virtual Routing and Forwarding (VRF) instance creates the VPN. The tunnel
that transports the VPN traffic across the core network resides in its own address
space. A special purpose VRF called a Resolve in VRF (RiV) is created to manage the
tunnel address space. The address space configured under the RiV is associated with
the tunnel, and a static route is configured in the RiV to route outgoing traffic through
the tunnel.

Packet Validation Mechanism


This feature provides a simple mechanism to validate received packets from
appropriate peers. The multipoint L2TPv3 tunnel header is automatically configured
with a 64-bit cookie and L2TPv3 session ID. This packet validation mechanism is
intended to protect the VPN from illegitimate traffic sources, such as injecting a rogue
packet into the tunnel to gain access to the VPN. The cookie and session ID are not
user configurable; however, they are visible in the packet as it's routed between the
two tunnel end-points. This packet validation mechanism will not protect the VPN
from hackers who have the ability to monitor legitimate traffic between PE routers.

Configuring the VRF for the L2TPv3 Tunnel


Configuring a unique Virtual Routing and Forwarding (VRF) instance creates the
VPN. The tunnel that transports the VPN traffic across the core network resides in its own address space. A special purpose VRF called a Resolve in VRF (RiV) is created to manage the tunnel address space.

SUMMARY STEPS
1. enable
2. configure terminal
3. ip vrf vrf-name
4. rd {as-number:network-number | ip-address:network-number}
5. route-target import {as-number:network-number | ip-address:network-number}
6. route-target export {as-number:network-number | ip-address:network-number}
7. exit
8. ip vrf vrf-name
9. rd {as-number:network-number | ip-address:network-number}
10. end

DETAILED STEPS

What to Do Next
Proceed to the next task “Configuring the Multipoint L2TPv3 Tunnel.”

Configuring the Multipoint L2TPv3 Tunnel


Border Gateway Protocol (BGP) is used to advertise the tunnel type, tunnel
capabilities, and tunnel-specific attributes. BGP is also used to distribute VPNv4
routing information between PE routers on the edge of the network, which maintains
peering relationships between the VPN service and VPN sites. The next hop
advertised in BGP VPNv4 updates triggers tunnel endpoint discovery.

Prerequisites
The IP address of the interface, specified as the tunnel source, should match the IP
address used by BGP as the next hop for the VPNv4 update. The BGP configuration
will include the neighbor ip-address update-source loopback 0 command.

SUMMARY STEPS
1. enable
2. configure {terminal | memory | network}
3. interface tunnel interface-number
4. ip vrf forwarding RiV-name
5. ip address ip-address subnet-mask
6. tunnel source loopback interface-number
7. tunnel mode l3vpn l2tpv3 multipoint
8. end

DETAILED STEPS

Configuring the VRF and RiV Example


The following sample configuration creates and configures the VRF and RiV:

ip vrf vrf-name
rd 100:110
route-target import 100:1000
route-target export 100:1000
exit
ip vrf MY_RIV
rd 1:1
end

Configuring the Multipoint L2TPv3 Tunnel Example


The following sample configuration creates and configures the L2TPv3 tunnel:

interface tunnel 1
ip vrf forwarding MY_RIV
ip address 172.16.1.3 255.255.255.255
tunnel source loopback 0
tunnel mode l3vpn l2tpv3 multipoint
end

Configuring a Route Map for the Layer 3 VPN Example



The following sample configuration creates an inbound route map to set the next hop
to be resolved within the VRF:

route-map SELECT_UPDATE_FOR_L3VPN permit 10
set ip next-hop in-vrf MY_RIV
end

Defining Address Space and Configuring BGP Example


The following sample configuration defines address space for the VPN and configures
BGP:

ip route vrf MY_RIV 0.0.0.0 0.0.0.0 tunnel 1
router bgp 100
neighbor 172.16.1.2 remote-as 100
neighbor 172.16.1.2 update-source Loopback 0
address-family vpnv4 unicast
neighbor 172.16.1.2 activate
neighbor 172.16.1.2 route-map SELECT_UPDATE_FOR_L3VPN in
exit-address-family
address-family ipv4 tunnel
neighbor 172.16.1.2 activate
end

Verifying the VRF Example


Use the show ip bgp vpnv4, show ip route vrf, and show ip cef vrf commands to
verify that the VRF and RiV are configured correctly and propagating to the appropriate routing and forwarding tables.

Verify that the specified VRF prefix has been received by BGP. The BGP table entry
should show that the route-map has worked and that the next hop is showing in the
RiV. Use the show ip bgp vpnv4 command as shown in this example:

Router# show ip bgp vpnv4 vrf vrf-name 10.10.10.4


BGP routing table entry for 100:1:10.10.10.4/24, version 12
Paths: (1 available, best #1)
Not advertised to any peer
Local
172.16.1.2 in "vrf-name" from 172.16.1.2 (172.16.1.2)
Origin incomplete, metric 0, localpref 100, valid, internal, best
Extended Community: RT:100:1

Use the show ip route vrf command to confirm that the same information has been propagated to the routing table:

Router# show ip route vrf vrf-name 10.10.10.4
Routing entry for 10.10.10.4/24
Known via "bgp 100", distance 200, metric 0, type internal
Last update from 172.16.1.2 00:23:07 ago
Routing Descriptor Blocks:
* 172.16.1.2 (vrf-name), from 172.16.1.2, 00:23:07 ago
Route metric is 0, traffic share count is 1
AS Hops 0

Use the show ip cef vrf command to verify that the same information has been
propagated to the CEF forwarding table:

Adding Multicast Support

Configuring IP Multicast Routing


IP multicast provides a third scheme, allowing a host to send packets to a subset of all
hosts (group transmission). These hosts are known as group members.

Packets delivered to group members are identified by a single multicast group address. Multicast packets are delivered to a group using best-effort reliability, just
like IP unicast packets.

The multicast environment consists of senders and receivers. Any host, regardless of
whether it is a member of a group, can send to a group. However, only the members
of a group receive the message.

A multicast address is chosen for the receivers in a multicast group. Senders use that
address as the destination address of a datagram to reach all members of the group.

Membership in a multicast group is dynamic; hosts can join and leave at any time.
There is no restriction on the location or number of members in a multicast group. A
host can be a member of more than one multicast group at a time.

How active a multicast group is and what members it has can vary from group to
group and from time to time. A multicast group can be active for a long time, or it
may be very short-lived. Membership in a group can change constantly. A group that
has members may have no activity.

Routers executing a multicast routing protocol, such as Protocol Independent Multicast (PIM), maintain forwarding tables to forward multicast datagrams. Routers
use the Internet Group Management Protocol (IGMP) to learn whether members of a group are present on their directly attached subnets. Hosts join multicast groups by
sending IGMP report messages.

Many multimedia applications involve multiple participants. IP multicast is naturally suitable for this communication paradigm.

To identify the hardware platform or software image information associated with a feature, use the Feature Navigator on Cisco.com to search for information about the feature, or refer to the software release notes for a specific release.

The Cisco IP Multicast Routing Implementation


The Cisco IOS software supports the following protocols to implement IP multicast
routing:

 IGMP is used between hosts on a LAN and the routers on that LAN to track
the multicast groups of which hosts are members.
 Protocol Independent Multicast (PIM) is used between routers so that they can
track which multicast packets to forward to each other and to their directly
connected LANs.
 Distance Vector Multicast Routing Protocol (DVMRP) is used on the
MBONE (the multicast backbone of the Internet). The Cisco IOS software
supports PIM-to-DVMRP interaction.
 Cisco Group Management Protocol (CGMP) is used on routers connected to
Catalyst switches to perform tasks similar to those performed by IGMP.

Figure 1.3 shows where these protocols operate within the IP multicast environment.

Figure 1.3 IP Multicast Routing Protocols

IGMP

To start implementing IP multicast routing in your campus network, you must first
define who receives the multicast. IGMP provides a means to automatically control
and limit the flow of multicast traffic throughout your network with the use of special
multicast queriers and hosts.

 A querier is a network device, such as a router, that sends query messages to discover which network devices are members of a given multicast group.
 A host is a receiver, including routers, that sends report messages (in response
to query messages) to inform the querier of a host membership.

A set of queriers and hosts that receive multicast data streams from the same source is
called a multicast group. Queriers and hosts use IGMP messages to join and leave
multicast groups.
IP multicast traffic uses group addresses, which are Class D IP addresses. The high-
order four bits of a Class D address are 1110. Therefore, host group addresses can be
in the range 224.0.0.0 to 239.255.255.255.

IGMP Versions
Multicast addresses in the range 224.0.0.0 to 224.0.0.255 are reserved for use by
routing protocols and other network control traffic. The address 224.0.0.0 is
guaranteed not to be assigned to any group.

IGMP packets are transmitted using IP multicast group addresses as follows:

 IGMP general queries are destined to the address 224.0.0.1 (all systems on a
subnet).
 IGMP group-specific queries are destined to the group IP address for which the
router is querying.
 IGMP group membership reports are destined to the group IP address for which
the router is reporting.
 IGMP Version 2 (IGMPv2) Leave messages are destined to the address 224.0.0.2
(all routers on a subnet).

Note that in some old host IP stacks, Leave messages might be destined to the group IP address rather than to the all-routers address.

IGMP messages are used primarily by multicast hosts to signal their interest in joining a specific multicast group and to begin receiving group traffic.

The original IGMP Version 1 Host Membership model defined in RFC 1112 is
extended to significantly reduce leave latency and provide control over source
multicast traffic by use of Internet Group Management Protocol, Version 2.

 IGMP Version 1
Provides for the basic Query-Response mechanism that allows the multicast router
to determine which multicast groups are active and other processes that enable
hosts to join and leave a multicast group. RFC 1112 defines Host Extensions for
IP Multicasting.
 IGMP Version 2

Extends IGMP allowing such features as the IGMP leave process, group-specific
queries, and an explicit maximum query response time. IGMP Version 2 also adds
the capability for routers to elect the IGMP querier without dependence on the
multicast protocol to perform this task. RFC 2236 defines Internet Group
Management Protocol, Version 2.
 IGMP Version 3
Provides for “source filtering” which enables a multicast receiver host to signal to
a router which groups it wants to receive multicast traffic from, and from which
sources this traffic is expected.

The PIM protocol maintains the current IP multicast service mode of receiver-
initiated membership. It is not dependent on a specific unicast routing protocol.
PIM is defined in RFC 2362, Protocol-Independent Multicast-Sparse Mode (PIM-
SM): Protocol Specification. PIM is defined in the following Internet Engineering
Task Force (IETF) Internet drafts:

 Protocol Independent Multicast (PIM): Motivation and Architecture


 Protocol Independent Multicast (PIM), Dense Mode Protocol Specification
 Protocol Independent Multicast (PIM), Sparse Mode Protocol Specification
 draft-ietf-idmr-igmp-v2-06.txt, Internet Group Management Protocol, Version 2
 draft-ietf-pim-v2-dm-03.txt, PIM Version 2 Dense Mode

PIM can operate in dense mode or sparse mode. It is possible for the router to handle
both sparse groups and dense groups at the same time.

In dense mode, a router assumes that all other routers want to forward multicast
packets for a group. If a router receives a multicast packet and has no directly
connected members or PIM neighbors present, a prune message is sent back to the
source. Subsequent multicast packets are not flooded to this router on this pruned
branch. PIM builds source-based multicast distribution trees.

In sparse mode, a router assumes that other routers do not want to forward multicast
packets for a group, unless there is an explicit request for the traffic. When hosts join
a multicast group, the directly connected routers send PIM join messages toward the
rendezvous point (RP). The RP keeps track of multicast groups. Hosts that send
multicast packets are registered with the RP by the first hop router of that host. The
RP then sends join messages toward the source. At this point, packets are forwarded
on a shared distribution tree. If the multicast traffic from a specific source is
sufficient, the first hop router of the host may send join messages toward the source to
build a source-based distribution tree.

CGMP
CGMP is a protocol used on routers connected to Catalyst switches to perform tasks
similar to those performed by IGMP. CGMP is necessary for those Catalyst switches
that cannot distinguish between IP multicast data packets and IGMP report messages,
both of which are addressed to the same group address at the MAC level.
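On the router side, CGMP is a per-interface feature; PIM must be enabled on the interface first. A minimal sketch follows (the interface name is an illustrative assumption):

interface FastEthernet0/0
ip pim sparse-dense-mode
ip cgmp
end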

Enabling PIM on an Interface


Enabling PIM on an interface also enables IGMP operation on that interface. An
interface can be configured to be in dense mode, sparse mode, or sparse-dense mode.

The mode determines how the router populates its multicast routing table and how the
router forwards multicast packets it receives from its directly connected LANs. You
must enable PIM in one of these modes for an interface to perform IP multicast routing.


In populating the multicast routing table, dense mode interfaces are always added to
the table. Sparse mode interfaces are added to the table only when periodic join
messages are received from downstream routers, or when a directly connected
member is on the interface. When forwarding from a LAN, sparse mode operation
occurs if an RP is known for the group. If so, the packets are encapsulated and sent
toward the RP. When no RP is known, the packet is flooded in a dense mode fashion.
If the multicast traffic from a specific source is sufficient, the first hop router of the
receiver may send join messages toward the source to build a source-based
distribution tree.

There is no default mode setting. By default, multicast routing is disabled on an interface.

Enabling Dense Mode


To configure PIM on an interface to be in dense mode, use the following command in
interface configuration mode:
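The interface command in question is ip pim dense-mode; a minimal sketch follows (the interface name is an illustrative assumption):

interface Ethernet0/0
ip pim dense-mode
end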

Enabling Sparse-Dense Mode


If you configure either the ip pim sparse-mode or ip pim dense-mode interface
configuration command, then sparseness or denseness is applied to the interface as a
whole. However, some environments might require PIM to run in a single region in
sparse mode for some groups and in dense mode for other groups.
An alternative to enabling only dense mode or only sparse mode is to enable sparse-
dense mode. In this case, the interface is treated as dense mode if the group is in dense
mode; the interface is treated in sparse mode if the group is in sparse mode. You must
have an RP if the interface is in sparse-dense mode, and you want to treat the group as
a sparse group.

If you configure sparse-dense mode, the idea of sparseness or denseness is applied to the group on the router, and the network manager should apply the same concept
throughout the network.

Another benefit of sparse-dense mode is that Auto-RP information can be distributed in a dense mode manner; yet, multicast groups for user groups can be used in a sparse
mode manner. Thus, there is no need to configure a default RP at the leaf routers.

When an interface is treated in dense mode, it is populated in the outgoing interface list of a multicast routing table when either of the following conditions is true:

 Members or DVMRP neighbors are on the interface.
 There are PIM neighbors and the group has not been pruned.

When an interface is treated in sparse mode, it is populated in the outgoing interface list of a multicast routing table when either of the following conditions is true:

 Members or DVMRP neighbors are on the interface.
 A PIM neighbor on the interface has received an explicit join message.

To enable PIM to operate in the same mode as the group, use the following command
in interface configuration mode:
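The interface command in question is ip pim sparse-dense-mode; a minimal sketch follows (the interface name is an illustrative assumption):

interface Ethernet0/0
ip pim sparse-dense-mode
end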

Configuring PIM Dense Mode State Refresh


If you have PIM dense mode (PIM-DM) enabled on a router interface, the PIM Dense
Mode State Refresh feature is enabled by default.

PIM-DM builds source-based multicast distribution trees that operate on a “flood and
prune” principle. Multicast packets from a source are flooded to all areas of a PIM-
DM network. PIM routers that receive multicast packets and have no directly
connected multicast group members or PIM neighbors send a prune message back up
the source-based distribution tree toward the source of the packets. As a result,
subsequent multicast packets are not flooded to pruned branches of the distribution
tree. However, the pruned state in PIM-DM times out approximately every 3 minutes
and the entire PIM-DM network is re-flooded with multicast packets and prune
messages. This re-flooding of unwanted traffic throughout the PIM-DM network
consumes network bandwidth.

The PIM Dense Mode State Refresh feature keeps the pruned state in PIM-DM from
timing out by periodically forwarding a control message down the source-based
distribution tree. The control message refreshes the prune state on the outgoing
interfaces of each router in the distribution tree.

This feature also enables PIM routers in a PIM-DM multicast network to recognize
topology changes (sources joining or leaving a multicast group) before the default 3-
minute state refresh timeout period expires.

By default, all PIM routers that are running a Cisco IOS software release that supports
the PIM Dense Mode State Refresh feature automatically process and forward state
refresh control messages. To disable the processing and forwarding of state refresh
control messages on a PIM router, use the ip pim state-refresh disable global
configuration command.

To configure the origination of the control messages on a PIM router, use the
following commands beginning in global configuration mode:
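The origination interval is configured per interface with the ip pim state-refresh origination-interval command; a minimal sketch follows (the interface name and the 60-second interval are illustrative assumptions):

interface Ethernet0/0
ip pim state-refresh origination-interval 60
end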

Configuring a Rendezvous Point


If you configure PIM to operate in sparse mode, you must also choose one or more
routers to be rendezvous points (RPs). You need not configure the routers to be RPs;
they learn how to become RPs themselves. RPs are used by senders to a multicast
group to announce their existence and by receivers of multicast packets to learn about
new senders. The Cisco IOS software can be configured so that packets for a single
multicast group can use one or more RPs.

The RP address is used by first hop routers to send PIM register messages on behalf of a host sending a packet to the group. The RP address is also used by last hop routers to send PIM join and prune messages to the RP to inform it about group membership. You must configure the RP address on all routers (including the RP router).

A PIM router can be an RP for more than one group. Only one RP address can be
used at a time within a PIM domain. The conditions specified by the access list
determine for which groups the router is an RP.

To configure the address of the RP, use the following command on a leaf router in
global configuration mode:
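The global command in question is ip pim rp-address, optionally qualified by an access list that defines the groups for which the router is the RP; a minimal sketch follows (the RP address and group range are illustrative assumptions):

access-list 10 permit 239.0.0.0 0.255.255.255
ip pim rp-address 10.0.0.1 10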

Configuring Auto-RP
Auto-RP is a feature that automates the distribution of group-to-RP mappings in a
PIM network. This feature has the following benefits:

 The use of multiple RPs within a network to serve different group ranges is easy.
 It allows load splitting among different RPs and arrangement of RPs according to
the location of group participants.
 It avoids inconsistent, manual RP configurations that can cause connectivity
problems.

Multiple RPs can be used to serve different group ranges or serve as backups of each
other. To make Auto-RP work, a router must be designated as an RP-mapping agent,
which receives the RP-announcement messages from the RPs and arbitrates conflicts.
The RP-mapping agent then sends the consistent group-to-RP mappings to all other
routers. Thus, all routers automatically discover which RP to use for the groups they
support.
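The two Auto-RP roles described above can be sketched as follows (interface, TTL scope, and access-list values are illustrative). The candidate RP announces itself on the well-known group 224.0.1.39; the mapping agent listens there and advertises the resulting mappings on 224.0.1.40:

```
! On the candidate RP: announce this router as RP for groups in access list 5
Router(config)# ip pim send-rp-announce loopback0 scope 16 group-list 5
Router(config)# access-list 5 permit 239.0.0.0 0.255.255.255
! On the RP-mapping agent: collect announcements and distribute group-to-RP mappings
Router(config)# ip pim send-rp-discovery loopback0 scope 16
```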

Enabling the Functional Address for IP Multicast over Token Ring LANs

By default, IP multicast datagrams on Token Ring LAN segments use the MAC-level
broadcast address 0xFFFF.FFFF.FFFF. That default places an unnecessary burden on
all devices that do not participate in IP multicast. The IP Multicast over Token Ring
LANs feature defines a way to map IP multicast addresses to a single Token Ring
MAC address.

This feature defines the Token Ring functional address (0xc000.0004.0000) that
should be used over Token Ring. A functional address is a severely restricted form of
multicast addressing implemented on Token Ring interfaces. Only 31 functional
addresses are available. A bit in the destination MAC address designates it as a
functional address.

The implementation used by Cisco complies with RFC 1469, IP Multicast over
Token-Ring Local Area Networks.

If you configure this feature, IP multicast transmissions over Token Ring interfaces
are more efficient than they formerly were. This feature reduces the load on other
machines that do not participate in IP multicast because they do not process these
packets.

The following restrictions apply to the Token Ring functional address:

 This feature can be configured only on a Token Ring interface.
 Neighboring devices on the Token Ring on which this feature is used should
also use the same functional address for IP multicast traffic.
 Because there are a limited number of Token Ring functional addresses, other
protocols could be assigned to the Token Ring functional address
0xc000.0004.0000. Therefore, not every frame sent to the functional address is
necessarily an IP multicast frame.

To enable the mapping of IP multicast addresses to the Token Ring functional address
0xc000.0004.0000, use the following command in interface configuration mode:
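The command itself was lost from this copy; it is the interface-level ip multicast use-functional command (interface number illustrative):

```
Router(config)# interface tokenring 0
! Map IP multicast addresses to the Token Ring functional address 0xc000.0004.0000
Router(config-if)# ip multicast use-functional
```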

Configuring PIM Version 2

PIM Version 2 includes the following improvements over PIM Version 1:

 A single, active RP exists per multicast group, with multiple backup RPs. This
single RP compares to multiple active RPs for the same group in PIM Version
1.
 A bootstrap router (BSR) provides a fault-tolerant, automated RP discovery
and distribution mechanism. Thus, routers dynamically learn the group-to-RP
mappings.

 Sparse mode and dense mode are properties of a group, as opposed to an
interface. We strongly recommend sparse-dense mode, as opposed to either
sparse mode or dense mode only.
 PIM join and prune messages have more flexible encodings for multiple
address families.
 A more flexible hello packet format replaces the query packet to encode
current and future capability options.
 Register messages to an RP indicate whether a border router or a designated
router sent them.
 PIM packets are no longer inside IGMP packets; they are standalone packets.

PIM Version 1, together with the Auto-RP feature, can perform the same tasks as the
PIM Version 2 BSR. However, Auto-RP is a standalone protocol, separate from PIM
Version 1, and is Cisco proprietary. PIM Version 2 is a standards track protocol in the
IETF. We recommend that you use PIM Version 2.

Prerequisites
The Cisco PIM Version 2 implementation allows interoperability and transition
between Version 1 and Version 2, although there might be some minor problems. You
can upgrade to PIM Version 2 incrementally. PIM Versions 1 and 2 can be configured
on different routers within one network. However, all routers on a shared media
network must run the same PIM version. Therefore, if a PIM Version 2 router detects
a PIM Version 1 router, the Version 2 router downgrades itself to Version 1 until all
Version 1 routers have been shut down or upgraded.
PIM uses the BSR to discover and announce RP-set information for each group prefix
to all the routers in a PIM domain. This is the same function accomplished by Auto-
RP, but the BSR is part of the PIM Version 2 specification.

To avoid a single point of failure, you can configure several candidate BSRs in a PIM
domain. A BSR is elected among the candidate BSRs automatically; they use
bootstrap messages to discover which BSR has the highest priority. This router then
announces to all PIM routers in the PIM domain that it is the BSR.
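A hedged sketch of a candidate BSR configuration (the interface, 30-bit hash mask length, and priority of 10 are illustrative values):

```
! Advertise this router as a candidate BSR using loopback0's address,
! a 30-bit hash mask for group-to-RP hashing, and a priority of 10
Router(config)# ip pim bsr-candidate loopback0 30 10
```

The candidate with the highest priority (and, on a tie, the highest IP address) wins the BSR election.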

Routers that are configured as candidate RPs then unicast to the BSR the group range
for which they are responsible. The BSR includes this information in its bootstrap
messages and disseminates it to all PIM routers in the domain. Based on this
information, all routers will be able to map multicast groups to specific RPs. As long
as a router is receiving the bootstrap message, it has a current RP map.

 When PIM Version 2 routers interoperate with PIM Version 1 routers, Auto-RP
should have already been deployed.
 Because bootstrap messages are sent hop by hop, a PIM Version 1 router will
prevent these messages from reaching all routers in your network. Therefore, if
your network has a PIM Version 1 router in it, and only Cisco routers, it is best to
use Auto-RP rather than the bootstrap mechanism.

PIM Version 2 Configuration Task List


There are two approaches to using PIM Version 2. You can use Version 2 exclusively
in your network, or migrate to Version 2 by employing a mixed PIM version
environment. When deploying PIM Version 2 in your network, use the following
guidelines:

 If your network is all Cisco routers, you may use either Auto-RP or the bootstrap
mechanism (BSR).
 If you have routers other than Cisco in your network, you need to use the
bootstrap mechanism.

Configuring PIM Sparse-Dense Mode


To configure PIM sparse-dense mode, use the following commands on all PIM
routers inside the PIM domain beginning in global configuration mode:
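The command table referenced above is missing from this copy; the essential steps are as follows (interface name illustrative):

```
! Enable IP multicast routing globally
Router(config)# ip multicast-routing
! Enable PIM sparse-dense mode on each interface that should run PIM
Router(config)# interface ethernet 0
Router(config-if)# ip pim sparse-dense-mode
```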

Defining a PIM Sparse Mode Domain Border Interface


A border interface in a PIM sparse mode (PIM-SM) domain requires special
precautions to avoid exchange of certain traffic with a neighboring domain reachable
through that interface, especially if that domain is also running PIM-SM. BSR and
Auto-RP messages should not be exchanged between different domains, because
routers in one domain may elect RPs in the other domain, resulting in protocol
malfunction or loss of isolation between the domains.
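As a sketch, the border is defined per interface with the ip pim bsr-border command (interface number illustrative); BSR messages are neither sent nor accepted on that interface:

```
! Stop BSR messages from crossing the inter-domain link
Router(config)# interface serial 0
Router(config-if)# ip pim bsr-border
```

Note that ip pim bsr-border stops only BSR messages; Auto-RP announce and discovery traffic (groups 224.0.1.39 and 224.0.1.40) must be filtered separately, for example with the ip multicast boundary interface command.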

Configuring Candidate RPs


Configure one or more candidate RPs. Similar to BSRs, the RPs should also be well
connected and in the backbone portion of the network. An RP can serve the entire IP
multicast address space or a portion of it. Candidate RPs send candidate RP
advertisements to the BSR.
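A hedged candidate RP sketch (the interface and group range are illustrative):

```
! Advertise this router to the BSR as candidate RP for the groups in access list 10
Router(config)# ip pim rp-candidate loopback0 group-list 10
Router(config)# access-list 10 permit 239.0.0.0 0.255.255.255
```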

Consider the following scenarios when deciding which routers should be RPs:
 In a network of Cisco routers where only Auto-RP is used, any router can be
configured as an RP.
 In a network of routers that includes only Cisco PIM Version 2 routers and routers
from other vendors, any router can be used as an RP.
 In a network of Cisco PIM Version 1 routers, Cisco PIM Version 2 routers, and
routers from other vendors, only Cisco PIM Version 2 routers should be
configured as RPs.

Making the Transition to PIM Version 2


On each LAN, the Cisco implementation of PIM Version 2 automatically enforces the
rule that all PIM messages on a shared LAN are in the same PIM version. To
accommodate that rule, if a PIM Version 2 router detects a PIM Version 1 router on
the same interface, the Version 2 router downgrades itself to Version 1 until all
Version 1 routers have been shut down or upgraded.

Deciding When to Configure a BSR


If there are only Cisco routers in your network (no routers from other vendors), there
is no need to configure a BSR. Configure Auto-RP in the mixed PIM Version
1/Version 2 environment.

Dense Mode
Dense mode groups in a mixed Version 1/Version 2 region need no special
configuration; they will interoperate automatically.

Sparse Mode
Sparse mode groups in a mixed Version 1/Version 2 region are possible because the
Auto-RP feature in Version 1 interoperates with the RP feature of Version 2.
Although all PIM Version 2 routers also can use Version 1, we recommend that the
RPs be upgraded to Version 2 (or at least upgraded to PIM Version 1 in the Cisco IOS
Release 11.3 software).

To ease the transition to PIM Version 2, we also recommend the following
configuration:

 Auto-RP be used throughout the region
 Sparse-dense mode be configured throughout the region

1.3 – Network Troubleshooting

1.3a - Use IOS troubleshooting tools

The ability to maintain, monitor, and troubleshoot networks is important. You might
design a switched LAN network that meets the requirements of the organization using
the network today, but over time these requirements might exceed the current
capabilities of your design. It is very important that you understand how to monitor
and troubleshoot your network, ensuring that you can identify key issues in your
network and resolve them quickly and efficiently.

To ensure the continuing operation and stability of your LAN switches, you must put
in place mechanisms that allow for the fast restoration of configuration files and other
files in the event of a switch failure or file corruption. Understanding how you can
work with key files on your LAN switches in terms of transferring these files to and
from other locations for both backup and restoration purposes is paramount in
ensuring your ability to maintain the availability of the LAN network.

It is always important to put the LAN network in perspective; it exists to allow
organizations to run network applications and permit the exchange of information
vital to the ongoing operation of the organization. The ability to troubleshoot these
network applications is vital; you might often need to view the traffic that is being
transmitted between the end points of a network application to aid in troubleshooting
a specific problem. Cisco Catalyst switches enable you to capture network traffic,
even though traffic capture has traditionally been difficult to perform on a switch.
In a switched environment, capturing traffic is difficult because the fabric that forms
each broadcast domain is not shared and the traffic destined for a specific host on a
specific switch port is not normally sent to any other ports. Traffic capture also
enables you to monitor the performance of your network, looking for possible issues
such as excessive broadcast and multicast traffic or an excess of corrupted frames.
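Such a capture is configured with a SPAN session. A minimal local SPAN sketch (interface names illustrative) that mirrors one access port to the port where a traffic analyzer is attached:

```
! Mirror both transmitted and received traffic from Gi0/1...
Switch(config)# monitor session 1 source interface gigabitethernet0/1 both
! ...to Gi0/2, where the traffic analyzer is connected
Switch(config)# monitor session 1 destination interface gigabitethernet0/2
```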

Using IP and LAN Connectivity Tools


An important feature of Cisco Catalyst switches is the troubleshooting tools that are
provided as part of the operating system, which enable you to quickly diagnose
problems that might arise in your switched environment. Cisco Catalyst switches
provide the following troubleshooting tools:
 IP connectivity tools, such as ping and traceroute
 LAN connectivity tools, such as Cisco Discovery Protocol (CDP) and Layer 2
traceroute
 Debugging tools
 Monitoring tools, such as switch port analyzer (SPAN)

In this scenario, you learn about the IP and LAN connectivity tools.

Understanding IP and LAN Connectivity Tools


The tools provided for verifying connectivity to the network are important features of
any networking device. As a network engineer, when you install networking
equipment and attach it to the network, you want confirmation that the equipment is
configured correctly and is communicating with the network. Cisco Catalyst switches
provide traditional IP connectivity tools, such as ping and trace route, which are used
for verifying management communications on Layer 2 switches and are useful for
verifying Layer 3 routing on Layer 3 switches. Cisco Catalyst switches also provide
LAN (Layer 2) connectivity tools, such as the Cisco Discovery Protocol (CDP) and
Layer 2 trace route (l2trace), which are useful for verifying inter-switch
communications and the Layer 2 transmission paths within a Layer 2 domain. In
summary, the following tools are provided on Cisco Catalyst switches for verifying
Layer 2 and Layer 3 (IP) connectivity:
 The ping utility
 The trace route utility
 Cisco Discovery Protocol (CDP)
 Layer 2 trace route (l2trace)
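As an illustration of the Layer 2 tool on IOS-based Catalyst switches, traceroute mac maps the Layer 2 path between two hosts (the MAC addresses below are illustrative; CDP must be enabled on all switches in the path):

```
Switch# traceroute mac 0001.0002.0003 0004.0005.0006
```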

IP Connectivity Tools—the ping Utility


The ping utility represents the most fundamental tool used for verifying Layer 3
connectivity. Ping can quickly provide you with information indicating whether or not
a remote device is alive, as well as the end-to-end delay of the network path to the
remote device. You have used the ping utility numerous times to verify basic IP
connectivity between systems; as a network engineer you will probably use ping on a
daily basis to ensure networks that you are configuring are working as expected or to
aid in the diagnosis of faults that are affecting connectivity.

On Cisco IOS-based Catalyst switches, you can use either the basic ping or extended
ping utilities for testing connectivity. The basic ping sends just five ICMP echo
requests to a remote destination and indicates whether or not an ICMP echo reply was
received and how long it took for each reply to be received. The extended ping allows
you much more flexibility, by allowing you to send a configurable number of ICMP
echo requests, with the packet size of your choice along with many other parameters.

To invoke an extended IP ping for testing IP connectivity, issue the ping ip
command without any arguments and answer the interactive prompts. Notice the
round-trip time statistics in the output, which provide an indication of the current
end-to-end latency in the network.

By using the extended ping utility, you can modify the parameters of the ping test
such as increasing the number of packets sent and the size of each packet sent. You
can also specify the source IP address used in each packet, which is useful if you want
to ensure that the return path from the destination being tested has the necessary
routing information to reach the source IP address specified.

The source IP address specified must be a valid IP address on the local switch itself;
you can't use extended ping to spoof (masquerade) a source IP address of another
device.
You can configure other advanced features such as setting the type of service (useful
for testing quality of service), setting the don't fragment (DF) bit (useful for
determining the lowest maximum transmission unit [MTU] of a transmission path by
testing whether or not IP packets of a particular length can reach a destination), and
more.
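A representative extended ping dialog on Cisco IOS (target address and parameter values illustrative), showing the repeat count, datagram size, source address, and DF-bit prompts described above:

```
Router# ping
Protocol [ip]:
Target IP address: 192.168.1.1
Repeat count [5]: 100
Datagram size [100]: 1500
Timeout in seconds [2]:
Extended commands [n]: y
Source address or interface: 10.1.1.1
Type of service [0]:
Set DF bit in IP header? [no]: yes
Validate reply data? [no]:
Data pattern [0xABCD]:
Loose, Strict, Record, Timestamp, Verbose[none]:
Sweep range of sizes [n]:
Type escape sequence to abort.
Sending 100, 1500-byte ICMP Echos to 192.168.1.1, timeout is 2 seconds:
```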

On CatOS, the ping utility is much simpler than the extended ping available on Cisco
IOS, because Cisco IOS was originally designed for Layer 3 routers rather than Layer
2 switches, which essentially act only as an end device in terms of IP. You can use the
-s option with the ping command to send a continuous stream of Internet Control
Message Protocol (ICMP) traffic until you interrupt the utility. Example 1.1 shows
the syntax of the ping utility on CatOS.

Example 1.1. Using the ping utility on CatOS


Console> (enable) ping <host>
Console> (enable) ping -s <host> [packet_size] [packet_count]

In Example 1.1, if you use the ping command with no options, a basic ping is
assumed and five 56-byte ICMP echo requests are sent to the host specified. If you
use the -s option, you can optionally specify a custom packet size and packet count. If
you do not specify a custom packet size and packet count with the -s option, an
almost infinite number (2,147,483,647) of 56-byte ICMP echo requests is sent to the
host specified.

To interrupt the ping -s command, use the Ctrl-C key sequence.

Example 1.2 shows an example of using the different methods available to the ping
utility on a CatOS-based Catalyst switch.

Example 1.2 Using the ping utility on CatOS


Console> (enable) ping 192.168.1.1
!!!!!
----192.168.1.1 PING Statistics----
5 packets transmitted, 5 packets received, 0% packet loss
round-trip (ms) min/avg/max = 1/1/1
Console> (enable) ping -s 192.168.1.1 1400 2
PING 192.168.1.1: 1400 data bytes
1408 bytes from 192.168.1.1: icmp_seq=0. time=18 ms
1408 bytes from 192.168.1.1: icmp_seq=1. time=10 ms
----192.168.1.1 PING Statistics----
2 packets transmitted, 2 packets received, 0% packet loss
round-trip (ms) min/avg/max = 10/14/18
Console> (enable) ping -s 192.168.1.1
PING 192.168.1.1: 56 data bytes
64 bytes from 192.168.1.1: icmp_seq=0. time=7 ms
64 bytes from 192.168.1.1: icmp_seq=1. time=11 ms
64 bytes from 192.168.1.1: icmp_seq=2. time=10 ms
64 bytes from 192.168.1.1: icmp_seq=3. time=6 ms
64 bytes from 192.168.1.1: icmp_seq=4. time=10 ms
64 bytes from 192.168.1.1: icmp_seq=5. time=9 ms
64 bytes from 192.168.1.1: icmp_seq=6. time=8 ms
64 bytes from 192.168.1.1: icmp_seq=7. time=7 ms
64 bytes from 192.168.1.1: icmp_seq=8. time=9 ms
^C
----192.168.1.1 PING Statistics----
9 packets transmitted, 9 packets received, 0% packet loss
round-trip (ms) min/avg/max = 6/8/11

In Example 1.2, the basic ping utility with no options is first demonstrated. Next,
the ping -s 192.168.1.1 1400 2 command is executed, which generates two 1400-byte
ICMP echo requests (1400 bytes represents the data payload of the ICMP echo
request; the ICMP header adds an additional 8 bytes to the packet). Finally, the ping -s
192.168.1.1 command is executed, which generates continuous ICMP echo requests
every second, until the Ctrl-C break sequence (^C) is executed.

The trace-route Utility


The trace-route utility is a useful tool that is used to determine the Layer 3 path taken
to reach a destination host from a source host.

The trace-route utility verifies the network path only from the source to destination
(as indicated by the arrows in Figure 1-1), but does not verify return traffic from the
destination to source. When the return path is different from the forward path, this is
described as asymmetric routing (as opposed to symmetric routing). Asymmetric
routing can be undesirable because although traffic might take the most optimum path
to a destination, return traffic takes a less optimum path back. Asymmetric routing
also causes issues for firewalls, as most firewalls classify traffic in terms of
connections and must see both the forward and return traffic on a connection (for
example, if a firewall receives a return packet associated with a connection, but has
not seen the connection setup packets sent in the forward direction, the firewall rejects
the packet as part of an invalid connection).
Traditionally, because the trace-route utility is useful only for routed IP networks, it
has not been used often on Layer 2 devices such as switches. However, with the
growing number of Layer 3 switched networks, the trace-route utility has become a
useful troubleshooting tool for Layer 3 switched LAN networks.

The trace-route utility works by using the time-to-live (TTL) field inside an IP packet
header. Every IP packet header has a TTL field, which specifies how long the packet
should "live"; this field prevents IP packets from continuously circulating around
routing loops. A routing loop exists when a packet bounces back and forth between
two or more routers and can be present due to the convergence behavior of older
distance-vector routing protocols after a network failure. Each router that routes an IP
packet decrements the TTL field; if a routing loop exists in the network, the TTL field
is eventually decremented to 0 and the IP packet is dropped. When an IP packet is
dropped because the TTL field reaches zero (expires), the router that drops the packet
sends an ICMP "TTL expired in transit" message back to the source of the IP packet,
to inform the source that the packet never reached its destination. The source IP
address of the ICMP message is that of the router that dropped the packet, which
therefore indicates to the sender that the router is part of the network path to the
destination.

The goal of trace-route is to determine the network path taken, hop by hop, to reach a
destination IP address. Trace-route works by generating probe packets addressed to
the desired destination (Cisco IOS sends UDP datagrams to high-numbered ports,
whereas some implementations, such as Windows tracert, send ICMP echo requests),
initially with a TTL of 1 and then with an incrementing TTL (1, 2, 3, 4, and so on).
This causes each router in turn along the transmission path to receive a probe packet
with a TTL of 1, which the router discards, generating an ICMP "TTL expired in
transit" message that is sent back to the source. When a probe finally reaches the
destination itself, the destination answers (with an ICMP port unreachable message in
the UDP case), telling the source that the trace is complete.

1.3.b – Apply Troubleshooting Methodologies

Troubleshooting Frame Relay


"Serial0 is down, line protocol is down": This output means you have a problem with
the cable, the channel service unit/data service unit (CSU/DSU), or the serial line. You
need to troubleshoot the problem with a loopback test. To do a loopback test, follow
the steps below:

Set the serial line encapsulation to HDLC and the keepalive to 10 seconds. To do so,
issue the commands encapsulation hdlc and keepalive 10 under the serial interface.

1. Place the CSU/DSU or modem in local loop mode. If the line protocol comes
up when the CSU, DSU or modem is in local loopback mode (indicated by a
"line protocol is up (looped)" message), it suggests that the problem is
occurring beyond the local CSU/DSU. If the status line does not change states,
there is possibly a problem in the router, connecting cable, CSU/DSU or
modem. In most cases, the problem is with the CSU/DSU or modem.
2. Ping your own IP address with the CSU/DSU or modem looped. There should
not be any misses. An extended ping with an all-zeros pattern (0x0000) is helpful in
resolving line problems, since a T1 or E1 derives clocking from the data and requires
a transition every 8 bits; B8ZS ensures that. A heavy-zeros data pattern helps to
determine whether the transitions are appropriately forced on the trunk. A heavy-ones
pattern is used to appropriately simulate a high zero load in case there is a pair of data
inverters in the path. The alternating pattern (0x5555) represents a "typical"
data pattern. If your pings fail or if you get cyclic redundancy check (CRC)
errors, a bit error rate tester (BERT) with an appropriate analyzer from the
telco is needed.
3. When you are finished testing, make sure you return the encapsulation to
Frame Relay.
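The temporary test configuration described above can be sketched as follows (interface number illustrative):

```
Router(config)# interface serial 0
! Temporarily use HDLC so the line can be tested without Frame Relay LMI
Router(config-if)# encapsulation hdlc
Router(config-if)# keepalive 10
! ...run the loopback and ping tests, then restore the original encapsulation:
Router(config-if)# encapsulation frame-relay
```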

"Serial0 is up, line protocol is down"



This line in the output means that the router is getting a carrier signal from the
CSU/DSU or modem. Check to make sure the Frame Relay provider has activated
their port and that your Local Management Interface (LMI) settings match. Generally,
the Frame Relay switch ignores the data terminal equipment (DTE) unless it sees the
correct LMI (use Cisco's default LMI type, "cisco"). Check to make sure the Cisco
router is transmitting data.

You will most likely need to check the line integrity using loop tests at various
locations beginning with the local CSU and working your way out until you get to the
provider's Frame Relay switch.

"Serial0 is up, line protocol is up"


If you did not turn keepalives off, this line of output means that the router is talking
with the Frame Relay provider's switch. You should be seeing a successful exchange
of two-way traffic on the serial interface with no CRC errors. Keepalives are
necessary in Frame Relay because they are the mechanism that the router uses to
"learn" which data-link connection identifiers (DLCIs) the provider has provisioned.
To watch the exchange, you can safely use debug frame-relay lmi in almost all
situations. The debug frame-relay lmi command generates very few messages and
can provide answers to questions such as:

1. Is the Cisco router talking to the local Frame Relay switch?
2. Is the router getting full LMI status messages for the subscribed permanent
virtual circuits (PVCs) from the Frame Relay provider?
3. Are the DLCIs correct?

Here's some sample debug frame-relay lmi output from a successful connection:
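The sample output was missing from this copy; the following is a representative, illustrative capture (interface name, clock values, sequence numbers, and DLCI are assumed):

```
Router# debug frame-relay lmi
Serial0(out): StEnq, clock 20212760, myseq 206, mineseen 205, yourseen 136, DTE up
Serial0(in): Status, clock 20212764, myseq 206
RT IE 1, length 1, type 1
KA IE 3, length 2, yourseq 136, myseq 206
Serial0(out): StEnq, clock 20222760, myseq 207, mineseen 206, yourseen 137, DTE up
Serial0(in): Status, clock 20222764, myseq 207
RT IE 1, length 1, type 0
KA IE 3, length 2, yourseq 137, myseq 207
PVC IE 0x7, length 0x6, dlci 100, status 0x2
```

The periodic StEnq/Status keepalive exchange (report type 1) with incrementing sequence numbers answers question 1; a full status message (report type 0) listing a PVC IE for each provisioned DLCI, with status 0x2 (active), answers questions 2 and 3.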

Gather Information on the Problem

Problems are typically discovered and reported by one of the following types of users:

 External users trying to reach employees within your company
 Internal users using phones to call employees in other company locations or
PSTN destinations, and to perform basic actions such as call transfers and dialing
into conferences

As the network administrator, you must collect sufficient information from these users
to allow you to isolate the problem. Detailed, accurate information will make this task
easier. Table 3 lists recommended questions to ask users when they report a problem.
As you turn up your network, you may consider putting these questions in an on-line
form. A form will encourage users to provide more details about the problem and also
put them into the habit of looking for particular error messages and indicators.
Capturing the information electronically will also permit you to retrieve and re-
examine this information in the future, should the problem repeat itself.

Questions to Ask Users When They Report Problems

Ask this question: Did something fail or did it simply perform poorly?
To determine: Whether the issue relates to system degradation or a connectivity
failure. An example of a failure is when a user dials a phone number and hears a fast
busy tone. An example of a performance problem is when a user dials into a
conference call and hears "choppy" audio when other parties speak. Quality of
service or performance issues require a different approach than connectivity or
operational problems. You must still isolate the potential sources of the problem, but
you will typically use performance management tools instead of log files.

Ask this question: What device were you trying to use?
To determine: The device type, model and version of software installed. It is also
critical to capture the IP address assigned to the device, as well as its MAC address.
In the case of IP phones, determining the phone's active Cisco Unified
Communications Manager server is also important. On Cisco Unified IP phones,
pressing the Settings button and choosing the Network Configuration option from the
menu can display these important network values.

Ask this question: Did it ever work?
To determine: Whether the device was recently installed and the problem occurred
while making it work for the first time, or whether the device was operating normally
before the problem occurred. If the device was newly installed, the problem is most
likely due to improper configuration or wiring of that particular device. Problems
with devices that are already up and running can typically be traced back to one of
two causes: (a) the user modifying their device, such as changing their configuration
or upgrading software, or (b) a change or failure elsewhere in the network.

Ask this question: Exactly what action(s) did you perform?
To determine: The steps that led up to the problem, including which buttons were
pressed and in which order. Capturing this information in detail is important so that
you can consistently reproduce the problem.

Ask this question: What error message(s) appeared or announcements did you hear?
To determine: The visual and audio indicators of the problem. Ask users to provide
the exact text that appears and any error codes in either an e-mail or on-line form. If
the error indication was audible, ask the user to write down the announcement they
heard, the last menu option they were able to successfully choose, or the tone they
heard when the call failed.

Ask this question: What time did the problem occur?
To determine: The date and time to compare against entries in log files. If the
problem occurred on a Cisco Unified IP phone, make certain the user provides the
timestamp that appears on their phone's display. Several Cisco components in a
network may capture the same problem event in separate log files, with different ID
values. In order to correlate log entries written by different components, you must
compare the timestamps to find messages for the same event. Cisco Unified IP
phones synchronize their date and time with their active Cisco Unified
Communications Manager server. If all Cisco components in the network use
Network Time Protocol (NTP) to synchronize with the same source, then the
timestamps for the same problem messages will match in every log file.

Ask this question: What is the number of the phone you used, and what was the
phone number you called?
To determine: Whether the problem relates to a WAN or PSTN link, or a Cisco
Unified Communications Manager dial plan issue. Ask the user the phone number he
or she dialed (called number) and determine if the destination was within his or her
site, another site within the corporate network, or a PSTN destination. Because the
calling number (the number of the phone used) also affects call routing in some
cases, capture this number as well.

Ask this question: Did you try to perform any special actions, such as a transfer,
forward, call park, call pickup, or meet-me conference? Is the phone set up to
automatically perform any of these actions?
To determine: Whether the problem is not directly related to the calling number or
called number, but rather to the supplementary service setup on Unified
Communications Manager, or whether the problem is at the destination phone the
user tried to reach by transferring or forwarding the call.

Ask this question: Did you attempt the same action on another device?
To determine: Whether the problem is isolated to that user's device or represents a
more widespread network problem. If the user cannot make a call from his or her
phone, ask the user to place a call to the same destination using a phone in a nearby
office.
Isolate Point(s) of Failure

After collecting information on the symptoms and behavior of the problem, you
should narrow the focus of your efforts as follows:

 Identify the specific devices involved in the problem.
 Check the version of software running on each device.
 Determine if something has changed in the network.
 Verify the integrity of the IP network.

Identify Devices Involved in the Problem

In large- to medium-sized networks, it is crucial to identify the specific phones,
routers, switches, servers and other devices that were involved in a reported problem.
Isolating these devices allows you to rule out the vast majority of equipment within
the network and focus your time and energy on suspect devices. To help you isolate
which devices were involved in a problem, two types of information can prove
invaluable:

 Network topology diagrams: It is strongly recommended that you have one or
more diagrams that show the arrangement of all Cisco Unified
Communications products in your network. These diagrams illustrate how
these devices are connected and also capture each device's IP address and
name (you may want to also keep a spreadsheet or database of the latter
information). This information can help you visualize the situation and focus
on the devices that may be contributing to the reported problem. See Network
Topology Diagrams for recommendations on how to prepare these diagrams.

 Call flow diagrams: Cisco equipment, including Unified Communications
Manager servers, typically provides detailed debug and call trace log files. To
interpret these log files, however, it is useful to understand the signaling that
occurs between devices as calls are set up and disconnected. Using the
network topology and call flow diagrams in conjunction with the log files, you
can trace how far a call progressed before it failed and identify which device
reported the problem. Examples of using call flow diagrams for problem
isolation are shown in Troubleshooting Daily Operations.

Check Software Release Versions for Compatibility


After you have identified which devices may be involved in the problem, verify that
the version of software running on each device is compatible with the software
running on every other device. As part of Cisco Unified Communications Release
7.0(1) verification, Cisco Systems has performed interoperability and load testing on
simulated network environments running specific software versions. The Release
Matrix lists the combination of software releases that were tested.

However, if the combination of releases installed in your network does not match the
values in the Release Matrix, it does not necessarily mean the combination is invalid.
To check interoperability for a specific device and software release, locate and review
its Release Notes. Release Notes contain up-to-date information on compatibility
between the product and various releases of other products. This document also
describes open caveats, known issues that may cause unexpected behavior. Before
beginning extensive troubleshooting work, examine the Release Notes to determine if
you are experiencing a known problem that has an available workaround.

Determine if Network Changes Have Occurred


Before focusing on the particular device or site where the problem occurred, it may be
useful to determine if a change was made to surrounding devices. If something has
been added, reconfigured or removed from elsewhere in the network, that change may
be the source of the problem. It is recommended that you track changes to the network
such as:

 New user phones added
 Modifications to Cisco Unified Communications Manager call routing
settings, such as new directory numbers, route patterns and dial rules to
support new sites or devices
 Changes to port configurations on switches, routers or gateways (new
equipment, wiring changes or new port activation)
 Changes to IP addressing schemes (such as adding new subnets) that may have
affected route tables

Verify the IP Network Integrity


Always remember that Cisco Unified Communications equipment relies on a
backbone IP network. Many connectivity problems are not caused by configuration
errors or operational failures on Cisco devices, but rather by the IP network that
interconnects them. Problems such as poor voice quality are typically due to IP
network congestion, while call failures between locations may be the result of
network outages due to disconnected cables or improperly configured IP route tables.

Before assuming that call processing problems result from Cisco Unified
Communications devices themselves, check the integrity of the backbone IP network.
Keep the OSI model in mind as you perform these checks. Start from the bottom, at
the physical layer, by checking the end-to-end cabling. Then verify the status of
Layer 2 switches, looking for any port errors. Move from there to confirm that the
Layer 3 routers are running and contain correct routing tables. Continue up the OSI
stack to Layer 7, the application layer. To resolve problems occurring at the top levels
of the stack, a protocol analyzer (or "sniffer") may be useful. You can use a sniffer to
examine the IP traffic passing between devices and also decode the packets. Sniffers
are particularly useful for troubleshooting errors between devices that communicate
using Media Gateway Control Protocol (MGCP) or Session Initiation Protocol (SIP).

Apply Tools to Determine the Problem's Root Cause


After you have eliminated the IP network as the source of the problem and you have
isolated the specific Cisco Unified Communications components involved, you can
start applying the many diagnostic tools provided by Cisco components.

1.3.c – Interpret Packet Capture

Wireshark Traffic Analyzer


To understand what happens inside a network requires the ability to capture and
analyze traffic. Prior to Cisco IOS Release XE 3.3.0SG, the Catalyst 4500 series
switch offered only two features to address this need: SPAN and debug platform
packet. Both are limited. SPAN is ideal for capturing packets, but can only deliver
them by forwarding them to some specified local or remote destination; it provides no
local display or analysis support. The debug platform packet command is specific to
the Catalyst 4500 series switch and only works on packets that stem from the software
process-forwarding path. It has limited local display capabilities and no
analysis support.

So the need exists for a traffic capture and analysis mechanism that is applicable to
both hardware and software forwarded traffic and that provides strong packet capture,
display and analysis support, preferably using a well-known interface.

Wireshark dumps packets to a file using a well-known format called .pcap, and is
applied or enabled on individual interfaces. You specify an interface in EXEC mode
along with the filter and other parameters. The Wireshark application is applied only
when you enter a start command and is removed only when Wireshark stops capturing
packets either automatically or manually.

Capture Points
A capture point is the central policy definition of the Wireshark feature. The point
describes all the characteristics associated with a given instance of Wireshark: what
packets to capture, where to capture them from, what to do with the captured packets,
and when to stop. Capture points can be modified after creation and do not become
active until explicitly activated with a start command. This process is termed
activating the capture point or starting the capture point. Capture points are identified
by name and may also be manually or automatically deactivated or stopped.

Multiple capture points may be defined and activated simultaneously.
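As an illustrative sketch of the capture point lifecycle (command forms follow the Catalyst 4500 IOS XE Wireshark feature but vary by release; the capture point name, interface, and ACL name here are hypothetical):

```
monitor capture CAP interface GigabitEthernet 3/1 both   ! attachment point
monitor capture CAP access-list CAP-ACL                  ! core system filter (ACL-based)
monitor capture CAP file location bootflash:CAP.pcap     ! write packets to a .pcap file
monitor capture CAP start                                ! activate the capture point
monitor capture CAP stop                                 ! deactivate it
```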

Attachment Points: Interfaces and traffic directions


An attachment point is a point in the logical packet process path associated with a
capture point. Consider an attachment point as an attribute of the capture point.
Packets that impact an attachment point are tested against the capture point's filters;
packets that match are copied and sent to the capture point's associated Wireshark
instance. A specific capture point can be associated with multiple attachment points,
although some restrictions apply when you mix attachment points of different types.
Attachment points are
directional (input or output or both) with the exception of the Layer 2 VLAN
attachment point, which is always bidirectional.

Filters
Filters are attributes of a capture point that identify and limit the subset of traffic
traveling through the attachment point of a capture point, which is copied and passed
to Wireshark. To be displayed by Wireshark, a packet must pass through an
attachment point, as well as all of the filters associated with the capture point.

A capture point has three types of filters:

 Core system filter—The core system filter is applied by hardware, and its
match criteria are limited by hardware. This filter determines whether
hardware-forwarded traffic is copied to software for Wireshark purposes.

 Capture filter—The capture filter is applied by Wireshark. The match criteria
are more granular than those supported by the core system filter. Packets that
pass the core filter but fail the capture filter are still copied and sent to the
CPU/software, but are discarded by the Wireshark process. The capture filter
syntax matches that of the display filter.

 Display filter—The display filter is applied by Wireshark to select which of
the captured packets are decoded and displayed on the console.

Core System Filter

You can specify core system filter match criteria by using the class map or ACL, or
explicitly by using the CLI.

In some installations, you need to obtain authorization to modify the switch
configuration, which can lead to extended delays if the approval process is lengthy.
This would limit the ability of network administrators to monitor and analyze traffic.
To address this situation, Wireshark supports explicit specification of core system
filter match criteria from the EXEC mode CLI. The disadvantage is that the match
criteria that you can specify is a limited subset of what class map supports, such as
MAC, IP source and destination addresses, ether-type, IP protocol, and TCP/UDP
source and destination ports.

If you prefer to use configuration mode, you can define ACLs or class maps and have
capture points refer to them. Explicit and ACL-based match criteria are used
internally to construct class maps and policy maps. These implicitly constructed class
maps are not reflected in the switch running-config and are not NVGEN'd.

Capture Filter

The capture filter allows you to direct Wireshark to further filter incoming packets
based on various conditions. Wireshark applies the capture filter immediately on
receipt of the packet; packets that fail the capture filter are neither stored nor
displayed.

The switch receives this parameter and passes it unchanged to Wireshark. Because
Wireshark parses the capture filter definition, the defining syntax is the one
provided by the Wireshark display filter. This syntax differs from that of standard
Cisco IOS ACLs, which allows you to specify match criteria that cannot be expressed
with standard ACL syntax.

Actions
Wireshark can be invoked on live traffic or on a previously existing .pcap file. When
invoked on live traffic, it can perform four types of actions on packets that pass its
capture and display filters:

 Captures to buffer in memory to decode and analyze and store
 Stores to a .pcap file
 Decodes and displays
 Stores and displays

When invoked on a .pcap file only, only the decode and display action is applicable.

Storing Captured Packets to Buffer in Memory


Packets can be stored in the capture buffer in memory for subsequent decode,
analysis, or storage to a .pcap file.

The capture buffer can operate in linear or circular mode. In linear mode, new packets are
discarded when the buffer is full. In circular mode, if the buffer is full, the oldest
packet is discarded to accommodate the new packet. Although the buffer can also be
cleared when needed, this mode is mainly used for debugging network traffic.
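The difference between the two buffer modes can be illustrated with Python's deque (an analogy only, not the actual IOS implementation):

```python
from collections import deque

# Circular mode: when the buffer is full, the oldest packet is
# discarded to make room for the new one (deque with maxlen).
circular = deque(maxlen=3)
for pkt in ["p1", "p2", "p3", "p4"]:
    circular.append(pkt)
print(list(circular))  # → ['p2', 'p3', 'p4']

# Linear mode: when the buffer is full, new packets are discarded.
linear = []
for pkt in ["p1", "p2", "p3", "p4"]:
    if len(linear) < 3:
        linear.append(pkt)
print(linear)  # → ['p1', 'p2', 'p3']
```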

Storing Captured Packets to a .pcap File

Wireshark can store captured packets to a .pcap file. The capture file can be located
on the following storage devices:

 Catalyst 4500 series switch on-board flash storage (bootflash:)


 external flash disk (slot:)
 USB drive (usb0:)

When configuring a Wireshark capture point, you can associate a filename. When the
capture point is activated, Wireshark creates a file with the specified name and writes
packets to it. If the file already exists when the file is associated or the capture point is
activated, Wireshark queries you as to whether the file can be overwritten. Only one
capture point may be associated with a given filename.

If the destination of the Wireshark writing process is full, Wireshark fails with partial
data in the file. You must ensure that there is sufficient space in the file system before
you start the capture session. With Cisco IOS XE Release 3.3.0SG, the file
system full status is not detected for some storage devices.

You can reduce the required storage space by retaining only a segment, instead of the
entire packet. Typically, you do not require details beyond the first 64 or 128 bytes.
The default behavior is to store the entire packet.

To avoid possible packet drops when processing and writing to the file system,
Wireshark can optionally use a memory buffer to temporarily hold packets as they
arrive. Memory buffer size can be specified when the capture point is associated with
a .pcap file.

Decoding and Displaying Packets


Wireshark can decode and display packets to the console. This functionality is
possible for capture points applied to live traffic and for capture points applied to a
previously existing .pcap file.

Wireshark can decode and display packet details for a wide variety of packet formats.
The details are displayed by entering the monitor capture name start command with
one of the following keyword options, which place you into a display and decode
mode:

 brief—Displays one line per packet (the default).
 detailed—Decodes and displays all the fields of all the packets whose
protocols are supported. Detailed mode requires more CPU than the other two
modes.
 (hexadecimal) dump—Displays one line per packet as a hexadecimal dump of
the packet data and the printable characters of each packet.

When you enter the capture command with the decode and display option, the
Wireshark output is returned to Cisco IOS and displayed on the console unchanged.

Displaying Live Traffic


Wireshark receives copies of packets from the Catalyst 4500 series switch core
system. Wireshark applies its capture and display filters to discard uninteresting
packets, and then decodes and displays the remaining packets.

Displaying from .pcap File


Wireshark can decode and display packets from a previously stored .pcap file and
use the display filter to selectively display packets. A capture filter is not
applicable in this situation.

Storing and Displaying Packets


Functionally, this mode is a combination of the previous two modes. Wireshark stores
packets in the specified .pcap file and decodes and displays them to the console. Only
the core and capture filters are applicable here.

Activating and Deactivating Wireshark Capture Points


After a Wireshark capture point has been defined with its attachment points, filters,
actions, and other options, it must be activated. Until the capture point is activated, it
does not actually capture packets.

Before a capture point is activated, some sanity checks are performed. A capture point
cannot be activated if it has neither a core system filter nor attachment points defined.
Attempting to activate such a capture point generates an error.

The capture and display filters are specified as needed.

After Wireshark capture points are activated, they can be deactivated in multiple
ways. A capture point that is storing only packets to a .pcap file can be halted
manually or configured with time or packet limits, after which the capture point halts
automatically. Only packets that pass the Wireshark capture filter are counted against
the packet limit threshold.

When a Wireshark capture point is activated, a fixed rate filter is applied
automatically in the hardware so that the CPU is not flooded with Wireshark-directed
packets. The disadvantage of the rate filter is that you cannot capture contiguous
packets beyond the established rate even if more resources are available.

Using IOS Embedded Packet Capture


When enabled, the router captures the packets sent and received. The packets are
stored within a buffer in DRAM and are thus not persistent through a reload. Once the
data is captured, it can be examined in a summary or detailed view on the router. In
addition, the data can be exported as a packet capture (PCAP) file to allow for further
examination. The tool is configured in exec mode and is considered a temporary
assistance tool. As a result, the tool configuration is not stored within the router
configuration and will not remain in place after the system reloads.
Cisco IOS Configuration Example
Basic EPC Configuration
1. Define a 'capture buffer', which is a temporary buffer that the captured packets
are stored within. There are various options that can be selected when the
buffer is defined; such as size, maximum packet size, and circular/linear:
monitor capture buffer BUF size 2048 max-size 1518 linear

2. A filter can also be applied to limit the capture to desired traffic. Define an
Access Control List (ACL) within config mode and apply the filter to the
buffer:
ip access-list extended BUF-FILTER
permit ip host 192.168.1.1 host 172.16.1.1
permit ip host 172.16.1.1 host 192.168.1.1
monitor capture buffer BUF filter access-list BUF-FILTER

3. Define a 'capture point', which defines the location where the capture occurs.
The capture point also defines whether the capture occurs for IPv4 or IPv6 and
in which switching path (process versus cef):
monitor capture point ip cef POINT fastEthernet 0 both

4. Attach the buffer to the capture point:


monitor capture point associate POINT BUF

5. Start the capture:


monitor capture point start POINT

6. The capture is now active. Allow collection of the necessary data.

7. Stop the capture:


monitor capture point stop POINT

8. Examine the buffer on the unit:


show monitor capture buffer BUF dump

9. Export the buffer from the router for further analysis:


monitor capture buffer BUF export tftp://10.1.1.1/BUF.pcap

10. Once the necessary data has been collected, delete the "capture point" and
"capture buffer":
no monitor capture point ip cef POINT fastEthernet 0 both
no monitor capture buffer BUF

Cisco IOS-XE Configuration Example


The Embedded Packet Capture feature was introduced in Cisco IOS-XE Release 3.7
(15.2(4)S). The capture configuration differs from that of Cisco IOS and adds more
features.
Basic EPC Configuration
1. Define the location where the capture will occur:
monitor capture CAP interface GigabitEthernet0/0/1 both

2. Associate a filter. The filter may be specified inline, or an ACL or class-map


may be referenced:
monitor capture CAP match ipv4 protocol tcp any any

3. Start the capture:


monitor capture CAP start

4. The capture is now active. Allow it to collect the necessary data.

5. Stop the capture:


monitor capture CAP stop

6. Examine the capture in a summary view:


show monitor capture CAP buffer brief

7. Examine the capture in a detailed view:


show monitor capture CAP buffer detailed

8. In addition, export the capture in PCAP format for further analysis:


monitor capture CAP export ftp://10.0.0.1/CAP.pcap

9. Once the necessary data has been collected, remove the capture:
no monitor capture CAP

2.0 – Layer 2 Technologies

2.1 – LAN switching technologies

2.1.a – Implement and troubleshoot switch administration


The IEEE defines the format and assignment of network addresses by requiring
manufacturers to encode globally unique unicast Media Access Control (MAC)
addresses on all NICs. The first half of the MAC address identifies the manufacturer
of the card and is called the organizationally unique identifier (OUI).

Ethernet or Token Ring router interfaces and all network interface cards (NICs) are
identified with a unique burned-in address (BIA), called the MAC address or the
physical address. The MAC address is implemented at the Data-Link Layer (Layer 2)
of the OSI reference model to identify the station. The MAC address is 48 bits in
length (6 octets) and is represented in hexadecimal format.

The first three bytes of a MAC address form the Organizational Unique Identifier
(OUI), which identifies the manufacturer. The last three octets are administered by the
manufacturer and assigned in sequence.

Non-canonical and Canonical Transmission


When converting hexadecimal MAC addresses to binary, each hexadecimal number is
represented by its binary equivalent. Thus, dc-c0-df-fa-0f-3c is 11011100 11000000
11011111 11111010 00001111 00111100 when converted to binary.

In Ethernet networks, the octets of a MAC address are transmitted in order from left
to right, but within each octet the least significant bit (LSB) is transmitted first and
the most significant bit (MSB) is transmitted last. Thus, the first octet of MAC
address dc-c0-df-fa-0f-3c, i.e., dc, is transmitted on the wire as 00111011; the second
octet, c0, is transmitted as 00000011; the third octet, df, is transmitted as 11111011;
and so on. This is called canonical transmission and is also known as LSB-first
transmission.
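The per-octet bit reversal can be sketched in Python (a hypothetical helper for illustration, not part of any Cisco tool):

```python
# Sketch: compute the LSB-first (canonical) wire order of each octet
# of a MAC address, as used on Ethernet.

def reverse_bits(octet: int) -> int:
    """Reverse the 8 bits of one octet (LSB becomes MSB)."""
    result = 0
    for _ in range(8):
        result = (result << 1) | (octet & 1)
        octet >>= 1
    return result

def wire_order(mac: str) -> list[str]:
    """Return each octet of the MAC as the bit pattern sent on the wire."""
    return [format(reverse_bits(int(o, 16)), "08b") for o in mac.split("-")]

print(wire_order("dc-c0-df-fa-0f-3c")[:3])
# → ['00111011', '00000011', '11111011']
```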

Errdisable recovery

Function of Errdisable
If the configuration shows a port to be enabled, but software on the switch detects an
error situation on the port, the software shuts down that port. In other words, the
switch operating system software automatically disables the port because of an error
condition encountered on the port.

When a port is error disabled, it is effectively shut down and no traffic is sent or
received on that port. The port LED is set to the color orange and, when you issue the
show interfaces command, the port status shows err-disabled. When an interface is
disabled because of an error condition, messages describing the cause also appear on
the console and in the syslog; for example, a message is logged when a host port
receives a bridge protocol data unit (BPDU) while BPDU guard is enabled. The actual
message depends on the reason for the error condition.

The error disable function serves two purposes:

 It lets the administrator know when and where there is a port problem.
 It eliminates the possibility that this port can cause other ports on the module
(or the entire module) to fail.

Such a failure can occur when a bad port monopolizes buffers or port error messages
monopolize inter-process communications on the card, which can ultimately cause
serious network issues. The error disable feature helps prevent these situations.

Causes of Errdisable
This feature was first implemented to handle special collision situations in which the
switch detected excessive or late collisions on a port. Excessive collisions occur when
a frame is dropped because the switch encounters 16 collisions in a row. Late
collisions occur after every device on the wire should have recognized that the wire
was in use. Possible causes of these types of errors include:

 A cable that is out of specification (either too long, the wrong type, or
defective)
 A bad network interface card (NIC) (with physical problems or driver
problems)
 A port duplex misconfiguration

A port duplex misconfiguration is a common cause of these errors because of failures to
negotiate the speed and duplex properly between two directly connected devices (for
example, a NIC that connects to a switch). Only half-duplex connections should ever
have collisions in a LAN. Because of the carrier sense multiple access (CSMA) nature
of Ethernet, collisions are normal for half duplex, as long as the collisions do not
exceed a small percentage of traffic.

There are various reasons for the interface to go into errdisable. The reason can be:

 Duplex mismatch
 Port channel misconfiguration
 BPDU guard violation
 Uni-Directional Link Detection (UDLD) condition
 Late-collision detection
 Link-flap detection
 Security violation
 Port Aggregation Protocol (PAgP) flap
 Layer 2 protocol tunneling (L2PT) guard
 DHCP snooping rate-limit
 Incorrect GBIC / Small Form-Factor Pluggable (SFP) module or cable
 Address Resolution Protocol (ARP) inspection
 Inline power

Recover a Port from Err-disabled State


In order to recover a port from the errdisable state, first identify and correct the root
problem, and then re-enable the port. If you re-enable the port before you fix the root
problem, the ports just become error disabled again.
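Besides manually bouncing the port, IOS can re-enable error-disabled ports automatically via the errdisable recovery feature; a minimal sketch (the cause, interval, and interface shown are illustrative):

```
errdisable recovery cause bpduguard    ! auto-recover ports disabled by BPDU guard
errdisable recovery interval 300       ! attempt recovery after 300 seconds
show errdisable recovery               ! verify enabled causes and timers

! Manual alternative, once the root problem is fixed:
interface FastEthernet0/1
 shutdown
 no shutdown
```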
Correct the Root Problem
After you discover why the ports were disabled, fix the root problem. The fix depends
on the triggering problem. There are numerous things that can trigger the shutdown.
 Ether-Channel misconfiguration
In order for EtherChannel to work, the ports that are involved must have
consistent configurations. The ports must have the same VLAN, the same
trunk mode, the same speed, the same duplex, and so on. Most of the
configuration differences within a switch are caught and reported when you
create the channel. If one switch is configured for EtherChannel and the other
switch is not configured for EtherChannel, the spanning tree process can shut
down the channeled ports on the side that is configured for EtherChannel. The
on mode of EtherChannel does not send PAgP packets to negotiate with the
other side before channeling; it just assumes that the other side is channeling.
In addition, this example does not turn on EtherChannel for the other switch,
but leaves these ports as individual, un-channeled ports. If you leave the other
switch in this state for a minute or so, Spanning Tree Protocol (STP) on the
switch where the EtherChannel is turned on thinks that there is a loop. This
puts the channeling ports in the err-disabled state.
In this example, a loop was detected and the ports were disabled. The output of the
show etherchannel summary command shows that the number of channel-groups in
use is 0, and the status of each port that was involved shows as err-disabled.
The EtherChannel was torn down because the ports were placed in errdisable on this
switch.

In order to determine what the problem was, look at the error message. The message
indicates that the EtherChannel encountered a spanning tree loop. One way to fix the
situation is to set the channel mode to desirable on both sides of the connection, and
then reenable the ports. Then, each side forms a channel only if both sides agree to
channel. If they do not agree to channel, both sides continue to function as normal
ports.

L2 MTU
The MTU for an L2VPN is important because Label Distribution Protocol (LDP) does
not bring a pseudo-wire (PW) up when the MTUs on the attachment circuits at each
side of a PW are not the same.

When there is an MTU mismatch, the relevant show command output on the PE
shows that the L2VPN PW stays down.

Note that the MPLS L2VPN provider edges (PEs) at each side must
signal the same MTU value in order to bring the PW up.

The MTU signaled by MPLS LDP does not include the L2 overhead. This is different
from the IOS XR interface configuration and show commands, which include the L2
overhead. The MTU on the sub-interface is 2018 bytes (the inherited main interface
MTU of 2014 bytes plus the 4-byte dot1q tag), but LDP signaled an MTU of 2000
bytes: LDP subtracts the 18 bytes of L2 header (14 bytes of Ethernet header + 4 bytes
of dot1q tag) from the interface MTU.

It is important to understand how each device computes the MTU values of the
attachment circuits in order to fix MTU mismatches. This depends upon parameters
such as vendor, platform, software version, and configuration.
2.1.b – Implement and Troubleshoot Layer 2 Protocols

Cisco Discovery Protocol (CDP)


Cisco Discovery Protocol (CDP) is a Cisco proprietary protocol that is media- and
protocol-independent and runs on all Cisco-manufactured equipment over any Layer 2
protocol that supports Sub-network Access Protocol (SNAP) frames, including
Ethernet, Frame Relay, and ATM. With CDP, network management applications can
obtain the device type and the SNMP IP address of neighboring devices.

CDP allows network management applications to dynamically discover Cisco devices
that are neighbors of already known devices, neighbors running lower-layer
transparent protocols in particular. CDP runs over the data link layer only, not the
network layer. Therefore, two systems that support different network layer protocols
can learn about each other. Cached CDP information is available to network
management applications. However, Cisco devices never forward a CDP packet.
When new information is received, old information is discarded.

CDP is enabled by default but you can use the no cdp run global command to disable
CDP. You can also disable CDP per interface by using the no cdp enable interface
command. In Catalyst OS (CatOS), the command to globally disable CDP is set cdp
disable. In CatOS, to disable CDP on a port, use the set cdp disable [mod/port]
command.

To find out about neighboring Cisco routers or switches, use the show cdp neighbors
command, which gives summary information of each router. You use the same
command on a Catalyst switch. To get more detailed information about neighboring
routers, use the show cdp neighbors detail command. From the output, you can
gather neighbor information such as name, IP address, platform type, and IOS
version.

Understanding LLDP
The Cisco Discovery Protocol (CDP) is a device discovery protocol that runs over
Layer 2 (the data link layer) on all Cisco-manufactured devices (routers, bridges,
access servers, and switches). CDP allows network management applications to
automatically discover and learn about other Cisco devices connected to the network.

To support non-Cisco devices and to allow for interoperability between other devices,
the switch supports the IEEE 802.1AB LLDP. LLDP is a neighbor discovery protocol
that is used for network devices to advertise information about themselves to other
devices on the network. This protocol runs over the data-link layer, which allows two
systems running different network layer protocols to learn about each other.

LLDP supports a set of attributes that it uses to discover neighbor devices. These
attributes contain type, length, and value descriptions and are referred to as TLVs.
LLDP supported devices can use TLVs to receive and send information to their
neighbors. Details such as configuration information, device capabilities, and device
identity can be advertised using this protocol.

The switch supports the following basic management TLVs, which are optional:
 Port description TLV
 System name TLV
 System description TLV
 System capabilities TLV
 Management address TLV

These organizationally specific LLDP TLVs are also advertised to support
LLDP-MED:

 Port VLAN ID TLV (IEEE 802.1 organizationally specific TLV)
 MAC/PHY configuration/status TLV (IEEE 802.3 organizationally specific
TLV)

UDLD
In order to detect the unidirectional links before the forwarding loop is created, Cisco
designed and implemented the UDLD protocol.
UDLD is a Layer 2 (L2) protocol that works with the Layer 1 (L1) mechanisms to
determine the physical status of a link. At Layer 1, auto-negotiation takes care of
physical signaling and fault detection. UDLD performs tasks that auto-negotiation
cannot perform, such as detecting the identities of neighbors and shutting down
misconnected ports. When you enable both auto-negotiation and UDLD, Layer 1 and
Layer 2 detections work together to prevent physical and logical unidirectional
connections and the malfunctioning of other protocols.
UDLD works by exchanging protocol packets between the neighboring devices. In
order for UDLD to work, both devices on the link must support UDLD and have it
enabled on respective ports.
Each switch port configured for UDLD sends UDLD protocol packets that contain the
port's own device/port ID, and the neighbor's device/port IDs seen by UDLD on that
port. Neighboring ports should see their own device/port ID (echo) in the packets
received from the other side.
If the port does not see its own device/port ID in the incoming UDLD packets for a
specific duration of time, the link is considered unidirectional.
This echo-algorithm allows detection of these issues:
 Link is up on both sides, however, packets are only received by one side.
 Wiring mistakes when receive and transmit fibers are not connected to the
same port on the remote side.
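The echo check described above can be sketched as follows (a simplification for illustration; the identifiers are hypothetical, not the actual UDLD PDU format):

```python
# Simplified sketch of the UDLD echo algorithm: a port declares the link
# bidirectional only if its own device/port ID appears in the echo list
# of the UDLD packets received from the neighbor.

def link_state(my_id: str, received_echo_ids: list[str]) -> str:
    """Return 'bidirectional' if our ID is echoed back, else 'unidirectional'."""
    return "bidirectional" if my_id in received_echo_ids else "unidirectional"

# Healthy link: the neighbor echoes our device/port ID back.
print(link_state("SwitchA:1/2", ["SwitchA:1/2"]))   # → bidirectional

# Miswired fiber: the neighbor echoes some other port's ID.
print(link_state("SwitchA:1/2", ["SwitchA:1/3"]))   # → unidirectional
```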
Once the unidirectional link is detected by UDLD, the respective port is disabled and
this message is printed on the console:
UDLD-3-DISABLE: Unidirectional link detected on port 1/2. Port disabled
A port shut down by UDLD remains disabled until it is manually re-enabled, or until
the err-disable timeout expires (if configured).
UDLD Modes of Operation
UDLD can operate in two modes: normal and aggressive.
In normal mode, if the link state of the port was determined to be bi-directional and
the UDLD information times out, no action is taken by UDLD. The port state for
UDLD is marked as undetermined. The port behaves according to its STP state.
In aggressive mode, if the link state of the port is determined to be bi-directional and
the UDLD information times out while the link on the port is still up, UDLD tries to
re-establish the state of the port. If not successful, the port is put into the
err-disable state.
Aging of UDLD information happens when the port that runs UDLD does not receive
UDLD packets from the neighbor port for the duration of the hold time. The hold time for the
port is dictated by the remote port and depends on the message interval at the remote
side. The shorter the message interval, the shorter the hold time and the faster the
detection. Recent implementations of UDLD allow configuration of message interval.
UDLD information can age out due to the high error rate on the port caused by some
physical issue or duplex mismatch. Such packet drop does not mean that the link is
unidirectional and UDLD in normal mode will not disable such link.
It is important to be able to choose the right message interval in order to ensure proper
detection time. The message interval should be fast enough to detect the
unidirectional link before the forwarding loop is created, however, it should not
overload the switch CPU. The default message interval is 15 seconds, and is fast
enough to detect the unidirectional link before the forwarding loop is created with
default STP timers. The detection time is approximately equal to three times the
message interval.
For example: T_detection ≈ message_interval × 3
This is 45 seconds for the default message interval of 15 seconds.
It takes T_reconvergence = max_age + 2 × forward_delay for STP to reconverge in case of
a unidirectional link failure. With the default timers, this is 20 + 2 × 15 = 50 seconds.
It is recommended to keep T_detection < T_reconvergence by choosing an appropriate
message interval.
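The relationship between these two timers can be checked with a short calculation. This sketch simply encodes the two approximations from the text; the function names are illustrative:

```python
# Approximations from the text, not an exact IOS model.

def udld_detection_time(message_interval: int) -> int:
    """UDLD detection time is roughly three times the message interval."""
    return 3 * message_interval

def stp_reconvergence_time(max_age: int = 20, forward_delay: int = 15) -> int:
    """STP reconvergence on a unidirectional link failure: max_age + 2 * forward_delay."""
    return max_age + 2 * forward_delay

t_detect = udld_detection_time(15)       # default 15-second message interval -> 45 s
t_reconverge = stp_reconvergence_time()  # default STP timers -> 50 s
assert t_detect < t_reconverge           # UDLD should detect before STP reconverges
```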
In aggressive mode, once the information is aged, UDLD will make an attempt to re-
establish the link state by sending packets every second for eight seconds. If the link
state is still not determined, the link is disabled.
Aggressive mode adds additional detection of these situations:
 The port is stuck (on one side the port neither transmits nor receives, however,
the link is up on both sides).
 The link is up on one side and down on the other side. This issue might be
seen on fiber ports. When the transmit fiber is unplugged on the local port, the
link remains up on the local side. However, it is down on the remote side.
Most recently, fiber Fast Ethernet hardware implementations have Far End Fault
Indication (FEFI) functions in order to bring the link down on both sides in these
situations. On Gigabit Ethernet, a similar function is provided by link negotiation.
Copper ports are normally not susceptible to this type of issue, as they use Ethernet
link pulses to monitor the link. It is important to mention that in both cases, no
forwarding loop occurs because there is no connectivity between the ports. If the link
is up on one side and down on the other, however, blackholing of traffic might occur.
Aggressive UDLD is designed to prevent this.
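A minimal IOS configuration sketch for enabling UDLD in aggressive mode; the interface name and the err-disable recovery option are illustrative:

```
! Enable aggressive mode globally (applies to fiber ports)
Switch(config)# udld aggressive
! Or enable it on an individual interface
Switch(config)# interface gigabitethernet 0/1
Switch(config-if)# udld port aggressive
! Optionally let err-disabled ports recover without manual intervention
Switch(config)# errdisable recovery cause udld
```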
2.1.c – Implement and Troubleshoot VLAN

Understanding VLANs
A VLAN is a switched network that is logically segmented by function, project team,
or application, without regard to the physical locations of the users. VLANs have the
same attributes as physical LANs, but you can group end stations even if they are not
physically located on the same LAN segment. Any switch port can belong to a
VLAN, and unicast, broadcast, and multicast packets are forwarded and flooded only
to end stations in the VLAN. Each VLAN is considered a logical network, and
packets destined for stations that do not belong to the VLAN must be forwarded
through a router or bridge as shown in Figure 2-1. Because a VLAN is considered a
separate logical network, it contains its own bridge Management Information Base
(MIB) information and can support its own implementation of spanning tree.

Figure 2-1 shows an example of VLANs segmented into logically defined networks.

Figure 2-1 VLANs as Logically Defined Networks

VLANs are often associated with IP sub-networks. For example, all the end stations
in a particular IP subnet belong to the same VLAN. Interface VLAN membership on
the switch is assigned manually on an interface-by-interface basis. When you assign
switch interfaces to VLANs by using this method, it is known as interface-based, or
static, VLAN membership.

Traffic between VLANs must be routed or fallback bridged. A Catalyst 3550 switch
can route traffic between VLANs by using switch virtual interfaces (SVIs). An SVI
must be explicitly configured and assigned an IP address to route traffic between
VLANs.

Supported VLANs
The Catalyst 3550 switch supports 1005 VLANs in VTP client, server, and
transparent modes. VLANs are identified with a number from 1 to 4094. VLAN IDs
1002 through 1005 are reserved for Token Ring and FDDI VLANs. VTP only learns
normal-range VLANs, with VLAN IDs 1 to 1005; VLAN IDs greater than 1005 are
extended-range VLANs and are not stored in the VLAN database. The switch must be
in VTP transparent mode when you create VLAN IDs from 1006 to 4094.

The switch supports per-VLAN spanning-tree plus (PVST+) and rapid PVST+ with a
maximum of 128 spanning-tree instances. One spanning-tree instance is allowed per
VLAN. The switch supports both Inter-Switch Link (ISL) and IEEE 802.1Q trunking
methods for sending VLAN traffic over Ethernet ports.

Configuring Normal-Range VLANs


Normal-range VLANs are VLANs with VLAN IDs 1 to 1005. If the switch is in VTP
server or transparent mode, you can add, modify or remove configurations for VLANs
2 to 1001 in the VLAN database. (VLAN IDs 1 and 1002 to 1005 are automatically
created and cannot be removed.)

Configurations for VLAN IDs 1 to 1005 are written to the file vlan.dat (VLAN
database), and you can display them by entering the show vlan privileged EXEC
command. The vlan.dat file is stored in Flash memory.

You use the interface configuration mode to define the port membership mode and to
add and remove ports from VLANs. The results of these commands are written to the
running-configuration file, and you can display the file by entering the show running-
config privileged EXEC command.
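Port membership itself is assigned in interface configuration mode; a minimal sketch, with the interface and VLAN number chosen for illustration:

```
Switch# configure terminal
Switch(config)# interface fastethernet 0/1
Switch(config-if)# switchport mode access
Switch(config-if)# switchport access vlan 10
Switch(config-if)# end
Switch# show running-config interface fastethernet 0/1
```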

You can set these parameters when you create a new normal-range VLAN or modify
an existing VLAN in the VLAN database:

 VLAN ID
 VLAN name
 VLAN type (Ethernet, Fiber Distributed Data Interface [FDDI], FDDI
network entity title [NET], TrBRF, or TrCRF, Token Ring, Token Ring-Net)
 VLAN state (active or suspended)
 Maximum transmission unit (MTU) for the VLAN
 Security Association Identifier (SAID)
 Bridge identification number for TrBRF VLANs
 Ring number for FDDI and TrCRF VLANs
 Parent VLAN number for TrCRF VLANs
 Spanning Tree Protocol (STP) type for TrCRF VLANs
 VLAN number to use when translating from one VLAN type to another
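A minimal config-vlan mode sketch that sets a few of these parameters; the VLAN ID and name are illustrative:

```
Switch# configure terminal
Switch(config)# vlan 100
Switch(config-vlan)# name Engineering
Switch(config-vlan)# state active
Switch(config-vlan)# end
Switch# show vlan id 100
```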

Configuring Extended-Range VLANs


When the switch is in VTP transparent mode (VTP disabled), you can create
extended-range VLANs (in the range 1006 to 4094). Extended-range VLANs enable
service providers to extend their infrastructure to a greater number of customers.
Extended-range VLAN IDs are allowed in any switchport commands that accept
VLAN IDs. You always use config-vlan mode (accessed by entering the vlan vlan-id
global configuration command) to configure extended-range VLANs. The extended
range is not supported in VLAN database configuration mode (accessed by entering
the vlan database privileged EXEC command).
Extended-range VLAN configurations are not stored in the VLAN database, but
because VTP mode is transparent, they are stored in the switch running configuration
file, and you can save the configuration in the startup configuration file by using the
copy running-config startup-config privileged EXEC command.

Extended-Range VLAN Configuration Guidelines

Follow these guidelines when creating extended-range VLANs:

 To add an extended-range VLAN, you must use the vlan vlan-id global
configuration command and access config-vlan mode. You cannot add extended-
range VLANs in VLAN database configuration mode (accessed by entering the
vlan database privileged EXEC command).
 VLAN IDs in the extended range are not saved in the VLAN database and are
not recognized by VTP.
 You cannot include extended-range VLANs in the pruning eligible range.

 The switch must be in VTP transparent mode when you create extended-range
VLANs. If VTP mode is server or client, an error message is generated, and the
extended-range VLAN is rejected.
 You can set the VTP mode to transparent in global configuration mode or in
VLAN database configuration mode. You should save this configuration to the
startup configuration so that the switch will boot up in VTP transparent mode.
Otherwise, you will lose extended-range VLAN configuration if the switch resets.
 VLANs in the extended range are not supported by VQP. They cannot be
configured by VMPS.
 STP is enabled by default on extended-range VLANs, but you can disable it by
using the no spanning-tree vlan vlan-id global configuration command. When the
maximum numbers of spanning-tree instances (128) are on the switch, spanning
tree is disabled on any newly created VLANs. If the number of VLANs on the
switch exceeds the maximum number of spanning tree instances, we recommend
that you configure the IEEE 802.1S Multiple STP (MSTP) on your switch to map
multiple VLANs to a single STP instance.
 Each routed port on a Catalyst 3550 switch creates an internal VLAN for its use.
These internal VLANs use extended-range VLAN numbers, and the internal
VLAN ID cannot be used for an extended-range VLAN. If you try to create an
extended-range VLAN with a VLAN ID that is already allocated as an internal
VLAN, an error message is generated, and the command is rejected.
 Because internal VLAN IDs are in the lower part of the extended range,
we recommend that you create extended-range VLANs beginning from the
highest number (4094) and moving to the lowest (1006) to reduce the
possibility of using an internal VLAN ID.
 Before configuring extended-range VLANs, enter the show vlan internal
usage privileged EXEC command to see which VLANs have been
allocated as internal VLANs.
 If necessary, you can shut down the routed port assigned to the internal
VLAN, which frees up the internal VLAN, and then create the extended-
range VLAN and re-enable the port, which then uses another VLAN as its
internal VLAN.

Creating an Extended-Range VLAN


You create an extended-range VLAN in global configuration mode by entering the
vlan global configuration command with a VLAN ID from 1006 to 4094. This
command accesses the config-vlan mode. If you enter an extended-range VLAN ID
when the switch is not in VTP transparent mode, an error message is generated when
you exit from config-vlan mode, and the extended-range VLAN is not created.

Extended-range VLANs are not saved in the VLAN database; they are saved in the
switch running configuration file. You can save the extended-range VLAN
configuration in the switch startup configuration file by using the copy running-config
startup-config privileged EXEC command.
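The steps above can be sketched as a short configuration session; the VLAN ID is illustrative:

```
Switch# configure terminal
Switch(config)# vtp mode transparent
Switch(config)# vlan 2500
Switch(config-vlan)# end
Switch# copy running-config startup-config
```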

Understanding Voice VLAN


The voice VLAN feature enables access ports to carry IP voice traffic from an IP
phone. The switch can connect to a Cisco 7960 IP Phone and carry IP voice traffic.
Because the sound quality of an IP phone call can deteriorate if the data is unevenly
sent, the switch supports quality of service (QoS) based on IEEE 802.1P class of
service (CoS). QoS uses classification and scheduling to send network traffic from the
switch in a predictable manner. The Cisco 7960 IP Phone is a configurable device,
and you can configure it to forward traffic with an 802.1P priority. You can configure
the switch to trust or override the traffic priority assigned by an IP Phone.

The Cisco 7960 IP Phone contains an integrated three-port 10/100 switch as shown in
Figure 2.2. The ports provide dedicated connections to these devices:

 Port 1 connects to the switch or other voice-over-IP (VoIP) device.


 Port 2 is an internal 10/100 interface that carries the IP phone traffic.
 Port 3 (access port) connects to a PC or other device.

Figure 2.2 shows one way to connect a Cisco 7960 IP Phone.

Figure 2.2 Cisco 7960 IP Phone Connected to a Switch


When the IP Phone connects to the switch, the access port (PC-to-telephone jack) of
the IP phone can connect to a PC. Packets to and from the PC and to or from the IP
phone share the same physical link to the switch and the same switch port.

Configuring Voice VLAN


This section describes how to configure voice VLAN on access ports. It contains this
configuration information:

 Default Voice VLAN Configuration


 Voice VLAN Configuration Guidelines
 Configuring a Port to Connect to a Cisco 7960 IP Phone

Default Voice VLAN Configuration


The voice VLAN feature is disabled by default.

When the voice VLAN feature is enabled, all untagged traffic is sent according to the
default CoS priority of the port.

The default CoS value is 0 for incoming traffic.


The CoS value is not trusted for 802.1P or 802.1Q tagged traffic.
The IP Phone overrides the priority of all incoming traffic (tagged and untagged) and
sets the CoS value to 0.

Voice VLAN Configuration Guidelines


These are the voice VLAN configuration guidelines:

 You should configure voice VLAN on switch access ports.


 Before you enable voice VLAN, we recommend that you enable QoS on the
switch by entering the mls qos global configuration command and configure the
port trust state to trust by entering the mls qos trust cos interface configuration
command.
 The Port Fast feature is automatically enabled when voice VLAN is
configured. When you disable voice VLAN, the Port Fast feature is not
automatically disabled.

 When you enable port security on an interface that is also configured with a
voice VLAN, you must set the maximum allowed secure addresses on the port to
at least two.
 If any type of port security is enabled on the access VLAN, dynamic port
security is automatically enabled on the voice VLAN.
 You cannot configure static secure or sticky secure MAC addresses on a voice
VLAN.
 Voice VLAN ports can also be these port types:

 Dynamic access port.


 Secure port.
 802.1X authenticated port.


 Protected port.
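A minimal sketch that follows these guidelines; the interface and VLAN numbers are illustrative:

```
Switch# configure terminal
! Enable QoS and trust the CoS values set by the IP phone
Switch(config)# mls qos
Switch(config)# interface fastethernet 0/1
Switch(config-if)# mls qos trust cos
! Data traffic from the PC uses the access VLAN; voice traffic uses the voice VLAN
Switch(config-if)# switchport access vlan 10
Switch(config-if)# switchport voice vlan 150
Switch(config-if)# end
```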

2.1.d – Implement and troubleshoot trunking

When using VLANs in networks that have multiple interconnected switches, you need
to use VLAN trunking between the switches. With VLAN trunking, the switches tag
each frame sent between switches so that the receiving switch knows to what VLAN
the frame belongs. End user devices connect to switch ports that provide simple
connectivity to a single VLAN each. The attached devices are unaware of any VLAN
structure.

A trunk link can transport more than one VLAN through a single switch port. A trunk
link is not assigned to a specific VLAN. Instead, one or more active VLANs can be
transported between switches using a single physical trunk link. Connecting two
switches with separate physical links for each VLAN is also possible. In addition,
trunking can support multiple VLANs that have members on more than one switch.

Cisco switches support two trunking protocols, namely, Inter-Switch Link (ISL) and
IEEE 802.1Q.

Inter-Switch Link (ISL)


Cisco created ISL before the IEEE standardized a trunking protocol. Thus, ISL is a
Cisco proprietary solution and can be used only between two Cisco switches. ISL
fully encapsulates each original Ethernet frame in an ISL header and trailer. The
original Ethernet frame inside the ISL header and trailer remains unchanged.

The ISL header includes a VLAN field that provides a place to encode the VLAN
number. By tagging a frame with the correct VLAN number inside the header, the
sending switch can ensure that the receiving switch knows to which VLAN the
encapsulated frame belongs. Also, the source and destination addresses in the ISL
header use MAC addresses of the sending and receiving switch, as opposed to the
devices that actually sent the original frame.

IEEE 802.1Q
After Cisco created ISL, the IEEE completed work on the 802.1Q standard. 802.1Q
uses a different style of header than ISL to tag frames with a VLAN number. It
does not encapsulate the original frame, but inserts a 4-byte tag into the original
Ethernet header. This tag includes a field that identifies the VLAN number.
Because the original header has been changed, 802.1Q tagging forces a
recalculation of the FCS field in the Ethernet trailer, because the FCS
is based on the contents of the entire frame. 802.1Q also introduces the concept of a
native VLAN on a trunk. Frames belonging to this VLAN are not encapsulated with
tagging information. In the event that a host is connected to an 802.1Q trunk link, that
host will be able to receive and understand only the native VLAN frames.
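The 4-byte tag layout can be illustrated with a short sketch that packs the TPID (0x8100) and the tag control information; field widths follow the IEEE 802.1Q format:

```python
import struct

# Sketch: build the 4-byte 802.1Q tag that is inserted after the source MAC.
# TPID 0x8100 identifies the frame as tagged; the TCI packs the priority
# (PCP, 3 bits), the drop-eligible indicator (DEI, 1 bit), and the 12-bit VLAN ID.

def dot1q_tag(vlan_id: int, pcp: int = 0, dei: int = 0) -> bytes:
    if not 0 <= vlan_id <= 4095:
        raise ValueError("VLAN ID must fit in 12 bits")
    tci = (pcp << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", 0x8100, tci)

tag = dot1q_tag(vlan_id=100, pcp=5)
# TPID 0x8100, then PCP=5 / DEI=0 / VID=100 -> TCI 0xA064
```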
VLAN Trunking Protocol (VTP)


Administration of network environments that consist of many interconnected
switches is complicated. Cisco has developed a proprietary solution to manage VLANs
across such networks using the VLAN Trunking Protocol (VTP) to exchange VLAN
configuration information between switches. VTP uses Layer 2 trunk frames to
exchange VLAN information so that the VLAN configuration stays consistent
throughout a network. VTP also manages the additions, deletions, and name changes
of VLANs across multiple switches from a central point, minimizing
misconfigurations and configuration inconsistencies that can cause problems, such as
duplicate VLAN names or incorrect VLAN type settings.
VTP is organized into management domains or areas with common VLAN
requirements. A switch can belong to only one VTP domain. Switches in different
VTP domains do not share VTP information. Switches in a VTP domain advertise
several attributes to their domain neighbors. Each advertisement contains information
about the VTP management domain, VTP configuration revision number, known
VLANs, and specific VLAN parameters.

The VTP process begins with VLAN creation on a switch called a VTP server. VTP
floods advertisements throughout the VTP domain every 5 minutes, or whenever there
is a change in VLAN configuration. The VTP advertisement includes a configuration
revision number, VLAN names and numbers, and information about which switches
have ports assigned to each VLAN. By configuring the details on one or more VTP
server and propagating the information through advertisements, all switches know the
names and numbers of all VLANs.

VTP Modes
To participate in a VTP management domain, each switch must be configured to
operate in one of three modes. These modes are: server mode, client mode, and
transparent mode.
Server Mode
Server mode is the default mode. In this mode, VTP servers have full control over
VLAN creation and modification for their domains. All VTP information is advertised
to other switches in the domain, while all received VTP information is synchronized
with the other switches. Because it is the default mode, server mode can be used on
any switch in a management domain, even if other server and client switches are in
use. This mode provides some redundancy in the event of a server failure in the
domain.
Client Mode
Client mode is a passive listening mode. Switches listen to VTP advertisements from
other switches and modify their VLAN configurations accordingly. Thus, the
administrator is not allowed to create, change, or delete any VLANs. If other switches
are in the management domain, a new switch should be configured for client mode
operation. In this way, the switch will learn any existing VTP information from a
server. If this switch will be used as a redundant server, it should start out in client
mode to learn all VTP information from reliable sources. If the switch was initially
configured for server mode instead, it might propagate incorrect information to the
other domain switches. Once the switch has learned the current VTP information, it
can be reconfigured for server mode.
Transparent Mode
Transparent mode does not allow the switch to participate in VTP negotiations. Thus,
a switch does not advertise its own VLAN configuration, and a switch does not
synchronize its VLAN database with received advertisements. VLANs can still be
created, deleted, and renamed on the transparent switch. However, they will not be
advertised to other neighboring switches. VTP advertisements received by a
transparent switch will be forwarded on to other switches on trunk links.

VTP Pruning
A switch must forward broadcast frames out all available ports in the broadcast
domain because broadcasts are destined everywhere there is a listener. Multicast
frames, unless forwarded by more intelligent means, follow the same pattern. In
addition, frames destined for an address that the switch has not yet learned or has
forgotten must be forwarded out all ports in an attempt to find the destination. When
forwarding frames out all ports in a broadcast domain or VLAN, trunk ports are
included. By default, a trunk link transports traffic from all VLANs, unless specific
VLANs are removed from the trunk with the clear trunk command. In a network
with several switches, trunk links are enabled between switches and VTP is used to
manage the propagation of VLAN information. This causes the trunk links between
switches to carry traffic from all VLANs.

VTP pruning makes more efficient use of trunk bandwidth by reducing unnecessary
flooded traffic. Broadcast and unknown unicast frames on a VLAN are forwarded
over a trunk link only if the switch on the receiving end of the trunk has ports in that
VLAN. In other words, VTP pruning allows switches to prevent broadcasts and
unknown unicasts from flowing to switches that do not have any ports in that VLAN.
VTP pruning occurs as an extension to VTP version 1. When a Catalyst switch has a
port associated with a VLAN, the switch sends an advertisement to its neighbor
switches that it has active ports on that VLAN. The neighbors keep this information,
enabling them to decide if flooded traffic from a VLAN should use a trunk port or
not.

By default, VTP pruning is disabled on IOS-based and CLI-based switches. On IOS-
based switches, the vtp pruning command in VLAN database configuration mode
can be used to enable pruning, while on CLI-based switches the set vtp pruning
enable command can be used.

VTP Configuration
Before VLANs can be configured, VTP must be configured. By default, every switch
will operate in VTP server mode for the management domain NULL, with no
password or secure mode.
Configuring a VTP Management Domain


Before a switch is added into a network, the VTP management domain should be
identified. If this switch is the first one on the network, the management domain will
need to be created. Otherwise, the switch may have to join an existing management
domain with other existing switches.

The following command can be used to assign a switch to a management domain on


an IOS-based switch:

Switch# vlan database


Switch(vlan)# vtp domain domain_name

To assign a switch to a management domain on a CLI-based switch, use the following


command:

Switch(enable) set vtp [ domain domain_name ]

Configuring the VTP Mode


Once you have assigned the switch to a VTP management domain, you need to select
the VTP mode for the new switch. There are three VTP modes that can be selected:
server mode, client mode and transparent mode.

On an IOS-based switch, the following commands can be used to configure the VTP
mode:

Switch# vlan database


Switch(vlan)# vtp domain domain_name
Switch(vlan)# vtp { server | client | transparent }
Switch(vlan)# vtp password password

On a CLI-based switch, the following command can be used to configure the VTP
mode:

Switch(enable) set vtp [ domain domain_name ]


[ mode{ server | client | transparent }] [ password password ]

If the domain is operating in secure mode, a password can be included in the


command line. The password can have 8 to 64 characters.

Configuring the VTP Version


Two versions of VTP, VTP version 1 and VTP version 2, are available for use in a
management domain. Although VTP version 1 is the default protocol on a Catalyst
switch, Catalyst switches are capable of running both versions; however, the two
versions are not interoperable within a management domain. Thus, the same VTP
version must be configured on each switch in a domain. However, a switch running
VTP version 2 may coexist with other version 1 switches, if its VTP version 2 is not
enabled. This situation becomes important if you want to use version 2 in a domain.
Then, only one server mode switch needs to have VTP version 2 enabled. The new
version number is propagated to all other version 2-capable switches in the domain,
causing them to enable version 2 for use. By default, VTP version 1 is enabled.
Version 2 can be enabled or disabled using the v2 option. The two versions of VTP
differ in the features they support. VTP version 2 offers the following additional
features over version 1:
 In transparent mode VTP version 1 matches the VTP version and domain name
before forwarding the information to other switches using VTP. On the other
hand, VTP version 2 in transparent mode forwards the VTP messages without
checking the version number.
 VTP version 2 performs consistency checks on the VTP and VLAN parameters
entered from the CLI or by Simple Network Management Protocol (SNMP). This
checking helps prevent errors in such things as VLAN names and numbers from
being propagated to other switches in the domain. However, no consistency
checks are performed on VTP messages that are received on trunk links or on
configuration and database data that is read from NVRAM.
 VTP version 2 has Unrecognized Type-Length-Value (TLV) support, which
means that VTP version 2 switches will propagate received configuration change
messages out other trunk links, even if the switch supervisor is not able to parse or
understand the message.

On an IOS-based switch, the VTP version number is configured using the following
commands:

Switch# vlan database


Switch(vlan)# vtp v2-mode

On a CLI-based switch, the VTP version number is configured using the following
command:

Switch(enable) set vtp v2 enable

2.1.e Implement and troubleshoot EtherChannel

Cisco provides a method of increasing link bandwidth between two systems by


aggregating or bundling Ethernet links, using EtherChannel technology. Two to eight
links of either Fast Ethernet (FE) or Gigabit Ethernet (GE) can be bundled as one
logical link of Fast EtherChannel (FEC) or Gigabit EtherChannel (GEC), respectively.
This bundle provides a full-duplex bandwidth of up to 1600 Mbps on 8 links of Fast
Ethernet or 16 Gbps on 8 links of Gigabit Ethernet. This provides load sharing and
redundancy capabilities. If a link fails in the FEC or GEC bundle, traffic sent through
that link will move to an adjacent link. When links are restored, the load will be
redistributed among the links.
Bundling Ports with EtherChannel


Fast EtherChannel is available on the Catalyst 1900, 2820, 2900, 2900XL, 3500XL,
4000, 5000, and 6000 families. Gigabit EtherChannel is supported only on the
Catalyst 2900, 2900XL, 4000, 5000, and 6000 families. Most of the switch families
support a maximum of four Fast Ethernet or Gigabit Ethernet links bundled in a single
EtherChannel link. However, the Catalyst 6000 family supports up to eight bundled
links per EtherChannel and up to 128 individual EtherChannel links.

All bundled ports must belong to the same VLAN. If used as a trunk, bundled ports
must all be in trunking mode and pass the same VLANs. Also, each of the ports must
be of the same speed and the same duplex mode before they are bundled.
Distributing Traffic in EtherChannel
Traffic in an EtherChannel is statistically load-balanced across the individual links
bundled together. However, the load is not necessarily balanced equally across all of
the links. Instead, frames are forwarded on a specific link as a function of the
addresses present in the frame. Some combination of source and destination addresses
is used to form a binary pattern used to select a link number in the bundle. Switches
perform an exclusive-OR (XOR) operation on one or more low-order bits of the
addresses to determine what link to use.

In a two-link EtherChannel, the XOR operation is performed independently on each


bit position in the address value. If the two address values have the same bit value, the
XOR result is 0. If the two address bits differ, the XOR result is 1. In this way, frames
can be statistically distributed among the links with the assumption that MAC or IP
addresses are statistically distributed throughout the network. In a four-link
EtherChannel, the XOR is performed on the lower two bits of the address values
resulting in a two-bit XOR value or a link number from 0 to 3.

Communication between two devices will always be sent through the same
EtherChannel link because the two endpoint addresses stay the same. However, when
a device communicates with several other devices, chances are that the destination
addresses are equally distributed with zeros and ones in the last bit. This causes the
frames to be distributed across the EtherChannel links.
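The XOR selection described above can be sketched in a few lines; the address values in the usage comments are arbitrary examples:

```python
# Sketch of XOR-based link selection: the low-order bits of the source and
# destination addresses are XORed to pick a link number within the bundle.

def select_link(src_addr: int, dst_addr: int, num_links: int) -> int:
    """num_links must be a power of two (2, 4, or 8 in practice)."""
    mask = num_links - 1                  # keep only the low-order bits
    return (src_addr ^ dst_addr) & mask

# Two-link bundle: only the last address bit matters.
select_link(0x0A, 0x0B, 2)   # last bits differ -> link 1
# Four-link bundle: the low two bits select a link from 0 to 3.
select_link(0x0A, 0x0B, 4)   # 0b10 ^ 0b11 = 0b01 -> link 1
```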

Switches with an Ethernet Bundling Controller (EBC) are limited to distributing


frames based on source and destination MAC addresses only. For each frame, the
source MAC address is XOR’d with the destination MAC address. Because this is the
only choice, no switch configuration is necessary.

Port Aggregation Protocol (PAgP)


Cisco developed the Port Aggregation Protocol (PAgP) to provide automatic
EtherChannel configuration and negotiation between switches. PAgP packets are
exchanged between switches over EtherChannel-capable ports. The identification of
neighbors and port group capabilities are learned and are compared with local switch
capabilities. Ports that have the same neighbor device ID and port group capability
will be bundled together as a bidirectional, point-to-point EtherChannel link. PAgP
will form an EtherChannel only on ports that are configured for either identical static
VLANs or trunking. PAgP also dynamically modifies parameters of the EtherChannel
if one of the bundled ports is modified. When ports are bundled into an EtherChannel,
all broadcasts and multicasts are sent over one port in the bundle only. Broadcasts will
not be sent over the remaining ports and will not be allowed to return over any other
port in the bundle. Switch ports can be configured for one of four channeling modes:
 Auto is the default mode. In this mode PAgP packets are sent to negotiate an
EtherChannel only if the far end initiates EtherChannel negotiations. Auto mode is
thus a passive mode that requires a neighbor in desirable mode.
 On, in this mode the ports will always be bundled as an EtherChannel. No
negotiation takes place because PAgP packets are not sent or processed.
 Off, in this mode the ports will never be bundled as an EtherChannel. They will
remain as individual access or trunk links.
 Desirable, in this mode PAgP packets are sent to actively negotiate an
EtherChannel. This mode starts the negotiation process, and will bring up a
successful EtherChannel with another switch in either desirable or auto mode.

The following command is used to configure switch ports for PAgP:

SwitchTK1(config)# interface type mod/num


SwitchTK1(config-if)# channel-protocol pagp
SwitchTK1(config-if)# channel-group number mode {on | auto | desirable}

Link Aggregation Control Protocol (LACP)


LACP, defined in IEEE 802.3ad, can be used instead of PAgP. LACP operates much
like PAgP, with active and passive modes analogous to PAgP's desirable and auto
modes. The difference is that LACP allocates roles to the EtherChannel's
endpoints. A switch that has the lowest system
priority, a 2-byte priority value succeeded by a 6-byte switch MAC address, is able to
decide on which ports actively partake in the EtherChannel at a certain time. Ports are
chosen and activated in relation to their port priority value, a 2-byte priority
succeeded by a 2-byte port number. A low value means a higher priority. A maximum
collection of up to 16 possible links can be defined for every EtherChannel. With
LACP, a switch chooses up to eight of these links with the lowest port priorities as
active EtherChannel links. The rest of the links are put in a standby state. They are
only enabled when an active link is down.

The following command is used to configure a LACP EtherChannel:

SwitchTK1(config)# lacp system-priority priority


SwitchTK1(config)# interface type mod/num
SwitchTK1(config-if)# channel-protocol lacp
SwitchTK1(config-if)# channel-group number mode {on | passive | active}
SwitchTK1(config-if)# lacp port-priority priority
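As a worked sketch (the interface range, group number, and hostname are illustrative, not taken from the text above), two ports could be bundled with LACP and the result verified as follows:

SwitchTK1(config)# interface range gigabitethernet 0/1 - 2
SwitchTK1(config-if-range)# channel-protocol lacp
SwitchTK1(config-if-range)# channel-group 1 mode active
SwitchTK1(config-if-range)# end
SwitchTK1# show etherchannel 1 summary
SwitchTK1# show lacp neighbor

At least one end must run active mode; two passive ends never negotiate a bundle.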

Configuring EtherChannel
On a router, EtherChannel is configured by assigning interfaces to a port channel with
the channel-group number mode on interface command. The virtual interface is
created with the interface port-channel number command.

To configure an EtherChannel on a CLI-based switch, use the Switch (enable) set
port channel module_number/port_range mode {on | off | desirable | auto}
command. On an IOS-based switch, use the Switch (config-if)# port group
group_number [distribution {source | destination}] command.

Information about the current EtherChannel configuration can be obtained by using
the show port channel [module_number/port_number] [info | statistics]
command on a CLI-based switch and the show port group [group_number]
command on an IOS-based switch.

EtherChannel Misconfiguration Guard


You can use EtherChannel guard to detect an EtherChannel misconfiguration between
the switch and a connected device. A misconfiguration can occur if the switch
interfaces are configured in an EtherChannel, but the interfaces on the other device
are not. A misconfiguration can also occur if the channel parameters are not the same
at both ends of the EtherChannel.

If the switch detects a misconfiguration on the other device, EtherChannel guard


places the switch interfaces in the error-disabled state, and displays an error message.

You can enable this feature by using the spanning-tree etherchannel guard misconfig
global configuration command.
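As an illustrative sketch (the errdisable recovery cause keyword and interval are assumptions that vary by platform; check the CLI help on your switch), the guard can be enabled and automatic recovery configured as follows:

SwitchTK1(config)# spanning-tree etherchannel guard misconfig
SwitchTK1(config)# errdisable recovery cause channel-misconfig
SwitchTK1(config)# errdisable recovery interval 300
SwitchTK1# show interfaces status err-disabled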

Understanding Root Guard


The Layer 2 network of a service provider (SP) can include many connections to
switches that are not owned by the SP. In such a topology, the spanning tree can
reconfigure itself and select a customer switch as the root switch, as shown in Figure
2.3. You can avoid this situation by enabling root guard on SP switch interfaces that
connect to switches in your customer's network. If spanning-tree calculations cause an
interface in the customer network to be selected as the root port, root guard then
places the interface in the root-inconsistent (blocked) state to prevent the customer's
switch from becoming the root switch or being in the path to the root.

If a switch outside the SP network becomes the root switch, the interface is blocked
(root-inconsistent state), and spanning tree selects a new root switch. The customer's
switch does not become the root switch and is not in the path to the root.

If the switch is operating in multiple spanning-tree (MST) mode, root guard forces the
interface to be a designated port. If a boundary port is blocked in an internal spanning-
tree (IST) instance because of root guard, the interface also is blocked in all MST
instances. A boundary port is an interface that connects to a LAN, the designated
switch of which is either an IEEE 802.1D switch or a switch with a different MST
region configuration.

Root guard enabled on an interface applies to all the VLANs to which the interface
belongs. VLANs can be grouped and mapped to an MST instance.

You can enable this feature by using the spanning-tree guard root interface
configuration command.
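A minimal sketch (the interface number is arbitrary) of applying root guard on a customer-facing port, and verifying any resulting inconsistency, might look like this:

SwitchTK1(config)# interface gigabitethernet 0/1
SwitchTK1(config-if)# spanning-tree guard root
SwitchTK1(config-if)# end
SwitchTK1# show spanning-tree inconsistentports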

Figure 2.3 – Root guard in a service-provider network

2.1.f – Implement and troubleshoot spanning-tree

Understanding Port Fast


Port Fast immediately brings an interface configured as an access or trunk port to the
forwarding state from a blocking state, bypassing the listening and learning states.
You can use Port Fast on interfaces connected to a single workstation or server, as
shown in Figure 2.4, to allow those devices to immediately connect to the network,
rather than waiting for the spanning tree to converge.

Interfaces connected to a single workstation or server should not receive bridge


protocol data units (BPDUs). An interface with Port Fast enabled goes through the
normal cycle of spanning-tree status changes when the switch is restarted.

You can enable this feature by using the spanning-tree portfast interface configuration
or the spanning-tree portfast default global configuration command.
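For illustration (the interface number is arbitrary), Port Fast can be enabled per interface or globally for all access ports:

SwitchTK1(config)# interface gigabitethernet 0/2
SwitchTK1(config-if)# spanning-tree portfast
SwitchTK1(config-if)# exit
SwitchTK1(config)# spanning-tree portfast default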

Figure 2.4

Understanding BPDU Guard


The BPDU guard feature can be globally enabled on the switch or can be enabled per
port, but the feature operates with some differences.
At the global level, you enable BPDU guard on Port Fast-enabled ports by using
the spanning-tree portfast bpduguard default global configuration command.
Spanning tree shuts down ports that are in a Port Fast-operational state if any BPDU
is received on them. In a valid configuration, Port Fast-enabled ports do not receive
BPDUs. Receiving a BPDU on a Port Fast-enabled port means an invalid
configuration, such as the connection of an unauthorized device, and the BPDU guard
feature puts the port in the error-disabled state. When this happens, the switch shuts
down the entire port on which the violation occurred.
To prevent the port from shutting down, you can use the errdisable detect cause
bpduguard shutdown vlan global configuration command to shut down just the
offending VLAN on the port where the violation occurred.
At the interface level, you enable BPDU guard on any port by using the spanning-
tree bpduguard enable interface configuration command without also enabling the
Port Fast feature. When the port receives a BPDU, it is put in the error-disabled state.
The BPDU guard feature provides a secure response to invalid configurations because
you must manually put the interface back in service. Use the BPDU guard feature in a
service-provider network to prevent an access port from participating in the spanning
tree.
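The two enablement styles described above can be sketched as follows (the interface number and recovery interval are illustrative):

SwitchTK1(config)# spanning-tree portfast bpduguard default
SwitchTK1(config)# interface gigabitethernet 0/2
SwitchTK1(config-if)# spanning-tree bpduguard enable
SwitchTK1(config-if)# exit
SwitchTK1(config)# errdisable recovery cause bpduguard
SwitchTK1(config)# errdisable recovery interval 300

Without errdisable recovery, an err-disabled port must be re-enabled manually with shutdown followed by no shutdown.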

Understanding BPDU Filtering


The BPDU filtering feature can be globally enabled on the switch or can be enabled
per interface, but the feature operates with some differences.
At the global level, you can enable BPDU filtering on Port Fast-enabled interfaces by
using the spanning-tree portfast bpdufilter default global configuration command.
This command prevents interfaces that are in a Port Fast-operational state from
sending or receiving BPDUs. The interfaces still send a few BPDUs at link-up before
the switch begins to filter outbound BPDUs. You should globally enable BPDU
filtering on a switch so that hosts connected to these interfaces do not receive BPDUs.

If a BPDU is received on a Port Fast-enabled interface, the interface loses its Port
Fast-operational status, and BPDU filtering is disabled.
At the interface level, you can enable BPDU filtering on any interface by using
the spanning-tree bpdufilter enable interface configuration command without also
enabling the Port Fast feature. This command prevents the interface from sending or
receiving BPDUs.
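A short sketch of both styles (the interface number is arbitrary):

SwitchTK1(config)# spanning-tree portfast bpdufilter default
SwitchTK1(config)# interface gigabitethernet 0/3
SwitchTK1(config-if)# spanning-tree bpdufilter enable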

Understanding UplinkFast

Switches in hierarchical networks can be grouped into backbone switches, distribution


switches, and access switches. Figure 2.5 shows a complex network where
distribution switches and access switches each have at least one redundant link that
spanning tree blocks to prevent loops.

Figure 2.5

If a switch loses connectivity, it begins using the alternate paths as soon as the
spanning tree selects a new root port. By enabling UplinkFast with the spanning-tree
uplinkfast global configuration command, you can accelerate the choice of a new root
port when a link or switch fails or when the spanning tree reconfigures itself. The root
port transitions to the forwarding state immediately without going through the
listening and learning states, as it would with the normal spanning-tree procedures.

When the spanning tree reconfigures the new root port, other interfaces flood the
network with multicast packets, one for each address that was learned on the
interface. You can limit these bursts of multicast traffic by reducing the max-update-
rate parameter (the default for this parameter is 150 packets per second). However, if

you enter zero, station-learning frames are not generated, so the spanning-tree
topology converges more slowly after a loss of connectivity.
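As an illustration (100 pps is an arbitrary example of reducing the 150 pps default), UplinkFast and its update rate could be configured and verified as follows:

SwitchTK1(config)# spanning-tree uplinkfast
SwitchTK1(config)# spanning-tree uplinkfast max-update-rate 100
SwitchTK1# show spanning-tree uplinkfast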

Understanding Loop Guard


You can use loop guard to prevent alternate or root ports from becoming designated
ports because of a failure that leads to a unidirectional link. This feature is most
effective when it is enabled on the entire switched network. Loop guard prevents
alternate and root ports from becoming designated ports, and spanning tree does not
send BPDUs on root or alternate ports.

You can enable this feature by using the spanning-tree loopguard default global
configuration command.

When the switch is operating in PVST+ or rapid-PVST+ mode, loop guard prevents
alternate and root ports from becoming designated ports, and spanning tree does not
send BPDUs on root or alternate ports.

When the switch is operating in MST mode, BPDUs are not sent on nonboundary
ports only if the interface is blocked by loop guard in all MST instances. On a
boundary port, loop guard blocks the interface in all MST instances.
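For illustration, loop guard can be enabled globally or per interface (the interface number is arbitrary):

SwitchTK1(config)# spanning-tree loopguard default
SwitchTK1(config)# interface gigabitethernet 0/4
SwitchTK1(config-if)# spanning-tree guard loop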

Spanning-Tree Configuration guidelines


You can configure PortFast, BPDU guard, BPDU filtering, EtherChannel guard, root
guard, or loop guard if your switch is running PVST+, rapid PVST+, or MSTP.

On a Catalyst 3750-X switch, you can configure the UplinkFast, the BackboneFast, or
the cross-stack UplinkFast feature for rapid PVST+ or for the MSTP, but the feature
remains disabled (inactive) until you change the spanning-tree mode to PVST+.


2.1.g – Implement and troubleshoot other LAN Switching


technologies

SPAN Operation
SPAN copies traffic from one or more ports, one or more EtherChannels, or one or
more VLANs, and sends the copied traffic to one or more destinations for analysis by
a network analyzer such as a SwitchProbe device or other Remote Monitoring
(RMON) probe. The Mini Protocol Analyzer can also send traffic to the processor for
packet capture.

SPAN does not affect the switching of traffic on sources. You must dedicate the
destination for SPAN use. The SPAN-generated copies of traffic compete with user
traffic for switch resources.

Local SPAN Overview



A local SPAN session is an association of source ports and source VLANs with one
or more destinations. You configure a local SPAN session on a single switch. Local
SPAN does not have separate source and destination sessions.

Local SPAN sessions do not copy locally sourced RSPAN VLAN traffic from source
trunk ports that carry RSPAN VLANs. Local SPAN sessions do not copy locally
sourced RSPAN GRE-encapsulated traffic from source ports.

Each local SPAN session can have either ports or VLANs as sources, but not both.

Local SPAN copies traffic from one or more source ports in any VLAN or from one
or more VLANs to a destination for analysis. For example, as shown in Figure 2.6, all
traffic on Ethernet port 5 (the source port) is copied to Ethernet port 10. A network
analyzer on Ethernet port 10 receives all traffic from Ethernet port 5 without being
physically attached to Ethernet port 5.

Figure 2.6 – example SPAN operation
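The scenario in Figure 2.6 could be configured roughly as follows (the session number is arbitrary, and exact syntax varies by platform):

SwitchTK1(config)# monitor session 1 source interface gigabitethernet 0/5 both
SwitchTK1(config)# monitor session 1 destination interface gigabitethernet 0/10
SwitchTK1# show monitor session 1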

RSPAN Overview
RSPAN supports source ports, source VLANs, and destinations on different switches,
which provide remote monitoring of multiple switches across your network. RSPAN
uses a Layer 2 VLAN to carry SPAN traffic between switches.

RSPAN consists of an RSPAN source session, an RSPAN VLAN, and an RSPAN


destination session. You separately configure RSPAN source sessions and destination
sessions on different switches. To configure an RSPAN source session on one switch,
you associate a set of source ports or VLANs with an RSPAN VLAN. To configure
an RSPAN destination session on another switch, you associate the destinations with
the RSPAN VLAN.

The traffic for each RSPAN session is carried as Layer 2 non-routable traffic over a
user-specified RSPAN VLAN that is dedicated for that RSPAN session in all
participating switches. All participating switches must be trunk-connected at Layer 2.

RSPAN source sessions do not copy locally sourced RSPAN VLAN traffic from
source trunk ports that carry RSPAN VLANs. RSPAN source sessions do not copy
locally sourced RSPAN GRE-encapsulated traffic from source ports.

Each RSPAN source session can have either ports or VLANs as sources, but not both.

The RSPAN source session copies traffic from the source ports or source VLANs and
switches the traffic over the RSPAN VLAN to the RSPAN destination session. The
RSPAN destination session switches the traffic to the destinations.
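A hedged two-switch sketch (VLAN 900, session 2, and the interface numbers are illustrative) of the source and destination sessions described above:

! On all participating switches, define the RSPAN VLAN:
SwitchTK1(config)# vlan 900
SwitchTK1(config-vlan)# remote-span
! Source switch:
SwitchTK1(config)# monitor session 2 source interface gigabitethernet 0/5
SwitchTK1(config)# monitor session 2 destination remote vlan 900
! Destination switch:
SwitchTK2(config)# monitor session 2 source remote vlan 900
SwitchTK2(config)# monitor session 2 destination interface gigabitethernet 0/10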

ERSPAN Overview
ERSPAN supports source ports, source VLANs, and destinations on different
switches, which provides remote monitoring of multiple switches across your
network. ERSPAN uses a GRE tunnel to carry traffic between switches.

ERSPAN consists of an ERSPAN source session, routable ERSPAN GRE-


encapsulated traffic, and an ERSPAN destination session. You separately configure
ERSPAN source sessions and destination sessions on different switches.

To configure an ERSPAN source session on one switch, you associate a set of source
ports or VLANs with a destination IP address, ERSPAN ID number, and optionally
with a VRF name. To configure an ERSPAN destination session on another switch,
you associate the destinations with the source IP address, ERSPAN ID number, and
optionally with a VRF name.

ERSPAN source sessions do not copy locally sourced RSPAN VLAN traffic from
source trunk ports that carry RSPAN VLANs. ERSPAN source sessions do not copy
locally sourced ERSPAN GRE-encapsulated traffic from source ports.

Each ERSPAN source session can have either ports or VLANs as sources, but not
both.

The ERSPAN source session copies traffic from the source ports or source VLANs
and forwards the traffic using routable GRE-encapsulated packets to the ERSPAN
destination session. The ERSPAN destination session switches the traffic to the
destinations.
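As a sketch under the Catalyst 6500-style ERSPAN CLI (the session and ERSPAN ID numbers, IP addresses, and submode prompts are illustrative and vary by platform):

! Source switch:
SwitchTK1(config)# monitor session 3 type erspan-source
SwitchTK1(config-mon-erspan-src)# source interface gigabitethernet 1/1
SwitchTK1(config-mon-erspan-src)# destination
SwitchTK1(config-mon-erspan-src-dst)# erspan-id 100
SwitchTK1(config-mon-erspan-src-dst)# ip address 10.10.10.2
SwitchTK1(config-mon-erspan-src-dst)# origin ip address 10.10.10.1
SwitchTK1(config-mon-erspan-src-dst)# exit
SwitchTK1(config-mon-erspan-src)# no shutdown
! Destination switch:
SwitchTK2(config)# monitor session 3 type erspan-destination
SwitchTK2(config-mon-erspan-dst)# destination interface gigabitethernet 1/10
SwitchTK2(config-mon-erspan-dst)# source
SwitchTK2(config-mon-erspan-dst-src)# erspan-id 100
SwitchTK2(config-mon-erspan-dst-src)# ip address 10.10.10.2
SwitchTK2(config-mon-erspan-dst-src)# exit
SwitchTK2(config-mon-erspan-dst)# no shutdown

The erspan-id and IP address must match on both sessions so the destination can identify the GRE-encapsulated traffic.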

2.1.h - Describe chassis virtualization and aggregation technologies

Cisco CRS-1 Carrier Routing System



Cisco CRS-1 is the first platform to run IOS XR. It is designed for high system
availability, scale, and uninterrupted system operation. CRS-1 is designed to operate
either as a single-chassis or multi-chassis system. It has two major elements: the line
card chassis (LCC) and the fabric card chassis (FCC). Details about each system follow:

 CRS-1 16-Slot Single-Chassis System is a 16-slot LCC with a total switching
capacity of 1.2 Tbps, featuring a mid-plane design. It has 16 line card and 2
route processor slots.
 CRS-1 8-Slot Single-Shelf System is an eight-slot line card chassis with a total
switching capacity of 640 Gbps, featuring a mid-plane design. It has eight line
card and two route processor slots.
 CRS-1 4-Slot Single-Shelf System is a four-slot line card shelf with a total
switching capacity of 320 Gbps. It has four line card and two route processor
slots.
 CRS-1 Multi-Shelf System consists of 2 to 72 16-slot LCCs and 1 to 8 FCCs,
with a total switching capacity of up to 92 Tbps. The LCCs are connected only
to the FCCs, where stage 2 of the three-stage fabric switching is performed. The
FCC is a 24-slot system.

Cisco XR 12000 Series


Cisco XR 12000 series is capable of a 2.5 Gbps, 10 Gbps, or 40 Gbps per slot system
with four different form factors:

 Cisco 12016, Cisco 12416, and Cisco 12816 are full-rack, 16-slot, and 2.5-,
10- and 40-Gbps per slot systems, respectively.
 Cisco 12010, Cisco 12410, and Cisco 12810 are half-rack, 10-slot, and 2.5-,
10- and 40-Gbps per slot systems, respectively.
 Cisco 12006 and Cisco 12406 are 1/4-rack, 6-slot, and 2.5- and 10-Gbps per
slot systems, respectively.
 Cisco 12404 is a four-slot, 10-Gbps per slot system.

Cisco ASR 9000 Series


ASR 9000 Series Aggregation Services Router is targeted for carrier Ethernet services
and delivers a high degree of performance and scalability. It can scale up to 6.4 Tbps
per system. It comes in two form factors:

 Cisco ASR 9010 is a 10-slot, 21-rack unit (RU) system.


 Cisco ASR 9006 is a 6-slot, 10-rack unit (RU) system.

Install Manager provides a user with the CLI necessary to carry out install
operations. The user initiates an install request from the command line, and Install
Manager takes care of install-related functions, which might include one of the
following tasks:

 Fresh installation of an IOS XR image


 Upgrade or downgrade of an IOS XR release
 Addition and activation of PIEs and SMUs
 Removal of inactive PIEs from boot device

 Provide show command outputs related to the state of the installed software or
related to the progress of an install operation

2.1.i – Describe spanning-tree concepts

MSTP, which uses RSTP for rapid convergence, enables VLANs to be grouped into a
spanning-tree instance, with each instance having a spanning-tree topology
independent of other spanning-tree instances. This architecture provides multiple
forwarding paths for data traffic, enables load balancing, and reduces the number of
spanning-tree instances required to support a large number of VLANs.

Multiple Spanning-Tree Regions


For switches to participate in multiple spanning-tree (MST) instances, you must
consistently configure the switches with the same MST configuration information. A
collection of interconnected switches that have the same MST configuration
comprises an MST region.

The MST configuration determines to which MST region each switch belongs. The
configuration includes the name of the region, the revision number, and the MST
instance-to-VLAN assignment map. You configure the switch for a region by using
the spanning-tree mst configuration global configuration command, after which the
switch enters the MST configuration mode. From this mode, you can map VLANs to
an MST instance by using the instance MST configuration command, specify the
region name by using the name MST configuration command, and set the revision
number by using the revision MST configuration command.

A region can have one member or multiple members with the same MST
configuration; each member must be capable of processing RSTP BPDUs. There is no
limit to the number of MST regions in a network, but each region can support up to 16
spanning-tree instances. You can assign a VLAN to only one spanning-tree instance
at a time.
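The region configuration steps above can be sketched as follows (the region name, revision number, and VLAN ranges are arbitrary):

SwitchTK1(config)# spanning-tree mode mst
SwitchTK1(config)# spanning-tree mst configuration
SwitchTK1(config-mst)# name REGION1
SwitchTK1(config-mst)# revision 1
SwitchTK1(config-mst)# instance 1 vlan 10-20
SwitchTK1(config-mst)# instance 2 vlan 21-30
SwitchTK1(config-mst)# exit
SwitchTK1# show spanning-tree mst configuration

All switches intended for the same region must match on name, revision, and the instance-to-VLAN map.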

Operations Within an MST Region


The IST connects all the MSTP switches in a region. When the IST converges, the
root of the IST becomes the IST master, which is the switch within the region with the
lowest bridge ID and path cost to the CST root. The IST master also is the CST root if
there is only one region within the network. If the CST root is outside the region, one
of the MSTP switches at the boundary of the region is selected as the IST master.

When an MSTP switch initializes, it sends BPDUs claiming itself as the root of the
CST and the IST master, with both of the path costs to the CST root and to the IST
master set to zero. The switch also initializes all of its MST instances and claims to be
the root for all of them. If the switch receives superior MST root information (lower

bridge ID, lower path cost, and so forth) than currently stored for the port, it
relinquishes its claim as the IST master.

During initialization, a region might have many sub-regions, each with its own IST
master. As switches receive superior IST information, they leave their old sub-regions
and join the new sub-region that might contain the true IST master. Thus all sub-
regions shrink, except for the one that contains the true IST master.

For correct operation, all switches in the MST region must agree on the same IST
master. Therefore, any two switches in the region synchronize their port roles for an
MST instance only if they converge to a common IST master.

Operations Between MST Regions


If there are multiple regions or legacy 802.1D switches within the network, MSTP
establishes and maintains the CST, which includes all MST regions and all legacy
STP switches in the network. The MST instances combine with the IST at the
boundary of the region to become the CST.

The IST connects all the MSTP switches in the region and appears as a sub-tree in the
CST that encompasses the entire switched domain, with the root of the sub-tree being
the IST master. The MST region appears as a virtual switch to adjacent STP switches
and MST regions.

Bridge Assurance
With Cisco IOS Release 12.2(33) SXI and later releases, you can use Bridge
Assurance to protect against certain problems that can cause bridging loops in the
network. Specifically, you use Bridge Assurance to protect against a unidirectional
link failure or other software failure and a device that continues to forward data traffic
when it is no longer running the spanning tree algorithm.

Bridge Assurance is enabled by default and can only be disabled globally. Also,
Bridge Assurance is enabled only on spanning tree network ports that are point-to-
point links. Finally, both ends of the link must have Bridge Assurance enabled. If the
device on one side of the link has Bridge Assurance enabled and the device on the
other side either does not support Bridge Assurance or does not have this feature
enabled, the connecting port is blocked.

With Bridge Assurance enabled, BPDUs are sent out on all operational network ports,
including alternate and backup ports, for each hello time period. If a port does not
receive a BPDU for a specified period, the port moves into an inconsistent (blocking)
state and is not used in the root port calculation. Once that port receives a BPDU, it
resumes the normal spanning tree transitions.

Figure 2.7 shows a normal STP topology, and Figure 2.8 demonstrates a potential
network problem when the device fails and you are not running Bridge Assurance.

Figure 2.7 – network with normal STP topology



Figure 2.8 – Network Problem without running bridge assurance

When using Bridge Assurance, follow these guidelines:

 Bridge Assurance runs only on point-to-point spanning tree network ports.


You must configure each side of the link for this feature.
 We recommend that you enable Bridge Assurance throughout your network.
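As a hedged sketch (on Catalyst IOS the network-port keyword is spanning-tree portfast network; other platforms such as NX-OS use spanning-tree port type network instead):

! Bridge Assurance is enabled by default; toggle it globally:
SwitchTK1(config)# spanning-tree bridge assurance
! Mark an inter-switch, point-to-point link as a network port:
SwitchTK1(config)# interface gigabitethernet 1/1
SwitchTK1(config-if)# spanning-tree portfast network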

Spanning-Tree Dispute Mechanism


Currently, this feature is not present in the IEEE MST standard, but it is included in
the standard-compliant implementation. The software checks the consistency of the
port role and state in the received BPDUs to detect unidirectional link failures that
could cause bridging loops.

When a designated port detects a conflict, it keeps its role, but reverts to a discarding
state because disrupting connectivity in case of inconsistency is preferable to opening
a bridging loop.

The following figure shows a unidirectional link failure that typically creates a
bridging loop. Switch A is the root bridge, and its BPDUs are lost on the link leading

to switch B. Rapid PVST+ (802.1w) and MST BPDUs include the role and state of
the sending port. With this information, switch A can detect that switch B does not
react to the superior BPDUs that it sends and that switch B's port is the designated
port, not the root port. As a result, switch A blocks (or keeps blocking) its port, which
prevents the bridging loop. The block is shown as an STP dispute.

2.2 – Layer 2 Multicast

2.2.a – Implement and troubleshoot IGMP

To start implementing IP multicast routing in your campus network, you must first
define who receives the multicast. IGMP provides a means to automatically control
and limit the flow of multicast traffic throughout your network with the use of special
multicast queriers and hosts.

 A querier is a network device, such as a router, that sends query messages to


discover which network devices are members of a given multicast group.
 A host is a receiver, including routers, that sends report messages (in response
to query messages) to inform the querier of a host membership.

A set of queriers and hosts that receive multicast data streams from the same source is
called a multicast group. Queriers and hosts use IGMP messages to join and leave
multicast groups.
IP multicast traffic uses group addresses, which are Class D IP addresses. The high-
order four bits of a Class D address are 1110. Therefore, host group addresses can be
in the range 224.0.0.0 to 239.255.255.255.

IGMP Versions
Multicast addresses in the range 224.0.0.0 to 224.0.0.255 are reserved for use by
routing protocols and other network control traffic. The address 224.0.0.0 is
guaranteed not to be assigned to any group.

IGMP packets are transmitted using IP multicast group addresses as follows:

 IGMP general queries are destined to the address 224.0.0.1 (all systems on a
subnet).
 IGMP group-specific queries are destined to the group IP address for which the
router is querying.

 IGMP group membership reports are destined to the group IP address for which
the router is reporting.
 IGMP Version 2 (IGMPv2) Leave messages are destined to the address 224.0.0.2
(all routers on a subnet).

Note that in some old host IP stacks, Leave messages might be destined to the group
IP address rather than to the all-routers address.

IGMP messages are used primarily by multicast hosts to signal their interest in
joining a specific multicast group and to begin receiving group traffic.

The original IGMP Version 1 Host Membership model defined in RFC 1112 is
extended to significantly reduce leave latency and provide control over source
multicast traffic by use of Internet Group Management Protocol, Version 2.

 IGMP Version 1
Provides for the basic Query-Response mechanism that allows the multicast router
to determine which multicast groups are active and other processes that enable
hosts to join and leave a multicast group. RFC 1112 defines Host Extensions for
IP Multicasting.
 IGMP Version 2
Extends IGMP allowing such features as the IGMP leave process, group-specific
queries, and an explicit maximum query response time. IGMP Version 2 also adds
the capability for routers to elect the IGMP querier without dependence on the
multicast protocol to perform this task. RFC 2236 defines Internet Group
Management Protocol, Version 2.
 IGMP Version 3
Provides for “source filtering” which enables a multicast receiver host to signal to
a router which groups it wants to receive multicast traffic from, and from which
sources this traffic is expected.
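For illustration (the interface number is arbitrary), the IGMP version can be set per interface and membership verified:

RouterTK1(config)# interface gigabitethernet 0/0
RouterTK1(config-if)# ip igmp version 2
RouterTK1(config-if)# end
RouterTK1# show ip igmp interface gigabitethernet 0/0
RouterTK1# show ip igmp groups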

The PIM protocol maintains the current IP multicast service mode of receiver-initiated
membership. It is not dependent on a specific unicast routing protocol. PIM is defined
in RFC 2362, Protocol Independent Multicast-Sparse Mode (PIM-SM): Protocol
Specification, and in the following Internet Engineering Task Force (IETF) Internet
drafts:

 Protocol Independent Multicast (PIM): Motivation and Architecture


 Protocol Independent Multicast (PIM), Dense Mode Protocol Specification
 Protocol Independent Multicast (PIM), Sparse Mode Protocol Specification
 draft-ietf-idmr-igmp-v2-06.txt, Internet Group Management Protocol, Version 2
 draft-ietf-pim-v2-dm-03.txt, PIM Version 2 Dense Mode

PIM can operate in dense mode or sparse mode. It is possible for the router to handle
both sparse groups and dense groups at the same time.

In dense mode, a router assumes that all other routers want to forward multicast
packets for a group. If a router receives a multicast packet and has no directly
connected members or PIM neighbors present, a prune message is sent back to the

source. Subsequent multicast packets are not flooded to this router on this pruned
branch. PIM builds source-based multicast distribution trees.

In sparse mode, a router assumes that other routers do not want to forward multicast
packets for a group, unless there is an explicit request for the traffic. When hosts join
a multicast group, the directly connected routers send PIM join messages toward the
rendezvous point (RP). The RP keeps track of multicast groups. Hosts that send
multicast packets are registered with the RP by the first hop router of that host. The
RP then sends join messages toward the source. At this point, packets are forwarded
on a shared distribution tree. If the multicast traffic from a specific source is
sufficient, the first hop router of the host may send join messages toward the source to
build a source-based distribution tree.
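A minimal sparse-mode sketch (the RP address 10.1.1.1 and interface number are illustrative) tying the above together:

RouterTK1(config)# ip multicast-routing
RouterTK1(config)# interface gigabitethernet 0/0
RouterTK1(config-if)# ip pim sparse-mode
RouterTK1(config-if)# exit
RouterTK1(config)# ip pim rp-address 10.1.1.1
RouterTK1# show ip pim neighbor
RouterTK1# show ip mroute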

2.2.b – Explain MLD

Multicast Listener Discovery (MLD) snooping provides a way to constrain multicast


traffic at Layer 2. By snooping the MLD membership reports sent by hosts in the
bridge domain, the MLD snooping application can set up Layer 2 multicast
forwarding tables to deliver traffic only to ports with at least one interested member,
significantly reducing the volume of multicast traffic.
MLD snooping uses the information in MLD membership report messages to build
corresponding information in the forwarding tables to restrict IPv6 multicast traffic at
Layer 2. The forwarding table entries are in the form <Route, OIF List>, where:
 Route is a <*, G> route or <S, G> route.
 OIF List comprises all bridge ports that have sent MLD membership reports
for the specified route plus all multicast router (mrouter) ports in the bridge
domain.
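On IOS XR platforms such as the ASR 9000, MLD snooping is enabled by attaching a snooping profile to a bridge domain. The following outline is an assumption-laden sketch (the profile, bridge group, and domain names are invented), so verify the syntax against your platform's documentation:

RP/0/RSP0/CPU0:router(config)# mld snooping profile MLD-PROF
RP/0/RSP0/CPU0:router(config-mld-snooping-profile)# exit
RP/0/RSP0/CPU0:router(config)# l2vpn
RP/0/RSP0/CPU0:router(config-l2vpn)# bridge group BG1
RP/0/RSP0/CPU0:router(config-l2vpn-bg)# bridge-domain BD1
RP/0/RSP0/CPU0:router(config-l2vpn-bg-bd)# mld snooping profile MLD-PROF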

Restrictions for MLD Snooping


Following are the restrictions (features that are not supported):
 MLD Snooping is supported only on L2VPN bridge domains.
 Explicit host tracking.
 Multicast Admission Control.
 Security filtering.
 Report rate limiting.
 Multicast router discovery.

Advantages of MLD Snooping


 In its basic form, it reduces bandwidth consumption by reducing multicast
traffic that would otherwise flood an entire VPLS bridge domain.

 With the use of some optional configurations, it provides security between


bridge domains by filtering the MLD reports received from hosts on one
bridge port and preventing leakage towards the hosts on other bridge ports.

High Availability (HA) features for MLD


MLD supports the following HA features:
 Process restarts
 RP Failover
 Stateful Switch-Over (SSO)
 Non-Stop Forwarding (NSF)—Forwarding continues unaffected while the
control plane is restored following a process restart or route processor (RP)
failover.
 Line card online insertion and removal (OIR)

Bridge Domain Support for MLD


MLD snooping operates at the bridge domain level. When MLD snooping is enabled
on a bridge domain, the snooping functionality applies to all ports under the bridge
domain, including:
 Physical ports under the bridge domain.
 Ethernet flow points (EFPs)—An EFP can be a VLAN, VLAN range, list of
VLANs, or an entire interface port.
 Pseudowires (PWs) in VPLS bridge domains.
 Ethernet bundles—Ethernet bundles include IEEE 802.3ad link bundles and
Cisco EtherChannel bundles. From the perspective of the MLD snooping
application, an Ethernet bundle is just another EFP. The forwarding
application in the Cisco ASR 9000 Series Routers randomly nominates a
single port from the bundle to carry the multicast traffic.

Multicast Router and Host Ports


MLD snooping classifies each port as one of the following:
 Multicast router ports (mrouter ports)—These are ports to which a multicast-
enabled router is connected. Mrouter ports are usually dynamically
discovered, but may also be statically configured. Multicast traffic is always
forwarded to all mrouter ports, except when an mrouter port is the ingress
port.
 Host ports—Any port that is not an mrouter port is a host port.

Multicast Router Discovery for MLD


MLD snooping discovers mrouter ports dynamically. You can also explicitly
configure a port as an mrouter port.
 Discovery—MLD snooping identifies upstream mrouter ports in the bridge
domain by snooping MLD query messages and Protocol Independent Multicast
Version 2 (PIMv2) hello messages. Snooping PIMv2 hello messages identifies
MLD non-queriers in the bridge domain.
 Static configuration—You can statically configure a port as an mrouter port
with the mrouter command in a profile attached to the port. Static
configuration can help in situations when incompatibilities with non-Cisco
equipment prevent dynamic discovery.
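As an illustration, a static mrouter port might be configured on a Cisco ASR 9000 (IOS XR) along these lines; the profile, bridge group, and interface names here are hypothetical:

```
! Profile that enables MLD snooping for the bridge domain
mld snooping profile BD-SNOOP
!
! Profile that statically marks an attached port as an mrouter port
mld snooping profile STATIC-MROUTER
 mrouter
!
l2vpn
 bridge group BG1
  bridge-domain BD1
   mld snooping profile BD-SNOOP
   interface GigabitEthernet0/1/0/0
    mld snooping profile STATIC-MROUTER
```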

2.2.c – Explain PIM Snooping

In networks where a Layer 2 switch interconnects several routers, such as an Internet


exchange point (IXP), the switch floods IP multicast packets on all multicast router
ports by default, even if there are no multicast receivers downstream. With PIM
snooping enabled, the switch restricts multicast packets for each IP multicast group to
only those multicast router ports that have downstream receivers joined to that group.
When you enable PIM snooping, the switch learns which multicast router ports need
to receive the multicast traffic within a specific VLAN by listening to the PIM hello
messages, PIM join and prune messages, and bidirectional PIM designated forwarder-
election messages. With Release 12.2(33) SXJ2 and later releases, PIM snooping also
constrains multicast traffic to VPLS interfaces.
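As a sketch, PIM snooping is enabled in two places on a Catalyst switch, globally and per VLAN; the VLAN number is illustrative:

```
! Enable PIM snooping globally
Switch(config)# ip pim snooping
!
! Enable PIM snooping on a specific VLAN
Switch(config)# interface vlan 100
Switch(config-if)# ip pim snooping
```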

2.3 Layer 2 WAN circuit technologies

2.3.a – Implement and troubleshoot HDLC

The Synchronous Data Link Control (SDLC) protocol, developed by IBM, made
long-distance data links practical by allowing communication between a mainframe
and a remote workstation. This protocol evolved into the High-Level Data Link
Control (HDLC) protocol, which provided reliable communications and the ability to
manage the flow of traffic between devices. HDLC is an open industry standard
protocol, whereas SDLC is an IBM proprietary protocol that must be licensed by
IBM. Industry standard protocols such as TCP/IP drive the adoption and low costs of
telecommunications equipment.

Cisco offered an enhanced multiprotocol version of the HDLC protocol to enable


various protocols over the HDLC (High-Level Data Link Control) Layer 2 framing.
This Cisco HDLC protocol is proprietary and exclusive to Cisco. The HDLC standard
was loosely defined at the time of Cisco’s implementation and left too much room for
interpretation. When PPP in HDLC-like framing was later standardized in RFC 1662,
Cisco’s HDLC protocol was not compliant. Point-to-Point Protocol (PPP) evolved from
HDLC; it offers an industry standard way to provide multiprotocol networking
abilities, as well as many enhancements such as authentication, multilink, and
compression. PPP is used for many other technologies, including ISDN. HDLC and
PPP are scalable to architectures with fast speeds.

Figure 2.9 shows this networking evolution.
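On a Cisco router, the choice between the two encapsulations is a single interface command; HDLC is already the default on synchronous serial interfaces, so the first example below is shown only for completeness (interface numbers are illustrative):

```
interface Serial0/0
 encapsulation hdlc   ! Cisco HDLC, the default on serial interfaces
!
interface Serial0/1
 encapsulation ppp    ! standards-based PPP with LCP/NCP negotiation
```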



2.3.b – Implement and Troubleshoot PPP

Cisco’s Implementation of SLIP and PPP


Serial Line Internet Protocol (SLIP) and Point-to-Point Protocol (PPP) define
methods of sending Internet Protocol (IP) packets over standard asynchronous serial
lines with minimum line speeds of 1200 baud.

Using SLIP or PPP encapsulation over asynchronous lines is an inexpensive way to


connect PCs to a network. SLIP and PPP over asynchronous dial-up modems allow a
home computer to be connected to a network without the cost of a leased line. Dial-up
SLIP and PPP links can also be used for remote sites that need only occasional remote
node or backup connectivity. Both public domain and vendor-supported SLIP and
PPP implementations are available for a variety of computer applications.

The Cisco IOS software concentrates a large number of SLIP or PPP PC or


workstation client hosts onto a network interface that allows the PCs to communicate
with any host on the network. The Cisco IOS software can support any combination
of SLIP or PPP lines and lines dedicated to normal asynchronous devices such as
terminals and modems.

SLIP is an older protocol. PPP is a newer, more robust protocol than SLIP, and it
contains functions that can detect or prevent misconfiguration. PPP also provides
greater built-in security mechanisms.

Figure 2.10 illustrates a typical asynchronous SLIP or PPP remote-node


configuration.

Figure 2.10 Sample SLIP or PPP Remote-Node Configuration



Responding to BOOTP Requests


The BOOTP protocol allows a client machine to discover its own IP address, the
address of the router, and the name of a file to be loaded into memory and executed.
There are typically two phases to using BOOTP: first, the client’s address is
determined and the boot file is selected; then the file is transferred, typically using
TFTP.

SLIP and PPP clients can send BOOTP requests to the Cisco IOS software, and the
Cisco IOS software responds with information about the network. For example, the
client can send a BOOTP request to find out what its IP address is and where the boot
file is located, and the Cisco IOS software responds with the information.

BOOTP supports the extended BOOTP requests specified in RFC 1084 and works for
both SLIP and PPP encapsulation.

BOOTP compares to Reverse Address Resolution Protocol (RARP) as follows: RARP


is an older protocol that allows a client to determine its IP address if it knows its
hardware address.

Asynchronous Network Connections and Routing


The Cisco IOS software also supports IP routing connections for communication that
requires connecting one network to another.

The Cisco IOS software supports protocol translation for SLIP and PPP between other
network devices running Telnet, LAT, or X.25. For example, you can send IP packets
across a public X.25 PAD network using SLIP or PPP encapsulation when SLIP or
PPP protocol translation is enabled.

Asynchronous interfaces offer both dedicated and dynamic address assignments,


configurable hold queues and IP packet sizes, extended BOOTP requests, and permit
and deny conditions for controlling access to lines. Figure 2.11 shows a sample
asynchronous routing configuration.

Figure 2.11 Sample Asynchronous Routing Configuration



Asynchronous Interfaces and Broadcasts


The Cisco IOS software recognizes a variety of IP broadcast addresses. When a router
receives an IP broadcast packet from an asynchronous client, it rebroadcasts the
packet onto the network without changing the IP header.

The Cisco IOS software receives the SLIP or PPP client broadcasts and responds to
BOOTP requests with the current IP address assigned to the asynchronous interface
from which the request was received. This facility allows the asynchronous client
software to automatically learn its own IP address.

Remote Node Configuration Task List


To configure the Cisco IOS software through a remote node, you must perform the
first task in the following list on your asynchronous interfaces. Perform the rest of the
tasks to customize the asynchronous interface for your particular network
environment and to monitor asynchronous connections:

• Configure Asynchronous Interfaces


• Configure Network-Layer Protocols over SLIP and PPP
• Enable SLIP and PPP on Virtual Asynchronous Interfaces
• Configure Remote Access to NetBEUI Services
• Configure Performance Parameters
• Optimize Available Bandwidth
• Specify the MTU Size of IP Packets
• Provide Backward Compatibility for SLIP and PPP
• Modify the IP Output Queue Size
• Specify IP Access Lists
• Configure Support for Extended BOOTP Requests
• Monitor and Maintain Asynchronous Interfaces
• Asynchronous Interface Configuration Examples

Configure Asynchronous Interfaces


To configure the Cisco IOS software through a remote node, configure basic
functionality on your asynchronous interfaces, and then customize the interfaces for
your environment. Basic configuration tasks include the following:

• Specify an Asynchronous Interface


• Configure SLIP or PPP Encapsulation
• Specify Dedicated or Interactive Mode

• Configure the Interface Addressing Method for Remote Devices


• Assign IP Addresses for Local Devices
• Enable Asynchronous Routing
• Enable Default Routing on an Asynchronous Interface
• Configure Automatic Protocol Startup
• Make SLIP and PPP connections at the EXEC level if you have configured
interactive mode.

Specify an Asynchronous Interface


To specify an asynchronous interface, perform the following task in global
configuration mode:

Configure SLIP or PPP Encapsulation


SLIP and PPP are methods of encapsulating datagrams and other network-layer
protocol information over point-to-point links. To configure the default encapsulation
on an asynchronous interface, perform the following task in interface configuration
mode:
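For example, one of the following would be entered on the asynchronous interface (the interface number is illustrative):

```
Router(config)# interface async 1
Router(config-if)# encapsulation ppp
! or, for SLIP:
Router(config-if)# encapsulation slip
```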

Specify Dedicated or Interactive Mode


You can configure one or more asynchronous interfaces on your access server (and
one on a router) to be in dedicated network interface mode. In dedicated mode, an
interface is automatically configured for SLIP or PPP connections. There is no user
prompt or EXEC level, and no end-user commands are required to initiate remote-
node connections. If you want a line to be used only for SLIP or PPP connections,
configure the line for dedicated mode.

In interactive mode, a line can be used to make any type of connection, depending on
the EXEC command entered by the user. For example, depending on its
configuration, the line could be used for Telnet or XRemote connections, or SLIP or
PPP encapsulation.

Configure Dedicated Network Mode


You can configure an asynchronous interface to be in dedicated network mode. When
the interface is configured for dedicated mode, the end user cannot change the
encapsulation method, address, or other parameters.

To configure an interface for dedicated network mode, perform the following task in
interface configuration mode.
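For example (interface number illustrative):

```
Router(config)# interface async 1
Router(config-if)# async mode dedicated
```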

Return a Line to Interactive Mode


After a line has been placed in dedicated mode, perform the following task in
interface configuration mode to return it to interactive mode.

By default, no asynchronous mode is configured. In this state, the line is not available
for inbound networking because the SLIP and PPP connections are disabled.
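For example, to return the interface to interactive mode (interface number illustrative):

```
Router(config)# interface async 1
Router(config-if)# async mode interactive
```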

Configure the Interface Addressing Method for Remote Devices


You can control whether addressing is dynamic (the user specifies the address at the
EXEC level when making the connection), or whether default addressing is used (the
address is forced by the system). If you specify dynamic addressing, the router must
be in interactive mode and the user will enter the address at the EXEC level.

It is common to configure an asynchronous interface to have a default address and to


allow dynamic addressing. With this configuration, the choice between the default
address and dynamic addressing is made by users when they enter the slip or ppp
EXEC command. If the user enters an address, it is used, and if the user enters the
default keyword, the default address is used.

Assign a Default Asynchronous Address


Perform the following task in interface configuration mode to assign a permanent
default asynchronous address:
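For example, using an illustrative address:

```
Router(config)# interface async 1
Router(config-if)# async default ip address 172.16.20.2
```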

Use the no form of this command to disable the default address. If the server has been
configured to authenticate asynchronous connections, you are prompted for a
password after entering the slip default or ppp default EXEC command before the line
is placed into asynchronous mode.

The assigned default address is implemented when the user enters the slip default or
ppp default EXEC command. The Terminal Access Controller Access-Control System
(TACACS) server, when enabled, validates the transaction, and the line is put into
network mode using the address that is in the configuration file.

Configuring a default address is useful when the user is not required to know the IP
address to gain access to a system (for example, users of a server that is available to
many students on a campus).

Instead of requiring each user to know an IP address, they only need to enter the slip
default or ppp default EXEC command and let the server select the address to use.

When a line is configured for dynamic assignment of asynchronous addresses, the


user enters the slip or ppp EXEC command and is prompted for an address or logical
host name. The address is validated by TACACS, when enabled, and the line is
assigned the given address and put into asynchronous mode. Assigning asynchronous
addresses dynamically is also useful when you want to assign set addresses to users.
For example, an application on a personal computer that automatically dials in using
SLIP and polls for electronic mail messages can be set up to dial in periodically and
enter the required IP address and password.

Assign IP Addresses for Local Devices


The local address is set using the ip address or ip unnumbered command.
IP addresses identify locations to which IP datagrams can be sent. You must assign
each router interface an IP address. See the Internetworking Technology Overview
publication for detailed information on IP addresses.

To assign an IP address to a network interface, perform the following task in interface


configuration mode:
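For example, with an illustrative address and mask:

```
Router(config-if)# ip address 172.16.20.1 255.255.255.0
```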

A subnet mask identifies the subnet field of a network address. Subnets are described
in the Internetworking Technology Overview publication.

Conserve Network Addresses


When asynchronous routing is enabled, you might find it necessary to conserve
network addresses by configuring the asynchronous interfaces as unnumbered. An
unnumbered interface does not have an address. Network resources are therefore
conserved because fewer network numbers are used and routing tables are smaller.

To configure an unnumbered interface, perform the following task in interface


configuration mode:
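For example, borrowing the address of an Ethernet interface (interface numbers are illustrative):

```
Router(config)# interface async 1
Router(config-if)# ip unnumbered ethernet 0
```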

Whenever the unnumbered interface generates a packet (for example, a routing


update), it uses the address of the specified interface as the source address of the IP
packet. It also uses the address of the specified interface to determine which routing
processes are sending updates over the unnumbered interface.

You can use the IP unnumbered feature even if the system on the other end of the
asynchronous link does not support it. The IP unnumbered feature is transparent to the
other end of the link because each system bases its routing activities on information in
the routing updates it receives and on its own interface address.

Enable Asynchronous Routing



To route IP packets on an asynchronous interface, perform the following task in


interface configuration mode:

This command permits you to enable the IGRP, RIP, and OSPF routing protocols for
use when the user makes a connection using the ppp or slip EXEC commands. The
user must, however, specify the /routing keyword at the SLIP or PPP command line.
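For example (interface number illustrative):

```
Router(config)# interface async 1
Router(config-if)# async dynamic routing
```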

Enable Default Routing on an Asynchronous Interface


To automatically enable the use of IP routing protocols on asynchronous interfaces
with the ppp and slip EXEC commands, perform the following task in interface
configuration mode.

For asynchronous interfaces in interactive mode, the async default routing command
causes the ppp and slip EXEC commands to be interpreted as though the /route switch
had been included in the command. For asynchronous interfaces in dedicated mode,
the async default routing command enables routing protocols to be used on the line.
Without the async default routing command, there is no way to enable the use of
routing protocols automatically on a dedicated asynchronous interface.
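For example (interface number illustrative):

```
Router(config)# interface async 1
Router(config-if)# async default routing
```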

Configure Automatic Protocol Startup


To configure the Cisco IOS software to allow a PPP or SLIP session to start
automatically, perform the following tasks in line configuration mode:

The autoselect command permits the Cisco IOS software to start an appropriate
process automatically when a starting character is received. The software
detects either a Return character, which is the start character for an EXEC session,
or the start character for the ARA protocol. By using the optional during-login
argument, the username or password prompt is displayed without pressing the Return
key. While the username or password prompts are displayed, you can choose to answer
these prompts or to start sending packets from an autoselected protocol.
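For example, on a range of lines (line numbers are illustrative):

```
Router(config)# line 1 16
Router(config-line)# autoselect ppp
Router(config-line)# autoselect during-login
```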

Configure Network-Layer Protocols over SLIP and PPP


You can configure network-layer protocols, such as AppleTalk, IP, and IPX, over
SLIP and PPP.

SLIP supports only IP, but PPP supports each of these protocols.

Configure IP–SLIP
To enable IP–SLIP on a synchronous or asynchronous interface, perform the
following tasks, beginning in interface configuration mode:
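A minimal sketch pulling these tasks together on one asynchronous interface (addresses and interface number are illustrative):

```
Router(config)# interface async 1
Router(config-if)# ip address 172.16.20.1 255.255.255.0
Router(config-if)# encapsulation slip
Router(config-if)# async mode interactive
Router(config-if)# async default ip address 172.16.20.2
```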

Configure AppleTalk–PPP
You can configure an asynchronous interface so that users can access AppleTalk
zones by dialing into the router via PPP through this interface. Users accessing the
network can run AppleTalk and IP natively on a remote Macintosh, access any
available AppleTalk zones from Chooser, use networked peripherals, and share files
with other Macintosh users. This feature is also referred to as ATCP.

You create a virtual network that exists only for accessing an AppleTalk internet
through the server.

To create a new AppleTalk zone, issue the appletalk virtual-net command and use a
new zone name; this network number is then the only one associated with this zone.
To add network numbers to an existing AppleTalk zone, use this existing zone name
in the command; this network number is then added to the existing zone.

Routing is not supported on these interfaces.


To enable ATCP for PPP, perform the following tasks in interface configuration
(asynchronous) mode:

Configure IP–PPP
To enable IP–PPP (IPCP) on a synchronous or asynchronous interface, perform the
following tasks, beginning in interface configuration mode:

Configure IPX–PPP
You can configure IPX to run over PPP (IPXCP) on synchronous serial and
asynchronous serial interfaces using one of two methods.

The first method associates an asynchronous interface with a loopback interface


configured to run IPX. It permits you to configure IPX–PPP on asynchronous
interfaces only.

The second method permits you to configure IPX–PPP on asynchronous and


synchronous serial interfaces. However, it requires that you specify a dedicated IPX
network number for each interface, which can require a substantial number of
network numbers for a large number of interfaces.

You can also configure IPX to run on VTYs configured for PPP.

IPX–PPP—Associating Asynchronous Interfaces with Loopback Interfaces


To permit IPX client connections to an asynchronous interface, the interface must be
associated with a loopback interface configured to run IPX. To permit such
connections, perform the following tasks, beginning in global configuration mode:

If you are configuring IPX–PPP on asynchronous interfaces, you should filter routing
updates on the interface. Most asynchronous serial links have very low bandwidth,
and routing updates take up a great deal of bandwidth. Step 10 in the previous task
table uses the ipx sap-interval 0 command to filter SAP updates.

IPX–PPP—Using Dedicated IPX Network Numbers for Each Interface


To enable IPX–PPP, perform the following tasks starting in global configuration
mode. The first five tasks are required. The last task is optional:

2.3.c - Describe WAN rate-based Ethernet circuits

There are a variety of approaches to connecting multiple sites across a wide area
network (WAN), ranging from leased lines to MPLS to IPsec VPN tunnels.
Unfortunately, many of the options and potential WAN topologies are often
misunderstood or confused with one another.

Point-to-Point
A point-to-point circuit, as its name implies, connects exactly two points. This is the
simplest type of WAN circuit. Packets sent from one site are delivered to the other
and vice versa. There is typically some amount of processing and encapsulation
performed across the carrier's infrastructure, but it is all transparent to the customer.

A point-to-point circuit is typically delivered as a layer two transport service, which


allows the customer to run whatever layer three protocols they want with an arbitrary
addressing scheme. A customer can change or add an IP subnet in use on a layer two
circuit without coordinating with their provider.

Point-to-multipoint

The primary drawback of point-to-point circuits is that they don't scale efficiently.
Suppose you wanted to deploy a hub-and-spoke style WAN topology with twenty
branch sites connecting to a single main office. You could deploy twenty point-to-
point circuits, one to each spoke site from the hub, but that would result in a clutter of
twenty separate physical connections at the hub site. Installing twenty circuits would
be rather expensive, and you might not even be able to fit them all on the same
device. This is where a point-to-multipoint circuit would be ideal.

With a point-to-multipoint circuit, we only need a single circuit to the hub site to be
shared by all spoke circuits. Each spoke is assigned a unique tag which identifies its
traffic across the hub circuit. The type of tagging is medium-dependent; we'll use an
Ethernet circuit with IEEE 802.1Q VLAN trunking as an example. (The spoke circuits
may have their traffic tagged or untagged, depending on the specific carrier's service
parameters.) Each spoke gets a virtual circuit and routed subinterface on the hub
circuit. The resulting layer three topology is the same as using several point-to-point
circuits but is more practical in implementation.
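A hub router's side of such a circuit might look like the following sketch, with one 802.1Q subinterface per spoke (VLAN IDs and addresses are illustrative):

```
interface GigabitEthernet0/0
 description Hub circuit to carrier
 no ip address
!
interface GigabitEthernet0/0.101
 description Spoke site 1
 encapsulation dot1Q 101
 ip address 10.1.101.1 255.255.255.252
!
interface GigabitEthernet0/0.102
 description Spoke site 2
 encapsulation dot1Q 102
 ip address 10.1.102.1 255.255.255.252
```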

Metro Ethernet
Metro Ethernet is a layer two metropolitan area network (MAN) service, which
simplifies the WAN topology greatly by effectively emulating one big LAN switch
spanning an entire metro area. All sites connected into the metro Ethernet cloud are
placed into a single multi-access segment. This allows each site to communicate
directly across the carrier's infrastructure with any other site.

The catch here is that, as its name implies, metro Ethernet is typically limited to
within a single geographic area. One could not, for example, order metro Ethernet
connectivity among Los Angeles, Dallas, and New York. Its multi-access nature also
introduces design considerations pertaining to the mapping of routing protocol
adjacencies that should not be overlooked.

3.0 – Layer 3 Technologies

3.1 – Addressing technologies

3.1.a – Identify, implement and troubleshoot IPv4 addressing and sub-netting

Address types, VLSM


Internet Service Providers may face a situation where they need to allocate IP subnets
of different sizes as per the requirements of customers. One customer may ask for a
subnet of 3 IP addresses and another may ask for 10 IPs. For an ISP, it is not feasible
to divide the IP addresses into fixed-size subnets; rather, it may want to subnet the
subnets in such a way that results in minimum wastage of IP addresses.

For example, an administrator has the 192.168.1.0/24 network. The suffix /24
(pronounced "slash 24") tells the number of bits used for the network address. The
administrator has four departments with different numbers of hosts: Sales has 100
computers, Purchase has 50, Accounts has 25, and Management has 5. With fixed-length
subnetting, all the subnets are of the same size, so the administrator cannot fulfill
all the requirements of the network.

The following procedure shows how VLSM can be used in order to allocate
department-wise IP addresses as mentioned in the example.

Step - 1
Make a list of Subnets possible.

Step - 2
Sort the requirements of IPs in descending order (Highest to Lowest).

Sales 100

Purchase 50

Accounts 25

Management 5

Step - 3
Allocate the highest range of IPs to the highest requirement, so let's assign
192.168.1.0 /25 (255.255.255.128) to Sales department. This IP subnet with Network
number 192.168.1.0 has 126 valid Host IP addresses, which satisfy the requirement of
Sales Department. The subnet mask used for this subnet has 10000000 as the last
octet.

Step - 4
Allocate the next highest range, so let's assign 192.168.1.128 /26 (255.255.255.192)
to Purchase department. This IP subnet with Network number 192.168.1.128 has 62
valid Host IP Addresses, which can be easily assigned to all Purchase departments’
PCs. The subnet mask used has 11000000 in the last octet.

Step - 5
Allocate the next highest range, i.e. Accounts. The requirement of 25 IPs can be
fulfilled with 192.168.1.192 /27 (255.255.255.224) IP subnet, which contains 30 valid
host IPs. The network number of Accounts department will be 192.168.1.192. The
last octet of subnet mask is 11100000.

Step - 6
Allocate next highest range to Management. The Management department contains
only 5 computers. The subnet 192.168.1.224 /29 with Mask 255.255.255.248 has
exactly 6 valid host IP addresses. So this can be assigned to Management. The last
octet of subnet mask will contain 11111000.

By using VLSM, the administrator can subnet the address space in such a way that the
least number of IP addresses is wasted. Even after assigning IPs to every department,
the administrator in this example is still left with plenty of IP addresses, which
would not have been possible with fixed-size subnets.
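The resulting allocations could be applied to router interfaces as follows (interface numbers are illustrative):

```
! Sales - 192.168.1.0/25 (126 usable hosts)
interface GigabitEthernet0/0
 ip address 192.168.1.1 255.255.255.128
!
! Purchase - 192.168.1.128/26 (62 usable hosts)
interface GigabitEthernet0/1
 ip address 192.168.1.129 255.255.255.192
!
! Accounts - 192.168.1.192/27 (30 usable hosts)
interface GigabitEthernet0/2
 ip address 192.168.1.193 255.255.255.224
!
! Management - 192.168.1.224/29 (6 usable hosts)
interface GigabitEthernet0/3
 ip address 192.168.1.225 255.255.255.248
```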

ARP
In IPv4, IP addresses are mapped to Link Layer addresses by ARP, which is specified
in RFC 826, "An Ethernet Address Resolution Protocol", November 1982. ARP lives
in the Link Layer, and has no real protection against hacking attacks (so it is the target
of many dangerous attacks).

When a node (alice-pc) wants to send a packet to another node (bob-pc) which is in
the same subnet, it first checks to see if it has done so recently, which means it might
have bob-pc's MAC address cached. This cache is called the ARP table. This is a list
of IP addresses and corresponding MAC addresses, complete with a timestamp (to
determine when that information becomes "stale" and should be discarded). If bob-
pc's IP address (and corresponding MAC address) is already in alice-pc's ARP cache,

alice-pc just gets bob-pc's MAC address from its ARP cache, and builds the outgoing
Ethernet frame.

If you type "arp -a" in Windows, you will see the current contents of the ARP cache.
The dynamic entries were learned through ARP requests, and they will eventually
expire. The static entries (such as broadcast and multicast mappings) never expire.

ARP Request

Assuming bob-pc's IP address is not in the table, alice-pc will send an ARP Request
to all nodes on the subnet. Let's assume alice-pc and bob-pc have the following IP and
MAC addresses:

alice-pc: IP address 172.20.2.1, MAC address: 00:22:15:24:32:9c

bob-pc: IP address 172.20.2.11, MAC address 00:17:a4:ec:11:9c

The ARP request from alice-pc says

"Hey everyone! I own IP address 172.20.2.1 and MAC address 00:22:15:24:32:9c.


Who owns IP address 172.20.2.11?".

This is sent via Ethernet broadcast (MAC address ff:ff:ff:ff:ff:ff) to all nodes on the
subnet. Here is a packet capture of this ARP request:

ARP Reply

When bob-pc receives this ARP request, it updates its ARP cache with the sender's IP
address and MAC address, for future use.

All of the nodes except bob-pc see that ARP request and discard it as not relevant to
them ("not my business!"). Node bob-pc however, recognizes its own IP address, and
responds directly to alice-pc with an ARP response:

"Hey 00:22:15:24:32:9c, I own IP address 172.20.2.11, and my MAC address is


00:17:a4:ec:11:9c".

Since bob-pc knows the MAC address of alice-pc (from the ARP request), it sends the
reply directly to alice-pc (not to everyone). Here is a packet capture of the ARP reply:

When alice-pc receives this reply, it updates its ARP cache with the sender's IP
address and MAC address, which it then uses to create the Ethernet frame needed to
transmit the IP packet.
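On a Cisco router, the ARP cache can be inspected, and an entry pinned statically, as in this sketch (addresses taken from the example above):

```
! View the current ARP cache
Router# show ip arp
!
! Add a static ARP entry for bob-pc
Router(config)# arp 172.20.2.11 0017.a4ec.119c arpa
```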

Gratuitous ARP
There is an interesting variant on the ARP request, called a Gratuitous ARP. This
involves a node sending an ARP request for information about its own IP address.
This is basically saying:

"Hey everyone! I own IP address 10.3.2.56 and MAC address 00:19:5b:2f:14:6b.


Who owns IP address 10.3.2.56?".

Here is a packet capture of one:

This seems like an odd thing for a node to do. If you were another node and saw this,
you might think, "idiot, YOU own it - you just said you did." There are two reasons
for a node to send such a request:

First, when a node (say, alice-pc) first connects to a subnet (or when its interface is
enabled), it sends a gratuitous ARP and then waits for a short while hoping that no
one replies. If someone does reply (say bob-pc), then the IP address in question is
already in use by the node that responded, so the new node (alice-pc) cannot also use
that address (that would produce an address conflict). It must configure a new address
and try again. If alice-pc doesn't hear any reply, it assumes the address is "fair game"
and begins using it. Note that if bob-pc did own the address, but is currently not
connected (or is powered off), alice-pc will not know that bob-pc already has the
address. When bob-pc later connects to the subnet, it will find out that its address is
already in use by alice-pc (assuming alice-pc is connected at that time). Node bob-pc
just lost ownership of that address, and must configure another one.

Second, switches in a subnet can learn which interface various nodes are connected to
by listening for gratuitous ARPs (they don't send a reply, of course). Each switch
builds a table of MAC addresses seen on its various interfaces, which is used to
optimize traffic flow.

3.1.b Identify, implement and troubleshoot IPv6 addressing and sub-netting



Unicast, multicast
IPv6 Address Type: Unicast

An IPv6 unicast address is an identifier for a single interface, on a single node. A


packet that is sent to a unicast address is delivered to the interface identified by that
address. The Cisco IOS software supports the following IPv6 unicast address types:

 Aggregatable Global Address


 Link-Local Address
 IPv4-Compatible IPv6 Address
 Unique Local Address

Aggregatable Global Address


An aggregatable global address is an IPv6 address from the aggregatable global
unicast prefix. The structure of aggregatable global unicast addresses enables strict
aggregation of routing prefixes that limits the number of routing table entries in the
global routing table. Aggregatable global addresses are used on links that are
aggregated upward through organizations, and eventually to the Internet service
providers (ISPs).

Aggregatable global IPv6 addresses are defined by a global routing prefix, a subnet
ID, and an interface ID. Except for addresses that start with binary 000, all global
unicast addresses have a 64-bit interface ID. The IPv6 global unicast address
allocation uses the range of addresses that start with binary value 001 (2000::/3).
Figure 3.1 shows the structure of an aggregatable global address.

Figure 3.1 Aggregatable Global Address Format

Addresses with a prefix of 2000::/3 (001) through E000::/3 (111) are required to have
64-bit interface identifiers in the extended universal identifier (EUI)-64 format. The
Internet Assigned Numbers Authority (IANA) allocates the IPv6 address space in the
range of 2000::/16 to regional registries.

The aggregatable global address typically consists of a 48-bit global routing prefix
and a 16-bit subnet ID or Site-Level Aggregator (SLA). In the IPv6 aggregatable
global unicast address format document (RFC 2374), the global routing prefix
included two other hierarchically structured fields named Top-Level Aggregator
(TLA) and Next-Level Aggregator (NLA). The IETF decided to remove the TLA and
NLA fields from the RFCs because these fields are policy-based. Some existing IPv6
networks deployed before the change might still be using networks based on the older
architecture.

A 16-bit subnet field called the subnet ID could be used by individual organizations to
create their own local addressing hierarchy and to identify subnets. A subnet ID is
similar to a subnet in IPv4, except that an organization with an IPv6 subnet ID can
support up to 65,536 individual subnets.

An interface ID is used to identify interfaces on a link. The interface ID must be
unique to the link. It may also be unique over a broader scope. In many cases, an
interface ID will be the same as or based on the link-layer address of an interface.
Interface IDs used in aggregatable global unicast and other IPv6 address types must
be 64 bits long and constructed in the modified EUI-64 format.

Interface IDs are constructed in the modified EUI-64 format in one of the following
ways:

 For all IEEE 802 interface types (for example, Ethernet and FDDI interfaces),
the first three octets (24 bits) are taken from the Organizationally Unique
Identifier (OUI) of the 48-bit link-layer address (the Media Access Control
[MAC] address) of the interface, the fourth and fifth octets (16 bits) are a fixed
hexadecimal value of FFFE, and the last three octets (24 bits) are taken from the
last three octets of the MAC address. The construction of the interface ID is
completed by setting the Universal/Local (U/L) bit—the seventh bit of the first
octet—to a value of 0 or 1. A value of 0 indicates a locally administered identifier;
a value of 1 indicates a globally unique IPv6 interface identifier.
 For other interface types (for example, serial, loopback, ATM, Frame Relay,
and tunnel interface types—except tunnel interfaces used with IPv6 overlay
tunnels), the interface ID is constructed in the same way as the interface ID for
IEEE 802 interface types; however, the first MAC address from the pool of MAC
addresses in the router is used to construct the identifier (because the interface
does not have a MAC address).
 For tunnel interface types that are used with IPv6 overlay tunnels, the interface
ID is the IPv4 address assigned to the tunnel interface with all zeros in the high-
order 32 bits of the identifier.

If no IEEE 802 interface types are in the router, link-local IPv6 addresses are
generated on the interfaces in the router in the following sequence:

1. The router is queried for MAC addresses (from the pool of MAC addresses in
the router).
2. If no MAC addresses are available in the router, the serial number of the
router is used to form the link-local addresses.
3. If the serial number of the router cannot be used to form the link-local
addresses, the router uses a message digest algorithm 5 (MD5) hash to determine
the MAC address of the router from the hostname of the router.

Link-Local Address
A link-local address is an IPv6 unicast address that can be automatically configured
on any interface using the link-local prefix FE80::/10 (1111 1110 10) and the
interface identifier in the modified EUI-64 format. Link-local addresses are used in
the neighbor discovery protocol and the stateless auto-configuration process. Nodes
on a local link can use link-local addresses to communicate; the nodes do not need

globally unique addresses to communicate. Figure 3.2 shows the structure of a link-
local address.

IPv6 routers must not forward packets that have link-local source or destination
addresses to other links.

Figure 3.2 Link-Local Address Format
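As a small illustration of this format, Python's ipaddress module can assemble a link-local address from the FE80::/10 prefix and an interface ID in the low-order 64 bits (the interface ID value below is an assumed example):

```python
import ipaddress

# A link-local address places the FE80::/10 prefix in the high-order bits
# and the modified EUI-64 interface ID in the low-order 64 bits.
interface_id = 0x020C29FFFEC252FF  # example interface ID (assumed value)
lla = ipaddress.IPv6Address((0xFE80 << 112) | interface_id)
print(lla)                # fe80::20c:29ff:fec2:52ff
print(lla.is_link_local)  # True
```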

IPv4-Compatible IPv6 Address


An IPv4-compatible IPv6 address is an IPv6 unicast address that has zeros in the
high-order 96 bits of the address and an IPv4 address in the low-order 32 bits of the
address. The format of an IPv4-compatible IPv6 address is 0:0:0:0:0:0:A.B.C.D or
::A.B.C.D. The entire 128-bit IPv4-compatible IPv6 address is used as the IPv6
address of a node and the IPv4 address embedded in the low-order 32 bits is used as
the IPv4 address of the node. IPv4-compatible IPv6 addresses are assigned to nodes
that support both the IPv4 and IPv6 protocol stacks and are used in automatic tunnels.
Figure 3.3 shows the structure of an IPv4-compatible IPv6 address and a few
acceptable formats for the address.

Figure 3.3 IPv4-Compatible IPv6 Address Format
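A quick Python check (illustrative only) confirms that the ::A.B.C.D notation and the raw 128-bit value are the same address, with zeros in the high-order 96 bits and the IPv4 address in the low-order 32 bits:

```python
import ipaddress

# Both spellings denote the same 128-bit value: zeros in the high-order
# 96 bits, with the IPv4 address in the low-order 32 bits.
a = ipaddress.IPv6Address("::192.168.1.1")
b = ipaddress.IPv6Address(int(ipaddress.IPv4Address("192.168.1.1")))
print(a == b)                     # True
print(int(a) >> 32)               # 0 (high-order 96 bits are all zero)
print(hex(int(a) & 0xFFFFFFFF))   # 0xc0a80101 = 192.168.1.1
```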

IPv6 Multicast Overview


An IPv6 multicast group is an arbitrary group of receivers that want to receive a
particular data stream. This group has no physical or geographical boundaries—
receivers can be located anywhere on the Internet or in any private network. Receivers
that are interested in receiving data flowing to a particular group must join the group
by signaling their local router. This signaling is achieved with the MLD protocol.

Routers use the MLD protocol to learn whether members of a group are present on
their directly attached subnets. Hosts join multicast groups by sending MLD report
messages. The network then delivers data to a potentially unlimited number of
receivers, using only one copy of the multicast data on each subnet. IPv6 hosts that
wish to receive the traffic are known as group members.

Packets delivered to group members are identified by a single multicast group
address. Multicast packets are delivered to a group using best-effort reliability, just
like IPv6 unicast packets.

The multicast environment consists of senders and receivers. Any host, regardless of
whether it is a member of a group, can send to a group. However, only the members
of a group receive the message.

A multicast address is chosen for the receivers in a multicast group. Senders use that
address as the destination address of a datagram to reach all members of the group.

Membership in a multicast group is dynamic; hosts can join and leave at any time.
There is no restriction on the location or number of members in a multicast group. A
host can be a member of more than one multicast group at a time.

How active a multicast group is, its duration, and its membership can vary from group
to group and from time to time. A group that has members may have no activity.

IPv6 Multicast Addressing


An IPv6 multicast address is an IPv6 address that has a prefix of FF00::/8 (1111
1111). An IPv6 multicast address is an identifier for a set of interfaces that typically
belong to different nodes. A packet sent to a multicast address is delivered to all
interfaces identified by the multicast address. The second octet following the prefix
defines the lifetime and scope of the multicast address. A permanent multicast address
has a lifetime parameter equal to 0; a temporary multicast address has a lifetime
parameter equal to 1. A multicast address that has the scope of a node, link, site, or
organization, or a global scope has a scope parameter of 1, 2, 5, 8, or E, respectively.
For example, a multicast address with the prefix FF02::/16 is a permanent multicast
address with a link scope. Figure 3.4 shows the format of the IPv6 multicast address.

Figure 3.4 IPv6 Multicast Address Format
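The lifetime and scope nibbles described above can be decoded with a short Python sketch (illustrative only; the scope names follow the list in the text):

```python
import ipaddress

SCOPES = {1: "node", 2: "link", 5: "site", 8: "organization", 0xE: "global"}

def multicast_flags_scope(addr: str):
    """Decode the lifetime (flags) and scope nibbles of an IPv6 multicast address."""
    a = int(ipaddress.IPv6Address(addr))
    assert (a >> 120) == 0xFF, "not a multicast address (must start with FF)"
    lifetime = (a >> 116) & 0xF   # 0 = permanent, 1 = temporary
    scope = (a >> 112) & 0xF
    return lifetime, SCOPES.get(scope, "other")

print(multicast_flags_scope("ff02::1"))   # (0, 'link') - permanent, link scope
print(multicast_flags_scope("ff15::1"))   # (1, 'site') - temporary, site scope
```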



IPv6 nodes (hosts and routers) are required to join (receive packets destined for) the
following multicast groups:

 All-nodes multicast group FF02:0:0:0:0:0:0:1 (scope is link-local)
 Solicited-node multicast group FF02:0:0:0:0:1:FF00:0000/104 for each of its
assigned unicast and anycast addresses

IPv6 routers must also join the all-routers multicast group FF02:0:0:0:0:0:0:2 (scope
is link-local).

The solicited-node multicast address is a multicast group that corresponds to an IPv6
unicast or anycast address. IPv6 nodes must join the associated solicited-node
multicast group for every unicast and anycast address to which they are assigned. The
IPv6 solicited-node multicast address has the prefix FF02:0:0:0:0:1:FF00:0000/104
concatenated with the 24 low-order bits of a corresponding IPv6 unicast or anycast
address (see Figure 3.5). For example, the solicited-node multicast address
corresponding to the IPv6 address 2037::01:800:200E:8C6C is FF02::1:FF0E:8C6C.
Solicited-node addresses are used in neighbor solicitation messages.

Figure 3.5 IPv6 Solicited-Node Multicast Address Format
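The derivation in this example can be reproduced with a short Python sketch (an illustrative helper, not part of any Cisco tool):

```python
import ipaddress

def solicited_node(addr: str) -> str:
    """Build the solicited-node multicast address for a unicast/anycast address."""
    # Keep only the 24 low-order bits of the address...
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
    # ...and append them to the fixed prefix FF02:0:0:0:0:1:FF00::/104.
    prefix = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
    return str(ipaddress.IPv6Address(prefix | low24))

# Matches the worked example in the text:
print(solicited_node("2037::1:800:200e:8c6c"))  # ff02::1:ff0e:8c6c
```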

EUI-64

Extended Unique Identifier (EUI), as defined in RFC 2373, allows a host to assign itself
a unique 64-bit IPv6 interface identifier (EUI-64). This feature is a key benefit over
IPv4, as it eliminates the need for manual configuration or DHCP. The IPv6 EUI-64
format address is derived from the 48-bit MAC address. The MAC address is first
separated into two 24-bit halves, one being the OUI (Organizationally Unique
Identifier) and the other being NIC specific. The 16-bit value 0xFFFE is then inserted
between these two halves to form the 64-bit EUI address. The IEEE has chosen FFFE
as a reserved value, which can only appear in an EUI-64 generated from an EUI-48
MAC address.
Here is an example showing how the MAC address is used to generate the EUI-64
interface identifier.

Next, the seventh bit from the left, the universal/local (U/L) bit, needs to be
inverted. This bit identifies whether the interface identifier is universally or locally
administered. If 0, the address is locally administered; if 1, the address is globally
unique. It is worth noting that in the OUI portion, globally unique addresses
assigned by the IEEE have always had this bit set to 0, whereas locally created
addresses have it set to 1. Therefore, when the bit is inverted, it maintains its original
scope (a globally unique address is still globally unique, and vice versa).

Once the above is done, we have a fully functional EUI-64 format address.

A frequently asked question is whether IPv6 devices (routers and so on) currently do
anything with that universal/local bit. Today, nothing is done whether the U/L bit is
1 or 0. However, per RFC 4291 section 2.5.1 ("The use of the universal/local bit in the
Modified EUI-64 format identifier is to allow development of future technology that
can take advantage of interface identifiers with universal scope"), this may change in
the future as the technology evolves.

Example
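As a sketch of the construction described above (a hypothetical helper, not Cisco code), the following Python function builds a modified EUI-64 interface identifier from a MAC address:

```python
def modified_eui64(mac: str) -> str:
    """Derive a modified EUI-64 interface ID from a 48-bit MAC address."""
    octets = bytearray(bytes.fromhex(
        mac.replace(":", "").replace("-", "").replace(".", "")))
    # Insert the reserved 0xFFFE between the OUI and NIC-specific halves.
    eui = octets[:3] + bytearray([0xFF, 0xFE]) + octets[3:]
    # Invert the U/L bit - the seventh bit of the first octet.
    eui[0] ^= 0x02
    # Render as four 16-bit groups, IPv6 style.
    return ":".join(f"{(eui[i] << 8) | eui[i + 1]:x}" for i in range(0, 8, 2))

print(modified_eui64("00:0c:29:c2:52:ff"))  # 20c:29ff:fec2:52ff
```

The same function accepts Cisco dotted notation (for example, 000c.29c2.52ff) since all separators are stripped before parsing.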

DHCPv6 prefix delegation


The DHCPv6 Prefix Delegation feature can be used to manage link, subnet, and site
addressing changes. DHCPv6 can be used in environments to deliver stateful and
stateless information:

 Stateful—Address assignment is centrally managed and clients must obtain
configuration information not available through protocols such as address auto-
configuration and neighbor discovery.

 Stateless—Stateless configuration parameters do not require a server to
maintain any dynamic state for individual clients, such as Domain Name System
(DNS) server addresses and domain search list options.

Extensions to DHCPv6 also enable prefix delegation, through which an Internet
service provider (ISP) can automate the process of assigning prefixes to a customer
for use within the customer's network. Prefix delegation occurs between a provider
edge (PE) device and customer premises equipment (CPE), using the DHCPv6 prefix
delegation option. Once the ISP has delegated prefixes to a customer, the customer
may further subnet and assign prefixes to the links in the customer's network.
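That further subnetting can be sketched with Python's ipaddress module (the delegated prefix below is a hypothetical value from the 2001:db8::/32 documentation range); a delegated /48 yields 65,536 possible /64 subnets:

```python
import ipaddress

# Hypothetical prefix delegated by the ISP to the customer's CPE.
delegated = ipaddress.ip_network("2001:db8:1200::/48")

# The customer can carve the delegated /48 into /64s for its own links.
subnets = delegated.subnets(new_prefix=64)
print(next(subnets))     # 2001:db8:1200::/64
print(next(subnets))     # 2001:db8:1200:1::/64
print(2 ** (64 - 48))    # 65536 possible /64 subnets in a /48
```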

Configuring Nodes Without Prefix Delegation


Stateless DHCPv6 allows DHCPv6 to be used for configuring a node with parameters
that do not require a server to maintain any dynamic state for the node. The use of
stateless DHCP is controlled by router advertisement (RA) messages multicast by
routers. The Cisco IOS DHCPv6 client will invoke stateless DHCPv6 when it
receives an appropriate RA. The Cisco IOS DHCPv6 server will respond to a stateless
DHCPv6 request with the appropriate configuration parameters, such as the DNS
servers and domain search list options.

Client and Server Identification


Each DHCPv6 client and server is identified by a DHCP unique identifier (DUID).
The DUID is carried in the client identifier and server identifier options. The DUID is
unique across all DHCP clients and servers, and it is stable for any specific client or
server. DHCPv6 uses DUIDs based on link-layer addresses for both the client and
server identifier. The device uses the MAC address from the lowest-numbered
interface to form the DUID. The network interface is assumed to be permanently
attached to the device.

When an IPv6 DHCP client requests two prefixes with the same DUID but different
IAIDs on two different interfaces, these prefixes are considered to be for two different
clients, and interface information is maintained for both.

Rapid Commit
The DHCPv6 client can obtain configuration parameters from a server either through
a rapid two-message exchange (solicit, reply) or through a normal four-message
exchange (solicit, advertise, request, reply). By default, the four-message exchange is
used. When both client and server enable the rapid-commit option, the two-message
exchange is used.
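The choice between the two exchanges can be sketched as follows (illustrative pseudologic, not the actual IOS implementation):

```python
def dhcpv6_message_flow(client_rapid_commit: bool, server_rapid_commit: bool):
    """Return the DHCPv6 message exchange used for parameter acquisition."""
    # The two-message exchange is used only when BOTH sides enable
    # rapid-commit; otherwise the default four-message exchange applies.
    if client_rapid_commit and server_rapid_commit:
        return ["SOLICIT", "REPLY"]
    return ["SOLICIT", "ADVERTISE", "REQUEST", "REPLY"]

print(dhcpv6_message_flow(True, True))   # ['SOLICIT', 'REPLY']
print(dhcpv6_message_flow(True, False))  # the default four-message exchange
```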

DHCPv6 Client, Server, and Relay Functions


The DHCPv6 client, server, and relay functions are mutually exclusive on an
interface. When one of these functions is already enabled and a user tries to configure
a different function on the same interface, one of the following messages is displayed:
"Interface is in DHCP client mode," "Interface is in DHCP server mode," or
"Interface is in DHCP relay mode."

The following sections describe these functions:

 Client Function

 Server Function
 DHCP Relay Agent
 DHCPv6 Server and Relay - MPLS VPN Support

Client Function
The DHCPv6 client function can be enabled on individual IPv6-enabled interfaces.

The DHCPv6 client can request and accept those configuration parameters that do not
require a server to maintain any dynamic state for individual clients, such as DNS
server addresses and domain search list options. The DHCPv6 client will configure
the local Cisco IOS stack with the received information.

The DHCPv6 client can also request the delegation of prefixes. The prefixes acquired
from a delegating router will be stored in a local IPv6 general prefix pool. The
prefixes in the general prefix pool can then be referred to from other applications; for
example, the general prefix pools can be used to number router downstream
interfaces.

Server Selection
A DHCPv6 client builds a list of potential servers by sending a solicit message and
collecting advertise message replies from servers. Servers may include a preference
option in their advertise messages explicitly stating their preference value, and the
client ranks the advertise messages based on this value. If the client needs to acquire
prefixes from servers, only servers that have advertised prefixes are considered.

IAPD and IAID


An Identity Association for Prefix Delegation (IAPD) is a collection of prefixes
assigned to a requesting router. A requesting router may have more than one IAPD;
for example, one for each of its interfaces.

Each IAPD is identified by an identity association identification (IAID). The IAID is
chosen by the requesting router and is unique among the IAPD IAIDs on the
requesting router. IAIDs are made consistent across reboots by using information
from the associated network interface, which is assumed to be permanently attached
to the device.

Server Function
The DHCPv6 server function can be enabled on individual IPv6-enabled interfaces.

The DHCPv6 server can provide those configuration parameters that do not require
the server to maintain any dynamic state for individual clients, such as DNS server
addresses and domain search list options. The DHCPv6 server may be configured to
perform prefix delegation.

All the configuration parameters for clients are independently configured into
DHCPv6 configuration pools, which are stored in NVRAM. A configuration pool can
be associated with a particular DHCPv6 server on an interface when it is started.
Prefixes to be delegated to clients may be specified either as a list of pre-assigned
prefixes for a particular client or as IPv6 local prefix pools that are also stored in
NVRAM. The list of manually configured prefixes or IPv6 local prefix pools can be

referenced and used by DHCPv6 configuration pools.

The DHCPv6 server maintains an automatic binding table in memory to track the
assignment of some configuration parameters, such as prefixes between the server and
its clients. The automatic bindings can be stored permanently in the database agent,
which can be, for example, a remote TFTP server or local NVRAM file system.

Configuration Information Pool


A DHCPv6 configuration information pool is a named entity that includes
information about available configuration parameters and policies that control
assignment of the parameters to clients from the pool. A pool is configured
independently of the DHCPv6 service and is associated with the DHCPv6 service
through the command-line interface (CLI).

Each configuration pool can contain the following configuration parameters and
operational information:

 Prefix delegation information, which could include:
 A prefix pool name and associated preferred and valid lifetimes
 A list of available prefixes for a particular client and associated
preferred and valid lifetimes
 A list of IPv6 addresses of DNS servers
 A domain search list, which is a string containing domain names for DNS
resolution

DHCP for IPv6 Address Assignment


DHCPv6 enables DHCP servers to pass configuration parameters, such as IPv6
network addresses, to IPv6 clients. The DHCPv6 Individual Address Assignment
feature manages non-duplicate address assignment in the correct prefix based on the
network where the host is connected. Assigned addresses can be from one or multiple
prefix pools. Additional options, such as the default domain and DNS name-server
address, can be passed back to the client. Address pools can be assigned for use on a
specific interface or on multiple interfaces, or the server can automatically find the
appropriate pool.

Prefix Assignment

A prefix-delegating router (DHCPv6 server) selects prefixes to be assigned to a
requesting router (DHCPv6 client) upon receiving a request from the client. The
server can select prefixes for a requesting client using static assignment and dynamic
assignment mechanisms. Administrators can manually configure a list of prefixes and
associated preferred and valid lifetimes for an IAPD of a specific client that is
identified by its DUID.

When the delegating router receives a request from a client, it checks if there is a
static binding configured for the IAPD in the client's message. If a static binding is
present, the prefixes in the binding are returned to the client. If no such binding is
found, the server attempts to assign prefixes for the client from other sources.

The Cisco IOS DHCPv6 server can assign prefixes dynamically from an IPv6 local

prefix pool. When the server receives a prefix request from a client, it attempts to
obtain unassigned prefixes from the pool. After the client releases the previously
assigned prefixes, the server returns them to the pool for reassignment.

An IPv6 prefix delegating router can also select prefixes for a requesting router based
on an external authority such as a RADIUS server using the Framed-IPv6-Prefix
attribute.

Automatic Binding
Each DHCPv6 configuration pool has an associated binding table. The binding table
contains the records about all the prefixes in the configuration pool that have been
explicitly delegated to clients. Each entry in the binding table contains the following
information:

 Client DUID
 Client IPv6 address
 A list of IAPDs associated with the client
 A list of prefixes delegated to each IAPD
 Preferred and valid lifetimes for each prefix
 The configuration pool to which this binding table belongs
 The network interface on which the server that is using the pool is running

A binding table entry is automatically created whenever a prefix is delegated to a
client from the configuration pool, and it is updated when the client renews, rebinds,
or confirms the prefix delegation. A binding table entry is deleted when the client
releases all the prefixes in the binding voluntarily, all prefixes' valid lifetimes have
expired, or administrators run the clear ipv6 dhcp binding command.

Binding Database

Each permanent storage to which the binding database is saved is called the database
agent. A database agent can be a remote host such as an FTP server or a local file
system such as NVRAM.

The automatic bindings are maintained in RAM and can be saved to some permanent
storage so that the information about configuration such as prefixes assigned to clients
is not lost after a system reload or power down. The bindings are stored as text
records for easy maintenance. Each record contains the following information:

 DHCPv6 pool name from which the configuration was assigned to the client
 Interface identifier from which the client requests were received
 The client IPv6 address
 The client DUID
 IAID of the IAPD
 Prefix delegated to the client
 The prefix length
 The prefix preferred lifetime in seconds
 The prefix valid lifetime in seconds
 The prefix expiration time stamp

 Optional local prefix pool name from which the prefix was assigned

At the beginning of the file, before the text records, a time stamp recording when the
database was written and a version number are stored; the version number helps
differentiate between newer and older databases. At the end of the file, after the text
records, the text string "*end*" is stored to detect file truncation.

The permanent storage to which the binding database is saved is called the database
agent. Database agents include FTP and TFTP servers, RCP, flash file system, and
NVRAM.
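The described file layout can be sketched as follows (an illustrative model of the text-record format, not the actual IOS database agent code; the record contents are invented):

```python
import time

def write_binding_db(path, records):
    """Write bindings using the layout described above: a header with a
    time stamp and version, one text record per line, then "*end*"."""
    with open(path, "w") as f:
        f.write(f"version 1 time {int(time.time())}\n")
        for record in records:
            f.write(record + "\n")
        f.write("*end*\n")

def db_is_complete(path):
    """A database file missing the trailing "*end*" marker was truncated."""
    with open(path) as f:
        lines = f.read().splitlines()
    return bool(lines) and lines[-1] == "*end*"

write_binding_db("bindings.txt", ["pool1 Fa0/0 2001:db8:1200::/48"])
print(db_is_complete("bindings.txt"))  # True
```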

3.2 Layer 3 multicast

3.2.a Troubleshoot reverse path forwarding

RPF failure
PIM is known as Protocol Independent Multicast routing because it does not exchange
its own information about the network topology. Instead it relies on the accuracy of an
underlying unicast routing protocol, such as OSPF or EIGRP, to maintain a loop-free
topology. When a router running PIM receives a multicast packet, it first looks at the
source IP address of the packet. Next, the router does a unicast lookup on that source,
as in "show ip route w.x.y.z", where "w.x.y.z" is the source. The outgoing interface
for that route in the unicast routing table is then compared with the interface on
which the multicast packet was received. If the incoming interface of the packet and
the outgoing interface for the route are the same, it is assumed that the packet is not
looping, and the packet is a candidate for forwarding. If the incoming interface and
the outgoing interface are not the same, a loop-free path cannot be guaranteed, and
the RPF check fails. All packets for which the RPF check fails are dropped.
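The RPF check described above can be sketched in a few lines (an illustrative model, not router code; the RIB entry is a hypothetical example):

```python
def rpf_check(source, in_interface, unicast_rib):
    """Sketch of the PIM RPF check: forward only if the packet arrived on
    the interface the unicast routing table would use to reach the source."""
    rpf_interface = unicast_rib.get(source)  # the "show ip route" lookup
    return rpf_interface == in_interface

# Hypothetical RIB: the route back to source 150.1.124.4 exits Serial1/3.
rib = {"150.1.124.4": "Serial1/3"}
print(rpf_check("150.1.124.4", "Serial1/3", rib))  # True  -> candidate to forward
print(rpf_check("150.1.124.4", "Serial1/2", rib))  # False -> RPF failure, drop
```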

As far as troubleshooting the RPF check goes, there are a couple of useful show and
debug commands available in IOS. Suppose the following topology:

We will be running EIGRP on all interfaces, and PIM dense mode on R2, R3, R4, and
SW1. Note that PIM will not be enabled on R1. R4 is a multicast source that is
attempting to send a feed to the client, SW1. On SW1 we will be generating an IGMP
join message to R3 by issuing the “ip igmp join” command on SW1’s interface Fa0/3.
On R4 we will be generating multicast traffic with an extended ping. First let’s look at
the topology with a successful transmission:

SW1#conf t
Enter configuration commands, one per line. End with CNTL/Z.
SW1(config)#int fa0/3
SW1(config-if)#ip igmp join 224.1.1.1
SW1(config-if)#end
SW1#

R4#ping
Protocol [ip]:
Target IP address: 224.1.1.1
Repeat count [1]: 5
Datagram size [100]:
Timeout in seconds [2]:
Extended commands [n]: y
Interface [All]: Ethernet0/0
Time to live [255]:
Source address: 150.1.124.4
Type of service [0]:
Set DF bit in IP header? [no]:
Validate reply data? [no]:
Data pattern [0xABCD]:
Loose, Strict, Record, Timestamp, Verbose[none]:
Sweep range of sizes [n]:

Type escape sequence to abort.


Sending 5, 100-byte ICMP Echos to 224.1.1.1, timeout is 2 seconds:
Packet sent with a source address of 150.1.124.4

Reply to request 0 from 150.1.37.7, 32 ms


Reply to request 1 from 150.1.37.7, 28 ms
Reply to request 2 from 150.1.37.7, 28 ms
Reply to request 3 from 150.1.37.7, 28 ms
Reply to request 4 from 150.1.37.7, 28 ms

Now let’s trace the traffic flow starting at the destination and working our way back
up the reverse path.

SW1#show ip mroute 224.1.1.1


IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report, Z - Multicast Tunnel
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.1.1.1), 00:03:05/stopped, RP 0.0.0.0, flags: DCL


Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/3, Forward/Dense, 00:03:05/00:00:00

(150.1.124.4, 224.1.1.1), 00:02:26/00:02:02, flags: PLTX


Incoming interface: FastEthernet0/3, RPF nbr 150.1.37.3
Outgoing interface list: Null

SW1's RPF neighbor for (150.1.124.4, 224.1.1.1) is 150.1.37.3, which means that
SW1 received the packet from R3.

R3#show ip mroute 224.1.1.1


IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.1.1.1), 00:03:12/stopped, RP 0.0.0.0, flags: DC


Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:

Serial1/3, Forward/Dense, 00:03:12/00:00:00


Serial1/2, Forward/Dense, 00:03:12/00:00:00
Ethernet0/0, Forward/Dense, 00:03:12/00:00:00

(150.1.124.4, 224.1.1.1), 00:02:33/00:01:46, flags: T


Incoming interface: Serial1/3, RPF nbr 150.1.23.2
Outgoing interface list:
Ethernet0/0, Forward/Dense, 00:02:34/00:00:00
Serial1/2, Prune/Dense, 00:02:34/00:00:28

R3's RPF neighbor is 150.1.23.2, which means the packet came from R2.

R2#show ip mroute 224.1.1.1


IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.1.1.1), 00:02:44/stopped, RP 0.0.0.0, flags: D


Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet0/0, Forward/Dense, 00:02:44/00:00:00
Serial0/1, Forward/Dense, 00:02:44/00:00:00

(150.1.124.4, 224.1.1.1), 00:02:44/00:01:35, flags: T


Incoming interface: FastEthernet0/0, RPF nbr 0.0.0.0
Outgoing interface list:
Serial0/1, Forward/Dense, 00:02:45/00:00:00

R2 has no RPF neighbor, meaning the source is directly connected. Now let’s
compare the unicast routing table from the client back to the source.

SW1#show ip route 150.1.124.4


Routing entry for 150.1.124.0/24
Known via "eigrp 1", distance 90, metric 20540160, type internal
Redistributing via eigrp 1
Last update from 150.1.37.3 on FastEthernet0/3, 00:11:23 ago
Routing Descriptor Blocks:
* 150.1.37.3, from 150.1.37.3, 00:11:23 ago, via FastEthernet0/3
Route metric is 20540160, traffic share count is 1
Total delay is 21100 microseconds, minimum bandwidth is 128 Kbit
Reliability 255/255, minimum MTU 1500 bytes
Loading 1/255, Hops 2

R3#show ip route 150.1.124.4


Routing entry for 150.1.124.0/24
Known via "eigrp 1", distance 90, metric 20514560, type internal
Redistributing via eigrp 1
Last update from 150.1.13.1 on Serial1/2, 00:11:47 ago
Routing Descriptor Blocks:
* 150.1.23.2, from 150.1.23.2, 00:11:47 ago, via Serial1/3
Route metric is 20514560, traffic share count is 1
Total delay is 20100 microseconds, minimum bandwidth is 128 Kbit
Reliability 255/255, minimum MTU 1500 bytes
Loading 1/255, Hops 1
150.1.13.1, from 150.1.13.1, 00:11:47 ago, via Serial1/2
Route metric is 20514560, traffic share count is 1
Total delay is 20100 microseconds, minimum bandwidth is 128 Kbit
Reliability 255/255, minimum MTU 1500 bytes
Loading 1/255, Hops 1

R2#show ip route 150.1.124.4


Routing entry for 150.1.124.0/24
Known via "connected", distance 0, metric 0 (connected, via interface)
Redistributing via eigrp 1
Routing Descriptor Blocks:
* directly connected, via FastEthernet0/0
Route metric is 0, traffic share count is 1

Based on this output we can see that SW1 sees the source reachable via R3, which
was the neighbor the multicast packet came from. R3 sees the source reachable via R1
and R2 due to equal cost load balancing, with R2 as the neighbor that the multicast
packet came from. Finally R2 sees the source as directly connected, which is where
the multicast packet came from. This means that the RPF check is successful as traffic
is transiting the network; hence we had a successful transmission.

Now let's modify the routing table on R3 so that the route to R4 points to R1. Since
the multicast packet on R3 comes from R2 but the unicast route will point back
towards R1, there will be an RPF failure, and the packet transmission will not be
successful. Again, note that R1 is not routing multicast in this topology.

R3#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R3(config)#ip route 150.1.124.4 255.255.255.255 serial1/2
R3(config)#end
R3#

R4#ping
Protocol [ip]:
Target IP address: 224.1.1.1
Repeat count [1]: 5
Datagram size [100]:
Timeout in seconds [2]:
Extended commands [n]: y

Interface [All]: Ethernet0/0


Time to live [255]:
Source address: 150.1.124.4
Type of service [0]:
Set DF bit in IP header? [no]:
Validate reply data? [no]:
Data pattern [0xABCD]:
Loose, Strict, Record, Timestamp, Verbose[none]:
Sweep range of sizes [n]:
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 224.1.1.1, timeout is 2 seconds:
Packet sent with a source address of 150.1.124.4
.....
R4#

We can now see that on R4 we do not receive a response back from the final
destination, so where do we start troubleshooting? First we want to look at the first
hop away from the source, which in this case is R2. On R2 we want to look in the
multicast routing table to see whether the incoming interface and the outgoing
interface list are correctly populated. Ideally we will see the incoming interface as
FastEthernet0/0, which is directly connected to the source, and the outgoing interface
as Serial0/1, which is the downstream interface facing R3.

R2#show ip mroute 224.1.1.1


IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report, Z - Multicast Tunnel
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.1.1.1), 00:07:27/stopped, RP 0.0.0.0, flags: D


Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Serial0/1, Forward/Dense, 00:07:27/00:00:00
FastEthernet0/0, Forward/Dense, 00:07:27/00:00:00

(150.1.124.4, 224.1.1.1), 00:07:27/00:01:51, flags: T


Incoming interface: FastEthernet0/0, RPF nbr 0.0.0.0
Outgoing interface list:
Serial0/1, Forward/Dense, 00:04:46/00:00:00

This is the correct output we should see on R2. Two more verifications we can do are
with the “show ip mroute count” command and the “debug ip mpacket” command.
“show ip mroute count” will show all currently active multicast feeds, and whether
packets are getting dropped:

R2#show ip mroute count


IP Multicast Statistics
3 routes using 1864 bytes of memory
2 groups, 0.50 average sources per group
Forwarding Counts: Pkt Count/Pkts per second/Avg Pkt Size/Kilobits per second
Other counts: Total/RPF failed/Other drops(OIF-null, rate-limit etc)

Group: 224.1.1.1, Source count: 1, Packets forwarded: 4, Packets received: 4


Source: 150.1.124.4/32, Forwarding: 4/1/100/0, Other: 4/0/0

Group: 224.0.1.40, Source count: 0, Packets forwarded: 0, Packets received: 0


“debug ip mpacket” will show the packet trace in real time, similar to the “debug ip
packet” command for unicast packets. One caveat of using this verification is that
only process switched traffic can be debugged. This means that we need to disable
fast or CEF switching of multicast traffic by issuing the “no ip mroute-cache”
command on the interfaces running PIM. Once this debug is enabled we’ll generate
traffic from R4 again and we should see the packets correctly routed through R2.

R4#ping 224.1.1.1 repeat 100

Type escape sequence to abort.


Sending 100, 100-byte ICMP Echos to 224.1.1.1, timeout is 2 seconds:

R2(config)#int f0/0
R2(config-if)#no ip mroute-cache
R2(config-if)#int s0/1
R2(config-if)#no ip mroute-cache
R2(config-if)#end
R2#debug ip mpacket
IP multicast packets debugging is on
R2#
IP(0): s=150.1.124.4 (FastEthernet0/0) d=224.1.1.1 (Serial0/1) id=231, prot=1,
len=100(100), mforward
IP(0): s=150.1.124.4 (FastEthernet0/0) d=224.1.1.1 (Serial0/1) id=232, prot=1,
len=100(100), mforward
IP(0): s=150.1.124.4 (FastEthernet0/0) d=224.1.1.1 (Serial0/1) id=233, prot=1,
len=100(100), mforward
IP(0): s=150.1.124.4 (FastEthernet0/0) d=224.1.1.1 (Serial0/1) id=234, prot=1,
len=100(100), mforward
IP(0): s=150.1.124.4 (FastEthernet0/0) d=224.1.1.1 (Serial0/1) id=235, prot=1,
len=100(100), mforward
R2#undebug all
Now that we see that R2 is correctly routing the packets let’s look at all three of these
verifications on R3.

R3#show ip mroute 224.1.1.1


IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,

L - Local, P - Pruned, R - RP-bit set, F - Register flag,


T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report,
Z - Multicast Tunnel, z - MDT-data group sender,
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags: H - Hardware switched, A - Assert winner
Timers: Uptime/Expires
Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 224.1.1.1), 00:00:01/stopped, RP 0.0.0.0, flags: DC


Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Serial1/3, Forward/Dense, 00:00:01/00:00:00
Ethernet0/0, Forward/Dense, 00:00:01/00:00:00

(150.1.124.4, 224.1.1.1), 00:00:01/00:02:58, flags:


Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
Ethernet0/0, Forward/Dense, 00:00:02/00:00:00
Serial1/3, Forward/Dense, 00:00:02/00:00:00

From R3’s show ip mroute output we can see that the incoming interface is listed as
Null. This is an indication that for some reason R3 is not correctly routing the packets,
and is instead dropping them as they are received.

R3#show ip mroute count


IP Multicast Statistics
3 routes using 2174 bytes of memory
2 groups, 0.50 average sources per group
Forwarding Counts: Pkt Count/Pkts(neg(-) = Drops) per second/Avg Pkt Size/Kilobits
per second
Other counts: Total/RPF failed/Other drops(OIF-null, rate-limit etc)

Group: 224.1.1.1, Source count: 1, Packets forwarded: 0, Packets received: 15


Source: 150.1.124.4/32, Forwarding: 0/0/0/0, Other: 15/15/0

Group: 224.0.1.40, Source count: 0, Packets forwarded: 0, Packets received: 0


From this output we can see that packets for (150.1.124.4,224.1.1.1) are getting
dropped, and specifically the reason they are getting dropped is because of RPF
failure. This is seen from the “Other: 15/15/0” output, where the second field is RPF
failed drops.

R3#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R3(config)#int e0/0
R3(config-if)#no ip mroute-cache
R3(config-if)#int s1/3
R3(config-if)#no ip mroute-cache
R3(config-if)#end

R3#debug ip mpacket
IP multicast packets debugging is on
IP(0): s=150.1.124.4 (Serial1/3) d=224.1.1.1 id=309, ttl=253, prot=1, len=104(100),
not RPF interface
IP(0): s=150.1.124.4 (Serial1/3) d=224.1.1.1 id=310, ttl=253, prot=1, len=104(100),
not RPF interface
IP(0): s=150.1.124.4 (Serial1/3) d=224.1.1.1 id=311, ttl=253, prot=1, len=104(100),
not RPF interface
IP(0): s=150.1.124.4 (Serial1/3) d=224.1.1.1 id=312, ttl=253, prot=1, len=104(100),
not RPF interface
R3#undebug all
All possible debugging has been turned off

From this output we can clearly see that an RPF failure is occurring on R3. The
reason why is that multicast packets are being received in Serial1/3, while the unicast
route is pointing out Serial1/2. Now that we see the problem occurring there are a few
different ways we can solve it.

First we can modify the unicast routing domain so that it conforms to the multicast
routing domain. In our particular case removing the static route we configured on R3,
or configuring a new more-preferred static route on R3 that points to R2 for
150.1.124.4 would accomplish this.
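As a sketch of that first option (the next-hop address 150.1.23.2 is assumed from the topology described above), we could replace the static route on R3 so that the unicast path agrees with the multicast path:

R3#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R3(config)#no ip route 150.1.124.4 255.255.255.255 serial1/2
R3(config)#ip route 150.1.124.4 255.255.255.255 150.1.23.2
R3(config)#end

This is the unicast-side alternative; it makes R3's RPF check succeed on Serial1/3 without touching the multicast configuration at all.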

Secondly we could modify the multicast domain in order to override the RPF check.
The simplest way to do this is with a static multicast route using the “ip mroute”
command. Another way of doing this dynamically would be to configure Multicast
BGP, which for the purposes of this example we will exclude due to its greater
complexity.

The “ip mroute” statement differs from the regular “ip route” statement in that it
does not affect the actual traffic flow through the network. Instead, it affects
which interfaces the router will accept multicast packets on. By configuring the
statement “ip mroute 150.1.124.4 255.255.255.255 150.1.23.2” on R3 it will tell the
router that if a multicast packet is received from the source 150.1.124.4 it is okay that
it be received on the interface in which the neighbor 150.1.23.2 exists. In our
particular case this means that even though the unicast route points out Serial1/2, it’s
okay that the multicast packet be received in Serial1/3. Let’s look at the effect of this:

R3#conf t
Enter configuration commands, one per line. End with CNTL/Z.
R3(config)#ip mroute 150.1.124.4 255.255.255.255 150.1.23.2
R3(config)#end
R3#

R4#ping
Protocol [ip]:
Target IP address: 224.1.1.1
Repeat count [1]: 5
Datagram size [100]:
Timeout in seconds [2]:

Extended commands [n]: y


Interface [All]: Ethernet0/0
Time to live [255]:
Source address: 150.1.124.4
Type of service [0]:
Set DF bit in IP header? [no]:
Validate reply data? [no]:
Data pattern [0xABCD]:
Loose, Strict, Record, Timestamp, Verbose[none]:
Sweep range of sizes [n]:
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 224.1.1.1, timeout is 2 seconds:
Packet sent with a source address of 150.1.124.4

Reply to request 0 from 150.1.37.7, 32 ms


Reply to request 1 from 150.1.37.7, 32 ms
Reply to request 2 from 150.1.37.7, 32 ms
Reply to request 3 from 150.1.37.7, 32 ms
Reply to request 4 from 150.1.37.7, 32 ms
R4#

3.2.b Implement and troubleshoot IPv4 protocol independent multicast

Bi-directional PIM

Enabling IPv4 Bidirectional PIM Globally


To enable IPv4 bidirectional PIM, perform this task:

Command                                  Purpose
Router(config)# ip pim bidir-enable      Enables IPv4 bidirectional PIM
                                         globally on the switch.

This example shows how to enable IPv4 bidirectional PIM on the switch:

Router(config)# ip pim bidir-enable


Router(config)#
Configuring the Rendezvous Point for IPv4 Bidirectional PIM Groups

To statically configure the rendezvous point for an IPv4 bidirectional PIM group,
perform this task:

        Command                                         Purpose
Step 1  Router(config)# ip pim rp-address               Statically configures the IP address
        ip_address access_list [override]               of the rendezvous point for the
                                                        group. When you specify the
                                                        override option, the static
                                                        rendezvous point is used.
Step 2  Router(config)# access-list access-list         Configures an access list.
        {permit | deny} ip_address
Step 3  Router(config)# ip pim send-rp-announce         Configures the system to use
        type number scope ttl_value                     auto-RP to configure groups for
        [group-list access-list] [interval seconds]     which the router will act as a
        [bidir]                                         rendezvous point (RP).
Step 4  Router(config)# ip access-list standard         Configures a standard IP access
        access-list-name {permit | deny} ip_address     list.
Step 5  Router(config)# mls ip multicast                Enables MLS IP multicast.

This example shows how to configure a static rendezvous point for an IPv4
bidirectional PIM group:

Router(config)# ip pim rp-address 10.0.0.1 10 bidir override


Router(config)# access-list 10 permit 224.1.0.0 0.0.255.255
Router(config)# ip pim send-rp-announce Loopback0 scope 16 group-list c21-rp-list-
0 bidir
Router(config)# ip access-list standard c21-rp-list-0 permit 230.31.31.1 0.0.255.255

Setting the IPv4 Bidirectional PIM Scan Interval

You can specify the interval between the IPv4 bidirectional PIM RP Reverse Path
Forwarding (RPF) scans.

To set the IPv4 bidirectional PIM RP RPF scan interval, perform this task:

Command                                     Purpose
Router(config)# mls ip multicast bidir      Specifies the IPv4 bidirectional PIM RP RPF
gm-scan-interval interval                   scan interval; valid values are from 1 to
                                            1000 seconds. The default is 10 seconds.

This example shows how to set the IPv4 bidirectional PIM RP RPF scan interval:
Router(config)# mls ip multicast bidir gm-scan-interval 30
Router(config)#

Displaying IPv4 Bidirectional PIM Information

To display IPv4 bidirectional PIM information, perform one of these tasks:


Command                                        Purpose
Router# show ip pim rp mapping [in-use]        Displays mappings between PIM groups and
                                               rendezvous points and shows learned
                                               rendezvous points in use.
Router# show mls ip multicast                  Displays PIM group to active rendezvous
rp-mapping [rp_address]                        point mappings.
Router# show mls ip multicast                  Displays information based on the
rp-mapping gm-cache                            group/mask ranges in the RP mapping cache.
Router# show mls ip multicast                  Displays information based on the DF list
rp-mapping df-cache                            in the RP mapping cache.
Router# show mls ip multicast bidir            Displays IPv4 bidirectional PIM information.
Router# show ip mroute                         Displays information about the multicast
                                               routing table.

This example shows how to display information about the PIM group and rendezvous
point mappings:

Router# show ip pim rp mapping


PIM Group-to-RP Mappings
This system is an RP (Auto-RP)
This system is an RP-mapping agent
Group(s) 230.31.0.0/16
RP 60.0.0.60 (?), v2v1, bidir
Info source:60.0.0.60 (?), elected via Auto-RP
Uptime:00:03:47, expires:00:02:11
RP 50.0.0.50 (?), v2v1, bidir
Info source:50.0.0.50 (?), via Auto-RP
Uptime:00:03:04, expires:00:02:55
RP 40.0.0.40 (?), v2v1, bidir
Info source:40.0.0.40 (?), via Auto-RP
Uptime:00:04:19, expires:00:02:38

This example shows how to display information in the IP multicast routing table that
is related to IPv4 bidirectional PIM:

Router# show ip mroute bidirectional


(*, 225.1.3.0), 00:00:02/00:02:57, RP 3.3.3.3, flags:BC
Bidir-Upstream:GigabitEthernet2/1, RPF nbr 10.53.1.7, RPF-MFD
Outgoing interface list:
GigabitEthernet2/1, Bidir-Upstream/Sparse-Dense, 00:00:02/00:00:00,H
Vlan30, Forward/Sparse-Dense, 00:00:02/00:02:57, H
(*, 225.1.2.0), 00:00:04/00:02:55, RP 3.3.3.3, flags:BC
Bidir-Upstream:GigabitEthernet2/1, RPF nbr 10.53.1.7, RPF-MFD
Outgoing interface list:
GigabitEthernet2/1, Bidir-Upstream/Sparse-Dense, 00:00:04/00:00:00,H
Vlan30, Forward/Sparse-Dense, 00:00:04/00:02:55, H
(*, 225.1.4.1), 00:00:00/00:02:59, RP 3.3.3.3, flags:BC
Bidir-Upstream:GigabitEthernet2/1, RPF nbr 10.53.1.7, RPF-MFD
Outgoing interface list:
GigabitEthernet2/1, Bidir-Upstream/Sparse-Dense, 00:00:00/00:00:00,H
Vlan30, Forward/Sparse-Dense, 00:00:00/00:02:59, H
This example shows how to display information related to a specific multicast route. In
the output below, the “Partial-SC” label indicates a partial shortcut:
Router# show ip mroute 239.1.1.2 4.4.4.4

IP Multicast Routing Table

Flags:D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,


L - Local, P - Pruned, R - RP-bit set, F - Register flag,
T - SPT-bit set, J - Join SPT, M - MSDP created entry,
X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
U - URD, I - Received Source Specific Host Report, Z - Multicast Tunnel
Y - Joined MDT-data group, y - Sending to MDT-data group
Outgoing interface flags:H - Hardware switched
Timers:Uptime/Expires
Interface state:Interface, Next-Hop or VCD, State/Mode
(4.4.4.4, 239.1.1.2), 1d02h/00:03:20, flags:FTZ
Incoming interface:Loopback0, RPF nbr 0.0.0.0, Partial-SC
Outgoing interface list:
Vlan10, Forward/Sparse-Dense, 1d02h/00:02:39 (ttl-threshold 5)

This example shows how to display the entries for a specific multicast group address:

Router# show mls ip multicast group 230.31.31.1


Multicast hardware switched flows:
(*, 230.31.31.1) Incoming interface:Vlan611, Packets switched:1778
Hardware switched outgoing interfaces:Vlan131 Vlan151 Vlan415 Gi4/16 Vlan611
RPF-MFD installed

This example shows how to display PIM group to active rendezvous points mappings:
Router# show mls ip multicast rp-mapping
State:H - Hardware Switched, I - Install Pending, D - Delete Pending, Z - Zombie
RP Address      State   RPF     DF-count   GM-count
60.0.0.60       H       Vl611   4          1

This example shows how to display information based on the group/mask ranges in
the RP mapping cache:
Router# show mls ip multicast rp-mapping gm-cache
State:H - Hardware Switched, I - Install Pending, D - Delete Pending,
Z - Zombie
RP Address State Group Mask State Packet/Byte-count
60.0.0.60 H 230.31.0.0 255.255.0.0 H 100/6400
This example shows how to display information based on the DF list in the RP mapping
cache:
Router# show mls ip multicast rp-mapping df-cache

State:H - Hardware Switched, I - Install Pending, D - Delete Pending, Z - Zombie


RP Address State DF State
60.0.0.60 H Vl131 H
60.0.0.60 H Vl151 H
60.0.0.60 H Vl415 H
60.0.0.60 H Gi4/16 H

Multicast Boundary
The multicast boundary feature allows you to configure an administrative boundary
for multicast group addresses. By restricting the flow of multicast data packets, you
can reuse the same multicast group address in different administrative domains.

You configure the multicast boundary on an interface. A multicast data packet is
blocked from flowing across the interface if the packet's multicast group address
matches the access control list (ACL) associated with the multicast boundary feature.

Multicast boundary ACLs can be processed in hardware by the Policy Feature Card
(PFC), a Distributed Forwarding Card (DFC), or in software by the RP. The multicast
boundary ACLs are programmed to match the destination address of the packet.
These ACLs are applied to traffic on the interface in both directions (input and
output).

To support multicast boundary ACLs in hardware, the switch creates new ACL
TCAM entries or modifies existing ACL TCAM entries (if other ACL-based features
are active on the interface). To verify TCAM resource utilization, enter the show tcam
counts ip command.

If you configure the filter-autorp keyword, the administrative boundary also
examines auto-RP discovery and announcement messages and removes any auto-RP
group range announcements from the auto-RP packets that are denied by the boundary
ACL.
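
As a hedged sketch (the interface and ACL number are illustrative, not taken from any example in this guide), a boundary that blocks the administratively scoped range and strips matching auto-RP announcements might look like this:

Router(config)# access-list 1 deny 239.0.0.0 0.255.255.255
Router(config)# access-list 1 permit 224.0.0.0 15.255.255.255
Router(config)# interface GigabitEthernet0/1
Router(config-if)# ip multicast boundary 1 filter-autorp

The ACL here matches on the group (destination) address, consistent with how the boundary ACLs are programmed as described above.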

3.2.c Implement and troubleshoot multicast source discovery protocol

Intra-domain MSDP (anycast RP)

Multicast Source Discovery Protocol Overview


In the PIM sparse mode model, multicast sources and receivers must register with
their local rendezvous point (RP). Actually, the router closest to a source or a receiver
registers with the RP, but the key point to note is that the RP "knows" about all the
sources and receivers for any particular group. RPs in other domains have no way of
knowing about sources located in other domains. MSDP is an elegant way to solve
this problem.

MSDP is a mechanism that allows RPs to share information about active sources. RPs
know about the receivers in their local domain. When RPs in remote domains hear
about the active sources, they can pass on that information to their local receivers.
Multicast data can then be forwarded between the domains. A useful feature of MSDP
is that it allows each domain to maintain an independent RP that does not rely on
other domains, but it does enable RPs to forward traffic between domains. PIM-SM is
used to forward the traffic between the multicast domains.

The RP in each domain establishes an MSDP peering session using a TCP connection
with the RPs in other domains or with border routers leading to the other domains.
When the RP learns about a new multicast source within its own domain (through the
normal PIM register mechanism), the RP encapsulates the first data packet in a
Source-Active (SA) message and sends the SA to all MSDP peers. Each receiving
peer uses a modified Reverse Path Forwarding (RPF) check to forward the SA, until
the SA reaches every MSDP router in the interconnected networks—theoretically the
entire multicast internet. If the receiving MSDP peer is an RP, and the RP has a (*, G)
entry for the group in the SA (there is an interested receiver), the RP creates (S, G)
state for the source and joins to the shortest path tree for the source. The encapsulated
data is de-capsulated and forwarded down the shared tree of that RP. When the last
hop router (the router closest to the receiver) receives the multicast packet, it may join
the shortest path tree to the source. The MSDP speaker periodically sends SAs that
include all sources within the domain of the RP. Figure 3.6 shows how data would
flow from a source in domain A to a receiver in domain E.

MSDP was developed for peering between Internet service providers (ISPs). ISPs did
not want to rely on an RP maintained by a competing ISP to provide service to their
customers. MSDP allows each ISP to have its own local RP and still forward and
receive multicast traffic to the Internet.
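
A minimal inter-domain peering sketch (the addresses are illustrative) configures the local RP to peer with a remote domain's RP, sourcing the TCP session from a loopback:

Router(config)# ip msdp peer 192.0.2.2 connect-source Loopback0
Router(config)# ip msdp originator-id Loopback0

The connect-source keyword ensures the MSDP TCP session uses a stable loopback address, which also simplifies the RPF check performed on received SA messages.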

Figure 3.6 MSDP Example: MSDP Shares Source Information Between RPs in
Each Domain

Anycast RP Overview
Anycast RP is a useful application of MSDP. Originally developed for inter-domain
multicast applications, MSDP used for Anycast RP is an intra-domain feature that
provides redundancy and load-sharing capabilities. Enterprise customers typically use
Anycast RP for configuring a Protocol Independent Multicast sparse mode (PIM-SM)
network to meet fault tolerance requirements within a single multicast domain.

In Anycast RP, two or more RPs are configured with the same IP address on loopback
interfaces. The Anycast RP loopback address should be configured with a 32-bit
mask, making it a host address. All the downstream routers should be configured to
"know" that the Anycast RP loopback address is the IP address of their local RP. IP
routing automatically will select the topologically closest RP for each source and
receiver. Assuming that the sources are evenly spaced around the network, an equal
number of sources will register with each RP. That is, the process of registering the
sources will be shared equally by all the RPs in the network.

Because a source may register with one RP and receivers may join to a different RP, a
method is needed for the RPs to exchange information about active sources. This
information exchange is done with MSDP.

In Anycast RP, all the RPs are configured to be MSDP peers of each other. When a
source registers with one RP, an SA message will be sent to the other RPs informing
them that there is an active source for a particular multicast group. The result is that
each RP will know about the active sources in the area of the other RPs. If any of the
RPs were to fail, IP routing would converge and one of the RPs would become the
active RP in more than one area. New sources would register with the backup RP.
Receivers would join toward the new RP and connectivity would be maintained.

Note that the RP is normally needed only to start new sessions with sources and
receivers. The RP facilitates the shared tree so that sources and receivers can directly
establish a multicast data flow. If a multicast data flow is already directly established
between a source and the receiver, then an RP failure will not affect that session.

Anycast RP ensures that new sessions with sources and receivers can begin at any
time.

Anycast RP Example
The main purpose of an Anycast RP implementation is that the downstream multicast
routers will "see" just one address for an RP. The example given in Figure 3.7 shows
how the loopback 0 interface of the RPs (RP1 and RP2) is configured with the same
10.0.0.1 IP address. If this 10.0.0.1 address is configured on all RPs as the address for
the loopback 0 interface and then configured as the RP address, IP routing will
converge on the closest RP. This address must be a host route—note the
255.255.255.255 subnet mask.

The downstream routers must be informed about the 10.0.0.1 RP address. In Figure
3.7, the routers are configured statically with the ip pim rp-address 10.0.0.1 global
configuration command. This configuration could also be accomplished using the
Auto-RP or bootstrap router (BSR) features.

The RPs in Figure 3.7 must also share source information using MSDP. In this
example, the loopback 1 interface of the RPs (RP1 and RP2) is configured for MSDP
peering. The MSDP peering address must be different than the Anycast RP address.

Figure 3.7 Anycast RP Configuration

Many routing protocols choose the highest IP address on loopback interfaces for the
Router ID. A problem may arise if the router selects the Anycast RP address for the
Router ID. We recommend that you avoid this problem by manually setting the
Router ID on the RPs to the same address as the MSDP peering address (for example,
the loopback 1 address in Figure 3.7). In Open Shortest Path First (OSPF), the Router
ID is configured using the router-id router configuration command. In Border
Gateway Protocol (BGP), the Router ID is configured using the bgp router-id router
configuration command. In many BGP topologies, the MSDP peering address and the
BGP peering address must be the same in order to pass the RPF check. The BGP

peering address can be set using the neighbor update-source router configuration
command.
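
For example, with an assumed MSDP peering address of 10.1.1.1 on a loopback 1 interface (the process and AS numbers here are illustrative), the Router IDs and the BGP update source could be pinned as follows:

Router(config)# router ospf 1
Router(config-router)# router-id 10.1.1.1
Router(config-router)# exit
Router(config)# router bgp 65000
Router(config-router)# bgp router-id 10.1.1.1
Router(config-router)# neighbor 10.1.1.2 update-source Loopback1

Pinning both Router IDs to the unique peering address prevents either protocol from accidentally electing the shared Anycast RP address as its Router ID.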

The Anycast RP example in the previous paragraphs used IP addresses from RFC
1918. These IP addresses are normally blocked at inter-domain borders and therefore
are not accessible to other ISPs. You must use valid IP addresses if you want the RPs
to be reachable from other domains.

SA filters
This section describes recommended filters for Multicast Source Discovery Protocol
(MSDP) Source-Active (SA) messages. Cisco highly recommends establishing at least
these filters when connecting to the native IP multicast Internet.


Description
MSDP-SA messages contain (source, group (S,G)) information for rendezvous points
(RPs) (called MSDP peers) in Protocol Independent Multicast sparse-mode (PIM-SM)
domains. This mechanism allows RPs to learn about multicast sources in remote PIM-
SM domains so that they can join those sources if there are local receivers in their
own domain. You can also use MSDP between multiple RPs in a single PIM-SM
domain to establish MSDP mesh-groups.

With a default configuration, MSDP exchanges SA messages without filtering them
for specific source or group addresses.

Typically, there are a number of (S,G) states in a PIM-SM domain that should stay
within the PIM-SM domain, but, due to default filtering, they get passed in SA
messages to MSDP peers. Examples of this include domain local applications that use
global IP multicast addresses, and sources that use local IP addresses (such as
10.x.y.z). In the native IP multicast Internet, this default leads to excessive (S,G)
information being shared. To improve the scalability of MSDP in the native IP
multicast Internet, and to avoid global visibility of domain local (S,G) information,
we recommend using the following configuration to reduce unnecessary creation,
forwarding, and caching of some of these well-known domain local sources.

Recommended Filter List Configuration


Cisco recommends using the following configuration filter for PIM-SM domains with
a single RP for every group (no MSDP mesh-group):

!

!--- Filter MSDP SA-messages.


!--- Replicate the following two rules for every external MSDP peer.

!
ip msdp sa-filter in <peer_address> list 111
ip msdp sa-filter out <peer_address> list 111
!

!--- The redistribution rule is independent of peers.

!
ip msdp redistribute list 111
!

!--- ACL to control SA-messages originated, forwarded.

!--- Domain-local applications.

access-list 111 deny ip any host 224.0.2.2 !


access-list 111 deny ip any host 224.0.1.3 ! Rwhod
access-list 111 deny ip any host 224.0.1.24 ! Microsoft-ds
access-list 111 deny ip any host 224.0.1.22 ! SVRLOC
access-list 111 deny ip any host 224.0.1.2 ! SGI-Dogfight
access-list 111 deny ip any host 224.0.1.35 ! SVRLOC-DA
access-list 111 deny ip any host 224.0.1.60 ! hp-device-disc

!--- Auto-RP groups.

access-list 111 deny ip any host 224.0.1.39


access-list 111 deny ip any host 224.0.1.40

!--- Scoped groups.

access-list 111 deny ip any 239.0.0.0 0.255.255.255

!--- Loopback, private addresses (RFC 1918).

access-list 111 deny ip 10.0.0.0 0.255.255.255 any


access-list 111 deny ip 127.0.0.0 0.255.255.255 any
access-list 111 deny ip 172.16.0.0 0.15.255.255 any
access-list 111 deny ip 192.168.0.0 0.0.255.255 any

!--- Default SSM-range. Do not do MSDP in this range.

access-list 111 deny ip any 232.0.0.0 0.255.255.255


access-list 111 permit ip any any
!

Explanation
In the example above, access list 111 (you can use any number) defines domain local
SA-information. This includes (S,G) state for global groups used by domain local
applications, the two auto-RP groups, scoped groups, and (S,G) state from local IP
addresses.

This filter list is applied so that the local router does not accept domain-local SA
information from external MSDP peers and so that external MSDP peers never receive
domain-local SA information from the router.

The ip msdp sa-filter in <peer_address> list 111 command filters local information
from SA messages received from MSDP peer <peer_address>. If you configure this
command on every external MSDP peer, then the router itself will not accept any
domain local information from outside the domain.

The ip msdp sa-filter out <peer_address> list 111 command filters domain local
information from SA announcements sent to MSDP peer <peer_address>. If you
configure this command on every external MSDP peer, then no domain local
information is announced outside the domain.

We included the ip msdp redistribute list 111 command for added safety. It prevents
the router from originating SA messages for domain local (S,G) state. This action is
independent of the filtering of sent SA messages caused by the ip msdp sa-filter out
command.

Filtering with MSDP Mesh-Groups


If the PIM-SM domain uses an MSDP mesh-group, then there are domain internal
MSDP peers. For this situation, the configuration described above needs to be
examined further.

You should apply the ip msdp sa-filter in and ip msdp sa-filter out rules to external
MSDP peers only. If you apply them to internal MSDP peers, all SA information
filtered by access-list 111 will not be passed between internal peers, which breaks any
application using the source or group addresses filtered by access-list 111 (unless, as
in the case of auto-RP groups, the groups use PIM-DM instead of PIM-SM).
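
As a sketch (the peer addresses are illustrative), the SA filters are attached only to the external peer, while the internal peer is placed in the mesh-group unfiltered:

Router(config)# ip msdp peer 203.0.113.1 connect-source Loopback0
Router(config)# ip msdp sa-filter in 203.0.113.1 list 111
Router(config)# ip msdp sa-filter out 203.0.113.1 list 111
Router(config)# ip msdp peer 10.1.1.2 connect-source Loopback1
Router(config)# ip msdp mesh-group INTERNAL 10.1.1.2

Here 203.0.113.1 represents the external MSDP peer and 10.1.1.2 an internal mesh-group member; domain-local SA information still flows freely between the internal peers.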

Cisco recommends not configuring the ip msdp redistribute list 111 command
because it prevents the RP from originating SA messages for domain local (S,G) state.
This command breaks any domain local application that depends on it. Since this
command is included for added safety, removing it will not change how messages are
filtered between external MSDP peers.

3.2.d Describe IPv6 multicast

IPv6 multicast addresses



An IPv6 multicast address is an IPv6 address that has a prefix of FF00::/8 (1111
1111). An IPv6 multicast address is an identifier for a set of interfaces that typically
belong to different nodes. A packet sent to a multicast address is delivered to all
interfaces identified by the multicast address. The second octet following the prefix
defines the lifetime and scope of the multicast address. A permanent multicast address
has a lifetime parameter equal to 0; a temporary multicast address has a lifetime
parameter equal to 1. A multicast address that has the scope of a node, link, site, or
organization, or a global scope has a scope parameter of 1, 2, 5, 8, or E, respectively.
For example, a multicast address with the prefix FF02::/16 is a permanent multicast
address with a link scope. Figure 3.8 shows the format of the IPv6 multicast address.

Figure 3.8 IPv6 Multicast Address Format

IPv6 nodes (hosts and routers) are required to join (receive packets destined for) the
following multicast groups:

 All-nodes multicast group FF02:0:0:0:0:0:0:1 (scope is link-local)


 Solicited-node multicast group FF02:0:0:0:0:1:FF00:0000/104 for each of its
assigned unicast and anycast addresses

IPv6 routers must also join the all-routers multicast group FF02:0:0:0:0:0:0:2 (scope
is link-local).

The solicited-node multicast address is a multicast group that corresponds to an IPv6


unicast or anycast address. IPv6 nodes must join the associated solicited-node
multicast group for every unicast and anycast address assigned to them. The
IPv6 solicited-node multicast address has the prefix FF02:0:0:0:0:1:FF00:0000/104
concatenated with the 24 low-order bits of a corresponding IPv6 unicast or anycast
address (see Figure 3.9). For example, the solicited-node multicast address
corresponding to the IPv6 address 2037::01:800:200E:8C6C is FF02::1:FF0E:8C6C.
Solicited-node addresses are used in neighbor solicitation messages.

Figure 3.9 IPv6 Solicited-Node Multicast Address Format



PIMv6

PIMv6 Anycast RP Solution Overview

The anycast RP solution in IPv6 PIM allows an IPv6 network to support anycast
services for the PIM-SM RP. It allows anycast RP to be used inside a domain that
runs PIM only. Anycast RP can be used in IPv4 as well as IPv6, but it does not
depend on the Multicast Source Discovery Protocol (MSDP), which runs only on
IPv4. This feature is useful when inter-domain connection is not required.

Anycast RP is a mechanism that ISP-based backbones use to get fast convergence
when a PIM RP device fails. To allow receivers and sources to rendezvous to the
closest RP, the packets from a source need to get to all RPs to find joined receivers.

A unicast IP address is chosen as the RP address. This address is either statically
configured or distributed using a dynamic protocol to all PIM devices throughout the
domain. A set of devices in the domain is chosen to act as RPs for this RP address;
these devices are called the anycast RP set. Each device in the anycast RP set is
configured with a loopback interface using the RP address. Each device in the anycast
RP set also needs a separate physical IP address to be used for communication
between the RPs.

The RP address, or a prefix that covers the RP address, is injected into the unicast
routing system inside of the domain. Each device in the anycast RP set is configured
with the addresses of all other devices in the anycast RP set, and this configuration
must be consistent in all RPs in the set.
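As a rough sketch of what one member of the anycast RP set might look like in an
IOS-style configuration (the addresses, interface number, and peer entry below are
illustrative assumptions, not taken from this text):

interface Loopback0
 ipv6 address 2001:DB8::1:1/128
!
! 2001:DB8::1:1 is the shared anycast RP address; 2001:DB8::3:3 would be the
! separate physical address of another member of the anycast RP set
ipv6 pim rp-address 2001:DB8::1:1
ipv6 pim anycast-RP 2001:DB8::1:1 2001:DB8::3:3

Each member would be configured identically except that its peer statements list the
other members' own addresses.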

PIMv6 Anycast RP Normal Operation


The following illustration shows PIMv6 anycast RP normal operation and assumes
the following:

RP1, RP2, RP3, and RP4 are members in the same anycast RP group.

S11 and S31 are sources that use RP1 and RP3, respectively, based on their unicast
routing metric.

R11, R12, R2, R31, and R32 are receivers. Based on their unicast routing metrics,
R11 and R12 join to RP1, R2 joins to RP2, and R31 and R32 join to RP3,
respectively.

The following sequence of events occurs when S11 starts sending packets:

 DR1 creates (S,G) states and sends a register to RP1. DR1 may also encapsulate
the data packet in the register.
 Upon receiving the register, RP1 performs normal PIM-SM RP functionality, and
forwards the packets to R11 and R12.
 RP1 also sends the register (which may encapsulate the data packets) to RP2,
RP3, and RP4.
 RP2, RP3, and RP4 do not further forward the register to each other.
 RP2, RP3, and RP4 perform normal PIM-SM RP functionality, and if there is a
data packet encapsulated, RP2 forwards the data packet to R2 and RP3 forwards
the data packet to R31 and R32, respectively.
 The previous five steps repeat for null registers sent by DR1.

PIMv6 Anycast RP Failover

The following illustration shows PIM anycast RP failover.



3.3 Fundamental routing concepts

3.3.a Implement and troubleshoot static routing

Static Routes
Networking devices forward packets using route information that is either manually
configured or dynamically learned using a routing protocol. Static routes are manually
configured and define an explicit path between two networking devices. Unlike a
dynamic routing protocol, static routes are not automatically updated and must be
manually reconfigured if the network topology changes. The benefits of using static
routes include security and resource efficiency. Static routes use less bandwidth than
dynamic routing protocols and no CPU cycles are used to calculate and communicate
routes. The main disadvantage to using static routes is the lack of automatic
reconfiguration if the network topology changes.

Static routes can be redistributed into dynamic routing protocols but routes generated
by dynamic routing protocols cannot be redistributed into the static routing table. No
algorithm exists to prevent the configuration of routing loops that use static routes.

Static routes are useful for smaller networks with only one path to an outside network
and to provide security for a larger network for certain types of traffic or links to other
networks that need more control. In general, most networks use dynamic routing
protocols to communicate between networking devices but may have one or two static
routes configured for special cases.

Directly Attached Static Routes


In directly attached static routes, only the output interface is specified. The destination
is assumed to be directly attached to this interface, so the packet destination is used as
the next-hop address. This example shows such a definition:

ipv6 route 2001:DB8::/32 ethernet1/0

The example specifies that all destinations with address prefix 2001:DB8::/32 are
directly reachable through interface Ethernet1/0.

Directly attached static routes are candidates for insertion in the IPv6 routing table
only if they refer to a valid IPv6 interface; that is, an interface that is both up and has
IPv6 enabled on it.

Recursive Static Routes


In a recursive static route, only the next hop is specified. The output interface is
derived from the next hop.

This example shows such a definition:

ipv6 route 2001:DB8::/32 2001:DB8:3000:1



This example specifies that all destinations with address prefix 2001:DB8::/32 are
reachable via the host with address 2001:DB8:3000:1.

A recursive static route is valid (that is, it is a candidate for insertion in the IPv6
routing table) only when the specified next hop resolves, either directly or indirectly,
to a valid IPv6 output interface, provided the route does not self-recurse, and the
recursion depth does not exceed the maximum IPv6 forwarding recursion depth.

A route self-recurses if it is itself used to resolve its own next hop. For example,
suppose we have the following routes in the IPv6 routing table:
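The routing-table listing for this example has not survived in this text; reconstructed
from the description that follows, the two routes would look approximately like this:

S   2001:DB8::/32 [1/0]
     via 2001:DB8:3000:1
B   2001:DB8:3000:0/16 [20/3]
     via 2001:DB8::0104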

This static route will not be inserted into the IPv6 routing table because it is self-
recursive. The next hop of the static route, 2001:DB8:3000:1, resolves via the BGP
route 2001:DB8:3000:0/16, which is itself a recursive route (that is, it only specifies a
next hop). The next hop of the BGP route, 2001:DB8::0104, resolves via the static
route. Therefore, the static route would be used to resolve its own next hop.

It is not normally useful to manually configure a self-recursive static route, although
it is not prohibited.

However, a recursive static route that has been inserted in the IPv6 routing table may
become self-recursive as a result of some transient change in the network learned
through a dynamic routing protocol. If this occurs, the fact that the static route has
become self-recursive will be detected and it will be removed from the IPv6 routing
table, although not from the configuration. A subsequent network change may cause
the static route to no longer be self-recursive, in which case it will be reinserted in the
IPv6 routing table.

Fully Specified Static Routes


In a fully specified static route, both the output interface and the next hop are
specified. This form of static route is used when the output interface is a multi-access
one and it is necessary to explicitly identify the next hop. The next hop must be
directly attached to the specified output interface. The following example shows a
definition of a fully specified static route:

ipv6 route 2001:DB8::/32 ethernet1/0 2001:DB8:3000:1

A fully specified route is valid (that is, a candidate for insertion into the IPv6 routing
table) when the specified IPv6 interface is IPv6-enabled and up.

Floating Static Routes


Floating static routes are static routes that are used to back up dynamic routes learned
through configured routing protocols. A floating static route is configured with a
higher administrative distance than the dynamic routing protocol it is backing up. As a
result, the dynamic route learned through the routing protocol is always used in
preference to the floating static route. If the dynamic route learned through the routing
protocol is lost, the floating static route will be used in its place. The following
example defines a floating static route:

ipv6 route 2001:DB8::/32 ethernet1/0 2001:DB8:3000:1 210



Any of the three types of IPv6 static routes can be used as a floating static route. A
floating static route must be configured with an administrative distance that is greater
than the administrative distance of the dynamic routing protocol, because routes with
smaller administrative distances are preferred.

3.3.b Implement and troubleshoot default routing

Configuring IPv6 Default and Static Routes


The security appliance automatically routes IPv6 traffic between directly connected
hosts if the interfaces to which the hosts are attached are enabled for IPv6 and the
IPv6 ACLs allow the traffic.

The security appliance does not support dynamic routing protocols. Therefore, to
route IPv6 traffic to a non-connected host or network, you need to define a static route
to the host or network or, at a minimum, a default route. Without a static or default
route defined, traffic to non-connected hosts or networks generate the following error
message:

%PIX|ASA-6-110001: No route to dest_address from source_address

You can add a default route and static routes using the ipv6 route command.

To configure an IPv6 default route and static routes, perform the following steps:

Step 1 To add the default route, use the following command:

hostname(config)# ipv6 route if_name ::/0 next_hop_ipv6_addr

The address ::/0 is the IPv6 equivalent of "any."

Step 2 (Optional) Define IPv6 static routes. Use the following command to add an
IPv6 static route to the IPv6 routing table:

hostname(config)# ipv6 route if_name destination next_hop_ipv6_addr
[admin_distance]
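As a concrete illustration (the interface names and addresses are hypothetical):

hostname(config)# ipv6 route outside ::/0 2001:DB8::1
hostname(config)# ipv6 route inside 2001:DB8:1::/64 2001:DB8:2::1 150

The first command points all otherwise-unmatched IPv6 traffic at 2001:DB8::1 on the
outside interface; the second adds a static route with an administrative distance of
150.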

Configuring IPv6 Access Lists

Configuring an IPv6 access list is similar to configuring an IPv4 access list, but with
IPv6 addresses.

To configure an IPv6 access list, perform the following steps:

Step 1 Create an access entry. To create an access list, use the ipv6 access-list
command to create entries for the access list. There are two main forms of this
command to choose from: one for creating access list entries specifically for ICMP
traffic, and one for creating access list entries for all other types of IP traffic.

 To create an IPv6 access list entry specifically for ICMP traffic, enter the
following command:

hostname(config)# ipv6 access-list id [line num] {permit | deny} icmp source
destination [icmp_type]

 To create an IPv6 access list entry, enter the following command:

hostname(config)# ipv6 access-list id [line num] {permit | deny} protocol source
[src_port] destination [dst_port]

The following describes the arguments for the ipv6 access-list command:

 id—The name of the access list. Use the same id in each command when you
are entering multiple entries for an access list.

 line num—When adding an entry to an access list, you can specify the line
number in the list where the entry should appear.

 permit | deny—Determines whether the specified traffic is blocked or allowed
to pass.

 icmp—Indicates that the access list entry applies to ICMP traffic.

 protocol—Specifies the traffic being controlled by the access list entry. This
can be the name (ip, tcp, or udp) or number (1-254) of an IP protocol.
Alternatively, you can specify a protocol object group using object-group grp_id.

 source and destination—Specifies the source or destination of the traffic. The
source or destination can be an IPv6 prefix, in the format prefix/length, to indicate
a range of addresses, the keyword any, to specify any address, or a specific host
designated by host host_ipv6_addr.

 src_port and dst_port—The source and destination port (or service) argument.
Enter an operator (lt for less than, gt for greater than, eq for equal to, neq for not
equal to, or range for an inclusive range) followed by a space and a port number
(or two port numbers separated by a space for the range keyword).

 icmp_type—Specifies the ICMP message type being filtered by the access
rule. The value can be a valid ICMP type number (from 0 to 255) or one of the
ICMP type literals as shown in Appendix D, "Addresses, Protocols, and Ports".
Alternatively, you can specify an ICMP object group using object-group id.

Step 2 To apply the access list to an interface, enter the following command:

hostname(config)# access-group access_list_name {in | out} interface if_name
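Tying the two steps together, a short worked example (the ACL name, prefix, and
interface name are hypothetical):

hostname(config)# ipv6 access-list OUTSIDE_IN permit tcp any 2001:DB8:1::/64 eq 80
hostname(config)# ipv6 access-list OUTSIDE_IN deny ip any any
hostname(config)# access-group OUTSIDE_IN in interface outside

This permits inbound web traffic to the 2001:DB8:1::/64 prefix, denies everything
else, and applies the list inbound on the outside interface.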



Configuring IPv6 Neighbor Discovery


The IPv6 neighbor discovery process uses ICMPv6 messages and solicited-node
multicast addresses to determine the link-layer address of a neighbor on the same
network (local link), verify the reachability of a neighbor, and keep track of
neighboring routers.

3.3.c Compare routing protocol types


Routing protocols can be classified into different groups according to their
characteristics. Specifically, routing protocols can be classified by their:

 Purpose: Interior Gateway Protocol (IGP) or Exterior Gateway Protocol (EGP)


 Operation: Distance vector protocol, link-state protocol, or path-vector protocol
 Behavior: Classful (legacy) or classless protocol

For example, IPv4 routing protocols are classified as follows:

 RIPv1 (legacy): IGP, distance vector, classful protocol


 IGRP (legacy): IGP, distance vector, classful protocol developed by Cisco
(deprecated from 12.2 IOS and later)
 RIPv2: IGP, distance vector, classless protocol
 EIGRP: IGP, distance vector, classless protocol developed by Cisco
 OSPF: IGP, link-state, classless protocol
 IS-IS: IGP, link-state, classless protocol
 BGP: EGP, path-vector, classless protocol

The classful routing protocols, RIPv1 and IGRP, are legacy protocols and are only
used in older networks. These routing protocols have evolved into the classless
routing protocols, RIPv2 and EIGRP, respectively. Link-state routing protocols are
classless by nature.

Figure 3-10 displays a hierarchical view of dynamic routing protocol classification.

IGP and EGP Routing Protocols


An autonomous system (AS) is a collection of routers under a common administration
such as a company or an organization. An AS is also known as a routing domain.
Typical examples of an AS are a company’s internal network and an ISP’s network.

The Internet is based on the AS concept; therefore, two types of routing protocols are
required:

 Interior Gateway Protocols (IGP): Used for routing within an AS. It is also
referred to as intra-AS routing. Companies, organizations, and even service
providers use an IGP on their internal networks. IGPs include RIP, EIGRP,
OSPF, and IS-IS.
 Exterior Gateway Protocols (EGP): Used for routing between autonomous
systems. It is also referred to as inter-AS routing. Service providers and large
companies may interconnect using an EGP. The Border Gateway Protocol
(BGP) is the only currently viable EGP and is the official routing protocol used
by the Internet.

Because BGP is the only EGP available, the term EGP is rarely used; instead, most
engineers simply refer to BGP.

The example in Figure 3-11 provides simple scenarios highlighting the deployment of
IGPs, BGP, and static routing.

There are five individual autonomous systems in the scenario:

 ISP-1: This is an AS and it uses IS-IS as the IGP. It interconnects with other
autonomous systems and service providers using BGP to explicitly control how
traffic is routed.
 ISP-2: This is an AS and it uses OSPF as the IGP. It interconnects with other
autonomous systems and service providers using BGP to explicitly control how
traffic is routed.
 AS-1: This is a large organization and it uses EIGRP as the IGP. Because it is
multi-homed (i.e., connects to two different service providers), it uses BGP to
explicitly control how traffic enters and leaves the AS.
 AS-2: This is a medium-sized organization and it uses OSPF as the IGP. It is also
multi-homed; therefore, it uses BGP to explicitly control how traffic enters and
leaves the AS.
 AS-3: This is a small organization with older routers within the AS; it uses RIP as
the IGP. BGP is not required because it is single-homed (i.e., connects to one
service provider). Instead, static routing is implemented between the AS and the
service provider.

Distance Vector Routing Protocols


Distance vector means that routes are advertised by providing two characteristics:

 Distance: Identifies how far it is to the destination network, based on a metric
such as hop count, cost, bandwidth, delay, and more
 Vector: Specifies the direction of the next-hop router or exit interface to reach
the destination

For example, in Figure 3-12, R1 knows that the distance to reach network
172.16.3.0/24 is one hop and that the direction is out of the interface Serial 0/0/0
toward R2.

Figure 3-12 Distance Vector Protocol Operation



Classful Routing Protocols


The biggest distinction between classful and classless routing protocols is that classful
routing protocols do not send subnet mask information in their routing updates.
Classless routing protocols include subnet mask information in the routing updates.

The two original IPv4 routing protocols developed were RIPv1 and IGRP. They were
created when network addresses were allocated based on classes (i.e., class A, B, or
C). At that time, a routing protocol did not need to include the subnet mask in the
routing update, because the network mask could be determined based on the first octet
of the network address.

3.3.d Implement, optimize and troubleshoot administrative distance


Routing decision criteria

1) Valid next hop
Upon receipt of an update, the router first verifies that the route has a valid next hop.

2) Metric
The router then attempts to install the route with the best metric (e.g., cost for OSPF)
into the table.

3) Administrative Distance
If multiple routing protocols offer the same prefix, the route with the lowest AD is
used.

4) Prefix
The longest matching prefix is preferred when forwarding; a longer match trumps
AD.

Administrative Distances (default)


 Connected interface 0

 Static route 1
 EIGRP summary route 5
 External BGP 20
 Internal EIGRP 90
 IGRP 100
 OSPF 110
 IS-IS 115
 RIP 120
 Exterior Gateway Protocol 140
 On-demand routing 160
 External EIGRP 170
 Internal BGP 200
 Unknown 255

A floating static route is a static route whose administrative distance has been
artificially increased from its default; for example, for EIGRP a floating static route
could be added with an AD of 93, which would be used only if the dynamic internal
EIGRP route (AD 90) is not found.
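For instance, such a backup might be configured as follows (the prefix and next hop
are hypothetical):

! Backup for a prefix normally learned via internal EIGRP (AD 90)
ip route 10.10.0.0 255.255.0.0 192.168.1.2 93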

Routing Table
In the classic hop-by-hop model, routing keeps a list of destinations and their
reachable next hops in the routing table. In recent years the hop-by-hop paradigm has
given way to more advanced methods such as MPLS, in which a label lookup dictates
the next-hop determination. Traffic-engineering considerations, such as QoS, that are
not limited to routing-table-only criteria have become popular as well.

3.3.e Implement and troubleshoot passive interface


With IPv4 OSPF we typically make all user VLANs passive interfaces and uplinks
point-to-point, usually with the 'passive-interface default' and 'no passive-interface
GigabitEthernet x/x' commands. I have now enabled IPv6 OSPF with the global
commands and interface commands below:

ipv6 router ospf 10
auto-cost reference-bandwidth 10000
passive-interface default
no passive-interface Vlan99

interface vlan99
ipv6 ospf 10 area 0
ipv6 ospf network point-to-point

interface vlan10
ipv6 ospf 10 area 0

With this configuration I am seeing IPv6 OSPF hellos on vlan 10, which I am pretty
sure I shouldn’t, as it is passive. I have checked all the ipv6 ospf interface commands
and can't see one that turns the interface passive.

3.3.f Implement and troubleshoot VRF lite


Configuring VRF-lite

Virtual Private Networks (VPNs) provide a secure way for customers to share
bandwidth over an ISP backbone network. A VPN is a collection of sites sharing a
common routing table. A customer site is connected to the service provider network
by one or more interfaces, and the service provider associates each interface with a
VPN routing table. A VPN routing table is called a VPN routing/forwarding (VRF)
table.

With the VRF-lite feature, the Catalyst 4500 series switch supports multiple VPN
routing/forwarding instances in customer edge devices. (VRF-lite is also termed
multi-VRF CE, or multi-VRF Customer Edge Device). VRF-lite allows a service
provider to support two or more VPNs with overlapping IP addresses using one
interface.

Understanding VRF-lite
VRF-lite is a feature that enables a service provider to support two or more VPNs,
where IP addresses can be overlapped among the VPNs. VRF-lite uses input
interfaces to distinguish routes for different VPNs and forms virtual packet-
forwarding tables by associating one or more Layer 3 interfaces with each VRF.
Interfaces in a VRF can be either physical, such as Ethernet ports, or logical, such as
VLAN SVIs, but a Layer 3 interface cannot belong to more than one VRF at any time.

VRF-Lite support on Cat 4500 does not include the Provider Edge MPLS
functionality. More specifically, MPLS label switching and MPLS control plane are
not supported in the VRF-Lite implementation.

VRF-lite interfaces must be Layer 3 interfaces.

VRF-lite includes these devices:

 Customer edge (CE) devices provide customer access to the service provider
network over a data link to one or more provider edge routers. The CE device
advertises the site’s local routes to the provider edge router and learns the remote
VPN routes from it. A Catalyst 4500 series switch can be a CE.
 Provider edge (PE) routers exchange routing information with CE devices by
using static routing or a routing protocol such as BGP, RIPv1, or RIPv2.
 The PE is only required to maintain VPN routes for those VPNs to which it is
directly attached, eliminating the need for the PE to maintain all of the service
provider VPN routes. Each PE router maintains a VRF for each of its directly
connected sites. Multiple interfaces on a PE router can be associated with a single
VRF if all of these sites participate in the same VPN. Each VPN is mapped to a
specified VRF. After learning local VPN routes from CEs, a PE router exchanges
VPN routing information with other PE routers by using internal BGP (iBGP).
 Provider routers (or core routers) are any routers in the service provider network
that do not attach to CE devices.

With VRF-lite, multiple customers can share one CE, and only one physical link is
used between the CE and the PE. The shared CE maintains separate VRF tables for
each customer and switches or routes packets for each customer based on its own
routing table. VRF-lite extends limited PE functionality to a CE device, giving it the
ability to maintain separate VRF tables to extend the privacy and security of a VPN to
the branch office.
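A minimal CE-side sketch of this separation (the VRF names, RD values, interface
numbers, and addresses are hypothetical):

ip vrf customer-a
 rd 100:1
ip vrf customer-b
 rd 100:2
!
interface FastEthernet1/1
 ip vrf forwarding customer-a
 ip address 10.1.1.1 255.255.255.0
interface FastEthernet1/2
 ip vrf forwarding customer-b
 ip address 10.1.1.1 255.255.255.0

Note that the same 10.1.1.1 address can appear in both VRFs because each VRF
maintains its own routing table.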

Figure 3.13 shows a configuration where each Catalyst 4500 series switch acts as
multiple virtual CEs.

Because VRF-lite is a Layer 3 feature, each interface in a VRF must be a Layer 3
interface.

Figure 3.13 Catalyst 4500 Series Switches Acting as Multiple Virtual CE

This is the packet-forwarding process in a VRF-lite CE-enabled network as shown in
Figure 3.13:

 When the CE receives a packet from a VPN, it looks up the routing table based on
the input interface. When a route is found, the CE forwards the packet to the PE.
 When the ingress PE receives a packet from the CE, it performs a VRF lookup.
When a route is found, the router adds a corresponding MPLS label to the packet
and sends it to the MPLS network.

 When an egress PE receives a packet from the network, it strips the label and uses
the label to identify the correct VPN routing table. Then the egress PE performs
the normal route lookup. When a route is found, it forwards the packet to the
correct adjacency.
 When a CE receives a packet from an egress PE, it uses the input interface to look
up the correct VPN routing table. If a route is found, the CE forwards the packet
within the VPN.

To configure VRF, create a VRF table and specify the Layer 3 interface associated
with the VRF. Then configure the routing protocols in the VPN and between the CE
and the PE. BGP is the preferred routing protocol used to distribute VPN routing
information across the provider’s backbone. The VRF-lite network has three major
components:

 VPN route target communities—Lists of all other members of a VPN community.
You need to configure VPN route targets for each VPN community member.
 Multiprotocol BGP peering of VPN community PE routers—Propagates VRF
reachability information to all members of a VPN community. You need to
configure BGP peering in all PE routers within a VPN community.
 VPN forwarding—Transports all traffic between all VPN community members
across a VPN service-provider network.

Default VRF-lite Configuration

Table 3.1 shows the default VRF configuration.

3.3.g Implement, optimize and troubleshoot filtering with any routing protocol

Restrictions for IPv6 Routing EIGRP Support


 EIGRP for IPv6 is directly configured on the interfaces over which it runs. This
feature allows EIGRP for IPv6 to be configured without the use of a global IPv6
address. There is no network statement in EIGRP for IPv6.
In per-interface configuration at system startup, if EIGRP has been configured on an
interface, then the EIGRP protocol may start running before any EIGRP router mode
commands have been executed.

 An EIGRP for IPv6 protocol instance requires a router ID before it can start
running.
 EIGRP for IPv6 has a shutdown feature. The routing process should be in "no
shut" mode in order to start running.
 EIGRP for IPv6 provides route filtering using the distribute-list prefix-list
command. Use of the route-map command is not supported for route filtering with
a distribute list.
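As a sketch of the supported filtering style (the prefix-list name, prefixes, and process
number are hypothetical):

ipv6 prefix-list FILTER-IN seq 5 deny 2001:DB8:CAFE::/48
ipv6 prefix-list FILTER-IN seq 10 permit ::/0 le 128
!
ipv6 router eigrp 1
 distribute-list prefix-list FILTER-IN in

This blocks the 2001:DB8:CAFE::/48 prefix from being accepted while permitting
everything else; a route map cannot be substituted for the prefix list here.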

Information About IPv6 Routing EIGRP Support

Cisco EIGRP for IPv6 Implementation

EIGRP is an enhanced version of the IGRP developed by Cisco. EIGRP uses the same
distance vector algorithm and distance information as IGRP. However, the
convergence properties and the operating efficiency of EIGRP have improved
substantially over IGRP.

The convergence technology is based on research conducted at SRI International and
employs an algorithm called the diffusing update algorithm (DUAL). This algorithm
guarantees loop-free operation at every instant throughout a route computation and
allows all devices involved in a topology change to synchronize at the same time.
Devices that are not affected by topology changes are not involved in
recomputations. The convergence time with DUAL rivals that of any other existing
routing protocol.

EIGRP provides the following features:

 Increased network width--With Routing Information Protocol (RIP), the
largest possible width of your network is 15 hops. When EIGRP is enabled, the
largest possible width is 224 hops. Because the EIGRP metric is large enough to
support thousands of hops, the only barrier to expanding the network is the
transport layer hop counter. Cisco works around this limitation by incrementing
the transport control field only when an IPv4 or an IPv6 packet has traversed 15
devices and the next hop to the destination was learned by way of EIGRP. When a
RIP route is being used as the next hop to the destination, the transport control
field is incremented as usual.
 Fast convergence--The DUAL algorithm allows routing information to
converge as quickly as any other routing protocol.

 Partial updates--EIGRP sends incremental updates when the state of a
destination changes, instead of sending the entire contents of the routing table.
This feature minimizes the bandwidth required for EIGRP packets.
 Neighbor discovery mechanism--This is a simple hello mechanism used to
learn about neighboring devices. It is protocol-independent.
 Arbitrary route summarization.

 Scaling--EIGRP scales to large networks.


 Route filtering--EIGRP for IPv6 provides route filtering using the distribute-
list prefix-list command.

Use of the route-map command is not supported for route filtering with a distribute
list.

EIGRP has the following four basic components:

 Neighbor discovery--Neighbor discovery is the process that devices use to
dynamically learn of other devices on their directly attached networks.
Devices must also discover when their neighbors become unreachable or
inoperative. EIGRP neighbor discovery is achieved with low overhead by
periodically sending small hello packets. EIGRP neighbors can also discover a
neighbor that has recovered after an outage because the recovered neighbor
will send out a hello packet. As long as hello packets are received, the Cisco
software can determine that a neighbor is alive and functioning. Once this
status is determined, the neighboring devices can exchange routing
information.

 Reliable transport protocol--The reliable transport protocol is responsible for
guaranteed, ordered delivery of EIGRP packets to all neighbors. It supports
intermixed transmission of multicast and unicast packets. Some EIGRP
packets must be sent reliably and others need not be. For efficiency, reliability
is provided only when necessary. For example, on a multi access network that
has multicast capabilities, it is not necessary to send hello packets reliably to
all neighbors individually. Therefore, EIGRP sends a single multicast hello
with an indication in the packet informing the receivers that the packet need
not be acknowledged. Other types of packets (such as updates) require
acknowledgment, which is indicated in the packet. The reliable transport has a
provision to send multicast packets quickly when unacknowledged packets are
pending. This provision helps to ensure that convergence time remains low in
the presence of varying speed links.

 DUAL finite state machine--The DUAL finite state machine embodies the
decision process for all route computations. It tracks all routes advertised by
all neighbors. DUAL uses several metrics including distance and cost
information to select efficient, loop-free paths. When multiple routes to a
neighbor exist, DUAL determines which route has the lowest metric (named
the feasible distance), and enters this route into the routing table. Other
possible routes to this neighbor with larger metrics are received, and DUAL
determines the reported distance to this network. The reported distance is
defined as the total metric advertised by an upstream neighbor for a path to a
destination. DUAL compares the reported distance with the feasible distance,
and if the reported distance is less than the feasible distance, DUAL considers
the route to be a feasible successor and enters the route into the topology table.
The feasible successor route that is reported with the lowest metric becomes
the successor route to the current route if the current route fails. To avoid
routing loops, DUAL ensures that the reported distance is always less than the
feasible distance for a neighbor device to reach the destination network; otherwise,
the route to the neighbor may loop back through the local device.

When there are no feasible successors to a route that has failed, but there are
neighbors advertising the route, a recomputation must occur. This is the process
in which DUAL determines a new successor. The amount of time required to
recompute the route affects the convergence time. Recomputation is processor-
intensive, so it is advantageous to avoid unneeded recomputation. When a
topology change occurs, DUAL will test for feasible successors. If there are
feasible successors, DUAL will use them in order to avoid unnecessary
recomputation.

 Protocol-dependent modules--The protocol-dependent modules are responsible
for network layer protocol-specific tasks. For example, the EIGRP module is
responsible for sending and receiving EIGRP packets that are encapsulated in
IPv4 or IPv6. It is also responsible for parsing EIGRP packets and informing
DUAL of the new information received. EIGRP asks DUAL to make routing
decisions, but the results are stored in the IPv4 or IPv6 routing table. Also,
EIGRP is responsible for redistributing routes learned by other IPv4 or IPv6
routing protocols.

3.3.h Implement, optimize and troubleshoot redistribution between any routing
protocol

Configuring Route Policy Language


Route Policy Language (RPL) is used in conjunction with RIP to perform different
functionalities. It can be used to set different metrics during redistribution or to filter
routing updates from neighboring routers. There are two options in RIP to change the
RIP metric: set rip-metric, which changes the metric of the route to the newly
configured value, and the add rip-metric option, which updates the original metric by
an offset defined as per the configuration.

Example 3.3.1 shows the usage of a routing policy to set the metric for redistributed
routes.

Example 3.3.1 Redistribution Using route-policy
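The listing for Example 3.3.1 has not survived in this text; an IOS XR route policy of
the kind described might look like this (the policy name, metric value, and
redistributed source are illustrative):

route-policy SET-RIP-METRIC
  set rip-metric 5
end-policy
!
router rip
 redistribute connected route-policy SET-RIP-METRIC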



Example 3.3.2 provides the syntax to configure a route policy used to filter routes.
IOS XR supports the add rip-metric option of the RPL to change the metric values for
RIP.
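The example listings are not reproduced in this extract; a minimal IOS XR sketch of both options (policy names, metric values, and interface are illustrative) might look like:

route-policy SET-METRIC
  set rip-metric 3
end-policy
!
route-policy ADD-METRIC
  add rip-metric 2
end-policy
!
router rip
 redistribute ospf 1 route-policy SET-METRIC
 interface GigabitEthernet0/0/0/0
  route-policy ADD-METRIC in

Here SET-METRIC replaces the metric of redistributed routes, while ADD-METRIC offsets the metric of updates received on the interface.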

Example 3.3.2 Route Filtering Using route-policy and the add rip-metric Option

3.3.i - Implement, optimize and troubleshoot manual and auto summarization with any routing protocol

There are two forms of summarization in EIGRP: auto-summaries and manual
summaries.

Auto-Summarization

EIGRP performs an auto-summarization each time it crosses a border between two


different major networks. For example, in Figure 3.14, Router Two advertises only
the 10.0.0.0/8 network to Router One, because the interface Router Two uses to reach
Router One is in a different major network.

On Router One, this looks like the following:

This route is not marked as a summary route in any way; it looks like an internal
route. The metric is the best metric from among the summarized routes. Note that the
minimum bandwidth on this route is 256k, although there are links in the 10.0.0.0
network that have a bandwidth of 56k.

On the router doing the summarization, a route is built to null0 for the summarized
address:

The route to 10.0.0.0/8 is marked as a summary through Null0. The topology table
entry for this summary route looks like the following:

To make Router Two advertise the components of the 10.0.0.0 network instead of a
summary, configure no auto-summary on the EIGRP process on Router Two:

On Router Two

router eigrp 2000


network 172.16.0.0
network 10.0.0.0
no auto-summary

With auto-summary turned off, Router One now sees all of the components of the
10.0.0.0 network:

Manual Summarization

EIGRP allows you to summarize internal and external routes on virtually any bit
boundary using manual summarization. For example, in Figure 3.15, Router Two is
summarizing the 192.1.1.0/24, 192.1.2.0/24, and 192.1.3.0/24 into the CIDR block
192.1.0.0/22.

The configuration on Router Two is shown below:
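The listing itself is not reproduced in this extract; based on the figure, the summary on Router Two would take roughly this form (the EIGRP AS number 2000 is carried over from the earlier example):

interface Serial0
 ip summary-address eigrp 2000 192.1.0.0 255.255.252.0

The mask 255.255.252.0 corresponds to the /22 CIDR block covering 192.1.0.0 through 192.1.3.255.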

Note the ip summary-address eigrp command under interface Serial0, and the
summary route via Null0. On Router One, we see this as an internal route:

3.3.j - Implement, optimize and troubleshoot policy-based routing


Policy-based routing provides a tool for forwarding and routing data packets based on
policies defined by network administrators. In effect, it is a way to have the policy
override routing protocol decisions. Policy-based routing includes a mechanism for
selectively applying policies based on access list, packet size or other criteria. The
actions taken can include routing packets on user-defined routes, setting the
precedence, type of service bits, etc.

In policy-based routing (PBR), the next-hop address is modified or the packets are marked to receive differentiated service; PBR is configured using route maps. Whereas conventional routing is based on destination addresses, PBR is commonly used to choose the next-hop IP address based on the source address. More recently, PBR has also been implemented to mark the IP precedence bits in outbound IP packets so that they comply with Quality of Service (QoS) policies.

Enabling PBR

To enable PBR, you must create a route map that specifies the match criteria and the
resulting action if all of the match clauses are met. Then, you must enable PBR for
that route map on a particular interface. All packets arriving on the specified interface
matching the match clauses will be subject to PBR.

To enable PBR on an interface, use the following commands beginning in global


configuration mode:

Step 1 Router(config)# route-map map-tag [permit | deny] [sequence-number]
Defines a route map to control where packets are output. This command puts the router into route-map configuration mode.

Step 2 Router(config-route-map)# match length min max
Router(config-route-map)# match ip address {access-list-number | name} [...access-list-number | name]
Specifies the match criteria. Although there are many route-map matching options, here you can specify only length and/or ip address.
• length matches the Layer 3 length of the packet.
• ip address matches the source or destination IP address that is permitted by one or more standard or extended access lists.
If you do not specify a match command, the route map applies to all packets.

Step 3 Router(config-route-map)# set ip precedence [number | name]
Router(config-route-map)# set ip df
Router(config-route-map)# set ip vrf vrf_name
Router(config-route-map)# set ip next-hop ip-address [...ip-address]
Router(config-route-map)# set ip next-hop recursive ip-address [...ip-address]
Router(config-route-map)# set interface interface-type interface-number [...type number]
Router(config-route-map)# set ip default next-hop ip-address [...ip-address]
Router(config-route-map)# set default interface interface-type interface-number [...type number]
Specifies the action(s) to take on the packets that match the criteria. You can specify any or all of the following:
• precedence: Sets the precedence value in the IP header. You can specify either the precedence number or name.
• df: Sets the `Don't Fragment' (DF) bit in the IP header.
• vrf: Sets the VPN Routing and Forwarding (VRF) instance.
• next-hop: Sets the next hop to which to route the packet.
• next-hop recursive: Sets the next hop to which to route the packet if the hop is to a router which is not adjacent.
• interface: Sets the output interface for the packet.
• default next-hop: Sets the next hop to which to route the packet if there is no explicit route for this destination.
• default interface: Sets the output interface for the packet if there is no explicit route for this destination.

Step 4 Router(config-route-map)# interface interface-type interface-number
Specifies the interface, and puts the router into interface configuration mode.

Step 5 Router(config-if)# ip policy route-map map-tag
Identifies the route map to use for PBR. One interface can have only one route-map tag, but you can have several route-map entries, each with its own sequence number. Entries are evaluated in order of their sequence numbers until the first match occurs. If no match occurs, packets are routed as usual.
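Putting the steps together, a minimal PBR configuration might look like the following sketch (the access-list number, route-map name, addresses, and interface are illustrative):

access-list 101 permit ip 10.1.1.0 0.0.0.255 any
!
route-map SOURCE-ROUTE permit 10
 match ip address 101
 set ip next-hop 172.16.1.2
!
interface Ethernet0
 ip policy route-map SOURCE-ROUTE

With this configuration, packets arriving on Ethernet0 from 10.1.1.0/24 are forwarded to the next hop 172.16.1.2, while all other packets are routed as usual by the routing table.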

3.3.k Identify and troubleshoot sub-optimal routing


When redistributing between different OSPF processes at multiple points in the network, it is possible to get into situations of suboptimal routing or, even worse, a routing loop.

In the topology below we have the processes OSPF 1 and OSPF 2. Router 1 (R1) and
router 2 (R2) are redistributing from OSPF 1 into OSPF 2.

The configurations for routers R1 and R2 are shown below.



In the above topology, R4's E1/0 is in Area 1 and E0/0 is in Area 0. Therefore, R4 is
an Area Border Router (ABR) advertising the network 10.0.1.0/24 as the inter-area
(IA) route to R1 and R2. R1 and R2 are redistributing this information into OSPF 2.
The redistribute configuration commands are highlighted in the above configurations
of R1 and R2. Therefore, both R1 and R2 are going to learn about 10.0.1.0/24 as IA
via OSPF 1 and as external type 2 (E2) via OSPF 2 because the external link-state
advertisements (LSAs) are propagated throughout the OSPF 2 domain.

Since IA routes are always preferred over E1 or E2 routes, the expectation is to see, in
the routing table of R1 and R2, that 10.0.1.0/24 is an IA route with next-hop R4.
However, when viewing their routing tables, something different is seen - on R1,
10.0.1.0/24 is an IA route with next-hop R4 but on R2, 10.0.1.0/24 is an E2 route with
next-hop R1.

This is the command output of the show ip route command for R1.

Solution 1

Since we know that in the above case, the routers are selecting the best route based on
administrative distance, the logical way to prevent this behavior is to increase the
administrative distance of the external routes in OSPF 2. This way, routes learned via
OSPF 1 will always be preferred over external routes redistributed from OSPF 1 into
OSPF 2. This is done using the router configuration subcommand distance ospf external <value>, as shown in the configurations below.
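The referenced configurations are not reproduced in this extract; on R1 and R2 the change would take roughly this form (the value 200 is illustrative; any value higher than OSPF's default administrative distance of 110 achieves the effect):

router ospf 2
 distance ospf external 200

With the external distance raised, the OSPF 1 inter-area route to 10.0.1.0/24 wins the administrative-distance comparison on both R1 and R2.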
162

The resulting routing table when changing the administrative distance of the external
routes in OSPF 2 is shown below.

It is important to note that in some cases, when there is also redistribution from OSPF 2 into OSPF 1 and there are other routing protocols being redistributed into OSPF 2 (Routing Information Protocol [RIP], Enhanced Interior Gateway Routing Protocol [EIGRP], statics, and so forth), this can lead to suboptimal routing in OSPF 2 for those external routes.

Solution 2

If the ultimate reason to implement two different OSPF processes is to filter certain routes, a feature introduced in Cisco IOS® Software Release 12.2(4)T called OSPF ABR Type 3 LSA Filtering allows you to do the route filtering at the ABR.

Instead of configuring a second OSPF process, the links that are part of OSPF 2, in
the example above, could be configured as another area inside OSPF 1. Then, you can
implement the required route filtering in R1 and R2 with this new feature.
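Under that approach, the filtering could be sketched as follows (the process ID, area number, prefix, and prefix-list name are illustrative):

ip prefix-list BLOCK-PREFIX deny 10.0.1.0/24
ip prefix-list BLOCK-PREFIX permit 0.0.0.0/0 le 32
!
router ospf 1
 area 2 filter-list prefix BLOCK-PREFIX in

The filter-list suppresses the matching type-3 summary LSA from being advertised into area 2, while all other prefixes pass through.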

3.3.l - Implement and troubleshoot bidirectional forwarding detection
BFD Operation

BFD provides a low-overhead, short-duration method of detecting failures in the


forwarding path between two adjacent routers, including the interfaces, data links, and
forwarding planes. BFD is a detection protocol that you enable at the interface and
routing protocol levels. Cisco supports the BFD asynchronous mode, which depends
on the sending of BFD control packets between two systems to activate and maintain
BFD neighbor sessions between routers. Therefore, in order for a BFD session to be
created, you must configure BFD on both systems (or BFD peers). Once BFD has
been enabled on the interfaces and at the router level for the appropriate routing
protocols, a BFD session is created, BFD timers are negotiated, and the BFD peers
will begin to send BFD control packets to each other at the negotiated interval.
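As a sketch, enabling BFD for OSPF on an IOS router might look like this (the interface, timer values, and process ID are illustrative; the interval values are in milliseconds):

interface FastEthernet0/1
 bfd interval 50 min_rx 50 multiplier 3
!
router ospf 1
 bfd all-interfaces

The interface-level bfd interval command must also be configured on the peer; the session comes up and timers are negotiated once both sides are configured, as described above.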

BFD provides fast BFD peer failure detection times independently of all media types, encapsulations, topologies, and routing protocols (BGP, EIGRP, IS-IS, and OSPF).
sending rapid failure detection notices to the routing protocols in the local router to
initiate the routing table recalculation process, BFD contributes to greatly reduced
overall network convergence time. Figure 3.16 shows a simple network with two
routers running OSPF and BFD. When OSPF discovers a neighbor (1) it sends a
request to the local BFD process to initiate a BFD neighbor session with the OSPF
neighbor router (2). The BFD neighbor session with the OSPF neighbor router is
established (3).

Figure 3.16

Figure 3.17 shows what happens when a failure occurs in the network (1). The BFD
neighbor session with the OSPF neighbor router is torn down (2). BFD notifies the
local OSPF process that the BFD neighbor is no longer reachable (3). The local OSPF
process tears down the OSPF neighbor relationship (4). If an alternative path is
available the routers will immediately start converging on it.

Figure 3.17 Tearing Down an OSPF Neighbor Relationship

BFD Detection of Failures

Once a BFD session has been established and timer negotiations are complete, BFD
peers send BFD control packets that act in the same manner as an IGP hello protocol
to detect liveliness, except at a more accelerated rate. The following information
should be noted:

 BFD is a forwarding path failure detection protocol. BFD detects a failure, but
the routing protocol must take action to bypass a failed peer.

 Typically, BFD can be used at any protocol layer. However, the Cisco
implementation of BFD for Cisco IOS Releases 12.2(18)SXE, 12.0(31)S, and
12.4(4)T supports only Layer 3 clients, in particular, the BGP, EIGRP, IS-IS,
and OSPF routing protocols.

 Cisco devices will use one BFD session for multiple client protocols in the
Cisco implementation of BFD for Cisco IOS Releases 12.2(18)SXE,
12.0(31)S, and 12.4(4)T. For example, if a network is running OSPF and
EIGRP across the same link to the same peer, only one BFD session will be
established, and BFD will share session information with both routing
protocols.

BFD Version Interoperability

Cisco IOS Release 12.4(9)T supports BFD Version 1 as well as BFD Version 0. All
BFD sessions come up as Version 1 by default and will be interoperable with Version
0. The system automatically performs BFD version detection, and BFD sessions
between neighbors will run in the highest common BFD version between neighbors.
For example, if one BFD neighbor is running BFD Version 0 and the other BFD
neighbor is running Version 1, the session will run BFD Version 0. The output from
the show bfd neighbors [details] command will verify which BFD version a BFD
neighbor is running.

Benefits of Using BFD for Failure Detection

When you deploy any feature, it is important to consider all the alternatives and be
aware of any trade-offs being made.

The closest alternative to BFD in conventional EIGRP, IS-IS, and OSPF deployments
is the use of modified failure detection mechanisms for EIGRP, IS-IS, and OSPF
routing protocols.

If you set EIGRP hello and hold timers to their absolute minimums, the failure detection time for EIGRP falls within a one- to two-second range.

If you use fast hellos for either IS-IS or OSPF, these Interior Gateway Protocol (IGP)
protocols reduce their failure detection mechanisms to a minimum of one second.

There are several advantages to implementing BFD over reduced timer mechanisms
for routing protocols:

 Although reducing the EIGRP, IS-IS, and OSPF timers can result in a minimum detection time of one to two seconds, BFD can provide failure detection in less than one second.

 Because BFD is not tied to any particular routing protocol, it can be used as a
generic and consistent failure detection mechanism for EIGRP, IS-IS, and
OSPF.

 Because some parts of BFD can be distributed to the data plane, it can be less
CPU-intensive than the reduced EIGRP, IS-IS, and OSPF timers, which exist
wholly at the control plane.

3.3.m - Implement and troubleshoot loop prevention mechanisms


Distance vector protocols are susceptible to routing loops. Split horizon is one of the features of distance vector routing protocols that helps prevent them: it prevents a router from advertising a route back out the interface on which the route was learned.

For example, consider the following network topology.

Router R2 has a route to the subnet 10.0.1.0/24 that is advertised to router R1 by


using RIP. Router R1 receives the update and stores the route in its routing table. Because router R1 knows that the routing update for that route came from R2, it will not advertise the route back to router R2. Without this rule, if the network 10.0.1.0/24 went down, R2 could learn the stale route back from R1 and conclude that R1 can reach the subnet. R2 would then forward packets for the subnet to R1, and R1 would send them straight back to R2 (since R1's own route points to R2), thereby creating a routing loop.

Route poisoning

Route poisoning is another method for preventing routing loops employed by distance vector routing protocols. When a router detects that one of its directly connected routes has failed, it sends the advertisement for that route with an infinite metric ("poisoning the route"). A router that receives the update knows that the route has failed and no longer uses it.

Consider the following example.

Router R1 is directly connected to the 10.0.1.0/24 subnet. Router R1 runs RIP and the subnet is advertised to R2. When R1's Fa0/1 interface fails, a route advertisement is sent by R1 to R2, indicating that the route has failed. The route carries a metric of 16, which is more than RIP's maximum hop count of 15, so R2 considers the route to be unreachable.

Hold down

Hold down is a loop-prevention mechanism employed by distance vector routing protocols. This feature prevents a router from accepting new information about a failed route for a period of time. When a router receives information about an unreachable route, a hold-down timer is started. The router ignores all routing updates for that route until the timer expires (by default, 180 seconds in RIP). The only updates allowed during that period are those sent from the router that originally advertised the route; if that router advertises the route again, the hold-down timer is stopped and the routing information is processed.

An example will help you understand the concept better. Consider the following
network topology.

Router R1 has advertised its directly connected subnet 10.0.1.0/24 through RIP. After
some period of time, the interface Fa0/1 on R1 fails and router R1 sends the poisoned
route to R2. R2 receives the routing update, marks the route as unreachable and starts
the hold down timer. During that time all updates from any other routers about that
route are ignored to prevent routing loops. If interface Fa0/1 on R1 comes back up, R1 again advertises the route. R2 processes that update even though the hold-down timer is still running, because the update was sent by the same router that originally advertised the route.

3.3.n - Implement and troubleshoot routing protocol authentication


There are two general ways that authentication is implemented by most routing protocols: a routing-protocol-centric solution that configures the passwords or keys within the routing protocol configuration itself, or a broader solution that uses separately configured keys able to be shared by multiple routing protocols. OSPF and BGP use the former of these methods and configure the specific authentication type and passwords/keys within their respective configurations. RIP and EIGRP use the latter method, relying on a separate key chain mechanism that is configured once and then referenced by either RIP or EIGRP.

Keep in mind that these authentication solutions do not encrypt the information exchanged between the devices; they simply verify the identity of those devices.

Key Chains
The idea behind a key chain is simple: it replicates, in electronic form, the physical key chain that most people are familiar with. The key chain functionality provides a mechanism for storing a number of different electronic keys, the key-string value associated with each key, and the lifetime for which each key is valid. Any one of these configured keys can then be used by RIP or EIGRP for authentication.

Routing Protocol Authentication Configuration


As there are two different ways to configure routing protocol authentication, this section reviews OSPF and BGP first, as they require individualized configuration. The configuration of key chains and how they are used by RIP and EIGRP is then covered.

OSPF Authentication

For OSPF, the type of authentication and the method of authentication exchange determine which commands are used. OSPF supports two different types of authentication configuration: authentication limited to a specific interface, or authentication configured over an entire OSPF area. Whichever of these options is selected, there are also two different methods of authentication exchange that can be configured for each: clear-text simple exchange, or MD5 exchange. When using MD5, the password/key that is configured is not sent between the exchanging devices; instead, a hash is calculated and sent, and the remote device verifies this hash to confirm the sender's identity.

The commands required to set up OSPF interface (network) authentication are shown in Table 3.3.n.1.

Step 1 Enter privileged mode. router>enable

Step 2 Enter global configuration mode. router#configure terminal

Step 3 Enter interface configuration mode. router(config)#interface interface-type interface-slot

Step 4 Configure OSPF (network) authentication. To configure MD5 authentication, use the message-digest keyword. router(config-if)#ip ospf authentication [message-digest]

Step 5 Configure the OSPF authentication key. router(config-if)#ip ospf authentication-key key or router(config-if)#ip ospf message-digest-key key-id md5 key

Step 6 Exit configuration mode. router(config-if)#end
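Following these steps, a sketch of interface-level MD5 authentication (the interface name, key ID, and key string are illustrative; both neighbors must use the same key ID and key):

interface Serial0
 ip ospf authentication message-digest
 ip ospf message-digest-key 1 md5 s3cr3t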

The commands required to set up OSPF area authentication are shown in Table 3.3.n.2.

Step 1 Enter privileged mode. router>enable

Step 2 Enter global configuration mode. router#configure terminal

Step 3 Enter router configuration mode. router(config)#router ospf process-id

Step 4 Configure OSPF area authentication. To configure MD5 authentication, use the message-digest keyword. router(config-router)#area area-id authentication [message-digest]

Step 5 Enter interface configuration mode. router(config-router)#interface interface-type interface-slot

Step 6 Configure the OSPF authentication key. router(config-if)#ip ospf authentication-key key or router(config-if)#ip ospf message-digest-key key-id md5 key

Step 7 Exit configuration mode. router(config-if)#end
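Similarly, a sketch of area-wide MD5 authentication (the process ID, area, interface, and key are illustrative):

router ospf 1
 area 0 authentication message-digest
!
interface Serial0
 ip ospf message-digest-key 1 md5 s3cr3t

With area authentication, the per-interface key is still required on every interface in the area; the area command only mandates that authentication be used.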

BGP Authentication
The configuration of authentication with BGP is very simple as it requires only a
single configuration command. Unlike OSPF, BGP only supports the use of MD5
authentication.

The commands required to set up BGP authentication are shown in Table 3.3.n.3.

Step 1 Enter privileged mode. router>enable

Step 2 Enter global configuration mode. router#configure terminal

Step 3 Enter router configuration mode. router(config)#router bgp autonomous-system

Step 4 Configure BGP authentication. router(config-router)#neighbor {ip-address | peer-group-name} password key

Step 5 Exit configuration mode. router(config-router)#end
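A sketch of BGP MD5 authentication (the AS number, neighbor address, and password are illustrative; the same password must be configured on both peers):

router bgp 65001
 neighbor 192.0.2.2 password s3cr3t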

RIP and EIGRP Authentication


The configuration of both RIP and EIGRP utilize key chains for their authentication
configuration.

Key Chain Configuration


The key chain configuration provides the ability to set up multiple keys that can be used by the supporting features, including keys whose validity periods overlap. Keys can also be configured with specific transmit (send) and receive (accept) lifetimes, which makes it possible to have keys change automatically at a predetermined time. The configuration required to set up a key chain is shown in Table 3.3.n.4.

Step 1 Enter privileged mode. router>enable

Step 2 Enter global configuration mode. router#configure terminal

Step 3 Create a key chain and enter key chain configuration mode. router(config)#key chain name-of-key-chain

Step 4 Create a key and enter key configuration mode. router(config-keychain)#key key-id

Step 5 Configure a secret key. router(config-keychain-key)#key-string key-string

Step 6 Configure the receive key lifetime. router(config-keychain-key)#accept-lifetime start-time {infinite | end-time | duration seconds}

Step 7 Configure the transmit key lifetime. router(config-keychain-key)#send-lifetime start-time {infinite | end-time | duration seconds}

Step 8 Exit configuration mode. router(config-keychain-key)#end
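As a sketch, a key chain referenced by EIGRP MD5 authentication might look like this (the chain name, AS number, interface, and key string are illustrative):

key chain CHAIN1
 key 1
  key-string s3cr3t
!
interface Serial0
 ip authentication mode eigrp 100 md5
 ip authentication key-chain eigrp 100 CHAIN1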

3.4 RIP (v2 and v6)

3.4.a Implement and troubleshoot RIPv2


RIPv2 was first described in RFC 1388 and RFC 1723 (1994); the current RFC is
2453, written in November 1998. Although current environments use advanced
routing protocols such as OSPF and EIGRP, there still are networks using RIP. The
need to use VLSMs and other requirements prompted the definition of RIPv2.

RIPv2 improves upon RIPv1 with the ability to use VLSM, with support for route
authentication, and with multicasting of route updates. RIPv2 supports CIDR. It still
sends updates every 30 seconds and retains the 15-hop limit; it also uses triggered
updates. RIPv2 still uses UDP port 520; the RIP process is responsible for checking the version number. It retains the loop-prevention strategies of poison reverse and a bounded maximum metric to stop counting to infinity. On Cisco routers, RIPv2 has the same administrative distance as RIPv1, which is 120. Finally, RIPv2 uses the IP address 224.0.0.9 when multicasting
route updates to other RIP routers. As in RIPv1, RIPv2 will, by default, summarize IP
networks at network boundaries. You can disable auto-summarization if required.

You can use RIPv2 in small networks where VLSM is required. It also works at the
edge of larger networks.

Authentication
Authentication can prevent communication with any RIP routers that are not intended
to be part of the network, such as UNIX stations running routed. Only RIP updates
with the authentication password are accepted. RFC 1723 defines simple plain-text
authentication for RIPv2.

Cisco implementation of RIPv2 supports two modes of authentication: plain text


authentication and Message Digest 5 (MD5) authentication. Plain text authentication
mode is the default setting in every RIPv2 packet, when authentication is enabled.
Plain text authentication should not be used when security is an issue, because the
unencrypted authentication password is sent in every RIPv2 packet.

RIP version 1 (RIPv1) does not support authentication. If you are sending and
receiving RIPv2 packets, you can enable RIP authentication on an interface.

Prerequisites

Requirements

Readers of this document should have a basic understanding of the following:

 RIPv1 and RIPv2

Components Used
This document is not restricted to specific software and hardware versions. RIPv2 has been supported since Cisco IOS Software Release 11.1, so all of the commands given in the configuration are supported on Cisco IOS Software Release 11.1 and later.

The configuration in the document is tested and updated using these software and
hardware versions:

 Cisco 2500 Series Router


 Cisco IOS Software Version 12.3(3)

All of the devices used in this document started with a cleared (default) configuration.
If your network is live, make sure that you understand the potential impact of any
command.

Background Information

Security is one of the primary concerns of network designers today. Securing a


network includes securing the exchange of routing information between routers, such as ensuring that the information entered into the routing table is valid and not originated or tampered with by someone trying to disrupt the network. An attacker might try to introduce invalid updates to trick the router into sending data to the wrong destination, or to seriously degrade network performance. In addition, invalid route updates might end up in the routing table due to poor configuration (such as not using the passive-interface command on the network boundary) or due to a malfunctioning router. Because of this, it is prudent to authenticate the routing update process running on a router.

Configure

This section presents the information needed to configure the features described in this document.


Network Diagram
This document uses the network setup shown in the diagram below.

The network above, which is used for the following configuration examples, consists
of two routers; router RA and router RB, both of which are running RIP and
periodically exchanging routing updates. It is required that this exchange of routing
information over the serial link be authenticated.

Configurations
Carry out these steps to configure authentication in RIPv2:

1. Define a key chain with a name.

The key chain determines the set of keys that can be used on the interface. If a key
chain is not configured, no authentication is performed on that interface.

2. Define the key or keys on the key chain.

3. Specify the password or key-string to be used in the key.

This is the authentication string that must be sent and received in the packets using
the routing protocol being authenticated. (In the example given below, the value
of the string is 234.)

4. Enable authentication on an interface and specify the key chain to be used.

Since authentication is enabled on a per interface basis, a router running RIPv2


can be configured for authentication on certain interfaces and can operate without
any authentication on other interfaces.

5. Specify whether the interface will use plain text or MD5 authentication.

The default authentication used in RIPv2 is plain text authentication, when


authentication is enabled in the previous step. So, if using plain text
authentication, this step is not required.

6. Configure key management (This step is optional).

Key management is a method of controlling authentication keys. It is used to migrate from one authentication key to another.

Configuring Plain Text Authentication



One of the two ways in which RIP updates can be authenticated is using plain text
authentication. This can be configured as shown in the tables below.
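The referenced tables are not reproduced in this extract; a sketch following the steps above (the chain and interface names are illustrative; the key string 234 matches the value referenced in the steps) might look like:

key chain RIPCHAIN
 key 1
  key-string 234
!
interface Serial0
 ip rip authentication key-chain RIPCHAIN

Because plain text is the default mode once authentication is enabled, no mode command is needed here; adding ip rip authentication mode md5 under the interface would switch the exchange to MD5.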
175

Troubleshoot

Troubleshooting Commands

The debug ip rip command can be used for troubleshooting RIPv2 authentication-
related problems.

The following is an example of debug ip rip command output when one of the authentication-related parameters that must be identical between the neighboring routers does not match. This may result in one or both of the routers not installing the received routes in their routing table.

3.4.b Describe RIPv6 (RIPng)


The future of TCP/IP is the new Internet Protocol version 6 (IPv6), which makes
some very important changes to IP, especially with regard to addressing. Since IPv6
addresses are different from IPv4 addresses, everything that works with IP addresses must change to function under IPv6. This includes routing protocols, which exchange
addressing information.

To ensure a future for the Routing Information Protocol, a new IPv6-compatible


version had to be developed. This new version was published in 1997 in RFC 2080,
RIPng for IPv6, where the ng stands for next generation (IPv6 is also sometimes
called "IP next generation").

RIPng, which is also occasionally seen as RIPv6 for obvious reasons, was designed to
be as similar as possible to the current version of RIP for IPv4, which is RIP Version
2 (RIP-2). In fact, RFC 2080 describes RIPng as "the minimum change" possible to
RIP to allow it to work on IPv6. Despite this effort, it was not possible to define
RIPng as just a new version of the older RIP protocol, like RIP-2 was. RIPng is a new
protocol, which was necessary because of the significance of the changes between
IPv4 and IPv6—especially the change from 32-bit to 128-bit addresses in IPv6, which
necessitated a new message format.

RIPng Version-Specific Features


Even though RIPng is a new protocol, a specific effort was made to make RIPng like
its predecessors. RIPng also does not introduce any specific new features compared to
RIP-2, except those needed to implement RIP on IPv6.

RIPng maintains most of the enhancements introduced in RIP-2; some are


implemented as they were in RIP-2, while others appear in a modified form. Here's
specifically how the five extensions in RIP-2 are implemented in RIPng:

 Classless Addressing Support and Subnet Mask Specification: In IPv6, all addresses are classless and are specified using an address and a prefix length instead of a subnet mask. Thus, a field for the prefix length is provided for each entry instead of a subnet mask field.

 Next Hop Specification: This feature is maintained in RIPng, but implemented


differently. Due to the large size of IPv6 addresses, including a Next Hop field in
the format of RIPng RTEs would almost double the size of every entry. Since
Next Hop is an optional feature, this would be wasteful. Instead, when a Next Hop
is needed, it is specified in a separate routing entry.
 Authentication: RIPng does not include its own authentication mechanism. It is
assumed that if authentication and/or encryption are needed, they will be provided
using the standard IPSec features defined for IPv6 at the IP layer. This is more
efficient than having individual protocols like RIPng perform authentication.
 Route Tag: This field is implemented the same way as it is in RIP-2.
 Use of Multicasting: RIPng uses multicasts for transmissions, using reserved
IPv6 multicast address FF02::9.
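
The extensions above can be seen in a minimal RIPng configuration. The following
IOS sketch is illustrative only; the process name RIPNG and the interface are
hypothetical values, not taken from the text:

Device(config)# ipv6 unicast-routing
Device(config)# interface GigabitEthernet0/0
Device(config-if)# ipv6 rip RIPNG enable

Note that, unlike RIP-2, there is no network command; RIPng is enabled per
interface, and updates for that interface are then multicast to FF02::9.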

RIPng Messaging
There are two basic RIPng message types, RIP Request and RIP Response, which are
exchanged using the User Datagram Protocol (UDP) as with RIP-1 and RIP-2. Since
RIPng is a new protocol, it cannot use the same UDP reserved port number 520 used
for RIP-1/RIP-2. Instead, RIPng uses well-known port number 521. The semantics for
the use of this port is the same as those used for port 520 in RIP-1 and RIP-2. For
convenience, here are the rules again:

RIP Request messages are sent to UDP destination port 521. They may have a source
port of 521 or may use an ephemeral port number.

RIP Response messages sent in reply to an RIP Request are sent with a source port of
521, and a destination port equal to whatever source port the RIP Request used.
Unsolicited RIP Response messages (sent on a routine basis and not in response to a
request) are sent with both the source and destination ports set to 521.

RIPng Message Format


IPv6 RIP functions the same and offers the same benefits as RIP in IPv4. RIP
enhancements for IPv6, detailed in RFC 2080, include support for IPv6 addresses and
prefixes, and the use of the all-RIP-devices multicast group address FF02::9 as the
destination address for RIP update messages.

In the Cisco software implementation of IPv6 RIP, each IPv6 RIP process maintains a
local routing table, referred to as a Routing Information Database (RIB). The IPv6
RIP RIB contains a set of best-cost IPv6 RIP routes learned from all its neighboring
networking devices. If IPv6 RIP learns the same route from two different neighbors,
but with different costs, it will store only the lowest cost route in the local RIB. The
RIB also stores any expired routes that the RIP process is advertising to its neighbors
running RIP. IPv6 RIP will try to insert every non-expired route from its local RIB
into the master IPv6 RIB. If the same route has been learned from a different routing
protocol with a better administrative distance than IPv6 RIP, the RIP route will not be
added to the IPv6 RIB but the RIP route will still exist in the IPv6 RIP RIB.

3.5 EIGRP (for IPv4 and IPv6)


Like OSPFv3 compared to OSPFv2, EIGRP for IPv6 has a great deal in common with
its IPv4 counterpart.

Comparison of EIGRP for IPv4 and for IPv6


IPv6 EIGRP requires a routing process to be defined and enabled (no shutdown) and a
router ID (in 32-bit IPv4 address format) to be manually assigned using the router-id
command, both of which must be done in IPv6 router configuration mode before the
IPv6 EIGRP routing process can start. These are two of the differences between
EIGRP for IPv4 and IPv6. Some others include the following:

 Configured on the interface— As with OSPFv3 (and RIPng), EIGRP


advertises networks based on interface commands rather than routing process
network commands. For example, the command to enable IPv6 EIGRP AS 100 on
an interface is ipv6 eigrp 100.
 Must no shut the routing process— When EIGRP for IPv6 is first configured
on an interface; this action creates the IPv6 EIGRP routing process on the router.
However, the routing process is initially placed in the shutdown state, and requires
a no shutdown command in router configuration mode to become active.
 Router ID— EIGRP for IPv6 requires a 32-bit router ID (a dotted-decimal
IPv4 address) to be configured before it starts. A router does not complain about
the lack of an EIGRP RID, however, so remember to configure one statically


when doing a no shutdown in the routing process.
 Passive interfaces— In IPv6 EIGRP, passive interfaces are configured in the
routing process only. That is, no related configuration commands are required on
the interface.
 Route filtering— IPv6 EIGRP performs route filtering using only the
distribute-list prefix-list command.
 Automatic summarization— IPv6 EIGRP has no equivalent to the IPv4 (no)
auto-summary command, because there is no concept of classful routing in IPv6.
 Cisco IOS support— EIGRP for IPv6 is supported in Cisco IOS beginning
with Release 12.4(6)T.

Unchanged Features
All of the following EIGRP features work the same way in IPv6 as they do in IPv4.
The only exceptions are the commands themselves, with ipv6 instead of ip in interface
commands:

 Metric weights
 Authentication
 Link bandwidth percentage
 Split horizon
 Next-hop setting, configured via the interface-level ipv6 next-hop-self eigrp as
command
 Hello interval and hold time configuration
 Address summarization (syntax differs slightly to accommodate IPv6 address
format)
 Stub networks (syntax and options differ slightly)
 Variance
 Most other features

IPv6 EIGRP uses authentication keys configured exactly as they are for IPv4 EIGRP.
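
As a hedged illustration of that point, the following classic-mode IOS sketch applies
a key chain to IPv6 EIGRP AS 100 on an interface; the key-chain name, key string,
AS number, and interface are hypothetical:

Device(config)# key chain EIGRP-KEYS
Device(config-keychain)# key 1
Device(config-keychain-key)# key-string S3cr3t
Device(config-keychain-key)# exit
Device(config)# interface GigabitEthernet0/0
Device(config-if)# ipv6 authentication mode eigrp 100 md5
Device(config-if)# ipv6 authentication key-chain eigrp 100 EIGRP-KEYS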

Route Filtering
IPv6 EIGRP uses prefix lists for route filtering. To filter routes from EIGRP updates,
configure an IPv6 prefix list that permits or denies the desired prefixes. Then apply it
to the EIGRP routing process using the distribute-list prefix-list name command.
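
For example, the following sketch (with a hypothetical prefix-list name, prefix, and
AS number) filters 2001:DB8:1::/48 from inbound IPv6 EIGRP updates while
permitting everything else:

Device(config)# ipv6 prefix-list FILTER seq 5 deny 2001:DB8:1::/48
Device(config)# ipv6 prefix-list FILTER seq 10 permit ::/0 le 128
Device(config)# ipv6 router eigrp 100
Device(config-rtr)# distribute-list prefix-list FILTER in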

Configuring EIGRP for IPv6


The basic steps required to configure IPv6 EIGRP are quite similar to those for IPv4
EIGRP, with several additions:

 Step 1. Enable IPv6 unicast routing.


 Step 2. Configure EIGRP on at least one router interface.
 Step 3. In the EIGRP routing process, assign a router ID.
 Step 4. Issue the no shutdown command in the EIGRP routing process
to activate the protocol.
 Step 5. Use the relevant show commands to check your configuration.
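
The five steps above map to a configuration along these lines; the AS number 100,
router ID 1.1.1.1, and interface are hypothetical examples:

Device(config)# ipv6 unicast-routing
Device(config)# interface GigabitEthernet0/0
Device(config-if)# ipv6 eigrp 100
Device(config-if)# exit
Device(config)# ipv6 router eigrp 100
Device(config-rtr)# eigrp router-id 1.1.1.1
Device(config-rtr)# no shutdown

Commands such as show ipv6 eigrp neighbors and show ipv6 eigrp topology can then
be used to verify the configuration.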

3.5.a Describe packet types

EIGRP Packet Types


EIGRP uses the following packet types: hello and acknowledgment, update, and
query and reply.

Hello packets are multicast for neighbor discovery/recovery and do not require
acknowledgment. An acknowledgment packet is a hello packet that has no data.
Acknowledgment packets contain a nonzero acknowledgment number and always are
sent by using a unicast address.

Update packets are used to convey reachability of destinations. When a new neighbor
is discovered, unicast update packets are sent so that the neighbor can build up its
topology table. In other cases, such as a link-cost change, updates are multicast.
Updates always are transmitted reliably.

Query and reply packets are sent when a destination has no feasible successors. Query
packets are always multicast. Reply packets are sent in response to query packets to
instruct the originator not to re-compute the route because feasible successors exist.
Reply packets are unicast to the originator of the query. Both query and reply packets
are transmitted reliably.

Packet Formats

EIGRP uses five packet types:

 Hello/Acks
 Updates
 Queries
 Replies
 Requests

As stated earlier, hellos are multicast for neighbor discovery/recovery. They do not
require acknowledgment. A hello with no data is also used as an acknowledgment
(ack). Acks are always sent using a unicast address and contain a non-zero
acknowledgment number.

Updates are used to convey reachability of destinations. When a new neighbor is


discovered, update packets are sent so the neighbor can build up its topology table. In
this case, update packets are unicast. In other cases, such as a link cost change,
updates are multicast. Updates are always transmitted reliably.

Queries and replies are sent when destinations go into Active state. Queries are
always multicast unless they are sent in response to a received query. In this case, it is
unicast back to the successor that originated the query. Replies are always sent in
response to queries to indicate to the originator that it does not need to go into Active
state because it has feasible successors. Replies are unicast to the originator of the
query. Both queries and replies are transmitted reliably.

Request packets are used to get specific information from one or more neighbors.
Request packets are used in route server applications. They can be multicast or
unicast. Requests are transmitted unreliably.

EIGRP uses the Reliable Transport Protocol (RTP) to handle the guaranteed and
reliable delivery of EIGRP packets to neighbors. "Guaranteed and reliable" sounds a
lot like TCP, but the two are quite different in how they operate. Not all EIGRP
packets are going to be sent reliably.

EIGRP uses five packet types. You're likely familiar with Hello packets, used for
neighbor discovery and to keep neighbor relationships alive. EIGRP Hello packets are
multicast to 224.0.0.10.

Acknowledgement packets themselves are simply hello packets that contain no data.
Neither Hello nor Ack packets use RTP, and are therefore considered unreliable.

Update packets are sent to new neighbors to allow the neighbor to build an accurate
routing and topology table, and are also sent when a change in the network occurs.
Update packets are generally multicast packets.

Query packets are sent when a router loses a successor route and has no feasible
successor ("DUAL Query").

Reply packets are sent in response to query packets, and a reply packet indicates that a
new route to the destination has been found. Update, query, and reply packets all use
RTP and are considered reliable.

To see how many of these packets have passed through a router, run show ip eigrp
traffic.

R1#show ip eigrp traffic


IP-EIGRP Traffic Statistics for process 100
Hellos sent/received: 2/2
Updates sent/received: 13/4
Queries sent/received: 0/0
Replies sent/received: 0/0
Acks sent/received: 0/2
Input queue high water mark 1, 0 drops
SIA-Queries sent/received: 0/0
SIA-Replies sent/received: 0/0

To review: Hello and ACK packets are unreliable. Reply, Query, and Update packets
are reliable.

In the following example, R1 has just had EIGRP enabled on a Serial interface. R1
will send an EIGRP Hello packet out that interface in an attempt to find potential
neighbors. EIGRP Hello packets are multicast to 224.0.0.10.

The downstream router R2 receives this Hello and checks it to verify that certain
values in the Hello packet - including the Autonomous System number - match those
on R2.

If those values match, R2 responds with an EIGRP Update packet, which contains all
the EIGRP-derived routes that R2 knows. (R1 will also receive a Hello packet,
multicast from R2.)

Note the EIGRP Update packet going back to R1 is a unicast.

Generally, EIGRP Update packets are multicast to 224.0.0.10, just as EIGRP Hello
packets are. This particular situation is an exception to that rule - during the initial
exchange of routes between two new EIGRP neighbors, update packets are unicast
rather than multicast.

Finally, R1 will send an EIGRP Acknowledgement packet to let R2 know the routes
in the Update packet were received. R1 will also send an Update packet of its own,
unicast to R2, containing all EIGRP routes R1 has. R2 will respond with an ack of its
own.

EIGRP Packets
EIGRP will use six different packet types when communicating with its neighboring
EIGRP routers:

 Hello Packets - EIGRP sends Hello packets once it has been enabled on a router
for a particular network. These messages are used to identify neighbors and once
identified, serve or function as a keep alive mechanism between neighboring
devices. EIGRP Hello packets are sent to the link local Multicast group address
224.0.0.10. Hello packets sent by EIGRP do not require an Acknowledgment to
be sent confirming that they were received. Because they require no explicit
acknowledgment, Hello packets are classified as unreliable EIGRP packets.
EIGRP Hello packets have an OPCode of 5.
 Acknowledgement Packets - An EIGRP Acknowledgment (ACK) packet is
simply an EIGRP Hello packet that contains no data. Acknowledgement packets
are used by EIGRP to confirm reliable delivery of EIGRP packets. ACKs are
always sent to a Unicast address, which is the source address of the sender of the
reliable packet, and not to the EIGRP Multicast group address. In addition,
Acknowledgement packets will always contain a non-zero acknowledgment
number. The ACK uses the same OPCode as the Hello Packet because it is
essentially just a Hello that contains no information. The OPCode is 5.
 Update Packets - EIGRP Update packets are used to convey reachability of
destinations. Update packets contain EIGRP routing updates. When a new
neighbor is discovered, Update packets are sent via Unicast to the neighbor, which
can then build up its EIGRP Topology Table. It is important to know that Update
packets are always transmitted reliably and always require explicit
acknowledgement. Update packets are assigned an OPCode of 1.
 Query Packet - EIGRP Query packets are Multicast and are used to reliably
request routing information. EIGRP Query packets are sent to neighbors when a
route is not available and the router needs to ask about the status of the route for
fast convergence. If the router that sends out a Query does not receive a response
from any of its neighbors, it resends the Query as a Unicast packet to the non-
responsive neighbor(s). If no response is received in 16 attempts, the EIGRP
neighbor relationship is reset. EIGRP Query packets are assigned an OPCode of 3.
 Reply Packets - EIGRP Reply packets are sent in response to Query packets. The
Reply packets are used to reliably respond to a Query packet. Reply packets are
Unicast to the originator of the Query. The EIGRP Reply packets are assigned an
OPCode of 4.
 Request Packets - Request packets are used to get specific information from one
or more neighbors and are used in route server applications. These packet types
can be sent either via Multicast or Unicast, but are always transmitted unreliably.

3.5.a (ii) Route types (internal, external)


EIGRP supports internal and external routes. Internal routes originate within an
EIGRP AS. Therefore, a directly attached network that is configured to run EIGRP is
considered an internal route and is propagated with this information throughout the
EIGRP AS. External routes are learned by another routing protocol or reside in the
routing table as static routes. These routes are tagged individually with the identity of
their origin.

External routes are tagged with the following information:

Router ID of the EIGRP router that redistributed the route


 AS number of the destination
 Configurable administrator tag
 ID of the external protocol
 Metric from the external protocol
 Bit flags for default routing

Route tagging allows the network administrator to customize routing and maintain
flexible policy controls. Route tagging is particularly useful in transit ASs, where
EIGRP typically interacts with an inter-domain routing protocol that implements more
global policies, resulting in a very scalable, policy-based routing.

Route Tagging
EIGRP has the notion of internal and external routes. Internal routes are ones that
have been originated within an EIGRP autonomous system (AS). Therefore, a directly
attached network that is configured to run EIGRP is considered an internal route and
is propagated with this information throughout the EIGRP AS. External routes are
ones that have been learned by another routing protocol or reside in the routing table
as static routes. These routes are tagged individually with the identity of their
origination.

As an example, suppose there is an AS with three border routers. A border router is


one that runs more than one routing protocol. The AS uses EIGRP as the routing
protocol. Let's say two of the border routers, BR1 and BR2, use Open Shortest Path
First (OSPF) and the other, BR3, uses Routing Information Protocol (RIP).

Routes learned by one of the OSPF border routers, BR1 could be conditionally
redistributed into EIGRP. This means that EIGRP running in BR1 advertises the
OSPF routes within its own AS. When it does so, it advertises the route and tags it as
an OSPF learned route with a metric equal to the routing table metric of the OSPF
route. The router-id is set to BR1. The EIGRP route propagates to the other border
routers. Let's say that BR3, the RIP border router, also advertises the same
destinations as BR1. Therefore BR3 redistributes the RIP routes into the EIGRP AS.
BR2, then, has enough information to determine the AS entry point for the route, the
original routing protocol used, and the metric. Further, the network administrator
could assign tag values to specific destinations when redistributing the route. BR2 can
use any of this information to use the route or re-advertise it back out into OSPF.

Using EIGRP route tagging can give a network administrator flexible policy controls
and help customize routing. Route tagging is particularly useful in transit ASes where
EIGRP would typically interact with an inter-domain routing protocol that
implements more global policies. This combines for very scalable policy based
routing.

IP Internal Routes TLV


An internal route is a path to a destination within the EIGRP autonomous system.

Next Hop – is the IP address of the next-hop (neighboring) router.

 Delay – is the sum of the configured delays expressed in units of 10


microseconds. A delay of 0xFFFFFFFF indicates an unreachable
route.
 Bandwidth – is 256 x BW(min), or 2,560,000,000 divided by the lowest
configured bandwidth of any interface along the route.
 MTU – is the smallest Maximum Transmission Unit of any link along the
route to the destination. This value is not used for the metric calculation.
 Hop Count – is a number between 0x01 and 0xFF indicating the number of
hops to the destination. A router will advertise a directly connected network
with a hop count of 0.
 Reliability – is a number between 0x01 and 0xFF that reflects the total
outgoing error rates of the interfaces along the route, calculated on a five-
minute exponentially weighted average. 0xFF indicates a 100 percent reliable
link.
 Load – is also a number between 0x01 and 0xFF, reflecting the total outgoing
load of the interfaces along the route, calculated on a five-minute
exponentially weighted average. 0x01 indicates a minimally loaded link.
 Reserved – is an unused field and is always 0x0000
 Prefix Length – specifies the number of network bits of the address mask.
 Destination – is the destination address of the route.

IP External Routes TLV


An external route is a path that leads to a destination outside of the EIGRP
autonomous system and that has been redistributed into the EIGRP domain.

 Next Hop – is the next-hop IP address. On a multi-access network, the router


advertising the route might not be the best next-hop router to the destination. The
Next Hop field allows the “bilingual” router to tell its EIGRP neighbors, “Use
address A.B.C.D as the next hop instead of using my interface address.”
 Originating Router – is the IP address or router ID of the router that
redistributed the external route into the EIGRP autonomous system.
 Originating Autonomous System Number – is the autonomous system number
of the router originating the route.
 Arbitrary Tag – may be used to carry a tag set by route maps.
 External Protocol Metric – is the metric of the external protocol.
 Reserved – is an unused field and is always 0x0000.
 External Protocol ID – specifies the protocol from which the external route
was learned. 0x01 = IGRP, 0x02 = EIGRP, 0x03 = Static Route, 0x04 = RIP,
0x05 = Hello, 0x06 = OSPF, 0x07 = IS-IS, 0x08 = EGP, 0x09 = BGP, 0x0A =
IDRP, 0x0B = Connected Link.
 Flags – currently constitute just two flags. If the right-most bit of the eight-bit
field is set (0x01), the route is an external route. If the second bit is set (0x02), the
route is a candidate default route.

3.5.b Implement and troubleshoot neighbor relationship


This is due to broadcast not being permitted on the frame-relay map between the two
devices; as a restriction, the ISP prohibits broadcast on that specific PVC. With that
being said, keep in mind that multicast is treated like broadcast on Frame Relay
networks.

As a fix to this issue, you can define a static neighbor in the EIGRP routing process,
which forces EIGRP to communicate with that neighbor via unicast, similar to RIP;
even the command is the same: neighbor x.x.x.x interface#/#, where x.x.x.x is the IP
address of the neighbor and interface#/# is the interface over which the neighbor
relationship will peer.

When configuring an EIGRP static neighbor, the neighbor statement is required on


both ends of the neighbor relationship in the EIGRP routing process that operate in
the same autonomous system. Also keep in mind when you specify a static neighbor
relationship over a particular interface, EIGRP will disable the processing of multicast
EIGRP packets on the specified interface so with that being said EIGRP will not send
nor process received multicast EIGRP traffic on an interface which has a static
neighbor defined under the EIGRP routing process.

In this lab, you will configure static neighbor relationships on the hub and spokes of
the frame-relay network (R1 to R2, R1 to R3, and R1 to R4).
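
On the hub, the configuration would look something like the following sketch; the
AS number and addressing are hypothetical, and a matching neighbor statement is
required on each spoke:

R1(config)# router eigrp 100
R1(config-router)# network 10.1.1.0 0.0.0.255
R1(config-router)# neighbor 10.1.1.2 Serial0/0
R1(config-router)# neighbor 10.1.1.3 Serial0/0
R1(config-router)# neighbor 10.1.1.4 Serial0/0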

With the introduction of Over the Top (OTP), Cisco has empowered enterprise
customers to regain control of their WAN deployments. By focusing on simplicity,
OTP helps remove the complexity of the deployment of branch networks utilizing
Multiprotocol Label Switching (MPLS) Virtual Private Network (VPN), and adds the
ability to utilize lower cost public networks.
Traditional MPLS VPN support deployments consist of a set of sites interconnected
by an MPLS provider core network. At each customer site, one or more Customer
Edge (CE) devices attach to one or more Provider Edge (PE) devices. MPLS VPN
support for Enhanced Interior Gateway Routing Protocol (EIGRP) requires service
providers to configure the EIGRP between PE and CE to those customers that require
native support for EIGRP. Yet PE/CE deployments offer a number of challenges for
enterprise customers, specifically, the following:

 Either EIGRP or Border Gateway Protocol (BGP) must be run between the PE/CE
 Service providers must carry enterprise routes via Multiprotocol internal BGP (MP-
iBGP)
 BGP route propagation impacts enterprise network convergence
 The provider often limits the number of routes being redistributed
 Route flaps within sites result in BGP convergence events
 Route metric changes result in new extended communities flooded into the core

In addition, the need for the service provider to carry site-specific routes means the
CE devices must be co-supported, and the enterprise customer must consider the
following:

 Managed services are required, even if not needed


 Control of traffic flow using multiple providers can be problematic
 Changing providers requires coordination of switch over to prevent route
loops

OTP simplifies this. With OTP, enterprise customers can view the WAN as a virtual
extension of the network and transparently extend their infrastructure OVER the
provider's network. The advantages of this approach include the following:

 No special requirements on the service provider (this is a provider independent
solution)
 No special requirements on the enterprise customers network
 Support for both IPv4 and IPv6
 No route redistribution or site tag management
 No limitation on the number of routes being exchanged between sites
 A single routing protocol solution (convergence is not dependent on the
service provider)
 Works with both traditional managed and non-managed internet connections
 Complements an L3 any-to-any architecture (optional hairpinning of traffic)
 Support for multiple WAN connections and multiple WAN providers
 Supports connections that are not part of the MPLS VPN backbone (aka "backdoor"
links)

3.5.c Implement and troubleshoot loop free path selection

Feasible successor
A feasible successor for a particular destination is a next hop router that is guaranteed
not to be a part of a routing loop. This condition is verified by testing the feasibility
condition.

Thus, every successor is also a feasible successor. However, in most references about
EIGRP the term feasible successor is used to denote only those routes which provide a
loop-free path but which are not successors (i.e. they do not provide the least
distance). From this point of view, for a reachable destination there is always at least
one successor, however, there might not be any feasible successors.

A feasible successor provides a working route to the same destination, although with a
higher distance. At any time, a router can send a packet to a destination marked
"Passive" through any of its successors or feasible successors without alerting them in
the first place, and this packet will be delivered properly. Feasible successors are also
recorded in the topology table.

The feasible successor effectively provides a backup route in the case that existing
successors become unavailable. Also, when performing unequal-cost load-balancing
(balancing the network traffic in inverse proportion to the cost of the routes), the
feasible successors are used as next hops in the routing table for the load-balanced
destination.

By default, the total count of successors and feasible successors for a destination
stored in the routing table is limited to four. This limit can be changed in the range
from 1 to 6. In more recent versions of Cisco IOS (e.g. 12.4), this range is between 1
and 16.
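
Both behaviors can be tuned in the routing process. In this sketch, the AS number
and values are hypothetical: maximum-paths raises the successor/feasible-successor
limit described above, and variance enables unequal-cost load balancing over
feasible successors whose metric is within twice the successor's metric:

Device(config)# router eigrp 100
Device(config-router)# maximum-paths 6
Device(config-router)# variance 2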

EIGRP Wide Metrics


The EIGRP Wide Metrics feature supports 64-bit metric calculations and Routing
Information Base (RIB) scaling in Enhanced Interior Gateway Routing Protocol
(EIGRP) topologies. The 64-bit calculations work only in EIGRP named mode
configurations. EIGRP classic mode configurations use 32-bit calculations. This
module provides an overview of the EIGRP Wide Metrics feature.

The Enhanced Interior Gateway Routing Protocol (EIGRP) composite cost metric
(calculated using the bandwidth, delay, reliability, load, and K values) is not scaled
correctly for high-bandwidth interfaces or Ethernet channels, resulting in incorrect or
inconsistent routing behavior. The lowest delay that can be configured for an interface
is 10 microseconds. As a result, high-speed interfaces, such as 10 Gigabit Ethernet
(GE) interfaces, or high-speed interfaces channeled together (GE ether channel) will
appear to EIGRP as a single GE interface. This may cause undesirable equal-cost load
balancing. To resolve this issue, the EIGRP Wide Metrics feature supports 64-bit
metric calculations and Routing Information Base (RIB) scaling that provides the
ability to support interfaces (either directly or via channeling techniques like port
channels or ether channels) up to approximately 4.2 terabits.
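
Since 64-bit wide metrics require named mode, a minimal named-mode configuration
sketch looks like the following; the virtual instance name and AS number are
hypothetical:

Device(config)# router eigrp CCIE
Device(config-router)# address-family ipv4 unicast autonomous-system 100
Device(config-router-af)# network 10.0.0.0
Device(config-router-af)# exit-address-family

The show ip protocols command can then be used to confirm which metric version
(32-bit or 64-bit) the process is using.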

3.5.d Implement and troubleshoot operations


There are a number of show and debug commands that can be used to configure,
maintain, and troubleshoot a live EIGRP network. The show commands are:
 show ip eigrp neighbors, which provides detailed information on the
neighbors. This command records the communication between the router and the
neighbors as well as the interface and address by which they communicate.
 show ip eigrp topology, which provides details about the routes held in the
topology table, including the networks the router is aware of, the preferred paths
to those networks, and the next logical hop as the first step in the path. The
router will track the EIGRP packets that have been sent to neighbors in this
table.
 show ip eigrp topology all, which provides details about all the routes and
alternative paths held in the topology table. The router will track the EIGRP
packets that have been sent to neighbors in this table.
 show ip eigrp traffic, which provides information on the aggregate traffic sent
to and from the EIGRP process.
 show ip route, which provides detailed information on the networks that the
router is aware of and the preferred paths to those networks. It also gives the next
logical hop as the next step in the path.
 show ip protocols, which displays the IP configuration on the router, including
the interfaces and the configuration of the IP routing protocols.

3.5.e Implement and troubleshoot EIGRP stub

EIGRP Stub Routing


The EIGRP stub routing feature improves network stability, reduces resource
utilization, and simplifies the stub device configuration.

Stub routing is commonly used in hub-and-spoke network topologies. In a hub-and-


spoke network, one or more end (stub) networks are connected to a remote device (the
spoke) that is connected to one or more distribution devices (the hub). The remote
device is adjacent to one or more distribution devices. The only route for IP traffic to
reach the remote device is through a distribution device. This type of configuration is
commonly used in WAN topologies, where the distribution device is directly
connected to a WAN. The distribution device can be connected to many remote
devices, which is often the case. In a hub-and-spoke topology, the remote device must
forward all nonlocal traffic to a distribution device, so it becomes unnecessary for the
remote device to have a complete routing table. Generally, the distribution device
need not send anything more than a default route to the remote device.

When using the EIGRP stub routing feature, you need to configure the distribution
and remote devices to use EIGRP and configure only the remote device as a stub.
Only specified routes are propagated from the remote (stub) device. The stub device
responds to all queries for summaries, connected routes, redistributed static
routes, external routes, and internal routes with the message “inaccessible.” A device
that is configured as a stub will send a special peer information packet to all
neighboring devices to report its status as a stub device.

Any neighbor that receives a packet informing it of the stub status will not query the
stub device for any routes, and a device that has a stub peer will not query that peer.
The stub device will depend on the distribution device to send proper updates to all
peers.

The figure below shows a simple hub-and-spoke network.

Figure 3.17. Simple Hub-and-Spoke Network

The stub routing feature by itself does not prevent routes from being advertised to the
remote device. In the above example, the remote device can access the corporate
network and the Internet only through the distribution device. Having a complete
route table on the remote device would serve no functional purpose because the path
to the corporate network and the Internet would always be through the distribution
device. A large route table would only increase the amount of memory required by
the remote device. Summarizing and filtering routes in the distribution device can
conserve bandwidth and memory. The remote device need not receive routes that
have been learned from other networks because the remote device must send all
nonlocal traffic, regardless of the destination, to the distribution device. If a true stub
network is desired, the distribution device should be configured to send only a default
route to the remote device. The EIGRP stub routing feature does not automatically
enable summarization on distribution devices. In most cases, the network
administrator will need to configure summarization on distribution devices.
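
For example, the distribution (hub) device could advertise only a summary toward
the remote device with an interface-level summary address; the AS number, interface,
and prefix here are hypothetical:

Device(config)# interface Serial0/0
Device(config-if)# ip summary-address eigrp 100 10.0.0.0 255.0.0.0

Using 0.0.0.0 0.0.0.0 as the summary would instead advertise only a default route to
the remote device.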

Example: eigrp stub Command


In the following example, the eigrp stub command is used to configure the device as a
stub that advertises connected and summary routes:

Device(config)# router eigrp 1


Device(config-router)# network 10.0.0.0
Device(config-router)# eigrp stub

Example: eigrp stub connected static Command



In the following example, the eigrp stub command is used with the connected and
static keywords to configure the device as a stub that advertises connected and static
routes (sending summary routes will not be permitted):

Device(config)# router eigrp 1


Device(config-router)# network 10.0.0.0
Device(config-router)# eigrp stub connected static

Example: eigrp stub leak-map Command

In the following example, the eigrp stub command is issued with the leak-map name
keyword-argument pair to configure the device to reference a leak map that identifies
routes that would have been suppressed:

Device(config)# router eigrp 1


Device(config-router)# network 10.0.0.0
Device(config-router)# eigrp stub leak-map map1

Example: eigrp stub receive-only Command


In the following example, the eigrp stub command is issued with the receive-only
keyword to configure the device as a receive-only neighbor (connected, summary, and
static routes will not be sent):

Device(config)# router eigrp 1


Device(config-router)# network 10.0.0.0
Device(config-router)# eigrp stub receive-only

Example: eigrp stub redistributed Command


In the following example, the eigrp stub command is issued with the redistributed
keyword to configure the device to advertise other protocols and autonomous
systems:

Device(config)# router eigrp 1


Device(config-router)# network 10.0.0.0
Device(config-router)# eigrp stub redistributed

Example: eigrp stub Command


In the following example, the eigrp stub command is used to configure the device as a
stub that advertises connected and summary routes:

Device(config)# router eigrp virtual-name1


Device(config-router)# address-family ipv4 autonomous-system 4453
Device(config-router-af)# network 10.0.0.0
Device(config-router-af)# eigrp stub

Example: eigrp stub connected static Command


In the following named configuration example, the eigrp stub command is issued with
the connected and static keywords to configure the device as a stub that advertises
connected and static routes (sending summary routes will not be permitted):

Device(config)# router eigrp virtual-name1


Device(config-router)# address-family ipv4 autonomous-system 4453
Device(config-router-af)# network 10.0.0.0
Device(config-router-af)# eigrp stub connected static

Example: eigrp stub leak-map Command

In the following named configuration example, the eigrp stub command is issued with
the leak-map name keyword-argument pair to configure the device to reference a leak
map that identifies routes that would normally have been suppressed:

Device(config)# router eigrp virtual-name1


Device(config-router)# address-family ipv4 autonomous-system 4453
Device(config-router-af)# network 10.0.0.0
Device(config-router-af)# eigrp stub leak-map map1

Example: eigrp stub receive-only Command


In the following named configuration example, the eigrp stub command is issued with
the receive-only keyword to configure the device as a receive-only neighbor
(connected, summary, and static routes will not be sent):

Device(config)# router eigrp virtual-name1


Device(config-router)# address-family ipv4 autonomous-system 4453
Device(config-router-af)# network 10.0.0.0
Device(config-router-af)# eigrp stub receive-only

Example: eigrp stub redistributed Command


In the following named configuration example, the eigrp stub command is issued with
the redistributed keyword to configure the device to advertise other protocols and
autonomous systems:

Device(config)# router eigrp virtual-name1


Device(config-router)# address-family ipv4 autonomous-system 4453
Device(config-router-af)# network 10.0.0.0
Device(config-router-af)# eigrp stub redistributed

3.5.f Implement and troubleshoot load-balancing


Load balancing is the capability of a router to distribute traffic over all the router
network ports that are the same distance from the destination address. Load balancing
increases the utilization of network segments, and so increases effective network
bandwidth. There are two types of load balancing:

Equal cost path – Applicable when different paths to a destination network report the
same routing metric value. The maximum-paths command determines the maximum
number of routes that the routing protocol can use.

Unequal cost path – Applicable when different paths to a destination network report
different routing metric values. The variance command determines which of these
routes the router uses.
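As a hedged sketch (the process number and values are assumptions), both commands are entered under the EIGRP routing process:

```
Device(config)# router eigrp 1
Device(config-router)# maximum-paths 4
Device(config-router)# variance 2
```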

Prerequisites

Requirements

An understanding of IP routing protocols and the EIGRP routing protocol is assumed.
To learn more about IP routing protocols and EIGRP, refer to:

 Routing Basics
 EIGRP Support Page

Components Used
EIGRP is supported in Cisco IOS® Software Release 9.21 and later.

You can configure EIGRP in all routers (such as the Cisco 2500 series and the Cisco
2600 series) and in all Layer 3 switches.

All of the devices used in this document started with a cleared (default) configuration.
If your network is live, make sure that you understand the potential impact of any
command.

EIGRP Load Balancing


Every routing protocol supports equal cost path load balancing. In addition, Interior
Gateway Routing Protocol (IGRP) and EIGRP also support unequal cost path load
balancing. Use the variance n command in order to instruct the router to include
routes with a metric of less than n times the minimum metric route for that
destination. The variable n can take a value between 1 and 128. The default is 1,
which means equal cost load balancing. Traffic is also distributed among the links
with unequal costs, proportionately, with respect to the metric.
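The selection logic described above can be sketched in Python (an illustrative model, not Cisco's implementation; the metrics and reported distances are made up): a path is installed when it satisfies the feasibility condition and its metric falls within variance times the best metric.

```python
def paths_used(paths, variance=1):
    """paths: list of (metric, reported_distance) tuples for one destination.

    Returns the paths that would be installed, assuming the successor is the
    minimum-metric path and that unequal-cost candidates must be feasible
    successors (reported distance strictly less than the feasible distance).
    """
    fd = min(metric for metric, _ in paths)  # feasible distance = best metric
    used = []
    for metric, rd in paths:
        if metric == fd:
            used.append((metric, rd))        # successor: always used
        elif rd < fd and metric < variance * fd:
            used.append((metric, rd))        # feasible unequal-cost path
    return used

routes = [(20, 10), (35, 15), (90, 50)]
print(paths_used(routes, variance=1))  # -> [(20, 10)]           equal-cost only
print(paths_used(routes, variance=2))  # -> [(20, 10), (35, 15)] unequal-cost
```

Note that the (90, 50) path is never used, regardless of variance, because its reported distance (50) is not below the feasible distance (20).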

3.5.g Implement EIGRP (multi-address) named mode

 Types of families
 IPv4 address-family
 IPv6 address-family

piece of cake if you are familiar with address families…


R1(config)#do sh ver
Cisco IOS Software, 7200 Software (C7200-ADVENTERPRISEK9-M), Version
15.2(4)S, RELEASE SOFTWARE (fc1)

you’ll need a relatively new version of IOS to support this. use a name instead of a
number:

R1(config)#router eigrp ?
<1-65535> Autonomous System
WORD EIGRP Virtual-Instance Name

R1(config)#router eigrp OZ

R1(config-router)#address-fam ipv4 autonomo 1


R1(config-router-af)#netw 0.0.0.0
R1(config-router-af)#
*Apr 26 14:55:15.199: %DUAL-5-NBRCHANGE: EIGRP-IPv4 1: Neighbor
192.168.12.2 (FastEthernet1/0) is up: new adjacency

in older versions of address-family mode the autonomous system was configured on a
separate line… now just put it on the same line with address-fam ipv4. use af-interface
mode for interface-specific commands:

R1(config-router-af)#af-interface

topology base mode gives you the usual manipulation set:

R1(config-router-af)#topology base
R1(config-router-af-topology)#?
Address Family Topology configuration commands:
auto-summary Enable automatic network number summarization
default Set a command to its defaults
default-information Control distribution of default information
default-metric Set metric of redistributed routes
distance Define an administrative distance
distribute-list Filter entries in eigrp updates
eigrp EIGRP specific commands
exit-af-topology Exit from Address Family Topology configuration mode
fast-reroute Configure Fast-Reroute
maximum-paths Forward packets over multiple paths
metric Modify metrics and parameters for advertisement
no Negate a command or set its defaults
offset-list Add or subtract offset from EIGRP metrics
redistribute Re-distribute IPv4 routes from another routing protocol
snmp Modify snmp parameters
summary-metric Specify summary to apply metric/filtering
timers Adjust topology specific timers
traffic-share How to compute traffic share over alternate paths
variance Control load balancing variance

EIGRP Classic to Named Mode Conversion - Overview

The Enhanced Interior Gateway Routing Protocol (EIGRP) can be configured using
either the classic mode or the named mode. The classic mode is the old way of
configuring EIGRP. In classic mode, EIGRP configurations are scattered across the
router mode and the interface mode. The named mode is the new way of configuring
EIGRP; this mode allows EIGRP configurations to be entered in a hierarchical
manner under the router mode.

Each named mode configuration can have multiple address families and autonomous
system number combinations. In the named mode, you can have similar
configurations across IPv4 and IPv6. We recommend that you upgrade to EIGRP
named mode because all new features, such as Wide Metrics, IPv6 VRF Lite, and
EIGRP Route Tag Enhancements, are available only in EIGRP named mode.

Use the eigrp upgrade-cli command to upgrade from classic mode to named mode.
You must use the eigrp upgrade-cli command for all classic router configurations to
ensure that these configurations are upgraded to the named mode. Therefore, if
multiple classic configurations exist, you must use this command per autonomous
system number. You must use this command separately for IPv4 and IPv6
configurations.

Prior to the EIGRP Classic to Named Mode Conversion feature, upgrading to EIGRP
named mode required that the user manually un-configure the classic mode using the
no router eigrp autonomous-system-number command and then re-configure EIGRP
configurations under named mode using the router eigrp virtual name command. This
method may lead to network churn and neighborship or network flaps.

The EIGRP Classic to Named Mode Conversion feature allows you to convert from
classic mode to named mode without causing network flaps or the EIGRP process to
restart. With this feature, you can move an entire classic mode configuration to a
router named mode configuration, and consequently, all configurations under
interfaces will be moved to the address-family interface under the appropriate address
family and autonomous-system number. After conversion, the show running-config
command will show only named mode configurations; you will not see any old classic
mode configurations.
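As a hedged example (the AS number and the resulting virtual instance name are assumptions), the conversion is triggered from under the classic process:

```
Device(config)# router eigrp 1
Device(config-router)# eigrp upgrade-cli MYEIGRP
! The classic AS 1 configuration is rewritten in named mode, roughly as:
! router eigrp MYEIGRP
!  address-family ipv4 unicast autonomous-system 1
```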

3.5.h Implement and troubleshoot EIGRP convergence and scalability

EIGRP Convergence
Enhanced IGRP (EIGRP) convergence differs slightly. If a router detects a link failure
between itself and a neighbor, it checks the network topology table for a feasible
alternate route. If it does not find a qualifying alternate route, it enters an active
convergence state and sends a Query out all interfaces for other routes to the failed
link. If a neighbor replies to the Query with a route to the failed link, the router
accepts the new path and metric information, places it in the topology table, and
creates an entry for the routing table. It then sends an update about the new route out
all interfaces. All neighbors acknowledge the update and send updates of their own
back to the sender. These bi-directional updates ensure the routing tables are
synchronized and validate the neighbor’s awareness of the new topology.
Convergence time in this event is the total of detection time, plus Query and Reply
times and Update times.
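The decision point described above — local repair via a feasible successor versus going active and querying — can be sketched as follows (an illustrative model with made-up neighbor names and metrics, not Cisco's implementation):

```python
def on_successor_loss(feasible_distance, alternates):
    """Decide the next step when the current successor is lost.

    alternates: dict of neighbor -> (metric_via_neighbor, reported_distance).
    Returns ('local', neighbor) when a feasible successor allows immediate
    local repair, or ('active', None) when the route must go active and
    Queries are sent out all interfaces.
    """
    feasible = {n: (m, rd) for n, (m, rd) in alternates.items()
                if rd < feasible_distance}   # feasibility condition
    if feasible:
        best = min(feasible, key=lambda n: feasible[n][0])
        return ("local", best)
    return ("active", None)

print(on_successor_loss(20, {"R2": (35, 15), "R3": (90, 50)}))  # ('local', 'R2')
print(on_successor_loss(20, {"R3": (90, 50)}))                  # ('active', None)
```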

Link-State Convergence
The convergence cycle used in link-state routing protocols, such as OSPF and IS-IS,
differs from that of the distance-vector protocols. When a router detects a link
failure between itself and a neighbor, it tries to perform a Designated Router (DR)
election process on the LAN interface, but fails to reach any neighbors. It then deletes
the route from the routing table, builds a link-state advertisement (LSA) for OSPF or
a link-state PDU (LSP) for IS-IS, and sends it out all other interfaces. Upon receipt of
the LSA, the other neighbors that are up copy the advertisement and forward the LSA
packet out all interfaces other than the one upon which it arrived. All routers,
including the router that detected the failure, wait five seconds after receiving the
LSA and run the shortest path first (SPF) algorithm. Thereafter, the router that
detected the failure adds the new route to the routing table, while its neighbors update
the metric in their routing tables. After approximately 30 seconds, the router that
detected the failure sends another LSA after aging out the topology entry for the
failed link. After five seconds, all routers run the SPF algorithm again and update
their routing tables with the path around the failed link. Convergence time is the total
of detection time, plus LSA flooding time, plus the five-second wait before the second
SPF algorithm is run.

3.6 OSPF (v2 and v3)


OSPFv3, which adds IPv6 support, functions similarly to OSPF V2 (the current
version that supports IPv4), except for the following enhancements:

 Support for IPv6 addresses and prefixes.

 While OSPF V2 runs per IP subnet, OSPF V3 runs per link. In general, you
can configure several IPv6 addresses on a router interface, but OSPF V3 forms
one adjacency per interface only, using the interface’s associated link-local
address as the source for OSPF protocol packets. On virtual links, OSPF V3 uses
the global IP address as the source.
 You can run one instance of OSPF Version 2 and one instance of OSPF V3
concurrently on a link.
 Support for IPv6 link state advertisements (LSAs).

3.6.a Describe packet types

Protocol processing per-link, not per-subnet:


IPv6 uses the term "link" instead of "subnet" or "network" to define a medium used to
communicate between nodes at the link layer. Multiple IP subnets can be assigned to
a single link, and two nodes can communicate with each other even if they do not
share a common IP subnet.

OSPFv3 Packet/Interface Type

OSPFv3 Header Comparison

In OSPFv3, Instance ID is a new field that is used to run multiple OSPF process
instances per link. By default it is 0, and it is incremented for each additional
instance; the Instance ID has link-local significance only. OSPFv3 routers will only
become neighbors if their instance IDs match. It is thus possible to have multiple
routers on a broadcast domain all running OSPFv3 without all of them becoming
neighbors.

OSPFv3 Hello Packet and Functioning



 Unlike OSPFv2, OSPFv3 does not require a network mask to form an adjacency.
The adjacency is formed on the link-local address, as v6 runs per link instead
of per subnet.
 The OSPFv3 Options field is 24 bits, compared to 8 bits in v2.
 The Dead Interval field is reduced to 16 bits from 32.
 OSPFv3 uses new multicast addresses: FF02::5 (all OSPF routers) and FF02::6
(all DRs/BDRs).

OSPFv3 LSA Types

 LSA Type or the function code matches the same LSA type as in OSPFv2
 Type 3 is now called inter-area-prefix-LSA
 Type 4 is now called inter-area-router-LSA
 Two new LSA types have been added (Link LSA and Intra-Area Prefix LSA)

The Intra-Area Prefix LSA is a new LSA in OSPFv3 used to advertise one or
more IPv6 prefixes. In OSPFv2, the intra-area prefix information was carried in the
router and network LSAs (Types 1 & 2).

Support for Multiple Instances Per Link:


Instance ID is a new field that is used to run multiple OSPF process instances per
link. For two instances to talk to each other, they need to have the same instance
ID. By default it is 0, and it is incremented for each additional instance; the
Instance ID has link-local significance only.
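As a hedged illustration (the interface, process ID, and instance number are assumptions), the instance ID is set per interface, and both ends of the link must use the same value to become neighbors:

```
Device(config)# interface GigabitEthernet0/0
Device(config-if)# ipv6 ospf 1 area 0 instance 2
```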

Authentication method changes:


OSPFv2 authentication is achieved by implementing a shared secret and MD5 HMAC
supported as part of the OSPFv2 protocol. OSPFv3 does away with its own support for
authentication entirely, instead relying on the more flexible IPsec framework offered
by IPv6.

I. OSPF Neighbor relationship

1. The OSPF neighbor list is empty.


OSPF is not enabled on the interface.
show ip ospf interface
Layer 1/2 is down.
show ip ospf interface
show interface
show ip ospf neighbor

Possible reasons:
 Unplugged cable
 Loose cable
 Bad cable
 Bad transceiver
 Bad port
 Bad interface card
 Layer 2 problem at telco in case of a WAN link
 Missing clock statement in case of back-to-back serial connection

The interface is defined as passive under OSPF.

show ip ospf interface


no passive-interface
show ip ospf neighbor

Passive-interface command is entered intentionally so that the router cannot take part
in any OSPF process on that segment. This is the case when you don't want to form
any neighbor relationship on an interface but you do want to advertise that interface.

In OSPF, a passive interface means "do not send or receive OSPF Hellos on this
interface." So, making an interface passive under OSPF with the intention of
preventing the router from sending any routes on that interface but receiving all the
routes is wrong.
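A hedged sketch (the process ID, interface, and network statement are assumptions): the LAN subnet is advertised while no Hellos are sent or received on it:

```
Device(config)# router ospf 1
Device(config-router)# network 10.1.1.0 0.0.0.255 area 0
Device(config-router)# passive-interface GigabitEthernet0/1
```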

 An access list is blocking OSPF Hellos on both sides.



OSPF sends its Hello on a multicast address of 224.0.0.5. If only one side is blocking
OSPF Hellos, the output of show ip ospf neighbor will indicate that the neighbor is
stuck in the INIT state. To see the blocked packets, use debug commands:

access-list 101 permit ip x.x.x.0 0.0.0.255 host 224.0.0.5


debug ip packet 101 detail
output:
IP: s=131.108.1.2 (Ethernet0), d=224.0.0.5, len 68, access denied, proto=89
Solution: permit the multicast address in the ACL:
access-list 100 permit ip any host 224.0.0.5

verification:
show ip ospf neighbor

A subnet number/mask has been mismatched over a broadcast link.


OSPF performs the subnet number and mask check on all media except point-to-point
and virtual links

debug ip ospf adj

Solution: correct the mask on both sides of the link.

Verification:
show ip ospf neighbor

 The Hello/dead interval has been mismatched.


OSPF neighbors exchange Hello packets periodically to form and maintain neighbor
relationships. OSPF advertises the router's Hello and dead intervals in the Hello
packets. These intervals must match with the neighbor's; otherwise, an adjacency will
not form.

debug ip ospf adj

Solution: ensure the OSPF timers are matched at both sides of the link.
To change the hello interval from its default value:
ip ospf hello-interval #

verification:

show ip ospf neighbor

The authentication type (plain text versus MD5) has been mismatched.
OSPF uses two types of authentication, plain-text (Type 1) and MD5 (Type 2). Type 0
is called null authentication. If the plain-text authentication type is enabled on one
side, the other side must also have plain-text authentication. OSPF will not form an
adjacency unless both sides agree on the same authentication type.

debug ip ospf adj



Solution: ensure that authentication mode is the same at both sides.

verification:

show ip ospf neighbor

 An authentication key has been mismatched.


When authentication is enabled, the authentication key also must be configured on the
interface. Authentication previously was supported on a per-area basis, but beginning
with the specifications in RFC 2328, authentication is supported on a per-interface
basis. This feature has been implemented in Cisco IOS Software Release 12.0.8 and
later.

If authentication is enabled on one side but not the other, OSPF complains about the
mismatch in authentication type. Sometimes, the authentication key is configured
correctly on both sides but debug ip ospf adj still complains about a mismatched
authentication type. In this situation, authentication-key must be typed again because
there is a chance that a space was added during the authentication key configuration
by mistake. Because the space character is not visible in the configuration, this part is
difficult to determine.

Another possible thing that can go wrong is for one side, R1, to have a plain-text key
configured and the other side, R2, to have an MD5 key configured, even though the
authentication type is plain text. In this situation, the MD5 key is completely ignored
by R2 because MD5 has not been enabled on the router.

debug ip ospf adj

Solution: make sure that both sides have the same kind of authentication key. If the
problem still exists, retype the authentication key; there is a possibility of an added
space character before or after the authentication key.

verification:

show ip ospf neighbor
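As a hedged example (the key ID, key string, and interface are assumptions), MD5 authentication is enabled per interface and requires matching key IDs and key strings on both sides of the link:

```
Device(config)# interface GigabitEthernet0/0
Device(config-if)# ip ospf authentication message-digest
Device(config-if)# ip ospf message-digest-key 1 md5 S3cr3t
```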

An area ID has been mismatched.


OSPF sends area information in the Hello packets. If both sides do not agree that they
are members of a common area, no OSPF adjacency will be formed. The area
information is a part of the OSPF protocol header.

debug ip ospf adj


Solution: ensure that same area are used in the network command at both sides of the
link.

verification:

show ip ospf neighbor


Stub/transit/NSSA area options have been mismatched.

When OSPF exchanges Hello packets with a neighbor, one of the things that it
exchanges in the Hello packet is an optional capability represented by 8 bits. One of
the option fields is for the E bit, which is the OSPF stub area flag. When the E bit is
set to 0, the area with which the router is associated is a stub area, and no external
LSAs are allowed in this area.

If one side has the E bit set to 0 and the other side doesn't, OSPF adjacency is not
formed. This is called an optional capability mismatch. One side says that it can allow
external routes, and the other side says that it cannot allow external routes, so OSPF
neighbor relationships are not formed.
debug ip ospf adj

Solution: make sure that both sides agree on the same type of area

verification:

show ip ospf neighbor

An OSPF adjacency exists with secondary IP addressing.


This is a very common problem in which a customer might have one Class C address
on a LAN segment. When the customer runs out of address space, he gets another
Class C address and assigns the new address as a secondary address under the same
interface. Everything works fine until two routers must exchange OSPF
Hellos/updates and one router's primary IP address is assigned as the secondary IP
address on the other side, as depicted in the network diagram.

debug ip ospf adj

Solution: the fix for this kind of problem is to create subinterfaces on R1. This is
possible only if the interface that has the secondary address is Fast Ethernet or
Gigabit Ethernet and it is connected through a Layer 2 switch. This can be achieved
through an Inter-Switch Link (ISL), in the case of a Cisco switch, or dot1Q
encapsulation, in the case of a different vendor's switch. ISL or dot1Q encapsulation
is used to route between two separate VLANs.

verification:

show ip ospf neighbor

3.6.c Implement and troubleshoot OSPFv3 address-family support

OSPFv3 Address Families


The OSPFv3 address families feature enables support for both IPv4 and IPv6 unicast
traffic. With this feature, users may have two processes per interface, but only one
process per AF. If the IPv4 AF is used, an IPv4 address must first be configured on
the interface, but IPv6 must be enabled on the interface. A single IPv4 or IPv6
OSPFv3 process running multiple instances on the same interface is not supported.

Users with an IPv6 network that uses OSPFv3 as its IGP may want to use the same
IGP to help carry and install IPv4 routes. All devices on this network have an IPv6
forwarding stack. Some (or all) of the links on this network may be allowed to do
IPv4 forwarding and be configured with IPv4 addresses. Pockets of IPv4-only devices
exist around the edges running an IPv4 static or dynamic routing protocol. In this
scenario, users need the ability to forward IPv4 traffic between these pockets without
tunneling overhead, which means that any IPv4 transit device has both IPv4 and IPv6
forwarding stacks (e.g., is dual stack).

This feature allows a separate (possibly incongruent) topology to be constructed for


the IPv4 AF. It installs IPv4 routes in IPv4 RIB, and then the forwarding occurs
natively. The OSPFv3 process fully supports an IPv4 AF topology and can
redistribute routes from and into any other IPv4 routing protocol.

An OSPFv3 process can be configured to be either IPv4 or IPv6. The address-family


command is used to determine which AF will run in the OSPFv3 process, and only
one address family can be configured per instance. Once the AF is selected, users can
enable multiple instances on a link and enable address-family-specific commands.

Different instance ID ranges are used for each AF. Each AF establishes different
adjacencies, has a different link state database, and computes a different shortest path
tree. The AF then installs the routes in AF-specific RIB. LSAs that carry IPv6 unicast
prefixes are used without any modification in different instances to carry each AF’s
prefixes.

The IPv4 subnets configured on OSPFv3-enabled interfaces are advertised through


intra-area prefix LSAs, just as any IPv6 prefixes. External LSAs are used to advertise
IPv4 routes redistributed from any IPv4 routing protocol, including connected and
static. The IPv4 OSPFv3 process runs the SPF calculations and finds the shortest path
to those IPv4 destinations. These computed routes are then inserted in the IPv4 RIB
(computed routes are inserted into an IPv6 RIB for an IPv6 AF).

Because the IPv4 OSPFv3 process allocates a unique pdbindex in the IPv4 RIB, all
other IPv4 routing protocols can redistribute routes from it. The parse chain for all
protocols is the same, so the ospfv3 keyword added to the list of IPv4 routing protocols
causes OSPFv3 to appear in the redistribute command from any IPv4 routing
protocol. With the ospfv3 keyword, IPv4 OSPFv3 routes can be redistributed into any
other IPv4 routing protocol as defined in the redistribute ospfv3 command.

Third-party devices will not neighbor with devices running the AF feature for the
IPv4 AF because they do not set the AF bit. Therefore, those devices will not
participate in the IPv4 AF SPF calculations and will not install the IPv4 OSPFv3
routes in the IPv6 RIB.

Cisco Support for OSPFv3 Address Families

The Cisco OSPFv3 Address Families feature helps allow Open Shortest Path First
Version 3 (OSPFv3) IPv6 networks to support both IPv6 and IPv4 nodes. This
feature, available for select Cisco integrated services routers, helps enable:

 Interoperability between IPv4 and IPv6 networks


 Interoperability between IPv4 nodes in different sub-nets, using IPv6 link-local
addresses for peering

What Problems Need to be Solved?


 OSPFv3 is expected to be widely deployed, especially in government
environments where IPv6 support has been mandated. As OSPF-based networks
evolve, organizations need to ensure that IPv4 and IPv6 resources can coexist.
However, OSPFv3 was defined to support only the IPv6 unicast address family.
Migrating real-world networks to OSPFv3 requires a solution that supports both
IPv6 and IPv4 traffic.

 Many organizations are extending their networks to support highly mobile


users. Mobile networks based on IPv4 can only support peering between nodes on
the same subnet, which can be a significant limitation in complex mobile
operations. While IPv6-based OSPFv3 eliminates a number of addressing and
peering restrictions, it does not support IPv4 nodes as originally defined. Users
need a solution that utilizes the benefits of OSPFv3 link-local addressing for
existing IPv4 nodes.

 Organizations require standards-based solutions to ensure interoperability in


multi-agency environments. Any OSPF interoperability solution must be both
standards-based and easily implemented.

Configuring the IPv4 Address Family in OSPFv3


Perform this task to configure the IPv4 address family in OSPFv3. Once you have
completed step 4 and entered IPv4 address-family configuration mode, you can
perform any of the subsequent steps in this task as needed to configure the IPv4 AF.

SUMMARY STEPS
1. enable
2. configure terminal
3. router ospfv3 [process-id]
4. address-family ipv4 unicast
5. area area-id range ip-address ip-address-mask [advertise | not-advertise] [cost
cost]
6. default {area area-ID[range ipv6-prefix | virtual-link router-id]} [default-
information originate [always | metric | metric-type | route-map] | distance |
distribute-list prefix-list prefix-list-name {in | out} [interface] | maximum-paths
paths | redistribute protocol | summary-prefix ipv6-prefix]

7. default-information originate [always | metric metric-value | metric-type type-


value| route-map map-name]
8. default-metric metric-value
9. distance distance
10. distribute-list prefix-list list-name {in[interface-type interface-number]
| out routing-process [as-number]}
11. maximum-paths number-paths
12. summary-prefix prefix [not-advertise | tag tag-value]

DETAILED STEPS

Configuring the IPv6 Address Family in OSPFv3


Perform this task to configure the IPv6 address family in OSPFv3. Once you have
completed step 4 and entered IPv6 address-family configuration mode, you can
perform any of the subsequent steps in this task as needed to configure the IPv6 AF.

SUMMARY STEPS
1. enable
2. configure terminal
3. router ospfv3 [process-id]
4. address-family ipv6 unicast
5. area area-ID range ipv6-prefix / prefix-length
6. default {area area-ID[range ipv6-prefix | virtual-link router-id]} [default-
information originate [always | metric | metric-type | route-map] | distance |
distribute-list prefix-list prefix-list-name {in | out} [interface] | maximum-paths
paths | redistribute protocol | summary-prefix ipv6-prefix]
7. default-information originate [always | metric metric-value | metric-type type-
value| route-map map-name]
8. default-metric metric-value
9. distance distance
10. distribute-list prefix-list list-name {in[interface-type interface-number] | out
routing-process [as-number]}
11. maximum-paths number-paths
12. summary-prefix prefix [not-advertise | tag tag-value]

DETAILED STEPS
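Tying the summary steps together, a hedged configuration sketch (the process ID, interface, and area are assumptions, not from the source) that enables both address families in a single OSPFv3 process:

```
Device(config)# router ospfv3 1
Device(config-router)# address-family ipv4 unicast
Device(config-router-af)# exit-address-family
Device(config-router)# address-family ipv6 unicast
Device(config-router-af)# exit-address-family
Device(config-router)# exit
Device(config)# interface GigabitEthernet0/0
Device(config-if)# ospfv3 1 ipv4 area 0
Device(config-if)# ospfv3 1 ipv6 area 0
```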

3.6.d Implement and troubleshoot network types, area types and


router types

When building out an OSPF network, you must take into consideration the different
network types. The type depends on the Layer 2 technology used, such as Ethernet, a
point-to-point T1 circuit, frame relay, and even frame relay with no broadcast
capability.

There are five configurable OSPF network types on a Cisco router: broadcast,
non-broadcast, point-to-point, point-to-multipoint, and point-to-multipoint
non-broadcast.

As a network engineer in the field working with OSPF, you must know the differences
between the OSPF network types and which types are compatible with one another.
Some types will work with each other only if you adjust the hello/dead timers. The
list below shows which OSPF network types can interoperate with each other:

 Broadcast to Broadcast
 Non-broadcast to Non-broadcast
 Point-to-Point to Point-to-Point
 Broadcast to Non-broadcast (adjust hello/dead timers)
 Point-to-Point to Point-to-Multipoint (adjust hello/dead timers)

If you’ve read through Lab 9-1 you’ll see a nice little bullet list of the different types
of OSPF network types and their features, I’ve added that list to this lab to refresh
your memory. As a CCNA you must know these network types inside and out;

Non-Broadcast
 The Non-Broadcast network type is the default for OSPF enabled frame relay
physical interfaces.
 Non-Broadcast networks require the configuration of static neighbors; hello’s
are sent via unicast.
 The Non-Broadcast network type has a 30 second hello and 120 second dead
timer.
 An OSPF Non-Broadcast network type requires the use of a DR/BDR

Broadcast
 The Broadcast network type is the default for an OSPF enabled ethernet interface.
 The Broadcast network type requires that a link support Layer 2 Broadcast
capabilities.
 The Broadcast network type has a 10 second hello and 40-second dead timer.
 An OSPF Broadcast network type requires the use of a DR/BDR.

Point-to-Point
 A Point-to-Point OSPF network type does not maintain a DR/BDR
relationship.
 The Point-to-Point network type has a 10 second hello and 40-second dead
timer.
 Point-to-Point network types are intended to be used between 2 directly
connected routers.

Point-to-Multipoint
 OSPF treats Point-to-Multipoint networks as a collection of point-to-point links.
 Point-to-Multipoint networks do not maintain a DR/BDR relationship.
 Point-to-Multipoint networks advertise a host route for all the frame-relay
endpoints.
 The Point-to-Multipoint network type has a 30 second hello and 120 second dead
timer.

Point-to-Multipoint Non-Broadcast

 Same as Point-to-Multipoint but requires static neighbors. Used on non-broadcast
Layer 2 topologies.

 Gives you the ability to define link cost on a per neighbor basis.

Loopback
 The default network type for loopback interfaces; it is not available on other
interface types.
 Advertises the interface as a host route; this can be changed by configuring the
interface as point-to-point.
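As a sketch of how these types are applied (interface names, addresses, and the OSPF process ID here are hypothetical), a network type is overridden per interface, and static neighbors are defined under the routing process for the types that require them:

```
interface Serial0/0
 ip address 10.0.0.1 255.255.255.0
 encapsulation frame-relay
 ! Override the non-broadcast default of a Frame Relay physical interface
 ip ospf network point-to-multipoint
!
router ospf 1
 network 10.0.0.0 0.0.0.255 area 0
 ! A static neighbor statement is needed only for the non-broadcast and
 ! point-to-multipoint non-broadcast network types
 neighbor 10.0.0.2
```

Remember that two neighbors with mismatched hello/dead timers will not form an adjacency unless the timers are adjusted as noted in the compatibility list above.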

3.6.e Implement and troubleshoot path preference


This is actually a very common area of confusion and misunderstanding in OSPF. Part
of the problem is that the vast majority of CCNA and CCNP texts teach the theory
that for OSPF path selection of E1 vs E2 routes, E1 routes use the redistributed cost
plus the cost to the ASBR, while E2 routes use only the redistributed cost. The most
recent CCNP ROUTE text from Cisco Press specifically says, “When flooded, OSPF
has little work to do to calculate the metric for an E2 route, because by definition, the
E2 route’s metric is simply the metric listed in the Type 5 LSA. In other words, the
OSPF routers do not add any internal OSPF cost to the metric for an E2 route.” While
technically true, this statement is an oversimplification. For the CCNP level this might
be fine, but for the CCIE level it is not.

The key point demonstrated in this section is that while it is true that “OSPF
routers do not add any internal OSPF cost to the metric for an E2 route”, both the
intra-area and inter-area cost are still considered in the OSPF path selection
process for these routes.

First, let’s review the order of the OSPF path selection process. Regardless of a
route’s metric or administrative distance, OSPF will choose routes in the following
order:

 Intra-Area (O)
 Inter-Area (O IA)
 External Type 1 (E1) and NSSA Type 1 (N1)
 External Type 2 (E2) and NSSA Type 2 (N2)

(When an E route and an N route to the same prefix are otherwise equal, the E route
is preferred.)

To demonstrate this, take the following topology:


R1 connects to R2 and R3 via area 0. R2 and R3 connect to R4 and R5 via area 1,
respectively. R4 and R5 connect to R6 via another routing domain, which is EIGRP in
this case. R6 advertises the prefix 10.1.6.0/24 into EIGRP. R4 and R5 perform mutual
redistribution between EIGRP and OSPF with the default parameters, as follows:

The result of this is that R1 learns the prefix 10.1.6.0/24 as an OSPF E2 route via both
R2 and R3, with a default cost of 20. This can be seen in the routing table output
below. The other OSPF learned routes are the transit links between the routers in
question.
Note that all the routes redistributed from EIGRP appear on R1 with a default metric
of 20. Now let’s examine the details of the route 10.1.6.0/24 on R1.

As expected, both paths via R2 and R3 have a metric of 20. However,
there is an additional field in the route’s output called the “forward metric”. This field
denotes the cost to the ASBR(s). In this case, the ASBRs are R4 and R5 for the routes
via R2 and R3 respectively. Since all interfaces are Fast Ethernet, with a default OSPF
cost of 1, the cost to both R4 and R5 is 2, or essentially 2 hops.

The reason that multiple routes are installed in R1’s routing table is that the route type
(E2), the metric (20), and the forward metric (2) are all a tie. If any of these fields
were to change, the path selection would change.

To demonstrate this, let’s change the route type to E1 under R4’s OSPF process. This
can be accomplished as follows:
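The configuration referenced above is not reproduced here; a minimal sketch of such a change, assuming OSPF process 1 and EIGRP AS 100 on R4, might look like:

```
R4(config)# router ospf 1
R4(config-router)# ! metric-type 1 marks the redistributed routes as E1,
R4(config-router)# ! so internal OSPF cost is added to the seed metric
R4(config-router)# redistribute eigrp 100 metric-type 1 subnets
```

With the prefix now learned as E1 via R4's path and still E2 via R5's path, R1 prefers the E1 path regardless of the metrics, because route type is compared before cost in the selection order listed above.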

3.6.f Implement and troubleshoot operations


The main operation of the OSPF protocol occurs in the following consecutive stages
and leads to the convergence of the internetwork:

 Compiling the LSDB.
 Calculating the Shortest Path First (SPF) Tree.
 Creating the routing table entries.

Formation of the LSDB Using Link State Advertisements


The LSDB is a database of all OSPF router LSAs, summary LSAs, and external route
LSAs. The LSDB is compiled by an ongoing exchange of LSAs between neighboring
routers so that each router is synchronized with its neighbor. When the AS has
converged, all routers have the appropriate entries in their LSDB.

To create the LSDB, each OSPF router must receive a valid LSA from each other
router in the AS. This is performed through a procedure called flooding. Each router
initially sends out an LSA that contains its own configuration. As it receives LSAs
from other routers, it propagates those LSAs to its neighbor routers.

In this way, an LSA from a given router is flooded across the AS so that each other
router contains that router's LSA. While it appears that the flooding of LSAs across
the AS causes a large amount of network traffic, OSPF is very efficient in the
propagation of LSA information. Figure 3.18 shows a simple OSPF AS, the flooding
of LSAs between neighboring routers, and the LSDB.

Example of OSPF Operation


The following examples illustrate how an OSPF internetwork compiles the LSDB,
performs the least cost analysis, and creates routing table entries. This example is
deliberately simplified to help you gain an understanding of the basic principles of
OSPF convergence.

Compiling the LSDB


Consider the simple AS in Figure 3.19. At each router interface, a unit-less cost
metric is assigned as a reflection of the preference of using that interface. These cost
values can be a reflection of bandwidth, delay, or reliability factors and are assigned
by the network administrator.

3.6.g Implement, troubleshoot and optimize OSPF convergence and
scalability
This is a brief discussion of the main factors controlling fast convergence in OSPF-based
networks. Network convergence is a term that is used under various
interpretations. Before we discuss the optimization procedures for OSPF, we define
network convergence as the process of synchronizing network forwarding tables after
a topology change. A network is said to be converged when none of the forwarding
tables change for some “reasonable” amount of time. This amount of time could be
defined as an interval based on the expected maximum time to stabilize after a single
topology change. Network convergence based on native IGP mechanisms is also
known as network restoration, since it heals the lost connections. Network
mechanisms for traffic protection such as ECMP, MPLS FRR or IP FRR, which offer
a different approach to failure handling, are outside the scope of this section. We also
take multicast routing fast recovery out of scope, even though this process is tied to
IGP re-convergence.

It is interesting to note that IGP-based “restoration” techniques have one (more or
less) important problem. During the time of re-convergence, temporary micro-loops
may exist in the topology due to inconsistency of the FIB (forwarding) tables of
different routers. This behavior is fundamental to link-state algorithms, as routers
closer to the failure tend to update their forwarding databases before the other routers.
The only popular routing protocol that lacks this property is EIGRP, which is
loop-free at any moment during re-convergence, thanks to the explicit termination of
its diffusing computations.

It should be noted that, compared to IS-IS, OSPF provides fewer “knobs” for
convergence optimization. The main reason is probably that IS-IS is developed and
supported by a separate team of developers, more geared towards ISPs, where fast
convergence is a critical competitive factor.
Resiliency and redundancy to circuit failures are provided by the convergence
capabilities of OSPF at layer 3.

There are two components to OSPF routing convergence: detection of topology
changes and recalculation of routes.

Detection of topology changes is supported in two ways by OSPF. The first, and
quickest, is a failure or change of status on the physical interface, such as Loss of
Carrier. The second is a timeout of the OSPF hello timer. An OSPF neighbor is
deemed to have failed if the time to wait for a hello packet exceeds the dead timer,
which defaults to four times the value of the hello timer.

The default hello timer is set to 10 seconds for broadcast networks and 30 seconds for
non-broadcast networks, with a dead timer of four times the hello timer.

Each router recalculates its routes after a failure has been detected. A link-state
advertisement (LSA) is sent to all routers in the OSPF area to signal a change in
topology. This causes all routers to recalculate all of their routes using the Dijkstra
(SPF) algorithm. This is a CPU-intensive task, and a large network with unreliable
links could cause a CPU overload.

When a link goes down and Layer 2 is not able to detect the failure, convergence can
be improved by decreasing the value of the hello timer. The timer should not be set
too low, as this may cause phantom failures and hence unnecessary topology
recalculations. Remember that these timers are used to detect failures that are not at
the physical level, for example, when carrier still exists but there is some sort of
failure in the intermediate network.
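For example, the per-interface hello and dead intervals can be lowered (the interface name and values here are illustrative; both neighbors must agree on the timers):

```
interface GigabitEthernet0/0
 ! Detect a silent neighbor failure in ~12 seconds instead of the
 ! 40-second broadcast default
 ip ospf hello-interval 3
 ip ospf dead-interval 12
```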

Once a topology change has been detected, an LSA is generated and flooded to the
rest of the devices in the network. Recalculation of the routes will not occur until the
SPF timer has expired. The default value of this timer is 5 seconds. The SPF hold
time is also used to delay consecutive SPF calculations (to give the router some
breathing space). The default for this value is 10 seconds. As a result, the minimum
time for the routes to converge in case of failure is always going to be more than 5
seconds unless the SPF timers are tuned using OSPF throttle timers.
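The SPF throttle timers mentioned above can be tuned as in the following sketch (the process ID and millisecond values are illustrative):

```
router ospf 1
 ! timers throttle spf <initial-delay> <hold> <max-wait>, in milliseconds:
 ! first SPF runs 10 ms after a topology change, then the delay between
 ! consecutive runs backs off from 100 ms up to 5000 ms
 timers throttle spf 10 100 5000
```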

It is also possible to schedule SPF to run right after flooding the LSA information, but
this can potentially cause instabilities in the network: even a flash congestion in the
network for a very short duration could declare a link down and trigger an SPF run.

3.7 BGP

Understanding BGP Characteristics


The Internet has grown significantly over the past several decades. The current BGP
table in the Internet has more than 100,000 routes. Many enterprises have also
deployed BGP to interconnect their networks. These widespread deployments have
proven BGP’s capability to support large and complex networks.
The reason BGP has achieved its status in the Internet today is because it has the
following characteristics:

 Reliability
 Stability
 Scalability
 Flexibility

3.7.a Describe, implement and troubleshoot peer relationships

Configuring a BGP Peer


Perform this task to configure BGP between two IPv4 routers (peers). The address
family configured here is the default IPv4 unicast address family and the
configuration is done at Router A in the figure above.

Remember to perform this task for any neighbor routers that are to be BGP peers.

The configuration in this task is done at Router B in the figure below and would need
to be repeated with appropriate changes to the IP addresses, for example, at Router E
to fully configure a BGP process between the two routers.
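As a sketch of the task described above (AS numbers and peer addresses are hypothetical), the minimal configuration on one side of the session is:

```
router bgp 65001
 ! The remote-as value determines the session type: a different AS
 ! makes this an eBGP peering, the same AS would make it iBGP
 neighbor 192.0.2.2 remote-as 65002
 address-family ipv4 unicast
  ! Activate the neighbor for the default IPv4 unicast address family
  neighbor 192.0.2.2 activate
```

The mirror-image configuration, with the AS numbers and address reversed, is then applied on the peer.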

Figure 3.20 BGP Peer Topology



Layer 3 - IP Routing: Troubleshoot BGP - Peer relationship not established

Border Gateway Protocol (BGP) is an exterior gateway protocol and includes the
following naming conventions:

 BGP is "internal Border Gateway Protocol (IBGP)" when it runs within an
autonomous system (AS).
 BGP is "external Border Gateway Protocol (EBGP)" when it runs between
autonomous systems (ASs).

The current version is Border Gateway Protocol 4 (BGP-4, RFC 4271). Unless
otherwise stated, "Border Gateway Protocol (BGP)" refers to Border Gateway
Protocol 4 (BGP-4).

Peer relationship not established

Symptom

When displaying Border Gateway Protocol (BGP) peer information using the display
bgp peer command, the state of the connection to a peer is not Established.

Analysis
To become Border Gateway Protocol (BGP) peers, any two routers must establish a
TCP session using port 179 and exchange Open messages successfully.

Solution
 Use the display current-configuration command to check that the peer's
autonomous system (AS) number is correct.
 Use the display bgp peer command to check that the peer's IP address is correct.
 If a loopback interface is used, check that the loopback interface is specified with
the peer connect-interface command.
 If the peer is a non-direct external BGP (EBGP) peer, check that the peer ebgp-
max-hop command is configured.
 If the peer ttl-security hops command is configured, check that the command is
configured on the peer, and the hop-count values configured on the peers are
greater than the number of hops between them.
 Check that a valid route to the peer is available.
 Use the ping command to check the connectivity to the peer.
 Use the display tcp status command to check the TCP connection.
 Check whether an access control list (ACL) disabling TCP port 179 is configured.

3.7.b Implement and troubleshoot IBGP and EBGP

There are two types of BGP-4: Internal BGP-4 (iBGP-4) and External BGP-4 (eBGP-
4). The difference depends on the function of the routing protocol. The router will
determine if the peer BGP-4 router is going to be an external BGP-4 peer or an
internal BGP-4 peer by checking the autonomous system number in the open message
that was sent.
 Internal BGP-4 is used within an autonomous system (AS). It conveys
information to all BGP-4 routers within the domain and ensures that they have a
consistent understanding of the available networks. Internal BGP-4 is used within
an ISP or a large organization to coordinate the knowledge of that AS. The routers
are not required to be physical neighbors on the same medium, and are often
located on the edges of the network. Internal BGP-4 is used to convey BGP-4
information about other ASs across a transit autonomous system. Another routing
protocol, an interior routing protocol such as OSPF, is used to route the BGP-4
packets to their remote locations. To achieve this, internal BGP requires the
destination BGP neighbor’s IP address to be contained within the normal routing
table kept by another routing protocol.
 External BGP-4 complies with the common perception of an external routing
protocol; it sends routing information between differing ASs. Therefore, the
border router between different ASs is the external BGP router.

BGP-4 Synchronization
Before iBGP-4 can propagate a route into another AS by handing it over to eBGP-4,
the route must be fully known within the AS. In other words, the Interior Gateway
Protocol (IGP) or internal routing protocol must be synchronized with BGP-4. This
ensures that if traffic is sent into the AS, the interior routing protocol can direct it to
its destination. It thus prevents traffic from being forwarded to unreachable
destinations and reduces unnecessary traffic. It also ensures consistency within the
AS.

Synchronization is enabled by default, but in some cases it may be useful to turn off
synchronization, such as when all the routers in the AS are running BGP-4, when all
the routers inside the AS are fully meshed, or when the AS is not a transit domain,
i.e., an AS that is used to carry BGP-4 updates from one AS to another.
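On Cisco IOS, synchronization is toggled under the BGP process; as a sketch (the AS number is hypothetical):

```
router bgp 65001
 ! Disable the rule that an iBGP-learned route must also be known
 ! via the IGP before it is advertised or used
 no synchronization
```

Recent IOS releases disable synchronization by default, so the command may be absent from modern configurations.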

Inter-AS and ASBRs


Separate autonomous systems from different service providers can communicate by
exchanging IPv4 NLRI in the form of VPN-IPv4 addresses. The ASBRs use eBGP to
exchange that information. Then an Interior Gateway Protocol (IGP) distributes the
network layer information for VPN-IPV4 prefixes throughout each VPN and each
autonomous system. The following protocols are used for sharing routing
information:

• Within an autonomous system, routing information is shared using an IGP.
• Between autonomous systems, routing information is shared using eBGP.
eBGP lets service providers set up an inter-domain routing system that
guarantees the loop-free exchange of routing information between separate
autonomous systems.

The primary function of eBGP is to exchange network reachability information
between autonomous systems, including information about the list of autonomous
system routes. The autonomous systems use eBGP border edge routers to distribute
the routes, which include label-switching information. Each border edge router
rewrites the next-hop and MPLS labels.
Inter-AS configurations supported in an MPLS VPN can include:

• Inter-provider VPN—MPLS VPNs that include two or more autonomous
systems, connected by separate border edge routers. The autonomous systems
exchange routes using eBGP. No IGP or routing information is exchanged
between the autonomous systems.
• BGP Confederations—MPLS VPNs that divide a single autonomous system
into multiple sub-autonomous systems and classify them as a single,
designated confederation. The network recognizes the confederation as a
single autonomous system. The peers in the different autonomous systems
communicate over eBGP sessions; however, they can exchange route
information as if they were iBGP peers.

Transmitting Information Between Autonomous Systems


Figure 3.21 illustrates one MPLS VPN consisting of two separate autonomous
systems. Each autonomous system operates under different administrative control and
runs a different IGP. Service providers exchange routing information through eBGP
border edge routers (ASBR1 and ASBR2).
• Step 1 The provider edge router (PE-1) assigns a label for a route before
distributing that route. The PE router uses the multiprotocol extensions of BGP to
transmit label mapping information. The PE router distributes the route as a VPN-
IPv4 address. The address label and the VPN identifier are encoded as part of the
NLRI.
• Step 2 The two route reflectors (RR-1 and RR-2) reflect VPN-IPv4 internal
routes within the autonomous system. The border edge routers of the autonomous
system (ASBR1 and ASBR2) advertise the VPN-IPv4 external routes.
• Step 3 The eBGP border edge router (ASBR1) redistributes the route to the
next autonomous system (ASBR2). ASBR1 specifies its own address as the value
of the eBGP next-hop attribute and assigns a new label.
• The address ensures:
• That the next-hop router is always reachable in the service provider (P)
backbone network.
• That the label assigned by the distributing router is properly
interpreted. (The corresponding next-hop router must assign the label
associated with a route.)

• Step 4 The eBGP border edge router (ASBR2) redistributes the route in one of
the following ways, depending on the configuration:

• If the iBGP neighbors are configured with the next-hop-self command,
ASBR2 changes the next-hop address of updates received from the eBGP
peer, then forwards them.
• If the iBGP neighbors are not configured with the next-hop-self command, the
next-hop address does not get changed. ASBR2 must propagate a host route for the
eBGP peer through the IGP. To propagate the eBGP VPN-IPv4 neighbor host route,
use the redistribute command with the connected keyword. The eBGP VPN-IPv4
neighbor host route is automatically installed in the routing table when the neighbor
comes up. This automatic installation is essential to establish the label switched path
between PE routers in different autonomous systems. Without the redistribute
connected option, you need to manually configure a static route to the next hop and
redistribute it into the IGP, to let other PE routers use the /32 host prefix label to
forward traffic for an Inter-AS VPN.

3.8 ISIS (for IPv4 and IPv6)

3.8.a Describe basic ISIS network

IS-IS Enhancements for IPv6


IS-IS in IPv6 functions the same and offers many of the same benefits as IS-IS in
IPv4. IPv6 enhancements to IS-IS allow IS-IS to advertise IPv6 prefixes in addition to
IPv4 and OSI routes. Extensions to the IS-IS command-line interface (CLI) allow
configuration of IPv6-specific parameters. IPv6 IS-IS extends the address families
supported by IS-IS to include IPv6, in addition to OSI and IPv4.

IS-IS in IPv6 supports either single-topology mode or multiple topology modes.

IS-IS Single-Topology Support for IPv6


Single-topology support for IPv6 allows IS-IS for IPv6 to be configured on interfaces
along with other network protocols (for example, IPv4 and Connectionless Network
Service [CLNS]). All interfaces must be configured with the identical set of network
address families. In addition, all routers in the IS-IS area (for Level 1 routing) or the
domain (for Level 2 routing) must support the identical set of network layer address
families on all interfaces.

When single-topology support for IPv6 is being used, either old- or new-style TLVs
may be used. However, the TLVs used to advertise reachability to IPv6 prefixes use
extended metrics. Cisco routers do not allow an interface metric to be set to a value
greater than 63 if the configuration is not set to support only new-style TLVs for IPv4.
In single-topology IPv6 mode, the configured metric is always the same for both IPv4
and IPv6.

IS-IS Multi-topology Support for IPv6


IS-IS multi-topology support for IPv6 allows IS-IS to maintain a set of independent
topologies within a single area or domain. This mode removes the restriction that all
interfaces on which IS-IS is configured must support the identical set of network
address families. It also removes the restriction that all routers in the IS-IS area (for
Level 1 routing) or domain (for Level 2 routing) must support the identical set of
network layer address families. Because multiple SPFs are performed, one for each
configured topology, it is sufficient that connectivity exists among a subset of the
routers in the area or domain for a given network address family to be routable.

You can use the isis ipv6 metric command to configure different metrics on an
interface for IPv6 and IPv4.

When multi-topology support for IPv6 is used, use the metric-style wide command to
configure IS-IS to use new-style TLVs because TLVs used to advertise IPv6
information in link-state packets (LSPs) are defined to use only extended metrics.
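A multi-topology IPv6 configuration might be sketched as follows (the NET, interface name, address, and metric values are hypothetical):

```
router isis
 net 49.0001.0000.0000.0001.00
 ! Wide metrics are required for the IPv6 multi-topology TLVs
 metric-style wide
 address-family ipv6
  ! Use "multi-topology transition" instead while single-topology
  ! routers remain in the area or domain
  multi-topology
!
interface GigabitEthernet0/0
 ipv6 address 2001:db8:12::1/64
 ipv6 router isis
 ! A separate IPv6 metric is only meaningful in multi-topology mode
 isis ipv6 metric 20
```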
Transition from Single-Topology to Multi-topology Support for IPv6


All routers in the area or domain must use the same type of IPv6 support, either
single-topology or multitopology. A router operating in multitopology mode will not
recognize the ability of the single-topology mode router to support IPv6 traffic, which
will lead to holes in the IPv6 topology. To transition from single-topology support to
the more flexible multitopology support, a multitopology transition mode is provided.

The multi-topology transition mode allows a network operating in single-topology IS-


IS IPv6 support mode to continue to work while upgrading routers to include multi-
topology IS-IS IPv6 support. While in transition mode, both types of TLVs (single-
topology and multi-topology) are sent in LSPs for all configured IPv6 addresses, but
the router continues to operate in single-topology mode (that is, the topological
restrictions of the single-topology mode are still in effect). After all routers in the area
or domain have been upgraded to support multi-topology IPv6 and are operating in
transition mode, transition mode can be removed from the configuration. Once all
routers in the area or domain are operating in multi-topology IPv6 mode, the
topological restrictions of single-topology mode are no longer in effect.

3.8 b Describe neighbor relationship


This document provides a sample configuration for Intermediate System-to-
Intermediate System (IS-IS) over IP version 6 (IPv6).

Network Diagram
This document uses the network setup shown in the diagram below.

Configurations
This document uses the configurations shown below.

 c7200-1
 c7200-2

3.8 c Describe network types, levels and router types

NSAP addressing

NSAP is the network-layer address for CLNS packets. An NSAP describes an


attachment to a particular service at the network layer of a node, similar to the
combination of IP destination address and IP protocol number in an IP packet. NSAP
encoding and format are specified by ISO 8348/Ad2.

ISO 8348/Ad2 uses the concept of hierarchical addressing domains. The global
domain is the highest level. This global domain is subdivided into sub-domains, and
each sub-domain is associated with an addressing authority that has a unique plan for
constructing NSAP addresses.

An NSAP address consists of an initial domain part (IDP) and a domain-specific part
(DSP). The IDP consists of a 1-byte authority and format identifier (AFI) and a
variable-length initial domain identifier (IDI), and the DSP is a string of digits
identifying a particular transport implementation of a specified AFI authority.
Everything to the left of the system ID can be thought of as the area address of a
network node.

A network entity title (NET) is an NSAP with an n-selector of zero. All router NETs
have an n-selector of zero, implying the network layer of the IS itself (0 means no
transport layer). For this reason, the NSAP of a router is always referred to as a NET.
The NSEL (NSAP selector) is like a TCP port number: It indicates the transport layer.

Routers are identified with NETs of 8 to 20 bytes. ISO/IEC 10589 distinguishes only
three fields in the NSAP address format: a variable-length area address beginning
with a single octet, a system ID, and a 1-byte n-selector. Cisco implements a fixed
length of 6 bytes for the system ID, which is like the OSPF router ID.

The LSP identifier is derived from the system ID (along with the pseudo node ID and
LSP number). Each IS is usually configured with one NET and in one area; each
system ID within an area must be unique.

The big difference between NSAP style addressing and IP style addressing is that, in
general, there will be a single NSAP address for the entire router, whereas with IP
there will be one IP address per interface. All ISs and ESs in a routing domain must
have system IDs of the same length. All routers in an area must have the same area
address. All Level 2 routers must have a unique system ID domain-wide, and all
Level 1 routers must have a unique system ID area-wide. All ESs in an area will form
an adjacency with a Level 1 router on a shared media segment if they share the same
area address. If multiple NETs are configured on the same router, they must all have
the same system ID.
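For example, a NET can be read field by field (the values here are hypothetical):

```
router isis
 ! 49             = AFI (private addressing)
 ! 0001           = area ID        -> area address: 49.0001
 ! 1921.6801.0001 = 6-byte system ID (e.g., derived from loopback
 !                  192.168.10.1, zero-padded to 192.168.010.001)
 ! 00             = NSEL (always 0 on a router, making this a NET)
 net 49.0001.1921.6801.0001.00
```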
Point-to-point, broadcast
In Intermediate System-to-Intermediate System (IS-IS) Protocol, there are two types
of networks: point-to-point and broadcast. Unlike Open Shortest Path First (OSPF)
Protocol, IS-IS does not have other network types like non-broadcast and point-to-
multipoint. For each type of network, a different type of IS-IS Hello (IIH) packet is
exchanged to establish adjacency. On point-to-point networks, point-to-point IIHs are
exchanged; and on broadcast networks (such as LAN), Level 1 or Level 2 LAN IIHs
are exchanged. A frame-relay network that is running IS-IS can be configured to
belong to one of these network types, depending on the type of connectivity (Fully
meshed, Partially meshed, or Hub and Spoke) that is available between the routers
through the cloud. This document gives an example of a network type configuration
mismatch in such a scenario, and it shows how to diagnose and fix the problem.

3.8 d Describe operations


From a high level, IS-IS operates as follows:
 Routers running IS-IS will send hello packets out all IS-IS-enabled interfaces to
discover neighbors and establish adjacencies.
 Routers sharing a common data link will become IS-IS neighbors if their hello
packets contain information that meets the criteria for forming an adjacency. The
criteria differ slightly depending on the type of media being used (p2p or
broadcast). The main criteria are matching authentication, IS-type, and MTU size.
 Routers may build a link-state packet (LSP) based upon their local interfaces that
are configured for IS-IS and prefixes learned from other adjacent routers.
 Generally, routers flood LSPs to all adjacent neighbors except the neighbor from
which they received the same LSP. However, there are different forms of flooding
and also a number of scenarios in which the flooding operation may differ.
 All routers will construct their link-state database from these LSPs.
 A shortest-path tree (SPT) is calculated by each IS, and from this SPT the routing
table is built.

3.8 e Describe optimization features


The original IS-IS specification defines four different types of metrics. Cost, being the
default metric, is supported by all routers. Delay, expense, and error are optional
metrics. The delay metric measures transit delay, the expense metric measures the
monetary cost of link utilization, and the error metric measures the residual error
probability associated with a link.

The Cisco implementation uses cost only. If the optional metrics were implemented,
there would be a link-state database for each metric and SPF would be run for each
link-state database.

Default Metric
While some routing protocols calculate the link metric automatically based on
bandwidth (OSPF) or bandwidth/delay (Enhanced Interior Gateway Routing Protocol
[EIGRP]), there is no automatic calculation for IS-IS. Using old-style metrics, an
interface cost is between 1 and 63 (a 6-bit metric value). All links use a metric of 10
by default. The total cost to a destination is the sum of the costs on all outgoing
interfaces along a particular path from the source to the destination, and the least-cost
paths are preferred.

The total path metric was limited to 1023 (the sum of all link metrics along a path
between the calculating router and any other node or prefix). This small metric value
proved insufficient for large networks and provided too little granularity for new
features such as Traffic Engineering and other applications, especially with high
bandwidth links. Wide metrics are also required if route-leaking is used.

Extended Metric
Cisco IOS Software addresses this issue with the support of a 24-bit metric field, the
so-called "wide metric". Using the new metric style, link metrics have a maximum
value of 16777215 (2^24 - 1) with a total path metric of 4261412864 (254 x 2^24).

Deploying IS-IS in the IP network with wide metrics is recommended to enable finer
granularity and to support future applications such as Traffic Engineering.

Running different metric styles within one network poses one serious problem: Link-
state protocols calculate loop-free routes because all routers (within one area)
calculate their routing table based on the same link-state database. This principle is
violated if some routers look at old-style (narrow), and some at new-style (wider)
TLVs. However, if the same interface cost is used for both the old- and new-style
metrics, then the SPF will compute a loop-free topology.
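Wide metrics are enabled per process, and the larger per-interface values can then be assigned (names and values here are illustrative):

```
router isis
 ! Send and accept only new-style TLVs; use "metric-style transition"
 ! while old-style routers are still present in the network
 metric-style wide
!
interface GigabitEthernet0/0
 ip router isis
 ! Values above the old 63 limit are only possible with wide metrics
 isis metric 1000
```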
4.0 VPN Technologies

4.1 Tunneling
Virtual private network technology is based on the idea of tunneling. VPN tunneling
involves establishing and maintaining a logical network connection (that may contain
intermediate hops). On this connection, packets constructed in a specific VPN
protocol format are encapsulated within some other base or carrier protocol, then
transmitted between VPN client and server, and finally de-encapsulated on the
receiving side.

Compulsory VPN tunneling authenticates clients and associates them with specific
VPN servers using logic built into the broker device. This network device is
sometimes called the VPN Front End Processor (FEP), Network Access Server (NAS)
or Point of Presence Server (POS). Compulsory tunneling hides the details of VPN
server connectivity from the VPN clients and effectively transfers management
control over the tunnels from clients to the ISP. In return, service providers must take
on the additional burden of installing and maintaining FEP devices.

VPN Tunneling Protocols


Several computer network protocols have been implemented specifically for use with
VPN tunnels. The three most popular VPN tunneling protocols listed below continue
to compete with each other for acceptance in the industry. These protocols are
generally incompatible with each other.

Point-to-Point Tunneling Protocol (PPTP)


Several corporations worked together to create the PPTP specification. People
generally associate PPTP with Microsoft because nearly all flavors of Windows
include built-in client support for this protocol. The initial releases of PPTP for
Windows by Microsoft contained security features that some experts claimed were
too weak for serious use.

4.1.a Implement and troubleshoot MPLS operations

Label Switching
Traditionally, there are two different approaches to packet forwarding, each mapping
to a specific structure of the forwarding table. They are called forwarding by network
address and label switching.

The most intuitive approach is forwarding by network address that is the approach of
IP. When a packet arrives at a router, the router parses the destination address from
the packet header and looks it up in its forwarding table. The forwarding table has a
simple 2-column structure where each row maps a destination address to the egress
interface to which the packet should be forwarded. For scalability and efficiency
reasons, it
is possible to aggregate several destination prefixes into a single row, provided that
they can be numerically aggregated and that they share the same egress interface.

An alternative approach is known as label switching. Essentially, while forwarding by


network address requires that the egress interface be chosen based on the destination
of the packet, label switching requires that such an interface be chosen based on the
flow the packet belongs to, where a flow corresponds to an instance of transmission,
i.e., a set of packets, from a source to a destination and is identified by a tag (called
label) attached to each packet of the flow.

LDP Authentication
The Label Distribution Protocol (LDP) can also be secured with MD5 authentication
across the MPLS cloud. This prevents attackers from introducing bogus routers that
would participate in LDP.
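A minimal IOS sketch of LDP MD5 authentication (the neighbor address and password are placeholders); the same password must be configured on both peers:

```
! Applied in global configuration on each LDP peer
mpls ldp neighbor 10.0.0.2 password 0 S3cr3tKey
```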

MPLS VPNs
At its simplest, a virtual private network (VPN) is a collection of sites that share the
same routing table. A VPN is also a network in which customer connectivity to
multiple sites is deployed on a shared infrastructure with the same administrative
policies as a private network. The path between two systems in a VPN, and the
characteristics of that path, might also be determined (wholly or partially) by policy.
Whether a system in a particular VPN is allowed to communicate with systems not in
the same VPN is also a matter of policy.

In an MPLS VPN, the VPN generally consists of a set of sites that are interconnected
by means of an MPLS provider core network, but it is also possible to apply different
policies to different systems that are located at the same site. Policies can also be
applied to systems that dial in; the chosen policies would be based on the dial-in
authentication processes.

A given set of systems can be in one or more VPNs. A VPN can consist of sites (or
systems) that are all from the same enterprise (intranet), or from different enterprises
(extranet); it might consist of sites (or systems) that all attach to the same service
provider backbone, or to different service provider backbones.

Figure 4-1 VPNs Sharing Sites

4.1.b Implement and troubleshoot basic MPLS L3VPN


MPLS Layer 3 VPNs use a peer-to-peer model that uses Border Gateway Protocol
(BGP) to distribute VPN-related information. This highly scalable, peer-to-peer
model allows enterprise subscribers to outsource routing information to service
providers, resulting in significant cost savings and a reduction in operational
complexity for enterprises. Service providers can then offer value-added services like

Quality of Service (QoS) and Traffic Engineering, allowing network convergence that
encompasses voice, video, and data.
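As a hedged sketch of the PE-side building blocks implied above (VRF name, RD, and route-target values are hypothetical), each customer is placed in a VRF whose route distinguisher and route targets control how BGP distributes the VPN routes:

```
ip vrf CUSTOMER-A
 rd 65000:100
 route-target export 65000:100
 route-target import 65000:100
!
interface GigabitEthernet0/1
 ! Customer-facing interface bound to the VRF
 ip vrf forwarding CUSTOMER-A
 ip address 192.168.1.1 255.255.255.0
```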

IP-based VPNs can use Easy Virtual Network (EVN), a next-generation form of
Virtual Routing and Forwarding (VRF)-Lite. EVN simplifies Layer 3 network
virtualization and allows customers to easily provide traffic separation and path
isolation on a shared network infrastructure, removing the need to deploy MPLS in
the enterprise network. EVN is fully integrated with traditional MPLS VPN and
MPLS VPN over GRE.

Any-to-any IP connectivity among PEs

The first move is actually quite simple. It is nothing more than what any Interior
Gateway Protocol (IGP) is designed to achieve: seamless, redundant and dynamic IP-
level any-to-any connectivity. Since PEs are our encapsulation endpoints, we want
them to be reachable independent of the availability of specific network interfaces. In
other words, we do not want to use the IP address of physical interfaces for PEs, but
loopback addresses. A loopback address is an address associated with a virtual
interface of the router. Since it is virtual, a loopback interface is active independent
of the status of physical network interfaces. To fulfill Move 1, we simply assign a
loopback address to each PE router and use an IGP (e.g. OSPF or IS-IS) to announce
these addresses as /32 prefixes in order to ensure any-to-any connectivity among
them.
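On a PE, this move might look like the following sketch (addresses and the OSPF process ID are assumptions); the loopback is advertised as a /32 so that every PE remains reachable regardless of which physical links are up:

```
interface Loopback0
 ip address 10.255.0.1 255.255.255.255
!
router ospf 1
 ! Announce the loopback as a /32 host route into area 0
 network 10.255.0.1 0.0.0.0 area 0
```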

4.1.c – Implement and troubleshoot encapsulation

GRE

Pseudowire Headend


Pseudowires (PWs) enable payloads to be transparently carried across IP/MPLS
packet-switched networks (PSNs). PWs are regarded as simple and manageable
lightweight tunnels for returning customer traffic into core networks. Service
providers are now extending PW connectivity into the access and aggregation regions
of their networks.

Pseudowire Headend (PWHE) is a technology that allows termination of access


pseudo wires (PWs) into a Layer 3 (VRF or global) domain or into a Layer 2 domain.
PWs provide an easy and scalable mechanism for tunneling customer traffic into a
common IP/MPLS network infrastructure. PWHE allows customers to provision
features such as QOS access lists (ACL), L3VPN on a per PWHE interface basis, on a
service Provider Edge (PE) router.

Benefits of PWHE
Some of the benefits of implementing PWHE are:
• dissociates the customer facing interface (CFI) of the service PE from the
underlying physical transport media of the access or aggregation network
• reduces capex in the access or aggregation network and service PE
• distributes and scales the customer facing Layer 2 UNI interface set
• implements a uniform method of OAM functionality

• providers can extend or expand the Layer 3 service footprints


• provides a method of terminating customer traffic into a next generation network
(NGN)

L2VPN over GRE


To transport an IP packet over a generic routing encapsulation (GRE) tunnel, the
system first encapsulates the original IP packet with a GRE header. The encapsulated
GRE packet is encapsulated once again by an outer transport header that is used to
forward the packet to its destination.

Figure 4-2 GRE Encapsulation

When a GRE tunnel endpoint decapsulates a GRE packet, it forwards the
packet based on the payload type. For example, if the payload is a labeled packet,
the packet is forwarded based on the virtual circuit (VC) label or the VPN label, for
L2VPN and L3VPN respectively.

L2VPN over GRE Restrictions


Some of the restrictions that you must consider while configuring L2VPN over GRE:

• For VPLS flow-based load balancing scenario, the GRE tunnel is pinned down to
outgoing path based on tunnel source or destination cyclic redundancy check
(CRC). Unicast and flood traffic always takes the same physical path for a given
GRE tunnel.
• Ingress attachment circuit must be an ASR 9000 Enhanced Ethernet Line Card for
L2VPN over GRE. Additionally, GRE tunnel destination should be reachable only
on an ASR 9000 Enhanced Ethernet Line Card.
• The L2VPN over GRE feature is not supported when the ASR 9000 Ethernet Line
Card or Cisco ASR 9000 Series SPA Interface Processor-700 line card is the
ingress attachment circuit, or when the GRE destination is reachable only over
such a card.
• Pseudowire over TE over GRE scenario is not supported.
• Preferred Path Limitations:
o When you configure GRE as a preferred path, egress features are not
supported under the GRE tunnel (Egress ACL).
o VCCV ping or traceroute is not supported for preferred path.
o Preferred path is supported only for pseudowires configured in a provider
edge (PE) to PE topology.

GRE Deployment Scenarios


In an L2VPN network, you can deploy GRE in the following scenarios:

• Configuring GRE tunnel between provider edge (PE) to PE routers


• Configuring GRE tunnel between P to P routers
• Configuring GRE tunnel between P to PE routers

The following diagrams depict the various scenarios:

Figure 4-3 GRE tunnel configured between PE-to-PE routers

Figure 4-4 GRE tunnel configured between P to P routers

Figure 4-5 GRE tunnel configured between P to PE routers

GRE Tunnel as Preferred Path


The preferred tunnel path feature allows you to map pseudowires to specific GRE tunnels.
Attachment circuits are cross connected to GRE tunnel interfaces instead of remote
PE router IP addresses (reachable using IGP or LDP). Using preferred tunnel path, it
is always assumed that the GRE tunnel that transports the L2 traffic runs between the

two PE routers (that is, its head starts at the imposition PE router and terminates on
the disposition PE router).
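A hedged IOS sketch of this mapping (tunnel number, peer address, and pseudowire ID are hypothetical); the attachment circuit is cross-connected through a pseudowire class that names the GRE tunnel as its preferred path:

```
pseudowire-class PW-OVER-GRE
 encapsulation mpls
 ! Send this pseudowire's traffic over Tunnel1 instead of the IGP/LDP path
 preferred-path interface Tunnel1
!
interface GigabitEthernet0/0
 xconnect 10.255.0.2 100 pw-class PW-OVER-GRE
```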

Configuring L2VPN over GRE


Perform these tasks to configure L2VPN over GRE.

SUMMARY STEPS

1. configure
2. interface type interface-path-id
3. l2transport
4. exit
5. interface loopback instance
6. ipv4 address ip-address
7. exit
8. interface loopback instance
9. ipv4 address ip-address
10. router ospf process-name
11. area area-id
12. interface type interface-path-id
13. interface tunnel-ip number
14. exit
15. interface tunnel-ip number
16. ipv4 address ipv4-address mask
17. tunnel source type path-id
18. tunnel destination ip-address
19. end
20. l2vpn
21. bridge group bridge-group-name
22. bridge-domain bridge-domain-name
23. interface type interface-path-id
24. neighbor {A.B.C.D} {pw-id value}
25. mpls ldp
26. router-id {router-id}
27. interface tunnel-ip number
28. end or commit

DETAILED STEPS
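The summary steps above can be sketched as a single IOS XR configuration (interface names, addresses, and identifiers are all placeholders, and some steps are abbreviated):

```
interface GigabitEthernet0/1/0/0.1 l2transport
!
interface Loopback0
 ipv4 address 10.255.0.1 255.255.255.255
!
router ospf 1
 area 0
  interface Loopback0
  interface tunnel-ip1
!
interface tunnel-ip1
 ipv4 address 192.168.100.1 255.255.255.252
 tunnel source Loopback0
 tunnel destination 10.255.0.2
!
l2vpn
 bridge group BG1
  bridge-domain BD1
   interface GigabitEthernet0/1/0/0.1
   neighbor 10.255.0.2 pw-id 100
!
mpls ldp
 router-id 10.255.0.1
 interface tunnel-ip1
```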

LISP encapsulation principles supporting EIGRP OTP


Cisco Systems has been hoisting the EIGRP flag up high of late. Enhanced Interior
Gateway Routing Protocol--a routing protocol, of all things (yawn)--wouldn't seem
like anything to get too excited about, what with all the hoopla surrounding software-
defined networking, overlays and network virtualization. And yet Cisco is clearly
committed to EIGRP.

First of all, much of EIGRP has moved from proprietary status to open. Open EIGRP
was announced earlier this year, when Cisco released significant portions of the
EIGRP specification to the open source community in the form of an IETF draft. This
move met with mixed reviews and some sideways glances, but it demonstrates Cisco's
desire to see EIGRP used even more broadly in the industry. With the widespread
acceptance of the OSPF routing protocol, some might wonder what the point is, but
EIGRP fans can attest to its flexibility and scalability--EIGRP really stands up.

Cisco is giving EIGRP users an interesting feature, which was announced in June at
Cisco Live. EIGRP Over the Top (OTP) allows EIGRP routers to peer across a
service provider infrastructure without the SP's involvement. In fact, with OTP, the

provider won't see customer routes at all. EIGRP OTP acts as a provider-independent
overlay that transports customer data between the customer's routers.

To the customer, the EIGRP domain is contiguous. A customer's EIGRP router sits at
the edge of the provider cloud and peers with another EIGRP router at a different
location across the cloud. Learned routes feature a next hop of the customer router--
not the provider. Good news for service providers is that customers can deploy
EIGRP OTP without their involvement.

Inside EIGRP OTP

OTP is a genuinely new feature that Cisco has created for EIGRP, so let's peek under
the hood at how it works. There are a few key elements:

 Neighbors are discovered statically. There's no auto-discovery mechanism


here. For customers thinking about trying to build an EIGRP mesh across a
provider cloud and shuddering at the thought of manually configuring n-1
relationships per router, it's not as bad as all that. OTP does not require a mesh
of peering relationships to support a full mesh topology. (See the next point.)

 An OTP mesh scales by use of a route reflector (RR). When designing the
EIGRP OTP overlay, a customer selects a router to be a RR. When additional
customer routers are added to the OTP overlay, EIGRP is configured on that
new router to peer with the RR. The RR takes route advertisements in, and
reflects them out to all other EIGRP customer routers in the OTP overlay. The
RR preserves the next hop attribute of the route, which is critical. This means
that the RR is not going to be the hub of a hub and spoke forwarding topology.
Instead, a full forwarding mesh is formed. For example, let's say we've got
three routers: R1, R2 and R3. R1 is the RR. R1 is peered with R2 and R3.
When R2 advertises a route, let's say 10.2.2.0/24, to R1, R1 reflects
10.2.2.0/24 to R3, preserving R2 as the next hop of that route. When R3 needs
to talk to 10.2.2.0/24, it'll connect directly to R2, and not through R1, which
reflected the route to it.

 Metrics are preserved across the service provider cloud. In other words, the
EIGRP domain treats these neighbors and links just like any other EIGRP
neighbors and links. Therefore with OTP, a customer ends up with a
contiguous EIGRP domain across the SP cloud. That eliminates a common
scenario of isolated EIGRP domains at each customer office being
redistributed into the SP cloud, and then redistributed again from the SP cloud
into remote office EIGRP domains. OTP also eliminates the scenario of
multipoint GRE tunnels (for example, with DMVPN and NHRP) being nailed
up as a manually created overlay that runs EIGRP across it. An OTP
configuration is comparatively simple to code in IOS.
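The design above might be sketched in named-mode EIGRP as follows (addresses, AS number, and the LISP instance ID are assumptions); the route reflector listens for statically defined remote spokes, and each spoke statically peers with the RR:

```
! R1 (route reflector)
router eigrp OTP
 address-family ipv4 unicast autonomous-system 100
  ! Accept OTP peerings from remote spokes over LISP encapsulation
  remote-neighbors source Loopback0 unicast-listen lisp-encap 1
  network 10.0.0.0
```

```
! R2 (spoke)
router eigrp OTP
 address-family ipv4 unicast autonomous-system 100
  ! Statically peer with the RR (192.0.2.1) across the SP cloud
  neighbor 192.0.2.1 GigabitEthernet0/0 remote 100 lisp-encap 1
  network 10.2.2.0
```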

If you're thinking this through, you might be wondering how the EIGRP customer
routers can push traffic for remote subnets across the SP cloud, if the customer routers
are not advertising those subnets to the SP. The answer is that traffic between the
customer EIGRP routers is encapsulated in LISP packets. Therefore, all the SP needs
to know is how to get from customer router to customer router to carry the LISP

packets flowing between them; the SP will know this by virtue of the directly
connected routes used to uplink the customer routers to the SP cloud.

4.1.d - Implement and troubleshoot DMVPN (single hub)


The Dynamic Multipoint VPN (DMVPN) feature allows users to better scale large
and small IPSec VPNs by combining generic routing encapsulation (GRE) tunnels,
IPSec encryption, and Next Hop Resolution Protocol (NHRP) to provide users with
easy configuration through crypto profiles, which eliminate the requirement for
defining static crypto maps, and with dynamic discovery of tunnel endpoints.

With DMVPN, one central router, usually placed at the head office, undertakes the
role of the Hub while all other branch routers are Spokes that connect to the Hub
router so the branch offices can access the company’s resources. DMVPN consists of
two main deployment designs:

 DMVPN Hub & Spoke, used to perform headquarters-to-branch


interconnections
 DMVPN Spoke-to-Spoke, used to perform branch-to-branch interconnections

In both cases, the Hub router is assigned a static public IP Address while the branch
routers (spokes) can be assigned static or dynamic public IP addresses.

DMVPN combines multipoint GRE (mGRE) tunnels, IPSec encryption and NHRP
(Next Hop Resolution Protocol) to perform its job, sparing the administrator the need
to define multiple static crypto maps while still providing dynamic discovery of
tunnel endpoints.

NHRP is a Layer 2 resolution protocol and cache, much like Address Resolution
Protocol (ARP) or Inverse ARP (Frame Relay).
The Hub router undertakes the role of the server while the spoke routers act as the
clients. The Hub maintains a special NHRP database with the public IP Addresses of
all configured spokes.

Each spoke registers its public IP address with the hub and queries the NHRP
database for the public IP address of the destination spoke it needs to build a VPN
tunnel.
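A hedged single-hub sketch (all addresses, interface names, and the IPSec profile name are placeholders); the hub runs one mGRE interface while each spoke statically maps the hub as its NHRP next-hop server:

```
! Hub (static public IP 203.0.113.1)
interface Tunnel0
 ip address 10.0.0.1 255.255.255.0
 ip nhrp map multicast dynamic      ! replicate multicast to registered spokes
 ip nhrp network-id 1
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint
 tunnel protection ipsec profile DMVPN-PROFILE
!
! Spoke (static or dynamic public IP)
interface Tunnel0
 ip address 10.0.0.2 255.255.255.0
 ip nhrp map 10.0.0.1 203.0.113.1   ! static mapping for the hub
 ip nhrp map multicast 203.0.113.1
 ip nhrp nhs 10.0.0.1               ! hub acts as the NHRP server
 ip nhrp network-id 1
 tunnel source GigabitEthernet0/0
 tunnel mode gre multipoint
 tunnel protection ipsec profile DMVPN-PROFILE
```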

DMVPN Benefits

DMVPN provides a number of benefits, which have helped make them very popular
and highly recommended. These include:

 Simplified Hub Router Configuration. No more multiple tunnel interfaces
for each branch (spoke) VPN. A single mGRE tunnel interface and IPSec
profile, without any crypto access lists, is all that is required to handle all
Spoke routers. No matter how many Spoke routers connect to the Hub, the
Hub configuration remains constant.
 Full Support for Spoke Routers with Dynamic IP Addressing. Spoke
routers can use dynamic public IP Addresses. Thanks to NHRP, Spoke routers
rely on the Hub router to find the public IP Address of other Spoke routers and
construct a VPN Tunnel with them.
 Dynamic Creation of Spoke-to-Spoke VPN Tunnels. Spoke routers are able
to dynamically create VPN Tunnels between them as network data needs to
travel from one branch to another.
 Lower Administration Costs. DMVPN simplifies greatly the WAN network
topology, allowing the Administrator to deal with other more time-consuming
problems. Once setup, DMVPN continues working around the clock, creating
dynamic VPNs as needed and keeping every router updated on the VPN
topology.
 Optional Strong Security with IPSec. Optionally, IP Security (IPSec) can be
configured to provide data encryption and confidentiality. IPSec is used to
secure the mGRE tunnels by encrypting the tunnel traffic using a variety of
available encryption algorithms.

4.1.e Describe IPv6 tunneling techniques



When IPv6 development and initial deployment began in the 1990s, most of the
world's networks were already built on an IPv4 infrastructure. As a result, several
groups recognized that there was going to be a need for ways to transport IPv6 over
IPv4 networks, and, as some people anticipated, vice versa.

One of the key reasons for tunneling is that today's Internet is IPv4-based, yet at least
two major academic and research networks use IPv6 natively, and it is desirable to
provide mechanisms for hosts on those networks to reach each other over the IPv4
Internet. Tunneling is one of the ways to support that communication.

As you may gather, tunneling meets a number of needs in a mixed IPv4 and IPv6
world; as a result, several kinds of tunneling methods have emerged.

Tunneling, in a general sense, is encapsulating traffic. More specifically, the term


usually refers to the process of encapsulating traffic at a given layer of the OSI seven-
layer model within another protocol running at the same layer. Therefore,
encapsulating IPv6 packets within IPv4 packets and encapsulating IPv4 packets
within IPv6 packets are both considered tunneling.

However, you should be aware that both of these types of tunneling exist, in addition
to the ones covered here. With that in mind, consider some of the more common
tunneling methods, starting with a summary below:

Summary of Tunneling Methods


Tunnel Mode           Topology and Address Space              Applications
Automatic 6to4        Point-to-multipoint; 2002::/16          Connecting isolated IPv6
                      addresses                               island networks.
Manually configured   Point-to-point; any address space;      Carries only IPv6 packets
                      requires dual-stack support at both     across IPv4 networks.
                      ends
IPv6 over IPv4 GRE    Point-to-point; unicast addresses;      Carries IPv6, CLNS, and
                      requires dual-stack support at both     other traffic.
                      ends
ISATAP                Point-to-multipoint; any unicast        Intended for connecting IPv6
                      addresses                               hosts within a single site.
Automatic IPv4-       Point-to-multipoint; ::/96 address      Deprecated. Cisco recommends
compatible            space; requires dual-stack support      using ISATAP tunnels instead.
                      at both ends
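A manually configured tunnel from the table above might be sketched as follows (addresses are placeholders); both endpoints must be dual-stack routers:

```
interface Tunnel0
 ! IPv6 runs inside the tunnel; IPv4 carries it across the network
 ipv6 address 2001:DB8:1::1/64
 tunnel source 192.0.2.1
 tunnel destination 198.51.100.2
 tunnel mode ipv6ip
```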

4.1.g Describe basic Layer 2 VPN — wireline

Overview of Dial Access


With MPLS VPN, a service provider can create scalable, efficient, and feature-rich
customer VPNs across the core of a network. Adding remote dial access integration

provides the remote customer edge router (CE) to provider edge router (PE) link that
integrates dial users into their MPLS VPNs.

Cisco remote dial access integration covers the following scenarios:

• Individuals dialing in over ISDN or the analog public switched telephone


network (PSTN) to a PE from their laptop computers, or users at a remote
office dialing in to a PE through a CE. This is dial-in access.
• A CE dialing in to a PE, creating a backup link for use when a primary, direct
remote connection, such as cable or digital subscriber line (DSL), has failed.
This is dial backup access.
• A PE dialing out to a remote CE, with the call triggered by traffic coming
from the MPLS VPN. For example, a central database system might connect
to vending machines at night to collect daily sales data and check inventories.
This is dial-out access.

Figure 4-6 shows a service provider network with several kinds of remote dial access.
In this example, the customer is outsourcing all remote access operations to the
service provider, but the service provider operates an MPLS VPN that interconnects
all customer sites.

Figure 4-6 Overview of Remote Dial Access to MPLS VPN

Overview of L2TP Dial-in Remote Access


Layer 2 Tunnel Protocol (L2TP) dial-in access is designed for service providers
who want to offer wholesale dial service to their customers. The service provider (or a
large Internet service provider) maintains geographically dispersed points of presence
(POPs). A customer of the service provider dials in to a network access server (NAS)
at a local POP, and the NAS creates a virtual private dial network (VPDN) tunnel to
the customer’s network.

Figure 4-7 Topology of L2TP Dial-in Access to MPLS VPN



These are the main events in the call flow that corresponds to the topology shown in
the figure:

1. The remote user initiates a PPP connection to a network access server (NAS)
using either analog service or ISDN. If MLP is enabled, the session is identified as
potentially a part of an MLP bundle.
2. The NAS accepts the connection and a PPP or MLP link is established.
3. The NAS partially authenticates the user with Challenge Handshake
Authentication Protocol (CHAP) or Password Authentication Protocol (PAP). The
domain name or dialed number identification service (DNIS) is used to determine
whether the user is a VPN client. If the user is not a VPN client (the service
provider is also the user’s ISP), authentication continues on the NAS. If the user is
a VPN client, as in the L2TP dial-in scenario, the AAA server returns the address
of a virtual home gateway/provider edge router (VHG/PE).
4. If an L2TP tunnel does not exist, the NAS initiates a tunnel to the VHG/PE. The
NAS and the VHG/PE authenticate each other before any sessions are attempted
within a tunnel.

A VHG/PE can also accept tunnel creation without the NAS providing tunnel
authentication.

5. Once the tunnel exists, a session within the tunnel is created for the remote user,
and the PPP connection is extended to terminate on the VHG/PE.
6. The NAS propagates all available PPP information (the LCP negotiated options
and the partially authenticated CHAP/PAP information) to the VHG/PE.
7. The VHG/PE associates the remote user with a specific customer MPLS VPN.
The VPN's virtual routing/forwarding instance (VRF) has been instantiated on the
VHG/PE. (The VRF is information associated with a specific VPN.)
8. The VHG/PE completes the remote user's authentication.
9. The VHG/PE obtains an IP address for the remote user.
10. The remote user becomes part of the customer VPN. Packets flow from and to the
remote user.
11. If MLP is enabled, the remote user initiates a second PPP link of the MLP bundle.

The above steps are repeated, except that an IP address is not obtained; the existing IP
address is used. The remote user can use both PPP sessions. Packets are fragmented
across links and defragmented on the VHG/PE, with both MLP bundles being put into
the same VRF. The VRF includes routing information for a specific customer VPN
site.

Overview of Direct ISDN PE Dial-in Remote Access


In direct ISDN PE dial-in access to an MPLS VPN, a NAS functions as both NAS and
PE. (For that reason, the NAS is referred to here as a NAS/PE.) In contrast to an L2TP
dial-in access session, the PPP session is placed directly in the appropriate VRF for
the MPLS VPN, rather than being forwarded to a network concentrator by a tunneling
protocol. Direct dial-in is implemented only with pure ISDN calls, not analog calls.

Figure 4-8 shows an example of direct dial-in topology.

These are the main events in the call flow that corresponds to the topology shown in
Figure 4-8:

1. The remote user initiates a PPP or MLP connection to the NAS/PE using ISDN.
2. The NAS/PE accepts the connection, and a PPP or MLP link is established.
3. The NAS/PE authorizes the call with the service provider AAA server.
Authorization is based on the domain name or DNIS.
4. The service provider AAA server associates the remote user with a specific VPN
and returns the corresponding VPN routing/forwarding instance (VRF) name to
the NAS/PE, along with an IP address pool name.
5. The NAS/PE creates a virtual access interface to terminate the user’s PPP
sessions. Part of the virtual interface’s configuration will have been retrieved from
the service provider AAA server as part of the authorization. The remainder
comes from a locally configured virtual template.
6. CHAP continues and completes. An IP address is allocated to the remote user.
You can use any of several different methods for address assignment.
7. The remote user is now part of the customer VPN. Packets can flow from and to
the remote user.

Direct ISDN PE Dial-in Components



This section describes the components of direct ISDN PE dial-in, the role each
component plays, and the specific platforms and software this architecture supports.
Table 3-1 describes additional components common to dial access methods.

Network Access Servers/Provider Edge Routers


Each NAS performs both NAS and PE functions:

1. It receives incoming PPP sessions over ISDN.


2. It terminates the PPP session in an MLP virtual access bundle.
3. It inserts the bundle into the specific customer VRF domain.
4. It removes PPP encapsulation.
5. It forwards the IP header and data to the MPLS VPN network through tag
switching.

Table 4-1 lists the platforms that direct ISDN PE dial-in supports.

Overview of Dial Backup


You can use dial backup to provide a fallback link for a primary, direct connection
such as cable or DSL.

If you use L2TP dial-in architecture, dial backup provides connectivity from the
customer's remote office to the customer's VPN when the primary link becomes
unavailable.

You typically configure the primary link and the backup link on the same CE router at
the remote site.

Call flow in dial backup is identical to that in L2TP dial-in access, except that the call
is initiated by a backup interface when connectivity to the primary interface is lost,
instead of by a remote user. A dialer interface is configured to dial in to the service
provider’s NAS using a dial backup phone number. The phone number indicates that
dial backup is being initiated instead of a typical L2TP dial-in.

Using L2TP, the NAS tunnels the PPP session to the VHG/PE, which then maps the
incoming session into the appropriate VRF. The VRF routing tables on all remote PEs
must converge; updates come from the VHG/PE.

When the primary link is restored, the primary route is also restored, the remote user
terminates the backup connection, and the VHG/PE deletes the backup route.

Figure 4-9 shows an example of topology for dial backup.



Dial Backup Components and Features


Like L2TP dial-in, dial backup requires a NAS and a VHG/PE.

No Address Assignment
Because dial backup is used primarily to connect remote sites (not remote users) to a
customer VPN, address assignment is not needed.

MLP Typically Used


Backup links are typically MLP links, and you can configure an IGP routing protocol
on the backup link.

Static or Dynamic Routing Must Be Provisioned


If routing is not enabled on the links between the CE and the VHG/PE, you must
provision static VRF routes on the VHG/PE. For the primary link, provisioning is
straightforward. The primary static route is withdrawn when the primary link goes
down, due to lack of connectivity. For the backup PPP session, you can download the
static route from the RADIUS AAA server as part of the virtual profile (framed-route
attribute). The route is then inserted into the appropriate VRF when the backup virtual
interface is brought up.

When the primary link is restored, the primary static VRF route is also restored, and
the CE terminates the backup connection. The PE then deletes the backup static VRF
route.

Alternatively, you can configure dynamic routing on both the primary and the backup
CE-PE link.

Authentication by Service Provider AAA server


With dial backup, authentication of the remote CE is similar to remote user
authentication in L2TP dial-in. If there is a managed CE, the service provider AAA
server can authenticate the remote CE; proxy authentication is not needed.

Accounting
The service provider AAA server or RADIUS proxy on the VHG/PE maintains
accounting records, including MLP information, for the duration of the backup
session.

Overview of Dial-out Access



In dial-out remote access, instead of a remote user or CE initiating a call into the
MPLS VPN, the connection is established by traffic coming from the MPLS VPN and
triggering a call from the dial-out router to the remote CE. Dial-out access can use
either L2TP or direct ISDN architecture.

Dial-out is often used for automated functions. For example, a central database system
might dial out nightly to remote vending machines to collect daily sales data and
check inventories.

In this release of Cisco Remote Access to MPLS VPN integration, the dialer interface
used is a dialer profile. With a dialer profile, each physical interface becomes a
member of a dialer pool. The VHG/PE (in L2TP dial-out) or the NAS/PE (in direct
dial-out) triggers a call when it receives interesting traffic from a remote peer in the
customer VPN. (“Interesting traffic” is traffic identified as destined for this particular
dial-out network.)
Based on the dialer interface configuration, the VHG/PE or NAS/PE borrows a
physical interface from the dialer pool for the duration of the call. Once the call is
complete, the router returns the physical interface to the dialer pool. Because of this
dynamic binding, different dialer interfaces can be configured for different customer
VPNs, each with its own VRF, IP address, and dialer string.

Unlike dial-in remote access, dial-out access does not require the querying of an AAA
server or the use of two-way authentication, because user information is directly
implemented on the dialer profile interface configured on the dial-out router.

Figure 4-10 shows an example of the topology for L2TP dial-out access, and Figure
4-11 shows an example of the topology for direct ISDN dial-out access.

Figure 4-10 Topology of L2TP Dial-out Remote Access

Figure 4-11 Topology of Direct ISDN Dial-out Remote Access



4.1.h Describe basic L2VPN — LAN services


VPLS Overview

Virtual Private LAN Services (VPLS) enables enterprises to link together their
Ethernet-based LANs from multiple sites via the infrastructure provided by their
service provider. From the enterprise perspective, the service provider’s public
network looks like one giant Ethernet LAN. For the service provider, VPLS provides
an opportunity to deploy another revenue-generating service on top of the existing
network without major capital expenditures. Operators can extend the operational life
of equipment in their network.

VPLS uses the provider core to join multiple attachment circuits together, simulating
a virtual bridge that connects them. From a
customer point of view, there is no topology for VPLS. All customer edge (CE)
devices appear to connect to a logical bridge emulated by the provider core (see the
figure below).

Full-Mesh Configuration

A full-mesh configuration requires a full mesh of tunnel label switched paths (LSPs)
between all provider edge (PE) devices that participate in Virtual Private LAN
Services (VPLS). With a full mesh, signaling overhead and packet replication
requirements for each provisioned virtual circuit (VC) on a PE can be high.

You set up a VPLS by first creating a virtual forwarding instance (VFI) on each
participating PE device. The VFI specifies the VPN ID of a VPLS domain, the
addresses of other PE devices in the domain, and the type of tunnel signaling and
encapsulation mechanism for each peer PE device.

The set of VFIs formed by the interconnection of the emulated VCs is called a VPLS
instance; it is the VPLS instance that forms the logical bridge over a packet-switched
network. After the VFI has been defined, it needs to be bound to an attachment circuit
to the CE device. The VPLS instance is assigned a unique VPN ID.

PE devices use the VFI to establish a full-mesh LSP of emulated VCs to all other PE
devices in the VPLS instance. PE devices obtain the membership of a VPLS instance
through static configuration using the Cisco IOS CLI.
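As a sketch, a VFI on one PE might be configured as follows (the VPN ID, peer addresses, and the VLAN used as the attachment circuit are hypothetical):

```
! Hypothetical VFI for one VPLS domain on a PE
l2 vfi VPLS_A manual
 vpn id 100
 neighbor 10.0.0.2 encapsulation mpls
 neighbor 10.0.0.3 encapsulation mpls
!
! Bind the VFI to the attachment circuit toward the CE
interface Vlan100
 xconnect vfi VPLS_A
```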

A full-mesh configuration allows the PE device to maintain a single broadcast
domain. When the PE device receives a broadcast, multicast, or unknown unicast
packet on an attachment circuit (AC), it sends the packet out on all other ACs and
emulated circuits to all other CE devices participating in that VPLS instance. The CE
devices see the VPLS instance as an emulated LAN.

To avoid the problem of a packet looping in the provider core, PE devices enforce a
“split-horizon” principle for emulated VCs. In a split horizon, if a packet is received
on an emulated VC, it is not forwarded on any other emulated VC.

The packet forwarding decision is made by looking up the Layer 2 VFI of a particular
VPLS domain.

A VPLS instance on a particular PE device receives Ethernet frames that enter on
specific physical or logical ports and populates a MAC table similarly to how an
Ethernet switch works. The PE device can use the MAC address to switch these
frames into the appropriate LSP for delivery to the PE device at a remote site.

If the MAC address is not available in the MAC address table, the PE device
replicates the Ethernet frame and floods it to all logical ports associated with that
VPLS instance, except the ingress port from which it just entered. The PE device
updates the MAC table as it receives packets on specific ports and removes addresses
not used for specific periods.

Static VPLS Configuration

Virtual Private LAN Services (VPLS) over Multiprotocol Label Switching-Transport
Profile (MPLS-TP) tunnels allows you to deploy a multipoint-to-multipoint layer 2
operating environment over an MPLS-TP network for services such as Ethernet
connectivity and multicast video. To configure static VPLS, you must specify a static
range of MPLS labels using the mpls label range command with the static keyword.
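For example, the following sketch reserves labels 1001 through 1003 for dynamic use and 10000 through 25000 for static assignment (the values are illustrative):

```
mpls label range 1001 1003 static 10000 25000
```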

H-VPLS

Hierarchical VPLS (H-VPLS) reduces signaling and replication overhead by using
full-mesh and hub-and-spoke configurations. Hub-and-spoke configurations operate
without split horizon, allowing packets to be switched between pseudowires (PWs)
and effectively reducing the number of PWs between provider edge (PE) devices.

4.2 Encryption

4.2.a Implement and troubleshoot IPsec with preshared key

IPsec for IPv6


IP Security, or IPsec, is a framework of open standards developed by the Internet
Engineering Task Force (IETF) that provide security for transmission of sensitive
information over unprotected networks such as the Internet. IPsec acts at the network
layer, protecting and authenticating IP packets between participating IPsec devices
(peers), such as Cisco routers. IPsec provides the following optional network security
services. In general, local security policy will dictate the use of one or more of these
services:

 Data confidentiality--The IPsec sender can encrypt packets before sending them
across a network.
 Data integrity--The IPsec receiver can authenticate packets sent by the IPsec
sender to ensure that the data has not been altered during transmission.
 Data origin authentication--The IPsec receiver can authenticate the source of the
IPsec packets sent. This service depends upon the data integrity service.
 Antireplay--The IPsec receiver can detect and reject replayed packets.
With IPsec, data can be sent across a public network without observation,
modification, or spoofing. IPsec functionality is similar in both IPv6 and IPv4;
however, in IPv6 only site-to-site tunnel mode is supported.

In IPv6, IPsec is implemented using the authentication header (AH) and the
Encapsulating Security Payload (ESP) extension header. The authentication header
provides integrity and authentication of
the source. It also provides optional protection against replayed packets. The
authentication header protects the integrity of most of the IP header fields and
authenticates the source through a signature-based algorithm. The ESP header
provides confidentiality, authentication of the source, connectionless integrity of the
inner packet, antireplay, and limited traffic flow confidentiality.

The Internet Key Exchange (IKE) protocol is a key management protocol standard
that is used in conjunction with IPsec. IPsec can be configured without IKE, but IKE
enhances IPsec by providing additional features, flexibility, and ease of configuration
for the IPsec standard.

IKE is a hybrid protocol that implements the Oakley key exchange and Skeme key
exchange inside the Internet Security Association Key Management Protocol
(ISAKMP) framework (ISAKMP, Oakley, and Skeme are security protocols
implemented by IKE) (see the figure below). This functionality is similar to the
security gateway model using IPv4 IPsec protection.
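A minimal site-to-site IPsec configuration with a preshared key might look like the following sketch; the peer addresses, key, names, and ACL are hypothetical:

```
! IKE phase 1 policy using a preshared key
crypto isakmp policy 10
 encryption aes
 hash sha
 authentication pre-share
 group 2
crypto isakmp key MyPresharedKey address 192.0.2.2
!
! IPsec transform set and crypto map (phase 2)
crypto ipsec transform-set TSET esp-aes esp-sha-hmac
crypto map CMAP 10 ipsec-isakmp
 set peer 192.0.2.2
 set transform-set TSET
 match address 101
!
! Traffic to be protected
access-list 101 permit ip 10.1.1.0 0.0.0.255 10.2.2.0 0.0.0.255
!
interface GigabitEthernet0/0
 crypto map CMAP
```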

VIRTUAL TUNNEL INTERFACES


Cisco IPSec VTIs are a new tool that customers can use to configure IPSec-based
VPNs between site-to-site devices. IPSec VTI tunnels provide a designated pathway
across a shared WAN and encapsulate traffic with new packet headers, which helps to
ensure delivery to specific destinations. The network is private because traffic can
enter a tunnel only at an endpoint. In addition, IPSec provides true confidentiality
through encryption of the traffic it carries.

With IPSec VTIs, users can provide highly secure connectivity for site-to-site VPNs
and can be combined with Cisco AVVID (Architecture for Voice, Video and
Integrated Data) to deliver converged voice, video, and data over IP networks.

BENEFITS
 Simplifies management---Customers can use the Cisco IOS® Software virtual
tunnel constructs to configure an IPSec virtual tunnel interface, thus simplifying
VPN configuration complexity, which translates into reduced costs because the
need for local IT support is minimized. In addition, existing management
applications that can monitor interfaces can be used for monitoring purposes.
 Supports multicast encryption---Customers can use the Cisco IOS Software IPSec
VTIs to transfer the multicast traffic, control traffic, or data traffic---for example,
many voice and video applications---from one site to another securely.
 Provides a routable interface---Cisco IOS Software IPSec VTIs can support all
types of IP routing protocols. Customers can use these VTI capabilities to connect
larger office environments---for example, a branch office, complete with a private
branch exchange (PBX) extension.
 Improves scaling---IPSec VTIs need fewer established security associations to
cover different types of traffic, both unicast and multicast, thus enabling improved
scaling.
 Offers flexibility in defining features---An IPSec VTI is an encapsulation within
its own interface. This offers flexibility of defining features to run on either the
physical or the IPSec interface.

VTI
The use of IPsec VTIs both greatly simplifies the configuration process when you
need to provide protection for remote access and provides a simpler alternative to
using generic routing encapsulation (GRE) or Layer 2 Tunneling Protocol (L2TP)
tunnels. A major benefit of IPsec VTIs is that the configuration does not require a
static mapping of IPsec sessions to a physical interface. The IPsec tunnel endpoint is
associated with an actual (virtual) interface. Because there is a routable interface at
the tunnel endpoint, many common interface capabilities can be applied to the IPsec
tunnel.

The IPsec VTI allows for the flexibility of sending and receiving both IP unicast and
multicast encrypted traffic on any physical interface, such as in the case of multiple
paths. Traffic is encrypted or decrypted when it is forwarded from or to the tunnel
interface and is managed by the IP routing table. Using IP routing to forward the
traffic to the tunnel interface simplifies the IPsec VPN configuration compared to the
more complex process of using access control lists (ACLs) with the crypto map in
native IPsec configurations. Because DVTIs function like any other real interface you
can apply quality of service (QoS), firewall, and other security services as soon as the
tunnel is active.
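A static VTI sketch, assuming the transform set and IKE preshared key are already defined as in the previous section (interface numbers, names, and addresses are hypothetical):

```
! IPsec profile referencing an existing transform set
crypto ipsec profile VTI_PROF
 set transform-set TSET
!
! Routable tunnel interface protected by IPsec
interface Tunnel0
 ip address 172.16.1.1 255.255.255.252
 tunnel source GigabitEthernet0/0
 tunnel destination 192.0.2.2
 tunnel mode ipsec ipv4
 tunnel protection ipsec profile VTI_PROF
```

With this design, routes pointing at Tunnel0 decide which traffic is encrypted, instead of a crypto ACL in a crypto map.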

Without the VPN Acceleration Module 2+ (VAM2+) accelerating virtual interfaces, the
packet traversing an IPsec virtual interface is directed to the Route Processor (RP)
for encapsulation. This method tends to be slow and has limited scalability. In
hardware crypto mode, the VAM2+ crypto engine accelerates all the IPsec VTIs, and
all traffic going through the tunnel is encrypted and decrypted by the VAM2+.

4.2.b Describe GET VPN

Implementing GET VPN

Today's networked applications, such as voice and video, are accelerating the
necessity for instantaneous, branch-interconnected, and QoS-enabled WANs. And the
distributed nature of these applications results in increased demands for scale. At the
same time, enterprise WAN technologies force businesses to trade off between QoS-
enabled branch interconnectivity and transport security. As network security risks
increase and regulatory compliance becomes essential, GET VPN, a next-generation
WAN encryption technology, eliminates the need to compromise between network
intelligence and data privacy.

With the introduction of GET, Cisco now delivers a new category—tunnel-less VPN
—that eliminates the need for tunnels. By removing the need for point-to-point
tunnels, meshed networks can scale higher while maintaining network-intelligence
features critical to voice and video quality. GET offers a new standards-based security
model that is based on the concept of "trusted" group members. Trusted member
routers use a common security methodology that is independent of any point-to-point
IPsec tunnel relationship. By using trusted groups instead of point-to-point tunnels,
"any-any" networks can scale higher while maintaining network-intelligence features
(such as QoS, routing, and multicast), which are critical to voice and video quality.

GET-based networks can be used in a variety of WAN environments, including IP
and Multiprotocol Label Switching (MPLS). MPLS VPNs that use this encryption
technology are highly scalable, manageable, and cost-effective, and they meet
government-mandated encryption requirements. The flexible nature of GET allows
security-conscious enterprises either to manage their own network security over a
service provider WAN service or to offload encryption services to their providers.
GET simplifies securing large Layer 2 or MPLS networks that require partial or full-
mesh connectivity.

Cisco Group Encrypted Transport VPN Architecture


GET VPN is an enhanced solution that encompasses Multicast Rekeying, a Cisco
solution for enabling encryption for "native" multicast packets, and unicast rekeying
over a private WAN. Multicast Rekeying and GET VPN are based on GDOI as defined
in Internet Engineering Task Force (IETF) RFC 3547. In addition, there are
similarities to IPsec in the area of header preservation and SA lookup. Dynamic
distribution of IPsec SAs has been added, and tunnel overlay properties of IPsec have
been removed.

Figure 4-12 illustrates a high-level GET VPN system configuration topology.

Figure 4-12 System Configuration Topology



The topology in Figure 4-12 is used to set up the GET VPN network. The IP VPN core
interconnects VPN sites as shown in the figure. The CE/CPE routers (GMs A and B) on each VPN
site are grouped into a GDOI group. Therefore, all KSs and GMs are part of the same
VPN. KS-1 is the primary KS and KS-2 is the secondary KS.
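On a group member, registration with the key server can be sketched as follows (the group ID, key-server address, key, and names are hypothetical):

```
! IKE phase 1 toward the key server
crypto isakmp policy 10
 encryption aes
 authentication pre-share
 group 2
crypto isakmp key GetVpnKey address 10.0.0.10
!
! GDOI group pointing at the key server (KS)
crypto gdoi group GETVPN_GROUP
 identity number 1234
 server address ipv4 10.0.0.10
!
! GDOI crypto map applied to the WAN-facing interface
crypto map GETVPN_MAP 10 gdoi
 set group GETVPN_GROUP
!
interface GigabitEthernet0/0
 crypto map GETVPN_MAP
```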

5 Infrastructure Security

5.1 Device Security

5.1.a Implement and troubleshoot IOS AAA using local database

Authentication, Authorization, and Accounting (AAA)


Triple A (AAA) provides a modular framework for the three key security functions in
any network environment. These three functions are: Authentication, Authorization,
and Accounting.

Authentication
Authentication is the process of identifying the user that is attempting to access a
networking device, such as an end station or a router. This is usually performed by
means of logon credentials provided by the user. Logon credentials consist of
two parts: user identification and proof of identity. A username or e-mail address is
usually used for user identification, while a password is usually used as proof of
identity.

Line Authentication
The most common form of authentication on a router is line authentication, which
uses different passwords to authenticate users depending on the line they are using to
connect to the router. However, line authentication is limited because all users must
use the same password to authenticate. Generally, line authentication is acceptable in
environments that have few administrators and few routers. However, when an
administrator leaves the group, all passwords on all routers should be changed to
ensure continued security.

Local Authentication
Local authentication provides increased security and allows for greater accountability
and more exacting control on a local router. With local authentication, each user has a
separate username and password, which are stored locally in the router configuration
and allow for additional password protection and logging. Because each user must be
created on each router, the administration of local authentication can be a
time-consuming task if there are a large number of routers or users.

To use local authentication, you must configure the router to use the local list of users
when authenticating. You can do this by using the login local command. Once a user
has been authenticated, you can use the show users command to monitor the user’s
actions.
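A minimal local-authentication sketch (the username and secret are placeholders):

```
! Local user database entry with a hashed secret
username admin privilege 15 secret MyS3cret
!
! Authenticate VTY sessions against the local database
line vty 0 4
 login local
```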

Remote Security Servers


A remote security server, which is also called an authentication server, provides
centralized management of usernames and passwords. All usernames and passwords
are stored centrally on the remote security server. When a user attempts to
authenticate to a router, the router passes the username and password information to
the remote security server. The security server then compares the user credentials with
the user database to determine if the user should be permitted access to the router.
This greatly reduces the administrative effort required to manage user authentication.

Cisco routers support three types of security servers: RADIUS, TACACS+, and
Kerberos.
 Remote Authentication Dial-In User Service (RADIUS), which was developed
by the Internet Engineering Task Force (IETF) and comprises a set of
authentication server and client protocols that provide security to networks against
unauthorized access. RADIUS uses a client/server architecture with the router
typically representing the client, and a Windows NT or UNIX server running the
RADIUS software representing the server.

The RADIUS authentication process has three stages: first, the user is prompted
for a username and password; second, the username and encrypted password are
sent over the network to the RADIUS server; and third, the RADIUS server replies
with an Accept if the user has been successfully authenticated, a Reject if
the username and/or password are invalid, a Challenge if the RADIUS server
requests additional information, or a Change Password if the user’s password
needs to be changed.
 Terminal Access Controller Access Control System Plus (TACACS+) is a Cisco
development of the TACACS protocol that is specifically designed to interact with
Cisco’s AAA framework and is similar in purpose to RADIUS. The
TACACS server handles the full implementation of AAA features: Authentication
includes messaging support in addition to login and password functions,
Authorization enables explicit control over user capabilities, and Accounting
supplies detailed information about user activities.

TACACS+ can be enabled by using the aaa commands. TACACS+ makes
provision for individual and modular authentication, authorization, and accounting
facilities and allows a single access control server, the TACACS+ daemon, to
supply authentication, authorization, and accounting services separately.
 The Kerberos protocol was designed by the Massachusetts Institute of
Technology (MIT), and provides strong authentication for client/server
applications by using secret-key cryptography based on the Data Encryption
Standard (DES) cryptographic algorithm. Kerberos maintains a database of its
clients and their secret keys; only Kerberos and the client a key belongs to
know that key. User passwords are stored in encrypted form. Network services
that require authentication register with Kerberos. Kerberos can generate
messages that convince one client that another is really who it claims to be.
Kerberos can also distribute temporary private keys, known as session keys, to
two clients exclusively. These two clients can use the session key to encrypt
messages.

Kerberos can be used like RADIUS or TACACS+ for authenticating a user. Once a
user is authenticated with Kerberos, an admission ticket is granted. The ticket
allows the user to access other resources on the network without resubmitting the
password across the network. These tickets have a limited life span; upon
expiration, they must be renewed to access resources again.
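The aaa commands mentioned above for TACACS+ can be sketched as follows (the server address and key are hypothetical; some older releases accept tacacs+ in place of group tacacs+ in the method lists):

```
aaa new-model
tacacs-server host 10.1.1.100 key MyTacacsKey
!
! Separate method lists for each AAA function, falling back to the
! local database if no TACACS+ server responds
aaa authentication login default group tacacs+ local
aaa authorization exec default group tacacs+ if-authenticated
aaa accounting exec default start-stop group tacacs+
```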

5.1.b Implement and troubleshoot device access control

TACACS+ and Cisco Secure Access Control Server


Terminal Access Controller Access Control System Plus (TACACS+) is a widely
used protocol that provides access control for routers, network access servers and
other networked computing devices. Cisco Secure Access Control Server (ACS) is a
high-performance access control server that operates as a centralized TACACS+ or
RADIUS server. It extends access security by combining authentication, user access,
and administrator access with policy control within a centralized identity networking
solution. It enforces a uniform security policy for all users regardless of how they
access the network. Cisco Secure ACS centralizes the control of all user privileges
and distributes them to the managed devices throughout the network. It also provides
detailed reporting and monitoring capabilities of network users' behavior and keeps a
record of every access connection and device configuration change across the entire
network.

Cisco ANA Managing Devices Configured For TACACS+


TACACS+ server, as part of Cisco Secure ACS, provides powerful authentication,
authorization, and accounting capabilities to network administrators. It provides initial
authentication when users log in to the network devices, authorization at the
granularity of the command-line interface (CLI) level, and detailed logging
capabilities that facilitate accounting for network devices. TACACS+ can handle
multiple users accessing the devices simultaneously.

Cisco ANA manages thousands of devices by design. Figure 4-13 shows a network of
devices, managed by Cisco ANA, with Cisco Secure ACS providing the AAA
functionality using TACACS+. Cisco ANA actively monitors the network devices
and maintains a fully correlated object model of the network. The benefit of this
approach is that individual management applications do not need to access network
devices separately but can rely on the Cisco ANA object model for the latest status.
Cisco ANA uses both Simple Network Management Protocol (SNMP) and Telnet to
perform fault management. SNMP is used primarily, while Telnet is used when the
management data required by Cisco ANA is unavailable through SNMP, due to
device software and hardware limitations. These Telnet connections can incur
additional load on the TACACS+ server.

Figure 5.1

Recommended Device Configuration for TACACS+ in IP NGN


Cisco ANA implements role-based access control. Users are authenticated and
authorized at a level of authorization associated with their roles at Cisco ANA. All
user activities are also logged by Cisco ANA. Since Cisco ANA is already enforcing
security policies on network administrators, we can simplify the authorization and
accounting requirements on the managed network devices. Cisco ANA requires a user
account on the device in order to access it. This user account is a server-to-device
specific account, and we shall refer to it as device user account Cisco ANA hereafter.
Because security policies are enforced in Cisco ANA, user account Cisco ANA can be
considered a "trusted source," and thus does not require TACACS+ authorization
and accounting for every command.

Figure 5.2 describes the recommended device AAA configuration for TACACS+
when the device is managed by Cisco ANA (or other high-performance management
platforms). It can be summarized as follows:

 Differentiate the user Cisco ANA from other users on the device by assigning
dedicated VTY line(s)
 Use Network Access Restriction (NAR) to restrict user Cisco ANA access
from only the specified IP address (es)
 Use local authorization at the device for the user Cisco ANA
 Configure for no accounting for user Cisco ANA at the device

This configuration uses TACACS+ for authentication and authorizes user Cisco ANA
locally at the device for CLI commands. Local authorization will significantly reduce
the load on the TACACS+ server from the large amount of automated transactions
generated by user Cisco ANA. Note that this authentication approach should not
affect other users accessing a router over Telnet or Secure Shell (SSH) Protocol. User
Cisco ANA should be authenticated through TACACS+, although it can be
authenticated locally on the managed device. If account information of user Cisco
ANA is to be stored locally in the device configuration, additional measures should be
taken to minimize the associated security risks. By assigning dedicated VTY lines and
using NAR, it is ensured that device access by user Cisco ANA is allowed only from
Cisco ANA, not by Telnet from anywhere else.

Figure 5.2

Device Configuration
1. Assign one VTY line dedicated to management software transactions.
a. This example configuration uses the capability of Cisco IOS® Software to
change the inbound Telnet port for a particular VTY line.
Example:
line vty 5
rotary 25
This command will change the port for the inbound Telnet session from 23
to 3025.
b. To achieve a similar result for SSH, the solution uses the global configuration
command:
ip ssh port 2025 rotary 25
This command changes the inbound SSH port (for line VTY 5 only) from 22 to 2025.

2. Now a separate VTY line that can be used by the Cisco ANA user is constructed.
Since it is a separate line, we can apply different authentication methods.
a. This is an example configuration of a Cisco IOS Software system that uses TACACS+
for all AAA:
aaa new-model
aaa authentication login default tacacs+ local
aaa authorization exec default tacacs+ if-authenticated
aaa authorization commands 0 default tacacs+ if-authenticated
aaa authorization commands 1 default tacacs+ if-authenticated
aaa authorization commands 15 default tacacs+ if-authenticated
aaa accounting exec default start-stop tacacs+
aaa accounting commands 0 default start-stop tacacs+
aaa accounting commands 1 default start-stop tacacs+
aaa accounting commands 15 default start-stop tacacs+


b. Configure separate authentication and authorization methods that will be used only for
the management software user account.
aaa authentication login mgmt local
aaa authorization exec mgmt local
aaa authorization commands 0 mgmt local
aaa authorization commands 1 mgmt local
aaa authorization commands 15 mgmt local
When applied to line VTY 5, it will force anyone to get authenticated by using the
"mgmt" method, which is defined as local.
line vty 5
login authentication mgmt
authorization exec mgmt
authorization commands 0 mgmt
authorization commands 1 mgmt
authorization commands 15 mgmt
If the Cisco ANA user account is to be stored locally, the following security hardening
measures are recommended:
a. Configure the Cisco ANA user with an MD5 one-way hash, so instead of using:
username devrec password cisco
Use
username devrec secret cisco
This will create an MD5 hash password.
b. Assign an inbound access list to line 5, so only Cisco ANA can initiate inbound Telnet
or SSH sessions.
ip access-list standard mgmt-servers
permit {management server IP address}
line vty 5
transport input telnet ssh
access-class mgmt-servers in
username devrec secret cisco
aaa authentication login mgmt local
aaa authorization exec mgmt local
aaa authorization commands 0 mgmt local
aaa authorization commands 1 mgmt local
aaa authorization commands 15 mgmt local
ip ssh port 2025 rotary 25
ip access-list standard mgmt-servers
permit {management server IP address}
line vty 5
rotary 25
login authentication mgmt
authorization exec mgmt
authorization commands 0 mgmt
authorization commands 1 mgmt
authorization commands 15 mgmt
transport input telnet ssh
access-class mgmt-servers in

5.1.c Implement and troubleshoot control plane policing


The MPLS VPN control plane defines protocols and mechanisms to overcome the
problems created by overlapping customer IP address spaces, while adding
mechanisms to add more functionality to an MPLS VPN, particularly as compared to
traditional Layer 2 WAN services. To understand the mechanics, you need a good
understanding of BGP, IGPs, and several new concepts created by both MP-BGP
RFCs and MPLS RFCs.

Configuring the Control Plane Policing feature on your Cisco router or switch
provides the following benefits:

 Protection against DoS attacks at infrastructure routers and switches


 QoS control for packets that are destined to the control plane of Cisco routers
or switches
 Ease of configuration for control plane policies
 Better platform reliability and availability

Because different platforms can have different architectures, the following set of
terms is defined. Figure 5.3 illustrates how control plane policing works.

 Control plane (CP)—A collection of processes that run at the process level on
the route processor (RP). These processes collectively provide high-level
control for most Cisco IOS functions.
 Central switch engine—A device that is responsible for high-speed routing of
IP packets. It also typically performs high-speed input and output services for
non-distributed interfaces. The central switch engine is used to implement
aggregate CP protection for all interfaces on the router.
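A CoPP policy sketch that rate-limits Telnet destined to the control plane (the names and rates are illustrative):

```
! Identify traffic destined to the control plane
ip access-list extended COPP-TELNET
 permit tcp any any eq telnet
!
class-map match-all COPP-TELNET-CLASS
 match access-group name COPP-TELNET
!
! Police matched traffic; unmatched traffic is unaffected
policy-map COPP-POLICY
 class COPP-TELNET-CLASS
  police 64000 8000 conform-action transmit exceed-action drop
!
! Attach the policy to the aggregate control-plane interface
control-plane
 service-policy input COPP-POLICY
```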

5.1.d - Describe device security using IOS AAA with TACACS+ and
RADIUS

The accounting functions of the TACACS+ and RADIUS protocols are used by AAA
clients to communicate relevant data for each user session to the AAA server for
recording. Cisco Secure ACS writes accounting records to a comma-separated value
(CSV) log file or an ODBC database, depending on your configuration. It is possible
to import these logs into popular database and spreadsheet applications for billing,
security audits, and report generation.

Authentication proxy is a feature that became available with Cisco IOS Software
Release 12.0(5)T. It allows users to authenticate via the firewall when accessing
specific resources. The Cisco IOS firewall is designed to interface with AAA servers
using standard authentication protocols to perform this function. The Cisco IOS
firewall supports TACACS+ and RADIUS AAA servers. Cisco Secure Access
Control Server (CSACS) can perform both TACACS+ and RADIUS functions.
Authentication proxy is one of the core components of the Cisco IOS firewall feature
set. Prior to the implementation of authentication proxy, access to a resource was
usually limited by the IP address of the requesting source and a single policy was
applied to that source or network. Authentication proxy permits administrators to limit
access to resources on an individual user basis and tailor the privileges of each
individual as opposed to applying a generic policy to all users.

Authentication proxy is not a service that is transparent to the user; it requires user
interaction. The authentication proxy is activated when the user opens an HTTP
session through the Cisco IOS firewall. The firewall verifies whether the user has
already been authenticated. If the user was previously authenticated, it permits the
connection. If the user has not been previously authenticated, the firewall prompts the
user for a username and password and verifies the user input with a TACACS+ or
RADIUS server.
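An authentication proxy sketch, assuming a TACACS+ server is already configured and reachable (the rule and interface names are hypothetical):

```
aaa new-model
aaa authentication login default group tacacs+
aaa authorization auth-proxy default group tacacs+
!
! The local HTTP server intercepts the session and prompts the user
ip http server
ip http authentication aaa
!
! Auth-proxy rule triggered by HTTP traffic
ip auth-proxy name PROXY-RULE http
!
interface GigabitEthernet0/1
 ip auth-proxy PROXY-RULE
```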

5.2 – Network Security

5.2.a - Implement and troubleshoot switch security features

Access control lists (ACLs) provide the ability to filter ingress and egress traffic
based on conditions specified in the ACL.

 Cisco IOS ACLs are applied to Layer 3 interfaces. They filter traffic routed
between VLANs.

 VACLs control access to the VLAN of all packets (bridged and routed).
Packets can either enter the VLAN through a Layer 2 port or through a Layer
3 port after being routed. You can also use VACLs to filter traffic between
devices in the same VLAN.
 Port ACLs perform access control on all traffic entering the specified Layer 2
port.
 PACLs and VACLs can provide access control based on the Layer 3 addresses
(for IP protocols) or Layer 2 MAC addresses (for non-IP protocols).
You can apply only one IP access list and one MAC access list to a Layer 2
interface.

Port ACL

The port ACL (PACL) feature provides the ability to perform access control on
specific Layer 2 ports.

A Layer 2 port is a physical LAN or trunk port that belongs to a VLAN. Port ACLs
are applied only on the ingress traffic. The port ACL feature is supported only in
hardware (port ACLs are not applied to any packets routed in software).
When you create a port ACL, an entry is created in the ACL TCAM. You can use the
show tcam counts command to see how much TCAM space is available.
The PACL feature does not affect Layer 2 control packets received on the port.
You can use the access-group mode command to change the way that
PACLs interact with other ACLs.
PACLs use the following modes:

 Prefer port mode — If a PACL is configured on a Layer 2 interface, the PACL
takes effect and overwrites the effect of other ACLs.
 Merge mode —In this mode, the PACL, VACL, and Cisco IOS ACLs are
merged in the ingress direction.
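Applying a PACL and selecting its mode can be sketched as follows (the ACL and interface names are hypothetical, and the access-group mode command is platform dependent):

```
interface GigabitEthernet1/1
 switchport
 ! Apply an IP PACL and a MAC PACL to the Layer 2 port (ingress only)
 ip access-group PACL-IP in
 mac access-group PACL-MAC in
 ! Choose how the PACL interacts with VACLs and Cisco IOS ACLs
 access-group mode prefer port
```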

VLAN ACL
VLAN ACLs (VACLs) can provide access control for all packets that are bridged
within a VLAN or that are routed into or out of a VLAN or a WAN interface
for VACL capture. Unlike Cisco IOS ACLs that are applied on routed packets only,
VACLs apply to all packets and can be applied to any VLAN or WAN interface.
VACLs are processed in the ACL TCAM hardware. VACLs ignore any Cisco IOS
ACL fields that are not supported in hardware. You can configure VACLs for IP and
MAC-layer traffic. VACLs applied to WAN interfaces support only IP traffic for
VACL capture.

If a VACL is configured for a packet type, and a packet of that type does not match
the VACL, the default action is to deny the packet.
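The behavior above can be sketched with a VLAN access map (the names and VLAN number are hypothetical). Traffic matching the permit entries of the referenced ACL receives the map sequence's action, and a final sequence with no match clause forwards the remaining traffic:

```
! Drop Telnet within VLAN 10; forward everything else
ip access-list extended TELNET-TRAFFIC
 permit tcp any any eq telnet
!
vlan access-map NO-TELNET 10
 match ip address TELNET-TRAFFIC
 action drop
vlan access-map NO-TELNET 20
 action forward
!
vlan filter NO-TELNET vlan-list 10
```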

Consider the following guidelines when configuring PACLs:

 There can be at most one IP access list and one MAC access list applied to the
same Layer 2 interface per direction.
 PACLs are not applied to IPv6, MPLS, or ARP messages.
 An IP access list filters only IPv4 packets. For IP access lists, you can define
a standard, extended, or named access list.
 A MAC access list filters ingress packets that are of an unsupported type (not
IP, IPv6, ARP, or MPLS packets) based on the fields of the Ethernet
datagram. You can define only named MAC access lists.
 The number of ACLs and ACEs that can be configured as part of a PACL on
the switch is bounded by the available hardware resources. Those resources are
shared by various ACL features (such as VACLs) that are configured on the
system. If there are insufficient hardware resources to program a PACL in
hardware, the PACL is not applied.
 PACL does not support the access-list log and reflect/evaluate keywords.
These keywords are ignored if you add them to the access list for a PACL.
 The access group mode can change the way PACLs interact with other ACLs.
To maintain consistent behavior across Cisco platforms, use the default access
group mode (merge mode)

Consider the following guidelines when configuring VACLs:

 VACLs use standard and extended Cisco IOS IP and MAC layer-named ACLs
and VLAN access maps.
 VLAN access maps can be applied to VLANs or to WAN interfaces for
VACL capture. VACLs attached to WAN interfaces support only standard and
extended Cisco IOS IP ACLs.
 Each VLAN access map can consist of one or more map sequences; each
sequence has a match clause and an action clause. The match clause specifies
IP or MAC ACLs for traffic filtering and the action clause specifies the action
to be taken when a match occurs. When a flow matches a permit ACL entry,
the associated action is taken and the flow is not checked against the
remaining sequences. When a flow matches a deny ACL entry, it will be
checked against the next ACL in the same sequence or the next sequence. If a
flow does not match any ACL entry and at least one ACL is configured for
that packet type, the packet is denied.
 To apply access control to both bridged and routed traffic, you can use VACLs
alone or a combination of VACLs and ACLs. You can define ACLs on the
VLAN interfaces to apply access control to both the ingress and egress routed
traffic. You can define a VACL to apply access control to the bridged traffic.
 The following caveats apply to ACLs when used with VACLs:
o Packets that require logging on the outbound ACLs are not logged if
they are denied by a VACL.
o VACLs are applied on packets before NAT translation. If the
translated flow is not subject to access control, the flow might be
subject to access control after the translation because of the VACL
configuration.
 The action clause in a VACL can be forward, drop, capture, or redirect.
Traffic can also be logged. VACLs applied to WAN interfaces do not support
the redirect or log actions.
 VACLs cannot be applied to IGMP, MLD, or PIM traffic.

Storm control
A traffic storm occurs when packets flood the LAN, creating excessive traffic and
degrading network performance. The traffic storm control feature prevents LAN ports
from being disrupted by a broadcast, multicast, or unicast traffic storm on physical
interfaces.

Traffic storm control (also called traffic suppression) monitors incoming traffic levels
over a 1-second traffic storm control interval, and during the interval it compares the
traffic level with the traffic storm control level that you configure. The traffic storm
control level is a percentage of the total available bandwidth of the port. Each port has
a single traffic storm control level that is used for all types of traffic (broadcast,
multicast, and unicast).

Traffic storm control monitors the level of each traffic type for which you enable
traffic storm control in 1-second traffic storm control intervals.

When the ingress traffic for which traffic storm control is enabled reaches the traffic
storm control level that is configured on the port, traffic storm control drops the traffic
until the traffic storm control interval ends.

Optional actions:

 Shutdown—When a traffic storm occurs, traffic storm control puts the port
into the error-disabled state. To re-enable ports, use the error-disable detection
and recovery feature or the shutdown and no shutdown commands.
 Trap—When a traffic storm occurs, traffic storm control generates an SNMP
trap.
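A minimal interface configuration might look like this (the thresholds and interface are hypothetical, and exact keywords vary by platform):

```
interface GigabitEthernet1/0/1
 storm-control broadcast level 20.00   ! suppress broadcast above 20% of bandwidth
 storm-control multicast level 30.00
 storm-control action trap             ! send an SNMP trap when a storm is detected
 storm-control action shutdown         ! err-disable the port when a storm is detected
```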

DHCP Snooping
DHCP snooping is a security feature that acts like a firewall between untrusted hosts
and trusted DHCP servers. The DHCP snooping feature performs the following
activities:

 Validates DHCP messages received from untrusted sources and filters out
invalid messages.
 Rate-limits DHCP traffic from trusted and untrusted sources.
 Builds and maintains the DHCP snooping binding database, which contains
information about untrusted hosts with leased IP addresses.
 Utilizes the DHCP snooping binding database to validate subsequent requests
from untrusted hosts.

The DHCP snooping feature determines whether traffic sources are trusted or
untrusted. An untrusted source may initiate traffic attacks or other hostile actions. To
prevent such attacks, the DHCP snooping feature filters messages and rate-limits
traffic from untrusted sources.

Trusted and Untrusted Sources



In an enterprise network, devices under your administrative control are trusted
sources. These devices include the switches, routers, and servers in your network.
Any device beyond the firewall or outside your network is an untrusted source. Host
ports and unknown DHCP servers are generally treated as untrusted sources.

In the switch, you indicate that a source is trusted by configuring the trust state of its
connecting interface.

The default trust state of all interfaces is untrusted. You must configure DHCP
server interfaces as trusted. You can also configure other interfaces as trusted if they
connect to devices (such as switches or routers) inside your network. You usually do
not configure host port interfaces as trusted.
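A typical configuration sketch, assuming VLAN 10 carries hosts and Gi1/0/24 faces the DHCP server (the interface and VLAN numbers are hypothetical):

```
ip dhcp snooping
ip dhcp snooping vlan 10
!
interface GigabitEthernet1/0/24
 description Uplink toward the DHCP server
 ip dhcp snooping trust
!
interface GigabitEthernet1/0/1
 description Host port (untrusted by default)
 ip dhcp snooping limit rate 15        ! cap DHCP messages at 15 per second
```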

DHCP Snooping Binding Database


The DHCP snooping feature dynamically builds and maintains the database using
information extracted from intercepted DHCP messages. The database contains an
entry for each untrusted host with a leased IP address if the host is associated with a
VLAN that has DHCP snooping enabled. The database does not contain entries for
hosts connected through trusted interfaces.

Each entry in the DHCP snooping binding database includes the MAC address of the
host, the leased IP address, the lease time, the binding type, and the VLAN number
and interface information associated with the host.

IP Source Guard
IP Source Guard is a security feature that restricts IP traffic on untrusted Layer 2 ports
by filtering traffic based on the DHCP snooping binding database or manually
configured IP source bindings. This feature helps prevent IP spoofing attacks when a
host tries to spoof and use the IP address of another host. Any IP traffic coming into
the interface with a source IP address other than that assigned (via DHCP or static
configuration) will be filtered out on the untrusted Layer 2 ports.
The IP Source Guard feature is enabled in combination with the DHCP snooping
feature on untrusted Layer 2 interfaces. It builds and maintains an IP source-binding
table that is learned by DHCP snooping or manually configured (static IP source
bindings). An entry in the IP source-binding table contains the IP address and the
associated MAC and VLAN numbers. The IP Source Guard is supported on Layer 2
ports only, including access and trunk ports.
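A sketch of enabling the feature on an access port (the interface, VLAN, and addresses are hypothetical; some platforms use the longer ip verify source vlan dhcp-snooping form):

```
interface GigabitEthernet1/0/1
 switchport mode access
 ip verify source                      ! filter on source IP from the snooping binding table
!
! Static binding for a host that does not use DHCP
ip source binding 0011.2233.4455 vlan 10 10.0.10.50 interface GigabitEthernet1/0/1
```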

Dynamic ARP Inspection


Address Resolution Protocol (ARP) provides IP-to-MAC (32-bit IP address into a 48-
bit Ethernet address) resolution. ARP operates at Layer 2 (the data-link layer) of the
OSI model. ARP provides the translation mapping the IP address to the MAC address
of the destination host using a lookup table (also known as the ARP cache).
Several types of attacks can be launched against a host or devices connected to Layer
2 networks by "poisoning" the ARP caches. A malicious user could intercept traffic
intended for other hosts on the LAN segment and poison the ARP caches of
connected systems by broadcasting forged ARP responses. Several known ARP-based
attacks can have a devastating impact on data privacy, confidentiality, and sensitive
information. To block such attacks, the Layer 2 switch must have a mechanism to
validate and ensure that only valid ARP requests and responses are forwarded.
Dynamic ARP inspection is a security feature that validates ARP packets in a
network. Dynamic ARP inspection determines the validity of packets by performing
an IP-to-MAC address binding inspection stored in a trusted database, (the DHCP
snooping binding database) before forwarding the packet to the appropriate
destination. Dynamic ARP inspection will drop all ARP packets with invalid IP-to-
MAC address bindings that fail the inspection. The DHCP snooping binding database
is built when the DHCP snooping feature is enabled on the VLANs and on the switch.

Dynamic ARP inspection inspects inbound packets only; it does not check outbound
packets.
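A minimal sketch (the VLAN and interfaces are hypothetical): enable inspection on the VLAN, trust the inter-switch uplink, and rate-limit ARP on host ports:

```
ip arp inspection vlan 10
!
interface GigabitEthernet1/0/24
 ip arp inspection trust               ! uplink toward another DAI/snooping switch
!
interface GigabitEthernet1/0/1
 ip arp inspection limit rate 15       ! ARP packets per second on untrusted ports
```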

Port Security
You can use port security with dynamically learned and static MAC addresses to
restrict a port's ingress traffic by limiting the MAC addresses that are allowed to send
traffic into the port. When you assign secure MAC addresses to a secure port, the port
does not forward ingress traffic that has source addresses outside the group of defined
addresses. If you limit the number of secure MAC addresses to one and assign a
single secure MAC address, the device attached to that port has the full bandwidth of
the port.

A security violation occurs in either of these situations:

 When the maximum number of secure MAC addresses is reached on a secure
port and the source MAC address of the ingress traffic is different from any of
the identified secure MAC addresses, port security applies the configured
violation mode.
 If traffic with a secure MAC address that is configured or learned on one
secure port attempts to access another secure port in the same VLAN, port
security applies the configured violation mode.

Port security includes the secure addresses in the address table in one of these ways:

 You can statically configure all secure MAC addresses by using the switchport
port-security mac-address mac_address interface configuration command.
 You can allow the port to dynamically configure secure MAC addresses with
the MAC addresses of connected devices.
 You can statically configure a number of addresses and allow the rest to be
dynamically configured.

If the port has a link-down condition, all dynamically learned addresses are removed.

Sticky MAC Addresses


Port security with sticky MAC addresses provides many of the same benefits as port
security with static MAC addresses, but sticky MAC addresses can be learned
dynamically. Port security with sticky MAC addresses retains dynamically learned
MAC addresses during a link-down condition.

If you enter a write memory or copy running-config startup-config command, then
port security with sticky MAC addresses saves dynamically learned MAC addresses
in the startup-config file and the port does not have to learn addresses from ingress
traffic after bootup or a restart.
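A combined sketch of the options above (the interface and limits are hypothetical):

```
interface GigabitEthernet1/0/1
 switchport mode access
 switchport port-security
 switchport port-security maximum 2
 switchport port-security mac-address sticky    ! learn addresses dynamically and retain them
 switchport port-security violation restrict    ! drop offending frames and log the violation
```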

5.2.b - Implement and troubleshoot router security features

IPv4 access control lists

Cisco MDS 9000 Family switches can route IP version 4 (IPv4) traffic between
Ethernet and Fibre Channel interfaces. The IP static routing feature routes traffic
between VSANs. To do so, each VSAN must be in a different IPv4 sub-network.
Each Cisco MDS 9000 Family switch provides the following services for network
management systems (NMS):

 IP forwarding on the out-of-band Ethernet interface (mgmt0) on the front
panel of the supervisor modules.

 IP forwarding on the in-band Fibre Channel interface using the IP over Fibre
Channel (IPFC) function—IPFC specifies how IP frames can be transported
over Fibre Channel using encapsulation techniques. IP frames are
encapsulated into Fibre Channel frames so NMS information can cross the
Fibre Channel network without using an overlay Ethernet network.

 IP routing (default routing and static routing)—If your configuration does not
need an external router, you can configure a default route using static routing.

Switches are compliant with RFC 2338 standards for Virtual Router Redundancy
Protocol (VRRP) features. VRRP is a re-start-able application that provides a
redundant, alternate path to the gateway switch.

IPv4 and IPv6 access control lists (IPv4-ACLs and IPv6-ACLs) provide basic
network security to all switches in the Cisco MDS 9000 Family. IPv4-ACLs and IPv6-ACLs
restrict IP-related traffic based on the configured IP filters. A filter contains the rules
to match an IP packet, and if the packet matches, the rule also stipulates if the packet
should be permitted or denied.

Each switch in the Cisco MDS 9000 Family can have a maximum total of 128 IPv4-
ACLs or 128 IPv6-ACLs and each IPv4-ACL or IPv6-ACL can have a maximum of
256 filters.

Unicast Reverse Path Forwarding


Network administrators can use Unicast Reverse Path Forwarding (Unicast RPF) to
help limit the malicious traffic on an enterprise network. This security feature works
by enabling a router to verify the reachability of the source address in packets being
forwarded. This capability can limit the appearance of spoofed addresses on a
network. If the source IP address is not valid, the packet is discarded. Unicast RPF
works in one of three different modes: strict mode, loose mode, or VRF mode. Note
that not all network devices support all three modes of operation. Unicast RPF in VRF
mode will not be covered in this document.

When administrators use Unicast RPF in strict mode, the packet must be received on
the interface that the router would use to forward the return packet. Unicast RPF
configured in strict mode may drop legitimate traffic that is received on an interface
that was not the router's choice for sending return traffic. Dropping this legitimate
traffic could occur when asymmetric routing paths are present in the network.

When administrators use Unicast RPF in loose mode, the source address must appear
in the routing table. Administrators can change this behavior using the allow-default
option, which allows the use of the default route in the source verification process.
Additionally, a packet that contains a source address for which the return route points
to the Null 0 interface will be dropped. An access list may also be specified that
permits or denies certain source addresses in Unicast RPF loose mode.

Care must be taken to ensure that the appropriate Unicast RPF mode (loose or strict)
is configured during the deployment of this feature because it can drop legitimate
traffic. Although asymmetric traffic flows may be of concern when deploying this
feature, Unicast RPF loose mode is a scalable option for networks that contain
asymmetric routing paths.

Unicast RPF in an Enterprise Network


In many enterprise environments, it is necessary to use a combination of strict mode
and loose mode Unicast RPF. The choice of the Unicast RPF mode that will be used
will depend on the design of the network segment connected to the interface on which
Unicast RPF is deployed.

Administrators should use Unicast RPF in strict mode on network interfaces for which
all packets received on an interface are guaranteed to originate from the subnet
assigned to the interface. A subnet composed of end stations or network resources
fulfills this requirement. Such a design would be in place for an access layer network
or a branch office where there is only one path into and out of the branch network. No
other traffic originating from the subnet is allowed and no other routes are available
past the subnet.

Unicast RPF loose mode can be used on an uplink network interface that has a default
route associated with it.
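These two deployment cases can be sketched as follows (the interfaces are hypothetical):

```
interface GigabitEthernet0/0
 description Access subnet - single path in and out
 ip verify unicast source reachable-via rx      ! strict mode
!
interface GigabitEthernet0/1
 description Uplink with possible asymmetric paths
 ip verify unicast source reachable-via any     ! loose mode
```

The allow-default keyword can be appended to either form to accept sources matched only by the default route.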

5.2.c - Implement and troubleshoot IPv6 first hop security

There is a growing number of large-scale IPv6 deployments in enterprise, university,
and government networks. For the success of each of these networks, it is important
that the IPv6 deployments are secure and are of a service quality that equals that of
the existing IPv4 infrastructure.

Network users have an expectation that there is functional parity between IPv4 and
IPv6 and that on each of these protocols security and serviceability concerns are
similar. From the network operator perspective there is a similar assumption that both
IPv4 and IPv6 are secure environments with a high degree of traceability and quality
assurance.

IPv6 First-Hop Security Binding Table

A database table of IPv6 neighbors connected to the switch is created from
information sources such as Neighbor Discovery (ND) protocol snooping. This
database, or binding, table is used by various IPv6 guard features to validate the link-
layer address (LLA), the IPv4 or IPv6 address, and prefix binding of the neighbors to
prevent spoofing and redirect attacks.

IPv6 Device Tracking

The IPv6 device-tracking feature provides IPv6 host liveness tracking so that a
neighbor table can be immediately updated when an IPv6 host disappears. The feature
tracks the liveness of the neighbors connected through the Layer 2 switch on a
regular basis in order to revoke network access privileges as they become inactive.

IPv6 Port-Based Access List Support

The IPv6 PACL feature provides access control (permit or deny) on L2 switch
ports for IPv6 traffic. IPv6 PACLs are similar to IPv4 PACLs, which provide
access control on L2 switch ports for IPv4 traffic. They are supported only in the
ingress direction and only in hardware.

A PACL can filter ingress traffic on L2 interfaces based on L3 and L4 header
information or on non-IP L2 information.

IPv6 Global Policies


IPv6 global policies provide policy database services to features with regard to storing
and accessing those policies. IPv6 ND inspection and IPv6 RA guard are IPv6 global
policies features. Every time an ND inspection or RA guard is configured globally,
the attributes of the policy are stored in the software policy database. The policy is
then applied to an interface, and the software policy database entry is updated to
include this interface to which the policy is applied.

IPv6 RA Guard
IPv6 RA guard allows the network administrator to block or reject unwanted or
rogue RA messages that arrive at the network switch platform. RAs are used by
routers to announce themselves on the link. The RA Guard
feature analyzes these RAs and filters out bogus RAs sent by unauthorized routers. In
host mode, all router advertisement and router redirect messages are disallowed on the
port. The RA guard feature compares configuration information on the L2 device with
the information found in the received RA frame. Once the L2 device has validated the
content of the RA frame and router redirect frame against the configuration, it
forwards the RA to its unicast or multicast destination. If the RA frame content is not
validated, the RA is dropped.
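A sketch, assuming an access port toward hosts (the policy and interface names are hypothetical):

```
ipv6 nd raguard policy HOSTS
 device-role host                 ! RAs arriving on ports using this policy are dropped
!
interface GigabitEthernet1/0/1
 ipv6 nd raguard attach-policy HOSTS
```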

IPv6 ND Inspection

IPv6 ND inspection learns and secures bindings for stateless autoconfiguration


addresses in layer 2 neighbor tables. IPv6 ND inspection analyzes neighbor discovery
messages in order to build a trusted binding table database, and IPv6 neighbor
discovery messages that do not conform are dropped. A neighbor discovery message
is considered trustworthy if its IPv6-to-Media Access Control (MAC) mapping is
verifiable.

This feature mitigates some of the inherent vulnerabilities for the neighbor discovery
mechanism, such as attacks on duplicate address detection (DAD), address resolution,
router discovery, and the neighbor cache.
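A minimal sketch (the policy name and VLAN are hypothetical; on some platforms the policy is attached per interface instead of per VLAN):

```
ipv6 nd inspection policy SECURE-ND
 tracking enable                  ! keep binding-table entries refreshed
!
vlan configuration 10
 ipv6 nd inspection attach-policy SECURE-ND
```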

5.2.d - Describe 802.1x


IEEE 802.1x is a port-based authentication standard that can be used on local area
networks (LANs) to authenticate a user before allowing services on Ethernet, FE, and
WLANs.

With 802.1x, client workstations run 802.1x client software to request services.
Clients use the Extensible Authentication Protocol (EAP) to communicate with the
LAN switch. The LAN switch verifies client information with the authentication
server and relays the response to the client. LAN switches use a Remote
Authentication Dial-In User Service (RADIUS) client to communicate with the
server. The RADIUS authentication server validates the identity of the client and
authorizes the client. The server uses RADIUS with EAP extensions to make the
authorization.

You can configure IEEE 802.1x port-based authentication by enabling AAA
authentication, configuring the RADIUS server parameters, and enabling 802.1x on
the interface.
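These three steps can be sketched as follows (the RADIUS server address and key are hypothetical):

```
aaa new-model
aaa authentication dot1x default group radius
dot1x system-auth-control
!
radius-server host 10.1.1.10 auth-port 1812 acct-port 1813 key s3cr3t
!
interface GigabitEthernet1/0/1
 switchport mode access
 dot1x pae authenticator
 authentication port-control auto
```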

6.0 - Infrastructure Services

6.1 - System Management

6.1.a - Implement and troubleshoot device management



Console Port Handling Overview

Users using the console port to access the router are automatically directed to the IOS
command-line interface, by default.

If a user is trying to access the router through the console port and sends a break
signal (a break signal can be sent by entering Ctrl-C or Ctrl-Shift-6, or by entering
the send break command at the Telnet prompt) before connecting to the IOS
command-line interface, the user is directed into a diagnostic mode by default if the
non-RPIOS sub-packages can be accessed.

Configuring a transport map for the console port and applying that transport map to
the console interface can change these settings.

Telnet and SSH Overview for the Cisco ASR 1000 Series Routers
Telnet and Secure Shell (SSH) on the Cisco ASR 1000 Series Routers can be
configured and handled like Telnet and SSH on other Cisco platforms.
The Cisco ASR 1000 Series Routers also introduce persistent Telnet and persistent
SSH. Persistent Telnet and persistent SSH allow network administrators to more
clearly define the treatment of incoming traffic when users access the router through
the Management Ethernet port using Telnet or SSH. Notably, persistent Telnet and
persistent SSH provide more robust network access by allowing the router to be
configured to be accessible through the Ethernet Management port using Telnet or
SSH even when the IOS process has failed.

Persistent Telnet and Persistent SSH Overview


In traditional Cisco routers, accessing the router using Telnet or SSH is not possible in
the event of an IOS failure. When Cisco IOS fails on a traditional Cisco router, the
only method of accessing the router is through the console port. Similarly, if all active
IOS processes have failed on a Cisco ASR 1000 Series Router that is not using
persistent Telnet or persistent SSH, the only method of accessing the router is through
the console port.

With persistent Telnet and persistent SSH, however, users can configure a transport
map that defines the treatment of incoming Telnet or SSH traffic on the Management
Ethernet interface. Among the many configuration options, a transport map can be
configured to direct all traffic to the IOS command-line interface, diagnostic mode, or
to wait for an IOS vty line to become available and then direct users into diagnostic
mode when the user sends a break signal while waiting for the IOS vty line to become
available. If a user uses Telnet or SSH to access diagnostic mode, that Telnet or SSH
connection will be usable even in scenarios when no IOS process is active. Therefore,
persistent Telnet and persistent SSH introduce the ability to access the router via
diagnostic mode when the IOS process is not active.

6.1.b - Implement and troubleshoot SNMP



SNMP is an application-layer protocol that provides a message format for
communication between managers and agents. The SNMP system consists of an
SNMP manager, an SNMP agent, and a MIB. The SNMP manager can be part of a
network management system (NMS) such as Cisco Works. The agent and MIB reside
on the switch. To configure SNMP on the switch, you define the relationship between
the manager and the agent.

The SNMP agent contains MIB variables whose values the SNMP manager can
request or change. A manager can get a value from an agent or store a value into the
agent. The agent gathers data from the MIB, the repository for information about
device parameters and network data. The agent can also respond to a manager's
requests to get or set data.

An agent can send unsolicited traps to the manager. Traps are messages alerting the
SNMP manager to a condition on the network. Traps can mean improper user
authentication, restarts, link status (up or down), MAC address tracking, closing of a
TCP connection, loss of connection to a neighbor, or other significant events.

SNMP Versions
This software release supports these SNMP versions:
 SNMPv1—The Simple Network Management Protocol, a Full Internet Standard,
defined in RFC 1157.
 SNMPv2C replaces the Party-based Administrative and Security Framework of
SNMPv2Classic with the community-string-based Administrative Framework of
SNMPv2C while retaining the bulk retrieval and improved error handling of
SNMPv2Classic. It has these features:
– SNMPv2—Version 2 of the Simple Network Management Protocol, a
Draft Internet Standard, defined in RFCs 1902 through 1907.
– SNMPv2C—The community-string-based Administrative Framework for
SNMPv2, an Experimental Internet Protocol defined in RFC 1901.
 SNMPv3—Version 3 of the SNMP is an interoperable standards-based protocol
defined in RFCs 2273 to 2275. SNMPv3 provides secure access to devices by
authenticating and encrypting packets over the network and includes these
security features:
– Message integrity—ensuring that a packet was not tampered with in
transit
– Authentication—determining that the message is from a valid source
– Encryption—scrambling the contents of a packet to prevent it from being
read by an unauthorized source.

Both SNMPv1 and SNMPv2C use a community-based form of security. The
community of managers able to access the agent's MIB is defined by an IP address
access control list and password.

SNMPv2C includes a bulk retrieval mechanism and more detailed error message
reporting to management stations. The bulk retrieval mechanism retrieves tables and
large quantities of information, minimizing the number of round-trips required. The
SNMPv2C improved error-handling includes expanded error codes that distinguish
different kinds of error conditions; these conditions are reported through a single error
code in SNMPv1. Error return codes in SNMPv2C report the error type.

SNMPv3 provides for both security models and security levels. A security model is an
authentication strategy set up for a user and the group within which the user resides. A
security level is the permitted level of security within a security model. A
combination of the security level and the security model determine which security
mechanism is used when handling an SNMP packet. Available security models are
SNMPv1, SNMPv2C, and SNMPv3.
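As an illustration (the community strings, user names, and addresses are hypothetical):

```
! SNMPv2c read-only access, restricted to the NMS by ACL 10
access-list 10 permit 192.0.2.100
snmp-server community public RO 10
snmp-server host 192.0.2.100 version 2c public
snmp-server enable traps snmp linkdown linkup
!
! SNMPv3 authPriv: authenticated and encrypted access
snmp-server group ADMIN v3 priv
snmp-server user netops ADMIN v3 auth sha AuthPass123 priv aes 128 PrivPass123
```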

6.1.c - Implement and troubleshoot logging

Local Logging
Device logs often offer valuable information when troubleshooting a network issue.
Interface status, security alerts, environmental conditions, CPU process hog, and
many other events on the router or switch can be captured and analyzed later by
studying the logs. By default, all log messages on a Cisco router or switch are sent to
the console port. Only users that are physically connected to the console port may
view these messages. If you are connected to a Cisco device via Telnet or SSH and
want to see console messages, you can enter the command terminal monitor in
privileged exec mode. Cisco devices support five types of logging:

 console logging - all messages are sent to the console port (default)
 terminal logging - similar to console logging, but the messages are
displayed on the VTY lines of the device.
 buffered logging - stores the log messages using a circular buffer in the
device's RAM
 host logging (syslog) - forwards log messages to an external syslog server
 SNMP logging - uses SNMP traps to send log messages to an external SNMP
server
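The buffered and syslog methods above might be configured as follows (the server address and buffer size are hypothetical):

```
logging buffered 16384 informational   ! circular buffer in RAM, severity 6 and below
logging host 192.0.2.50                ! forward messages to an external syslog server
logging trap warnings                  ! severity limit for messages sent to the server
service timestamps log datetime msec   ! timestamp each log message
```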

Debug Client
The command debug client <MACADDRESS> is a macro that enables eight debug
commands, plus a filter on the MAC address provided, so only messages that contain
the specified MAC address are shown. The eight debug commands show the most
important details on client association and authentication. The filter helps with
situations where there are multiple wireless clients. Situations such as when too much
output is generated or the controller is overloaded when debugging is enabled without
the filter.

Syslog
Syslog is a method to collect messages from devices to a server running a syslog
daemon. Logging to a central syslog server helps in aggregation of logs and alerts.

Cisco devices can send their log messages to a Unix-style SYSLOG service. A
SYSLOG service simply accepts messages, and stores them in files or prints them
according to a simple configuration file. This form of logging is the best available for
Cisco devices because it can provide protected long-term storage for logs. This is
useful both in routine troubleshooting and in incident handling.

6.2 – Quality of Service

6.2.a – Implement and troubleshoot end-to-end QoS

Typically, networks operate on a best-effort delivery basis, which means that all
traffic has equal priority and an equal chance of being delivered in a timely manner.
When congestion occurs, all traffic has an equal chance of being dropped.

When you configure the QoS feature, you can select specific network traffic,
prioritize it according to its relative importance, and use congestion-management and
congestion-avoidance techniques to provide preferential treatment. Implementing QoS
in your network makes network performance more predictable and bandwidth
utilization more effective.

The QoS implementation is based on the DiffServ architecture, an emerging standard
from the Internet Engineering Task Force (IETF). This architecture specifies that each
packet is classified upon entry into the network. The classification is carried in the IP
packet header, using 6 bits from the deprecated IP Type-of-Service (TOS) field to
carry the classification (class) information.

Configuring the CoS-to-DSCP Map


You use the CoS-to-DSCP map to map the CoS values in incoming packets to the
DSCP values that QoS uses internally to represent the priority of the traffic.

The following table shows the default CoS-to-DSCP map.

CoS value 0 1 2 3 4 5 6 7

DSCP value 0 8 16 24 32 40 48 56

If these values are not appropriate for your network, you need to modify them.
Beginning in privileged EXEC mode, follow these steps to modify the CoS-to-DSCP
map:
        Command                              Purpose
Step 1  configure terminal                   Enter global configuration mode.
Step 2  mls qos map cos-dscp                 Modify the CoS-to-DSCP map. For
        dscp1...dscp8                        dscp1...dscp8, enter 8 DSCP values that
                                             correspond to CoS values 0 to 7. Separate
                                             each DSCP value with a space. The
                                             supported DSCP values are 0, 8, 10, 16,
                                             18, 24, 26, 32, 34, 40, 46, 48, and 56.
Step 3  end                                  Return to privileged EXEC mode.
Step 4  show mls qos maps cos-dscp           Verify your entries.
Step 5  copy running-config startup-config   (Optional) Save your entries in the
                                             configuration file.

To return to the default map, use the no mls qos map cos-dscp global configuration
command.
This example shows how to modify and display the CoS-to-DSCP map:
Switch# configure terminal
Switch(config)# mls qos map cos-dscp 8 8 8 8 24 32 56 56
Switch(config)# end
Switch# show mls qos maps cos-dscp

Cos-dscp map:

cos: 0 1 2 3 4 5 6 7
--------------------------------
dscp: 8 8 8 8 24 32 56 56

6.2.b – Implement, optimize and troubleshoot QoS using MQC

Classification
Classifying network traffic allows you to organize traffic (that is, packets) into traffic
classes or categories on the basis of whether the traffic matches specific criteria.
Classifying network traffic (used in conjunction with marking network traffic) is the
foundation for enabling many quality of service (QoS) features on your network.

Packet classification is pivotal to policy techniques that select packets traversing a


network element or a particular interface for different types of QoS service. For
example, you can use classification to mark certain packets for IP Precedence, and
you can identify other packets as belonging to a Resource Reservation Protocol
(RSVP) flow.

Methods of classification were once limited to use of the contents of the packet
header. Current methods of marking a packet with its classification allow you to set
information in the Layer 2, 3, or 4 headers, or even by setting information within the
payload of a packet. Criteria for classification of a group might be as broad as "traffic
destined for sub-network X" or as narrow as a single flow.

NBAR
To configure NBAR using the MQC, you must define a traffic class, configure a
traffic policy (policy map), and then attach that traffic policy to the appropriate
interface. These three tasks can be accomplished by using the MQC. The MQC is a
command-line interface that allows you to define traffic classes, create and configure
traffic policies (policy maps), and then attach these traffic policies to interfaces.

In the MQC, the class-map command is used to define a traffic class (which is then
associated with a traffic policy). The purpose of a traffic class is to classify traffic.

Using the MQC to configure NBAR consists of the following:

 Defining a traffic class with the class-map command.


 Creating a traffic policy by associating the traffic class with one or more QoS
features (using the policy-map command).
 Attaching the traffic policy to the interface with the service-policy command.

A traffic class contains three major elements: a name, one or more match commands,
and, if more than one match command exists in the traffic class, an instruction on how
to evaluate these match commands (that is, match-all or match-any). The traffic class
is named in the class-map command line; for example, if you enter the class-map
cisco command while configuring the traffic class in the CLI, the traffic class would
be named "cisco."

The match commands are used to specify various criteria for classifying packets.
Packets are checked to determine whether they match the criteria specified in the
match commands. If a packet matches the specified criteria, that packet is considered
a member of the class and is forwarded according to the QoS specifications set in the
traffic policy. Packets that fail to meet any of the matching criteria are classified as
members of the default traffic class.
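As a sketch of these three steps, the following configuration classifies HTTP traffic
with NBAR and marks it; the class, policy, and interface names, and the chosen DSCP
value, are illustrative only:

Router(config)# class-map match-all WEB
Router(config-cmap)# match protocol http
Router(config-cmap)# exit
Router(config)# policy-map MARK-WEB
Router(config-pmap)# class WEB
Router(config-pmap-c)# set dscp af21
Router(config-pmap-c)# exit
Router(config-pmap)# exit
Router(config)# interface GigabitEthernet0/0
Router(config-if)# service-policy input MARK-WEB

Traffic that matches no match command falls into class-default, as described above.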

IP Precedence
Use of IP Precedence allows you to specify the class of service (CoS) for a packet.
You use the three precedence bits in the type of service (ToS) field of the IP version 4
(IPv4) header for this purpose. Figure 6.1 shows the ToS field.

Figure 6.1 – The ToS field in the IPv4 header

Using the ToS bits, you can define up to six classes of service. Other features
configured throughout the network can then use these bits to determine how to treat
the packet. These other QoS features can assign appropriate traffic-handling policies
including congestion management strategy and bandwidth allocation. For example,
although IP Precedence is not a queueing method, queueing methods such as
weighted fair queueing (WFQ) and Weighted Random Early Detection (WRED) can
use the IP Precedence setting of the packet to prioritize traffic.

By setting precedence levels on incoming traffic and using them in combination with
the Cisco IOS QoS queueing features, you can create differentiated service. You can
use features such as policy-based routing (PBR) and committed access rate (CAR) to
set precedence based on extended access list classification. These features afford
considerable flexibility for precedence assignment. For example, you can assign
precedence based on application or user, or by destination and source sub-network.

So that each subsequent network element can provide service based on the determined
policy, IP Precedence is usually deployed as close to the edge of the network or the
administrative domain as possible. You can think of IP Precedence as an edge
function that allows core, or backbone, QoS features such as WRED to forward traffic
based on CoS. IP Precedence can also be set in the host or network client, but this
setting can be overridden by policy within the network.

The following QoS features can use the IP Precedence field to determine how traffic
is treated:

 Distributed WRED (DWRED)


 WFQ
 CAR
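For example, precedence could be set at the network edge with an MQC policy such as
the following sketch; the access list, names, and precedence value are assumptions
for illustration:

Router(config)# access-list 101 permit tcp any any eq 23
Router(config)# class-map match-all CRITICAL
Router(config-cmap)# match access-group 101
Router(config-cmap)# exit
Router(config)# policy-map SET-PREC
Router(config-pmap)# class CRITICAL
Router(config-pmap-c)# set ip precedence 5
Router(config-pmap-c)# exit
Router(config-pmap)# exit
Router(config)# interface GigabitEthernet0/0
Router(config-if)# service-policy input SET-PREC

Core features such as WRED can then act on the precedence value marked here at the edge.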

Congestion Management
Congestion management features allow you to control congestion by determining the
order in which packets are sent out an interface based on priorities assigned to those
packets. Congestion management entails the creation of queues, assignment of
packets to those queues based on the classification of the packet, and scheduling of
the packets in a queue for transmission. The congestion management QoS feature
offers four types of queueing protocols, each of which allows you to specify creation
of a different number of queues, affording greater or lesser degrees of differentiation
of traffic, and to specify the order in which that traffic is sent.

During periods with light traffic, that is, when no congestion exists, packets are sent
out the interface as soon as they arrive. During periods of transmit congestion at the
outgoing interface, packets arrive faster than the interface can send them. If you use
congestion management features, packets accumulating at an interface are queued
until the interface is free to send them; they are then scheduled for transmission
according to their assigned priority and the queueing mechanism configured for the
interface. The router determines the order of packet transmission by controlling which
packets are placed in which queue and how queues are serviced with respect to each
other.

There are four types of queueing, which constitute the congestion management QoS
features:

 FIFO (first-in, first-out). FIFO entails no concept of priority or classes of


traffic. With FIFO, transmission of packets out the interface occurs in the
order the packets arrive.
 Weighted fair queueing (WFQ). WFQ offers dynamic, fair queueing that
divides bandwidth across queues of traffic based on weights. (WFQ ensures
that all traffic is treated fairly, given its weight.) To understand how WFQ
works, consider the queue for a series of File Transfer Protocol (FTP) packets
as a queue for the collective and the queue for discrete interactive traffic
packets as a queue for the individual. Given the weight of the queues, WFQ
ensures that for all FTP packets sent as a collective, an equal number of
individual interactive traffic packets are sent.

Given this handling, WFQ ensures satisfactory response time to critical applications,
such as interactive, transaction-based applications, that are intolerant of performance
degradation. For serial interfaces at E1 (2.048 Mbps) and below, flow-based WFQ is
used by default. When no other queueing strategies are configured, all other interfaces
use FIFO by default.

There are four types of WFQ:

 Flow-based WFQ (WFQ)


 Distributed WFQ (DWFQ)
 Class-based WFQ (CBWFQ)
 Distributed class-based WFQ (DCBWFQ)

 Custom queueing (CQ). With CQ, bandwidth is allocated proportionally for


each different class of traffic. CQ allows you to specify the number of bytes or
packets to be drawn from the queue, which is especially useful on slow
interfaces.
 Priority queueing (PQ). With PQ, packets belonging to one priority class of
traffic are sent before all lower priority traffic to ensure timely delivery of
those packets.
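For instance, CBWFQ (one of the WFQ variants above) reserves bandwidth per class
through the MQC; this sketch assumes a class map named VOICE already exists, and the
bandwidth figure (in kbps) is an example only:

Router(config)# policy-map CBWFQ-DEMO
Router(config-pmap)# class VOICE
Router(config-pmap-c)# bandwidth 256
Router(config-pmap-c)# exit
Router(config-pmap)# class class-default
Router(config-pmap-c)# fair-queue
Router(config-pmap-c)# exit
Router(config-pmap)# exit
Router(config)# interface Serial0/0
Router(config-if)# service-policy output CBWFQ-DEMO

The bandwidth command guarantees the class 256 kbps during congestion, while
unclassified traffic receives flow-based fair queueing.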

Congestion Avoidance
Congestion avoidance techniques monitor network traffic loads in an effort to
anticipate and avoid congestion at common network bottlenecks. Congestion
avoidance is achieved through packet dropping. Among the more commonly used

congestion avoidance mechanisms is Random Early Detection (RED), which is


optimum for high-speed transit networks. Cisco IOS QoS includes an implementation
of RED that, when configured, controls when the router drops packets. If you do not
configure Weighted Random Early Detection (WRED), the router uses the cruder
default packet drop mechanism called tail drop.

This module gives a brief description of the kinds of congestion avoidance


mechanisms provided by the Cisco IOS QoS features.

 Tail drop. This is the default congestion avoidance behavior when WRED is
not configured.
 WRED. WRED and distributed WRED (DWRED)—both of which are the
Cisco implementations of RED—combine the capabilities of the RED
algorithm with the IP Precedence feature.
 Flow-based WRED. Flow-based WRED extends WRED to provide
greater fairness to all flows on an interface in regard to how packets are
dropped.
 DiffServ Compliant WRED. DiffServ Compliant WRED extends
WRED to support Differentiated Services (DiffServ) and Assured
Forwarding (AF) Per Hop Behavior (PHB). This feature enables customers
to implement AF PHB by coloring packets according to differentiated
services code point (DSCP) values and then assigning preferential drop
probabilities to those packets.
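As a hedged example, DiffServ Compliant WRED can be enabled directly on an interface;
the interface and the thresholds shown are illustrative, not recommended values:

Router(config)# interface Serial0/0
Router(config-if)# random-detect
Router(config-if)# random-detect dscp-based
Router(config-if)# random-detect dscp af21 24 40 10

With this configuration, packets marked AF21 begin to be dropped once the average
queue depth exceeds 24 packets, reaching a 1-in-10 drop probability at the
40-packet maximum threshold.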

Policing and Traffic Shaping


Cisco IOS QoS offers two kinds of traffic regulation mechanisms—policing and
shaping.

The rate-limiting features of committed access rate (CAR) and the Traffic Policing
feature provide the functionality for policing traffic. The features of Generic Traffic
Shaping (GTS), Class-Based Traffic Shaping, Distributed Traffic Shaping (DTS), and
Frame Relay Traffic Shaping (FRTS) provide the functionality for shaping traffic.

Use Cisco Feature Navigator to find information about platform support and Cisco
software image support. To access Cisco Feature Navigator, go to
http://www.cisco.com/go/cfn. An account on Cisco.com is not required.

You can deploy these features throughout your network to ensure that a packet, or
data source, adheres to a stipulated contract and to determine the QoS to render the
packet. Both policing and shaping mechanisms use the traffic descriptor for a packet
—indicated by the classification of the packet—to ensure adherence and service.

Policers and shapers usually identify traffic descriptor violations in an identical


manner. They usually differ, however, in the way they respond to violations, for
example:

 A policer typically drops traffic. (For example, the CAR rate-limiting policer
will either drop the packet or rewrite its IP precedence, resetting the type of
service bits in the packet header.)

 A shaper typically delays excess traffic using a buffer, or queueing


mechanism, to hold packets and shape the flow when the data rate of the
source is higher than expected. (For example, GTS and Class-Based Shaping
use a weighted fair queue to delay packets in order to shape the flow, and DTS
and FRTS use a priority queue, a custom queue, or a FIFO queue for the same,
depending on how you configure it.)

Traffic shaping and policing can work in tandem. For example, a good traffic-shaping
scheme should make it easy for nodes inside the network to detect misbehaving flows.
This activity is sometimes called policing the traffic of the flow.
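The contrast between the two mechanisms can be sketched with the MQC Traffic Policing
and Class-Based Shaping features; the rates and interface are examples only:

Router(config)# policy-map POLICE-128K
Router(config-pmap)# class class-default
Router(config-pmap-c)# police 128000 8000 conform-action transmit exceed-action drop
Router(config-pmap-c)# exit
Router(config-pmap)# exit
Router(config)# policy-map SHAPE-128K
Router(config-pmap)# class class-default
Router(config-pmap-c)# shape average 128000
Router(config-pmap-c)# exit
Router(config-pmap)# exit
Router(config)# interface Serial0/0
Router(config-if)# service-policy input POLICE-128K
Router(config-if)# service-policy output SHAPE-128K

The policer drops excess inbound traffic immediately, whereas the shaper queues
excess outbound traffic and releases it at 128 kbps.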

6.2.c – Describe Layer 2 QoS

Queuing and Scheduling

Each physical port has four transmit queues (egress queues). Each packet that needs to
be transmitted is enqueued to one of the transmit queues. The transmit queues are then
serviced based on the transmit queue scheduling algorithm.

Once the final transmit DSCP is computed (including any markdown of DSCP), the
transmit DSCP to transmit queue mapping configuration determines the transmit
queue. The packet is placed in the transmit queue of the transmit port, determined
from the transmit DSCP. Use the qos map dscp to tx-queue command to configure the
transmit DSCP to transmit queue mapping. The transmit DSCP is the internal DSCP
value if the packet is a non-IP packet as determined by the QoS policies and trust
configuration on the ingress and egress ports.
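On Catalyst 4500 series switches, for example, the mapping might be adjusted as
follows; the DSCP values and target queue are an assumption for illustration:

Switch# configure terminal
Switch(config)# qos map dscp 48 49 50 51 52 53 54 55 to tx-queue 1
Switch(config)# end

This would direct packets with DSCP values 48 through 55 into transmit queue 1
instead of their default queue.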

Active Queue Management


Active queue management (AQM) informs you about congestion before a buffer
overflow occurs. AQM is done using dynamic buffer limiting (DBL). DBL tracks the
queue length for each traffic flow in the switch. When the queue length of a flow
exceeds its limit, DBL drops packets or sets the Explicit Congestion Notification (ECN)
bits in the packet headers.
DBL classifies flows into two categories, adaptive and aggressive. Adaptive flows
reduce their rate of packet transmission once they receive a congestion notification.
Aggressive flows do not take any corrective action in response to congestion
notification. For every active flow, the switch maintains two parameters,
"buffersUsed" and "credits." All flows start with "max-credits," a global parameter.
When a flow's credits fall below "aggressive-credits" (another global parameter), it
is considered an aggressive flow and is given a small buffer limit called
"aggressiveBufferLimit."
Queue length is measured by the number of packets. The number of packets in the
queue determines the amount of buffer space that a flow is given. When a flow has a
high queue length, its computed buffer limit is lowered, which allows new incoming
flows to receive buffer space in the queue. In this way, all flows get a proportional
share of the queue.

Because 4 transmit queues exist per interface and DBL is a per-queue mechanism,
DSCP values can make DBL application more complex.
The following table provides the default DSCP-to-transmit queue mapping:

DSCP value    txQueue
0-15          1
16-31         2
32-47         3
48-63         4

For example, if you are sending two streams, one with a DSCP of 16 and the other with
a DSCP of 0, they will transmit from different queues. Even though an aggressive flow
in txQueue 2 (packets with DSCP of 16) can saturate the link, packets with a DSCP of
0 are not blocked by the aggressive flow, because they transmit from txQueue 1. Even
without DBL, packets whose DSCP value places them in txQueue 1, 3, or 4 are not
dropped because of the aggressive flow.

Sharing Link Bandwidth Among Transmit Queues


The four transmit queues for a transmit port share the available link bandwidth of that
transmit port. You can set the link bandwidth to be shared differently among the
transmit queues using the bandwidth command in interface transmit-queue configuration
mode. With this command, you assign the minimum guaranteed bandwidth for each
transmit queue.
By default, all queues are scheduled in a round-robin manner.
For systems using Supervisor Engine II-Plus, Supervisor Engine II-Plus TS,
Supervisor Engine III, and Supervisor Engine IV, bandwidth can be configured on
these ports only:
 Uplink ports on supervisor engines
 Ports on the WS-X4306-GB GBIC module
 Ports on the WS-X4506-GB-T CSFP module
 The two 1000BASE-X ports on the WS-X4232-GB-RJ module
 The first two ports on the WS-X4418-GB module
 The two 1000BASE-X ports on the WS-X4412-2GB-TX module
For systems using Supervisor Engine V, bandwidth can be configured on all ports
(10/100 Fast Ethernet, 10/100/1000BASE-T, and 1000BASE-X).

Strict Priority / Low Latency Queueing


You can configure transmit queue 3 on each port with higher priority using
the priority high tx-queue configuration command in the interface configuration

mode. When transmit queue 3 is configured with higher priority, packets in transmit
queue 3 are scheduled ahead of packets in other queues.
When transmit queue 3 is configured at a higher priority, its packets are scheduled
for transmission before packets in the other transmit queues, provided the queue has
not exceeded its allocated bandwidth-sharing configuration. Any traffic that exceeds
the configured shape rate is queued and transmitted at the configured rate. If a burst
of traffic exceeds the size of the queue, packets are dropped to maintain transmission
at the configured shape rate.

Traffic Shaping

Traffic shaping provides the ability to control the rate of outgoing traffic in order to
make sure that the traffic conforms to the maximum rate of transmission contracted for
it. Traffic that meets a certain profile can be shaped to meet downstream traffic-rate
requirements and handle any data-rate mismatches.

Each transmit queue can be configured to transmit a maximum rate using


the shape command. The configuration allows you to specify the maximum rate of
traffic. Any traffic that exceeds the configured shape rate is queued and transmitted at
the configured rate. If the burst of traffic exceeds the size of the queue, packets are
dropped to maintain transmission at the configured shape rate.
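Pulling the preceding transmit-queue commands together, a sketch for a Catalyst 4500
port might look as follows; the interface, queue, and rates are assumptions for
illustration:

Switch(config)# interface GigabitEthernet1/1
Switch(config-if)# tx-queue 3
Switch(config-if-tx-queue)# bandwidth 200 mbps
Switch(config-if-tx-queue)# priority high
Switch(config-if-tx-queue)# shape 300 mbps

Here transmit queue 3 is guaranteed 200 Mbps, is serviced ahead of the other queues,
and is shaped so it cannot exceed 300 Mbps.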

6.3 – Network Services

6.3.a Implement and troubleshoot first-hop redundancy protocols

GLBP Overview
GLBP provides automatic router backup for IP hosts configured with a single default
gateway on an IEEE 802.3 LAN. Multiple first-hop routers on the LAN combine to
offer a single virtual first-hop IP router while sharing the IP packet-forwarding load.
Other routers on the LAN may act as redundant GLBP routers that will become active
if any of the existing forwarding routers fail.
GLBP performs a similar function for the user as HSRP and VRRP. HSRP and VRRP
allow multiple routers to participate in a virtual router group configured with a virtual
IP address. One member is elected to be the active router to forward packets sent to
the virtual IP address for the group. The other routers in the group are redundant until
the active router fails. These standby routers have unused bandwidth that the protocol
is not using. Although multiple virtual router groups can be configured for the same
set of routers, the hosts must be configured for different default gateways, which
results in an extra administrative burden. The advantage of GLBP is that it
additionally provides load balancing over multiple routers (gateways) using a single
virtual IP address and multiple virtual MAC addresses. The forwarding load is shared
among all routers in a GLBP group rather than being handled by a single router while
the other routers stand idle. Each host is configured with the same virtual IP address,
and all routers in the virtual router group participate in forwarding packets. GLBP

members communicate between each other through hello messages sent every 3
seconds to the multicast address 224.0.0.102, UDP port 3222 (source and destination).

GLBP Active Virtual Gateway


Members of a GLBP group elect one gateway to be the active virtual gateway (AVG)
for that group. Other group members provide backup for the AVG if the AVG
becomes unavailable. The AVG assigns a virtual MAC address to each member of the
GLBP group. Each gateway assumes responsibility for forwarding packets sent to the
virtual MAC address assigned to it by the AVG. These gateways are known as active
virtual forwarders (AVFs) for their virtual MAC address.
The AVG is also responsible for answering Address Resolution Protocol (ARP)
requests for the virtual IP address. Load sharing is achieved by the AVG replying to
the ARP requests with different virtual MAC addresses.
Prior to Cisco IOS Releases 15.0(1)M1 and 12.4(24)T2, when the no glbp
load-balancing command is configured, the AVG always responds to ARP requests
with the MAC address of its AVF.
In Cisco IOS Release 15.0(1)M1, 12.4(24)T2, 15.1(2)T, and later releases, when
the no glbp load-balancing command is configured, if the AVG does not have an
AVF, it preferentially responds to ARP requests with the MAC address of the first
listening virtual forwarder (VF), which causes traffic to route via another
gateway until that VF migrates back to being the current AVG.
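A minimal GLBP configuration on one member router might look as follows; the
addresses, group number, and load-balancing method are examples only:

Router(config)# interface GigabitEthernet0/0
Router(config-if)# ip address 10.0.0.2 255.255.255.0
Router(config-if)# glbp 10 ip 10.0.0.1
Router(config-if)# glbp 10 priority 150
Router(config-if)# glbp 10 preempt
Router(config-if)# glbp 10 load-balancing round-robin

The router with the highest priority (150 here) is elected AVG, and all hosts are
configured with 10.0.0.1 as their default gateway.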

HSRP Operation
Most IP hosts have an IP address of a single router configured as the default gateway.
When HSRP is used, the HSRP virtual IP address is configured as the host's default
gateway instead of the IP address of the router.

HSRP is useful for hosts that do not support a router discovery protocol (such as
ICMP Router Discovery Protocol [IRDP]) and cannot switch to a new router when
their selected router reloads or loses power. Because existing TCP sessions can
survive the failover, this protocol also provides a more transparent recovery for hosts
that dynamically choose a next hop for routing IP traffic.

When HSRP is configured on a network segment, it provides a virtual MAC address


and an IP address that is shared among a group of routers running HSRP. The address
of this HSRP group is referred to as the virtual IP address. One of these devices is
selected by the protocol to be the active router. The active router receives and routes
packets destined for the MAC address of the group. For n routers running HSRP,
n + 1 IP and MAC addresses are assigned.

HSRP detects when the designated active router fails, at which point a selected
standby router assumes control of the MAC and IP addresses of the Hot Standby
group. A new standby router is also selected at that time.

HSRP uses a priority mechanism to determine which HSRP configured router is to be


the default active router. To configure a router as the active router, you assign it a
priority that is higher than the priority of all the other HSRP-configured routers. The

default priority is 100, so if you configure just one router to have a higher priority,
that router will be the default active router.

Devices that are running HSRP send and receive multicast UDP-based hello messages
to detect router failure and to designate active and standby routers. When the active
router fails to send a hello message within a configurable period of time, the standby
router with the highest priority becomes the active router. The transition of packet
forwarding functions between routers is completely transparent to all hosts on the
network.

You can configure multiple Hot Standby groups on an interface, thereby making
fuller use of redundant routers and load sharing.

The figure below shows a network configured for HSRP. By sharing a virtual MAC
address and IP address, two or more routers can act as a single virtual router. The
virtual router does not physically exist but represents the common default gateway for
routers that are configured to provide backup to each other. You do not need to
configure the hosts on the LAN with the IP address of the active router. Instead, you
configure them with the IP address (virtual IP address) of the virtual router as their
default gateway. If the active router fails to send a hello message within the
configurable period of time, the standby router takes over and responds to the virtual
addresses and becomes the active router, assuming the active router duties.

Figure 6.2 – HSRP topology
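A basic HSRP configuration for one of the routers in such a topology might be as
follows; the interface, addresses, and group number are examples only:

Router(config)# interface GigabitEthernet0/0
Router(config-if)# ip address 10.0.0.2 255.255.255.0
Router(config-if)# standby 1 ip 10.0.0.1
Router(config-if)# standby 1 priority 110
Router(config-if)# standby 1 preempt

With priority 110 (above the default of 100) and preemption enabled, this router
becomes active and reclaims the active role after recovering from a failure.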

VRRP Operation

There are several ways a LAN client can determine which router should be the first
hop to a particular remote destination. The client can use a dynamic process or static
configuration. Examples of dynamic router discovery are as follows:

 Proxy ARP--The client uses Address Resolution Protocol (ARP) to get the
destination it wants to reach, and a router will respond to the ARP request with
its own MAC address.
 Routing protocol--The client listens to dynamic routing protocol updates (for
example, from Routing Information Protocol [RIP]) and forms its own routing
table.
 ICMP Router Discovery Protocol (IRDP) client--The client runs an Internet
Control Message Protocol (ICMP) router discovery client.

The drawback to dynamic discovery protocols is that they incur some configuration
and processing overhead on the LAN client. Also, in the event of a router failure, the
process of switching to another router can be slow.

An alternative to dynamic discovery protocols is to statically configure a default


router on the client. This approach simplifies client configuration and processing, but
creates a single point of failure. If the default gateway fails, the LAN client is limited
to communicating only on the local IP network segment and is cut off from the rest of
the network.

VRRP can solve the static configuration problem. VRRP enables a group of routers to
form a single virtual router. The LAN clients can then be configured with the virtual
router as their default gateway. The virtual router, representing a group of routers, is
also known as a VRRP group.

VRRP is supported on Ethernet, Fast Ethernet, BVI, and Gigabit Ethernet interfaces,
and on MPLS VPNs, VRF-aware MPLS VPNs, and VLANs.

The figure below shows a LAN topology in which VRRP is configured. In this
example, Routers A, B, and C are VRRP routers (routers running VRRP) that
comprise a virtual router. The IP address of the virtual router is the same as that
configured for the Ethernet interface of Router A (10.0.0.1).

Figure 6.3 – Basic VRRP topology



Because the virtual router uses the IP address of the physical Ethernet interface of
Router A, Router A assumes the role of the virtual router master and is also known as
the IP address owner. As the virtual router master, Router A controls the IP address of
the virtual router and is responsible for forwarding packets sent to this IP address.
Clients 1 through 3 are configured with the default gateway IP address of 10.0.0.1.

Routers B and C function as virtual router backups. If the virtual router master fails,
the backup router configured with the highest priority becomes the virtual router
master and provides uninterrupted service for the LAN hosts. When Router A recovers, it
becomes the virtual router master again.
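A sketch of the configuration for backup Router B in Figure 6.3 might be as follows;
the interface name and priority value are assumptions:

Router(config)# interface GigabitEthernet0/0
Router(config-if)# ip address 10.0.0.2 255.255.255.0
Router(config-if)# vrrp 1 ip 10.0.0.1
Router(config-if)# vrrp 1 priority 110

Because Router A owns the virtual IP address 10.0.0.1, it automatically runs at
priority 255 and remains master while it is up; Router B's priority of 110 ranks it
above any backup using the default of 100.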

6.3.b Implement and troubleshoot network time protocol

NTP Version 3 (RFC 1305) allows IP hosts to synchronize their time-of-day clocks
with a common source clock. For instance, routers and switches can synchronize their
clocks to make event correlation from an SNMP management station more
meaningful, by ensuring that any events and traps have accurate time stamps.

By design, most routers and switches use NTP client mode, adjusting their clocks
based on the time as known by an NTP server. NTP defines the messages that flow
between client and server, and the algorithms a client uses to adjust its clock. Routers
and switches can also be configured as NTP servers, as well as using NTP symmetric
active mode—a mode in which the router or switch mutually synchronizes with
another NTP host.

NTP servers may reference other NTP servers to obtain a more accurate clock source
as defined by the stratum level of the ultimate source clock. For instance, atomic
clocks and Global Positioning System (GPS) satellite transmissions provide a source
of stratum 1 (lowest/best possible stratum level). For an enterprise network, the
routers and switches can refer to a low-stratum NTP source on the Internet, or
purpose-built rack-mounted NTP servers, with built-in GPS capabilities, can be
deployed.
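For example, a router or switch can be pointed at one or more NTP servers, and can
optionally act as an authoritative server itself; the addresses here are illustrative:

Router(config)# ntp server 192.0.2.1
Router(config)# ntp server 192.0.2.2 prefer
Router(config)# ntp master 4

The ntp master 4 command makes the device an authoritative stratum-4 source if its
upstream servers become unreachable; show ntp status and show ntp associations
verify synchronization and the resulting stratum.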

6.3.c Implement and troubleshoot IPv4 and IPv6 DHCP

DHCP
One alternative to static IPv6 addressing, namely stateless auto-configuration was
covered earlier. Another alternative also exists: stateful auto-configuration. This is
where DHCPv6 comes in. DHCPv6 is specified in RFC 3315.
Two conditions can cause a host to use DHCPv6:

 The host is explicitly configured to use DHCPv6 based on an implementation-


specific setting.

 An IPv6 router advertises in its RA messages that it wants hosts to use


DHCPv6 for addressing. Routers do this by setting the M flag (Managed
Address Configuration) in RAs.

To use stateful auto-configuration, a host sends a DHCP request to one of two well-
known IPv6 multicast addresses on UDP port 547:

 FF02::1:2, all DHCP relay agents and servers


 FF05::1:3, all DHCP servers

The DHCP server then provides the necessary configuration information in reply to
the host on UDP port 546. This information can include the same types of information
used in an IPv4 network, but additionally it can provide information for multiple
subnets, depending on how the DHCP server is configured.

To configure a Cisco router as a DHCPv6 server, you first configure a DHCP pool,
just as in IPv4 DHCP. Then, you must specifically enable the DHCPv6 service using
the ipv6 dhcp server pool-name interface command.
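A minimal stateful DHCPv6 server sketch might look as follows; the pool name, prefix,
and DNS address are assumptions, the address prefix pool command requires a
relatively recent IOS release, and ipv6 nd managed-config-flag sets the M flag in RAs
as described above:

Router(config)# ipv6 dhcp pool DHCP6-POOL
Router(config-dhcpv6)# address prefix 2001:DB8:1::/64
Router(config-dhcpv6)# dns-server 2001:DB8::53
Router(config-dhcpv6)# domain-name example.com
Router(config-dhcpv6)# exit
Router(config)# interface GigabitEthernet0/0
Router(config-if)# ipv6 dhcp server DHCP6-POOL
Router(config-if)# ipv6 nd managed-config-flag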

6.3.d Implement and troubleshoot IPv4 network address translation

The advantage of using private IP addresses is that it allows an organization to use


private addressing in a network, and use the Internet at the same time, by
implementing Network Address Translation (NAT).

NAT is defined in RFC 1631 and allows a host that does not have a valid registered IP
address to communicate with other hosts through the Internet. Essentially, NAT
allows hosts that use private addresses or addresses assigned to another organization,
i.e. addresses that are not Internet-ready, to continue to be used and still allows
communication with hosts across the Internet. NAT accomplishes this by using a
valid registered IP address to represent the private address to the rest of the Internet.
The NAT function changes the private IP addresses to publicly registered IP addresses
inside each IP packet that is transmitted to a host on the Internet.

The Cisco IOS software supports several variations of NAT. These include Static
NAT; Dynamic NAT; and Overloading NAT with Port Address Translation (PAT).
Static NAT
In Static NAT, the IP addresses are statically mapped to each other. Thus, the NAT
router simply configures a one-to-one mapping between the private address and the
registered address that is used on its behalf. Supporting a second IP host in the
private network requires a second static one-to-one mapping using a second IP address
in the public address range.
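A static mapping of this kind might be configured as follows; the inside and outside
addresses and interfaces are examples only:

Router(config)# ip nat inside source static 10.1.1.1 200.1.1.1
Router(config)# interface GigabitEthernet0/0
Router(config-if)# ip nat inside
Router(config-if)# exit
Router(config)# interface Serial0/0
Router(config-if)# ip nat outside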

Dynamic NAT
Dynamic NAT is similar to static NAT in that the NAT router creates a one-to-one
mapping between an inside local and inside global address and changes the IP
addresses in packets as they exit and enter the inside network. However, the mapping
of an inside local address to an inside global address happens dynamically. Dynamic
NAT accomplishes this by setting up a pool of possible inside global addresses and
defining criteria for the set of inside local IP addresses whose traffic should be
translated with NAT.

With dynamic NAT, you can configure the NAT router with more IP addresses in the
inside local address list than in the inside global address pool. When the number of
registered public IP addresses is defined in the inside global address pool, the router
allocates addresses from the pool until all are allocated. If a new packet arrives, and it
needs a NAT entry, but all the pooled IP addresses are already allocated, the router
discards the packet. The user must try again until a NAT entry times out, at which
point the NAT function works for the next host that sends a packet. This can be
overcome through the use of Port Address Translation (PAT).

Overloading NAT with Port Address Translation (PAT)


In some networks, most, if not all, IP hosts need to reach the Internet. If that network
uses private IP addresses, the NAT router needs a very large set of registered IP
addresses. If you use static NAT, each private IP host that needs Internet access needs
a publicly registered IP address. Dynamic NAT lessens the problem, but if a large
percentage of the IP hosts in the network need Internet access throughout normal
business hours, a large number of registered IP addresses would also be required.
These problems can be overcome through overloading with port address translation.
Overloading allows NAT to scale to support many clients with only a few public IP
addresses.

To support lots of inside local IP addresses with only a few inside global, publicly
registered IP addresses, NAT overload uses Port Address Translation (PAT),
translating the IP address as well as translating the port number. When NAT creates
the dynamic mapping, it selects not only an inside global IP address but also a unique
port number to use with that address. The NAT router keeps a NAT table entry for
every unique combination of inside local IP address and port, with translation to the
inside global address and a unique port number associated with the inside global
address. And because the port number field has 16 bits, NAT overload can use more
than 65,000 port numbers, allowing it to scale well without needing many registered
IP addresses.

Translating Overlapping Addresses


NAT can also be used in organizations that do not use private addressing but use a
network number registered to another company. If one organization uses a network
number that is registered to another organization, and both organizations are
connected to the Internet, NAT can be used to translate both the source and the
destination IP addresses. However, both the source and the destination addresses must
be changed as the packet passes through the NAT router.
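
As a sketch of this case (all addresses hypothetical), an outside static mapping can be
combined with an inside static mapping so that both the source and destination
addresses are rewritten as the packet crosses the NAT router:

! Inside host 10.1.1.1 appears to the outside as 192.0.2.1
Router(config)# ip nat inside source static 10.1.1.1 192.0.2.1
! Overlapping outside host 10.1.1.5 appears to the inside as 172.16.1.5
Router(config)# ip nat outside source static 10.1.1.5 172.16.1.5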

Configuring NAT
There are a number of commands that can be used to configure the different variations
of NAT.
 Static NAT configuration requires that each static mapping between a local, or
private, address and a global, or public, address must be configured. Then, each
interface needs to be identified as either an inside or outside interface.

The ip nat inside source static command is used to create a static mapping. The
inside keyword indicates that NAT translates addresses for hosts on the inside part
of the network. The source keyword indicates that NAT translates the source IP
address of packets coming into its inside interfaces. The static keyword indicates
that the parameters define a static entry. If two hosts require Internet access, two
ip nat inside source static commands must be used.

The ip nat inside and ip nat outside interface subcommands identify which
interfaces are “inside” and which are “outside” respectively.
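
Putting these commands together, a minimal static NAT configuration might look as
follows (addresses and interface names hypothetical):

Router(config)# ip nat inside source static 10.1.1.1 200.1.1.1
Router(config)# ip nat inside source static 10.1.1.2 200.1.1.2
Router(config)# interface GigabitEthernet0/0
Router(config-if)# ip nat inside
Router(config-if)# exit
Router(config)# interface GigabitEthernet0/1
Router(config-if)# ip nat outside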

Two show commands list the most important information about static NAT. These
commands are:
 show ip nat translations, which lists the static NAT entries; and
 show ip nat statistics, which lists statistics, including the number of currently
active translation table entries and the number of hits, which increments for
every packet for which NAT must translate addresses.
 Dynamic NAT configuration differs from static NAT but it also has some
similarities. It requires that each interface be identified as either an inside or
outside interface but the static mapping is not required. In addition, a pool of
inside global addresses needs to be defined.

The ip nat inside source command is used to identify which inside local IP
addresses need to have their addresses translated.

The ip nat pool command defines the set of IP addresses to be used as inside
global addresses.
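
Combining these elements, a dynamic NAT configuration might be sketched as follows
(ACL number, pool name, and addresses hypothetical):

! Inside local addresses whose traffic should be translated
Router(config)# access-list 1 permit 10.1.1.0 0.0.0.255
! Pool of inside global (public) addresses
Router(config)# ip nat pool POOL1 200.1.1.1 200.1.1.6 netmask 255.255.255.248
Router(config)# ip nat inside source list 1 pool POOL1
Router(config)# interface GigabitEthernet0/0
Router(config-if)# ip nat inside
Router(config-if)# exit
Router(config)# interface GigabitEthernet0/1
Router(config-if)# ip nat outside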

The two show commands used to troubleshoot static NAT can also be used to
troubleshoot dynamic NAT. In addition to these you can use the debug ip nat
command. This command causes the router to issue a message every time a packet
has its address translated for NAT.

The ip nat inside source overload command is used to configure NAT overload.
The overload parameter is required to enable overload. Without this parameter,
the router does not perform overload, but dynamic NAT.
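
For example (ACL number and pool name hypothetical), overload can be enabled on a
pool, or directly on the outside interface address:

Router(config)# ip nat inside source list 1 pool POOL1 overload
! or, overloading the outside interface address itself:
Router(config)# ip nat inside source list 1 interface GigabitEthernet0/1 overload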

You can use the show ip nat translations command to troubleshoot NAT overload.

6.3.e Describe IPv6 network address translation

NAT64

There are two different forms of NAT64, Stateless and Stateful. Stateless NAT64 is
outside the scope of this document, as it really has no place in the enterprise Internet
edge for the purpose of accessing enterprise-hosted content.

The Cisco implementation of Stateful NAT64 is quite powerful and offers many
implementation use cases, including:
 PAT/Overload
 Dynamic 1:1
 Static IPv6-to-IPv4
 Static IPv6-to-IPv4 with ports
 Static IPv4-to-IPv6
 Static IPv4-to-IPv6 with ports

Some of the key features and considerations of Stateful NAT64 include:

 Support of TCP, UDP, and ICMP
 IPv6 host address translated statefully through normal NAT mechanism
 IPv4 host address translated using either Well Known Prefix (WKP) or
configured stateful prefix
 Dynamic Stateful NAT64 is dependent on DNS64 implementation
 Static Stateful NAT64 is not dependent on DNS64 implementation

The various use cases and detailed Cisco implementation of NAT64 can be found in
the white paper referenced earlier (NAT64 Technology: Connecting IPv6 and IPv4
Networks).

The mappings for the IPv4 servers are static and therefore no DNS64 implementation
is needed and the IPv6-enabled clients can use standard DNS mechanisms to query
IPv6-enabled DNS servers to resolve the AAAA record (or other record type) for the
host.

The primary reasons for using Stateful NAT64 instead of the full dual stack design are
the exact same as for using SLB64. The difference is that you may not have a Cisco
ACE or other load balancer on which to perform SLB64. If you have an existing
investment in an application delivery controller now and it does not support IPv6 and
you cannot afford to replace the HW or upgrade the SW, then Stateful NAT64 is an
option. However, the same issue applies with NAT64 functionality in your current
routing products. If your current routing infrastructure does not support NAT64, then
an incremental capital expenditure has to be made and you have to decide if that is
going to be in a new routing platform, such as the Cisco ASR 1000 series, or in a new
application delivery controller, such as the Cisco ACE. Either way, if you do not own
it now, you will have to buy one or the other to handle your IPv6-to-IPv4 translation
requirements.

The Stateful NAT64 configuration is not all that different from traditional NAT44 on
Cisco IOS. In this setup the G0/0/2 interface is the north-facing (client-side) interface
and the G0/0/3 interface is the south-facing (server-side) interface. Notice that IPv6 is
only enabled on the G0/0/2 interface and IPv4 only on the G0/0/3 interface. You also
can deploy this in a one-arm scenario just like on the Cisco ACE. NAT64 is enabled
on both interfaces. An ACL is defined to permit NAT64 processing for the two
statically defined IPv6 addresses that represent the real IPv4-only Web
servers. The static mappings between the outside IPv6 addresses and inside IPv4
addresses are:

 2001:DB8:CAFE:BEEF::10 <> 10.140.19.80
 2001:DB8:CAFE:BEEF::11 <> 10.140.19.81

The IPv6 addresses must be out of the NAT64 stateful prefix, which in this case is
2001:DB8:CAFE:BEEF::/96. A /96 is used here as it maps to a 32-bit address range
(128-32=96). You can use a different sized prefix.

An IPv4 pool is defined and is used for connections between the NAT64 process and
the inside IPv4 servers.
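
A configuration sketch for this setup might look as follows. The stateful prefix and
static mappings are those given above; the pool name and pool range are hypothetical,
and exact syntax varies by IOS/IOS-XE release:

Router(config)# interface GigabitEthernet0/0/2
Router(config-if)# ipv6 enable
Router(config-if)# nat64 enable
Router(config-if)# exit
Router(config)# interface GigabitEthernet0/0/3
Router(config-if)# nat64 enable
Router(config-if)# exit
Router(config)# nat64 prefix stateful 2001:DB8:CAFE:BEEF::/96
Router(config)# nat64 v4 pool V4POOL 10.140.19.100 10.140.19.110
Router(config)# nat64 v6v4 static 2001:DB8:CAFE:BEEF::10 10.140.19.80
Router(config)# nat64 v6v4 static 2001:DB8:CAFE:BEEF::11 10.140.19.81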

NPTv6
NPTv6 was proposed solely for the use of multi-homing when an enterprise needs to
translate between IPv6 prefixes but still maintain some level of end-to-end
reachability.
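
On platforms that support it, an NPTv6 configuration can be sketched roughly as
follows (prefixes and interface names hypothetical; exact syntax varies by release):

Router(config)# interface GigabitEthernet0/0/0
Router(config-if)# nat66 inside
Router(config-if)# exit
Router(config)# interface GigabitEthernet0/0/1
Router(config-if)# nat66 outside
Router(config-if)# exit
! Translate the inside prefix to the outside prefix, 1:1, statelessly
Router(config)# nat66 prefix inside 2001:DB8:AAAA::/64 outside 2001:DB8:BBBB::/64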

6.4 Network optimization

6.4.a Implement and troubleshoot IP SLA

CISCO IOS IP SERVICE LEVEL AGREEMENTS OVERVIEW


Cisco IOS IP Service Level Agreements (SLAs) allow users to monitor network
performance between Cisco routers or from either a Cisco router to a remote IP
device.

Configuration examples include both Command Line Interface (CLI) and Simple
Network Management Protocol (SNMP).

Cisco IOS IP SLAs capabilities:

• Voice-over-IP (VoIP), video, and VPN network monitoring
• SLA monitoring
• Network performance monitoring and network performance visibility
• IP service network health readiness or assessment
• Edge-to-edge network availability monitoring
• Troubleshooting of network operation
• Multiprotocol Label Switching (MPLS) network monitoring

Cisco IOS IP SLAs Benefits


• Measure end-to-end IP layer network
• Deploy new applications and services with complete confidence
• Verify and monitor quality of service (QoS) and differentiated services
• Increase end user confidence and satisfaction
• Implement SLA measurement metrics
• Notify users about network issues proactively
• Measure network performance continuously, reliably, and predictably

Cisco IOS IP SLAs Feature Overview

Measurement capabilities
• User Datagram Protocol (UDP) response time, one-way delay, jitter, and
packet loss and connectivity
• ICMP response time and connectivity
• Hop-by-hop ICMP response time and jitter
• Performance metric including DNS lookup, TCP connect, and HTTP
transaction time
• Packet loss statistics
• DHCP response time measurements
• Response times from a Cisco network device to network servers
• MOS/ICPIF voice quality scoring and simulation of VoIP codecs
• DLSw+ peer tunnel performance

· Proactive Notification
• Ability to define rising and falling thresholds to monitor SLAs
• Ability to generate SNMP Traps when a performance threshold is violated
• Ability to trigger another operation for more detailed analysis

· Flexible scheduling
• Measure at any given time, or continuously at any time interval
• Sequential activation for a large number of IP SLAs operations by utilizing
multi-operation scheduler

Figure 6.4 Cisco IOS IP SLAs Overview



MEASURING THE NETWORK WITH CISCO IOS IP SLAS


Cisco IOS IP SLAs is a network performance measurement and diagnostic tool that
uses active monitoring, which includes the generation of traffic in a continuous,
reliable, and predictable manner. Cisco IOS IP SLAs actively sends data across the
network to measure performance between multiple network locations or across
multiple network paths. It uses the timestamp information to calculate performance
metrics such as jitter, latency, network and server response times, packet loss, and
MOS voice quality scores. The user defines an IP SLAs operation (probe) within
Cisco IOS Software using the SNMP MIB or CLI. The measurement characteristics
include packet size, packet spacing, protocol type, DSCP marking, and other
parameters. The operation is scheduled to generate traffic and retrieve performance
measurements. The data from the Cisco IOS IP SLAs operation is stored within the
RTTMON MIB and available within CLI for Network Management System
applications to retrieve network performance statistics. Users can schedule a Cisco
IOS IP SLAs operation at any point in time or continuously over any time interval.
Cisco IOS IP SLAs can be configured to monitor per-class traffic over the same link
by setting the Differentiated Services Code Point (DSCP) bits.

A destination router running Cisco IOS Software is configured as a Cisco IOS IP
SLAs Responder, which processes measurement packets and provides detailed
timestamp information. The responder can send information about the destination
router’s processing delay back to the source Cisco router. Unidirectional
measurements are also possible using Cisco IOS IP SLAs.

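For example (addresses and operation number hypothetical), a udp-jitter operation on
the source router pairs with the responder on the destination router:

! On the destination router
Router(config)# ip sla responder
! On the source router
Router(config)# ip sla 1
Router(config-ip-sla)# udp-jitter 10.1.1.1 16384
Router(config-ip-sla-jitter)# frequency 60
Router(config-ip-sla-jitter)# exit
Router(config)# ip sla schedule 1 life forever start-time now
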
Cisco IOS IP SLAs provides a proactive notification feature with an SNMP trap. Each
measurement operation can monitor against a pre-set performance threshold. Cisco
IOS IP SLAs generates an SNMP trap to alert management applications if this
threshold is crossed. Several SNMP traps are available: round trip time, average jitter,
one-way latency, jitter, packet loss, MOS, and connectivity tests. Administrators can
also configure Cisco IOS IP SLAs to run a new operation automatically when the
threshold is crossed. For instance, when latency exceeds a threshold this can trigger a
secondary operation to measure hop-by-hop latency to isolate the problem area in the
network.
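
A reaction configuration of this kind might be sketched as follows (operation number
and threshold values hypothetical; trap-related syntax varies by release):

! Trap when round-trip time rises above 100 ms (falling threshold 50 ms)
Router(config)# ip sla reaction-configuration 1 react rtt threshold-value 100 50 threshold-type immediate action-type trapOnly
Router(config)# snmp-server enable traps ip sla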

HOW TO MONITOR A NETWORK WITH CISCO IOS IP SLAS


Cisco IOS IP SLAs can be used for network access, troubleshooting, QoS
verification, and service level monitoring. Several items need to be resolved before
deciding when to monitor the network performance and service levels.
What is the primary goal of the measurements? Which metrics are important to
monitor? At what days and times are measurements needed?

The second step is to make a broad assessment of traffic patterns within the network.
When packet samples are distributed more widely and measured more frequently, the
resulting picture of network traffic patterns is more reliable; more measurement points
mean more accurate information. Active measurements should mimic the type of
traffic run on the network; for example, the correct packet size, spacing, and interval
to mimic a VoIP codec.

Scenario 1: Measure Data Traffic Performance from the Branch to Central Office

Figure 6.5 Network for Scenario 1

An Enterprise customer has one central headquarters site along with two branch
offices. One of the branch offices is communicating via a dedicated FR circuit (256
kbps), while the second branch office is accessing the corporate headquarters using a
WAN link through the public Internet via a VPN.

Client stations in both branch offices require access to a central web server at the
corporate headquarters. For example, corporate can claim to provide server 99.95%
availability with a response time of no greater than 50ms. For the branch office
accessing the servers via the Internet, corporate headquarters provides a latency SLA
of no more than 100ms. Based on this data, the Enterprise must consider how it can
measure and verify that both branch offices are getting their agreed-upon service
levels from corporate headquarters. Furthermore, if corporate is not meeting these
service levels, what part or parts of the network are contributing to this degradation
(i.e.: WAN links, client application, web server)?
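
To verify the 50 ms response-time target from a branch router, an ICMP echo
operation could be sketched as follows (server address and operation number
hypothetical):

Router(config)# ip sla 10
Router(config-ip-sla)# icmp-echo 10.10.10.1
Router(config-ip-sla-echo)# frequency 60
Router(config-ip-sla-echo)# threshold 50
Router(config-ip-sla-echo)# exit
Router(config)# ip sla schedule 10 life forever start-time now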

6.4.b Implement and troubleshoot tracking object



Object tracking is an independent process that manages creating, monitoring, and
removing tracked objects such as the state of the line protocol of an interface. Clients
such as the Hot Standby Router Protocol (HSRP), Gateway Load Balancing Protocol
(GLBP), and VRRP register their interest with specific tracked objects and act when
the state of an object changes.

A unique number that is specified on the tracking CLI identifies each tracked object.
Client processes such as VRRP use this number to track a specific object.

The tracking process periodically polls the tracked objects and notes any change of
value. The changes in the tracked object are communicated to interested client
processes, either immediately or after a specified delay. The object values are reported
as either up or down.

VRRP object tracking gives VRRP access to all the objects available through the
tracking process. The tracking process allows you to track individual objects such as
the state of an interface line protocol, the state of an IP route, or the reachability of a
route.
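
For instance, tracked objects for an interface line protocol and the reachability of an
IP route can be defined as follows (object numbers, interface, and prefix
hypothetical):

Router(config)# track 1 interface GigabitEthernet0/1 line-protocol
Router(config)# track 2 ip route 10.0.0.0 255.255.255.0 reachability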

VRRP provides an interface to the tracking process. Each VRRP group can track
multiple objects that may affect the priority of the VRRP router. You specify the
object number to be tracked and VRRP is notified of any change to the object. VRRP
increments (or decrements) the priority of the virtual router based on the state of the
object being tracked.

How VRRP Object Tracking Affects the Priority of a Device


The priority of a device can change dynamically if it has been configured for object
tracking and the object that is being tracked goes down. The tracking process
periodically polls the tracked objects and notes any change of value. The changes in
the tracked object are communicated to VRRP, either immediately or after a specified
delay. The object values are reported as either up or down. Examples of objects that
can be tracked are the line protocol state of an interface or the reachability of an IP
route. If the specified object goes down, the VRRP priority is reduced. The VRRP
router with the higher priority can now become the virtual router master if it has the
vrrp preempt command configured.

Object Tracking Integration

Example: Tracking an IPv6 Object using VRRPv3


In the following example, the tracking process is configured to track the state of the
IPv6 object using the VRRPv3 group. VRRP on Gigabit Ethernet interface 0/0/0 then
registers with the tracking process to be informed of any changes to the IPv6 object
on the VRRPv3 group. If the state of the tracked IPv6 object goes down, the priority
of the VRRP group is reduced by 20:

Router(config)# fhrp version vrrp v3


Router(config)# interface GigabitEthernet 0/0/0
Router(config-if)# vrrp 1 address-family ipv6
Router(config-if-vrrp)# track 1 decrement 20
Example: Verifying VRRP IPv6 Object Tracking

Router# show vrrp

Ethernet0/0 - Group 1 - Address-Family IPv4


State is BACKUP
State duration 1 mins 41.856 secs
Virtual IP address is 172.24.1.253
Virtual MAC address is 0000.5E00.0101
Advertisement interval is 1000 msec
Preemption enabled
Priority is 80 (configured 100)
Track object 1 state Down decrement 20
Master Router is 172.24.1.2, priority is 100
Master Advertisement interval is 1000 msec (learned)
Master Down interval is 3609 msec (expires in 3297 msec)

Router# show track ipv6 route brief

6.4.c Implement and troubleshoot netflow


Cisco IOS Flexible NetFlow is the next generation in flow technology, allowing
optimization of the network infrastructure, reduced operating costs, and improved
capacity planning and security incident detection with increased flexibility and
scalability. Flexible NetFlow has many benefits over the traditional Cisco NetFlow
functionality available for years in Cisco hardware and software.
Key Advantages to using Flexible NetFlow:

 Flexibility, scalability of flow data beyond traditional NetFlow
 The ability to monitor a wider range of packet information producing new
information about network behavior not available today
 Enhanced network anomaly and security detection
 User configurable flow information to perform customized traffic identification
and the ability to focus and monitor specific network behavior
 Convergence of multiple accounting technologies into one accounting mechanism

Flexible NetFlow is an integral part of Cisco IOS Software that collects and measures
data allowing all routers or switches in the network to become a source of telemetry
and a monitoring device. Flexible NetFlow allows extremely granular and accurate
traffic measurements and high-level aggregated traffic collection. Because it is part of
Cisco IOS Software, Flexible NetFlow enables Cisco product-based networks to
perform traffic flow analysis without purchasing external probes--making traffic
analysis economical on large IP networks.

Opportunities and Uses of Flexible NetFlow include:

 Application and network usage
 Network productivity and utilization of network resources
 The impact of changes to the network
 Network anomaly and security vulnerabilities
 Long term compliance, business process and audit trail
 Understand who, what, when, where, and how network traffic is flowing

Applications for NetFlow data are constantly being invented but the key usages
include:

 Real-time Network monitoring
 Application and user Profiling
 Network planning and capacity planning
 Security incident detection and classification
 Accounting and billing
 Network data warehousing, forensics and data mining
 Troubleshooting

Network Application and User monitoring


Flexible NetFlow data enables users to view detailed, time-based and application-
based usage of a network. This information allows planning and allocation of network
and application resources, includes extensive near real-time network monitoring
capabilities, and can be used to display traffic patterns and application-based views.
Flexible NetFlow services data optimizes network planning including device ingress
and egress information and is useful for monitoring to and between datacenters.
Flexible NetFlow provides proactive problem detection, efficient troubleshooting, and
rapid problem resolution and the information is used to efficiently allocate network
resources as well as to detect and resolve potential security and policy violations.
Flexible NetFlow adds the benefit of customized flow analysis allowing the
customization of network information in the diagnosis of the issue and focusing on
the details of the problem at hand.

Network Planning
Flexible NetFlow can be used to capture data over a long period of time producing the
opportunity to track and anticipate network growth and plan upgrades to increase the
number of routing devices, ports, or higher- bandwidth interfaces. Flexible NetFlow
helps to minimize the total cost of network operations while maximizing network
performance, capacity, and reliability. NetFlow detects unwanted WAN traffic,
validates bandwidth and Quality of Service (QoS), and allows the analysis of new
network applications. Flexible NetFlow allows the tracking of information within a
NetFlow database or Flow Monitor. Multiple flow monitors may be implemented that
include specific information useful for network planning. Flexible NetFlow will give
you valuable information to reduce the cost of operating your network.

Security Analysis
Flexible NetFlow data identifies and classifies DDoS attacks, viruses, and worms in
real-time. Changes in network behavior indicate anomalies that are clearly
demonstrated in NetFlow data. The data is also a valuable forensic tool to understand
and replay the history of security incidents. Security analysis may include detailed
customized Flow Monitors to create virtual or on demand views of network data
enhancing detection capabilities already available in traditional NetFlow.

IP Accounting and Usage-Based Billing


Flexible NetFlow also enables customers to implement usage-based billing, providing
them with the ability to implement competitive pricing schemes and premium
services. Flexible NetFlow has the concept of permanent monitoring in which
metering or accounting information is continuously and periodically collected (i.e.,
similar to SNMP counters). Customers can, therefore, use NetFlow to track IP traffic
flowing into or out of their datacenters for capacity planning or to implement usage-
based billing.

Traffic Engineering
NetFlow can measure the amount of traffic crossing peering or transit points to
determine if a peering arrangement with other service providers is fair and equitable.
For instance Flexible NetFlow includes the use of information such as BGP policy
accounting traffic index, detailed peering analysis with BGP NextHop and BGP AS
information for peering analysis.

How Does NetFlow produce information for your network?


NetFlow includes two key components that perform the following capabilities:

 Flow caching analyzes and collects IP data flows within a router or switch and
prepares data for export. Flexible NetFlow has the ability to implement multiple
flow caches or flow monitors for tracking different NetFlow applications
simultaneously. For instance, the user can track security and traffic analysis
simultaneously in separate NetFlow caches. This gives the ability to focus,
pinpoint and monitor specific information for the application. Flexible flow data is
now available using the latest NetFlow v9 export data format.
 NetFlow reporting collection utilizes exported data from multiple routers and
filters and aggregates the data according to customer policies, and then stores this
summarized or aggregated data. NetFlow collection systems allow users to
complete real-time visualization or trending analysis of recorded and aggregated
flow data. Users can specify the router and aggregation scheme and time interval
desired.
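
These two components come together in the Flexible NetFlow CLI as a flow record, a
flow exporter, and a flow monitor applied to an interface. A minimal sketch (record,
exporter, and monitor names, plus the collector address, are hypothetical):

Router(config)# flow record RECORD1
Router(config-flow-record)# match ipv4 source address
Router(config-flow-record)# match ipv4 destination address
Router(config-flow-record)# collect counter bytes
Router(config-flow-record)# exit
Router(config)# flow exporter EXPORT1
Router(config-flow-exporter)# destination 192.0.2.100
Router(config-flow-exporter)# export-protocol netflow-v9
Router(config-flow-exporter)# exit
Router(config)# flow monitor MONITOR1
Router(config-flow-monitor)# record RECORD1
Router(config-flow-monitor)# exporter EXPORT1
Router(config-flow-monitor)# exit
Router(config)# interface GigabitEthernet0/0
Router(config-if)# ip flow monitor MONITOR1 input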

6.4.d. Implement and troubleshoot embedded event manager



Cisco IOS Embedded Event Manager (EEM) is a powerful and flexible subsystem
that provides real-time network event detection and onboard automation. It gives you
the ability to adapt the behavior of your network devices to align with your business
needs.

Your business can benefit from the capabilities of IOS Embedded Event Manager
without upgrading to a new version of Cisco IOS Software. It is available on a wide
range of Cisco platforms.

IOS Embedded Event Manager supports more than 20 event detectors that are highly
integrated with different Cisco IOS Software components to trigger actions in
response to network events. Your business logic can be injected into network
operations using IOS Embedded Event Manager policies. These policies are
programmed using either a simple command-line interface (CLI) or a scripting
language called Tool Command Language (Tcl).

Harnessing the significant intelligence within Cisco devices, IOS Embedded Event
Manager helps enable creative solutions, including automated troubleshooting, fault
detection, and device configuration.

Embedded Event Manager (EEM) is a unique subsystem within Cisco IOS Software.
EEM is a powerful and flexible tool to automate tasks and customize the behavior of
Cisco IOS Software and the operation of the device. Customers can use EEM to
create and run programs or scripts directly on a router or switch. The scripts are
referred to as EEM policies and can be programmed using a simple Command-Line-
Interface (CLI)-based interface or using a scripting language called Tool Command
Language (Tcl). EEM allows customers to harness the significant intelligence within
Cisco IOS Software to respond to real-time events, automate tasks, create customized
commands, and take local automated action based on conditions detected by the Cisco
IOS Software itself.

The latest version of the EEM subsystem within Cisco IOS Software is EEM v4.0.

Applications
The applications are endless and only limited by your imagination.

Suppose you want to automatically configure a switch interface depending on the
device, for example, an IP phone that is connected to a port or interface. A script can
be devised that is triggered on the interface up condition and determines the details of
the connected device. Upon discovery and verification of a newly connected IP
phone, the port can be automatically configured according to prescribed parameters.

Another example might be to react to an abnormal condition, such as the detection of
a high error rate on an interface by forcing transit traffic over a more stable and error-
free path. EEM can watch for the increased error rate and trigger a policy into action.
The policy could notify network operations personnel and take immediate action to
reroute traffic.

A third example might be to collect detailed data upon detection of a specific failure
condition, in order to gather information that can allow the root cause of the problem
to be determined faster, leading to a lower mean time to repair and higher availability.
EEM could detect a specific syslog message and trigger a script to collect detailed
data using a series of show commands. After automatically collecting the data, it can
be saved to flash memory or sent to an external management system or by email to a
network operator.
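
Such a policy could be sketched as a CLI-based applet (applet name, syslog pattern,
and file name hypothetical):

Router(config)# event manager applet COLLECT-DIAG
Router(config-applet)# event syslog pattern "LINEPROTO-5-UPDOWN.*GigabitEthernet0/1.*down"
Router(config-applet)# action 1.0 cli command "enable"
Router(config-applet)# action 2.0 cli command "show interfaces GigabitEthernet0/1 | append flash:diag.txt"
Router(config-applet)# action 3.0 syslog msg "Diagnostic data for Gi0/1 saved to flash:diag.txt"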

The control is in the network administrator's hands. You control what events to detect
and what actions to take.

EEM is optional; it is up to the network administrator if and when it should be used,
and it only takes the actions you program it to take.

Features and Benefits


Cisco IOS Embedded Event Manager provides a level of embedded systems
management not previously seen in Cisco IOS Software. More than 20 event detectors
provide an extensive set of conditions that can be monitored and defined as event
triggers. The system is extensible with new capabilities, and further subsystem
integration is planned. The feature is mostly product independent and available across
a wide range of Cisco products. Each new version of the EEM feature introduces new
event detectors or new capabilities. Consult the Cisco documentation for detailed
information.

EEM Version 4.0


The latest version of the EEM subsystem is EEM 4.0. See Table 1 for a list of features
and benefits. In this release, we introduced the following enhancements related to
EEM security, resource management, event detection and policy execution
capabilities:

 EEM Email Action Enhancements
 Custom TCP port for SMTP mail actions

 Tcl-based and CLI-based EEM policies to establish secured SMTP
connections with public email servers using Transport Layer Security (TLS)

 EEM Security Enhancements
 Tcl policy checksum integrity check (MD5/SHA-1)
 3rd party digital signature support
 Tcl policy owner identification
 Registration of remote Tcl policies

 EEM Resource Management
 Manually set CPU, memory, and EEM queue thresholds
 Blocks new policy execution when system is already busy with existing
functionalities

 EEM Event Detector Enhancements
 IPv6 routing event detector support
 Syslog event detector performance enhancement
 New environment variables for CLI event detector

 EEM Usability Enhancement
 Capability to report policy execution statistics including number of times a
policy is triggered, dropped, length of policy execution time, and next policy
execution for timed events
 Powerful file operation support for applet policies

6.4.e Identify performance routing (PfR)


Performance Routing (PfR) complements traditional routing technologies by using the
intelligence of a Cisco IOS infrastructure to improve application performance and
availability. PfR can select the best path for each application based upon advanced
criteria such as, reachability, delay, loss, jitter, and mean opinion score (MOS).

PfR can also improve application availability by dynamically routing around network
problems like black holes and brownouts that traditional IP routing may not detect. In
addition, the intelligent load balancing capability of PfR can optimize path selection
based on link use or circuit pricing.

A related use case is NAT translation through two ISP connections: the Cisco IOS
Software Network Address Translation (NAT) feature can distribute subsequent TCP
connections and UDP sessions over
multiple network connections if equal-cost routes to a given destination are available.
In the event that one of the connections becomes unusable, object-tracking, a
component of Optimized Edge Routing (OER), can be used to deactivate the route
until the connection becomes available again, which assures network availability in
spite of instability or unreliability of an Internet connection.

Cisco Performance Routing


As enterprise organizations grow their businesses, the demand for real-time
application performance and a better application experience for users increases. For
example, voice and TelePresence applications are becoming integral parts of
corporate networks, and performance of these applications is crucial. In order to
improve application performance, companies have typically deployed two common
solutions: provide additional bandwidth by deploying more network connections or
offer application optimization technologies (for example, Cisco Wide Area
Application Services [WAAS]). Additional WAN bandwidth may improve aggregate
throughput but may not improve delay or loss for critical applications. Application
Optimization technologies such as Cisco WAAS can improve performance with data-
reduction techniques, but fluctuating network performance can still affect
applications. Cisco Performance Routing (PfR) addresses network performance
problems by allowing the network to intelligently choose a path that meets the current
requirements of application performance. In addition, this technology allows the
network to choose resources appropriately to reduce operational costs incurred by
enterprises.

Cisco PfR complements classic routing technologies by adding intelligence to select
best paths to meet performance requirements of applications. The first phase of Cisco
PfR intelligently optimizes performance of applications over WANs and intelligently
load balances traffic to the Internet. Later phases of PfR will enhance application
intelligence and extend this technology across the enterprise network.
Cisco PfR selects an egress or ingress WAN path based on parameters that affect
application performance, including reachability, delay, cost, jitter, and Mean Opinion
Score (MOS). The technology can also select an egress or ingress WAN path to
intelligently load balance traffic based on usage or circuit cost to reduce costs
incurred by enterprises. To achieve this balance, PfR selects a WAN path based on
interface parameters such as reachability, load, throughput, and the link cost of using a path. Classic routing protocols (Enhanced IGRP [EIGRP], Open Shortest Path First [OSPF], Routing Information Protocol version 2 [RIPv2], Border Gateway Protocol [BGP], etc.) generally focus on providing reachability by creating a loop-free topology based on the shortest or least-cost path. Cisco PfR focuses on providing application performance by understanding application requirements and current network performance characteristics.

Cisco PfR gains this additional intelligence from the following:

 Allows network administrators to provide business policy or application requirements to IP routing
 Monitors network performance by taking advantage of embedded Cisco IOS Software intelligence in switches and routers
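As an illustration of that embedded monitoring, the master controller can be told to passively learn the busiest and highest-delay prefixes (via NetFlow) with a configuration along these lines; the timer and prefix-count values are illustrative, not recommendations:

```
oer master
 learn
  ! Learn the prefixes carrying the most traffic and those with the highest delay
  throughput
  delay
  ! Monitor for 10 minutes, repeat every 20 minutes, keep up to 100 prefixes
  monitor-period 10
  periodic-interval 20
  prefixes 100
```

Learned prefixes are then measured and, when out-of-policy, re-routed according to the configured policies.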

The following types of businesses typically use Cisco PfR technology:


 Large, medium, and small enterprises with mission-critical Internet presence
requirements
 Enterprises with multiple diverse WANs for availability requirements
 Enterprises with remote offices with a primary and backup WAN service
 Small and home offices with dual Internet connections

Cisco PfR: An Overview


Many businesses have multiple WAN and Internet connections to enhance the network and its applications. These businesses face complex implementation challenges when trying to split inbound and outbound traffic across multiple links while ensuring continuous network availability. This complexity increases further when the connections come from different service providers or differ in characteristics such as bandwidth, quality, or cost. Cisco Performance Routing removes this complexity for network operators and delivers the needed application performance even when parts of an otherwise operational network are experiencing performance degradation.

Cisco PfR in its truest form is "application routing based on network performance". It automatically detects path degradation and responds to avoid continued degradation. In many cases for a multihomed enterprise, traffic on a degraded initial path is rerouted through another egress path that can meet the application performance requirements. This routing differs from classic routing, because classic routing looks only at reachability and not at the traffic's required service needs, such as low loss or low delay. In addition, Cisco PfR allows a multihomed enterprise to use all available WAN or Internet links. It can track throughput, link usage, and link cost, and automatically determine the best load balancing to optimize throughput, load, and cost; network operators define the Cisco PfR policies that implement these adaptive routing techniques.

Cisco PfR policies can be based on the following parameters:

 WAN outbound performance (traffic exiting an enterprise): Delay, loss, reachability, throughput, jitter, and MOS
 WAN inbound performance (traffic arriving into an enterprise): Delay, loss, reachability, and throughput

 WAN and Internet path parameters: Reachability, throughput, load, and link
usage cost
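These parameters correspond directly to OER/PfR policy configuration. A hedged sketch for a voice traffic class follows; the prefix list, map name, and threshold values are illustrative, not recommendations:

```
! Identify voice prefixes, then hold them to delay/jitter/MOS targets
ip prefix-list VOICE-PFX seq 10 permit 10.10.0.0/16
!
oer-map VOICE-MAP 10
 match ip address prefix-list VOICE-PFX
 set mode monitor fast
 set delay threshold 150
 set jitter threshold 30
 set mos threshold 3.76 percent 30
 ! When policies conflict, resolve on delay first, with 10 percent variance
 set resolve delay priority 1 variance 10
```

If a monitored path exceeds these thresholds, the traffic class is declared out-of-policy and moved to an alternate exit that meets the targets.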

Cisco Performance Routing consists of two distinct elements: border routers and a
master controller. The border routers connect enterprises to the WAN; the master
controller is a software entity supported by Cisco IOS Software on a router platform.
Border routers gather traffic and path information and send this information to a
master controller, which places all received information into a database. The master
controller is configured with the requested service policies, so it is aware of
everything that happens at the network edge and can automatically detect and take
action when certain parameters are out-of-policy (OOP).
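A minimal sketch of this two-element design follows; the addresses, interface roles, and key-chain name are hypothetical, and each border router needs the same key chain configured locally:

```
! Master controller: authenticate each border router and classify its links
key chain PFR-KEY
 key 1
  key-string pfr-secret
!
oer master
 border 10.1.1.2 key-chain PFR-KEY
  interface GigabitEthernet0/0 external
  interface GigabitEthernet0/1 internal
!
! On each border router: identify the local source interface and the master
oer border
 local Loopback0
 master 10.1.1.1 key-chain PFR-KEY
```

The interfaces marked "external" are the WAN exits that PfR measures and controls; "internal" interfaces face the enterprise network.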

No licensing is required to enable Cisco Performance Routing.

Cisco IOS Software PfR Support


The initial implementation of Cisco PfR uses Cisco IOS Optimized Edge Routing
(OER) techniques. Additional PfR technologies will supersede current OER functions
in future Cisco IOS Software releases.

Cisco Performance Routing takes advantage of the vast intelligence embedded in Cisco IOS Software to determine the optimal path based upon network and application policies. It is an evolution of the Cisco IOS OER technology with a much broader scope. The application intelligence and end-to-end network strategy of Cisco PfR are significantly broader than the scope of OER. The initial phase of PfR uses OER technology extensively to meet emerging application demands on the enterprise network.

Available starting with Cisco IOS Software Release 12.3(8)T, the Cisco OER solution
supports both the border router and the master controller functions. The following
functions have been added to OER since its inception:

 Cisco IOS Software Release 12.3(8)T: Initial OER support
 Cisco IOS Software Release 12.3(11)T: Added VPN IP Security (IPsec) and
generic routing encapsulation (GRE) tunnel optimization
 Cisco IOS Software Release 12.3(11)T: Added port- and protocol-based prefix
learning
 Cisco IOS Software Release 12.3(11)T: Added support for policy-rules configuration
 Cisco IOS Software Release 12.3(14)T: Added support for cost-based
optimization and trace-route reporting
 Cisco IOS Software Release 12.4(2)T: Added application-aware routing: Policy-
Based Routing (PBR)
 Cisco IOS Software Release 12.4(2)T: Added active probe source address
 Cisco IOS Software Release 12.4(6)T: Added voice traffic optimization
 Cisco IOS Software Release 12.4(9)T: Added differentiated services code point
(DSCP) monitoring
 Cisco IOS Software Release 12.4(9)T: Added BGP inbound optimization
 Cisco IOS Software Release 12.2(33)SRB: Added support on the Cisco 7600

 Cisco IOS Software Release 12.2(33)SXH: Added border router support on Cisco
Catalyst 6500

Note the following deployment requirements:

 Border routers must run Cisco Express Forwarding (CEF)
 The master controller must be able to reach and communicate with the border routers for operations

Supported WAN interfaces include Ethernet, serial, tunnel, High-Speed Serial Interface (HSSI), ISDN Basic Rate Interface (BRI), ATM, Packet over SONET/SDH (PoS), VLAN, dialer, multilink, Frame Relay, and port channel.

The required Cisco IOS Software feature sets to run Cisco Performance Routing
include SP Services, Advanced IP Services, Enterprise Services, and Advanced
Enterprise Services (refer to Figure 6.6).

Figure 6.6. Cisco IOS Software Feature Sets Required to Run Cisco PfR
