BGP in the Data Center
Dinesh G. Dutt
Preface
The people who really paid the price, as I took on the writing of this
booklet along with my myriad other tasks, were my wife Shanthala
and daughter Maya. Thank you. And it has been nothing but a
pleasure and a privilege to work with Cumulus Networks’ engineer‐
ing, especially the routing team, in developing and working through
ideas to make BGP simpler to configure and manage.
CHAPTER 1
Introduction to Data Center
Networks
This chapter addresses questions such as the following:
• What are the goals behind a modern data center network
design?
• How are these goals different from other networks such as
enterprise and campus?
• Why choose BGP as the routing protocol to run the data center?
Let’s examine this design in a little more detail. The first thing to
note is the uniformity of connectivity: servers are typically three
network hops away from any other server. Next, the nodes are quite
homogeneous: the servers look alike, as do the switches. As required
by the modern data center applications, the connectivity matrix is
quite rich, which allows it to deal gracefully with failures.
Draw the path between a server connected to the leftmost leaf and a
server connected to the rightmost leaf. It zigzags back and forth
between racks. This is highly inefficient and nonuniform connectiv‐
ity.
Routing, on the other hand, is able to utilize all paths, taking full
advantage of the rich connectivity matrix of a Clos network. Routing
also can take the shortest path or be programmed to take a longer
path for better overall link utilization.
Thus, the first conclusion is that routing is best suited for Clos net‐
works, and bridging is not.
A key benefit gained from this conversion from bridging to routing
is that we can shed the multiple protocols, many proprietary, that
are required in a bridged network. A traditional bridged network is
typically running STP, a unidirectional link detection protocol
(though this is now integrated into STP), a virtual local-area net‐
work (VLAN) distribution protocol, a first-hop routing protocol
such as Hot Standby Router Protocol (HSRP) or Virtual Router
Redundancy Protocol (VRRP), a routing protocol to connect multi‐
ple bridged networks, and a separate unidirectional link detection
protocol for the routed links. With routing, the only control plane
protocols we have are a routing protocol and a unidirectional link
detection protocol. That’s it. Servers communicating with the first-
hop router will have a simple anycast gateway, with no other addi‐
tional protocol necessary.
Figure 1-6. Connecting a Clos network to the external world via a bor‐
der pod
The main advantage of border pods or border leaves is that they iso‐
late the inside of the data center from the outside. The routing pro‐
tocols that are inside the data center never interact with the external
world, providing a measure of stability and security.
However, smaller networks might not be able to dedicate separate
switches just to connect to the external world. Such networks might
connect to the outside world via the spines, as shown in Figure 1-7.
The important point to note is that all spines are connected to the
internet, not some. This is important because in a Clos topology, all
spines are created equal. If the connectivity to the external world
were via only some of the spines, those spines would become hotspots: they would attract all of the external traffic and would need to be engineered differently from their peers, breaking the uniformity of the Clos topology.
Figure 1-7. Connecting a Clos network to the external world via spines
Before its use in the data center, BGP was primarily, if not exclu‐
sively, used in service provider networks. As a consequence of its
primary use, operators cannot use BGP inside the data center in the
same way they would use it in the service provider world. If you’re a
network operator, understanding these differences and the reasons
behind them is important for preventing misconfiguration.
The dense connectivity of the data center network is a vastly differ‐
ent space from the relatively sparse connectivity between adminis‐
trative domains. Thus, a different set of trade-offs are relevant inside
the data center than between data centers. In the service provider
network, stability is preferred over rapid notification of changes. So,
BGP typically holds off sending notifications about changes for a
while. In the data center network, operators want routing updates to
be as fast as possible. Another example is that because of BGP’s
default design, behavior, and its nature as a path-vector protocol, a
single link failure can result in an inordinately large number of BGP
messages passing between all the nodes, which is best avoided. A
third example is the default behavior of BGP to construct a single
best path when a prefix is learned from many different Autonomous
System Numbers (ASNs), because an ASN typically represents a sep‐
arate administrative domain. But inside the data center, we want
multiple paths to be selected.
Two individuals put together a way to fit BGP into the data center.
Their work is documented in RFC 7938.
This chapter explains each of the modifications to BGP’s behavior
and the rationale for the change. It is not uncommon to see network
operators misconfigure BGP in the data center to deleterious effect
because they failed to understand the motivations behind BGP’s
tweaks for the data center.
ASN Numbering
The Autonomous System Number (ASN) is a fundamental concept in
BGP. Every BGP speaker must have an ASN. ASNs are used to iden‐
tify routing loops, determine the best path to a prefix, and associate
routing policies with networks. On the internet, each ASN is
allowed to speak authoritatively about particular IP prefixes. ASNs
come in two flavors: a two-byte version and a more modern four-
byte version.
The data center's ASN numbering model is different from the one
used in traditional, non-data-center deployments. This section covers the
concepts behind how ASNs are assigned to routers within the data
center.
If you choose to follow the recommended best practice of using
eBGP as your protocol, the most obvious ASN numbering scheme is
that every router is assigned its own ASN. This approach leads to
problems, which we’ll talk about next. However, let’s first consider
the numbers used for the ASN. In internet peering, ASNs are pub‐
licly assigned and have well-known numbers. But most routers
within the data center will rarely if ever peer with a router in a dif‐
ferent administrative domain (except for the border leaves described
in Chapter 1). Therefore, ASNs used within the data center come
from the private ASN number space.
Private ASNs
A private ASN is one that is for use outside of the global internet.
Much like the private IP address range of 10.0.0.0/8, private ASNs
are used in communication between networks not exposed to the
external world. A data center is an example of such a network.
Nothing stops an operator from using the public ASNs, but this is
not recommended for two major reasons.
The first is that using global ASNs might confuse operators and
tools that attempt to decode the ASNs into meaningful names.
Because many ASNs are well known to operators, an operator might
very well become confused, for example, on seeing Verizon’s ASN on
a node within the data center.
The second reason is to avoid the consequences of accidentally leak‐
ing out the internal BGP information to an external network. This
can wreak havoc on the internet. For example, if a data center used
Twitter’s ASN internally, and accidentally leaked out a route claim‐
ing, say, that Twitter was part of the AS_PATH (the list of ASNs traversed from the origin of an advertisement, passed along with every route) for a publicly reach‐
able route within the data center, the network operator would be
responsible for a massive global hijacking of a well-known service.
Misconfiguration is consistently among the top causes of network
outages, so avoiding this risk by not using public ASNs is a
good thing.
The old-style 2-byte ASNs have space for only about 1,023 private
ASNs (64512–65534). What happens when a data center network
has more than 1,023 routers? One approach is to unroll the BGP
knob toolkit and look for something called allowas-in. Another
approach, and a far simpler one, is to switch to 4-byte ASNs. These
new-fangled ASNs come with support for almost 95 million private
ASNs (4200000000–4294967294), more than enough to satisfy a
data center of any size in operation today. Just about every routing
suite, traditional or new, proprietary or open source, supports 4-
byte ASNs.
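For illustration, a minimal FRRouting sketch of a node using a 4-byte private ASN (the specific ASN and router-id values here are arbitrary picks, not an assignment scheme from this book):

router bgp 4200000101
 bgp router-id 10.0.254.1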
In the topology shown in Figure 2-1, all of the nodes have separate ASNs. Now, consider
the reachability to prefix 10.1.1.1 from R1’s perspective. R2 and R3
advertise reachability to the prefix 10.1.1.1 to R1. The AS_PATH
advertised by R2 for 10.1.1.1 is [R2, R4], and the AS_PATH adver‐
tised by R3 is [R3, R4]. R1 does not know how R2 and R3 them‐
selves learned this information. When R1 learns of the path to
10.1.1.1 from both R2 and R3, it picks one of them as the best path.
Due to its local support for multipathing, its forwarding tables will
contain reachability to 10.1.1.1 via both R2 and R3, but in BGP’s
best path selection, only one of R2 or R3 can win.
Let’s assume that R3 is picked as the best path to 10.1.1.1 by R1. R1
now advertises that it can reach 10.1.1.1 with the AS_PATH [R1, R3,
R4] to R2. R2 accepts the advertisement, but does not consider it a
better path to reach 10.1.1.1, because its best path is the shorter
AS_PATH [R4].
Now, when the node R4 dies, R2 loses its best path to 10.1.1.1, and
so it recomputes its best path via R1, AS_PATH [R1, R3, R4] and
sends this message to R1. R2 also sends a route withdrawal message
for 10.1.1.1 to R1. When R3’s withdrawal of the route to 10.1.1.1 reaches
R1, R1 also withdraws its route to 10.1.1.1 and sends its withdrawal
to R2. The exact sequence of events might not be as described here
due to the timing of packet exchanges between the nodes and how
BGP works, but it is a close approximation.
The short version of this problem is this: because a node does not
know the physical link state of every other node in the network, it
doesn’t know whether the route is truly gone (because the node at
the end went down itself) or is reachable via some other path. And
so, a node proceeds to hunt down reachability to the destination via
all its other available paths. This is called path hunting.
In the simple topology of Figure 2-1, this didn’t look so bad. But in a
Clos topology, with its dense interconnections, this simple problem
becomes quite a significant one, with many additional message
exchanges and increased traffic loss due to misinformation
propagating for longer than necessary.
Multipath Selection
In a densely connected network such as a Clos network, route multi‐
pathing is a fundamental requirement to building robust, scalable
networks. BGP supports multipathing, whether the paths have equal
costs or unequal costs, though not all implementations support
unequal-cost multipathing. As described in the previous section,
two paths are considered equal if they are equal in each of the eight
criteria. One of the criteria is that the AS numbers in the AS_PATH
match exactly, not just that they have equal-length paths. This
means that in a Clos topology in which every node has its own ASN, paths to a prefix learned from different spines carry different AS_PATHs and so are never considered equal, defeating multipathing.
There are multiple ways to address this problem, but the simplest
one is to configure a knob that modifies the best-path algorithm.
The knob is called bestpath as-path multipath-relax. What it
does is simple: when the AS_PATH lengths are the same in adver‐
tisements from two different sources, the best-path algorithm skips
checking for an exact match of the ASNs, and proceeds to match on the
next criterion.
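In FRRouting, for example, enabling this knob is a single line under the BGP instance (the ASN here is the leaf ASN used in this book's configurations):

router bgp 65000
 bgp bestpath as-path multipath-relax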
Advertisement Interval
BGP maintains a minimum advertisement interval per neighbor. Events
that occur within this window are batched together and sent in one
shot when the interval expires. This batching promotes stability and
prevents unnecessary processing when multiple updates occur within
a short duration. The default value for this interval is 30 seconds
for eBGP peers, and 0 seconds for iBGP peers. However, waiting 30
seconds between updates is entirely the wrong choice for a richly
connected network such as those found in the data center. 0 is the
more appropriate choice, because we're not dealing with routers
across administrative domains.
Connect Timer
This is the least critical of the four timers. When BGP attempts to
connect with a peer but fails for any reason, it waits for a
certain period of time before attempting to connect again. This
period by default is 60 seconds. In other words, if BGP is unable to
establish a session with its peer, it waits for a minute before attempt‐
ing to establish a session again. This can delay session reestablish‐
ment when a link recovers from a failure or a node powers up.
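As a minimal sketch, the two timer adjustments discussed in this chapter look like this in FRRouting, applied to a peer group named ISL (the same settings appear in the full configurations in Chapter 3):

neighbor ISL advertisement-interval 0
neighbor ISL timers connect 5

The first line removes the 30-second batching delay for eBGP peers; the second shortens the reconnect wait from 60 seconds to 5.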
Summary
This chapter covered the basic concepts behind adapting BGP to the
data center, such as the use of eBGP as the default deployment
model and the logic behind configuring ASNs. In the next two chap‐
ters, we’ll apply what we learned in this chapter to configuring nodes
in a Clos topology.
changes don’t become hazardous. We must also avoid duplication.
In the section that follows, we’ll examine both of these problems in
detail, and see how we can eliminate them.
Except for the servers, all of the devices listed are routers, and the
routing protocol used is BGP.
network 10.0.254.1/32
This tells BGP to advertise reachability to the prefix
10.0.254.1/32. This prefix needs to already be in the routing
table in order for BGP to advertise it.
maximum-paths 64
This tells BGP that it needs to use multiple paths, if available, to
reach a prefix.
The meaning of the various timers was discussed in “Slow Conver‐
gence Due to Default Timers” on page 24.
Let’s look at leaf01 by itself first to see what is duplicated in it. For
example, 10.0.254.1 is specified twice, once with /32 and once
without. The first time it is specified as the default gateway address,
and the second time as the interface's address.
Configuration is less error-prone when there is as little duplication
as possible. It is a well-known maxim in coding to avoid duplicating
code. Duplication is problematic because with more places to fix the
same piece of information, it is easy to forget to fix one of the multi‐
ple places when making a change or fixing a problem. Duplication is
also cumbersome because a single change translates to changes
needing to be made in multiple places.
Consider the effects of duplicating the IP address across the inter‐
face and inside BGP. If the interface IP address changes, a corre‐
sponding change must be made in the BGP configuration, as well.
The same issues that were present in the configuration across the
leaves are also present in the configuration across the spines.
However, there are a few things done right in this configuration as well.
Redistribute Routes
To eliminate the specification of individual IP addresses to
announce via network statements, we can use a different command:
redistribute.
Since just about their first introduction, all routing protocol suites
have provided an option to take prefixes from one protocol and
advertise them in another. This practice is called redistributing routes.
The general command format in BGP looks like this:
redistribute protocol route-map route-map-name
The configuration on leaf01 would look like this after replacing
network statements with redistribute:
log file /var/log/frr/frr.log
router bgp 65000
 bgp router-id 10.0.254.1
 bgp log-neighbor-changes
 no bgp default ipv4-unicast
 timers bgp 3 9
 neighbor ISL peer-group
 neighbor ISL remote-as 65500
 neighbor ISL advertisement-interval 0
 neighbor ISL timers connect 5
 neighbor 169.254.1.0 peer-group ISL
 neighbor 169.254.1.64 peer-group ISL
 address-family ipv4 unicast
  neighbor ISL activate
  redistribute connected
  maximum-paths 64
 exit-address-family
However, the use of an unadorned redistribute statement leads to
potentially advertising addresses that should not be, such as the
interface IP addresses, or in propagating configuration errors. As an
example of the latter, if an operator accidentally added an IP address
of 8.8.8.8/32 on an interface, BGP would announce reachability to
that address, thereby sending all requests meant for the public, well-
known DNS server to that hapless misconfigured router.
To avoid all of these issues, just about every routing protocol sup‐
ports some form of routing policy.
Routing Policy
Routing policy, at its simplest, specifies when to accept or reject
route advertisements. Based on where they’re used, the accept or
reject could apply to routes received from a peer, routes advertised
to a peer, and redistributed routes. At its most complex, routing pol‐
icy can modify metrics that affect the best-path selection of a prefix,
and add or remove attributes or communities from a prefix or set of
prefixes. Given BGP’s use primarily in connecting different adminis‐
trative domains, BGP has the most sophisticated routing policy
constructs.
A routing policy typically consists of a sequence of if-then-else state‐
ments, with matches and actions to be taken on a successful match.
While we’ve thus far avoided the use of any routing policy, we can
now see the reason for using them with BGP in the data center.
For example, to avoid the problem of advertising 8.8.8.8, as
described in the previous section, the pseudocode for the routing
policy would look like the following (we develop this pseudocode
into actual configuration syntax by the end of this section):
if prefix equals '8.8.8.8/32' then reject else accept
In a configuration in which connected routes are being redistrib‐
uted, a safe policy would be to accept the routes that belong to this
data center and reject any others. The configurations I've shown
contain two kinds of prefixes: 10.1.0.0/16 (assuming there are lots of
host-facing subnets in the network) and the router’s loopback IP
address, as an example 10.0.254.1/32. We also see the interface
address subnet, 169.254.0.0/16, which must not be advertised. So, a
first stab at a routing policy would be the following:
if prefix equals 10.1.0.0/16 then accept
else if prefix equals 10.0.254.1/32 then accept
else reject
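As a preview of the syntax developed in the rest of this section, here is one way this pseudocode might be rendered in FRRouting, using the prefix-list and route-map names that appear later in this book:

ip prefix-list DC_LOCAL_SUBNET seq 5 permit 10.1.0.0/16
ip prefix-list DC_LOCAL_SUBNET seq 10 permit 10.0.254.1/32
!
route-map ACCEPT_DC_LOCAL permit 10
 match ip address prefix-list DC_LOCAL_SUBNET

Any prefix not matched by DC_LOCAL_SUBNET falls through to the route-map's implicit deny, which implements the final else reject.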
Route-Maps
route-maps are a common way to implement routing policies. Cis‐
co’s IOS, NXOS, the open source protocol suite FRRouting, Arista,
and others support route-maps. JunOS uses a different syntax with,
some would argue, more intuitive keywords. The open source rout‐
ing suite BIRD goes a step further and uses a simple domain-specific
programming language instead of this combination of route-maps
and prefix-lists. The details of describing that are beyond the
scope of this book, but if you’re interested, you can find the details
on BIRD’s web pages.
route-maps have the following syntax:
route-map NAME (permit|deny) [sequence_number]
match classifier
set action
This assigns a name to the policy, indicates whether the matched
routes will be permitted or denied, and then matches inputs against
a classifier. If a match clause successfully matches a classifier, the set
clause acts on the route. The optional sequence number orders the
sequence of clauses to be executed within a route-map.
When we use the permit keyword, the set action is applied and the
route is accepted when the match succeeds. When we use the deny
keyword, a successful match rejects the route, and no set action is
applied. In other words, deny functions as a "not" operator: if
there's a match, reject the route.
route-maps have an implicit “deny” at the end. Thus, if no entry is
matched, the result is to reject the input.
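A sketch of the 8.8.8.8 policy from earlier in this chapter, expressed with both permit and deny clauses (the route-map and prefix-list names here are illustrative):

ip prefix-list GOOGLE_DNS seq 5 permit 8.8.8.8/32
!
route-map FILTER_CONNECTED deny 10
 match ip address prefix-list GOOGLE_DNS
route-map FILTER_CONNECTED permit 20

The trailing permit clause with no match classifier accepts everything else; without it, the implicit deny at the end would reject every route.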
Classifiers in route-maps
route-maps come with a rich set of classifiers. You can use an exten‐
sive variety of traits as classifiers, and different implementations
support different subsets of these classifiers (some support all and
more). The list in Table 3-1 is taken from FRRouting’s list.
Instead of IP prefixes, we can use any of the other classifiers, as well.
For example, if all we needed to do was advertise the router's primary
loopback IP address, the config lines are as follows:
route-map ADV_LO permit 10
match interface lo
route-maps in BGP
Besides redistributed routes, you can apply route-maps in multiple
other places during BGP processing. Here are some examples:
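In FRRouting, for instance, the attach points correspond to the three uses named at the start of this section (the route-map names here are illustrative):

neighbor ISL route-map FROM_PEER in
neighbor ISL route-map TO_PEER out
redistribute connected route-map ACCEPT_DC_LOCAL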
Running routing policy separately for every neighbor is expensive:
it slows the generation and sending of updates, and slow update
processing can result in poor convergence times, too.
Therefore, peer-groups often are used with route-maps to drasti‐
cally reduce the amount of processing BGP needs to do before
advertising a route to its neighbors. Instead of relying on just user-
configured peer groups, implementations typically build up these
groups dynamically. This is because even within a single peer-
group, different neighbors might support different capabilities (for
example, some might support MPLS, and some might not). This
information can be determined only during session establishment.
So, user configuration either doesn't help or places an undue burden
on the user to ensure that all neighbors in a peer group support
exactly the same capabilities.
Thus, an implementation that supports the dynamic creation and
teardown of peer groups puts all neighbors that have the same out‐
going route policy and the same capabilities in a new, dynamically
created peer group or, more precisely, dynamic update group. BGP
runs the policy once for a prefix that encompasses the entire peer
group. The result is then automatically applied to each member of
that dynamically constructed peer group. This allows implementa‐
tions to scale to supporting hundreds or even thousands of
neighbors.
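As a sketch, attaching a single outbound policy to the user-configured ISL peer group from earlier in this chapter looks like this (reusing the ADV_LO route-map shown above):

neighbor ISL peer-group
neighbor ISL route-map ADV_LO out
neighbor 169.254.1.0 peer-group ISL
neighbor 169.254.1.64 peer-group ISL

The implementation can then run ADV_LO once and replicate the result to every member of the dynamically computed update group.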
The configuration across the spines also looks the same, except for
changes to the router-id and the neighbors' ASNs.
Summary
This chapter fired the first shots at making BGP configuration more
automation friendly. First, we used routing policy to replace IP
addresses from individual network statements with a single
redistribute connected directive with a route-map that ensures that
Summary | 45
only the appropriate addresses are advertised. Next, building on the
small number of addresses covered by /30 and /31 subnets (which
makes it easy to determine the remote end’s IP address once the
local end's IP address is known), we reduced the configuration to use
interface names instead of IP addresses to identify a peer.
However, we’re not yet done. What this configuration hides is that
interfaces still need IP address configuration—even if they’re hidden
from the BGP configuration and not duplicated. Also, the configu‐
ration still relies on knowledge of the peer’s ASN. In Chapter 4, we
eliminate both of these requirements.
The Need for Interface IP Addresses and
remote-as
Because BGP runs on TCP/IP, it needs an IP address to create a con‐
nection. How can we identify this remote node’s address while at the
same time not allocating any IP addresses on interfaces? Answering
this question will involve understanding a lesser-known RFC and
the stateless configuration tools provided by IPv6. It also involves
understanding the real heart of routing.
The second problem is that every BGP configuration relies on
knowing the remote ASN. But this ASN is really required for only
one thing: to identify whether the session is governed by the rules of
internal BGP (iBGP) or external BGP (eBGP).
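This is why implementations such as FRRouting accept a keyword in place of a specific ASN, a form used later in this book; as a minimal sketch:

neighbor ISL remote-as external

With external, the session is treated as eBGP and any peer ASN that differs from our own is accepted; internal does the same for iBGP.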
BGP Unnumbered
All of this is well and good, but how can BGP work in a world
without interface IP addresses?
So, how can we discover the peer's IP address without any user
configuration of interface addresses?
Enter IPv6, and an obscure standard, RFC 5549.
RFC 5549
Even though we now potentially can establish a BGP peering
without requiring an interface IP address, advertising routes also
requires a way to specify how to reach the router advertising the
routes. In BGP, this is signaled explicitly in the route advertisement
via the NEXTHOP attribute. The previous section showed how this
peering can be established without interface IP addresses; the question that remains is what address to advertise as the nexthop. To see why an IPv6 nexthop works even for IPv4 routes, consider how a router forwards an IPv4 packet whose route points to a nexthop of 20.1.1.1 through interface swp1:
1. The router looks up the packet's destination in its forwarding table and retrieves the nexthop IP address, 20.1.1.1, and the outgoing interface, swp1.
2. If the MAC address of 20.1.1.1 is not already in the ARP cache, the router issues an ARP request for 20.1.1.1 on swp1.
3. The ARP reply from the neighboring router populates the ARP
cache with the MAC address of 20.1.1.1 on interface swp1.
4. The router then sticks this MAC address as the destination
MAC address on the packet, with the source MAC address of
interface swp1, and sends the packet on its merry way.
Except for getting the MAC address to put on the packet, the nex‐
thop IP address is not used in the packet at all.
In the case of IPv6 as well, the nexthop IPv6 address is used to identify
the nexthop MAC address, using IPv6’s equivalent of ARP: Neigh‐
bor Discovery (ND). Even in IPv6, forwarding to the original desti‐
nation involves only the nexthop’s MAC address. The nexthop IP
address is used only to get the nexthop’s MAC address.
RFC 5549 builds on this observation and provides an encoding
scheme to allow a router to advertise IPv4 routes with an IPv6 nex‐
thop.
When the BGP process hands the RIB process an IPv4 route with an
IPv6 LLA nexthop, the RIB process consults its IPv6 neighbor cache
to see whether it has the MAC address associated with
this IPv6 LLA. Let this MAC address be 00:00:01:02:03:04. The RIB
process now adds a static ARP entry for 169.254.0.1 with this MAC
address, pointing out the peering interface. 169.254.0.1 is an IPv4
LLA, although it is not automatically assigned to an interface the
way IPv6 LLA is. FRRouting assumes that 169.254.0.1 is reserved (as
of this writing, this cannot be changed through a configuration
option). The reason for the static ARP entry is that the router
cannot use ARP to resolve this address: the IP address was assigned
implicitly by the router, without its neighbor knowing anything about
the assignment, so the neighbor cannot respond to an ARP request
for an address it doesn't have assigned to the interface.
The RIB process then pushes the route into the kernel routing table
with a nexthop of 169.254.0.1 and an outgoing interface set to that
of the peering interface. So, the final state in the tables looks like
this:
ROUTE: 10.1.1.0/24 via 169.254.0.1 dev swp1
ARP: 169.254.0.1 dev swp1 lladdr 00:00:01:02:03:04 PERMANENT
At this point, everything is set up for packet forwarding to work
correctly. More specifically, the packet forwarding logic remains
unchanged with this model.
If the link goes down or the remote end stops generating an RA, the
local RA process yanks out the LLA and its associated MAC from
the RIB. This causes the RIB process to decide that the nexthop is no
longer reachable, which causes it to notify the BGP process that the
peer is no longer reachable. RIB also tears down the static ARP entry
that it created. Terminating the session causes BGP to yank out the
routes pointing out this peering interface.
To summarize:
• IPv6 stateless autoconfiguration assigns each interface a link-local address (LLA), and router advertisements (RAs) let each router learn its peer's LLA and MAC address without any user configuration.
• BGP peers over these IPv6 LLAs and, using the encoding of RFC 5549, advertises IPv4 routes with an IPv6 nexthop.
• The RIB process installs each such IPv4 route with the reserved nexthop 169.254.0.1, plus a static ARP entry binding 169.254.0.1 to the peer's MAC address on the peering interface.
• Packet forwarding itself remains completely unchanged.
Interoperability
Every eBGP peer sets the NEXTHOP to its own IP address before
sending out a route advertisement.
Figure 4-2 shows a hypothetical network in which routers B and D
support RFC 5549, whereas routers A and C do not. So, there are
interface IP addresses on the links between B and A and between B
and C. When A announces reachability to 10.1.1.0/24, it provides its
peering interface’s IPv4 address as the nexthop. When B advertises
reachability to 10.1.1.0/24, it sets its IPv6 LLA as the nexthop when
sending the route to D, and sets its interface’s IPv4 address as the
nexthop when sending the route to C.
In the reverse direction, if D announces reachability to a prefix
10.1.2.0/24, it uses its interface’s IPv6 LLA to send it to B. When B
announces this to A and C, it sets the nexthop to be that of the IPv4
address of the peering interface.
Figure 4-2. Interoperability with RFC 5549
Summary
By eliminating interface IP addresses and the specification of the
exact remote-as in the neighbor command specification, we can
arrive at a configuration, listed in Example 4-1, that looks remarka‐
bly similar across the leaves and spines illustrated in Figure 3-1. The
only differences between the nodes are shown in bold in the
example.
Example 4-1. Final BGP configuration for a leaf and spine in a Clos
network
// leaf01 configuration
// spine01 configuration
route-map ACCEPT_DC_LOCAL permit 10
 match ip address prefix-list DC_LOCAL_SUBNET
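Combining the unnumbered interface peering with the route-map above, a leaf configuration in this style might look like the following sketch (the swp51 and swp52 interface names are illustrative):

log file /var/log/frr/frr.log
router bgp 65000
 bgp router-id 10.0.254.1
 neighbor ISL peer-group
 neighbor ISL remote-as external
 neighbor swp51 interface peer-group ISL
 neighbor swp52 interface peer-group ISL
 address-family ipv4 unicast
  neighbor ISL activate
  redistribute connected route-map ACCEPT_DC_LOCAL
  maximum-paths 64
 exit-address-family

The spine's configuration would differ only in the router-id and the list of peering interfaces.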
So far, this book has laid the groundwork to create a simple, auto‐
matable configuration for a data center network using BGP. But we
have just gone through the initial configuration of a leaf or spine
router. As any network operator knows, the job is far from done
after the network is deployed. Routers need to be upgraded, security
patches need to be applied, new routers need to be rolled in, and
heaven help us all, what if BGP refuses to behave? This chapter
addresses these questions.
Figure 5-1. Showing the network
This command shows only the output of IPv4 BGP sessions. When
BGP began life, there was only IPv4 and the keyword ip was unam‐
biguous with respect to what protocol it referred to. Since the advent
of IPv6, and with the evolution of BGP to support multiple proto‐
cols, we need a command to display IPv6 sessions, as well. In line
with the AFI/SAFI model, the show bgp commands have evolved to
support show bgp ipv4 unicast summary and show bgp ipv6 uni
cast summary. For many operators, however, sheer muscle memory
forces them to type show ip bgp summary.
Following are the key points to note in this output:
• All the neighbors with whom this router is supposed to peer are
listed (unlike with other protocols such as OSPF).
• The state of each session is listed. If a session is in the Estab‐
lished state, instead of the state name, the number of prefixes
accepted from the peer is shown.
• Every session’s uptime is shown (or its downtime, if the session
is not in Established state).
• Information such as the node’s router ID and ASN is also
shown.
The BGP version (the “V” column in Figure 5-1) is archaic,
given that all BGP implementations in use today, especially in the
data center, are running version 4 of the protocol. The remaining
fields are mostly uninteresting unless there’s a problem.
One difference to note in the previous output compared to what you
might see in just about every other implementation (except
ExaBGP) is the display of the hostname of the peer. This is based on
an IETF draft that defined a new BGP capability, called hostname,
which allows a router to advertise its hostname to its peers.
You can use the show ip bgp command with a specific prefix to get the
details of the received prefix advertisement. For example, Figure 5-3
depicts the output of the command show ip bgp 10.254.0.3.
exit01 and exit02 are the two nodes that demarcate the inside of the
data center from the outside. They’re connected to the node titled
internet; this is the data center’s edge switch, which is the switch
that peers with the external world. exit01 and exit02 are called bor‐
der leaves or exit leaves (the border leaves may be in a border pod in a
three-tier Clos network as described in Chapter 1).
Border leaves serve two primary functions: stripping off the private
ASNs, and optionally aggregating the internal data center routes and
announcing only the summary routes to the edge routers.
You strip the private ASNs from the path via the command neigh
bor neighbor_name remove-private-AS all.
You can summarize routes and announce only the aggregate via the
command aggregate-address summary-route summary-only.
The keyword summary-only specifies that the individual routes must
not be sent. Without that option, summary routes as well as individ‐
ual routes are advertised. When a route is aggregated and only the
summary route announced, the entire AS_PATH is also removed
unless specified otherwise.
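Putting both commands together, a border leaf's peering toward the edge router might look like this sketch (the peer address 203.0.113.1 and the summary prefix 10.1.0.0/16 are illustrative):

router bgp 65000
 neighbor 203.0.113.1 remote-as external
 address-family ipv4 unicast
  neighbor 203.0.113.1 activate
  neighbor 203.0.113.1 remove-private-AS all
  aggregate-address 10.1.0.0/16 summary-only
 exit-address-family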
Debugging BGP
Like any other software, BGP will occasionally behave unpredictably
due to a bug or to a misunderstanding by the operator. A common
solution to such a problem is to enable debugging and look at the
debug logs to determine the cause of the unpredictable behavior.
Different router software provides different knobs to tweak during
debugging. In FRRouting, the command debug bgp is the gateway
to understanding what’s going on with BGP. There are many options
listed under debug, but three in particular are key:
neighbor-events
This is used to debug session bring-up issues. The
debugging can be for all sessions, or for only a specific session.
Information such as which end initiated the connection, the
BGP state machine transitions, and what capabilities were
exchanged can all be seen in the debug log with this option
enabled.
bestpath
This is used to debug bestpath computation. If you enable it for
a specific prefix, the logs will show the logic followed in select‐
ing the bestpath for a prefix, including multipath selection.
Figure 5-6 shows an example of the snippet from a log. This is
for debugging the same prefix shown in Figure 5-3 and
Figure 5-5. As seen, you also can use the debug logs to gain a
better understanding of how BGP’s bestpath selection logic
works—in this case, how a longer AS_PATH prevents a path
from being selected.
updates
This is used to debug problems involving either advertising or
receiving advertisements of prefixes with a neighbor. You can
specify a single prefix, all prefixes, or all prefixes for a single
neighbor in order to more closely examine the root cause of a
problem. The debug logs show you not only the prefixes that
were accepted, but also the ones that were rejected. For exam‐
ple, given that the spines share the same ASN, the loopback IP
address of a spine cannot be seen by the other spines. To see this
in action, we issue debug bgp updates prefix
10.254.0.253/32 and get the output shown in Example 5-1 in
the log file.
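For reference, enabling the three options in FRRouting looks like this (the prefixes are the ones used in the examples above):

debug bgp neighbor-events
debug bgp bestpath 10.254.0.3/32
debug bgp updates prefix 10.254.0.253/32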
Summary
This chapter provided information for some of the less frequent, but
nevertheless critical tools and tasks for managing and troubleshoot‐
ing BGP deployments in a data center. At this stage, you should
hopefully possess a good understanding of data center networks,
BGP, and how to configure and manage a Clos network in the data
center.
Chapter 6 covers extending BGP routing all the way to the host,
something that is also increasingly being deployed as a solution in
the data center due to the rise in virtual services, among other uses.
In traditional networks, services were attached at the boundary
between the access and aggregation layers, because the boundary
represented in some sense the separation of the client from the
server. It was logical to place firewalls at this boundary to
protect servers from malicious or unauthorized clients.
Similarly, load balancers front-ended servers, typically web servers,
in support of a scale-out model. This design also extended to fire‐
walls, where load balancers front-ended a row of firewalls when the
traffic bandwidth exceeded the capacity of a single firewall.
These firewalls and load balancers were typically appliances, which
were usually scaled with the scale-in model; that is, purchasing
larger and larger appliances to support the increasing volume of
traffic.
The Clos network destroyed any such natural boundary, and with its
sheer scale, the modern data center made scale-in models impracti‐
cal. In the new world, the services are provided by virtual machines
(VMs) running on end hosts or nonvirtualized end hosts. Two pop‐
ular services provided this way are the load balancer and firewall
services. In this model, as the volume of traffic ebbs and flows, VMs
can be spun up or down dynamically to handle the changing traffic
needs.
Anycast Addresses
Because the servers (or VMs) providing a service can pop up any‐
where in the data center, the IP address no longer can be con‐
strained to a single rack or router. Instead, potentially several racks
could announce the same IP address. With routing’s ECMP for‐
warding capability, the packets would flow to one of the nearest
nodes offering the service. These endpoint IP addresses have no sin‐
gle rack or switch to which they can be associated. These IP
addresses that are announced by multiple endpoints are called any‐
cast IP addresses. They are unicast IP addresses, meaning that they
are sent to a single destination (as opposed to multidestination
addresses such as multicast or broadcast), but the destination that is
picked is determined by routing, and different endpoints pick differ‐
ent nodes offering the same service.
Subnets are typically assigned per rack. As we discussed in Chap‐
ter 1, 40 servers per rack result in the ToR announcing a /26 subnet.
But how does a ToR discover or advertise a nonsubnet address that
is an anycast service IP address? Static routing configuration is not
acceptable. BGP comes to the rescue again.
ASN Assignment
The most common deployment I have seen is to dedicate an ASN
for all servers. The advantages of this approach are that it is simple
to configure and automate, and it simplifies identifying and filtering
routes from the server. The two main disadvantages of this approach
are 1) the complexity of the configuration on the server increases if
we need to announce anything more than just the default route to
the host, and 2) tracking which server announced a route becomes
trickier because all servers share the same ASN.
Another approach would be to assign a single ASN for all servers
attached to the same switch, but separate ASNs for separate
switches. In a modern data center, this translates to having a sepa‐
rate server ASN per rack. The benefit of this model is that it now
looks like the servers are just another tier of a Clos network. The
main disadvantages of this model are the same as the previous mod‐
el’s, though we can narrow a route announcement to a specific rack.
The final approach is to treat each server as a separate node and
assign separate ASNs for each server. Although a few customers I
know of are using this approach, it feels like overkill. The primary
benefits of this approach are that it perfectly fits the model prescri‐
bed for a Clos network, and that it is easy to determine which server
advertised a route. Given the sheer number of servers, using 4-byte
ASNs seems the prudent thing to do with this approach.
Dynamic neighbors
Because BGP runs over TCP, as long as one of the peers initiates a
connection, the other end can remain passive, silently waiting for a
connection to come, just as a web server waits for a connection from
a browser or other client.
BGP dynamic neighbors is a feature supported in some implementa‐
tions whereby one end is typically passive. It is just told what IP sub‐
net to accept connections from, and is associated with a peer group
that controls the characteristics of the peering session.
Recall that the servers within a rack typically share a subnet with the
other servers in the same rack. As an example, let’s assume that a
group of 40 servers connected to a ToR switch are in 10.1.0.0/26
subnet. A typical configuration of BGP dynamic neighbors on a ToR
will look as follows:
neighbor servers peer-group
neighbor servers remote-as 65530
bgp listen range 10.1.0.0/26 peer-group servers
At this point, the BGP daemon will begin listening passively on port
179 (the well-known BGP port). If it receives a connection from
anyone in the 10.1.0.0/26 subnet that says its ASN is 65530, the BGP
daemon will accept the connection request, and a new BGP session
is established.
On the server side, the switch’s peering IP address is typically that of
the default gateway. For the subnet 10.1.0.0/26, the gateway address
is typically 10.1.0.1. Thus, the BGP configuration on the server can
be as follows:
neighbor ISL peer-group
neighbor ISL remote-as external
neighbor 10.1.0.1 peer-group ISL
At this point, the BGP daemon running on the server will initiate a
connection to the switch, and as soon as the connection is estab‐
lished, the rest of the BGP state machine proceeds as usual.
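Putting the server side together with an anycast service address from earlier in this chapter, the full stanza might look like this sketch (the router-id and the anycast address 10.200.0.1/32 are illustrative):

router bgp 65530
 bgp router-id 10.1.0.11
 neighbor ISL peer-group
 neighbor ISL remote-as external
 neighbor 10.1.0.1 peer-group ISL
 address-family ipv4 unicast
  neighbor ISL activate
  network 10.200.0.1/32
 exit-address-family

As noted in Chapter 3, the network statement takes effect only if 10.200.0.1/32 is already in the routing table, for example, configured on the server's loopback.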
Unfortunately, the dynamic neighbors feature is not currently sup‐
ported over an interface; that is, you cannot say bgp listen inter
face vlan10 peer-group servers. Nor is it possible to use the
Summary
This chapter showed how we can extend the use of BGP all the way
to the hosts. With the advent of powerful, full-featured routing
suites such as FRRouting, it is possible to configure BGP simply by
using BGP unnumbered, making it trivial to automate BGP configu‐
ration across all servers. If you cannot live with the current limita‐
tions of BGP unnumbered or you prefer a more traditional BGP
peering, BGP dynamic neighbors is an alternative solution. Further,
we showed how we could limit any damage caused by servers
advertising incorrect routes into the network, whether advertently
or inadvertently.