
CHAPTER 18

Protocols for QoS Support

The reader who has persevered thus far in this account will realize the difficulties that were coped with, the hazards that were encountered, the mistakes that were made, and the work that was done.

—The World Crisis, Winston Churchill

As this book has stressed, the demands on data networks continue their relentless rise. Traditional data-oriented applications, such as file transfer, electronic mail, USENET news, and client/server systems, place an increasing load on LANs, the Internet, and private internets. This increasing load is due not just to the number of users and the increased amount of time of their use, but also to increasing reliance on image as well as text and numerical data. At the same time, there is increasing use of video and audio. One option for multimedia applications is to use a combination of dedicated circuits and ATM technology. But the timing of the demand has outrun any hope of installing a desktop-to-desktop ATM infrastructure. On the Internet and corporate intranets, there is explosive growth in the use of audio and video on Web pages. In addition, these networks are being used for audio/video teleconferencing and other multicast "radio" and video applications.
Thus, the burden of meeting these new demands falls on the TCP/IP architecture over a packet-based network infrastructure. The central issues that must be addressed are capacity and burstiness. Audio and video applications generate huge numbers of bits per second, and the traffic has to be streamed, or transmitted in a smooth continuous flow rather than in bursts.
508 CHAPTER 18 / PROTOCOLS FOR QOS SUPPORT

This contrasts with conventional types of data, such as text, files, and graphics, for which the uneven flow typical of packet transmission is acceptable.
In the absence of a universal ATM service, designers have looked for ways to accommodate both bursty and stream traffic within the TCP/IP architecture. The problem has been attacked with a number of complementary techniques:

1. To increase capacity, corporate LANs and WANs, as well as the Internet backbone structure and corporate internets, have been upgraded to higher data rates, with high-performance switches and routers. However, it is uneconomical and, indeed, given the self-similar nature of much of the traffic (see Chapter 9), virtually impossible to size the network infrastructure to handle the peak busy period traffic. Accordingly, intelligent routing policies (Part Five) coupled with end-to-end flow control techniques (Chapter 12) are vital in dealing with the high volume such networks support.
2. Multimedia applications inevitably imply multicast transmission. Efficient techniques for multicasting over an internet are needed. This topic was covered in Chapter 15.
3. Users need to be able to intelligently network capacity and assign priorities to
various traffic types. In essence, the Internet needs to supply a QoS (quality of
service) capability.
4. A transport protocol appropriate for the streaming requirements of video and
other real-time data is needed.

The first two items have been addressed in other chapters. This chapter looks at some key protocols that support the provision of QoS on the Internet. We begin with an examination of RSVP (Resource ReSerVation Protocol), which is designed to be an integral component of an Integrated Services Architecture (ISA), including a real-time transport protocol. We then look at MPLS (Multiprotocol Label Switching). Finally, we examine RTP (Real-Time Transport Protocol).

18.1 RESOURCE RESERVATION: RSVP

A key task, perhaps the key task, of an internet is to deliver data from a source to one or more destinations with the desired QoS (throughput, delay, delay variance, etc.). This task becomes increasingly difficult on any internet with increasing numbers of users, data rates of applications, and use of multicasting. We have seen that one tool for coping with high demand is dynamic routing. A dynamic routing scheme, supported by protocols such as OSPF and BGP, can respond quickly to failures in the internet by routing around points of failure. More important, a dynamic routing scheme can, to some extent, cope with congestion, first by load balancing to smooth out the load across the internet, and second by routing around areas of developing congestion using least-cost routing. In the case of multicasting, dynamic routing schemes have been supplemented with multicast routing capabilities that take advantage of shared paths from a source to multicast destinations to minimize the number of packet duplications.

Another tool available to routers is the ability to process packets on the basis of a QoS (quality of service) label. We have seen that routers can (1) use a queue discipline that gives preference to packets on the basis of QoS; (2) select among alternate routes on the basis of QoS characteristics of each path; and (3) when possible, invoke QoS treatment in the subnetwork of the next hop.
All of these techniques are means of coping with the traffic presented to the internet but are not preventive in any way. Based only on the use of dynamic routing and QoS, a router is unable to anticipate congestion and prevent applications from causing an overload. Instead, the router can simply supply a best-effort delivery service, in which some packets may be lost and others delivered with less than the requested QoS.
As the demands on internets grow, it appears that prevention as well as reaction to congestion is needed. As this section shows, a means to implement a prevention strategy is resource reservation.
Preventive measures can be useful in both unicast and multicast transmission. For unicast, two applications agree on a specific QoS for a session and expect the internet to support that QoS. If the internet is heavily loaded, it may not provide the desired QoS and instead deliver packets at a reduced QoS. In that case, the applications may have preferred to wait before initiating the session or at least to have been alerted to the potential for reduced QoS. A way of dealing with this situation is to have the unicast applications reserve resources in order to meet a given QoS. Routers along an intended path could then preallocate resources (queue space, outgoing capacity) to assure the desired QoS. If a router could not meet the resource reservation because of prior outstanding reservations, then the applications could be informed. The applications may then decide to try again at a reduced QoS reservation or may decide to try later.
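The admission decision just described can be sketched as a simple capacity check at each router. This is an illustrative model only; the `Router` class, its method names, and the kbps units are invented for the example and are not part of any specified protocol.

```python
class Router:
    """Illustrative admission control: a reservation is granted only if
    uncommitted capacity remains; otherwise the requesting application is
    informed and may retry at a reduced QoS or try again later."""

    def __init__(self, capacity_kbps):
        self.capacity_kbps = capacity_kbps
        self.reserved_kbps = 0

    def reserve(self, requested_kbps):
        # Reject if prior outstanding reservations leave too little capacity.
        if self.reserved_kbps + requested_kbps > self.capacity_kbps:
            return False        # caller may retry at a reduced QoS
        self.reserved_kbps += requested_kbps
        return True

r = Router(capacity_kbps=1000)
assert r.reserve(600)       # granted
assert not r.reserve(600)   # rejected: only 400 kbps remain uncommitted
assert r.reserve(400)       # retry at a reduced reservation succeeds
```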
Multicast transmission presents a much more compelling case for implementing resource reservation. A multicast transmission can generate a tremendous amount of internet traffic if either the application is high volume (e.g., video) or the group of multicast destinations is large and scattered, or both. What makes the case for multicast resource reservation is that much of the potential load generated by a multicast source may easily be prevented. This is so for two reasons:

1. Some members of an existing multicast group may not require delivery from a particular source over some given period of time. For example, there may be two "channels" (two multicast sources) broadcasting to a particular multicast group at the same time. A multicast destination may wish to "tune in" to only one channel at a time.
2. Some members of a group may only be able to handle a portion of the source transmission. For example, a video source may transmit a video stream that consists of two components: a basic component that provides a reduced picture quality, and an enhanced component.1 Some receivers may not have the processing power to handle the enhanced component, or may be connected to the internet through a subnetwork or link that does not have the capacity for the full signal.

1For example, the MPEG video compression standard discussed in Chapter 21 provides this capability.

Thus, the use of resource reservation can enable routers to decide ahead of time if
they can meet the requirement to deliver a multicast transmission to all designated
multicast receivers and to reserve the appropriate resources if possible.
Internet resource reservation differs from the type of resource reservation that may be implemented in a connection-oriented network, such as ATM or frame relay. An internet resource reservation scheme must interact with a dynamic routing strategy that allows the route followed by packets of a given transmission to change. When the route changes, the resource reservations must be changed. To deal with this dynamic situation, the concept of soft state is used. A soft state is simply a set of state information at a router that expires unless regularly refreshed from the entity that requested the state. If a route for a given transmission changes, then some soft states will expire and new resource reservations will invoke the appropriate soft states on the new routers along the route. Thus, the end systems requesting resources must periodically renew their requests during the course of an application transmission.
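The expire-unless-refreshed behavior can be sketched as a small table keyed by flow. This is a conceptual sketch of soft state, not RSVP's actual message handling; the class name, the timeout value, and the injectable `clock` parameter are all invented for the example.

```python
import time

class SoftStateTable:
    """Illustrative soft-state store: each entry expires unless the
    requesting end system refreshes it within the timeout."""

    def __init__(self, timeout_s=30.0, clock=time.monotonic):
        self.timeout_s = timeout_s
        self.clock = clock
        self.states = {}            # flow id -> time of last refresh

    def refresh(self, flow_id):
        # Installing a state and refreshing it are the same operation.
        self.states[flow_id] = self.clock()

    def expire(self):
        # Called periodically by the router: drop unrefreshed state.
        now = self.clock()
        for flow_id in [f for f, t in self.states.items()
                        if now - t > self.timeout_s]:
            del self.states[flow_id]

# Usage with a fake clock so the behavior is visible without waiting:
t = [0.0]
table = SoftStateTable(timeout_s=30.0, clock=lambda: t[0])
table.refresh("flow-1")
t[0] = 20.0; table.expire()
assert "flow-1" in table.states      # refreshed recently enough: kept
t[0] = 60.0; table.expire()
assert "flow-1" not in table.states  # not renewed in time: discarded
```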
We now turn to the protocol that has been developed for performing resource
reservation in an internet environment: RSVP.2

RSVP Goals and Characteristics


Perhaps the best way to introduce RSVP is to list its design goals and characteristics. In [ZHAN93], the developers of RSVP list the following design goals:

1. Provide the ability of heterogeneous receivers to make reservations specifically tailored to their own needs. As was mentioned, some members of a multicast group may be able to handle or may want to handle only a portion of a multicast transmission, such as a low-resolution component of a video signal. Differing resource reservations among members of the same multicast group should be allowed.
2. Deal gracefully with changes in multicast group membership. Membership in a group can be dynamic. Thus, reservations must be dynamic, and again, this suggests that separate dynamic reservations are needed for each multicast group member.
3. Specify resource requirements in such a way that the aggregate resources reserved for a multicast group reflect the resources actually needed. Multicast routing takes place over a tree such that packet splitting is minimized. Therefore, when resources are reserved for individual multicast group members, these reservations must be aggregated to take into account the common path segments shared by the routes to different group members.
4. Enable receivers to select one source from among multiple sources transmitting to a multicast group. This is the channel-changing capability described earlier.
5. Deal gracefully with changes in routes, automatically reestablishing the resource reservation along the new paths as long as adequate resources are available. Because routes may change during the course of an application's transmission, the resource reservations must also change so that the routers actually on the current path receive the reservations.
6. Control protocol overhead. Just as resource reservations are aggregated to take advantage of common path segments among multiple multicast receivers, so the actual RSVP reservation request messages should be aggregated to minimize the amount of RSVP traffic in the internet.
7. Be independent of routing protocol. RSVP is not a routing protocol; its task is to establish and maintain resource reservations over a path or distribution tree, independent of how the path or tree was created.

2RFC 2205, Resource ReSerVation Protocol (RSVP), Version 1 Functional Specification, September 1997.

Based on these design goals, RFC 2205 lists the following characteristics of RSVP:

• Unicast and multicast: RSVP makes reservations for both unicast and multicast transmissions, adapting dynamically to changing group membership as well as to changing routes, and reserving resources based on the individual requirements of multicast members.
• Simplex: RSVP makes reservations for unidirectional data flow. Data exchanges between two end systems require separate reservations in the two directions.
• Receiver-initiated reservation: The receiver of a data flow initiates and maintains the resource reservation for that flow.
• Maintaining soft state in the internet: RSVP maintains a soft state at intermediate routers and leaves the responsibility for maintaining these reservation states to end users.
• Providing different reservation styles: These allow RSVP users to specify how reservations for the same multicast group should be aggregated at the intermediate switches. This feature enables a more efficient use of internet resources.
• Transparent operation through non-RSVP routers: Because reservations and RSVP are independent of routing protocol, there is no fundamental conflict in a mixed environment in which some routers do not employ RSVP. These routers will simply use a best-effort delivery technique.
• Support for IPv4 and IPv6: RSVP can exploit the Type-of-Service field in the IPv4 header and the Flow Label field in the IPv6 header.

It is worth elaborating on two of these design characteristics: receiver-initiated reservations and soft state.

Receiver-Initiated Reservation
Previous attempts at resource reservation, and the approach taken in frame relay and ATM networks, are for the source of a data flow to request a given set of resources. In a strictly unicast environment, this approach is reasonable. A transmitting application is able to transmit data at a certain rate and has a given QoS designed into the transmission scheme. However, this approach is inadequate for multicasting. As was mentioned, different members of the same multicast group may have different resource requirements. If the source transmission flow can be divided into component subflows, then some multicast members may only require a single subflow. If there are multiple sources transmitting to a multicast group, then a particular multicast receiver may want to select only one or a subset of all sources to receive. Finally, the QoS requirements of different receivers may differ depending on the output equipment, processing power, and link speed of the receiver.
It therefore makes sense for receivers rather than senders to make resource
reservations. A sender needs to provide the routers with the traffic characteristics
of the transmission (data rate, variability), but it is the receivers that must specify
the desired QoS. Routers can then aggregate multicast resource reservations to take
advantage of shared path segments along the distribution tree.

Soft State
RSVP makes use of the concept of a soft state. This concept was first intro¬
duced by David Clark in [CLAR88], and it is worth quoting his description:

While the datagram has served very well in solving the most important goals of the Internet, the goals of resource management and accountability have proved difficult to achieve. Most datagrams are part of some sequence of packets from source to destination, rather than isolated units at the application level. However, the gateway3 [footnote reference added] cannot directly see the existence of this sequence, because it is forced to deal with each packet in isolation. Therefore, resource management decisions or accounting must be done on each packet separately.
This suggests that there may be a better building block than the datagram for the next generation of architecture. The general characteristic of this building block is that it would identify a sequence of packets traveling from source to destination. I have used the term flow to characterize this building block. It would be necessary for the gateways to have flow state in order to remember the nature of the flows which are passing through them, but the state information would not be critical in maintaining the described type of service associated with the flow. Instead, that type of service would be enforced by the end points, which would periodically send messages to ensure that the proper type of service was being associated with the flow. In this way, the state information associated with the flow could be lost in a crash without permanent disruption of the service features being used. I call this concept soft state.

In essence, a connection-oriented scheme takes a hard-state approach, in which the nature of the connection along a fixed route is defined by the state information in the intermediate switching nodes. RSVP takes a soft-state, or connectionless, approach, in which the reservation state is cached information in the routers that is installed and periodically refreshed by end systems. If a state is not refreshed within a required time limit, the router discards the state. If a new route becomes preferred for a given flow, the end systems provide the reservation to the new routers on the route.

3Gateway is the term used for router in most of the earlier RFCs and TCP/IP literature; it is still occasionally used today (e.g., Border Gateway Protocol).

Data Flows

Three concepts relating to data flows form the basis of RSVP operation: session, flow specification, and filter specification.
A session is a data flow identified by its destination. The reason for using the term session rather than simply destination is that it reflects the soft-state nature of RSVP operation. Once a reservation is made at a router by a particular destination, the router considers this as a session and allocates resources for the life of that session. In particular, a session is defined by

Session:  Destination IP address
          IP protocol identifier
          Destination port

The destination IP address may be unicast or multicast. The protocol identifier indicates the user of IP (e.g., TCP or UDP), and the destination port is the TCP or UDP port for the user of this transport layer protocol. If the address is multicast, the destination port may not be necessary, because there is typically a different multicast address for different applications.
A reservation request issued by a destination end system is called a flow descriptor and consists of a flowspec and a filter spec. The flowspec specifies a desired QoS and is used to set parameters in a node's packet scheduler. That is, the router will transmit packets with a given set of preferences based on the current flowspecs. The filter spec defines the set of packets for which a reservation is requested. Thus, the filter spec together with the session define the set of packets, or flow, that are to receive the desired QoS. Any other packets addressed to the same destination are handled as best-effort traffic.
The content of the flowspec is beyond the scope of RSVP, which is merely a carrier of the request. In general, a flowspec contains the following elements:

Flowspec:  Service class
           Rspec
           Tspec

The service class is an identifier of the type of service being requested; it includes information used by the router to merge requests. The other two parameters are sets of numeric values. The Rspec (R for reserve) parameter defines the desired QoS, and the Tspec (T for traffic) parameter describes the data flow. The contents of Rspec and Tspec are opaque to RSVP.
In principle, the filter spec may designate an arbitrary subset of the packets of one session (i.e., the packets arriving with the destination specified by this session). For example, a filter spec could specify only specific sources, or specific source protocols, or in general only packets that have a match on certain fields in any of the protocol headers in the packet. The current RSVP version uses a restricted filter spec consisting of the following elements:

Filter spec:  Source address
              UDP/TCP source port

Figure 18.1 indicates the relationship among session, flowspec, and filter spec.
Each incoming packet is part of at most one session and is treated according to the
logical flow indicated in the figure for that session. If a packet belongs to no session,
it is given a best-effort delivery service.
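The classification logic just described can be sketched in a few lines. The `Session` and `FilterSpec` fields mirror the definitions above; the `classify` function, the dictionary-based packet representation, and the sample addresses are invented for the illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Session:
    dst_addr: str        # unicast or multicast destination IP address
    protocol: int        # IP protocol identifier (e.g., 17 = UDP)
    dst_port: int        # TCP/UDP destination port

@dataclass(frozen=True)
class FilterSpec:
    src_addr: str        # source address
    src_port: int        # UDP/TCP source port

def classify(packet, reservations):
    """Return the flowspec for a packet, or None for best-effort handling.

    `reservations` maps (Session, FilterSpec) -> flowspec; a packet gets
    the reserved QoS only if it matches both the session and a filter spec.
    """
    session = Session(packet["dst"], packet["proto"], packet["dport"])
    fspec = FilterSpec(packet["src"], packet["sport"])
    return reservations.get((session, fspec))   # None => best-effort

reservations = {
    (Session("224.1.2.3", 17, 5004), FilterSpec("10.0.0.1", 6000)): "3B",
}
pkt = {"dst": "224.1.2.3", "proto": 17, "dport": 5004,
       "src": "10.0.0.1", "sport": 6000}
assert classify(pkt, reservations) == "3B"      # matches session and filter
other = dict(pkt, src="10.0.0.9")
assert classify(other, reservations) is None    # same session, no filter match
```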

RSVP Operation
Much of the complexity of RSVP has to do with dealing with multicast transmission. Unicast transmission is treated as a special case. In what follows, we examine the general operation of RSVP for multicast resource reservation. The internet configuration shown in Figure 18.2a is used in the discussion. This configuration consists of four routers connected as shown. The link between two routers, indicated by a line, could be a point-to-point link or a subnetwork. Three hosts, G1, G2, and G3, are members of a multicast group and can receive datagrams with the corresponding destination multicast address. Two hosts, S1 and S2, transmit data to this multicast address. The heavy black lines indicate the routing tree for source S1 and this multicast group, and the heavy gray lines indicate the routing tree for source S2 and this multicast group. The arrowed lines indicate packet transmission from S1 (black) and S2 (gray).
We can see that all four routers need to be aware of the resource reservations of each multicast destination. Thus, resource requests from the destinations must propagate backward through the routing trees toward each potential host.

Filtering
Figure 18.2b shows the case that G3 has set up a resource reservation with a filter spec that includes both S1 and S2, whereas G1 and G2 have requested transmissions from S1 only. R3 continues to deliver packets from S2 for this multicast address to G3 but does not forward such packets to R4 for delivery to G1 and G2. The reservation activity that produces this result is as follows. Both G1 and G2 send an RSVP request with a filter spec that excludes S2. Because G1 and G2 are the only members of the multicast group reachable from R4, R4 no longer needs to forward packets for this session. Therefore, it can merge the two filter spec requests and send these in an RSVP message to R3. Having received this message, R3 will no longer forward packets for this session to R4. However, it still needs to forward such packets to G3. Accordingly, R3 stores this reservation but does not propagate it back up to R2.
RSVP does not specify how filtered packets are to be handled. R3 could choose to forward such packets to R4 on a best-effort basis or could simply drop the packets. The latter course is indicated in Figure 18.2b.
A more fine-grained example of filtering is illustrated in Figure 18.2c. Here we only consider transmissions from S1, for clarity. Suppose that two types of packets are transmitted to the same multicast address representing two substreams (e.g., two parts of a video signal). These are illustrated by black and gray arrowed lines. G1 and G2 have sent reservations with no restriction on the source, whereas G3 has
[Figure 18.1: relationship among session, flowspec, and filter spec]

[Figure 18.2: RSVP Operation]

used a filter spec that eliminates one of the two substreams. This request propagates from R3 to R2 to R1. R1 then blocks transmission of part of the stream to G3. This saves resources on the links from R1 to R2, R2 to R3, and R3 to G3, as well as resources in R2, R3, and G3.

Reservation Styles

The manner in which resource requirements from multiple receivers in the same multicast group are aggregated is determined by the reservation style. These styles are, in turn, characterized by two different options in the reservation request:

• Reservation attribute: A receiver may specify a resource reservation that is to be shared among a number of senders (shared) or may specify a resource reservation that is to be allocated to each sender (distinct). In the former case, the receiver is characterizing the entire data flow that it is to receive on this multicast address from the combined transmission of all sources in the filter spec. In the latter case, the receiver is saying that it is simultaneously capable of receiving a given data flow from each sender characterized in its filter spec.
• Sender selection: A receiver may either provide a list of sources (explicit) or implicitly select all sources by providing no filter spec (wild card).

Based on these two options, three reservation styles are defined in RSVP, as shown in Table 18.1. The wild-card-filter (WF) style specifies a single resource reservation to be shared by all senders to this address. If all of the receivers use this style, then we can think of this style as a shared pipe whose capacity (or quality) is the largest of the resource requests from all receivers downstream from any point on the distribution tree. The size is independent of the number of senders using it. This type of reservation is propagated upstream to all senders. Symbolically, this style is represented in the form WF(*{Q}), where the asterisk represents wild-card sender selection and Q is the flowspec.
To see the effects of the WF style, we use the router configuration of Figure 18.3a, taken from the RSVP specification. This is a router along the distribution tree that forwards packets on port y for receiver R1 and on port z for receivers R2 and R3. Transmissions for this group arrive on port w from S1 and on port x from S2 and S3. Transmissions from all sources are forwarded to all destinations through this router.
Figure 18.3b shows the way in which the router handles WF requests. For simplicity, the flowspec is a one-dimensional quantity in multiples of some unit resource B. The Receive column shows the requests that arrive from the receivers. The Reserve column shows the resulting reservation state for each outgoing port. The Send column indicates the requests that are sent upstream to the previous-hop nodes. Note that the router must reserve a pipe of capacity 4B for port y and of capacity 3B for port z. In the latter case, the router has merged the requests from R2 and R3 to support the maximum requirement for that port. However, in passing requests upstream the router must merge all outgoing requests and send a request for 4B upstream on both ports w and x.
Now suppose that the distribution tree is such that this router forwards packets from S1 on both ports y and z but forwards packets from S2 and S3 only on port

Table 18.1  Reservation Attributes and Styles

                            Reservation Attribute
Sender Selection      Distinct                    Shared
Explicit              Fixed-Filter (FF) Style     Shared-Explicit (SE) Style
Wild card             —                           Wild-card-Filter (WF) Style

z, because the internet topology provides a shorter path from S2 and S3 to R1. Figure 18.3c indicates the way in which resource requests are merged in this case. The only change is that the request sent upstream on port x is for 3B. This is because packets arriving from this port are only to be forwarded on port z, which has a maximum flowspec request of 3B.
A good example of the use of the WF style is for an audio teleconference with multiple sites. Typically, only one person at a time speaks, so a shared capacity can be used by all senders.
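The WF merging rule described for Figure 18.3b and 18.3c can be sketched as follows: reserve on each outgoing port the maximum of the downstream requests, and send upstream on each incoming port the maximum over the outgoing ports that its senders feed. The function and data-structure names are invented; the individual receiver amounts (4B from R1; 3B and 2B from R2 and R3) are assumed for illustration, since the text fixes only the resulting maxima.

```python
def wf_merge(receiver_requests, routing):
    """Illustrative WF-style merging at one router.

    receiver_requests: outgoing port -> list of flowspec amounts (units of B)
        requested by downstream receivers on that port.
    routing: incoming port -> list of outgoing ports its senders feed.
    Returns (reservation per outgoing port, request sent per incoming port).
    """
    # Reserve: one shared pipe per port, sized to the largest request.
    reserve = {port: max(reqs) for port, reqs in receiver_requests.items()}
    # Send upstream: merge all requests reachable through each incoming port.
    send = {inp: max(reserve[outp] for outp in outs)
            for inp, outs in routing.items()}
    return reserve, send

# Figure 18.3b: all sources are forwarded to all destinations.
reserve, send = wf_merge({"y": [4], "z": [3, 2]},
                         {"w": ["y", "z"], "x": ["y", "z"]})
assert reserve == {"y": 4, "z": 3}   # 4B pipe on y, 3B pipe on z
assert send == {"w": 4, "x": 4}      # 4B requested upstream on w and x

# Figure 18.3c: S2 and S3 (port x) are forwarded only on port z.
_, send = wf_merge({"y": [4], "z": [3, 2]},
                   {"w": ["y", "z"], "x": ["z"]})
assert send == {"w": 4, "x": 3}      # only the x request changes, to 3B
```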
The fixed-filter (FF) style specifies a distinct reservation for each sender and provides an explicit list of senders. Symbolically, this style is represented in the form FF(S1{Q1}, S2{Q2}, . . .), where Si is a requested sender and Qi is the resource request for that sender. The total reservation on a link for a given session is the sum of the Qi for all requested senders.
Figure 18.3d illustrates the operation of the FF style. In the Reserve column, each box represents one reserved pipe on the outgoing link. All of the incoming requests for S1 are merged to send a request for 4B out on port w. The flow descriptors for senders S2 and S3 are packed (not merged) into the request sent on port x; for this request, the maximum requested flowspec amount for each source is used.
A good example of the use of the FF style is for video distribution. To receive video signals simultaneously from different sources requires a separate pipe for each of the streams. The merging and packing operations at the routers assure that adequate resources are available. For example, in Figure 18.2a, R3 must reserve resources for two distinct video streams going to G3, but it needs only a single pipe on the stream going to R4 even though that stream is feeding two destinations (G1 and G2). Thus, with FF style, it may be possible to share resources among multiple receivers but it is never possible to share resources among multiple senders.
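The FF behavior differs from WF merging in that each sender gets its own pipe: per-sender requests from different ports are merged (maximum), but requests for different senders travelling up the same port are packed side by side, never combined. A sketch under the same conventions as the WF example above, with all names and the sample amounts invented:

```python
def ff_merge(receiver_requests, routing):
    """Illustrative FF-style handling at one router.

    receiver_requests: outgoing port -> {sender: requested amount (in B)}.
    routing: incoming port -> list of senders reached through that port.
    Returns (per-port reservation with one pipe per sender,
             per-incoming-port packed upstream request).
    """
    # Reserve: a distinct pipe per sender on each outgoing port; the link
    # total for the session is the sum of these pipes.
    reserve = {port: dict(per_sender)
               for port, per_sender in receiver_requests.items()}
    send = {}
    for inp, senders in routing.items():
        for s in senders:
            # Merge requests for the SAME sender across outgoing ports...
            amounts = [per_sender[s]
                       for per_sender in receiver_requests.values()
                       if s in per_sender]
            if amounts:
                # ...but pack DIFFERENT senders separately on the port.
                send.setdefault(inp, {})[s] = max(amounts)
    return reserve, send

# Hypothetical amounts in the spirit of Figure 18.3d:
reserve, send = ff_merge({"y": {"S1": 4, "S2": 5}, "z": {"S1": 3, "S3": 1}},
                         {"w": ["S1"], "x": ["S2", "S3"]})
assert send["w"] == {"S1": 4}           # S1 requests merged: max(4, 3)
assert send["x"] == {"S2": 5, "S3": 1}  # S2, S3 packed, not merged
```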
The shared-explicit (SE) style specifies a single resource reservation to be shared among an explicit list of senders. Symbolically, this style is represented in the form SE(S1, S2, . . . {Q}). Figure 18.3e illustrates the operation of this style. When SE-style reservations are merged, the resulting filter spec is the union of the original filter specs, and the resulting flowspec is the largest flowspec.
As with the WF style, the SE style is appropriate for multicast applications in which there are multiple data sources but they are unlikely to transmit simultaneously.

RSVP Protocol Mechanisms


RSVP uses two basic message types: Resv and Path. Resv messages originate at multicast group receivers and propagate upstream through the distribution tree, being merged and packed when appropriate at each node along the way. These messages create soft states within the routers of the distribution tree that define the resources reserved for this session (this multicast address). Ultimately, the merged Resv messages reach the sending hosts, enabling the hosts to set up appropriate traffic control parameters for the first hop. Figure 18.3d indicates the flow of Resv messages. Note that messages are merged so that only a single message flows upstream along any branch of the combined distribution trees. However, these messages must be repeated periodically to maintain the soft states.
The Path message is used to provide upstream routing information. In all of the
multicast routing protocols currently in use, only a downstream route, in the form
of a distribution tree, is maintained. However, the Resv messages must propagate
upstream through all intermediate routers and to all sending hosts. In the absence
of reverse routing information from the routing protocol, RSVP provides this with
the Path message. Each host that wishes to participate as a sender in a multicast
group issues a Path message that is transmitted throughout the distribution tree to
all multicast destinations. Along the way, each router and each destination host
creates a path state that indicates the reverse hop to be used for this source. Figure
18.2a indicates the paths taken by these messages, which are the same as the paths
taken by data packets.
Figure 18.4 illustrates the operation of the protocol from the host perspective.
The following events occur:

a. A receiver joins a multicast group by sending an IGMP (Internet Group
Management Protocol) join message to a neighboring router.
b. A potential sender issues a Path message to the multicast group address.
c. A receiver receives a Path message identifying a sender.

Figure 18.4 RSVP Host Model



d. Now that the receiver has reverse path information, it may begin sending
Resv messages, specifying the desired flow descriptors.
e. The Resv message propagates through the internet and is delivered to
the sender.
f. The sender starts sending data packets.
g. The receiver starts receiving data packets.

Events a and b may happen in either order.
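The key ordering constraint in this sequence is that a receiver cannot issue a Resv message (event d) until it holds reverse-path state learned from a Path message (event c). This can be modeled in a toy sketch; the class and its method names are illustrative, not part of any RSVP implementation:

```python
# Toy model of the RSVP host events above: path state must exist before a
# Resv message can be issued. Purely illustrative; not an RSVP implementation.

class Receiver:
    def __init__(self):
        self.reverse_hops = {}            # sender -> previous hop (path state)

    def on_path(self, sender, prev_hop):  # event (c): a Path message arrives
        self.reverse_hops[sender] = prev_hop

    def send_resv(self, sender, flow_descriptor):   # event (d)
        if sender not in self.reverse_hops:
            raise RuntimeError("no path state yet for this sender")
        # Resv travels upstream via the recorded reverse hop toward the sender.
        return (self.reverse_hops[sender], sender, flow_descriptor)

r = Receiver()
r.on_path("S1", prev_hop="R3")
hop, sender, q = r.send_resv("S1", flow_descriptor="Q")
```

Calling `send_resv` before any Path message has been seen raises an error, mirroring the fact that event d can only follow event c.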

18.2 MULTIPROTOCOL LABEL SWITCHING

So far, in Parts Five and Six, we have looked at a number of IP-based mechanisms
designed to improve the performance of IP-based networks and to provide different
levels of QoS to different service users. Although the routing protocols discussed
in Part Five have as their fundamental purpose dynamically finding a route through
an internet between any source and any destination, they also provide support for
performance goals in two ways:

1. Because these protocols are distributed and dynamic, they can react to congestion
by altering routes to avoid pockets of heavy traffic. This tends to smooth
out and balance the load on the internet, improving overall performance.
2. Routes can be based on various metrics, such as hop count and delay. Thus a
routing algorithm develops information that can be used in determining how
to handle packets with different service needs.

More directly, the topics covered so far in Part Six (IS, DS, RSVP) provide
enhancements to an IP-based internet that explicitly provide support for QoS. However,
none of the mechanisms or protocols so far discussed in Part Six directly
addresses the performance issue: how to improve the overall throughput and delay
characteristics of an internet. MPLS is a promising effort to provide the kind of traffic
management and connection-oriented QoS support found in ATM networks, to
speed up the IP packet forwarding process, and to retain the flexibility of an
IP-based networking approach.

Background

The roots of MPLS go back to a number of efforts in the mid-1990s to marry IP and
ATM technologies. The first such effort to reach the marketplace was IP switching,
developed by Ipsilon. To compete with this offering, numerous other companies
announced their own products, notably Cisco Systems (tag switching), IBM (aggregate
route-based IP switching), and Cascade (IP navigator). The goal of all these
products was to improve the throughput and delay performance of IP, and all took
the same basic approach: Use a standard routing protocol such as OSPF to define
paths between endpoints; assign packets to these paths as they enter the network;
and use ATM switches to move packets along the paths. When these products came
out, ATM switches were much faster than IP routers, and the intent was to improve

performance by pushing as much of the traffic as possible down to the ATM level
and using ATM switching hardware.
In response to these proprietary initiatives, the IETF set up the MPLS working
group in 1997 to develop a common, standardized approach. The working group
issued its first set of Proposed Standards in 2001.4 Meanwhile, however, the market
did not stand still. The late 1990s saw the introduction of a number of routers that
are as fast as ATM switches, eliminating the need to provide both ATM and IP technology
in the same network. Nevertheless, MPLS has a strong role to play. MPLS
reduces the amount of per-packet processing required at each router in an IP-based
network, enhancing router performance even more. More significantly, MPLS provides
significant new capabilities in four areas that have ensured its popularity: QoS
support, traffic engineering, virtual private networks, and multiprotocol support.
Before turning to the details of MPLS, we briefly examine each of these.

Connection-Oriented QoS Support


Network managers and users require increasingly sophisticated QoS support
for a number of reasons. [SIKE00] lists the following key requirements:

• Guarantee a fixed amount of capacity for specific applications, such as audio/video conference.
• Control latency and jitter and ensure capacity for voice.
• Provide very specific, guaranteed, and quantifiable service level agreements, or traffic contracts.
• Configure varying degrees of QoS for multiple network customers.

A connectionless network, such as an IP-based internet, cannot provide truly
firm QoS commitments. A differentiated service (DS) framework works only in a
general way and upon aggregates of traffic from a number of sources. An IS framework,
using RSVP, has some of the flavor of a connection-oriented approach but
is nevertheless limited in terms of its flexibility and scalability. For services such
as voice and video that require a network with high predictability, the DS and IS
approaches, by themselves, may prove inadequate on a heavily loaded network. By
contrast, a connection-oriented network, as we have seen, has powerful traffic management
and QoS capabilities. MPLS imposes a connection-oriented framework on
an IP-based internet and thus provides the foundation for sophisticated and reliable
QoS traffic contracts.

Traffic Engineering
MPLS makes it easy to commit network resources in such a way as to balance
the load in the face of a given demand and to commit to differential levels of support
to meet various user traffic requirements. The ability to define routes dynamically,
plan resource commitments on the basis of known demand, and optimize
network utilization is referred to as traffic engineering.

4RFC 3031, Multiprotocol Label Switching Architecture, January 2001.



With the basic IP mechanism, there is a primitive form of automated traffic
engineering. Specifically, routing protocols such as OSPF enable routers to change
the route dynamically to a given destination on a packet-by-packet basis to try to
balance load. But such dynamic routing reacts in a very simple manner to congestion
and does not provide a way to support QoS. All traffic between two endpoints
follows the same route, which may be changed when congestion occurs. MPLS, on
the other hand, is aware of not just individual packets but flows of packets in which
each flow has certain QoS requirements and a predictable traffic demand. With
MPLS, it is possible to set up routes on the basis of these individual flows, with two
different flows between the same endpoints perhaps following different routes.
Further, when congestion threatens, MPLS paths can be rerouted intelligently. That
is, instead of simply changing the route on a packet-by-packet basis, with MPLS, the
routes are changed on a flow-by-flow basis, taking advantage of the known traffic
demands of each flow. Effective use of traffic engineering can substantially increase
usable network capacity.

Virtual Private Network (VPN) Support


MPLS provides an efficient mechanism for supporting VPNs. With a VPN, the
traffic of a given enterprise or group passes transparently through an internet in a
way that effectively segregates that traffic from other packets on the internet,
providing performance guarantees and security.

Multiprotocol Support
MPLS can be used on a number of networking technologies. Our focus in Part
Six is on IP-based internets, and this is likely to be the principal area of use. MPLS
is an enhancement to the way a connectionless IP-based internet is operated, requiring
an upgrade to IP routers to support the MPLS features. MPLS-enabled routers
can coexist with ordinary IP routers, facilitating the evolutionary introduction of
MPLS. MPLS is also designed to work in ATM and frame relay networks.
Again, MPLS-enabled ATM switches and MPLS-enabled frame relay switches can
be configured to coexist with ordinary switches. Furthermore, MPLS can be used in
a pure IP-based internet, a pure ATM network, a pure frame relay network, or an
internet that includes two or even all three technologies. This universal nature of
MPLS should appeal to users who currently have mixed network technologies and
seek ways to optimize resources and expand QoS support.
For the remainder of this discussion, we focus on the use of MPLS in IP-based
internets, with brief comments about formatting issues for ATM and frame relay
networks. Table 18.2 defines key MPLS terms used in our discussion.

MPLS Operation

An MPLS network or internet5 consists of a set of nodes, called label switched
routers (LSRs), capable of switching and routing packets on the basis of a

5For simplicity, we will use the term network for the remainder of this section. In the case of an IP-based
internet, we are referring to the internet, where the IP routers function as MPLS nodes.

Table 18.2 MPLS Terminology

Forwarding equivalence class (FEC) A group of IP packets that are forwarded in the same manner (e.g., over the same path, with the same forwarding treatment).

Frame merge Label merging, when it is applied to operation over frame-based media, so that the potential problem of cell interleave is not an issue.

Label A short, fixed-length, physically contiguous identifier that is used to identify a FEC, usually of local significance.

Label merging The replacement of multiple incoming labels for a particular FEC with a single outgoing label.

Label stack An ordered set of labels.

Label swap The basic forwarding operation consisting of looking up an incoming label to determine the outgoing label, encapsulation, port, and other data handling information.

Label swapping A forwarding paradigm allowing streamlined forwarding of data by using labels to identify classes of data packets that are treated indistinguishably when forwarding.

Label switched hop The hop between two MPLS nodes, on which forwarding is done using labels.

Label switched path The path through one or more LSRs at one level of the hierarchy followed by packets in a particular FEC.

Label switching router (LSR) An MPLS node that is capable of forwarding native L3 packets.

Merge point A node at which label merging is done.

MPLS domain A contiguous set of nodes that operate MPLS routing and forwarding and that are also in one Routing or Administrative Domain.

MPLS edge node An MPLS node that connects an MPLS domain with a node that is outside of the domain, either because it does not run MPLS, and/or because it is in a different domain. Note that if an LSR has a neighboring host that is not running MPLS, then that LSR is an MPLS edge node.

MPLS egress node An MPLS edge node in its role in handling traffic as it leaves an MPLS domain.

MPLS ingress node An MPLS edge node in its role in handling traffic as it enters an MPLS domain.

MPLS label A short, fixed-length, physically contiguous identifier that is used to identify a FEC, usually of local significance. A label is carried in a packet header.

MPLS node A node that is running MPLS. An MPLS node will be aware of MPLS control protocols, will operate one or more L3 routing protocols, and will be capable of forwarding packets based on labels. An MPLS node may optionally be also capable of forwarding native L3 packets.

label appended to each packet. Labels define a flow of packets between
two endpoints or, in the case of multicast, between a source endpoint and a multicast
group of destination endpoints. For each distinct flow, called a forwarding equivalence
class (FEC), a specific path through the network of LSRs is defined. Thus,
MPLS is a connection-oriented technology. Associated with each FEC is a traffic
characterization that defines the QoS requirements for that flow. The LSRs need not
examine or process the IP header but rather simply forward each packet based on
its label value. Thus, the forwarding process is simpler than with an IP router.
Figure 18.5, based on one in [REDF00], depicts the operation of MPLS within
a domain of MPLS-enabled routers. The following are key elements of the operation:

1. Prior to the routing and delivery of packets in a given FEC, a path through the
network, known as a label switched path (LSP), must be defined and the QoS
parameters along that path must be established. The QoS parameters determine
(1) how much resources to commit to the path, and (2) what queuing and
discarding policy to establish at each LSR for packets in this FEC. To accomplish
these tasks, two protocols are used to exchange the necessary information
among routers:
a. An interior routing protocol, such as OSPF, is used to exchange reachability
and routing information.
b. Labels must be assigned to the packets for a particular FEC. Because the
use of globally unique labels would impose a management burden and limit
the number of usable labels, labels have local significance only, as discussed
subsequently. A network operator can specify explicit routes manually and
assign the appropriate label values. Alternatively, a protocol is used to
determine the route and establish label values between adjacent LSRs.
Either of two protocols can be used for this purpose: the Label Distribution
Protocol (LDP) or an enhanced version of RSVP.
2. A packet enters an MPLS domain through an ingress edge LSR, where it is
processed to determine which network-layer services it requires, defining its
QoS. The LSR assigns this packet to a particular FEC, and therefore a particular
LSP; appends the appropriate label to the packet; and forwards the
packet. If no LSP yet exists for this FEC, the edge LSR must cooperate with
the other LSRs in defining a new LSP.

Figure 18.5 MPLS Operation



3. Within the MPLS domain, as each LSR receives a labeled packet, it
a. Removes the incoming label and attaches the appropriate outgoing label to
the packet.
b. Forwards the packet to the next LSR along the LSP.
4. The egress edge LSR strips the label, reads the IP packet header, and forwards
the packet to its final destination.

Several key features of MPLS operation can be noted at this point:

1. An MPLS domain consists of a contiguous, or connected, set of MPLS-enabled
routers. Traffic can enter or exit the domain from an endpoint on a
directly connected network, as shown in the upper-right corner of Figure 18.5.
Traffic may also arrive from an ordinary router that connects to a portion of
the internet not using MPLS, as shown in the upper-left corner of Figure 18.5.
2. The FEC for a packet can be determined by one or more of a number of parameters,
as specified by the network manager. Among the possible parameters
are the following:
• Source and/or destination IP addresses or IP network addresses
• Source and/or destination port numbers
• IP protocol ID
• Differentiated services codepoint
• IPv6 flow label


3. Forwarding is achieved by doing a simple lookup in a predefined table that
maps label values to next hop addresses. There is no need to examine or process
the IP header or to make a routing decision based on destination IP address.
4. A particular per-hop behavior (PHB) can be defined at an LSR for a given
FEC. The PHB defines the queuing priority of the packets for this FEC and
the discard policy.
5. Packets sent between the same endpoints may belong to different FECs. Thus,
they will be labeled differently, will experience different PHB at each LSR,
and may follow different paths through the network.

Figure 18.6 shows the label-handling and forwarding operation in more detail.
Each LSR maintains a forwarding table for each LSP passing through the LSR.
When a labeled packet arrives, the LSR indexes the forwarding table to determine
the next hop. For scalability, as was mentioned, labels have local significance only.
Thus, the LSR removes the incoming label from the packet and attaches the
matching outgoing label before forwarding the packet. The ingress edge LSR determines
the FEC for each incoming unlabeled packet and, on the basis of the FEC,
assigns the packet to a particular LSP, attaches the corresponding label, and forwards
the packet.
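The per-LSR label swap described above amounts to a single table lookup. A minimal sketch, where the table layout mirrors Figure 18.6 and the key and value formats are illustrative assumptions:

```python
# Sketch of the label swap at an LSR: the forwarding table maps
# (incoming interface, incoming label) to (outgoing interface, outgoing label).
# No IP header processing is needed; names and table contents are illustrative.

def forward(table, in_iface, in_label, payload):
    out_iface, out_label = table[(in_iface, in_label)]   # one table lookup
    return out_iface, (out_label, payload)               # swap label, forward

# Incoming label 19 on interface 0 maps to outgoing label 24 on interface 1.
table = {(0, 19): (1, 24)}
out_iface, labeled_packet = forward(table, 0, 19, b"payload")
```

Because labels are locally significant, each LSR's table is independent of its neighbors': the only coordination required is that the outgoing label here match the incoming label expected by the next hop.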

Label Stacking

One of the most powerful features of MPLS is label stacking. A labeled packet may
carry a number of labels, organized as a last-in-first-out stack. Processing is always
based on the top label. At any LSR, a label may be added to the stack (push operation)
or removed from the stack (pop operation). Label stacking allows the aggregation
of LSPs into a single LSP for a portion of the route through a network,
creating a tunnel. At the beginning of the tunnel, an LSR assigns the same label to
packets from a number of LSPs by pushing the label onto each packet's stack. At
the end of the tunnel, another LSR pops the top element from the label stack,
revealing the inner label. This is similar to ATM, which has one level of stacking
(virtual channels inside virtual paths), but MPLS supports unlimited stacking.

Figure 18.6 MPLS Packet Forwarding (each LSR's forwarding table maps an incoming interface and label to an outgoing interface and label)
Label stacking provides considerable flexibility. An enterprise could establish
MPLS-enabled networks at various sites and establish a number of LSPs at each site.
The enterprise could then use label stacking to aggregate multiple flows of its own
traffic before handing it to an access provider. The access provider could aggregate
traffic from multiple enterprises before handing it to a larger service provider. Service
providers could aggregate many LSPs into a relatively small number of tunnels
between points of presence. Fewer tunnels means smaller tables, making it easier
for a provider to scale the network core.
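The push and pop operations behind such tunnels can be sketched with ordinary lists; the label values and the single shared tunnel label are illustrative:

```python
# Toy label stack illustrating tunnel aggregation: the tunnel-ingress LSR
# pushes one outer label onto packets from many LSPs; the tunnel-egress LSR
# pops it, revealing each packet's inner label again. Label values are made up.

def push(stack, label):
    return [label] + stack        # new label becomes the top of the stack

def pop(stack):
    return stack[0], stack[1:]    # remove and return the top label

lsp_a = [17]                      # two LSPs with different inner labels
lsp_b = [42]
in_tunnel_a = push(lsp_a, 99)     # inside the tunnel, both carry outer label 99
in_tunnel_b = push(lsp_b, 99)
top, rest = pop(in_tunnel_a)      # at tunnel egress, the inner label reappears
```

Between the tunnel endpoints, LSRs forward purely on the shared outer label, so the tunnel appears in their tables as a single LSP regardless of how many inner LSPs it carries.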

Label Format and Placement


An MPLS label is a 32-bit field consisting of the following elements (Figure 18.7):6

• Label value: Locally significant 20-bit label.
• Exp: 3 bits reserved for experimental use. For example, these bits could communicate
DS information or PHB guidance.
• S: Set to one for the oldest entry in the stack (the bottom of the stack), and zero
for all other entries.
• Time to live (TTL): 8 bits used to encode a hop count, or time to live, value.
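The four fields above pack into a single 32-bit word. A small sketch of the encoding, with field widths as in Figure 18.7 (the function names are illustrative):

```python
# Packing and unpacking the 32-bit MPLS label stack entry of Figure 18.7:
# 20-bit label value, 3-bit Exp, 1-bit S (bottom of stack), 8-bit TTL.

def pack_entry(label, exp, s, ttl):
    assert 0 <= label < 2**20 and 0 <= exp < 8 and s in (0, 1) and 0 <= ttl < 256
    return (label << 12) | (exp << 9) | (s << 8) | ttl   # label in the top 20 bits

def unpack_entry(entry):
    return ((entry >> 12) & 0xFFFFF,   # label value
            (entry >> 9) & 0x7,        # Exp
            (entry >> 8) & 0x1,        # S (bottom-of-stack bit)
            entry & 0xFF)              # TTL

e = pack_entry(label=24, exp=0, s=1, ttl=64)
assert unpack_entry(e) == (24, 0, 1, 64)
```

Since labels are only 20 bits, at most 2^20 (about one million) distinct label values are available per link, which is one reason local rather than global significance is used.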

Time to Live Processing


A key field in the IP packet header is the TTL field (IPv4, Figure 2.2a), or hop
limit (IPv6, Figure 2.2b). In an ordinary IP-based internet, this field is decremented
at each router and the packet is dropped if the count falls to zero. This is done to
avoid looping, or having the packet remain too long in the internet due to faulty routing.
Because an LSR does not examine the IP header, the TTL field is included in
Bits:   20 (Label value) | 3 (Exp) | 1 (S) | 8 (Time to live)

Exp = experimental
S = bottom of stack bit
Figure 18.7 MPLS Label Format

6RFC 3032, MPLS Label Stack Encoding, January 2001.



the label so that the TTL function is still supported. The rules for processing the TTL
field in the label are as follows:

1. When an IP packet arrives at an ingress edge LSR of an MPLS domain, a single
label stack entry is added to the packet. The TTL value of this label stack
entry is set to the value of the IP TTL value. If the IP TTL field needs to be
decremented, as part of the IP processing, it is assumed that this has already
been done.
2. When an MPLS packet arrives at an internal LSR of an MPLS domain, the
TTL value in the top label stack entry is decremented. Then
a. If this value is zero, the MPLS packet is not forwarded. Depending on the
label value in the label stack entry, the packet may be simply discarded, or
it may be passed to the appropriate "ordinary" network layer for error processing
(e.g., for the generation of an ICMP error message).
b. If this value is positive, it is placed in the TTL field of the top label stack
entry for the outgoing MPLS packet, and the packet is forwarded. The outgoing
TTL value is a function solely of the incoming TTL value and is independent
of whether any labels are pushed or popped before forwarding.
There is no significance to the value of the TTL field in any label stack
entry that is not at the top of the stack.
3. When an MPLS packet arrives at an egress edge LSR of an MPLS domain,
the TTL value in the single label stack entry is decremented and the label is
popped, resulting in an empty label stack. Then
a. If this value is zero, the IP packet is not forwarded. Depending on the
label value in the label stack entry, the packet may be simply discarded,
or it may be passed to the appropriate “ordinary” network layer for error
processing.
b. If this value is positive, it is placed in the TTL field of the IP header, and
the IP packet is forwarded using ordinary IP routing. Note that the IP
header checksum must be modified prior to forwarding.
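Rule 2, for an internal LSR, can be sketched as follows. The representation of the label stack as a list of (label, TTL) pairs is an illustrative assumption:

```python
# Sketch of TTL processing at an internal LSR (rule 2 above). The stack is a
# list of (label, ttl) tuples with the top entry first; purely illustrative.

def internal_lsr_ttl(stack):
    label, ttl = stack[0]
    ttl -= 1                       # decrement the TTL in the top entry
    if ttl == 0:
        return None                # rule 2a: not forwarded (discard or error processing)
    # Rule 2b: the outgoing TTL depends only on the incoming TTL, regardless
    # of any pushes or pops performed before forwarding.
    return [(label, ttl)] + stack[1:]

assert internal_lsr_ttl([(24, 2)]) == [(24, 1)]   # forwarded with TTL 1
assert internal_lsr_ttl([(24, 1)]) is None        # TTL expired: not forwarded
```

Because only the top entry's TTL is meaningful, entries deeper in the stack pass through internal LSRs untouched, exactly as rule 2b states.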

Label Stack
The label stack entries appear after the data link layer headers, but before any
network layer headers. The top of the label stack appears earliest in the packet
(closest to the data link header), and the bottom appears latest (closest to the
network layer header). The network layer packet immediately follows the label stack
entry that has the S bit set. In a data link frame, such as for PPP (Point-to-Point Protocol),
the label stack appears between the IP header and the data link header (Figure
18.8a). For an IEEE 802 frame, the label stack appears between the IP header
and the LLC (logical link control) header (Figure 18.8b).
If MPLS is used over a connection-oriented network service, a slightly different
approach may be taken, as shown in Figures 18.8c and d. For ATM cells, the label
value in the topmost label is placed in the VPI/VCI field (Figure 5.4) in the
ATM cell header. The entire top label remains at the top of the label stack, which
is inserted between the cell header and the IP header. Placing the label value in the

(a) Data link frame: Data link header (e.g., PPP) | MPLS label stack | IP header | Data | Data link trailer

(b) IEEE 802 MAC frame: MAC header | LLC header | MPLS label stack | IP header | Data | MAC trailer

(c) ATM cell: ATM cell header (top MPLS label in VPI/VCI field) | MPLS label stack | IP header | Data

(d) Frame relay frame: FR header (top MPLS label in DLCI field) | MPLS label stack | IP header | Data | FR trailer

Figure 18.8 Position of MPLS Label

ATM cell header facilitates switching by an ATM switch, which would, as usual,
only need to look at the cell header. Similarly, the topmost label value can be placed
in the DLCI (data link connection identifier) field (Figure 4.9) of a frame relay
header. Note that in both these cases, the Time to Live field is not visible to the
switch and so is not decremented. The reader should consult the MPLS specifications
for the details of the way this situation is handled.

FECs, LSPs, and Labels

To understand MPLS, it is necessary to understand the operational relationship
among FECs, LSPs, and labels. The specifications covering all of the ramifications of
this relationship are lengthy. In the remainder of this section, we provide a summary.
The essence of MPLS functionality is that traffic is grouped into FECs. The
traffic in an FEC transits an MPLS domain along an LSP. Individual packets in an
FEC are uniquely identified as being part of a given FEC by means of a locally
significant label. At each LSR, each labeled packet is forwarded on the basis of
its label value, with the LSR replacing the incoming label value with an outgoing
label value.
The overall scheme described in the previous paragraph imposes a number of
requirements. Specifically,

1. Each traffic flow must be assigned to a particular FEC.
2. A routing protocol is needed to determine the topology and current conditions
in the domain so that a particular LSP can be assigned to an FEC. The routing
protocol must be able to gather and use information to support the QoS
requirements of the FEC.
3. Individual LSRs must become aware of the LSP for a given FEC, must assign
an incoming label to the LSP, and must communicate that label to any other
LSR that may send it packets for this FEC.

The first requirement is outside the scope of the MPLS specifications. The
assignment needs to be done either by manual configuration, or by means of some
signaling protocol, or by an analysis of incoming packets at ingress LSRs. Before
looking at the other two requirements, let us consider the topology of LSPs. We can
classify these in the following manner:

• Unique ingress and egress LSR: In this case a single path through the MPLS
domain is needed.
• Unique egress LSR, multiple ingress LSRs: If traffic assigned to a single FEC
can arise from different sources that enter the network at different ingress
LSRs, then this situation occurs. An example is an enterprise intranet at a single
location but with access to an MPLS domain through multiple MPLS ingress
LSRs. This situation would call for multiple paths through the MPLS
domain, probably sharing a final few hops.
• Multiple egress LSRs for unicast traffic: RFC 3031 states that most commonly,
a packet is assigned to an FEC based (completely or partially) on its network
layer destination address. If not, then it is possible that the FEC would require
paths to multiple distinct egress LSRs. However, more likely, there would be
a cluster of destination networks all of which are reached via the same MPLS
egress LSR.
• Multicast: RFC 3031 lists multicast as a subject for further study.

Route Selection
Route selection refers to the selection of an LSP for a particular FEC. The
MPLS architecture supports two options: hop-by-hop routing and explicit routing.
With hop-by-hop routing, each LSR independently chooses the next hop for
each FEC. RFC 3031 implies that this option makes use of an ordinary routing protocol,
such as OSPF. This option provides some of the advantages of MPLS, including
rapid switching by labels, the ability to use label stacking, and differential treatment
of packets from different FECs following the same route. However, because of the
limited use of performance metrics in typical routing protocols, hop-by-hop routing
does not readily support traffic engineering or policy routing (defining routes based
on some policy related to QoS, security, or some other consideration).
With explicit routing, a single LSR, usually the ingress or egress LSR, specifies
some or all of the LSRs in the LSP for a given FEC. For strict explicit routing,
an LSR specifies all of the LSRs on an LSP. For loose explicit routing, only some of

the LSRs are specified. Explicit routing provides all of the benefits of MPLS, including
the ability to do traffic engineering and policy routing.
Explicit routes can be selected by configuration, that is, set up ahead of time, or
dynamically. Dynamic explicit routing would provide the best scope for traffic engineering.
For dynamic explicit routing, the LSR setting up the LSP would need information
about the topology of the MPLS domain as well as QoS-related information
about that domain. An MPLS traffic engineering specification7 suggests that the
QoS-related information falls into two categories:

• A set of attributes associated with an FEC or a collection of similar FECs that
collectively specify their behavioral characteristics.
• A set of attributes associated with resources (nodes, links) that constrain the
placement of LSPs through them.

A routing algorithm that takes into account the traffic requirements of various
flows and that takes into account the resources available along various hops and
through various nodes is referred to as a constraint-based routing algorithm. In
essence, a network that uses a constraint-based routing algorithm is aware of current
utilization, existing capacity, and committed services at all times. Traditional
routing algorithms, such as OSPF and BGP, do not employ a sufficient array of cost
metrics in their algorithms to qualify as constraint based. Furthermore, for any given
route calculation, only a single cost metric (e.g., number of hops, delay) can be used.
For MPLS, it is necessary either to augment an existing routing protocol or to
deploy a new one. For example, an enhanced version of OSPF has been defined8
that provides at least some of the support required for MPLS. Examples of metrics
that would be useful to constraint-based routing are as follows:

• Maximum link data rate
• Current capacity reservation
• Packet loss ratio
• Link propagation delay
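One simple way to act on such metrics is to prune every link whose unreserved capacity cannot carry the new flow and then run an ordinary shortest-path search on what remains. The sketch below takes this two-step "prune then route" approach; the topology, the capacity numbers, and the approach itself are illustrative assumptions, not a standardized algorithm:

```python
# Minimal constraint-based route sketch: remove links lacking spare capacity
# for the flow's demand, then find the minimum-delay path (Dijkstra) on the
# pruned graph. Topology and metric values are made up for illustration.

import heapq

def constrained_route(links, src, dst, demand):
    # links: {(u, v): (propagation_delay, unreserved_capacity)}
    graph = {}
    for (u, v), (delay, capacity) in links.items():
        if capacity >= demand:                 # constraint: enough spare capacity
            graph.setdefault(u, []).append((v, delay))
    dist, prev, heap = {src: 0}, {}, [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:                           # reconstruct the selected LSP
            path = [u]
            while u in prev:
                u = prev[u]
                path.append(u)
            return path[::-1]
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    return None                                # no path satisfies the constraint

links = {("A", "B"): (1, 10), ("B", "D"): (1, 2),
         ("A", "C"): (2, 10), ("C", "D"): (2, 10)}
# A 5-unit flow cannot use B->D (only 2 units spare), so it takes A->C->D
# even though A->B->D has lower delay.
route = constrained_route(links, "A", "D", demand=5)
```

This captures the essential difference from a traditional routing algorithm: the route chosen depends on the flow's demand, not just on a single static cost metric.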

Label Distribution
Route selection consists of defining an LSP for an FEC. A separate function
is the actual setting up of the LSP. For this purpose, each LSR on the LSP must:

1. Assign a label to the LSP to be used to recognize incoming packets that belong
to the corresponding FEC.
2. Inform all potential upstream nodes (nodes that will send packets for this FEC
to this LSR) of the label assigned by this LSR to this FEC, so that these nodes
can properly label packets to be sent to this LSR.

7RFC 2702, MPLS Traffic Engineering, September 1999.

8RFC 2676, QoS Routing Mechanisms and OSPF Extensions, August 1999.

3. Learn the next hop for this LSP and learn the label that the downstream node
(LSR that is the next hop) has assigned to this FEC. This will enable this LSR
to map an incoming label to an outgoing label.

The first item in the preceding list is a local function. Items 2 and 3 must either
be done by manual configuration or require the use of some sort of label distribution
protocol. Thus, the essence of a label distribution protocol is that it enables one
LSR to inform others of the label/FEC bindings it has made. In addition, a label distribution
protocol enables two LSRs to learn each other's MPLS capabilities. The
MPLS architecture does not assume a single label distribution protocol but allows
for multiple such protocols. Specifically, RFC 3031 refers to a new label distribution
protocol and to enhancements to existing protocols, such as RSVP and BGP, to
serve the purpose.
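The three per-LSR steps listed earlier, assign a local label, advertise the binding upstream, and learn the downstream neighbor's binding, can be sketched as a toy class. The class, method names, and message flow are illustrative, not LDP syntax:

```python
# Toy illustration of label distribution: an LSR assigns a local label to an
# FEC, advertises that binding to its upstream neighbor, and records the
# binding advertised by its downstream neighbor. Names are illustrative.

class LSR:
    def __init__(self, name):
        self.name = name
        self.next_label = 16        # label values below 16 are reserved
        self.bindings = {}          # fec -> label this LSR assigned (incoming)
        self.out_labels = {}        # fec -> label assigned by the downstream LSR

    def assign_label(self, fec):                 # step 1: local label assignment
        self.bindings[fec] = self.next_label
        self.next_label += 1
        return self.bindings[fec]

    def advertise(self, fec, upstream):          # step 2: tell upstream our label
        upstream.learn(fec, self.bindings[fec])

    def learn(self, fec, downstream_label):      # step 3: record downstream label
        self.out_labels[fec] = downstream_label

upstream_lsr, downstream_lsr = LSR("R1"), LSR("R2")
downstream_lsr.assign_label("10.1.0.0/16")
downstream_lsr.advertise("10.1.0.0/16", upstream_lsr)
# R1 now labels packets for this FEC with the label R2 assigned.
```

After the exchange, R1's outgoing label for the FEC equals R2's incoming label, which is exactly the local agreement a label distribution protocol must establish hop by hop along the LSP.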
The relationship between label distribution and route selection is complex. It
is best to look at it in the context of the two types of route selection.
With hop-by-hop route selection, no specific attention is paid to traffic engineering
or policy routing concerns, as we have seen. In such a case, an ordinary
routing protocol such as OSPF is used to determine the next hop by each LSR. A
relatively straightforward label distribution protocol can operate using the routing
protocol to design routes.
With explicit route selection, a more sophisticated routing algorithm must be
implemented, one that does not employ a single metric to design a route. In this
case, a label distribution protocol could make use of a separate route selection protocol,
such as an enhanced OSPF, or incorporate a routing algorithm into a more
complex label distribution protocol.

18.3 REAL-TIME TRANSPORT PROTOCOL (RTP)

The most widely used transport-level protocol is TCP. Although TCP has proven its
value in supporting a wide range of distributed applications, it is not suited for use
with real-time distributed applications. By a real-time distributed application, we
mean one in which a source is generating a stream of data at a constant rate, and
one or more destinations must deliver that data to an application at the same
constant rate. Examples of such applications include audio and video conferencing, live
video distribution (not for storage but for immediate play), shared workspaces,
remote medical diagnosis, telephony, command and control systems, distributed
interactive simulations, games, and real-time monitoring. A number of features of
TCP disqualify it for use as the transport protocol for such applications:

1. TCP is a point-to-point protocol that sets up a connection between two end
points. Therefore, it is not suitable for multicast distribution.
2. TCP includes mechanisms for retransmission of lost segments, which then
arrive out of order. Such segments are not usable in most real-time applications.
3. TCP contains no convenient mechanism for associating timing information
with segments, which is another real-time requirement.

The other widely used transport protocol, UDP, does not exhibit the first two
characteristics listed but, like TCP, does not provide timing information. By itself,
UDP does not provide any general-purpose tools useful for real-time applications.
Although each real-time application could include its own mechanisms for
supporting real-time transport, there are a number of common features that warrant
the definition of a common protocol. A standards-track protocol designed for this
purpose is the real-time transport protocol (RTP), defined in RFC 1889.9 RTP is best
suited to soft real-time communication. It lacks the necessary mechanisms to support
hard real-time traffic.
This section provides an overview of RTP. We begin with a discussion of real-time
transport requirements. Next, we examine the philosophical approach of RTP.
The remainder of the section is devoted to the two protocols that make up RTP: The
first is simply called RTP and is a data transfer protocol; the other is a control
protocol known as RTCP (RTP Control Protocol).

RTP Protocol Architecture


In RTP, there is close coupling between the RTP functionality and the application-
layer functionality. Indeed, RTP is best viewed as a framework that applications can
use directly to implement a single protocol. Without the application-specific
information, RTP is not a full protocol. On the other hand, RTP imposes a structure and
defines common functions so that individual real-time applications are relieved of
part of their burden.
RTP follows the principles of protocol architecture design outlined in a paper
by Clark and Tennenhouse [CLAR90]. The two key concepts presented in that
paper are application-level framing and integrated layer processing.

Application-Level Framing
In a traditional transport protocol, such as TCP, the responsibility for recovering
from lost portions of data is performed transparently at the transport layer.
[CLAR90] lists two scenarios in which it might be more appropriate for recovery
from lost data to be performed by the application:

1. The application, within limits, may accept less than perfect delivery and
continue unchecked. This is the case for real-time audio and video. For such
applications, it may be necessary to inform the source in more general terms about
the quality of the delivery rather than to ask for retransmission. If too much
data are being lost, the source might perhaps move to a lower-quality
transmission that places lower demands on the network, increasing the probability
of delivery.
2. It may be preferable to have the application rather than the transport
protocol provide data for retransmission. This is useful in the following contexts:

9RFC 1889, RTP: A Transport Protocol for Real-Time Applications, January 1996.

a. The sending application may recompute lost data values rather than storing
them.
b. The sending application can provide revised values rather than simply
retransmitting lost values, or send new data that “fix” the consequences of
the original loss.

To enable the application to have control over the retransmission function,
Clark and Tennenhouse propose that lower layers, such as presentation and
transport, deal with data in units that the application specifies. The application should
break the flow of data into application-level data units (ADUs), and the lower
layers must preserve these ADU boundaries as they process the data. The
application-level frame is the unit of error recovery. Thus, if a portion of an ADU is lost in
transmission, the application will typically be unable to make use of the remaining
portions. In such a case, the application layer will discard all arriving portions and
arrange for retransmission of the entire ADU, if necessary.
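As a toy illustration of this principle, the receiver sketched below treats the ADU as the unit of recovery: any ADU with a missing fragment is discarded whole rather than partially delivered. The fragment-numbering scheme is a hypothetical one invented for the example:

```python
# Toy illustration of application-level framing: the ADU is the unit of
# error recovery, so an ADU with any missing fragment is dropped whole
# (and would be retransmitted in its entirety, if at all).

def recover_adus(fragments, frags_per_adu):
    """fragments: dict mapping (adu_id, frag_index) -> bytes for the
    fragments that actually arrived. Returns only complete ADUs."""
    adu_ids = {adu for (adu, _) in fragments}
    complete = {}
    for adu in sorted(adu_ids):
        parts = [fragments.get((adu, i)) for i in range(frags_per_adu)]
        if all(p is not None for p in parts):
            complete[adu] = b"".join(parts)
        # else: discard all arriving fragments of this ADU
    return complete

arrived = {(0, 0): b"he", (0, 1): b"llo",   # ADU 0 complete
           (1, 0): b"wo"}                   # ADU 1 missing fragment 1: dropped
print(recover_adus(arrived, 2))             # {0: b'hello'}
```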

Integrated Layer Processing


In a typical layered protocol architecture, such as TCP/IP or OSI, each layer
of the architecture contains a subset of the functions to be performed for
communications, and each layer must logically be structured as a separate module in end
systems. Thus, on transmission, a block of data flows down through and is
sequentially processed by each layer of the architecture. This structure restricts the
implementer from invoking certain functions in parallel or out of the layered order to
achieve greater efficiency. Integrated layer processing, as proposed in [CLAR90],
captures the idea that adjacent layers may be tightly coupled and that the
implementer should be free to implement the functions in those layers in a tightly
coupled manner.
The idea that a strict protocol layering may lead to inefficiencies has been
propounded by a number of researchers. For example, [CROW92] examined the
inefficiencies of running a remote procedure call (RPC) on top of TCP and
suggested a tighter coupling of the two layers. The researchers argued that the
integrated layer processing approach is preferable for efficient data transfer.
Figure 18.9 illustrates the manner in which RTP realizes the principle of
integrated layer processing. RTP is designed to run on top of a connectionless transport
protocol such as UDP. UDP provides the basic port addressing functionality of the
transport layer. RTP contains further transport-level functions, such as sequencing.
However, RTP by itself is not complete. It is completed by modifications and/or
additions to the RTP headers to include application-layer functionality. The figure
indicates that several different standards for encoding video data can be used in
conjunction with RTP for video transmission.

RTP Data Transfer Protocol

We first look at the basic concepts of the RTP data transfer protocol and then
examine the protocol header format. Throughout this section, the term RTP will refer to
the RTP data transfer protocol.

Figure 18.9 RTP Protocol Architecture [THOM96]

RTP Concepts
RTP supports the transfer of real-time data among a number of participants
in a session. A session is simply a logical association among two or more RTP
entities that is maintained for the duration of the data transfer. A session is defined by
the following:

• RTP port number: The destination port address is used by all participants for
RTP transfers. If UDP is the lower layer, this port number appears in the
Destination Port field (see Figure 2.1) of the UDP header.
• RTCP port number: The destination port address is used by all participants
for RTCP transfers.
• Participant IP addresses: This can either be a multicast IP address, so that the
multicast group defines the participants, or a set of unicast IP addresses.

The process of setting up a session is beyond the scope of RTP and RTCP.
Although RTP can be used for unicast real-time transmission, its strength lies
in its ability to support multicast transmission. For this purpose, each RTP data unit
includes a source identifier that identifies which member of the group generated
the data. It also includes a timestamp so that the proper timing can be re-created on
the receiving end using a delay buffer. RTP also identifies the payload format of the
data being transmitted.
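As an illustration of how a receiver might use the timestamp together with a delay buffer, the sketch below computes a playout schedule: each packet is played at a time derived from its RTP timestamp plus a fixed buffering delay that absorbs jitter. The clock rate and delay values are illustrative assumptions, not RTP requirements:

```python
# Sketch of timing re-creation with a delay buffer: playout times are
# derived from RTP timestamps (not arrival times), plus a fixed delay
# chosen to absorb network jitter.

def playout_schedule(packets, clock_rate, base_delay):
    """packets: list of (rtp_timestamp, payload). Returns (playout_time_s,
    payload) pairs relative to the first packet's timestamp."""
    first_ts = packets[0][0]
    return [((ts - first_ts) / clock_rate + base_delay, payload)
            for ts, payload in packets]

# 8 kHz audio, 20 ms packets (160 timestamp units), 100 ms delay buffer
pkts = [(0, b"p0"), (160, b"p1"), (320, b"p2")]
for t, p in playout_schedule(pkts, 8000, 0.100):
    print(round(t, 3), p)   # 0.1 b'p0' / 0.12 b'p1' / 0.14 b'p2'
```

Packets arriving after their scheduled playout time would simply be discarded, which is why the buffering delay trades latency against loss.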
RTP allows the use of two kinds of RTP relays: translators and mixers. First
we need to define the concept of relay. A relay operating at a given protocol layer
is an intermediate system that acts as both a destination and a source in a data
transfer. For example, suppose that system A wishes to send data to system B but cannot
do so directly. Possible reasons are that B may be behind a firewall or B may not be
able to use the format transmitted by A. In such a case, A may be able to send the
data to an intermediate relay R. R accepts the data unit, makes any necessary
changes or performs any necessary processing, and then transmits the data to B.

A mixer is an RTP relay that receives streams of RTP packets from one or
more sources, combines these streams, and forwards a new RTP packet stream to
one or more destinations. The mixer may change the data format or simply perform
the mixing function. Because the timing among the multiple inputs is not typically
synchronized, the mixer provides the timing information in the combined packet
stream and identifies itself as the source of synchronization.
An example of the use of a mixer is to combine a number of on/off sources
such as audio. Suppose that a number of systems are members of an audio session
and each generates its own RTP stream. Most of the time only one source is active,
although occasionally more than one source will be “speaking” at the same time.
A new system may wish to join the session, but its link to the network may not be
of sufficient capacity to carry all of the RTP streams. Instead, a mixer could receive
all of the RTP streams, combine them into a single stream, and retransmit that
stream to the new session member. If more than one incoming stream is active at
one time, the mixer would simply sum their PCM values. The RTP header
generated by the mixer includes the identifier(s) of the source(s) that contributed to the
data in each packet.
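The summing operation just described can be sketched as follows; the clamp to the 16-bit sample range is an implementation choice to avoid overflow, not something RTP mandates:

```python
# Sketch of a mixer summing linear PCM samples from concurrently active
# sources, clamping to the 16-bit signed range to avoid overflow.

def mix(streams):
    """streams: list of equal-length lists of 16-bit linear PCM samples."""
    mixed = []
    for samples in zip(*streams):
        s = sum(samples)
        mixed.append(max(-32768, min(32767, s)))  # clamp (design choice)
    return mixed

# Two active sources; the third sample would overflow and is clamped.
print(mix([[100, -200, 30000], [50, -100, 30000]]))  # [150, -300, 32767]
```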
The translator is a simpler device that produces one or more outgoing RTP
packets for each incoming RTP packet. The translator may change the format of the
data in the packet or use a different lower-level protocol suite to transfer from one
domain to another. Examples of translator use include the following:

• A potential recipient may not be able to handle a high-speed video signal used
by the other participants. The translator converts the video to a lower-quality
format requiring a lower data rate.
• An application-level firewall may prevent the forwarding of IP packets. Two
translators are used, one on each side of the firewall, with the outside one
tunneling all multicast packets received through a secure connection to the
translator inside the firewall. The inside translator then sends out RTP packets to
a multicast group protected by the firewall.
• A translator can replicate an incoming multicast RTP packet and send it to a
number of unicast destinations.

RTP Fixed Header


Each RTP packet includes a fixed header and may also include additional
application-specific header fields. Figure 18.10 shows the fixed header. The first
twelve octets (shaded portion) are always present and consist of the following fields:

• Version (2 bits): Current version is 2.


• Padding (1 bit): Indicates whether padding octets appear at the end of the
payload. If so, the last octet of the payload contains a count of the number of
padding octets. Padding is used if the application requires that the payload be
an integer multiple of some length, such as 32 bits.
• Extension (1 bit): If set, the fixed header is followed by exactly one extension
header, which is used for experimental extensions to RTP.

• CSRC Count (4 bits): The number of CSRC identifiers that follow the fixed
header.
• Marker (1 bit): The interpretation of the marker bit depends on the payload
type; it is typically used to indicate a boundary in the data stream. For video,
it is set to mark the end of a frame. For audio, it is set to mark the beginning
of a talk spurt.
• Payload Type (7 bits): Identifies the format of the RTP payload, which follows
the RTP header.
• Sequence Number (16 bits): Each source starts with a random sequence
number, which is incremented by one for each RTP data packet sent. This allows
for loss detection and packet sequencing within a series of packets with the
same timestamp. A number of consecutive packets may have the same
timestamp if they are logically generated at the same time; an example is several
packets belonging to the same video frame.
• Timestamp (32 bits): Corresponds to the generation instant of the first octet
of data in the payload. The time units of this field depend on the payload type.
The values must be generated from a local clock at the source.
• Synchronization Source Identifier: A randomly generated value that uniquely
identifies the source within a session.

Following the fixed header, there may be one or more of the following fields:

• Contributing Source Identifier: Identifies a contributing source for the
payload. These identifiers are supplied by a mixer.

The Payload Type field identifies the media type of the payload and the
format of the data, including the use of compression or encryption. In a steady state,
a source should only use one payload type during a session but may change the

 0   2 3 4     8 9              16                              31
+---+-+-+-----+-+---------------+-------------------------------+
| V |P|X| CC  |M| Payload type  |        Sequence number        |
+---+-+-+-----+-+---------------+-------------------------------+
|                           Timestamp                           |
+---------------------------------------------------------------+
|           Synchronization source (SSRC) identifier            |
+---------------------------------------------------------------+
|            Contributing source (CSRC) identifier              |
+---------------------------------------------------------------+
|            Contributing source (CSRC) identifier              |
+---------------------------------------------------------------+

V = Version
P = Padding
X = Extension
CC = CSRC count
M = Marker

Figure 18.10 RTP Header
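A minimal parser for the twelve-octet fixed header of Figure 18.10 can be written with ordinary bit operations. This sketch assumes network byte order and ignores CSRC identifiers and header extensions:

```python
import struct

# Minimal parser for the 12-octet RTP fixed header (Figure 18.10).
# CSRC identifiers and extension headers, if present, follow these octets.
def parse_rtp_header(data):
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", data[:12])
    return {
        "version": b0 >> 6,          # 2 bits
        "padding": (b0 >> 5) & 1,    # 1 bit
        "extension": (b0 >> 4) & 1,  # 1 bit
        "csrc_count": b0 & 0x0F,     # 4 bits
        "marker": b1 >> 7,           # 1 bit
        "payload_type": b1 & 0x7F,   # 7 bits
        "sequence": seq,
        "timestamp": ts,
        "ssrc": ssrc,
    }

# Build a header by hand: V=2, no P/X, CC=0, M=1, PT=0 (PCMU), seq=1, ts=160
hdr = struct.pack("!BBHII", 2 << 6, (1 << 7) | 0, 1, 160, 0x1234)
f = parse_rtp_header(hdr)
print(f["version"], f["marker"], f["payload_type"],
      f["sequence"], f["timestamp"])   # 2 1 0 1 160
```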



Table 18.3 Payload Types for Standard Audio and Video Encodings (RFC 1890)

0       PCMU audio
1       1016 audio
2       G721 audio
3       GSM audio
4       unassigned audio
5       DVI4 audio (8 kHz)
6       DVI4 audio (16 kHz)
7       LPC audio
8       PCMA audio
9       G722 audio
10      L16 audio (stereo)
11      L16 audio (mono)
12-13   unassigned audio
14      MPA audio
15      G728 audio
16-23   unassigned audio
24      unassigned video
25      CelB video
26      JPEG video
27      unassigned
28      nv video
29-30   unassigned video
31      H261 video
32      MPV video
33      MP2T video
34-71   unassigned
72-76   reserved
77-95   unassigned
96-127  dynamic

payload type in response to changing conditions, as discovered by RTCP. Table 18.3
summarizes the payload types defined in RFC 1890.10

RTP Control Protocol (RTCP)


The RTP data transfer protocol is used only for the transmission of user data,
typically in multicast fashion among all participants in a session. A separate control
protocol (RTCP) also operates in a multicast fashion to provide feedback to RTP data
sources as well as all session participants. RTCP uses the same underlying transport
service as RTP (usually UDP) and a separate port number. Each participant
periodically issues an RTCP packet to all other session members. RFC 1889 outlines
four functions performed by RTCP:

• Quality of Service (QoS) and congestion control: RTCP provides feedback on
the quality of data distribution. Because RTCP packets are multicast, all
session members can assess how well other members are performing and
receiving. Sender reports enable receivers to estimate data rates and the quality of
the transmission. Receiver reports indicate any problems encountered by
receivers, including missing packets and excessive jitter. For example, an
audio-video application might decide to reduce the rate of transmission over
low-speed links if the traffic quality over the links is not high enough to
support the current rate. The feedback from receivers is also important in
diagnosing distribution faults. By monitoring reports from all session recipients, a
10RFC 1890, RTP Profile for Audio and Video Conferences with Minimal Control, January 1996.

network manager can tell whether a problem is specific to a single user or
more widespread.
• Identification: RTCP packets carry a persistent textual description of the
RTCP source. This provides more information about the source of data
packets than the random SSRC identifier and enables a user to associate multiple
streams from different sessions. For example, separate sessions for audio and
video may be in progress.
• Session size estimation and scaling: To perform the first two functions, all
participants send periodic RTCP packets. The rate of transmission of such
packets must be scaled down as the number of participants increases. In a session
with few participants, RTCP packets are sent at the maximum rate of one
every five seconds. RFC 1889 includes a relatively complex algorithm by
which each participant limits its RTCP rate on the basis of the total session
population. The objective is to limit RTCP traffic to less than 5% of total
session traffic.
• Session control: RTCP optionally provides minimal session control information.
An example is a participant identification to be displayed in the user interface.
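The scaling idea behind the third function can be sketched as follows. This is a greatly simplified version of the RFC 1889 interval calculation — the real algorithm adds randomization and divides the budget between senders and receivers — keeping only the 5% bandwidth budget and the five-second floor:

```python
# Greatly simplified sketch of RTCP report-interval scaling: each
# participant spaces its reports so that aggregate RTCP traffic stays
# below 5% of the session bandwidth, with a 5-second minimum interval.

def rtcp_interval(members, session_bw_bps, avg_rtcp_size_bytes):
    rtcp_budget = 0.05 * session_bw_bps / 8      # bytes/s for ALL RTCP traffic
    interval = members * avg_rtcp_size_bytes / rtcp_budget
    return max(5.0, interval)                    # seconds between this node's reports

print(rtcp_interval(4, 128_000, 100))                # small session: 5.0 (floor)
print(round(rtcp_interval(1000, 128_000, 100), 1))   # 1000 members: 125.0
```

As the membership grows, each participant reports less often, so total RTCP load stays roughly constant.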

An RTCP transmission consists of a number of separate RTCP packets
bundled in a single UDP datagram (or other lower-level data unit). The following packet
types are defined in RFC 1889:

• Sender Report (SR)
• Receiver Report (RR)
• Source Description (SDES)
• Goodbye (BYE)
• Application specific

Figure 18.11 depicts the formats of these packet types. Each type begins with
a 32-bit word containing the following fields:

• Version (2 bits): Current version is 2.
• Padding (1 bit): If set, indicates that this packet contains padding octets at
the end of the control information. If so, the last octet of the padding contains
a count of the number of padding octets.
• Count (5 bits): The number of reception report blocks contained in an SR or
RR packet (RC), or the number of source items contained in an SDES or
BYE packet.
• Packet Type (8 bits): Identifies the RTCP packet type.
• Length (16 bits): Length of this packet in 32-bit words, minus one.

In addition, the Sender Report and Receiver Report packets contain the following field:

• Synchronization Source Identifier: Identifies the source of this RTCP packet.
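The Length field convention (32-bit words, minus one) is a common source of off-by-one errors when walking a bundled RTCP datagram; the conversion in both directions is simply:

```python
# The RTCP Length field counts 32-bit words minus one, so a packet of
# N octets (N a multiple of 4) carries length = N/4 - 1.

def length_field(packet_octets):
    assert packet_octets % 4 == 0, "RTCP packets end on 32-bit boundaries"
    return packet_octets // 4 - 1

def total_octets(length_field_value):
    return (length_field_value + 1) * 4

print(length_field(28), total_octets(6))   # 6 28
```

Advancing by total_octets(length) from the start of one packet lands on the next packet in the bundle.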

We now turn to a description of each packet type.


Figure 18.11 RTCP Formats: (a) RTCP Sender Report; (b) RTCP Receiver Report;
(c) RTCP Application-defined packet; (d) RTCP Source Description

Sender Report (SR)

RTCP receivers provide reception quality feedback using a Sender Report or
a Receiver Report, depending on whether the receiver is also a sender during this
session. Figure 18.11a shows the format of a Sender Report. The Sender Report
consists of a header, already described; a sender information block; and zero or more
reception report blocks. The sender information block includes the following fields:

• NTP Timestamp (64 bits): The absolute wallclock time when this report was
sent; this is an unsigned fixed-point number with the integer part in the first 32
bits and the fractional part in the last 32 bits. This may be used by the sender
in combination with timestamps returned in receiver reports to measure
round-trip time to those receivers.
• RTP Timestamp (32 bits): This is the relative time used to create timestamps
in RTP data packets. This lets recipients place this report in the appropriate
time sequence with RTP data packets from this source.
• Sender's Packet Count (32 bits): Total number of RTP data packets
transmitted by this sender so far in this session.
• Sender's Octet Count (32 bits): Total number of RTP payload octets
transmitted by this sender so far in this session.

Following the sender information block are zero or more reception report
blocks. One reception block is included for each source from which this participant
has received data during this session. Each block includes the following fields:

• SSRC_n (32 bits): Identifies the source referred to by this report block.
• Fraction lost (8 bits): The fraction of RTP data packets from SSRC_n lost
since the previous SR or RR packet was sent.
• Cumulative number of packets lost (24 bits): Total number of RTP data
packets from SSRC_n lost during this session.
• Extended highest sequence number received (32 bits): The least significant 16
bits record the highest RTP data sequence number received from SSRC_n.
The most significant 16 bits record the number of times the sequence number
has wrapped back to zero.
• Interarrival jitter (32 bits): An estimate of the jitter experienced on RTP data
packets from SSRC_n, explained later.
• Last SR timestamp (32 bits): The middle 32 bits of the NTP timestamp in the
last SR packet received from SSRC_n. This captures the least significant half
of the integer and the most significant half of the fractional part of the
timestamp and should be adequate.
• Delay since last SR (32 bits): The delay, expressed in units of 2^-16 seconds,
between receipt of the last SR packet from SSRC_n and the transmission of
this report block. These last two fields can be used by a source to estimate
round-trip time to a particular receiver.
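The round-trip estimate enabled by the last two fields works as follows: when the report block arrives back at the original sender at time A (taken as the middle 32 bits of the sender's current NTP timestamp, in the same 1/65536-second units), the estimate is A - LSR - DLSR. A sketch, with illustrative values:

```python
# Round-trip time estimate from a reception report block:
# rtt = A - LSR - DLSR, where A is the (middle-32-bit NTP) arrival time of
# the report at the sender. All three values are in units of 2**-16 seconds.

def rtt_seconds(arrival, lsr, dlsr):
    """arrival, lsr, dlsr: 32-bit values in units of 2**-16 seconds."""
    return ((arrival - lsr - dlsr) & 0xFFFFFFFF) / 65536.0

arrival = 0x00050000   # report received back at 5.0 s
lsr     = 0x00040000   # the SR it refers to was sent at 4.0 s
dlsr    = 0x00008000   # receiver held the report for 0.5 s before replying
print(rtt_seconds(arrival, lsr, dlsr))   # 0.5
```

Subtracting DLSR removes the receiver's holding time, so only the network round trip remains.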

Recall that delay jitter was defined as the maximum variation in delay
experienced by packets in a single session. There is no simple way to measure this
quantity at the receiver, but it is possible to estimate the average jitter in the following
way. At a particular receiver, define the following parameters for a given source:

S(I) = Timestamp from RTP data packet I
R(I) = Time of arrival for RTP data packet I, expressed in RTP timestamp
units. The receiver must use the same clock frequency (increment
interval) as the source but need not synchronize time values with
the source
D(I) = The difference between the interarrival time at the receiver and the
spacing between adjacent RTP data packets leaving the source
J(I) = Estimated average interarrival jitter up to the receipt of RTP data
packet I

The value of D(I) is calculated as

D(I) = (R(I) - R(I - 1)) - (S(I) - S(I - 1))

Thus, D(I) measures how much the spacing between arriving packets differs from
the spacing between transmitted packets. In the absence of jitter, the spacings will
be the same and D(I) will have a value of 0. The interarrival jitter is calculated
continuously as each data packet I is received, according to the formula

J(I) = J(I - 1) + (|D(I)| - J(I - 1))/16

J(I) is calculated as an exponential average11 of observed values of D(I). Only a
small weight is given to the most recent observation, so that temporary fluctuations
do not invalidate the estimate.
The values in the Sender Report enable senders, receivers, and network
managers to monitor conditions on the network as they relate to a particular session.
For example, packet loss values give an indication of persistent congestion, while the
jitter measures transient congestion. The jitter measure may provide a warning of
increasing congestion before it leads to packet loss.
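The jitter estimation just described can be written out directly. The sketch below uses an exponential-average gain of 1/16, the noise-reduction weight specified in RFC 1889; the sample values are illustrative:

```python
# Interarrival jitter estimate: D(I) compares the spacing of arrivals with
# the spacing of RTP timestamps, and J(I) is an exponential average of
# |D(I)| with weight 1/16 (the gain RFC 1889 specifies).

def jitter_estimates(timestamps, arrivals):
    """timestamps S(I) and arrivals R(I), both in RTP timestamp units."""
    j = 0.0
    estimates = []
    for i in range(1, len(timestamps)):
        d = (arrivals[i] - arrivals[i - 1]) - (timestamps[i] - timestamps[i - 1])
        j = j + (abs(d) - j) / 16
        estimates.append(j)
    return estimates

# Packets stamped every 160 units; the third one arrives 32 units late.
S = [0, 160, 320, 480]
R = [1000, 1160, 1352, 1480]
print([round(j, 3) for j in jitter_estimates(S, R)])   # [0.0, 2.0, 3.875]
```

Note that the late third packet raises the estimate, and the estimate keeps rising (rather than canceling out) when the fourth packet closes the gap, because the average is taken over |D(I)|.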

Receiver Report (RR)

The format for the Receiver Report (Figure 18.11b) is the same as that for a
Sender Report, except that the Packet Type field has a different value and there is
no sender information block.

Source Description (SDES)

The Source Description packet (Figure 18.11d) is used by a source to provide
more information about itself. The packet consists of a 32-bit header followed by
zero or more chunks, each of which contains information describing this source.
Each chunk begins with an identifier for this source or for a contributing source.

11For comparison, see Equation (12.4).


Table 18.4 SDES Types (RFC 1889)

Value   Name    Description
0       END     End of SDES list
1       CNAME   Canonical name: unique among all participants within one RTP session
2       NAME    Real user name of the source
3       EMAIL   Email address
4       PHONE   Telephone number
5       LOC     Geographic location
6       TOOL    Name of application generating the stream
7       NOTE    Transient message describing the current state of the source
8       PRIV    Private experimental or application-specific extensions

This is followed by a list of descriptive items. Table 18.4 lists the types of
descriptive items defined in RFC 1889.

Goodbye (BYE)
The BYE packet indicates that one or more sources are no longer active. This
confirms to receivers that a prolonged silence is due to departure rather than
network failure. If a BYE packet is received by a mixer, it is forwarded with the list of
sources unchanged. The format of the BYE packet consists of a 32-bit header
followed by one or more source identifiers. Optionally, the packet may include a textual
description of the reason for leaving.

Application-Defined Packet
This packet is intended for experimental use for functions and features that
are application specific. Ultimately, an experimental packet type that proves
generally useful may be assigned a packet type number and become part of the
standardized RTCP.

18.4 RECOMMENDED READING AND WEB SITES

[ZHAN93] is a good overview of the philosophy and functionality of RSVP, written by its
developers. [BLAC01] provides a thorough treatment of MPLS. [WANG01] covers not only
MPLS but IS and DS; it also has an excellent chapter on MPLS traffic engineering. [VISW98]
includes a concise overview of the MPLS architecture and describes the various proprietary
efforts that preceded MPLS.

BLAC01 Black, U. MPLS and Label Switching Networks. Upper Saddle River, NJ: Prentice Hall, 2001.
