Txc-Chapter 3 Setembre 2020

Technologies for core networks.


All of these are layer 2 (or 2.5) technologies, and each name identifies what the switch switches (frames, cells, labels, Ethernet frames).
At the end we will look at congestion control, which applies to all of these technologies.

1
See Chapter 7 of the Stallings book.

2
The routing tables of the IP network give the path from the terminal to the server through routers C, A and D. The virtual circuits of the layer 2 network must then be followed.

3
4
To see the need for data link control, we list some of the requirements and objectives for effective data communication between two directly connected transmitting-receiving stations:

• Frame synchronization: Data are sent in blocks called frames. The beginning and end of each frame must be recognizable. We briefly introduced this topic with the discussion of synchronous frames (Figure 6.2).

• Flow control: The sending station must not send frames at a rate faster
than the receiving station can absorb them.

• Error control: Bit errors introduced by the transmission system should be corrected.

• Addressing: On a shared link, such as a local area network (LAN), the identity of the two stations involved in a transmission must be specified.

5
• Control and data on same link: It is usually not desirable to have a
physically separate communications path for control information.
Accordingly, the receiver must be able to distinguish control information
from the data being transmitted.

• Link management: The initiation, maintenance, and termination of a sustained data exchange require a fair amount of coordination and cooperation among stations. Procedures for the management of this exchange are required.

5
Flow control is a technique for assuring that a transmitting entity does not
overwhelm a receiving entity with data. The receiving entity typically allocates a
data buffer of some maximum length for a transfer. When data are received, the
receiver must do a certain amount of processing before passing the data to the
higher-level software. In the absence of flow control, the receiver's buffer may
fill up and overflow while it is processing old data.

6
To begin, we examine mechanisms for flow control in the absence of errors. The model we will use is depicted in Figure 7.1a, which is a vertical time sequence diagram. It has the advantages of showing time dependencies and illustrating the correct send–receive relationship. Each arrow represents a single frame transiting a data link between two stations. The data are sent in a sequence of frames, with each frame containing a portion of the data and some control information. The time it takes for a station to emit all of the bits of a frame onto the medium is the transmission time; this is proportional to the length of the frame. The propagation time is the time it takes for a bit to traverse the link between source and destination. For this section, we assume that all frames that are transmitted are successfully received; no frames are lost and none arrive with errors. Furthermore, frames arrive in the same order in which they are sent. However, each transmitted frame suffers an arbitrary and variable amount of delay before reception.

7
The simplest form of flow control, known as stop-and-wait flow control, works as follows. A source entity transmits a frame. After the destination entity receives the frame, it indicates its willingness to accept another frame by sending back an acknowledgment to the frame just received. The source must wait until it receives the acknowledgment before sending the next frame. The destination can thus stop the flow of data simply by withholding acknowledgment. This procedure works fine and, indeed, can hardly be improved upon when a message is sent in a few large frames (a minimal sketch of the mechanism follows the list below). However, it is often the case that a source will break up a large block of data into smaller blocks and transmit the data in many frames. This is done for the following reasons:

8
• The buffer size of the receiver may be limited.

• The longer the transmission, the more likely that there will be an error, necessitating retransmission of the entire frame. With smaller frames, errors are detected sooner, and a smaller amount of data needs to be retransmitted.

• On a shared medium, such as a LAN, it is usually desirable not to permit one station to occupy the medium for an extended period, thus causing long delays at the other sending stations.
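
To make the stop-and-wait discipline concrete, here is a minimal Python sketch (the queue-based link and the names sender/receiver are illustrative, not from the text): the sender transmits one frame, then blocks until the acknowledgment arrives.

import queue
import threading

link = queue.Queue()      # frames in transit, sender -> receiver
ack_link = queue.Queue()  # acknowledgments, receiver -> sender

def sender(blocks):
    for block in blocks:
        link.put(block)              # transmit one frame
        ack_link.get()               # stop and wait for the ACK
        print(f"sender: {block!r} acknowledged")

def receiver(n_frames):
    for _ in range(n_frames):
        frame = link.get()           # receive the frame
        # ...process frame, pass the data to the higher layer...
        ack_link.put("ACK")          # withholding this would stop the flow

blocks = ["frame0", "frame1", "frame2"]
t = threading.Thread(target=receiver, args=(len(blocks),))
t.start()
sender(blocks)
t.join()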

8
In situations where the bit length of the link is greater than the frame length, serious inefficiencies result. This is illustrated in Figure 7.2.

In the figure, the transmission time is normalized to one, and the propagation delay is expressed as the variable a.

When a is less than 1, the propagation time is less than the transmission time. In this case, the frame is sufficiently long that the first bits of the frame have arrived at the destination before the source has completed transmission of the frame. When a is greater than 1, the propagation time is greater than the transmission time. In this case, the sender completes transmission of the entire frame before the leading bits of that frame arrive at the receiver. Put another way, larger values of a are consistent with higher data rates and/or longer distances between stations. Chapter 16 discusses a and data link performance.

Both parts of Figure 7.2 (a and b) consist of a sequence of snapshots of the transmission process over time. In both cases, the first four snapshots show the process of transmitting a frame containing data, and the last snapshot shows the return of a small acknowledgment frame. Note that for a > 1 the line is always underutilized, and even for a < 1 the line is inefficiently utilized. In essence, for very high data rates or for very long distances between sender and receiver, stop-and-wait flow control provides inefficient line utilization.
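
As a rough quantitative check, the standard closed-form utilization of stop-and-wait (derived in Chapter 16 of Stallings) is U = 1/(1 + 2a); the short Python sketch below tabulates it for a few values of a (the function name is ours):

def stop_and_wait_utilization(a: float) -> float:
    # Fraction of time the line carries useful data: U = 1 / (1 + 2a)
    return 1.0 / (1.0 + 2.0 * a)

for a in (0.1, 1.0, 10.0):
    print(f"a = {a:5.1f}  ->  U = {stop_and_wait_utilization(a):.3f}")
# a = 0.1 -> 0.833; a = 1.0 -> 0.333; a = 10.0 -> 0.048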

9
10
The essence of the problem described so far is that only one frame at a time can be in transit. Efficiency can be greatly improved by allowing multiple frames to be in transit at the same time.

Consider two stations, A and B, connected via a full-duplex link. Station B allocates buffer space for W frames. Thus, B can accept W frames, and A is allowed to send W frames without waiting for any acknowledgments. To keep track of which frames have been acknowledged, each is labeled with a k-bit sequence number. This gives a range of sequence numbers of 0 through 2^k − 1, and frames are numbered modulo 2^k, with a maximum window size of 2^k − 1. The window size need not be the maximum possible size for a given sequence number length k. B acknowledges a frame by sending an acknowledgment that includes the sequence number of the next frame expected. This scheme can also be used to acknowledge multiple frames, and is referred to as sliding-window flow control. Most data link control protocols also allow a station to cut off the flow of frames from the other side by sending a Receive Not Ready (RNR) message, which acknowledges former frames but forbids transfer of future frames. At some subsequent point, the station must send a normal acknowledgment to reopen the window. If two stations exchange data, each needs to maintain two windows, one for transmit and one for receive, and each side needs to send data and acknowledgments to the other. To provide efficient support for this requirement, a feature known as piggybacking is typically provided. Each data frame includes a field that holds the sequence number of that frame plus a field that holds the sequence number used for acknowledgment.
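
A minimal sketch of the sender-side bookkeeping, assuming k-bit sequence numbers and a window no larger than 2^k − 1 (class and method names are ours, not from the standard):

class SlidingWindowSender:
    def __init__(self, k: int, window: int):
        assert window <= 2**k - 1, "window must not exceed 2^k - 1"
        self.modulus = 2**k
        self.window = window
        self.base = 0          # oldest unacknowledged sequence number
        self.next_seq = 0      # sequence number for the next new frame
        self.buffered = {}     # unacked frames kept for retransmission

    def can_send(self) -> bool:
        outstanding = (self.next_seq - self.base) % self.modulus
        return outstanding < self.window

    def send(self, data):
        assert self.can_send()
        seq = self.next_seq
        self.buffered[seq] = data              # keep a copy until acked
        self.next_seq = (seq + 1) % self.modulus
        return seq                             # would go in the frame header

    def on_ack(self, next_expected: int):
        # RR N acknowledges every frame up to, but not including, N.
        while self.base != next_expected:
            self.buffered.pop(self.base, None)
            self.base = (self.base + 1) % self.modulus

s = SlidingWindowSender(k=3, window=7)
for i in range(3):
    s.send(f"frame{i}")                  # F0, F1, F2 outstanding
s.on_ack(3)                              # RR 3: everything through F2 acked
print(s.can_send(), len(s.buffered))     # True 0

The final lines mirror the start of the Figure 7.4 narrative: after RR 3 arrives, A may again send a full window of seven frames, beginning with frame 3.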

11
Figure 7.3 is a useful way of depicting the sliding-window process. It assumes the use of a 3-bit sequence number, so that frames are numbered sequentially from 0 through 7, and then the same numbers are reused for subsequent frames. The shaded rectangle indicates the frames that may be sent; in this figure, the sender may transmit five frames, beginning with frame 0. Each time a frame is sent, the shaded window shrinks; each time an acknowledgment is received, the shaded window grows. Frames between the vertical bar and the shaded window have been sent but not yet acknowledged. As we shall see, the sender must buffer these frames in case they need to be retransmitted.

The window size need not be the maximum possible size for a given sequence number length. For example, using a 3-bit sequence number, a window size of 5 could be configured for the stations using the sliding-window flow control protocol.

12
An example is shown in Figure 7.4. The example assumes a 3-bit sequence
number field and a maximum window size of seven frames. Initially, A and B
have windows indicating that A may transmit seven frames, beginning with frame
0 (F0). After transmitting three frames (F0, F1, F2) without acknowledgment, A
has shrunk its window to four frames and maintains a copy of the three
transmitted frames. The window indicates that A may transmit four frames,
beginning with frame number 3. B then transmits an RR (receive ready) 3, which
means "I have received all frames up through frame number 2 and am ready to
receive frame number 3; in fact, I am prepared to receive seven frames,
beginning with frame number 3." With this acknowledgment, A is back up to
permission to transmit seven frames, still beginning with frame 3; also A may
discard the buffered frames that have now been acknowledged. A proceeds to
transmit frames 3, 4, 5, and 6. B returns RR 4, which acknowledges F3, and
allows transmission of F4 through the next instance of F2. By the time this RR
reaches A, it has already transmitted F4, F5, and F6, and therefore A may only
open its window to permit sending four frames beginning with F7.

Sliding-window flow control is potentially much more efficient than stop-and-wait flow control. The reason is that, with sliding-window flow control, the transmission link is treated as a pipeline that may be filled with frames in transit. By contrast, with stop-and-wait flow control, only one frame may be in the pipe at a time. Chapter 16 quantifies the improvement in efficiency.

13
14
15
Error control refers to mechanisms to detect and correct errors that occur in the transmission of frames. The model that we will use, which covers the typical case, is illustrated in Figure 7.1b. As before, data are sent as a sequence of frames; frames arrive in the same order in which they are sent; and each transmitted frame suffers an arbitrary and potentially variable amount of delay before reception. In addition, we admit the possibility of two types of errors:

• Lost frame: A frame fails to arrive at the other side. In the case of a network, the network may simply fail to deliver a frame. In the case of a direct point-to-point data link, a noise burst may damage a frame to the extent that the receiver is not aware that a frame has been transmitted.

16
• Damaged frame: A recognizable frame does arrive, but some of the bits are in
error (have been altered during transmission).

The most common techniques for error control are based on some or all of the
following ingredients:

• Error detection: The destination detects frames that are in error, using the
techniques described in the preceding chapter, and discards those frames.

• Positive acknowledgment: The destination returns a positive acknowledgment to successfully received, error-free frames.

• Retransmission after timeout: The source retransmits a frame that has not
been acknowledged after a predetermined amount of time.

• Negative acknowledgment and retransmission: The destination returns a negative acknowledgment to frames in which an error is detected. The source retransmits such frames.

16
Collectively, these mechanisms are all referred to as automatic repeat request (ARQ). The effect of ARQ is to turn a potentially unreliable data link into a reliable one. Three versions of ARQ have been standardized:

• Stop-and-wait ARQ

• Go-back-N ARQ

• Selective-reject ARQ

All of these forms are based on the use of the flow control techniques discussed in Section 7.1. We examine each in turn.

17
Stop-and-wait ARQ is based on the stop-and-wait flow control technique outlined
previously. The source station transmits a single frame and then must await an
acknowledgment (ACK). No other data frames can be sent until the destination
station's reply arrives at the source station.

Two sorts of errors could occur. First, the frame that arrives at the destination
could be damaged. The receiver detects this by using the error-detection
technique referred to earlier and simply discards the frame. To account for this
possibility, the source station is equipped with a timer. After a frame is
transmitted, the source station waits for an acknowledgment. If no
acknowledgment is received by the time that the timer expires, then the same
frame is sent again. Note that this method requires that the transmitter maintain a
copy of a transmitted frame until an acknowledgment is received for that frame.

The second sort of error is a damaged acknowledgment, which is not recognizable by A, which will therefore time out and resend the same frame. This duplicate frame arrives and is accepted by B. B has therefore accepted two copies of the same frame as if they were separate. To avoid this problem, frames are alternately labeled with 0 or 1, and positive acknowledgments are of the form ACK0 and ACK1. In keeping with the sliding-window convention, an ACK0 acknowledges receipt of a frame numbered 1 and indicates that the receiver is ready for a frame numbered 0.
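
A hedged Python sketch of the retransmission logic with alternating sequence bits; make_lossy_link is a toy stand-in for the real link (loss of a frame and loss of its ACK are folded into one loss rate):

import random

random.seed(1)

def make_lossy_link(loss_rate=0.3):
    # Toy link: an intact frame immediately queues ACK(1 - seq);
    # a lost frame (or lost ACK) leaves the queue empty.
    pending_acks = []

    def send_frame(seq, payload):
        if random.random() >= loss_rate:
            pending_acks.append(1 - seq)

    def recv_ack():
        # Returns None when the timer would expire with no ACK.
        return pending_acks.pop() if pending_acks else None

    return send_frame, recv_ack

def transmit_reliably(frames, send_frame, recv_ack):
    seq = 0                                  # frames alternate 0, 1, 0, ...
    for payload in frames:
        while True:
            send_frame(seq, payload)         # (re)transmit the saved copy
            if recv_ack() == 1 - seq:        # ACK(1 - seq) acknowledges seq
                break
            # timer expired or damaged ACK: send the same frame again
        seq = 1 - seq

send, recv = make_lossy_link()
transmit_reliably(["a", "b", "c"], send, recv)
print("all frames delivered in order")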

18
Figure 7.5 gives an example of the use of stop-and-wait ARQ, showing the transmission of a sequence of frames from source A to destination B. The figure shows the two types of errors just described. The third frame transmitted by A is lost or damaged and therefore B does not return an ACK. A times out and retransmits the frame. Later, A transmits a frame labeled 1 but the ACK0 for that frame is lost. A times out and retransmits the same frame. When B receives two frames in a row with the same label, it discards the second frame but sends back an ACK0 to each.

The principal advantage of stop-and-wait ARQ is its simplicity. Its principal disadvantage, as discussed in Section 7.1, is that stop-and-wait is an inefficient mechanism. The sliding-window flow control technique can be adapted to provide more efficient line use; in this context, it is sometimes referred to as continuous ARQ.

19
The form of error control based on sliding-window flow control that is most commonly used is called go-back-N ARQ. In this method, a station may send a series of frames sequentially numbered modulo some maximum value. The number of unacknowledged frames outstanding is determined by window size, using the sliding-window flow control technique. While no errors occur, the destination will acknowledge incoming frames as usual (RR = receive ready, or piggybacked acknowledgment). If the destination station detects an error in a frame, it may send a negative acknowledgment (REJ = reject) for that frame, as explained in the following rules. The destination station will discard that frame and all future incoming frames until the frame in error is correctly received. Thus, the source station, when it receives a REJ, must retransmit the frame in error plus all succeeding frames that were transmitted in the interim.
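
A minimal sketch of the go-back-N sender's reaction to RR and REJ (names are ours; tx stands in for the physical transmission):

class GoBackNSender:
    def __init__(self, k: int = 3):
        self.modulus = 2**k
        self.window = 2**k - 1
        self.base = 0                 # oldest unacknowledged frame
        self.next_seq = 0
        self.unacked = {}             # copies of all outstanding frames

    def send(self, payload, tx):
        assert (self.next_seq - self.base) % self.modulus < self.window
        self.unacked[self.next_seq] = payload
        tx(self.next_seq, payload)
        self.next_seq = (self.next_seq + 1) % self.modulus

    def on_rr(self, n):               # RR n: everything before n is acked
        while self.base != n:
            del self.unacked[self.base]
            self.base = (self.base + 1) % self.modulus

    def on_rej(self, n, tx):          # REJ n: go back and resend n onward
        self.on_rr(n)                 # frames before n are implicitly acked
        seq = n
        while seq != self.next_seq:   # retransmit n plus all successors
            tx(seq, self.unacked[seq])
            seq = (seq + 1) % self.modulus

s = GoBackNSender(k=3)
sent = []
tx = lambda seq, p: sent.append(seq)
for i in range(7):
    s.send(f"F{i}", tx)
s.on_rej(4, tx)        # REJ 4: frames 4, 5, 6 go again
print(sent)            # [0, 1, 2, 3, 4, 5, 6, 4, 5, 6]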

20
With selective-reject ARQ, the only frames retransmitted are those that receive a
negative acknowledgment, in this case called SREJ, or those that time out.
Selective reject would appear to be more efficient than go-back-N, because it
minimizes the amount of retransmission. On the other hand, the receiver must
maintain a buffer large enough to save post-SREJ frames until the frame in error
is retransmitted and must contain logic for reinserting that frame in the proper
sequence. The transmitter, too, requires more complex logic to be able to send a
frame out of sequence. Because of such complications, selective-reject ARQ is much
less widely used than go-back-N ARQ. Selective reject is a useful choice for a
satellite link because of the long propagation delay involved.

21
Figure 7.6a is an example of the frame flow for go-back-N ARQ. Because of the propagation delay on the line, by the time that an acknowledgment (positive or negative) arrives back at the sending station, it has already sent at least one additional frame beyond the one being acknowledged. In this example, frame 4 is damaged. Frames 5 and 6 are received out of order and are discarded by B. When frame 5 arrives, B immediately sends a REJ 4. When the REJ to frame 4 is received, not only frame 4 but frames 5 and 6 must be retransmitted. Note that the transmitter must keep a copy of all unacknowledged frames. Figure 7.6a also shows an example of retransmission after timeout. No acknowledgment is received for frame 5 within the timeout period, so A issues an RR to determine the status of B.

22
Figure 7.6b illustrates Selective-Reject ARQ. When frame 5 is received
out of order, B sends a SREJ 4, indicating that frame 4 has not been received.
However, B continues to accept incoming frames and buffers them until a valid
frame 4 is received. At that point, B can place the frames in the proper order for
delivery to higher-layer software.

22
The most important data link control protocol is HDLC (ISO 3309, ISO 4335). Not only is HDLC widely used, but it is the basis for many other important data link control protocols, which use the same or similar formats and the same mechanisms as employed in HDLC.
To satisfy a variety of applications, HDLC defines three station types:
• Primary station: Responsible for controlling the operation of the link. Frames
issued by the primary are called commands.
• Secondary station: Operates under the control of the primary station. Frames
issued by a secondary are called responses. The primary maintains a separate
logical link with each secondary station on the line.
• Combined station: Combines the features of primary and secondary. A
combined station may issue both commands and responses.
It also defines two link configurations:
• Unbalanced configuration: Consists of one primary and one or more
secondary stations and supports both full-duplex and half-duplex transmission.
• Balanced configuration: Consists of two combined stations and supports both
full-duplex and half-duplex transmission.

23
Network topology (the physical network may have any configuration):
ABM: only two stations, both combined stations with the same link-control capability. There is no POLL. This is the most common case in WANs at the local level.
NRM: there is one primary station and one or more secondaries. The primary controls the link. To access the medium, the secondaries must receive a POLL from the primary (P bit set to 1 in RR or I frames). The primary sends data to the secondaries with a SELECT (I frames). Secondaries cannot communicate with each other.

24
HDLC defines three data transfer modes:

• Normal response mode (NRM): Used with an unbalanced configuration. The primary may initiate data transfer to a secondary, but a secondary may only transmit data in response to a command from the primary. NRM is used on multidrop lines, in which a number of terminals are connected to a host computer.

• Asynchronous balanced mode (ABM): Used with a balanced configuration. Either combined station may initiate transmission without receiving permission from the other combined station. ABM is the most widely used of the three modes; it makes more efficient use of a full-duplex point-to-point link because there is no polling overhead.

• Asynchronous response mode (ARM): Used with an unbalanced configuration. The secondary may initiate transmission without explicit permission of the primary. The primary still retains responsibility for the line, including initialization, error recovery, and logical disconnection. ARM is rarely used; it is applicable to some special situations in which a secondary may need to initiate transmission.

25
HDLC uses synchronous transmission. All transmissions are in the form of frames, and a single frame format suffices for all types of data and control exchanges.

Figure 7.7 depicts the structure of the HDLC frame. The flag, address, and control fields that precede the information field are known as a header. The frame check sequence and flag fields following the data field are referred to as a trailer.

26
Flag fields delimit the frame at both ends with the unique pattern 01111110. A single flag may be used as the closing flag for one frame and the opening flag for the next. On both sides of the user-network interface, receivers are continuously hunting for the flag sequence to synchronize on the start of a frame. While receiving a frame, a station continues to hunt for that sequence to determine the end of the frame. Because the protocol allows the presence of arbitrary bit patterns (i.e., there are no restrictions on the content of the various fields imposed by the link protocol), there is no assurance that the pattern 01111110 will not appear somewhere inside the frame, thus destroying synchronization. To avoid this problem, a procedure known as bit stuffing is used. For all bits between the starting and ending flags, the transmitter inserts an extra 0 bit after each occurrence of five 1s in the frame. After detecting a starting flag, the receiver monitors the bit stream. When a pattern of five 1s appears, the sixth bit is examined. If this bit is 0, it is deleted. If the sixth bit is a 1 and the seventh bit is a 0, the combination is accepted as a flag. If the sixth and seventh bits are both 1, the sender is indicating an abort condition.

With the use of bit stuffing, arbitrary bit patterns can be inserted into the data field of the frame. This property is known as data transparency.

Figure 7.8 shows an example of bit stuffing. Note that in the first two cases, the extra 0 is not strictly necessary for avoiding a flag pattern but is necessary for the operation of the algorithm.
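
A small Python sketch of the stuffing rule on a string of '0'/'1' characters (illustrative only; flag detection and the abort case described above are not modeled):

def stuff(bits: str) -> str:
    # Insert a 0 after every run of five consecutive 1s.
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # stuffed bit: the receiver will delete it
            run = 0
    return "".join(out)

def unstuff(bits: str) -> str:
    # Delete the 0 that follows every run of five 1s (assumes a
    # correctly stuffed stream with no flags or aborts inside).
    out, run, skip = [], 0, False
    for b in bits:
        if skip:              # this is the stuffed 0: drop it
            skip = False
            run = 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True
            run = 0
    return "".join(out)

data = "0111111101111101"     # contains two runs of five-plus 1s
assert unstuff(stuff(data)) == data
print(stuff(data))            # 18 bits out: two 0s were stuffed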

27
The address field identifies the secondary station that transmitted or is to receive the frame. This field is not needed for point-to-point links but is always included for the sake of uniformity. The address field is usually 8 bits long but, by prior agreement, an extended format may be used in which the actual address length is a multiple of 7 bits. The leftmost bit of each octet is 1 or 0 according as it is or is not the last octet of the address field. The remaining 7 bits of each octet form part of the address. The single-octet address of 11111111 is interpreted as the all-stations address in both basic and extended formats. It is used to allow the primary to broadcast a frame for reception by all secondaries.

28
All of the control field formats contain the poll/final (P/F) bit. Its use depends on context. Typically, in command frames, it is referred to as the P bit and is set to 1 to solicit (poll) a response frame from the peer HDLC entity. In response frames, it is referred to as the F bit and is set to 1 to indicate the response frame transmitted as a result of a soliciting command.

Note that the basic control field for S- and I-frames uses 3-bit sequence numbers. With the appropriate set-mode command, an extended control field can be used for S- and I-frames that employs 7-bit sequence numbers. U-frames always contain an 8-bit control field.

29
HDLC operation consists of the exchange of I-frames, S-frames, and U-frames between two stations. The various commands and responses defined for these frame types are listed in Table 7.1.

30
To better understand HDLC operation, several examples are presented in Figure 7.9. In the example diagrams, each arrow includes a legend that specifies the frame name, the setting of the P/F bit, and, where appropriate, the values of N(R) and N(S). The setting of the P or F bit is 1 if the designation is present and 0 if absent.

Figure 7.9a shows the frames involved in link setup and disconnect. The HDLC protocol entity for one side issues an SABM command to the other side and starts a timer. The other side, upon receiving the SABM, returns a UA response and sets local variables and counters to their initial values. The initiating entity receives the UA response, sets its variables and counters, and stops the timer. The logical connection is now active, and both sides may begin transmitting frames. Should the timer expire without a response to an SABM, the originator will repeat the SABM, as illustrated. This would be repeated until a UA or DM is received or until, after a given number of tries, the entity attempting initiation gives up and reports failure to a management entity. In such a case, higher-layer intervention is necessary. The same figure (Figure 7.9a) shows the disconnect procedure. One side issues a DISC command, and the other responds with a UA response.

Figure 7.9b illustrates the full-duplex exchange of I-frames. When an entity sends a number of I-frames in a row with no incoming data, then the receive sequence number is simply repeated (e.g., I,1,1; I,2,1 in the A-to-B direction). When an entity receives a number of I-frames in a row with no outgoing frames, then the receive sequence number in the next outgoing frame must reflect the cumulative activity (e.g., I,1,3 in the B-to-A direction). Note that, in addition to I-frames, data exchange may involve supervisory frames.

Figure 7.9c shows an operation involving a busy condition. Such a condition may arise because an HDLC entity is not able to process I-frames as fast as they are arriving, or the intended user is not able to accept data as fast as they arrive in I-frames. In either case, the entity's receive buffer fills up and it must halt the incoming flow of I-frames, using an RNR command. In this example, A issues an RNR, which requires B to halt transmission of I-frames. The station receiving the RNR will usually poll the busy station at some periodic interval by sending an RR with the P bit set. This requires the other side to respond with either an RR or an RNR. When the busy condition has cleared, A returns an RR, and I-frame transmission from B can resume.

An example of error recovery using the REJ command is shown in Figure 7.9d. In this example, A transmits I-frames numbered 3, 4, and 5. Number 4 suffers an error and is lost. When B receives I-frame number 5, it discards this frame because it is out of order and sends an REJ with an N(R) of 4. This causes A to initiate retransmission of I-frames previously sent, beginning with frame 4. A may continue to send additional frames after the retransmitted frames.

An example of error recovery using a timeout is shown in Figure 7.9e. In this example, A transmits I-frame number 3 as the last in a sequence of I-frames. The frame suffers an error. B detects the error and discards it. However, B cannot send an REJ, because there is no way to know if this was an I-frame. If an error is detected in a frame, all of the bits of that frame are suspect, and the receiver has no way to act upon it. A, however, would have started a timer as the frame was transmitted. This timer has a duration long enough to span the expected response time. When the timer expires, A initiates recovery action. This is usually done by polling the other side with an RR command with the P bit set to determine the status of the other side. Because the poll demands a response, the entity will receive a frame containing an N(R) field and be able to proceed. In this case, the response indicates that frame 3 was lost, which A retransmits.

These examples are not exhaustive. However, they should give the reader a good feel for the behavior of HDLC.

31
Some of the features that PPP offers which are not available in HDLC include:
Link quality management: a way to monitor the quality of a link in PPP. When PPP detects too many errors on a link, the link is shut down.
Authentication using PAP and/or CHAP.
PPP operation rests on three elements: encapsulation of frames using an HDLC-like protocol; LCP (Link Control Protocol) for establishment, configuration and testing of the link; and NCP (Network Control Protocols) to negotiate the different layer 3 protocols.

32
More information: https://www.cisco.com/c/en/us/tech/wan/point-to-point-protocol-ppp/index.html

Components of PPP
Point-to-Point Protocol is a layered protocol having three components −
Encapsulation Component − It encapsulates the datagram so that it can be
transmitted over the specified physical layer.
Link Control Protocol (LCP) − It is responsible for establishing, configuring,
testing, maintaining and terminating links for transmission. It also imparts
negotiation for set up of options and use of features by the two endpoints of the
links.
Authentication Protocols (AP) − These protocols authenticate endpoints for use
of services. The two authentication protocols of PPP are:
Password Authentication Protocol (PAP)
Challenge Handshake Authentication Protocol (CHAP)
Network Control Protocols (NCPs) − These protocols are used for negotiating
the parameters and facilities for the network layer. For every higher-layer
protocol supported by PPP, there is one NCP. Some of the NCPs of PPP are:
Internet Protocol Control Protocol (IPCP)
OSI Network Layer Control Protocol (OSINLCP)
Internetwork Packet Exchange Control Protocol (IPXCP)

33
DECnet Phase IV Control Protocol (DNCP)
NetBIOS Frames Control Protocol (NBFCP)
IPv6 Control Protocol (IPV6CP)

33
The LLC layer for LANs is similar in many respects to other link layers in
common use. Like all link layers, LLC is concerned with the transmission of a
link-level PDU between two stations, without the necessity of an intermediate
switching node. LLC has two characteristics not shared by most other link control
protocols:
1. It must support the multi-access, shared-medium nature of the link (this differs
from a multidrop line in that there is no primary node).
2. It is relieved of some details of link access by the MAC layer.
Addressing in LLC involves specifying the source and destination LLC users.
Typically, a user is a higher-layer protocol or a network management function in
the station. These LLC user addresses are referred to as service access points
(SAPs), in keeping with OSI terminology for the user of a protocol layer.

34
LLC specifies the mechanisms for addressing stations across the medium and for
controlling the exchange of data between two users. The operation and format of
this standard is based on HDLC. Three services are provided as alternatives for
attached devices using LLC:
Unacknowledged connectionless service: This service is a datagram-style
service. It is a very simple service that does not involve any of the flow- and
error-control mechanisms. Thus, the delivery of data is not guaranteed. However,
in most devices, there will be some higher layer of software that deals with
reliability issues.
Connection-mode service: This service is similar to that offered by HDLC. A
logical connection is set up between two users exchanging data, and flow control
and error control are provided.
Acknowledged connectionless service: This is a cross between the previous two
services. It provides that datagrams are to be acknowledged, but no prior logical
connection is set up.

35
All three LLC protocols employ the same PDU format (Figure 11.5),
which consists of four fields. The DSAP (Destination Service Access Point)
and SSAP (Source Service Access Point) fields each contain a 7-bit
address, which specifies the destination and source users of LLC. One bit
of the DSAP indicates whether the DSAP is an individual or group address.
One bit of the SSAP indicates whether the PDU is a command or response
PDU. The format of the LLC control field is identical to that of HDLC
(Figure 7.7), using extended (7-bit) sequence numbers.
For type 1 operation, which supports the unacknowledged connectionless service, the unnumbered information (UI) PDU is used to transfer user data. There is no acknowledgment, flow control, or error control. However, there is error detection and discard at the MAC level.

36
IEEE 802.3 defines three types of MAC frames. The basic frame is the original frame format. In addition, to support data link layer protocol encapsulation within the data portion of the frame, two additional frame types have been added. A Q-tagged frame supports 802.1Q VLAN capability, as described in Section 12.3. An envelope frame is intended to allow inclusion of additional prefixes and suffixes to the data field required by higher-layer encapsulation protocols such as those defined by the IEEE 802.1 working group (such as Provider Bridges and MAC Security), ITU-T, or IETF (such as MPLS). Figure 12.4 depicts the frame format for all three types of frames; the differences are contained in the MAC Client Data field. Several additional fields encapsulate the frame to form an 802.3 packet.

37
The current 1-Gbps specification for IEEE 802.3 includes the following physical
layer alternatives (Figure 12.5):

38
Four physical layer options are defined for 10-Gbps Ethernet (Figure 12.7). The first three of these have two suboptions: an "R" suboption and a "W" suboption. The R designation refers to a family of physical layer implementations that use a signal encoding technique known as 64B/66B, described in Appendix 12A. The R implementations are designed for use over dark fiber, meaning a fiber optic cable that is not in use and that is not connected to any other equipment. The W designation refers to a family of physical layer implementations that also use 64B/66B signaling but that are then encapsulated to connect to SONET equipment.

39
IEEE 802.3ba specifies three types of transmission media (Table 12.3): copper
backplane, twisted pair, and optical fiber. For copper media, four separate
physical lanes are specified. For optical fiber, either 4 or 10 wavelength lanes are
specified, depending on data rate and distance.

40
The IEEE 802.1Q standard, last updated in 2005, defines the operation of VLAN bridges and switches that permits the definition, operation and administration of VLAN topologies within a bridged/switched LAN infrastructure. In this section, we will concentrate on the application of this standard to 802.3 LANs. Recall from Chapter 11 that a VLAN is an administratively configured broadcast domain, consisting of a subset of end stations attached to a LAN. A VLAN is not limited to one switch but can span multiple interconnected switches. In that case, traffic between switches must indicate VLAN membership. This is accomplished in 802.1Q by inserting a tag with a VLAN identifier (VID) with a value in the range from 1 to 4094. Each VLAN in a LAN configuration is assigned a globally unique VID. By assigning the same VID to end systems on many switches, one or more VLAN broadcast domains can be extended across a large network. Figure 12.10 shows the position and content of the 802.1Q tag, referred to as Tag Control Information (TCI). The presence of the 2-octet TCI field is indicated by setting the Length/Type field in the 802.3 MAC frame to a value of 8100 hex. The TCI consists of three subfields:

User priority (3 bits): the priority level for this frame.

Canonical format indicator (1 bit): always set to zero for Ethernet switches. CFI is used for compatibility between Ethernet-type networks and Token Ring-type networks. If a frame received at an Ethernet port has the CFI set to 1, then that frame should not be forwarded as is to an untagged port.

VLAN identifier (12 bits): the identification of the VLAN. Of the 4096 possible VIDs, a VID of 0 is used to indicate that the TCI contains only a priority value, and 4095 (FFF) is reserved, so the maximum possible number of VLAN configurations is 4094.
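
A hedged sketch of pulling the three TCI subfields out of a tagged frame (field offsets follow the layout just described; the example values are invented):

import struct

def parse_tci(tci: int):
    priority = (tci >> 13) & 0x7    # user priority, 3 bits
    cfi      = (tci >> 12) & 0x1    # canonical format indicator, 1 bit
    vid      = tci & 0x0FFF         # VLAN identifier, 12 bits
    return priority, cfi, vid

fragment = bytes.fromhex("81006005")      # TPID 0x8100, then the TCI
tpid, tci = struct.unpack("!HH", fragment)
assert tpid == 0x8100                      # the 802.1Q tag is present
print(parse_tci(tci))                      # (3, 0, 5): priority 3, VLAN 5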

41
QinQ makes it possible to extend the number of virtual circuits and to create nestings (one VLAN inside another).
MAC-in-MAC makes it possible to create switching at different hierarchical levels (like the voice telephone network). It is a form of tunneling.

42
MAC-in-MAC makes it possible to create hierarchical switching levels. This reduces the complexity of the routing tables.

43
LAPF core is the layer 2 data transfer protocol. It derives from HDLC by removing the control field, and with it the functions implied by that field, and it introduces the concept of a virtual circuit within the address field.
The flag and frame check sequence (FCS) fields work as in HDLC. The information field carries higher-layer data. If the user chooses to implement additional end-to-end data link control functions, a data link frame can be carried in this field. Specifically, a common choice is to use the full LAPF protocol (known as LAPF control) to perform functions above the LAPF core functions. Note that a protocol implemented in this way operates strictly between the end subscribers and is transparent to the frame relay network. The address field has a default length of 2 bytes and can be extended to 3 or 4 bytes. It carries a data link connection identifier (DLCI) of 10, 16 or 23 bits. The DLCI has the same function as the virtual circuit number in X.25: it allows multiple frame relay connections to be multiplexed on a single channel. As in X.25, the connection identifier has only local significance: each end of the logical connection assigns its own DLCI from the pool of locally unused numbers, and the network must map one onto the other. The alternative, using the same DLCI at both ends, would require some form of global management of DLCI values. The length of the address field, and hence of the DLCI, is determined by the address field extension (EA) bits. The C/R bit is application specific and is not used by the standard frame relay protocol. The remaining bits of the address field relate to congestion control.

44
The asynchronous transfer mode makes use of fixed-size cells, consisting of a 5-
octet header and a 48-octet information field. There are several advantages to the
use of small, fixed-size cells. First, the use of small cells may reduce queuing
delay for a high-priority cell, because it waits less if it arrives slightly behind a
lower-priority cell that has gained access to a resource (e.g., the transmitter).
Second, it appears that fixed-size cells can be switched more efficiently, which is
important for the very high data rates of ATM [PARE88]. With fixed-size cells, it
is easier to implement the switching mechanism in hardware.

45
1. In the HUNT state, a cell delineation algorithm is performed bit by bit to determine if the HEC coding law is observed (i.e., match between received HEC and calculated HEC). Once a match is achieved, it is assumed that one header has been found, and the method enters the PRESYNC state.

2. In the PRESYNC state, a cell structure is now assumed. The cell delineation algorithm is performed cell by cell until the encoding law has been confirmed consecutively d times.

3. In the SYNC state, the HEC is used for error detection and correction (see Figure 11.7). Cell delineation is assumed to be lost if the HEC coding law is recognized consecutively as incorrect a times.

The values of a and d are design parameters. Greater values of d result in longer delays in establishing synchronization but in greater robustness against false delineation. Greater values of a result in longer delays in recognizing a misalignment but in greater robustness against false misalignment.
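
A compact sketch of the three-state machine (hec_ok stands in for the real HEC check over a candidate 5-octet header; the generator yields the state after each candidate, and d and a keep their meaning from the text — the defaults below are only illustrative):

def delineate(headers, hec_ok, d=6, a=7):
    state, hits, misses = "HUNT", 0, 0
    for h in headers:
        ok = hec_ok(h)
        if state == "HUNT":
            if ok:
                state, hits = "PRESYNC", 0   # assume a cell boundary
        elif state == "PRESYNC":
            if not ok:
                state = "HUNT"               # false delineation: hunt again
            else:
                hits += 1
                if hits == d:                # confirmed d consecutive times
                    state, misses = "SYNC", 0
        else:                                # SYNC
            misses = misses + 1 if not ok else 0
            if misses == a:                  # lost after a failures in a row
                state = "HUNT"
        yield state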

46
For the SDH-based physical layer, framing is imposed using the STM-1 (STS-3) frame. Stallings DCC9e Figure 11.13 shows the payload portion of an STM-1 frame (for comparison, see Stallings DCC9e Figure 8.11). The H4 octet in the path overhead is set at the sending side to indicate the next occurrence of a cell boundary. That is, the value in the H4 field indicates the number of octets to the first cell boundary following the H4 octet. The permissible range of values is 0 to 52.

The advantages of the SDH-based approach include:

It can be used to carry either ATM-based or STM-based (synchronous transfer mode) payloads, making it possible to initially deploy a high-capacity fiber-based transmission infrastructure for a variety of circuit-switched and dedicated applications and then readily migrate to the support of ATM.

Some specific connections can be circuit switched using an SDH channel. For example, a connection carrying constant-bit-rate video traffic can be mapped into its own exclusive payload envelope of the STM-1 signal, which can be circuit switched. This may be more efficient than ATM switching.

Using SDH synchronous multiplexing techniques, several ATM streams can be combined to build interfaces with higher bit rates than those supported by the ATM layer at a particular site. For example, four separate ATM streams, each with a bit rate of 155 Mbps (STM-1), can be combined to build a 622-Mbps (STM-4) interface. This arrangement may be more cost effective than one using a single 622-Mbps ATM stream.

47
The AAL5 protocol introduces a trailer that indicates the length of the payload. The PAD is padding that makes the total length a multiple of 48 octets, since it must be carried inside ATM cells.

48
The whole AAL5 PDU (CPCS-PDU) is divided into 48-octet pieces: the PAD is inserted and each piece is placed inside an ATM cell. The last cell carries the third bit of the PTI set to 1, so the receiver can reassemble the original CPCS-PDU.
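
The padding arithmetic can be checked with a few lines of Python (the 8-octet trailer length is part of AAL5; the function names are ours):

TRAILER = 8  # CPCS-UU, CPI, Length, CRC-32

def aal5_pad(payload_len: int) -> int:
    # Smallest PAD so that payload + PAD + trailer fills whole cells.
    return (-(payload_len + TRAILER)) % 48

for n in (40, 41, 100):
    pad = aal5_pad(n)
    cells = (n + pad + TRAILER) // 48
    print(f"payload {n:3d} -> PAD {pad:2d}, {cells} cell(s)")
# payload 40 -> PAD 0, 1 cell; 41 -> PAD 47, 2 cells; 100 -> PAD 36, 3 cells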

49
An ATM network is designed to be able to transfer many different types of traffic
simultaneously, including real-time flows such as voice, video, and bursty TCP
flows. Although each such traffic flow is handled as a stream of 53-octet cells
traveling through a virtual channel, the way in which each data flow is handled
within the network depends on the characteristics of the traffic flow and the
requirements of the application. For example, real-time video traffic must be
delivered with minimum variation in delay. We examine the way in which an
ATM network handles different types of traffic flows in Chapter 13. In this
section, we summarize ATM service categories, which are used by an end system
to identify the type of service required.

50
MPLS is considered layer 2.5. It is a way of creating virtual circuits for the IP level.
See Chapter 23 of the 10th edition of Stallings (English).

51
52
53
54
In essence, MPLS is an efficient technique for forwarding and routing packets. MPLS was designed with IP networks in mind, but the technology can be used without IP to construct a network with any link-level protocol, including ATM and frame relay. In an ordinary packet-switching network, packet switches must examine various fields within the packet header to determine destination, route, quality of service (QoS), and any traffic management functions (such as discard or delay) that may be supported. Similarly, in an IP-based network, routers examine a number of fields in the IP header to determine these functions. In an MPLS network, a fixed-length label encapsulates an IP packet or a data link frame. The MPLS label contains all the information needed by an MPLS-enabled router to perform routing, delivery, QoS, and traffic management functions. Unlike IP, MPLS is connection oriented.

55
MPLS makes it easy to commit network resources in such a way as to balance the
load in the face of a given demand and to commit to differential levels of support
to meet various user traffic requirements. The ability to define routes
dynamically, plan resource commitments on the basis of known demand, and
optimize network utilization is referred to as traffic engineering. Prior to the
advent of MPLS, the one networking technology that provided strong traffic
engineering capabilities was ATM. With the basic IP mechanism, there is a
primitive form of automated traffic engineering. Specifically, routing protocols
such as OSPF enable routers to dynamically change the route to a given
destination on a packet-by-packet basis to try to balance load. But such dynamic
routing reacts in a very simple manner to congestion and does not provide a way
to support QoS. All traffic between two endpoints follows the same route, which
may be changed when congestion occurs. MPLS, on the other hand, is aware of
not just individual packets but flows of packets in which each flow has certain
QoS requirements and a predictable traffic demand. With MPLS, it is possible to
set up routes on the basis of these individual flows, with two different flows
between the same endpoints perhaps following different routes. Further, when
congestion threatens, MPLS paths can be rerouted intelligently. That is, instead of
simply changing the route on a packet-by-packet basis, with MPLS, the routes are
changed on a flow-by-flow basis, taking advantage of the known traffic demands
of each flow. Effective use of traffic engineering can substantially increase usable
network capacity.

56
An MPLS network or internet consists of a set of nodes, called label switching routers (LSRs), capable of switching and routing packets on the basis of a label that has been appended to each packet. Labels define a flow of packets between two endpoints or, in the case of multicast,
between a source endpoint and a multicast group of destination endpoints. For each distinct flow,
called a forwarding equivalence class (FEC), a specific path through the network of LSRs is
defined, called a label switched path (LSP). In essence, an FEC represents a group of packets
that share the same transport requirements. All packets in an FEC receive the same treatment en
route to the destination. These packets follow the same path and receive the same QoS treatment
at each hop. In contrast to the forwarding in ordinary IP networks, the assignment of a particular
packet to a particular FEC is done just once, when the packet enters the network of MPLS routers.
Thus, MPLS is a connection-oriented technology. Associated with each FEC is a traffic
characterization that defines the QoS requirements for that flow. The LSRs need not examine or
process the IP header but rather simply forward each packet based on its label value. Each LSR
builds a table, called a label information base (LIB), to specify how a packet must be treated and
forwarded. Thus, the forwarding process is simpler than with an IP router.

57
Figure depicts the operation of MPLS within a domain of MPLS-enabled routers. The following are key elements of the operation:

1. Prior to the routing and delivery of packets in a given FEC, a path through the network, known as a label switched path (LSP), must be defined and the QoS parameters along that path must be established. The QoS parameters determine (1) how much resources to commit to the path, and (2) what queuing and discarding policy to establish at each LSR for packets in this FEC. To accomplish these tasks, two protocols are used to exchange the necessary information among routers:

(a) An interior routing protocol, such as OSPF, is used to exchange reachability and routing information.

(b) Labels must be assigned to the packets for a particular FEC. Because the use of globally unique labels would impose a management burden and limit the number of usable labels, labels have local significance only, as discussed subsequently. A network operator can specify explicit routes manually and assign the appropriate label values. Alternatively, a protocol is used to determine the route and establish label values between adjacent LSRs. Either of two protocols can be used for this purpose: the Label Distribution Protocol (LDP) or an enhanced version of RSVP. LDP is now considered the standard technique, with the RSVP approach deprecated.

2. A packet enters an MPLS domain through an ingress edge LSR, where it is processed to determine which network-layer services it requires, defining its QoS. The LSR assigns this packet to a particular FEC, and therefore a particular LSP; appends the appropriate label to the packet; and forwards the packet. If no LSP yet exists for this FEC, the edge LSR must cooperate with the other LSRs in defining a new LSP.

3. Within the MPLS domain, as each LSR receives a labeled packet, it (a) removes the incoming label and attaches the appropriate outgoing label to the packet, and (b) forwards the packet to the next LSR along the LSP.

4. The egress edge LSR strips the label, reads the IP packet header, and forwards the packet to its final destination.

58
Stallings DCC9e Figure 21.2 shows the label-handling and forwarding operation in more detail. Each LSR maintains a forwarding table for each LSP passing through the LSR. When a labeled packet arrives, the LSR indexes the forwarding table to determine the next hop. For scalability, as was mentioned, labels have local significance only. Thus, the LSR removes the incoming label from the packet and attaches the matching outgoing label before forwarding the packet. The ingress edge LSR determines the FEC for each incoming unlabeled packet and, on the basis of the FEC, assigns the packet to a particular LSP, attaches the corresponding label, and forwards the packet. In this example, the first packet arrives at the edge LSR, which reads the IP header for the destination address prefix, 128.89. The LSR then looks up the destination address in the switching table, inserts a label with a 20-bit label value of 19, and forwards the labeled packet out interface 1. This interface is attached via a link to a core LSR, which receives the packet on its interface 2. The LSR in the core reads the label and looks up its match in its switching table, then replaces label 19 with label 24, and forwards it out interface 0. The egress LSR reads and looks up label 24 in its table, which says to strip the label and forward the packet out interface 0.
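
A toy rendition of the three forwarding tables in that walk-through (the interface on which the egress LSR receives the packet is not given in the text, so the value 1 below is an assumption; the table layout is illustrative, not the actual LIB encoding):

# ingress: FEC (destination prefix) -> (out interface, out label)
ingress_fib = {"128.89": (1, 19)}
# core and egress: (in interface, in label) -> (out interface, out label)
core_table = {(2, 19): (0, 24)}
egress_table = {(1, 24): (0, None)}       # None = pop label, forward the IP packet

def ingress(prefix):
    return ingress_fib[prefix]            # classify once, at the edge

def swap(table, in_if, label):
    return table[(in_if, label)]          # out label None means "strip"

print(ingress("128.89"))                  # (1, 19)
print(swap(core_table, 2, 19))            # (0, 24)
print(swap(egress_table, 1, 24))          # (0, None)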

59
Let us now look at an example that illustrates the various stages of operation of MPLS, Figure 21.3. We examine the path of a packet as it travels from a source workstation to a destination server. Across the MPLS network, the packet enters at ingress node LSR 1. Assume that this is the first occurrence of a packet on a new flow of packets, so that LSR 1 does not have a label for the packet. LSR 1 consults the IP header to find the destination address and then determines the next hop. Assume in this case that the next hop is LSR 3. Then, LSR 1 initiates a label request toward LSR 3. This request propagates through the network as indicated by the dashed green line. Each intermediate router receives a label from its downstream router, starting from LSR 7 and going upstream until LSR 1, setting up an LSP. The LSP setup is indicated by the dashed grey line. The setup can be performed using LDP and may or may not involve traffic engineering considerations. LSR 1 is now able to insert the appropriate label and forward the packet to LSR 3. Each subsequent LSR (LSR 5, LSR 6, LSR 7) examines the label in the received packet, replaces it with the outgoing label, and forwards it. When the packet reaches LSR 7, the LSR removes the label because the packet is departing the MPLS domain and delivers the packet to the destination.

60
One of the most powerful features of MPLS is label stacking. A labeled packet
may carry a number of labels, organized as a last-in-first-out stack. Processing is
always based on the top label. At any LSR, a label may be added to the stack
(push operation) or removed from the stack (pop operation). Label stacking
allows the aggregation of LSPs into a single LSP for a portion of the route
through a network, creating a tunnel. The term tunnel refers to the fact that traffic
routing is determined by labels, and is exercised below normal IP routing and
filtering mechanisms. At the beginning of the tunnel, an LSR assigns the same
label to packets from a number of LSPs by pushing the label onto each packet's
stack. At the end of the tunnel, another LSR pops the top element from the label
stack, revealing the inner label. This is similar to ATM, which has one level of
stacking (virtual channels inside virtual paths) but MPLS supports unlimited
stacking. Label stacking provides considerable flexibility. An enterprise could
establish MPLS-enabled networks at various sites and establish a number of LSPs
at each site. The enterprise could then use label stacking to aggregate multiple
flows of its own traffic before handing it to an access provider. The access
provider could aggregate traffic from multiple enterprises before handing it to a
larger service provider. Service providers could aggregate many LSPs into a
relatively small number of tunnels between points of presence. Fewer tunnels
means smaller tables, making it easier for a provider to scale the network core.
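
The stack discipline itself is just push and pop on a last-in-first-out list; a minimal sketch (the label values are invented):

stack = []                 # label stack carried by one packet

def push(label):           # entering a tunnel: aggregate LSPs under one label
    stack.append(label)

def pop():                 # leaving the tunnel: reveal the inner label
    return stack.pop()

push(19)                   # site-level LSP label
push(87)                   # access-provider tunnel label (hypothetical value)
print(stack[-1])           # 87: forwarding uses only the top label
pop()                      # tunnel egress pops the top entry
print(stack[-1])           # 19: the inner label is visible again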

61
An MPLS label is a 32-bit field consisting of the following elements (Stallings
DCC9e Figure 21.4), defined in RFC 3032:
Label value: Locally significant 20-bit label. Values 0 through 15 are reserved.
Traffic class (TC): 3 bits used to carry traffic class information.
S: Set to one for the oldest entry in the stack, and zero for all other entries. Thus,
this bit marks the bottom of the stack.
Time to live (TTL): 8 bits used to encode a hop count, or time to live, value.
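
A quick sketch of packing and unpacking the 32-bit label stack entry with the field widths listed above (function names are ours):

def pack_entry(label: int, tc: int, s: int, ttl: int) -> int:
    # label: 20 bits, TC: 3 bits, S: 1 bit, TTL: 8 bits
    return (label << 12) | (tc << 9) | (s << 8) | ttl

def unpack_entry(entry: int):
    return ((entry >> 12) & 0xFFFFF,   # label value
            (entry >> 9) & 0x7,        # traffic class
            (entry >> 8) & 0x1,        # bottom-of-stack bit
            entry & 0xFF)              # TTL

e = pack_entry(label=24, tc=0, s=1, ttl=64)
print(hex(e))                          # 0x18140
assert unpack_entry(e) == (24, 0, 1, 64)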

62
The label stack entries appear after the data link layer headers, but before any
network layer headers. The top of the label stack appears earliest in the packet
(closest to the data link header), and the bottom appears latest (closest to the
network layer header), as shown in Figure. The network layer packet immediately
follows the label stack entry that has the S bit set.

63
In a data link frame, such as for PPP (point-to-point protocol), the label stack
appears between the IP header and the data link header (Stallings DCC9e Figure
21.6a). For an IEEE 802 frame, the label stack appears between the IP header and
the LLC (logical link control) header (Figure 21.6b). If MPLS is used over a
connection-oriented network service, a slightly different approach may be taken,
as shown in Figures 21.6c and d. For ATM cells, the label value in the topmost
label is placed in the VPI/VCI field in the ATM cell header. The entire top label
remains at the top of the label stack, which is inserted between the cell header
and the IP header. Placing the label value in the ATM cell header facilitates
switching by an ATM switch, which would, as usual, only need to look at the cell
header. Similarly, the topmost label value can be placed in the DLCI (data link
connection identifier) field of a frame relay header. Note that in both these cases,
the Time to Live field is not visible to the switch and so is not decremented. The
reader should consult the MPLS specifications for the details of the way this
situation is handled.

64
To understand MPLS, it is necessary to understand the operational relationship among FECs,
LSPs, and labels. The specifications covering all of the ramifications of this relationship are
lengthy. In the remainder of this section, we provide a summary.

The essence of MPLS functionality is that traffic is grouped into FECs. The traffic in an FEC
transits an MPLS domain along an LSP. Individual packets in an FEC are uniquely identified as
being part of a given FEC by means of a locally significant label. At each LSR, each labeled
packet is forwarded on the basis of its label value, with the LSR replacing the incoming label
value with an outgoing label value (a minimal sketch of this swap follows below). The overall
scheme imposes a number of requirements. Specifically:
1. Each traffic flow must be assigned to a particular FEC.
2. A routing protocol is needed to determine the topology and current conditions in the domain
so that a particular LSP can be assigned to an FEC. The routing protocol must be able to gather
and use information to support the QoS requirements of the FEC.
3. Individual LSRs must become aware of the LSP for a given FEC, must assign an incoming
label to the LSP, and must communicate that label to any other LSR that may send it packets for
this FEC.
The first requirement is outside the scope of the MPLS specifications. The assignment needs to
be done either by manual configuration, by means of some signaling protocol, or by an analysis
of incoming packets at ingress LSRs.
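
As a sketch of the per-LSR label swap just described (the table contents and interface names are invented for illustration):

# Label-swapping at one LSR: the incoming label alone selects the
# outgoing interface and the replacement label.
lfib = {
    # (in_interface, in_label): (out_interface, out_label)
    ("if0", 17): ("if2", 42),
    ("if1", 99): ("if2", 42),   # two upstream neighbors, same FEC/LSP
}

def forward(in_interface, packet):
    """Swap the top label and return the outgoing interface and packet."""
    out_if, out_label = lfib[(in_interface, packet["label"])]
    packet["label"] = out_label   # replace incoming with outgoing label value
    packet["ttl"] -= 1            # decrement the TTL carried in the label entry
    return out_if, packet

print(forward("if0", {"label": 17, "ttl": 64}))   # ('if2', {'label': 42, 'ttl': 63})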

65
RFC 2702 (Requirements for Traffic Engineering Over MPLS) describes traffic
engineering as follows: Traffic Engineering (TE) is concerned with performance
optimization of operational networks. In general, it encompasses the application
of technology and scientific principles to the measurement, modeling,
characterization, and control of Internet traffic, and the application of such
knowledge and techniques to achieve specific performance objectives. The
aspects of Traffic Engineering that are of interest concerning MPLS are
measurement and control. The goal of MPLS traffic engineering is twofold. First,
traffic engineering seeks to allocate traffic to the network to maximize utilization
of the network capacity. And second, traffic engineering seeks to ensure the most
desirable route through the network for packet traffic, taking into account the
QoS requirements of the various packet flows. In performing traffic engineering,
MPLS may override the shortest path or least-cost route selected by the interior
routing protocol for a given source-destination flow.

66
Figure provides a simple example of traffic engineering. Both R1 and R8 have a
flow of packets to send to R5. Using OSPF or some other routing protocol, the
shortest path is calculated as R2-R3-R4. However, if we assume that R8 has a
steady-state traffic flow of 20 Mbps and R1 has a flow of 40 Mbps, then the
aggregate flow over this route will be 60 Mbps, which will exceed the capacity of
the R3-R4 link. As an alternative, a traffic engineering approach is to determine a
route from source to destination ahead of time and reserve the required resources
along the way by setting up an LSP and associating resource requirements with
that LSP. In this case, the traffic from R8 to R5 follows the shortest route, but the
traffic from R1 to R5 follows a longer route that avoids overloading the network.

67
MPLS TE works by learning about the topology and resources available in a network. It then
maps the traffic flows to a particular path based on the resources that the traffic flow requires and
the available resources. MPLS TE builds unidirectional LSPs from a source to the destination,
which are then used for forwarding traffic. The point where the LSP begins is called the LSP
headend or LSP source, and the node where the LSP ends is called the LSP tailend or LSP tunnel
destination. LSP tunnels allow the implementation of a variety of policies related to network
performance optimization. For example, LSP tunnels can be automatically or manually routed
away from network failures, congestion, and bottlenecks. Furthermore, multiple parallel LSP
tunnels can be established between two nodes, and traffic between the two nodes can be mapped
onto the LSP tunnels according to local policy. The following components work together to
implement MPLS TE (a CSPF sketch follows the list):
Information distribution: A link state protocol, such as Open Shortest Path First (OSPF), is
necessary to discover the topology of the network. OSPF is enhanced to carry additional
information related to TE, such as available bandwidth and other related parameters. OSPF uses
Type 10 (Opaque) Link State Advertisements (LSAs) for this purpose.
Path calculation: Once the topology of the network and the alternative routes are known, a
constraint-based routing scheme is used to find the shortest path through the network that meets
the resource requirements of the traffic flow. The Constrained Shortest Path First (CSPF)
algorithm (discussed subsequently), which operates on the LSP headend, is used for this purpose.
Path setup: A signaling protocol is used to reserve the resources for a traffic flow and to
establish the LSP for it. The IETF has defined two alternative protocols for this purpose. The
Resource Reservation Protocol (RSVP) has been enhanced with TE extensions for carrying
labels and building the LSP. The other approach is an enhancement to LDP known as
Constraint-based Routing Label Distribution Protocol (CR-LDP).
Traffic forwarding: This is accomplished with MPLS, using the LSP set up by the traffic
engineering components just described.
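
The essence of CSPF is brief: remove every link that cannot meet the flow's constraints, then compute an ordinary shortest path on the pruned topology. A minimal sketch (the topology and bandwidth figures below are invented, echoing the earlier R1-R5 example):

import heapq

def cspf(links, src, dst, required_bw):
    """Constrained shortest path: prune links with insufficient available
    bandwidth, then run plain Dijkstra on what remains."""
    graph = {}
    for (u, v), (cost, avail_bw) in links.items():   # directed links
        if avail_bw >= required_bw:                  # the constraint check
            graph.setdefault(u, []).append((v, cost))
    best = {src: 0}
    queue = [(0, src, [src])]
    while queue:
        d, node, path = heapq.heappop(queue)
        if node == dst:
            return path
        if d > best.get(node, float("inf")):
            continue                                 # stale queue entry
        for nxt, cost in graph.get(node, []):
            nd = d + cost
            if nd < best.get(nxt, float("inf")):
                best[nxt] = nd
                heapq.heappush(queue, (nd, nxt, path + [nxt]))
    return None                                      # no feasible path

# The R1 flow needs 40 Mbps, but only 20 Mbps remains on R3-R4,
# so CSPF routes around that link.
links = {("R1", "R2"): (1, 100), ("R2", "R3"): (1, 100), ("R3", "R4"): (1, 20),
         ("R4", "R5"): (1, 100), ("R2", "R6"): (1, 100), ("R6", "R7"): (1, 100),
         ("R7", "R4"): (1, 100)}
print(cspf(links, "R1", "R5", 40))   # ['R1', 'R2', 'R6', 'R7', 'R4', 'R5']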

68
Early in the MPLS standardization process, it became clear that a protocol was needed
that would enable providers to set up LSPs that took into account QoS and traffic
engineering parameters. Development of this type of signaling protocol proceeded on two
different tracks:
•Extensions to RSVP for setting up MPLS tunnels, known as RSVP-TE [RFC 3209]
•Extensions to LDP for setting up constraint-based LSPs, known as CR-LDP [RFC 3212]
The
motivation for the choice of protocol in both cases was straightforward. Extending
RSVP-TE to do in an MPLS environment what it already was doing (handling QoS
information and reserving resources) in an IP environment is comprehensible; you only
have to add the label distribution capability. Extending a native MPLS protocol like LDP,
which was designed to do label distribution, to handle some extra TLVs with QoS
information is also not revolutionary. Ultimately, the MPLS working group announced,
in RFC 3468, that RSVP-TE is the preferred solution. In general terms, RSVP-TE
operates by associating an MPLS label with an RSVP flow. RSVP is used to reserve
resources and to define an explicit route for an LSP tunnel. Stallings DCC9e Figure
21.11 illustrates the basic operation of RSVP-TE. An ingress node uses the RSVP PATH
message to request an LSP to be defined along an explicit route. The PATH message
includes a label request object and an explicit route object (ERO). The ERO defines the
explicit route to be followed by the LSP. The destination node of a label-switched path
responds to a LABEL_REQUEST by including a LABEL object in its response RSVP
Resv message. The LABEL object is inserted in the filter spec list immediately following
the filter spec to which it pertains. The Resv message is sent back upstream towards the
sender, following the path state created by the Path message, in reverse order.
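
In outline, the exchange can be mimicked as follows (pure illustration: node names, label values, and message rendering are ours, not actual RSVP-TE encodings):

ero = ["R1", "R6", "R7", "R4"]            # contents of the explicit route object

def rsvp_te_setup(ero):
    """PATH travels downstream carrying LABEL_REQUEST + ERO; Resv returns
    upstream, each hop advertising the label it expects to receive."""
    print("PATH " + " -> ".join(ero) + "  (LABEL_REQUEST, ERO)")
    bindings, label = {}, 100
    for node in reversed(ero[1:]):         # egress first, back toward ingress
        bindings[node] = label
        print(f"Resv from {node}: LABEL={label}")
        label += 1
    return bindings

rsvp_te_setup(ero)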

69
A virtual private network (VPN) is a private network that is configured within a
public network (a carrier's network or the Internet) in order to take advantage of
the economies of scale and management facilities of large networks. VPNs are
widely used by enterprises to create wide area networks (WANs) that span large
geographic areas, to provide site-to-site connections to branch offices, and to
allow mobile users to dial up their company LANs. From the point of view of the
provider, the public network facility is shared by many customers, with the traffic
of each customer segregated from other traffic. Traffic designated as VPN traffic
can only go from a VPN source to a destination in the same VPN. It is often the
case that encryption and authentication facilities are provided for the VPN.

70
With a layer 2 VPN, there is mutual transparency between the customer network and the provider
network. In effect, the customer requests a mesh of unicast LSPs among customer switches that
attach to the provider network. Each LSP is viewed as a layer 2 circuit by the customer. In a
L2VPN, the provider's equipment forwards customer data based on information in the Layer 2
headers, such as an Ethernet MAC address, an ATM virtual channel identifier, or a frame relay
data link connection identifier.

90
A technique applicable to any layer 2 (or layer 3) technology.

91
Consider the queuing situation at a single packet switch or router, such as is
illustrated in Figure 20.1. Any given node has a number of I/O ports attached to
it: one or more to other nodes, and zero or more to end systems. On each port,
packets arrive and depart. We can consider that there are two buffers, or queues,
at each port, one to accept arriving packets, and one to hold packets that are
waiting to depart. In practice, there might be two fixed-size buffers associated
with each port, or there might be a pool of memory available for all buffering
activities. In the latter case, we can think of each port having two variable-size
buffers associated with it, subject to the constraint that the sum of all buffer sizes
is a constant. In any case, as packets arrive, they are stored in the input buffer of
the corresponding port. The node examines each incoming packet, makes a
routing decision, and then moves the packet to the appropriate output buffer.
Packets queued for output are transmitted as rapidly as possible; this is, in effect,
statistical time division multiplexing. If packets arrive too fast for the node to
process them (make routing decisions) or faster than packets can be cleared from
the outgoing buffers, then eventually packets will arrive for which no memory is
available.
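
A toy rendering of the two-buffer port model just described (class names and the routing rule are invented):

from collections import deque

class Port:
    def __init__(self):
        self.in_queue = deque()     # packets that have arrived on this port
        self.out_queue = deque()    # packets waiting to be transmitted

class Node:
    """Store-and-forward node: examine, decide, move to the output buffer."""
    def __init__(self, num_ports, route_table):
        self.ports = [Port() for _ in range(num_ports)]
        self.route_table = route_table          # destination -> output port

    def process_once(self):
        for port in self.ports:
            if port.in_queue:
                packet = port.in_queue.popleft()
                out = self.route_table[packet["dst"]]   # routing decision
                self.ports[out].out_queue.append(packet)

node = Node(2, {"B": 1})
node.ports[0].in_queue.append({"dst": "B", "payload": "x"})
node.process_once()
print(len(node.ports[1].out_queue))   # 1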

92
When such a saturation point is reached, one of two general strategies can
be adopted. The first such strategy is to discard any incoming packet for
which there is no available buffer space. The alternative is for the node that
is experiencing these problems to exercise some sort of flow control over
its neighbors so that the traffic flow remains manageable. But, as Figure
20.2 illustrates, each of a node’s neighbors is also managing a number of
queues. If node 6 restrains the flow of packets from node 5, this causes the
output buffer in node 5 for the port to node 6 to fill up. Thus, congestion at
one point in the network can quickly propagate throughout a region
or the entire network. While flow control is indeed a powerful tool, we
need to use it in such a way as to manage the traffic on the entire network.

93
Figure suggests the ideal goal for network utilization. The top graph plots the
steady-state total throughput (number of packets delivered to destination end
systems) through the network as a function of the offered load (number of
packets transmitted by source end systems), both normalized to the maximum
theoretical throughput of the network. For example, if a network consists of a
single node with two full-duplex 1-Mbps links, then the theoretical capacity of
the network is 2 Mbps, consisting of a 1-Mbps flow in each direction. In the ideal
case, the throughput of the network increases to accommodate load up to an
offered load equal to the full capacity of the network; then normalized throughput
remains at 1.0 at higher input loads. Note, however, what happens to the end-to-
end delay experienced by the average packet even with this assumption of ideal
performance. At negligible load, there is some small constant amount of delay
that consists of the propagation delay through the network from source to
destination plus processing delay at each node. As the load on the network
increases, queuing delays at each node are added to this fixed amount of delay.
When the load exceeds the network capacity, delays increase without bound.
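
Elementary queueing theory makes this blow-up precise. Under the standard single-server M/M/1 assumptions (used here purely as an illustration, not as a claim about the figure's model), the mean time a packet spends at a node is

T = \frac{T_s}{1 - \rho}, \qquad \rho = \frac{\lambda}{\mu},

where \lambda is the arrival rate, \mu the service rate, T_s = 1/\mu the service time, and \rho the normalized load; as \rho \to 1, the delay grows without bound.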

94
The ideal case reflected in the figure assumes infinite buffers and no overhead related
to congestion control. In practice, buffers are finite, leading to buffer overflow,
and attempts to control congestion consume network capacity in the exchange of
control signals. Let us consider what happens in a network with finite buffers if
no attempt is made to control congestion or to restrain input from end systems.
The details will, of course, differ depending on network configuration and on the
statistics of the presented traffic.

95
In this book, we discuss various techniques for controlling congestion in packet-
switching, frame relay, and ATM networks, and in IP-based internets. To give
context to this discussion, Stallings DCC9e Figure 13.5 provides a general
depiction of important congestion control techniques, which include:
•backpressure
•choke packets
•implicit congestion signaling
•explicit congestion signaling

96
There are a number of issues related to congestion control that might be included
under the general category of traffic management. In its simplest form,
congestion control is concerned with efficient use of a network at high load. The
various mechanisms discussed in the previous section can be applied as the
situation arises, without regard to the particular source or destination affected.
When a node is saturated and must discard packets, it can apply some simple
rule, such as discard the most recent arrival. However, other considerations can
be used to refine the application of congestion control techniques and discard
policy.

101
An ATM network is designed to be able to transfer many different types of traffic
simultaneously, including real-time flows such as voice, video, and bursty TCP
flows. Although each such traffic flow is handled as a stream of 53-octet cells
traveling through a virtual channel, the way in which each data flow is handled
within the network depends on the characteristics of the traffic flow and the
requirements of the application. For example, real-time video traffic must be
delivered with minimum variation in delay. We examine the way in which an
ATM network handles different types of traffic flows in Chapter 13. In this
section, we summarize ATM service categories, which are used by an end system
to identify the type of service required.

102
Two important tools in managing a network are traffic shaping and traffic
policing. Traffic shaping is aimed at smoothing out traffic flow by
reducing packet clumping that leads to fluctuations in buffer occupancy. In
essence, if the input to a switch on a certain channel or logical connection
or flow is bursty, traffic shaping produces an output packet stream that is
less bursty and with a more regular flow of packets.
Traffic policing discriminates between incoming packets that conform to a
quality of service (QoS) agreement and those that don't. Packets that don't
conform may be treated in one of the following ways:
1. Give the packet lower priority compared to packets in other output queues.
2. Label the packet as nonconforming by setting the appropriate bits in a
header. Downstream switches may treat nonconforming packets less favorably
if congestion occurs.
3. Discard the packet.
In essence, traffic shaping is concerned with traffic leaving the switch and
traffic policing is concerned with traffic entering the switch. Two important
techniques that can be used for traffic shaping or traffic policing are token
bucket and leaky bucket.

103
A widely used traffic management tool is token bucket. This is a way of
characterizing and managing traffic that has three advantages:
1. Many traffic sources can be defined easily and accurately by a token
bucket scheme.
2. The token bucket scheme provides a concise description of the load to
be imposed by a flow, enabling the service to determine easily the resource
requirement.
3. The token bucket scheme provides the input parameters to a policing
function. This scheme provides a concise description of the peak and
average traffic load the recipient can expect and it also provides a
convenient mechanism by which the sender can implement a traffic flow
policy. Token bucket is used in the Bluetooth specification and in
differentiated services.

104
A token bucket traffic specification consists of two parameters: a token
replenishment rate R and a bucket size B. The token rate R specifies the
continually sustainable data rate; that is, over a relatively long period of
time, the
average data rate to be supported for this flow is R. The bucket size B
specifies the amount by which the data rate can exceed R for short periods
of time. The exact condition is as follows: during any time period T, the
amount of data sent cannot exceed RT + B. Figure illustrates this scheme
and explains the use of the term bucket. The bucket represents a counter
that indicates the allowable number of bytes of data that can be sent at any
time. The bucket fills with byte tokens at the rate of R (i.e., the counter is
incremented R times per second), up to the bucket capacity (up to the
maximum counter value). Data arrive from the user and are assembled into
packets, which are queued for transmission. A packet may be transmitted if
there are sufficient tokens to match the packet size.
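
A minimal counter-based sketch of this scheme (names and the time source are illustrative only):

import time

class TokenBucket:
    """Token bucket: tokens accrue at rate R (bytes/s) up to capacity B."""
    def __init__(self, rate_r: float, bucket_b: float):
        self.rate = rate_r
        self.capacity = bucket_b
        self.tokens = bucket_b          # start with a full bucket
        self.last = time.monotonic()

    def allow(self, packet_size: int) -> bool:
        """Transmit only if enough tokens match the packet size."""
        now = time.monotonic()
        # Replenish: counter incremented at rate R, capped at B.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_size:
            self.tokens -= packet_size
            return True
        return False                    # hold the packet in the queue

tb = TokenBucket(rate_r=125_000, bucket_b=10_000)   # 1 Mbps, 10 kB burst
print(tb.allow(1500), tb.allow(20_000))             # True False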

105
Another scheme, similar to token bucket, is leaky bucket. Leaky bucket is
used in the asynchronous transfer mode (ATM) specification and in the
ITU-T H.261 standard for digital video coding and transmission. The basic
principle of leaky bucket is depicted in Figure 20.7. The algorithm maintains
a running count of the cumulative amount of data sent in a counter X. The
counter is decremented at a constant rate of one unit per time unit to a
minimum value of zero; this is equivalent to a bucket that leaks at a rate of
1. The counter is incremented by I for each arriving packet, where I is the
size of the packet, subject to the restriction that the maximum counter value
is L. Any arriving packet (or cell) that would cause the counter to exceed its
maximum is defined as nonconforming; this is equivalent to a bucket with a
capacity of L.
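
The same logic as a sketch (unit conventions are illustrative; the counter leaks one unit per time unit, as described):

import time

class LeakyBucket:
    """Leaky bucket policer: counter X leaks at rate 1, capped at L."""
    def __init__(self, limit_l: float):
        self.limit = limit_l
        self.counter = 0.0               # X: cumulative data sent
        self.last = time.monotonic()

    def conforming(self, size: float) -> bool:
        now = time.monotonic()
        # Leak: decrement one unit per time unit, floored at zero.
        self.counter = max(0.0, self.counter - (now - self.last))
        self.last = now
        if self.counter + size > self.limit:
            return False                 # would overflow the bucket: nonconforming
        self.counter += size             # add I, the size of the arriving packet
        return True

lb = LeakyBucket(limit_l=100)
print(lb.conforming(60), lb.conforming(60))   # True False (second would exceed L)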

106
The CIR (committed information rate) marks the limit of guaranteed transmission (Bc).
Beyond that there is a second limit (Be) up to which frames are marked with low
priority, and past that limit (the maximum rate) no more data is admitted.

107
A measurement interval T is fixed, and the bits sent during T are counted. If the count
exceeds Bc, the data unit in which at least one bit crosses the threshold is marked. If
even a single bit exceeds Be, the frame is not admitted. We will see three different
cases, depending on the rate at which frames are sent.
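
A sketch of this measurement rule (frame-relay-style policing; the interval handling, names, and the treatment of Bc/Be as absolute thresholds in bits are assumptions, not a normative algorithm):

def police_frames(frames, bc, be):
    """Classify frames within one measurement interval T.
    frames: sizes in bits, in arrival order; bc/be: thresholds in bits."""
    sent, verdicts = 0, []
    for size in frames:
        sent += size
        if sent <= bc:
            verdicts.append("send")        # within the committed burst Bc
        elif sent <= be:
            verdicts.append("mark (DE)")   # at least one bit over Bc: low priority
        else:
            verdicts.append("discard")     # even one bit over Be: not admitted
    return verdicts

# Bc = 4000 bits, Be = 6000 bits, three 2000-bit frames in the interval
print(police_frames([2000, 2000, 2000], 4000, 6000))
# ['send', 'send', 'mark (DE)']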
