
Traffic Shaping in Aggregate-Based Networks:

Implementation and Analysis

Markus Fidler a,1 , Volker Sander b,2 , Wojciech Klimala b,2


a Department of Computer Science, RWTH Aachen University,
52064 Aachen, Germany
b Central Institute for Applied Mathematics, Research Center Jülich,
52425 Jülich, Germany

Abstract

The Differentiated Services architecture allows for providing scalable Quality of Service by means of aggregating flows into a small number of traffic classes. Among these classes a Premium Service is defined, for which end-to-end delay guarantees are of particular interest. However, in aggregate scheduling networks such delay bounds suffer significantly from effects that are due to the multiplexing of flows to aggregates. A way to minimize the impact of interfering flows is to shape incoming traffic, so that bursts are smoothed. Doing so reduces the queuing delay within the core of the domain, whereas an additional shaping delay is introduced at the edge.

This paper addresses the issue of traffic shaping analytically by extending known Network Calculus. An equation that allows computing tight per-flow output bounds in aggregate scheduling networks is derived, and a solution for shaped interfering flows is provided. We then give an overview of the shaping capabilities of current legacy routers, showing deviations of actual implementations compared to the idealized view. Finally, the evolved analytical framework is applied to an example scenario and the results are compared to corresponding measurements.

Key words: Aggregate Scheduling, Differentiated Services, Traffic Shaping,


Network Calculus, Legacy Router.

Email addresses: [email protected] (Markus Fidler),


[email protected] (Volker Sander), [email protected] (Wojciech
Klimala).
1 This work was supported in part by the German Research Community (DFG)

under grant graduate school (GRK) 643 Software for Communication Systems.
2 This work was supported in part by the Path Allocation in Backbone networks

(PAB) project funded by the German Research Network (DFN) and the Federal
Ministry of Education and Research (BMBF).

Preprint submitted to Elsevier Science 21 July 2004


1 Introduction

Differentiated Services (DS) [2] addresses the scalability problems of the former
Integrated Services approach by an aggregation of flows to a small number
of traffic classes. Packets are identified by simple markings that indicate the
respective class. In the core of the network, routers do not need to determine to
which flow a packet belongs, only which aggregate behavior has to be applied.
Edge routers mark packets to indicate whether they are within or out of profile; out-of-profile packets might even be discarded at the edge router. A particular marking on a packet indicates a so-called Per Hop
Behavior (PHB) that has to be applied for forwarding of the packet. The
Expedited Forwarding (EF) PHB [8] is intended for building a service that
offers low loss and low delay, namely a Premium Service.

It seems inevitable that tomorrow’s high-speed networks will be optically


switched to a significant extent. One particular challenge that arises in this
emerging scenario will be the co-existence of IP and optical services. Provided the existence of advanced management tools for optical domains, the question arises of how to efficiently use the underlying transport capabilities in the non-optical domain, particularly given the fact that applications, today's
protocols, and IP-based services introduce bursts. An instrument that allows
reducing the impacts of such bursts on network performance is to shape in-
coming traffic at the edge of a domain [17]. Queuing of the initial bursts is in
this case performed at the edge, with the aim to minimize the delay within
the core. Especially, if heterogeneous aggregates have to be scheduled, shaping
significantly reduces the impacts of different types of flows on each other [21].

In this paper we investigate and quantify the impacts of traffic shaping on


queuing effects that are caused by traffic aggregation. The corresponding theory of deterministic multiplexing has gained much attention recently [15], [11], [16], and in particular the issues of traffic shaping have been addressed in [18].
This paper extends our previous work presented in [10]. It differs from the
approach that is investigated in [18] in that we only assume traffic shaping at
the ingress routers of a domain instead of at the input ports of each router.

The particular advantage of this solution is that it does not rely on any specific
shaping support in the end-systems. While some operating systems have optional support for traffic shaping, the QoS kernel extension of AIX [24] being an example, it is not generally available, nor does a standardized programming interface exist for controlling this feature. Similarly, advances in application-level protocols, such as using TCP-Friendly Rate Control 3 [13] on top of the Real-Time Protocol [23], only apply to applications that already make use of these emerging protocols.

3 TFRC could also be implemented using a Congestion Manager [1] that relieves transport protocols and applications from having to (re-)implement congestion control. However, this concept would require a modification in the IP stack.

The remainder is organized as follows: In section 2 the required background


on Network Calculus is given and extensions for traffic shaping are derived.
Section 3 deals with the shaping capabilities of legacy routers. A comparative
example is provided in section 4. Section 5 concludes the paper. Proofs are
given in the appendix. Further related work is presented in [9,10], where the
issue of traffic shaping is addressed in the context of admission control.

2 Network Calculus

Network Calculus is a theory of deterministic queuing systems that is based on the calculus for network delay presented in [5,6] and on Generalized Processor Sharing in [19,20]. Extensions and a comprehensive overview on current Network Calculus are given in [3,15]. Here, only some basic concepts are introduced briefly. Details are given for Network Calculus extensions that cover traffic shaping. In the sequel, upper indices j indicate links, that is, the queuing and scheduling unit on an outgoing link, and lower indices i indicate flows, respectively traffic trunks.

2.1 Arrival Constraints

Flows can be described by arrival functions F_i^j(t) that are given as the cumulated number of bits seen in an interval [0, t]. Arrival curves α_i^j(t) are defined to give an upper bound on the arrival functions, where α_i^j(t_2 − t_1) ≥ F_i^j(t_2) − F_i^j(t_1) for all t_2 ≥ t_1 ≥ 0.

Theorem 1 (Minimum Arrival Curve) The minimum arrival curve that corresponds to a given arrival function F_i^j(t) can be derived by self de-convolution of F_i^j(t) according to (1), where ⊘ denotes min-plus de-convolution [15].

α_i^j(t) = F_i^j(t) ⊘ F_i^j(t) = sup_{s≥0} [F_i^j(t + s) − F_i^j(s)]    (1)
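As an illustration (not part of the original paper), the self de-convolution in (1) is easy to evaluate on a sampled arrival function; the trace below is hypothetical.

```python
def min_arrival_curve(F):
    """Empirical minimum arrival curve by self de-convolution, eq. (1).

    F: cumulated bits F[0..n] sampled on a fixed time grid,
       non-decreasing with F[0] = 0.
    Returns alpha[0..n] with alpha[t] = max over s of F[t+s] - F[s].
    """
    n = len(F) - 1
    return [max(F[s + t] - F[s] for s in range(n - t + 1)) for t in range(n + 1)]

# Hypothetical trace: a 3-unit burst followed by 1 unit per bin.
print(min_arrival_curve([0, 3, 4, 5, 6, 7, 8]))   # -> [0, 3, 4, 5, 6, 7, 8]
```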

A typical constraint for incoming flows is given by the leaky bucket algorithm,
which allows for bursts of a certain size and a defined sustainable rate. Non-
conforming traffic can either be shaped or dropped.

Definition 1 (Single Leaky Bucket Constraint) The arrival curve that is enforced by a single leaky bucket with depth b_i^j and token rate r_i is the affine function that is given by (2) for t > 0. It is zero for t = 0.

α_i^j(t) = b_i^j + r_i · t    (2)

The indexing refers to a flow i at a link or queuing and scheduling unit j. Note that the burst term b_i^j in most cases depends on the point of observation, that is the link j, whereas the rate r_i usually does not change throughout the network.

Leaky buckets can be concatenated as shown in figure 1 to enforce and describe


more complex arrival constraints.

Fig. 1. Dual leaky bucket configuration

Definition 2 (Dual Leaky Bucket Constraint) Consider a dual leaky bucket configuration according to figure 1. Define (r_i, b̄_i^j) and (r̄_i, b_i^j) to be the parameters of the first, respective second leaky bucket, with r̄_i > r_i and b̄_i^j > b_i^j. The resulting arrival curve is defined by (3) for t > 0 and zero for t = 0. It allows for bursts of size b_i^j, then it ascends by r̄_i until t_i^j = (b̄_i^j − b_i^j)/(r̄_i − r_i), and finally it increases with rate r_i as shown in figure 2.

α_i^j(t) = min[b_i^j + r̄_i · t, b̄_i^j + r_i · t] = { b_i^j + r̄_i · t , if t ≤ t_i^j = (b̄_i^j − b_i^j)/(r̄_i − r_i) ; b̄_i^j + r_i · t , else }    (3)

An arrival curve of the type in (3) applies, if a single leaky bucket constrained flow with the arrival curve α_i^j(t) = b̄_i^j + r_i · t traverses a combination of an ideal bit-by-bit traffic shaper with rate r̄_i and a packetizer with a maximum packet size l_max [3,14]. The concept of a packetizer was introduced with the intention to model the effect of variable length packets. In reality, flows are not continuous in time, due to a minimum granularity that needs to be taken into account. Packetizers are used to convert a continuous input into a packetized sequence with a given maximum packet size. As a consequence a burst size of b_i^j = l_max remains and the output arrival curve is dual leaky bucket constrained with parameters (r_i, b̄_i^j) and (r̄_i, l_max). Note that actual implementations differ from the combination of an ideal shaper and a packetizer. As a result b_i^j ≥ l_max holds. The shaper adds a worst-case delay of (b̄_i^j − b_i^j)/r̄_i.

Fig. 2. Dual leaky bucket constraint
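For illustration, the dual leaky bucket constraint (3) can be written as a small helper. This is our sketch, not code from the measurement setup; the parameter names only mirror the notation above.

```python
def dual_leaky_bucket(b, r_peak, b_bar, r_sus):
    """Arrival curve of eq. (3): min(b + r_peak*t, b_bar + r_sus*t) for t > 0.

    b      : small burst allowance b_i^j (e.g. one maximum packet l_max)
    r_peak : peak rate r̄_i enforced by the shaper
    b_bar  : large burst allowance b̄_i^j
    r_sus  : sustainable rate r_i
    """
    def alpha(t):
        return 0.0 if t <= 0 else min(b + r_peak * t, b_bar + r_sus * t)
    return alpha

# Example in Mb and Mb/s: one Ethernet MTU at 98.8 Mb/s, 2 Mb window at 10 Mb/s.
alpha = dual_leaky_bucket(0.012, 98.8, 2.0, 10.0)
t_break = (2.0 - 0.012) / (98.8 - 10.0)               # t_i^j, here about 22.4 ms
print(round(t_break * 1e3, 1), "ms,", round(alpha(t_break), 3), "Mb")
```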

2.2 Multiplexing

Aggregate scheduling networks, such as DS domains, are characterized by mul-


tiplexing of flows to traffic trunks, respective traffic aggregates. Fortunately,
the traffic specification applied by Network Calculus allows for a very simple
description of traffic aggregation: The aggregate arrival function, respective
arrival curve of a number of flows or traffic trunks that are multiplexed is the
sum of the individual arrival functions respective arrival curves.

Corollary 1 (Multiplexing, Single Leaky Bucket Case) The aggregate of two single leaky bucket constrained flows 1 and 2 according to (2) is again single leaky bucket constrained and given by (4).

α_1^j(t) + α_2^j(t) = b_1^j + b_2^j + (r_1 + r_2) · t    (4)

The result immediately extends to scenarios with n flows.

In case of dual leaky bucket constraints the scenario is significantly more


complicated.

Corollary 2 (Multiplexing, Dual Leaky Bucket Case) Consider a traffic aggregate of two dual leaky bucket constrained flows 1 and 2 with parameters t_1^j and t_2^j according to (3). Assume an in-order numbering of flows that ensures that t_1^j ≤ t_2^j. The corresponding aggregate arrival curve is given in (5).

α_1^j(t) + α_2^j(t) = { b_1^j + b_2^j + (r̄_1 + r̄_2) · t , if t ≤ t_1^j ∧ t ≤ t_2^j ; b̄_1^j + b_2^j + (r_1 + r̄_2) · t , if t > t_1^j ∧ t ≤ t_2^j ; b̄_1^j + b̄_2^j + (r_1 + r_2) · t , else }    (5)

The result extends to traffic aggregates that consist of n dual leaky bucket constrained flows i = 1 . . . n with parameters t_i^j and an in-order numbering of flows such that t_i^j ≤ t_{i+1}^j, ∀i = 1 . . . n − 1. Note that the ordering allows describing the aggregate arrival constraint by n + 1 instead of 2n leaky buckets. The resulting aggregate arrival curve is an increasing, concave, and piece-wise linear function.
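As a sketch of this aggregation (ours; the two flows below are hypothetical), the aggregate arrival curve can be evaluated as the pointwise sum of the individual dual leaky bucket curves, with breakpoints at the individual t_i^j.

```python
def dual_bucket(b, r_peak, b_bar, r_sus):
    """Per-flow arrival curve according to eq. (3), zero at t = 0."""
    return lambda t: 0.0 if t <= 0 else min(b + r_peak * t, b_bar + r_sus * t)

def aggregate(flows):
    """Aggregate arrival curve and sorted breakpoints of n dual leaky bucket flows."""
    curves = [dual_bucket(*f) for f in flows]
    alpha = lambda t: sum(c(t) for c in curves)
    breakpoints = sorted((bb - b) / (rp - rs) for b, rp, bb, rs in flows)
    return alpha, breakpoints

# Two hypothetical flows, parameters (b, r_peak, b_bar, r_sus) in Mb and Mb/s.
alpha2, tps = aggregate([(0.012, 98.8, 2.0, 10.0), (0.1, 50.0, 0.6, 2.0)])
print([round(tp * 1e3, 1) for tp in tps], round(alpha2(0.05), 2))  # [10.4, 22.4] 3.2
```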

2.3 Aggregate Scheduling

The service that is offered by the scheduler on an outgoing link j can be


characterized by a minimum service curve, denoted by β j (t).

Definition 3 (Rate-Latency Service Curve) A special type of service curve is the rate-latency type that is given by (6), with a rate R^j and a latency T^j. The term [x]^+ is equal to x, if x ≥ 0, and zero otherwise.

β^j(t) = R^j · [t − T^j]^+    (6)

Service curves of the rate-latency type are implemented for example by Priority Queuing (PQ) or Generalized Processor Sharing (GPS), respectively Weighted Fair Queuing (WFQ), where certain transport resources that correspond to the rate R^j can be assigned to selected traffic. The latency T^j applies for non-preemptive scheduling, where in case of PQ low priority packets that are already in service have to complete service before high priority packets that arrive in the meantime can be scheduled. Thus, T^j can be given as the quotient of the maximum packet size and the link rate. The same latency applies for Packet-by-packet GPS (PGPS), for which related models can be found in [19,15].

However, in aggregate scheduling networks resources are provisioned on a per-


aggregate basis. As an immediate consequence, flows that belong to the same
aggregate compete for resources. The problem has been addressed in [7,15]
and the following equation for per-flow service curves for FIFO aggregate
scheduling nodes has been derived.

Theorem 2 (Aggregate Scheduling) Consider a flow 1 that is scheduled in FIFO order and in an aggregate manner with a flow, or a sub-aggregate, 2 on a link j, where the service curve that is seen by the aggregate is given by β^j(t). Equation (7) defines a family of service curves β_θ^j(t) with an arbitrary parameter θ ≥ 0 that are offered to flow 1. The term 1_{t>θ} is one for t > θ and zero for t ≤ θ.

β_θ^j(t) = [β^j(t) − α_2^j(t − θ)]^+ · 1_{t>θ}    (7)

The application of (7) usually requires a thorough choice of one particular


service curve by fixing the parameter θ.

2.4 Output Constraints

Based on the above concepts, bounds for the backlog and the delay can be derived as the maximal vertical deviation, respectively the maximal horizontal deviation, between the arrival curve and the service curve. Further on, of particular interest for aggregate scheduling networks are constraints that can be derived for the output of a traffic trunk from a queuing and scheduling unit.

Theorem 3 (Output Bound) If a traffic trunk i that is constrained by α_i^j is input to a link j that offers a service curve β^j, a tight output arrival curve α_i^{j+1} of flow i can be derived by min-plus de-convolution of arrival curve and service curve according to (8).

α_i^{j+1}(t) = α_i^j(t) ⊘ β^j(t) = sup_{s≥0} [α_i^j(t + s) − β^j(s)]    (8)
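The de-convolution in (8) can likewise be evaluated numerically; the following sketch (ours) samples the sup over s for a rate-latency service curve. The search range and step size are arbitrary choices that must cover the busy period.

```python
def output_curve(alpha, R, T, t, s_max=1.0, ds=1e-4):
    """Numerical evaluation of eq. (8): sup over s of alpha(t+s) - R*[s-T]^+."""
    best, s = float("-inf"), 0.0
    while s <= s_max:
        best = max(best, alpha(t + s) - R * max(s - T, 0.0))
        s += ds
    return best

# Single leaky bucket input (0.1 Mb, 2 Mb/s) served at R = 40 Mb/s with T = 1.2 ms.
alpha_in = lambda t: 0.0 if t <= 0 else 0.1 + 2.0 * t           # Mb
print(round(output_curve(alpha_in, 40.0, 0.0012, 0.0), 4), "Mb") # output burst ~0.1024 Mb
```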

The following approach can be applied to derive tight per-flow output con-
straints in case of aggregate scheduling.

Corollary 3 (Output Bound, Aggregate Scheduling) Substitution of (7) in (8) gives a family of flow 1 output constraints with parameter θ. Any of these output constraints holds individually, thus, the inf_{θ≥0}[. . .] is an output constraint, too [15]. The corresponding equation for the flow 1 output constraint α_1^{j+1} is given by (9).

α_1^{j+1}(t) = inf_{θ≥0} sup_{s≥0} [α_1^j(t + s) − [β^j(s) − α_2^j(s − θ)]^+ · 1_{s>θ}]    (9)

A solution of (9) for rate-latency service curves and single leaky bucket con-
strained flows 1 and 2 is provided in [15]. However, a general solution for
arbitrary arrival curves has to our knowledge been missing so far. The rele-
vant extensions of current theory considering rate-latency service curves and
general arrival curves respective dual leaky bucket constraints are derived in
the following.

Theorem 4 (Output Bound, Rate-Latency Case) Consider two flows 1 and 2 that are α_1^j, respective α_2^j upper constrained. Assume these flows are served in FIFO order and in an aggregate manner by a node j that is characterized by a minimum service curve β^j(t) of the rate-latency type β^j(t) = R^j · [t − T^j]^+. Then, the output of flow 1 is α_1^{j+1} upper constrained according to (10), where θ is a function of t and has to comply with (11). A related equation for simple rate service curves is derived in [4].

α_1^{j+1}(t) = α_1^j(t + θ(t))    (10)

θ(t) = sup_{v>0} [α_1^j(v + t + θ(t)) − α_1^j(t + θ(t)) + α_2^j(v) − R^j · v] / R^j + T^j    (11)

Corollary 4 (Output Bound, Single Leaky Bucket Case) In case of a single leaky bucket constrained flow or traffic trunk 1, with rate r_1 and burst size b_1^j, (11) can be simplified applying α_1^j(v + t + θ(t)) − α_1^j(t + θ(t)) = r_1 · v. As an immediate consequence, θ becomes independent of t. With (10) we find that the output flow 1 is leaky bucket constrained with r_1 and b_1^{j+1} according to (12). The same result is also reported in [15].

b_1^{j+1} = α_1^j(θ(0))    (12)

Equation (11) becomes (13).

θ(0) = sup_{v>0} [r_1 · v + α_2^j(v) − R^j · v] / R^j + T^j    (13)

If the flow or sub-aggregate 2 is leaky bucket constrained with rate r_2 and burst size b_2^j and if r_1 + r_2 ≤ R^j, the sup[. . .] in (13) is found for v → 0, resulting in θ = b_2^j/R^j + T^j, and b_1^{j+1} can be given by (14).

b_1^{j+1} = b_1^j + r_1 · (T^j + b_2^j/R^j)    (14)
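A small numerical sketch of (13) and (14) (ours, using the example figures from section 4):

```python
def output_burst_single(b1, r1, b2, r2, R, T):
    """Output burst of flow 1 according to eqs. (13) and (14).

    For r1 + r2 <= R the sup in (13) is attained for v -> 0, so the
    supremum equals the interfering burst b2 and theta = b2/R + T.
    """
    assert r1 + r2 <= R, "stability condition assumed in (14)"
    theta = b2 / R + T
    return b1 + r1 * theta

# Video trunk (0.1 Mb, 2 Mb/s) multiplexed with a 2 Mb / 10 Mb/s trunk on a
# 40 Mb/s link with 1.2 ms latency; cf. eq. (26) later in the paper.
print(round(output_burst_single(0.1, 2.0, 2.0, 10.0, 40.0, 0.0012), 4), "Mb")  # 0.2024
```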

Theorem 5 (Output Bound, Dual Leaky Bucket Case) Consider two flows or traffic trunks 1 and 2 that are α_1^j, respective α_2^j upper constrained, where α_2^j is concave. Assume that these flows are served in FIFO order and in an aggregate manner by a node j that is characterized by a minimum service curve of the rate-latency type. If the input flow 1 is constrained by two leaky buckets with (r_1, b̄_1^j) and (r̄_1, b_1^j), and t_1^j = (b̄_1^j − b_1^j)/(r̄_1 − r_1), the output flow is dual leaky bucket constrained with (r_1, b̄_1^{j+1}) and (r̄_1, b_1^{j+1}), where b_1^{j+1} and b̄_1^{j+1} are given by (15) respective (16).

b_1^{j+1} = α_1^j(θ(0))    (15)

b̄_1^{j+1} = b̄_1^j + r_1 · θ(t_1^j)    (16)

The parameter θ(t) according to (11) is given in (17) for t = 0 and in (18) for t = t_1^j.

θ(0) = sup_{v>0} [α_1^j(v + θ(0)) − α_1^j(θ(0)) + α_2^j(v) − R^j · v] / R^j + T^j    (17)

θ(t_1^j) = sup_{v>0} [r_1 · v + α_2^j(v) − R^j · v] / R^j + T^j    (18)

A pragmatic procedure to solve (17) is described below: Equation (17) can have either of the three forms shown in (19), depending on the value of θ itself.

θ(0) = (sup_{v>0} [r̄_1 · v + α_2^j(v) − R^j · v] + R^j · T^j) / R^j , if θ(0) ≤ t_1^j − ν̄
θ(0) = (sup_{v>0} [r_1 · v + α_2^j(v) − R^j · v] + R^j · T^j + b̄_1^j − b_1^j) / (R^j + r̄_1 − r_1) , else if θ(0) ≤ t_1^j    (19)
θ(0) = (sup_{v>0} [r_1 · v + α_2^j(v) − R^j · v] + R^j · T^j) / R^j , else

However, α_1^j(v + θ(0)) − α_1^j(θ(0)) remains constant or decreases if θ increases, therefore (19) has only one unique solution. Thus, all three forms of (19) can be evaluated, whereby the solution is found for the form of (19) for which θ(0) falls into the indicated interval.
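The procedure translates directly into code. The following sketch is our illustration only: it evaluates the three candidate forms of (19) on a sampled grid and returns the one whose value lies in its own validity interval; the interfering arrival curve and all numbers are taken from, or modelled after, the example in section 4.

```python
def sup_and_argmax(r, alpha2, R, v_max=1.0, dv=1e-5):
    """sup over v > 0 of [r*v + alpha2(v) - R*v] and the v where it is attained.

    v_max must cover the largest breakpoint of the concave curve alpha2.
    """
    best, best_v, v = float("-inf"), 0.0, dv
    while v <= v_max:
        val = r * v + alpha2(v) - R * v
        if val > best:
            best, best_v = val, v
        v += dv
    return best, best_v

def theta0_dual(b1, r1_peak, b1_bar, r1_sus, alpha2, R, T):
    """Solve eq. (19) for theta(0) of a dual leaky bucket constrained flow 1."""
    t1 = (b1_bar - b1) / (r1_peak - r1_sus)
    s_peak, nu_bar = sup_and_argmax(r1_peak, alpha2, R)   # first form
    s_sus, _nu = sup_and_argmax(r1_sus, alpha2, R)        # second and third form
    cand1 = (s_peak + R * T) / R
    cand2 = (s_sus + R * T + b1_bar - b1) / (R + r1_peak - r1_sus)
    cand3 = (s_sus + R * T) / R
    if cand1 <= t1 - nu_bar:
        return cand1
    if cand2 <= t1:
        return cand2
    return cand3

# Illustrative flow 1: one MTU at Fast Ethernet speed plus 0.1 Mb at 2 Mb/s,
# against the shaped TCP aggregate of eq. (32) on the 40 Mb/s bottleneck.
alpha_tcp = lambda v: 0.0 if v <= 0 else min(0.6 + 12.0 * v, 2.0 + 10.0 * v)
th = theta0_dual(0.012, 98.8, 0.1, 2.0, alpha_tcp, 40.0, 0.0012)
print(round(th * 1e3, 1), "ms")   # ~16.2 ms, the third form applies here
```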

Now consider a sub-aggregate with arrival constraint α_2^j that consists of n dual leaky bucket constrained flows with arrival curves α_{2,i}^j, 1 ≤ i ≤ n. The arrival curve α_2^j consists of n + 1 pieces, is piece-wise linear, increasing, and concave. Then, define ν̄ and ν according to (20) and (21).

ν̄ = sup{t : r̄_1 + Σ_{i=1}^{n} r̄_{2,i} · 1_{t≤t_{2,i}^j} + Σ_{i=1}^{n} r_{2,i} · 1_{t>t_{2,i}^j} ≥ R^j · 1_{t>0}}    (20)

ν = sup{t : r_1 + Σ_{i=1}^{n} r̄_{2,i} · 1_{t≤t_{2,i}^j} + Σ_{i=1}^{n} r_{2,i} · 1_{t>t_{2,i}^j} ≥ R^j · 1_{t>0}}    (21)

The sup_v[. . .] in (19) is found for the largest time instance v for which the first derivative of [. . .] is still positive. The corresponding time instances are, however, derived in (20) respective in (21). Thus, ν̄ and ν are the values of v for which the sup_v[. . .] in the first form respective the second and third form of (19) and also in (18) are attained.

3 Legacy Hardware

The concept of leaky bucket shapers provides the framework for traffic shaping. A leaky bucket shaper consists of a bucket, which stores tokens, and a holding queue, where incoming packets can be buffered. Whenever a packet is forwarded from the holding queue to the outgoing interface, a number of tokens that corresponds to the packet size is placed into the token bucket. Packets are forwarded as long as the bucket offers sufficient space for tokens without causing an overflow; otherwise packets have to be queued. The bucket has a depth of b and it leaks at a constant rate r. Thus, outgoing traffic cannot exceed a burst size of b and a sustainable rate of r.

The idealized fluid-flow model of a perfect shaper can be implemented by


setting the burst size b to zero. However, in packet networks a minimum gran-
ularity will have to remain. In case of packets of a maximum size of lmax a
bucket depth of at least b = lmax is required to allow for sound operation.

However, many deployed routers do not provide the described functionality.


Instead, performance optimized implementations of traffic shapers are used.
The experiments we present in this paper were performed using Cisco 72xx
routers, which do not support buckets that constantly leak at a given rate.
Instead, the underlying traffic shaping does allow for some bursts. On this
platform, traffic shaping is configured in terms of intervals, mean rates, and
bursts [25].

Instead of a continuous decrease of tokens, the bucket is emptied at once at the end of defined, fixed-size intervals. The standard configuration for this is 25 ms. The bucket depth is then given by the mean rate multiplied by the interval length. Hence, on an interval basis, the transmitted data cannot exceed the bucket depth. However, within the intervals the transmission can actually be performed at link speed.
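To make the difference to an ideal leaky bucket concrete, the following toy simulation (our sketch, abstracting from the actual IOS implementation) models a per-interval budget that is restored at once at every interval boundary, so that a burst placed just before a boundary can be followed immediately by a second one.

```python
def interval_shaper(arrivals_bits, rate_bps, interval_s, link_bps, dt=0.001):
    """Toy model of an interval-based shaper (not the exact IOS algorithm).

    Per interval of length interval_s at most rate_bps * interval_s bits may
    leave; within the interval they leave at link speed. The per-interval
    budget is restored at once at every interval boundary.
    """
    budget_per_interval = rate_bps * interval_s
    steps_per_interval = round(interval_s / dt)
    budget, backlog, departures = budget_per_interval, 0.0, []
    for step, arriving in enumerate(arrivals_bits):
        if step % steps_per_interval == 0:          # boundary: budget restored
            budget = budget_per_interval
        backlog += arriving
        sendable = min(backlog, budget, link_bps * dt)
        departures.append(sendable)
        backlog -= sendable
        budget -= sendable
    return departures

# A 0.6 Mb application burst arriving shortly before an interval boundary:
arrivals = [0.0] * 50
arrivals[21] = 0.6e6                                 # bits, at t = 21 ms
dep = interval_shaper(arrivals, 12e6, 0.025, 100e6)
print(round(sum(dep[21:28]) / 1e6, 2), "Mb leave within 7 ms")  # twice the 0.3 Mb depth
```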

In order to check how this implementation of traffic shaping influences the


behavior of a TCP flow, we monitored the network traffic of a TCP session at
the sender with tcpdump and analyzed the trace file with the tcptrace program.
Figure 3 illustrates the results with 10 ms resolution. The dashed lines show
the sequence number of the data that are transmitted. The solid lines indicate
the received acknowledgments as a function of time. Since the transmission of
acknowledgements is indirectly influenced by the shaper, it is an appropriate
mechanism to show how the injected traffic load is affected by shaping.

The experiment used a modified version of the ttcp program that allows controlling the socket buffer write frequency. Here, TCP packets were sent at an
average rate of 10 Mb/s. The socket buffer size was 256 kByte, to reflect long-
distance transmissions across several domains. The virtual application uses
data buffers of 256 kByte that are written periodically to the TCP socket.
Thus, a bursty traffic profile is created. Note that TCP applications can always
produce bursts of up to a full window. The effects of bursty and synchronized
TCP sources are well-known in the Internet.

In the left of figure 3, the shaper was disabled. We recognize the behavior of TCP, which injects data at link speed when the available window allows this. Here, the sender generates bursts of about 256 kByte. This burst
structure is reflected by the acknowledged packets. By enabling shaping with
an average rate of 12 Mb/s the bursty TCP flow is smoothed according to
the shaper configuration. In this case, the acknowledgements were received in
much smaller bursts. In fact, we recognize the shaping parameters used in this
scenario. Since the time interval for emptying the token bucket was 25 ms, we
allowed for bursts of up to 12 Mb/s · 25 ms = 0.3 Mb within each interval.

Yet, a double step can be recognized at each write operation of the application.
Recall that data is written to the socket at a mean rate of 10 Mb/s, whereas
the mean shaping rate is 12 Mb/s. Thus, the bucket of the shaper will be
emptied some time before a write operation takes place. Hence, the first step
of 0.3 Mb is caused by the regular bucket depth b that can be fully exploited.
However, if the write operation takes place shortly before the end of a 25 ms
interval of the shaper, the bucket will immediately be emptied allowing the
injection of a second burst of 0.3 Mb into the network.

Fig. 3. Influence of shaping on a bursty TCP flow. The number of acknowledged bytes is plotted against time for the situation when shaping is disabled (a) and enabled (b). The shaping mechanism significantly reduces traffic bursts.
Thus, the interval-based shaper implementation has the unfortunate property that, in the worst case, bursts of twice the bucket depth can pass the shaper and enter the network virtually at once.

4 Application Scenarios and Example Evaluation

Scalability is one of the main reasons for the introduction of aggregate schedul-
ing to the DS framework. Yet, aggregate scheduling implies multiplexing of
flows to traffic aggregates, which has significant impacts on traffic properties
as well as on QoS guarantees. While statistical multiplexing benefits from a
large number of multiplexed flows, deterministic multiplexing does not. Each
additional flow that is multiplexed to an aggregate increases the potential
burst size of each other flow. Thus, the traffic specification of a flow or traffic
trunk that applies at the egress of a DS domain differs from the initial traf-
fic specification at the ingress, causing additional difficulties in multi-domain
scenarios. A feasible solution [17] that is investigated here is to shape flows
with a large burst size, to achieve robustness against interference and to sup-
port scalability based on controlled competition for resources, even in case of
a large number of interfering flows.

This section combines the results from section 3 on shaping in legacy hardware
and the analytical treatment from section 2 to provide an application example.

4.1 Test-bed Implementation

The scenario that is investigated is shown in figure 4. The network we use


is a DS test-bed implementation that is based on a number of Cisco series
72xx routers. The investigated flows pass several links and among them a
bottleneck link that is an ATM PVC with a gross rate of 50 Mb/s and a
net rate of about 40 Mb/s, after subtracting ATM overhead. The EF PHB is
implemented applying Priority Queuing (PQ) respective Low Latency Queuing
(LLQ) in Cisco terminology. Attached to the PQ schedulers are leaky buckets
that restrict the high priority traffic to a rate of 15 Mb/s and a burst size of
1.5 MByte. Excess traffic is discarded to avoid starvation of the low priority
BE traffic. Note that the priority schedulers are non-preemptive, not only with
respect to the IP datagram that is in service, but also with respect to the L2
queue on the ATM interface [22]. The L2 queue is controlled by the tx-ring-
limit parameter that is set to four, which corresponds to configured queuing
space for four MTUs. Here we apply the Ethernet MTU of 1.5 kByte.

Fig. 4. Test-bed implementation and evaluation scenario

Three types of flows are transmitted. A BE gen-send/gen-recv UDP flow periodically creates congestion. Note that BE traffic ideally does not have
any influence on the priority queue. However, due to the non-preemptive L2
queue an additional delay of up to 1.2 ms can be measured for high priority
traffic on the bottleneck link. The EF PHB is used by an aggregate of a UDP
video stream generated by rude/crude and the ttcp TCP flow that is shown
in figure 3 (a). The video stream is a news sequence from [12] with an I-frame
distance of 12 frames and a rate of 25 frames per second. The TCP flow applies
the window scale option and uses a maximum window of 256 kByte that is
2 Mb to support a target throughput of 10 Mb/s up to a Round Trip Time
(RTT) of about 200 ms, which is reasonable for multi-domain scenarios.

4.2 Scenario without Consideration of Shaping

Figure 5 shows the video profile of the news sequence as it is input to the
test-bed domain. The periodic structure of the frame size clearly shows the
fixed encoding of the streams that consists of large I-frames and smaller P-
and B-frames. The spacing of the video frames is 40 ms. For processing of
the data a bin size of 10 ms has been applied. Further on, figure 5 shows the
profile of the video as it is output from the test-bed. Large parts of the input
and the output sequence match for the applied bin size of 10 ms, whereas a
number of noticeable differences remain, where a frame has been delayed by
the network. Most visible are the peaks that are generated, if two frames fall
into the same bin, indicating a significant output burstiness increase.

Fig. 5. Video profiles (frame size in kb over time in s; video input and output)


Figure 6 gives the cumulative functions of the video input respective output
that are called arrival functions in Network Calculus terminology. Due to the
resolution almost no differences can be noticed between input and output.
Fig. 6. Video arrival functions (cumulated frame size in Mb over time in s; video input and output)

Corresponding minimum arrival curves can be derived by self de-convolution of the arrival functions according to (1). The derived arrival curves for the video input respective video output are shown in figure 7. As for figure 6 before, figure 7 only provides an overview, since the resolution does not allow distinguishing between the input and output arrival curves.

Fig. 7. Video arrival curves (data in Mb over time in s)

A detailed view on the leftmost part of the arrival curves is given by figure 8, now with a bin size of 1 ms. Here a significant difference of the burstiness between input and output can be noticed. The corresponding single leaky bucket constraint for the input is approximated by (22), with a burst size of b_in = 0.1 Mb and a rate of r = 2 Mb/s.

α_in(t) = 0.1 Mb + 2 Mb/s · t    (22)

The measurement of the output yields the single leaky bucket constraint
in (23), with an increased burst size of bout = 0.17 Mb. Different parame-
ter sets are possible. However, for a fixed rate only one choice of the output
burst size results in a tight constraint.

αout (t) = 0.17 Mb + 2 Mb/s · t (23)

Fig. 8. Video arrival curves, detailed view (data in Mb over time in ms)

Analytically the scenario can be addressed as follows: All links except the
bottleneck link are over-provisioned and have only marginal impact. For ease

of presentation these links are neglected during the analysis. The bottleneck
link offers a service curve of β(t) according to (24) to the aggregate of the
video stream and the TCP flow.

β(t) = 40 Mb/s · [t − 1.2 ms] (24)

The latency of T = 1.2 ms results from the non-preemptive L2 queue on the


ATM bottleneck interface, as discussed above. The arrival curve of the inter-
fering TCP stream can be described partly by TCP parameters and partly by
the target data rate of the application, respective by the policing parameters
of the ingress router. Here we apply a window size of 256 kByte and a rate of
10 Mb/s corresponding to the arrival curve in (25).

αTCP (t) = 2 Mb + 10 Mb/s · t (25)

According to (14) the burstiness of the video stream as it is output from the bottleneck link can be derived according to (26).

b_out = b_in + r · (T + b_TCP/R) = 0.1 Mb + 2 Mb/s · (1.2 ms + 2 Mb / 40 Mb/s) = 0.2 Mb    (26)
The analytical output burst size clearly exceeds the measured output burst
size of 0.17 Mb. However, further improvement of the analytical results based
on the equations for shaped traffic is feasible.

4.3 Scenario with Consideration of Shaping

Note that additional information on the TCP flow is available, namely that it
passes a Fast Ethernet link before being multiplexed with the video stream.
Using the path MTU discovery option of TCP, we obtain a Maximum Segment
Size (MSS) for the TCP connection that is equivalent to the MTU size of
the Fast Ethernet link. Thereby the Ethernet link acts as a traffic shaper, however with a comparably large shaping rate of about 1500/(1500 + 18) · 100 Mb/s ≈ 98.8 Mb/s, accounting for 18 bytes of Ethernet encapsulation, and a remaining burst size of one Ethernet MTU, that is 0.012 Mb. Thus, the arrival curve of the TCP stream can be refined to be the dual leaky bucket constraint in (27).

αTCP (t) = min[0.012 Mb + 98.8 Mb/s · t, 2 Mb + 10 Mb/s · t] (27)

The corresponding time constant t_TCP of the dual leaky bucket constraint is given in (28).

t_TCP = (2 Mb − 0.012 Mb) / (98.8 Mb/s − 10 Mb/s) = 22.4 ms    (28)
To derive the output bound for the video stream, equations (12) and (13) are applied. The parameter θ(0) is derived in (29) according to the definition in (13).

θ(0) = sup_{v>0} [r · v + α_TCP(v) − R · v] / R + T
     = sup_{v>0} [2 Mb/s · v + α_TCP(v) − 40 Mb/s · v] / 40 Mb/s + 1.2 ms    (29)

The sup[. . .] is found for v = t_TCP = 22.4 ms and θ(0) = 35.5 ms results. With (12) we find the output burst size b_out in (30).

b_out = α_in(θ(0)) = 0.1 Mb + 2 Mb/s · 35.5 ms = 0.171 Mb    (30)

The resulting video output constraint is α_out(t) = 0.171 Mb + 2 Mb/s · t, which is tighter than the one derived before.
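The derivation of (29) and (30) can be reproduced numerically; the short sketch below (ours) samples the sup over v for the dual leaky bucket TCP constraint (27).

```python
def theta0(r_in, alpha_if, R, T, v_max=0.5, dv=1e-5):
    """theta(0) of eq. (13): sup_{v>0}[r_in*v + alpha_if(v) - R*v] / R + T."""
    sup, v = float("-inf"), dv
    while v <= v_max:
        sup = max(sup, r_in * v + alpha_if(v) - R * v)
        v += dv
    return sup / R + T

# Interfering TCP trunk according to (27), video trunk 0.1 Mb at 2 Mb/s, link (24).
alpha_tcp = lambda v: min(0.012 + 98.8 * v, 2.0 + 10.0 * v)        # Mb
th = theta0(2.0, alpha_tcp, 40.0, 0.0012)
print(round(th * 1e3, 1), "ms,", round(0.1 + 2.0 * th, 3), "Mb")   # 35.5 ms, 0.171 Mb
```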

With respect to multi-domain scenarios the question about the traffic specifi-
cation of the traffic as it is input to a downstream domain arises. The naive
approach would be to apply the traffic specification of the source throughout
all domains that are involved and to add traffic shapers at domain’s egress
routers that ensure that the output conforms to the specification. Doing so
adds an additional worst-case shaping delay of (0.071 Mb)/(2 Mb/s) = 36 ms
to reduce the output burst size from 0.171 Mb to 0.1 Mb, applying a rate of
2 Mb/s. The re-shaping delay has to be added to the per-domain edge-to-edge
delay bound of the video transmission that has been measured to be 56.4 ms.

Clearly the concept of re-shaping the domain’s outbound traffic is not generally
applicable. In particular with respect to deterministic performance bounds
the process is not scalable, since each interfering flow penalizes the video
stream from the above example in two ways: It introduces extra delays due
to competition for resources within the domain, here about 52.7 ms, and it
influences the traffic profile such that additional re-shaping delays apply at
the egress router of the domain, here 35 ms.

The preferred solution is to shape interfering flows at the ingress of the source
domain. Here, the video stream initially only has a comparably small burst
size, whereas the TCP stream is inherently bursty. Yet, we have shown in [22]
that shaping TCP streams allows for a very smooth operation, where the
goodput can be significantly larger than if congestion control and the saw-
tooth characteristic apply.

For the next experiment the TCP stream is shaped on the ingress router of the
test-bed with an average shaping rate of 12 Mb/s, as shown in figure 3 (b).
Following the discussion on shaping in legacy hardware from section 3, the

burstiness is decreased from initially 2 Mb to 2 · 12 Mb/s · 25 ms = 0.6 Mb,
where a worst-case burst size of two times the configured bucket depth remains
due to the interval-based shaper implementation.

Figure 9 gives the arrival curves for the video input and output, if the TCP
stream is shaped. The corresponding single leaky bucket constraint for the
input is αin (t) = 0.10 Mb + 2 Mb/s · t as before. The single leaky bucket
constraint for the output follows from the measurement according to (31),
with a burst size of b = 0.11 Mb.

αout (t) = 0.11 Mb + 2 Mb/s · t (31)

Fig. 9. Video arrival curves with TCP shaping (data in Mb over time in ms)

The analysis yields the following: The arrival curve of the interfering TCP
stream is dual leaky bucket constrained and can be described by (32).

αTCP (t) = min[0.6 Mb + 12 Mb/s · t, 2 Mb + 10 Mb/s · t] (32)

The flow of interest, that is the video stream, is as before constrained by a single leaky bucket, thus equations (12) and (13) can be applied. The parameter θ(0) is defined by (13) and given in (33).

θ(0) = sup_{v>0} [2 Mb/s · v + α_TCP(v) − 40 Mb/s · v] / 40 Mb/s + 1.2 ms    (33)

Obviously the sup[. . . ] is found for v → 0 ms and θ(0) = 16.2 ms results. With
(12) we find the output burst size bout according to (34).

bout = 0.1 Mb + 2 Mb/s · 16.2 ms = 0.132 Mb (34)

If in addition the shaping effect of the Ethernet link is taken into account,
the interfering TCP stream is constrained by three leaky buckets according
to (35).

αTCP (t) = min[0.012 Mb+98.8 Mb/s·t, 0.6 Mb+12 Mb/s·t, 2 Mb+10 Mb/s·t]
(35)
The relevant time constant t_TCP is given in (36).

t_TCP = (0.6 Mb − 0.012 Mb) / (98.8 Mb/s − 12 Mb/s) = 6.8 ms    (36)

The parameter θ(0) defined by (13) is derived in (37).

θ(0) = sup_{v>0} [2 Mb/s · v + α_TCP(v) − 40 Mb/s · v] / 40 Mb/s + 1.2 ms    (37)

The sup[. . . ] is found for v = tTCP = 6.8 ms and θ(0) = 11.8 ms results. Since
θ(0) > tTCP the leaky bucket constraint enforced by the shaper applies and
the improved output burst size bout according to (12) is derived in (38).

bout = 0.1 Mb + 2 Mb/s · 11.8 ms = 0.124 Mb (38)
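The same numerical evaluation applies to the triple leaky bucket constraint (35); only the arrival curve of the interfering trunk changes (again our sketch, self-contained for convenience).

```python
def theta0(r_in, alpha_if, R, T, v_max=0.5, dv=1e-5):
    """theta(0) of eq. (13), evaluated on a sampled grid of v."""
    sup, v = float("-inf"), dv
    while v <= v_max:
        sup = max(sup, r_in * v + alpha_if(v) - R * v)
        v += dv
    return sup / R + T

# Shaped TCP trunk constrained by three leaky buckets according to (35).
alpha_tcp = lambda v: min(0.012 + 98.8 * v, 0.6 + 12.0 * v, 2.0 + 10.0 * v)  # Mb
th = theta0(2.0, alpha_tcp, 40.0, 0.0012)
print(round(th * 1e3, 1), "ms,", round(0.1 + 2.0 * th, 3), "Mb")  # 11.8 ms, 0.124 Mb
```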

Clearly, the analytic bound is negatively affected by the large burstiness that
remains after shaping, where in the worst case bursts of two times the bucket
depth can enter the network.

The measured edge-to-edge delay bound for the video stream is 12.2 ms. An
additional worst case re-shaping delay of 5 ms applies to reduce the measured
output burstiness of the video stream from 0.11 Mb again to 0.1 Mb.

5 Conclusions

In this paper we have addressed the impacts of traffic shaping in aggregate


scheduling networks. We applied the notation of dual leaky bucket constrained
arrival curves to extend the analytical framework of Network Calculus to cover
traffic shaping in aggregate scheduling networks. A general per-flow service
curve has been derived for FIFO aggregate scheduling rate-latency service
elements. This equation has been solved for the special case of dual leaky
bucket constrained flows and dual leaky bucket output constraints have been
derived.

Note that input and output of a service element can be modelled by the same
type of arrival curve, only differing with respect to parameters, which is a
very fortunate property, since it allows addressing multiple-node scenarios.

In particular our analytical result can be applied to networks, where traffic
shapers are in effect at ingress routers only, whereby the dual leaky bucket
property of shaped traffic trunks can still be derived to hold within the core
of the network.

We provided an overview on the shaping capabilities of current router hard-


ware and showed relevant deviations compared to the concept of the ideal
shaper. Finally we discussed an application scenario that illustrates the appli-
cation of traffic shaping and compares measurements with derived analytical
bounds.

References

[1] Balakrishnan, H., Rahul, H., and Seshan, S., An Integrated Congestion Management Architecture for Internet Hosts, Proceedings of ACM SIGCOMM, 1999.

[2] Blake, S., et al., An Architecture for Differentiated Services, RFC 2475, 1998.

[3] Chang, C.-S., Performance Guarantees in Communication Networks, Springer,


TNCS, 2000.

[4] Cholvi, V., Echagüe, J., and Le Boudec, J.-Y., Worst Case Burstiness Increase due to FIFO Multiplexing, Elsevier Performance Evaluation, vol. 49, no. 1-4, pp. 491-506, 2002.

[5] Cruz, R. L., A Calculus for Network Delay, Part I: Network Elements in
Isolation, IEEE Transactions on Information Theory, vol. 37, no. 1, pp. 114-131,
1991.

[6] Cruz, R. L., A Calculus for Network Delay, Part II: Network Analysis, IEEE
Transactions on Information Theory, vol. 37, no. 1, pp. 132-141, 1991.

[7] Cruz, R. L., SCED+: Efficient Management of Quality of Service Guarantees,


Proceedings of IEEE Infocom, 1998.

[8] Davie, B., et al., An Expedited Forwarding PHB, RFC 3246, 2002.

[9] Fidler, M., Providing Internet Quality of Service based on Differentiated


Services Traffic Engineering, PhD Thesis, Aachen University, 2003.

[10] Fidler, M., On the Impacts of Traffic Shaping on End-to-End Delay Bounds in
Aggregate Scheduling Networks, Springer, LNCS 2811, Proceedings of Cost 263
QoFIS, pp. 1-10, 2003.

[11] Fidler, M., and Sander, V., A parameter based admission control for
differentiated services networks, Elsevier Computer Networks Journal, vol. 44,
no. 4, pp. 463-479, 2004.

[12] Fitzek, F., and Reisslein, M., MPEG-4 and H.263 Video Traces for Network
Performance Evaluation, IEEE Network Magazine, vol. 15, no. 6, pp. 40-54,
2001.

[13] Handley, M., Floyd, S., Padhye, J., and Widmer, J., TCP Friendly Rate Control (TFRC): Protocol Specification, RFC 3448, 2003.

[14] Le Boudec, J.-Y., Some Properties Of Variable Length Packet Shapers,


Proceedings of ACM Sigmetrics, 2001.

[15] Le Boudec, J.-Y., and Thiran, P., Network Calculus: A Theory of Deterministic Queuing Systems for the Internet, Springer, LNCS 2050, 2002.

[16] Lenzini, L., Mingozzi, E., and Stea, G., Delay Bounds for FIFO Aggregates: A
Case Study, Springer, LNCS 2811, Proceedings of COST 263 QoFIS, pp. 31-40,
2003.

[17] Nichols, K., Jacobson, V., and Zhang, L., A Two-bit Differentiated Services
Architecture for the Internet, RFC 2638, 1999.

[18] Ossipov, E., and Karlsson, G., The effect of per-input shapers on the delay
bound in networks with aggregate scheduling, Springer, LNCS 2899, Proceedings
of MIPS, pp. 107-118, 2003.

[19] Parekh, A. K., and Gallager, R. G., A Generalized Processor Sharing Approach
to Flow Control in Integrated Services Networks: The Single-Node Case,
IEEE/ACM Transactions on Networking, vol. 1, no. 3, pp. 344-357, 1993.

[20] Parekh, A. K., and Gallager, R. G., A Generalized Processor Sharing Approach
to Flow Control in Integrated Services Networks: The Multiple-Node Case,
IEEE/ACM Transactions on Networking, vol. 2, no. 2, pp. 137-150, 1994.

[21] Sander, V., Design and Evaluation of a Bandwidth Broker that Provides
Network Quality of Service for Grid Applications, PhD Thesis, Aachen
University, 2002.

[22] Sander, V., and Fidler, M., Evaluation of a Differentiated Services based
Implementation of a Premium and an Olympic Service, Springer, LNCS 2511,
Proceedings of Cost 263 QoFIS, pp. 36-46, 2002.

[23] Schulzrinne, H., Casner, S., Frederick, R., and Jacobson, V., RTP: A Transport Protocol for Real-Time Applications, RFC 1889, 1996.

[24] AIX Documentation, System Management Guide: Communications and


Networks, TCP/IP Quality of Service (QoS).

[25] Cisco Documentation, Policing and Shaping, Cisco IOS Quality of Service
Solutions Configuration Guide, Release 12.2.

Appendix

Proof 1 (Proof of Theorem 4) With sup_{0≤s≤θ} [α_1^j(t + s) − [β^j(s) − α_2^j(s − θ)]^+ · 1_{s>θ}] = α_1^j(t + θ), (39) follows from (9).

α_1^{j+1}(t) = inf_{θ≥0} sup[α_1^j(t + θ), sup_{s>θ} [α_1^j(t + s) − [β^j(s) − α_2^j(s − θ)]^+]]    (39)

Then, service curves of the rate-latency type β^j(t) = R^j · [t − T^j]^+ are assumed. The condition R^j · (s − T^j) − α_2^j(s − θ) ≥ 0 can be found to hold for θ ≥ θ_0 with θ_0 = sup_{s>0} [α_2^j(s) − R^j · s]/R^j + T^j, whereby θ_0 ≥ T^j. The derivation of this condition is based on (40) and (41) that are obtained from [15].

inf_{s>θ} [R^j · (s − T^j) − α_2^j(s − θ)] = inf_{v>0} [R^j · v − α_2^j(v) − R^j · T^j + R^j · θ] = R^j · θ − R^j · T^j − sup_{v>0} [α_2^j(v) − R^j · v]    (40)

R^j · θ − R^j · T^j − sup_{v>0} [α_2^j(v) − R^j · v] ≥ 0 ⇔ θ ≥ sup_{v>0} [α_2^j(v) − R^j · v]/R^j + T^j    (41)
Thus, (42) can be derived for θ ≥ θ_0, from which (43) follows immediately.

α_1^{j+1}(t) = inf_{θ≥θ_0} sup[α_1^j(t + θ), sup_{s>θ} [α_1^j(t + s) − R^j · (s − T^j) + α_2^j(s − θ)]]    (42)

α_1^{j+1}(t) = inf_{θ≥θ_0} sup[α_1^j(t + θ), sup_{v>0} [α_1^j(t + v + θ) − R^j · (v + θ − T^j) + α_2^j(v)]]    (43)

For different values of θ, a θ* is defined as a function of (t + θ) in (44). With θ* ≥ θ_0, (45) can be given.

θ*(t + θ) = sup_{v>0} [α_1^j(t + v + θ) − α_1^j(t + θ) + α_2^j(v) − R^j · v] / R^j + T^j    (44)

sup_{v>0} [α_1^j(t + v + θ) − α_1^j(t + θ) + α_2^j(v) − R^j · v] − R^j · (θ − T^j) ≥ 0 if θ ≤ θ*, and ≤ 0 if θ ≥ θ*    (45)
Then, with (44) and (45) the outer sup[. . .] in (43) can be solved, and (46) is derived.

α_1^{j+1}(t) = inf[ inf_{θ>θ*} [α_1^j(t + θ)] , inf_{θ_0≤θ≤θ*} [α_1^j(t + θ) + sup_{v>0} [α_1^j(t + v + θ) − α_1^j(t + θ) + α_2^j(v) − R^j · v] − R^j · (θ − T^j)] ]    (46)

The inf[. . .] in (46) is found for θ = θ*, resulting in (47) and (48), which proves theorem 4.
α_1^{j+1}(t) = α_1^j(t + θ)    (47)

θ = sup_{v>0} [α_1^j(v + t + θ) − α_1^j(t + θ) + α_2^j(v) − R^j · v] / R^j + T^j    (48)
Here, θ < θ0 is not investigated. Instead, it can be shown that the bound in (47)
and (48) for θ = θ∗ ≥ θ0 is attained. Thus, we cannot find a better solution
if θ < θ0 is considered. A thought experiment, which shows that the derived
bound is attained, can be found in [9]. It mimics a proof that is provided for
the special case of a single leaky bucket constrained trunk 1 in [15]. □

Proof 2 (Proof of Theorem 5) Based on (11), θ(t) is derived here for a


dual leaky bucket constrained flow 1. The sub-aggregate 2 arrival curve is
assumed to be concave to simplify the proof. Concavity is probably not re-
quired, yet, the aggregate of a number of dual leaky bucket constrained flows
is concave anyway.

Especially the sup_{v>0}[. . .] in the numerator of (11) is of interest in the following. It is given by (49).

(θ(t) − T^j) · R^j = sup_{v>0} [α_1^j(v + t + θ(t)) − α_1^j(t + θ(t)) + α_2^j(v) − R^j · v]    (49)

Case 1 (t = 0): For t = 0, (50) can be immediately derived from (49).

θ(t = 0) = sup_{v>0} [α_1^j(v + θ) − α_1^j(θ) + α_2^j(v) − R^j · v] / R^j + T^j    (50)

With α_1^{j+1}(t) = α_1^j(t + θ(t)) according to (10) we find the output burst size b_1^{j+1} = α_1^j(θ(0)), which proves (15).
Case 2 (0 < t < t_1^j − θ(t)): Note that θ(t) can be greater than t_1^j for all t ≥ 0, so that case 2 does not apply. However, this instance is covered by case 3 and addressed here only for illustrative purposes. If θ(t) ≥ t_1^j for all t ≥ 0 then θ(0) ≥ t_1^j and (49) simplifies to (65), so that θ(t) is constant for all t ≥ 0. With θ(t) = θ(0) ≥ t_1^j it follows that b_1^{j+1} = b̄_1^{j+1} according to (15) and (16). Further on, since r̄_1 ≥ r_1, the parameters (r̄_1, b_1^{j+1}) of the dual leaky bucket constraint become redundant. The trunk 1 output constraint is reduced to a single leaky bucket constraint with parameters (r_1, b̄_1^{j+1}).

For 0 < t < t_1^j − θ(t), a differentiation is made concerning the variable v(t), for which the sup_{v>0}[. . .] in (49) is found. We distinguish between two cases: v(t) + t + θ(t) < t_1^j and v(t) + t + θ(t) ≥ t_1^j.
Case 2a ((0 < t < t_1^j − θ(t)) ∧ (t < t_1^j − v(t) − θ(t))): The sup[. . .] in (49) is derived based on (51)-(53) for 0 ≤ t ≤ t_1^j − θ(t) and t ≤ t_1^j − v(t) − θ(t).

α_1^j(v + t + θ) = b_1^j + r̄_1 · (v + t + θ)    (51)
α_1^j(t + θ) = b_1^j + r̄_1 · (t + θ)    (52)
α_1^j(v + t + θ) − α_1^j(t + θ) = r̄_1 · v    (53)

Replacing (53) in (49) yields (54).

(θ − T^j) · R^j = sup_{0<v≤t_1^j−t−θ} [r̄_1 · v + α_2^j(v) − R^j · v]    (54)
For differentiable and concave α_2^j(v), the sup[. . .] in (54) is found for a unique v, where ∂α_2^j(v)/∂v = R^j − r̄_1 with 0 < v ≤ t_1^j − t − θ. Thus, v is independent of t, which can also be shown to hold for non-differentiable, for example piecewise linear, α_2^j(v).

Now, define a ∆t > 0 with t + ∆t ≤ t_1^j − θ(t + ∆t) and t + ∆t ≤ t_1^j − v − θ(t + ∆t). At time t + ∆t a modified θ(t + ∆t) = θ(t) + ∆θ can be observed. However, with (54) we find that θ is independent of t and constant over the investigated interval, so that ∆θ = 0.

With α_1^{j+1}(t) = α_1^j(t + θ) according to (10) the output arrival curve of trunk 1 increases with α_1^{j+1}(t + ∆t) − α_1^{j+1}(t) = α_1^j(t + ∆t + θ + ∆θ) − α_1^j(t + θ) = r̄_1 · ∆t. Thus, with case 1 the leaky bucket parameters (r̄_1, b_1^{j+1}) in theorem 5 are proven for case 2a.
Case 2b ((0 < t < t_1^j − θ(t)) ∧ (t ≥ t_1^j − v(t) − θ(t))): Again, the sup[. . .] in (49) is derived based on (55)-(57) for 0 < t ≤ t_1^j − θ(t) and t ≥ t_1^j − v(t) − θ(t). Note that b_1^j + r̄_1 · t_1^j = b̄_1^j + r_1 · t_1^j.

α_1^j(v + t + θ) = b̄_1^j + r_1 · (v + t + θ) = b_1^j + r̄_1 · t_1^j + r_1 · (v + t + θ − t_1^j)    (55)
α_1^j(t + θ) = b_1^j + r̄_1 · (t + θ)    (56)
α_1^j(v + t + θ) − α_1^j(t + θ) = r̄_1 · (t_1^j − t − θ) + r_1 · (v + t + θ − t_1^j)    (57)

Substitution of (57) in (49) yields (58).

(θ − T^j) · R^j = sup_{v≥t_1^j−t−θ} [r̄_1 · (t_1^j − t − θ) + r_1 · (v + t + θ − t_1^j) + α_2^j(v) − R^j · v]    (58)

Again, for differentiable and concave α_2^j(v), the sup[. . .] in (58) is found for a unique v, where ∂α_2^j(v)/∂v = R^j − r_1 with v ≥ t_1^j − t − θ. As before, v is independent of t, which can also be shown to hold for non-differentiable α_2^j(v). However, for t = t_1^j − v(t) − θ(t) the special case of ∂α_2^j(v)/∂v < R^j − r_1 can be attained, which is excluded here and addressed afterwards.

Now, define a ∆t > 0 with t + ∆t ≤ t_1^j − θ(t + ∆t). At time t + ∆t a modified θ(t + ∆t) = θ(t) + ∆θ can be observed. As given by (57) we find a decrease of (∆t + ∆θ) · r̄_1 and an increase of (∆t + ∆θ) · r_1 for the sup[. . .] in (49). With (58), ∆θ can be defined according to (59).

θ(t + ∆t) − θ(t) = ∆θ = (∆t + ∆θ) · (r_1 − r̄_1)/R^j = −∆t · (r̄_1 − r_1)/(R^j + r̄_1 − r_1)    (59)

We find that ∆θ < 0 and ∆t > −∆θ, so that t + ∆t > t_1^j − v − θ(t + ∆t) holds for t ≥ t_1^j − v − θ(t) and ∆t > 0.
Figure 10 gives an example, which illustrates equation (49) for this case. The arrival curves of trunk 1 and 2 are shown, whereby the trunk 1 arrival curve is dual leaky bucket constrained. It is moved to the left by t + θ and downwards by b_1^j + r̄_1 · (t + θ). Further on, the rate of the service curve R^j is displayed. Obviously, the value of v for which the sup[. . .] is found is not affected, if the arrival curve of trunk 1 is moved further to the left. However, the value of the sup[. . .] in (49) decreases as shown by (57).
Fig. 10. Example case of (49)

With α_1^{j+1}(t) = α_1^j(t + θ) according to (10) we find that the output arrival curve of trunk 1 increases with α_1^{j+1}(t + ∆t) − α_1^{j+1}(t) = α_1^j(t + ∆t + θ + ∆θ) − α_1^j(t + θ) = (∆t + ∆θ) · r̄_1 < ∆t · r̄_1. Thus, applying ∆θ = 0 yields the leaky bucket parameters (r̄_1, b_1^{j+1}) in theorem 5, which overestimate the output arrival curve. Yet, arrival curves are defined to give an upper bound on the respective arrival functions, which allows loosening arrival constraints.

Now, the special case with t = t_1^j − v(t) − θ(t) and ∂α_2^j(v)/∂v < R^j − r_1 is covered, where r_1 · (v + t + θ − t_1^j) = 0 in (58). Define a ∆t > 0 with ∆t → 0, which results in a corresponding ∆θ. For ∆t > −∆θ the sup[. . .] in (58) allows applying smaller values v that fulfill v ≥ t_1^j − t − θ. For concave trunk 2 arrival curves eventually ∂α_2^j(v)/∂v = R^j − r_1 is reached for v = t_1^j − t − ∆t − θ − ∆θ. However, the sup[. . .] in (58) is found for v = t_1^j − t − ∆t − θ − ∆θ as long as ∂α_2^j(v)/∂v < R^j − r_1. Thus, we find that v + t + θ = t_1^j is constant, which allows deriving (60) from (58).

(θ − T^j) · R^j = r̄_1 · (t_1^j − t − θ) + α_2^j(t_1^j − t − θ) − R^j · (t_1^j − t − θ)    (60)

Similar to the derivation of (59), we derive (61) from (60).

∆θ = [α_2^j(t_1^j − t − ∆t − θ − ∆θ) − α_2^j(t_1^j − t − θ)]/R^j + (∆t + ∆θ) · (R^j − r̄_1)/R^j    (61)

In the absence of a concrete trunk 2 arrival curve, we cannot derive a solution for (61). However, a range for ∆θ can be given. For concave trunk 2 arrival curves α_2^j(t) we know that ∂α_2^j(v)/∂v ≥ R^j − r̄_1 at v = t_1^j − t − θ. Otherwise the sup[. . .] in (49) is found for v < t_1^j − t − θ and case 2a applies. Thus, (62) can be given as an upper bound.

∆θ ≤ −(∆t + ∆θ) · (R^j − r̄_1)/R^j + (∆t + ∆θ) · (R^j − r̄_1)/R^j = 0    (62)
Further on, with ∂α_2^j(v)/∂v ≤ R^j − r_1 for v ≥ t_1^j − t − θ, we find (63).

∆θ ≥ −(∆t + ∆θ) · (R^j − r_1)/R^j + (∆t + ∆θ) · (R^j − r̄_1)/R^j = −(∆t + ∆θ) · (r̄_1 − r_1)/R^j    (63)

The lower bound in (63) is the same equation as derived in (59), which yields (64), so that ∆t ≥ −∆θ holds.

∆θ ≥ −∆t · (r̄_1 − r_1)/(R^j + r̄_1 − r_1)    (64)

Now, with 0 ≥ ∆θ ≥ −∆t · (r̄_1 − r_1)/(R^j + r̄_1 − r_1), we apply the same argumentation as before and apply ∆θ = 0, yielding the leaky bucket parameters (r̄_1, b_1^{j+1}) in theorem 5.
(r1 , bj+1
1 ) in theorem 5.
j j
Case 3 (t ≥ t1 − θ(t)) For t + θ(t) ≥ t1 , (65) can be derived immediately
from (49).
supv>0 [r1 · v + α2j (v) − Rj · v]
θ(t) = + Tj (65)
Rj
j
Note that θ(t) according to (65) is constant for t ≥ t1 − θ(t). Further on
j
θ(0) ≥ θ(t ≥ t1 − θ(t)), since r1 ≥ r1 is assumed. With (10), the output arrival
j
curve of trunk 1 is given as α1j+1 (t) = α1j (t + θ(t)). The conditions t + θ(t) ≥ t1 ,
j j
and thus α1j (t + θ(t)) = b1 + r1 · (t + θ(t)) hold for t ≥ t1 − θ(t). Resulting, the
j
output arrival curve of trunk 1 increases with rate r1 for t ≥ t1 − θ(t). The
j+1 j+1 j
output burst size bi can be derived as bi = α1j (t+θ(t))−r1 ·t = b1 +r1 ·θ(t)
j j+1 j j
for any t ≥ t1 − θ(t), so that bi = b1 + r1 · θ(t1 ) holds, which proves that
j+1
(r1 , b1 ) according to (16) is a trunk 1 output constraint. 2

26

You might also like