1 This work was supported in part under grant graduate school (GRK) 643 Software for Communication Systems.
2 This work was supported in part by the Path Allocation in Backbone networks (PAB) project funded by the German Research Network (DFN) and the Federal Ministry of Education and Research (BMBF).
Differentiated Services (DS) [2] addresses the scalability problems of the former
Integrated Services approach by an aggregation of flows to a small number
of traffic classes. Packets are identified by simple markings that indicate the
respective class. In the core of the network, routers do not need to determine to
which flow a packet belongs, only which aggregate behavior has to be applied.
Edge routers mark packets to indicate whether they are within or out of profile;
out-of-profile packets might even be discarded at the edge router. A particular
marking on a packet indicates a so-called Per Hop Behavior (PHB) that has to
be applied when forwarding the packet. The
Expedited Forwarding (EF) PHB [8] is intended for building a service that
offers low loss and low delay, namely a Premium Service.
The particular advantage of this solution is that it does not rely on any specific
shaping support in the end-systems. While some operating systems offer optional
support for traffic shaping, the QoS kernel extension of AIX [24] being one example,
such support is neither generally available nor is there a standardized programming
interface for controlling it. Similarly, advances in application level protocols, such
as using TCP-Friendly Rate Control 3 [13] on top of the Real-Time Protocol [23],
only apply to applications that already make use of these emerging protocols.
3 TFRC could also be implemented using a Congestion Manager [1] that relieves
transport protocols and applications from having to (re-)implement congestion con-
trol. However, this concept would require a modification of the IP stack.
2 Network Calculus
Flows can be described by arrival functions F_i^j(t) that are given as the cu-
mulated number of bits seen in an interval [0, t]. Arrival curves α_i^j(t) are de-
fined to give an upper bound on the arrival functions, where α_i^j(t_2 − t_1) ≥
F_i^j(t_2) − F_i^j(t_1) for all t_2 ≥ t_1 ≥ 0. The tightest such arrival curve is given in (1).

α_i^j(t) = F̄_i^j(t),   F̄_i^j(t) = sup_{s≥0} [F_i^j(t + s) − F_i^j(s)]   (1)
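For measured traces, (1) can be evaluated directly on a discretized time axis. The following Python sketch is our own illustration, not part of the paper's tool chain; it assumes the arrival function F is given as cumulated bits per bin of width dt.

import numpy as np

def empirical_arrival_curve(F):
    # F[k] is the cumulated number of bits seen in [0, k*dt].
    # Returns alpha[k] = sup_s (F[s+k] - F[s]), the tightest arrival curve
    # of the trace on the same grid, cf. (1).
    F = np.asarray(F, dtype=float)
    K = len(F)
    alpha = np.zeros(K)
    for k in range(1, K):
        alpha[k] = np.max(F[k:] - F[:K - k])
    return alpha

# example: a constant-rate trace with a single burst in the fourth bin
F = np.cumsum([0.0, 5.0, 5.0, 30.0, 5.0, 5.0, 5.0])
print(empirical_arrival_curve(F))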
A typical constraint for incoming flows is given by the leaky bucket algorithm,
which allows for bursts of a certain size and a defined sustainable rate. Non-
conforming traffic can either be shaped or dropped.
The conforming traffic of such a leaky bucket is upper bounded by the affine arrival
curve that is given in (2) for t > 0; it is zero for t = 0.

α_i^j(t) = b_i^j + r_i · t   (2)
The indexing refers to a flow i at a link or queuing and scheduling unit j. Note
that the burst term b_i^j in most cases depends on the point of observation, that
is, the link j, whereas the rate r_i usually does not change throughout the
network.
α_i^j(t) = min[ b̄_i^j + r̄_i · t , b_i^j + r_i · t ]   (3)

Here, r̄_i and b̄_i^j denote the peak rate and the corresponding burst term; the two
leaky buckets intersect at the time t_i^j.
An arrival curve of the type in (3) applies if a single leaky bucket constrained
flow with the arrival curve α_i^j(t) = b_i + r_i · t traverses a combination of an
ideal bit-by-bit traffic shaper with rate r_i and a packetizer with a maximum
packet size l_max [3,14]. The concept of a packetizer was introduced with the
intention to model the effect of variable length packets. In reality, flows are not
continuous in time, due to a minimum granularity that needs to be taken into
account. Packetizers are used to convert a continuous input into a packetized
sequence with a given maximum packet size. As a consequence a burst size of
l_max remains and the output arrival curve is dual leaky bucket constrained
with parameters (r_i, b_i^j) and (r̄_i, l_max). Note that actual implementations differ
from the combination of an ideal shaper and a packetizer. As a result b_i^j ≥ l_max
holds. The shaper adds a worst-case delay of (b_i − b_i^j)/r_i.
(Figure: the dual leaky bucket arrival curve of (3), data over time, with the rates r̄_i and r_i, the burst terms b̄_i^j and b_i^j, and the intersection point t_i^j.)
2.2 Multiplexing
The sum of the arrival curves of two dual leaky bucket constrained flows 1 and 2,
with t_1^j ≤ t_2^j, is given in (5).

α_1^j(t) + α_2^j(t) =
  b̄_1^j + b̄_2^j + (r̄_1 + r̄_2) · t ,  if t ≤ t_1^j ∧ t ≤ t_2^j
  b_1^j + b̄_2^j + (r_1 + r̄_2) · t ,  if t > t_1^j ∧ t ≤ t_2^j
  b_1^j + b_2^j + (r_1 + r_2) · t ,  else
(5)
The result extends to traffic aggregates that consist of n dual leaky bucket con-
strained flows i = 1 . . . n with parameters t_i^j and an in-order numbering of flows
such that t_i^j ≤ t_{i+1}^j, ∀i = 1 . . . n − 1. Note that the ordering allows describing
the aggregate arrival constraint by n + 1 instead of 2n leaky buckets. The re-
sulting aggregate arrival curve is an increasing, concave, and piece-wise linear
function.
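Because each dual leaky bucket curve is a concave piecewise-linear minimum of two affine functions, the aggregate can be evaluated simply as the pointwise sum of the per-flow curves. The sketch below is our own illustration with made-up parameters.

import numpy as np

def dual_leaky_bucket(t, b_peak, r_peak, b, r):
    # alpha(t) = min(b_peak + r_peak*t, b + r*t) for t > 0, and zero at t = 0, cf. (3)
    t = np.asarray(t, dtype=float)
    return np.where(t > 0, np.minimum(b_peak + r_peak * t, b + r * t), 0.0)

def aggregate_arrival_curve(t, flows):
    # sum of dual leaky bucket curves; concave and piecewise linear, cf. (5)
    return sum(dual_leaky_bucket(t, *f) for f in flows)

# made-up flows (b_peak, r_peak, b, r) in Mb and Mb/s
flows = [(0.012, 98.8, 0.1, 2.0), (0.012, 98.8, 0.5, 5.0), (0.012, 98.8, 2.0, 10.0)]
print(aggregate_arrival_curve(np.linspace(0.0, 0.5, 6), flows))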
A service curve of the rate-latency type is given in (6).

β^j(t) = R^j · [t − T^j]^+   (6)
Service curves of the rate-latency type are implemented for example by Prior-
ity Queuing (PQ) or by Generalized Processor Sharing (GPS) and Weighted
Fair Queuing (WFQ), where certain transport resources that correspond to
the rate R^j can be assigned to selected traffic. The latency T^j applies for
non-preemptive scheduling, where in case of PQ low priority packets that are
already in service have to complete service before high priority packets that
arrive in the meantime can be scheduled. Thus, T^j can be given as the quo-
tient of the maximum packet size and the link rate. The same latency applies
for Packet-by-packet GPS (PGPS), for which related models can be found
in [19,15].
Consider two flows 1 and 2 that are served in FIFO order and in an aggregate
manner on a link j, where the service curve that is seen by the aggregate is given by
β^j(t). Equation (7) defines a family of service curves β_θ^j(t) with an arbitrary
parameter θ ≥ 0 that are offered to flow 1. The term 1_{t>θ} is one for t > θ and
zero for t ≤ θ.

β_θ^j(t) = [β^j(t) − α_2^j(t − θ)]^+ · 1_{t>θ}   (7)
Based on the above concepts, bounds for the backlog and the delay can be
derived as the maximal vertical deviation and the maximal horizontal deviation,
respectively, between the arrival curve and the service curve. Further on, of particular
interest for aggregate scheduling networks are constraints that can be derived
for the output of a traffic trunk from a queuing and scheduling unit.
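For the frequent special case of a single leaky bucket arrival curve α(t) = b + r · t served by a rate-latency curve β^j(t) = R^j · [t − T^j]^+ with R^j ≥ r, the two deviations have simple closed forms. The sketch below is our own illustration, with assumed example parameters resembling the video flow and the bottleneck of the application example later in the paper.

def bounds_leaky_bucket_rate_latency(b, r, R, T):
    # backlog bound: maximal vertical deviation; delay bound: maximal horizontal deviation
    assert R >= r, "stability requires the service rate to be at least the sustained rate"
    backlog_bound = b + r * T
    delay_bound = T + b / R
    return backlog_bound, delay_bound

# assumed example: b = 0.1 Mb, r = 2 Mb/s, R = 40 Mb/s, T = 1.2 ms
print(bounds_leaky_bucket_rate_latency(0.1, 2.0, 40.0, 0.0012))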
The following approach can be applied to derive tight per-flow output con-
straints in case of aggregate scheduling.
A solution of (9) for rate-latency service curves and single leaky bucket con-
strained flows 1 and 2 is provided in [15]. However, a general solution for
arbitrary arrival curves has to our knowledge been missing so far. The rele-
vant extensions of current theory, considering rate-latency service curves and
general arrival curves, in particular dual leaky bucket constraints, are derived in
the following.
Theorem 4 (Output Bound, Rate-Latency Case) Consider two flows 1
and 2 that are α_1^j and α_2^j upper constrained, respectively. Assume these flows are
served in FIFO order and in an aggregate manner by a node j that is char-
acterized by a minimum service curve β^j(t) of the rate-latency type β^j(t) =
R^j · [t − T^j]^+. Then, the output of flow 1 is α_1^{j+1} upper constrained according
to (10), where θ is a function of t and has to comply with (11). A related
equation for simple rate service curves is derived in [4].
b_1^{j+1} = α_1^j(θ(0))   (12)

b_1^{j+1} = b_1^j + r_1 · (T^j + b_2^j / R^j)   (14)

b_1^{j+1} = α_1^j(θ(0))   (15)

b_1^{j+1} = b_1^j + r_1 · θ(t_1^j)   (16)
The parameter θ(t) according to (11) is given in (17) for t = 0 and in (18)
for t = t_1^j.

ν = sup{ t : r_1 + Σ_{i=1..n} r̄_{2,i} · 1_{t ≤ t_{2,i}^j} + Σ_{i=1..n} r_{2,i} · 1_{t > t_{2,i}^j} ≥ R^j · 1_{t>0} }   (21)
The sup_v[. . .] in (19) is found for the largest time instance v for which the
first derivative of [. . .] is still positive. The corresponding time instances are
derived in (20) and (21), respectively. Thus, ν̄ and ν are the values of v
for which the sup_v[. . .] in the first form, respectively in the second and third form,
of (19) and also in (18) is attained.
3 Legacy Hardware
The concept of leaky bucket shapers provides the framework for traffic shap-
ing. A leaky bucket shaper consists of a bucket, which stores tokens, and a
holding queue, where incoming packets can be buffered. Whenever a packet
is forwarded from the holding queue to the outgoing interface, a number of
tokens that corresponds to the packet size is placed into the token bucket.
Packets are forwarded as long as the bucket offers sufficient space for the
corresponding tokens without overflowing; otherwise, packets have to be queued. The
bucket has a depth of b and it leaks at a constant rate r. Thus, outgoing traffic
cannot exceed a burst size of b and a sustainable rate of r.
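The shaper just described can be modelled in a few lines; the event-driven sketch below is a simplified abstraction (continuous leaking, unlimited holding queue), not the router implementation examined in the measurements.

from collections import deque

def leaky_bucket_shaper(packets, b, r):
    # packets: list of (arrival_time, size) with non-decreasing arrival times
    # b: bucket depth, r: leak rate; returns the departure time of each packet
    queue = deque(packets)
    fill, last, t = 0.0, 0.0, 0.0
    departures = []
    while queue:
        arrival, size = queue.popleft()
        t = max(t, arrival)
        fill = max(0.0, fill - r * (t - last))   # tokens leak since the last departure
        if fill + size > b:                      # wait until the tokens fit without overflow
            t += (fill + size - b) / r
            fill = b - size
        fill += size                             # place tokens corresponding to the packet
        last = t
        departures.append(t)
    return departures

# a burst of six 0.05 Mb packets shaped with depth b = 0.1 Mb and rate r = 2 Mb/s
print(leaky_bucket_shaper([(0.0, 0.05)] * 6, b=0.1, r=2.0))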
The experiment used a modified version of the ttcp program that allows
controlling the socket buffer write frequency. Here, TCP packets were sent at an
average rate of 10 Mb/s. The socket buffer size was 256 kByte, to reflect long-
distance transmissions across several domains. The virtual application uses
data buffers of 256 kByte that are written periodically to the TCP socket.
Thus, a bursty traffic profile is created. Note that TCP applications can always
produce bursts of up to a full window. The effects of bursty and synchronized
TCP sources are well-known in the Internet.
In the left of figure 3, the shaper was disabled. We recognize the behavior
of TCP, which injects data at link speed whenever the available window
allows it. Here, the sender generates bursts of about 256 kByte. This burst
structure is reflected by the acknowledged packets. By enabling shaping with
an average rate of 12 Mb/s the bursty TCP flow is smoothed according to
the shaper configuration. In this case, the acknowledgements were received in
much smaller bursts. In fact, we recognize the shaping parameters used in this
scenario. Since the time interval for emptying the token bucket was 25 ms, we
allowed for bursts of up to 12 Mb/s · 25 ms = 0.3 Mb within each interval.
Yet, a double step can be recognized at each write operation of the application.
Recall that data is written to the socket at a mean rate of 10 Mb/s, whereas
the mean shaping rate is 12 Mb/s. Thus, the bucket of the shaper will be
emptied some time before a write operation takes place. Hence, the first step
of 0.3 Mb is caused by the regular bucket depth b that can be fully exploited.
However, if the write operation takes place shortly before the end of a 25 ms
interval of the shaper, the bucket is emptied again at the start of the next interval,
allowing the immediate injection of a second burst of 0.3 Mb into the network.
(Figure 3: TCP sequence traces, bytes over time in seconds; (a) shaper disabled, (b) shaper enabled.)
Thus, the interval-based shaper implementation has the unfortunate property
that, in the worst case, bursts of twice the bucket depth can pass the shaper and
enter the network virtually at once.
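This worst case can be made concrete with a small model of an interval-based shaper that refills its full committed burst Bc at every interval boundary Tc; the function below is an idealized abstraction of the observed behaviour, not vendor code.

def interval_shaper_back_to_back(Bc, Tc, write_time, write_size, link_rate):
    # data that can leave back to back around one interval boundary for a single
    # large socket write, together with the idle gap (if any) between the two bursts
    first = min(write_size, Bc)                     # credit left in the current interval
    boundary = ((write_time // Tc) + 1) * Tc        # next refill instant
    finish_first = write_time + first / link_rate   # the first burst leaves at link speed
    backlog = write_size - first
    second = min(backlog, Bc) if backlog > 0 else 0.0
    gap = max(0.0, boundary - finish_first)         # zero gap: bursts are back to back
    return first + second, gap

# Bc = 12 Mb/s * 25 ms = 0.3 Mb; a 2 Mb write arriving 1 ms before a boundary
print(interval_shaper_back_to_back(Bc=0.3, Tc=0.025, write_time=0.024,
                                   write_size=2.0, link_rate=100.0))   # (0.6, 0.0)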
Scalability is one of the main reasons for the introduction of aggregate schedul-
ing to the DS framework. Yet, aggregate scheduling implies multiplexing of
flows to traffic aggregates, which has significant impacts on traffic properties
as well as on QoS guarantees. While statistical multiplexing benefits from a
large number of multiplexed flows, deterministic multiplexing does not. Each
additional flow that is multiplexed to an aggregate increases the potential
burst size of each other flow. Thus, the traffic specification of a flow or traffic
trunk that applies at the egress of a DS domain differs from the initial traf-
fic specification at the ingress, causing additional difficulties in multi-domain
scenarios. A feasible solution [17] that is investigated here is to shape flows
with a large burst size, to achieve robustness against interference and to sup-
port scalability based on controlled competition for resources, even in case of
a large number of interfering flows.
This section combines the results from section 3 on shaping in legacy hardware
and the analytical treatment from section 2 to provide an application example.
(Figure: test-bed topology — SunOS 5.8 end systems running the ttcp sender and receiver and gen_send, a Linux 2.4.13 host running gen_recv, and Cisco 7200 routers with IOS 12.2(13)T, interconnected by Fast Ethernet links and an Ethernet switch; the routers implement shaping and priority queuing, and an ATM link forms the bottleneck.)
Best Effort (BE) background traffic periodically creates congestion. Note that BE traffic
ideally does not have any influence on the priority queue. However, due to the
non-preemptive L2 queue an additional delay of up to 1.2 ms can be measured
for high priority traffic on the bottleneck link. The EF PHB is used by an
aggregate of a UDP video stream generated by rude/crude and the ttcp TCP
flow that is shown in figure 3 (a). The video stream is a news sequence from [12]
with an I-frame distance of 12 frames and a rate of 25 frames per second. The
TCP flow applies the window scale option and uses a maximum window of
256 kByte, that is, 2 Mb, to support a target throughput of 10 Mb/s up to a
Round Trip Time (RTT) of about 200 ms, which is reasonable for multi-domain
scenarios.
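The window dimensioning follows the usual bandwidth-delay product argument; a two-line check (our own arithmetic, not from the paper):

window_bits = 256 * 1024 * 8        # 256 kByte window, about 2 Mb
target_rate = 10e6                  # 10 Mb/s target throughput
print(window_bits / target_rate)    # largest RTT that still sustains 10 Mb/s, about 0.21 s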
Figure 5 shows the video profile of the news sequence as it is input to the
test-bed domain. The periodic structure of the frame size clearly shows the
fixed encoding of the streams that consists of large I-frames and smaller P-
and B-frames. The spacing of the video frames is 40 ms. For processing of
the data a bin size of 10 ms has been applied. Further on, figure 5 shows the
profile of the video as it is output from the test-bed. Large parts of the input
and the output sequence match for the applied bin size of 10 ms, whereas a
number of noticeable differences remain, where a frame has been delayed by
the network. Most visible are the peaks that are generated, if two frames fall
into the same bin, indicating a significant output burstiness increase.
(Figure 5: video frame sizes of the news sequence at the input and output of the test-bed, plotted over time in seconds.)
(Figure: cumulated frame size in Mb of the video input and output over time in seconds.)
A detailed view on the leftmost part of the arrival curves is given by figure 8,
now with a bin size of 1 ms. Here a significant difference of the burstiness
between input and output can be noticed. The corresponding single leaky
bucket constraint for the input is approximated by (22) with a burst size of
b_in = 0.1 Mb.

α_in(t) = 0.1 Mb + 2 Mb/s · t   (22)
(Figure: empirical arrival curves of the video input and output, data in Mb over time in seconds.)
The measurement of the output yields the single leaky bucket constraint
in (23), with an increased burst size of b_out = 0.17 Mb.

α_out(t) = 0.17 Mb + 2 Mb/s · t   (23)

Different parameter sets are possible. However, for a fixed rate only one choice of
the output burst size results in a tight constraint.
(Figure 8: detailed view of the leftmost part of the video input and output arrival curves, data in Mb over time in ms.)
Analytically the scenario can be addressed as follows: All links except the
bottleneck link are over-provisioned and have only marginal impact. For ease
of presentation these links are neglected during the analysis. The bottleneck
link offers a service curve of β(t) according to (24) to the aggregate of the
video stream and the TCP flow.

β(t) = 40 Mb/s · [t − 1.2 ms]^+   (24)
According to (14), the burstiness of the video stream as it is output from the
bottleneck link is derived in (26).
b_out = b_in + r · (T + b_TCP / R) = 0.1 Mb + 2 Mb/s · (1.2 ms + 2 Mb / 40 Mb/s) ≈ 0.2 Mb   (26)
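The bound (26) can be re-evaluated in one line; the helper below is our own illustration of (14) with the scenario parameters.

def output_burst_single_lb(b_in, r, T, b_cross, R):
    # (14): flow (r, b_in) multiplexed in FIFO with a cross flow of burst b_cross
    # on a rate-latency node (R, T)
    return b_in + r * (T + b_cross / R)

# video: b_in = 0.1 Mb, r = 2 Mb/s; node: R = 40 Mb/s, T = 1.2 ms; TCP burst 2 Mb
print(output_burst_single_lb(0.1, 2.0, 0.0012, 2.0, 40.0))   # about 0.2 Mb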
The analytical output burst size clearly exceeds the measured output burst
size of 0.17 Mb. However, further improvement of the analytical results based
on the equations for shaped traffic is feasible.
Note that additional information on the TCP flow is available, namely that it
passes a Fast Ethernet link before being multiplexed with the video stream.
Using the path MTU discovery option of TCP, we obtain a Maximum Segment
Size (MSS) for the TCP connection that is equivalent to the MTU size of
the Fast Ethernet link. Thereby the Ethernet link acts as a traffic shaper,
however, with a comparably large shaping rate of about 1500/(1500 + 18) ·
100 Mb/s ≈ 98.8 Mb/s, accounting for 18 bytes of Ethernet encapsulation, and
a remaining burst size of one Ethernet MTU that is 0.012 Mb. Thus, the
arrival curve of the TCP stream can be refined to be the dual leaky bucket
constraint in (27).

α_TCP(t) = min[0.012 Mb + 98.8 Mb/s · t, 2 Mb + 10 Mb/s · t]   (27)
The corresponding time constant tTCP of the dual leaky bucket constraint is
given in (28).
t_TCP = (2 Mb − 0.012 Mb) / (98.8 Mb/s − 10 Mb/s) = 22.4 ms   (28)
To derive the output bound for the video stream, equations (12) and (13)
are applied. The parameter θ(0) is derived in (29) according to the definition
in (13). The sup[. . .] is found for v = t_TCP = 22.4 ms, and θ(0) = 35.5 ms results.
With (12) we find the output burst size b_out = 0.1 Mb + 2 Mb/s · 35.5 ms = 0.171 Mb in (30).
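The quoted values can be reproduced numerically. Following (48), for a single leaky bucket constrained flow of interest θ(0) reduces to T^j + sup_v[α_2^j(v) − (R^j − r_1) · v]/R^j; the sketch below (our own illustration) evaluates the sup on a fine grid rather than in closed form.

import numpy as np

def theta0_single_lb(r1, alpha2, R, T, v_max=1.0, steps=200000):
    # theta(0) = T + sup_v[alpha2(v) - (R - r1)*v] / R
    v = np.linspace(0.0, v_max, steps)
    return T + np.max(alpha2(v) - (R - r1) * v) / R

alpha_tcp = lambda v: np.minimum(0.012 + 98.8 * v, 2.0 + 10.0 * v)   # (27), in Mb and s
theta0 = theta0_single_lb(r1=2.0, alpha2=alpha_tcp, R=40.0, T=0.0012)
b_out = 0.1 + 2.0 * theta0           # (12): b_out = alpha_in(theta(0))
print(theta0, b_out)                 # about 0.0355 s and 0.171 Mb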
With respect to multi-domain scenarios the question about the traffic specifi-
cation of the traffic as it is input to a downstream domain arises. The naive
approach would be to apply the traffic specification of the source throughout
all domains that are involved and to add traffic shapers at the domains' egress
routers that ensure that the output conforms to the specification. Doing so
adds an additional worst-case shaping delay of (0.071 Mb)/(2 Mb/s) = 36 ms
to reduce the output burst size from 0.171 Mb to 0.1 Mb, applying a rate of
2 Mb/s. The re-shaping delay has to be added to the per-domain edge-to-edge
delay bound of the video transmission that has been measured to be 56.4 ms.
Clearly the concept of re-shaping the domain’s outbound traffic is not generally
applicable. In particular with respect to deterministic performance bounds
the process is not scalable, since each interfering flow penalizes the video
stream from the above example in two ways: It introduces extra delays due
to competition for resources within the domain, here about 52.7 ms, and it
influences the traffic profile such that additional re-shaping delays apply at
the egress router of the domain, here 35 ms.
The preferred solution is to shape interfering flows at the ingress of the source
domain. Here, the video stream initially only has a comparably small burst
size, whereas the TCP stream is inherently bursty. Yet, we have shown in [22]
that shaping TCP streams allows for a very smooth operation, where the
goodput can be significantly larger than if congestion control and the saw-
tooth characteristic apply.
For the next experiment the TCP stream is shaped on the ingress router of the
test-bed with an average shaping rate of 12 Mb/s, as shown in figure 3 (b).
Following the discussion on shaping in legacy hardware from section 3, the
burstiness is decreased from initially 2 Mb to 2 · 12 Mb/s · 25 ms = 0.6 Mb,
where a worst-case burst size of two times the configured bucket depth remains
due to the interval-based shaper implementation.
Figure 9 gives the arrival curves for the video input and output, if the TCP
stream is shaped. The corresponding single leaky bucket constraint for the
input is α_in(t) = 0.10 Mb + 2 Mb/s · t as before. The single leaky bucket
constraint for the output follows from the measurement according to (31),
with a burst size of b = 0.11 Mb.

α_out(t) = 0.11 Mb + 2 Mb/s · t   (31)
(Figure 9: arrival curves of the video input and output with the TCP stream shaped, data in Mb over time in ms.)
The analysis yields the following: The arrival curve of the interfering TCP
stream is dual leaky bucket constrained and can be described by (32).

α_TCP(t) = min[0.6 Mb + 12 Mb/s · t, 2 Mb + 10 Mb/s · t]   (32)

The flow of interest, that is, the video stream, is as before constrained by a sin-
gle leaky bucket; thus, equations (12) and (13) can be applied. The parameter
θ(0) is defined by (13) and given in (33).
Obviously the sup[. . .] is found for v → 0 and θ(0) = 16.2 ms results. With
(12) we find the output burst size b_out = 0.1 Mb + 2 Mb/s · 16.2 ms ≈ 0.13 Mb according to (34).
If in addition the shaping effect of the Ethernet link is taken into account,
the interfering TCP stream is constrained by three leaky buckets according
to (35).
α_TCP(t) = min[0.012 Mb + 98.8 Mb/s · t, 0.6 Mb + 12 Mb/s · t, 2 Mb + 10 Mb/s · t]   (35)
The relevant time constant tTCP is given in (36).
t_TCP = (0.6 Mb − 0.012 Mb) / (98.8 Mb/s − 12 Mb/s) = 6.8 ms   (36)
The sup[. . .] is found for v = t_TCP = 6.8 ms, and θ(0) = 11.8 ms results. Since
θ(0) > t_TCP the leaky bucket constraint enforced by the shaper applies, and
the improved output burst size b_out = 0.1 Mb + 2 Mb/s · 11.8 ms ≈ 0.12 Mb according to (12) is derived in (38).
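The same numerical search reproduces the shaped scenario: with the dual leaky bucket (32) the sup is attained for v → 0, whereas the refined constraint (35) moves it to v = t_TCP, which yields the two θ(0) values quoted above. This is our own illustration, repeating the grid search from the previous sketch.

import numpy as np

def theta0_single_lb(r1, alpha2, R, T, v_max=1.0, steps=200000):
    v = np.linspace(0.0, v_max, steps)
    return T + np.max(alpha2(v) - (R - r1) * v) / R

alpha_shaped = lambda v: np.minimum(0.6 + 12.0 * v, 2.0 + 10.0 * v)       # (32)
alpha_refined = lambda v: np.minimum(0.012 + 98.8 * v, alpha_shaped(v))   # (35)

for alpha in (alpha_shaped, alpha_refined):
    th = theta0_single_lb(r1=2.0, alpha2=alpha, R=40.0, T=0.0012)
    print(th, 0.1 + 2.0 * th)   # roughly (16.2 ms, 0.13 Mb) and (11.8 ms, 0.12 Mb)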
Clearly, the analytic bound is negatively affected by the large burstiness that
remains after shaping, where in the worst case bursts of two times the bucket
depth can enter the network.
The measured edge-to-edge delay bound for the video stream is 12.2 ms. An
additional worst case re-shaping delay of 5 ms applies to reduce the measured
output burstiness of the video stream from 0.11 Mb again to 0.1 Mb.
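Both re-shaping delays quoted in this section follow from the burst reduction divided by the shaping rate; a one-line check (our own arithmetic):

reshape_delay = lambda b_out, b_target, r: (b_out - b_target) / r   # worst-case shaping delay
print(reshape_delay(0.171, 0.1, 2.0), reshape_delay(0.11, 0.1, 2.0))   # about 35.5 ms and 5 ms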
5 Conclusions
Note that input and output of a service element can be modelled by the same
type of arrival curve, only differing with respect to parameters, which is a
very fortunate property, since it allows addressing multiple-node scenarios.
In particular our analytical result can be applied to networks, where traffic
shapers are in effect at ingress routers only, whereby the dual leaky bucket
property of shaped traffic trunks can still be derived to hold within the core
of the network.
References
[2] Blake, S., et al., An Architecture for Differentiated Services, RFC 2475, 1998.
[4] Cholvi, V., Echagüe, J., and Le Boudec, J.-Y., Worst Case Burstiness
Increase due to FIFO Multiplexing, Elsevier Performance Evaluation, vol. 49,
no. 1-4, pp. 491-506, 2002.
[5] Cruz, R. L., A Calculus for Network Delay, Part I: Network Elements in
Isolation, IEEE Transactions on Information Theory, vol. 37, no. 1, pp. 114-131,
1991.
[6] Cruz, R. L., A Calculus for Network Delay, Part II: Network Analysis, IEEE
Transactions on Information Theory, vol. 37, no. 1, pp. 132-141, 1991.
[8] Davie, B., et al., An Expedited Forwarding PHB, RFC 3246, 2002.
[10] Fidler, M., On the Impacts of Traffic Shaping on End-to-End Delay Bounds in
Aggregate Scheduling Networks, Springer, LNCS 2811, Proceedings of COST 263
QoFIS, pp. 1-10, 2003.
[11] Fidler, M., and Sander, V., A parameter based admission control for
differentiated services networks, Elsevier Computer Networks Journal, vol. 44,
no. 4, pp. 463-479, 2004.
[12] Fitzek, F., and Reisslein, M., MPEG-4 and H.263 Video Traces for Network
Performance Evaluation, IEEE Network Magazine, vol. 15, no. 6, pp. 40-54,
2001.
[13] Handley, M., Floyd, S., Padhye, J., and Widmer, J., TCP Friendly Rate Control
(TFRC): Protocol Specification, RFC 3448, 2003.
[15] Le Boudec, J.-Y., and Thiran, P., Network Calculus: A Theory of Deterministic
Queuing Systems for the Internet, Springer, LNCS 2050, 2002.
[16] Lenzini, L., Mingozzi, E., and Stea, G., Delay Bounds for FIFO Aggregates: A
Case Study, Springer, LNCS 2811, Proceedings of COST 263 QoFIS, pp. 31-40,
2003.
[17] Nichols, K., Jacobson, V., and Zhang, L., A Two-bit Differentiated Services
Architecture for the Internet, RFC 2638, 1999.
[18] Ossipov, E., and Karlsson, G., The effect of per-input shapers on the delay
bound in networks with aggregate scheduling, Springer, LNCS 2899, Proceedings
of MIPS, pp. 107-118, 2003.
[19] Parekh, A. K., and Gallager, R. G., A Generalized Processor Sharing Approach
to Flow Control in Integrated Services Networks: The Single-Node Case,
IEEE/ACM Transactions on Networking, vol. 1, no. 3, pp. 344-357, 1993.
[20] Parekh, A. K., and Gallager, R. G., A Generalized Processor Sharing Approach
to Flow Control in Integrated Services Networks: The Multiple-Node Case,
IEEE/ACM Transactions on Networking, vol. 2, no. 2, pp. 137-150, 1994.
[21] Sander, V., Design and Evaluation of a Bandwidth Broker that Provides
Network Quality of Service for Grid Applications, PhD Thesis, Aachen
University, 2002.
[22] Sander, V., and Fidler, M., Evaluation of a Differentiated Services based
Implementation of a Premium and an Olympic Service, Springer, LNCS 2511,
Proceedings of COST 263 QoFIS, pp. 36-46, 2002.
[23] Schulzrinne, H., Casner, S., Frederick, R., and Jacobson, V., RTP: A Transport
Protocol for Real-Time Applications, RFC 1889, 1996.
[25] Cisco Documentation, Policing and Shaping, Cisco IOS Quality of Service
Solutions Configuration Guide, Release 12.2.
Appendix
Then, service curves of the rate-latency type β^j(t) = R^j · [t − T^j]^+ are assumed.
The condition R^j · (s − T^j) − α_2^j(s − θ) ≥ 0 can be found to hold for θ ≥ θ_0
with θ_0 = sup_{s>0}[α_2^j(s) − R^j · s]/R^j + T^j, whereby θ_0 ≥ T^j. The derivation of
this condition is based on (40) and (41) that are obtained from [15].
The inf[. . .] in (46) is found for θ = θ^*, resulting in (47) and (48), which proves
theorem 4.
α_1^{j+1}(t) = α_1^j(t + θ)   (47)

θ = sup_{v>0}[α_1^j(v + t + θ) − α_1^j(t + θ) + α_2^j(v) − R^j · v] / R^j + T^j   (48)
Here, θ < θ_0 is not investigated. Instead, it can be shown that the bound in (47)
and (48) for θ = θ^* ≥ θ_0 is attained. Thus, we cannot find a better solution
if θ < θ_0 is considered. A thought experiment, which shows that the derived
bound is attained, can be found in [9]. It mimics a proof that is provided for
the special case of a single leaky bucket constrained trunk 1 in [15]. □
For differentiable and concave α_2^j(v), the sup[. . .] in (54) is found for a unique
v, where ∂α_2^j(v)/∂v = R^j − r_1 with 0 < v ≤ t_1^j − t − θ. Thus, v is independent of
t, which can also be shown to hold for non-differentiable, for example piecewise
linear, α_2^j(v).
Now, define a ∆t > 0 with t + ∆t ≤ t_1^j − θ(t + ∆t) and t + ∆t ≤ t_1^j − v −
θ(t + ∆t). At time t + ∆t a modified θ(t + ∆t) = θ(t) + ∆θ can be observed.
However, with (54) we find that θ is independent of t and constant over the
investigated interval, so that ∆θ = 0.
With α_1^{j+1}(t) = α_1^j(t + θ) according to (10) the output arrival curve of trunk
1 increases with α_1^{j+1}(t + ∆t) − α_1^{j+1}(t) = α_1^j(t + ∆t + θ + ∆θ) − α_1^j(t + θ) = r_1 · ∆t.
Thus, with case 1 the leaky bucket parameters (r_1, b_1^{j+1}) in theorem 5 are
proven for case 2a.
Case 2b ((0 < t < t_1^j − θ(t)) ∧ (t ≥ t_1^j − v(t) − θ(t))) Again, the sup[. . .]
in (49) is derived based on (55)-(57) for 0 < t ≤ t_1^j − θ(t) and t ≥ t_1^j − v(t) − θ(t).
Note that b_1^j + r_1 · t_1^j = b̄_1^j + r̄_1 · t_1^j.
α_1^j(v + t + θ) = b_1^j + r_1 · (v + t + θ) = b̄_1^j + r̄_1 · t_1^j + r_1 · (v + t + θ − t_1^j)   (55)

α_1^j(t + θ) = b̄_1^j + r̄_1 · (t + θ)   (56)

α_1^j(v + t + θ) − α_1^j(t + θ) = r̄_1 · (t_1^j − t − θ) + r_1 · (v + t + θ − t_1^j)   (57)
Again, for differentiable and concave α_2^j(v), the sup[. . .] in (58) is found for
a unique v, where ∂α_2^j(v)/∂v = R^j − r_1 with v ≥ t_1^j − t − θ. As before, v is
independent of t, which can also be shown to hold for non-differentiable α_2^j(v).
However, for t = t_1^j − v(t) − θ(t) the special case of ∂α_2^j(v)/∂v < R^j − r_1 can
be attained, which is excluded here and addressed afterwards.
Now, define a ∆t > 0 with t + ∆t ≤ t_1^j − θ(t + ∆t). At time t + ∆t a modified
θ(t + ∆t) = θ(t) + ∆θ can be observed. As given by (57) we find a decrease
of (∆t + ∆θ) · r̄_1 and an increase of (∆t + ∆θ) · r_1 for the sup[. . .] in (49).
With (58), ∆θ can be defined according to (59).

θ(t + ∆t) − θ(t) = ∆θ = (∆t + ∆θ) · (r_1 − r̄_1) / R^j = −∆t · (r̄_1 − r_1) / (R^j + r̄_1 − r_1)   (59)
We find that ∆θ < 0 and ∆t > −∆θ, so that t + ∆t > t_1^j − v − θ(t + ∆t) holds
for t ≥ t_1^j − v − θ(t) and ∆t > 0.
Figure 10 gives an example, which illustrates equation (49) for this case.
The arrival curves of trunk 1 and 2 are shown, whereby the trunk 1 arrival
curve is dual leaky bucket constrained. It is moved to the left by t + θ and
downwards by b_1^j + r_1 · (t + θ). Further on, the rate of the service curve R^j
is displayed. Obviously, the value of v for which the sup[. . . ] is found is not
affected, if the arrival curve of trunk 1 is moved further to the left. However,
the value of the sup[. . . ] in (49) decreases as shown by (57).
(Figure 10: data over time, showing α_2^j and the shifted arrival curve of trunk 1 with its two rates; the offset −(t + θ) and the value v at which the sup is attained are marked on the time axis.)
With α_1^{j+1}(t) = α_1^j(t + θ) according to (10) we find that the output arrival
curve of trunk 1 increases with α_1^{j+1}(t + ∆t) − α_1^{j+1}(t) = α_1^j(t + ∆t + θ +
∆θ) − α_1^j(t + θ) = (∆t + ∆θ) · r_1 < ∆t · r_1. Thus, applying ∆θ = 0 yields
the leaky bucket parameters (r_1, b_1^{j+1}) in theorem 5, which overestimate the
output arrival curve. Yet, arrival curves are defined to give an upper bound
on the respective arrival functions, which allows loosening arrival constraints.
Now, the special case with t = t_1^j − v(t) − θ(t) and ∂α_2^j(v)/∂v < R^j − r_1 is
covered, where r_1 · (v + t + θ − t_1^j) = 0 in (58). Define a ∆t > 0 with ∆t → 0,
which results in a corresponding ∆θ. For ∆t > −∆θ the sup[. . .] in (58) allows
applying smaller values v that fulfill v ≥ t_1^j − t − θ. For concave trunk 2 arrival
curves eventually ∂α_2^j(v)/∂v = R^j − r_1 is reached for v = t_1^j − t − ∆t − θ − ∆θ.
However, the sup[. . .] in (58) is found for v = t_1^j − t − ∆t − θ − ∆θ as long
as ∂α_2^j(v)/∂v < R^j − r_1. Thus, we find that v + t + θ = t_1^j is constant, which
allows deriving (60) from (58).
(θ − T^j) · R^j = r̄_1 · (t_1^j − t − θ) + α_2^j(t_1^j − t − θ) − R^j · (t_1^j − t − θ)   (60)

∆θ = [α_2^j(t_1^j − t − ∆t − θ − ∆θ) − α_2^j(t_1^j − t − θ)] / R^j + (∆t + ∆θ) · (R^j − r̄_1) / R^j   (61)
From (61), an upper bound on ∆θ can be given in (62).

∆θ ≤ −(∆t + ∆θ) · (R^j − r̄_1) / R^j + (∆t + ∆θ) · (R^j − r̄_1) / R^j = 0   (62)
Further on, with ∂α_2^j(v)/∂v ≤ R^j − r_1 for v ≥ t_1^j − t − θ, we find (63).

∆θ ≥ −(∆t + ∆θ) · (R^j − r_1) / R^j + (∆t + ∆θ) · (R^j − r̄_1) / R^j = −(∆t + ∆θ) · (r̄_1 − r_1) / R^j   (63)
The lower bound in (63) is the same equation as derived in (59), which
yields (64), so that ∆t ≥ −∆θ holds.

∆θ ≥ −∆t · (r̄_1 − r_1) / (R^j + r̄_1 − r_1)   (64)