
IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, VOL. 23, NO. 2, FEBRUARY 2005

TCP With Sender-Side Intelligence to Handle Dynamic, Large, Leaky Pipes

Ren Wang, Member, IEEE, Kenshin Yamada, M. Yahya Sanadidi, Senior Member, IEEE, and Mario Gerla, Fellow, IEEE

Abstract—Transmission Control Protocol Westwood (TCPW) has been shown to provide significant performance improvement over high-speed heterogeneous networks. The key idea of TCPW is to use eligible rate estimation (ERE) methods to intelligently set the congestion window (cwnd) and slow-start threshold (ssthresh) after a packet loss. ERE is defined as the efficient transmission rate eligible for a sender to achieve high utilization while remaining friendly to other TCP variants. This paper presents TCP Westwood with agile probing (TCPW-A), a sender-side-only enhancement of TCPW that deals well with highly dynamic bandwidth, large propagation time/bandwidth, and random loss in the current and future heterogeneous Internet. TCPW-A achieves this goal by adding the following two mechanisms to TCPW.

1) When a connection initially begins or restarts after a timeout, instead of exponentially expanding cwnd to an arbitrary preset ssthresh and then going into linear increase, TCPW-A uses agile probing, a mechanism that repeatedly resets ssthresh based on ERE and forces cwnd into an exponential climb each time. The result is fast convergence to a more appropriate ssthresh value.

2) In congestion avoidance, TCPW-A invokes agile probing upon detection of persistent extra bandwidth via a scheme we call persistent noncongestion detection (PNCD). While in congestion avoidance, agile probing is invoked under the following conditions:
   a) a large amount of bandwidth suddenly becomes available due to a change in network conditions;
   b) a random loss during slow-start causes the connection to prematurely exit the slow-start phase.

Experimental results, both in ns-2 simulation and lab measurements using actual protocol implementations, show that TCPW-A can significantly improve link utilization over a wide range of bandwidth, propagation delay, and dynamic network loading.

Index Terms—Bandwidth estimation, congestion control, high-speed networks, random errors, simulation and measurement.

Manuscript received November 1, 2003; revised May 15, 2004. The authors are with the Computer Science Department, University of California, Los Angeles, Los Angeles, CA 90095 USA (e-mail: [email protected]; [email protected]; [email protected]; [email protected]). Digital Object Identifier 10.1109/JSAC.2004.839426. 0733-8716/$20.00 © 2005 IEEE.

I. INTRODUCTION

TRANSMISSION Control Protocol (TCP) has been widely used in the Internet for numerous applications. The success of the congestion control mechanisms introduced in [21] and their succeeding enhancements has been remarkable. The current implementation of TCP Reno/NewReno runs in two phases: slow-start and congestion avoidance. In slow-start, upon receiving an acknowledgment, the sender increases the congestion window (cwnd) exponentially, doubling cwnd every round-trip time (RTT), until it reaches the slow-start threshold (ssthresh). Then, the connection switches to congestion avoidance, where cwnd grows more conservatively, by one packet every RTT (linearly). Upon a packet loss, the sender reduces cwnd to half.

It is well known that current TCP throughput deteriorates in high-speed heterogeneous networks, where many of the packet losses are due to noise and external interference over wireless links [19]. Congestion control schemes in current TCP assume that a packet loss is invariably due to congestion and blindly reduce the congestion window by half, hence the performance deterioration.

When the bandwidth-delay product (BDP) increases, another problem TCP faces is the initial ssthresh setting. In many cases, the initial ssthresh is set to an arbitrary value, ranging from 4 kB to arbitrarily high (e.g., the maximum possible value), depending on the implementation under various operating systems. By setting the initial ssthresh to an arbitrary value, TCP performance may suffer from two potential problems.

1) If ssthresh is set too high relative to the network BDP, the exponential increase of cwnd generates too many packets too fast, causing multiple losses at the bottleneck router and coarse timeouts, with significant reduction of the connection throughput.

2) If the initial ssthresh is set too low, the connection exits slow-start and switches to linear cwnd increase prematurely, resulting in poor startup utilization, especially when the BDP is large.

Dynamic bandwidth presents yet another challenge to TCP performance. In today's heterogeneous Internet, the bandwidth available to a TCP connection varies often, for many reasons including multiplexing, access control, and mobility [10]. First, the bandwidth available to a TCP flow is affected by other flows sharing the same bottleneck link. Second, in shared-medium access networks, the bandwidth available to a TCP flow is highly variable depending on channel utilization and medium access protocol dynamics. Finally, handoff and interference, among other factors in mobile networks, introduce significant bandwidth changes over time. Standard TCP versions can handle bandwidth decrease fairly well through the cwnd and ssthresh "multiplicative decrease" upon a congestion loss. However, if a large amount of bandwidth becomes available, for reasons such as bandwidth-consuming flows leaving the network, TCP may be slow in catching up (as will be shown below), particularly in congestion avoidance with its "additive-increase" mechanism, increasing cwnd only by one packet per RTT. As a result, link utilization can be lacking, particularly over paths with large propagation times and highly dynamic large bandwidth, or what we might call "large dynamic pipes."
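To make the window dynamics described above concrete, here is a minimal, illustrative sketch of idealized Reno behavior per RTT. This is our own Python, not the paper's code; the window is measured in packets, and fast recovery and timeouts are collapsed into a single halving step.

```python
def reno_cwnd_after_rtt(cwnd, ssthresh, loss=False):
    """One RTT of idealized Reno window dynamics (window in packets)."""
    if loss:
        # Multiplicative decrease: halve the window on a packet loss.
        return max(cwnd // 2, 1), max(cwnd // 2, 2)
    if cwnd < ssthresh:
        # Slow-start: cwnd doubles every RTT, capped at ssthresh.
        cwnd = min(cwnd * 2, ssthresh)
    else:
        # Congestion avoidance: additive increase, one packet per RTT.
        cwnd += 1
    return cwnd, ssthresh
```

Starting from cwnd = 1 with ssthresh = 32, the window reaches 32 after five RTTs of doubling, then crawls upward by one packet per RTT, which is exactly the slow catch-up behavior discussed above.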

Fig. 1. Network topology for simulations.

TCP Westwood (TCPW) has been proposed in [6] and [29] and shown to provide significant performance improvement, better scalability, and stability [7] over high-speed, heterogeneous networks. After a packet loss, instead of simply cutting cwnd in half as in standard TCP, TCPW resets cwnd along with ssthresh according to the TCPW sender's eligible rate estimate (ERE), thus maintaining a reasonable window size in case of random losses and preventing overreaction when transmission speed is high. TCPW relies on an adaptive estimation technique to determine the sender's ERE at all times. The goal of TCPW is to estimate the connection's eligible sending rate so as to achieve high utilization without starving other connections. A brief overview of TCPW and ERE is given in Section III, and a detailed description can be found in [29].

In TCPW, ERE is only used to set ssthresh and cwnd after a packet loss. We realize that we can take further advantage of ERE when linear increase is too slow to ramp up cwnd (and simply setting a very high initial ssthresh is not a good idea, as we will discuss later), as in the aforementioned cases of connection startup and dynamic bandwidth. In this paper, we present TCP Westwood with agile probing (TCPW-A), a sender-side-only enhancement of TCPW that intelligently deals with highly dynamic bandwidth, large propagation times and bandwidth, and random loss in the current and future heterogeneous Internet. TCPW-A achieves this goal by incorporating the following two mechanisms into the basic TCPW algorithm.

The first mechanism is agile probing, which is invoked at connection startup (including after a time-out) and after extra available bandwidth is detected. Agile probing adaptively and repeatedly resets ssthresh based on ERE. Each time ssthresh is reset to a value higher than the current one, cwnd climbs exponentially toward the new value. This way, the sender is able to grow cwnd efficiently (but conservatively) to the maximum value allowed by current conditions without overflowing the bottleneck buffer with multiple losses, a problem that often affects traditional TCP. The result is fast convergence of cwnd to a more appropriate ssthresh value. In slow-start, agile probing increases bandwidth utilization by reaching "cruising speed" faster than existing protocols; this is especially important to short-lived connections. We presented a similar idea, which we called Astart, in [30] for the case when a connection initially begins or restarts after a TCP timeout. In this paper, we extend the use of agile probing to congestion avoidance, when extra unused bandwidth is detected.

The second mechanism concerns how to detect extra unused bandwidth. We realized that if a TCP sender identifies newly materialized extra bandwidth and invokes agile probing properly, the connection can converge to the desired window faster than with the usual linear increase. This also applies when a random error occurs during startup, causing a connection to exit slow-start prematurely and switch to congestion avoidance. In this paper, we propose a PNCD mechanism, which identifies the availability of persistent extra bandwidth in congestion avoidance and invokes agile probing accordingly.

Experimental results, both in ns-2 simulation and lab measurements, show that TCPW-A can significantly improve link utilization under a wide range of system parameters.

The remainder of the paper is organized as follows. In Section II, we briefly state the current slow-start mechanisms in TCP and evaluate their performance; this section serves as motivation for our work. In Section III, we give an overview of TCPW and introduce the agile probing mechanism to improve TCP startup performance; a simulation evaluation is also provided. In Section IV, we present the PNCD mechanism, which is used to invoke agile probing during congestion avoidance. Simulation results evaluating TCPW-A performance are provided in Section V. In Section VI, we describe the FreeBSD implementation of TCPW-A and report measurement results. Section VII discusses related work. Finally, Section VIII discusses future work and concludes the paper.

II. TCP STARTUP PERFORMANCE

In this section, we briefly state the current TCP slow-start mechanisms and evaluate their startup performance in large bandwidth-delay networks by simulation. We illustrate the inadequacy of the current schemes when facing networks with large BDP and reveal the reason behind it.

A. Simulation Setup

All simulation results in this paper are obtained using ns-2 [27]. The network topology is shown in Fig. 1, with a TCP sender, a TCP receiver, and two routers with finite buffer capacity between them; each buffer is set equal to the BDP unless otherwise specified. Results are obtained for varying propagation time and bottleneck bandwidth. FTP is the simulated application. The receiver issues an acknowledgment (ACK) for every data packet received. We assume the receiver's advertised window is always large, so that the actual sending window is always equal to cwnd. For convenience, the window size is measured in number of packets, and the packet size is 1000 bytes. The initial ssthresh for Reno/NewReno is set to 32 packets, equal to 32 kB.

TABLE I
NEWRENO UTILIZATION DURING FIRST 20 S (Bandwidth = 40 Mb/s)

TABLE II
NEWRENO UTILIZATION DURING FIRST 20 S (RTT = 100 ms)

Fig. 2. cwnd dynamics during the startup phase.

B. TCP Reno/NewReno

In TCP Reno/NewReno, a sender starts in slow-start with a small initial cwnd, and every ACK received results in an increase of cwnd by one packet. Thus, the sender increases cwnd exponentially. When cwnd hits ssthresh, the sender switches to the congestion avoidance phase, increasing cwnd linearly, considerably slower than in slow-start.

In this section, we evaluate Reno/NewReno startup performance in large BDP networks. If the initial ssthresh is too low,(1) a connection exits slow-start and switches to congestion avoidance prematurely, resulting in poor utilization. Fig. 2 shows the Reno cwnd dynamics in the startup stage. The results are obtained for a bottleneck bandwidth of 40 Mb/s and RTT values of 40, 100, and 200 ms. The bottleneck buffer size is set equal to the BDP in each case.

From Fig. 2, we see that when RTT = 100 ms, Reno stops exponentially growing cwnd long before it reaches the ideal value (500 packets). After that, cwnd increases slowly and has not reached 500 by 20 s. As a result, the achieved throughput is only 12.90 Mb/s, much lower than the desired 40 Mb/s. Another observation concerns how RTT affects performance. When RTT increases, the ideal window grows too. On the other hand, because cwnd increases by one packet per RTT during congestion avoidance, a longer RTT means slower cwnd growth, resulting in even lower utilization. The results in Table I show the drastic reduction in utilization as RTT increases.

Consider now the impact of bottleneck bandwidth on utilization during the startup stage. With the increase of bottleneck bandwidth, packet transmission speeds up, but the sender still has to wait for ACKs to increase cwnd. Thus, after prematurely exiting slow-start, cwnd grows at almost the same rate (one packet per RTT) regardless of bandwidth. With larger bottleneck capacity, more bandwidth is left unused, which leads to lower utilization. Table II shows the relation between utilization and bottleneck bandwidth during the startup stage. The utilization drops to 4.7% with 200 Mb/s bottleneck bandwidth.

(1) In a network with small BDP, the initial ssthresh might instead be set too high. As a result, at some cycle in slow-start, a Reno sender often overshoots the BDP, causing multiple losses and a coarse time-out. This, too, is a problem resulting from an inappropriate setting of ssthresh.

C. TCP Reno/NewReno With Hoe's Slow-Start Modification

In [18], Hoe proposes a method for setting the initial ssthresh to the product of delay and estimated bandwidth. The bandwidth estimate is calculated by applying least squares estimation to three closely spaced ACKs (similar to the concept of the packet pair [25]). RTT is obtained by measuring the round-trip time of the first segment transmitted.

Fig. 3. cwnd dynamics in NewReno with Hoe's modification.

Hoe's modification enables the sender to obtain an estimate of the BDP at an early stage and set the ssthresh accordingly, thus avoiding switching to congestion avoidance prematurely. As illustrated in Fig. 3, with large buffer space (buffer size equal to the BDP), Reno with Hoe's modification increases cwnd exponentially and exits properly.

However, Hoe's modification may encounter multiple-loss problems when the bottleneck buffer is not big enough compared with the BDP, which can easily happen in large bandwidth-delay networks.
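The buffer requirement behind this multiple-loss problem (explained in detail below) can be checked with quick arithmetic; the following is our own illustrative sketch, not code from the paper.

```python
def min_buffer_for_slow_start(ssthresh_pkts):
    """Buffer (in packets) needed to absorb the final slow-start burst:
    each ACK triggers two back-to-back packets, so data arrives at the
    bottleneck at twice its drain rate and about half of the exit window
    must be queued to avoid drops."""
    return ssthresh_pkts / 2
```

With Hoe's modification, ssthresh is set to the estimated BDP, so a single connection needs roughly BDP/2 of buffering; for a BDP of 500 packets that is 250 packets, which a 125-packet (BDP/4) buffer cannot provide.
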
In Fig. 3, when the buffer size is 125 packets (1/4 BDP), the connection encounters multiple losses and runs into a long recovery time (from 0.9 to 14.8 s). The achieved throughput during the first 20 s is only 3.61 Mb/s, translating into 9% utilization.

The reason for the multiple losses is as follows. During slow-start, for every ACK received, the sender increases cwnd by one and sends out two new packets. If the receiver acknowledges every packet, then cwnd doubles after each RTT. Suppose the access link capacity is at least twice the bottleneck capacity; these packets then arrive at the bottleneck back to back, at a speed twice that of the bottleneck link. Thus, to avoid losses, a buffer of at least half the window is needed to hold off the temporary burst. Hoe's modification sets ssthresh to the estimated BDP; thus, a buffer size of BDP/2 is required to prevent multiple losses for a single connection.

More importantly, Hoe's modification does not adjust to changing path load. If multiple connections start up at approximately the same time, or other large-volume traffic (for example, a video transfer) joins in while a connection is in slow-start, Hoe's modification will have set the initial ssthresh too high, resulting in multiple losses and a coarse time-out.

Fig. 4. Vegas cwnd dynamics and queue length during the startup phase (bottleneck bandwidth = 40 Mb/s, baseRTT = 100 ms).

TABLE III
RATIO OF SLOW-START TERMINATION WINDOW TO THE IDEAL WINDOW (BDP) IN VEGAS (Round-Trip Time = 100 ms)

D. TCP Vegas

Unlike TCP Reno/NewReno, which uses packet loss as the congestion indication, TCP Vegas [5] detects incipient congestion by comparing the achieved throughput to the expected throughput at the beginning of a cycle (RTT). The difference between these two values reflects the queue length of the connection in the bottleneck router.

This Vegas method is applied to both the slow-start and congestion-avoidance phases. In congestion avoidance, cwnd increases by one per RTT if the difference is small, meaning that there is enough network capacity. Vegas reduces cwnd in the same fashion (by one packet) when the achieved throughput is considerably lower than the expected throughput.

During slow-start, Vegas doubles its congestion window only every other RTT (compared with Reno's every RTT). When the difference between actual and expected throughput exceeds a threshold, Vegas stops its window doubling and switches to congestion avoidance (see Fig. 4).

By growing cwnd more slowly and monitoring every RTT for incipient congestion, Vegas avoids multiple losses and the coarse time-out that would result [17]. However, when the BDP is large, Vegas may underutilize the available bandwidth by switching to congestion avoidance too early [26]. The premature slow-start termination is caused by RTT overestimation in the Vegas algorithm. In Vegas, the sender checks the difference between expected and actual throughput, diff = cwnd/baseRTT - cwnd/RTT, only at the beginning of the RTT in which cwnd is doubled.(2) At this point, RTT is overestimated because of the temporary queue buildup at the router during the previous cycle (the last two RTTs). Fig. 4 shows the instantaneous queue-length pattern. As a result of the RTT overestimation, diff is overestimated too, and Vegas exits slow-start prematurely. A more detailed analysis of this problem can be found in [26]. Fig. 4 also shows the Vegas cwnd dynamics over a path with BDP equal to 500 packets; Vegas exits slow-start at a cwnd far below the ideal window of 500 packets.

The startup underutilization of Vegas is aggravated as the BDP grows. Table III shows the ratio of the slow-start termination cwnd to the ideal window value for different bottleneck bandwidths. The ratio is reduced to about 0.1 with a bottleneck of 100 Mb/s.

(2) Provided that the difference indicates no congestion.

III. AGILE PROBING

In this section, we introduce the agile probing scheme, which improves TCP performance during startup and, with the help of PNCD (introduced in Section IV), over large dynamic pipes. First, we give a brief overview of TCPW. TCPW behaves identically to TCP NewReno during window increase phases, both in slow-start and congestion avoidance; therefore, it inherits the weaknesses of TCP NewReno stated in the last section. In TCPW with agile probing (TCPW-A), agile probing and PNCD are incorporated into TCPW to overcome the "slow" slow-start and inefficient window increase. In slow-start, agile probing is always used, while in congestion avoidance it is invoked only after PNCD detects persistent noncongestion. Below, we discuss agile probing and present a simulation evaluation comparing it with other mechanisms. In Section IV, we present the details of PNCD.

A. TCPW Overview

In TCPW, a sender continuously monitors ACKs from the receiver and computes its current ERE [29]. ERE relies on an adaptive estimation technique applied to the ACK stream. The goal of ERE is to estimate the connection's eligible sending rate so as to achieve high utilization without starving other connections.
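As a toy illustration of the two ingredients of such ACK-stream estimation, rate sampling and low-pass smoothing, consider the sketch below. The function names and the fixed smoothing gain are our own assumptions; the TCPW filter described next is adaptive.

```python
def rate_sample(bytes_acked, interval_s):
    """One achieved-rate sample: bytes acknowledged over the interval (B/s)."""
    return bytes_acked / interval_s

def smooth(prev_bps, sample_bps, alpha=0.9):
    """Exponential averaging of rate samples, i.e., a discrete
    first-order low-pass filter with gain alpha."""
    return alpha * prev_bps + (1 - alpha) * sample_bps
```

For example, 500 kB acknowledged in 100 ms yields a 5 MB/s (40 Mb/s) sample, and the filter moves the running estimate only a fraction of the way toward each new sample, damping transient bursts.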
We emphasize that what a connection is eligible for is not the residual bandwidth on the path; the connection is often eligible for more than that. For example, if a connection joins two similar connections already in progress and fully utilizing the path capacity, then the new connection is eligible for a third of the capacity.

Research on active network estimation [9] reveals that samples obtained by "packet pair" are more likely to reflect link capacity, while samples obtained by "packet train" give short-term throughput. In TCPW, the sender adaptively computes T_k, the interval over which an ERE sample is calculated. An ERE sample is computed from the amount of data, in bytes, that was successfully delivered in T_k. T_k depends on the congestion level, the latter measured by the difference between the "expected rate" and the "achieved rate," as in TCP Vegas. That is, T_k depends on the network congestion level as follows:

    T_k = RTT * (cwnd/RTT_min - RE) / (cwnd/RTT_min)    (1)

where RTT_min is the minimum RTT value over all acknowledged packets in a connection, and RTT is the smoothed RTT measurement. The expected rate of the connection when there is no congestion is given by cwnd/RTT_min, while RE is the achieved rate, computed from the amount of data acknowledged during the latest RTT and exponentially averaged over time using a low-pass filter. When there is no congestion and, therefore, no queueing time, cwnd/RTT_min is almost the same as RE, producing a small T_k; in this case, ERE becomes close to a packet-pair measurement. On the other hand, under congestion conditions, RE will be much smaller than cwnd/RTT_min due to longer queueing delays. As a result, T_k will be larger and ERE closer to a packet-train measurement. After computing the ERE samples, a discrete version of a continuous first-order low-pass filter using the Tustin approximation [4] is applied to obtain the smoothed ERE.

In the current TCPW implementation, upon packet loss (indicated by three DUPACKs or a time-out), the sender sets cwnd and ssthresh based on its current ERE, using the following algorithm. For more details on TCPW and ERE, and its performance evaluation in high-speed, error-prone environments, please refer to [6] and [29].

    if (three DUPACKs are received)
        ssthresh = (ERE * RTT_min) / seg_size;
        if (cwnd > ssthresh)   /* congestion avoidance */
            cwnd = ssthresh;
        endif
    endif
    if (coarse timeout expires)
        cwnd = 1;
        ssthresh = (ERE * RTT_min) / seg_size;
        if (ssthresh < 2)
            ssthresh = 2;
        endif
    endif

B. Agile Probing Mechanism

Agile probing uses ERE to adaptively and repeatedly reset ssthresh. During agile probing, when the current ssthresh is lower than the ERE-based window, the sender resets ssthresh higher accordingly and increases cwnd exponentially; otherwise, cwnd increases linearly to avoid overflow. In this way, agile probing probes the available network bandwidth for this connection and allows the connection to eventually exit slow-start close to an ideal window corresponding to its share of the path bandwidth. The pseudocode of the algorithm, executed upon ACK reception, is as follows:

    if (three DUPACKs are received)
        switch to congestion avoidance phase;
    else /* an ACK is received */
        if (ssthresh < (ERE * RTT_min) / seg_size)
            ssthresh = (ERE * RTT_min) / seg_size;   /* reset ssthresh */
        endif
        if (cwnd >= ssthresh)        /* linear increase phase */
            increase cwnd by 1/cwnd;
        else                         /* exponential increase phase */
            increase cwnd by 1;
        endif
    endif

By repeating cycles of linear increase and exponential increase, cwnd adaptively converges to the desired window in a timely manner, enhancing link utilization in slow-start. Fig. 5 illustrates the detailed cwnd dynamics. cwnd does not always increase as fast as pure exponential increase, especially as cwnd approaches the BDP. This prevents the temporary queue from building up too fast and thus prevents the sender from overflowing a small buffer. In this case, the ideal cwnd at which the sender should exit the slow-start phase is the BDP, which is 500.

Fig. 5. A close look at agile probing cwnd dynamics during the startup phase.

C. Performance Evaluation

In this subsection, we evaluate the performance of agile probing during startup, comparing the throughput performance of the proposed agile probing algorithm to the other mechanisms described and evaluated in the previous subsections.
We also compare agile probing with a commercial satellite transport protocol in which a very large initial window is used. We have also evaluated how well agile probing coexists with TCP NewReno, and its performance under dynamic loading; due to space constraints, please refer to [30] for more detail.

Fig. 6. Agile probing cwnd dynamics when five connections start at the same time (bottleneck bandwidth = 40 Mb/s, RTT = 100 ms, and BDP = 500 packets).

1) Agile Probing Behavior With Multiple Connections: We ran a simulation with five connections starting at the same time. The results in Fig. 6 show that each connection is able to estimate its share of the bandwidth and switch to congestion avoidance at the appropriate time.

For convenience of presentation, we only show the simulation result with five connections. Simulations with more connections were run, and TCP with agile probing exhibits similar behavior, as expected.

Fig. 7. cwnd dynamics as UDP traffic joins in during startup (bottleneck capacity = 40 Mb/s, RTT = 100 ms, BDP = 500 packets).

2) Startup in a Congested Network: To evaluate the adaptivity of agile probing when the network becomes congested, we also tested the startup behavior when a high-volume user datagram protocol (UDP) connection joins the TCP connection during the slow-start phase. We ran simulations with one TCP connection starting at time 0 over a link with capacity 40 Mb/s; a UDP flow with an intensity of 20 Mb/s starts at 0.5 s. Fig. 7 shows that Hoe's method runs into multiple losses and finally times out. The reason is the setting of the initial ssthresh to 500 (the BDP) at the very beginning of the connection, and the lack of adjustment to the later change in network load. In contrast, agile probing has a more appropriate (lower) slow-start exit cwnd, thanks to the continuous estimation mechanism, which reacts to the new traffic and determines an eligible sending rate that is no longer the entire bottleneck link capacity. After the UDP traffic joins in, the appropriate exit point is no longer 500 (equivalent to the BDP), but 250 instead.

Fig. 8. Throughput versus bottleneck capacity (first 20 s).

3) Throughput Comparison: The summary of this section is that agile probing significantly improves TCP startup performance across various bottleneck bandwidths, buffer sizes, and RTTs. To focus on the startup performance of the different schemes, we only calculate the throughput during the first 20 s. The throughputs of agile probing, NewReno, NewReno with Hoe's modification, and Vegas are examined under bottleneck bandwidth varying from 10 to 150 Mb/s (while fixing the RTT at 100 ms). The results in Fig. 8 show that agile probing and Hoe's modification achieve higher throughput and scale with bandwidth; NewReno and Vegas performance lags in this scenario. Another observation is that NewReno with Hoe's modification slightly outperforms agile probing. In Hoe's method, the initial ssthresh is immediately set to the bandwidth estimate after three closely spaced ACKs have returned, so cwnd increases by one for every ACK received. On the other hand, agile probing gradually probes for bandwidth and slows down when the estimate is closer to the connection's bandwidth share. We believe that the slightly lower throughput achieved by agile probing here is more than compensated for by its avoidance of buffer overflow and multiple losses in other cases.

To assess the robustness of the different schemes to buffer size, we ran simulations with the bottleneck buffer size varying from 100 (BDP/5) to 250 (BDP/2) packets. The bandwidth is 40 Mb/s and the RTT is 100 ms. The results in Fig. 9 show that agile probing is robust to buffer size reductions, while NewReno with Hoe's modification suffers when the buffer size is smaller than BDP/2. The reduction in buffer size has no meaningful impact on NewReno and Vegas; they still exit slow-start prematurely, as explained in Section II.

RTT can considerably affect startup performance. Fig. 10 shows the throughput of agile probing, NewReno, Hoe's modification, and Vegas with RTT varying from 20 to 200 ms. The bottleneck bandwidth is fixed here at 40 Mb/s, and the buffer size is set equal to the BDP. Fig. 10 shows that agile probing and Hoe's method both scale well with RTT, with Hoe's modification slightly better for the same reason stated previously (Hoe's method sets the ssthresh immediately to the BDP, whereas agile probing probes gradually and slows down when cwnd is close to the BDP). The performance of NewReno and Vegas deteriorates significantly as RTT increases.
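Summing up these results, the per-ACK update that produced the agile probing gains (the Section III-B pseudocode) can be rendered as a small runnable sketch. This is our own Python rendering, not the paper's implementation; the function name, the float-valued window, and expressing ERE in bytes per second are assumptions.

```python
def agile_probe_on_ack(cwnd, ssthresh, ere_Bps, rtt_min_s, seg_size=1000):
    """Per-ACK agile probing update (sketch): raise ssthresh toward the
    ERE-based window; grow cwnd by one packet per ACK below ssthresh
    (exponential phase) and by 1/cwnd above it (linear phase)."""
    ere_window = ere_Bps * rtt_min_s / seg_size   # (ERE * RTT_min) / seg_size
    if ssthresh < ere_window:
        ssthresh = ere_window                     # reset ssthresh upward
    if cwnd >= ssthresh:
        cwnd += 1.0 / cwnd                        # linear increase phase
    else:
        cwnd += 1                                 # exponential increase phase
    return cwnd, ssthresh
```

For instance, with ERE at 5 MB/s (40 Mb/s) and RTT_min = 100 ms, the ERE-based window is 500 packets, so a connection whose ssthresh was arbitrarily preset to 32 keeps climbing exponentially instead of dropping into linear increase at cwnd = 32.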
Fig. 9. Throughput versus bottleneck buffer size (first 20 s).

Fig. 10. Throughput versus two-way propagation time (first 20 s).

The studies in the last two sections focused on the performance of a TCP connection during its initial startup phase, but agile probing can also be used after any coarse time-out. This is of particular value to TCPW since, after a time-out, ERE is small relative to the connection's actual bandwidth share: during a coarse time-out, the sender transmits too few packets and, therefore, the share estimate is very low. Agile probing helps in this case by gradually probing for the bandwidth share and switching to congestion avoidance at a more appropriate time.

Fig. 11. Congestion window dynamics of agile probing and the LIW method (bottleneck = 10 Mb/s, RTT = 500 ms, and BDP = 600).

4) Comparing Agile Probing to the Use of a Large Initial Window (LIW) Over Satellite Links: In a connection that incorporates a satellite link, the main bottleneck to TCP performance is the large delay-bandwidth product nature of the satellite link. As we mentioned in Section II, a larger initial cwnd, roughly 4 kB, is proposed in [1]. This can greatly speed up transfers consisting of only a few packets. However, the improvement is still inadequate when the BDP is very large and the file to transfer does not complete during the startup; it cannot singlehandedly solve the problem of poor startup utilization over satellite links. Below, we show the reason and compare the performance of agile probing with the LIW method.

A commercial satellite system using a geostationary (GEO) satellite can have bandwidth up to 24 Mb/s, which results in a BDP of about 3000 packets given the one-way propagation delay. In this situation, even with an initial window of 64 kB, it would take a very long time for TCP to fully utilize the link.

Fig. 11 compares the startup behavior of agile probing and the LIW method. The bottleneck capacity is 10 Mb/s and the one-way propagation delay is 250 ms. The graph shows that although the LIW method comes up strong at the very beginning, it fades quickly compared with agile probing because it bypasses the slow-start stage. As a result, the throughput of the LIW method during this period is only 2.80 Mb/s, compared with agile probing's 9.33 Mb/s.

Another challenge the LIW method faces is caused by its inability to adapt to different network conditions. By setting the initial congestion window to a large value, if the network is highly congested or many connections join in simultaneously, it is possible that using LIW overflows the buffers and causes multiple losses. Moreover, a connection using a satellite link may also have a terrestrial part; using LIW end-to-end could thus affect the performance and fairness of the terrestrial part of the connection.

IV. PERSISTENT NONCONGESTION DETECTION (PNCD)

In this section, we present a PNCD mechanism that aims at detecting extra available bandwidth and invoking agile probing accordingly. In congestion avoidance, a connection monitors the congestion level constantly. If a TCP sender detects persistent noncongestion conditions, indicating that the connection may be eligible for more bandwidth, the connection invokes agile probing to capture such bandwidth and improve utilization.

As described in Section III, the rate estimate (RE) is an estimate
is bigger than just a few packets. of the rate achieved by a connection. If the network is not
More aggressively, commercial satellite data communication congested and extra bandwidth is available, RE will increase as
providers typically use a very LIW over satellite links, e.g., cwnd increases. On the other hand, if the network is congested,
64 kB and, thus, bypass the slow-start stage of the normal TCP RE flattens despite of the cwnd increase. Fig. 12(a) illustrates
evolution [13]. This method effectively increases the utilization the expected rate, which is equal to , and RE
in a noncongested path, while Fig. 12(b) shows the expected rate and RE under congestion. Also shown in these figures are the initial expected rate plots. Such values correspond to the "initial" expected rate, that is, the expected rate when congestion avoidance was entered, or after a packet loss. From Fig. 12(a), we see that RE follows the expected rate continuously in the noncongestion case. On the other hand, Fig. 12(b) shows that RE does not grow and remains equal to the initial expected rate under congestion. In this case, RTT increases by an amount equal to the queueing time at the bottleneck. This RTT growth then cancels out the cwnd increase and keeps RE constant, as it should.

242 IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, VOL. 23, NO. 2, FEBRUARY 2005

Fig. 12. RE, cwnd, and ssthresh dynamics. (a) Noncongestion. (b) Congestion.

As mentioned before, cwnd/RTTmin indicates the expected rate under no congestion, and RE is the achieved rate. To be more precise, RE is the achieved rate corresponding to the expected rate 1.5 RTTs earlier.³ Thus, in the comparison we must use the corresponding expected rate, that is, (cwnd - 1.5)/RTTmin. RE tracks the expected rate in noncongestion conditions, but flattens, remaining close to the initial expected rate under congestion. We define the congestion boundary as

Congestion Boundary = γ · Expected Rate + (1 - γ) · Initial Expected Rate    (2)

Fig. 13 illustrates the relation among the congestion boundary, the expected rate, and the initial expected rate with γ = 0.5.

RE may fluctuate, crossing above and below the congestion boundary. To detect persistent noncongestion, we use a (noncongestion) counter, which increases by one every time RE is above the congestion boundary and decreases by one if RE is below the congestion boundary. A pseudocode of the PNCD algorithm is as follows:

    if (in congestion avoidance, except for the initial two RTTs) {
        if (RE > Congestion Boundary) {
            no_congestion_counter++;
        } else if (no_congestion_counter > 0) {
            no_congestion_counter--;
        }
        if (no_congestion_counter > cwnd) {
            restart agile probing;
            no_congestion_counter = 0;
        }
    }

³The oldest acknowledged packet used for the RE calculation is received one RTT before. Packets travel back and forth for an RTT time period. Thus, the oldest ACK used for the RE calculation is sent two RTTs before, and the newest ACK is sent one RTT before. On average, the ACK packets used for the RE calculation are sent 1.5 RTTs before. Then, the "cwnd at 1.5 RTTs before" becomes (cwnd - 1.5).
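The counter rule, together with congestion boundary definition (2), can be sketched in a few lines. This is an illustrative sketch only, not the authors' FreeBSD implementation; the class name, method names, and the default weight of 0.5 are our assumptions.

```python
class PNCD:
    """Sketch of persistent noncongestion detection (names are illustrative)."""

    def __init__(self, gamma=0.5):
        self.gamma = gamma   # weight between current and initial expected rate
        self.counter = 0     # noncongestion counter

    def boundary(self, expected_rate, initial_expected_rate):
        # Congestion Boundary = gamma * Expected Rate
        #                     + (1 - gamma) * Initial Expected Rate    (2)
        return (self.gamma * expected_rate
                + (1.0 - self.gamma) * initial_expected_rate)

    def update(self, re, expected_rate, initial_expected_rate, cwnd):
        """Feed one RE sample; return True when agile probing should restart."""
        if re > self.boundary(expected_rate, initial_expected_rate):
            self.counter += 1
        elif self.counter > 0:
            self.counter -= 1
        if self.counter > cwnd:
            self.counter = 0
            return True      # persistent noncongestion detected
        return False
```

With γ = 0.5, a flow whose RE stays above the midpoint between the current and initial expected rates for more than cwnd consecutive samples would trigger a new agile probing phase; moving the weight toward the expected rate raises the boundary and makes detection more conservative.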
Fig. 13. Congestion boundary, expected rate, and initial expected rate with γ = 0.5.

If the parameter γ is greater than 0.5, the congestion boundary line gets closer to the expected rate. We can make the algorithm more conservative by setting γ > 0.5.

Even if the PNCD algorithm can accurately detect noncongestion, there is always the possibility that the network becomes congested immediately after the connection switches to the agile probing phase. One such scenario is after a buffer overflow at the bottleneck router. Many of the TCP connections may decrease their cwnd after a buffer overflow, and congestion is relieved in a short time period. The PNCD in some connection may then detect noncongestion and invoke agile probing. However, the erroneous detection is not a serious problem. Unlike the exponential cwnd increase in the slow-start phase of NewReno, the TCP connection adaptively seeks the fair share estimate in agile probing mode. Thus, if the network has already become congested when a new agile probing begins, the "agile probing" connection will not increase cwnd much, and will go back to linear probing soon enough.

V. SIMULATION RESULTS

In this section, we evaluate the performance of our TCPW-A algorithms in terms of throughput, friendliness, and window dynamics. The results show that TCPW-A exhibits significantly improved performance, yet remains friendly toward TCP NewReno, the de facto TCP standard over the Internet.

A. Premature Exit From Slow-Start

Fig. 14 shows cwnd dynamics under random packet loss during slow-start. The bottleneck link bandwidth is 100 Mb/s, the two-way propagation delay is 100 ms, and the bottleneck buffer is equal to the BDP. When the sending rate (cwnd/RTT) reaches 2 Mb/s, a packet is dropped (assumed to be a random loss, which may happen in the early stage of a connection over satellite or wireless links, as we observed in measurements). The connection exits the slow-start phase and enters congestion avoidance. Without PNCD, cwnd increases slowly, one packet every RTT, requiring more than 60 s for cwnd to reach the BDP. With the help of PNCD, the TCPW-A connection detects persistent noncongestion within a few seconds, and then starts a new agile probing again. The throughputs of these 15 s simulations are 30.3 Mb/s without PNCD, and 88.8 Mb/s with PNCD and agile probing.

B. Dynamic Bandwidth

To illustrate how TCPW-A behaves under dynamic bandwidth, Fig. 15 shows cwnd dynamics when nonresponsive UDP flows leave the path, causing extra bandwidth to become available. The bottleneck link bandwidth is 100 Mb/s, the two-way propagation delay is 100 ms, and the bottleneck buffer is set to the BDP. The nonresponsive UDP flows disappear from the path around 50 s, and the remaining flow is eligible to use the newly materialized bandwidth. Without PNCD, the connection needs 60 s to reach the BDP. On the other hand, PNCD detects the unused bandwidth within a few seconds, and a new agile probing phase makes instant use of this unused bandwidth possible! Note that dynamic bandwidth due to the other reasons stated in the Introduction will induce similar behavior that helps to improve utilization.

C. Fairness and Friendliness to NewReno

Fairness relates to the relative performance of a set of connections of the same TCP variant. Friendliness relates to how sets of connections running different TCP flavors affect the performance of each other. The simulation topology consists of a single bottleneck link with a capacity of 50 Mb/s and a one-way propagation delay of 35 ms. The buffer size at the bottleneck router is equal to the pipe size. The link is loss free except where otherwise stated.

A set of simulations with ten simultaneous flows was run to investigate the fairness of TCPW-A. To provide a single numerical measure reflecting the fair share distribution across the various connections, we use Jain's Fairness Index, defined as [22]

Fairness Index = (Σ x_i)² / (n · Σ x_i²)

where x_i is the throughput of the ith flow and n is the total number of flows. The fairness index always lies between 0 and 1. A value of 1 indicates that all flows got exactly the same throughput.

We calculate the fairness index for both NewReno and TCPW-A. The Jain's Fairness Index of TCPW-A reached 0.9949, and that of NewReno is 0.9944. Therefore, the fairness of TCPW-A is comparable to that of NewReno.

Since TCPW-A invokes agile probing, aggressively seeking unused bandwidth, the evaluation of TCPW-A friendliness is important. PNCD ensures that agile probing is invoked only under persistent noncongestion. Thus, if there are enough connections to fill the pipe, TCPW-A connections behave similarly to TCPW. Thanks to the good friendliness characteristics of TCPW, TCPW-A connections can effectively coexist with TCP NewReno connections over the same path. Fig. 16 shows cwnd dynamics for TCPW-A and NewReno connections. The bottleneck link bandwidth is 100 Mb/s and the two-way propagation delay is 70 ms.
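For reference, Jain's Fairness Index used in the fairness comparison is straightforward to compute. The sketch below uses made-up throughput values for illustration; the function name is ours.

```python
def jain_fairness_index(throughputs):
    """Jain's index: (sum x_i)^2 / (n * sum x_i^2). Equals 1.0 when all equal."""
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))

# Ten flows sharing a 50 Mb/s bottleneck almost equally score close to 1.0.
shares = [5.1, 4.9, 5.0, 5.2, 4.8, 5.0, 5.1, 4.9, 5.0, 5.0]
print(round(jain_fairness_index(shares), 4))  # prints 0.9995
```

A single slow flow pulls the index down sharply, which is why values such as 0.9949 indicate a nearly perfect split.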
Fig. 14. cwnd dynamics under a random packet loss in the slow-start phase. (a) TCP (without PNCD). (b) TCPW-A (with PNCD and agile probing).

In Fig. 16(a), one TCPW-A and one NewReno connection start running at the same time. The TCPW-A connection benefits initially by quickly reaching cruising speed, but they both reach the same cwnd after a few congestion losses. In Fig. 16(b), five TCPW-A and five NewReno connections start running at the same time. The first-started TCPW-A connection gets much more bandwidth initially, but all connections, regardless of whether they run TCPW-A or NewReno, reach the fair share rate after a few packet losses.

D. Throughput Comparison Under Dynamic Load

We evaluated the performance of TCPW-A under highly dynamic load conditions. Over a 20 min simulation time, we ran 100 connections, each with a lifetime of 30 s. The starting times of the connections are uniformly distributed over the simulation time. Thus, the set of connections is randomly spread out over the 20 min simulation time, making the bandwidth available to a connection oscillate considerably. The initial ssthresh of TCP NewReno is set to the default value, 32 kB.

Fig. 17 shows the total throughput versus the bottleneck bandwidth. The total throughput is computed as the sum of the throughputs of all connections. Two-way propagation delays are 70 ms, and the bottleneck buffer size is set equal to the pipe size (BDP). When the bottleneck capacity is 10 Mb/s, TCPW-A does not improve things, as NewReno can easily fill the small pipe. As the bottleneck link capacity increases, TCPW-A exhibits much better scalability than NewReno. At 150 Mb/s, TCPW-A achieves at least twice as much throughput as NewReno does.

Fig. 18 shows the total throughput versus two-way propagation delay. The bottleneck bandwidth is 45 Mb/s, and the
bottleneck buffer size is set equal to the pipe size (BDP). NewReno performance degrades as the propagation delay increases, showing a lack of scalability to long propagation times. TCPW-A, on the other hand, scales well with increasing propagation times.

Fig. 15. cwnd dynamics when dominant flows are gone from the bottleneck router. (a) TCP (without PNCD). (b) TCPW-A (with PNCD and agile probing).

Note that there are two factors that account for TCP NewReno's inability to scale with bandwidth and RTT. One is its slow (linear) cwnd increase even when other connections leave the network and more bandwidth becomes available. The other factor is its small initial ssthresh, which causes premature exit from slow-start. One can argue that increasing the initial ssthresh easily solves this problem. However, a very large initial ssthresh may put the connection at risk of multiple packet losses and reduce the utilization even further. As also pointed out in [2], we think it is best to adaptively figure out the ssthresh value on the fly.

VI. LAB MEASUREMENT RESULTS

To evaluate TCPW-A performance in actual systems, we have implemented the TCPW-A algorithms, including agile probing and PNCD, on FreeBSD [15]. Lab measurements confirmed our simulation results, showing that TCPW-A behaves quite well in actual systems.

A. Measurement Setup

In our lab measurement experiments, all PCs run FreeBSD Release 4.5 [15]. The CPU clock tick is 10 ms (the default). We use Dummynet [11], [12] to emulate the bottleneck link. The bottleneck link (from the PC router to the TCP receiver) speed is set to 10 Mb/s. The propagation time is 400 ms, and the router buffer is set equal to the BDP (500 kB), equivalent to 62 packets. Queue management at the router is drop-tail. We use Iperf [20] as a
traffic generator. The receiver's advertised window is set large enough, at 4 MB. The initial ssthresh for TCP NewReno is set to 32 kB in our measurements.

Fig. 16. cwnd dynamics between TCPW-A and NewReno (bottleneck bandwidth = 100 Mb/s, RTT = 70 ms). (a) One TCPW-A and one NewReno connection. (b) Five TCPW-A and five NewReno connections.

Fig. 17. Link utilization versus bottleneck link capacity (RTT = 70 ms).

Fig. 18. Link utilization versus two-way propagation time (bottleneck capacity = 45 Mb/s).

Fig. 19. cwnd dynamics in TCPW-A and NewReno at startup (lab measurement results).

Fig. 20. cwnd dynamics in TCPW-A and NewReno with preexisting flows in (0, 25 s) (lab measurement results).

B. TCPW-A Convergence

Fig. 19 shows measured cwnd dynamics in TCPW-A and NewReno connections during slow-start. NewReno enters the congestion avoidance phase after cwnd reaches the initial ssthresh (32 kB), and cwnd then increases linearly by one packet per RTT. cwnd reaches only 2 Mb/s, 20% of the link capacity, in the first 25 s. On the other hand, in the TCPW-A connection, cwnd quickly converges to the link capacity via agile probing. We can also confirm that cwnd growth is not exponential throughout agile probing. Initially, cwnd increases rapidly for the first 3 s, and then increases more slowly as the connection approaches the link capacity.

C. PNCD Effectiveness

Fig. 20 compares cwnd dynamics in TCPW-A to those of NewReno as obtained from lab measurements. When a connection starts, the path is already filled with nonresponsive UDP flows. NewReno enters congestion avoidance after cwnd reaches the initial ssthresh (32 kB). The TCPW-A connection does not increase cwnd much, since the bottleneck link is already largely occupied. In fact, cwnd stays lower than that of NewReno. The dominant flows finish after 25 s, and the entire link capacity becomes available to the remaining
connection. The TCPW-A connection detects persistent noncongestion, and invokes agile probing to quickly capture the unused bandwidth. On the other hand, NewReno stays in congestion avoidance and linearly (and slowly) increases its cwnd.

VII. RELATED WORK

A great deal of research effort has been made to enhance TCP performance for dynamic, large, leaky pipes. There are several other approaches to improving TCP scalability to large pipes that require only sender-side modification, including scalable TCP [23], high-speed TCP [14], and Vegas-based fast TCP [8]. In these schemes, as in traditional TCP, packet losses are exclusively treated as congestion signals. Compared with these schemes, TCPW-A is equipped with the ability to better handle random errors in high-speed heterogeneous networks. The explicit control protocol (XCP) [24] is a well-designed congestion control scheme for high-speed, long-delay networks. However, it requires cooperation from routers and receivers, making it difficult to deploy.

With the increase of short-lived web traffic [16], researchers realize that startup performance is important, especially over large pipes. The agile probing scheme in TCPW-A provides a realistic means to figure out the right ssthresh on the fly. A variety of other methods have been suggested recently in the literature to avoid multiple losses and to achieve higher utilization during slow-start. A larger initial cwnd, roughly 4 kB, is proposed in [1]. This could greatly speed up transfers with only a few packets. However, the improvement is still inadequate when the BDP is very large and the file to transfer is bigger than just a few packets [31]. Fast start [28] uses cwnd and ssthresh values cached from recent connections to reduce the transfer latency. The cached parameters may be too aggressive or too conservative when network conditions change. In [18], Hoe proposes to set the initial ssthresh to the BDP estimated using packet pair measurements. This method can be too aggressive when the bottleneck buffer is not big enough, or when many flows coexist. In [31], Shared Passive Network Discovery (SPAND) has been proposed to derive the optimal initial values for TCP parameters. SPAND needs leaky bucket pacing for outgoing packets, which can be costly and problematic in practice [3]. TCP Vegas [5] detects congestion by comparing the achieved throughput over a cycle of length equal to the RTT to the expected throughput implied by cwnd and baseRTT (the minimum RTT) at the beginning of a cycle. This method is applied in both the slow-start and congestion avoidance phases. During slow-start, a Vegas sender doubles its cwnd only every other RTT, in contrast with Reno's doubling every RTT. A Vegas connection exits slow-start when the difference between achieved and expected throughput exceeds a certain threshold. However, Vegas is not able to achieve high utilization in large bandwidth-delay networks, due to its overestimation of RTT [26].

To deal with dynamic bandwidth, TCP-EBN is proposed in [10]. In TCP-EBN, a TCP sender increases or decreases its cwnd according to a bandwidth estimate that is sent to it from routers. Thus, this scheme relies on router cooperation and is, therefore, not an "end-to-end" approach to the problem at hand. Also, the router has to either keep per-flow state to get an accurate estimate, with the resultant scalability problem, or assume that flows share bandwidth fairly, which is often not true. Compared to this scheme, and to XCP (which takes advantage of router feedback to swiftly adjust cwnd and, thus, can handle dynamic bandwidth), TCPW-A only requires sender-side modification and is, thus, much easier to deploy.

VIII. CONCLUSION AND FUTURE WORK

In this paper, we presented TCPW-A, a TCP modification that uses sender-side intelligence to address the issues of highly dynamic bandwidth, large delays, and random loss. Beyond the basic TCPW scheme presented in [6] and [30], TCPW-A incorporates two new mechanisms: agile probing and PNCD. Agile probing enhances probing during slow-start and whenever noncongested conditions are detected. Agile probing converges to more appropriate ssthresh values, thereby making better utilization of large pipes and reaching "cruising speeds" faster, without causing multiple packet losses. Another contribution of this work is the introduction of the PNCD method. PNCD is shown to be effective in detecting persistent noncongestion conditions, upon which TCPW-A invokes agile probing. The combination ensures that during congestion avoidance, TCPW-A can make quick use of bandwidth that materializes because of dynamic loads, among other causes.

The results presented above were obtained using both simulation and laboratory measurements with an actual implementation under the FreeBSD operating system. The results show that TCPW-A works as well in actual systems as it does in simulation experiments.

In the future, we will evaluate TCPW-A further in terms of expanded friendliness, random loss, complex topologies, and interaction with active queue management schemes. We also plan to move our measurements from the lab to the Internet and to satellite links.

REFERENCES

[1] M. Allman, S. Floyd, and C. Partridge, "Increasing TCP's initial window," Internet Draft, Apr. 1998.
[2] M. Allman, "End2end-Interest discussion group," 2003. [Online]. Available: http://www.postel.org/pipermail/end2end-interest/2003-July.txt
[3] A. Aggarwal, S. Savage, and T. E. Anderson, "Understanding the performance of TCP pacing," in Proc. IEEE INFOCOM, Tel-Aviv, Israel, Mar. 2000, pp. 1157-1165.
[4] K. J. Astrom and B. Wittenmark, Computer Controlled Systems. Englewood Cliffs, NJ: Prentice-Hall, 1997.
[5] L. S. Brakmo and L. L. Peterson, "TCP Vegas: End-to-end congestion avoidance on a global Internet," IEEE J. Sel. Areas Commun., vol. 13, no. 8, pp. 1465-1480, Oct. 1995.
[6] C. Casetti, M. Gerla, S. Mascolo, M. Y. Sanadidi, and R. Wang, "TCP Westwood: Bandwidth estimation for enhanced transport over wireless links," presented at MobiCom 2001, Rome, Italy, Jul. 2001.
[7] J. Chen, F. Paganini, R. Wang, M. Y. Sanadidi, and M. Gerla, "Fluid-flow analysis of TCP Westwood with RED," in Proc. GLOBECOM, vol. 7, San Francisco, CA, Nov. 2003, pp. 4064-4068.
[8] D. H. Choe and S. H. Low, "Stabilized Vegas," in Proc. IEEE INFOCOM, San Francisco, CA, Apr. 2002, pp. 2290-2300.
[9] C. Dovrolis, P. Ramanathan, and D. Moore, "What do packet dispersion techniques measure?," presented at IEEE INFOCOM, Anchorage, AK, Apr. 2001.
[10] D. Dutta and Y. Zhang, "An early bandwidth notification (EBN) architecture for dynamic bandwidth environments," in Proc. IEEE Int. Conf. Commun., Apr. 2002.
[11] L. Rizzo, "Dummynet: A simple approach to the evaluation of network protocols," ACM Comput. Commun. Rev., 1997.
[12] IP Dummynet. [Online]. Available: http://info.iet.unipi.it/~luigi/ip_dummynet/
[13] N. Ehsan, M. Liu, and R. Ragland, "Measurement based performance analysis of Internet over satellite," presented at Int. Symp. Perf. Eval. Comput. Telecommun. Syst. (SPECTS), San Diego, CA, Jul. 2002.
[14] S. Floyd, "High speed TCP for large congestion windows," Internet draft, draft-ietf-tsvwg-highspeed-01.txt, 2003.
[15] FreeBSD Project. [Online]. Available: http://www.freebsd.org/
[16] L. Guo and I. Matta, "The war between mice and elephants," in Proc. 9th IEEE Int. Conf. Network Protocols, Riverside, CA, Nov. 2001, pp. 180-188.
[17] G. Hengartner, J. Bolliger, and T. Gross, "TCP Vegas revisited," in Proc. IEEE INFOCOM, Mar. 2000, pp. 1546-1555.
[18] J. C. Hoe, "Improving the startup behavior of a congestion control scheme for TCP," in Proc. ACM SIGCOMM, 1996, pp. 270-280.
[19] G. Holland and N. H. Vaidya, "Analysis of TCP performance over mobile ad hoc networks," presented at ACM MobiCom, Seattle, WA, Aug. 1999.
[20] Iperf Version 1.7.0. [Online]. Available: http://dast.nlanr.net/Projects/Iperf/
[21] V. Jacobson, "Congestion avoidance and control," ACM Comput. Commun. Rev., vol. 18, no. 4, pp. 314-329, Aug. 1988.
[22] R. Jain, The Art of Computer Systems Performance Analysis. New York: Wiley, 1991.
[23] T. Kelly, "Scalable TCP: Improving performance in high-speed wide area networks," Dec. 2002, submitted for publication.
[24] D. Katabi, M. Handley, and C. Rohrs, "Internet congestion control for future high bandwidth-delay product environments," in Proc. ACM SIGCOMM, 2002.
[25] S. Keshav, "A control-theoretic approach to flow control," in Proc. ACM SIGCOMM, Sep. 1991, pp. 3-15.
[26] S. Lee, B. G. Kim, and Y. Choi, "Improving the fairness and the response time of TCP-Vegas," in Lecture Notes in Computer Science. New York: Springer-Verlag.
[27] NS-2 Network Simulator (ver. 2), LBL. [Online]. Available: http://www.mash.cs.berkley.edu/ns/
[28] V. N. Padmanabhan and R. H. Katz, "TCP fast start: A technique for speeding up web transfers," presented at IEEE GLOBECOM, Sydney, Australia, Nov. 1998.
[29] R. Wang, M. Valla, M. Y. Sanadidi, and M. Gerla, "Using adaptive bandwidth estimation to provide enhanced and robust transport over heterogeneous networks," presented at 10th IEEE Int. Conf. Network Protocols, Paris, France, Nov. 2002.
[30] R. Wang, G. Pau, K. Yamada, M. Y. Sanadidi, and M. Gerla, "TCP startup performance in large bandwidth delay networks," in Proc. IEEE INFOCOM, Hong Kong, Mar. 2004, pp. 796-805.
[31] Y. Zhang, L. Qiu, and S. Keshav, "Optimizing TCP startup performance," Tech. Rep., Cornell CSD, 1999.

Ren Wang (S'01-M'03) received the B.E. degree in automation from the University of Science and Technology of China, the M.S. degree in computer engineering from the Chinese Academy of Sciences, Beijing, China, and the M.S. degree in electrical engineering from the University of California, Los Angeles (UCLA). Currently, she is working towards the Ph.D. degree in computer science at UCLA under the guidance of Prof. M. Gerla.
Her research interests include network performance measurement and evaluation, network analysis and modeling, and TCP protocol design and evaluation.

Kenshin Yamada, photograph and biography not available at the time of publication.

M. Yahya (Medy) Sanadidi (M'83-SM'03) received the B.Sc. degree in computer engineering from the University of Alexandria, Alexandria, Egypt, and the Ph.D. degree in computer science from the University of California, Los Angeles (UCLA).
He is currently a Research Professor in the Computer Science Department, UCLA. As Coprincipal Investigator on NSF sponsored research, he is guiding research in design, modeling, and evaluation of high-performance Internet protocols. At UCLA, he also teaches undergraduate and graduate courses on queueing systems and computer networks. He was a Manager and Senior Consulting Engineer at Teradata, AT&T, NCR, and previous to that, he held the position of Computer Scientist at Citicorp, where he pursued R&D projects in wireless metropolitan area data communications. In particular, from 1984 to 1987, he led the design and prototyping of a wireless MAN for home banking and credit card verification applications. From 1981 to 1983, he was an Assistant Professor in the Computer Science Department, University of Maryland, College Park. At the University of Maryland, he taught performance modeling, computer architecture and operating systems, and was Principal Investigator for NSA sponsored research in global data communications networks. He has consulted for industrial concerns, has coauthored conference and journal papers, has been awarded two patents in performance modeling, and has served as a reviewer and program committee member of professional conferences. His current research interests are in remote sensing and estimation of path characteristics, congestion control, and adaptive streaming.

Mario Gerla (M'75-SM'01-F'03) received the Graduate degree in engineering from the Politecnico di Milano, Milan, Italy, in 1966, and the M.S. and Ph.D. degrees in engineering from the University of California, Los Angeles (UCLA), in 1970 and 1973, respectively.
After working for Network Analysis Corporation from 1973 to 1976, he joined the Faculty of the Computer Science Department, UCLA, where he is now a Professor. His research interests cover the performance evaluation, design and control of distributed computer communication systems, high-speed computer networks, wireless LANs, and ad hoc wireless networks. He has worked on the design, implementation and testing of various wireless ad hoc network protocols (channel access, clustering, routing, and transport) within the DARPA WAMIS and GloMo Projects. Currently, he is leading the ONR MINUTEMAN Project at UCLA, the main focus of which is the design of a robust, scalable wireless ad hoc network architecture for unmanned intelligent agents in defense and homeland security scenarios. He is also conducting research on QoS routing, multicasting protocols, and TCP transport for the next-generation Internet (see www.cs.ucla.edu/NRL for recent publications).