Electrical Engineering and Computer Science Department
Technical Report
NWU-EECS-06-15
October 16, 2006
Abstract
Network measurements reveal that today's most popular web browsers open parallel
TCP connections and use them to actively transfer data from web servers. This technique, based
on the assumption that ``more is better,'' is motivated by the need to improve user-perceived
performance. While parallel connections do indeed improve performance in lossless networks and
for very large web pages, we show, by means of analytical modeling, simulation,
and testbed experiments, that no such improvements exist in scenarios that more closely
characterize today's Internet. Moreover, we demonstrate that, in addition to placing more stress
on web servers, the parallel TCP approach can degrade web response times by up to an order of
magnitude relative to those achievable with a single connection. We analyze the
roots of this phenomenon and find that the benefits of accelerated parallel download of small
objects are largely overshadowed by the initial connection setup time, which dominates the
entire transfer. We quantify user-perceived latency and show that it increases dramatically with
both the level of parallelism and the degree of congestion in the network.
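
To make the setup-versus-transfer trade-off described above concrete, the following sketch is a
minimal Monte Carlo model of page completion time. It is our own illustration, not the analytical
model developed in the report: it assumes one 3-way handshake per connection, a 3 s initial SYN
retransmission timeout with exponential backoff on SYN loss, and slow-start-limited transfers of
small objects, while deliberately ignoring data-packet losses, bandwidth sharing, and server load,
which the report's full analysis accounts for. All names and parameter values here (page_load_time,
a 200 ms RTT, 4 KB objects, 5% loss) are hypothetical examples.

    import random

    def slow_start_rounds(size_bytes, mss=1460, init_cwnd=2):
        """Round trips needed to deliver size_bytes when the congestion
        window starts at init_cwnd segments and doubles every RTT."""
        segments = -(-size_bytes // mss)          # ceiling division
        cwnd, sent, rounds = init_cwnd, 0, 0
        while sent < segments:
            sent += cwnd
            cwnd *= 2
            rounds += 1
        return rounds

    def handshake_time(rtt, loss, syn_timeout=3.0):
        """Connection setup time: each lost SYN stalls the handshake for a
        full retransmission timeout, doubling on every retry (3 s initial
        value assumed here)."""
        delay, timeout = 0.0, syn_timeout
        while random.random() < loss:
            delay += timeout
            timeout *= 2
        return delay + rtt

    def page_load_time(n_conns, n_objects, obj_size, rtt, loss):
        """Objects are split evenly across n_conns connections; the page is
        complete when the slowest connection has finished its handshake and
        all of its request + slow-start round trips."""
        per_conn = -(-n_objects // n_conns)
        rounds_per_object = 1 + slow_start_rounds(obj_size)
        return max(handshake_time(rtt, loss) + per_conn * rounds_per_object * rtt
                   for _ in range(n_conns))

    if __name__ == "__main__":
        random.seed(1)
        rtt, obj_size, n_objects, loss, trials = 0.2, 4_000, 12, 0.05, 5000
        for n in (1, 2, 4, 12):
            avg = sum(page_load_time(n, n_objects, obj_size, rtt, loss)
                      for _ in range(trials)) / trials
            print(f"{n:2d} connection(s): mean page time {avg:.2f} s")

Even in this simplified setting, the per-connection setup term (handshake plus possible SYN-timeout
stalls) is the component that parallelism multiplies rather than removes, and for small objects it
is comparable to the transfer itself; the report's modeling, simulations, and testbed experiments
quantify how this, together with congestion effects omitted here, erases the expected gains.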