ISP Interconnection
1 Introduction
Internet Service Providers (ISPs) connect their networks to each other in order to exchange traffic between their customers and the customers of other ISPs. ISP Interconnection[1] allows traffic originating at a source connected to one ISP's network to reach a destination connected to another ISP's network, around the block or around the world. End users see the seamless, global, ubiquitous communication medium known as the Internet; behind the scenes lie many individual networks, owned and operated by many different corporate, institutional, and governmental entities, joined to each other by interconnection arrangements. Interconnection is the glue that holds the Internet together. Interconnection enables the Internet as a whole to be ubiquitously fully connected, despite the fact that no single network operator could possibly provide Internet access in every part of the world.

The unregulated, market-driven model on which today's global interconnection arrangements are based has developed over the past three decades in parallel with the development of the Internet itself, and studies by a wide variety of public and private organizations[2] have repeatedly concluded that it represents the most effective and efficient way to provide ubiquitous public Internet connectivity without being either anti-competitive or inequitable.

Internet interconnection is fundamentally different from interconnection in the traditional, circuit-switched telephony world, for reasons that are intrinsic to the architecture of the Internet and how it has evolved. As a result, the nature of Internet interconnection agreements, the range of choices that are available to participants, the economics of interconnection, and the number and variety of participants in the market are different from their counterparts in the telephony world.

[1] In The Evolution of the U.S. Internet Peering Ecosystem (see Sources and References), Bill Norton coins the roughly equivalent term "Internet Peering Ecosystem": a community of loosely affiliated network operators that interact and interconnect their networks in various business relationships.

[2] An excellent summary of the case that these studies collectively make against regulation of ISP interconnection is contained in FCC Office of Plans and Policy (now Office of Strategic Planning and Policy Analysis) Working Paper 32, The Digital Handshake: Connecting Internet Backbones (see Sources and References).
2.1 Networking
Neither internetworking nor interconnection was a feature of the Internet's most distant precursors. In the 1950s and 1960s, before LANs and PCs, computer communication meant connecting I/O and storage peripherals (such as card readers, terminals, and printers) to resolutely self-contained mainframe computers. Early efforts to connect computers to each other led to networks based on a variety of different proprietary communications technologies and protocols. The Information Processing Techniques Office (IPTO) of the Advanced Research Projects Agency (ARPA) of the U.S. Department of Defense funded several projects to build homogeneous networks before Bob Taylor, who took over as IPTO head in 1966, recruited Larry Roberts to design a distributed communications network, which laid the foundation for the ARPAnet. When there were just a few of these homogeneous networks, it was possible to exchange information between them by building a translator; but as the number of networks grew, the n-squared scaling inefficiency of pair-wise translation led to the idea of internetworking: creating a network of networks.
2.2 Internetworking
It is remarkable to realize that the very earliest thinking[4] about what a network of networks (an internet) should be embraced the three key concepts that underlie the architecture of today's global Internet:

1) The concept of packet switching, which originated in at least three distinct places during 1961-1965: in Paul Baran's work at the RAND Corporation in Santa Monica, CA; in Leonard Kleinrock's work at UCLA in Los Angeles, CA; and in Donald Davies's work at the National Physical Laboratory in Teddington, UK. All three concluded that the strongest communication system would be a distributed network of computers with (a) redundant links; (b) no central control; (c) messages broken into equal-size packets; (d) variable routing of packets depending on the availability of links and nodes; and (e) automatic reconfiguration of routing tables after the loss of a link or node.

2) The concept of best-effort service, which originated in the multi-access channels of ALOHAnet at the University of Hawaii (by Abramson, Kuo, and Binder, through 1970).[5]
[4] In the early to mid-1960s, culminating in the July 1968 ARPA request for proposals for the interconnection of four ARPA research sites into what would be called the ARPAnet.
[5] ALOHAnet was a radio network, and the idea of contention for channels was widely familiar in the radio context; it was Bob Metcalfe's brilliant leap from ALOHA to Ethernet (at PARC in 1973) that brought the concept of stochastic (non-deterministic) channel access into the networking mainstream.
3) The concept of application independence: that the network should be adaptable to any purpose, whether foreseen or unforeseen, rather than tailored specifically for a single application (as the public switched telephone network had been purpose-built for the single application of analog voice communication).

At the outset, in 1969, the ARPAnet was not an internet: each of its four computer hosts was connected to an Interface Message Processor (IMP) by a proprietary serial link and protocol,[6] and the IMPs communicated with each other over 56 Kb/s lines leased from the telephone company, using an ARPAnet-specific host-to-host protocol that was referred to as the Network Control Program (NCP). Other packet networks, based on other protocols, were being developed at the same time.[7] The first papers describing packet network interconnection were published by Vint Cerf and Bob Kahn in 1973; the ARPAnet began using IP in 1977.

From the beginning the ARPAnet was managed by an informal and mostly self-selected group of engineers and managers who began meeting as the Network Working Group (NWG) in the summer of 1968. The tradition of self-management by the people designing, installing, and operating the network was established at the very first NWG meeting, and has carried through to the governance structures that oversee the Internet today, particularly the Internet Engineering Task Force (IETF).
2.3 Interconnection
The clearly evident usefulness of the ARPAnet to the U.S. Defense Department contractors who were permitted to use it led other U.S. Government agencies to develop similar networks.[8] Eventually, disgruntled computer scientists[9] who could not connect to one of the government-controlled networks established CSNET for the (academic and industrial) computer science community. AT&T's wide dissemination of the Unix operating system encouraged the creation of USENET, based on the Unix UUCP communication protocols, and in 1981 Ira Fuchs and Greydon Freeman developed BITNET, which linked academic mainframe computers. With the exception of BITNET and USENET, these early networks were restricted to closed communities defined by an acceptable use policy (AUP) that specified the uses to which the networks could legitimately be put (e.g., to conduct research funded by a particular government agency). The prevalence of highly restrictive AUPs provided little incentive for the networks to interconnect, and initially they did not.

[6] Dubbed 1822, after the serial number of the BBN report that described it.

[7] Initially the Packet Radio Network (Bob Kahn) and Packet Satellite Network (Larry Roberts); later Cyclades (Louis Pouzin), and the X.25-based networks that became Telenet, Datapac, PSS, and Transpac.
[8] The U.S. Department of Energy (DoE) built MFENet for its researchers in Magnetic Fusion Energy; DoE's High Energy Physicists responded by building HEPNet. NASA Space Physicists followed with SPAN.
[10] By Larry Landweber (University of Wisconsin), David Farber (University of Delaware), Anthony Hearn (Rand Corporation), and Peter Denning (Purdue University).
NSF went on to sponsor NSFnet, a high-speed backbone connecting its supercomputing research centers. NSF also commissioned the development[11] of a deliberate architecture of backbones and regional networks that introduced the idea of hierarchy into the Internet topology. By 1990, the NSFnet had become the backbone of the modern Internet, and in 1996, NSF handed over its management to commercial Internet Service Providers (ISPs).
As the number and diversity of NAPs (Network Access Points) increased, the potential complexity of hundreds or thousands of ad-hoc bilateral arrangements pointed to the need for an overarching, neutral policy framework within which providers could implement mutually beneficial cost-sharing interconnection agreements. It was at this juncture that there began to emerge a large number of privately operated NAPs, a.k.a. exchange points, which provided a uniform set of technical and administrative services (e.g., interconnection, traffic routing based on sophisticated criteria, operational support of routing equipment, traffic metering, billing, and clearing and settlement of charges between parties). These exchanges provided a framework that allowed multiple providers of different sizes, scopes, and operating philosophies, serving the same or different markets, to interconnect in ways appropriate to each.
The growing number of ISPs, and the variety of different ways in which the rapidly expanding Internet services market drove the development of creative combinations of public and private ISP interconnection, ensured that the Internet as a whole would always be fully interconnected; the customers of every ISP could communicate with the customers of every other ISP, whether or not any particular pair of ISPs installed an explicit public or private interconnection.[12]
[12] The topology of Internet interconnection has emerged over the past decade as an important factor in studies of Internet resilience and survivability; see, for example, Edward J. Malecki's The Economic Geography of the Internet's Infrastructure (see Sources and References). A corollary to many of these studies is the observation that the self-healing properties of the Internet architecture guarantee that the Internet as a whole would remain fully interconnected even if most of the direct connections between individual ISPs were removed. The fear of Internet balkanization as a result of large ISPs refusing to interconnect with smaller ISPs is, in today's Internet, completely unfounded.
[13] With the notable exception of the UK, which was connected to the ARPAnet much earlier than any other non-North American country.
Internet content, they had very little incentive to defray the cost of connections to other countries. As recently as five years ago, ISPs in non-North American countries were determined to correct this imbalance by forcing North American ISPs to subsidize the cost of inter-regional links. However, as dozens of viable regional Internet exchanges have emerged outside of North America,[14] the pressure to regulate international ISP interconnection in favor of non-North American ISPs has substantially evaporated. Market forces now drive ISP interconnection decisions in many other countries as effectively as they do in North America.
[14] A current list of Internet exchange points is maintained at https://www.peeringdb.com/private/exchange_list.php; at the time of this report, 60 of the 92 listed exchanges are located outside of North America.
4) control of spam, phishing, and other intrinsically multi-ISP exploits; and

5) enforcement of national public policy mandates (universal service, emergency warning (cf. the recent IETF proposal), wiretap, etc.).

ISP interconnection operates very differently in the Internet than its counterpart does, for example, in the more familiar public switched telephone network (PSTN). The differences are observable both in the basic architecture of interconnection (the decentralized and self-organizing Internet approach to packet switching vs. the centralized and heavily managed PSTN circuit switching) and in the policies and economics that govern interconnection arrangements.
creation of value, while truly representing global consensus and thereby keeping participants on board. Almost every aspect of Internet technical development, operation, and governance is managed by a self-organized, self-regulating structure, and this fact has often been cited as the key to the Internet's phenomenal success. Self-regulation has allowed the Internet to adapt quickly and efficiently to the rapid pace of change and innovation in telecommunications technology, operations, and public policy.

3.1.1.1 Technical Standards

Internet technical standards are developed through the activities of the Internet Engineering Task Force (IETF), coordinated by the Internet Architecture Board (IAB) and housed, administratively, within the Internet Society. The IETF is:

"a loosely self-organized group of people who contribute to the engineering and evolution of Internet technologies. It is the principal body engaged in the development of new Internet standard specifications. The IETF is unusual in that it exists as a collection of happenings, but is not a corporation and has no board of directors, no members, and no dues."[15]

The loosely self-organized IETF and related organizations have proven, over a 20-year history, to be effective at establishing workable standards and highly adaptive to the rapid growth and change that have occurred within the Internet.

[15] Tao of the IETF: A Novice's Guide to the Internet Engineering Task Force (see Sources and References).

3.1.1.2 Operating Principles and Practices

Since the earliest days of the Internet, the operators of interconnected networks have met both informally and formally to share technical information and coordinate operating principles and practices. In the 1990s, members of the former NSFNET Regional-techs meeting formed an expanded group, called the North American Network Operators Group (NANOG), with a charter to promote and coordinate the interconnection of networks within North America and to other continents, serving as an operational forum for the coordination and dissemination of technical information related to backbone and enterprise networking technologies and operational practices. NANOG has been highly effective in allowing ISPs and backbone providers to coordinate their activities to efficiently provide seamless service to a broad market. The fact that North American Internet users enjoy transparent access to the entire Internet, regardless of the ISP to which they happen to be locally connected, testifies to the success of the self-regulating NANOG model. Another measure of the effectiveness of NANOG is that other regions of the world have replicated the approach and have developed or are developing similar groups, including:

AfNOG: the African Network Operators Group
SwiNOG: the Swiss Network Operators Group
JANOG: the JApan Network Operators Group
FRnOG: the FRench Network Operators Group
NZNOG: the New Zealand Network Operators Group
SANOG: the South Asian Network Operators Group
PACNOG: the Pacific Network Operators Group
3.1.1.3 Resource Allocation

One of the most important governance functions in any domain is promoting an efficient exchange of value and allocation of resources. In the Internet, there are two key types of resources in play: physical, tangible infrastructure, such as communications links and switching facilities, and virtual resources. Domain names, such as coca-cola.com or lightbulbs.com, constitute one highly visible class of valuable virtual resource in the Internet. Domain names combine aspects of traditional intellectual property (i.e., trademarks and service marks) with the technical infrastructure required to cause the names to perform their intended function.
Another important virtual resource is the IP address, the numerical address by which each computer connected to the Internet is uniquely addressable. Under any addressing scheme, there are only a fixed number of addresses available; the IETF can establish an addressing scheme (and has done so); the operators' groups can establish a plan for deploying it; but there still needs to be a mechanism for allocating the addresses. ICANN, the Internet Corporation for Assigned Names and Numbers, is an international, broadly participatory organization responsible for overseeing:

The allocation of domain names, through a highly decentralized, market-driven process
The allocation of IP addresses
The operation of the mechanism (also highly decentralized) whereby names are resolved to addresses, an essential function for the proper operation of most Internet services

From its own description: "As a private-public partnership, ICANN is dedicated to preserving the operational stability of the Internet; to promoting competition; to achieving broad representation of global Internet communities; and to developing policy appropriate to its mission through bottom-up, consensus-based processes."
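The name-to-address resolution function mentioned above can be made concrete with a short sketch. This is a minimal illustration (not part of the original report) using only the Python standard library; the domain name shown is purely an example.

    # Minimal sketch: look up the IP addresses for a domain name through the
    # local resolver, the everyday client-side view of the name-to-address
    # resolution machinery described above. "example.com" is illustrative.
    import socket

    def resolve(name):
        # getaddrinfo consults the configured DNS resolver and returns one
        # entry per (address family, socket type) combination.
        results = socket.getaddrinfo(name, None)
        # Collect the unique IP addresses from the returned sockaddr tuples.
        return sorted({entry[4][0] for entry in results})

    if __name__ == "__main__":
        for address in resolve("example.com"):
            print(address)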
are installed. ISPs that want to use the exchange point to connect to other ISPs run one or more links from their own routers to the exchange point, and connect them to the exchange point routers. The ISP routers and the exchange point routers exchange information about where different groups of Internet hosts (identified by their IP addresses) are located, using routing protocols such as the Border Gateway Protocol (BGP). ISP A might learn, for example, that a group of Internet users who are customers of ISP B can be reached through an exchange point to which both A and B are connected, and decide to use the exchange point to reach those users. Traffic from users on A's network to users on B's network would flow over A's network as far as the exchange point, and then over B's network.[16] A similar arrangement obtains when two ISPs decide to connect their networks directly to each other, rather than at a third-party exchange point.

The most important difference between this model of Internet interconnection and the circuit-switching model of the PSTN is that the Internet dynamically self-organizes to find paths from one point to another without explicit pre-configuration or setup. In the Internet, if an ISP's link to one exchange point (or the exchange point itself) fails, it can quickly re-route traffic through some other exchange point, or to a direct connection to another ISP, without loss of data or manual re-configuration. When multiple carriers are involved, this process is much less dynamic (and much less robust) in the PSTN, where call re-routing depends on the prior negotiation and provisioning not only of alternative circuits but also of switch ports and switching fabric capacity.

In today's richly interconnected Internet, the possibility that an ISP could find itself unable to connect its customers to some part of the Internet because one or even many other ISPs refused to interconnect with it[17] is vanishingly small; there are simply too many available connection points, public and private, and the architecture of the Internet ensures that traffic will flow end-to-end regardless of where an ISP is connected.

[16] In practice, of course, the way in which traffic flows are managed at exchange points is much more complicated than in this example.

[17] Some ISPs will refuse to carry traffic that originated with another ISP that has been blacklisted for sponsoring spam or phishing attacks, but this is not the classic hold-up scenario that can arise from simple refusal to interconnect in the PSTN world.
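To make the route-selection step described above slightly more concrete, here is a minimal sketch (not from the report) of how a router chooses a next hop from routes it has already learned, for example via BGP at an exchange point. The prefixes and peer labels are invented documentation values, and the sketch models only the resulting forwarding decision, not the BGP protocol exchange itself.

    # Illustrative routing table: prefixes learned from peers, mapped to the
    # interconnection through which they are reachable. All values invented.
    import ipaddress

    routing_table = {
        ipaddress.ip_network("198.51.100.0/24"): "ISP B, via the exchange point",
        ipaddress.ip_network("203.0.113.0/24"):  "ISP C, via a private interconnect",
        ipaddress.ip_network("0.0.0.0/0"):       "transit provider (default route)",
    }

    def next_hop(destination):
        # Longest-prefix match: the most specific matching route wins,
        # which is the basic forwarding rule applied by IP routers.
        addr = ipaddress.ip_address(destination)
        matches = [net for net in routing_table if addr in net]
        best = max(matches, key=lambda net: net.prefixlen)
        return routing_table[best]

    # Traffic for one of ISP B's customers is handed off at the exchange
    # point; traffic for a destination with no specific route falls back
    # to the transit provider.
    print(next_hop("198.51.100.7"))   # -> ISP B, via the exchange point
    print(next_hop("192.0.2.9"))      # -> transit provider (default route)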
[18] Representative examples of large, medium, and small networks' peering policies are the MCI UUNet policy (at <http://global.mci.com/uunet/peering/>), the Speakeasy policy (at <http://www.speakeasy.net/network/peeringpolicy.php>), or (third example),
The current environment is one in which a heterogeneous mix of network providers (large and small; local, national, and global; public and private) connect to each other's networks under a variety of arrangements, which adhere to one of four basic models:

1) Bilateral settlements. Two operators interconnect. Each accepts traffic destined for its own customers and originating within the other's network. Neither network delivers traffic to third parties on behalf of the other. Each charges for the volume of traffic it accepts from the other. (It follows that if the value of traffic in both directions is equal, the net settlement amount would be zero.)

2) Sender Keep All. As with bilateral settlements, two operators each accept traffic from the other, for delivery to the accepting network's customers. But no charge is made.

3) Transit. One operator, the provider, accepts traffic originating within the other's network, destined not only for its own customers but for third-party networks to which the provider in turn connects. The provider charges a fee for carrying the other network's traffic.

4) Multilateral exchanges. An operator connects to an exchange, a (usually commercial) facility carrying connections from multiple operators. There, traffic is routed to other operators' networks via equipment provided by the exchange and according to rules administered by the exchange; the operator settles through the exchange for traffic that others carry on its behalf and that it carries on behalf of others.

When all the different network interconnection arrangements are considered, they can be seen as variations on a common theme (see Table 1):

Networks A and B connect to each other, possibly through a third-party exchange point or other intermediary.

Each accepts traffic destined for its own customers (peering), and/or for the customers of other networks to which it is in turn connected (transit).
The arrangement either includes a cash payment made by one network to the other (again, possibly through a third-party intermediary), or it doesn't.
The arrangement is either purely bilateral, or it is a multi-party agreement.

Networks connect:       Directly | Through an Exchange
A accepts traffic for:  Its own customers only | Other networks to which it connects
B accepts traffic for:  Its own customers only | Other networks to which it connects
Financial settlement:   None | Cash | Other
Nature of Agreement:    Bilateral | Multi-Party

Table 1: Any given interconnection arrangement can be characterized by choosing one value from each of the columns above.
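As a rough illustration of the cash flows under the first three models, the following sketch (not part of the report) computes settlements from traffic volumes. The volumes and per-gigabyte rates are invented for the example; real agreements measure traffic and value in far more elaborate ways.

    # Bilateral settlement: each network charges for the traffic it accepts
    # from the other, so only the net difference changes hands. A positive
    # result means A pays B; a negative result means B pays A.
    def net_settlement(traffic_a_to_b_gb, traffic_b_to_a_gb, rate_per_gb):
        return (traffic_a_to_b_gb - traffic_b_to_a_gb) * rate_per_gb

    # Transit: the provider charges for all traffic it accepts from the
    # customer network, since it carries that traffic onward to third parties.
    def transit_charge(customer_traffic_gb, transit_rate_per_gb):
        return customer_traffic_gb * transit_rate_per_gb

    # Bilateral settlement with unequal flows (hypothetical numbers).
    print(net_settlement(120, 100, rate_per_gb=0.50))    # A pays B 10.0

    # Sender Keep All is the same exchange of traffic with no charge,
    # i.e. a zero rate: nothing is paid regardless of the imbalance.
    print(net_settlement(120, 100, rate_per_gb=0.0))     # 0.0

    # Transit: the customer pays for everything it hands to the provider.
    print(transit_charge(220, transit_rate_per_gb=0.40)) # 88.0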
[19] Two recent studies of peering economics are reported in Economics of Peering and A Business Case for Peering in 2004 (see Sources and References).
Operational: networks may require a certain level of operational support.
Routing: networks may require specific routing policies and practices.
Size: networks may choose to peer only with similarly sized networks.
Anticipated traffic volumes.
Additional, idiosyncratic factors apply. For example, if one network's specific geography, customer mix, or traffic mix dovetails with an important element of the other network's strategy, that fit would lead to a higher perceived value and price than otherwise. In any given case, the arrangement is made on the basis of a perceived equitable exchange of value between the two interconnecting parties, where the value of the arrangement to each of the parties is determined by a number of factors, some obvious (direct cash payment, cost-effective transit, or access to a large user community, for example), others entirely idiosyncratic.

Because so many idiosyncratic factors affect each interconnection decision, it is extremely difficult to analyze the economics of any particular interconnection arrangement using external, objective criteria in order to determine whether or not the market is distorted and the agreement gives either party undue advantage. The argument has been made in the past, for example, that certain bilateral relationships between overseas and US-based networks are unfair on the grounds that the cost of the transatlantic or transpacific link was borne entirely by the overseas network, whereas the origination of traffic was split more evenly between the two. In some cases, European ISPs in one country were connecting to US backbones in order to send traffic back to a neighboring European country, bearing, in effect, the cost of two transatlantic hops.

Two arguments against intervention apply here: one addresses the argument itself and the other examines historical outcomes. At a theoretical level, the implicit assumption that the cost of a link, in a perfectly fair market, should be borne by the two connected parties in proportion to the volume of traffic they originate, and that anything else is perforce distorted, is flawed because of the additional factors other than traffic volume, discussed above, that influence the value of interconnection to either party. On a more pragmatic level, it has been observed that the European networks now connect to each other at multiple exchange points within Europe. It was the rational, financial desire to avoid paying for transatlantic round trips to connect to one's neighbor, and not regulatory intervention, that led to the emergence of this more effective network topology.
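The traffic-proportional rule on which the theoretical argument above rests can be stated concretely. The sketch below is not from the report and uses invented numbers; the surrounding argument is that departures from this rule are not, by themselves, evidence of market distortion.

    # Split a link's cost in proportion to the traffic each party originates.
    # This is the implicit "fairness" rule criticized above; it ignores the
    # other, idiosyncratic sources of value in an interconnection.
    def proportional_shares(link_cost, traffic_by_party):
        total = sum(traffic_by_party.values())
        return {party: link_cost * volume / total
                for party, volume in traffic_by_party.items()}

    # Hypothetical transatlantic link costing 100,000 per month, with traffic
    # origination split 60/40 between a European and a US network.
    print(proportional_shares(100_000, {"EU network": 60, "US network": 40}))
    # -> {'EU network': 60000.0, 'US network': 40000.0}
    # Under the arrangements criticized in the text, the European network
    # instead bore the entire cost of the link.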
It has been suggested that the free-market operation of ISP interconnection would be threatened by consolidation and the emergence of a small number of dominant players, who would be able to form, in effect, a cartel and disadvantage their competitors, resulting in the usual effects of reduced competition: higher prices overall, a slower pace of innovation, fewer choices, and damage to the end consumer. There has, in fact, been considerable consolidation in the ISP market. Is it hurting the market?

In assessing this question, it is important to understand what might be the symptoms, or signatures, of a distorted, uncompetitive, oligopolistic, or monopolistic market. Given the idiosyncratic nature of peering decisions, the fact that the cash economics of a given peering arrangement do not track data transport volumes, geographic footprints, or other obvious criteria does not, by itself, imply market distortion. On the other hand, tracking the number of backbone operators and the entry barriers to the backbone business is likely to provide insight into the dynamics of the market. Recently, it has been claimed that the barriers to entry in the backbone business have been lowered, for example:

"Trends in transport pricing over the past six months have created a disruptive change by lowering the barriers for small and regional networks to develop robust national backbones for application delivery, peering, network performance and business expansion. This presentation will review pricing trends and the opportunities that are being created for small and regional networks. It will draw upon specific examples and case studies of ISPs that have leveraged this trend, as well as a review of specific products and their price points. The presentation will be technical and geared toward an engineering audience."[20]

It is worth following these predictions to determine their accuracy and applicability.
[20] Jay Adelson, founder and CTO, Equinix; session announcement, 2005 ISPCON conference.
4 New Challenges
The Internet approach to interconnection faces several new challenges, which will force it to adapt, as it has adapted to other challenges and changes over the past decades. The first three challenges described below are well within the scope of the Internet approach: the existing policy mechanisms are well equipped to adapt to these changes, as they have to equally disruptive challenges and changes in the past. The fourth, relating to external attempts to bypass the self-organizing aspects of the Internet approach and impose policy, is in many senses orthogonal to the operation of the Internet itself, and represents a significant and potentially distorting force.
With the rise of VoIP and QoS-dependent applications, interconnection arrangements are likely to involve multiple layers of the Internet architecture. This will affect technical standards, operating practices and policies, the economic decisions surrounding a provider's decision to interconnect, the terms of interconnection agreements, and the overall market.
4.2 Balkanization
There is potential for balkanization of the Internet as backbone ISPs try to differentiate themselves competitively by offering services only to their own customers, resulting in a network infrastructure that does not provide a uniform, universal standard of coverage (see OPP Working Paper 32, p. 26). This was anticipated by studies conducted in the late 1990s, which concluded at the time that balkanization was not likely to occur because of other forces.
5 Conclusions
Today's Internet is the way it is because of the way it developed. In every arena (technical standards, operating practices, resource allocation, and others), policy is established by self-organized, inclusive organizations operating with a high degree of transparency and representing a broad constituency.
This approach is nearly inevitable, given the inherently decentralized native architecture of the Internet and the heterogeneous, global market in which the Internet operates. The incentives are well aligned: because of the network effect, continued growth of the Internet is a rising tide that lifts all boats, which creates a strong bias toward policies that facilitate growth and efficiency. If the policy-making organizations didn't respond to that imperative, the participants wouldn't follow, and the policy makers would lose their mandate.

On the other hand, top-down attempts to regulate, whether in the service of "improving" the Internet itself, redressing perceived inequalities in access or pricing, or furthering orthogonal policy objectives (solving the digital divide problem, for example), no matter how well-intentioned or carefully crafted, are contrary to the fundamental, decentralized nature of the Internet, which is an important source of the Internet's vitality, and run the risk of being destabilizing and harmful.

At present, the self-organized, self-regulating aspects of the Internet are thriving. Regulatory policy-makers should remain attuned to the possibility that future developments could lead to a less competitive environment, and should watch for the signatures of a distorted market, but until such problems present themselves they should refrain from action.
Sources and References

Evolution of Internet Infrastructure in the Twenty-First Century: The Role of Private Interconnection Agreements. Rajiv Dewan, Marshall Friemer, and Pavan Gundepudi, University of Rochester, October 1999.

NRIC V Focus Group 4 Final Report, Appendix B: Service Provider Interconnection for Internet Protocol Best Effort Service. FCC Network Reliability and Interoperability Council, September 2000. <http://www.nric.org/pubs/nric5/2B4appendixb.doc>

NRIC VI Focus Group 3 Final Report. FCC Network Reliability and Interoperability Council, November 2003. <http://www.nric.org/pubs/nric5/2B3finalreport.pdf>

A Business Case for Peering in 2004. William B. Norton, September 2004. Presentation to Gigabit Peering Forum IX, New York City, 14 September 2004. <https://ecc.equinix.com/peering/downloads/A Business Case for Peering in 2004.ppt>

Competitive Effects of Internet Peering Policies. Paul Milgrom, Bridger Mitchell, and Padmanabhan Srinagesh, 2000. Reprinted from The Internet Upheaval, Ingo Vogelsang and Benjamin Compaine (eds.), Cambridge: MIT Press (2000): 175-195. <http://www.stanford.edu/~milgrom/publishedarticles/TPRC 1999.internet peering.pdf>

Economic Trends in Internet Exchanges. Bill Woodcock, February 2005. Presentation to the NZNOG 2005 conference, 3 February 2005. <http://www.pch.net/resources/papers/asia-pac-ix-update/asia-pac-ix-update-v11.pdf>

Economics of Peering. Steve Gibbard, April 2005. <http://www.pch.net/documents/papers/Gibbard-peering-economics.pdf>

The Art of Peering: The Peering Playbook. William B. Norton, May 2002. <http://www.xchangepoint.net/info/wp20020625.pdf>

The ISP Survival Guide: Strategies for Running a Competitive ISP. Geoff Huston, Wiley Computer Publishing, 1998.

Internet interconnection and the off-net-cost pricing principle. Jean-Jacques Laffont, Scott Marcus, Patrick Rey, and Jean Tirole, RAND Journal of Economics, Vol. 34, No. 2, Summer 2003, pp. 370-390.

Interconnection, Peering and Financial Settlements in the Internet. Geoff Huston, June 1999. Presentation at the 1999 Internet Society Conference (INET 99). <http://www.isoc.org/inet99/proceedings/1e/1e_1.htm>

Without Public Peer: The Potential Regulatory and Universal Service Consequences of Internet Balkanization. Rob Frieden, 1998. Virginia Journal of Law and Technology, Fall 1998. <http://vjolt.student.virginia.edu/graphics/vol3/vol3_art8.html>

The Evolution of the U.S. Internet Peering Ecosystem. William B. Norton, November 2003. <http://www.equinix.com/pdf/whitepapers/PeeringEcosystem.pdf>

Internet Economics. Lee W. McKnight and Joseph P. Bailey (eds.), MIT Press, 1997.

Netheads vs. Bellheads: Research into Emerging Policy Issues in the Development and Deployment of Internet Protocols. Timothy Denton, May 1999. <http://www.tmdenton.com/pub/bellheads.pdf>

Tao of the IETF: A Novice's Guide to the Internet Engineering Task Force. Susan Harris and Paul Hoffman, October 2004. <http://edu.ietf.org/tao>

The Economic Geography of the Internet's Infrastructure. Edward J. Malecki, October 2002. <http://www.clarku.edu/econgeography/2002_04.html>