New Generation Network Architecture
AKARI Project. Original publication (Japanese): April 2007. English translation: October 2007. Copyright 2007 NICT.
AKARI Project Members: Masaki Hirabaru, Masugi Inoue, Hiroaki Harai, Toshio Morioka, Hideki Otsuki, Kiyohide Nakauchi, Sugang Xu, Ved Kafle, Hiroko Ueda, Masataka Ohta, Fumio Teraoka, Masayuki Murata, Hiroyuki Morikawa, Fumito Kubota, and Tomonori Aoyama

This document presents the conceptual design of a new generation network architecture. It is based on the discussions at 14 meetings and 2 seminars, which were held during an 11-month period beginning in May 2006 and attended primarily by the Network Architecture Group of the New Generation Network Research Center of the National Institute of Information and Communications Technology (NICT).
What the name AKARI indicates: it is the codename for New Generation Network R&D at NICT, meaning "a small light in the dark pointing to the future."
Societal Considerations and Design Requirements of the New Generation Network Era
Network requirements and considerations for the Internet of tomorrow include:
(1) Peta-bps class backbone network, 10 Gbps FTTH, e-Science
(2) 100 billion devices, machine-to-machine (M2M), 1 million broadcasting stations
(3) Principles of competition and user-orientation
(4) Essential services (medical care, transportation, emergency services), 99.99% reliability
(5) Safety, peace of mind (privacy, monetary and credit services, food supply traceability, disaster services)
(6) Affluent society, disabled persons, aged society, long-tail applications
(7) Monitoring of global environment and human society
(8) Integration of communication and broadcasting, Web 2.0
(9) Economic incentives (business-cost models)
(10) Ecology and sustainable society
(11) Human potential, universal communication

To deal with these societal requirements, our goal is to contribute to human development by designing a new generation network architecture based on the following design principles.
(1) Large capacity. Increased speed and capacity are required to satisfy future traffic needs, which are estimated to be approximately 1000 times current requirements in a decade.
(2) Scalability. The devices that are connected to the network will be extremely diverse, ranging from high-performance servers to single-function sensors. Although each small device generates little traffic, their numbers will be enormous, and this will affect the number of addresses and states in the network.
(3) Openness. The network must be open and able to support appropriate principles of competition.
(4) Robustness. High availability is crucial because the network is relied on for important services such as medical care, traffic light control and other vehicle traffic services, and bulletins during emergencies.
(5) Safety. The architecture must be able to authenticate all wired and wireless connections. It must also be designed so that it can maintain safety and robustness according to its conditions during a disaster.
(6) Diversity. The network must be designed and evaluated based on diverse communication requirements without assuming specific applications or usage trends.
(7) Ubiquity. To implement pervasive development worldwide, a recycling-oriented society must be built. A network for comprehensively monitoring the global environment from various viewpoints is indispensable for accomplishing this.
(8) Integration and simplification. The design must be simplified by integrating selected common parts, not by just packing together an assortment of various functions. Simplification increases reliability and facilitates subsequent extensions.
(9) Network model. To enable the information network to continue to be a foundation of society, the network architecture must have a design that includes a business-cost model so that appropriate economic incentives can be offered to service providers and businesses in the communications industry.
(10) Electric power conservation. As network performance increases, its power consumption continues to grow; as things stand now, a router will require the electrical power of a small-scale power plant. The information-networked society of the future must be more Earth friendly.
(11) Extendibility. The network must be sustainable. In other words, it must have enough flexibility to enable the network to be extended as society develops.
generation network architecture must be designed to handle simultaneous or serious failures that may occur.

Controls for a topologically fluctuating network: In a mobile network or P2P network, communication devices are frequently created, eliminated, or moved. It is essential for mobility to be taken into consideration when designing a network. For example, when the topology frequently changes, controls for finding resources on demand are more effective than controls for maintaining routes or addresses. However, since the overhead for on-demand control is high, it is important to enable routing to be implemented according to conditions of topology fluctuation.

Controls based on real-time traffic measurement: Failures become more commonplace as the scale of a network increases. As a result, precision-optimized real-time traffic measurements over the time scale required for control are important, and these must be applied to routing. Also, to pursue more autonomous actions in end hosts, it is important to actually measure or estimate the network status.

Scalable, distributed controls: To sufficiently scale controls even in large-scale or topologically varying networks, it is important to introduce self-organizing controls or pursue autonomous actions at each node.

Openness: Providing openness to users to facilitate the creation of new applications is also important to the network.

(3) Reality Connection principle
Internet problems occur because entities in space on the network are disassociated from real-world society. To smoothly integrate relationships between these entities and society, addressing must be separated into physical and logical address spaces, mappings must be created between them, and authentication or traceability requests based on those mappings must be satisfied.

Separation of physical and logical addressing: We must investigate the extent to which physical and logical addressing should be separated. Various problems have been caused on the Internet by the appearance of new types of host connection scenarios that had not previously existed, such as mobility or multi-homing scenarios, and by handling physical and logical addresses in the same way.

Bi-directional authentication: A network should be designed so that bi-directional authentication is always possible. Also, authentication information must be located under the control of the particular individual or entity.

Traceability: Individuals or entities must be traceable to reduce attacks on the network. Traceability must be a basic principle when designing addressing and routing as well as transport over them. To reduce spam, systems must be traceable from applications to actual society.
obtained by scientific methods is the essence of architecture construction. Specifically, the following procedure is required.
(1) One architecture that can be entirely optimized and can flexibly adopt new functions is constructed.
(2) Then, to refine that architecture, a model is created based on network science, and its system properties are discovered according to mathematical analysis or actual inspections.
(3) Specific methods for achieving further global optimization (such as moderate interactions between layers or moderate interactions between different modules in the same layer) are created and new functions are adopted. This causes the network system to grow.
(4) The entire process in which new properties for that system are discovered from a scientific standpoint and new technologies are adopted is repeatedly executed.

In other words, network development can be promoted through a feedback loop containing repeated scientific and technological processes. Network science provides basic theories and methodologies for network architectures. However, the network system itself must be understood. New discoveries or principles can be obtained and system limitations can be learned by understanding system behavior through basic theories and methodologies. These theories and methodologies can also help clarify what makes good protocols or control mechanisms.

When a network architecture is designed through network science research, whether or not the architecture is truly useful is clarified and implementation is promoted based on the following five criteria.
(1) Has a new design policy been developed?
(2) Has a new communication method been implemented?
(3) Was a new abstraction, model, or tool conceived?
(4) Were results commercialized and accepted by the user community?
(5) Were solutions given for real-world problems?
Summary
The AKARI Conceptual Design is a first step towards implementing a new generation network architecture. As mentioned earlier, this paper introduces societal considerations, future basic technologies, and design principles to be used when designing a new network architecture. It also includes conceptual design examples of several key portions based on the design principles as well as requirements for testbeds that must be built for verifying them. Our approach is to focus our energy on continuing to design a new generation network and to use testbeds to investigate and evaluate the quality of that design. Therefore, the existence of design principles is crucial to achieving a globally optimized, stabilized architecture. Until the final design is completed, even the design principles themselves are not fixed, but can be changed according to feedback through repeated design and evaluation. The network architecture is positioned between the top-down demands of solving societal problems and the bottom-up conditions of future available component
technologies. Its role is to maximize the quality of life for the entire networked society and to provide it with sustainable stability. A new sustainable design must support human development for 50 or 100 years, not just 2 or 3 decades, as it functions as the information infrastructure underlying our society. This new architecture must avoid the same dangers confronting the current Internet.
CONTENTS
Preface
Chapter 1 Goals of the New Generation Network Architecture Design Project AKARI
1.1 AKARI Project Objective
1.2 AKARI Project Targets
1.3 AKARI Project Themes
1.4 Network Architecture Definitions and Roles
1.5 Opportunity for Redesigning Network Architecture from a Clean Slate
1.6 Conceptual Positioning of New Generation Network and Its Approach
1.7 Two Types of NGN: NXGN and NWGN
1.8 Comparison of NXGN and NWGN
Chapter 2 Current Problems and Future Requirements
2.1 Internet Limitation
2.2 Future Frontier
2.3 Traffic Requirements 10 Years Into the Future
2.4 Societal Requirements and Design Requirements
Chapter 3 Future Enabling Technologies
3.1 Optical Transmission
3.2 New Optical Fiber
3.3 Wavelength and Waveband Conversion
3.4 Optical 3R
3.5 Optical Quality Monitoring
3.6 Optical Switch
3.7 Optical Buffer
3.8 Silicon Photonics
3.9 Electric Power Conservation
3.10 Quantum Communication
3.11 Time Synchronization
3.12 Software-Defined Radio
3.13 Cognitive Radio
3.14 Sensor Networks
3.15 Power Conservation for Wireless Communications in the Ubiquitous Computing Era
Chapter 4 Design Principles and Techniques
4.1 Design Principles for a New Generation Network
4.2 Network Architecture Design Based on an Integration of Science and Technology
4.3 Measures for Evaluating Architectures
4.4 Business Models
Chapter 5 Basic Configuration of a New Network Architecture
5.1 Optical Packet Switching and Optical Paths
5.2 Optical Access
5.3 Wireless Access
5.4 PDMA
5.5 Transport Layer Control
5.6 Addressing and Routing
5.7 Layering
5.8 Security
5.9 QoS Routing
5.10 Network Model
5.11 Robustness Control
5.12 Overlay Network
5.13 Layer Degeneracy
Chapter 6 Testbed Requirements
Chapter 7 Related Research
7.1 NewArch
7.2 GENI / FIND
7.3 Euro-NGI / Euro-FGI
Chapter 8 Conclusions
Appendix Definitions of Terms
Preface
Packet switching was invented over 40 years ago. This technology, which gave rise to the Internet, is the information foundation of society today. About a century before the invention of packet switching, the telephone was invented as an improvement over the telegraph, and the telephone network based on circuit switching came to occupy a firmly entrenched position within society. Through the failure of Asynchronous Transfer Mode (ATM), the telephone network became the Next Generation Network (NGN), and an attempt is now being made to absorb it into a network based on packet switching. Through the transition from a simple network for connecting telephones to an information network for connecting computers, the network not only has supported societal aims, but has also become an indispensable part of our world today.

In the ubiquitous computing society of the future, an information network will permeate our society, and its terminals will be processing devices that are neither telephones nor computers. As the complexity and diversity of human society increases in the future and people and information become more closely interconnected, the network itself cannot help but reflect this diversity and complexity. Computers and networks will be ubiquitous, and information networks will be embedded in the real world to benefit society as a whole. The information network that supports the diversification of human life will give birth to a new culture and science. The network will enable real-world society to incorporate virtual space so that the two spaces are integrated seamlessly and people are unaware of passing back and forth between them. The current Internet, which was not designed with this kind of pervasive information network-oriented society in mind, cannot handle this societal transition, leaving it unable to further mankind's future potential. Indeed, we are already experiencing problems associated with the gap between the real world and virtual space. To realize the information network-oriented society envisioned for the next two or three decades, we must have a new generation network that can integrate the real world and virtual space and deal with them seamlessly.

Improvements have often been made to the Internet by the Internet Engineering Task Force (IETF), its standards organization. Because of improvements made over dozens of years, its protocols have become more complex. Also, innovative ideas are not accepted in Internet technologies that have already been established. IPv6 simply broadens the address space, and we cannot expect the IETF to produce a new network architecture. Our vision is that we must create this new generation network before the Internet reaches its limits. The aim of new generation network research is to create a network for people of the next generation, not to create a network based on next generation technologies.

A network architecture, which is a set of design principles for designing a network, is consistent with the general rules of human society. The Internet architecture was developed along with competition based on market principles and globalization, which the Internet supported. Both the rules of society and the Internet are now facing turning points. A sustainable society increasingly demands not only liberalization, but also peace of mind and safety.
To apply technologies that will be available in the future to resolve both social problems that cannot be resolved by modifying the current network as well as problems that are expected to become serious in the future, we must select, integrate, and simplify
techniques and technologies based on a network architecture designed according to new design principles. The network architecture is positioned between the top-down demands of solving societal problems and the bottom-up conditions of future available component technologies. Its role is to maximize the quality of life of the entire network-oriented society and to provide it with sustainable stability.

New generation network research must design the network from a clean slate regardless of current technologies. A new sustainable design must support human development for the following 50 or 100 years. We should design an ideal network that can be realized at a future point in time and then consider the issue of migration from existing conditions later. We must not improve the current technology without looking at future courses of action. This conceptual design is a collection of techniques and technologies that were selected and simplified based on design principles conforming to its concepts. Since the techniques and technologies that are included have not yet been evaluated, they are only suggestions to be included in a new generation network and act simply as guidelines indicating the first step in advancing our research.

This conceptual design is organized as follows. Chapter 1 introduces the aims of the new generation network architecture design project AKARI. To clarify the current problems and future requirements, Chapter 2 describes the design requirements that are called for in this conceptual design. Chapter 3 describes future component technologies that can be used by the new generation network. Chapter 4 discusses design principles and techniques that are used in this conceptual design. Chapter 5 deals with the basic configuration of the new generation network architecture and various related technical areas. Chapter 6 describes the requirements for testbeds to be used as prototypes for verifying the new generation network architecture. Chapter 7 introduces related research, and Chapter 8 presents conclusions.
Chapter 1. Goals of the New Generation Network Architecture Design Project AKARI [Hirabaru, Otsuki, Aoyama, Kubota]

This chapter initially describes the objectives and targets of the AKARI Project. Then, to clarify the aims of the project, it describes the importance of network architecture definitions and roles, the conceptual positioning and approach of the AKARI Project, and the differences between a next generation and new generation network.
[Figure: the network architecture is positioned between future requirements from diverse users and society (global optimization and sustainable stability for the information infrastructure of society) and the many available component technologies and sub-architectures, which are selected, integrated, and simplified according to design principles based on basic principles that are common overall, not local improvements of efficiency or progress in specific component technologies. The aim is an overarching vision of what the future network should be for more than a decade hence, utilizing established design capabilities based on practical experience.]

Fig. 1.5.2. Initiatives for Recreating a Network Architecture from a Clean Slate. [Timeline figure, 2000-2009, showing the accumulation of Internet add-ons such as NAT, GMPLS, MPLS, IPSec, mobility, multicast, anycast, local and hierarchical addressing, complicated routing, and flow labels, alongside clean-slate programs: FIND (NSF), Euro-NGI (EU), Autonomic Communication (EU), UNS Strategic Programs (JP), and NICT's non-IP new (future) generation network architecture. Caption within the figure: "The time for redesigning the Internet from a clean slate is approaching!"]
generation network will not be able to be implemented immediately, but will act as a reference for future research and development and point to a course of action for research and development in this field. There are concerns that if research and development is performed based on current technologies, the direction taken by the development process for the network-oriented society will reflect corporate interests or be reduced to local optimizations. In addition, a large gap may occur between research and development based on current technologies and the next generation technologies when the limits of the current Internet are reached. However, we believe that milestones for current network research and development projects can be determined and steps towards the future can be taken with an ideal solution in mind. Many current network research and development projects end up adhering to piecemeal improvements of Internet technologies or the spread of the Internet. There is a strong tendency to carry out development with the current Internet in mind, which inhibits movement towards new innovation. It is our philosophy that network research and development that is linked to future innovations is only possible by starting from a clean slate with no concept of the current Internet in mind.
[Figure: two development paths from the past and present network on a timeline from 2005 through 2010 to 2015: 1) a new paradigm, the new generation network, and 2) modification, a revised NXGN.]

[Figure: conceptual structure of the new generation network: a universal-access user interface serving users A, B, and C; customizable applications; a flexible overlay network; and a scale-free, secure underlying network.]
QoS: IP network limits could be reached by the use of IP for QoS tasks. In particular, it is difficult to guarantee QoS. Although applications are preferentially controlled for each class, bandwidth is clearly difficult to guarantee.

Scalability and Capacity: Since all services undergo session management, scalability is a concern. There are also uncertainties concerning the transaction management required for authentication of terminals and individuals to ensure security. The scalability of management information searches in the location information databases used for mobility is also uncertain. These kinds of uncertainties are worrisome because control is centralized even though these services take a distributed form using IP. Since existing terminals and applications are integrated, a tera-bps to peta-bps class network is probably required in terms of capacity. However, we will be unable to create greater capacity if these kinds of scalability uncertainties are not resolved. This is a major concern for future implementation technologies.

Electric Power: Since the infrastructure is based on IP routers, router performance is directly related to QoS and network performance. If we consider peta-bps class processing based on high-end IP routers, several hundred routers consuming kilowatts of power per node are required, resulting in megawatt-class power requirements.

Flexibility, Robustness, and Sustainability: The possibility of future growth may be inhibited by the ANI implementation and non-technical limitations. On the other hand, robustness can be ensured since security will be under the strict control of business. Also, support for emergency calls is obligatory, and these calls will be processed with high priority. Since the replacement of existing services by services that will guarantee a certain degree of flexibility is a worthy goal, sustainability (exceeding 50 or 100 years) is not a primary goal.

[Figure: NGN structure: application providers connect via the ANI to the service stratum, and users connect to the IP-based transport network.]
Table 1.8. Differences between the Next Generation Network (NXGN) and the New Generation Network (NWGN).

Approach: NXGN adds QoS and authentication to existing IP; NWGN creates a new network without being committed to IP.
Capacity: NXGN uses O-E-O conversion, with less than peta-bps capacity; NWGN is all-optical, with greater than peta-bps capacity.
Assumed terminals and applications: NXGN integrates and creates advanced versions of existing terminals and applications, such as triple- or quadruple-play services; NWGN terminals are unknown but highly diverse, ranging from devices acting in conjunction with massive information servers to tiny communication devices such as sensors.
Power consumption: NXGN consumes several megawatts (transformer substation scale); NWGN conserves power by a factor of at least 1/100 through multi-wavelength optical switching.
Security: NXGN shows successive violations of principles, such as firewalls, IPSec, and IP traceback; NWGN controls spam or DoS attacks by address tracing and end-to-end and inter-network security.
Robustness: in NXGN, robustness is supported by businesses enhancing the management function; in NWGN, robustness is provided by the network itself.
Routing control: NXGN uses distributed centralized control following IP, with MPLS required for high-speed rerouting and long fault detection times; NWGN introduces complete distributed control, increased failure-resistance and adaptability, and inclusion of sensor nets or ad-hoc nets.
Relationship between users and the network: in NXGN there are some constraints on openness stipulated by UNI, ANI, and NNI, but reliability is increased; NWGN provides openness from a neutral standpoint, and users can bring new services.
Quality assurance: NXGN offers priority control for each class by using IP; NWGN offers quality assurance that includes bandwidth for each flow, using packet switching or paths appropriately.
Layer configuration: NWGN uses layer degeneracy and cross-layer control centered around a thin common layer.
Integration model: NWGN makes vertical or horizontal integration possible.
Basic principles: NXGN principles are set from a business standpoint while using IP; NWGN principles are set from a clean slate to match future requirements.
Sustainable evolution: NXGN has limitations due to IP; NWGN has a sustainable evolution capability that can adapt to a changing society.
Access: over 10 Gbps for each user (NWGN).
Wired-wireless convergence: context aware (NWGN).
Mobile: ID/locator separation (NWGN).
Number of terminals: over 100 billion (NWGN).
Currently, only PIM-SM, for which the number of advertisements does not increase even if the domain gets larger, is used. However, it is limited to use within individual domains through static configuration so that the number of advertisements does not dramatically increase even if the number of groups increases. Although the resource reservation protocol RSVP was designed to support all of the multicast protocols that have now failed, it contains the same problems as those protocols. Another limitation of multicast routing has been the introduction of IGMP. IGMP was introduced so that end terminals could be supported by IGMP alone, without regard to the multicast routing method. However, this not only made the multicast routing method (which only a router understands) unnecessarily complicated, but also moved functions that the terminal should have into the network, which is an overt violation of the end-to-end principle. In fact, IGMP has been functionally extended twice because of the introduction of new multicast routing methods, although it was supposed to be unrelated to individual multicast routing methods. IGMP has clearly failed.
exceeding 200,000 entries. Since multihoming is currently performed using routing, a multihomed site requires its own independent entries in BGP global routing tables. Most of the advertised global routing entries are used for multihoming. As long as multihoming depends on routing, the number of global routing table entries is expected to continue to increase quickly in the future, as the sketch below illustrates. Attempting to perform inter-domain routing using BGP leads to another limitation. For example, a method in which the MPLS path assignment depends on a BGP advertisement causes a significant increase in BGP advertisement information.
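As a hedged illustration of this growth, the sketch below contrasts aggregatable single-homed customer prefixes with provider-independent multihomed prefixes; all addresses and counts are invented for the example, not taken from the document.

    # Python sketch: single-homed customer prefixes aggregate into their
    # provider's supernet, but every multihomed site needs its own
    # unaggregatable entry in each default-free BGP routing table.
    import ipaddress

    provider = ipaddress.ip_network("203.0.113.0/24")
    customers = [ipaddress.ip_network(f"203.0.113.{16 * i}/28") for i in range(4)]
    assert all(c.subnet_of(provider) for c in customers)  # one entry covers all

    # 100 multihomed sites, each announced independently through two providers:
    multihomed = [ipaddress.ip_network(f"198.51.{i}.0/24") for i in range(100)]

    global_table = {provider} | set(multihomed)  # nothing here aggregates
    print(len(global_table))  # 101: the table grows one entry per multihomed site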
functions, IPSec contained inconsistencies from the beginning. Security cannot be standardized by concentrating on a specific layer, but must be implemented in an appropriate layer according to application requirements. IPSec also contains public key encryption limitations. Generally, to implement security, it must be theoretically impossible for secret information that must be shared between specific parties to be obtained by an unknown third party. However, in attempting to resolve this problem with public key encryption, an additional third party called a certification authority (CA) was introduced without taking the reliability of the CA into consideration (although the CA can be trusted to the same degree as the ISP, if the ISP can be trusted from the start, then IPSec would be unnecessary). This is inconsistent. The IPSec protocol that was actually defined as a compromise is unsuitable for most applications and is of little use.
denominated in terms of seconds, which ignores the special characteristics of the data link. The upper and lower limits of the ND timeout value, which were determined without any particular justification, make high-speed handover impossible. This is the latest recognized limitation of IPv6. Although the specifications were changed for only this part, this change was merely an improvement of unimportant details. For example, in a wireless LAN, since multicasting and broadcasting do not resend packets when a collision occurs, they are not as reliable as Ethernet broadcasting or wireless-LAN unicasting. Although congestion causes processing performance to drop significantly, this problem has not been solved. With ND, an attempt was made to have unicast routing (not just multicast routing) distinguish between simple terminals and routers so that a simple terminal would not need to understand the routing protocol. However, reducing terminal functions and relying on routers is a violation of the end-to-end principle.

IPv6 differs from IPv4 in that the minimum Maximum Transmission Unit (MTU) has been significantly increased. In many upper layer technologies, it is sufficient if the standard MTU can be used. The value that is required in the upper layers is the Path MTU (PMTU), which is the minimum MTU value on a path spanning multiple hops (see the sketch at the end of this passage). PMTU discovery is an IPv6 option. However, the PMTU varies as the route varies, and monitoring is required at a suitable interval. PMTU discovery was first implemented in the network layer, and then timeouts and the concept of time were also introduced. Currently, PMTU discovery cannot actually be used.

The increase in the number of global routing table entries for interdomain routing had been recognized at the initial stage of IPv6 development as a problem no less important than the pressure on the address space, and address structures and address assignment methods that would suppress the number of global routing table entries were proposed. However, since the multihoming problem has not been solved, multihoming requests from ISPs cannot currently be resisted, and the unlimited increase in the number of global routing table entries is not likely to stop. Although there have also been experiments attempting to make IPv6 deal with the multihoming problem, since many of them try to introduce even more timeouts in the network layer in a similar manner as NAT, this only worsens the current situation. IPSec has also been integrated as a standard feature in IPv6. However, no attempt has been made to resolve the key sharing problem, and security is not particularly increased by IPSec.
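A minimal sketch of the PMTU relationship referred to above, assuming illustrative per-hop MTU values (none of these numbers come from the document): the PMTU is simply the minimum link MTU along the current route, so it must be re-probed whenever the route changes.

    # Python sketch: the path MTU is the minimum link MTU along the route.
    def path_mtu(link_mtus):
        """Return the PMTU (bytes) given the MTU of each hop on the path."""
        return min(link_mtus)

    route_before = [9000, 4352, 1500]  # e.g. jumbo Ethernet, FDDI, Ethernet
    route_after = [9000, 4352, 1280]   # rerouted over an IPv6-minimum-MTU link

    print(path_mtu(route_before))  # 1500
    print(path_mtu(route_after))   # 1280: the sender must rediscover the PMTU
                                   # after a route change, hence the timeouts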
[Figure: two kinds of long tail, a long tail for business and a long tail for R&D, plotted as number of users versus data speed.]
Short-term development performed in corporations cannot help but emphasize technologies that are adaptable to the area of the graph to the left in which there are many users. However, if we look at the history of ICT research and development since the dawn of the Internet, it is apparent that innovative technologies were created from research targeting the long tail part of the graph where there was an extremely small number of users and that those technologies gradually expanded into the areas to the left until they finally spread to the part with an enormous number of ordinary users. The Internet, World Wide Web, search techniques, and other technologies all started from research intended for the long tail part, which targeted an extremely small number of researchers. Of course, it takes a long time for these technologies to spread from the extremely small number of special users to the enormous number of general users. However, the corporation that accomplishes this before any others will dominate as the
current ICT champion. Even when designing the new generation network architecture, it is important to emphasize variety and the ease of introducing new services from the viewpoint described above.
[Figure: content size (kB through MB, GB, TB, to PB) versus access frequency [page/day] for various services: Web pages (~10 kB/page), MP3 music (MB), DVDs (GB), digital cinema (>100 GB), Cine-grid, IPTV and Internet TV (e.g., 11 Mpage/day; Yahoo 300 Mpage/day), HDTV/SDTV, e-commerce, P2P, B2B/B2C web content, and sensor & RFID traffic, with both directions of traffic indicated.]
Sensor networks for environmental measurement can be considered to help preserve the Earth's environment by monitoring its deterioration. For example, assume that the urban areas throughout the world are covered by sensor networks. The total area of the land surface of the Earth is 149 million sq. km., and 10% of that or 15 million sq. km. comprise urban areas. If 10 sensor nodes were deployed per sq. km., there would be 150 million nodes. When considering the increase in the number of nodes, besides the sensor networks described above, we must also take into consideration the increase in existing nodes for mobile devices, home networks, and appliances.
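The node estimate above can be verified with simple arithmetic; the following sketch uses exactly the numbers quoted in the text.

    # Python sketch: back-of-the-envelope check of the sensor-node estimate.
    land_area_km2 = 149_000_000   # total land surface of the Earth (sq. km)
    urban_fraction = 0.10         # 10% assumed to be urban areas
    nodes_per_km2 = 10            # assumed sensor deployment density

    urban_area_km2 = land_area_km2 * urban_fraction  # ~15 million sq. km
    total_nodes = urban_area_km2 * nodes_per_km2
    print(f"{total_nodes:,.0f} nodes")               # ~150 million nodes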
(7) Rich user experiences Various examples of typical services are Google Maps, Flickr, Hatena Bookmark (a social bookmarking service), AJAX, Gmail, Amazon, SNS, and Wikipedia. Although there are various ideas of what Web 2.0 represents, we define Web 2.0 here as something that implements: (1) Frameworks for collecting users' content (2) Frameworks for collecting users' personal information
that enables the service to be provided only to a specific group that depends on metadata such as the terminal owner or physical location. Private networks must be able to be freely constructed based on such information as the terminal owner, terminal location, or billing information.
entire floor with a sensor network, the environment surrounding inhabitants can be optimized, and the energy consumption of the entire floor can be reduced. To implement these kinds of context-aware services, basic technologies such as a context acquisition mechanism, context representation model, distributed context database mechanism, context filtering mechanism (privacy, security, policy), and context estimation mechanism must be developed. In particular, a context estimation technology that estimates high-level context information based on physical information obtained from sensors is extremely important for developing real world-oriented applications. However, although the word "context" is used without qualification, there certainly will exist various levels of context granularity required by applications. Consider position information as an example. Even if there are applications that require coordinate information, there will also be applications that require more abstract information such as "movie theater," for example. Context information platforms that can appropriately provide various types of granularity required by applications cannot be developed in a short time. Development must proceed while gaining experience in constructing and operating prototype systems.
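As a hedged illustration of these granularity levels, the sketch below serves the same position context either as raw coordinates or as an abstract place label; the landmark data and the function are hypothetical, not part of any platform proposed in this document.

    # Python sketch: one position context at two levels of granularity.
    PLACES = {
        "movie theater": (35.6595, 139.7005),
        "station": (35.6812, 139.7671),
    }

    def position_context(lat, lon, granularity):
        """Return raw coordinates or a coarse nearest-landmark label."""
        if granularity == "coordinates":
            return (lat, lon)
        # "place": abstract the position to the nearest known landmark
        return min(PLACES, key=lambda p: (lat - PLACES[p][0]) ** 2
                                         + (lon - PLACES[p][1]) ** 2)

    print(position_context(35.6596, 139.7004, "coordinates"))
    print(position_context(35.6596, 139.7004, "place"))  # "movie theater"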
http://www.jpix.ad.jp/jp/techncal/traffic.html
Fig. 2.3. Traffic Forecast for 10 Years into the Future 21
Design requirements: large capacity, scalability, openness, robustness, safety, diversity, ubiquity, integration and simplification, network model, electric power conservation, and extendibility.
architecture. A balance between network providers and network users is important, and a high degree of control by users as well as user-oriented diversity is also required. Therefore, the network must be open and must be able to support appropriate principles of competition. Standardization of interfaces or the technologies used by them is important. The World Wide Web was invented because networks were open, and networks should have a degree of openness that brings out users' creative originality and enables networks to fully prosper. Mechanisms that enable users to provide services and control networks are required. In this case, there will be no distinction between users and service providers. Functions should be provided to enable users to easily bring services to the network.
be replaced once it is embedded in society, the network architecture must be able to be developed in a sustainable manner for 50 or 100 years.
Fig. 3.1.1. Multi-level Modulation/Demodulation Schemes

On the other hand, 100 Gbit/s transmission experiments using OTDM were reported in 1993, and 1.28 Tbit/s per wavelength (640 Gbit/s x 2 polarization division multiplexing (PDM)) experiments were reported in 2000. Although the pulse width, on the order of sub-picoseconds, is easily affected by the dispersion of the transmission optical fibers, OTDM has the potential to be used on ultra-fast links exceeding several hundred Gbit/s over short and medium distances in the future.
Fig. 3.2. Cross Section of (a) Photonic Crystal Fiber, (b) Photonic Bandgap Fiber
Fig. 3.3.1. Waveband Conversion at a Waveband Node

Research has been conducted on a quasi-phase-matched lithium niobate (QPM-LN) waveguide as a material to be used for parametric wavelength conversion (Fig. 3.3.2) [3-8]. Since actual experimental results showing conversion gain with little degradation have also been reported recently, further progress is expected.
3.4. Optical 3R
To implement a wideband all-optical core network, the current OEO-type transponder must be replaced by an optical 3R element. For RZ signals, this can be implemented by generating an optical clock through clock extraction and switching the optical clock with the signal pulses. However, for other modulation formats, some kind of format conversion is required. Operation exceeding 100 Gbit/s has been proven by using a semiconductor or nonlinear fiber as the switch. However, since processing is required for each individual wavelength, integration with an optical add/drop multiplexer such as an AWG will probably be required. On the other hand, if 2R operations are performed, nonlinear optical effects can also be used for simultaneous wavelength operations [3-9], and a choice between 2R and 3R regeneration must also be made according to the scale of the networks.
will increase, and if the required number of ports is on the order of 100, MEMS switches with little wavelength dependency may become advantageous. In addition, research is actively being conducted on optical matrix switches using PLZT or SOA to obtain fast switching speeds (several nanoseconds). On the other hand, since an all-optical switch opens and closes gates using ultrashort optical pulses, it can operate in the 100 fs (0.1 ps) range. In particular, performance of several hundred Gbit/s has been empirically verified by using the optical Kerr effect due to pure electronic polarization or the four-wave mixing effect [3-12].
conversion, which tends to be costly (in power consumption, part count, and money), is unnecessary. A representative application of an optical fiber delay line buffer is an optical packet switch buffer. The following figure shows a typical configuration for an optical fiber delay line buffer. The figure on the left below is an example of a 4-input, 1-output optical buffer consisting of an optical switch, optical fiber delay lines with different lengths, and an optical coupler. B different delay values (0, T, 2T, ..., (B-1)T) are provided. By controlling the optical switch appropriately, a delay is obtained by directing information into the optical fiber with the appropriate length. Even if information arrives from different input lines simultaneously, a collision can be avoided by controlling the switch appropriately. Since there is no optical logic circuit, using electronic processing for control is more realistic. To create a larger buffer without a large-scale optical switch, multiple 1xN or NxN optical switches can be combined as shown in the figure on the right below. To make the system more compact in this case, it is necessary to create fiber wiring sheets or ribbons and an array of switches.
[Figure: optical fiber delay-line buffer configurations. Left: a 4-input, 1-output buffer in which an Nx(B+1) optical switch, driven by a control signal from a buffer manager, directs packets into delay lines of length 0, T, 2T, 3T, 4T, ..., (B-1)T or to a discard port. Right: a larger buffer composed of 1x8 and 8x8 optical switch stages handling inputs (0) through (7), also with discard ports.]
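To make the buffering discipline just described concrete, here is a small simulation sketch of delay-line assignment, assuming slotted arrivals and delays quantized to multiples of the packet duration T. The policy (shortest collision-free delay line, otherwise discard) is one plausible electronic control strategy, not a specification from the document.

    # Python sketch: assign arriving optical packets to fiber delay lines.
    def schedule(arrivals, num_delay_lines):
        """arrivals: list of (arrival_slot, packet_id), in time order.
        A packet assigned delay k leaves in slot arrival_slot + k,
        i.e. it is routed into the delay line of length k * T."""
        busy_output_slots = set()
        assignments, drops = [], []
        for slot, pkt in arrivals:
            for k in range(num_delay_lines):  # try delays 0, T, 2T, ...
                if slot + k not in busy_output_slots:
                    busy_output_slots.add(slot + k)
                    assignments.append((pkt, k))
                    break
            else:
                drops.append(pkt)  # every exit slot collides: discard port
        return assignments, drops

    # Three packets arrive in the same slot with B = 2 delay lines: two are
    # delayed by 0 and T; the third has no collision-free line and is dropped.
    print(schedule([(0, "a"), (0, "b"), (0, "c")], num_delay_lines=2))
    # ([('a', 0), ('b', 1)], ['c'])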
time or frequency standards on the network or by independently having a large number of ultra-small atomic clocks such as chip-scale atomic clocks (CSAC).
Increased Connectivity
Since radio terminals and radio base stations on which software-defined radio technology has been installed are no longer tied to a single radio communication method, the number of radio systems to which a radio terminal can connect will increase from the radio terminal's viewpoint. Applications can be expected for use at times of emergency, such as temporarily switching to a separate radio system when communication with a specific radio base station or a radio area of a certain range is impossible. In addition, the radio terminal will also be able to access multiple radio communication methods at normal times.
Increased Efficiency
Increased overall communication efficiency will be achieved by enabling radio terminals, which had been separated for each radio system in the past, to be multi-accessible without having to select a radio system as described above.
Multi-Connection
Since software-defined radio technology enables multiple radio communications to be executed simultaneously without having to physically equip radio terminals with multiple radio communication interfaces, so-called multi-connections will be able to be widely used. By generalizing multi-connections, which had previously been a relatively special condition, more communication models or applications that assume the use of multi-connections may be created.
there are many examples in which this kind of architecture has been assumed. The terminal or base station searches the radio environment and considers the search result together with information in the database on the network to determine the communication method that should be used or the time or place to use it. The following considerations will have an impact on the cognitive radio network architecture in addition to those listed above for software-defined radio.
or with a connected network, means have been considered in which IP is not used. If the hardware for future sensor nodes progresses sufficiently, means of using IP will also be considered. (However, there will still be the communication efficiency problems described in the next paragraph.)

Communication efficiency: Each sensor in a sensor network is generally thought to generate a small amount of data. Therefore, if IP is used for communication, the relative weight of the header will be too large, and communication efficiency will drop. As a result, use of a more efficient independent communication protocol rather than IP is often considered in sensor networks.

Connection type: The types of connections for connecting sensor nodes and a network can be broadly divided into types in which sensor nodes are connected to an IP network and types in which a sensor network is constructed as a non-IP network (that is, a network based on a different protocol than IP and connected to an IP network via a gateway). The first type is suitable, for example, when access from general users is widely permitted or when information is widely sent to general users such as with current web cams. However, this type also has the problem that security is difficult to ensure when communicating only with the owner or a limited number of users. The second type is suitable, for example, when constructing a sensor network in a limited area for specific objectives of the owner or managers in charge, such as security cameras in a shopping district. However, this model does not have a very high degree of freedom since it assumes that the information that is obtained will undergo some kind of data processing before being passed to the general public via the gateway if that information is to be made publicly available to general networks or general users. In other words, there is considerable interest in a sensor network that can ensure a desired level of security and enable information to be freely obtained and processed by general users.
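The communication-efficiency point above can be made concrete with a rough calculation; the header sizes below are common protocol header sizes, and the payload size is an assumed reading size, not a figure from the document.

    # Python sketch: fraction of each packet that is useful sensor data.
    payload_bytes = 4  # e.g. one small sensor reading
    headers = {"IPv6 + UDP": 48, "IPv4 + UDP": 28, "compact non-IP": 4}

    for name, header_bytes in headers.items():
        efficiency = payload_bytes / (payload_bytes + header_bytes)
        print(f"{name}: {efficiency:.0%}")
    # IPv6 + UDP: 8%, IPv4 + UDP: 12%, compact non-IP: 50%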
3.15. Power Conservation for Wireless Communications in the Ubiquitous Computing Era
In a ubiquitous computing environment, it is necessary to always know about the diverse devices or services that are constantly around you. To accomplish this, not only must functions for quickly discovering devices or services be provided, but the power consumption of mobile terminals must also be kept low. The transmission speed, however, only matters to the extent that device or service descriptions are exchanged. Also, a requirement of a wireless sensor network is that it must operate in an environment with a limited power supply and limited CPU and other resources. Based on the fact that the volume of sensor data is quite small, a low-speed wireless communication technology that can reduce power consumption as much as possible is required. In addition, if the power consumption required for wireless communication is extremely small in a tag for which diverse applications such as production management, inventory management, or delivery status management are expected, an active tag with a built-in battery becomes a real possibility. Advanced tag applications can be implemented by using active tags with built-in batteries.
Currently, a device with a wireless technology such as Bluetooth or ZigBee is functionally complex because diverse applications are assumed, and its power consumption cannot help but be large. These devices also take time to discover devices or services in their immediate vicinity. Since high speed is a selling point of ultra-wideband (UWB) technology, its power consumption is inherently high. Previous wireless communication research attempted to implement high speed and high mobility properties. Of course, although investigations concerning power consumption were also performed because of its relationship to standby time, the flow of research and development for realizing third generation mobile phones pointed towards the upper right in Fig. 3.15. In the future pervasive computing era, on the other hand, a third axis for "Power" will project outwards towards the foreground. Development of wireless communication technologies for implementing locations near the origin in Fig. 3.15, that is, locations corresponding to low power consumption, low speed, and low mobility, will be required from the viewpoint of device or service discovery, sensor networks, and active tags.

To move towards a reduction in power consumption, entire systems must be considered based on computer architectures and device technologies, not just wireless communication technologies. Since the power consumption required for sending/receiving 1 bit a distance of 10 to 100 meters is almost equivalent to the power consumption required for computing several thousand to several million instructions, wireless communication technology must fulfill a large role in reducing power consumption. The keys to reducing power consumption include a reduction in communication overhead, a concise MAC protocol, and a highly efficient sleep mode, as well as a reduction in the transmission rate.
[Fig. 3.15. Wireless communication technologies plotted along three axes: mobility, data rate, and power.]
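To make the bit-versus-instructions rule of thumb above concrete, here is a toy energy budget; the absolute energy values are assumptions chosen only to fall within the quoted thousands-to-millions range.

    # Python sketch: why "compute more, transmit less" saves energy.
    e_tx_bit_nj = 200.0  # assumed radio energy per transmitted bit (nJ)
    e_instr_nj = 0.002   # assumed CPU energy per instruction (nJ)

    print(f"1 bit ~ {e_tx_bit_nj / e_instr_nj:,.0f} instructions")  # 100,000

    # Spend 5,000 instructions compressing 100 bits down to 60 bits:
    cost_raw = 100 * e_tx_bit_nj
    cost_compressed = 5_000 * e_instr_nj + 60 * e_tx_bit_nj
    print(f"{cost_raw:.0f} nJ raw vs {cost_compressed:.0f} nJ compressed")
    # 20000 nJ raw vs 12010 nJ compressed: local computation wins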
References
[3-1] A. Sano et al., 14-Tb/s (140 x 111-Gb/s PDM/WDM) CSRZ-DQPSK Transmission over 160 km using 7-THz Bandwidth Extended L-band EDFAs, ECOC 2006 Th4.1.1 (2006).
[3-2] A. H. Gnauck et al., 12.3-Tb/s C-Band DQPSK Transmission at 3.2 b/s/Hz Spectral Efficiency, ECOC 2006 ETh4.1.2 (2006).
[3-3] A. H. Gnauck et al., 25.6-Tb/s C+L-Band Transmission of Polarization-Multiplexed RZ-DQPSK Signals, OFC 2007 PDP19 (2007).
[3-4] H. Masuda et al., 20.4-Tb/s (204 x 111 Gb/s) Transmission over 240 km Using Bandwidth-Maximized Hybrid Raman/EDFAs, OFC 2007 PDP20 (2007).
[3-5] Nakazawa et al., 1.28 Tbit/s-70 km OTDM transmission using third- and fourth-order simultaneous dispersion compensation with a phase modulator, Electron. Lett., Vol. 36, pp. 2027-2029 (2000); H. G. Weber et al., Single channel 1.28 Tbit/s and 2.56 Tbit/s DQPSK transmission, Electron. Lett., Vol. 42, pp. 178-179 (2006).
[3-6] H. Takara et al., Field demonstration of over 1000-channel DWDM transmission with supercontinuum multi-carrier source, Electron. Lett., Vol. 41, p. 270 (2005).
[3-7] Y. Miyagawa et al., Over-10000-channel 2.5 GHz-spaced ultra-dense WDM light source, Electron. Lett., Vol. 42, p. 655 (2006).
[3-8] Yamazaki et al., Waveband Path Virtual Concatenation With Contention Resolution Provided by Transparent Waveband Conversion Using QPM-LN Waveguides, IEICE Technical Committee on Optical Communication Systems (OCS), May 25, 2006.
[3-9] T. Ohara et al., 160-Gb/s all-optical limiter based on spectrally filtered optical solitons, IEEE Photonics Technology Letters, Vol. 16, pp. 2311-2313 (2004).
[3-10] I. Shake et al., Averaged Q-factor method using amplitude histogram evaluation for transparent monitoring of optical signal-to-noise ratio degradation in optical transmission system, IEEE J. Lightwave Technology, Vol. 20, pp. 1367-1373 (2002).
[3-11] I. Shake et al., Simple measurement of eye diagram and BER using high-speed asynchronous sampling, IEEE J. Lightwave Technology, Vol. 22, pp. 1296-1302 (2004).
[3-12] T. Morioka et al., Error-free 500 Gbit/s all-optical demultiplexing using low-noise, low-jitter supercontinuum short pulses, Electron. Lett., Vol. 32, pp. 833-834 (1996).
[3-13] World's First Implementation of an Ultra-Fast All-Optical Switch Using Silicon, NICT press release, http://www2.nict.go.jp/pub/whatsnew/press/h17/0511091/051109-1.html.
[3-14] Aoki. Energy Consumption Trend of IT Facilities and Energy Reduction Achieved by IT Services. Journal of the IEICE, Vol. 90, No. 3, pp. 170-175, March 2007.
[3-15] Sasaki. Quantum Information Technology and Energy Consumption. Journal of the IEICE, Vol. 90, No. 3, pp. 220-225, March 2007.
[3-16] M. Sasaki, Overview of Quantum Information Communication and the NICT Initiative, Journal of the National Institute of Information and Communications Technology, Vol. 52, No. 3, pp. 47-53 (2006).
[3-17] Seth M. Foreman et al., Review of Scientific Instruments, Vol. 78, 021101 (2007).
supporting specific user requests such as NAT or proxies are placed in the network, network growth or new application uses are obstructed. Keeping the network layer simple is also extremely important from the standpoint of ensuring reliability or extensibility. Making a system simple is the first step in ensuring its reliability. Also, providing extensibility enables functions to be easily added.

However, keeping a network simple can create its own problems, as the following example explains. MPLS, which locates a circuit switching technology immediately below the network layer and above the link layer, appeared as a technology for increasing the speed of the Internet. However, when MPLS introduces traffic engineering, it duplicates many of the roles of the network layer or link layer, such as QoS routing or measures for dealing with link failures. In other words, if the network layer is kept simple, there arises the temptation to introduce new technologies and to try to use the functions that are obtained by those technologies to their fullest extent. Because the network is simple, there is a tendency to optimize or functionally maximize one technology without considering its consistency with other technologies or other layer functions. Therefore, the simplicity of the network carries with it the risk of bringing about the breakdown of the network architecture. This point suggests the importance of an architecture design that is considered not only at the design stage, but also through the subsequent development stage.

When designing a new generation network, we must maintain the principles mentioned above and also aim for a network architecture that can support the ever-increasing diversity and extendibility of the network. Although this chapter does not give final solutions, it presents important principles or rules as design principles for a new generation network architecture.
End-to-End
This is a basic principle that states that a network should not be constructed based on a specific application or with the support of a specific application as its objective. Although Internet architecture development benefited from the application of this principle, as time passed, the end-to-end principle was gradually lost. History suggests
that a new principle is required in addition to this principle so that the same mistake is not repeated.
Crystal Synthesis
When selecting from among many technologies and integrating them in order to enable diverse uses, simplification is the most important principle. However, as the history of Internet development shows, network complexity increases with time since network uses become more diverse and new inconsistent functions are added. To counter this and maintain the KISS principle, the design must incorporate "crystal synthesis," a kind of simplification of technologies to reduce complexity even when integrating functions.
Common Layer
In a network model with a layer structure, each layer's independence is maintained. Each layer is designed independently and its functions are extended independently. An example is IP, which is in charge of the network layer, and Ethernet, which is in charge of the data link layer. The functions of each protocol exist independently, and redundancy occurs in the functions because of extensions. If we assume that the network layer exists as a common layer, other layers need not have the functions that are implemented in that common layer. One of the reasons for the success of the Internet is that the IP layer is a common layer. Therefore, we concluded that the design of the new generation network architecture will have a common layer and will eliminate redundant functions in other layers to degenerate functions in multiple layers.
Self-* properties
To construct a sustainable network that can be continuously developed, that network must be adaptive. To accomplish this, it is important for all entities within the network to operate in an adaptive, self-distributed, and self-organizing manner. For example, although current IP routing control is often described as distributed-oriented, this is not really the case. Current IP routing control is not completely distributed control. It is more accurate to describe it as distributed centralized or distributed cooperative. For example,
although OSPF, which is one type of IP routing control, is distributed in the sense that all routers (entities) perform packet forwarding based on independent decisions, all nodes collect the same information and perform the same operations (distributed centralized), and other nodes are also expected to behave in the same manner (distributed cooperative). The fact that IP routing control is not completely distributed control is linked to the weakness of network fault tolerance. In the future, the network must be designed so that the distributed orientation is further advanced, individual entities operate in a self-distributed manner, and the intended controls are implemented overall. In other words, a self-organizing network must be designed. Also, although the hierarchical structure of the network will continue to be an important concept in the future from the perspectives of function division and function sharing, the hierarchical structure, which is a vertically aligned entity, must become a more flexible structure. That is, the network must be designed to have a control structure that adapts to upper and lower layer states without completely dividing the hierarchy as is traditionally done. In other words, a self-emergent network must be designed. Although the aim in a conventional distributed system often had been to improve the overall performance by such means as load balancing, the distributed processing mentioned here will very likely lower resource usage efficiency instead. Therefore, the inefficient resource usage that is caused by the distributed processing orientation must be compensated for by end node adaptability.
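As a hedged sketch of the contrast drawn above, the following ant-colony-style forwarding rule uses only local per-neighbor state reinforced by feedback, unlike OSPF, where every router holds a replica of the same global link-state database. The mechanism is illustrative, not one proposed by this document.

    # Python sketch: self-organizing next-hop selection from local state only.
    import random

    class Node:
        def __init__(self, neighbors):
            # Per-neighbor "pheromone" weights: strictly local knowledge.
            self.pheromone = {n: 1.0 for n in neighbors}

        def next_hop(self):
            """Pick a neighbor with probability proportional to its weight."""
            neighbors = list(self.pheromone)
            weights = [self.pheromone[n] for n in neighbors]
            return random.choices(neighbors, weights=weights)[0]

        def reinforce(self, neighbor, reward=0.5, decay=0.95):
            """Reward paths that worked; evaporation forgets stale routes."""
            for n in self.pheromone:
                self.pheromone[n] *= decay
            self.pheromone[neighbor] += reward

    node = Node(["b", "c"])
    for _ in range(30):
        node.reinforce("b")  # acknowledgements keep arriving via "b"
    print(node.next_hop())   # now almost always "b", with no global view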
Openness
Providing openness to users to facilitate the creation of new applications is also important to the network. An example of a network without openness is the telephone network, which includes the telephones themselves but has no room for other applications to be connected. The NGN, in which openness is improved, provides specialized network service functions through the newly defined ANI. However, in the NGN, the network and users remain basically independent, and the degree of freedom for creating new applications is unclear. Therefore, in the new generation network, although the network itself will have a simple configuration, it is important to provide openness to users and to entrust some of the network's handling to them. Naturally, problems will arise depending on the degree of openness, which can range from aggressive, in which users manipulate network internals (for example, routing tables), to soft, in which users merely designate the network resources to be used. Future topics of interest include modeling the network so that user requests can be conveyed to it, as well as control plane and protocol design. Network monitoring for ensuring safety also becomes more important as the network becomes more open.
Bi-directional authentication
Although bi-directional authentication is also performed, explicitly or implicitly, in real life, authentication is particularly important in a new generation network. The network should be designed so that bi-directional authentication is always possible. Also, authentication information must be located so that the particular individual or entity controls that information.
Traceability
Individuals or entities must be traceable to reduce attacks on the network. Traceability must be a basic principle when designing addressing and routing, as well as transport over them. To reduce spam, systems must be traceable from applications down to actual society. Anonymity should also be provided at the same time as a means of protection. Traceability is a technological principle provided as part of the architecture, and societal rules are applied for its operation. The basic principles described above are closely related. For example, since self-organizing control is distributed, scalability is ensured. Also, how addressing is handled is an important problem in a network whose topology changes. This discussion gives us a glimpse of how architecture is a comprehensive science.
4.2. Network Architecture Design Based on an Integration of Science and Technology

4.2.1. Conventional Network Design Techniques
Network design, including that of the Internet, provides many examples in which theoretical research results were applied in implementing technologies. In particular, research and development has conventionally progressed in a close relationship with fields of applied mathematics such as queuing theory, traffic theory, game theory, and optimization theory. One example in which theoretical research clearly promoted the development of related technological fields is research concerning multiple access technologies such as ALOHA, its successor CSMA/CD, and the more recent CSMA/CA. Recently, theoretical research has also been actively conducted on QoS technology and TCP technology.

However, there are also many criticisms of the theoretical research related to these technologies. For example, it has often been pointed out that theoretical research on QoS technology has not produced any new technological advances. In fact, the limitations of QoS seem to have been clarified as theoretical research progressed. Looking back at the results obtained by QoS technology, we find many things that should be studied or reflected upon in furthering network architecture design. TCP technology, too, originally seemed to be an ad-hoc technique, and theoretical research demonstrated its superiority only after the fact. Even so, progress in current theoretical research can be highly significant for future studies.

At any rate, a fundamental problem that should be mentioned here is that previous theoretical research, including research concerning the technologies mentioned above, targeted individual technologies. An architecture should essentially be produced by integrating technological and theoretical (scientific) techniques [4-4]. The separation of science and technology has been a particular problem for the research and development of network architecture. A "scientific technique" models a system that already exists and clarifies its properties based on mathematical theory, in order to search for the universal laws inherent in it. "Technology," on the other hand, invents, creates, and uses specific methods to implement new functions. Consequently, to implement new functions based on properties derived from a scientific technique, it is important to construct a model and apply it to the actual system. Conventionally, however, the lack of actions based on this perspective was a problem. In other words, despite the fact that the essence of architecture design is an accumulation of methods based on properties obtained by scientific techniques, this cycle was not followed very well.

The main reason this separation occurred is that the theories previously used for studying networks were borrowed from applied mathematics, which was not a science created for information networks. In addition, the following practical problems were encountered. Previous theoretical methods focused on optimizing service quality based on current and near-future technological levels. To make the optimization problems tractable, optimization targeted a certain layer or a certain protocol rather than the entire network system. Since an information network has a layered structure, this kind of approach could be sufficiently valid as long as the lower layers had a stable structure and requests were input from the upper layers. Indeed, even in the Internet, various small functions had already been added, and it was possible at the time to locally search for universal laws or to optimize individual functions. Moreover, if optimization can be performed for a specific control method or protocol and this work is ultimately repeated across all layers, the entire architecture might be evaluated in this way. However, this assumption no longer holds, because interactions between adjacent layers will be more dynamic in a future self-growing, adaptive information network architecture.
Power Law
Recently, the power law has often been observed in various networks. In terms of network topology, this means that the degree distribution follows the relationship $p(k) \propto k^{-r}$, where $p(k)$ is the probability that a node has degree $k$. This relationship is seen in the Internet in the number of AS connections, the number of router link connections, the number of peer connections in P2P networks, and the number of Web link connections. Reasons given for why the power law is observed include self-organization, dynamic growth, and the interactions that occur among a large number of entities. These certainly conform to the aims that should be pursued for a new generation network, which were presented earlier, and as a result they are given here as reasons indicating the likelihood that the power law will be observed there as well. Currently, scientific research related to the power law is being actively conducted in statistical physics, applied mathematics, sociology, economics, and biology. To continue to apply the results of these disparate fields to the development of information network technology, it is important to integrate science and technology.
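As a minimal illustration of how such a distribution can arise from dynamic growth, the following sketch grows a network by preferential attachment and estimates the exponent r of the resulting degree distribution. This is a hypothetical example using the well-known Barabási-Albert model, given here only as one mechanism that produces a power law; it is not part of the AKARI design.

    import math
    import random
    from collections import Counter

    def preferential_attachment(n_nodes, m=2, seed=1):
        """Grow a network in which each new node links to m nodes chosen
        with probability proportional to their current degree."""
        random.seed(seed)
        pool = [0, 1]                  # one edge between nodes 0 and 1
        degree = Counter({0: 1, 1: 1})
        for new in range(2, n_nodes):
            # sampling from `pool` is degree-biased; duplicate picks collapse
            targets = {random.choice(pool) for _ in range(m)}
            for t in targets:
                degree[new] += 1
                degree[t] += 1
                pool.extend([new, t])
        return degree

    degree = preferential_attachment(100_000)
    hist = Counter(degree.values())
    # Crude two-point estimate of the exponent r in p(k) ~ k^(-r)
    k1, k2 = 4, 32
    r = -math.log(hist[k2] / hist[k1]) / math.log(k2 / k1)
    print(f"estimated exponent r = {r:.2f}")   # close to 3 for this model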
Unlike a biological system, however, an information network is artificial and controllable. Therefore, if self-organizing control based on network science can be implemented, the information network itself can be used as an enormous testbed for self-organizing control, which will enable feedback to be passed from the information network field to the biological field and from there to the field of complex adaptive system science. A truly mutually beneficial integration of science and technology can then be implemented.
the quality of service. However, since users will leave an ISP whose quality of service deteriorates, the quality is maintained. This situation is sufficiently embodied by routing based on the current border gateway protocol (BGP). However, it is difficult for a network whose design is oriented to the use of inexpensive trunk lines to include a QoS mechanism.
References
[4-1] J. H. Saltzer et al., End-to-End Arguments in System Design, ACM Transactions on Computer Systems, 1984.
[4-2] R. Bush and D. Meyer, Some Internet Architectural Guidelines and Philosophy, IETF RFC 3439, December 2002.
[4-3] D. S. Isenberg, The Rise of the Stupid Network, Computer Telephony, pp. 16-26, August 1997.
[4-4] Masayuki Murata, Network Architecture and the Direction of Future Research, IEICE Technical Report on Photonic Networks (PN2005-110), pp. 63-68, March 2006 (in Japanese).
[4-5] M. Murata, Biologically Inspired Communication Network Control, Proc. of SELF-STAR: International Workshop on Self-* Properties in Complex Information Systems (Forli), 31 May - 2 June 2004.
[4-6] Naoki Wakamiya and Masayuki Murata, Biologically Inspired Communication Network Technologies, Transactions of the IEICE, Vol. J89-B, No. 3, pp. 316-323, March 2006 (in Japanese).
5.1. Optical Packet Switching and Optical Paths [Harai, Ohta]

5.1.1. Optical Packet Switching
Definition of Optical Packet Switching
Optical packet switching technology is based on the concept that packets, which had conventionally been switched using electronic processing, are handled as optical packets and switched using optical technology. If circuit speeds exceed 1Gbps, nodes such as routers or Ethernet switches, which are the packet switches used on the Internet, are usually connected by optical fiber. In this case, although the packets are optical signals on the transmission channel, conversion processing is performed at the nodes: arriving optical signals are temporarily converted to electrical signals, and when the packets are to be placed on the next transmission channel, they are converted to optical signals again (O/E/O). With optical packet switching, on the other hand, the optical signals on the transmission channel are not converted to electrical signals at the nodes; the packets are processed and switched in the optical domain and placed on the next transmission channel (O/O/O).
[Fig. 5.1.1.1. Conventional packet switching versus optical packet switching: in conventional switching, the L3 header and payload undergo O/E/O conversion at every node (line speeds of 1/10/40Gbps), whereas in optical packet switching they pass through each node optically (O/O/O) at line speeds well above 40Gbps. All nodes are equipped with functions that enable routing control information to be sent and received.]
[Figure: the internal functions are routing (making a routing table for the forwarding procedure), forwarding (determining the output port from the routing table), scheduling (avoiding packet collisions and performing priority control), buffering (delaying packets by an appropriate time), and switching (switching each packet to the appropriate port); the figure indicates which functions are performed electrically and which optically.]
Fig. 5.1.1.2. Internal Functions of Optical Packet Switching

Optical packet switching is an indispensable element of a new generation network for increasing the capacity and lowering the cost of current packet switching, which relies heavily on electronic processing. This conceptual design does not stop at a primitive design for simply switching optical packets; instead, it describes design concepts for an optical packet switching node that take into consideration processing up to network layer routing. In other words, an optical packet switching node also performs network control: the router is converted to an all-optical device. To perform optical packet switching, a function for generating optical packets and a function for switching optical packets are required. A node having only the generating function, like a label switch, could be referred to as an optical packet edge node, and a node having only the switching function could be referred to as an optical packet core node. However, in the optical packet switching discussed here, each node also performs network layer processing. Therefore, to avoid confusion with a lower-layer closed network such as MPLS, we do not define core and edge nodes. In the practical application phase, a single router will have not just an interface for generating and switching optical packets but also an Ethernet or SONET/SDH interface.
Optoelectronic Integration
A high-performance optical packet switch should be designed by comprehensively investigating the physical scale, electrical power consumption, ease of use, and other factors. For example, since optical packets are extremely fast, the number of physical parts and the power consumption for optoelectric conversion increase accordingly. Therefore, it is desirable for the optical packet payload to be transferred towards the next node without being converted to electrical signals. However, for packets such as routing packets, in which the payload contains information that must be read at the local node, the payload is converted to electrical signals; an optical packet switch will therefore provide an input/output port for this purpose. Since headers must be overwritten or matched against a large volume of addresses in routing tables, they are temporarily converted to electrical signals for processing. Optical processing of addresses will be integrated where appropriate once that technology reaches a critical level of maturity. To avoid packet collisions, a buffer management method with small computational complexity (i.e., whose worst-case computation time is small) is required. This method must take into consideration the properties of an optical buffer called a feed-forward buffer, which is described later.
The following sections describe design concepts related to the bit rate, communication range, guard time, optical packet format, asynchronous variable-length packet handling, routing, buffers, and the physical signal format. These concepts take into consideration the ability to obtain performance surpassing that of a packet switch that performs only electronic processing, the elimination of redundant optoelectric conversion, use over a wide area, the KISS principle, and the ability to implement routing.
Bit rate
Developing a bit rate of at least 100Gbps per packet is a primary goal. Routers having 10Gbps interfaces are common on backbone networks, and some equipped with 40Gbps interfaces have also begun to appear. However, beyond this speed, costs increase when optical signals are converted to electrical signals. Greater cost advantages can be expected by appropriately configuring 500Gbps-class optical packets, not just 100Gbps ones, and switching them in the optical domain. For example, the number of components is reduced by using a single optical switch to control broadband data. Also, since a packet of a given size occupies a shorter interval in time as the speed increases, the fiber length of an optical fiber delay line buffer can be shortened and a compact optical buffer can be configured.
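As a concrete illustration of this compactness argument (simple arithmetic only; the value 2x10^8 m/s is an assumed typical group velocity of light in silica fiber, not a figure from the project):

    # Time occupied by a packet on the line, and the fiber length needed
    # to delay it by one packet slot in a fiber delay line buffer.
    FIBER_SPEED = 2.0e8   # m/s, assumed group velocity in silica fiber

    def packet_time_ns(size_bytes, rate_bps):
        return size_bytes * 8 / rate_bps * 1e9

    def delay_fiber_m(size_bytes, rate_bps):
        return FIBER_SPEED * size_bytes * 8 / rate_bps

    for rate in (100e9, 500e9):
        print(f"1500B at {rate / 1e9:.0f}Gbps: "
              f"{packet_time_ns(1500, rate):.0f} ns, "
              f"{delay_fiber_m(1500, rate):.1f} m of fiber per slot")
    # 1500B at 100Gbps: 120 ns, 24.0 m of fiber per slot
    # 1500B at 500Gbps: 24 ns, 4.8 m of fiber per slot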
Communication Range
To develop optical packet switches for use over a wide range, our goal is to implement optical packet transmission over the range of a wide area network (>> 100 km), not merely the range of a metropolitan area network (< 200 km). For example, if optical packets are configured with a single wavelength, a single modulation, and a high bit rate, then, depending on the cost of high-speed optoelectric conversion and modulation, the result may only be usable over the range of a metropolitan area network. To perform long-distance communication, costs are entailed not only for wavelength dispersion compensation but also for introducing optical 3R regeneration for signal smoothing and optical power management for controlling nonlinear effects such as self-phase modulation. On the other hand, if a single optical packet is created by using multiple wavelengths, as in DWDM, then long-distance transmission is possible with a simpler configuration. Using multiple wavelengths in a single packet is possible with mature O/E- and E/O-conversion technologies. Another important consideration is that the cost of converting part of the optical packet to electrical signals for header processing or routing can be reduced. Of course, the speed difference of each wavelength must be compensated for at each link and at the node where the optical signal is ultimately converted to an electrical signal. However, the advantages that long-distance transmission can be performed by using multi-wavelength packets and that O/E- and E/O-conversion costs can be reduced are significant.
approximately 94%. With optical packet switching, the network must be designed so that the maximum data transmission rate (the absolute value obtained by multiplying the line speed by the utilization efficiency) exceeds these percentages. The guard time is a rate-determining factor set by the wavelength conversion or switching time of a switch. Fig. 5.1.1.3 shows the relationship between the guard time and link utilization efficiency. For example, when the guard time for a 64-byte packet at a link speed of 500Gbps is 1 nanosecond, the utilization efficiency is approximately 50%, and the effective speed is approximately 250Gbps. On the other hand, for a 1500-byte packet, even when the guard time is 10 nanoseconds, the utilization efficiency is approximately 70%, and an effective speed of 350Gbps is obtained. For an optical packet switch to enjoy this kind of increase in line speed, the optical switch must have nanosecond-order switching performance. Besides the switching performance, the effect of wavelength dispersion will increase as the communication range is extended. If dispersion increases, the actual interval between consecutive packets becomes narrower (shorter) than it was when the packets were sent out, as shown in Fig. 5.1.1.4. Therefore, for a link to efficiently accommodate optical packets that are to be transmitted over a long distance, the link must be designed to compensate for wavelength dispersion, and dispersion must be taken into consideration when assigning the optical packet guard time.
[Fig. 5.1.1.3. Link utilization efficiency (0 to 1) versus guard time (0.01 to 100 nsec) for 64-byte and 1500-byte packets at line speeds of 40, 100, and 500Gbps.]
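The efficiency values quoted above follow from treating each packet as its transmission time plus a fixed guard time; a minimal sketch of that relationship (our reading of Fig. 5.1.1.3, not code from the project):

    # Link utilization when each packet of size_bytes is followed by a
    # fixed guard time guard_ns during which the switch reconfigures.
    def utilization(size_bytes, rate_bps, guard_ns):
        t_packet_ns = size_bytes * 8 / rate_bps * 1e9
        return t_packet_ns / (t_packet_ns + guard_ns)

    # Reproduces the examples in the text:
    print(utilization(64, 500e9, 1.0))     # ~0.51 -> effective ~250Gbps
    print(utilization(1500, 500e9, 10.0))  # ~0.71 -> effective ~350Gbps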
[Fig. 5.1.1.4. Wavelength versus time for consecutive multi-wavelength optical packets: dispersion narrows the effective interval between packets, wasting part of the guard time.]
Routing
To perform routing, complex processing such as searching for shortest routes and creating routing tables is required, and converting optical packets to electrical signals for electronic processing is the easiest way to build this. For the nodes of a packet-switched network to exchange routing information, that information must be carried in the payload part of packets; in an IP routing packet, for example, routing information is stored in the payload. If routing packets use the same wavelength as the optical packet header, the packet length increases in the time direction, and optoelectric conversion of the payload wavelength band is also required. To accomplish this simply and inexpensively, schemes for reducing the number of wavelengths used in routing packets and for facilitating reception of the wavelengths constituting the payloads that are to undergo optoelectric conversion are important. In addition, where optoelectric conversion increases costs, a scheme such as reducing the number of optoelectric conversion interfaces to the extent that the resulting decrease in throughput does not affect network performance is also important [5-1-3].
[Figure: (a) wavelength/time grids comparing a general optical packet with the optical packets received by an optical router (CWDM); (b) the generation circuit, in which a CWDM header and DWDM payload wavelengths are combined using a broadband modulator and a broadband amplifier.]
Fig. 5.1.1.6. (a) Wavelength use by an intermediate optical router in an optical backbone network (MTU measure required), (b) Optical multiplexed packet generation circuit
optical signal processing becomes more difficult. On the other hand, with the non-recirculating type, the degree of signal degradation can be made nearly uniform, and the optical signals are easy to process. However, the fiber delay line utilization efficiency under ideal conditions is not as good as for the recirculating type. A non-recirculating optical buffer mechanism must therefore be established that reduces the required number of optical switches and fiber delay lines while controlling the packet loss rate.
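As one concrete reading of such feed-forward buffering (a hypothetical sketch; the actual scheduling algorithms are developed in [5-1-5]), a packet can be delayed only by one of a fixed set of fiber delay line (FDL) delays, so the scheduler searches the small, fixed set of delays for the shortest one that clears the time at which the output port becomes free:

    # Feed-forward fiber-delay-line (FDL) buffer: a packet can only be
    # delayed by one of a fixed set of delays (multiples of a unit D).
    # Pick the smallest delay that avoids collision on the output port.
    def schedule(arrival, busy_until, delays):
        """arrival and delays in ns; delays is the sorted list of FDL
        delays (0 = cut-through); busy_until is when the port frees up.
        Returns the chosen delay, or None if the packet must be dropped."""
        for d in delays:
            if arrival + d >= busy_until:
                return d
        return None  # no delay line long enough: packet loss

    delays = [i * 24.0 for i in range(8)]  # 8 FDLs, 24 ns unit (1500B slot)
    busy_until = 0.0
    for arrival, duration in [(0.0, 24.0), (10.0, 24.0), (12.0, 24.0)]:
        d = schedule(arrival, busy_until, delays)
        if d is None:
            print(f"packet at {arrival} ns dropped")
        else:
            busy_until = arrival + d + duration
            print(f"packet at {arrival} ns delayed {d} ns")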
problems instead of performing modulation/demodulation in a similar manner as with electrical communications, and (3) If both of the previous methods are difficult, perform modulation in parts with no packets at the link entry and drop these signals without performing switching at the next node.
[Figure: a lightpath network in which wavelengths from sources S1-S3 are multiplexed (mux), switched by optical cross-connects (OXC), and demultiplexed (demux) toward destinations D1-D3.]
Lightpath Uses
Lightpaths are used to provide services that cannot be sufficiently provided by packet switching alone. For example, they directly connect paths between hosts for network users who cannot tolerate information loss or who want to communicate using a special protocol. This enables true bandwidth guarantees and supports application innovation. Lightpaths seem promising as a new circuit switching technology, together with the development of wavelength multiplexing technologies such as DWDM. In the near future, lightpaths are expected to be used as a means of traffic engineering; in other words, they will be used in a form in which paths are connected between intermediate edges rather than between hosts (Fig. 5.1.2.2 (a)). However, this is a scenario for the NXGN, in which optical packet switching cannot be used, and it is a typical design principle violation that diminishes the advantages of both packet switching and circuit switching and makes the network more complex. In the NWGN, it is important to cultivate lightpath network technologies based on the end-to-end principle, which assumes that a path is provided up to the host or, more aggressively, up to an application on the host (Fig. 5.1.2.2 (b)).
[Figure: (a) lightpaths terminated between intermediate edges, with packet switching and control networks on either side; (b) lightpaths provided end-to-end between hosts.]
Fig. 5.1.2.2. Lightpath Provision

To provide lightpaths between hosts or applications, the current degree of multiplexing, which is several dozen wavelengths, is insufficient. Wavelength and fiber resources are finite, and even in the future the number of lightpath service users, for example, will be far more limited than the number of packet-switched service users. However, lightpaths will be effective when the required bandwidth is easy to predict and quality assurance is required, such as for network broadcast services using lightpaths. On the other hand, high-density, highly multiplexed WDM such as 1000-wavelength multiplexing has been demonstrated, and we believe that lightpaths will continue to provide a technological foundation that can be used by end users.
Wavelength Conversion
According to past research, wavelength conversion has clearly been somewhat effective in improving lightpath setup performance wherever it has been used. However, its introduction depends on cost (initial cost and power cost). Therefore, some points concerning the effects of introducing wavelength conversion are presented below. (1) Boundary between the end user and network: The user provides the end host interface. To control costs, the user will provide the minimum required wavelength multiplexing. The carriers, on the other hand, also reduce costs by controlling the number of optical fibers used, and since economies of scale are expected from accommodating traffic from multiple users, wavelength multiplexing will increase. As a result, the number of wavelengths will differ significantly between the access link and the carrier network. From a performance standpoint, placing a wavelength conversion function at the boundary between the end user and the network has a significant effect. In addition, the same type of wavelength can be used at every host, resulting in a cost benefit from a production standpoint. (2) Boundary between domains: Locating wavelength conversion at a domain boundary is important in that it simplifies network management. For example, it can reduce advertisements of wavelength usage information, which were described earlier, and also reduce the volume of routing information advertisements. One wavelength conversion method temporarily reverts the signal to electricity, although the transparency of light is lost. This method has the advantage that signal smoothing (i.e., regeneration, reshaping, and retiming) can be performed; on the other hand, since the transparency of light is lost and costs tend to rise, many future research topics remain regarding its introduction. Another method, entirely in the optical domain, is the collective conversion of a wavelength group. This method is particularly effective when all hosts are furnished with wavelength groups in the same band.
[Figure: O/E/O optical packet switches at the edge and O/O/O optical packet switches in the core exchange control information over links that carry both packet-switched traffic and path control signals; packet and path services share the same links.]
Fig. 5.1.3.1. Multiplexing Path Control Signals on Packet-Switched Links

One technique for sharing the control mechanism is to multiplex path control signals onto packet-switched links, as shown in Fig. 5.1.3.1. After the O/E/O optical packet switch that generates optical packets receives path control signals at its electrical interface, it converts those control signals to optical packets. Although packet-switched packets and path control packets flow on the same link, higher priorities can be set for path control packets (particularly signaling packets) to prevent situations in which paths cannot be set from occurring too often.
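A minimal sketch of that priority rule (hypothetical structure; the document specifies only that path control packets may be given higher priority than data packets):

    import heapq

    # Strict-priority link scheduler: path control/signaling packets
    # (priority 0) are always sent before data packets (priority 1).
    class PriorityLink:
        def __init__(self):
            self._q = []
            self._seq = 0  # preserves FIFO order within a priority

        def enqueue(self, packet, is_control):
            prio = 0 if is_control else 1
            heapq.heappush(self._q, (prio, self._seq, packet))
            self._seq += 1

        def dequeue(self):
            return heapq.heappop(self._q)[2] if self._q else None

    link = PriorityLink()
    link.enqueue("data-1", is_control=False)
    link.enqueue("path-signaling", is_control=True)
    print(link.dequeue())  # path-signaling is transmitted first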
[Figure: inside the node, packet headers and control packets (for both packet and path control) are passed to the routing engine; packet payloads traverse the OPS and its optical buffer; lightpath data is switched by a MEMS-based OCS.]
Fig. 5.1.3.2. Example of the Internal Configuration of an Integrated Packet and Path Node Based on an O/O/O Optical Packet Switch

Fig. 5.1.3.2 shows an example of the internal configuration of an integrated packet and path node based on an O/O/O optical packet switch. Lightpath services are provided by using some of the lines (i.e., wavelengths) shown in blue in Fig. 5.1.3.2, while optical packet services are provided by using the lines (wavelengths) shown in red and green. Control packets communicate with the appropriate control module via the OPS, regardless of whether they are for packet switching or path switching. Another technique that can be used for sharing the control mechanism is the distribution of path control. On the Internet, a centralized type of distributed control is performed. To be able to support future conditions in which the number of nodes increases dramatically and the topology varies, this distribution of control should be carried forward. Although the topology does not vary and the number of nodes does not increase dramatically on a lightpath network, there will be an increase in the number of host nodes, and control must also take scalability in the wavelength multiplexing direction into consideration. In addition, with inter-domain control, sufficient information is not necessarily obtained, and self-organized control will be required to increase or maintain performance in that case. Therefore, the benefits of carrying forward distributed control will be significant.
References
[5-1-1] Naoya Wada, Hiroaki Harai, and Fumito Kubota, Optical Packet Switching Network Based on Ultra-Fast Optical Code Label Processing, IEICE Transactions on Electronics, Vol. E87-C, No. 7, pp. 1090-1096, July 2004.
[5-1-2] S. J. Ben Yoo, Fei Xue, Yash Bansal, Julie Taylor, Zhong Pan, Jing Cao, Minyong Jeon, Tony Nady, Gary Goncher, Kirk Boyer, Katsunari Okamoto, Shin Kamei, and Venkatesh Akella, High-Performance Optical-Label Switching Packet Routers and Smart Edge Routers for the Next-Generation Internet, IEEE Journal on Selected Areas in Communications, Vol. 21, No. 7, pp. 1041-1051, September 2003.
[5-1-3] Masataka Ohta, Efficient Composition and Decomposition of WDM-based Optical Packet Multiplexed Packets, Technical Report of the IEICE (PN2006-76), January 2007.
[5-1-4] Hideaki Furukawa, Hiroaki Harai, Naoya Wada, Naganori Takezawa, Kenichi Nashimoto, and Tetsuya Miyazaki, A 31-FDL Buffer Based on Trees of 1x8 PLZT Optical Switches, ECOC 2006 Technical Digest, Paper Tu4.6.5, September 2006.
[5-1-5] Hiroaki Harai and Masayuki Murata, High-Speed Buffer Management for 40Gb/s-Based Photonic Packet Switches, IEEE/ACM Transactions on Networking, Vol. 14, No. 1, pp. 191-204, February 2006.
5.2.2. Current FTTH: Single Star, Double Star, PON (Passive Optical Networks)
As currently deployed, FTTH is divided into three types, as shown in Fig. 5.2.2: SS (single star), PDS (passive double star), and ADS (active double star). In every type, an OLT (optical line terminal) is located at the central office side (towards the network), an ONU (optical network unit) is located at the subscriber side, and optical fiber is laid between them. A single star configuration provides a dedicated optical fiber between the OLT and each ONU; its name derives from the star configuration of the wiring from the OLT to the ONUs. A double star configuration uses a topology in which a relay point is placed between the OLT and the ONUs, and optical fiber is wired in a star configuration centered on that relay point. Since multiple relay points can be placed along the way from the OLT to create a two-stage star configuration, this is called a double star. The type of FTTH that splits or combines optical signals using a passive optical coupler at the relay point is PDS, often known as a PON (passive optical network); the type that performs active processing, such as switching, at the relay point is ADS. In Japan, FTTH service is provided using SS- and PON-type networks. Each has different advantages and disadvantages that follow from its topology. For example, the advantages of SS-type networks are that they can provide dedicated access lines to users and can provide or upgrade different services separately. The advantages of PON-type networks are that they conserve the number of fibers that are laid and enable downstream broadcast communications (from the OLT towards the ONUs). To facilitate FTTH service, standardization has been performed for PON-type networks. For example, the ITU-T has standardized G-PON (ITU-T G.984; 1.25Gbps or 2.4Gbps; accommodates up to 64 ONUs; supports distances up to 20 km), and the IEEE has standardized GE-PON (IEEE 802.3ah Ethernet PON; 1.25Gbps; accommodates at least 16 ONUs; supports distances up to 10 km or 20 km). Currently, in Japan, many services providing up to 100Mbps to each home have been introduced on both SS- and PON-type networks.
[Fig. 5.2.2. The three FTTH configurations: SS (single star), in which the OLT at the backbone network connects directly to each ONU; PDS, in which a passive star coupler at the relay point splits one fiber toward multiple ONUs; and ADS, in which an active switch at the relay point serves the ONUs.]
described above, and either numerous upgrades will be required or PON itself will have to be significantly improved. The following sections will describe various means that can be considered for new-generation optical access.
5.2.4.1. WDM-PON
WDM-PON enables the bandwidth to be increased by using existing PON fiber and couplers. It enables multiple wavelengths to be sent to one user, or either GE-PON or 10 Gigabit-PON to be provided on a single wavelength. Therefore, future upgrades will be even easier than with PON, which provides only one channel. However, with the existing access infrastructure, wavelength multiplexing may be limited because of aerial wiring or branching loss. In addition, depending on the degree of multiplexing and the communication speed of WDM-PON, new optical amplifiers may have to be introduced to compensate for branching loss. Fig. 5.2.4.1 shows an approximation of the average available bandwidth per user versus the number of multiplexed wavelengths and the number of branches, where the transmitted power from the OLT is +27dBm, the received power for each wavelength at the ONU is -24dBm, the insertion loss at the coupler filter is 2dB, the transmission loss (including loss by splices) is 0.35dB/km, the transmission distance is 20 km, there is no amplifier, and the system margin is 5dB. With 16 users, the number of wavelengths can be increased to 256, and 160Gbps can be received per user. However, with 64 users, the number of wavelengths can be increased to at most 64, and the bandwidth per user in that case is 10Gbps.
Fig. 5.2.4.1. Average Available Bandwidth per User Versus Number of Multiplexed Wavelengths and Number of Branches.
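The figure's numbers can be reproduced with a simple power-budget calculation (our reconstruction under the stated assumptions, with each wavelength assumed to carry 10Gbps): splitting across N branches costs about 10*log10(N) dB, and sharing the transmitted power across W wavelengths costs a further 10*log10(W) dB, so W is bounded by the remaining budget.

    import math

    # Power budget for WDM-PON under the assumptions stated in the text.
    TX_DBM, RX_DBM = 27.0, -24.0    # OLT transmit power, ONU sensitivity
    FILTER_DB, MARGIN_DB = 2.0, 5.0
    FIBER_DB = 0.35 * 20            # 0.35 dB/km over 20 km

    def max_wavelengths(branches):
        headroom = TX_DBM - RX_DBM - FILTER_DB - MARGIN_DB - FIBER_DB
        headroom -= 10 * math.log10(branches)   # splitting loss ~ 10*log10(N)
        w = 10 ** (headroom / 10)               # power shared among w wavelengths
        return 2 ** int(math.log2(w))           # round down to a power of two

    for users in (16, 64):
        w = max_wavelengths(users)
        print(f"{users} users: {w} wavelengths, {w // users * 10}Gbps per user")
    # 16 users: 256 wavelengths, 160Gbps per user
    # 64 users: 64 wavelengths, 10Gbps per user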
migration is easy. In addition, since signal loss is small compared with PON networks, SS is also suited to long-distance use. Therefore, the advantages of using SS configurations for access lines in the future are clear.
5.2.4.3. WDM-Direct
WDM-Direct is a concept in which WDM is directly terminated at a host (ONU). For example, a user connects to an Internet service with a certain wavelength and connects to a bandwidth assurance service with a separate wavelength. One wavelength can also be used for broadcasting. In addition, multiple wavelengths can be used for sending and receiving data. Simultaneous upgrades, as required with PON, will also be unnecessary. Either WDM-PON or SS can be used as an implementation topology. However, as was also pointed out for wireless access, signal processing for determining the wavelength to use for connecting to the network must be provided as an additional function.
[Figure: a global network interconnecting optical core and access networks, home/office networks, ad hoc networks, and sensors, with annotations for user authentication, mobility support, multiple access with optimal route selection, and topology change with intermittent connectivity.]
Fig. 5.3. Image of New Generation Wireless Mobile Access Network

The network must support communication with traffic patterns and volumes that differ from conventional ones: devices such as sensors are expected to initiate small traffic volumes, while high-resolution video and 3D video applications are expected to generate enormous traffic volumes, in addition to current voice communications and data access. From the viewpoint of guaranteeing communications during emergencies, attention must also be focused on assuring communication stability and reliability. In addition, energy-efficient communication (low power consumption) is essential for communication devices and sensors, which are mainly battery powered.
One method for arranging base stations with high density is to make practical use of various fixed networks. For example, small base stations that form tiny cells can be located in homes and connected to a mobile core network via broadband links. Another effective method is to connect base stations to the CATV networks that are installed around commercial or residential areas [5-3-1]. A 6MHz frequency bandwidth allocated for just one TV channel can allow a relatively high volume of communications at data rates of several dozen Mbps.
5.3.3. Increase in Speed of Access Links to User Homes and Expansion of Areas Using High-Speed Links
Optical links are widely used for business in urban areas, especially in Japan. On the other hand, the current choices for residential data access for consumers in apartment buildings or detached houses are xDSL using telephone lines, CATV networks, and optical links. To respond to demands for higher communication speeds in the future, the current access speeds of optical links (up to 100Mbps, or 1Gbps shared by up to 32 users) must be increased. Increasing the speed of the optical link sharing technologies known as G-PON and GE-PON, or increasing speed by using other technologies, will be future research goals. In areas that cannot be serviced by high-speed wired links, wireless broadband circuits will have to be provided. The use of wireless access technologies based on fixed wireless access (FWA), such as WiMAX, and mesh network technologies based on wireless LANs can be considered. By 2011, Japan will convert its terrestrial television broadcasting from the current analogue system to a digital system. To enable the digital system to cover all the areas that have been covered by the analogue system, transmitting television signals over telecommunication lines is essential. Therefore, the optimum technology for future access links should be selected by considering the unification of communications and broadcasting and by taking cost into account.
References
[5-3-1] Expansion of PHS Service by Using Cable TV Networks--Successful in Actual Experiments--, NICT press release, http://www2.nict.go.jp/pub/whatsnew/press/h18/070312/070312.html, March 12, 2007 (in Japanese).
In a wireless LAN, CSMA/CA is used for packet multiplexing. PDMA also uses CSMA/CA for packet multiplexing in a limited frequency band, which is effectively shared by the upstream and downstream traffic generated in all cells. In addition, precise frequency allocation among cells is not necessary, because interference between cells is automatically regulated by CSMA/CA and the available frequency band in any specific vicinity is assigned nearly uniformly to each user. Since inter-cell interference between different wireless carriers is also regulated, all carriers can share the same frequency band without mutual adjustment, and a limited frequency band can be used even more effectively. With the PDMA concept, the cellular communication network is redesigned for use as a computer network, as represented by the Internet. Introducing PDMA to satisfy Internet requirements is a first step towards making the cellular network a new generation network.
Even in the current Internet, connection-oriented protocols such as TCP exist in the transport layer. However, TCP is a protocol that uses up the entire allocated bandwidth as long as there is data to be sent; it makes no attempt to define the bandwidth that it requires. Also, because TCP state exists only at the end terminals, intermediate network devices cannot accurately infer the amount of bandwidth required by a flow. The interposition of such an intermediate network device is a violation of the Internet's end-to-end principle, and certain Internet characteristics are significantly harmed if it is forcibly introduced. For example, terminals can freely use a transport protocol other than TCP on top of IP by implementing the protocol in the terminals alone; if some function in the network is specialized to TCP, this freedom at the terminal is lost.

Another problem when a communication method suited to telephones is used on the Internet is that communication on the Internet is generally not bi-directional or continuous. With telephones, voice data flows bi-directionally and can be divided into very small time intervals. If appropriate control signals are inserted into this kind of data flow, control information can be exchanged quickly between the communicating parties. With the Internet, however, data flows are generally intermittent and one-way. The bandwidths and packet counts of downstream flows are not the same as those of upstream flows, even for TCP, which is more or less bi-directional. If a higher-level application uses one TCP connection intermittently, the connection often has no packets flowing at all for certain periods. In addition, it is generally impossible for intermediate network devices to predict this kind of behavior.

Note that ADSL and CATV Internet providers have assumed usage patterns in which residential clients download web content, so that the downstream flow requires large bandwidth while the upstream flow requires little. However, this characteristic arises only in the special circumstances of the client-server model, in which a server can be maintained only by network providers or large companies; it is not a characteristic of the Internet itself. With communications that follow the peer-to-peer model, such as when a web server is placed in a home or video images are transmitted from a home or mobile terminal to another home or mobile terminal, both the upstream and downstream flows will require large bandwidths of the same order, even if the bandwidths required by access terminals are momentarily asymmetric. On a backbone, on the other hand, a fixed bandwidth of the same order is allocated for upstream and downstream flows, and no problem occurs, since temporal variations in the symmetry of individual communications are averaged out.
In addition, time division multiplexing (TDM) such as SONET/SDH is unnecessary on the Internet, since individual packets carry complete information in their headers. On the Internet, multiplexing should be performed in terms of packets, and devices or protocol headers for time division multiplexing are wasteful. Although the 10Gbps Ethernet standards also have features that support SONET/SDH framing, which is helpful when Ethernet traffic flows over existing SONET/SDH networks, when new Internet backbones are created it will be most efficient to link routers directly using dark fiber and to use Ethernet directly. Although the Internet does not currently require QoS assurance, it does not prohibit it. QoS assurance is difficult to obtain in a CSMA/CD environment. However, with current Ethernet, the physical layer is normally a full-duplex one-to-one connection, and if priority control is performed in the data link layer by using IEEE 802.1p, QoS assurance is theoretically possible. PDMA applies these kinds of concepts to mobile wireless communications.
Although these benefits can also be obtained to a certain degree when CDMA performs dynamic channel assignment, the main difference is in the speed of traffic variation that PDMA can support. Since PDMA, like CDMA, enables simultaneous communication with multiple cells by using the same RF circuit, make-before-break smooth handover can be achieved with only one RF circuit.
\frac{dN(t)}{dt} = \varepsilon \left( 1 - \frac{N(t)}{K} \right) N(t)

where ε represents the natural growth rate within a species and K is the available bandwidth; here N(t) corresponds to the TCP window size. The above equation is a logistic curve and can be adapted to high-speed lines: the window size increases exponentially in the initial phase, but since the remaining share of the available bandwidth K shrinks as the window size grows, the rate of increase slows down. A logistic curve is often used in mathematical ecology, where it states that the population of a species increases explosively in the initial state, when resources are plentiful, but the growth rate is suppressed as the population grows and resources decrease.
[Figure: time evolution of the populations of two competing species (Species 1 and Species 2); both converge to a stable shared level.]
TCP Symbiosis requires the available bandwidth to be known and uses an inline network measurement technique to obtain it [5-5-1]. This is a new traffic measurement technique that uses burst transmissions within the TCP window size to measure the available bandwidth while sending TCP data. The logistic curve is used not as a simple analogy from mathematical ecology, but because a great deal of related research concerning the Lotka-Volterra competition model has proved its effectiveness for TCP congestion control. First of all, a network can be thought of as providing its resources to an unspecified large number of users who are basically in a competitive relationship. In particular, the end-to-end principle suggests that mediation of resource contention by controls within the network should be avoided. However, in a state of unrestricted competition, resources are clearly squandered by excessive competition, as the TCP example shows. TCP Symbiosis is a solution that smoothly implements a coexistent relationship among users. Consider TCP Symbiosis when there are two connections. This can be represented by the following equations:
\frac{dN_1(t)}{dt} = \varepsilon \left( 1 - \frac{N_1(t) + \gamma N_2(t)}{K} \right) N_1(t)

\frac{dN_2(t)}{dt} = \varepsilon \left( 1 - \frac{N_2(t) + \gamma N_1(t)}{K} \right) N_2(t)
where γ represents the decrease in the growth rate due to inter-specific competition. In mathematical ecology, this would indicate inter-specific competition for a certain resource. However, the following is clearly stated in Reference [5-5-2], for example: the effect of competition within a species is stronger than the effect received from another species with which it is in competition. In other words, stability occurs when the self-suppression effect of population control within one's own species is greater than the suppression effect received from another species. Even when there is competition, species can coexist, but only when each competing species has strong internal competition. In other words, a relationship that can be referred to as competitive symbiosis holds. Another important point is that fairness must be guaranteed. Since implementation of the end-to-end principle does not allow intervention by a mediator, fair control must be
included in the protocol between end terminals; in other words, a mechanism for guaranteeing fairness between connections must be included in TCP. TCP increases the window size each time it receives an ACK (confirmation of delivery between end terminals). As a result, the way the window size increases differs according to the RTT (round-trip time) between end terminals, and throughput is determined by the RTT. With TCP Symbiosis, if the rate of change of the window size is normalized with respect to the RTT, fair bandwidth allotment can be expected regardless of differences in the RTT. The effectiveness of TCP Symbiosis is shown in [5-5-3]. Importantly, properties of the proposed method, such as stability, extendibility, and parameter characteristics, are being clarified by mathematical analysis that extends past research on the Lotka-Volterra competition model.
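A minimal numerical sketch of the two-connection model above (illustrative parameter values ε = 1, γ = 0.9, K = 100, chosen only for demonstration and not the tuning used in [5-5-3]) shows both window sizes converging to the same stable share:

    # Euler integration of the two-connection Lotka-Volterra model.
    # With gamma < 1 (intra-connection suppression dominates), both
    # windows converge to the coexistence point K / (1 + gamma).
    eps, gamma, K, dt = 1.0, 0.9, 100.0, 0.01
    n1, n2 = 1.0, 30.0            # deliberately unequal starting windows

    for step in range(int(200 / dt)):
        d1 = eps * (1 - (n1 + gamma * n2) / K) * n1
        d2 = eps * (1 - (n2 + gamma * n1) / K) * n2
        n1, n2 = n1 + d1 * dt, n2 + d2 * dt

    print(round(n1, 1), round(n2, 1), round(K / (1 + gamma), 1))
    # -> both approach 52.6, the fair share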
References
[5-5-1] C. L. T. Man, G. Hasegawa, and M. Murata, An Inline Measurement Method for Capacity of End-to-End Network Path, Proceedings of the 3rd IEEE/IFIP Workshop on End-to-End Monitoring Techniques and Services (E2EMON 2005), May 2005.
[5-5-2] Ei Teramoto, Mathematical Ecology, February 1997 (in Japanese).
[5-5-3] G. Hasegawa and M. Murata, TCP Symbiosis: Congestion Control Mechanisms of TCP Based on Lotka-Volterra Competition Model, Proceedings of the Workshop on Interdisciplinary Systems Approach in Performance Evaluation and Design of Computer & Communications Systems (Inter-Perf 2006), October 2006.
Because the network prefix changes if the node moves to another subnet, the IP address of a certain node (interface) depends on the connection location of that node within the Internet. Therefore, the IP address can be called a node (interface) locator. To use an Internet application, the user specifies the target node by using the FQDN, which is a character string. The application accesses a name server to convert the FQDN to an IP address. In this way, the application handles the IP address, which is a locator, as a node identifier. Let us consider the case in which the application communicates by using TCP as the transport layer protocol. The application establishes a TCP connection between sockets at its own node and the destination node. A socket is a pair consisting of an IP address and a port number. The TCP connection is identified by four pieces of information: the IP address and port number of the application's own node and the IP address and port number of the destination node. In this way, TCP also handles the IP address as a node identifier. TCP requests packet transmission by indicating the destination node's IP address to IP, the network layer protocol. IP forwards the packet based on the network prefix of the destination node's IP address (the destination address) to deliver it to the target subnet. Within the target subnet, the packet is delivered to the target node based on the interface identifier of the target IP address. In this way, IP forwards packets based on a locator.
[Figure: in the current Internet, the FQDN (e.g., nwgn.nict.go.jp) is mapped to an IP address (e.g., 133.243.3.35); the application layer uses the IP address as a node identifier, while the network layer routes packets using it as an interface locator consisting of a network prefix and an interface identifier.]
Mobility Problem
When a node moves from the subnet where it was originally connected (its home link) to another subnet (an external link), its IP address changes. Fig. 5.6.2.1 shows a situation in which the node's IP address changes from IP-A to IP-B when the node moves. If a user attempts to communicate by specifying this node's FQDN, the FQDN is converted to the IP address the node had when connected to the home link (IP-A), so the packets end up being delivered to the home link and the user is ultimately unable to communicate. Similarly, if the node moves during communications, packets end up being routed to the connection point before the move (IP-A), and communications cannot continue. Even if packets are forwarded to the connection point after the move, the TCP connection cannot be maintained after the node moves, because the TCP connection is identified by the IP addresses of both ends.
[Fig. 5.6.2.1. When the mobile node moves across the Internet to another subnet, its IP address changes from IP-A to IP-B.]
Multihoming Problem
Multihoming is a concept in which a node or site is connected to the global network by multiple routes. Fig. 5.6.2.2 (a) shows a situation in which a node is connected to the global Internet through multiple interfaces, each using a separate route. This is called node multihoming. Fig. 5.6.2.2 (b), on the other hand, shows a situation in which an organization is connected to the global Internet via multiple access networks. This is called site multihoming. Multihoming provides the benefits of fault tolerance, whereby communications can continue over another route even if a failure occurs on one route; route selection, whereby different routes can be used for different purposes according to communication properties; and load distribution, whereby load concentration on a specific route can be prevented. In Fig. 5.6.2.2 (a), assume that the multihomed node is using interface-A to establish a TCP connection and that the source address of packets sent by this node is IP-A. Now consider the case in which communications continue by switching to interface-B because a failure occurs on the route using interface-A. Once this switch is made, the source address of the packets sent by this node becomes IP-B. Since the TCP connection is identified by the IP addresses and port numbers of both nodes, when the IP address changes, as in this fault tolerance example, the TCP connection can no longer be maintained.
[Fig. 5.6.2.2. (a) Node multihoming: one node reaches the global Internet through both ISP-A (Prefix-A) and ISP-B (Prefix-B); (b) site multihoming: one site reaches the global Internet through multiple access networks with different prefixes.]
Security Problem
On the Internet, IPsec [5-6-6] is provided as a security protocol in the network layer. IPsec implements source node authentication, packet tampering prevention, and packet encryption. When IPsec is used, a relationship called a security association (SA) is established by negotiating in advance the encryption algorithms and keys to be used between the two nodes. An SA is identified by the destination node's IP address and a number called the security parameter index (SPI). Therefore, if the destination node moves to another subnet during communications using IPsec, the SA must be re-established or updated because the destination node's IP address has changed.
locator, which is unique in the entire network, is assigned to an interface. Since AKARI is a network architecture for large-scale networks, a locator should have a hierarchical structure if scalability is taken into consideration. As Fig. 5.6.3 shows, a node is identified by an identifier from the application layer down to the transport layer. When a packet is sent, the source and destination identifiers are converted to locators as they are passed from the transport layer to the network layer, and the network layer forwards the packet based on the destination locator. When a packet is received, the source and destination locators are converted to identifiers as they are passed from the network layer to the transport layer, and the transport layer and application layer identify the communication peer by the source identifier. By separating identifiers and locators in this way, the problems concerning mobility, multihoming, and security described above are solved, as follows.
[Fig. 5.6.3. Identifier/locator separation: the FQDN (a character string) is mapped to an identifier (a bit string) used by the application and transport layers; identifiers are converted to and from locators at the boundary with the network layer.]
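A minimal sketch of how this separation behaves (hypothetical data structures; AKARI does not prescribe a concrete format here): transport connections are keyed by identifiers, while a mapping table supplies the currently valid locator at send time.

    # Identifier/locator separation: connections are bound to stable
    # identifiers; the identifier->locator mapping may change freely.
    class IdLocatorTable:
        def __init__(self):
            self._loc = {}                     # identifier -> current locator

        def update(self, ident, locator):      # mobility or route switch
            self._loc[ident] = locator

        def resolve(self, ident):
            return self._loc[ident]

    table = IdLocatorTable()
    table.update("Identifier-A", "Locator-A")

    # A transport connection is identified by identifiers, not locators,
    # so it survives a locator change.
    connection = ("Identifier-Self", 5000, "Identifier-A", 80)

    print(table.resolve("Identifier-A"))        # Locator-A: packets go here
    table.update("Identifier-A", "Locator-B")   # the node moved
    print(table.resolve("Identifier-A"))        # Locator-B: same connection
    print(f"connection {connection} is unaffected by the move")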
Mobility
Consider a situation in which the node identified by Identifier-A moves from its home link to an external link. Assume that Locator-A is assigned to this node at the home link and Locator-B is assigned at the external link. Even if the locator changes because the node moves, since Locator-A and Locator-B are both converted to Identifier-A at the destination node, communications can continue in the transport layer and application layer after the move.
Multihoming
Assume that a multihomed node had been communicating by using the Locator-A route. Consider a situation in which a failure occurs on this route and this node attempts to continue communicating by using the Locator-B route. Although the source locator of packets sent by this node changes from Locator-A to Locator-B after the failure, both Locator-A and Locator-B are converted to the same identifier at the destination node when the packets are passed from the network layer to the transport layer. Therefore, communications can continue in the transport layer and application layer after the route change.
Security
Assume that a security protocol such as IPsec is also used in the network layer in AKARI. The IPsec SA is established by using the identifier. Even if the locator changes in a mobility or multihoming environment, the IPsec SA can be maintained after the move or route change since the identifier is unchanged.
References
[5-6-1] P. V. Mockapetris, Domain Names - Concepts and Facilities, RFC 1034, November 1987.
[5-6-2] P. V. Mockapetris, Domain Names - Implementation and Specification, RFC 1035, November 1987.
[5-6-3] J. Postel, Internet Protocol, RFC 791, September 1981.
[5-6-4] R. Hinden and S. Deering, IP Version 6 Addressing Architecture, RFC 4291, February 2006.
[5-6-5] Masahiro Ishiyama, Mitsunobu Kunishi, Keisuke Uehara, Hiroshi Esaki, and Fumio Teraoka, LINA: A New Approach to Mobility Support in Wide Area Networks, IEICE Transactions on Communications, Vol. E84-B, No. 8, pp. 2076-2086, August 2001.
[5-6-6] S. Kent and K. Seo, Security Architecture for the Internet Protocol, RFC 4301, December 2005.
Fig. 5.7.1.1. Handover process in IPv6

Handover processing is performed in the following order.
1. When the link layer (L2) detects a deterioration in communication quality, it scans the available wireless channels to determine the AR that the mobile node should connect to after the handover. An interval on the order of seconds is required for the wireless channel scan.
2. By switching the channel it uses, L2 switches the AR to which it is connected (end of L2 handover processing). Switching the wireless channel takes approximately several milliseconds.
3. The network layer (L3) awaits the reception of a router advertisement (RA) message sent from the AR. In the IPv6 neighbor discovery specification [5-7-1], the minimum RA interval is 3 seconds; in the Mobile IPv6 specification [5-7-2], it is 30 milliseconds.
4. By receiving the RA message, L3 learns that the L2 handover occurred. It then generates a new IPv6 address and executes duplicate address detection (DAD). DAD processing takes several seconds.
5. If the address is not a duplicate, L3 sends a signaling message to the location server and receives its confirmation response (end of L3 handover).
In the handover processing described above, the mobile node is unable to communicate from the beginning of step 1 until the end of step 5. If the link layer and network layer were organically linked, handover processing could be performed faster, as follows (see Fig. 5.7.1.2).
Fig. 5.7.1.2. Handover process based on cross-layer architecture

1. L2 regularly scans only the wireless channels that are being used by neighboring ARs and selects a candidate handover destination AR in advance.
2. If the communication quality deteriorates to a certain degree, L2 asynchronously reports this to L3.
3. L3 begins the handover preparation process as soon as it gets information related to the new AR from L2. It generates the post-handover IPv6 address and finishes duplicate address detection.
4. L3 directs L2 to start execution of the L2 handover.
5. L2 executes the L2 handover process by switching to the wireless channel that is being used by the AR indicated by L3 (end of L2 handover).
6. When the L2 handover process ends, L2 asynchronously reports this to L3.
7. When L3 receives the end-of-L2-handover report from L2, it executes the signaling process (end of L3 handover).
In the handover process described above, since the incommunicable interval lasts only from step (5) to step (6), it can be dramatically shortened compared with the conventional handover procedure [5-7-3]. This kind of architecture, in which control information is exchanged between layers, is generally called a cross-layer architecture.
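A back-of-the-envelope comparison makes the gain concrete. The constants below are only illustrative values taken from the orders of magnitude quoted above (a scan on the order of seconds, a channel switch of several milliseconds, a 30-millisecond minimum RA interval under Mobile IPv6, DAD of several seconds); the signaling and notification costs are assumptions.

    # Illustrative latency budget (seconds); not measured values.
    SCAN = 2.0         # wireless channel scan: order of seconds
    SWITCH = 0.005     # channel switch: several milliseconds
    RA_WAIT = 0.030    # Mobile IPv6 minimum RA interval: 30 ms
    DAD = 2.0          # duplicate address detection: several seconds
    SIGNALING = 0.050  # location-server signaling round trip (assumed)
    NOTIFY = 0.001     # in-node L2 -> L3 notification (assumed negligible)

    # Conventional IPv6 handover: steps (1)-(5) are all incommunicable.
    conventional = SCAN + SWITCH + RA_WAIT + DAD + SIGNALING

    # Cross-layer handover: scanning, address generation, and DAD happen
    # before the switch, so only steps (5)-(6) are incommunicable.
    cross_layer = SWITCH + NOTIFY

    print(f"conventional: ~{conventional:.3f} s")  # roughly 4 s
    print(f"cross-layer:  ~{cross_layer:.3f} s")   # roughly 6 ms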
or the application protocol can use network layer information. This architecture takes into consideration the exchange of control information between any layers and introduces the Inter-Layer System (ILS), which passes vertically through each layer (see Fig. 5.7.2.1). Control information of each layer is exchanged via the ILS between any layers.
Fig. 5.7.2.1. Cross-layer architecture of AKARI

If the control information that is exchanged between layers were specific to each protocol or device that is used, each time a new protocol or device was added, the existing system would have to be adapted to the new protocol or device, and system maintenance management would be inefficient. Therefore, in the AKARI cross-layer architecture, the control information that is exchanged between layers is assumed to be abstracted information that does not depend on the protocol or device. In the OSI reference model, a protocol entity (PE) that executes protocol processing exists in each layer. To abstract control information, the AKARI cross-layer architecture introduces an abstract entity (AE) in addition to the PE (see Fig. 5.7.2.1). There is a one-to-one correspondence between PEs and AEs. To send control information to another layer, the AE abstracts PE-specific information, and to receive control information from another layer, the AE converts the abstracted information to PE-specific information.

Next, let us consider interactions between protocol layers. Providing the following three interaction types is considered sufficient for natural interactions between protocol layers.
(1) A layer issues a request for acquiring information from another layer (information acquisition interaction)
(2) A layer notifies another layer of the occurrence of an asynchronous event (event notification interaction)
(3) A layer directs another layer to perform an action (action directive interaction)

An information acquisition interaction is used to get control information from a certain layer. For example, for the link layer, it is used to get information about the link layer protocol, media type (such as 1Gbps Ethernet or IEEE 802.11b wireless LAN), or current communication quality. An event notification interaction is used to notify another layer of an event that occurs asynchronously in a certain layer. For example, it might be used for the link layer to notify that a connection to an access point was disconnected during wireless LAN communication. An action directive interaction is used to perform a specific action in a certain layer. For example, it might be used to direct the link layer to switch to a specific wireless channel.

The following four primitives are introduced for the above interactions (see Fig. 5.7.2.1).
(1) Request: Request conveyed by a certain layer to another layer
(2) Confirm: Confirmation response to a request
(3) Indication: Notification of an asynchronous event by a certain layer to another layer
(4) Response: Confirmation response to an indication

Information acquisition, event notification, and action directive interactions are implemented by using these primitives as shown in Fig. 5.7.2.2.
Information acquisition type: request -> confirm
Event notification type: request -> confirm, (event occurrence), indication -> response
Action directive type: request -> confirm

For example, assume that L2-LinkType is defined as a primitive for getting the link layer protocol or media type. For the network layer to get the link layer protocol or media information, the network layer sends L2-LinkType.request via the ILS to the link layer. Parameters such as the target network interface are stored in this primitive. In response to this request, the link layer stores the requested information in L2-LinkType.confirm and
returns it to the network layer via the ILS. Next, when the link layer is a wireless LAN, assume that L2-LinkUp is defined as a primitive for conveying a notification that the connection to an access point is completed. To get this event notification, the network layer first sends L2-LinkUp.request to the link layer to register an event notification in advance. In response to this request, the link layer returns L2-LinkUp.confirm to convey to the network layer that it received the request. When the event (connection to an access point) actually occurs later, the link layer conveys L2-LinkUp.indication to the network layer. When the network layer receives this indication, it returns L2-LinkUp.response to the link layer as a confirmation response. Finally, assume that L2-LinkConnect is defined as a primitive for causing the link layer to switch the wireless LAN channel. To direct the link layer to perform this action, the network layer sends L2-LinkConnect.request to the link layer. In response to this request, the link layer returns L2-LinkConnect.confirm to the network layer to convey that it received the request.
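The three exchange patterns can be sketched as follows. The primitive names (L2-LinkType, L2-LinkUp, L2-LinkConnect) come from the text; everything else, including the dispatch mechanics and parameter names, is a hypothetical illustration of how an ILS might relay abstracted primitives between layers.

    class InterLayerSystem:
        """Relays abstracted primitives between layers (illustrative only)."""
        def __init__(self):
            self.layers = {}
            self.subscriptions = {}            # (source layer, primitive) -> listeners

        def attach(self, name, layer):
            self.layers[name] = layer

        def request(self, dst, primitive, **params):
            # Request -> Confirm (information acquisition or action directive)
            return self.layers[dst].handle_request(primitive, **params)

        def subscribe(self, listener, src, primitive):
            # Registration for an event notification, confirmed here for brevity
            self.subscriptions.setdefault((src, primitive), []).append(listener)
            return "confirm"

        def indicate(self, src, primitive, **params):
            # Indication -> Response (asynchronous event notification)
            for listener in self.subscriptions.get((src, primitive), []):
                self.layers[listener].handle_indication(primitive, **params)

    class LinkLayer:
        def handle_request(self, primitive, **params):
            if primitive == "L2-LinkType":
                # The AE converts PE-specific state into abstracted information
                return {"media": "IEEE 802.11b", "interface": params["ifname"]}
            if primitive == "L2-LinkConnect":
                return "confirm"               # action directive accepted

    class NetworkLayer:
        def handle_indication(self, primitive, **params):
            print(f"L3 received {primitive}.indication: {params}")
            # An L2-LinkUp.response would be returned to the link layer here

    ils = InterLayerSystem()
    ils.attach("L2", LinkLayer())
    ils.attach("L3", NetworkLayer())

    # Information acquisition: L2-LinkType.request -> L2-LinkType.confirm
    print(ils.request("L2", "L2-LinkType", ifname="wlan0"))
    # Event notification: register first, then receive the indication
    ils.subscribe("L3", "L2", "L2-LinkUp")
    ils.indicate("L2", "L2-LinkUp", access_point="AP-1")
    # Action directive: L2-LinkConnect.request -> L2-LinkConnect.confirm
    print(ils.request("L2", "L2-LinkConnect", channel=6))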
References
[5-7-1] T. Narten, E. Nordmark, and W. Simpson. Neighbor Discovery for IP Version 6 (IPv6), RFC 2461, December 1998.
[5-7-2] D. Johnson, C. Perkins, and J. Arkko. Mobility Support in IPv6, RFC 3775, June 2004.
[5-7-3] Kazutaka Gogo, Rie Shibui, and Fumio Teraoka. An L3-driven fast handover mechanism in IPv6 mobility. In Proceedings of SAINT 2006, IPv6 Workshop, January 2006.
[Figure: a PaC (user@ISP-A) exchanges an authentication request and response with a PAA over the PANA front-end, which is backed by an AAAc.]
Fig. 5.8.1. AAA architecture in IETF
References
[5-8-1] D. Forsberg, Y. Ohba, B. Patil, H. Tschofenig, and A. Yegin. Protocol for Carrying Authentication for Network Access (PANA), Internet Draft (Work in progress), March 2007.
[5-8-2] P. Calhoun, J. Loughney, E. Guttman, G. Zorn, and J. Arkko. Diameter Base Protocol, RFC 3588, September 2003.
applicable. However, the route for QoS routing is not determined using only addresses. Generally, various parameters such as bandwidth, delay, or price are used for the QoS conditions in QoS routing of communications, and since the route that is to be selected differs according to these QoS conditions, the route must be calculated for each individual communication. For example, even communications having identical QoS conditions will not necessarily take the same route since the communication that reserved a route later may not be able to use the same route because the communication that reserved that route earlier used bandwidth resources. Even if the routes that are to be used by multiple communications unexpectedly happen to be identical as a result of the route calculations, the routes cannot easily be aggregated since there is also a possibility that different routes will be used because of later changes in the environment. For example, even if the routes were aggregated, the route signaling message must transport individual communication port numbers or different QoS conditions for each communication in general, and since the signaling message length or processing time is proportional to the number of communications, no reduction is achieved with respect to the order of the computational complexity. When the amount of routing information in a large network is enormous and a hierarchy is created, even if multiple communications appear to travel along the same route from a higher level, different routes may be taken internally because of problem (5). In other words, when QoS routing is performed, the routes of individual communications may not be able to be aggregated, and even if they are forcibly aggregated, the order of the computational complexity or routing table entries will not decrease. QoS routing must be performed for each individual communication, and each individual communication must also have a routing table. This may also be a problem with respect to scalability. For problem (2), which concerns route oscillation, let us consider bandwidth, for example. Assume that the current available bandwidth of each link is advertised. If a communication uses a link, the remaining bandwidth of that link decreases, and in some cases, falls below the bandwidth required by that communication. In this case, if the route for that communication is recalculated for some reason (such as a detour because an intermediate link failed), a situation will occur in which that link is judged to be unavailable and either a separate route will be selected or no route will be found. If, as a method of avoiding this problem, the amount of resources that are being used by each communication is advertised, not only will problem (4) worsen but the amount of information will no longer be able to be reduced by creating a hierarchy since each communication is advertised individually. As another method of avoiding this problem, if the route is determined according to centralized calculations by a PCE, sender, or receiver, the self-reserved route and its effect can be tentatively known at the routedetermination site. However, if a hierarchy of routing information is created, the effect of self reservation on the internal state will no longer be known because of problem (5) or (6) (Fig. 5.9.1.1). In Fig 5.9.1.1 (a) and (b), as a result of the reservation of a bandwidth of 5, an available bandwidth of 10 is advertised when information is simplified at a higher level of the hierarchy. 
However, because of the internal state, the original bandwidth of 15, which is the case when the bandwidth of 5 is not reserved, is inconsistent with the advertised bandwidth of 10. In other words, the original available bandwidth when a bandwidth of 5 is not reserved is not known from the information that is obtained at the higher level of the hierarchy, and oscillation can no longer be prevented even by performing centralized calculations. Although there is also a method in which the route is not recalculated, this is just a functional limitation, and it is unlikely that a
user will agree to continue to use a high-cost route when a less expensive route becomes available under a volume-charge accounting system.
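The loss of information that Fig. 5.9.1.1 illustrates can be reproduced in a few lines. The sketch assumes, purely for illustration, a domain whose two internal links advertise only their bottleneck (minimum) available bandwidth to the higher level of the hierarchy; the link names and numbers follow the figure.

    def advertise(link_bandwidths):
        # Hierarchical aggregation: only the bottleneck is visible outside
        return min(link_bandwidths.values())

    # Case (a): the reservation of 5 lands on the bottleneck link
    case_a = {"link-1": 20, "link-2": 15}
    case_a["link-2"] -= 5
    print(advertise(case_a))    # 10: the advertisement changed

    # Case (b): the same reservation lands on the other link
    case_b = {"link-1": 20, "link-2": 15}
    case_b["link-1"] -= 5
    print(advertise(case_b))    # 15: the advertisement did not change

    # From the outside, an advertisement of 10 cannot be distinguished
    # from "15 minus my own reservation": the bandwidth available without
    # the reservation is unrecoverable, so even centralized recalculation
    # cannot avoid the oscillation described above.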
[Fig. 5.9.1.1 panels: (a) example in which the simplified reserved bandwidth changes because of a new reservation; (b) example in which the simplified reserved bandwidth does not change because of a new reservation. In both panels, a reservation of a bandwidth of 5 is placed in a domain whose internal links have available bandwidths of 20 and 15.]
Fig. 5.9.1.1. Loss of Internal Available Bandwidth Information Due to Simplification Accompanying the Creation of a Hierarchy

Problem (3) concerns the NP-completeness of the optimum route calculation. When there are multiple additive constraints on QoS that are added each time a link or router is traversed, such as price or delay, the calculation of a route that simultaneously keeps these constraints below certain values, or that minimizes one constraint while holding another at a certain value, is NP-complete: no solution method that is polynomial in the number of links or routers is known. To avoid this problem, it is necessary to reduce the number of links or routers within each layer by creating an aggressive hierarchy of routing information, or to minimize the sum of appropriate functions of price and delay.

Problem (4) concerns the increase in routing information that flows as the network scale increases. To prevent this, it is necessary to aggregate the routing information of a network of a certain scale and create a hierarchy of routing information
so that it appears to be a simpler topology from the outside. In addition, when the network scale increases, a multi-layer hierarchy must be created. However, creating a hierarchy of routing information will cause problem (5) or (6) to occur. Problem (5) occurs when a hierarchy of routing information is created. Information is lost because the information of lower layers is aggregated and the resulting information may no longer be reliable. If routing information is not reliable and a situation actually occurs in which QoS can no longer be satisfied during signaling, route reservations will fail and the failures will repeatedly occur even when retries are attempted. Although a method called crankback, which remembers the point of failure and avoids it, can be used, if crankback is repeated multiple times, processing will become extremely complicated, signaling time may also increase boundlessly, and the route that is found will not necessarily be the optimum one. An increase in signaling time also becomes a significant impediment for dynamic route recalculation. Problem (6) also occurs when a hierarchy of routing information is created. However, for the transit QoS when traversing a certain area, routing information can generally be aggregated to a certain degree of accuracy even though problem (5) occurs. Also, the receiving end can obtain routing information of all layers in its immediate vicinity, and the sending end can similarly obtain routing information of all layers in its immediate vicinity. However, only the outermost layer information of the routing information in the vicinity of the sending end can be seen from the receiving end. Therefore, although various routes from the receiving end to the outermost layer of the sending end can be calculated, the aggregated routing information that the receiving end can obtain does not enable it to know whether there is actually a route among them that can reach the sending end while satisfying the QoS conditions, and if there are multiple candidates, which among them is the optimum route (Fig. 5.9.1.2). For the hierarchy shown in Fig. 5.9.1.2, when routes are calculated from the receiver to the sender, since there exists a route to the sender that satisfies QoS conditions from point B while the QoS conditions are not satisfied at point C, which is ahead of point A, the receiver should select a route that uses point B. However, even though the receiver receives route information advertisements of each layer around itself and knows that the route from the receiver to either point A or point B satisfies QoS conditions, it does not know whether or not the route from point A or from point B to the sender satisfies QoS conditions unless it has routing information that is advertised in the immediate vicinity of the sender. Nevertheless, information for the vicinity of the sender cannot be advertised in the vicinity of the receiver because of the creation of hierarchies. Although crankback can deal with this problem for the time being, this is certainly not a satisfactory solution.
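As noted under problem (3), satisfying two additive constraints at once is NP-complete, but minimizing a single weighted combination of the metrics stays polynomial. The sketch below shows that workaround with an ordinary Dijkstra search; the topology, the per-link delay and price values, and the mixing weight alpha are all hypothetical.

    import heapq

    def cheapest_path(graph, src, dst, alpha=0.5):
        """Dijkstra on the single combined metric alpha*delay + (1-alpha)*price.
        Minimizing one additive metric is polynomial, whereas meeting
        independent bounds on delay and price at once is NP-complete."""
        dist = {src: 0.0}
        heap = [(0.0, src)]
        while heap:
            d, node = heapq.heappop(heap)
            if node == dst:
                return d
            if d > dist.get(node, float("inf")):
                continue
            for nxt, delay, price in graph.get(node, []):
                nd = d + alpha * delay + (1 - alpha) * price
                if nd < dist.get(nxt, float("inf")):
                    dist[nxt] = nd
                    heapq.heappush(heap, (nd, nxt))
        return None

    # Hypothetical topology: node -> [(neighbor, delay, price), ...]
    graph = {
        "S": [("A", 10, 1), ("B", 2, 8)],
        "A": [("D", 10, 1)],
        "B": [("D", 2, 8)],
    }
    print(cheapest_path(graph, "S", "D", alpha=0.8))  # prefers the low-delay route via B
    print(cheapest_path(graph, "S", "D", alpha=0.2))  # prefers the low-price route via A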
Fig. 5.9.1.2. Routing Information Hierarchies

Problem (7) concerns the method of calculating the optimum route among multiple routes for which different intermediate ISPs (or communications carriers) can be selected when a volume-charge accounting system is used. For a flat-rate accounting system, route selection may be left to the ISPs since the optimum route for ISPs is the lowest cost route. However, for a volume-charge accounting system, the optimum route for the ISP is the route providing the maximum profit, and this route is usually the one with the highest cost to the user.

Problem (8) concerns combinations of resource reservations with multicasting. When multicasting is performed, since the routes to the receiving ends, which may be extremely numerous, are impossible to calculate from the sending end, the routes are calculated at the receiving ends, and routes towards the sending end are merged. However, if the route selection policies differ at each receiving end, a tree with the sending end as the root may not be well formed, and loops may be produced. Since many multicast routing methods cannot perform inter-domain routing well, inter-domain QoS routing as in problem (7) seems hopeless.
First, as stated in (1), we must clearly recognize that it is impossible to aggregate the routes of different communications in QoS routing. At first glance, this seems to imply that there will be no scalability. However, QoS-guaranteed communications occupy resources. A volume-charge accounting system should be used according to the amount of resources that are occupied and the time those resources are occupied. Network providers can increase the bandwidth or router processing speed according to their revenues. At the same time, network providers should increase route calculation capabilities or routing tables according to their revenues, and no scalability problem will actually occur.

The next point that we must recognize is that route aggregation is also impossible for multicast routing. A multicast receiver generally is a group of terminals rather than an individual terminal, and a multicast address is allocated to a group of terminals. There is no relationship between the closeness of multicast addresses and the similarity of destinations. Even if an attempt is made to allocate similar addresses to similar groups, multicast receivers change dynamically, and a meaningful similarity cannot be defined for group similarity with respect to routing. Therefore, route aggregation is impossible in multicast routing, and an individual routing table entry is occupied for each multicast address. In other words, as stated in (2), multicast communications occupy finite resources of the routing table entries, and it is apparent that at least part of the problem of inter-domain multicast routing is the same as the inter-domain QoS routing problem. Of course, this means that the multicast routing protocol must be unified with the QoS routing protocol, not that existing multicast routing protocols can be used as they are.

A meaningful situation involving point (3) is preventing network providers from advertising unreachable QoS conditions. For example, if a certain network provider sets the delay or price to zero and the bandwidth to infinity when a hierarchy of routing information is created, many users can be attracted to that network provider, and if that network provider can actually achieve the requested QoS at the requested price, the revenue of that network provider will increase. However, not only will crankback be necessary if the requested QoS cannot be achieved, but even if the requested QoS can be achieved, if there exists another network provider that can achieve the requested QoS at a lower price, it will be the users' loss (that is, the selected network provider's gain). If this situation is neglected in an environment where there is competition between network providers, every provider will advertise that the delay or price is zero and the bandwidth is infinite to attract customers to itself. As a result, retries will occur randomly while crankback is eventually performed for all paths, and this situation can no longer be characterized as QoS routing. Therefore, if we impose the constraint that when each network provider issues an advertisement, the advertised delay or price must be greater than or equal to what that provider can achieve and the advertised bandwidth must be less than or equal to what it can achieve, then the advertisement will be reliable. If routes are selected according to advertisements, then the reservations will always succeed and crankback will be unnecessary except when the achievable cost or QoS has changed because of multiple simultaneous reservations.
If another reservation is made at the same time and a reservation fails, the failed reservation should be retried from the start. However, in hierarchical routing, the information that can be advertised is the QoS information when a relevant provider's network is passed through, and it is impossible to advertise the QoS to all internal destinations. Therefore, if a communication destination
is within a certain provider's network, the route up to the entrance of that provider's network can be calculated while taking QoS into consideration. However, whether or not the QoS conditions can be satisfied beyond that point is unknown, and even if there is a route that satisfies the QoS conditions, the entrance from which the QoS conditions can be satisfied is unknown. To solve this problem without performing crankback, advertisement information from the vicinity of the communication destination should be individually sent to the destination. If this information is sent in advertisements, no hierarchy will be created and the advertisement volume will increase boundlessly. However, no problem will occur if the portion that we have sent is carried in the signaling messages, as described in (4). Similarly, if the amount of resources available in the absence of each reservation is carried in the signaling message of that reservation while the current amount of resources is advertised, route oscillation can be reduced while holding down the increase in the amount of advertisements.

An idea concerning point (5) is as follows. Although inter-domain route selection is determined according to a policy, the policy must be determined by the user rather than the network provider, in a manner similar to the selection of a long-distance or international carrier for telephones. Although the user must know sufficient routing information in order to determine the policy, since routing information can be reduced according to hierarchical routing based on the concepts discussed for (3) or (4), this will not particularly become a burden for the user. When multicasting is performed, to prevent mismatches with the policies of the receiving sides, the sending side should determine the policy, and the sending side's policy should be transferred to the receiving sides in signaling messages when necessary. The above methods can be used to eliminate QoS routing problems, and multiple hierarchies can be created to enable QoS routing to operate even in a large-scale network or inter-domain environment.
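The conservative-advertisement rule argued for under point (3) reduces to a simple check. The record format below is a hypothetical illustration, not a defined message layout.

    # A domain's advertisement is reliable only if it promises no more
    # than the domain can achieve: delay and price advertised high or
    # equal, bandwidth advertised low or equal.
    def is_conservative(advertised, achievable):
        return (advertised["delay"] >= achievable["delay"]
                and advertised["price"] >= achievable["price"]
                and advertised["bandwidth"] <= achievable["bandwidth"])

    achievable = {"delay": 20, "price": 5, "bandwidth": 100}
    honest = {"delay": 25, "price": 6, "bandwidth": 80}
    inflated = {"delay": 0, "price": 0, "bandwidth": float("inf")}

    assert is_conservative(honest, achievable)
    # The "zero delay, zero price, infinite bandwidth" advertisement that
    # would attract all traffic and force crankback on every reservation:
    assert not is_conservative(inflated, achievable)

Reservations chosen from advertisements that pass this check succeed without crankback, except when simultaneous reservations change what is achievable, as noted above.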
Fig. 5.10.1.1 shows the OSI reference model with Internet protocols applied as an example. However, recent network circumstances cannot be explained by a simple layered model. For example, PPP, which had often been used as an access technology prior to ADSL, also has a protocol stack for the telephone network. This means that the network layer and higher layers of the Internet protocol stack are stacked as applications on top of the telephone network's protocol stack. A similar situation is true for overlay networks. They can be considered as extensions in the vertical direction. However, it is important to keep in mind that lack of support in a lower layer can prevent a service from being provided in an ideal form in a higher layer. On the other hand, controls in terminals and within the network, such as routing protocols, signaling, DNS, and wireless access control, each have separate protocol stacks, which exhibit a horizontal extent that spans multiple networking technologies rather than just networks using the same technology. These can extend horizontally according to the number of applications. Various application services can be provided by combining these protocol stacks appropriately. Cooperation between layers has also been important so far. The reasons why cooperation between various layers is required will be explained later.
[Fig. 5.10.1.1. The OSI reference model (7 application, 6 presentation, 5 session, 4 transport, 3 network, 2 data link, 1 physical) with example Internet protocols: HTTP, SMTP, SNMP, FTP, Telnet, NetBIOS, and PAP in the upper layers; TCP, UDP, SPX, and NetBEUI in the transport layer.]
A network model is a method of representing the network, and user requests must be implemented in the form of applications that include the network. Realistically, requests are implemented as functional parameters. However, in many cases the requests are probably for technological designs of networks that should be implemented. Although user requests indicate what users want, requests such as a desire for no delays, for always
being connected, or for never being cut off are practically impossible to achieve. Moreover, the representation method itself already is in terms of functional parameters, and in seeking a realistic solution, there is the risk that the value received by users will end up being quantified independently. Therefore, we propose an AKARI value model here based on people, as shown in Fig. 5.10.1.3. All applications are manifested because users (people) exist. Applications or contents exist in the network, and computers or conversational partners also may exist through the network in the traditional way. Even if virtual spaces are implemented, media exist between the network and people, and various information (including perceptions) is exchanged. Although it is impossible to directly transmit all perceptions people have by using current technologies, this is similar to services such as Web 2.0, which have already begun to be carried out through computers.
[Fig. 5.10.1.3. AKARI value model: people exchange voice (hearing), images (sight), text (language), personal information, time, labor, emotions, trust, a sense of security, and a feeling of satisfaction with applications such as phone calls, e-mail, the Web, blogs, animation, music, publications, dictionaries, translation, maps, shopping, auctions, banking, storage, games, and virtual spaces, which in turn rest on a network infrastructure that implements the societal requirements and design principles set out in this document.]
functionally sets of many interface boundaries. A logical interface structure ranging from physical signal rules for transmitting information to application layers that are implemented in the network is required. Also, the types of protocols that are used for control become more numerous as functions cross more branches. At an NNI, required signals are stipulated for an information transmission connection that spans different networks. However, even with exactly the same network configuration, if a limitation occurs for some reason, feasible interconnectivity will be lost. Various reasons that might cause this range from operational differences to functions that are implemented in the network.

An ANI, which indicates an application-network interface, is an interface that an application serviced by the network uses to operate. Its purpose is to enable the network to be controlled by the application when some of the network control functions are stipulated. However, since functions that can control the network are disclosed, it is not realistic for all functions to be exposed, and the freedom of the application is often significantly limited by the current tradeoff between freedom and operational stability, which cannot coexist. The degree of freedom can differ greatly just by enabling the user to select the API that provides the operation of a predetermined sequence. This ANI was stipulated in the NGN that was standardized in autumn 2006. However, whether it is to be provided to the user or to the application service provider depends on the telecommunications carriers that will implement the NGN.

A node located at an end is often called a host. A router is a node for providing internetworking in the Internet. Although the functional difference between a host and a router is not very great, from a role standpoint, a router has many more network interfaces and is specialized for routing protocol processing. A host serves more diverse network terminals, and its uses continue to increase as hosts come to include all devices connected to a network, ranging from large-scale computer systems to PCs, mobile terminals, and sensors. Of course, there tends to be an extremely great difference in the functions of these devices. The network must not only support these differences in terms of scalability, but must also incorporate and support the diverse functions.

In the AKARI architecture, node positioning changes according to role. If we consider a path-packet integrated architecture, the part that should be observed most closely is a host acting as a UNI or end terminal. In other words, whether a user (person) requests path setup or packet switching is meaningless. The application should decide according to the conditions and network characteristics. The method used in AKARI should allow the delivery means (path or packet) to be selected within the protocol stack of the terminal rather than, for example, changing the Web application as in the conventional method or having the application decide.
5.10.3. Open NGN

5.10.3.1. Background for the Appearance of the NGN and NGN Features
Factors promoting the appearance of the NGN include problems of the current Internet and problems of telecommunications carriers. The problems of the current Internet concern security and communication quality. By designing and building a network so that anyone can use the network safely and securely, the Internet can become a foundation of society. Also, by equipping it with quality control functions, new services can be provided such as enterprise networks or emergency communications.

Three subjects of concern to telecommunications carriers are the reduction of operational costs, the seamless linking of services, and the creation of new revenue sources. Operational costs are reduced by integrating fixed-line telephone networks, mobile communication networks, data networks, and broadcasting networks. Services surrounding subscribers are seamlessly linked by implementing bundled services of fixed-line telephone, mobile telephone, data, and broadcasting. A conversion from a connection fee levy model to a function usage fee levy model is achieved and new sources of revenue are created by raising network functionality.

The NGN that emerges from this kind of background will be based on the Internet and will incorporate the good aspects of fixed-line telephones or mobile telephones. The NGN will be based on the Internet because it will inherit the superior features of the Internet, which are "low cost" and the "guarantee of autonomy for applications or services." High reliability and high quality, which are features of the telephone network, will be guaranteed by introducing authentication technologies and communication quality control technologies. Also, the mobility of mobile telephones will be guaranteed by introducing mobility-support technologies. In other words, the NGN can be considered as the Internet with added functions that are required for implementing highly reliable, high-quality services. Two of the functions that will be added deserve special mention. These are "access line authentication" and "communication session management." Without exaggeration, the NGN will be the Internet equipped with functions for authenticating access lines and managing communication sessions. Note that just because something is called the NGN does
not mean that new technologies have been introduced. The technologies for supporting the NGN are being discussed by the IETF (Internet Engineering Task Force).
5.10.3.2. Openness
Three types of interfaces are defined in the NGN. These are the ANI (application-network interface), NNI (network-network interface), and UNI (user-network interface). An NGN-specific interface, which differs from those of the telephone network, is the ANI for using functions in the service stratum. In existing networks, third parties cannot provide services by using network functions. Services are provided by telecommunications carriers. In contrast, by using an ANI, a third party will be able to provide users with new services that use functions such as authentication, session control, and bandwidth control. This holds the potential to completely change the business model not only of third parties but of the telecommunications carriers themselves. By providing network functions to outside parties, a telecommunications carrier can convert from a line usage fee levy model to a function usage amount levy model to implement a platform business. Third parties will be able to build diverse services by using network functions that were previously reserved solely for the telecommunications carrier.
The advantage of the Internet lies in the end-to-end design principle, which enables diverse services to be built on IP. Similarly, in the NGN, by opening up NGN functions to outside parties and entrusting various players to build new services, innovative services that could not be built on the existing Internet will be able to flourish on the NGN.
[Figure: ITU-T's NGN architecture. Applications attach to the service stratum through the ANI, the end user attaches through the UNI, and other networks attach through the NNI; the service stratum provides network services such as user authentication and pricing, and the transport stratum provides transport functions such as communication services and Internet access.]
Fig. 5.10.3.2. Open Interfaces

Making network functions, previously unavailable to outsiders, publicly available to third parties and entrusting service development to third parties suggests a change to a business model that is similar to the MVNO (mobile virtual network operator) of a mobile communications network. An MVNO provides services by using the network functions of a mobile telecommunications carrier. Various levels of MVNO are considered according to the types of functions that are used. There is a model in which the MVNO performs location management or billing together with customer support. Then there is also a model that devotes itself to customer support or marketing and entrusts location management or billing to the mobile telecommunications carrier.
[Figure: the telecommunications carrier releases service stratum functions (session management, authentication, location management, pricing, bandwidth control, presence, customer support) above its transport stratum to third parties and consigns service development to them.]
Various business models are also considered for the NGN according to the level to which telecommunications carriers make network functions publicly available to third parties. By designing the NGN in this way, we establish a model that produces gains for both the telecommunications carriers and third parties. By entrusting the development of services to third parties, we can expect that diverse services will flourish and that the NGN will contribute to a world in which users also can receive significant returns.
[Figure: schematic diagram of the exchange of communications between sender and receiver nodes, created based on the paper "On the Robustness of Soft-State Protocols" by Vishal Misra et al. [5-11-1].]
Fig. 5.11.1. Robustness control

However, if information networks continue to become increasingly larger and more complex in the future, soft state communications will probably be insufficient for maintaining robustness. When the scale of a system increases, multiple simultaneous failures rather than single failures become more regular and are no longer unexpected events. As a result, the margin for introducing software bugs steadily increases and
human error is also more likely to occur during operation management. When designing the new generation network architecture, a fundamentally new design technique is needed rather than just introducing soft state communications.

Conventionally, network design for normal conditions was carried out first without taking failures into consideration. Later, robustness was generally increased, often by assuming single failures. Detour control is a typical example of this kind of approach. As a result, if multiple simultaneous failures occur, robustness deteriorates suddenly, and a noncommunicable state occurs. In the new generation network architecture, it is important for design techniques to take into consideration multiple simultaneous failures as well as traffic fluctuations caused by traffic patterns that differ from normal, such as DDoS attacks. Both survivability (packet deliverability and end-to-end path connectivity) and sustainability (the maintenance of network functions) are required even if failures are widespread overall. If the user can recognize in a variety of ways that the network guarantees that its functions will be maintained even if the extent of failures increases, then the user will have peace of mind. This is the essence of a "safe and secure network (dependability)."

Previously, information network design often focused on increasing performance under normal conditions with no failures. However, with this kind of design, which should be thought of as trying to achieve efficiency for efficiency's sake, the performance that is achieved is often immediately overtaken by developments in communications technologies. This is where the limitations of previous research techniques become apparent. It is also linked with the complaint against previous theoretical research in questioning "whether it made any contribution to information network development."

One control technique for guaranteeing robustness is self-organization. Basically, entities in the network perform control only according to local communications, and target functions in the entire system, viewed macroscopically, emerge as a result. If distributed control can be implemented by using only local communications, its simplicity is directly linked to a guarantee of robustness. Soft state communications should be used for the local communications. However, if this is carried forward further, an approach can be considered in which each entity interacts only with the environment. This approach is called stigmergy [5-11-2]. In addition, these techniques are naturally self-distributed, and as a result, scalability and adaptability to changes in the communication environment, including failures, can be implemented. However, although convergence to an optimal solution during normal operation is considered to be slower and the performance of this technique is not necessarily higher than that of the conventional design technique, the various advantages described above compensate for these faults.
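The soft-state behavior on which such self-organizing control relies can be sketched compactly: state exists only while it is refreshed, so after a failure or a lost message the system converges back to a consistent picture on its own. The lifetime value and names below are hypothetical.

    import time

    class SoftStateTable:
        """State expires unless refreshed; stale entries heal themselves
        away instead of requiring explicit teardown messages."""
        def __init__(self, lifetime):
            self.lifetime = lifetime
            self.expiry = {}                   # key -> expiry timestamp

        def refresh(self, key):
            self.expiry[key] = time.monotonic() + self.lifetime

        def active(self):
            now = time.monotonic()
            # lazily purge entries whose refresh never arrived
            self.expiry = {k: t for k, t in self.expiry.items() if t > now}
            return set(self.expiry)

    table = SoftStateTable(lifetime=0.1)
    table.refresh("flow-1")
    table.refresh("flow-2")
    time.sleep(0.15)           # neither sender refreshed in time ...
    table.refresh("flow-1")    # ... but flow-1's sender is still alive
    assert table.active() == {"flow-1"}   # flow-2's state silently expired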
References
[5-11-1] Lui, J. C. S., Misra, V., and Rubenstein, D., On the Robustness of Soft State Protocols, Proc. of 12th IEEE Intl. Conf. on Network Protocols (ICNP'04), pp. 50-60 (2004).
[5-11-2] Naoki Wakamiya and Masayuki Murata. Biologically Inspired Information Network Architecture, Transactions of the IEICE, Vol. J89-B, No. 3, pp. 316-323, March 2006 (in Japanese).
[5-11-3] Masayuki Murata. Biologically Inspired Information Network Architecture, IEICE, Conference of Self-Growing and Repairing Networks Due to Complexity, December 2006 (in Japanese).
[5-11-4] Masayuki Murata. Network Architecture and Future Research Directions, IEICE Technical Report on Photonic Networks (PN2005-110), pp. 63-68, March 2006 (in Japanese).
layer multicasting), and overlay routing to enable highly reliable routing, QoS routing, and routing for emergency communications [5-12-2]. Second, it enables new network architecture experiments to be conducted without changing the underlying physical network. For example, when a new communications protocol is developed, an experiment can only be executed in a laboratory-scale experimental environment built only of nodes on which that protocol is installed. On the other hand, if overlay network technology is used, a new protocol experiment can be conducted globally so that it coexists with actual traffic on an actual wide-area network. Therefore, an overlay network is essential as an experimental environment of the new generation network architecture. When an overlay network is viewed as a basic component of the new generation network architecture, the overlay network will have two mutually contradictory aspects. In other words, an overlay network is a "solution technology" for avoiding the limitations or problems of lower layers to provide higher quality, highly reliable network services. On the other hand, when aiming to directly establish design principles for an ideal architecture from the start through a scrap and build approach as in this project, an overlay network that basically targets solutions should be unnecessary. However, overlay network technology is considered to fulfill a role as a basic component with respect to the following two points. The first is the "sustainable evolution" of the new generation network. An overlay network supports the basic principle of being sustainable and evolutionary, which was indicated in Section 4.1.2. When application requests are considered to change with the times, it is difficult to consider that a single network technology will be sustained in its current form for two or three decades. Since the Internet was not self-evolutionary, it ended up as an incompatible architecture that is full of patches. Considering this past experience, we plan to create a network architecture that has sustainable evolution as a design principle so it can flexibly deal with changes in application requests. Some technical requirements for implementing sustainable evolution are as follows.
Migration Policy
Determine an appropriate policy for selecting either basic functions or overlay functions as a network architecture. The second point is the provision of "user controllability" of the new generation network. An overlay network should satisfy the design requirement of openness, which was indicated in Section 2.4. When APIs for user or network services are provided at a level corresponding to a lower level such as the network level, for example, rather than at an upper level such as is used for the socket APIs of end terminals, the user can directly write programs for routing nodes. This controllability can be implemented by developing resource management technology in an overlay network. The advantages of overlay network technology can be utilized by introducing a mechanism that enables the user to directly control fixed resources for routing nodes. For example, if all of the resources of each routing node are divided in half, we can configure a network that combines both the functions of a production-quality network and a testbed network so that the conventional
network continues operating by using one half of the resources and new network functions are examined using the remaining resources. Some technological requirements for implementing user controllability are as follows.
start to have excellent fault-tolerance. If the objectives of conventional IP routing are considered, plans based on IP routing with new functions added to it will also appear. The use of a routing overlay, which is described here, is a complementary attempt to solve the problems of IP routing, which continue to materialize as the scale of the Internet increases. It may enable packets to be delivered by using a path with better performance than when packet delivery is entrusted to IP routing. In other words, a routing overlay searches for possible approaches that differ from just enhancing an IP network as a common foundation.
[Figure: a route through overlay nodes via end node C consisting of two 50 msec hops, alongside the IP-routed path on which a fault occurs.]
Fig. 5.12.2. Passing through Overlay Nodes When There are No Faults
Although a routing overlay is a routing technology in which overlay network technology is applied, it has also been pointed out recently that a routing overlay may be able to provide a better path even during normal times when there are no faults [5-12-3]. This depends on the existence of a shorter path than the path that is used according to IP routing. Fig. 5.12.2 shows an example in which the packet transfer delay was used as a performance measure. The path that passes through end node C has a shorter delay than the path having a delay of 150 msec according to IP routing. Functions that are implemented by a routing overlay are summarized as follows.
The following results were reported in reference [5-12-3].
(1) For the border gateway protocol (BGP), while an interval on the order of minutes was required until a new route was discovered in response to a network fault, a new route could be discovered in 18 seconds on average using a routing overlay.
(2) A detour route could be discovered in all cases in experiments using a routing overlay.
Although the abovementioned experimental results were obtained using university-centered experimental networks, similar results have also already been reported for commercial networks [5-12-5]. Note that good results have also been shown for emergency communications, which were sufficiently effective even during faults due to network attacks [5-12-6]. In a similar manner as for other overlay networks, an end node functions as a routing node in an overlay network, and a routing overlay [5-12-3, 5-12-4] delivers packets according to an independent route search even when a fault occurs. Since IP routing targets all nodes, when a fault occurs, time is required until a route for bypassing it is determined. On the other hand, the number of target nodes is limited in a routing overlay. Therefore, when a problem such as a fault occurs, it can be dealt with immediately. This enables a routing overlay to be applied to emergency communications. For more details, see reference [5-12-2].
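The route selection a routing overlay performs can be sketched with the delays of Fig. 5.12.2. The probing that would produce these delay values in practice is omitted, and all names and numbers are illustrative.

    # One-way delays in msec (values follow Fig. 5.12.2; a real routing
    # overlay would obtain them by actively probing its member nodes).
    direct = {("A", "B"): 150}
    via_overlay = {("A", "C"): 50, ("C", "B"): 50}

    def best_route(src, dst, relays):
        """Choose the direct IP route or the best one-hop overlay detour."""
        best = (direct.get((src, dst), float("inf")), [src, dst])
        for r in relays:
            d = (via_overlay.get((src, r), float("inf"))
                 + via_overlay.get((r, dst), float("inf")))
            if d < best[0]:
                best = (d, [src, r, dst])
        return best

    print(best_route("A", "B", relays=["C"]))   # (100, ['A', 'C', 'B'])

    # When the direct route fails, only this small set of relays must be
    # re-examined, which is why a detour is found in seconds rather than
    # the minutes that BGP convergence may take.
    direct[("A", "B")] = float("inf")
    print(best_route("A", "B", relays=["C"]))   # still (100, ['A', 'C', 'B'])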
References
[5-12-1] PlanetLab, http://www.planet-lab.org/
[5-12-2] Masayuki Murata. Creating Highly Reliable Networks Using a Service Overlay, Journal of the IEICE, Special Section on the Configuration and Control of Advanced Information and Communication Networks for Disasters, Vol. 89, No. 3, pp. 792-795, September 2006 (in Japanese).
[5-12-3] D. Andersen, H. Balakrishnan, M. F. Kaashoek, and R. Morris. Resilient overlay networks, Proc. of 18th ACM Symposium on Operating Systems Principles (SOSP), September 2001.
[5-12-4] D. G. Andersen, H. Balakrishnan, M. F. Kaashoek, and R. Morris. The case for resilient overlay networks, Proc. of the 8th Annual Workshop on Hot Topics in Operating Systems (HotOS-VIII), May 2001.
[5-12-5] H. Rahul, M. Kasbekar, R. Sitaraman, and A. Berger. Towards realizing the performance and availability benefits of a global overlay network, Proc. of Passive and Active Measurement Conference, March 2006.
[5-12-6] N. Feamster, D. G. Andersen, H. Balakrishnan, and M. F. Kaashoek. Measuring the effects of Internet path faults on reactive routing, Proc. of ACM SIGMETRICS 2003, June 2003.
5.13. Layer Degeneracy [Ohta]

5.13.1. The End-to-End Principle and the Data Link Layer and Physical Layer
The end-to-end principle is a basic principle of the Internet, which states that all communications protocol operations that can be performed by the terminals are performed within the terminals (upper layers such as the transport layer or application layer) and not performed in the network. This directly leads to the greatest possible simplification of functions of the IP layer, which is the network layer of the Internet. As a result, IP headers contain only the minimum required information, and congestion control, which had been performed as an essential function in the conventional network layer, is performed on the Internet in the transport layer within the terminals. The KISS principle, which is typified by the end-to-end principle that was present in the original Internet design, should also be inherited by the new generation network architecture, and by assuming a network architecture that contains a common layer in which functions are simplified as in the original IP protocol, functions that are duplicated in various layers can be eliminated. A common network layer protocol simplified in this way makes practically no demands on the data link layer, and it can operate over practically any data link layer. However, this is in tension with the expectation that diverse data link layers will be used in a new generation network. The end-to-end principle is what joins the upper layers within the terminal through thin network layers as shown in Fig. 5.13.1.1 (a). The network layers connecting the upper layers are thin layers as shown in Fig. 5.13.1.1 (b). However, if lower layers (data link layer and physical layer) are added to this as shown in Fig. 5.13.1.2, it is apparent that the lower layers of each link also intervene between the upper layers, rather than just the network layers, as shown in Fig. 5.13.1.2 (b). Even if the trouble is taken to simplify the network layer, since the lower layers are more complex, the original property of the new generation network ends up being lost.
Fig. 5.13.1.1. End-to-End Principle and High (3 or higher) Layer Relation

Therefore, it is apparent that the end-to-end principle requires not just the simplification of the network layer, but also the simplification of the lower layers. First, there certainly should not be any unnecessary complexity in the physical layer, but since information cannot be delivered if the physical layer disappears, it cannot be completely eliminated. The physical layer should have the minimum possible complexity according to the physical medium in use. On the other hand, the data link layer should be limited to providing only the minimum functions that are requested from the network layer or physical layer. This layer may be eliminated in some cases. Let us consider the following two examples in which the thinnest possible lower layers are used: optical fiber, which has an ultra-wideband transmission bandwidth, and airborne radio waves, which are required for mobile communications.
[Fig. 5.13.1.2. (a) Layered structure: between the application and transport layers of the terminals, not only the network layers of the routers but also the data link layers and PHYs of each link intervene.]
5.13.2. Thin Lower Layer When Optical Fiber is the Transmission Medium
First, let us consider using optical fiber for the transmission medium. Optical fiber simply is a medium for ultra-wideband transmission in a point-to-point link. Therefore, its point-to-point simplicity should be leveraged as the physical layer. Let us consider optical circuit switching (OCS), which is a communication system that reduces physical layer functions as much as possible. Optical amplifiers eliminate limitations other than bandwidth in many cases, and the physical layer is format-free within the network. However, since the ultra-wideband property that the physical layer has cannot be fully utilized in a point-to-point link, the use of a multiplexing technology such as WDM is essential from a practical perspective. When the physical layer is format-free, the data link layer can be completely eliminated from within the network in the data plane, including functions up to the framing function, and will exist only in the terminals. If L2 OCS is removed and all OCS is L3 OCS, the data link layer is completely eliminated even in the control plane (excluding non-OCS-based control packet transmissions for signaling).

Another method of making the lower layers thinner by using optical fiber as the transmission medium is "optical packet multiplexing," which shows the ultra-wideband property of the physical layer directly in the packet network layer. This enables the dynamic characteristics of packet multiplexing to be used to the fullest extent. Since ultra-wideband light cannot be directly modulated, a multiplexing technology such as
WDM is required in the physical layer. However, the physical layer and network layer are directly connected without using WDM multiplexing for packet multiplexing, and the entire transmission bandwidth (such as all wavelengths) can be directly provided to individual packets. By directly supporting packets in the physical layer and embedding L3 headers, the data link layer is eliminated. Although the payload part other than the L3 header generally may be format-free within the network, this is not the case when the network layer is IP since each router must read information of the payload part when ICMP control messages are generated for packets.
5.13.3. Thin Lower Layer When Airborne Radio Waves are the Transmission Medium
Another example of a transmission medium is airborne radio waves. In this case, since the physical layer utilizes a many-to-many broadcast model, which differs from optical fiber, terminal identification according to MAC addresses and the suppression of simultaneous transmissions as well as the packet multiplexing or packet duplexing techniques that accompany it are essential in the data link layer. On the other hand, terminal identification and techniques such as packet multiplexing or packet duplexing are not required in the physical layer. The PDMA concept eliminates TDD, TDMA, CDD, CDMA, etc. in the physical layer and shows all available radio bandwidth transmission speeds to the packet network layer.
Guarantee Flexibility
Since the architecture of the new generation network or the set of protocols for its various layers have not been determined yet, and since diverse protocols, methods, and architectures may coexist without the set of protocols being collected together in several standards in a future network, the testbed must have a high degree of flexibility to support different protocols, methods, and architectures. In other words, the hardware and software that constitute the testbed must have a high degree of flexibility to support unknown methods and their combinations that may exist in the future. The programmable routers that are the subject of active research and development in the GENI project of the National Science Foundation (NSF) of the United States satisfy this objective. More specifically, although this is similar to the concept of software radio or cognitive radio, a hardware and software configuration that can support various transmission methods, network protocols, and applications is desirable.
services when the unified network is linked together with a sensor network in the home is also desirable. For wireless mobile communications, providing the following environments or functions merits consideration:
o Space such as iron poles or building rooftops on which wireless base stations can be installed, and wired circuits for connecting installed wireless base stations to the network.
o Wireless routing nodes installed in moving objects (such as taxis, buses, or trains).
o Access to public wireless networks such as cellular, PHS, or wireless LAN hot spots.
7.2. GENI/FIND
GENI (Global Environment for Network Innovations) [7-2, 7-3, 7-4] is a program of the National Science Foundation (NSF) of the United States (budget: approximately $300 million from MREFC; total budget: approximately $367 million; term: 5 to 7 years). It aims to develop a shared global facility (testbed) for promoting research and development of new Internet architectures and network services. The project's objectives assert that a common network foundation enabling multiple network experiments to be conducted simultaneously and independently is required in order to (1) resolve problems of the existing Internet architecture concerning stability, security, QoS, and the like; (2) construct experimental environments on actual networks using new network technologies; and (3) incorporate innovative technologies such as optical, mobile, and sensor technologies. Like NewArch, GENI adopts a "clean slate" approach as its design policy, designing the architecture from scratch. Five technical working groups (WGs) have been established in GENI. They are listed below along with their main topics of interest.
(1) Research Coordination WG (Co-chairs: David Clark, Scott Shenker) Making the scientific case for GENI, and ensuring that the requirements of the research community inform GENI's design.
(2) Facility Architecture WG (Co-chairs: Larry Peterson, John Wroclawski) The management framework that ties the physical resources into a coherent facility.
(3) Backbone Network WG (Chair: Jennifer Rexford) The underlying fiber plant, optical switches, customizable routers, tail circuits, and peering relationships.
(4) Distributed Services WG (Co-chairs: Tom Anderson, Amin Vahdat) Distributed services that collectively define GENI's functionality.
(5) Wireless Subnets WG (Co-chairs: Dipankar Raychaudhuri, Joe Evans) Various wireless subnet technologies and deployments.
GENI regards PlanetLab as a prototype when asserting the validity of a shared global facility. However, GENI is more advanced than PlanetLab in several respects. First, whereas PlanetLab is limited to general-purpose PCs as the elements for configuring the overlay network, GENI supports more diverse nodes, such as sensors and mobile terminals, as well as low-speed links. In addition, the GENI facility is more user-friendly because it provides more diverse network services.
FIND (Future Internet Design) [7-5, 7-6] is a long-term program of the NSF that started in 2006 (budget: approximately $40 million from NeTS; term: undetermined). It aims to establish the Internet architecture of the future. While GENI aims to construct a global facility, FIND focuses on comprehensive network architecture design research. Since individual FIND projects can benefit from using the GENI facility, synergistic results are expected. Overall, FIND is an umbrella program consisting of many relatively small-scale projects, like conventional NSF-funded programs, rather than a program that aims to establish a single unified network architecture. In its initial fiscal year of 2006, FIND allocated a total of $12 million to 26 projects.
7.3. Euro-NGI/Euro-FGI
Euro-NGI (2003-2006) [7-7, 7-8] is a transnational project that belongs to the Information Society Technologies (IST) priority of the European Sixth Framework Program (FP6) (category: Network of Excellence; total budget: 5 million euros; participating organizations: 59). Its objectives include the exchange of information related to next generation networks, the unification of ideas and knowledge, and coordination between projects. Specifically, it focuses on areas such as core networks, fixed access, mobile access, IP networking, and service overlays to establish the basic technologies of multi-network services that can support fixed mobile convergence (FMC), seamless mobility, and context awareness while accommodating diverse access networks such as sensor networks and personal area networks (PAN). This project is expected to continue in the Seventh Framework Program (FP7) [7-9] as Euro-FGI (category: Network of Excellence; term: 2006-2008). Note that in FP7, "The Network of the Future" is given as one of the pillars of the information and communication technologies (ICT) field, and a total budget of 200 million euros is expected to be allotted. This is to be broken down as 14 million euros for the Network of Excellence and, for research projects, at least 84 million euros for large-scale integrating projects (IP) and at least 42 million euros for small or medium scale focused research actions (STREP).
References
[7-1] http://www.isi.edu/newarch/
[7-2] http://www.nsf.gov/cise/cns/geni/
[7-3] http://www.geni.net/
[7-4] GENI Planning Group, "GENI: Conceptual Design, Project Execution Plan," GENI Design Document 06-07, January 2006. http://www.geni.net/GDD/GDD-06-07.pdf
[7-5] http://www.nets-find.net
[7-6] http://find.isi.edu
[7-7] http://eurongi.enst.fr/
[7-8] "A View on Future Communications," http://eurongi.enst.fr/archive/172/AviewonFutureCommunications.doc
[7-9] http://cordis.europa.eu/fp7/
Chapter 8. Conclusions
This conceptual design document is the first step toward the implementation of a new generation network architecture. It presents societal requirements, future basic technologies, design principles for a network architecture based on those requirements and technologies, and conceptual design examples of several key parts based on those design principles. Our approach is to concentrate our efforts on designing a new generation network while using testbeds to evaluate the quality of those designs experimentally. The most important goal of our efforts is a set of design principles for an architecture that is comprehensively optimized and stabilized. Until the final design is complete, however, even these design principles are not fixed; they can change according to feedback from the design and evaluation process. The AKARI architecture will be sustainable and evolutionary.
The crisis confronting the current Internet must not be repeated. The information infrastructure that has become such an integral part of society can no longer be scrapped and rebuilt. It is imperative for the information infrastructure to provide surplus capacity in order to enhance the quality of society in the future; sufficient capacity will enhance quality. However, the infrastructure must not be made more complex by merely piling technologies together. The role of an architecture is to select and integrate, that is, to guide toward greater simplicity. A scheme for implementing a virtual space on the network addresses the societal problems stemming from security, which is a weakness of the Internet. Endowing the network core with robustness provides society with a sense of security that cannot be provided by superficial improvements.
One sentiment included in the term "new generation" is to act on free ideas that are not constrained by the limitations of existing technologies. That standpoint is both novel and neutral. To implement a new generation network architecture, the participation of many network architects who have broad knowledge of network technology is important. This conceptual design document merely indicates the directions in which the process should advance. It also goes without saying that application fields and basic technological fields must be linked. New generation network research will help promote further advances toward detailed design in the future.
Addressing: A policy for assigning information that identifies a location on the network.
Circuit switching: A method of communicating after allocating a circuit before communication begins.
Packet switching: A method of communicating by dividing data into packets. Nodes (switches) process communication in terms of individual packets without determining the route before communication begins.
Virtual circuit: A mechanism for virtually allocating the route on which data will flow before communication starts, in order to perform packet-switched communications efficiently. This differs from packet switching in that tags, rather than addresses, are added to packets, and the tags are rewritten at each node (a minimal code sketch of this tag rewriting follows the glossary). It differs from circuit switching in that a probabilistic loss of packets is permitted at nodes or on links along the communication route.
Scalability: The ability of a system to estimate the absolute maximum number of entities within the system and support them.
Reliability: The degree to which a system can recover even if faults (including congestion) occur or, conversely, the degree to which it does not break down.
Availability: The usable operating ratio of a network. For example, if the non-communicable time is 3.6 seconds per hour (3,600 seconds), then the availability is 99.9%.
Connectivity: A communicable state.
Interoperability: The ability of multiple entities that are implemented according to certain common rules to communicate with each other.
Next generation network (NGN): [ITU-T Y.2001 definition (entries in parentheses are supplements)] A packet-based network that uses multiple broadband, QoS-enabled transport technologies to provide telecommunication services. Its service-related functions (service stratum) are independent of the underlying transport-related technologies (transport stratum). It enables unfettered access for users to networks and to competing service providers. It supports generalized mobility, which will allow consistent and ubiquitous provision of services to users.
The Internet: A set of interconnected networks with global reachability, which is built using the IP protocol.
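The following minimal sketch (Python; the node names and tag values are invented for illustration) shows the tag rewriting described in the "Virtual circuit" entry above: each node maps an incoming tag to an outgoing tag and a next hop installed when the circuit was set up, and no destination address is consulted while data flows.

    class VCNode:
        def __init__(self, name):
            self.name = name
            self.table = {}   # in_tag -> (next_node_or_None, out_tag)

        def splice(self, in_tag, next_node, out_tag):
            # Install one hop of a virtual circuit (done at setup time).
            self.table[in_tag] = (next_node, out_tag)

        def forward(self, tag, payload):
            next_node, out_tag = self.table[tag]    # tag lookup, not address lookup
            print(f"{self.name}: tag {tag} -> {out_tag}")
            if next_node is None:
                print(f"{self.name}: delivered {payload!r}")
            else:
                next_node.forward(out_tag, payload)  # tag rewritten at each hop

    # Set up a two-hop circuit A -> B -> C before any data flows.
    a, b, c = VCNode("A"), VCNode("B"), VCNode("C")
    a.splice(in_tag=5, next_node=b, out_tag=9)
    b.splice(in_tag=9, next_node=c, out_tag=3)
    c.splice(in_tag=3, next_node=None, out_tag=0)   # circuit terminates here
    a.forward(5, b"data")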