A Seminar Report on
TERABIT SWITCHING AND ROUTING
BACHELOR OF TECHNOLOGY
IN
ELECTRONICS AND COMMUNICATION ENGINEERING
SUBMITTED BY
P. PRASHANTH KUMAR
20S15A0418
NAAC with ‘A’ Grade, NBA Accredited, An ISO 9001:2015 Certified, Approved by UK Accreditation Centre, Granted Status of 2(f) & 12(b) under UGC Act, 1956, Govt. of India
MAISAMMAGUDA, DHULAPALLY, SECUNDERABAD-500100
BONAFIDE CERTIFICATE
This is to certify that the seminar report entitled “TERABIT SWITCHING AND
ROUTING”, being submitted by P. PRASHANTH KUMAR, bearing roll no. 20S15A0418, in
partial fulfillment of the requirements for the award of the Degree of Bachelor of Technology in
ELECTRONICS AND COMMUNICATION ENGINEERING, during the academic year 2022-2023.
Certified further that, to the best of our knowledge, the work reported here is not a part of any
other project on the basis of which a degree or an award has been given on an earlier occasion to
any other candidate. The results have been verified and found to be satisfactory.
ACKNOWLEDGEMENT
We sincerely thank all the staff members, friends, and parents, without whose
support this work would have been delayed.
ABSTRACT
Just a few years back, no one would have thought that Internet traffic would increase
at such a rapid rate that even gigabit-capacity routers in the backbone would be
insufficient to handle it. Today, routers with terabit switching capacities have become
an essential requirement of core backbone networks, and gigabit routers find their
place at the mid-core or even the edge.
With the deployment of more and more fiber and improvements in DWDM technology,
terabit-capacity routers are required to convert the abundant raw bandwidth into useful
bandwidth. These routers require fast switched backplanes and multiple forwarding
engines to eliminate the bottlenecks present in traditional routers.
CONTENTS
ACKNOWLEDGEMENT iii
ABSTRACT iv
ABBREVIATION viii
CHAPTER-3 EFFICIENT ROUTING TABLE 12-14
4.1.1 Queuing 15
REFERENCES 24
LIST OF FIGURES
ABBREVIATION
IP : Internet Protocol
OC : Optical Carrier
DWDM : Dense Wavelength Division Multiplexing
ATM : Asynchronous Transfer Mode
SONET : Synchronous Optical Network
QOS : Quality of Service
VLAN : Virtual Local Area Network
RSVP : Resource Reservation Protocol
CHAPTER 1
INTRODUCTION
1.1 NETWORK INFRASTRUCTURE
This will also remove the overhead associated with the extra technologies and
enable more economical and efficient wide area communications. As the number of
channels transmitted on each fiber increases with DWDM, routers must also scale their
port densities to handle all those channels. With the increase in interface speeds
as well as port density, the next thing routers need to improve is their
internal switching capacity. A 64-channel OC-192 connection will require over a terabit of
switching capacity. As an example, a current state-of-the-art gigabit router
with 40 Gbps of switch capacity can support only a 4-channel OC-48 DWDM
connection. Four of these are required to support a 16-channel OC-48 DWDM
connection, and sixteen of these are required to support a 16-channel OC-192 DWDM
connection, with a layer of sixteen 4:1 SONET Add/Drop Multiplexers in between. In
comparison, a single router with terabit switching capacity can support a
16-channel OC-192 DWDM connection directly.
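To make the channel-count arithmetic concrete, the short sketch below computes the aggregate fabric load of each configuration. It assumes nominal SONET rates of 2.488 Gbps for OC-48 and 9.953 Gbps for OC-192 and counts duplex traffic (into and out of the fabric); these factors are illustrative assumptions rather than figures from the text above.

```python
# Rough capacity arithmetic (illustrative assumptions: nominal SONET line rates,
# and a fabric that must carry each packet in and out, i.e. 2x the line rate).
OC48_GBPS = 2.488
OC192_GBPS = 9.953

def fabric_load_gbps(channels, line_rate_gbps):
    """Duplex switching load of one DWDM connection with the given channel count."""
    return 2 * channels * line_rate_gbps

print(fabric_load_gbps(4, OC48_GBPS))    # ~20 Gbps: fits in a 40 Gbps gigabit router
print(fabric_load_gbps(16, OC48_GBPS))   # ~80 Gbps: needs several such routers
print(fabric_load_gbps(64, OC192_GBPS))  # ~1274 Gbps: over a terabit of switching
```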
This section gives a general introduction to the architecture of routers and
the functions of their various components. This is important for understanding
the bottlenecks in achieving high-speed routing and how these are handled in
the design of the gigabit and even terabit capacity routers available in the market today.
1. Datapath Functions: These functions are applied to every datagram that reaches
the router and is successfully routed without being dropped at any stage. The main
functions in this category are the forwarding decision, forwarding through
the backplane and output link scheduling.
In this section, several parameters are listed which can be used to grade the
performance of new generation router architectures. These parameters reflect the
exponential traffic growth and the convergence of voice, video and data.
High packet transfer rate: Increasing Internet traffic makes the packets-per-second
capacity of a router the single most important parameter for grading its
performance. Further, considering the exponential growth of traffic, the capacity of
routers must be scalable.
Multi-service support: Most of the network backbones support both ATM and IP
traffic and will continue to do so as both technologies have their advantages.
Therefore routers must support ATM cells, IP frames and other network traffic types
in their native modes, delivering full efficiency of the corresponding network type.
Guarantee of short, deterministic delay: Real-time voice and video traffic require a
short and predictable delay through the system. Unpredictable delay results in
discontinuity, which is not acceptable for these applications.
First, the processing power of the central CPU, since route table search is a highly
time-consuming task; and second, the fact that every packet has to traverse the
shared bus twice.
CHAPTER 2
LITERATURE REVIEW
2.1 ROUTER ARCHITECTURE WITH SWITCH BACKPLANE
Some router vendors introduced parallelism by having multiple CPUs, with each
CPU handling a portion of the incoming traffic. But each packet still has to
traverse the shared bus twice. Very soon, the design of router architecture advanced one
step further, as shown in Fig 3. A route cache and processing power are now provided
at each interface, forwarding decisions are made locally, and each packet
has to traverse the shared bus only once, from the input port to the output port.
Even though CPU performance improved with time, it could not keep pace with
the increase in the line capacity of the physical links, and it is not possible to make
forwarding decisions for the millions of packets per second arriving on each input
link.
The basic difference between switching and routing is that switching uses
'indexing' to determine the next hop for a packet in the address table, whereas
routing uses 'searching'. Since indexing is an O(1) operation, it is much faster than any
search technique.
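As a toy illustration of this difference, the sketch below contrasts a direct-index lookup with a linear longest-prefix search; the table contents and port names are invented for the example.

```python
# Illustrative contrast between switching (direct index) and routing (search).
from ipaddress import ip_address, ip_network

# Switching: the next hop is found by indexing the table with a fixed label
# (e.g. an ATM VPI/VCI or a learned MAC entry) -- an O(1) operation.
switch_table = {42: "port-3", 43: "port-7"}
next_hop_switched = switch_table[42]

# Routing: the next hop is found by searching all prefixes for the longest match.
route_table = [(ip_network("10.0.0.0/8"), "port-1"),
               (ip_network("10.1.0.0/16"), "port-2")]
dst = ip_address("10.1.2.3")
best = None
for net, port in route_table:
    if dst in net and (best is None or net.prefixlen > best[0].prefixlen):
        best = (net, port)
next_hop_routed = best[1]    # "port-2": the /16 entry beats the /8 entry
```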
Because of this, many people started thinking about replacing routers with
switches wherever possible, and vendors flooded the market with several products
under the name of switches. To differentiate their products, vendors gave them different
names, such as Layer 3 Switch, IP Switch, Layer 4 Switch, Tag Switch, etc., and
regardless of what a product does, it is called a switch. Therefore it is important to
understand the difference between all these different forms of switches.
It operates at Layer 1 of the OSI networking model. Individual ports are assigned
to different LAN segments as in a bridge. But while such switches are useful for managing
configuration changes, it must be noted that they still propagate contention among
their ports and are therefore different from Layer 2 bridges.
A Layer 2 switch is just another name for a multiport bridge. As we know, bridges
are used to extend LANs without extending the contention domain. So Layer 2
switches have been used in some places to replace routers, connecting various
LANs to produce one big flat network. But the problem with this approach was the
broadcast traffic, which is propagated across all ports of a Layer 2 switch. To solve
this problem, people soon came up with the concept of the "Virtual LAN" or VLAN.
The basic feature of a VLAN is to divide one large LAN connected by Layer 2 switches
into many independent and possibly overlapping LANs. This is done by limiting the
forwarding of packets in these switches, and there are several ways of doing this
(a small sketch follows the list below):
Port-based grouping: A packet coming in on a certain port may be forwarded only to
a subset of all the ports.
Layer 2 address-based grouping: The set of output ports is decided by looking at the
Layer 2 address of the packet.
Layer 3 protocol-based grouping: Bridges can also segregate traffic based on
the Protocol Type field of the packet.
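The sketch below illustrates how these three grouping rules might be combined when deciding where a frame may be forwarded; the port numbers, MAC address and EtherType values are invented for the example.

```python
# Toy VLAN forwarding restriction combining the three grouping styles above.
def allowed_output_ports(in_port, src_mac, ethertype):
    port_groups = {1: {1, 2, 3}, 4: {4, 5}}            # port-based grouping
    mac_groups = {"aa:bb:cc:00:00:01": {1, 2}}         # layer 2 address-based grouping
    proto_groups = {0x0800: {1, 2, 3, 4, 5},           # layer 3 protocol-based (IPv4)
                    0x86DD: {1, 2}}                    # layer 3 protocol-based (IPv6)
    ports = set(port_groups.get(in_port, set()))
    ports &= mac_groups.get(src_mac, ports)            # restrict further if the MAC is grouped
    ports &= proto_groups.get(ethertype, ports)        # restrict further by protocol type
    return ports - {in_port}                           # never forward back out the same port

print(allowed_output_ports(1, "aa:bb:cc:00:00:01", 0x0800))   # {2}
```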
Some of the latest products in the market perform full routing at very high speeds.
Instead of using a route cache, these products actually perform a complete routing
table search for every packet.
These products are often called Real Gigabit Routers, Gigabit Switching Routers,
etc. By eliminating the route cache, these products have predictable performance
for all traffic at all times, even in the most complex internetworks. Unlike other forms
of Layer 3 switches, these products improve all aspects of routing to gigabit speeds,
and not just a subset. These products are suited for deployment in large-scale carrier
backbones. Some of the techniques used in these products to improve route lookup
are discussed later in this report.
CHAPTER-3
3.1 EFFICIENT ROUTING TABLE SEARCH
One of the major bottlenecks in backbone routers is the need to compute the
longest prefix match for each incoming packet. Data links now operate at
gigabits per second or more and generate nearly 150,000 packets per second at each
interface. New protocols, such as RSVP, require route selection based on Protocol
Number, Source Address, Destination Port and Source Port, and therefore make route
lookup even more time-consuming. The speed of a route lookup algorithm is determined by
the number of memory accesses it requires and the speed of the memory. This should be kept
in mind while evaluating the various route lookup techniques described below.
Each path in the tree from root to leaf corresponds to an entry in the forwarding
table, and the longest prefix match is the longest path in the tree that matches the
destination address of an incoming packet. In the worst case, it takes time
proportional to the length of the destination address to find the longest prefix match.
The main idea in tree-based algorithms is that most nodes require storage for only a
few children instead of all possible ones, and they therefore make frugal use of memory
at the cost of doing more memory lookups. But as memory costs are dropping, these
algorithms are no longer the best ones to use. In this category, the Patricia-tree algorithm is
one of the most common.
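A minimal binary-trie longest-prefix-match routine is sketched below to make this trade-off concrete: memory per node is small, but a lookup may walk up to 32 levels for IPv4. Path compression, which is what distinguishes a Patricia tree, is omitted for brevity, and the example prefixes and port names are hypothetical.

```python
# Minimal binary trie for longest-prefix matching (no Patricia path compression).
class Node:
    def __init__(self):
        self.children = {}       # '0' or '1' -> Node
        self.next_hop = None     # set only if a prefix ends at this node

def insert(root, prefix_bits, next_hop):
    node = root
    for bit in prefix_bits:
        node = node.children.setdefault(bit, Node())
    node.next_hop = next_hop

def longest_prefix_match(root, addr_bits):
    node, best = root, None
    for bit in addr_bits:        # at most 32 steps for an IPv4 address
        if node.next_hop is not None:
            best = node.next_hop
        node = node.children.get(bit)
        if node is None:
            return best
    return node.next_hop if node.next_hop is not None else best

root = Node()
insert(root, "00001010", "port-1")           # 10.0.0.0/8   (hypothetical routes)
insert(root, "0000101000000001", "port-2")   # 10.1.0.0/16
addr = format(0x0A010203, "032b")            # 10.1.2.3 as a 32-bit string
print(longest_prefix_match(root, addr))      # "port-2"
```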
The search starts at table 16; if there is a hit, the algorithm looks for a longer match,
otherwise it looks for a shorter match. But this scheme has a flaw: if the longest match is a
17-bit prefix, there may be no entry in table 16 that leads the search to the higher tables.
Therefore markers are added, which keep track of the best matching shorter prefix. The
algorithm then works as follows. Hash the first 16 bits of the address and look in table 16. If
a marker is found, save the best match recorded in the marker and look for a longer prefix in
table 24. If a prefix is found, save it as the best match and look for a longer prefix in table 24.
On a miss, look for a shorter prefix. Continue until the tables are exhausted.
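A compressed sketch of this binary search over prefix-length hash tables is given below. Only three tables (16, 24 and 32 bits) are shown, and the prefixes, markers and next hops are invented for illustration; a real implementation would hash the prefix bits rather than index a dictionary.

```python
# Binary search over prefix lengths with markers (illustrative table contents).
# Each table maps a prefix (as an integer) to (best_match_so_far, is_marker).
TABLES = {
    16: {0x0A01:   ("port-1", True)},    # 10.1.0.0/16, also a marker toward longer prefixes
    24: {0x0A0102: ("port-2", False)},   # 10.1.2.0/24
    32: {},
}

def lookup(addr):
    lengths = sorted(TABLES)             # [16, 24, 32]
    lo, hi, best = 0, len(lengths) - 1, None
    while lo <= hi:
        mid = (lo + hi) // 2
        length = lengths[mid]
        entry = TABLES[length].get(addr >> (32 - length))
        if entry is not None:
            match, _is_marker = entry
            if match is not None:
                best = match             # remember the best match seen so far
            lo = mid + 1                 # hit (prefix or marker): try longer prefixes
        else:
            hi = mid - 1                 # miss: try shorter prefixes
    return best

print(lookup(0x0A010203))                # 10.1.2.3   -> "port-2"
print(lookup(0x0A01FF01))                # 10.1.255.1 -> "port-1" (via the table 16 entry)
```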
This algorithm has very good performance, and all the details are available in a
paper on the vendor's site. A brief description of how it works is given here. This
algorithm makes use of the fact that most of the prefixes in the route tables of
backbone routers are 24 bits long or shorter. The basic scheme makes use of two tables,
both stored in DRAM. The first table (TBL24) stores all possible route prefixes that are
up to, and including, 24 bits long. Prefixes shorter than 24 bits are expanded, and
multiple 24-bit entries are kept for them. The second table (TBLLong) stores all route
prefixes in the routing table that are longer than 24 bits. Each entry in
TBLLong corresponds to one of the 256 possible longer prefixes that share a single
24-bit prefix in TBL24. The first 24 bits of the destination address are used as an index into the
first table, TBL24, and a single memory read is performed, yielding 2 bytes. If the
first bit equals zero, then the remaining 15 bits describe the next hop. Otherwise, the
remaining 15 bits are multiplied by 256, the product is added to the last 8 bits
of the original destination address, and this value is used as a direct index into
TBLLong, which contains the next hop. The two memory accesses to the different tables
can be pipelined, and the algorithm allows 20 million packets per second to be
processed. Fig 5 shows how the two tables are accessed to find the next hop.
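The sketch below mirrors that two-table layout in software so the two memory reads are easy to see. The table contents, next-hop values and helper functions are invented for illustration, and a real implementation stores both tables as flat arrays in DRAM rather than dictionaries.

```python
# Two-table (TBL24 / TBLLong) lookup as described above (illustrative contents).
TBL24 = {}      # top 24 bits of the address -> 16-bit entry
TBLLONG = {}    # block index * 256 + last 8 bits -> next hop

def add_short_route(prefix24, next_hop):
    TBL24[prefix24] = next_hop            # first bit 0: the entry is the next hop itself

def add_long_route(prefix24, block, hops_by_last_byte):
    TBL24[prefix24] = 0x8000 | block      # first bit 1: remaining 15 bits select a block
    for last_byte, hop in hops_by_last_byte.items():
        TBLLONG[block * 256 + last_byte] = hop

def lookup(addr):                          # addr: IPv4 address as a 32-bit integer
    entry = TBL24.get(addr >> 8)           # first memory read, indexed by top 24 bits
    if entry is None:
        return None
    if entry & 0x8000 == 0:
        return entry                       # short prefix: next hop found in one read
    block = entry & 0x7FFF
    return TBLLONG.get(block * 256 + (addr & 0xFF))   # second read into TBLLong

add_short_route(0x0A0100, 7)                                   # e.g. 10.1.0.0/24 -> hop 7
add_long_route(0x0A0200, block=1, hops_by_last_byte={3: 9})    # e.g. 10.2.0.3/32 -> hop 9
print(lookup(0x0A010063), lookup(0x0A020003))                  # 7 9
```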
This algorithm is widely acclaimed for its performance. The details have not been disclosed,
but here is some information on its capabilities. It can be easily implemented in
hardware and adapted to various link speeds. The memory space required grows with
the number of networks rather than the number of hosts. It allows searches based on
multiple fields of the IP header and also has a deterministic worst-case time to locate
the best route (16 memory accesses).
Other vendors like Pluris and Nexabit also have their own solutions to the route
lookup problem, which offer very high performance, but little is said about
them in their papers. Route lookup is the single most important factor in the design of
high-speed routers, and no vendor wants to share its ideas with anyone else until
its patent is approved.
CHAPTER-4
4.1 ROUTER ARCHITECTURE SERVICES
Providing any form of differentiated services requires the network to keep some
state information. The majority of installed routers use architectures that will
experience degraded performance if they are configured to provide complicated
QOS mechanisms. Therefore the traditional approach was that all the sophisticated
techniques should be in the end systems, and the network should be kept as simple as
possible. But recent research and advances in hardware capabilities have made it
possible to make networks more intelligent.
4.1.1 Queuing
Once the packet header is processed and the next-hop information is known, the packet
is queued before being transmitted on the output link. Switches can be either input
queued or output queued.
Output queued switches require the switch fabric to run at a speed greater than
the sum of the speeds of the incoming links, and the output queues themselves must
run much faster than the input links. This is often difficult to implement
with increasing link speeds. Therefore most switch designs are input queued,
but input queuing suffers from the head-of-line blocking problem: a packet at
the head of an input queue, while waiting for its turn to be transmitted to a busy
output port, can block packets behind it that are destined for an idle output port.
This problem is solved by maintaining per-output queues at each input, also known as
virtual output queuing. A centralized scheduling algorithm then examines the
contents of all the input queues and finds a conflict-free match between inputs and
outputs. But input queuing poses another challenge for scheduling: most packet
scheduling algorithms are specified in terms of output queues, and it is a
non-trivial problem to adapt these algorithms to input queuing.
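A minimal virtual-output-queuing sketch follows: each input keeps one queue per output, and a deliberately naive scheduler picks a conflict-free input/output match each round. The port count and matching policy are placeholders for the example; practical fabrics use iterative schedulers such as iSLIP.

```python
# Virtual output queuing with a naive one-round conflict-free matcher (a sketch).
from collections import deque

N_PORTS = 4
voq = [[deque() for _ in range(N_PORTS)] for _ in range(N_PORTS)]   # voq[input][output]

def enqueue(in_port, out_port, packet):
    voq[in_port][out_port].append(packet)   # a packet waits only behind same-output traffic

def schedule_one_round():
    """Grant at most one packet per input and per output for this fabric cycle."""
    busy_inputs, grants = set(), []
    for out in range(N_PORTS):
        for inp in range(N_PORTS):
            if inp not in busy_inputs and voq[inp][out]:
                grants.append((inp, out, voq[inp][out].popleft()))
                busy_inputs.add(inp)
                break
    return grants

enqueue(0, 2, "pkt-A")        # input 0 has traffic for output 2...
enqueue(0, 3, "pkt-B")        # ...and for output 3, held in a separate queue
enqueue(1, 3, "pkt-C")
print(schedule_one_round())   # [(0, 2, 'pkt-A'), (1, 3, 'pkt-C')]
```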
CHAPTER-5
5.1 SURVEY OF PRODUCTS
This section provides a survey of the terabit and gigabit capacity routers available
in the market. A comparative analysis of the major products classifies them into
various categories based on architectural design as well as performance. Later, a
slightly more detailed description of two state-of-the-art products is also given.
This competitive study identifies the key router vendors and maps each of them
onto the landscape of edge, mid-core, and core routing requirements. In addition, the
study provides an overview of the critical line capacity and total switching capacity
requirements for edge and core environments and compares the various architectural
approaches being used to address these performance needs. Much of the data and
many of the ideas in this section are borrowed from a white paper on the site of Pluris
Corporation, which has one of the leading products in this category.
Terabit routing devices are designed with the aggregate line capacity to handle
thousands of gigabits per second and to provide ultra-scalable performance and high
port density. These routers can support port interface speeds as high as OC-192 (10
Gbps) and beyond. Currently the leading terabit routing vendors include Pluris and Avici.
Switching Capacity: The switching capacity of a system consists of the total
bandwidth for all line-card connections and internal switching connections
throughout the system. The switching capacity should be substantially higher than
the line capacity to ensure non-blocking switching between any two ports.
Additional switching capacity is also needed to provide active redundancy and a
higher level of fault tolerance. Therefore switching capacity includes: bandwidth
used for line-card connections; bandwidth available for modular expansion of line-card
connections; bandwidth for non-blocking intra-chassis switching; bandwidth for
non-blocking inter-chassis switching and modular multi-chassis expansion; and the
aggregate bandwidth needed to support redundancy and fault tolerance.
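As a back-of-the-envelope illustration of the relationship between line capacity and switching capacity, the sketch below adds duplex, expansion and redundancy headroom to an aggregate port rate; the 2x duplex factor and the headroom multipliers are assumptions chosen for the example, not figures from the vendors discussed here.

```python
# Rough sizing of switching capacity relative to line capacity (illustrative).
def required_switching_capacity_gbps(ports, port_rate_gbps,
                                     expansion_headroom=1.25,
                                     redundancy_headroom=1.2):
    line_capacity = ports * port_rate_gbps          # sum of external port rates
    duplex = 2 * line_capacity                      # traffic both enters and leaves the fabric
    return duplex * expansion_headroom * redundancy_headroom

print(required_switching_capacity_gbps(64, 10))     # 64 x OC-192 -> about 1920 Gbps of fabric
```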
(Table excerpt; column headings lost in extraction)
Lucent PacketStar 6416: 60, 40, 16, OC-3/12/48, 16, NA
NetCore Everest: 20, 10, 4, OC-3/12/48, 4, NA
CHAPTER-6
APPLICATIONS
Nexabit NX64000
The NX64000's innovative switch fabric delivers 6.4 Tbps of switch capacity per
chassis. The NX64000 supports up to 192 OC-3, 96 OC-12, 64 OC-48 and 16 OC-192
lines. The NX64000 allows service providers to scale even to higher speeds like
OC-768 and OC-3072, and port densities can also be increased further by
interconnecting multiple chassis. The NX64000 implements a distributed
programmable hardware forwarding engine on each line card. This approach
facilitates wire-speed route lookup for full 32-bit network prefixes, even at OC-192
rates. The distributed model enables support for 128 independent forwarding tables
on each line card. Each forwarding engine is capable of storing over one million
entries. One of the unique features of the NX64000 is its ability to support IP-in-IP
tunnel encapsulation and decapsulation at line rates.
The NX64000 is the only product in the market that can guarantee a delay of
40 microseconds for variable-sized packets, independent of packet size and
type, and is therefore capable of providing ATM-comparable QOS in an IP world.
CHAPTER-7
ADVANTAGES
1. Terabit networks offer high flexibility, efficiency and transparency.
2. Improved network management and operation costs.
3. Multi-protocol support.
4. Rapid service recovery.
5. Authentication, authorization and accounting.
DISADVANTAGES
1. Terabit networks have high circuit complexity.
2. Very precise and high-cost hardware.
3. Supports only 16-channel OC-192 DWDM connections.
4. High maintenance.
CHAPTER-8
FUTURE SCOPE
Terabit switching and routing provides the capacity and bandwidth to meet:
Increasing customer demands for data and voice communications.
Future Internet growth in high-quality video.
E-commerce applications.
Multi-terabit transmission.
CONCLUSION
It is now very clear that, with the deployment of more and more fiber and
improvements in DWDM technology, terabit-capacity routers are required to convert
the abundant raw bandwidth into useful bandwidth. These routers require fast
switched backplanes and multiple forwarding engines to eliminate the bottlenecks
present in traditional routers. The ability to efficiently support differentiated services
is another feature that will be used, along with total switching capacity, to evaluate
these routers. Switching is faster than routing, and many products in the market
combine some form of switching with routing functionality to improve
performance, so it is important to understand what a product actually does. But the
products which scale up all aspects of routing, rather than a subset of them, are
bound to perform better with arbitrary traffic patterns. Route lookup is the major
bottleneck in the performance of routers, and many efficient solutions are being
proposed to improve it. Supporting differentiated services at such high interface
speeds poses new challenges for the design of router architectures, and some
solutions have been discussed here. Finally, a survey of the leading market products has been
presented.
REFERENCES
[Decis97] Decisys, "Route Once, Switch Many," July 1999, 23 pages,
http://www.netreference.com/PublishedArchive/WhitePapers/WPIndex.html
[NexNeed] Nexabit, "The New Network Infrastructure: The Need for Terabit
Switch/Routers," 1999, 11 pages, http://www.nexabit.com/need.html
[NexSup] Nexabit, "Will The New Super Routers Have What It Takes," 1999, 12 pages,
http://www.nexabit.com/architecture.pdf
[Craig99] Craig Partridge, "Designing and Building Gigabit and Terabit Internet Routers,"
Networld+Interop 99, May 1999.