
SEMINAR REPORT ON

TERABIT SWITCHING AND ROUTING

A SEMINAR REPORT SUBMITTED IN PARTIAL FULFILLMENT OF THE


REQUIREMENT FOR THE AWARD OF THE DEGREE OF

BACHELOR OF TECHNOLOGY
IN
ELECTRONICS AND COMMUNICATION ENGINEERING

SUBMITTED BY

P. PRASHANTH KUMAR
20S15A0418

DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING

MALLA REDDY INSTITUTE OF TECHNOLOGY AND SCIENCE


(SPONSORED BY MALLA REDDY EDUCATIONAL SOCIETY)
Permanently Affiliated to JNTUH & Approved by AICTE, New Delhi

NAAC with ‘A’ Grade, NBA Accredited, An ISO 9001:2015 Certified, Approved by UK
Accreditation Centre, Granted Status of 2(f) & 12(b) under UGC Act, 1956, Govt. of India

MAISAMMAGUDA, DHULAPALLY, SECUNDERABAD-500100


(2020-2023)
MALLA REDDY INSTITUTE OF TECHNOLOGY AND SCIENCE

(SPONSORED BY MALLA REDDY EDUCATIONAL SOCIETY)


Permanently Affiliated to JNTUH & Approved by AICTE, New Delhi

NAAC with ‘A’ Grade, NBA Accredited, An ISO 9001:2015 Certified, approved by UK
Accreditation Centre, Granted Status of 2(f) & 12(b) under UGC Act. 1956, Govt. of India
MAISAMMAGUDA, DHULAPALLY, SECUNDERABAD-500100

BONAFIDE CERTIFICATE

This is to certify that the seminar report entitled "TERABIT SWITCHING AND
ROUTING" is being submitted by P. PRASHANTH KUMAR, bearing roll no. 20S15A0418, in
partial fulfillment of the requirements for the award of the Degree of Bachelor of Technology in
ELECTRONICS AND COMMUNICATION ENGINEERING, during the academic year 2022-2023.

It is further certified that, to the best of our knowledge, the work reported here is not part of any
other project on the basis of which a degree or award has been conferred on an earlier occasion to
any other candidate. The results have been verified and found to be satisfactory.

Head of the Department


Mr. K.Y. SRINIVAS
(Associate Professor)
ACKNOWLEDGEMENT

We express our wholehearted gratitude to Dr. K. Ravindra, Principal and
Professor, Department of Electronics and Communication Engineering, Malla Reddy
Institute of Technology and Science, for providing us a conducive environment for
carrying out our academic schedules and projects with ease.

We thank Mr. K.Y. Srinivas, Associate Professor and Head, Department of
Electronics and Communication Engineering, for his seamless support and knowledge
during our B.Tech course and for providing the right suggestions at every phase of the
development of our project.

We sincerely thank all the staff members, friends and parents without whose
support this project would have been delayed.

ABSTRACT

Just a few years back, no one would have thought that internet traffic would increase
at such a rapid rate that even gigabit capacity routers in the backbone would be
insufficient to handle it. Today, routers with terabit switching capacities have become
an essential requirement of the core backbone networks, and gigabit routers find their
place at the mid-core or even the edge.

With the deployment of more and more fiber and improvements in DWDM technology,
terabit capacity routers are required to convert the abundant raw bandwidth into useful
bandwidth. These routers require fast switched backplanes and multiple forwarding
engines to eliminate the bottlenecks present in traditional routers.

The ability to efficiently support differentiated services is another feature that, along
with total switching capacity, will be used to evaluate these routers. This report explains
the issues in designing terabit routers and the solutions to them, and also discusses some
of the key products in this area.

CONTENTS

ACKNOWLEDGEMENT iii

ABSTRACT iv

LIST OF FIGURES vii

ABBREVIATION viii

CHAPTER-1 INTRODUCTION 2-6

1.1 Network Infrastructure 2-3

1.2 Architecture of Internet Routers 4

1.2.1 Router Functions 4

1.2.2 Assessing Router performance 5

1.3 Evolution of present day routers 6

CHAPTER-2 LITERATURE REVIEW 7-17

2.1 Router Architecture with Switched Backplane 7-8

2.1.1 Switching vs Routing 8-9

2.1.2 Switching Hubs 9

2.1.3 Layer 2 Switching 9-10

2.1.4 Layer 3 Switching 10

2.1.5 Route Caching 10-11

2.1.6 Full Routing 11

CHAPTER-3 EFFICIENT ROUTING TABLE SEARCH 12-14

3.1.1 Tree based Algorithms 12

3.1.2 WASHU Algorithm 12-13

3.1.3 Stanford University Algorithm 13

3.1.4 ASIK Algorithm 14

CHAPTER-4 ROUTER ARCHITECTURE SERVICES 15

4.1.1 Queuing 15

4.1.2 Optimized Packet Processing 16

CHAPTER-5 SURVEY OF PRODUCTS 17-19

5.1.1 Line Capacity and Total Switching Capacity 17

5.1.2 Comparative Product Positioning 18-19

CHAPTER-6 APPLICATIONS 20-21

CHAPTER-7 ADVANTAGES AND DISADVANTAGES 22

CHAPTER-8 FUTURE SCOPE & CONCLUSION 23

REFERENCES 24

LIST OF FIGURES

Figure 1.1: Present Day Network Infrastructure 2

Figure 1.2: Architecture of Earliest Routers 6

Figure 2.1: Router Architecture with Intelligence on Each Line Card 7

Figure 2.2: Router Architecture with Switched Backplane 8

Figure 3.1: Stanford University's Algorithm for Efficient Route Lookup 13

Figure 5.1: Architecture of Tiny Tera 20

ABBREVIATION

DWDM : Dense Wavelength Division Multiplexing

ASIC : Application Specific Integrated Circuit

CPU : Central Processing Unit

IP : Internet Protocol

OC : Optical Carrier

ATM : Asynchronous Transfer Mode

DRAM : Dynamic Random Access Memory

OSI : Open System Interconnection

VLAN : Virtual Local Area Network

QOS : Quality of Service


CHAPTER 1
INTRODUCTION
1.1 NETWORK INFRASTRUCTURE

In the present network infrastructure, the world's communication service providers
are laying fiber at very rapid rates, and most of the fiber connections are now
terminated using DWDM. The combination of fiber and DWDM has made raw
bandwidth available in abundance: 64-channel OC-192 fibers are not uncommon these
days, and OC-768 speeds will be available soon. Terabit routing technologies are
required to convert these massive amounts of raw bandwidth into usable bandwidth.

Figure 1.1: Present Day Network Infrastructure

Present day network infrastructure is shown in Fig 1.1. Currently, Add/Drop
multiplexers are used to spread a high-speed optical interface across multiple
lower-capacity interfaces of traditional routers. But carriers require high-speed
router interfaces that can connect directly to the high-speed DWDM equipment to
ensure optical interoperability.


This will also remove the overhead associated with the extra technologies and
enable more economical and efficient wide area communications. As the number of
channels transmitted on each fiber increases with DWDM, routers must also scale their
port densities to handle all those channels. With the increase in interface speeds as
well as port density, the next thing routers need to improve is internal switching
capacity: a 64-channel OC-192 system requires over a terabit of switching capacity.
As an example, a current state-of-the-art gigabit router with 40 Gbps of switch
capacity can support only a 4-channel OC-48 DWDM connection. Four such routers are
required to support a 16-channel OC-48 DWDM connection, and 16 of them are
required to support a 16-channel OC-192 DWDM connection, with a layer of sixteen
4:1 SONET Add/Drop Multiplexers in between. In comparison, a single router with
terabit switching capacity can support a 16-channel OC-192 DWDM connection by itself.
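To make these figures concrete, the arithmetic can be sketched as follows (an illustrative calculation only, assuming nominal OC-48/OC-192 rates of 2.5 Gbps and 10 Gbps and a simple 2x fabric headroom factor for ingress-plus-egress traffic; actual designs budget additional margin for redundancy):

OC48_GBPS = 2.5     # OC-48 ~ 2.488 Gbps, rounded for illustration
OC192_GBPS = 10.0   # OC-192 ~ 9.953 Gbps, rounded for illustration

def required_switch_capacity_gbps(channels, per_channel_gbps, headroom=2.0):
    # Aggregate line rate times a headroom factor: traffic crosses the fabric
    # from ingress to egress, and extra margin is kept for speedup/redundancy.
    return channels * per_channel_gbps * headroom

print(required_switch_capacity_gbps(64, OC192_GBPS))  # 1280.0 Gbps -> over a terabit
print(required_switch_capacity_gbps(4, OC48_GBPS))    # 20.0 Gbps -> fits a 40 Gbps fabric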


1.2. THE ARCHITECTURE OF INTERNET ROUTERS

This section gives a general introduction to the architecture of routers and the
functions of their various components. This is important for understanding the
bottlenecks in achieving high-speed routing and how these are handled in the design
of the gigabit and terabit capacity routers available in the market today.

1.2.1 Router Functions

Functions of a router can be broadly classified into two main categories:

1. Datapath Functions: These functions are applied to every datagram that reaches
the router and is successfully routed without being dropped at any stage. The main
functions in this category are the forwarding decision, forwarding through the
backplane, and output link scheduling.

2. Control Functions: These functions mainly include system configuration,
management, and updating of routing table information. They do not apply to every
datagram and are therefore performed relatively infrequently. The goal in designing
high-speed routers is to increase the rate at which datagrams are routed, so the
datapath functions are the ones to be improved to enhance performance.


1.2.2 Assessing Router Performance

In this section, several parameters are listed which can be used to grade the
performance of new generation router architectures. These parameters reflect the
exponential traffic growth and the convergence of voice, video and data.

High packet transfer rate: Increasing internet traffic makes the packets-per-second
capacity of a router the single most important parameter for grading its
performance. Further, considering the exponential growth of traffic, the capacity of
routers must be scalable.

Multi-service support: Most of the network backbones support both ATM and IP
traffic and will continue to do so as both technologies have their advantages.
Therefore routers must support ATM cells, IP frames and other network traffic types
in their native modes, delivering full efficiency of the corresponding network type.

Guarantee short deterministic delay: Real time voice and video traffic require
short and predictable delay through the system. Unpredictable delay results in a
discontinuity which is not acceptable for these applications.

Quality of Service: Routers must be able to support service level agreements,
guaranteed line rates, and differentiated quality of service for different applications,
or flows. This quality of service support must be configurable.

Multicast Traffic: Internet traffic is changing from predominantly point-to-point
to multicast, and therefore routers must support a large number of simultaneous
multicast transmissions.


1.3 EVOLUTION OF PRESENT DAY ROUTERS

The architecture of the earliest routers was based on that of a computer, as shown
in Fig 1.2. It has a shared central bus, a central CPU, memory and line cards for the
input and output ports. The line cards provide MAC-layer functionality and connect to
the external links. Each incoming packet is transferred to the CPU across the shared
bus, the forwarding decision is made there, and the packet then traverses the shared
bus again to reach the output port.

The performance of these routers is limited mainly by two factors:

First: the processing power of the central CPU, since the route table search is a highly
time-consuming task; and

Second: the fact that every packet has to traverse the shared bus twice.

Fig 1.2. Architecture of Earliest Routers


CHAPTER 2
LITERATURE REVIEW
2.1 ROUTER ARCHITECTURE WITH SWITCHED BACKPLANE

To improve on the earliest designs, some router vendors introduced parallelism by
having multiple CPUs, with each CPU handling a portion of the incoming traffic. But
each packet still has to traverse the shared bus twice. Very soon, router architecture
advanced one step further, as shown in Fig 2.1: a route cache and processing power
are provided at each interface, forwarding decisions are made locally, and each packet
traverses the shared bus only once, from the input port to the output port.

Fig 2.1. Router Architecture with intelligence on each line card

Even though CPU performance improved with time, it could not keep pace with
the increase in the line capacity of the physical links, and it became impossible for a
CPU to make forwarding decisions for the millions of packets per second arriving on
each input link.


Therefore, special-purpose ASICs (Application Specific Integrated Circuits) are
now placed on each interface; these outperform a CPU in making forwarding
decisions, managing queues and arbitrating access to the bus. But the use of a shared
bus still allowed only one packet at a time to move from an input port to an output
port. Finally, this last architectural bottleneck was eliminated by replacing the shared
bus with a crossbar switch, so multiple line cards can now communicate with each
other simultaneously. Fig 2.2 shows the router architecture with a switched backplane.

Fig 2.2. Router Architecture with switched backplane

2.1.1 Switching vs Routing

The basic difference between switching and routing is that switching uses
'indexing' into the address table to determine the next hop for a packet, whereas
routing uses 'searching'. Since indexing is an O(1) operation, it is much faster than any
search technique.
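The contrast can be illustrated with a small sketch (the MAC table and route prefixes below are made up for the example):

from ipaddress import ip_address, ip_network

# Switching: the destination (e.g. a MAC address or a label) indexes the
# table directly -- a single O(1) dictionary/array access.
mac_table = {"00:1b:44:11:3a:b7": 3, "00:1b:44:11:3a:b8": 7}
out_port = mac_table.get("00:1b:44:11:3a:b7")          # -> port 3

# Routing: the destination IP must be compared against the stored prefixes
# and the longest matching one wins -- a search, not an index.
route_table = [
    (ip_network("10.0.0.0/8"), 1),
    (ip_network("10.1.0.0/16"), 2),
    (ip_network("0.0.0.0/0"), 0),       # default route
]

def longest_prefix_match(dst):
    dst = ip_address(dst)
    matches = [(net.prefixlen, port) for net, port in route_table if dst in net]
    return max(matches)[1]              # longest prefix wins

print(out_port, longest_prefix_match("10.1.2.3"))       # -> 3 2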


Because of this, many people started thinking about replacing routers with
switches wherever possible, and vendors flooded the market with products under the
name of switches. To differentiate their products, vendors gave them different names
such as Layer 3 Switch, IP Switch, Layer 4 Switch and Tag Switch, and regardless of
what a product does, it is called a switch. It is therefore important to understand the
difference between all these forms of switches.

2.1.2 Switching Hubs

A switching hub operates at Layer 1 of the OSI networking model. Individual ports
are assigned to different LAN segments, as in a bridge. But while switching hubs are
useful for managing configuration changes, they still propagate contention among
their ports and are therefore different from Layer 2 bridges.

2.1.3 Layer 2 Switching

A Layer 2 switch is just another name for a multiport bridge. As we know, bridges
are used to extend LANs without extending the contention domain, so Layer 2
switches have been used in some places to replace routers, connecting various LANs
to produce one big, flat network. But the problem with this approach was broadcast
traffic, which is propagated across all ports of a Layer 2 switch. To solve this problem,
people soon came up with the concept of the "Virtual LAN", or VLAN. The basic
feature of a VLAN is to divide one large LAN connected by Layer 2 switches into
many independent and possibly overlapping LANs. This is done by limiting the
forwarding of packets in these switches, and there are several ways of doing so:

 Port based grouping: A packet arriving on a certain port may be forwarded to only
a subset of all the ports.
 Layer 2 address based grouping: The set of output ports is decided by looking at
the Layer 2 address of the packet.
 Layer 3 protocol based grouping: Bridges can also segregate traffic based on
the Protocol Type field of the packet.


2.1.4 Layer 3 Switching

There is no consistent definition of "Layer 3 Switches", and the term refers to a
wide variety of products. The only thing these devices have in common is that they
use Layer 3 information to forward packets. Therefore, as discussed in the previous
section, even Layer 2 VLAN switches with protocol/subnet awareness are sometimes
referred to as Layer 3 VLAN switches.

2.1.5 Route Caching

Since the number of internet hosts is increasing at an exponential rate, it is not
possible to have an entry for each of them in every routing table. Therefore, routers
combine many entries that share the same next hop, but this worsens the already
complex task of route search. To improve route lookup time, many products keep a
route cache of frequently seen addresses. When an address is not found in the cache,
the search goes through the traditional software-based slow path. Many products
in this category combine Layer 2 switching features with route-cache-based routing,
and vendors have named them Layer 3 Switches, Multilayer Switches, Routing
Switches and Switching Routers. Cache sizes range from 2,000 to 64,000 entries. Most
of these products have a processor-based slow path for looking up routes on cache
misses, but a few of them take the help of an external router to perform these functions
and are sometimes referred to as "Layer 3 Learning Bridges". The route cache technique
scales poorly with routing table size and cannot be used for backbone routers that
support large routing tables. Frequent topology changes and random traffic patterns
also eliminate any benefit from the route cache, and performance is then bounded by
the speed of the slow path.
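A minimal sketch of this fast-path/slow-path split, assuming a hypothetical full_lookup() routine that performs the complete routing-table search:

from collections import OrderedDict

class RouteCache:
    def __init__(self, capacity, full_lookup):
        self.capacity = capacity          # e.g. 2,000 to 64,000 entries
        self.full_lookup = full_lookup    # slow path (software table search)
        self.cache = OrderedDict()        # destination -> next hop, LRU order

    def next_hop(self, dst):
        if dst in self.cache:             # fast path: cache hit
            self.cache.move_to_end(dst)
            return self.cache[dst]
        hop = self.full_lookup(dst)       # slow path: full routing-table search
        self.cache[dst] = hop
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used entry
        return hop

# A topology change must flush the cache, which is why performance degrades
# with frequent route updates or random traffic:
#   cache.cache.clear()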

2.1.6 Full Routing

Some of the latest products in the market perform full routing at very high speeds.
Instead of using a route cache, these products actually perform a complete routing
table search for every packet.


These products are often called Real Gigabit Routers, Gigabit Switching Routers,
etc. By eliminating the route cache, these products have predictable performance for
all traffic at all times, even in the most complex internetworks. Unlike other forms of
Layer 3 switches, these products improve all aspects of routing to gigabit speeds, not
just a subset, and they are suited for deployment in large-scale carrier backbones.
Some of the techniques used in these products to improve route lookup are discussed
later in this report.


CHAPTER-3
3.1 EFFICIENT ROUTING TABLE SEARCH

One of the major bottlenecks in backbone routers is the need to compute the
longest prefix match for each incoming packet. Data links now operate at gigabits per
second or more and generate nearly 150,000 packets per second at each interface. New
protocols, such as RSVP, require route selection based on the Protocol Number,
Source Address, Destination Port and Source Port, which makes the lookup even more
time consuming. The speed of a route lookup algorithm is determined by the number
of memory accesses and the speed of the memory. This should be kept in mind while
evaluating the various route lookup techniques described below.

3.1.1 Tree-based Algorithms

Each path in the tree from the root to a leaf corresponds to an entry in the
forwarding table, and the longest prefix match is the longest path in the tree that
matches the destination address of an incoming packet. In the worst case, it takes time
proportional to the length of the destination address to find the longest prefix match.
The main idea in tree-based algorithms is that most nodes require storage for only a
few children instead of all possible ones; they therefore make frugal use of memory at
the cost of doing more memory lookups. But as memory costs drop, these algorithms
are no longer the best ones to use. In this category, the Patricia-tree algorithm is one
of the most common.
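A minimal sketch of a binary-trie lookup illustrates the idea (a plain, uncompressed trie with made-up routes; a Patricia tree additionally collapses chains of one-child nodes, which is omitted here):

class TrieNode:
    __slots__ = ("children", "next_hop")
    def __init__(self):
        self.children = [None, None]   # the 0-branch and the 1-branch
        self.next_hop = None           # set if a prefix ends at this node

def insert(root, prefix_bits, next_hop):
    node = root
    for bit in prefix_bits:            # one level per prefix bit
        i = int(bit)
        if node.children[i] is None:
            node.children[i] = TrieNode()
        node = node.children[i]
    node.next_hop = next_hop

def lookup(root, addr_bits):
    node, best = root, None
    for bit in addr_bits:              # at most 32 memory accesses for IPv4
        if node.next_hop is not None:
            best = node.next_hop       # remember the longest match seen so far
        node = node.children[int(bit)]
        if node is None:
            break
    else:
        if node.next_hop is not None:
            best = node.next_hop
    return best

root = TrieNode()
insert(root, "00001010", "A")                  # 10.0.0.0/8
insert(root, "0000101000000001", "B")          # 10.1.0.0/16
print(lookup(root, "00001010000000010000001000000011"))   # 10.1.2.3 -> "B"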

3.1.2 WASHU Algorithm

Developed at Washington University in St. Louis, it is a scalable algorithm based
on hashing. The algorithm computes a separate hash table for each possible prefix
length and therefore maintains 33 hash tables in total. Instead of starting from the
longest possible prefix, a binary search on the prefix lengths is performed.


The search starts at table 16: if there is a hit, look for a longer match; otherwise,
look for a shorter match. But this basic scheme has a flaw: if the longest match is, say,
a 17-bit prefix, there may be no entry in table 16 that leads the search to the longer
tables. Therefore markers are added, which also record the best matching shorter
prefix. The algorithm then works as follows: hash the first 16 bits and look in table 16.
If a marker is found, save the best match recorded in the marker and look for a longer
prefix at table 24. If a prefix is found, save that prefix as the best match and look for a
longer prefix at table 24. On a miss, look for a shorter prefix. Continue until the tables
are exhausted.
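A simplified sketch of the scheme follows, using made-up routes. For brevity it plants a marker at every shorter length of each route and stores the best matching real prefix with each marker; the published algorithm places markers only at the lengths the binary search can actually probe:

from collections import defaultdict

tables = defaultdict(dict)     # prefix length -> {bit string: best matching next hop}
routes = [("00001010", "A"),                 # 10.0.0.0/8
          ("0000101000000001", "B"),         # 10.1.0.0/16
          ("00001010000000010", "C")]        # a 17-bit prefix under 10.1.0.0

def best_real_match(bits):
    # Longest real route that is a prefix of the given bit string (may be None).
    hits = [(len(p), hop) for p, hop in routes if bits.startswith(p)]
    return max(hits)[1] if hits else None

for prefix, hop in routes:                   # insert the real prefixes
    tables[len(prefix)][prefix] = hop
for prefix, _ in routes:                     # plant markers at shorter lengths
    for m in range(1, len(prefix)):
        tables[m].setdefault(prefix[:m], best_real_match(prefix[:m]))

def lookup(addr_bits):
    low, high, best = 1, 32, None
    while low <= high:
        mid = (low + high) // 2              # the first probe is at length 16
        key = addr_bits[:mid]
        if key in tables[mid]:               # hit on a real prefix or a marker
            if tables[mid][key] is not None:
                best = tables[mid][key]      # marker carries the best match so far
            low = mid + 1                    # a hit means: try longer prefixes
        else:
            high = mid - 1                   # a miss means: try shorter prefixes
    return best

print(lookup("00001010" + "00000001" + "00000010" + "00000011"))   # 10.1.2.3 -> "C"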

3.1.3 Stanford University's Algorithm

This algorithm has very good performance, and the full details are available in a
paper on the group's web site; a brief description of how it works is given here. The
algorithm exploits the fact that most of the prefixes in the route tables of backbone
routers are shorter than 24 bits. The basic scheme uses two tables, both stored in
DRAM. The first table (TBL24) stores all possible route prefixes that are up to, and
including, 24 bits long; prefixes shorter than 24 bits are expanded, and multiple 24-bit
entries are kept for them. The second table (TBLLong) stores all route prefixes in the
routing table that are longer than 24 bits. Each entry in TBLLong corresponds to one
of the 256 possible longer prefixes that share a single 24-bit prefix in TBL24. The first
24 bits of the address are used as an index into the first table, TBL24, and a single
memory read is performed, yielding 2 bytes. If the first bit equals zero, then the
remaining 15 bits describe the next hop. Otherwise, the remaining 15 bits are
multiplied by 256, the product is added to the last 8 bits of the original destination
address, and this value is used as a direct index into TBLLong, which contains the next
hop. The two memory accesses go to different tables and can therefore be pipelined,
allowing 20 million packets per second to be processed. Fig 3.1 shows how the two
tables are accessed to find the next hop.


Fig 3.1. Stanford University's Algorithm for Efficient Route Lookup
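The two-table access pattern described above can be sketched as follows (an illustrative toy, using Python arrays in place of the two DRAM banks; route insertion is simplified and, unlike the real scheme, does not preserve shorter routes that overlap a longer one):

from array import array

TBL24 = array("H", [0]) * (1 << 24)    # one 16-bit entry per possible 24-bit prefix
TBLLONG = array("H", [0]) * (1 << 16)  # 256-entry blocks for prefixes longer than 24 bits
LONG_FLAG = 0x8000                     # top bit set -> remaining 15 bits index TBLLong
next_block = 0                         # next free 256-entry block in TBLLong

def add_short_route(prefix, plen, next_hop):
    # Install a route of length <= 24 by expanding it into 24-bit entries.
    base = prefix << (24 - plen)
    for i in range(1 << (24 - plen)):
        TBL24[base + i] = next_hop             # top bit clear: entry is the next hop

def add_long_route(prefix, plen, next_hop):
    # Install a route of length > 24 (simplified sketch).
    global next_block
    idx24 = prefix >> (plen - 24)
    if TBL24[idx24] & LONG_FLAG == 0:
        TBL24[idx24] = LONG_FLAG | next_block  # point this /24 at a TBLLong block
        next_block += 1
    block = TBL24[idx24] & 0x7FFF
    low = (prefix << (32 - plen)) & 0xFF       # lowest address byte covered by the route
    for i in range(1 << (32 - plen)):
        TBLLONG[block * 256 + low + i] = next_hop

def lookup(addr):
    entry = TBL24[addr >> 8]                   # first memory access
    if entry & LONG_FLAG == 0:
        return entry                           # next hop found directly in TBL24
    return TBLLONG[(entry & 0x7FFF) * 256 + (addr & 0xFF)]   # second memory access

add_short_route(0x0A01, 16, 7)           # 10.1.0.0/16 -> next hop 7
add_long_route(0x0A010200 >> 6, 26, 9)   # 10.1.2.0/26 -> next hop 9
print(lookup(0x0A010203), lookup(0x0A01FF01))   # -> 9 7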

3.1.4 ASIK Algorithm

This algorithm is widely acclaimed for its performance. Its details are not disclosed,
but here is some information on its capabilities. It can be easily implemented in
hardware and adapted to various link speeds. The memory space required grows with
the number of networks rather than the number of hosts. It allows searches based on
multiple fields of the IP header and also has a deterministic worst-case time to locate
the best route (16 memory accesses).

Other vendors such as Pluris and Nexabit also have their own high-performance
solutions to the route lookup problem, but little is said about them in their papers.
Route lookup is the single most important element in the design of high-speed routers,
and no vendor wants to share its ideas with anyone else until its patents are approved.


CHAPTER-4
4.1 ROUTER ARCHITECTURE SERVICES

Providing any form of differentiated services requires the network to keep some
state information. The majority of installed routers use architectures that experience
degraded performance if they are configured to provide complicated QOS
mechanisms. Therefore, the traditional approach was to put all the sophisticated
techniques in the end systems and keep the network as simple as possible. But recent
research and advances in hardware capabilities have made it possible to make
networks more intelligent.

4.1.1 Queuing

Once the packet header is processed and the next-hop information is known, the
packet is queued before being transmitted on the output link. Switches can be either
input queued or output queued.

Output queued switches require the switch fabric to run at a speed greater than
the sum of the speeds of the incoming links, and the output queues themselves must
run at a speed much faster than the input links. This is often difficult to implement
with increasing link speeds. Therefore, most switch designs are input queued, but input
queuing suffers from the head-of-line blocking problem: a packet at the head of an
input queue, while waiting for its turn to be transmitted to a busy output port, can
block packets behind it that are destined for an idle output port. This problem is solved
by maintaining a separate queue per output at each input, also known as virtual output
queuing. A centralized scheduling algorithm then examines the contents of all the
input queues and finds a conflict-free match between inputs and outputs. But input
queuing poses another challenge for scheduling: most packet scheduling algorithms
are specified in terms of output queues, and it is a non-trivial problem to adapt these
algorithms to input queuing.
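A minimal sketch of virtual output queuing with a toy greedy scheduler (illustrative only; real schedulers such as PIM or iSLIP perform this matching in hardware every cell time):

from collections import deque

N = 4                                                    # a 4-port switch for illustration
voq = [[deque() for _ in range(N)] for _ in range(N)]    # voq[input][output]

def enqueue(inp, out, cell):
    voq[inp][out].append(cell)

def schedule_one_slot():
    # Greedily pick a conflict-free set of (input, output) pairs with
    # non-empty queues: each input sends at most one cell, each output
    # receives at most one cell, per time slot.
    used_in, used_out, matches = set(), set(), []
    for i in range(N):
        for o in range(N):
            if i in used_in or o in used_out:
                continue
            if voq[i][o]:
                matches.append((i, o, voq[i][o].popleft()))
                used_in.add(i)
                used_out.add(o)
    return matches

enqueue(0, 2, "cell-A")      # input 0 holds traffic for output 2 ...
enqueue(0, 3, "cell-B")      # ... and for output 3: neither blocks the other
enqueue(1, 2, "cell-C")
print(schedule_one_slot())   # [(0, 2, 'cell-A')] -- cell-B and cell-C go out next slot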


4.1.2 Optimized Packet Processing


Increasing link capacities and the need for differentiated services stretch processor-based
architectures to the limit. Therefore, multiprocessor architectures with several
forwarding engines have been designed. Another efficient solution, based on a
functional partitioning of packet processing, is outlined below; a small sketch of this
partitioning follows the list:

 Buffer and forward packets through some switching fabric.
 Apply filtering and packet classification.
 Determine the next hop of the packet.
 Queue the packet in an appropriate queue based on both the classification decisions
and the route table lookup.
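An illustrative sketch of this partitioning, with each stage written as a separate function so that it could be mapped onto its own hardware engine and pipelined (the classification rule and forwarding table below are made up):

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Packet:
    dst: str
    dscp: int = 0            # classification result, filled in by the classifier stage
    next_hop: str = ""       # filled in by the route-lookup stage

def classify(pkt):                        # stage: filtering / packet classification
    pkt.dscp = 46 if pkt.dst.startswith("10.0.") else 0   # toy rule
    return pkt

def route_lookup(pkt, table):             # stage: determine the next hop
    pkt.next_hop = table.get(pkt.dst, "default")
    return pkt

def enqueue(pkt, queues):                 # stage: queue by class and route together
    queues[(pkt.next_hop, pkt.dscp)].append(pkt)

queues = defaultdict(list)
table = {"10.0.0.5": "if0", "192.168.1.9": "if1"}   # made-up forwarding table
for p in [Packet("10.0.0.5"), Packet("192.168.1.9")]:
    enqueue(route_lookup(classify(p), table), queues)
print({k: len(v) for k, v in queues.items()})       # {('if0', 46): 1, ('if1', 0): 1}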


CHAPTER-5
5.1 SURVEY OF PRODUCTS

This section provides a survey of the terabit and gigabit capacity routers available
in the market. A comparative analysis of the major products classifies them into
various categories based on architectural design as well as performance. Later, a more
detailed description of two state-of-the-art products is also given.

Competitive Study of Leading Market Products

This competitive study identifies the key router vendors and maps each of them
into the landscape of edge, mid-core, and core routing requirements. In addition, the
study provides an overview of the critical line capacity and total switching capacity
requirements for edge and core environments and compares the various architectural
approaches being used to address these performance needs. Much of the data and
many of the ideas in this section are borrowed from a white paper on the site of Pluris
Corporation, which has one of the leading products in this category.

5.1.1 Line Capacity and Total Switching Capacity

To get into more detailed architectural comparisons, it is important to further
define the differences between line capacity and total switching capacity, and to
know what these values are for the various types of scalable gigabit and terabit
systems available in the market.
Line Capacity: Line capacity refers to the effective input/output bandwidth that
is available to a subscriber via the line-card ports. For example, a line card that has
four OC-48 ports at 2.5 Gbps each would deliver 10 Gbps of line capacity. Invariably,
line capacity represents only a percentage of the overall switching capacity. Gigabit
routing devices can typically provide total line capacity of up to tens of gigabits per
second, and are able to support multiple port interface speeds up to OC-48 (2.5 Gbps)
or OC-192 (10 Gbps). Leading gigabit routing vendors include Cisco,
Lucent/Ascend, Argon/Siemens, NetCore/Tellabs, Juniper, Nexabit/Lucent, and
Torrent/Ericsson.


Terabit routing devices are designed with the aggregate line capacity to handle
thousands of gigabits per second and to provide ultra-scalable performance and high
port density. These routers can support port interface speeds as high as OC-192 (10
Gbps) and beyond. Currently the leading terabit routing vendors include Pluris and Avici.
Switching Capacity: The switching capacity of a system consists of the total
bandwidth for all line-card connections and internal switching connections
throughout the system. The switching capacity should be substantially higher than
the line capacity to ensure non-blocking switching between any two ports.
Additional switching capacity is also needed to provide active redundancy and a
higher level of fault tolerance. Therefore switching capacity includes:

 Bandwidth used for line card connections.
 Bandwidth available for modular expansion of line card connections.
 Bandwidth for non-blocking intra-chassis switching.
 Bandwidth for non-blocking inter-chassis switching and for modular multi-chassis expansion.
 Aggregate bandwidth needed to support redundancy and fault tolerance.

5.1.2 Comparative Product Positioning

Table 1 lists various single-box and multi-chassis architectures. For comparison,
it includes only the "single-chassis" versions of the multi-chassis systems, to better
illustrate the relative throughputs of their basic configurations. Key factors to consider
when comparing single-box and multi-chassis systems are the switch fabric capacity,
the line card capacity, and the number of cards supported.


Product                | Switch Fabric Capacity (Gbps) | Line Capacity (Gbps) | Number of Line Cards | WAN Interfaces Supported | Number of OC-48 Ports | Line Card Performance (million pps)

Single Box Edge to Mid-Core Devices
Cisco 12012            | 60    | 27  | 11 | OC-3/12/48     | 8  | 1
Juniper M40            | 40    | 20  | 8  | OC-3/12/48     | 8  | 2.5
Lucent PacketStar 6416 | 60    | 40  | 16 | OC-3/12/48     | 16 | NA
Torrent IP9000         | 20    | 20  | 16 | OC-3/12        | NA | NA

Single Box Mid-Core to Core Devices
Nexabit NX64000        | 6,400 | 160 | 16 | OC-3/12/48/192 | 64 | NA

Integrated Multi-Chassis Edge to Mid-Core Devices
Argon GPN              | 40    | 20  | 8  | OC-3/12/48     | 8  | NA
NetCore Everest        | 20    | 10  | 4  | OC-3/12/48     | 4  | NA

Integrated Multi-Chassis Mid-Core to Core Devices
Avici Systems TSR      | 640   | 100 | 10 | OC-3/12/48/192 | 10 | 7
Pluris TNR             | 1,440 | 150 | 15 | OC-3/12/48/192 | 60 | 33

Table 1. Single-chassis configurations of leading gigabit and terabit routers


CHAPTER-6

APPLICATIONS

 The Tiny Tera

Tiny Tera is a Stanford University research project whose goal is to design a small,
1 Tbps packet switch using normal CMOS technology. The system is suited for use as
an ATM switch or an Internet core router, and it efficiently routes both unicast and
multicast traffic. The current version has 32 ports, each operating at 10 Gbps (the
SONET OC-192 rate). The switch is a small stack composed of a pile of round crossbar
slices and a scheduler; see Fig 5.1. Each slice (6 cm in diameter) contains a single
32x32 1-bit crossbar chip, and a port is connected to the slices radially. The port design
is scalable in data rate and packet size. The basic switching unit is 64 bits, called a
chunk. Unicast traffic uses the buffering scheme called "virtual output queuing"
described earlier. When the 64-bit data chunks are transferred over the 32x32 switch,
the scheduler uses a heuristic algorithm called iSLIP, which achieves fairness using
independent round-robin arbiters at each input and output. The basic scheme leads to
a maximum throughput of just 63%, but slight modifications give 100% throughput.
Implemented in hardware, the iSLIP algorithm can make a decision in less than 40
nanoseconds. The switch also has special input queues for multicast: a multicast input
can deliver simultaneously to many outputs. The switch uses fan-out splitting, which
means that the crossbar may deliver a packet to its outputs over a number of transfer
slots. Developing good multicast scheduling algorithms was an important part of the
Tiny Tera project.


Fig 5.1. Architecture of Tiny Tera
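A toy, single-iteration version of the iSLIP request/grant/accept step can be sketched as follows (illustrative only; the hardware scheduler runs several such iterations per cell time over the virtual output queues):

def islip_one_iteration(voq, grant_ptr, accept_ptr):
    N = len(voq)
    # Request phase: input i requests output o if it has a cell queued for o.
    requests = [[voq[i][o] > 0 for o in range(N)] for i in range(N)]
    # Grant phase: each output grants the requesting input found first,
    # scanning round-robin from its grant pointer.
    grants = {}                                   # output -> granted input
    for o in range(N):
        for k in range(N):
            i = (grant_ptr[o] + k) % N
            if requests[i][o]:
                grants[o] = i
                break
    # Accept phase: each input accepts the granting output found first,
    # scanning round-robin from its accept pointer.
    granted_by = {}                               # input -> outputs that granted it
    for o, i in grants.items():
        granted_by.setdefault(i, []).append(o)
    matches = []
    for i, outs in granted_by.items():
        for k in range(N):
            o = (accept_ptr[i] + k) % N
            if o in outs:
                matches.append((i, o))
                # Pointers advance only when a grant is accepted; this is what
                # desynchronizes the round-robin arbiters over time.
                grant_ptr[o] = (i + 1) % N
                accept_ptr[i] = (o + 1) % N
                voq[i][o] -= 1
                break
    return matches

voq = [[1, 0, 1], [1, 0, 0], [0, 1, 0]]           # 3x3 example backlog (cell counts)
gp, ap = [0] * 3, [0] * 3
print(islip_one_iteration(voq, gp, ap))           # -> [(0, 0), (2, 1)]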

 Nexabit NX64000

The NX64000's innovative switch fabric delivers 6.4 Tbps of switch capacity per
chassis. The NX64000 supports up to 192 OC-3, 96 OC-12, 64 OC-48 and 16 OC-192
lines. It allows service providers to scale to even higher speeds such as OC-768 and
OC-3072, and port densities can be increased further by interconnecting multiple
chassis. The NX64000 implements a distributed, programmable hardware forwarding
engine on each line card. This approach facilitates wire-speed route lookup for full
32-bit network prefixes, even at OC-192 rates. The distributed model enables support
for 128 independent forwarding tables on each line card, and each forwarding engine
is capable of storing over one million entries. One of the unique features of the
NX64000 is its ability to support IP-IP tunnel encapsulation and decapsulation at line
rate.
The NX64000 is the only product in the market that can support a guaranteed
delay of 40 microseconds for variable-sized packets, independent of packet size and
type, and is therefore capable of providing ATM-comparable QOS in an IP world.


CHAPTER-7

ADVANTAGES & DISADVANTAGES

 ADVANTAGES
1. Terabit networks offer high flexibility, efficiency and transparency.
2. Improved network management and reduced operating costs.
3. Multi-protocol support.
4. Rapid service recovery.
5. Authentication, authorization and accounting.

 DISADVANTAGES
1. High circuit complexity.
2. Requires very precise engineering and comes at a high cost.
3. A single chassis supports only a limited number of channels (for example, a 16-channel OC-192 DWDM connection).
4. High maintenance.


CHAPTER-8

FUTURE SCOPE
 Terabit switching and routing provides the capacity and bandwidth needed to meet:
 Increasing customer demand for data and voice communications.
 Future internet growth of high-quality video.
 E-commerce applications.
 Multi-terabit transmission.

CONCLUSION

It is very clear now that with the deployment of more and more fiber and
improvements in DWDM technology, terabit capacity routers are required to convert
the abundant raw bandwidth into useful bandwidth. These routers require fast
switched backplanes and multiple forwarding engines to eliminate the bottlenecks
present in traditional routers. The ability to efficiently support differentiated services
is another feature that, along with total switching capacity, will be used to evaluate
these routers. Switching is faster than routing, and many products in the market
combine some form of switching with routing functionality to improve performance,
so it is important to understand what a product actually does. But products that scale
up all aspects of routing, rather than a subset of them, are bound to perform better with
arbitrary traffic patterns. Route lookup is the major bottleneck in router performance,
and many efficient solutions are being proposed to improve it. Supporting
differentiated services at such high interface speeds poses new challenges for the
design of router architectures, and some solutions were discussed here. Finally, a
survey of leading market products was presented.


REFERENCES

[Decis97] Decisys, "Route Once, Switch Many," July 1999, 23 pages,
http://www.netreference.com/PublishedArchive/WhitePapers/WPIndex.html

[Decis96] Decisys, "The Evolution of Routing," Sep 1996, 6 pages,
http://www.netreference.com/PublishedArchive/WhitePapers/WPIndex.html

[NexNeed] Nexabit, "The New Network Infrastructure: The Need for Terabit
Switch/Routers," 1999, 11 pages, http://www.nexabit.com/need.html

[NexProd] Nexabit, "NX64000 Multi-Terabit Switch/Router Product Description," 1999,
18 pages, http://www.nexabit.com/proddescr.html

[NexSup] Nexabit, "Will The New Super Routers Have What it Takes," 1999, 12 pages,
http://www.nexabit.com/architecture.pdf

[PluComp] Pluris, "Competitive Study," April 1999, 10 pages,
http://www.pluris.com/html/coretech/whitepaper4.htm

[PluPrac] Pluris, "Practical Implementation of Terabit Routing Scenarios," April 1999, 14
pages, http://www.pluris.com/html/coretech/whitepaper5.htm

[Klaus98] Klaus Lindberg, "Multi-gigabit Routers," May 1998,
http://www.csc.fi/lindberg/tik/paper.html

[Craig99] Craig Partridge, "Designing and Building Gigabit and Terabit Internet Routers,"
Networld+Interop 99, May 1999.

