
PMC-SIERRA

Accelerating The Broadband Revolution

A 2.5Tb/s LCS Switch Core


Nick McKeown
Costas Calamvokis
Shang-tse Chuang

August 20th, 2001


Outline

1. LCS: Linecard to Switch Protocol
   - What is it, and why use it?
2. Overview of 2.5Tb/s switch.
3. How to build scalable crossbars.
4. How to build a high performance, centralized crossbar scheduler.


Next-Generation Carrier Class Switches/Routers

[Figure: multiple racks of linecards (ports numbered 1-32) connected to a switch core up to 1000ft away.]


Benefits of LCS Protocol
1. Large Number of Ports.
   - Separation enables large number of ports in multiple racks.
   - Distributes system power.
2. Protection of end-user investment.
   - Future-proof linecards.
3. In-service upgrades.
   - Replace switch or linecards without service interruption.
4. Enables Differentiation/Intelligence on Linecard.
   - Switch core can be bufferless and lossless; QoS, discard, etc. performed on the linecard.
5. Redundancy and Fault-Tolerance.
   - Full redundancy between switches to eliminate downtime.
Main LCS Characteristics
1. Credit-based flow control
   - Enables separation.
   - Enables bufferless switch core.
2. Label-based multicast
   - Enables scaling to larger switch cores.
3. Protection
   - CRC protection.
   - Tolerant to loss of requests and data.
4. Operates over different media
   - Optical fiber,
   - Coaxial cable, and
   - Backplane traces.
5. Adapts to different fiber, cable or trace lengths.
LCS Ingress Flow control
[Figure: ingress flow control between a linecard and a switch port. (1) The linecard's LCS block sends a request; (2) the switch scheduler returns a grant/credit; (3) the linecard sends the data cell, carrying a sequence number, into the switch fabric.]
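The three-step exchange in the figure is easy to restate as a small state machine. Below is a minimal, illustrative Python sketch (not PMC-Sierra's implementation; the class and field names are invented) of credit-based flow control: the linecard only transmits a cell when it holds a credit from the scheduler, which is why the switch core can remain bufferless, and each cell carries a sequence number so that a lost request or cell can be detected.

```python
from collections import deque

class LcsIngress:
    """Toy model of LCS ingress flow control for one linecard/switch-port pair.
    Illustrative only; the real protocol fields and state differ."""

    def __init__(self, credits=0):
        self.credits = credits      # grants received but not yet used
        self.seq = 0                # sequence number stamped on each cell
        self.pending = deque()      # cells waiting for a grant

    def enqueue(self, cell):
        """Step 1: queue the cell and (conceptually) send a request."""
        self.pending.append(cell)
        return {"type": "REQ", "queued": len(self.pending)}

    def on_grant(self):
        """Step 2: a grant/credit arrives from the switch scheduler."""
        self.credits += 1

    def transmit(self):
        """Step 3: send one cell per credit, tagged with a sequence number."""
        sent = []
        while self.credits > 0 and self.pending:
            self.credits -= 1
            sent.append({"type": "DATA", "seq": self.seq, "cell": self.pending.popleft()})
            self.seq += 1
        return sent

# Usage: queue a cell, receive a grant, then transmit.
lc = LcsIngress()
lc.enqueue("cell-0")
lc.on_grant()
print(lc.transmit())   # one DATA message carrying seq 0
```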


LCS Adapting to Different Cable Lengths

[Figure: several linecards, each at a different cable length, connected to the same switch core (switch fabric plus switch scheduler); LCS adapts to each length.]


LCS Over Optical Fiber

[Figure: a 10Gb/s linecard connected to a 10Gb/s switch port by 12 multimode fibers in each direction, driven at 2.5Gb/s (LVDS) through GENET quad serdes devices into the switch fabric and switch scheduler.]


Example of OC192c LCS Port

[Figure: an OC192c LCS port: 12 serdes channels on the switch side, carrying the LCS protocol to the OC192 linecard on the other.]


Outline

1. LCS: Linecard to Switch Protocol
   - What is it, and why use it?
2. Overview of 2.5Tb/s switch.
3. How to build scalable crossbars.
4. How to build a high performance, centralized crossbar scheduler.


Main Features of Switch Core
2.5Tb/s single-stage crossbar switch core with centralized arbitration and external LCS interface.

1. Number of linecards:
   - 10G/OC192c linecards: 256
   - 2.5G/OC48c linecards: 1024
   - 40G/OC768c linecards: 64
2. LCS (Linecard to Switch Protocol):
   - Distance from linecard to switch: 0-1000ft.
   - Payload size: 76+8B.
   - Payload duration: 36ns.
   - Optical physical layers: 12 x 2.5Gb/s.
3. Service Classes: 4 best-effort + TDM.
4. Unicast: True maximal size matching.
5. Multicast: Highly efficient fanout splitting.
6. Internal Redundancy: 1:N.
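A quick back-of-the-envelope check of these figures, assuming "76+8B" means 76 bytes of payload plus 8 bytes of header per LCS cell and that each cell occupies one 36ns slot:

```python
PAYLOAD_B, HEADER_B = 76, 8        # bytes per LCS cell, per the slide
CELL_TIME_S = 36e-9                # payload duration
PORTS_10G = 256                    # number of 10G/OC192c linecards

cell_rate   = 1 / CELL_TIME_S                              # ~27.8 Mcells/s per port
raw_bps     = (PAYLOAD_B + HEADER_B) * 8 / CELL_TIME_S     # ~18.7 Gb/s per port
payload_bps = PAYLOAD_B * 8 / CELL_TIME_S                  # ~16.9 Gb/s per port

print(f"{cell_rate/1e6:.1f} Mcells/s, {raw_bps/1e9:.1f} Gb/s raw, "
      f"{payload_bps/1e9:.1f} Gb/s payload -> speedup {payload_bps/10e9:.2f}x over 10Gb/s")
print(f"aggregate linecard capacity: {PORTS_10G * 10e9 / 1e12:.2f} Tb/s")
```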
2.56Tb/s IP router

[Figure: 256 linecards (port #1 through port #256) connected over LCS, at up to 1000ft/300m, to the 2.56Tb/s switch core.]
Switch core architecture
[Figure: each of the 256 ports consists of an LCS protocol block, optics, and a port processor. Cell data flows from the port processors through the central crossbar; requests flow to the centralized crossbar scheduler, which returns grants/credits.]


Outline

1. LCS: Linecard to Switch Protocol
   - What is it, and why use it?
2. Overview of 2.5Tb/s switch.
3. How to build scalable crossbars.
4. How to build a high performance, centralized crossbar scheduler.


How to build a scalable crossbar

1. Increasing the data rate per port
   - Use bit-slicing (e.g. Tiny Tera); see the sketch after this list.
2. Increasing the number of ports
   - Conventional wisdom: N² crosspoints per chip is a problem.
   - In practice: today, crossbar chip capacity is limited by I/Os.
   - It's not easy to build a crossbar from multiple chips.
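As a companion to the bit-slicing bullet above, here is a minimal sketch of the idea (an illustration only, not the Tiny Tera design): each cell is cut into k slices that travel through k parallel crossbar planes set to the same configuration each cell time, so per-port bandwidth scales with k without any single chip having to run faster.

```python
def slice_cell(cell: bytes, k: int):
    """Split one cell into k equal slices, one per crossbar plane."""
    assert len(cell) % k == 0, "pad the cell so it divides evenly"
    n = len(cell) // k
    return [cell[i * n:(i + 1) * n] for i in range(k)]

def reassemble(slices):
    """Inverse operation at the output port."""
    return b"".join(slices)

cell = bytes(range(84))            # e.g. a 76+8 byte LCS cell
planes = slice_cell(cell, k=4)     # 4 planes each carry 21 bytes per cell time
assert reassemble(planes) == cell
```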


Scaling: Trying to build a crossbar from multiple chips

[Figure: a 16x16 crossbar switch assembled from building blocks of 4 inputs and 4 outputs; each building block ends up requiring eight inputs and eight outputs.]


Scaling using "interchanging"

[Figure: 4x4 example built from two 2x4 crossbar chips (2 physical I/Os each) plus 2x2 interchangers (INT). One stage reconfigures every cell time and the other every half cell time, so that cells labelled A are carried by one chip and cells labelled B by the other.]


2.56Tb/s Crossbar operation
[Figure: 256 inputs and 256 outputs carried over two crossbar planes (Crossbar A and Crossbar B), with columns of fixed 2x2 "TDM" interchangers at the inputs and outputs; cells labelled A traverse Crossbar A and cells labelled B traverse Crossbar B.]
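The figure is hard to reproduce in text; the one mechanism it states explicitly is that the 2x2 "TDM" interchangers follow a fixed pattern and reconfigure every half cell time. The toy below (my reading, not a specification) only generates that alternating straight/crossed pattern for one interchanger feeding planes A and B; how cells are labelled and scheduled onto the two planes is not shown here.

```python
def interchanger_setting(half_cell_slot: int) -> dict:
    """Fixed TDM pattern for one 2x2 interchanger: toggle every half cell time.
    'straight' connects (in0 -> plane A, in1 -> plane B); 'crossed' swaps them."""
    if half_cell_slot % 2 == 0:
        return {"in0": "A", "in1": "B"}   # straight
    return {"in0": "B", "in1": "A"}       # crossed

# Over one full cell time (two half-cell slots) each input reaches both planes.
for slot in range(4):
    print(slot, interchanger_setting(slot))
```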


Outline

1. LCS: Linecard to Switch Protocol
   - What is it, and why use it?
2. Overview of 2.5Tb/s switch.
3. How to build scalable crossbars.
4. How to build a high performance, centralized crossbar scheduler.


How to build a centralized scheduler with true maximal matching?

Usual approaches:
1. Use sub-maximal matching algorithms (e.g. iSLIP).
   - Problem: Reduced throughput.
2. Increase arbitration time: Load-balancing.
   - Problem: Imbalance between layers leads to blocking and reduced throughput.
3. Increase arbitration time: Deeper pipeline.
   - Problem: Usually involves out-of-date queue occupancy information, hence reduced throughput.
How to build a centralized scheduler with true maximal matching?

Our approach is to maintain high throughput by:
1. Using a true maximal matching algorithm.
2. Using a single centralized scheduler to avoid the blocking caused by load-balancing.
3. Using a deep, strict-priority pipeline with up-to-date information.
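For reference, here is a minimal greedy maximal-size matcher over a request matrix (a sketch of the concept only, not the scheduler's actual algorithm): pairs are added until no further request can be granted without reusing an input or output, which is what distinguishes a true maximal match from the sub-maximal matches produced by iterative heuristics such as iSLIP.

```python
def maximal_match(requests):
    """Greedy maximal matching.
    requests[i][j] is truthy if input i has a cell queued for output j.
    Returns {input: output}. No further pair can be added without reusing
    an already-matched input or output, i.e. the match is maximal
    (though not necessarily maximum)."""
    n = len(requests)
    match, used_out = {}, set()
    for i in range(n):
        for j in range(n):
            if requests[i][j] and j not in used_out:
                match[i] = j
                used_out.add(j)
                break
    return match

reqs = [[0, 1, 1],
        [1, 0, 0],
        [0, 1, 0]]
print(maximal_match(reqs))   # {0: 1, 1: 0}: maximal (nothing can be added), not maximum
```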


Strict Priority Scheduler Pipeline

[Figure: each scheduler plane handles 2.56Tb/s, one priority, and both unicast and multicast. Four such planes (priority p=0 through p=3) serve the ports' LCS protocol blocks, optics, and port processors.]


Strict Priority Scheduler Pipeline
[Figure: pipeline timing for the four priority planes (plus multicast). Matching for priority p=0 begins first in each cell time, with p=1, p=2 and p=3 following in successive pipeline stages over time slots 0-6.]
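A minimal sketch of how the four priority planes compose, assuming p=0 is the highest priority as the slides suggest: each priority level is matched in turn, and lower priorities may only use the inputs and outputs that higher priorities left unmatched.

```python
def strict_priority_schedule(requests_by_priority):
    """Strict-priority matching across scheduler planes.
    requests_by_priority[p][i][j] is truthy if input i requests output j at
    priority p (p=0 highest). Lower priorities only get inputs/outputs that
    the higher priorities left unmatched. Returns {input: output}."""
    n = len(requests_by_priority[0])
    schedule = {}                      # input -> output, across all priorities
    for reqs in requests_by_priority:  # p = 0, 1, 2, 3
        used_out = set(schedule.values())
        for i in range(n):
            if i in schedule:
                continue               # input already matched at a higher priority
            for j in range(n):
                if reqs[i][j] and j not in used_out:
                    schedule[i] = j
                    used_out.add(j)
                    break
    return schedule

# Two priority levels on a 2x2 switch: p=0 wins output 0, p=1 gets output 1.
p0 = [[1, 0], [0, 0]]
p1 = [[0, 0], [1, 1]]
print(strict_priority_schedule([p0, p1]))   # {0: 0, 1: 1}
```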
Strict Priority Scheduler Pipeline

Why implement strict priorities in the switch core when the router needs to support such services as WRR or WFQ?

1. Providing these services is a Traffic Management (TM) function.
2. A TM can provide these services using a technique called Priority Modulation and a strict priority switch core.
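The slides do not define Priority Modulation, so the following is only a hypothetical illustration of the general idea (names and policy are invented, not PMC-Sierra's TM design): the linecard's traffic manager varies, cell by cell, which strict-priority level a flow is submitted at, so that over time each flow receives roughly its weighted share even though the core itself only understands strict priorities.

```python
class PriorityModulator:
    """Hypothetical sketch: approximate weighted sharing over a strict-priority
    core by modulating each flow's priority. A flow that has used less than its
    weighted share sends at high priority (0); one that is ahead drops to 1."""

    def __init__(self, weights):
        self.weights = weights                  # flow -> relative weight
        self.sent = {f: 0 for f in weights}     # cells sent per flow

    def priority_for(self, flow):
        total_sent = sum(self.sent.values()) or 1
        share = self.weights[flow] / sum(self.weights.values())
        # behind its share -> priority 0 (urgent); ahead -> priority 1
        return 0 if self.sent[flow] / total_sent <= share else 1

    def record_send(self, flow):
        self.sent[flow] += 1

# Demo: a 3:1 weighting; "bronze" is pushed to low priority once it runs ahead.
pm = PriorityModulator({"gold": 3, "bronze": 1})
for _ in range(4):
    for flow in ("gold", "bronze"):
        print(flow, "-> priority", pm.priority_for(flow))
        pm.record_send(flow)
```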
Outline

1. LCS: Linecard to Switch Protocol
   - What is it, and why use it?
2. Overview of 2.5Tb/s switch.
3. How to build scalable crossbars.
4. How to build a high performance, centralized crossbar scheduler.
