McKeown Line Card to Switch Protocol
[Figure: Linecards connect to the switch over links of up to 1000ft; ports 1 through 32 are shown on both the linecard and switch sides.]
In-service upgrades.
Replace switch or linecards without service interruption.
Redundancy and Fault-Tolerance.
Full redundancy between switches to eliminate downtime.
Main LCS Characteristics
1. Credit-based flow control
Enables physical separation of the linecards from the switch core.
Enables bufferless switch core.
2. Label-based multicast
Enables scaling to larger switch cores (see the sketch after this list).
3. Protection
CRC protection.
Tolerant to loss of requests and data.
4. Operates over different media
Optical fiber,
Coaxial cable, and
Backplane traces.
5. Adapts to different fiber, cable or trace lengths
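The list above only names label-based multicast, so here is a minimal sketch of the idea as I read it: the linecard attaches a short label to each multicast cell, and the switch expands that label through a pre-programmed table into the set of egress ports, keeping the per-cell overhead independent of the port count. The table layout and names below are illustrative assumptions, not the actual LCS encoding.

```python
# Hypothetical sketch of label-based multicast expansion (names and table
# layout are assumptions for illustration, not the real LCS encoding).

# Label table, programmed out-of-band when a multicast group is set up:
# it maps a short label to the set of egress ports for that group.
label_table: dict[int, set[int]] = {
    7: {1, 4, 5},        # label 7 fans out to ports 1, 4 and 5
    12: {0, 2, 3, 30},   # label 12 fans out to four ports
}

def expand_label(label: int) -> set[int]:
    """Return the egress ports for a multicast cell carrying `label`."""
    return label_table.get(label, set())

# The cell header only needs enough bits to name a label, not one bit per
# port, which is what lets the scheme scale to larger switch cores.
cell = {"label": 7, "payload": b"..."}
print("fanout ports:", sorted(expand_label(cell["label"])))
```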
LCS Ingress Flow Control
[Figure: LCS ingress flow control between a linecard and a switch port. 1: the linecard sends a request (Req) over LCS; the switch port passes the request to the switch scheduler, which issues a grant. 2: the switch port returns a grant/credit, tagged with a sequence number, to the linecard. 3: the linecard sends the cell data into the switch fabric.]
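To make the exchange in the figure concrete, here is a minimal sketch of credit-based ingress flow control. The class and field names are mine and the real LCS framing is richer; the sequence number is what I take to underlie the loss-tolerance bullet earlier. Because a cell leaves the linecard only when a credit is held, cells arrive at the core exactly when it can switch them (hence the bufferless core), and keeping enough credits outstanding to cover the link round trip is presumably how the protocol adapts to different fiber, cable, or trace lengths.

```python
# Minimal sketch of credit-based ingress flow control (illustrative names,
# not the actual LCS state machines).

from collections import deque

class LinecardIngress:
    """Linecard side: a cell may be sent only once a grant/credit is held."""
    def __init__(self) -> None:
        self.pending = deque()   # cells waiting for a grant
        self.credits = deque()   # grants received from the switch, by seq num

    def enqueue(self, cell: bytes) -> str:
        self.pending.append(cell)
        return "REQ"             # 1: a request travels to the switch scheduler

    def on_grant(self, seq_num: int) -> None:
        self.credits.append(seq_num)   # 2: grant/credit arrives with a seq num

    def try_send(self):
        # 3: data is sent only when both a cell and a credit are available,
        # so the switch core never receives a cell it cannot immediately switch.
        if self.pending and self.credits:
            return self.credits.popleft(), self.pending.popleft()
        return None

lc = LinecardIngress()
lc.enqueue(b"cell-0")        # request goes out
lc.on_grant(seq_num=17)      # scheduler granted it
print(lc.try_send())         # (17, b'cell-0') leaves for the switch fabric
```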
[Figure: Physical links. LCS runs over 12 multimode fibers in each direction between the linecard and the switch fabric; the diagram also labels 2.5Gb/s LVDS, serdes, a GENET scheduler, and a quad scheduler.]
1. Number of linecards:
10G/OC192c linecards: 256
2.5G/OC48c linecards: 1024
40G/OC768c linecards: 64
2. LCS (Linecard to Switch Protocol):
Distance from linecard to switch: 0-1000ft.
Payload size: 76+8B.
Payload duration: 36ns.
Optical physical layers: 12 x 2.5Gb/s.
(A rough bandwidth check from these figures follows this list.)
3. Service Classes: 4 best-effort + TDM.
4. Unicast: True maximal size matching.
5. Multicast: Highly efficient fanout splitting.
6. Internal redundancy: 1:N.
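A rough check on the figures in item 2 (my arithmetic, not a number from the slides):

```python
# Rough bandwidth check from the figures quoted above.
payload_bits = (76 + 8) * 8      # 76B payload + 8B header = 672 bits per cell
cell_time_s = 36e-9              # one cell every 36 ns
per_port_rate = payload_bits / cell_time_s
raw_optics = 12 * 2.5e9          # 12 fibers at 2.5 Gb/s each

print(f"{per_port_rate / 1e9:.1f} Gb/s per port")   # ~18.7 Gb/s
print(f"{raw_optics / 1e9:.1f} Gb/s raw optics")    # 30.0 Gb/s
# ~18.7 Gb/s is comfortably above the ~10 Gb/s OC192c line rate (i.e. the
# fabric runs with speedup) and fits within the 30 Gb/s of raw optical capacity.
```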
2.56Tb/s IP router
[Figure: 2.56Tb/s IP router. Up to 256 ports of linecards (Port #1 through Port #256) connect over LCS at up to 1000ft/300m to the switch; cell data passes through the crossbar while grant/credit messages come from the scheduler.]
[Figure: Crossbar construction from 128 x 128 crossbar ("xbar") chips arranged in interleaved A/B planes; labels include "4 outputs", "4x4", "2x4 (2 I/Os)", "256 inputs", and "128 outputs".]
Usual approaches
1. Use sub-maximal matching algorithms (e.g. iSLIP).
Problem: Reduced throughput. (A contrasting maximal-matching sketch follows this list.)
2. Increase arbitration time: Load-balancing.
Problem: Imbalance between layers leads to blocking and reduced throughput.
3. Increase arbitration time: Deeper pipeline.
Problem: Usually involves out-of-date queue occupancy information, hence reduced throughput.
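For context on the question that follows: a match is maximal when no further request can be added without reusing an input or an output, whereas an iteration-limited heuristic such as iSLIP can stop short of that. The sketch below (an illustration only, not the scheduler described in these slides) computes a greedy maximal match over a virtual-output-queue request matrix.

```python
# Greedy maximal matching over a VOQ request matrix (illustration only).

def maximal_match(requests: list[list[bool]]) -> dict[int, int]:
    """Pair inputs with outputs until no request can be added.

    requests[i][j] is True when input i has a cell queued for output j.
    The result is maximal: every unmatched request has either its input
    or its output already taken by the match.
    """
    match: dict[int, int] = {}
    used_outputs: set[int] = set()
    for i, row in enumerate(requests):
        for j, wants in enumerate(row):
            if wants and j not in used_outputs:
                match[i] = j
                used_outputs.add(j)
                break
    return match

requests = [
    [True,  False, True ],   # input 0 wants outputs 0 and 2
    [True,  False, False],   # input 1 wants output 0 only
    [False, True,  False],   # input 2 wants output 1
]
print(maximal_match(requests))   # {0: 0, 2: 1}; input 1 loses output 0
```

Note that maximal is not the same as maximum: in the example a larger match ({0: 2, 1: 0, 2: 1}) exists, yet the greedy result still cannot be extended. As I read the slides, the claim of true maximal size matching is that the centralized scheduler never stops short of maximality the way an iteration-limited heuristic can.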
How to build a centralized scheduler with true maximal matching?
[Figure: Centralized scheduler structure. The port processor connects over the LCS protocol and optics to separate per-priority schedulers (p=0 through p=3); the figure also labels a multicast scheduler.]
Strict Priority Scheduler Pipeline
[Figure: Pipeline timing. Matches for priorities P=0 through P=3 are computed in stages across successive time slots (0 through 6).]
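The pipeline diagram did not survive this export cleanly, so the following is only my reading of it: each priority level has its own scheduler, priority 0 matches first, and lower priorities may only use the inputs and outputs left free by the priorities above, with the per-priority stages overlapped in a pipeline across cell times. All names and data structures here are illustrative assumptions, not the actual scheduler design.

```python
# Illustrative strict-priority scheduling across per-priority schedulers.

def match_one_priority(requests, free_inputs, free_outputs):
    """Greedy maximal match restricted to the inputs/outputs still free."""
    match = {}
    for i in sorted(free_inputs):
        for j in sorted(free_outputs):
            if requests.get((i, j)):
                match[i] = j
                free_inputs.discard(i)
                free_outputs.discard(j)
                break
    return match

def schedule_cell_time(requests_by_prio, num_ports):
    """Priority 0 matches first; each lower priority only sees the leftovers.
    In hardware the per-priority stages would be pipelined, so a new cell
    time can start before the previous one has passed every priority."""
    free_in, free_out = set(range(num_ports)), set(range(num_ports))
    decision = {}
    for prio in sorted(requests_by_prio):            # p=0 (highest) first
        decision[prio] = match_one_priority(requests_by_prio[prio],
                                            free_in, free_out)
    return decision

requests_by_prio = {
    0: {(0, 1): True},                                  # highest priority
    1: {(0, 2): True, (2, 1): True, (2, 3): True},      # competes for leftovers
}
print(schedule_cell_time(requests_by_prio, num_ports=4))
# {0: {0: 1}, 1: {2: 3}}: priority 1 cannot reuse input 0 or output 1
```

Because each lower-priority stage only ever sees the leftovers of the stages above it, strict priority is preserved while, presumably, a complete decision can still be produced every cell time once the pipeline is full.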