ACX and MX Technical Training, Juniper Networks
SEAMLESS MPLS ARCHITECTURE
[Topology diagram: Access (CSR) – Pre-Aggregation (Pre-Agg) – Aggregation (Agg) – Core – Service Node, spanning AS 65001 (access/pre-aggregation) and AS 65002 (aggregation/core), with a 1588v2 grandmaster attached. IGP: ISIS level 1 with BFD in the access, ISIS level 2 with BFD in the aggregation and core (physically connected). Transport: RSVP with BFD in each domain, LDP DoD over RSVP in the access. Inter-domain: RFC 3107 labeled I-BGP with BFD inside each AS and labeled E-BGP with BFD between the ASes. Any service rides on top of this transport.]
SEAMLESS MPLS ARCHITECTURE
SOLUTION OPTIONS
• 1) End-to-end L3 MPLS VPN
• 2) Pseudowire in Access & L3 MPLS VPN in Agg/Core
• 3) Pseudowire in Access & VPLS in Agg/Core
• 4) TDM/ATM PWs
1) L3VPN END-TO-END TOPOLOGY
L3VPN instance with vrf-table-label; all CSRs and Pre-Agg routers have the L3VPN instance.
[Diagram: 1588v2 GM; Access (CSR) – Pre-Aggregation (Pre-Agg) – Agg – Service Node across AS 65001 and AS 65002, with ISIS level 1/level 2 with BFD, RSVP with BFD, and RFC 3107 labeled I-BGP/E-BGP with BFD as in the seamless MPLS baseline.]
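A minimal configuration sketch of the building blocks named above; the VRF name, interface and addresses are illustrative (vrf-table-label and labeled-unicast are standard Junos statements, everything else is an assumption for the example):

routing-instances {
    MBH-S1 {                              /* illustrative VRF name */
        instance-type vrf;
        interface ge-0/0/1.100;           /* service-facing logical interface */
        route-distinguisher 65001:100;
        vrf-target target:65001:100;
        vrf-table-label;                  /* per-VRF label, IP lookup on the egress PE */
    }
}
protocols {
    bgp {
        group BGP-LU {                    /* RFC 3107 labeled-unicast toward the RR */
            type internal;
            local-address 10.255.0.1;
            family inet {
                labeled-unicast {
                    resolve-vpn;          /* let VPN routes resolve over BGP-LU next hops */
                }
            }
            neighbor 10.255.0.10;
        }
    }
}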
1) L3VPN END-TO-END TOPOLOGY
IGP DESIGN – ACCESS
[Diagram: CSRs in a level-1 ISIS area; L3VPN instance with vrf-table-label on all CSRs and Pre-Agg routers; 1588v2 GM.]
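A minimal sketch of the access IGP on a CSR, assuming illustrative interface names (level 1 only, with BFD on the uplink):

protocols {
    isis {
        level 2 disable;                  /* access runs level 1 only */
        interface ge-0/0/0.0 {
            point-to-point;
            bfd-liveness-detection {
                minimum-interval 100;     /* 100 ms BFD timers, illustrative */
            }
        }
        interface lo0.0 {
            passive;
        }
    }
}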
1) L3VPN END-TO-END TOPOLOGY
IGP DESIGN – AGGREGATION
[Diagram: aggregation and core domains running level-2 ISIS with BFD; Access – Pre-Aggregation – Agg – Service Node; L3VPN instance with vrf-table-label on all CSRs and Pre-Agg routers; 1588v2 GM.]
1) L3VPN END-TO-END TOPOLOGY
LSP DESIGN – AGGREGATION
[Diagram: RSVP LSPs with BFD in the aggregation domain; Access – Pre-Aggregation – Agg – Service Node; L3VPN instance with vrf-table-label on all CSRs and Pre-Agg routers; 1588v2 GM.]
1) L3VPN END-TO-END TOPOLOGY
IGP AND LSP SUMMARY
[Diagram summary: ISIS level 1 with BFD in the access, ISIS level 2 with BFD in the aggregation and core; RSVP with BFD in each domain; AS 65001 and AS 65002; L3VPN instance with vrf-table-label on all CSRs and Pre-Agg routers; 1588v2 GM.]
1) L3VPN END-TO-END TOPOLOGY
BGP DESIGN SUMMARY
[Diagram summary: RFC 3107 labeled I-BGP with BFD inside each AS and labeled E-BGP with BFD between AS 65001 and AS 65002; L3VPN instance with vrf-table-label on all CSRs and Pre-Agg routers; 1588v2 GM.]
1) L3VPN END-TO-END TOPOLOGY
TRAFFIC FLOW FROM CSR TO EPC (A->B)
[Diagram: hop-by-hop label operations from A (behind the CSR) to B (behind the Service Node). The CSR pushes the VPN label, the BGP-LU label for the remote PE and the intra-domain RSVP transport label; transit routers swap and pop the transport labels; the Pre-Agg and Agg routers swap the BGP-LU label across the AS boundary; the Service Node pops the remaining labels and performs the VRF IP lookup toward B (vrf-table-label).]
1) L3VPN END-TO-END TOPOLOGY
TRAFFIC FLOW FROM CSR TO EPC (B->A)
[Diagram: the reverse direction. The Service Node pushes the VPN label, the BGP-LU label for the CSR and the RSVP transport label; the Agg and Pre-Agg routers swap the BGP-LU label across the AS boundary; the CSR pops the remaining labels and performs the VRF IP lookup toward A.]
1) L3VPN END-TO-END TOPOLOGY
END-TO-END TRAFFIC FLOW
[Diagram: end-to-end traffic flow between A and B (e.g. X2 traffic between CSR sites) across Access, Aggregation and the Service Node domain, over the same ISIS/RSVP/RFC 3107 BGP transport.]
1) L3VPN END-TO-END TOPOLOGY
TRAFFIC FLOW IN THE ACCESS
• Dedicated VRFs: several VRFs may be used (S1-U, S1-MME); the same applies to S1-Flex. Other VRFs can be provisioned (e.g. for management) as a full or any partial mesh, at the expense of additional provisioning.
• All RSVP LSPs have primary and secondary paths with fast reroute.
• Agg routers are route reflectors for the Pre-Agg routers; Pre-Agg routers are route reflectors for the CSRs (see the sketch below).
• MP-IBGP (VPNv4) inside each AS, MP-EBGP (VPNv4) between the ASes.
[Diagram: L3VPN instance with vrf-table-label on all CSRs and Pre-Agg routers; 1588v2 GM; baseline ISIS, RSVP and RFC 3107 BGP design across AS 65001 and AS 65002.]
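A minimal route-reflector sketch for a Pre-Agg router serving its CSRs; the group name and loopback addresses are illustrative. It carries both RFC 3107 labeled-unicast and VPNv4 families:

protocols {
    bgp {
        group CSR-CLIENTS {
            type internal;
            local-address 10.255.1.1;
            cluster 10.255.1.1;          /* makes the CSR neighbors RR clients */
            family inet {
                labeled-unicast;         /* RFC 3107 loopback reachability */
            }
            family inet-vpn {
                unicast;                 /* VPNv4 service routes */
            }
            neighbor 10.255.2.1;
            neighbor 10.255.2.2;
        }
    }
}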
2) PSEUDOWIRE IN ACCESS & L3 MPLS VPN IN AGG/CORE
[Diagram: L2 circuits with PW redundancy from the CSRs to the Pre-Agg routers; MP-EBGP (VPNv4) between the ASes; same underlying ISIS, RSVP and RFC 3107 BGP transport across AS 65001 and AS 65002.]
3) PSEUDOWIRE IN ACCESS & VPLS IN AGG/CORE
[Diagram: L2 circuits with PW redundancy in the access, terminated into VPLS in the aggregation/core; same underlying seamless MPLS transport.]
4) TDM AND ATM – SINGLE HOMED
Service end points: ATM – IMA on the CSR and STM-1 or GE on the Service Node; TDM – SAToP or CESoPSN.
SAToP, CESoPSN and ATM PWs with PW redundancy (intra/inter-chassis APS).
[Diagram: 1588v2 GM; CSR – Pre-Agg – Agg – Core – Service Node across AS 65001 and AS 65002, with the baseline ISIS, RSVP, LDP DoD and RFC 3107 BGP design.]
The above topology is not typical for TDM/ATM termination in MBH; such circuits are usually terminated on or closer to the Agg routers. In that case the design would be much simpler, but would still fall under the same broader seamless MPLS design.
4) TDM AND ATM – DUAL HOMED
Service end points: ATM – IMA on the CSR and STM-1 or GE on the Service Node; TDM – SAToP or CESoPSN.
SAToP, CESoPSN and ATM PWs with PW redundancy (intra/inter-chassis APS).
[Diagram: 1588v2 GM; CSR – Pre-Agg – Agg – Service Node within AS 65001.]
The above topology is not typical for TDM/ATM termination in MBH; such circuits are usually terminated on or closer to the Agg routers. In that case the design would be much simpler, and would still fall under the same broader seamless MPLS design.
JUNOS operating system
In production since 1998
Originally designed around the requirements of service providers
Modular approach
• Fault protection
• Process restart
• Process separation
• Memory protection
JUNOS software architecture
A few useful features
• Commit
• Command hierarchy
• Op scripts
COMMIT: applying the configuration
[Diagram: (1) candidate configuration → (2) validated configuration → (3) active configuration. Changes are loaded into the candidate, checked by commit scripts and validations at commit time, and activated by commit; rollback and commit confirmed return the system to a previous state.]
• Editing the configuration and applying it are separate steps
• Benefits
– Configuration errors can be avoided
– Less time is needed to make changes
– Configuration policies can be programmed
– Risks of transient configuration states are avoided
– Different versions can be compared
– Rollback to previous versions is possible
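A minimal CLI sketch of this workflow; the hostname, prompt and rollback index are illustrative:

[edit]
lab@mx80# show | compare          (diff the candidate against the active configuration)
lab@mx80# commit confirmed 5      (activate, with automatic rollback after 5 minutes unless confirmed)
lab@mx80# commit                  (confirm the change)
lab@mx80# rollback 1              (load the previous configuration into the candidate)
lab@mx80# commit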
Hierarchical command structure
• Logical structures partition the configuration
– Deeper levels are more specific
– User-defined templates can be created
– Configuration groups reduce the amount of work (see the sketch below)
• Makes working with the configuration substantially more convenient
[Diagram: configuration hierarchy tree – top-level node, 2nd-level nodes, 3rd-level nodes.]
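A minimal configuration-group sketch; the group name and MTU value are illustrative. The wildcard applies the settings to every ge- interface:

groups {
    GE-DEFAULTS {                 /* illustrative group name */
        interfaces {
            <ge-*> {              /* wildcard: all ge- interfaces */
                mtu 9192;
                unit 0 {
                    family inet;
                }
            }
        }
    }
}
apply-groups GE-DEFAULTS;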
Automating everyday tasks
• Custom op scripts can be created for specific needs (op actions, command-line scripts, execution)
• Several commands can be combined into a single script
lab@SaoPaulo> op terse-int customer ACME
Interface     Admin Link Proto Local            Remote
so-0/1/2      up    up
so-0/1/2.501  up    up   inet  172.31.31.1/30
Event automation
• Events trigger the execution of specific event scripts
• Events can be correlated and information collected
• Predefined actions can be assigned to each event (see the sketch below)
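A minimal event-policy sketch; the policy and destination names are illustrative. It collects interface state when a link-down trap is seen:

event-options {
    policy LINK-DOWN-COLLECT {
        events snmp_trap_link_down;
        then {
            execute-commands {
                commands {
                    "show interfaces extensive";
                }
                output-filename link-down;
                destination local-flash;
            }
        }
    }
    destinations {
        local-flash {
            archive-sites {
                /var/tmp;
            }
        }
    }
}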
Configuration automation
[Diagram: commit scripts inspect the candidate configuration at commit time, before it becomes the active configuration.]
• Abstract complex configuration into a simple set of base commands
• Enforce best practices and business rules
• Benefits
– Assure compliance with business rules and policies
– Provide change management to avert and correct errors
– Simplify and speed up setup of complex configurations
Basic OS upgrade and installation operations (1)
As new features become available or problems in the existing software are fixed, Junos OS can be upgraded periodically (see the sketch below).
MX80 routers attempt to boot from the storage media in the following order:
1. USB media
2. Dual internal NAND flash devices (first da0, then da1)
Note: Do not insert removable media during normal operations. The router does not operate normally when it is booted from removable media.
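A minimal upgrade sketch; the package file name and path are illustrative, and taking a snapshot first is simply good practice:

lab@mx80> request system snapshot
lab@mx80> request system software add /var/tmp/jinstall-ppc-12.3R5-domestic-signed.tgz reboot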
User access
• There is a notion of a user class
• The user remote is a special user account for remote connections
• A user can be restricted in which configuration commands they may run
• Restrictions are expressed using classes and regular expressions
Example: creating a RADIUS user
The Junos OS uses one or more template accounts to perform user authentication. You create the template account or accounts and then configure user access to use that account. If the RADIUS server is unavailable, the login process falls back to the local account that is set up on the router or switch.
The following example shows how to configure RADIUS authentication:
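A minimal sketch; the server address, shared secret and login class are illustrative, with the remote template account used for RADIUS-authenticated users:

system {
    radius-server {
        192.168.0.10 secret "$9$example";   /* illustrative server and secret */
    }
    authentication-order [ radius password ];
    login {
        user remote {                        /* template account for RADIUS-authenticated users */
            class super-user;                /* illustrative class */
        }
    }
}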
Viewing processes
lab@mx80> show system processes extensive
last pid: 63918; load averages: 0.14, 0.31, 0.36 up 12+22:07:53 09:50:25
144 processes: 5 running, 111 sleeping, 28 waiting
Mem: 587M Active, 80M Inact, 204M Wired, 328M Cache, 112M Buf, 789M Free
Swap: 2915M Total, 2915M Free
PID USERNAME THR PRI NICE SIZE RES STATE TIME WCPU COMMAND
11 root 1 171 52 0K 16K RUN 283.1H 91.16% idle
1168 root 10 8 0 9328K 3988K RUN 21.9H 5.03% clksyncd
12 root 1 -20 -139 0K 16K WAIT 95:31 0.00% swi7: clock
1153 root 1 96 0 28656K 9200K select 22:00 0.00% chassisd
1220 root 3 20 0 80256K 36764K sigwai 15:13 0.00% jpppd
1174 root 1 96 0 6940K 1836K select 13:34 0.00% license-check
1197 root 1 96 0 13620K 7032K select 9:41 0.00% l2ald
63323 lab 1 96 0 24160K 15540K select 8:50 0.00% cli
Or:
Router-cli> start shell
% top
Configuring processes
• The only process properties that can be influenced are whether a core file is generated and whether the process is disabled
• The [edit system processes] hierarchy enables these options (see the sketch below)
• The show system core-dumps command lists process core files
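A minimal sketch; the daemon name is illustrative, and disabling a process should only be done when the impact is understood:

[edit]
lab@mx80# set system processes snmp disable
lab@mx80# commit
lab@mx80> show system core-dumps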
[Chart: platform capacity – Juniper MX240: 1.44 Tbit/s; Juniper MX80: 80 Gbit/s.]
MX: characteristics
Power consumption (theoretical maximum, DC power), W: 500 / 1420 / 2880 / 5093
MX960 platform
14-slot chassis
Dimensions
– Height: 16RU (about 1/3 of a rack), depth: <800 mm
Reliable hardware
– Passive midplane
– Redundant control modules (Routing Engines)
– Redundant switch fabric modules (2+1)
– Distributed switching subsystem
– Redundant cooling and power subsystems
Power and cooling
– Front-to-back airflow
– Two fan trays (1+1 redundancy)
– Up to 4 power supplies (2+2 DC, 3+1 AC)
– Rear-side power cabling
Capacity
– 14 slots, 2 of which are for switch fabric / control modules, with the option of installing an additional switch fabric
– Up to 1.3 Tbit/s (full duplex) with 11 line cards (up to 120 Gbit/s per slot)
MX960 components
[Front view: SCB (Switch Control Board), RE (Routing Engine), control panel, upper fan tray, MPC line cards, lower fan tray, cable management, air intake.]
MX960 – rear
[Rear view: cooling, power supplies, grounding.]
MX480 platform
8-slot chassis
Dimensions
– Height: 8RU (about 1/6 of a rack), depth: <800 mm
Reliable hardware
– Passive midplane
– Redundant control modules (Routing Engines)
– Redundant switch fabric modules (1+1)
– Distributed switching subsystem
– Redundant cooling and power subsystems
Power and cooling
– Side-to-side airflow
– Redundant fans
– Up to 4 power supplies (2+2 DC, 2+2 AC)
– Total power consumption ~2800 W
Capacity
– 8 slots, 2 of which are for switch fabric / control modules
– Up to 720 Gbit/s (full duplex) with 6 line cards
• Common hardware base with the MX960
– Same RE/SCB complex
– Same DPCs
• Same JunOS software
• RE options
– 1.3 GHz CPU with 2 GB
– 2 GHz CPU with 4 GB
Платформа MX240
• Шасси с 4-мя отсеками (2+2 or 3+1) • Общая с MX960 аппаратная база
• Размеры • Единый комплекс RE/SCB
– Высота: 5RU, Глубина: <800 мм • Одинаковые DPC
• Надежная аппаратура
– Пассивная Mid-Plane • Общее ПО JunOS
– Резервируемые модули управления • Варианты комплектации RE
(в конфигурации 2+2) • 1.3GHz процессор с 2GB
– Резервируемые модули коммутации (1+1)
– Распределённая подсистема коммутации • 2 GHz процессор с 4GB
– Резервирование подсистемы охлаждения и
электропитания
• Электропитание и охлаждение
– Вентиляция по направлению от боковой стенки к
боковой стенке
– Резервирование вентиляторов
– Блоки питания 1+1 DC или AC
– Вводы питания находятся сзади
• Емкость
– 4 отсека
• 1 процессор+3 линейные карты или
• 2 процессора и 2 линейные карты
– До 360Gbps (полный дуплекс) с 3-х карт
MX80 platform – a new Trio-based MX chassis
Fixed and modular versions
Performance: 70+ Gbit/s
Modular version: 2 MIC slots, SyncE, 1588
Fixed version: 48x10/100/1000
[Diagram: the switch fabric interconnects the PFEs on the WAN-facing DPCs; switching is redundant independently of the control modules.]
A few facts about the RE
• The RE (Routing Engine) is a dedicated card in the MX router that performs all control functions
• The RE is connected to the forwarding plane by a dedicated GE link
• The RE also has connections to the line cards (serial and GE)
• The RE is a combination of a CPU (usually Intel architecture), RAM and management interfaces (console, management Ethernet)
• The built-in Ethernet interface is not used for packet forwarding!
MX line cards
• MPC port concentrators
• DPC port concentrators
• DPCE-X card
• DPCE-R card
• MX-FPC modules
• MS-DPC services card
MX MPC variants
Service density and performance (QoS functionality increases from port queuing to VLAN queuing; positioning from cost-optimized Ethernet through enhanced Ethernet / MP BBE to advanced services edge):
– MPC1: port queuing, 32K IFLs, 30 Gbit/s
– MPC1-Q: 32K IFLs, 128K ingress/egress queues, 30 Gbit/s
– MPC2: port queuing, 64K IFLs, 60 Gbit/s
– MPC2-Q: 64K IFLs, 256K ingress/egress queues, 60 Gbit/s
– MPC2-EQ: VLAN queuing, 64K IFLs, 512K queues, 60 Gbit/s
MX MPC-3D – new interface cards
Edge services with 3D scalability: business, residential and mobile services; video, voice, SBC, IPS; unmatched quality.
MIC-3D-20GE-SFP
20 ports of 10/100/1000 Mbit/s SFP
MIC-3D-2XGE-XFP
2 ports of 10GE XFP; selectable WAN or LAN-PHY framing
MIC-3D-4XGE-XFP
4 ports of 10GE XFP; selectable WAN or LAN-PHY framing
Important: not supported in MPC1 cards
MIC-3D-40GE-TX
40 ports of 10/100/1000 Mbit/s RJ-45; a single MIC occupies both MIC slots of an MPC
MPC-3D-16XGE-SFPP interface card
• The industry's highest 10GE density per router slot
• Installs in the MX960, MX480 and MX240
• Fixed architecture
– 16 ports of 10GE SFP+
– Built on 4 Junos Trio PFEs
– Throughput of 120 Gbit/s per slot when all switch fabrics are used (80 Gbit/s per slot with SCB redundancy)
• Functionality similar to the MPC2 cards
– LAN-PHY framing only
• Typical applications
– Carrier Ethernet aggregation
– Video distribution networks
– Data center core & aggregation
– Business edge
JUNOS Trio: network instruction set processor (NISP)
General-purpose processors are very flexible, but offer low performance, density and power efficiency.
[Chart axes: number of connections (fixed and mobile users) vs. number of services (quality of experience, in-line services).]
MX and JUNOS Trio: unmatched scalability and flexibility
A single operating system and a single hardware family
[Chart: capacity (economics, returns) vs. 10GE port density – Juniper MX80: 80 Gbit/s, MX240: 480 Gbit/s, MX480: 1.4 Tbit/s, MX960: 2.6 Tbit/s; 10GE port densities of 8, 24, 72 and 132.]
16x10GE card
Trio PFE performance is 70 Gbit/s, roughly 55 Mpps. The performance is split evenly between the fabric-facing interface and the physical interfaces.
[Diagram: four PFEs, each with an MQ and LU chip serving 4x10GE ports (ports 0–3, 4–7, etc.) and a fabric interface.]
[ACX hardware block diagram: Wintegra NP, BRCM PFE, 16xT1/E1, 1GE and 2x10GE ports, Timing FPGA, Service FPGA.]
ACX 1000
ACX-1000 (FORTIUS-F)
Highlights
Lower-cost member of the ACX family
Provides 12 x 1GE and 8 x T1/E1 ports
Offers dual DC inputs (feed redundancy)
PICs:
8xT1/E1 circuit emulation PIC [built-in PIC 0]
8x1GE copper [built-in PIC 1]
4x1GE RJ45/SFP combo [built-in PIC 2]
ACX 2000
ACX-2000 Chassis (Fortius-G)
[Front panel: DC inlet, RS-232 console port, management port, DE-15 alarm port; FPC0 PIC1 – 6x1GE Cu + 2x1GE PoE; FPC0 PIC2 – 2x1GE SFP; SFP0/SFP1 and SFP+0/SFP+1 ports. Front and rear views.]
ACX-2000 Chassis (Fortius-G)
Highlights
Provides 16xTDM, 12xGE and 2x10GE interfaces
Provides two 65 W PoE ports capable of powering all 5 classes of IEEE 802.3af PDs (powered devices)
PoE is offered on ports 0/1/3 and 0/1/7
Offers a dual-input DC power supply
PICs
16xT1/E1 circuit emulation PIC [built-in PIC 0]
8x1GE copper (PoE on 2 ports) [built-in PIC 1]
2x1GE SFP [built-in PIC 2]
2x10GE SFP+ [built-in PIC 3]
ACX 4000
ACX4000 (FORTIUS M) – [12.3R1]
[Front panel: MIC0, MIC1, PSU0, PSU1, USB 2.0, MGMT, console, BITS, GPS, DE-15 alarm, SFP, SFP+, fan tray.]
ACX4000 (FORTIUS M) – [12.3R1]
Highlights
2.5 RU chassis with a number of configurable plug-in modules
o The fixed portion of the chassis is 8xGE combo ports
Support for two modular power supplies offering 1+1 power supply redundancy
o The two power supplies work in load-sharing mode
o When one fails, the other takes over completely
Offers AC and DC inputs with a PoE option
PICs
8x1GE combo [built-in PIC 0]
2x1GE SFP [built-in PIC 1]
2x10GE SFP+ [built-in PIC 2]
2 pluggable MIC slots
ACX 4000 – PLUGGABLE MICs
6x1GE RJ45/SFP MIC [12.3R1]
Height: 1 RU / 1 RU / 2.5 RU
[Hardware block diagram: RE CPU and PFE CPU, T1/E1 ports, telecom timing module, BRCM PFE ASIC, (X)SFP and RJ45 Ethernet PHYs, Service FPGA.]
PFE HARDWARE OVERVIEW …
* HQoS in PFE-2
[Ingress pipeline: MAC → packet parser → VFP (pre-filtering) → L2/L3 switching → IFP (filtering); egress pipeline.]
• NNI
– Ethernet
• TTL processing
– Support for uniform and pipe modes
– No support for per-LSP TTL mode configuration
MPLS TRANSPORT – RESILIENCY
Protection mechanisms:
– Path protection: primary & secondary paths
– Local protection (FRR): 1:1 / fast-reroute, N:1 / facility backup (link protection, node-link protection)
Triggers:
– Link down
– Port-level CFM (10 ms)
– Protocol-level BFD
[Diagram: protected LSP with a primary path via P1–P2 and a secondary path via P3–P4.]
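A minimal RSVP LSP sketch combining path protection and local protection; the LSP and path names and the egress address are illustrative:

protocols {
    mpls {
        label-switched-path TO-PE2 {
            to 10.255.0.2;
            fast-reroute;                 /* 1:1 local protection; (node-)link-protection would request facility backup instead */
            primary PRIMARY-PATH;
            secondary SECONDARY-PATH {
                standby;                  /* pre-established secondary path */
            }
        }
        path PRIMARY-PATH;
        path SECONDARY-PATH;
    }
}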
MPLS SERVICES …
• Ethernet PW
– Supported VC types:
• Type 4 – tagged mode
• Type 5 – raw mode
– VLAN ID rewrites are supported at ingress & egress
• Only the outer tag can be rewritten in the case of double-tagged packets
• PW signaling
– LDP – l2circuit
– BGP – l2vpn
• Support for control-word and no-control-word signaling and negotiation
• No support for family TCC
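A minimal LDP-signaled Ethernet pseudowire (l2circuit) sketch in tagged mode; the neighbor address, interface, VLAN and VC ID are illustrative:

interfaces {
    ge-0/1/0 {
        vlan-tagging;
        encapsulation vlan-ccc;
        unit 600 {
            encapsulation vlan-ccc;
            vlan-id 600;
        }
    }
}
protocols {
    l2circuit {
        neighbor 10.255.0.2 {
            interface ge-0/1/0.600 {
                virtual-circuit-id 600;
                /* the control word is negotiated by default; add no-control-word to disable it */
            }
        }
    }
}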
MPLS SERVICES – RESILIENCY …
PW protection mechanisms:
– Protect interface
– AC redundancy
– Redundant PW (cold standby or hot standby mode)
Triggers:
– CFM (10 ms)
– Ethernet PW (connection protection)
– LSP down
– PW down
[Diagram: CE1 – PE1 with a primary PW via P1 to PE2 and a backup ("protect") PW via P2 to PE3, both reaching CE2.]
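A minimal redundant-pseudowire sketch with a hot-standby backup neighbor; addresses, interface and VC IDs are illustrative:

protocols {
    l2circuit {
        neighbor 10.255.0.2 {                 /* primary PE */
            interface ge-0/1/0.600 {
                virtual-circuit-id 600;
                backup-neighbor 10.255.0.3 {  /* backup PE */
                    virtual-circuit-id 600;
                    hot-standby;              /* pre-signal the backup PW */
                }
            }
        }
    }
}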
MPLS SERVICES …
* full match routes can also be added to the LPM table, so the
max number of such routes will be 20K
CONFIGURATION
[Diagram: ingress path → MMU → egress pipeline: EFP (512 entries, 4 groups of 128), egress processing, egress rewrites → egress path → MAC.]
firewall {
    family mpls {
        filter <filter-name> {
            term <term-name> {
                from {
                    exp <value>;
                }
                then {
                    discard;
                    count <counter-name>;
                    policer <policer-name>;
                    loss-priority <value>;
                    forwarding-class <value>;
                    three-color-policer (single-rate | two-rate) <policer-name>;
                }
            }
        }
    }
}
* Note: specifying an action with loss-priority/forwarding-class defines an MF classifier, and the MF classifier takes precedence over CoS BA/fixed classification.
Firewall – ccc family (supported match and actions)
Configuration:
firewall {
    family ccc {
        filter <filter-name> {
            term <term-name> {
                from {
                    NONE
                }
                then {
                    count <counter-name>;
                    loss-priority <value>;
                    forwarding-class <value>;
                    discard;
                    policer <policer-name>;
                    three-color-policer (single-rate | two-rate) <policer-name>;
                }
            }
        }
    }
}
Firewall – any family (supported match and actions)
Configuration:
firewall {
    family any {
        filter <filter-name> {
            term <term-name> {
                from {
                    NONE
                }
                then {
                    count <counter-name>;
                    loss-priority <value>;
                    forwarding-class <value>;
                    discard;
                    policer <policer-name>;
                    three-color-policer (single-rate | two-rate) <policer-name>;
                }
            }
        }
    }
}
* Note: specifying an action with loss-priority/forwarding-class defines an MF classifier, and the MF classifier takes precedence over CoS BA/fixed classification.
Applying the firewall filter
The following filter families can be applied at the IFF level (input and/or output); an attachment sketch follows the port-mirroring example:
• inet
• ccc
• mpls
Port mirroring is configured under [edit forwarding-options]:
port-mirroring {
    input {
        rate 1;
    }
    family inet {
        output {
            interface ge-0/1/1.0 {
                next-hop 20.20.20.3;
            }
        }
    }
}
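A minimal attachment sketch; the interface, unit and filter names are illustrative:

interfaces {
    ge-0/1/0 {
        unit 0 {
            family inet {
                filter {
                    input INET-MF-IN;      /* illustrative filter names */
                    output INET-OUT;
                }
            }
        }
    }
}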
Scale Limits
Inbound scale:
– Max terms (family inet/ccc): 249 [IFP]
– Max terms (family mpls): 124 [IFP]
– Max terms (family any): 98 [IFP], 254 [VFP]
– ARP policers: 63 [IFP]
Outbound scale:
– Max terms (family inet/ccc): 126 [EFP]
– Max terms (family any): 47 [EFP]
Note:
A filter with default or physical-interface-specific semantics consumes only one hardware instance if it is programmed in the IFP. Multiple attachments of the same default-semantics filter use the VFP space (one entry per attachment; max 254 entries / 254 attachments).
System overview
– By default, 4 forwarding classes are defined and 4 egress queues are provisioned for each port.
– The PLP can take one of 3 possible values (low, medium-high or high).
– The system has an internal buffer of 2 MB, and buffer limits are enforced per egress queue.
CoS and firewall on Fortius F/G platforms
Ingress features:
– BA classifiers
– Fixed classifiers
– MF filters / policers
Egress features:
– WRED
– Queue scheduling
– Queue shaping
– Queue statistics
– Port shaping
– Egress BA rewrites
– inet-prec/DSCP classifiers derive the FC and PLP from the IP precedence ToS/DSCP code point in the IP header of the packet.
– On Fortius the same hardware table is used to implement both inet-prec and DSCP classification.
– inet-prec/DSCP classifiers are bound at the port (IFD), unlike traditional Junos platforms where they are bound to logical interfaces (IFLs).
[edit class-of-service]
classifiers {
    (inet-prec | dscp) <classifier-name> {
        forwarding-class <fc> loss-priority [low | high] code-points <>;
    }
}
– ieee-802.1p classifiers are bound at the port (IFD), unlike traditional Junos platforms where they are bound to logical interfaces (IFLs).
– ieee-802.1p has a lower precedence than the inet-prec/DSCP and EXP classifiers, i.e. when both an inet-prec and an ieee-802.1p classifier are configured on an IFD, the FC and PLP of an IP routed packet are derived from the inet-prec ToS byte and not from the 802.1p bits.
Behavior Aggregate (BA) Classifiers – ieee-802.1 Config
1) Define the classifier
[edit class-of-service]
classifiers {
    ieee-802.1 <classifier-name> {
        forwarding-class <fc> loss-priority [low | high] code-points <>;
    }
}
– EXP classifiers derive the FC and PLP based on the EXP bits.
[edit class-of-service]
classifiers {
exp <classifier-name> {
forwarding-class <fc> loss-priority [low | high] code-points <>;
}
}
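A minimal binding sketch; the interface, unit and classifier names are illustrative, and the standard Junos binding syntax under class-of-service interfaces is assumed (with the ACX caveat above that inet-prec/DSCP and 802.1p classifiers take effect for the whole port):

[edit class-of-service]
interfaces {
    ge-0/1/5 {
        unit 0 {
            classifiers {
                dscp MY-DSCP;     /* illustrative classifier names */
                exp MY-EXP;
            }
        }
    }
}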
RE:
Use the following commands to see what classifiers are actually sent to the kernel by cosd:
show class-of-service forwarding-table classifier
show class-of-service forwarding-table classifier mapping
PFE:
Use the following commands to see what classifiers are actually bound in the PFE and the hardware resources that are used:
show ifl brief
show ifd brief
show cos classifier
show cos classifier binding
show cos halp classifier [dot1p | exp]
debug cos halp-acx classifier
Fixed Classifiers
– Fixed classifiers assign the FC for the packet based on the logical interface the packet is received on.
– The PLP cannot be configured and is always set to low.
[edit class-of-service]
interfaces {
    <interface-name> {
        unit <> {
            forwarding-class <fc>;
        }
    }
}
– MF classifiers assign the FC and PLP for the packet based on supported fields in the
packet header.
– Policers are attached to IFLs and police traffic based on supported packet fields.
– MF classifiers and policers are configured through the “firewall” hierarchy and will be
described in detail in the Firewall section.
– Please refer to the Firewall TOI for more details about MF classifiers and policers.
Forwarding class to queue mapping
• Up to eight forwarding classes can be defined globally.
• Four forwarding classes are defined by default (see below CLI o/p).
• There is a one-to-one mapping between the forwarding class and the queue number
which is displayed using the following command.
regress@fortius-g11# run show class-of-service forwarding-class
Forwarding class                ID  Queue  Restricted queue  Fabric priority  Policing priority  SPU priority
best-effort                      0      0                 0              low             normal           low
expedited-forwarding             1      1                 1              low             normal           low
assured-forwarding               2      2                 2              low             normal           low
network-control                  3      3                 3              low             normal           low
• The definition of the forwarding class specifies the queue id.
More forwarding classes can be defined by specifying a queue for the forwarding class
as below:
regress@fortius-g11# set class-of-service forwarding-classes class
new_fwding_class queue-num 4
More than one forwarding class can use the same queue:
regress@fortius-g11# run show class-of-service forwarding-class
Forwarding class ID Queue Restricted queue Fabric priority Policing priority SPU priority
best-effort 0 0 0 low normal low
expedited-forwarding 1 1 1 low normal low
assured-forwarding 2 2 2 low normal low
network-control 3 3 3 low normal low
new_fwding_class 4 7 3 low normal low
new_fwding_class2 5 7 3 low normal low
new_fwding_class3 6 7 3 low normal low
new_fwding_class4 7 7 3 low normal low
Egress Buffer Management
- Fortius has a total of 2MB of internal buffers shared across all the egress
queues of the Ethernet ports of the system.
- Each egress queue has access to reserved and shared space, the former
being the guaranteed amount of buffers for the particular queue and
the latter being the buffers shared across all the ‘active’ queues of the
system.
- The shared space for a particular queue is not guaranteed, but Fortius
uses an intelligent buffer management algorithm which tries to ensure
that all ‘active’ queues get fair access to the shared space.
- By default, each egress Ethernet port has access to 100 us of reserved buffer space, which is distributed across its egress queues.
- The default scheduler-map allocates 95% of the port's buffer space to best-effort and 5% to network-control.
- The JunOS buffer-size configuration allows the user to configure the
amount of reserved space per egress queue.
Egress Buffer Management Config
1) Define the buffer-size for a particular scheduler using the appropriate knob. The
exact and temporal knobs turn off the shared buffer space usage. The percent
knob allocates a greater share of the port’s 100us reserved space to the
configured queue
[edit class-of-service]
regress@fortius-f-svl6# set schedulers s1 buffer-size ?
Possible completions:
exact Enforce exact buffer size
percent Buffer size as a percentage (0..100)
> remainder Remainder of buffer size available
temporal Buffer size as temporal value (microseconds)
2) Apply the scheduler map to the interface
[edit class-of-service]
regress@fortius-f-svl6# set interfaces ge-0/1/5 scheduler-map sm1
Egress Buffer Management PFE debug
1) When the buffer configuration is changed, the current available counts are
printed on the PFE console
[Dec 29 22:04:31.855 LOG: Debug] bcm563xx_mmu_dump: res_cell: 1564, res_pkts: 527,
shr_cell: 14820, shr_pkts:5617
[Dec 29 22:04:31.855 LOG: Debug] bcm563xx_mmu_dump: res_cell: 1560, res_pkts: 526,
shr_cell: 14824, shr_pkts:5618
[Dec 29 22:04:31.872 LOG: Debug] bcm563xx_mmu_dump: res_cell: 1652, res_pkts: 560,
shr_cell: 14732, shr_pkts:5584
[Dec 29 22:04:31.872 LOG: Debug] bcm563xx_mmu_dump: res_cell: 1656, res_pkts: 561,
shr_cell: 14728, shr_pkts:5583
2) If the system runs out of buffers, an error message is printed on the PFE console. Also monitor /var/log/messages for error debugs.
FFEB(fortius-f-svl6 vty)#
[Dec 29 22:05:46.443 LOG: Debug] bcm563xx_mmu_dump: res_cell: 1564, res_pkts: 527, shr_cell:
14820, shr_pkts:5617
[Dec 29 22:05:46.444 LOG: Debug] bcm563xx_mmu_dump: res_cell: 1560, res_pkts: 526, shr_cell:
14824, shr_pkts:5618
[Dec 29 22:05:46.445 LOG: Err] Out of buffers. req cells: 48828, req pkts: 18328, avail cells:
14824, avail pkts: 5618
[Dec 29 22:05:46.445 LOG: Err] ACX_COS_HALP(acx_cos_bind_sched_map_ifd:443): Delay buffer bind
failed for ifd ge-0/1/5 queue num 0
[Dec 29 22:05:46.445 LOG: Err] Delay buffer bind failed for ifd ge-0/1/5queue num 0
[Dec 29 22:05:46.446 LOG: Err] ACX_COS_HALP(acx_cos_bind_scheduler_map:526): Bind sched map failed
[Dec 29 22:05:46.446 LOG: Err] Bind sched map failed!
[Dec 29 22:05:46.446 LOG: Err] COS(cos_final_scheduler_bind_add_action:953): Platform failed to
bind scheduler map 49552 to element.cos_element_type 2 index 141)
[Dec 29 22:05:46.446 LOG: Debug] bcm563xx_mmu_dump: res_cell: 1560, res_pkts: 526, shr_cell:
14824, shr_pkts:5618
Scheduling Config
3) Configure the shaping-rate (PIR) on a scheduler
[edit class-of-service]
regress@fortius-f-svl6# set schedulers s1 shaping-rate ?
Possible completions:
<rate> Shaping rate as an absolute rate (3200..160000000000 bits per second)
percent Shaping rate as a percentage (1..100)
Apply the scheduler map to the interface:
[edit class-of-service]
regress@fortius-f-svl6# set interfaces ge-0/1/5 scheduler-map sm1
Scheduling
- Fortius has a WDRR scheduler across the egress queues in the system
- The CIR (guaranteed rate) for each of the queues can be configured using
the transmit-rate JunOS CLI
- The PIR (peak rate) for each of the queues can be configured using the
shaping-rate JunOS CLI. In addition, PIR can be configured on an
aggregate basis at the interface level.
Scheduling Config
1) Configure the transmit-rate (CIR) on a scheduler. The exact
knob sets the PIR = CIR
[edit class-of-service]
regress@fortius-f-svl6# set schedulers s1 transmit-rate ?
Possible completions:
<rate> Transmit rate as rate (3200..160000000000 bits per second)
exact Enforce exact transmit rate
percent Transmit rate as percentage (0..100)
> remainder Remainder available
[edit class-of-service]
regress@fortius-f-svl6#
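For completeness, a minimal sketch of the scheduler map sm1 that the bindings above refer to; the scheduler/forwarding-class pairings are illustrative:

[edit class-of-service]
scheduler-maps {
    sm1 {
        forwarding-class best-effort scheduler s1;
        forwarding-class network-control scheduler s2;   /* s2 would be a second, similarly defined scheduler */
    }
}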
Scheduling Config – Interface shaper
1) A shaper can also be configured directly on the interface as shown below:
[edit class-of-service]
regress@fortius-f-svl6# show
interfaces {
ge-0/1/5 {
shaping-rate 450m;
}
}
Egress Rewrites
– The FC and PLP are rewritten into DSCP/ToS code-points of the packet IP
header using this feature.
[edit class-of-service]
rewrite-rules {
[dscp|inet-prec] <classifier-name> {
forwarding-class <fc> loss-priority [low | high] code-points <>;
}
}
– ieee-802.1p rewrite rules map the FC and PLP into the 802.1p bits in the VLAN tag.
– The 802.1p bits can be written only on the outer VLAN tag.
[edit class-of-service]
rewrite-rules {
ieee-802.1 <classifier-name> {
forwarding-class <fc> loss-priority [low | high] code-points <>;
}
}
– EXP rewrite rules map the FC and PLP into the EXP bits in the MPLS label.
– EXP rewrite-rules are applied on the egress logical interface
(IFL).
[edit class-of-service]
rewrite-rules {
exp <classifier-name> {
forwarding-class <fc> loss-priority [low | high] code-points <>;
}
}
RE:
Use the following commands to see what rewrite rules are actually sent to the kernel by cosd:
show class-of-service forwarding-table rewrite-rules
show class-of-service forwarding-table rewrite-rules mapping
PFE:
Use the following commands to see what rewrite rules are actually bound in the PFE and the hardware resources that are used:
show ifl brief
show ifd brief
show cos rewrite
show cos rewrite binding
show cos halp rewrite
debug cos halp-acx rewrite
Egress Policing and Filters