OpenDaylight NetVirt for OpenStack
Andre Fredette
Red Hat
March 22, 2016
Agenda
● ODL For OpenStack Overview
● ODL/OpenStack Demo
Links
● OVSDB NetVirt Project
https://wiki.opendaylight.org/view/OVSDB_Integration:Main
● Demo
https://wiki.opendaylight.org/view/OVSDB_Integration:Main#Getting_Started_with_OpenDaylight_OVSDB_Plugin_Network_Virtualization
ODL for OpenStack Overview
OpenDaylight (ODL)
● Solves challenges with the default OpenStack Neutron OVS plugin
○ Ease of use
○ Scale
○ Resiliency
● True SDN “platform”
○ Supports future innovation
○ Application support
○ Multi-vendor networks
(Diagram: ODL controlling OVS switches and a Layer 2 Gateway)
OpenStack and OpenDaylight
(Diagram: Neutron driving OVS and VMs on each compute node)
Network View
(Diagram: control nodes and compute nodes connected over the data network)
HA View
(Diagram: multiple control nodes running Neutron and multiple control nodes running OpenDaylight, fronted by a load balancer (HAProxy) and connected over the data network)
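The HA view fronts the Neutron API instances with HAProxy. A minimal, hypothetical haproxy.cfg sketch of that front end — the port is Neutron's default API port, and the node names and addresses are assumptions for illustration, not taken from this deck:

```
# Hypothetical sketch only: balance the Neutron API (default port 9696)
# across two control nodes. Names and addresses are illustrative.
frontend neutron_api
    bind *:9696
    mode tcp
    default_backend neutron_nodes

backend neutron_nodes
    mode tcp
    balance roundrobin
    server control1 192.168.254.11:9696 check
    server control2 192.168.254.12:9696 check
```

A real deployment would add health-check tuning and a matching front end for the OpenDaylight REST port.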
Red Hat OpenDaylight Focus
● Neutron
○ Neutron REST API
○ Neutron Service
● OVSDB NetVirt as a Neutron Service
Provider
● SFC for Service Function Chaining
● Southbound Protocols
○ OVSDB
○ OpenFlow
● OVS Switches and HW Gateways
Red Hat Focus Areas
OVSDB NetVirt
● The OVSDB NetVirt Project Includes
○ OVSDB Southbound Protocol
○ NetVirt network virtualization
● Current implementation of NetVirt
○ Focused on supporting
■ OpenStack Neutron API (Northbound)
■ Open vSwitch switches (Southbound)
● Future Plans
○ Support NetVirt as a separate application
○ Make NetVirt more
■ Scalable
■ Modular
■ Extensible
NetVirt Logical Flow Pipeline
(Diagram: pipeline of OpenFlow tables beginning with Table 0, the Classifier, which handles tenant network ingress/egress)
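The same pipeline can be read off the flow dumps later in this deck, where default flows chain goto_table hops from table 0 through table 110. A small Python sketch of that traversal — the table roles are my inference from those dumps, not the project's official table map:

```python
# Sketch of the NetVirt OpenFlow pipeline as a chain of goto_table hops.
# Table roles below are inferred from the flow dumps later in this deck;
# treat the names as illustrative, not an official NetVirt table map.
PIPELINE = {
    0: "classifier",        # tag traffic with tun_id/REG0; drop the rest
    20: "arp responder",    # answer ARP for the gateway and DHCP IPs
    30: "pass-through",
    40: "egress acl",       # e.g. allow DHCP (udp 68 -> 67)
    50: "pass-through",
    60: "l3 routing",       # rewrite src MAC, dec_ttl for routed traffic
    70: "l3 forwarding",    # set dst MAC for the destination IP
    80: "pass-through",
    90: "pass-through",
    100: "ingress acl",     # allow traffic destined to the tenant subnet
    110: "l2 forwarding",   # output to port(s), flood, or drop
}

def walk_pipeline(start=0):
    """Follow the default goto_table chain from `start` to the end."""
    tables = sorted(PIPELINE)
    return tables[tables.index(start):]
```

Every packet that is not dropped walks this chain; specific flows short-circuit individual tables but always hand off to a later one.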
Topology
External: VB; Internal: 192.168.56.0/24
(Diagram: odl31-control hosts the router-node and vmvx1 (10.100.5.3, floating IP 192.168.56.10, port tap883f9022-bd); odl32-compute hosts vmvx2 (10.100.5.4, floating IP 192.168.56.11, port tap5d62515a-be); each node attaches to the external network via eth2 and patch-ext, and the nodes peer over tunnel ports vxlan-192.168.254.32 and vxlan-192.168.254.31)
Neutron Commands (1 of 2)
source openrc admin admin
os_addnano.sh:
nova flavor-create m1.nano auto 64 0 1
os_addadminkey.sh:
nova keypair-add --pub-key ~/.ssh/id_rsa.pub admin_key
os_addextnetrtr.sh:
neutron net-create ext-net --router:external --provider:physical_network public --provider:network_type flat
neutron subnet-create --name ext-subnet --allocation-pool start=192.168.56.9,end=192.168.56.14 \
    --disable-dhcp --gateway 192.168.56.1 ext-net 192.168.56.0/24
Neutron Commands (2 of 2)
os_addvms.sh:
nova boot --poll --flavor m1.nano --image $(nova image-list | grep 'uec\s' | awk '{print $2}' | tail -1) \
    --nic net-id=$(neutron net-list | grep -w vx-net | awk '{print $2}') vmvx1 \
    --availability_zone=nova:odl31 --key_name admin_key
nova boot --poll --flavor m1.nano --image $(nova image-list | grep 'uec\s' | awk '{print $2}' | tail -1) \
    --nic net-id=$(neutron net-list | grep -w vx-net | awk '{print $2}') vmvx2 \
    --availability_zone=nova:odl32 --key_name admin_key
os_addfloatingips.sh:
for vm in vmvx1 vmvx2; do
vm_id=$(nova list | grep $vm | awk '{print $2}')
port_id=$(neutron port-list -c id -c fixed_ips -- --device_id $vm_id | grep subnet_id | awk '{print $2}')
neutron floatingip-create --port_id $port_id ext-net
done;
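The grep/awk pipelines above all do the same thing: pull the UUID column out of the CLI's table output. A self-contained Python sketch of that extraction — the sample table and its UUIDs are invented for illustration:

```python
def first_cell(table, match):
    """Mimic `grep <match> | awk '{print $2}'` on OpenStack CLI table output:
    return the first |-delimited cell (the ID) of the first matching row."""
    for line in table.splitlines():
        if match in line:
            return [cell.strip() for cell in line.split("|") if cell.strip()][0]
    return None

# Invented sample of `nova list` output, for illustration only.
SAMPLE = """\
+--------------------------------------+-------+--------+
| ID                                   | Name  | Status |
+--------------------------------------+-------+--------+
| 11111111-2222-3333-4444-555555555555 | vmvx1 | ACTIVE |
| 66666666-7777-8888-9999-000000000000 | vmvx2 | ACTIVE |
+--------------------------------------+-------+--------+
"""

vm_id = first_cell(SAMPLE, "vmvx1")  # the UUID the script feeds to neutron
```

The shell version works because `awk '{print $2}'` picks the second whitespace-separated field, which in these tables is always the ID column.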
DevStack local.conf ODL_MODE for networking-odl
https://github.com/flavio-fernandes/networking-odl/blob/heliumkilo/devstack/settings#L27
ODL_MODE=${ODL_MODE:-allinone}
# ODL_MODE is used to configure how devstack works with OpenDaylight. You
# can configure this four ways:
# ODL_MODE=allinone
# Use this mode if you want to run ODL in this devstack instance. Useful
# for a single node deployment or on the control node of a multi-node
# devstack environment.
# ODL_MODE=compute
# Use this for the compute nodes of a multi-node devstack install.
# ODL_MODE=externalodl
# This installs the neutron code for ODL, but does not attempt to
# manage ODL in devstack. This is used for development environments
# similar to the allinone case except where you are using bleeding edge ODL
# which is not yet released, and thus don't want it managed by
# devstack.
# ODL_MODE=manual
# You're on your own here, and are enabling services outside the scope of
# the ODL_MODE variable.
odl31-control local.conf
disable_all_services
enable_service g-api g-reg key n-api n-crt n-obj n-cpu n-cond n-sch n-novnc n-xvnc n-cauth horizon neutron q-dhcp q-meta q-svc mysql rabbit
enable_service odl-server odl-compute
…
HOST_IP=192.168.254.31
HOST_NAME=odl31
…
enable_plugin networking-odl https://github.com/flavio-fernandes/networking-odl summit15demo
ODL_MODE=manual
NEUTRON_CREATE_INITIAL_NETWORKS=False
ODL_L3=True
PUBLIC_INTERFACE=eth2
odl32-compute local.conf
disable_all_services
enable_service n-cpu n-novnc neutron rabbit
enable_service odl-compute
…
HOST_IP=192.168.254.32
HOST_NAME=odl32
SERVICE_HOST_NAME=odl31
SERVICE_HOST=192.168.254.31
Q_HOST=$SERVICE_HOST
…
ODL_MODE=manual
ODL_L3=True
PUBLIC_INTERFACE=eth2
Demo Steps: Create Networks, L3 and Floating IPs
Individual steps:
1. source openrc admin admin
2. ../tools/os_addnano.sh: add a nano flavor for the VMs
3. ../tools/os_addadminkey.sh: add ssh keys for password-less logins to the tenant VMs
4. ../tools/os_addextnetrtr.sh: add the external and VXLAN networks and attach them to the router
5. ../tools/os_addvms.sh: launch two VMs, one on each compute node
6. ../tools/os_addfloatingips.sh: assign a floating IP to each VM
Or just use ../tools/os_doitall.sh. But it’s more fun to do each step and see what happens...
Topology: After Stacking
External: VB; Internal: 192.168.56.0/24
(Diagram: odl31-control with the router-node, and odl32-compute, each attached to the external network via eth2; no Neutron constructs yet)
OVSDB: After Stacking
sudo ovs-vsctl show
d9904cbd-34c7-48e2-b714-fb5d04a4d899
    Manager "tcp:192.168.254.31:6640"
        is_connected: true
    Bridge br-ex
        Controller "tcp:192.168.254.31:6653"
            is_connected: true
        fail_mode: secure
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth2"
            Interface "eth2"
    Bridge br-int
        Controller "tcp:192.168.254.31:6653"
            is_connected: true
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
Flows: After Stacking
sudo ovs-ofctl --protocols=OpenFlow13 dump-flows br-ex
cookie=0x0, duration=49.967s, table=0, n_packets=0, n_bytes=0, priority=0 actions=NORMAL
cookie=0x0, duration=49.967s, table=0, n_packets=4, n_bytes=452, dl_type=0x88cc actions=CONTROLLER:65535
Topology: After Adding Neutron Networks and Router
External: VB; Internal: 192.168.56.0/24
(Diagram: odl31-control (with the router-node) and odl32-compute now have patch-ext ports and peer over tunnel ports vxlan-192.168.254.32 and vxlan-192.168.254.31; eth2 attaches each node to the external network)
OVSDB: After Adding Neutron Networks and Router
sudo ovs-vsctl show
d9904cbd-34c7-48e2-b714-fb5d04a4d899
    Manager "tcp:192.168.254.31:6640"
        is_connected: true
    Bridge br-ex
        Controller "tcp:192.168.254.31:6653"
            is_connected: true
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-ext}
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth2"
            Interface "eth2"
    Bridge br-int
        Controller "tcp:192.168.254.31:6653"
            is_connected: true
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port patch-ext
            Interface patch-ext
                type: patch
                options: {peer=patch-int}
        Port "tapd0d15959-1f"
            Interface "tapd0d15959-1f"
                type: internal
        Port "vxlan-192.168.254.32"
            Interface "vxlan-192.168.254.32"
                type: vxlan
                options: {key=flow, local_ip="192.168.254.31", remote_ip="192.168.254.32"}
    ovs_version: "2.3.1"
Flows: After Adding Neutron Networks and Router (1 of 2)
sudo ovs-ofctl --protocols=OpenFlow13 dump-flows br-int
cookie=0x0, duration=35.009s, table=0, n_packets=7, n_bytes=558, in_port=1,dl_src=fa:16:3e:9f:82:6c actions=set_field:0x5dc->tun_id,load:0x1->NXM_NX_REG0[],goto_table:20 (DHCP port ingress)
cookie=0x0, duration=179.731s, table=0, n_packets=1, n_bytes=90, priority=0 actions=goto_table:20 (pipeline)
cookie=0x0, duration=35.011s, table=0, n_packets=0, n_bytes=0, priority=8192,in_port=1 actions=drop (drop everything else)
cookie=0x0, duration=34.793s, table=0, n_packets=0, n_bytes=0, tun_id=0x5dc,in_port=3 actions=load:0x2->NXM_NX_REG0[],goto_table:20 (tunnel ingress)
cookie=0x0, duration=180.247s, table=0, n_packets=16, n_bytes=1808, dl_type=0x88cc actions=CONTROLLER:65535 (LLDP punt)
cookie=0x0, duration=179.721s, table=20, n_packets=8, n_bytes=648, priority=0 actions=goto_table:30 (pipeline)
cookie=0x0, duration=29.644s, table=20, n_packets=0, n_bytes=0, priority=1024,arp,tun_id=0x5dc,arp_tpa=10.100.5.1 actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],set_field:fa:16:3e:30:19:de->eth_src,load:0x2->NXM_OF_ARP_OP[],move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],load:0xfa163e3019de->NXM_NX_ARP_SHA[],load:0xa640501->NXM_OF_ARP_SPA[],IN_PORT (ARP response for vxnet gw)
cookie=0x0, duration=29.574s, table=20, n_packets=0, n_bytes=0, priority=1024,arp,tun_id=0x5dc,arp_tpa=10.100.5.2 actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],set_field:fa:16:3e:9f:82:6c->eth_src,load:0x2->NXM_OF_ARP_OP[],move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],load:0xfa163e9f826c->NXM_NX_ARP_SHA[],load:0xa640502->NXM_OF_ARP_SPA[],IN_PORT (ARP response for vxnet DHCP namespace)
cookie=0x0, duration=179.715s, table=30, n_packets=8, n_bytes=648, priority=0 actions=goto_table:40 (pipeline)
cookie=0x0, duration=179.705s, table=40, n_packets=8, n_bytes=648, priority=0 actions=goto_table:50 (pipeline)
cookie=0x0, duration=35.165s, table=40, n_packets=0, n_bytes=0, priority=61012,udp,tp_src=68,tp_dst=67 actions=goto_table:50 (allow DHCP)
cookie=0x0, duration=179.695s, table=50, n_packets=8, n_bytes=648, priority=0 actions=goto_table:60 (pipeline)
Flows: After Adding Neutron Networks and Router (2 of 2)
cookie=0x0, duration=179.684s, table=60, n_packets=8, n_bytes=648, priority=0 actions=goto_table:70 (pipeline)
cookie=0x0, duration=29.657s, table=60, n_packets=0, n_bytes=0, priority=2048,ip,reg3=0x5dc,nw_dst=10.100.5.0/24 actions=set_field:fa:16:3e:30:19:de->eth_src,dec_ttl,set_field:0x5dc->tun_id,goto_table:70 (l3 src mac of tenant router)
cookie=0x0, duration=179.673s, table=70, n_packets=8, n_bytes=648, priority=0 actions=goto_table:80 (pipeline)
cookie=0x0, duration=29.578s, table=70, n_packets=0, n_bytes=0, priority=1024,ip,tun_id=0x5dc,nw_dst=10.100.5.2 actions=set_field:fa:16:3e:9f:82:6c->eth_dst,goto_table:80 (l3 forward to DHCP)
cookie=0x0, duration=179.656s, table=80, n_packets=8, n_bytes=648, priority=0 actions=goto_table:90 (pipeline)
cookie=0x0, duration=179.652s, table=90, n_packets=8, n_bytes=648, priority=0 actions=goto_table:100 (pipeline)
cookie=0x0, duration=179.640s, table=100, n_packets=8, n_bytes=648, priority=0 actions=goto_table:110 (pipeline)
cookie=0x0, duration=29.631s, table=100, n_packets=0, n_bytes=0, priority=1024,ip,tun_id=0x5dc,nw_dst=10.100.5.0/24 actions=goto_table:110 (allow subnet destined traffic)
cookie=0x0, duration=34.801s, table=110, n_packets=0, n_bytes=0, priority=8192,tun_id=0x5dc actions=drop (pipeline)
cookie=0x0, duration=179.615s, table=110, n_packets=1, n_bytes=90, priority=0 actions=drop (pipeline)
cookie=0x0, duration=34.848s, table=110, n_packets=0, n_bytes=0, priority=16384,reg0=0x2,tun_id=0x5dc,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=output:1 ({multi,broad}cast tunnel ingress)
cookie=0x0, duration=34.830s, table=110, n_packets=7, n_bytes=558, priority=16383,reg0=0x1,tun_id=0x5dc,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=output:1,output:3 ({multi,broad}cast)
cookie=0x0, duration=34.998s, table=110, n_packets=0, n_bytes=0, tun_id=0x5dc,dl_dst=fa:16:3e:9f:82:6c actions=output:1 (l2 forward to DHCP port)
Topology: After Adding VMs
External: VB; Internal: 192.168.56.0/24
(Diagram: odl31-control hosts the router-node and vmvx1 (10.100.5.3, floating IP 192.168.56.10, port tap883f9022-bd); odl32-compute hosts vmvx2 (10.100.5.4, floating IP 192.168.56.11, port tap5d62515a-be); each node attaches to the external network via eth2 and patch-ext, and the nodes peer over tunnel ports vxlan-192.168.254.32 and vxlan-192.168.254.31)
OpenStack Network Dashboard
ovsdb-ui
NFV/SFC
NFV
● Provider/Telco focus
● OPNFV
○ SFC
○ Open vSwitch for NFV
○ Apex (RDO-based installer)
● OpenStack
○ Tacker
● OpenDaylight
○ OVSDB NetVirt, SFC
● OVS
○ DPDK, VXLAN-GPE + NSH
SFC: The What and The Why
● A mechanism for overriding basic destination-based forwarding (policy-based routing).
● Causes packet flows to traverse the network via a path other than the one the routing table would choose.
● Bottom line: it can be used to “stitch” network services together to create a service chain.
○ Dynamic (per need)
○ Software defined (no human involved)
○ No new “cabling” in the existing network required (overlay based)
○ Addresses NFV needs
Haim Daniel, SFC Components
SFC Network Model
Workflow:
1) Onboard VNFD to Catalog
2) Instantiate 2 or more VNFs from Catalog
3) Invoke Tacker SFC API to chain them
(Diagram: Tacker (NFVO/VNFM with a VNFD catalog and an SFC API backed by an ODL sfc-driver) drives Heat, Nova, and Neutron (ODL plugin); the ODL Controller programs OVS on the compute nodes via OVSDB and optionally configures the VNFs (e.g. vRouter, DPI) using ODL netconf/yang)
Sridhar Ramaswamy and Tim Rozet, Tacker + SFC
Tacker + SFC Overview: Phase 2 (Direct networking-sfc API)
Workflow:
1) Onboard VNFD to Catalog
2) Instantiate 2 or more VNFs from Catalog
3) Invoke Tacker SFC API to chain them
4) Neutron sfc-driver (networking-sfc) invokes the networking-sfc API
(Diagram: Operator/OSS/BSS drives Tacker (NFVO/VNFM with a VNFD catalog and an SFC API backed by a networking-sfc driver), which calls Heat, Nova, and Neutron with the networking-sfc OVS driver; optional VNF config using ODL netconf/yang)
Sridhar Ramaswamy and Tim Rozet, Tacker + SFC
Tacker + SFC Overview: Phase 3 (networking-sfc + ODL)
Workflow:
1) Onboard VNFD to Catalog
2) Onboard NSD to Catalog referring to 2 or more VNFs and a VNFFGD describing the chain
3) Instantiate NSD
4) Neutron sfc-driver invokes the networking-sfc API
(Diagram: Operator/OSS/BSS drives Tacker (NFVO/VNFM/SFC API with a VNFD and NSD catalog and a neutron sfc-driver), which calls Heat, Nova, and Neutron (networking-sfc) with an ODL sfc driver; the ODL Controller programs OVS on Compute Node 1 and Compute Node 2, hosting VNFs such as a vRouter and DPI, via OVSDB and optionally configures them using ODL netconf/yang)
Sridhar Ramaswamy and Tim Rozet, Tacker + SFC
SFC Data Plane Components
● Service Classifier: Determines which traffic requires
service and forms the logical start of a service path
● Service Path: The actual forwarding path used to
realize a service chain
● Service Function Forwarder (SFF): Responsible for
delivering traffic received from the network to one or
more connected service functions according to
information in the network service header as well as
handling traffic coming back from the Service Function
(SF)
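The SFF behavior described above amounts to a lookup keyed on the NSH path state. A toy Python sketch — the path IDs, indices, and hop names are invented for illustration, not from any SFC implementation:

```python
# Toy SFF: forward on (Service Path ID, Service Index). Entries invented.
PATH_TABLE = {
    (42, 255): "sf-firewall",  # first hop of service path 42
    (42, 254): "sf-dpi",       # next hop after the firewall decrements SI
    (42, 253): "egress",       # end of the chain
}

def sff_forward(spi, si):
    """Deliver traffic to the next hop named by the NSH path state."""
    return PATH_TABLE.get((spi, si), "drop")

def sf_process(spi, si):
    """A service function decrements SI before handing the packet back."""
    return spi, si - 1

hop = sff_forward(42, 255)      # the classifier started the path at SI=255
spi, si = sf_process(42, 255)   # SI becomes 254; the next lookup hits sf-dpi
```

The point of the sketch: the SFF never inspects the payload; the (SPI, SI) pair alone determines the next hop, and each service function moves the packet along the path by decrementing SI.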
Network Service Header (NSH)
● NSH is a data-plane protocol that represents a service
path in the network
● IETF adopted protocol
● Two major components: path information and
metadata
○ Path information to direct packets without requiring per flow
configuration
○ Metadata is information about the packets and can be used for
policy
● NSH is added to packet via a classifier
● NSH is carried along the chain to services
Network Service Header (NSH)
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Base Header |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Service Path Header |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| |
~ Context Headers ~
| |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
● Base header: provides information about the service header and the payload protocol.
● Service Path Header: provides path identification and location within a path.
● Context headers: carry opaque metadata and variable length encoded information.
https://datatracker.ietf.org/doc/draft-ietf-sfc-nsh/
Service Path Header
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Service Path ID | Service Index |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
● Service Path Identifier (SPI): identifies a service path. Participating nodes MUST use this identifier for Service Function Path selection.
● Service Index (SI): provides location within the SFP. The first Classifier (i.e. at the boundary of the NSH domain) in the NSH Service Function Path SHOULD set the SI to 255; however, the control plane MAY configure the initial value of SI as appropriate (i.e. taking into account the length of the service function path). A Classifier MUST send the packet to the first SFF in the chain. The Service Index MUST be decremented by service functions or proxy nodes after performing the required services, and the new decremented SI value MUST be reflected in the egress NSH packet.
https://datatracker.ietf.org/doc/draft-ietf-sfc-nsh/
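The 32-bit Service Path Header above packs a 24-bit SPI next to an 8-bit SI, so it fits in a single network-order word. A Python sketch of the pack/parse and the SI decrement the draft text requires (illustrative, not an implementation of the draft):

```python
import struct

def pack_sph(spi, si):
    """Pack the 24-bit Service Path ID and 8-bit Service Index, network order."""
    assert 0 <= spi < (1 << 24) and 0 <= si < (1 << 8)
    return struct.pack("!I", (spi << 8) | si)

def unpack_sph(data):
    word, = struct.unpack("!I", data)
    return word >> 8, word & 0xFF

hdr = pack_sph(spi=42, si=255)       # a classifier SHOULD start SI at 255
spi, si = unpack_sph(hdr)
hdr = pack_sph(spi, si - 1)          # a service function decrements SI
```

Since SI occupies the low byte, the decrement is just integer arithmetic on the header word; the SPI bits are untouched along the whole path.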
VXLAN + NSH
IPv4 Packet
+----------+------------------------+---------------------+--------------+----------------+
|L2 header | IP + UDP dst port=4790 |VXLAN-gpe NP=0x4(NSH)|NSH, NP=0x1 |original packet |
+----------+------------------------+---------------------+--------------+----------------+
https://datatracker.ietf.org/doc/draft-ietf-sfc-nsh/
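The encapsulation above is a chain of "next protocol" indirections: the outer UDP port announces VXLAN-GPE, whose next-protocol field announces NSH, whose next-protocol field announces the original payload. A small Python sketch of that chain, using only the constants quoted on this slide:

```python
# Header chain from the slide: outer UDP to port 4790 carries VXLAN-GPE,
# whose Next Protocol 0x4 means "NSH follows"; the NSH Next Protocol 0x1
# means the original packet is IPv4.
VXLAN_GPE_PORT = 4790
NP_NSH = 0x4
NP_IPV4 = 0x1

def header_stack():
    """Return the encapsulation layers, outermost first."""
    return [
        ("l2", {}),
        ("ip+udp", {"dst_port": VXLAN_GPE_PORT}),
        ("vxlan-gpe", {"next_protocol": NP_NSH}),
        ("nsh", {"next_protocol": NP_IPV4}),
        ("original packet", {}),
    ]

layers = [name for name, _ in header_stack()]
```

A receiver peels each layer by reading the next-protocol value of the layer it just parsed, so no out-of-band configuration is needed to find the NSH header or the payload.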
OVS
● Currently using a very old patched version referred to as NSH V8.
● Work is underway to commit newer patches that have been adapted to fit the current OVS architecture and rely on VXLAN capabilities in the kernel.
○ Jiri Benc, Flavio Leitner, and Thomas Herbert are working on the patches.
○ The latest patch also has support for NSH over Ethernet.
Going Forward
● Need the NSH patches upstreamed
● Or can we use something other than NSH?
● Resolve the neutron-sfc and ODL SFC APIs
● ODL SFC isn’t as dynamic as it should be: changing any part of the chain rebuilds the whole chain.
● Is Tacker the VNF orchestrator to use, or should other options be used?
Thank you