Neutron
Release 18.1.0.dev178
1 Overview 3
1.1 Example architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1.1 Controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.1.2 Compute . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.1.3 Block Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.1.4 Object Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2 Networking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.1 Networking Option 1: Provider networks . . . . . . . . . . . . . . . . . . . . 5
1.2.2 Networking Option 2: Self-service networks . . . . . . . . . . . . . . . . . . . 6
Networking Option 2: Self-service networks . . . . . . . . . . . . . . . . . . . 35
4.3.4 Configure the Compute service to use the Networking service . . . . . . . . . 36
4.3.5 Finalize installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.4 Verify operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.4.1 Networking Option 1: Provider networks . . . . . . . . . . . . . . . . . . . . 40
4.4.2 Networking Option 2: Self-service networks . . . . . . . . . . . . . . . . . . . 41
5 Install and configure for Red Hat Enterprise Linux and CentOS 43
5.1 Host networking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
5.1.1 Controller node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Configure network interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Configure name resolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
5.1.2 Compute node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Configure network interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Configure name resolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
5.1.3 Verify connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
5.2 Install and configure controller node . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
5.2.1 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
5.2.2 Configure networking options . . . . . . . . . . . . . . . . . . . . . . . . . . 51
Networking Option 1: Provider networks . . . . . . . . . . . . . . . . . . . . . 52
Networking Option 2: Self-service networks . . . . . . . . . . . . . . . . . . . 56
5.2.3 Configure the metadata agent . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
5.2.4 Configure the Compute service to use the Networking service . . . . . . . . . 61
5.2.5 Finalize installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
5.3 Install and configure compute node . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
5.3.1 Install the components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
5.3.2 Configure the common component . . . . . . . . . . . . . . . . . . . . . . . . 62
5.3.3 Configure networking options . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Networking Option 1: Provider networks . . . . . . . . . . . . . . . . . . . . . 63
Networking Option 2: Self-service networks . . . . . . . . . . . . . . . . . . . 64
5.3.4 Configure the Compute service to use the Networking service . . . . . . . . . 65
5.3.5 Finalize installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
6.3.2 Configure the common component . . . . . . . . . . . . . . . . . . . . . . . . 86
6.3.3 Configure networking options . . . . . . . . . . . . . . . . . . . . . . . . . . 87
Networking Option 1: Provider networks . . . . . . . . . . . . . . . . . . . . . 87
Networking Option 2: Self-service networks . . . . . . . . . . . . . . . . . . . 88
6.3.4 Configure the Compute service to use the Networking service . . . . . . . . . 89
6.3.5 Finalize installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
External processes run by agents . . . . . . . . . . . . . . . . . . . . . . . . . . 121
8.2.2 ML2 plug-in . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Reference implementations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
8.2.3 Address scopes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
Accessing address scopes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
Backwards compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Create shared address scopes as an administrative user . . . . . . . . . . . . . . 131
Routing with address scopes for non-privileged users . . . . . . . . . . . . . . . 134
8.2.4 Automatic allocation of network topologies . . . . . . . . . . . . . . . . . . . 139
Enabling the deployment for auto-allocation . . . . . . . . . . . . . . . . . . . . 139
Get Me A Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
Validating the requirements for auto-allocation . . . . . . . . . . . . . . . . . . 142
Project resources created by auto-allocation . . . . . . . . . . . . . . . . . . . . 142
Compatibility notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
8.2.5 Availability zones . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
Use case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
Required extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
Network scheduler . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Router scheduler . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
L3 high availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
DHCP high availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
8.2.6 BGP dynamic routing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
Example configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
Prefix advertisement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
Operation with Distributed Virtual Routers (DVR) . . . . . . . . . . . . . . . . 164
IPv6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
High availability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
8.2.7 High-availability for DHCP . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
Demo setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
Prerequisites for demonstration . . . . . . . . . . . . . . . . . . . . . . . . . . 171
Managing agents in neutron deployment . . . . . . . . . . . . . . . . . . . . . . 171
Managing assignment of networks to DHCP agent . . . . . . . . . . . . . . . . 174
HA of DHCP agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
Disabling and removing an agent . . . . . . . . . . . . . . . . . . . . . . . . . 177
Enabling DHCP high availability by default . . . . . . . . . . . . . . . . . . . . 178
8.2.8 DNS integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
The Networking service internal DNS resolution . . . . . . . . . . . . . . . . . 178
8.2.9 DNS integration with an external service . . . . . . . . . . . . . . . . . . . . . 184
Configuring OpenStack Networking for integration with an external DNS service 184
Use case 1: Floating IPs are published with associated port DNS attributes . . . 185
Use case 2: Floating IPs are published in the external DNS service . . . . . . . . 191
Use case 3: Ports are published directly in the external DNS service . . . . . . . 197
Performance considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
Configuration of the externally accessible network for use cases 3b and 3c . . . . 210
8.2.10 DNS resolution for instances . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
Case 1: Each virtual network uses unique DNS resolver(s) . . . . . . . . . . . . 211
Case 2: DHCP agents forward DNS queries from instances . . . . . . . . . . . . 212
8.2.11 Distributed Virtual Routing with VRRP . . . . . . . . . . . . . . . . . . . . . 213
Configuration example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213
Known limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
8.2.12 Floating IP port forwarding . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
Configuring floating IP port forwarding . . . . . . . . . . . . . . . . . . . . . . 216
8.2.13 IPAM configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
The basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
Known limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
8.2.14 IPv6 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
Neutron subnets and the IPv6 API attributes . . . . . . . . . . . . . . . . . . . . 218
Project network considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . 220
Router support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
Advanced services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
Security considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223
OpenStack control & management network considerations . . . . . . . . . . . . 224
Prefix delegation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224
8.2.15 Neutron Packet Logging Framework . . . . . . . . . . . . . . . . . . . . . . . 228
Supported loggable resource types . . . . . . . . . . . . . . . . . . . . . . . . . 229
Service Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
Service workflow for Operator . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
Logged events description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
8.2.16 Macvtap mechanism driver . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
Example configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
Network traffic flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
8.2.17 MTU considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
Jumbo frames . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
Instance network interfaces (VIFs) . . . . . . . . . . . . . . . . . . . . . . . . . 242
8.2.18 Network segment ranges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
Why you need it . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
How it works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
Default network segment ranges . . . . . . . . . . . . . . . . . . . . . . . . . . 243
Example configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
Workflow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 244
Known limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
8.2.19 Open vSwitch with DPDK datapath . . . . . . . . . . . . . . . . . . . . . . . 248
The basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
Using vhost-user interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 249
Using vhost-user multiqueue . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
Known limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
8.2.20 Open vSwitch hardware offloading . . . . . . . . . . . . . . . . . . . . . . . . 251
The basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 251
Using Open vSwitch hardware offloading . . . . . . . . . . . . . . . . . . . . . 252
8.2.21 Native Open vSwitch firewall driver . . . . . . . . . . . . . . . . . . . . . . . 257
Configuring heterogeneous firewall drivers . . . . . . . . . . . . . . . . . . . . 258
Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
Enable the native OVS firewall driver . . . . . . . . . . . . . . . . . . . . . . . 258
Using GRE tunnels inside VMs with OVS firewall driver . . . . . . . . . . . . . 258
Differences between OVS and iptables firewall drivers . . . . . . . . . . . . . . 258
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
8.2.22 Quality of Service (QoS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
Supported QoS rule types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
L3 QoS support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
User workflow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
8.2.23 Quality of Service (QoS): Guaranteed Minimum Bandwidth . . . . . . . . . . 273
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
Placement pre-requisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
Nova pre-requisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
Neutron pre-requisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
Propagation of resource information . . . . . . . . . . . . . . . . . . . . . . . . 277
Sample usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
On Healing of Allocations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279
Debugging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
Links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
8.2.24 Role-Based Access Control (RBAC) . . . . . . . . . . . . . . . . . . . . . . . 283
Supported objects for sharing with specific projects . . . . . . . . . . . . . . . . 283
Sharing an object with specific projects . . . . . . . . . . . . . . . . . . . . . . 283
Sharing a network with specific projects . . . . . . . . . . . . . . . . . . . . . . 283
Sharing a QoS policy with specific projects . . . . . . . . . . . . . . . . . . . . 285
Sharing a security group with specific projects . . . . . . . . . . . . . . . . . . 286
Sharing an address scope with specific projects . . . . . . . . . . . . . . . . . . 287
Sharing a subnet pool with specific projects . . . . . . . . . . . . . . . . . . . . 288
Sharing an address group with specific projects . . . . . . . . . . . . . . . . . . 289
How the shared flag relates to these entries . . . . . . . . . . . . . . . . . . . . 290
Allowing a network to be used as an external network . . . . . . . . . . . . . . 292
Preventing regular users from sharing objects with each other . . . . . . . . . . 294
8.2.25 Routed provider networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . 294
Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
Example configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
Create a routed provider network . . . . . . . . . . . . . . . . . . . . . . . . . . 297
Migrating non-routed networks to routed . . . . . . . . . . . . . . . . . . . . . 303
8.2.26 Service function chaining . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 304
Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
8.2.27 SR-IOV . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
The basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 310
Using SR-IOV interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
SR-IOV with ConnectX-3/ConnectX-3 Pro Dual Port Ethernet . . . . . . . . . . 318
SR-IOV with InfiniBand . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
Known limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
8.2.28 Subnet pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
Why you need them . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
How they work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 321
Quotas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
Default subnet pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
8.2.29 Subnet onboard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
How it works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
8.2.30 Service subnets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
8.2.31 BGP floating IPs over l2 segmented network . . . . . . . . . . . . . . . . . . . 335
Configuring the Neutron API side . . . . . . . . . . . . . . . . . . . . . . . . . 336
The BGP agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
Setting-up BGP peering with the switches . . . . . . . . . . . . . . . . . . . . . 336
Setting-up physical network names . . . . . . . . . . . . . . . . . . . . . . . . 337
Setting-up the provider network . . . . . . . . . . . . . . . . . . . . . . . . . . 338
Setting-up the 2nd segment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 338
Setting-up the provider subnets for the BGP next HOP routing . . . . . . . . . . 339
Adding a subnet for VM floating IPs and router gateways . . . . . . . . . . . . . 340
Setting-up BGP advertizing . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
Per project operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
Cumulus switch configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
Verification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
8.2.32 Trunking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
Operation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343
Example configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
Using trunks and subports inside an instance . . . . . . . . . . . . . . . . . . . 349
Trunk states . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
Limitations and issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
8.2.33 Installing Neutron API via WSGI . . . . . . . . . . . . . . . . . . . . . . . . 350
WSGI Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
Neutron API behind uwsgi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 350
Neutron API behind mod_wsgi . . . . . . . . . . . . . . . . . . . . . . . . . . 351
Start Neutron RPC server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
Neutron Worker Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
8.3 Deployment examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
8.3.1 Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 352
Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
Networks and network interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . 354
8.3.2 Mechanism drivers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
Linux bridge mechanism driver . . . . . . . . . . . . . . . . . . . . . . . . . . 354
Open vSwitch mechanism driver . . . . . . . . . . . . . . . . . . . . . . . . . . 402
8.4 Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472
8.4.1 IP availability metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 472
8.4.2 Resource tags . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 473
Use cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
Filtering with tags . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
User workflow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 475
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
Future support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
8.4.3 Resource purge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 481
8.4.4 Manage Networking service quotas . . . . . . . . . . . . . . . . . . . . . . . 482
Basic quota configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 482
Configure per-project quotas . . . . . . . . . . . . . . . . . . . . . . . . . . . . 483
8.5 Migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487
8.5.1 Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 487
Database management command-line tool . . . . . . . . . . . . . . . . . . . . . 488
8.5.2 Legacy nova-network to OpenStack Networking (neutron) . . . . . . . . . . . 490
Impact and limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 490
Migration process overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . 491
8.5.3 Add VRRP to an existing router . . . . . . . . . . . . . . . . . . . . . . . . . 492
Migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
L3 HA to Legacy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 494
8.6 Miscellaneous . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495
8.6.1 Disable libvirt networking . . . . . . . . . . . . . . . . . . . . . . . . . . . . 495
libvirt network implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . 495
How to disable libvirt networks . . . . . . . . . . . . . . . . . . . . . . . . . . 496
8.6.2 neutron-linuxbridge-cleanup utility . . . . . . . . . . . . . . . . . . . . . . . . 497
Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
8.6.3 Virtual Private Network-as-a-Service (VPNaaS) scenario . . . . . . . . . . . . 497
Enabling VPNaaS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 497
Using VPNaaS with endpoint group (recommended) . . . . . . . . . . . . . . . 499
Configure VPNaaS without endpoint group (the legacy way) . . . . . . . . . . . 503
8.7 OVN Driver Administration Guide . . . . . . . . . . . . . . . . . . . . . . . . . . . . 506
8.7.1 OVN information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 506
8.7.2 Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507
8.7.3 Routing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 509
North/South . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 509
East/West . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 514
8.7.4 IP Multicast: IGMP snooping configuration guide for OVN . . . . . . . . . . . 516
How to enable it . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
OVN Database information . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517
Extra information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 517
8.7.5 OpenStack and OVN Tutorial . . . . . . . . . . . . . . . . . . . . . . . . . . . 518
8.7.6 Reference architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 518
Layout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 518
Networking service with OVN integration . . . . . . . . . . . . . . . . . . . . . 520
Accessing OVN database content . . . . . . . . . . . . . . . . . . . . . . . . . 522
Adding a compute node . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
Security Groups/Rules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 523
Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 525
Routers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 545
Instances . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 560
8.7.7 DPDK Support in OVN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 589
Configuration Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 589
Configuration Settings in compute hosts . . . . . . . . . . . . . . . . . . . . . . 589
8.7.8 Troubleshooting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 590
Launching VMs failure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 590
Multi-Node setup not working . . . . . . . . . . . . . . . . . . . . . . . . . . . 590
8.7.9 SR-IOV guide for OVN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 590
External ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 590
Environment setup for OVN SR-IOV . . . . . . . . . . . . . . . . . . . . . . . 591
OVN Database information . . . . . . . . . . . . . . . . . . . . . . . . . . . . 591
Known limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 592
8.7.10 Router Availability Zones guide for OVN . . . . . . . . . . . . . . . . . . . . 593
How to configure it . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 593
Using router availability zones . . . . . . . . . . . . . . . . . . . . . . . . . . . 594
OVN Database information . . . . . . . . . . . . . . . . . . . . . . . . . . . . 595
8.7.11 Routed Provider Networks for OVN . . . . . . . . . . . . . . . . . . . . . . . 596
8.8 Archived Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 597
8.8.1 Introduction to Networking . . . . . . . . . . . . . . . . . . . . . . . . . . . . 597
Networking API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 598
Configure SSL support for networking API . . . . . . . . . . . . . . . . . . . . 598
Allowed-address-pairs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 599
Virtual-Private-Network-as-a-Service (VPNaaS) . . . . . . . . . . . . . . . . . 599
8.8.2 Networking architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 600
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 600
VMware NSX integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 601
8.8.3 Plug-in configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 603
Configure Big Switch (Floodlight REST Proxy) plug-in . . . . . . . . . . . . . 603
Configure Brocade plug-in . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 603
Configure NSX-mh plug-in . . . . . . . . . . . . . . . . . . . . . . . . . . . . 604
Configure PLUMgrid plug-in . . . . . . . . . . . . . . . . . . . . . . . . . . . 606
8.8.4 Configure neutron agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 606
Configure data-forwarding nodes . . . . . . . . . . . . . . . . . . . . . . . . . 606
Configure DHCP agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 607
Configure L3 agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 608
Configure metering agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 610
Configure Hyper-V L2 agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . 611
Basic operations on agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 612
8.8.5 Configure Identity service for Networking . . . . . . . . . . . . . . . . . . . . 612
Compute . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 614
Networking API and credential configuration . . . . . . . . . . . . . . . . . . . 615
Configure security groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 616
Configure metadata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 616
Example nova.conf (for nova-compute and nova-api) . . . . . . . . . . . . . . . 617
8.8.6 Advanced configuration options . . . . . . . . . . . . . . . . . . . . . . . . . 617
L3 metering agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 617
8.8.7 Scalable and highly available DHCP agents . . . . . . . . . . . . . . . . . . . 618
8.8.8 Use Networking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 618
Core Networking API features . . . . . . . . . . . . . . . . . . . . . . . . . . . 618
Use Compute with Networking . . . . . . . . . . . . . . . . . . . . . . . . . . 620
8.8.9 Advanced features through API extensions . . . . . . . . . . . . . . . . . . . . 623
Provider networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 623
L3 routing and NAT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 626
Security groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 628
Plug-in specific extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 629
L3 metering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 635
8.8.10 Advanced operational features . . . . . . . . . . . . . . . . . . . . . . . . . . 637
Logging settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 637
Notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 637
8.8.11 Authentication and authorization . . . . . . . . . . . . . . . . . . . . . . . . . 639
keystone_authtoken . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 670
nova . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 675
oslo_concurrency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 679
oslo_messaging_amqp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 680
oslo_messaging_kafka . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 686
oslo_messaging_notifications . . . . . . . . . . . . . . . . . . . . . . . . . . . 689
oslo_messaging_rabbit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 690
oslo_middleware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 694
oslo_policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 695
privsep . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 696
quotas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 697
ssl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 699
9.1.2 ml2_conf.ini . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700
DEFAULT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700
ml2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 705
ml2_type_flat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 706
ml2_type_geneve . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 706
ml2_type_gre . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 706
ml2_type_vlan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 707
ml2_type_vxlan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 707
ovs_driver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 707
securitygroup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 708
sriov_driver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 708
9.1.3 linuxbridge_agent.ini . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 709
DEFAULT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 709
agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 714
linux_bridge . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 715
network_log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 715
securitygroup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 716
vxlan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 716
9.1.4 macvtap_agent.ini . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 718
DEFAULT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 718
agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 723
macvtap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 724
securitygroup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 724
9.1.5 openvswitch_agent.ini . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 725
DEFAULT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 725
agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 730
network_log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 732
ovs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 732
securitygroup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 735
9.1.6 sriov_agent.ini . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 736
DEFAULT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 736
agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 741
sriov_nic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 741
9.1.7 ovn.ini . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 742
DEFAULT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 742
ovn . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 747
ovs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 751
9.1.8 dhcp_agent.ini . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 751
DEFAULT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 751
agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 760
ovs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 760
9.1.9 l3_agent.ini . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 761
DEFAULT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 761
agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 766
network_log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 767
ovs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 767
9.1.10 metadata_agent.ini . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 768
DEFAULT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 768
agent . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 776
cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 776
9.1.11 Neutron Metering system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 779
Non-granular traffic messages . . . . . . . . . . . . . . . . . . . . . . . . . . . 779
Granular traffic messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 780
Sample of metering_agent.ini . . . . . . . . . . . . . . . . . . . . . . . . . . . 781
9.2 Policy Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 789
9.2.1 neutron . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 789
11.3.3 OVN Database information . . . . . . . . . . . . . . . . . . . . . . . . . . . . 861
11.4 Frequently Asked Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 862
Vagrant prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 932
Sparse architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 933
14.4.4 Contributing new extensions to Neutron . . . . . . . . . . . . . . . . . . . . . 935
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 935
Contribution Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 935
Design and Development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 936
Testing and Continuous Integration . . . . . . . . . . . . . . . . . . . . . . . . 936
Defect Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 937
Backport Management Strategies . . . . . . . . . . . . . . . . . . . . . . . . . 938
DevStack Integration Strategies . . . . . . . . . . . . . . . . . . . . . . . . . . 938
Documentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 938
Project Initial Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 938
Internationalization support . . . . . . . . . . . . . . . . . . . . . . . . . . . . 939
Integrating with the Neutron system . . . . . . . . . . . . . . . . . . . . . . . . 940
14.4.5 Neutron public API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 944
Breakages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 944
14.4.6 Client command extension support . . . . . . . . . . . . . . . . . . . . . . . . 945
14.4.7 Alembic Migrations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 946
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 946
The Migration Wrapper . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 946
Migration Branches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 947
Developers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 948
14.4.8 Upgrade checks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 954
Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 954
3rd party plugins checks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 954
14.4.9 Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 954
Testing Neutron . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 954
Full Stack Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 965
Test Coverage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 970
Template for ModelMigrationSync for external repos . . . . . . . . . . . . . . . 971
Transient DB Failure Injection . . . . . . . . . . . . . . . . . . . . . . . . . . . 974
Neutron jobs running in Zuul CI . . . . . . . . . . . . . . . . . . . . . . . . . . 974
Testing OVN with DevStack . . . . . . . . . . . . . . . . . . . . . . . . . . . . 979
Tempest testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 991
14.5 Neutron Internals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 995
14.5.1 Neutron Internals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 995
Subnet Pools and Address Scopes . . . . . . . . . . . . . . . . . . . . . . . . . 995
Agent extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 999
API Extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1000
Neutron WSGI/HTTP API layer . . . . . . . . . . . . . . . . . . . . . . . . . . 1002
Calling the ML2 Plugin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1003
Profiling Neutron Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1003
Neutron Database Layer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1010
Relocation of Database Models . . . . . . . . . . . . . . . . . . . . . . . . . . 1013
Keep DNS Nameserver Order Consistency In Neutron . . . . . . . . . . . . . . 1014
Integration with external DNS services . . . . . . . . . . . . . . . . . . . . . . 1015
Neutron Stadium i18n . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1016
L2 agent extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1016
L2 Agent Networking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1017
L3 agent extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1028
Layer 3 Networking in Neutron - via Layer 3 agent & OpenVSwitch . . . . . . . 1028
Live-migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1035
ML2 Extension Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1039
Network IP Availability Extension . . . . . . . . . . . . . . . . . . . . . . . . . 1039
Objects in neutron . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1042
Open vSwitch Firewall Driver . . . . . . . . . . . . . . . . . . . . . . . . . . . 1053
Neutron Open vSwitch vhost-user support . . . . . . . . . . . . . . . . . . . . . 1064
Neutron Plugin Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1065
Authorization Policy Enforcement . . . . . . . . . . . . . . . . . . . . . . . . . 1071
Composite Object Status via Provisioning Blocks . . . . . . . . . . . . . . . . . 1077
Quality of Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1079
Quota Management and Enforcement . . . . . . . . . . . . . . . . . . . . . . . 1087
Retrying Operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1092
Neutron RPC API Layer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1094
Neutron Messaging Callback System . . . . . . . . . . . . . . . . . . . . . . . 1097
Segments extension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1102
Service Extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1103
Services and agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1104
Add Tags to Neutron Resources . . . . . . . . . . . . . . . . . . . . . . . . . . 1106
Upgrade strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1108
OVN Design Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1112
14.5.2 Module Reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1146
14.6 OVN Driver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1146
14.6.1 OVN backend . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1146
OVN Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1146
14.7 Dashboards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1148
14.7.1 CI Status Dashboards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1148
Gerrit Dashboards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1148
Grafana Dashboards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1148
Neutron is an OpenStack project to provide network connectivity as a service between interface devices (e.g., vNICs) managed by other OpenStack services (e.g., nova). It implements the OpenStack Networking API.
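For example, once the service is deployed and credentials have been loaded into the shell, the Networking API can be exercised through the OpenStack command-line client. This is only an illustrative sketch; it assumes python-openstackclient is installed and a credential file has been sourced:
$ openstack network list
$ openstack subnet list
$ openstack port list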
This documentation is generated by the Sphinx toolkit and lives in the source tree. Additional documentation on Neutron and other components of OpenStack can be found on the OpenStack wiki and the Neutron section of the wiki. The Neutron Development wiki is also a good resource for new contributors.
Enjoy!
CHAPTER ONE: OVERVIEW
The OpenStack project is an open source cloud computing platform that supports all types of cloud
environments. The project aims for simple implementation, massive scalability, and a rich set of features.
Cloud computing experts from around the world contribute to the project.
OpenStack provides an Infrastructure-as-a-Service (IaaS) solution through a variety of complementary
services. Each service offers an Application Programming Interface (API) that facilitates this integration.
This guide covers step-by-step deployment of the major OpenStack services using a functional example
architecture suitable for new users of OpenStack with sufficient Linux experience. This guide is not
intended to be used for production system installations, but to create a minimum proof-of-concept for
the purpose of learning about OpenStack.
After becoming familiar with basic installation, configuration, operation, and troubleshooting of these
OpenStack services, you should consider the following steps toward deployment using a production
architecture:
• Determine and implement the necessary core and optional services to meet performance and redundancy requirements.
• Increase security using methods such as firewalls, encryption, and service policies.
• Implement a deployment tool such as Ansible, Chef, Puppet, or Salt to automate deployment and
management of the production environment.
1.1 Example architecture
The example architecture requires at least two nodes (hosts) to launch a basic virtual machine (VM) or instance. Optional services such as Block Storage and Object Storage require additional nodes.
Important: The example architecture used in this guide is a minimum configuration, and is not intended for production system installations. It is designed to provide a minimum proof-of-concept for the purpose of learning about OpenStack. For information on creating architectures for specific use cases, or how to determine which architecture is required, see the Architecture Design Guide.
For more information on production architectures, see the Architecture Design Guide, OpenStack Operations Guide, and OpenStack Networking Guide.
1.1.1 Controller
The controller node runs the Identity service, Image service, management portions of Compute, management portion of Networking, various Networking agents, and the Dashboard. It also includes supporting services such as an SQL database, message queue, and Network Time Protocol (NTP).
Optionally, the controller node runs portions of the Block Storage, Object Storage, Orchestration, and
Telemetry services.
The controller node requires a minimum of two network interfaces.
1.1.2 Compute
The compute node runs the hypervisor portion of Compute that operates instances. By default, Compute
uses the kernel-based VM (KVM) hypervisor. The compute node also runs a Networking service agent
that connects instances to virtual networks and provides firewalling services to instances via security
groups.
You can deploy more than one compute node. Each node requires a minimum of two network interfaces.
1.1.3 Block Storage
The optional Block Storage node contains the disks that the Block Storage and Shared File System services provision for instances.
For simplicity, service traffic between compute nodes and this node uses the management network.
Production environments should implement a separate storage network to increase performance and
security.
You can deploy more than one block storage node. Each node requires a minimum of one network
interface.
1.1.4 Object Storage
The optional Object Storage node contains the disks that the Object Storage service uses for storing accounts, containers, and objects.
For simplicity, service traffic between compute nodes and this node uses the management network.
Production environments should implement a separate storage network to increase performance and
security.
This service requires two nodes. Each node requires a minimum of one network interface. You can
deploy more than two object storage nodes.
1.2 Networking
1.2.1 Networking Option 1: Provider networks
The provider networks option deploys the OpenStack Networking service in the simplest way possible with primarily layer-2 (bridging/switching) services and VLAN segmentation of networks. Essentially, it bridges virtual networks to physical networks and relies on physical network infrastructure for layer-3 (routing) services. Additionally, a Dynamic Host Configuration Protocol (DHCP) service provides IP address information to instances.
The OpenStack user requires more information about the underlying network infrastructure to create a virtual network to exactly match the infrastructure.
Warning: This option lacks support for self-service (private) networks, layer-3 (routing) services, and advanced services such as LoadBalancer-as-a-Service (Octavia). Consider the self-service networks option below if you desire these features.
1.2.2 Networking Option 2: Self-service networks
The self-service networks option augments the provider networks option with layer-3 (routing) services that enable self-service networks using overlay segmentation methods such as Virtual Extensible LAN (VXLAN). Essentially, it routes virtual networks to physical networks using Network Address Translation (NAT). Additionally, this option provides the foundation for advanced services such as LoadBalancer-as-a-Service.
The OpenStack user can create virtual networks without the knowledge of underlying infrastructure on the data network. This can also include VLAN networks if the layer-2 plug-in is configured accordingly.
CHAPTER TWO
OpenStack Networking (neutron) allows you to create and attach interface devices managed by other
OpenStack services to networks. Plug-ins can be implemented to accommodate different networking
equipment and software, providing flexibility to OpenStack architecture and deployment.
It includes the following components:
neutron-server Accepts and routes API requests to the appropriate OpenStack Networking plug-in for
action.
OpenStack Networking plug-ins and agents Plug and unplug ports, create networks or subnets, and
provide IP addressing. These plug-ins and agents differ depending on the vendor and technologies
used in the particular cloud. OpenStack Networking ships with plug-ins and agents for Cisco
virtual and physical switches, NEC OpenFlow products, Open vSwitch, Linux bridging, and the
VMware NSX product.
The common agents are L3 (layer 3), DHCP (dynamic host IP addressing), and a plug-in agent.
Messaging queue Used by most OpenStack Networking installations to route information between the
neutron-server and various agents. Also acts as a database to store networking state for particular
plug-ins.
OpenStack Networking mainly interacts with OpenStack Compute to provide networks and connectivity
for its instances.
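Once the service is running, the server and its agents can be inspected with the OpenStack client. This is a hedged illustration; it assumes the client is installed and admin credentials are loaded:
$ openstack network agent list
$ openstack extension list --network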
CHAPTER THREE
OpenStack Networking (neutron) manages all networking facets for the Virtual Networking Infrastructure (VNI) and the access layer aspects of the Physical Networking Infrastructure (PNI) in your OpenStack environment. OpenStack Networking enables projects to create advanced virtual network topologies which may include services such as a firewall, and a virtual private network (VPN).
Networking provides networks, subnets, and routers as object abstractions. Each abstraction has functionality that mimics its physical counterpart: networks contain subnets, and routers route traffic between different subnets and networks.
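A minimal sketch of these abstractions using the OpenStack client follows; the names net1, subnet1, and router1 and the 192.0.2.0/24 range are illustrative only, not values required by this guide:
$ openstack network create net1
$ openstack subnet create --network net1 --subnet-range 192.0.2.0/24 subnet1
$ openstack router create router1
$ openstack router add subnet router1 subnet1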
Any given Networking set up has at least one external network. Unlike the other networks, the external
network is not merely a virtually defined network. Instead, it represents a view into a slice of the
physical, external network accessible outside the OpenStack installation. IP addresses on the external
network are accessible by anybody physically on the outside network.
In addition to external networks, any Networking set up has one or more internal networks. These
software-defined networks connect directly to the VMs. Only the VMs on any given internal network,
or those on subnets connected through interfaces to a similar router, can access VMs connected to that
network directly.
For the outside network to access VMs, and vice versa, routers between the networks are needed. Each
router has one gateway that is connected to an external network and one or more interfaces connected
to internal networks. Like a physical router, subnets can access machines on other subnets that are
connected to the same router, and machines can access the outside network through the gateway for the
router.
Additionally, you can allocate IP addresses on external networks to ports on the internal network. Whenever something is connected to a subnet, that connection is called a port. You can associate external network IP addresses with ports to VMs. This way, entities on the outside network can access VMs.
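As a hedged example, assuming an external network named provider and an instance named vm1 already exist, a floating IP can be allocated and associated as follows (the address shown is illustrative):
$ openstack floating ip create provider
$ openstack server add floating ip vm1 203.0.113.102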
Networking also supports security groups. Security groups enable administrators to define firewall rules
in groups. A VM can belong to one or more security groups, and Networking applies the rules in those
security groups to block or unblock ports, port ranges, or traffic types for that VM.
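For instance, a security group that permits SSH could be created and applied as shown below; the group name web and instance name vm1 are assumptions made for the sake of the example:
$ openstack security group create web
$ openstack security group rule create --protocol tcp --dst-port 22 web
$ openstack server add security group vm1 web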
Each plug-in that Networking uses has its own concepts. While not vital to operating the VNI and
OpenStack environment, understanding these concepts can help you set up Networking. All Networking
installations use a core plug-in and a security group plug-in (or just the No-Op security group plug-in).
CHAPTER FOUR: INSTALL AND CONFIGURE FOR OPENSUSE AND SUSE LINUX ENTERPRISE
After installing the operating system on each node for the architecture that you choose to deploy, you must configure the network interfaces. We recommend that you disable any automated network management tools and manually edit the appropriate configuration files for your distribution. For more information on how to configure networking on your distribution, see the SLES 12 or openSUSE documentation.
All nodes require Internet access for administrative purposes such as package installation, security updates, Domain Name System (DNS), and Network Time Protocol (NTP). In most cases, nodes should obtain Internet access through the management network interface. To highlight the importance of network separation, the example architectures use private address space for the management network and assume that the physical network infrastructure provides Internet access via Network Address Translation (NAT) or other methods. The example architectures use routable IP address space for the provider (external) network and assume that the physical network infrastructure provides direct Internet access.
In the provider networks architecture, all instances attach directly to the provider network. In the self-service (private) networks architecture, instances can attach to a self-service or provider network. Self-service networks can reside entirely within OpenStack or provide some level of external network access using Network Address Translation (NAT) through the provider network.
The example architectures assume use of the following networks:
• Management on 10.0.0.0/24 with gateway 10.0.0.1
This network requires a gateway to provide Internet access to all nodes for administrative purposes
such as package installation, security updates, Domain Name System (DNS), and Network Time
Protocol (NTP).
• Provider on 203.0.113.0/24 with gateway 203.0.113.1
This network requires a gateway to provide Internet access to instances in your OpenStack environment.
You can modify these ranges and gateways to work with your particular network infrastructure.
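For reference, on SLES and openSUSE the controller's management interface could be configured statically along these lines. This is a sketch only: the interface name eth0 is an assumption, and the ifcfg syntax should be confirmed against your distribution's documentation.
Contents of /etc/sysconfig/network/ifcfg-eth0:
STARTMODE='auto'
BOOTPROTO='static'
IPADDR='10.0.0.11/24'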
Network interface names vary by distribution. Traditionally, interfaces use eth followed by a sequential
number. To cover all variations, this guide refers to the first interface as the interface with the lowest
number and the second interface as the interface with the highest number.
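To identify the actual interface names on a node, you can list the links with the ip utility; the output differs per host:
# ip link show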
Unless you intend to use the exact configuration provided in this example architecture, you must modify
the networks in this procedure to match your environment. Each node must resolve the other nodes by
name in addition to IP address. For example, the controller name must resolve to 10.0.0.11,
the IP address of the management interface on the controller node.
Note: Your distribution enables a restrictive firewall by default. During the installation process, cer-
tain steps will fail unless you alter or disable the firewall. For more information about securing your
environment, refer to the OpenStack Security Guide.
STARTMODE='auto'
BOOTPROTO='static'
# controller
10.0.0.11 controller
# compute1
10.0.0.31 compute1
# block1
10.0.0.41 block1
# object1
10.0.0.51 object1
# object2
10.0.0.52 object2
Warning: Some distributions add an extraneous entry in the /etc/hosts file that resolves
the actual hostname to another loopback IP address such as 127.0.1.1. You must comment
out or remove this entry to prevent name resolution problems. Do not remove the 127.0.0.1
entry.
Note: This guide includes host entries for optional services in order to reduce complexity should
you choose to deploy them.
Note: Additional compute nodes should use 10.0.0.32, 10.0.0.33, and so on.
2. The provider interface uses a special configuration without an IP address assigned to it. Configure
the second interface as the provider interface:
Replace INTERFACE_NAME with the actual interface name. For example, eth1 or ens224.
• Edit the /etc/sysconfig/network/ifcfg-INTERFACE_NAME file to contain the fol-
lowing:
STARTMODE='auto'
BOOTPROTO='static'
# controller
10.0.0.11 controller
# compute1
10.0.0.31 compute1
# block1
10.0.0.41 block1
# object1
10.0.0.51 object1
# object2
10.0.0.52 object2
Warning: Some distributions add an extraneous entry in the /etc/hosts file that resolves
the actual hostname to another loopback IP address such as 127.0.1.1. You must comment
out or remove this entry to prevent name resolution problems. Do not remove the 127.0.0.1
entry.
Note: This guide includes host entries for optional services in order to reduce complexity should
you choose to deploy them.
If you want to deploy the Block Storage service, configure one additional storage node.
# controller
10.0.0.11 controller
# compute1
10.0.0.31 compute1
# block1
10.0.0.41 block1
# object1
10.0.0.51 object1
# object2
10.0.0.52 object2
Warning: Some distributions add an extraneous entry in the /etc/hosts file that resolves
the actual hostname to another loopback IP address such as 127.0.1.1. You must comment
out or remove this entry to prevent name resolution problems. Do not remove the 127.0.0.1
entry.
Note: This guide includes host entries for optional services in order to reduce complexity should
you choose to deploy them.
We recommend that you verify network connectivity to the Internet and among the nodes before pro-
ceeding further.
1. From the controller node, test access to the Internet:
# ping -c 4 openstack.org
2. From the controller node, test access to the management interface on the compute node:
# ping -c 4 compute1
3. From the compute node, test access to the Internet:
# ping -c 4 openstack.org
4. From the compute node, test access to the management interface on the controller node:
# ping -c 4 controller
Note: Your distribution enables a restrictive firewall by default. During the installation process, cer-
tain steps will fail unless you alter or disable the firewall. For more information about securing your
environment, refer to the OpenStack Security Guide.
4.2.1 Prerequisites
Before you configure the OpenStack Networking (neutron) service, you must create a database, service
credentials, and API endpoints.
1. To create the database, complete these steps:
• Use the database access client to connect to the database server as the root user:
$ mysql -u root -p
• Grant proper access to the neutron database, replacing NEUTRON_DBPASS with a suit-
able password:
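The SQL statements themselves are the same across distributions; the following is a sketch of the standard sequence (NEUTRON_DBPASS is a placeholder for the password you choose):
MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
  IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
  IDENTIFIED BY 'NEUTRON_DBPASS';
Exit the database access client when finished.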
$ . admin-openrc
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | fdb0f541e28141719b6a43c8944bf1fb |
| name | neutron |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Networking |
| enabled | True |
| id | f71529314dab4a4d8eca427e701d209e |
| name | neutron |
| type | network |
+-------------+----------------------------------+
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 85d80a6d02fc4b7683f611d7fc1493a3 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 09753b537ac74422a68d2d791cf3714f |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 1ee14289c9374dffb5db92a5c112fc4e |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
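For reference, output like the tables above is typically produced by the standard Identity registration commands shown below; this is a sketch run after sourcing the admin credentials, and the IDs in your output will differ:
$ openstack user create --domain default --password-prompt neutron
$ openstack role add --project service --user neutron admin
$ openstack service create --name neutron --description "OpenStack Networking" network
$ openstack endpoint create --region RegionOne network public http://controller:9696
$ openstack endpoint create --region RegionOne network internal http://controller:9696
$ openstack endpoint create --region RegionOne network admin http://controller:9696
The role add command produces no output.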
You can deploy the Networking service using one of two architectures represented by options 1 and 2.
Option 1 deploys the simplest possible architecture, which only supports attaching instances to provider
(external) networks. It provides no self-service (private) networks, routers, or floating IP addresses. Only the
admin or another privileged user can manage provider networks.
Option 2 augments option 1 with layer-3 services that support attaching instances to self-service net-
works. The demo or other unprivileged user can manage self-service networks including routers that
provide connectivity between self-service and provider networks. Additionally, floating IP addresses
provide connectivity to instances using self-service networks from external networks such as the Inter-
net.
Self-service networks typically use overlay networks. Overlay network protocols such as VXLAN in-
clude additional headers that increase overhead and decrease space available for the payload or user
data. Without knowledge of the virtual network infrastructure, instances attempt to send packets using
the default Ethernet maximum transmission unit (MTU) of 1500 bytes. The Networking service auto-
matically provides the correct MTU value to instances via DHCP. However, some cloud images do not
use DHCP or ignore the DHCP MTU option and require configuration using metadata or a script.
Choose one of the following networking options to configure services specific to it. Afterwards, return
here and proceed to Configure the metadata agent.
The Networking server component configuration includes the database, authentication mechanism, mes-
sage queue, topology change notifications, and plug-in.
Note: Default configuration files vary by distribution. You might need to add these sections and options
rather than modifying existing sections and options. Also, an ellipsis (...) in the configuration snippets
indicates potential default configuration options that you should retain.
[database]
# ...
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
Replace NEUTRON_DBPASS with the password you chose for the database.
Note: Comment out or remove any other connection options in the [database]
section.
– In the [DEFAULT] section, enable the Modular Layer 2 (ML2) plug-in and disable addi-
tional plug-ins:
[DEFAULT]
# ...
core_plugin = ml2
service_plugins =
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace RABBIT_PASS with the password you chose for the openstack account in Rab-
bitMQ.
[DEFAULT]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
Replace NEUTRON_PASS with the password you chose for the neutron user in the Iden-
tity service.
Note: Comment out or remove any other options in the [keystone_authtoken] sec-
tion.
– In the [DEFAULT] and [nova] sections, configure Networking to notify Compute of net-
work topology changes:
[DEFAULT]
# ...
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[nova]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
Replace NOVA_PASS with the password you chose for the nova user in the Identity service.
• In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp
The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging and switching) virtual
networking infrastructure for instances.
• Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file and complete the following
actions:
– In the [ml2] section, enable flat and VLAN networks:
[ml2]
# ...
type_drivers = flat,vlan
[ml2]
# ...
tenant_network_types =
[ml2]
# ...
mechanism_drivers = linuxbridge
Warning: After you configure the ML2 plug-in, removing values in the
type_drivers option can lead to database inconsistency.
[ml2]
# ...
extension_drivers = port_security
– In the [ml2_type_flat] section, configure the provider virtual network as a flat net-
work:
[ml2_type_flat]
# ...
flat_networks = provider
[securitygroup]
# ...
enable_ipset = true
The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for
instances and handles security groups.
• Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete
the following actions:
– In the [linux_bridge] section, map the provider virtual network to the provider physi-
cal network interface:
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
[vxlan]
enable_vxlan = false
– In the [securitygroup] section, enable security groups and configure the Linux bridge
iptables firewall driver:
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
– Ensure your Linux operating system kernel supports network bridge filters by verifying all
the following sysctl values are set to 1:
net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-ip6tables
To enable networking bridge support, typically the br_netfilter kernel module needs
to be loaded. Check your operating system's documentation for additional details on enabling
this module.
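A quick way to load the module and confirm the values is sketched below; how you make the module load persistent (for example, via /etc/modules-load.d/) depends on your distribution:
# modprobe br_netfilter
# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables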
[DEFAULT]
# ...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
Follow this provider network document from the General Installation Guide.
Return to Networking controller node configuration.
[database]
# ...
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
Replace NEUTRON_DBPASS with the password you chose for the database.
Note: Comment out or remove any other connection options in the [database]
section.
– In the [DEFAULT] section, enable the Modular Layer 2 (ML2) plug-in, router service, and
overlapping IP addresses:
[DEFAULT]
# ...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace RABBIT_PASS with the password you chose for the openstack account in Rab-
bitMQ.
– In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service
access:
[DEFAULT]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
Replace NEUTRON_PASS with the password you chose for the neutron user in the Iden-
tity service.
Note: Comment out or remove any other options in the [keystone_authtoken] sec-
tion.
– In the [DEFAULT] and [nova] sections, configure Networking to notify Compute of net-
work topology changes:
[DEFAULT]
# ...
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[nova]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
Replace NOVA_PASS with the password you chose for the nova user in the Identity service.
• In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp
The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging and switching) virtual
networking infrastructure for instances.
• Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file and complete the following
actions:
– In the [ml2] section, enable flat, VLAN, and VXLAN networks:
[ml2]
# ...
type_drivers = flat,vlan,vxlan
[ml2]
# ...
tenant_network_types = vxlan
– In the [ml2] section, enable the Linux bridge and layer-2 population mechanisms:
[ml2]
# ...
mechanism_drivers = linuxbridge,l2population
Warning: After you configure the ML2 plug-in, removing values in the
type_drivers option can lead to database inconsistency.
Note: The Linux bridge agent only supports VXLAN overlay networks.
[ml2]
# ...
extension_drivers = port_security
– In the [ml2_type_flat] section, configure the provider virtual network as a flat net-
work:
[ml2_type_flat]
# ...
flat_networks = provider
– In the [ml2_type_vxlan] section, configure the VXLAN network identifier range for
self-service networks:
[ml2_type_vxlan]
# ...
vni_ranges = 1:1000
[securitygroup]
# ...
enable_ipset = true
The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for
instances and handles security groups.
• Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete
the following actions:
– In the [linux_bridge] section, map the provider virtual network to the provider physi-
cal network interface:
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
– Ensure your Linux operating system kernel supports network bridge filters by verifying all
the following sysctl values are set to 1:
net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-ip6tables
To enable networking bridge support, typically the br_netfilter kernel module needs
to be loaded. Check your operating system's documentation for additional details on enabling
this module.
The Layer-3 (L3) agent provides routing and NAT services for self-service virtual networks.
• Edit the /etc/neutron/l3_agent.ini file and complete the following actions:
– In the [DEFAULT] section, configure the Linux bridge interface driver:
[DEFAULT]
# ...
interface_driver = linuxbridge
[DEFAULT]
# ...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
[DEFAULT]
# ...
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
Note: The Nova compute service must be installed to complete this step. For more details see the
compute install guide found under the Installation Guides section of the docs website.
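On the controller node this change is made in the [neutron] section of the Compute service configuration file (typically /etc/nova/nova.conf); the snippet below mirrors the one shown in the Red Hat chapter of this guide:
[neutron]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET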
Replace NEUTRON_PASS with the password you chose for the neutron user in the Iden-
tity service.
Replace METADATA_SECRET with the secret you chose for the metadata proxy.
See the compute service configuration guide for the full set of options including overriding
the service catalog endpoint URL if necessary.
Note: SLES enables AppArmor by default and restricts dnsmasq. You need to either disable AppArmor
completely or disable only the dnsmasq profile:
# ln -s /etc/apparmor.d/usr.sbin.dnsmasq /etc/apparmor.d/disable/
# systemctl restart apparmor
2. Start the Networking services and configure them to start when the system boots.
For both networking options:
# systemctl enable openstack-neutron.service \
openstack-neutron-linuxbridge-agent.service \
openstack-neutron-dhcp-agent.service \
openstack-neutron-metadata-agent.service
# systemctl start openstack-neutron.service \
openstack-neutron-linuxbridge-agent.service \
openstack-neutron-dhcp-agent.service \
openstack-neutron-metadata-agent.service
For networking option 2, also enable and start the layer-3 service:
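The layer-3 unit name below follows the same openstack-neutron-* pattern as the services above; treat it as an assumption and adjust it to your packaging if necessary:
# systemctl enable openstack-neutron-l3-agent.service
# systemctl start openstack-neutron-l3-agent.service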
The compute node handles connectivity and security groups for instances.
The Networking common component configuration includes the authentication mechanism, message
queue, and plug-in.
Note: Default configuration files vary by distribution. You might need to add these sections and options
rather than modifying existing sections and options. Also, an ellipsis (...) in the configuration snippets
indicates potential default configuration options that you should retain.
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace RABBIT_PASS with the password you chose for the openstack account in Rab-
bitMQ.
– In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service
access:
[DEFAULT]
# ...
auth_strategy = keystone
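The matching [keystone_authtoken] section is the same as the one used on the controller node; for completeness:
[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS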
Replace NEUTRON_PASS with the password you chose for the neutron user in the Iden-
tity service.
Note: Comment out or remove any other options in the [keystone_authtoken] sec-
tion.
[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp
Choose the same networking option that you chose for the controller node to configure services specific
to it. Afterwards, return here and proceed to Configure the Compute service to use the Networking
service.
The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for
instances and handles security groups.
• Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete
the following actions:
– In the [linux_bridge] section, map the provider virtual network to the provider physi-
cal network interface:
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
[vxlan]
enable_vxlan = false
– In the [securitygroup] section, enable security groups and configure the Linux bridge
iptables firewall driver:
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
– Ensure your Linux operating system kernel supports network bridge filters by verifying all
the following sysctl values are set to 1:
net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-ip6tables
To enable networking bridge support, typically the br_netfilter kernel module needs
to be loaded. Check your operating system's documentation for additional details on enabling
this module.
Return to Networking compute node configuration
The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for
instances and handles security groups.
• Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete
the following actions:
– In the [linux_bridge] section, map the provider virtual network to the provider physi-
cal network interface:
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
– Ensure your Linux operating system kernel supports network bridge filters by verifying all
the following sysctl values are set to 1:
net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-ip6tables
To enable networking bridge support, typically the br_netfilter kernel module needs
to be loaded. Check your operating system's documentation for additional details on enabling
this module.
Return to Networking compute node configuration.
[neutron]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
Replace NEUTRON_PASS with the password you chose for the neutron user in the Iden-
tity service.
See the compute service configuration guide for the full set of options including overriding
the service catalog endpoint URL if necessary.
3. Start the Linux Bridge agent and configure it to start when the system boots:
# systemctl enable openstack-neutron-linuxbridge-agent.service
# systemctl start openstack-neutron-linuxbridge-agent.service
You can perform further testing of your networking using the neutron-sanity-check command line client.
Use the verification section for the networking option that you chose to deploy.
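The agent listings below are produced with the OpenStack client using admin credentials:
$ openstack network agent list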
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| 0400c2f6-4d3b-44bc-89fa-99093432f3bf | Metadata agent     | controller | None              | True  | UP    | neutron-metadata-agent    |
The output should indicate three agents on the controller node and one agent on each compute
node.
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| f49a4b81-afd6-4b3d-b923-66c8f0517099 | Metadata agent     | controller | None              | True  | UP    | neutron-metadata-agent    |
| 27eee952-a748-467b-bf71-941e89846a92 | Linux bridge agent | controller | None              | True  | UP    | neutron-linuxbridge-agent |
| 08905043-5010-4b87-bba5-aedb1956e27a | Linux bridge agent | compute1   | None              | True  | UP    | neutron-linuxbridge-agent |
| 830344ff-dc36-4956-84f4-067af667a0dc | L3 agent           | controller | nova              | True  | UP    | neutron-l3-agent          |
| dd3644c9-1a3a-435a-9282-eb306b4b0391 | DHCP agent         | controller | nova              | True  | UP    | neutron-dhcp-agent        |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
The output should indicate four agents on the controller node and one agent on each compute
node.
5 Install and configure for Red Hat Enterprise Linux and CentOS
After installing the operating system on each node for the architecture that you choose to deploy, you
must configure the network interfaces. We recommend that you disable any automated network man-
agement tools and manually edit the appropriate configuration files for your distribution. For more
information on how to configure networking on your distribution, see the documentation.
All nodes require Internet access for administrative purposes such as package installation, security up-
dates, Domain Name System (DNS), and Network Time Protocol (NTP). In most cases, nodes should
obtain Internet access through the management network interface. To highlight the importance of net-
work separation, the example architectures use private address space for the management network and
assume that the physical network infrastructure provides Internet access via Network Address Transla-
tion (NAT) or other methods. The example architectures use routable IP address space for the provider
(external) network and assume that the physical network infrastructure provides direct Internet access.
In the provider networks architecture, all instances attach directly to the provider network. In the self-
service (private) networks architecture, instances can attach to a self-service or provider network. Self-
service networks can reside entirely within OpenStack or provide some level of external network access
using Network Address Translation (NAT) through the provider network.
The example architectures assume use of the following networks:
• Management on 10.0.0.0/24 with gateway 10.0.0.1
This network requires a gateway to provide Internet access to all nodes for administrative purposes
such as package installation, security updates, Domain Name System (DNS), and Network Time
Protocol (NTP).
• Provider on 203.0.113.0/24 with gateway 203.0.113.1
This network requires a gateway to provide Internet access to instances in your OpenStack envi-
ronment.
You can modify these ranges and gateways to work with your particular network infrastructure.
Network interface names vary by distribution. Traditionally, interfaces use eth followed by a sequential
number. To cover all variations, this guide refers to the first interface as the interface with the lowest
number and the second interface as the interface with the highest number.
Unless you intend to use the exact configuration provided in this example architecture, you must modify
the networks in this procedure to match your environment. Each node must resolve the other nodes by
name in addition to IP address. For example, the controller name must resolve to 10.0.0.11,
the IP address of the management interface on the controller node.
Note: Your distribution enables a restrictive firewall by default. During the installation process, cer-
tain steps will fail unless you alter or disable the firewall. For more information about securing your
environment, refer to the OpenStack Security Guide.
DEVICE=INTERFACE_NAME
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"
# controller
10.0.0.11 controller
# compute1
10.0.0.31 compute1
# object1
10.0.0.51 object1
# object2
10.0.0.52 object2
Warning: Some distributions add an extraneous entry in the /etc/hosts file that resolves
the actual hostname to another loopback IP address such as 127.0.1.1. You must comment
out or remove this entry to prevent name resolution problems. Do not remove the 127.0.0.1
entry.
Note: This guide includes host entries for optional services in order to reduce complexity should
you choose to deploy them.
Note: Additional compute nodes should use 10.0.0.32, 10.0.0.33, and so on.
2. The provider interface uses a special configuration without an IP address assigned to it. Configure
the second interface as the provider interface:
Replace INTERFACE_NAME with the actual interface name. For example, eth1 or ens224.
• Edit the /etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME file to con-
tain the following:
Do not change the HWADDR and UUID keys.
DEVICE=INTERFACE_NAME
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"
# controller
10.0.0.11 controller
# compute1
10.0.0.31 compute1
# block1
10.0.0.41 block1
# object1
10.0.0.51 object1
# object2
10.0.0.52 object2
Warning: Some distributions add an extraneous entry in the /etc/hosts file that resolves
the actual hostname to another loopback IP address such as 127.0.1.1. You must comment
out or remove this entry to prevent name resolution problems. Do not remove the 127.0.0.1
entry.
Note: This guide includes host entries for optional services in order to reduce complexity should
you choose to deploy them.
We recommend that you verify network connectivity to the Internet and among the nodes before pro-
ceeding further.
1. From the controller node, test access to the Internet:
# ping -c 4 openstack.org
2. From the controller node, test access to the management interface on the compute node:
# ping -c 4 compute1
3. From the compute node, test access to the Internet:
# ping -c 4 openstack.org
4. From the compute node, test access to the management interface on the controller node:
# ping -c 4 controller
Note: Your distribution enables a restrictive firewall by default. During the installation process, cer-
tain steps will fail unless you alter or disable the firewall. For more information about securing your
environment, refer to the OpenStack Security Guide.
5.2.1 Prerequisites
Before you configure the OpenStack Networking (neutron) service, you must create a database, service
credentials, and API endpoints.
1. To create the database, complete these steps:
• Use the database access client to connect to the database server as the root user:
$ mysql -u root -p
• Grant proper access to the neutron database, replacing NEUTRON_DBPASS with a suit-
able password:
$ . admin-openrc
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | fdb0f541e28141719b6a43c8944bf1fb |
| name | neutron |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Networking |
| enabled | True |
| id | f71529314dab4a4d8eca427e701d209e |
| name | neutron |
| type | network |
+-------------+----------------------------------+
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 85d80a6d02fc4b7683f611d7fc1493a3 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 09753b537ac74422a68d2d791cf3714f |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 1ee14289c9374dffb5db92a5c112fc4e |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
You can deploy the Networking service using one of two architectures represented by options 1 and 2.
Option 1 deploys the simplest possible architecture, which only supports attaching instances to provider
(external) networks. It provides no self-service (private) networks, routers, or floating IP addresses. Only the
admin or another privileged user can manage provider networks.
Option 2 augments option 1 with layer-3 services that support attaching instances to self-service net-
works. The demo or other unprivileged user can manage self-service networks including routers that
provide connectivity between self-service and provider networks. Additionally, floating IP addresses
provide connectivity to instances using self-service networks from external networks such as the Inter-
net.
Self-service networks typically use overlay networks. Overlay network protocols such as VXLAN in-
clude additional headers that increase overhead and decrease space available for the payload or user
data. Without knowledge of the virtual network infrastructure, instances attempt to send packets using
the default Ethernet maximum transmission unit (MTU) of 1500 bytes. The Networking service auto-
matically provides the correct MTU value to instances via DHCP. However, some cloud images do not
use DHCP or ignore the DHCP MTU option and require configuration using metadata or a script.
Choose one of the following networking options to configure services specific to it. Afterwards, return
here and proceed to Configure the metadata agent.
The Networking server component configuration includes the database, authentication mechanism, mes-
sage queue, topology change notifications, and plug-in.
Note: Default configuration files vary by distribution. You might need to add these sections and options
rather than modifying existing sections and options. Also, an ellipsis (...) in the configuration snippets
indicates potential default configuration options that you should retain.
[database]
# ...
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
Replace NEUTRON_DBPASS with the password you chose for the database.
Note: Comment out or remove any other connection options in the [database]
section.
– In the [DEFAULT] section, enable the Modular Layer 2 (ML2) plug-in and disable addi-
tional plug-ins:
[DEFAULT]
# ...
core_plugin = ml2
service_plugins =
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace RABBIT_PASS with the password you chose for the openstack account in Rab-
bitMQ.
[DEFAULT]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
Replace NEUTRON_PASS with the password you chose for the neutron user in the Iden-
tity service.
Note: Comment out or remove any other options in the [keystone_authtoken] sec-
tion.
– In the [DEFAULT] and [nova] sections, configure Networking to notify Compute of net-
work topology changes:
[DEFAULT]
# ...
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[nova]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
Replace NOVA_PASS with the password you chose for the nova user in the Identity service.
• In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp
The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging and switching) virtual
networking infrastructure for instances.
• Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file and complete the following
actions:
– In the [ml2] section, enable flat and VLAN networks:
[ml2]
# ...
type_drivers = flat,vlan
[ml2]
# ...
tenant_network_types =
[ml2]
# ...
mechanism_drivers = linuxbridge
Warning: After you configure the ML2 plug-in, removing values in the
type_drivers option can lead to database inconsistency.
[ml2]
# ...
extension_drivers = port_security
– In the [ml2_type_flat] section, configure the provider virtual network as a flat net-
work:
[ml2_type_flat]
# ...
flat_networks = provider
[securitygroup]
# ...
enable_ipset = true
The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for
instances and handles security groups.
• Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete
the following actions:
– In the [linux_bridge] section, map the provider virtual network to the provider physi-
cal network interface:
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
[vxlan]
enable_vxlan = false
– In the [securitygroup] section, enable security groups and configure the Linux bridge
iptables firewall driver:
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
– Ensure your Linux operating system kernel supports network bridge filters by verifying all
the following sysctl values are set to 1:
net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-ip6tables
To enable networking bridge support, typically the br_netfilter kernel module needs
to be loaded. Check your operating system's documentation for additional details on enabling
this module.
[DEFAULT]
# ...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
Follow this provider network document from the General Installation Guide.
Return to Networking controller node configuration.
[database]
# ...
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
Replace NEUTRON_DBPASS with the password you chose for the database.
Note: Comment out or remove any other connection options in the [database]
section.
– In the [DEFAULT] section, enable the Modular Layer 2 (ML2) plug-in, router service, and
overlapping IP addresses:
[DEFAULT]
# ...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace RABBIT_PASS with the password you chose for the openstack account in Rab-
bitMQ.
– In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service
access:
[DEFAULT]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
Replace NEUTRON_PASS with the password you chose for the neutron user in the Iden-
tity service.
Note: Comment out or remove any other options in the [keystone_authtoken] sec-
tion.
– In the [DEFAULT] and [nova] sections, configure Networking to notify Compute of net-
work topology changes:
[DEFAULT]
# ...
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[nova]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
Replace NOVA_PASS with the password you chose for the nova user in the Identity service.
• In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp
The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging and switching) virtual
networking infrastructure for instances.
• Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file and complete the following
actions:
– In the [ml2] section, enable flat, VLAN, and VXLAN networks:
[ml2]
# ...
type_drivers = flat,vlan,vxlan
[ml2]
# ...
tenant_network_types = vxlan
– In the [ml2] section, enable the Linux bridge and layer-2 population mechanisms:
[ml2]
# ...
mechanism_drivers = linuxbridge,l2population
Warning: After you configure the ML2 plug-in, removing values in the
type_drivers option can lead to database inconsistency.
Note: The Linux bridge agent only supports VXLAN overlay networks.
[ml2]
# ...
extension_drivers = port_security
– In the [ml2_type_flat] section, configure the provider virtual network as a flat net-
work:
[ml2_type_flat]
# ...
flat_networks = provider
– In the [ml2_type_vxlan] section, configure the VXLAN network identifier range for
self-service networks:
[ml2_type_vxlan]
# ...
vni_ranges = 1:1000
[securitygroup]
# ...
enable_ipset = true
The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for
instances and handles security groups.
• Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete
the following actions:
– In the [linux_bridge] section, map the provider virtual network to the provider physi-
cal network interface:
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
– Ensure your Linux operating system kernel supports network bridge filters by verifying all
the following sysctl values are set to 1:
net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-ip6tables
To enable networking bridge support, typically the br_netfilter kernel module needs
to be loaded. Check your operating system's documentation for additional details on enabling
this module.
The Layer-3 (L3) agent provides routing and NAT services for self-service virtual networks.
• Edit the /etc/neutron/l3_agent.ini file and complete the following actions:
– In the [DEFAULT] section, configure the Linux bridge interface driver:
[DEFAULT]
# ...
interface_driver = linuxbridge
[DEFAULT]
# ...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
[DEFAULT]
# ...
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
Note: The Nova compute service must be installed to complete this step. For more details see the
compute install guide found under the Installation Guides section of the docs website.
[neutron]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
Replace NEUTRON_PASS with the password you chose for the neutron user in the Iden-
tity service.
Replace METADATA_SECRET with the secret you chose for the metadata proxy.
See the compute service configuration guide for the full set of options including overriding
the service catalog endpoint URL if necessary.
# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Note: Database population occurs later for Networking because the script requires complete
server and plug-in configuration files.
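When you later populate the database, the command generally used is the following, run as root on the controller node once the server and ML2 plug-in configuration files are complete:
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron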
4. Start the Networking services and configure them to start when the system boots.
For both networking options:
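The unit names below are the ones normally shipped by the RHEL and CentOS packages; treat them as a sketch and adjust them if your packaging differs:
# systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service
# systemctl start neutron-server.service \
  neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
  neutron-metadata-agent.service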
For networking option 2, also enable and start the layer-3 service:
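Assuming the same naming convention:
# systemctl enable neutron-l3-agent.service
# systemctl start neutron-l3-agent.service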
The compute node handles connectivity and security groups for instances.
The Networking common component configuration includes the authentication mechanism, message
queue, and plug-in.
Note: Default configuration files vary by distribution. You might need to add these sections and options
rather than modifying existing sections and options. Also, an ellipsis (...) in the configuration snippets
indicates potential default configuration options that you should retain.
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace RABBIT_PASS with the password you chose for the openstack account in Rab-
bitMQ.
[DEFAULT]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
Replace NEUTRON_PASS with the password you chose for the neutron user in the Iden-
tity service.
Note: Comment out or remove any other options in the [keystone_authtoken] sec-
tion.
[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp
Choose the same networking option that you chose for the controller node to configure services specific
to it. Afterwards, return here and proceed to Configure the Compute service to use the Networking
service.
The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for
instances and handles security groups.
• Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete
the following actions:
– In the [linux_bridge] section, map the provider virtual network to the provider physi-
cal network interface:
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
[vxlan]
enable_vxlan = false
– In the [securitygroup] section, enable security groups and configure the Linux bridge
iptables firewall driver:
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
– Ensure your Linux operating system kernel supports network bridge filters by verifying all
the following sysctl values are set to 1:
net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-ip6tables
To enable networking bridge support, typically the br_netfilter kernel module needs
to be loaded. Check your operating system's documentation for additional details on enabling
this module.
Return to Networking compute node configuration
The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for
instances and handles security groups.
• Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete
the following actions:
– In the [linux_bridge] section, map the provider virtual network to the provider physi-
cal network interface:
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
– In the [vxlan] section, enable VXLAN overlay networks, configure the IP address of the
physical network interface that handles overlay networks, and enable layer-2 population:
[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
– Ensure your Linux operating system kernel supports network bridge filters by verifying all
the following sysctl values are set to 1:
net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-ip6tables
To enable networking bridge support, typically the br_netfilter kernel module needs
to be loaded. Check your operating system's documentation for additional details on enabling
this module.
Return to Networking compute node configuration.
[neutron]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
Replace NEUTRON_PASS with the password you chose for the neutron user in the Iden-
tity service.
See the compute service configuration guide for the full set of options including overriding
the service catalog endpoint URL if necessary.
2. Start the Linux bridge agent and configure it to start when the system boots:
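Assuming the standard unit name shipped by the RHEL and CentOS packages:
# systemctl enable neutron-linuxbridge-agent.service
# systemctl start neutron-linuxbridge-agent.service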
6 Install and configure for Ubuntu
After installing the operating system on each node for the architecture that you choose to deploy, you
must configure the network interfaces. We recommend that you disable any automated network man-
agement tools and manually edit the appropriate configuration files for your distribution. For more
information on how to configure networking on your distribution, see the documentation.
All nodes require Internet access for administrative purposes such as package installation, security up-
dates, Domain Name System (DNS), and Network Time Protocol (NTP). In most cases, nodes should
obtain Internet access through the management network interface. To highlight the importance of net-
work separation, the example architectures use private address space for the management network and
assume that the physical network infrastructure provides Internet access via Network Address Transla-
tion (NAT) or other methods. The example architectures use routable IP address space for the provider
(external) network and assume that the physical network infrastructure provides direct Internet access.
In the provider networks architecture, all instances attach directly to the provider network. In the self-
service (private) networks architecture, instances can attach to a self-service or provider network. Self-
service networks can reside entirely within OpenStack or provide some level of external network access
using Network Address Translation (NAT) through the provider network.
The example architectures assume use of the following networks:
• Management on 10.0.0.0/24 with gateway 10.0.0.1
This network requires a gateway to provide Internet access to all nodes for administrative purposes
such as package installation, security updates, Domain Name System (DNS), and Network Time
Protocol (NTP).
• Provider on 203.0.113.0/24 with gateway 203.0.113.1
This network requires a gateway to provide Internet access to instances in your OpenStack envi-
ronment.
You can modify these ranges and gateways to work with your particular network infrastructure.
Network interface names vary by distribution. Traditionally, interfaces use eth followed by a sequential
number. To cover all variations, this guide refers to the first interface as the interface with the lowest
number and the second interface as the interface with the highest number.
Unless you intend to use the exact configuration provided in this example architecture, you must modify
the networks in this procedure to match your environment. Each node must resolve the other nodes by
name in addition to IP address. For example, the controller name must resolve to 10.0.0.11,
the IP address of the management interface on the controller node.
Note: Your distribution does not enable a restrictive firewall by default. For more information about
securing your environment, refer to the OpenStack Security Guide.
# controller
10.0.0.11 controller
# compute1
10.0.0.31 compute1
# block1
10.0.0.41 block1
# object1
10.0.0.51 object1
Warning: Some distributions add an extraneous entry in the /etc/hosts file that resolves
the actual hostname to another loopback IP address such as 127.0.1.1. You must comment
out or remove this entry to prevent name resolution problems. Do not remove the 127.0.0.1
entry.
Note: This guide includes host entries for optional services in order to reduce complexity should
you choose to deploy them.
Note: Additional compute nodes should use 10.0.0.32, 10.0.0.33, and so on.
2. The provider interface uses a special configuration without an IP address assigned to it. Configure
the second interface as the provider interface:
Replace INTERFACE_NAME with the actual interface name. For example, eth1 or ens224.
• Edit the /etc/network/interfaces file to contain the following:
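A typical stanza looks like the following sketch; INTERFACE_NAME is a placeholder for the actual interface name:
# The provider network interface
auto INTERFACE_NAME
iface INTERFACE_NAME inet manual
up ip link set dev $IFACE up
down ip link set dev $IFACE down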
# controller
10.0.0.11 controller
# compute1
10.0.0.31 compute1
# block1
10.0.0.41 block1
# object1
10.0.0.51 object1
# object2
10.0.0.52 object2
Warning: Some distributions add an extraneous entry in the /etc/hosts file that resolves
the actual hostname to another loopback IP address such as 127.0.1.1. You must comment
out or remove this entry to prevent name resolution problems. Do not remove the 127.0.0.1
entry.
Note: This guide includes host entries for optional services in order to reduce complexity should
you choose to deploy them.
We recommend that you verify network connectivity to the Internet and among the nodes before pro-
ceeding further.
1. From the controller node, test access to the Internet:
# ping -c 4 openstack.org
2. From the controller node, test access to the management interface on the compute node:
# ping -c 4 compute1
3. From the compute node, test access to the Internet:
# ping -c 4 openstack.org
4. From the compute node, test access to the management interface on the controller node:
# ping -c 4 controller
Note: Your distribution does not enable a restrictive firewall by default. For more information about
securing your environment, refer to the OpenStack Security Guide.
6.2.1 Prerequisites
Before you configure the OpenStack Networking (neutron) service, you must create a database, service
credentials, and API endpoints.
1. To create the database, complete these steps:
• Use the database access client to connect to the database server as the root user:
$ mysql -u root -p
• Grant proper access to the neutron database, replacing NEUTRON_DBPASS with a suit-
able password:
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
  IDENTIFIED BY 'NEUTRON_DBPASS';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
  IDENTIFIED BY 'NEUTRON_DBPASS';
$ . admin-openrc
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | fdb0f541e28141719b6a43c8944bf1fb |
| name | neutron |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Networking |
| enabled | True |
| id | f71529314dab4a4d8eca427e701d209e |
| name | neutron |
| type | network |
+-------------+----------------------------------+
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 85d80a6d02fc4b7683f611d7fc1493a3 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne \
  network internal http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 09753b537ac74422a68d2d791cf3714f |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
$ openstack endpoint create --region RegionOne \
  network admin http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 1ee14289c9374dffb5db92a5c112fc4e |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | f71529314dab4a4d8eca427e701d209e |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
You can deploy the Networking service using one of two architectures represented by options 1 and 2.
Option 1 deploys the simplest possible architecture, which only supports attaching instances to provider
(external) networks. It does not provide self-service (private) networks, routers, or floating IP addresses.
Only the admin or another privileged user can manage provider networks.
Option 2 augments option 1 with layer-3 services that support attaching instances to self-service net-
works. The demo or other unprivileged user can manage self-service networks including routers that
provide connectivity between self-service and provider networks. Additionally, floating IP addresses
provide connectivity to instances using self-service networks from external networks such as the Inter-
net.
Self-service networks typically use overlay networks. Overlay network protocols such as VXLAN in-
clude additional headers that increase overhead and decrease space available for the payload or user
data. Without knowledge of the virtual network infrastructure, instances attempt to send packets using
the default Ethernet maximum transmission unit (MTU) of 1500 bytes. The Networking service auto-
matically provides the correct MTU value to instances via DHCP. However, some cloud images do not
use DHCP or ignore the DHCP MTU option and require configuration using metadata or a script.
Choose one of the following networking options to configure services specific to it. Afterwards, return
here and proceed to Configure the metadata agent.
The Networking server component configuration includes the database, authentication mechanism, mes-
sage queue, topology change notifications, and plug-in.
Note: Default configuration files vary by distribution. You might need to add these sections and options
rather than modifying existing sections and options. Also, an ellipsis (...) in the configuration snippets
indicates potential default configuration options that you should retain.
• Edit the /etc/neutron/neutron.conf file and complete the following actions:
– In the [database] section, configure database access:
[database]
# ...
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
Replace NEUTRON_DBPASS with the password you chose for the database.
Note: Comment out or remove any other connection options in the [database]
section.
– In the [DEFAULT] section, enable the Modular Layer 2 (ML2) plug-in and disable addi-
tional plug-ins:
[DEFAULT]
# ...
core_plugin = ml2
service_plugins =
– In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace RABBIT_PASS with the password you chose for the openstack account in
RabbitMQ.
– In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service
access:
[DEFAULT]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
Replace NEUTRON_PASS with the password you chose for the neutron user in the Iden-
tity service.
Note: Comment out or remove any other options in the [keystone_authtoken] sec-
tion.
– In the [DEFAULT] and [nova] sections, configure Networking to notify Compute of net-
work topology changes:
[DEFAULT]
# ...
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[nova]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
Replace NOVA_PASS with the password you chose for the nova user in the Identity service.
• In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp
The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging and switching) virtual
networking infrastructure for instances.
• Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file and complete the following
actions:
– In the [ml2] section, enable flat and VLAN networks:
[ml2]
# ...
type_drivers = flat,vlan
– In the [ml2] section, disable self-service networks:
[ml2]
# ...
tenant_network_types =
– In the [ml2] section, enable the Linux bridge mechanism:
[ml2]
# ...
mechanism_drivers = linuxbridge
Warning: After you configure the ML2 plug-in, removing values in the
type_drivers option can lead to database inconsistency.
– In the [ml2] section, enable the port security extension driver:
[ml2]
# ...
extension_drivers = port_security
– In the [ml2_type_flat] section, configure the provider virtual network as a flat net-
work:
[ml2_type_flat]
# ...
flat_networks = provider
– In the [securitygroup] section, enable ipset to increase efficiency of security group
rules:
[securitygroup]
# ...
enable_ipset = true
The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for
instances and handles security groups.
• Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete
the following actions:
– In the [linux_bridge] section, map the provider virtual network to the provider physi-
cal network interface:
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
– In the [vxlan] section, disable VXLAN overlay networks:
[vxlan]
enable_vxlan = false
– In the [securitygroup] section, enable security groups and configure the Linux bridge
iptables firewall driver:
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
– Ensure your Linux operating system kernel supports network bridge filters by verifying all
the following sysctl values are set to 1:
net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-ip6tables
To enable networking bridge support, typically the br_netfilter kernel module needs
to be loaded. Check your operating system's documentation for additional details on enabling
this module.
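For example, on many distributions you can load the module and check both values like this (the exact
mechanism for loading modules at boot varies by platform, so treat this as a sketch):
# modprobe br_netfilter
# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
Both keys should report a value of 1.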
The DHCP agent provides DHCP services for virtual networks.
• Edit the /etc/neutron/dhcp_agent.ini file and complete the following actions:
– In the [DEFAULT] section, configure the Linux bridge interface driver, Dnsmasq DHCP
driver, and enable isolated metadata so instances on provider networks can access metadata
over the network:
[DEFAULT]
# ...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
Follow this provider network document from the General Installation Guide.
Return to Networking controller node configuration.
[database]
# ...
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
Replace NEUTRON_DBPASS with the password you chose for the database.
Note: Comment out or remove any other connection options in the [database]
section.
– In the [DEFAULT] section, enable the Modular Layer 2 (ML2) plug-in, router service, and
overlapping IP addresses:
[DEFAULT]
# ...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace RABBIT_PASS with the password you chose for the openstack account in Rab-
bitMQ.
– In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service
access:
[DEFAULT]
# ...
auth_strategy = keystone
Replace NEUTRON_PASS with the password you chose for the neutron user in the Iden-
tity service.
Note: Comment out or remove any other options in the [keystone_authtoken] sec-
tion.
– In the [DEFAULT] and [nova] sections, configure Networking to notify Compute of net-
work topology changes:
[DEFAULT]
# ...
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[nova]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
Replace NOVA_PASS with the password you chose for the nova user in the Identity service.
• In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp
The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging and switching) virtual
networking infrastructure for instances.
• Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file and complete the following
actions:
– In the [ml2] section, enable flat, VLAN, and VXLAN networks:
[ml2]
# ...
type_drivers = flat,vlan,vxlan
– In the [ml2] section, enable VXLAN self-service networks:
[ml2]
# ...
tenant_network_types = vxlan
– In the [ml2] section, enable the Linux bridge and layer-2 population mechanisms:
[ml2]
# ...
mechanism_drivers = linuxbridge,l2population
Warning: After you configure the ML2 plug-in, removing values in the
type_drivers option can lead to database inconsistency.
Note: The Linux bridge agent only supports VXLAN overlay networks.
– In the [ml2] section, enable the port security extension driver:
[ml2]
# ...
extension_drivers = port_security
– In the [ml2_type_flat] section, configure the provider virtual network as a flat net-
work:
[ml2_type_flat]
# ...
flat_networks = provider
– In the [ml2_type_vxlan] section, configure the VXLAN network identifier range for
self-service networks:
[ml2_type_vxlan]
# ...
vni_ranges = 1:1000
– In the [securitygroup] section, enable ipset to increase efficiency of security group
rules:
[securitygroup]
# ...
enable_ipset = true
The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for
instances and handles security groups.
• Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete
the following actions:
– In the [linux_bridge] section, map the provider virtual network to the provider physi-
cal network interface:
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
– In the [vxlan] section, enable VXLAN overlay networks, configure the IP address of the
physical network interface that handles overlay networks, and enable layer-2 population:
[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true
Replace OVERLAY_INTERFACE_IP_ADDRESS with the IP address of the underlying
physical network interface that handles overlay networks.
– In the [securitygroup] section, enable security groups and configure the Linux bridge
iptables firewall driver:
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
– Ensure your Linux operating system kernel supports network bridge filters by verifying all
the following sysctl values are set to 1:
net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-ip6tables
To enable networking bridge support, typically the br_netfilter kernel module needs
to be loaded. Check your operating system's documentation for additional details on enabling
this module.
The Layer-3 (L3) agent provides routing and NAT services for self-service virtual networks.
• Edit the /etc/neutron/l3_agent.ini file and complete the following actions:
– In the [DEFAULT] section, configure the Linux bridge interface driver:
[DEFAULT]
# ...
interface_driver = linuxbridge
The DHCP agent provides DHCP services for virtual networks.
• Edit the /etc/neutron/dhcp_agent.ini file and complete the following actions:
– In the [DEFAULT] section, configure the Linux bridge interface driver, Dnsmasq DHCP
driver, and enable isolated metadata so instances on provider networks can access metadata
over the network:
[DEFAULT]
# ...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
The metadata agent provides configuration information such as credentials to instances.
• Edit the /etc/neutron/metadata_agent.ini file and complete the following actions:
– In the [DEFAULT] section, configure the metadata host and shared secret:
[DEFAULT]
# ...
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
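Replace METADATA_SECRET with a suitable secret for the metadata proxy. For example,
one illustrative way to generate a random secret (not mandated by this guide) is:
$ openssl rand -hex 10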
Note: The Nova compute service must be installed to complete this step. For more details see the
compute install guide found under the Installation Guides section of the docs website.
• Edit the /etc/nova/nova.conf file and perform the following actions:
– In the [neutron] section, configure access parameters, enable the metadata proxy, and
configure the secret:
[neutron]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
Replace NEUTRON_PASS with the password you chose for the neutron user in the Iden-
tity service.
Replace METADATA_SECRET with the secret you chose for the metadata proxy.
See the compute service configuration guide for the full set of options including overriding
the service catalog endpoint URL if necessary.
Note: Database population occurs later for Networking because the script requires complete
server and plug-in configuration files.
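For reference, that later population step typically runs neutron-db-manage against the server
and ML2 plug-in configuration files used in this guide, along the lines of the following sketch:
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron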
The compute node handles connectivity and security groups for instances.
The Networking common component configuration includes the authentication mechanism, message
queue, and plug-in.
Note: Default configuration files vary by distribution. You might need to add these sections and options
rather than modifying existing sections and options. Also, an ellipsis (...) in the configuration snippets
indicates potential default configuration options that you should retain.
• Edit the /etc/neutron/neutron.conf file and complete the following actions:
– In the [DEFAULT] section, configure RabbitMQ message queue access:
[DEFAULT]
# ...
transport_url = rabbit://openstack:RABBIT_PASS@controller
Replace RABBIT_PASS with the password you chose for the openstack account in Rab-
bitMQ.
– In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service
access:
[DEFAULT]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
Replace NEUTRON_PASS with the password you chose for the neutron user in the Iden-
tity service.
Note: Comment out or remove any other options in the [keystone_authtoken] sec-
tion.
• In the [oslo_concurrency] section, configure the lock path:
[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp
Choose the same networking option that you chose for the controller node to configure services specific
to it. Afterwards, return here and proceed to Configure the Compute service to use the Networking
service.
The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for
instances and handles security groups.
• Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete
the following actions:
– In the [linux_bridge] section, map the provider virtual network to the provider physi-
cal network interface:
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
– In the [vxlan] section, disable VXLAN overlay networks:
[vxlan]
enable_vxlan = false
– In the [securitygroup] section, enable security groups and configure the Linux bridge
iptables firewall driver:
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
– Ensure your Linux operating system kernel supports network bridge filters by verifying all
the following sysctl values are set to 1:
net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-ip6tables
To enable networking bridge support, typically the br_netfilter kernel module needs
to be loaded. Check your operating system's documentation for additional details on enabling
this module.
Return to Networking compute node configuration
The Linux bridge agent builds layer-2 (bridging and switching) virtual networking infrastructure for
instances and handles security groups.
• Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini file and complete
the following actions:
– In the [linux_bridge] section, map the provider virtual network to the provider physi-
cal network interface:
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
– In the [vxlan] section, enable VXLAN overlay networks, configure the IP address of the
physical network interface that handles overlay networks, and enable layer-2 population:
[vxlan]
enable_vxlan = true
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
l2_population = true
– In the [securitygroup] section, enable security groups and configure the Linux bridge
iptables firewall driver:
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
– Ensure your Linux operating system kernel supports network bridge filters by verifying all
the following sysctl values are set to 1:
net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-ip6tables
To enable networking bridge support, typically the br_netfilter kernel module needs
to be loaded. Check your operating system's documentation for additional details on enabling
this module.
Return to Networking compute node configuration.
• Edit the /etc/nova/nova.conf file and complete the following actions:
– In the [neutron] section, configure access parameters:
[neutron]
# ...
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
Replace NEUTRON_PASS with the password you chose for the neutron user in the Iden-
tity service.
See the compute service configuration guide for the full set of options including overriding
the service catalog endpoint URL if necessary.
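To finalize the installation on the compute node, the service restarts typically look like the following
(the systemd unit names shown assume RHEL/CentOS-style packaging and are illustrative):
# systemctl restart openstack-nova-compute.service
# systemctl enable neutron-linuxbridge-agent.service
# systemctl start neutron-linuxbridge-agent.service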
This document discusses what is required for manual installation or integration into a production Open-
Stack deployment tool of conventional architectures that include the following types of nodes:
• Controller - Runs OpenStack control plane services such as REST APIs and databases.
• Network - Runs the layer-2, layer-3 (routing), DHCP, and metadata agents for the Networking
service. Some of these agents are optional. The network node usually provides connectivity between
provider (public) and project (private) networks via NAT and floating IP addresses.
• Compute - Runs the hypervisor and layer-2 agent for the Networking service.
7.1.1 Packaging
Open vSwitch (OVS) includes OVN beginning with version 2.5 and considers it experimental. Since
version 2.13, OVN has been released as a separate project. The Networking service integration for OVN
is now one of the in-tree Neutron drivers and is delivered with the neutron package, but older
versions of this integration were delivered as an independent package, typically networking-ovn.
Building OVS from source automatically installs OVN for releases older than 2.13. For newer re-
leases it is required to build OVS and OVN separately. For deployment tools using distribution pack-
ages, the openvswitch-ovn package for RHEL/CentOS and compatible distributions automati-
cally installs openvswitch as a dependency. Ubuntu/Debian includes ovn-central, ovn-host,
ovn-docker, and ovn-common packages that pull in the appropriate Open vSwitch dependencies as
needed.
A python-networking-ovn RPM may be obtained for Fedora or CentOS from the RDO project.
Since the Ussuri release, the OVN driver is shipped with the neutron package. A package based on the
older branch of networking-ovn can be found at https://trunk.rdoproject.org/.
Fedora and CentOS RPM builds of OVS and OVN from the master branch of ovs can be found in
this COPR repository: https://copr.fedorainfracloud.org/coprs/leifmadsen/ovs-master/.
Each controller node runs the OVS service (including dependent services such as ovsdb-server) and
the ovn-northd service. However, only a single instance of the ovsdb-server and ovn-northd
services can operate in a deployment. Deployment tools can implement active/passive high availability
using a management tool that monitors service health and automatically starts these services on another
node after failure of the primary node. See the Frequently Asked Questions for more information.
1. Install the openvswitch-ovn and networking-ovn packages.
2. Start the OVS service. The central OVS service starts the ovsdb-server service that manages
OVN databases.
Using the systemd unit:
# systemctl start openvswitch
3. Configure the ovsdb-server component. By default, the ovsdb-server service only per-
mits local access to databases via Unix socket. However, OVN services on compute nodes require
access to these databases.
• Permit remote database access.
Replace 0.0.0.0 with the IP address of the management network interface on the con-
troller node to avoid listening on all interfaces.
Note: Permit remote access to the following TCP ports: 6640 (OVS) to VTEPs (if you use VTEPs);
6642 (SBDB) to hosts running neutron-server, gateway nodes that run ovn-controller, and
compute node services such as ovn-controller and ovn-metadata-agent; and 6641 (NBDB)
to hosts running neutron-server.
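The exact commands are not reproduced here; one common way to open the northbound and southbound
databases for remote access (a sketch, assuming the ovn-nbctl and ovn-sbctl utilities are available)
is:
# ovn-nbctl set-connection ptcp:6641:0.0.0.0
# ovn-sbctl set-connection ptcp:6642:0.0.0.0
As noted above, replace 0.0.0.0 with the management IP address of the controller node to avoid
listening on all interfaces.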
4. Start the ovn-northd service:
# /usr/share/openvswitch/scripts/ovn-ctl start_northd
5. Configure the Networking server component. The Networking service implements OVN as an
ML2 driver. Edit the /etc/neutron/neutron.conf file:
• Enable the ML2 core plug-in.
[DEFAULT]
...
core_plugin = ml2
Note: To enable VLAN self-service networks, make sure that OVN version 2.11 (or higher)
is used, then add vlan to the tenant_network_types option. The first network type
in the list becomes the default self-service network type.
To use IPv6 for all overlay (tunnel) network endpoints, set the overlay_ip_version
option to 6.
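The corresponding [ml2] settings are not shown above; a typical configuration for the OVN mechanism
driver resembles the following sketch (the values are illustrative and should be adjusted for your
deployment):
[ml2]
...
mechanism_drivers = ovn
type_drivers = local,flat,vlan,geneve
tenant_network_types = geneve
extension_drivers = port_security
overlay_ip_version = 4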
• Configure the Geneve ID range and maximum header size. The IP version overhead (20
bytes for IPv4 (default) or 40 bytes for IPv6) is added to the maximum header size based on
the ML2 overlay_ip_version option.
[ml2_type_geneve]
...
vni_ranges = 1:65536
max_header_size = 38
Note: The Networking service uses the vni_ranges option to allocate network segments. However,
OVN ignores the actual values. Thus, the ID range only determines the quantity of Geneve networks in
the environment rather than the tunnel IDs actually used.
• Optionally, enable support for VLAN provider and self-service networks on one or more
physical networks. If you specify only the physical network, only administrative (privileged)
users can manage VLAN networks. Additionally specifying a VLAN ID range for a physical
network enables regular (non-privileged) users to manage VLAN networks. The Networking
service allocates the VLAN ID for each self-service network using the VLAN ID range for
the physical network.
[ml2_type_vlan]
...
network_vlan_ranges = PHYSICAL_NETWORK:MIN_VLAN_ID:MAX_VLAN_ID
Replace PHYSICAL_NETWORK with the physical network name and optionally define the
minimum and maximum VLAN IDs. Use a comma to separate each physical network.
For example, to enable support for administrative VLAN networks on the physnet1 net-
work and self-service VLAN networks on the physnet2 network using VLAN IDs 1001
to 2000:
network_vlan_ranges = physnet1,physnet2:1001:2000
[securitygroup]
...
enable_security_group = true
[ovn]
...
ovn_nb_connection = tcp:IP_ADDRESS:6641
ovn_sb_connection = tcp:IP_ADDRESS:6642
ovn_l3_scheduler = OVN_L3_SCHEDULER
Note: Replace IP_ADDRESS with the IP address of the controller node that runs the
ovsdb-server service. Replace OVN_L3_SCHEDULER with leastloaded if you
want the scheduler to select a compute node with the least number of gateway ports or
chance if you want the scheduler to randomly select a compute node from the available
list of compute nodes.
Deployments using OVN native layer-3 and DHCP services do not require conventional network nodes
because connectivity to external networks (including VTEP gateways) and routing occurs on compute
nodes.
Each compute node runs the OVS and ovn-controller services. The ovn-controller service
replaces the conventional OVS layer-2 agent.
1. Install the openvswitch-ovn and networking-ovn packages.
2. Start the OVS service.
Using the systemd unit:
# systemctl start openvswitch
Replace IP_ADDRESS with the IP address of the controller node that runs the
ovsdb-server service.
• Enable one or more overlay network protocols. At a minimum, OVN requires enabling
the geneve protocol. Deployments using VTEP gateways should also enable the vxlan
protocol.
Note: Deployments without VTEP gateways can safely enable both protocols.
Replace IP_ADDRESS with the IP address of the overlay network interface on the compute
node.
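The corresponding commands are not reproduced above; registering the compute node with OVN
typically uses ovs-vsctl to set external IDs in the local Open vSwitch database, along these lines
(a sketch; the IP addresses follow the instructions above):
# ovs-vsctl set open . external-ids:ovn-remote=tcp:IP_ADDRESS:6642
# ovs-vsctl set open . external-ids:ovn-encap-type=geneve,vxlan
# ovs-vsctl set open . external-ids:ovn-encap-ip=IP_ADDRESS
Here the first IP_ADDRESS is the controller node running ovsdb-server and the last is the overlay
network interface on the compute node, as described above.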
Start the ovn-controller service:
# /usr/share/openvswitch/scripts/ovn-ctl start_controller
To verify that the compute node has registered with the OVN southbound database, run the following
on the controller node:
# ovn-sbctl show
<output>
TripleO is a project aimed at installing, upgrading, and operating OpenStack clouds using OpenStack's
own cloud facilities as the foundation.
RDO is the OpenStack distribution that runs on top of CentOS, and can be deployed via TripleO.
TripleO Quickstart is an easy way to try out TripleO in a libvirt virtualized environment.
In this document we will stick to the details of installing a 3-controller + 1-compute deployment in high
availability through TripleO Quickstart, but the non-quickstart details in this document also work with
TripleO.
Note: This deployment requires 32GB for the VMs, so your host should have at least 32GB of RAM. If
you have only 32GB, we recommend trimming down the compute node memory in
config/nodes/3ctlr_1comp.yml to 2GB and the controller nodes to 5GB.
$ curl -O https://raw.githubusercontent.com/openstack/tripleo-quickstart/master/quickstart.sh
$ export ansible_tags="untagged,provision,environment,libvirt,\
undercloud-scripts,undercloud-inventory,overcloud-scripts,\
undercloud-setup,undercloud-install,undercloud-post-install,\
overcloud-prep-config"
Note: When deploying directly on localhost use the loopback address 127.0.0.2 as your
$VIRTHOST. The loopback address 127.0.0.1 is reserved by ansible. Also make sure that
127.0.0.2 is accessible via public keys:
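For example (an illustrative way to do this, assuming an existing RSA key pair):
$ ssh-copy-id -i ~/.ssh/id_rsa.pub 127.0.0.2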
Note: You can adjust RAM/VCPUs if you want by editing config/nodes/3ctlr_1comp.yml before
running the above command. If you have enough memory, stick to the defaults; we recommend
using 8GB of RAM for the controller nodes.
5. When quickstart has finished you will have 5 VMs ready to be used: 1 for the undercloud (TripleO's
node to deploy your OpenStack from), 3 VMs for controller nodes, and 1 VM for the compute node.
6. Log in to the undercloud:
9. Grab a coffee; this may take around 1 hour (depending on your hardware).
10. If anything goes wrong, go to IRC on freenode, and ask on #oooq
Once deployed, two files are present inside the undercloud root directory: stackrc and overcloudrc.
They let you connect to the APIs of the undercloud (which manages the OpenStack nodes) and of the
overcloud (where your instances live).
We can find out the existing controller/computes this way:
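For example, using the undercloud credentials (a sketch):
$ source ~/stackrc
$ openstack server list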
We can connect to any of the IP addresses shown in the openstack server list output above.
type: patch
options: {peer="patch-br-int-to-provnet-84d63c87-aad1-43d0-bdc9-dca5145b6fe6"}
Port "eth0"
Interface "eth0"
...
Bridge br-int
fail_mode: secure
Port "ovn-c8b85a-0"
Interface "ovn-c8b85a-0"
type: geneve
options: {csum="true", key=flow, remote_ip="172.16.0.17"}
Port "ovn-b5643d-0"
Interface "ovn-b5643d-0"
type: geneve
options: {csum="true", key=flow, remote_ip="172.16.0.14"}
Port "ovn-14d60a-0"
Interface "ovn-14d60a-0"
type: geneve
options: {csum="true", key=flow, remote_ip="172.16.0.12"}
Port "patch-br-int-to-provnet-84d63c87-aad1-43d0-bdc9-dca5145b6fe6"
Interface "patch-br-int-to-provnet-84d63c87-aad1-43d0-bdc9-
,→dca5145b6fe6"
type: patch
options: {peer="patch-provnet-84d63c87-aad1-43d0-bdc9-dca5145b6fe6-to-br-int"}
Port br-int
Interface br-int
type: internal
Well, now you have a virtual cloud with 3 controllers in HA, and one compute node, but no instances or
routers running. We can give it a try and create a few resources:
source ~/overcloudrc
curl http://download.cirros-cloud.net/0.5.1/cirros-0.5.1-x86_64-disk.img \
> cirros-0.5.1-x86_64-disk.img
openstack image create "cirros" --file cirros-0.5.1-x86_64-disk.img \
--disk-format qcow2 --container-format bare --public
Note: You can now log in into the instance if you want. In a CirrOS >0.4.0 image, the login account is
cirros. The password is gocubsgo.
$ ip a | grep eth0 -A 10
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1442 qdisc pfifo_fast qlen 1000
$ ping 10.0.0.1
PING 10.0.0.1 (10.0.0.1): 56 data bytes
64 bytes from 10.0.0.1: seq=0 ttl=63 time=2.145 ms
64 bytes from 10.0.0.1: seq=1 ttl=63 time=1.025 ms
64 bytes from 10.0.0.1: seq=2 ttl=63 time=0.836 ms
^C
--- 10.0.0.1 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.836/1.335/2.145 ms
$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=52 time=3.943 ms
64 bytes from 8.8.8.8: seq=1 ttl=52 time=4.519 ms
64 bytes from 8.8.8.8: seq=2 ttl=52 time=3.778 ms
This chapter explains how to install and configure the Networking service (neutron) using the provider
networks or self-service networks option.
For more information about the Networking service including virtual networking components, layout,
and traffic flows, see the OpenStack Networking Guide.
This guide targets OpenStack administrators seeking to deploy and manage OpenStack Networking
(neutron).
8.1 Introduction
The OpenStack Networking service (neutron) provides an API that allows users to set up and define
network connectivity and addressing in the cloud. The project code-name for Networking services is
neutron. OpenStack Networking handles the creation and management of a virtual networking infras-
tructure, including networks, switches, subnets, and routers for devices managed by the OpenStack
Compute service (nova). Advanced services such as firewalls or virtual private network (VPN) can also
be used.
OpenStack Networking consists of the neutron-server, a database for persistent storage, and any num-
ber of plug-in agents, which provide other services such as interfacing with native Linux networking
mechanisms, external devices, or SDN controllers.
OpenStack Networking is entirely standalone and can be deployed to a dedicated host. If your deploy-
ment uses a controller host to run centralized Compute components, you can deploy the Networking
server to that specific host instead.
OpenStack Networking integrates with various OpenStack components:
• OpenStack Identity service (keystone) is used for authentication and authorization of API requests.
• OpenStack Compute service (nova) is used to plug each virtual NIC on the VM into a particular
network.
• OpenStack Dashboard (horizon) is used by administrators and project users to create and manage
network services through a web-based graphical interface.
Note: The network address ranges used in this guide are chosen in accordance with RFC 5737 and RFC
3849, and as such are restricted to the following:
IPv4:
• 192.0.2.0/24
• 198.51.100.0/24
• 203.0.113.0/24
IPv6:
• 2001:DB8::/32
The network address ranges in the examples of this guide should not be used for any purpose other than
documentation.
Note: To reduce clutter, this guide removes command output without relevance to the particular action.
Ethernet
Ethernet is a networking protocol, specified by the IEEE 802.3 standard. Most wired network interface
cards (NICs) communicate using Ethernet.
In the OSI model of networking protocols, Ethernet occupies the second layer, which is known as the
data link layer. When discussing Ethernet, you will often hear terms such as local network, layer 2, L2,
link layer and data link layer.
In an Ethernet network, the hosts connected to the network communicate by exchanging frames. Every
host on an Ethernet network is uniquely identified by an address called the media access control (MAC)
address. In particular, every virtual machine instance in an OpenStack environment has a unique MAC
address, which is different from the MAC address of the compute host. A MAC address has 48 bits and
is typically represented as a hexadecimal string, such as 08:00:27:b9:88:74. The MAC address
is hard-coded into the NIC by the manufacturer, although modern NICs allow you to change the MAC
address programmatically. In Linux, you can retrieve the MAC address of a NIC using the ip command:
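For example (illustrative output; the interface name and address will differ on your system):
$ ip link show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 08:00:27:b9:88:74 brd ff:ff:ff:ff:ff:ff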
Conceptually, you can think of an Ethernet network as a single bus that each of the network hosts
connects to. In early implementations, an Ethernet network consisted of a single coaxial cable that
hosts would tap into to connect to the network. However, network hosts in modern Ethernet networks
connect directly to a network device called a switch. Still, this conceptual model is useful, and in network
diagrams (including those generated by the OpenStack dashboard) an Ethernet network is often depicted
as if it were a single bus. You'll sometimes hear an Ethernet network referred to as a layer 2 segment.
In an Ethernet network, every host on the network can send a frame directly to every other host. An Eth-
ernet network also supports broadcasts so that one host can send a frame to every host on the network
by sending to the special MAC address ff:ff:ff:ff:ff:ff. ARP and DHCP are two notable pro-
tocols that use Ethernet broadcasts. Because Ethernet networks support broadcasts, you will sometimes
hear an Ethernet network referred to as a broadcast domain.
When a NIC receives an Ethernet frame, by default the NIC checks to see if the destination MAC
address matches the address of the NIC (or the broadcast address), and the Ethernet frame is discarded
if the MAC address does not match. For a compute host, this behavior is undesirable because the frame
may be intended for one of the instances. NICs can be configured for promiscuous mode, where they
pass all Ethernet frames to the operating system, even if the MAC address does not match. Compute
hosts should always have the appropriate NICs configured for promiscuous mode.
As mentioned earlier, modern Ethernet networks use switches to interconnect the network hosts. A
switch is a box of networking hardware with a large number of ports that forward Ethernet frames from
one connected host to another. When hosts first send frames over the switch, the switch doesn't know
which MAC address is associated with which port. If an Ethernet frame is destined for an unknown MAC
address, the switch broadcasts the frame to all ports. The switch learns which MAC addresses are at
which ports by observing the traffic. Once it knows which MAC address is associated with a port, it can
send Ethernet frames to the correct port instead of broadcasting. The switch maintains the mappings of
MAC addresses to switch ports in a table called a forwarding table or forwarding information base (FIB).
Switches can be daisy-chained together, and the resulting connection of switches and hosts behaves like
a single network.
VLANs
VLAN is a networking technology that enables a single switch to act as if it were multiple independent
switches. Specifically, two hosts that are connected to the same switch but on different VLANs do not
see each other's traffic. OpenStack is able to take advantage of VLANs to isolate the traffic of different
projects, even if the projects happen to have instances running on the same compute host. Each VLAN
has an associated numerical ID, between 1 and 4095. We say VLAN 15 to refer to the VLAN with a
numerical ID of 15.
To understand how VLANs work, let's consider VLAN applications in a traditional IT environment,
where physical hosts are attached to a physical switch, and no virtualization is involved. Imagine a
scenario where you want three isolated networks but you only have a single physical switch. The net-
work administrator would choose three VLAN IDs, for example, 10, 11, and 12, and would configure
the switch to associate switchports with VLAN IDs. For example, switchport 2 might be associated
with VLAN 10, switchport 3 might be associated with VLAN 11, and so forth. When a switchport is
configured for a specific VLAN, it is called an access port. The switch is responsible for ensuring that
the network traffic is isolated across the VLANs.
Now consider the scenario that all of the switchports in the first switch become occupied, and so the
organization buys a second switch and connects it to the first switch to expand the available number of
switchports. The second switch is also configured to support VLAN IDs 10, 11, and 12. Now imagine
host A connected to switch 1 on a port configured for VLAN ID 10 sends an Ethernet frame intended
for host B connected to switch 2 on a port configured for VLAN ID 10. When switch 1 forwards the
Ethernet frame to switch 2, it must communicate that the frame is associated with VLAN ID 10.
If two switches are to be connected together, and the switches are configured for VLANs, then the
switchports used for cross-connecting the switches must be configured to allow Ethernet frames from
any VLAN to be forwarded to the other switch. In addition, the sending switch must tag each Ethernet
frame with the VLAN ID so that the receiving switch can ensure that only hosts on the matching VLAN
are eligible to receive the frame.
A switchport that is configured to pass frames from all VLANs and tag them with the VLAN IDs is
called a trunk port. IEEE 802.1Q is the network standard that describes how VLAN tags are encoded in
Ethernet frames when trunking is being used.
Note that if you are using VLANs on your physical switches to implement project isolation in your
OpenStack cloud, you must ensure that all of your switchports are configured as trunk ports.
It is important that you select a VLAN range not being used by your current network infrastructure. For
example, if you estimate that your cloud must support a maximum of 100 projects, pick a VLAN range
outside of that value, such as VLAN 200-299. OpenStack, and all physical network infrastructure that
handles project networks, must then support this VLAN range.
Trunking is used to connect between different switches. Each trunk uses a tag to identify which VLAN
is in use. This ensures that switches on the same VLAN can communicate.
While NICs use MAC addresses to address network hosts, TCP/IP applications use IP addresses. The
Address Resolution Protocol (ARP) bridges the gap between Ethernet and IP by translating IP addresses
into MAC addresses.
IP addresses are broken up into two parts: a network number and a host identifier. Two hosts are on
the same subnet if they have the same network number. Recall that two hosts can only communicate
directly over Ethernet if they are on the same local network. ARP assumes that all machines that are in
the same subnet are on the same local network. Network administrators must take care when assigning
IP addresses and netmasks to hosts so that any two hosts that are in the same subnet are on the same
local network, otherwise ARP does not work properly.
To calculate the network number of an IP address, you must know the netmask associated with the
address. A netmask indicates how many of the bits in the 32-bit IP address make up the network number.
There are two syntaxes for expressing a netmask:
• dotted quad
• classless inter-domain routing (CIDR)
Consider an IP address of 192.0.2.5, where the first 24 bits of the address are the network number. In
dotted quad notation, the netmask would be written as 255.255.255.0. CIDR notation includes both
the IP address and netmask, and this example would be written as 192.0.2.5/24.
Note: CIDR subnets that include a multicast address or a loopback address cannot be used in an
OpenStack environment. For example, creating a subnet using 224.0.0.0/16 or 127.0.1.0/24
is not supported.
Sometimes we want to refer to a subnet, but not any particular IP address on the subnet. A common
convention is to set the host identifier to all zeros to make reference to a subnet. For example, if a host's
IP address is 192.0.2.24/24, then we would say the subnet is 192.0.2.0/24.
To understand how ARP translates IP addresses to MAC addresses, consider the following example.
Assume host A has an IP address of 192.0.2.5/24 and a MAC address of fc:99:47:49:d4:a0,
and wants to send a packet to host B with an IP address of 192.0.2.7. Note that the network number
is the same for both hosts, so host A is able to send frames directly to host B.
The first time host A attempts to communicate with host B, the destination MAC address is not known.
Host A makes an ARP request to the local network. The request is a broadcast with a message like this:
To: everybody (ff:ff:ff:ff:ff:ff). I am looking for the computer who has IP address 192.0.2.7. Signed:
MAC address fc:99:47:49:d4:a0.
Host B responds with a response like this:
To: fc:99:47:49:d4:a0. I have IP address 192.0.2.7. Signed: MAC address 54:78:1a:86:00:a5.
Host A then sends Ethernet frames to host B.
You can initiate an ARP request manually using the arping command. For example, to send an ARP
request to IP address 192.0.2.132:
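For example (the interface name here is illustrative):
$ arping -I eth0 192.0.2.132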
To reduce the number of ARP requests, operating systems maintain an ARP cache that contains the
mappings of IP addresses to MAC address. On a Linux machine, you can view the contents of the ARP
cache by using the arp command:
$ arp -n
Address                  HWtype  HWaddress           Flags Mask            Iface
192.0.2.3                ether   52:54:00:12:35:03   C                     eth0
192.0.2.2                ether   52:54:00:12:35:02   C                     eth0
DHCP
Hosts connected to a network use the Dynamic Host Configuration Protocol (DHCP) to dynamically
obtain IP addresses. A DHCP server hands out the IP addresses to network hosts, which are the DHCP
clients.
DHCP clients locate the DHCP server by sending a UDP packet from port 68 to address 255.255.
255.255 on port 67. Address 255.255.255.255 is the local network broadcast address: all hosts
on the local network see the UDP packets sent to this address. However, such packets are not forwarded
to other networks. Consequently, the DHCP server must be on the same local network as the client, or
the server will not receive the broadcast. The DHCP server responds by sending a UDP packet from port
67 to port 68 on the client. The exchange looks like this:
1. The client sends a discover ("I'm a client at MAC address 08:00:27:b9:88:74, I need an IP
address")
2. The server sends an offer ("OK 08:00:27:b9:88:74, I'm offering IP address 192.0.2.112")
3. The client sends a request ("Server 192.0.2.131, I would like to have IP 192.0.2.112")
4. The server sends an acknowledgement ("OK 08:00:27:b9:88:74, IP 192.0.2.112 is
yours")
OpenStack uses a third-party program called dnsmasq to implement the DHCP server. Dnsmasq writes
to the syslog, where you can observe the DHCP request and replies:
When troubleshooting an instance that is not reachable over the network, it can be helpful to examine
this log to verify that all four steps of the DHCP protocol were carried out for the instance in question.
IP
The Internet Protocol (IP) specifies how to route packets between hosts that are connected to different
local networks. IP relies on special network hosts called routers or gateways. A router is a host that is
connected to at least two local networks and can forward IP packets from one local network to another.
A router has multiple IP addresses: one for each of the networks it is connected to.
In the OSI model of networking protocols IP occupies the third layer, known as the network layer. When
discussing IP, you will often hear terms such as layer 3, L3, and network layer.
A host sending a packet to an IP address consults its routing table to determine which machine on the
local network(s) the packet should be sent to. The routing table maintains a list of the subnets associated
with each local network that the host is directly connected to, as well as a list of routers that are on these
local networks.
On a Linux machine, any of the following commands displays the routing table:
$ ip route show
$ route -n
$ netstat -rn
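For example, ip route show output consistent with the description below might look like this (the
source addresses are illustrative):
$ ip route show
default via 192.0.2.2 dev eth0
192.0.2.0/24 dev eth0 proto kernel scope link src 192.0.2.15
198.51.100.0/25 dev eth1 proto kernel scope link src 198.51.100.100
198.51.100.192/26 dev virbr0 proto kernel scope link src 198.51.100.193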
Line 1 of the output specifies the location of the default route, which is the effective routing rule if
none of the other rules match. The router associated with the default route (192.0.2.2 in the example
above) is sometimes referred to as the default gateway. A DHCP server typically transmits the IP address
of the default gateway to the DHCP client along with the client's IP address and a netmask.
Line 2 of the output specifies that IPs in the 192.0.2.0/24 subnet are on the local network associated
with the network interface eth0.
Line 3 of the output specifies that IPs in the 198.51.100.0/25 subnet are on the local network
associated with the network interface eth1.
Line 4 of the output specifies that IPs in the 198.51.100.192/26 subnet are on the local network
associated with the network interface virbr0.
The output of the route -n and netstat -rn commands is formatted in a slightly different way.
This example shows how the same routes would be formatted using these commands:
$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
The ip route get command outputs the route for a destination IP address. In the example below, the
destination IP address 192.0.2.14 is on the local network of eth0 and would be sent directly:
The destination IP address 203.0.113.34 is not on any of the connected local networks and would
be forwarded to the default gateway at 192.0.2.2:
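For example (the source address shown is illustrative):
$ ip route get 192.0.2.14
192.0.2.14 dev eth0  src 192.0.2.15
$ ip route get 203.0.113.34
203.0.113.34 via 192.0.2.2 dev eth0  src 192.0.2.15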
It is common for a packet to hop across multiple routers to reach its final destination. On a Linux
machine, the traceroute and more recent mtr programs print out the IP address of each router that
an IP packet traverses along its path to its destination.
TCP/UDP/ICMP
For networked software applications to communicate over an IP network, they must use a protocol
layered atop IP. These protocols occupy the fourth layer of the OSI model known as the transport layer
or layer 4. See the Protocol Numbers web page maintained by the Internet Assigned Numbers Authority
(IANA) for a list of protocols that layer atop IP and their associated numbers.
The Transmission Control Protocol (TCP) is the most commonly used layer 4 protocol in networked ap-
plications. TCP is a connection-oriented protocol: it uses a client-server model where a client connects
to a server, where server refers to the application that receives connections. The typical interaction in a
TCP-based application proceeds as follows:
1. Client connects to server.
2. Client and server exchange data.
3. Client or server disconnects.
Because a network host may have multiple TCP-based applications running, TCP uses an addressing
scheme called ports to uniquely identify TCP-based applications. A TCP port is associated with a
number in the range 1-65535, and only one application on a host can be associated with a TCP port at a
time, a restriction that is enforced by the operating system.
A TCP server is said to listen on a port. For example, an SSH server typically listens on port 22. For a
client to connect to a server using TCP, the client must know both the IP address of the server's host and
the server's TCP port.
The operating system of the TCP client application automatically assigns a port number to the client.
The client owns this port number until the TCP connection is terminated, after which the operating
system reclaims the port number. These types of ports are referred to as ephemeral ports.
IANA maintains a registry of port numbers for many TCP-based services, as well as services that use
other layer 4 protocols that employ ports. Registering a TCP port number is not required, but registering
a port number is helpful to avoid collisions with other services. See firewalls and default ports in Open-
Stack Installation Guide for the default TCP ports used by various services involved in an OpenStack
deployment.
The most common application programming interface (API) for writing TCP-based applications is called
Berkeley sockets, also known as BSD sockets or, simply, sockets. The sockets API exposes a
stream-oriented interface for writing TCP applications. From the perspective of a programmer, sending
data over a TCP connection is similar to writing a stream of bytes to a file. It is the responsibility of the
operating system's TCP/IP implementation to break up the stream of data into IP packets. The operating
system is also responsible for automatically retransmitting dropped packets, and for handling flow control
to ensure that transmitted data does not overrun the sender's data buffers, the receiver's data buffers, and
network capacity. Finally, the operating system is responsible for re-assembling the packets in the correct
order into a stream of data on the receiver's side. Because TCP detects and retransmits lost packets, it is said
to be a reliable protocol.
The User Datagram Protocol (UDP) is another layer 4 protocol that is the basis of several well-known
networking protocols. UDP is a connectionless protocol: two applications that communicate over UDP
do not need to establish a connection before exchanging data. UDP is also an unreliable protocol. The
operating system does not attempt to retransmit or even detect lost UDP packets. The operating system
also does not provide any guarantee that the receiving application sees the UDP packets in the same
order that they were sent in.
UDP, like TCP, uses the notion of ports to distinguish between different applications running on the same
system. Note, however, that operating systems treat UDP ports separately from TCP ports. For example,
it is possible for one application to be associated with TCP port 16543 and a separate application to be
associated with UDP port 16543.
Like TCP, the sockets API is the most common API for writing UDP-based applications. The sockets
API provides a message-oriented interface for writing UDP applications: a programmer sends data over
UDP by transmitting a fixed-sized message. If an application requires retransmissions of lost packets
or a well-defined ordering of received packets, the programmer is responsible for implementing this
functionality in the application code.
DHCP, the Domain Name System (DNS), the Network Time Protocol (NTP), and Virtual extensible
local area network (VXLAN) are examples of UDP-based protocols used in OpenStack deployments.
UDP has support for one-to-many communication: sending a single packet to multiple hosts. An appli-
cation can broadcast a UDP packet to all of the network hosts on a local network by setting the receiver
IP address as the special IP broadcast address 255.255.255.255. An application can also send a
UDP packet to a set of receivers using IP multicast. The intended receiver applications join a multicast
group by binding a UDP socket to a special IP address that is one of the valid multicast group addresses.
The receiving hosts do not have to be on the same local network as the sender, but the intervening routers
must be configured to support IP multicast routing. VXLAN is an example of a UDP-based protocol
that uses IP multicast.
The Internet Control Message Protocol (ICMP) is a protocol used for sending control messages over
an IP network. For example, a router that receives an IP packet may send an ICMP packet back to the
source if there is no route in the router's routing table that corresponds to the destination address (ICMP
code 1, destination host unreachable) or if the IP packet is too large for the router to handle (ICMP code
4, fragmentation required and don't fragment flag is set).
The ping and mtr Linux command-line tools are two examples of network utilities that use ICMP.
Switches
Switches are Multi-Input Multi-Output (MIMO) devices that enable packets to travel from one node to
another. Switches connect hosts that belong to the same layer-2 network. Switches enable forwarding of
the packet received on one port (input) to another port (output) so that they reach the desired destination
node. Switches operate at layer-2 in the networking model. They forward the traffic based on the
destination Ethernet address in the packet header.
Routers
Routers are special devices that enable packets to travel from one layer-3 network to another. Routers
enable communication between two nodes on different layer-3 networks that are not directly connected
to each other. Routers operate at layer-3 in the networking model. They route the traffic based on the
destination IP address in the packet header.
Firewalls
Firewalls are used to regulate traffic to and from a host or a network. A firewall can be either a specialized
device connecting two networks or a software-based filtering mechanism implemented on an operating
system. Firewalls are used to restrict traffic to a host based on the rules defined on the host. They can
filter packets based on several criteria such as source IP address, destination IP address, port numbers,
connection state, and so on. Firewalls are primarily used to protect hosts from unauthorized access and
malicious attacks. Linux-based operating systems implement firewalls through iptables.
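For example, a minimal iptables rule set that restricts SSH access to a single subnet might look like this
(the addresses are illustrative):
# iptables -A INPUT -p tcp --dport 22 -s 203.0.113.0/24 -j ACCEPT
# iptables -A INPUT -p tcp --dport 22 -j DROP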
Load balancers
Load balancers can be software-based or hardware-based devices that allow traffic to be distributed
evenly across several servers. By distributing the traffic across multiple servers, a load balancer avoids
overloading any single server, thereby preventing a single point of failure. This further improves the
performance, network throughput, and response time of the servers. Load balancers are typically used
in a 3-tier architecture. In this model, a load balancer receives a request from the front-end web server,
which then forwards the request to one of the available back-end database servers for processing. The
response from the database server is passed back to the web server for further processing.
Tunneling is a mechanism that makes transfer of payloads feasible over an incompatible delivery
network. It also allows a network user to reach networks that would otherwise be unreachable or unsafe
to cross. Data encryption may be employed to transport the payload, ensuring that the encapsulated user
network data remains private even though it passes over a public or conflicting delivery network.
Generic routing encapsulation (GRE) is a protocol that runs over IP and is employed when delivery and
payload protocols are compatible but payload addresses are incompatible. For instance, a payload might
think it is running on a datalink layer but it is actually running over a transport layer using datagram
protocol over IP. GRE creates a private point-to-point connection and works by encapsulating a pay-
load. GRE is a foundation protocol for other tunnel protocols but the GRE tunnels provide only weak
authentication.
The purpose of VXLAN is to provide scalable network isolation. VXLAN is a Layer 2 overlay scheme
on a Layer 3 network. It allows an overlay layer-2 network to spread across multiple underlay layer-3
network domains. Each overlay is termed a VXLAN segment. Only VMs within the same VXLAN
segment can communicate.
Geneve is designed to recognize and accommodate changing capabilities and needs of different devices
in network virtualization. It provides a framework for tunneling rather than being prescriptive about the
entire system. Geneve defines the content of the metadata flexibly that is added during encapsulation
and tries to adapt to various virtualization scenarios. It uses UDP as its transport protocol and is dynamic
in size using extensible option headers. Geneve supports unicast, multicast, and broadcast.
A namespace is a way of scoping a particular set of identifiers. Using a namespace, you can use the
same identifier multiple times in different namespaces. You can also restrict an identifier set visible to
particular processes.
For example, Linux provides namespaces for networking and processes, among other things. If a process
is running within a process namespace, it can only see and communicate with other processes in the same
namespace. So, if a shell in a particular process namespace ran ps waux, it would only show the other
processes in the same namespace.
In a network namespace, the scoped identifiers are network devices; so a given network device, such as
eth0, exists in a particular namespace. Linux starts up with a default network namespace, so if your
operating system does not do anything special, that is where all the network devices will be located. But
it is also possible to create further non-default namespaces, and create new devices in those namespaces,
or to move an existing device from one namespace to another.
Each network namespace also has its own routing table, and in fact this is the main reason for namespaces
to exist. A routing table is keyed by destination IP address, so network namespaces are what you need if
you want the same destination IP address to mean different things at different times - which is something
that OpenStack Networking requires for its feature of providing overlapping IP addresses in different
virtual networks.
Each network namespace also has its own set of iptables (for both IPv4 and IPv6). So, you can apply
different security to flows with the same IP addressing in different namespaces, as well as different
routing.
Any given Linux process runs in a particular network namespace. By default this is inherited from its
parent process, but a process with the right capabilities can switch itself into a different namespace; in
practice this is mostly done using the ip netns exec NETNS COMMAND... invocation, which
starts COMMAND running in the namespace named NETNS. Suppose such a process sends out a message
to IP address A.B.C.D; the effect of the namespace is that A.B.C.D will be looked up in that namespace's
routing table, and that will determine the network device that the message is transmitted through.
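For example, you can create a namespace and run networking commands inside it like this (the namespace
name is illustrative):
# ip netns add demo-ns
# ip netns exec demo-ns ip route show
# ip netns delete demo-ns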
Virtual routing and forwarding is an IP technology that allows multiple instances of a routing table to
coexist on the same router at the same time. It is another name for the network namespace functionality
described above.
Network Address Translation (NAT) is a process for modifying the source or destination addresses in
the headers of an IP packet while the packet is in transit. In general, the sender and receiver applications
are not aware that the IP packets are being manipulated.
NAT is often implemented by routers, and so we will refer to the host performing NAT as a NAT router.
However, in OpenStack deployments it is typically Linux servers that implement the NAT functionality,
not hardware routers. These servers use the iptables software package to implement the NAT function-
ality.
There are multiple variations of NAT, and here we describe three kinds commonly found in OpenStack
deployments.
SNAT
In Source Network Address Translation (SNAT), the NAT router modifies the IP address of the sender
in IP packets. SNAT is commonly used to enable hosts with private addresses to communicate with
servers on the public Internet.
RFC 1918 reserves the following three subnets as private addresses:
• 10.0.0.0/8
• 172.16.0.0/12
• 192.168.0.0/16
These IP addresses are not publicly routable, meaning that a host on the public Internet cannot send
an IP packet to any of these addresses. Private IP addresses are widely used in both residential and
corporate environments.
Often, an application running on a host with a private IP address will need to connect to a server on the
public Internet. An example is a user who wants to access a public website such as www.openstack.org.
If the IP packets reach the web server at www.openstack.org with a private IP address as the source, then
the web server cannot send packets back to the sender.
SNAT solves this problem by modifying the source IP address to an IP address that is routable on the
public Internet. There are different variations of SNAT; in the form that OpenStack deployments use, a
NAT router on the path between the sender and receiver replaces the packet's source IP address with the
router's public IP address. The router also modifies the source TCP or UDP port to another value, and
maintains a record of the sender's true IP address and port, as well as the modified IP address and port.
When the router receives a packet with the matching IP address and port, it translates these back to the
private IP address and port, and forwards the packet along.
Because the NAT router modifies ports as well as IP addresses, this form of SNAT is sometimes referred
to as Port Address Translation (PAT). It is also sometimes referred to as NAT overload.
OpenStack uses SNAT to enable applications running inside of instances to connect out to the public
Internet.
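As an illustration only, this kind of SNAT can be expressed with a single iptables rule; the interface and addresses below are placeholders and not the exact rules that neutron generates:

# iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j SNAT --to-source 203.0.113.10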
DNAT
In Destination Network Address Translation (DNAT), the NAT router modifies the IP address of the
destination in IP packet headers.
OpenStack uses DNAT to route packets from instances to the OpenStack metadata service. Applications
running inside of instances access the OpenStack metadata service by making HTTP GET requests to a
web server with IP address 169.254.169.254. In an OpenStack deployment, there is no host with this IP
address. Instead, OpenStack uses DNAT to change the destination IP of these packets so they reach the
network interface that a metadata service is listening on.
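As an illustration only, a DNAT rule of this general shape rewrites metadata requests; the backend address and port are placeholders and not the exact rule neutron installs:

# iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp --dport 80 -j DNAT --to-destination 192.0.2.9:9697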
One-to-one NAT
In one-to-one NAT, the NAT router maintains a one-to-one mapping between private IP addresses and
public IP addresses. OpenStack uses one-to-one NAT to implement floating IP addresses.
OpenStack Networking allows you to create and manage network objects, such as networks, subnets, and
ports, which other OpenStack services can use. Plug-ins can be implemented to accommodate different
networking equipment and software, providing flexibility to OpenStack architecture and deployment.
The Networking service, code-named neutron, provides an API that lets you define network connectivity
and addressing in the cloud. The Networking service enables operators to leverage different networking
technologies to power their cloud networking. The Networking service also provides an API to configure
and manage a variety of network services ranging from L3 forwarding and Network Address Translation
(NAT) to perimeter firewalls, and virtual private networks.
It includes the following components:
API server The OpenStack Networking API includes support for Layer 2 networking and IP Address
Management (IPAM), as well as an extension for a Layer 3 router construct that enables routing
between Layer 2 networks and gateways to external networks. OpenStack Networking includes a
growing list of plug-ins that enable interoperability with various commercial and open source net-
work technologies, including routers, switches, virtual switches and software-defined networking
(SDN) controllers.
OpenStack Networking plug-in and agents Plugs and unplugs ports, creates networks or subnets, and
provides IP addressing. The chosen plug-in and agents differ depending on the vendor and tech-
nologies used in the particular cloud. It is important to mention that only one plug-in can be used
at a time.
Messaging queue Accepts and routes RPC requests between agents to complete API operations. The
message queue is used in the ML2 plug-in for RPC between the neutron server and the neutron
agents that run on each hypervisor, for example in the ML2 mechanism drivers for Open vSwitch
and Linux bridge.
Concepts
To configure rich network topologies, you can create and configure networks and subnets and instruct
other OpenStack services like Compute to attach virtual devices to ports on these networks. OpenStack
Compute is a prominent consumer of OpenStack Networking to provide connectivity for its instances. In
particular, OpenStack Networking supports each project having multiple private networks and enables
projects to choose their own IP addressing scheme, even if those IP addresses overlap with those that
other projects use. There are two types of network, project and provider networks. It is possible to share
any of these types of networks among projects as part of the network creation process.
Provider networks
Provider networks offer layer-2 connectivity to instances with optional support for DHCP and metadata
services. These networks connect, or map, to existing layer-2 networks in the data center, typically using
VLAN (802.1q) tagging to identify and separate them.
Provider networks generally offer simplicity, performance, and reliability at the cost of flexibility. By
default only administrators can create or update provider networks because they require configuration
of physical network infrastructure. It is possible to change the user who is allowed to create or update
provider networks with the following parameters of policy.yaml:
• create_network:provider:physical_network
• update_network:provider:physical_network
Warning: The creation and modification of provider networks enables use of physical network
resources, such as VLANs. Enable these changes only for trusted projects.
Also, provider networks only handle layer-2 connectivity for instances, thus lacking support for features
such as routers and floating IP addresses.
In many cases, operators who are already familiar with virtual networking architectures that rely on
physical network infrastructure for layer-2, layer-3, or other services can seamlessly deploy the Open-
Stack Networking service. In particular, provider networks appeal to operators looking to migrate from
the Compute networking service (nova-network) to the OpenStack Networking service. Over time, op-
erators can build on this minimal architecture to enable more cloud networking features.
In general, the OpenStack Networking software components that handle layer-3 operations impact per-
formance and reliability the most. To improve performance and reliability, provider networks move
layer-3 operations to the physical network infrastructure.
In one particular use case, the OpenStack deployment resides in a mixed environment with conventional
virtualization and bare-metal hosts that use a sizable physical network infrastructure. Applications that
run inside the OpenStack deployment might require direct layer-2 access, typically using VLANs, to
applications outside of the deployment.
Routed provider networks offer layer-3 connectivity to instances. These networks map to existing layer-
3 networks in the data center. More specifically, the network maps to multiple layer-2 segments, each
of which is essentially a provider network. Each has a router gateway attached to it which routes traffic
between them and externally. The Networking service does not provide the routing.
Routed provider networks offer performance at scale that is difficult to achieve with a plain provider
network at the expense of guaranteed layer-2 connectivity.
A Neutron port can be associated with only one network segment, although there is an exception for OVN
distributed services such as OVN Metadata.
See Routed provider networks for more information.
Self-service networks
Self-service networks primarily enable general (non-privileged) projects to manage networks without
involving administrators. These networks are entirely virtual and require virtual routers to interact with
provider and external networks such as the Internet. Self-service networks also usually provide DHCP
and metadata services to instances.
In most cases, self-service networks use overlay protocols such as VXLAN or GRE because they can
support many more networks than layer-2 segmentation using VLAN tagging (802.1q). Furthermore,
VLANs typically require additional configuration of physical network infrastructure.
IPv4 self-service networks typically use private IP address ranges (RFC1918) and interact with provider
networks via source NAT on virtual routers. Floating IP addresses enable access to instances from
provider networks via destination NAT on virtual routers. IPv6 self-service networks always use public
IP address ranges and interact with provider networks via virtual routers with static routes.
The Networking service implements routers using a layer-3 agent that typically resides on at least one net-
work node. Contrary to provider networks that connect instances to the physical network infrastructure
at layer-2, self-service networks must traverse a layer-3 agent. Thus, oversubscription or failure of a
layer-3 agent or network node can impact a significant quantity of self-service networks and instances
using them. Consider implementing one or more high-availability features to increase redundancy and
performance of self-service networks.
Users create project networks for connectivity within projects. By default, they are fully isolated and are
not shared with other projects. OpenStack Networking supports the following types of network isolation
and overlay technologies.
Flat All instances reside on the same network, which can also be shared with the hosts. No VLAN
tagging or other network segregation takes place.
VLAN Networking allows users to create multiple provider or project networks using VLAN IDs
(802.1Q tagged) that correspond to VLANs present in the physical network. This allows in-
stances to communicate with each other across the environment. They can also communicate with
dedicated servers, firewalls, and other networking infrastructure on the same layer 2 VLAN.
GRE and VXLAN VXLAN and GRE are encapsulation protocols that create overlay networks to acti-
vate and control communication between compute instances. A Networking router is required to
allow traffic to flow outside of the GRE or VXLAN project network. A router is also required to
connect directly-connected project networks with external networks, including the Internet. The
router provides the ability to connect to instances directly from an external network using floating
IP addresses.
Subnets
A block of IP addresses and associated configuration state. This is also known as the native IPAM
(IP Address Management) provided by the networking service for both project and provider networks.
Subnets are used to allocate IP addresses when new ports are created on a network.
Subnet pools
End users normally can create subnets with any valid IP addresses without other restrictions. However,
in some cases, it is useful for the administrator or the project to pre-define a pool of addresses from which
to create subnets with automatic allocation.
Using subnet pools constrains what addresses can be used by requiring that every subnet be within the
defined pool. It also prevents address reuse or overlap by two subnets from the same pool.
See Subnet pools for more information.
Ports
A port is a connection point for attaching a single device, such as the NIC of a virtual server, to a
virtual network. The port also describes the associated network configuration, such as the MAC and IP
addresses to be used on that port.
Routers
Routers provide virtual layer-3 services such as routing and NAT between self-service and provider
networks or among self-service networks belonging to a project. The Networking service uses a layer-3
agent to manage routers via namespaces.
Security groups
Security groups provide a container for virtual firewall rules that control ingress (inbound to instances)
and egress (outbound from instances) network traffic at the port level. Security groups use a default deny
policy and only contain rules that allow specific traffic. Each port can reference one or more security
groups in an additive fashion. The firewall driver translates security group rules to a configuration for
the underlying packet filtering technology such as iptables.
Each project contains a default security group that allows all egress traffic and denies all ingress
traffic. You can change the rules in the default security group. If you launch an instance without
specifying a security group, the default security group automatically applies to it. Similarly, if you
create a port without specifying a security group, the default security group automatically applies to
it.
Note: If you use the metadata service, removing the default egress rules denies access to TCP port 80
on 169.254.169.254, thus preventing instances from retrieving metadata.
Security group rules are stateful. Thus, allowing ingress TCP port 22 for secure shell automatically
creates rules that allow return egress traffic and ICMP error messages involving those TCP connections.
By default, all security groups contain a series of basic (sanity) and anti-spoofing rules that perform the
following actions:
• Allow egress traffic only if it uses the source MAC and IP addresses of the port for the instance,
source MAC and IP combination in allowed-address-pairs, or valid MAC address (port
or allowed-address-pairs) and associated EUI64 link-local IPv6 address.
• Allow egress DHCP discovery and request messages that use the source MAC address of the port
for the instance and the unspecified IPv4 address (0.0.0.0).
• Allow ingress DHCP and DHCPv6 responses from the DHCP server on the subnet so instances
can acquire IP addresses.
• Deny egress DHCP and DHCPv6 responses to prevent instances from acting as DHCP(v6) servers.
• Allow ingress/egress ICMPv6 MLD, neighbor solicitation, and neighbor discovery messages so
instances can discover neighbors and join multicast groups.
• Deny egress ICMPv6 router advertisements to prevent instances from acting as IPv6 routers and
forwarding IPv6 traffic for other instances.
• Allow egress ICMPv6 MLD reports (v1 and v2) and neighbor solicitation messages that use the
source MAC address of a particular instance and the unspecified IPv6 address (::). Duplicate
address detection (DAD) relies on these messages.
• Allow egress non-IP traffic from the MAC address of the port for the instance and any additional
MAC addresses in allowed-address-pairs on the port for the instance.
Although ARP is non-IP traffic, security groups do not implicitly allow all ARP traffic. Separate ARP filtering
rules prevent instances from using ARP to intercept traffic for another instance. You cannot disable or
remove these rules.
You can disable security groups including basic and anti-spoofing rules by setting the port attribute
port_security_enabled to False.
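For example, assuming the standard openstack client syntax, port security can be toggled on an individual port (PORT_ID is a placeholder):

$ openstack port set --disable-port-security PORT_ID
$ openstack port set --enable-port-security PORT_ID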
Extensions
The OpenStack Networking service is extensible. Extensions serve two purposes: they allow the intro-
duction of new features in the API without requiring a version change and they allow the introduction of
vendor specific niche functionality. Applications can programmatically list available extensions by per-
forming a GET on the /extensions URI. Note that this is a versioned request; that is, an extension
available in one API version might not be available in another.
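For example, the openstack client exposes this request and lists the extensions enabled for the Networking service:

$ openstack extension list --network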
DHCP
The optional DHCP service manages IP addresses for instances on provider and self-service networks.
The Networking service implements the DHCP service using an agent that manages qdhcp namespaces
and the dnsmasq service.
Metadata
The optional metadata service provides an API for instances to obtain metadata such as SSH keys.
Server
Plug-ins
• Manages agents
Agents
• Linux Bridge
• OVS
• L3
• DHCP
Miscellaneous
• Metadata
Services
Routing services
VPNaaS
The Virtual Private Network-as-a-Service (VPNaaS) is a neutron extension that introduces the VPN
feature set.
LBaaS
The Load-Balancer-as-a-Service (LBaaS) API provisions and configures load balancers. The reference
implementation is based on the HAProxy software load balancer. See the Octavia project for more
information.
8.2 Configuration
A usual neutron setup consists of multiple services and agents running on one or multiple nodes (though
some setups may not need any agents). Each of these services provides some of the networking or API
services. Among those of special interest are:
1. The neutron-server that provides API endpoints and serves as a single point of access to the
database. It usually runs on the controller nodes.
2. Layer2 agent that can utilize Open vSwitch, Linux Bridge or other vendor-specific technology to
provide network segmentation and isolation for project networks. The L2 agent should run on
every node where it is deemed responsible for wiring and securing virtual interfaces (usually both
compute and network nodes).
3. Layer 3 agent that runs on a network node and provides east-west and north-south routing plus some
advanced services such as VPNaaS.
Configuration options
The neutron configuration options are segregated between neutron-server and agents. Both services and
agents may load the main neutron.conf since this file should contain the oslo.messaging configu-
ration for internal neutron RPCs and may contain host specific configuration, such as file paths. The
neutron.conf contains the database, keystone, nova credentials, and endpoints strictly for neutron-
server to use.
In addition, neutron-server may load a plugin-specific configuration file, yet the agents should not. As
the plugin configuration is primarily site wide options and the plugin provides the persistence layer for
neutron, agents should be instructed to act upon these values through RPC.
Each individual agent may have its own configuration file. This file should be loaded after the main
neutron.conf file, so the agent configuration takes precedence. The agent-specific configuration
may contain configurations which vary between hosts in a neutron deployment such as the local_ip
for an L2 agent. If any agent requires access to additional external services beyond the neutron RPC,
those endpoints should be defined in the agent-specific configuration file (for example, nova metadata
for metadata agent).
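For example, the Open vSwitch agent is usually started with both files on the command line (the paths shown are typical defaults and vary by distribution):

$ neutron-openvswitch-agent --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini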
Some neutron agents, like DHCP, Metadata or L3, often run external processes to provide some of their
functionalities. It may be keepalived, dnsmasq, haproxy or some other process. Neutron agents are
responsible for spawning and killing such processes when necessary. By default, to kill such processes,
agents use a simple kill command, but in some cases, for example when those additional services
are running inside containers, that may not be a good solution. To address this problem, operators
should use the AGENT config group option kill_scripts_path to configure the path where kill
scripts for such processes live. By default, it is set to /etc/neutron/kill_scripts/. If the
kill_scripts_path option is changed in the config to a different location, exec_dirs in /
etc/rootwrap.conf should be changed accordingly. If kill_scripts_path is set, every time
neutron has to kill a process, for example dnsmasq, it will look in this directory for a file named
<process_name>-kill; so for the dnsmasq process it will look for a dnsmasq-kill script.
If such a file exists there, it will be called instead of the kill command.
The kill script is invoked with two arguments, <sig> and <pid>, where <sig> is the signal, the same
as with the kill command (for example 9 or SIGKILL), and <pid> is the PID of the process to kill.
This external script should then handle killing of the given process, as neutron will not call the kill
command for it anymore.
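A minimal kill script might look like the following sketch, a hypothetical /etc/neutron/kill_scripts/dnsmasq-kill that simply forwards the signal; a containerized deployment would instead signal the process inside its container:

#!/bin/bash
# Called by neutron as: dnsmasq-kill <sig> <pid>
sig=$1
pid=$2
# Forward the signal to the process; adapt this for containerized services.
kill -"${sig}" "${pid}"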
Architecture
The Modular Layer 2 (ML2) neutron plug-in is a framework allowing OpenStack Networking to simul-
taneously use the variety of layer 2 networking technologies found in complex real-world data centers.
The ML2 framework distinguishes between the two kinds of drivers that can be configured:
• Type drivers
Define how an OpenStack network is technically realized. Example: VXLAN
Each available network type is managed by an ML2 type driver. Type drivers maintain any needed
type-specific network state. They validate the type specific information for provider networks and
are responsible for the allocation of a free segment in project networks.
• Mechanism drivers
Define the mechanism to access an OpenStack network of a certain type. Example: Open vSwitch
mechanism driver.
The mechanism driver is responsible for taking the information established by the type driver
and ensuring that it is properly applied given the specific networking mechanisms that have been
enabled.
Mechanism drivers can utilize L2 agents (via RPC) and/or interact directly with external devices
or controllers.
Multiple mechanism and type drivers can be used simultaneously to access different ports of the same
virtual network.
Note: L2 population is a special mechanism driver that optimizes BUM (Broadcast, unknown destina-
tion address, multicast) traffic in the overlay networks VXLAN, GRE and Geneve. It needs to be used
in conjunction with either the Linux bridge or the Open vSwitch mechanism driver and cannot be used
as standalone mechanism driver. For more information, see the Mechanism drivers section below.
Configuration
[ml2]
type_drivers = flat,vlan,vxlan,gre
For more details, see the Networking configuration options of Configuration Reference.
The following type drivers are available:
• Flat
• VLAN
• GRE
• VXLAN
Provider networks provide connectivity like project networks. But only administrative (privileged) users
can manage those networks because they interface with the physical network infrastructure. For more
information about provider networks, see OpenStack Networking.
• Flat
The administrator needs to configure a list of physical network names that can be used for provider
networks. For more details, see the related section in the Configuration Reference.
• VLAN
The administrator needs to configure a list of physical network names that can be used for provider
networks. For more details, see the related section in the Configuration Reference.
• GRE
No additional configuration required.
• VXLAN
The administrator can configure the VXLAN multicast group that should be used.
Note: VXLAN multicast group configuration is not applicable to the Open vSwitch agent. It is currently
also not used by the Linux bridge agent, which has its own agent-specific configuration option. For
more details, see Bug 1523614.
Project networks provide connectivity to instances for a particular project. Regular (non-privileged)
users can manage project networks within the allocation that an administrator or operator defines for
them. For more information about project and provider networks, see OpenStack Networking.
Project network configurations are made in the /etc/neutron/plugins/ml2/ml2_conf.ini
configuration file on the neutron server:
• VLAN
The administrator needs to configure the range of VLAN IDs that can be used for project network
allocation. For more details, see the related section in the Configuration Reference.
• GRE
The administrator needs to configure the range of tunnel IDs that can be used for project network
allocation. For more details, see the related section in the Configuration Reference.
• VXLAN
The administrator needs to configure the range of VXLAN IDs that can be used for project network
allocation. For more details, see the related section in the Configuration Reference.
Note: Flat networks for project allocation are not supported. They can only exist as provider networks.
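For example, the ranges available for project network allocation are typically defined with the standard ML2 section and option names (the ranges themselves are illustrative):

[ml2_type_vlan]
network_vlan_ranges = physnet1:1000:2999
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ml2_type_vxlan]
vni_ranges = 1:1000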
Mechanism drivers
[ml2]
mechanism_drivers = ovs,l2pop
In addition to the neutron integrated reference implementations, external mechanism drivers from various
vendors exist, for example:
• Open source: OpenDaylight, OpenContrail
• Proprietary (vendor) drivers
Configuration of those drivers is not part of this document.
The vnic_type_prohibit_list option is used to remove values from the mechanism driver's
supported_vnic_types list.
Extension Drivers
The ML2 plug-in also supports extension drivers that allow other pluggable drivers to extend the core
resources implemented in the ML2 plug-in (networks, ports, etc.). Examples of extension drivers
include support for QoS, port security, etc. For more details see the extension_drivers configu-
ration option in the Configuration Reference.
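For example, extension drivers are enabled in the ML2 configuration as a comma-separated list (the drivers shown are illustrative):

[ml2]
extension_drivers = port_security,qos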
Agents
L2 agent
An L2 agent serves layer 2 (Ethernet) network connectivity to OpenStack resources. It typically runs on
each Network Node and on each Compute Node.
• Open vSwitch agent
The Open vSwitch agent configures the Open vSwitch to realize L2 networks for OpenStack
resources.
Configuration for the Open vSwitch agent is typically done in the openvswitch_agent.ini
configuration file. Make sure that on agent start you pass this configuration file as an argument.
For a detailed list of configuration options, see the related section in the Configuration Reference.
• Linux bridge agent
The Linux bridge agent configures Linux bridges to realize L2 networks for OpenStack resources.
Configuration for the Linux bridge agent is typically done in the linuxbridge_agent.ini
configuration file. Make sure that on agent start you pass this configuration file as an argument.
For a detailed list of configuration options, see the related section in the Configuration Reference.
• SR-IOV NIC switch agent
The SR-IOV NIC switch agent configures PCI virtual functions to realize L2 networks for OpenStack
instances. Network attachments for other resources like routers, DHCP, and so on are not supported.
Configuration for the SR-IOV NIC switch agent is typically done in the sriov_agent.ini
configuration file. Make sure that on agent start you pass this configuration file as an argument.
For a detailed list of configuration options, see the related section in the Configuration Reference.
• MacVTap agent
The MacVTap agent uses kernel MacVTap devices for realizing L2 networks for OpenStack in-
stances. Network attachments for other resources like routers, DHCP, and so on are not supported.
Configuration for the MacVTap agent is typically done in the macvtap_agent.ini configu-
ration file. Make sure that on agent start you pass this configuration file as an argument.
For a detailed list of configuration options, see the related section in the Configuration Reference.
L3 agent
The L3 agent offers advanced layer 3 services, like virtual Routers and Floating IPs. It requires an L2
agent running in parallel.
Configuration for the L3 agent is typically done in the l3_agent.ini configuration file. Make sure
that on agent start you pass this configuration file as an argument.
For a detailed list of configuration options, see the related section in the Configuration Reference.
DHCP agent
The DHCP agent is responsible for DHCP (Dynamic Host Configuration Protocol) and RADVD (Router
Advertisement Daemon) services. It requires a running L2 agent on the same node.
Configuration for the DHCP agent is typically done in the dhcp_agent.ini configuration file. Make
sure that on agent start you pass this configuration file as an argument.
For a detailed list of configuration options, see the related section in the Configuration Reference.
Metadata agent
The Metadata agent allows instances to access cloud-init metadata and user data via the network. It
requires a running L2 agent on the same node.
Configuration for the Metadata agent is typically done in the metadata_agent.ini configuration
file. Make sure that on agent start you pass this configuration file as an argument.
For a detailed list of configuration options, see the related section in the Configuration Reference.
L3 metering agent
The L3 metering agent enables layer3 traffic metering. It requires a running L3 agent on the same node.
Configuration for the L3 metering agent is typically done in the metering_agent.ini configuration
file. Make sure that on agent start you pass this configuration file as an argument.
For a detailed list of configuration options, see the related section in the Configuration Reference.
Security
Reference implementations
Overview
In this section, the combination of a mechanism driver and an L2 agent is called a reference implementation.
The following table lists these implementations:
The following table shows which reference implementations support which non-L2 neutron agents:
Note: L2 population is not listed here, as it is not a standalone mechanism. Whether other agents are
supported depends on the mechanism driver that is used in conjunction with it for binding a port.
For more information about L2 population, see the OpenStack Manuals.
Buying guide
• SRIOV mechanism driver and SR-IOV NIC switch agent
Due to the direct connection, some features are not available when using SR-IOV. For example, DVR,
security groups, and migration.
For more information, see SR-IOV.
• MacVTap mechanism driver and MacVTap agent
Can only be used for instance network attachments (device_owner = compute) and not for attach-
ment of other resources like routers, DHCP, and so on.
It is positioned as alternative to Open vSwitch or Linux bridge support on the compute node for
internal deployments.
MacVTap offers a direct connection with very little overhead between instances and down to the
adapter. You can use MacVTap agent on the compute node when you require a network connection
that is performance critical. It does not require specific hardware (like with SRIOV).
Due to the direct connection, some features are not available when using it on the compute node.
For example, DVR, security groups and arp-spoofing protection.
Address scopes build from subnet pools. While subnet pools provide a mechanism for controlling the
allocation of addresses to subnets, address scopes show where addresses can be routed between net-
works, preventing the use of overlapping addresses in any two subnets. Because all addresses allocated
in the address scope do not overlap, neutron routers do not NAT between your project's network and
your external network. As long as the addresses within an address scope match, the Networking service
performs simple routing between networks.
Anyone with access to the Networking service can create their own address scopes. However, network
administrators can create shared address scopes, allowing other projects to create networks within that
address scope.
Access to addresses in a scope is managed through subnet pools. Subnet pools can either be created in
an address scope, or updated to belong to an address scope.
With subnet pools, all addresses in use within the address scope are unique from the point of view of the
address scope owner. Therefore, add more than one subnet pool to an address scope if the pools have
different owners, allowing for delegation of parts of the address scope. Delegation prevents address
overlap across the whole scope. Otherwise, you receive an error if two pools have the same address
ranges.
Each router interface is associated with an address scope by looking at subnets connected to the network.
When a router connects to an external network with matching address scopes, network traffic routes
between them without network address translation (NAT). The router marks all traffic connections originating
from each interface with its corresponding address scope. If traffic leaves an interface in the wrong
scope, the router blocks the traffic.
Backwards compatibility
Networks created before the Mitaka release do not contain explicitly named address scopes, unless
the network contains subnets from a subnet pool that belongs to a created or updated address scope.
The Networking service preserves backwards compatibility with pre-Mitaka networks through special
address scope properties so that these networks can perform advanced routing:
1. Unlimited address overlap is allowed.
2. Neutron routers, by default, will NAT traffic from internal networks to external networks.
3. Pre-Mitaka address scopes are not visible through the API. You cannot list address scopes or show
details. Scopes exist implicitly as a catch-all for addresses that are not explicitly scoped.
This section shows how to set up shared address scopes to allow simple routing for project networks
with the same subnet pools.
Note: Irrelevant fields have been trimmed from the output of these commands for brevity.
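1. As an administrator, create the shared address scopes. A representative command, assuming the
standard openstack client syntax (the output below corresponds to the IPv6 scope):

$ openstack address scope create --share --ip-version 6 address-scope-ip6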
+------------+--------------------------------------+
| Field | Value |
+------------+--------------------------------------+
| headers | |
| id | 28424dfc-9abd-481b-afa3-1da97a8fead7 |
| ip_version | 6 |
| name | address-scope-ip6 |
| project_id | 098429d072d34d3596c88b7dbf7e91b6 |
| shared | True |
+------------+--------------------------------------+
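The IPv4 address scope shown below can be created the same way:

$ openstack address scope create --share --ip-version 4 address-scope-ip4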
+------------+--------------------------------------+
| Field | Value |
+------------+--------------------------------------+
| headers | |
| id | 3193bd62-11b5-44dc-acf8-53180f21e9f2 |
| ip_version | 4 |
| name | address-scope-ip4 |
| project_id | 098429d072d34d3596c88b7dbf7e91b6 |
| shared | True |
+------------+--------------------------------------+
2. Create subnet pools specifying the name (or UUID) of the address scope that the subnet pool
belongs to. If you have existing subnet pools, use the openstack subnet pool set com-
mand to put them in a new address scope:
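Representative commands, assuming the standard openstack client syntax (the prefix and names are illustrative):

$ openstack subnet pool create --address-scope address-scope-ip4 \
  --share --pool-prefix 203.0.113.0/24 subnet-pool-ip4
$ openstack subnet pool set --address-scope address-scope-ip4 existing-pool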
3. Make sure that subnets on an external network are created from the subnet pools created above:
This section shows how non-privileged users can use address scopes to route straight to an external
network without NAT.
1. Create a couple of networks to host subnets:
$ openstack network create network1
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2016-12-13T23:21:01Z |
| description | |
| headers | |
| id | 1bcf3fe9-a0cb-4d88-a067-a4d7f8e635f0 |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| mtu | 1450 |
| name | network1 |
| port_security_enabled | True |
| project_id | 098429d072d34d3596c88b7dbf7e91b6 |
| provider:network_type | vxlan |
| provider:physical_network | None |
| provider:segmentation_id | 94 |
| revision_number | 3 |
| router:external | Internal |
| shared | False |
| status | ACTIVE |
| subnets | |
| tags | [] |
| updated_at | 2016-12-13T23:21:01Z |
+---------------------------+--------------------------------------+
3. Create a subnet using a subnet pool associated with an address scope from an external network:
By creating subnets from scoped subnet pools, the network is associated with the address scope.
4. Connect a router to each of the project subnets that have been created, for example, using a router
called router1:
Checking connectivity
This example shows how to check the connectivity between networks with address scopes.
1. Launch two instances, instance1 on network1 and instance2 on network2. Associate
a floating IP address to both instances.
2. Adjust security groups to allow pings and SSH (both IPv4 and IPv6):
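Representative rules for the default security group, assuming the standard openstack client syntax:

$ openstack security group rule create --protocol icmp default
$ openstack security group rule create --protocol ipv6-icmp --ethertype IPv6 default
$ openstack security group rule create --protocol tcp --dst-port 22 default
$ openstack security group rule create --protocol tcp --dst-port 22 --ethertype IPv6 default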
Regardless of address scopes, the floating IPs can be pinged from the external network:
$ ping -c 1 203.0.113.3
1 packets transmitted, 1 received, 0% packet loss, time 0ms
$ ping -c 1 203.0.113.4
1 packets transmitted, 1 received, 0% packet loss, time 0ms
You can now ping instance2 directly because instance2 shares the same address scope as the
external network:
Note: BGP routing can be used to automatically set up a static route for your instances.
You cannot ping instance1 directly because the address scopes do not match:
If the address scopes match between networks then pings and other traffic route directly through. If the
scopes do not match between networks, the router either drops the traffic or applies NAT to cross scope
boundaries.
The auto-allocation feature introduced in Mitaka simplifies the procedure of setting up an external con-
nectivity for end-users, and is also known as Get Me A Network.
Previously, a user had to configure a range of networking resources to boot a server and get access to the
Internet. For example, the following steps are required:
• Create a network
• Create a subnet
• Create a router
• Uplink the router on an external network
• Downlink the router on the previously created subnet
These steps need to be performed on each logical segment that a VM needs to be connected to, and may
require networking knowledge the user might not have.
This feature is designed to automate the basic networking provisioning for projects. The steps to provi-
sion a basic network are run during instance boot, making the networking setup hands-free.
To make this possible, provide a default external network and default subnetpools (one for IPv4, or one
for IPv6, or one of each) so that the Networking service can choose what to do in lieu of input. Once
these are in place, users can boot their VMs without specifying any networking details. The Compute
service will then use this feature automatically to wire user VMs.
To use this feature, the neutron service must have the following extensions enabled:
• auto-allocated-topology
• subnet_allocation
• external-net
• router
Before the end-user can use the auto-allocation feature, the operator must create the resources that will be
used for the auto-allocated network topology creation. To perform this task, proceed with the following
steps:
Note: The flag --default (and the --no-default flag) is only effective with external networks
and has no effect on regular (or internal) networks.
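Representative commands for creating the default resources, assuming the standard openstack client syntax and illustrative names (the output below reflects the default subnet pool):

$ openstack network create --external --default --share public
$ openstack subnet pool create --default --share \
  --pool-prefix 192.0.2.0/24 --default-prefix-length 26 shared-default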
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| address_scope_id | None |
| created_at | 2017-01-12T15:10:34Z |
| default_prefixlen | 26 |
| default_quota | None |
| description | |
| headers | |
| id | b41b7b9c-de57-4c19-b1c5-731985bceb7f |
| ip_version | 4 |
| is_default | True |
| max_prefixlen | 32 |
| min_prefixlen | 8 |
| name | shared-default |
| prefixes | 192.0.2.0/24 |
| project_id | 86acdbd1d72745fd8e8320edd7543400 |
| revision_number | 1 |
| shared | True |
| tags | [] |
| updated_at | 2017-01-12T15:10:34Z |
+-------------------+--------------------------------------+
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| address_scope_id | None |
| created_at | 2017-01-12T15:14:35Z |
| default_prefixlen | 64 |
| default_quota | None |
| description | |
+-------------------+--------------------------------------+
Get Me A Network
In a deployment where the operator has set up the resources as described above, they can get their
auto-allocated network topology as follows:
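For example, assuming the standard openstack client syntax:

$ openstack network auto allocated topology create --or-show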
Note: When the --or-show option is used the command returns the topology information if it already
exists.
Operators (and users with admin role) can get the auto-allocated topology for a project by specifying the
project ID:
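A representative command, where PROJECT_ID is a placeholder for the target project:

$ openstack network auto allocated topology create --or-show --project PROJECT_ID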
The ID returned by this command is a network which can be used for booting a VM.
The auto-allocated topology for a user never changes. In practice, when a user boots a server omitting
the --nic option, and there is more than one network available, the Compute service will invoke the
API behind auto allocated topology create, fetch the network UUID, and pass it on during
the boot process.
To validate that the required resources are correctly set up for auto-allocation, without actually provi-
sioning anything, use the --check-resources option:
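For example, assuming the standard openstack client syntax:

$ openstack network auto allocated topology create --check-resources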
The validation option behaves identically for all users. However, it is considered primarily an admin or
service utility since it is the operator who must set up the requirements.
The auto-allocation feature creates one network topology in every project where it is used. The auto-
allocated network topology for a project contains the following resources:
Resource Name
network auto_allocated_network
subnet (IPv4) auto_allocated_subnet_v4
subnet (IPv6) auto_allocated_subnet_v6
router auto_allocated_router
Compatibility notes
Nova uses the auto allocated topology feature with API microversion 2.37 or later. This is
because, unlike the neutron feature which was implemented in the Mitaka release, the integration for
nova was completed during the Newton release cycle. Note that the CLI option --nic can be omitted
regardless of the microversion used, as long as there is no more than one network available to the project;
if there is more than one, nova fails with a 400 error because it does not know which network to use.
Furthermore, nova does not start using the feature, regardless of whether or not a user requests
microversion 2.37 or later, unless all of the nova-compute services are running Newton-level code.
An availability zone groups network nodes that run services like DHCP, L3, FW, and others. It is defined
as an agent's attribute on the network node. This allows users to associate an availability zone with their
resources so that the resources get high availability.
Use case
An availability zone is used to make network resources highly available. The operators group the nodes
that are attached to different power sources under separate availability zones and configure scheduling
for resources with high availability so that they are scheduled on different availability zones.
Required extensions
The core plug-in must support the availability_zone extension. The core plug-in also
must support the network_availability_zone extension to schedule a network according
to availability zones. The Ml2Plugin supports it. The router service plug-in must support the
router_availability_zone extension to schedule a router according to the availability zones.
The L3RouterPlugin supports it.
[AGENT]
availability_zone = zone-1
The availability_zone option is set in the [AGENT] section of the DHCP and L3 agent configuration
files. The zones used when a resource is created without availability zone hints come from the
default_availability_zones option in the [DEFAULT] section of neutron.conf:
[DEFAULT]
default_availability_zones = zone-1,zone-2
Look at the availability_zones attribute of each resource to confirm in which zone the resource
is hosted:
Note: The availability_zones attribute does not have a value until the resource
is scheduled. Once the Networking service schedules the resource to zones according to
availability_zone_hints, availability_zones shows in which zone the resource is
hosted practically. The availability_zones may not match availability_zone_hints.
For example, even if you specify a zone with availability_zone_hints, all agents of the zone
may be dead before the resource is scheduled. In general, they should match, unless there are failures or
there is no capacity left in the zone requested.
Network scheduler
network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.AZAwareWeightScheduler
dhcp_load_type = networks
The Networking service schedules a network to one of the agents within the selected zone as with
WeightScheduler. In this case, scheduler refers to dhcp_load_type as well.
Router scheduler
router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.AZLeastRoutersScheduler
The Networking service schedules a router to one of the agents within the selected zone as with the
LeastRoutersScheduler.
Although the Networking service provides high availability for routers, and high availability and fault
tolerance for the DHCP services of networks, availability zones provide an extra layer of protection by
segmenting a Networking service deployment into isolated failure domains. By deploying HA nodes
across different availability zones, it is guaranteed that network services remain available in the face of
zone-wide failures that affect the deployment.
This section explains how to achieve high availability with availability zones for L3 and DHCP. You
should also set the availability zone configuration options described above.
L3 high availability
Set the following configuration options in file /etc/neutron/neutron.conf so that you get L3
high availability.
l3_ha = True
max_l3_agents_per_router = 3
HA routers are created on availability zones you selected when creating the router.
Set the following configuration options in file /etc/neutron/neutron.conf so that you get
DHCP high availability.
dhcp_agents_per_network = 2
DHCP services are created on availability zones you selected when creating the network.
BGP dynamic routing enables advertisement of self-service (private) network prefixes to physical net-
work devices that support BGP such as routers, thus removing the conventional dependency on static
routes. The feature relies on address scopes and requires knowledge of their operation for proper de-
ployment.
BGP dynamic routing consists of a service plug-in and an agent. The service plug-in implements the
Networking service extension and the agent manages BGP peering sessions. A cloud administrator
creates and configures a BGP speaker using the CLI or API and manually schedules it to one or more
hosts running the agent. Agents can reside on hosts with or without other Networking service agents.
Prefix advertisement depends on the binding of external networks to a BGP speaker and the address
scope of external and internal IP address ranges or subnets.
Note: Although self-service networks generally use private IP address ranges (RFC1918) for IPv4
subnets, BGP dynamic routing can advertise any IPv4 address ranges.
Example configuration
Note: The example configuration assumes sufficient knowledge about the Networking service, routing,
and BGP. For basic deployment of the Networking service, consult one of the Deployment examples.
For more information on BGP, see RFC 4271.
Controller node
• In the neutron.conf file, enable the conventional layer-3 and BGP dynamic routing service
plug-ins:
[DEFAULT]
service_plugins = neutron_dynamic_routing.services.bgp.bgp_plugin.BgpPlugin,neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
Agent nodes
Note: The agent currently only supports the os-ken BGP driver.
Replace ROUTER_ID with a suitable unique 32-bit number, typically an IPv4 address on
the host running the agent. For example, 192.0.2.2.
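A representative bgp_dragent.ini sketch; the driver path is an assumption based on the os-ken driver shipped with neutron-dynamic-routing and should be verified against the installed release:

[BGP]
bgp_speaker_driver = neutron_dynamic_routing.services.bgp.agent.driver.os_ken.driver.OsKenBgpDriver
bgp_router_id = ROUTER_ID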
+----+------------+------+-------------------+-------+-------+--------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+----+------------+------+-------------------+-------+-------+--------+
1. Create an address scope. The provider (external) and self-service networks must belong to the
same address scope for the agent to advertise those self-service network prefixes.
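A representative command, assuming the standard openstack client syntax (the output below reflects these values):

$ openstack address scope create --share --ip-version 4 bgp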
+------------+--------------------------------------+
| Field | Value |
+------------+--------------------------------------+
| headers | |
| id | f71c958f-dbe8-49a2-8fb9-19c5f52a37f1 |
| ip_version | 4 |
| name | bgp |
| project_id | 86acdbd1d72745fd8e8320edd7543400 |
| shared | True |
+------------+--------------------------------------+
2. Create subnet pools. The provider and self-service networks use different pools.
• Create the provider network pool.
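A representative command, assuming the standard openstack client syntax (the output below reflects these values):

$ openstack subnet pool create --pool-prefix 203.0.113.0/24 \
  --address-scope bgp provider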
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| address_scope_id | f71c958f-dbe8-49a2-8fb9-19c5f52a37f1 |
| created_at | 2017-01-12T14:58:57Z |
| default_prefixlen | 8 |
| default_quota | None |
| description | |
| headers | |
| id | 63532225-b9a0-445a-9935-20a15f9f68d1 |
| ip_version | 4 |
| is_default | False |
| max_prefixlen | 32 |
| min_prefixlen | 8 |
| name | provider |
| prefixes | 203.0.113.0/24 |
| project_id | 86acdbd1d72745fd8e8320edd7543400 |
| revision_number | 1 |
| shared            | False                                |
+-------------------+--------------------------------------+
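• Create the self-service network pool. A representative command, assuming the standard openstack client syntax (the output below reflects these values):

$ openstack subnet pool create --pool-prefix 192.0.2.0/25 \
  --pool-prefix 192.0.2.128/25 --address-scope bgp \
  --share selfservice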
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| address_scope_id | f71c958f-dbe8-49a2-8fb9-19c5f52a37f1 |
| created_at | 2017-01-12T15:02:31Z |
| default_prefixlen | 8 |
| default_quota | None |
| description | |
| headers | |
| id | 8d8270b1-b194-4b7e-914c-9c741dcbd49b |
| ip_version | 4 |
| is_default | False |
| max_prefixlen | 32 |
| min_prefixlen | 8 |
| name | selfservice |
| prefixes | 192.0.2.0/25, 192.0.2.128/25 |
| project_id | 86acdbd1d72745fd8e8320edd7543400 |
| revision_number | 1 |
| shared | True |
| tags | [] |
| updated_at | 2017-01-12T15:02:31Z |
+-------------------+--------------------------------------+
2. Create a subnet on the provider network using an IP address range from the provider subnet pool.
$ openstack subnet create --subnet-pool provider \
--prefix-length 24 --gateway 203.0.113.1 --network provider \
--allocation-pool start=203.0.113.11,end=203.0.113.254 provider
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| allocation_pools | 203.0.113.11-203.0.113.254 |
| cidr | 203.0.113.0/24 |
| created_at | 2016-03-17T23:17:16 |
| description | |
| dns_nameservers | |
| enable_dhcp | True |
| gateway_ip | 203.0.113.1 |
| host_routes | |
| id | 8ed65d41-2b2a-4f3a-9f92-45adb266e01a |
| ip_version | 4 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| name | provider |
| network_id | 68ec148c-181f-4656-8334-8f4eb148689d |
| project_id | b3ac05ef10bf441fbf4aa17f16ae1e6d |
| segment_id | None |
| service_types | |
| subnetpool_id | 3771c0e7-7096-46d3-a3bd-699c58e70259 |
| tags | |
| updated_at | 2016-03-17T23:17:16 |
+-------------------+--------------------------------------+
Note: The IP address allocation pool starting at .11 improves clarity of the diagrams. You can
safely omit it.
4. Create a subnet on the first two self-service networks using an IP address range from the self-
service subnet pool.
$ openstack subnet create --network selfservice1 \
  --subnet-pool selfservice --prefix-length 25 selfservice1
+-------------------+-----------------------+
| Field             | Value                 |
+-------------------+-----------------------+
| allocation_pools  | 192.0.2.2-192.0.2.127 |
| cidr              | 192.0.2.0/25          |
| created_at        | 2016-03-17T23:20:20   |
| description       |                       |
| dns_nameservers   |                       |
| enable_dhcp       | True                  |
| gateway_ip        | 198.51.100.1          |
+-------------------+-----------------------+
5. Create a subnet on the last self-service network using an IP address range outside of the address
scope.
2. For each router, add one self-service subnet as an interface on the router.
$ openstack router add subnet router1 selfservice1
The BGP speaker advertises the next-hop IP address for eligible self-service networks and floating IP
addresses for instances using those networks.
1. Create the BGP speaker.
$ openstack bgp speaker create --ip-version 4 \
--local-as LOCAL_AS bgpspeaker
Created a new bgp_speaker:
+-----------------------------------+-------+
| Field                             | Value |
+-----------------------------------+-------+
| advertise_floating_ip_host_routes | True  |
| advertise_tenant_networks         | True  |
+-----------------------------------+-------+
Replace LOCAL_AS with an appropriate local autonomous system number. The example config-
uration uses AS 1234.
2. A BGP speaker requires association with a provider network to determine eligible prefixes. The
association builds a list of all virtual routers with gateways on provider and self-service networks
in the same address scope so the BGP speaker can advertise self-service network prefixes with
the corresponding router as the next-hop IP address. Associate the BGP speaker with the provider
network.
$ openstack bgp speaker add network bgpspeaker provider
Added network provider to BGP speaker bgpspeaker.
+-----------------------------------+--------------------------------------+
| Field                             | Value                                |
+-----------------------------------+--------------------------------------+
| advertise_floating_ip_host_routes | True                                 |
| advertise_tenant_networks         | True                                 |
| id                                | 5f227f14-4f46-4eca-9524-fc5a1eabc358 |
| ip_version                        | 4                                    |
| local_as                          | 1234                                 |
| name                              | bgpspeaker                           |
| networks                          | 68ec148c-181f-4656-8334-8f4eb148689d |
| peers                             |                                      |
| tenant_id                         | b3ac05ef10bf441fbf4aa17f16ae1e6d     |
+-----------------------------------+--------------------------------------+
4. Verify the prefixes and next-hop IP addresses that the BGP speaker advertises.
$ openstack bgp speaker list advertised routes bgpspeaker
+-----------------+--------------+
| Destination | Nexthop |
+-----------------+--------------+
| 192.0.2.0/25 | 203.0.113.11 |
| 192.0.2.128/25 | 203.0.113.12 |
+-----------------+--------------+
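Creating and attaching a peer typically looks like the following; PEER_IP is a placeholder for the peer router address, and the syntax assumes the standard openstack client:

$ openstack bgp peer create --peer-ip PEER_IP \
  --remote-as REMOTE_AS bgppeer
$ openstack bgp speaker add peer bgpspeaker bgppeer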
Replace REMOTE_AS with an appropriate remote autonomous system number. The example
configuration uses AS 4321 which triggers EBGP peering.
Note: The host containing the BGP agent must have layer-3 connectivity to the provider router.
+-----------------------------------+--------------------------------------+
| Field                             | Value                                |
+-----------------------------------+--------------------------------------+
| advertise_floating_ip_host_routes | True                                 |
| advertise_tenant_networks         | True                                 |
| id                                | 5f227f14-4f46-4eca-9524-fc5a1eabc358 |
+-----------------------------------+--------------------------------------+
Note: After creating a peering session, you cannot change the local or remote autonomous system
numbers.
1. Unlike most agents, BGP speakers require manual scheduling to an agent. BGP speakers only
form peering sessions and begin prefix advertisement after scheduling to an agent. Schedule the
BGP speaker to agent 37729181-2224-48d8-89ef-16eca8e2f77e.
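Assuming the standard openstack client syntax:

$ openstack bgp dragent add speaker \
  37729181-2224-48d8-89ef-16eca8e2f77e bgpspeaker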
Prefix advertisement
BGP dynamic routing advertises prefixes for self-service networks and host routes for floating IP ad-
dresses.
Advertisement of a self-service network requires satisfying the following conditions:
• The external and self-service network reside in the same address scope.
• The router contains an interface on the self-service subnet and a gateway on the external network.
• The BGP speaker associates with the external network that provides a gateway on the router.
• The BGP speaker has the advertise_tenant_networks attribute set to True.
For both floating IP and IPv4 fixed IP addresses, the BGP speaker advertises the floating IP agent gate-
way on the corresponding compute node as the next-hop IP address. When using IPv6 fixed IP addresses,
the BGP speaker advertises the DVR SNAT node as the next-hop IP address.
For example, consider the following components:
1. A provider network using IP address range 203.0.113.0/24, and supporting floating IP addresses
203.0.113.101, 203.0.113.102, and 203.0.113.103.
2. A self-service network using IP address range 198.51.100.0/24.
3. Instances with fixed IPs 198.51.100.11, 198.51.100.12, and 198.51.100.13
4. The SNAT gateway resides on 203.0.113.11.
5. The floating IP agent gateways (one per compute node) reside on 203.0.113.12, 203.0.113.13, and
203.0.113.14.
6. Three instances, one per compute node, each with a floating IP address.
7. advertise_tenant_networks is set to False on the BGP speaker
When floating IPs are disassociated and advertise_tenant_networks is set to True, the fol-
lowing routes will be advertised:
You can also identify floating IP agent gateways in your environment to assist with verifying operation
of the BGP speaker.
IPv6
BGP dynamic routing supports peering via IPv6 and advertising IPv6 prefixes.
• To enable peering via IPv6, create a BGP peer and use an IPv6 address for peer_ip.
• To enable advertising IPv6 prefixes, create an address scope with ip_version=6 and a BGP
speaker with ip_version=6.
Note: DVR lacks support for routing directly to a fixed IPv6 address via the floating IP agent gateway
port and thus prevents the BGP speaker from advertising /128 host routes.
High availability
BGP dynamic routing supports scheduling a BGP speaker to multiple agents which effectively multiplies
prefix advertisements to the same peer. If an agent fails, the peer continues to receive advertisements
from one or more operational agents.
1. Show available dynamic routing agents.
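A representative command, assuming the standard openstack client syntax:

$ openstack network agent list --agent-type bgp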
This section describes how to use the agent management (alias agent) and scheduler (alias
agent_scheduler) extensions for DHCP agents scalability and HA.
Note: Use the openstack extension list command to check if these extensions are enabled.
Check that agent and agent_scheduler are included in the output.
Demo setup
Host                                      Description
OpenStack controller host - controlnode   Runs the Networking, Identity, and Compute services that are
                                          required to deploy VMs. The node must have at least one network
                                          interface that is connected to the Management Network. Note that
                                          nova-network should not be running because it is replaced by
                                          Neutron.
HostA                                     Runs nova-compute, the Neutron L2 agent and DHCP agent.
HostB                                     Same as HostA.
Configuration
[DEFAULT]
core_plugin = linuxbridge
rabbit_host = controlnode
allow_overlapping_ips = True
host = controlnode
agent_down_time = 5
dhcp_agents_per_network = 1
[vlans]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1000:2999
[database]
connection = mysql+pymysql://root:[email protected]:3306/neutron_linux_bridge
retry_interval = 2
[linux_bridge]
physical_interface_mappings = physnet1:eth0
[DEFAULT]
rabbit_host = controlnode
rabbit_password = openstack
# host = HostB on hostb
host = HostA
[vlans]
tenant_network_type = vlan
network_vlan_ranges = physnet1:1000:2999
[database]
connection = mysql://root:[email protected]:3306/neutron_linux_bridge
retry_interval = 2
[linux_bridge]
physical_interface_mappings = physnet1:eth0
HostA and HostB: nova.conf (Compute service):
[DEFAULT]
use_neutron=True
firewall_driver=nova.virt.firewall.NoopFirewallDriver
[neutron]
admin_username=neutron
admin_password=servicepassword
admin_auth_url=http://controlnode:35357/v2.0/
auth_strategy=keystone
admin_tenant_name=servicetenant
url=http://203.0.113.10:9696/
HostA and HostB: dhcp_agent.ini:
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
Admin role is required to use the agent management and scheduler extensions. Ensure you run the
following commands under a project with an admin role.
To experiment, you need VMs and a neutron network:
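A minimal sketch of such a setup, using illustrative names for the network, subnet, image, and flavor:
$ openstack network create net1
$ openstack subnet create --network net1 --subnet-range 192.0.2.0/24 subnet1
$ openstack server create --image cirros --flavor m1.tiny --network net1 myserver1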
Every agent that supports these extensions will register itself with the neutron server when it starts
up.
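The registered agents can then be listed with the usual agent listing command, for example:
$ openstack network agent list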
The output shows information for four agents. The alive field shows True if the agent reported
its state within the period defined by the agent_down_time option in the neutron.conf
file. Otherwise, the alive field is False.
2. List DHCP agents that host a specified network:
$ openstack network agent list --network net1
+--------------------------------------+-------+----------------+-------+
| ID                                   | Host  | Admin State Up | Alive |
+--------------------------------------+-------+----------------+-------+
| 22467163-01ea-4231-ba45-3bd316f425e6 | HostA | UP             | True  |
+--------------------------------------+-------+----------------+-------+
+--------------------------------------+------+--------------------------------------+
| ID                                   | Name | Subnets                              |
+--------------------------------------+------+--------------------------------------+
| ad88e059-e7fa-4cf7-8857-6731a2a3a554 | net1 | 8086db87-3a7a-4cad-88c9-7bab9bc69258 |
+--------------------------------------+------+--------------------------------------+
In this output, last_heartbeat_at is the time on the neutron server. You do not need to syn-
chronize all agents to this time for this extension to run correctly. The configurations field
describes the static configuration for the agent or run time data. This agent is a DHCP agent and it
hosts one network, one subnet, and three ports.
Different types of agents show different details. The following output shows information for a
Linux bridge agent:
The output shows bridge-mapping and the number of virtual network devices on this L2
agent.
A single network can be assigned to more than one DHCP agent, and one DHCP agent can host more
than one network. You can add a network to a DHCP agent and remove one from it.
1. Default scheduling.
When you create a network with one port, the network will be scheduled to an active DHCP agent.
If many active DHCP agents are running, one is selected randomly. You can design more sophisticated
scheduling algorithms in the same way as nova-scheduler later on.
$ openstack network create net2
$ openstack subnet create --network net2 --subnet-range 198.51.100.0/24 subnet2
$ openstack network agent list --network net2
+--------------------------------------+-------+----------------+-------+
| ID                                   | Host  | Admin State Up | Alive |
+--------------------------------------+-------+----------------+-------+
| 2444c54d-0d28-460c-ab0f-cd1e6b5d3c7b | HostA | UP             | True  |
+--------------------------------------+-------+----------------+-------+
It is allocated to the DHCP agent on HostA. If you want to validate the behavior through the dnsmasq
command, you must create a subnet for the network because the DHCP agent starts the dnsmasq
service only if there is a subnet with DHCP enabled.
2. Assign a network to a given DHCP agent.
To add another DHCP agent to host the network, run this command:
$ openstack network agent add network --dhcp \
55569f4e-6f31-41a6-be9d-526efce1f7fe net2
$ openstack network agent list --network net2
Both DHCP agents now host the net2 network. If you then remove the network from the DHCP
agent for HostA (openstack network agent remove network --dhcp), you can see that
only the DHCP agent for HostB is hosting the net2 network.
HA of DHCP agents
Boot a VM on net2. Let both DHCP agents host net2. Fail the agents in turn to see if the VM can
still get the desired IP.
1. Boot a VM on net2:
+--------------------------------------+-----------+--------+-------------------+---------+----------+
| ID                                   | Name      | Status | Networks          | Image   | Flavor   |
+--------------------------------------+-----------+--------+-------------------+---------+----------+
| c394fcd0-0baa-43ae-a793-201815c3e8ce | myserver1 | ACTIVE | net1=192.0.2.3    | cirros  | m1.tiny  |
| 2d604e05-9a6c-4ddb-9082-8a1fbdcc797d | myserver2 | ACTIVE | net1=192.0.2.4    | ubuntu  | m1.small |
| c7c0481c-3db8-4d7a-a948-60ce8211d585 | myserver3 | ACTIVE | net1=192.0.2.5    | centos  | m1.small |
| f62f4731-5591-46b1-9d74-f0c901de567f | myserver4 | ACTIVE | net2=198.51.100.2 | cirros1 | m1.tiny  |
+--------------------------------------+-----------+--------+-------------------+---------+----------+
An administrator might want to disable an agent if a system hardware or software upgrade is planned.
Some agents that support scheduling also support disabling and enabling agents, such as L3 and DHCP
agents. After the agent is disabled, the scheduler does not schedule new resources to the agent.
After the agent is disabled, you can safely remove it. Resources assigned to the agent remain assigned
even after it is disabled, so ensure you remove the resources from the agent before you delete the agent.
Disable the DHCP agent on HostA before you stop it:
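A sketch of this step, using the illustrative HostA agent ID from the earlier listing:
$ openstack network agent set --disable 22467163-01ea-4231-ba45-3bd316f425e6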
After you stop the DHCP agent on HostA, you can delete it with the following command:
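A sketch, again with the illustrative agent ID:
$ openstack network agent delete 22467163-01ea-4231-ba45-3bd316f425e6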
After deletion, if you restart the DHCP agent, it appears on the agent list again.
You can control the default number of DHCP agents assigned to a network by setting the following
configuration option in the file /etc/neutron/neutron.conf.
dhcp_agents_per_network = 3
This page serves as a guide for how to use the DNS integration functionality of the Networking service
and its interaction with the Compute service.
The integration of the Networking service with an external DNSaaS (DNS-as-a-Service) is described in
DNS integration with an external service.
Users can control the behavior of the Networking service with regard to DNS using two attributes associ-
ated with ports, networks, and floating IPs. The following table shows the attributes available for each
one of these resources:
Note: The DNS Integration extension enables all the attribute and resource combinations shown
in the previous table, except for dns_domain for ports, which requires the dns_domain for
ports extension.
Note: Since the DNS Integration extension is a subset of dns_domain for ports, if
dns_domain functionality for ports is required, only the latter extension has to be configured.
Note: When the dns_domain for ports extension is configured, DNS Integration is also
included when the Neutron server responds to a request to list the active API extensions. This preserves
backwards API compatibility.
The Networking service enables users to control the name assigned to ports by the internal DNS. To
enable this functionality, do the following:
1. Edit the /etc/neutron/neutron.conf file and assign a value other than
openstacklocal (its default value) to the dns_domain parameter in the [DEFAULT]
section. As an example:
dns_domain = example.org.
2. Add dns (for the DNS Integration extension) or dns_domain_ports (for the
dns_domain for ports extension) to extension_drivers in the [ml2] section of
/etc/neutron/plugins/ml2/ml2_conf.ini. The following is an example:
[ml2]
extension_drivers = port_security,dns_domain_ports
After re-starting the neutron-server, users will be able to assign a dns_name attribute to their
ports.
Note: Enabling this functionality is a prerequisite for enabling the Networking service integration
with an external DNS service, which is described in detail in DNS integration with an external
service.
The following illustrates the creation of a port with my-port in its dns_name attribute.
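A sketch of such a request, assuming an illustrative network ID and port name; the --dns-name option carries the desired name:
$ openstack port create --network 37aaff3a-6047-45ac-bf4f-a825e56fd2b3 --dns-name my-port test-port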
Note: The name assigned to the port by the Networking service internal DNS is now visible in the
response in the dns_assignment attribute.
When this functionality is enabled, it is leveraged by the Compute service when creating instances.
When allocating ports for an instance during boot, the Compute service populates the dns_name at-
tributes of these ports with the hostname attribute of the instance, which is a DNS sanitized version of
its display name. As a consequence, at the end of the boot process, the allocated ports will be known in
the dnsmasq associated with their networks by their instance hostname.
The following is an example of an instance creation, showing how its hostname populates the
dns_name attribute of the allocated port:
$ openstack server create --image cirros --flavor 42 \
--nic net-id=37aaff3a-6047-45ac-bf4f-a825e56fd2b3 my_vm
+------------------------------+--------------------------------------+
| Field                        | Value                                |
+------------------------------+--------------------------------------+
| OS-DCF:diskConfig            | MANUAL                               |
| OS-EXT-AZ:availability_zone  |                                      |
| ...                          | ...                                  |
+------------------------------+--------------------------------------+
The port allocated for the instance carries the instance hostname:
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------+--------+
| ID                                   | Name | MAC Address       | Fixed IP Addresses                                                             | Status |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------+--------+
| b3ecc464-1263-44a7-8c38-2d8a52751773 |      | fa:16:3e:a8:ce:b8 | ip_address='203.0.113.8', subnet_id='277eca5d-9869-474b-960e-6da5951d09f7'    | ACTIVE |
|                                      |      |                   | ip_address='2001:db8:10::8', subnet_id='eab47748-3f0a-4775-a09f-b0c24bb64bc4' |        |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------+--------+
The details of this port show the DNS attributes populated from the instance hostname:
+-----------------------+---------------------------------------------------------------------------+
| Field                 | Value                                                                     |
+-----------------------+---------------------------------------------------------------------------+
| admin_state_up        | UP                                                                        |
| allowed_address_pairs |                                                                           |
| binding_host_id       | vultr.guest                                                               |
| binding_profile       |                                                                           |
| binding_vif_details   | datapath_type='system', ovs_hybrid_plug='True', port_filter='True'       |
| binding_vif_type      | ovs                                                                       |
| binding_vnic_type     | normal                                                                    |
| created_at            | 2016-02-05T21:35:04Z                                                      |
| data_plane_status     | None                                                                      |
| description           |                                                                           |
| device_id             | 66c13cb4-3002-4ab3-8400-7efc2659c363                                      |
| device_owner          | compute:None                                                              |
| dns_assignment        | fqdn='my-vm.example.org.', hostname='my-vm', ip_address='203.0.113.8'    |
|                       | fqdn='my-vm.example.org.', hostname='my-vm', ip_address='2001:db8:10::8' |
| dns_domain            | example.org.                                                              |
| dns_name              | my-vm                                                                     |
| extra_dhcp_opts       |                                                                           |
| ...                   | ...                                                                       |
+-----------------------+---------------------------------------------------------------------------+
DNS integration with an external service
This page serves as a guide for how to use the DNS integration functionality of the Networking service
with an external DNSaaS (DNS-as-a-Service).
As a prerequisite, the internal DNS functionality offered by the Networking service must be enabled;
see DNS integration.
The first step to configure the integration with an external DNS service is to enable the functionality
described in The Networking service internal DNS resolution. Once this is done, the user has to take the
following steps and restart neutron-server.
1. Edit the [DEFAULT] section of /etc/neutron/neutron.conf and specify the external
DNS service driver to be used in parameter external_dns_driver. The valid options are
defined in namespace neutron.services.external_dns_drivers. The following ex-
ample shows how to set up the driver for the OpenStack DNS service:
external_dns_driver = designate
2. If the OpenStack DNS service is the target external DNS, the [designate] section of /etc/
neutron/neutron.conf must define the following parameters:
• url: the OpenStack DNS service public endpoint URL. Note that this must always be the
versioned endpoint currently.
• auth_type: the authorization plugin to use. Usually this should be password, see https:
//docs.openstack.org/keystoneauth/latest/authentication-plugins.html for other options.
• auth_url: the Identity service authorization endpoint URL. This endpoint will be used by
the Networking service to authenticate as a user to create and update reverse lookup (PTR)
zones.
• username: the username to be used by the Networking service to create and update reverse
lookup (PTR) zones.
• password: the password of the user to be used by the Networking service to create and
update reverse lookup (PTR) zones.
• project_name: the name of the project to be used by the Networking service to create
and update reverse lookup (PTR) zones.
• project_domain_name: the name of the domain for the project to be used by the Net-
working service to create and update reverse lookup (PTR) zones.
• user_domain_name: the name of the domain for the user to be used by the Networking
service to create and update reverse lookup (PTR) zones.
• region_name: the name of the region to be used by the Networking service to create and
update reverse lookup (PTR) zones.
• allow_reverse_dns_lookup: a boolean value specifying whether to enable the creation
of reverse lookup (PTR) records.
• ipv4_ptr_zone_prefix_size: the size in bits of the prefix for the IPv4 reverse
lookup (PTR) zones.
• ipv6_ptr_zone_prefix_size: the size in bits of the prefix for the IPv6 reverse
lookup (PTR) zones.
• ptr_zone_email: the email address to use when creating new reverse lookup (PTR)
zones. The default is admin@<dns_domain> where <dns_domain> is the domain for
the first record being created in that zone.
• insecure: whether to disable SSL certificate validation. By default, certificates are vali-
dated.
• cafile: path to a valid Certificate Authority (CA) certificate. Optional; the system CAs
are used by default.
The following is an example:
[designate]
url = http://192.0.2.240:9001/v2
auth_type = password
auth_url = http://192.0.2.240:5000
username = neutron
password = PASSWORD
project_name = service
project_domain_name = Default
user_domain_name = Default
allow_reverse_dns_lookup = True
ipv4_ptr_zone_prefix_size = 24
ipv6_ptr_zone_prefix_size = 116
ptr_zone_email = [email protected]
cafile = /etc/ssl/certs/my_ca_cert
Once the neutron-server has been configured and restarted, users will have functionality that cov-
ers three use cases, described in the following sections. In each of the use cases described below:
• The examples assume the OpenStack DNS service as the external DNS.
• A, AAAA and PTR records will be created in the DNS service.
• Before executing any of the use cases, the user must create, in the DNS service under their project,
a DNS zone where the A and AAAA records will be created. For the description of the use cases
below, it is assumed the zone example.org. was created previously (a sketch of this step
follows this list).
• The PTR records will be created in zones owned by the project specified for project_name
above.
Use case 1: Floating IPs are published with associated port DNS attributes
In this use case, the address of a floating IP is published in the external DNS service in conjunction with
the dns_name of its associated port and the dns_domain of the port's network. The steps to execute
in this use case are the following:
1. Assign a valid domain name to the network's dns_domain attribute. This name must end with a
period (.).
2. Boot an instance or alternatively, create a port specifying a valid value to its dns_name attribute.
If the port is going to be used for an instance boot, the value assigned to dns_name must be
equal to the hostname that the Compute service will assign to the instance. Otherwise, the boot
will fail.
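A sketch of these two steps; the placeholders and the image and flavor values are illustrative:
$ openstack network set --dns-domain example.org. NETWORK_ID_OR_NAME
$ openstack server create --image cirros --flavor 42 \
  --nic net-id=NETWORK_ID_OR_NAME my-vm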
+------------------------------+--------------------------------------+
| Field                        | Value                                |
+------------------------------+--------------------------------------+
| OS-DCF:diskConfig            | MANUAL                               |
| OS-EXT-AZ:availability_zone  |                                      |
| OS-EXT-STS:power_state       | 0                                    |
| OS-EXT-STS:task_state        | scheduling                           |
| OS-EXT-STS:vm_state          | building                             |
| ...                          | ...                                  |
+------------------------------+--------------------------------------+
In this example, notice that the data is published in the DNS service when the floating IP is associated
to the port.
Following are the PTR records created for this example. Note that for IPv4, the value of
ipv4_ptr_zone_prefix_size is 24. Also, since the zone for the PTR records is created in the
service project, you need to use admin credentials in order to be able to view it.
$ openstack recordset list --all-projects 100.51.198.in-addr.arpa.
+--------------------------------------+----------------------------------+----------------------------+------+----------------------------------------------------------------------+--------+--------+
| id                                   | project_id                       | name                       | type | data                                                                 | status | action |
+--------------------------------------+----------------------------------+----------------------------+------+----------------------------------------------------------------------+--------+--------+
| 2dd0b894-25fa-4563-9d32-9f13bd67f329 | 07224d17d76d42499a38f00ba4339710 | 100.51.198.in-addr.arpa.   | NS   | ns1.devstack.org.                                                    | ACTIVE | NONE   |
| 47b920f1-5eff-4dfa-9616-7cb5b7cb7ca6 | 07224d17d76d42499a38f00ba4339710 | 100.51.198.in-addr.arpa.   | SOA  | ns1.devstack.org. admin.example.org. 1455564862 3600 600 86400 3600  | ACTIVE | NONE   |
| fb1edf42-abba-410c-8397-831f45fd0cd7 | 07224d17d76d42499a38f00ba4339710 | 4.100.51.198.in-addr.arpa. | PTR  | my-vm.example.org.                                                   | ACTIVE | NONE   |
+--------------------------------------+----------------------------------+----------------------------+------+----------------------------------------------------------------------+--------+--------+
Use case 2: Floating IPs are published in the external DNS service
In this use case, the user assigns dns_name and dns_domain attributes to a floating IP when it is
created. The floating IP data becomes visible in the external DNS service as soon as it is created. The
floating IP can be associated with a port on creation or later on. The following example shows a user
booting an instance and then creating a floating IP associated to the port allocated for the instance:
$ openstack network show 38c5e950-b450-4c30-83d4-ee181c28aad3
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | UP                                   |
| availability_zone_hints   |                                      |
| availability_zones        | nova                                 |
| created_at                | 2016-05-04T19:27:34Z                 |
| description               |                                      |
| dns_domain                | example.org.                         |
| id                        | 38c5e950-b450-4c30-83d4-ee181c28aad3 |
| ipv4_address_scope        | None                                 |
| ...                       | ...                                  |
+---------------------------+--------------------------------------+
+-----------------------+---------------------------------------------------------------------+
| Field                 | Value                                                               |
+-----------------------+---------------------------------------------------------------------+
| admin_state_up        | UP                                                                  |
| allowed_address_pairs |                                                                     |
| binding_host_id       | vultr.guest                                                         |
| binding_profile       |                                                                     |
| binding_vif_details   | datapath_type='system', ovs_hybrid_plug='True', port_filter='True' |
| binding_vif_type      | ovs                                                                 |
| binding_vnic_type     | normal                                                              |
| created_at            | 2016-02-15T19:42:44Z                                                |
| data_plane_status     | None                                                                |
| description           |                                                                     |
| device_id             | 71fb4ac8-eed8-4644-8113-0641962bb125                                |
| device_owner          | compute:None                                                        |
| ...                   | ...                                                                 |
+-----------------------+---------------------------------------------------------------------+
Following are the PTR records created for the floating IP in this example:
+--------------------------------------+----------------------------------+----------------------------+------+----------------------------------------------------------------------+--------+--------+
| id                                   | project_id                       | name                       | type | data                                                                 | status | action |
+--------------------------------------+----------------------------------+----------------------------+------+----------------------------------------------------------------------+--------+--------+
| 2dd0b894-25fa-4563-9d32-9f13bd67f329 | 07224d17d76d42499a38f00ba4339710 | 100.51.198.in-addr.arpa.   | NS   | ns1.devstack.org.                                                    | ACTIVE | NONE   |
| 47b920f1-5eff-4dfa-9616-7cb5b7cb7ca6 | 07224d17d76d42499a38f00ba4339710 | 100.51.198.in-addr.arpa.   | SOA  | ns1.devstack.org. admin.example.org. 1455564862 3600 600 86400 3600  | ACTIVE | NONE   |
| 589a0171-e77a-4ab6-ba6e-23114f2b9366 | 07224d17d76d42499a38f00ba4339710 | 5.100.51.198.in-addr.arpa. | PTR  | my-floatingip.example.org.                                           | ACTIVE | NONE   |
+--------------------------------------+----------------------------------+----------------------------+------+----------------------------------------------------------------------+--------+--------+
Use case 3: Ports are published directly in the external DNS service
In this case, the user is creating ports or booting instances on a network that is accessible externally.
There are multiple possible scenarios here depending on which of the DNS extensions is enabled in the
Neutron configuration. These extensions are described below, in descending order of priority.
+--------------------------------------+--------------+------+---------------------------------------------------------------------+--------+--------+
| id                                   | name         | type | records                                                             | status | action |
+--------------------------------------+--------------+------+---------------------------------------------------------------------+--------+--------+
| 404e9846-1482-433b-8bbc-67677e587d28 | example.org. | NS   | ns1.devstack.org.                                                   | ACTIVE | NONE   |
| de73576a-f9c7-4892-934c-259b77ff02c0 | example.org. | SOA  | ns1.devstack.org. mail.example.org. 1575897792 3559 600 86400 3600  | ACTIVE | NONE   |
+--------------------------------------+--------------+------+---------------------------------------------------------------------+--------+--------+
+--------------------------+----------------------------------------------------------------------------------------+
| Field                    | Value                                                                                  |
+--------------------------+----------------------------------------------------------------------------------------+
| admin_state_up           | UP                                                                                     |
| allowed_address_pairs    |                                                                                        |
| binding_host_id          | None                                                                                   |
| binding_profile          | None                                                                                   |
| binding_vif_details      | None                                                                                   |
| binding_vif_type         | None                                                                                   |
| binding_vnic_type        | normal                                                                                 |
| created_at               | 2019-12-09T13:23:52Z                                                                   |
| data_plane_status        | None                                                                                   |
| description              |                                                                                        |
| device_id                |                                                                                        |
| device_owner             |                                                                                        |
| dns_assignment           | fqdn='port1.openstackgate.local.', hostname='port1', ip_address='192.0.2.100'         |
|                          | fqdn='port1.openstackgate.local.', hostname='port1', ip_address='2001:db8:42:42::2a2' |
| dns_domain               | example.org.                                                                           |
| dns_name                 | port1                                                                                  |
| extra_dhcp_opts          |                                                                                        |
| ...                      | ...                                                                                    |
| id                       | f8bc991b-1f84-435a-a5f8-814bd8b9ae9f                                                   |
| location                 | cloud='devstack', project.domain_id='default', project.domain_name=,                  |
|                          | project.id='86de4dab952d48f79e625b106f7a75f7', ...                                    |
| network_id               | fa8118ed-b7c2-41b8-89bc-97e46f0491ac                                                   |
| port_security_enabled    | True                                                                                   |
| project_id               | 86de4dab952d48f79e625b106f7a75f7                                                       |
| propagate_uplink_status  | None                                                                                   |
| qos_policy_id            | None                                                                                   |
| resource_request         | None                                                                                   |
| revision_number          | 1                                                                                      |
| security_group_ids       | f0b02df0-a0b9-4ce8-b067-8b61a8679e9d                                                   |
| status                   | DOWN                                                                                   |
| tags                     |                                                                                        |
| trunk_details            | None                                                                                   |
| updated_at               | 2019-12-09T13:23:53Z                                                                   |
+--------------------------+----------------------------------------------------------------------------------------+
If the dns_domain for ports extension has been configured, the user can create a port specify-
ing a non-blank value in its dns_domain attribute. If the port is created in an externally accessible
network, DNS records will be published for this port:
$ openstack port create --network 37aaff3a-6047-45ac-bf4f-a825e56fd2b3 \
  --dns-name my-vm --dns-domain port-domain.org. test
+-------------------------+--------------------------------------+
| Field                   | Value                                |
+-------------------------+--------------------------------------+
| admin_state_up          | UP                                   |
| allowed_address_pairs   |                                      |
| binding_host_id         | None                                 |
| binding_profile         | None                                 |
| binding_vif_details     | None                                 |
| binding_vif_type        | None                                 |
| binding_vnic_type       | normal                               |
| created_at              | 2019-06-12T15:43:29Z                 |
| data_plane_status       | None                                 |
| ...                     | ...                                  |
+-------------------------+--------------------------------------+
In this case, the port's dns_name (my-vm) will be published in the port-domain.org. zone, as
shown here:
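The published records can then be inspected by listing the zone's record sets, for example:
$ openstack recordset list port-domain.org.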
Note: If both the port and its network have a valid non-blank string assigned to their dns_domain
attributes, the port's dns_domain takes precedence over the network's.
Note: The name assigned to the port's dns_domain attribute must end with a period (.).
Note: In the above example, the port-domain.org. zone must be created before Neutron can
publish any port data to it.
Note: See Configuration of the externally accessible network for use cases 3b and 3c for detailed
instructions on how to create the externally accessible network.
If the user wants to publish a port in the external DNS service in a zone specified by the dns_domain
attribute of the network, these are the steps to be taken:
1. Assign a valid domain name to the network's dns_domain attribute. This name must end with a
period (.).
2. Boot an instance specifying the externally accessible network. Alternatively, create a port on the
externally accessible network specifying a valid value to its dns_name attribute. If the port is
going to be used for an instance boot, the value assigned to dns_name must be equal to the
hostname that the Compute service will assign to the instance. Otherwise, the boot will fail.
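A sketch of these steps, with illustrative network and port names:
$ openstack network set --dns-domain example.org. external-net
$ openstack port create --network external-net --dns-name my-vm my-vm-port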
Once these steps are executed, the port's DNS data will be published in the external DNS service. This
is an example:
$ openstack network list
In this example the port is created manually by the user and then used to boot an instance. Notice that:
• The port's data was visible in the DNS service as soon as it was created.
• See Performance considerations for an explanation of the potential performance impact associated
with this use case.
Following are the PTR records created for this example. Note that for IPv4, the
value of ipv4_ptr_zone_prefix_size is 24. In the case of IPv6, the value of
ipv6_ptr_zone_prefix_size is 116.
$ openstack recordset list --all-projects 113.0.203.in-addr.arpa.
+--------------------------------------+----------------------------------+---------------------------+------+----------------------------------------------------------------------+--------+--------+
| id                                   | project_id                       | name                      | type | records                                                              | status | action |
+--------------------------------------+----------------------------------+---------------------------+------+----------------------------------------------------------------------+--------+--------+
| 32f1c05b-7c5d-4230-9088-961a0a462d28 | 07224d17d76d42499a38f00ba4339710 | 113.0.203.in-addr.arpa.   | SOA  | ns1.devstack.org. admin.example.org. 1455563035 3600 600 86400 3600  | ACTIVE | NONE   |
| 3d402c43-b215-4a75-a730-51cbb8999cb8 | 07224d17d76d42499a38f00ba4339710 | 113.0.203.in-addr.arpa.   | NS   | ns1.devstack.org.                                                    | ACTIVE | NONE   |
| 8e4e618c-24b0-43db-ab06-91b741a91c10 | 07224d17d76d42499a38f00ba4339710 | 9.113.0.203.in-addr.arpa. | PTR  | my-vm.example.org.                                                   | ACTIVE | NONE   |
+--------------------------------------+----------------------------------+---------------------------+------+----------------------------------------------------------------------+--------+--------+
See Configuration of the externally accessible network for use cases 3b and 3c for detailed instructions
on how to create the externally accessible network.
Performance considerations
Only for Use case 3: Ports are published directly in the external DNS service, if the port binding
extension is enabled in the Networking service, the Compute service will execute one additional port
update operation when allocating the port for the instance during the boot process. This may have a
noticeable adverse effect on the performance of the boot process and should be evaluated before adopting
this use case.
Configuration of the externally accessible network for use cases 3b and 3c
For use cases 3b and 3c, the externally accessible network must meet the following requirements:
• The network may not have attribute router:external set to True.
• The network type can be FLAT, VLAN, GRE, VXLAN or GENEVE.
• For network types VLAN, GRE, VXLAN or GENEVE, the segmentation ID must be outside the
ranges assigned to project networks.
This usually implies that these use cases only work for networks specifically created for this purpose by
an admin; they do not work for networks which tenants can create on their own.
Name resolution for instances
The Networking service offers several methods to configure name resolution (DNS) for instances. Most
deployments should implement case 1 or 2a. Case 2b requires security considerations to prevent leaking
internal DNS information to instances.
Note: All of these setups require the configured DNS resolvers to be reachable from the virtual network
in question. So unless the resolvers are located inside the virtual network itself, this implies the need for
a router to be attached to that network having an external gateway configured.
Case 1: Each virtual network uses unique DNS resolver(s)
In this case, the DHCP agent offers one or more unique DNS resolvers to instances via DHCP on each
virtual network. You can configure a DNS resolver when creating or updating a subnet. To configure
more than one DNS resolver, repeat the option multiple times.
• Configure a DNS resolver when creating a subnet.
Replace DNS_RESOLVER with the IP address of a DNS resolver reachable from the virtual net-
work. Repeat the option if you want to specify multiple IP addresses. For example:
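A sketch, assuming illustrative resolver addresses and subnet parameters:
$ openstack subnet create --network selfservice --subnet-range 192.0.2.0/24 \
  --dns-nameserver 8.8.4.4 --dns-nameserver 8.8.8.8 selfservice-subnet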
Note: This command requires additional options outside the scope of this content.
Replace DNS_RESOLVER with the IP address of a DNS resolver reachable from the virtual net-
work and SUBNET_ID_OR_NAME with the UUID or name of the subnet. For example, using the
selfservice subnet:
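A sketch, with an illustrative resolver address:
$ openstack subnet set --dns-nameserver 8.8.4.4 selfservice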
Replace SUBNET_ID_OR_NAME with the UUID or name of the subnet. For example, using the
selfservice subnet:
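Assuming a client that provides the --no-dns-nameservers option, a sketch of clearing the resolvers looks like this:
$ openstack subnet set --no-dns-nameservers selfservice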
Note: You can use this option in combination with the previous one in order to replace all existing
DNS resolver addresses with new ones.
You can also set the DNS resolver address to 0.0.0.0 for IPv4 subnets, or :: for IPv6 subnets, which
are special values that indicate to the DHCP agent that it should not announce any DNS resolver at all
on the subnet.
Note: When DNS resolvers are explicitly specified for a subnet this way, that setting will take prece-
dence over the options presented in case 2.
Case 2: DHCP agents forward DNS queries from instances
In this case, the DHCP agent offers the list of all DHCP agents' IP addresses on a subnet as DNS re-
solver(s) to instances via DHCP on that subnet.
The DHCP agent then runs a masquerading forwarding DNS resolver with two possible options to
determine where the DNS queries are sent to.
Note: The DHCP agent will answer queries for names and addresses of instances running within the
virtual network directly instead of forwarding them.
Case 2a: Queries are forwarded to an explicitly configured set of DNS resolvers
In the dhcp_agent.ini file, configure one or more DNS resolvers. To configure more than one DNS
resolver, use a comma between the values.
[DEFAULT]
dnsmasq_dns_servers = DNS_RESOLVER
Replace DNS_RESOLVER with a list of IP addresses of DNS resolvers reachable from all virtual net-
works. For example:
[DEFAULT]
dnsmasq_dns_servers = 203.0.113.8, 198.51.100.53
Note: You must configure this option for all eligible DHCP agents and restart them to activate the
values.
Case 2b: Queries are forwarded to DNS resolver(s) configured on the host
In this case, the DHCP agent forwards queries from the instances to the DNS resolver(s) configured
in the resolv.conf file on the host running the DHCP agent. This requires these resolvers being
reachable from all virtual networks.
In the dhcp_agent.ini file, enable using the DNS resolver(s) configured on the host.
[DEFAULT]
dnsmasq_local_resolv = True
Note: You must configure this option for all eligible DHCP agents and restart them to activate this
setting.
Open vSwitch: High availability using DVR supports augmentation using Virtual Router Redundancy
Protocol (VRRP). Using this configuration, virtual routers support both the --distributed and
--ha options.
Similar to legacy HA routers, DVR/SNAT HA routers provide a quick fail over of the SNAT service to
a backup DVR/SNAT router on an l3-agent running on a different node.
SNAT high availability is implemented in a manner similar to the Linux bridge: High availability using
VRRP and Open vSwitch: High availability using VRRP examples where keepalived uses VRRP to
provide quick failover of SNAT services.
During normal operation, the primary router periodically transmits heartbeat packets over a hidden
project network that connects all HA routers for a particular project.
If the DVR/SNAT backup router stops receiving these packets, it assumes failure of the primary
DVR/SNAT router and promotes itself to primary router by configuring IP addresses on the interfaces
in the snat namespace. In environments with more than one backup router, the rules of VRRP are
followed to select a new primary router.
Warning: There is a known bug with keepalived v1.2.15 and earlier which can cause packet
loss when max_l3_agents_per_router is set to 3 or more. Therefore, we recommend that
you upgrade to keepalived v1.2.16 or greater when using this feature.
Configuration example
The basic deployment model consists of one controller node, two or more network nodes, and multiple
compute nodes.
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
router_distributed = True
l3_ha = True
l3_ha_net_cidr = 169.254.192.0/18
max_l3_agents_per_router = 3
When the router_distributed = True flag is configured, routers created by all users
are distributed. Without it, only privileged users can create distributed routers by using
--distributed True.
Similarly, when the l3_ha = True flag is configured, routers created by all users default to
HA.
It follows that with these two flags set to True in the configuration file, routers created by all
users will default to distributed HA routers (DVR HA).
The same can be accomplished explicitly by a user with administrative credentials by setting the flags
in the openstack router create command:
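For example, with an illustrative router name:
$ openstack router create --distributed --ha router1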
[ml2]
type_drivers = flat,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = external
[ml2_type_vxlan]
vni_ranges = MIN_VXLAN_ID:MAX_VXLAN_ID
Replace MIN_VXLAN_ID and MAX_VXLAN_ID with VXLAN ID minimum and maximum val-
ues suitable for your environment.
Note: The first value in the tenant_network_types option becomes the default project
network type when a regular user creates a network.
Network nodes
[ovs]
local_ip = TUNNEL_INTERFACE_IP_ADDRESS
bridge_mappings = external:br-ex
[agent]
enable_distributed_routing = True
tunnel_types = vxlan
l2_population = True
[DEFAULT]
ha_vrrp_auth_password = password
interface_driver = openvswitch
agent_mode = dvr_snat
Compute nodes
[ovs]
local_ip = TUNNEL_INTERFACE_IP_ADDRESS
bridge_mappings = external:br-ex
[agent]
enable_distributed_routing = True
tunnel_types = vxlan
l2_population = True
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.
,→OVSHybridIptablesFirewallDriver
[DEFAULT]
interface_driver = openvswitch
agent_mode = dvr
The health of your keepalived instances can be automatically monitored via a bash script that verifies
connectivity to all available and configured gateway addresses. In the event that connectivity is lost, the
master router is rescheduled to another node.
If all routers lose connectivity simultaneously, the process of selecting a new master router will be
repeated in a round-robin fashion until one or more routers have their connectivity restored.
To enable this feature, edit the l3_agent.ini file:
ha_vrrp_health_check_interval = 30
Known limitations
• Migrating a router from distributed only, HA only, or legacy to distributed HA is not supported at
this time. The router must be created as distributed HA. The reverse direction is also not supported.
You cannot reconfigure a distributed HA router to be only distributed, only HA, or legacy.
• There are certain scenarios where l2pop and distributed HA routers do not interact in an expected
manner. These situations are the same that affect HA only routers and l2pop.
Floating IP port forwarding enables users to forward traffic from a TCP/UDP/other protocol port of a
floating IP to a TCP/UDP/other protocol port associated with one of the fixed IPs of a Neutron port. This
is accomplished by associating a port_forwarding sub-resource with a floating IP.
CRUD operations for port forwarding are implemented by a Neutron API extension and a service plug-
in. Please refer to the Neutron API Reference documentation for details on the CRUD operations.
service_plugins = router,segments,port_forwarding
extensions = port_forwarding
Note: The router service plug-in manages floating IPs and routers. As a consequence, it has to be
configured along with the port_forwarding service plug-in.
Note: After updating the options in the configuration files, the neutron-server and every neutron-l3-
agent need to be restarted for the new values to take effect.
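As a sketch of the resulting API usage, a port forwarding rule could then be created with the client; all addresses, ports, and IDs below are illustrative placeholders:
$ openstack floating ip port forwarding create \
  --internal-ip-address 198.51.100.5 --port PORT_ID \
  --internal-protocol-port 22 --external-protocol-port 2222 \
  --protocol tcp FLOATING_IP_ID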
Starting with the Liberty release, OpenStack Networking includes a pluggable interface for the IP Ad-
dress Management (IPAM) function. This interface creates a driver framework for the allocation and
de-allocation of subnets and IP addresses, enabling the integration of alternate IPAM implementations
or third-party IP Address Management systems.
The basics
In Liberty and Mitaka, the IPAM implementation within OpenStack Networking provided a pluggable
and non-pluggable flavor. As of Newton, the non-pluggable flavor is no longer available. Instead, it is
completely replaced with a reference driver implementation of the pluggable framework. All data will be
automatically migrated during the upgrade process, unless you have previously configured a pluggable
IPAM driver. In that case, no migration is necessary.
To configure a driver other than the reference driver, specify it in the neutron.conf file. Do this after
the migration is complete:
ipam_driver = ipam-driver-name
There is no need to specify any value if you wish to use the reference driver, though specifying
internal will explicitly choose the reference driver. The documentation for any alternate drivers
will include the value to use when specifying that driver.
Known limitations
• The driver interface is designed to allow separate drivers for each subnet pool. However, the
current implementation allows only a single IPAM driver system-wide.
• Third-party drivers must provide their own migration mechanisms to convert existing OpenStack
installations to their IPAM.
8.2.14 IPv6
This section describes the OpenStack Networking reference implementation for IPv6, including the
following items:
• How to enable dual-stack (IPv4 and IPv6 enabled) instances.
• How those instances receive an IPv6 address.
• How those instances communicate across a router to other subnets or the internet.
• How those instances interact with other OpenStack services.
Enabling a dual-stack network in OpenStack Networking simply requires creating a subnet with the
ip_version field set to 6 and the IPv6 attributes (ipv6_ra_mode and ipv6_address_mode)
set. The ipv6_ra_mode and ipv6_address_mode will be described in detail in the next section.
Finally, the subnet's cidr needs to be provided.
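A sketch of creating such an IPv6 subnet on an existing network, with illustrative names and prefix:
$ openstack subnet create --ip-version 6 --ipv6-ra-mode slaac \
  --ipv6-address-mode slaac --network net1 \
  --subnet-range 2001:db8:1::/64 ipv6-subnet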
This section does not include the following items:
• Single stack IPv6 project networking
• OpenStack control communication between servers and services over an IPv6 network.
• Connection to the OpenStack APIs via an IPv6 transport network
• IPv6 multicast
• IPv6 support in conjunction with any out of tree routers, switches, services or agents whether in
physical or virtual form factors.
As of Juno, the OpenStack Networking service (neutron) provides two new attributes to the subnet
object, which allow users of the API to configure IPv6 subnets.
There are two IPv6 attributes:
• ipv6_ra_mode
• ipv6_address_mode
These attributes can be set to the following values:
• slaac
• dhcpv6-stateful
• dhcpv6-stateless
The attributes can also be left unset.
IPv6 addressing
Router advertisements
Dataplane
Both the Linux bridge and the Open vSwitch dataplane modules support forwarding IPv6 packets
amongst the guests and router ports. Similar to IPv4, there is no special configuration or setup re-
quired to enable the dataplane to properly forward packets from the source to the destination using IPv6.
Note that these dataplanes will forward Link-local Address (LLA) packets between hosts on the same
network just fine without any participation or setup by OpenStack components after the ports are all
connected and MAC addresses learned.
There are three methods currently implemented for a subnet to get its cidr in OpenStack:
1. Direct assignment during subnet creation via command line or Horizon
2. Referencing a subnet pool during subnet creation
3. Using a Prefix Delegation (PD) client to request a prefix for a subnet from a PD server
In the future, additional techniques could be used to allocate subnets to projects, for example, use of an
external IPAM module.
Note: An external DHCPv6 server in theory could override the full address OpenStack assigns based
on the EUI-64 address, but that would not be wise as it would not be consistent through the system.
IPv6 supports three different addressing schemes for address configuration and for providing optional
network information.
Stateless Address Auto Configuration (SLAAC) Address configuration using Router Advertise-
ments.
DHCPv6-stateless Address configuration using Router Advertisements and optional information using
DHCPv6.
DHCPv6-stateful Address configuration and optional information using DHCPv6.
OpenStack can be setup such that OpenStack Networking directly provides Router Advertisements,
DHCP relay and DHCPv6 address and optional information for their networks or this can be delegated
to external routers and services based on the drivers that are in use. There are two neutron subnet
attributes - ipv6_ra_mode and ipv6_address_mode that determine how IPv6 addressing and
network information is provided to project instances:
• ipv6_ra_mode: Determines who sends Router Advertisements.
• ipv6_address_mode: Determines how instances obtain IPv6 address, default gateway, or
optional information.
For the above two attributes to be effective, enable_dhcp of the subnet object must be set to True.
When using SLAAC, the currently supported combinations for ipv6_ra_mode and
ipv6_address_mode are as follows.
ipv6_ra_mode    ipv6_address_mode    Result
Not specified   SLAAC                Addresses are assigned using EUI-64, and an external
                                     router will be used for routing.
SLAAC           SLAAC                Addresses are assigned using EUI-64, and OpenStack
                                     Networking provides routing.
Setting SLAAC for ipv6_ra_mode configures the neutron router with an radvd agent to send Router
Advertisements. The list below captures the values set for the address configuration flags in the Router
Advertisement messages in this scenario.
• Auto Configuration Flag = 1
• Managed Configuration Flag = 0
• Other Configuration Flag = 0
New or existing neutron networks that contain a SLAAC enabled IPv6 subnet will result in all neutron
ports attached to the network receiving IPv6 addresses. This is because when Router Advertisement
messages are multicast on a neutron network, they are received by all IPv6 capable ports on the network,
and each port will then configure an IPv6 address based on the information contained in the Router
Advertisement messages. In some cases, an IPv6 SLAAC address will be added to a port, in addition to
other IPv4 and IPv6 addresses that the port already has been assigned.
Note: If a router is not created and added to the subnet, SLAAC addressing will not succeed for
instances since no Router Advertisement messages will be generated.
DHCPv6
ipv6_ra_mode        ipv6_address_mode    Result
DHCPv6-stateless    DHCPv6-stateless     Addresses are assigned through Router Advertisements
                                         (see SLAAC above) and optional information is
                                         delivered through DHCPv6.
DHCPv6-stateful     DHCPv6-stateful      Addresses and optional information are assigned
                                         using DHCPv6.
Setting DHCPv6-stateless for ipv6_ra_mode configures the neutron router with an radvd agent to
send Router Advertisements. The list below captures the values set for the address configuration
flags in the Router Advertisement messages in this scenario. Similarly, setting DHCPv6-stateless for
ipv6_address_mode configures neutron DHCP implementation to provide the additional network
information.
• Auto Configuration Flag = 1
• Managed Configuration Flag = 0
• Other Configuration Flag = 1
Note: If a router is not created and added to the subnet, DHCPv6 addressing will not succeed for
instances since no Router Advertisement messages will be generated.
Router support
The behavior of the neutron router for IPv6 is different than for IPv4 in a few ways.
Internal router ports, that act as default gateway ports for a network, will share a common port for all
IPv6 subnets associated with the network. This implies that there will be an IPv6 internal router interface
with multiple IPv6 addresses from each of the IPv6 subnets associated with the network and a separate
IPv4 internal router interface for the IPv4 subnet. On the other hand, external router ports are allowed
to have a dual-stack configuration with both an IPv4 and an IPv6 address assigned to them.
Neutron project networks that are assigned Global Unicast Address (GUA) prefixes and addresses don't
require NAT on the neutron router external gateway port to access the outside world. As a consequence
of the lack of NAT, the external router port doesn't require a GUA to send and receive to the external
networks. This implies a GUA IPv6 subnet prefix is not necessarily needed for the neutron external
network. By default, an IPv6 LLA associated with the external gateway port can be used for routing
purposes. To handle this scenario, the implementation of the router-gateway-set API in neutron has been
modified so that an IPv6 subnet is not required for the external network that is associated with the
neutron router. The LLA address of the upstream router can be learned in two ways.
1. In the absence of an upstream Router Advertisement message, the ipv6_gateway flag can be
set with the external router gateway LLA in the neutron L3 agent configuration file. This also
requires that no subnet is associated with that port.
2. The upstream router can send a Router Advertisement and the neutron router will automatically
learn the next-hop LLA, provided again that no subnet is assigned and the ipv6_gateway flag
is not set.
Effectively, the ipv6_gateway flag takes precedence over a Router Advertisement that is received
from the upstream router. If it is desired to use a GUA next hop, that is accomplished by allocating a
subnet to the external router port and assigning the upstream router's GUA address as the gateway for the
subnet.
Note: It should be possible for projects to communicate with each other on an isolated network (a
network without a router port) using LLA with little to no participation on the part of OpenStack. The
authors of this section have not proven that to be true for all scenarios.
Note: When using the neutron L3 agent in a configuration where it is auto-configuring an IPv6 address
via SLAAC, and the agent is learning its default IPv6 route from the ICMPv6 Router Advertisement, it
may be necessary to set the net.ipv6.conf.<physical_interface>.accept_ra sysctl to
the value 2 in order for routing to function correctly. For a more detailed description, please see the bug.
IPv6 does work when the Distributed Virtual Router functionality is enabled, but all ingress/egress traf-
fic is via the centralized router (hence, not distributed). More work is required to fully enable this
functionality.
Advanced services
VPNaaS
VPNaaS supports IPv6, but support in Kilo and prior releases has some bugs that may limit how
it can be used. More thorough and complete testing and bug fixing is being done as part of the Liberty
release. IPv6-based VPN-as-a-Service is configured similarly to the IPv4 configuration. Either or both
the peer_address and the peer_cidr can be specified as an IPv6 address. The choice of addressing
modes and router modes described above should not impact support.
At the current time OpenStack Networking does not provide any facility to support any flavor of NAT
with IPv6. Unlike IPv4 there is no current embedded support for floating IPs with IPv6. It is assumed
that the IPv6 addressing amongst the projects is using GUAs with no overlap across the projects.
Security considerations
For more information about security considerations, see the Security groups section in OpenStack
Networking.
OpenStack currently doesn't support the Privacy Extensions defined by RFC 4941, or the Opaque Iden-
tifier generation methods defined in RFC 7217. The interface identifier and DUID used must be directly
derived from the MAC address as described in RFC 2373. The compute instances must not be set up
to utilize either of these methods when generating their interface identifier, or they might not be able to
communicate properly on the network. For example, in Linux guests, these are controlled via these two
sysctl variables:
• net.ipv6.conf.*.use_tempaddr (Privacy Extensions)
This controls the use of temporary (privacy) interface identifiers for IPv6 addresses according to RFC 3041
semantics. It should be disabled (zero) so that stateless addresses are constructed using a stable, EUI-64-
based value.
• net.ipv6.conf.*.addr_gen_mode
This defines how link-local and auto-configured IPv6 addresses are generated. It should be set to zero
(default) so that IPv6 addresses are generated using an EUI64-based value.
Other types of guests might have similar configuration options, please consult your distribution docu-
mentation for more information.
There are no provisions for an IPv6-based metadata service similar to what is provided for IPv4. In
the case of dual-stacked guests though it is always possible to use the IPv4 metadata service instead.
IPv6-only guests will have to use another method for metadata injection such as using a configuration
drive, which is described in the Nova documentation on config-drive.
Unlike IPv4, the MTU of a given network can be conveyed in both the Router Advertisement messages
sent by the router, as well as in DHCP messages.
As of the Kilo release, considerable effort has gone into ensuring the project network can handle dual-
stack IPv6 and IPv4 transport across the variety of configurations described above. The OpenStack control
network can be run in a dual-stack configuration and OpenStack API endpoints can be accessed via an
IPv6 network. At this time, the Open vSwitch (OVS) tunnel types (STT, VXLAN, GRE) support both IPv4
and IPv6 endpoints.
Prefix delegation
From the Liberty release onwards, OpenStack Networking supports IPv6 prefix delegation. This section
describes the configuration and workflow steps necessary to use IPv6 prefix delegation to provide auto-
matic allocation of subnet CIDRs. This allows you as the OpenStack administrator to rely on an external
(to the OpenStack Networking service) DHCPv6 server to manage your project network prefixes.
Note: Prefix delegation became available in the Liberty release; it is not available in the Kilo release.
HA and DVR routers are not currently supported by this feature.
ipv6_pd_enabled = True
Note: If you are not using the default dibbler-based driver for prefix delegation, then you also need to
set the driver in /etc/neutron/neutron.conf:
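A sketch, assuming the pd_dhcp_driver option is used to select the driver (the value shown is a placeholder):
[DEFAULT]
pd_dhcp_driver = <driver-name>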
Drivers other than the default one may require extra configuration.
This tells OpenStack Networking to use the prefix delegation mechanism for subnet allocation when the
user does not provide a CIDR or subnet pool id when creating a subnet.
Requirements
To use this feature, you need a prefix delegation capable DHCPv6 server that is reachable from your
OpenStack Networking node(s). This could be software running on the OpenStack Networking node(s)
or elsewhere, or a physical router. For the purposes of this guide we are using the open-source DHCPv6
server, Dibbler. Dibbler is available in many Linux package managers, or from source at
tomaszmrugalski/dibbler.
When using the reference implementation of the OpenStack Networking prefix delegation driver, Dib-
bler must also be installed on your OpenStack Networking node(s) to serve as a DHCPv6 client. Version
1.0.1 or higher is required.
This guide assumes that you are running a Dibbler server on the network node where the external net-
work bridge exists. If you already have a prefix delegation capable DHCPv6 server in place, then you
can skip the following section.
script "/var/lib/dibbler/pd-server.sh"
iface "br-ex" {
pd-class {
pd-pool 2001:db8:2222::/48
pd-length 64
}
}
To test your configuration, you can run the server in the foreground:
# dibbler-server run
Alternatively, run it as a daemon:
# dibbler-server start
When using DevStack, it is important to start your server after the stack.sh script has finished to
ensure that the required network interfaces have been created.
User workflow
The subnet is initially created with a temporary CIDR before one can be assigned by prefix delega-
tion. Any number of subnets with this temporary CIDR can exist without raising an overlap error. The
subnetpool_id is automatically set to prefix_delegation.
To trigger the prefix delegation process, create a router interface between this subnet and a router with
an active interface on the external network:
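For example, assuming a router named router1 and a subnet named ipv6-pd-subnet:
$ openstack router add subnet router1 ipv6-pd-subnet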
The prefix delegation mechanism then sends a request via the external network to your prefix delegation
server, which replies with the delegated prefix. The subnet is then updated with the new prefix, including
issuing new IP addresses to all ports:
If the prefix delegation server is configured to delegate globally routable prefixes and setup routes, then
any instance with a port on this subnet should now have external network access.
Deleting the router interface causes the subnet to be reverted to the temporary CIDR, and all ports have
their IPs updated. Prefix leases are released and renewed automatically as necessary.
References
The following presentation from the Barcelona Summit provides a great guide for setting up IPv6 with
OpenStack: Deploying IPv6 in OpenStack Environments.
The packet logging service is designed as a Neutron plug-in that captures network packets for relevant re-
sources (e.g. security group or firewall group) when the registered events occur.
From the Rocky release, both security_group and firewall_group are supported as resource
types in the Neutron packet logging framework.
Service Configuration
service_plugins = router,metering,log
[agent]
extensions = log
[network_log]
rate_limit = 100
burst_limit = 25
#local_output_log_base = <None>
Note:
• rate_limit requires a value of at least 100 and burst_limit a value of at least 25.
• If rate_limit is unset, logging is not rate limited.
• If local_output_log_base is not specified, logged packets are stored
in the system journal, such as /var/log/syslog, by default.
"get_loggable_resources": "rule:regular_user",
"create_log": "rule:regular_user",
"get_log": "rule:regular_user",
"get_logs": "rule:regular_user",
"update_log": "rule:regular_user",
"delete_log": "rule:regular_user",
Note:
• On VM ports, logging for security_group currently works with the openvswitch fire-
wall driver only; linuxbridge support is under development.
• Logging for firewall_group works on internal router ports only. VM ports will be
supported in the future.
2. Log creation:
Warning: If --resource and --target are not specified in the request, these arguments
default to ALL. In that case, an enormous number of log events may be created.
• Create logging resource for only the given target (portB) and the given resource (sg1 or
fwg1)
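A sketch of such a request, assuming a security group named sg1, a port named portB, and an illustrative log name:
$ openstack network log create --resource-type security_group \
  --resource sg1 --target portB --event ACCEPT my-log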
Note:
• The Enabled field is set to True by default. If enabled, logged events are written to
the destination given by local_output_log_base, or to /var/log/syslog by
default.
• The Event field will be set to ALL if --event is not specified in the log creation request.
3. Enable/Disable log
We can enable or disable logging objects at runtime. The change applies immediately to all
ports registered with the logging object. For example:
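A sketch, assuming a log object named my-log:
$ openstack network log set --disable my-log
$ openstack network log set --enable my-log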
Currently, the packet logging framework supports collecting ACCEPT or DROP or both events related to
registered resources. As mentioned above, the Neutron packet logging framework offers two loggable re-
sources through the log service plug-in: security_group and firewall_group.
The general characteristics of each event are as follows:
• Log every DROP event: Every DROP security event will be generated when an incoming or
outgoing session is blocked by the security groups or firewall groups.
• Log an ACCEPT event: The ACCEPT security event will be generated only for each NEW incoming
or outgoing session that is allowed by security groups or firewall groups. More details for the
ACCEPT events are shown below:
– North/South ACCEPT: For a North/South session there would be a single ACCEPT event
irrespective of direction.
– East/West ACCEPT/ACCEPT: In an intra-project East/West session where the originating
port allows the session and the destination port allows the session, i.e. the traffic is allowed,
there would be two ACCEPT security events generated, one from the perspective of the
originating port and one from the perspective of the destination port.
– East/West ACCEPT/DROP: In an intra-project East/West session initiation where the origi-
nating port allows the session and the destination port does not allow the session there would
be ACCEPT security events generated from the perspective of the originating port and DROP
security events generated from the perspective of the destination port.
1. The security events that are collected by security group should include:
• A timestamp of the flow.
• The status of the flow: ACCEPT/DROP.
• An indication of the originator of the flow, e.g. which project or log resource generated the
events.
• An identifier of the associated instance interface (neutron port ID).
• Layer 2, 3 and 4 information (MAC, address, port, protocol, etc.).
• Security event record format:
– Logged data of an ACCEPT event would look like:

May 5 09:05:07 action=ACCEPT project_id=736672c700cd43e1bd321aeaf940365c
log_resource_ids=['4522efdf-8d44-4e19-b237-64cafc49469b', '42332d89-df42-4588-a2bb-3ce50829ac51']
vm_port=e0259ade-86de-482e-a717-f58258f7173f
ethernet(dst='fa:16:3e:ec:36:32',ethertype=2048,src='fa:16:3e:50:aa:b5'),
ipv4(csum=62071,dst='10.0.0.4',flags=2,header_length=5,identification=36638,offset=0,
option=None,proto=6,src='172.24.4.10',tos=0,total_length=60,ttl=63,version=4),
tcp(ack=0,bits=2,csum=15097,dst_port=80,offset=10,
option=[TCPOptionMaximumSegmentSize(kind=2,length=4,max_seg_size=1460),
TCPOptionSACKPermitted(kind=4,length=2),
TCPOptionTimestamps(kind=8,length=10,ts_ecr=0,ts_val=196418896),
TCPOptionNoOperation(kind=1,length=1),
TCPOptionWindowScale(kind=3,length=3,shift_cnt=3)],
seq=3284890090,src_port=47825,urgent=0,window_size=14600)

– Logged data of a DROP event follows the same record format (timestamp, action, project_id,
log_resource_ids, vm_port and the packet headers, e.g. ipv4 and icmp for ICMP traffic).
Note: No other extraneous events are generated within the security event logs, e.g. no debugging data,
etc.
The Macvtap mechanism driver for the ML2 plug-in generally increases network performance of in-
stances.
Consider the following attributes of this mechanism driver to determine practicality in your environment:
• Supports only instance ports. Ports for DHCP and layer-3 (routing) services must use another
mechanism driver such as Linux bridge or Open vSwitch (OVS).
• Supports only untagged (flat) and tagged (VLAN) networks.
• Lacks support for security groups including basic (sanity) and anti-spoofing rules.
• Lacks support for layer-3 high-availability mechanisms such as Virtual Router Redundancy Pro-
tocol (VRRP) and Distributed Virtual Routing (DVR).
• Only compute resources can be attached via macvtap. Attaching other resources like DHCP,
Routers and others is not supported. Therefore run either OVS or linux bridge in VLAN or flat
mode on the controller node.
• Instance migration requires the same values for the physical_interface_mapping con-
figuration option on each compute node. For more information, see https://bugs.launchpad.net/
neutron/+bug/1550400.
Prerequisites
You can add this mechanism driver to an existing environment using either the Linux bridge or OVS
mechanism drivers with only provider networks or provider and self-service networks. You can change
the configuration of existing compute nodes or add compute nodes with the Macvtap mechanism driver.
The example configuration assumes addition of compute nodes with the Macvtap mechanism driver to
the Linux bridge: Self-service networks or Open vSwitch: Self-service networks deployment examples.
Add one or more compute nodes with the following components:
• Three network interfaces: management, provider, and overlay.
• OpenStack Networking Macvtap layer-2 agent and any dependencies.
Note: To support integration with the deployment examples, this content configures the Macvtap mech-
anism driver to use the overlay network for untagged (flat) or tagged (VLAN) networks in addition to
overlay networks such as VXLAN. Your physical network infrastructure must support VLAN (802.1q)
tagging on the overlay network.
Architecture
The Macvtap mechanism driver only applies to compute nodes. Otherwise, the environment resembles
the prerequisite deployment example.
Example configuration
Use the following example configuration as a template to add support for the Macvtap mechanism driver
to an existing operational environment.
Controller node
[ml2]
mechanism_drivers = macvtap
[ml2_type_flat]
flat_networks = provider,macvtap
[ml2_type_vlan]
network_vlan_ranges = provider,macvtap:VLAN_ID_START:VLAN_ID_END
Note: The use of the macvtap physical network name is arbitrary. Only the self-service
deployment examples require VLAN ID ranges. Replace VLAN_ID_START and VLAN_ID_END
with appropriate numerical values.
Network nodes
No changes.
Compute nodes
[DEFAULT]
core_plugin = ml2
auth_strategy = keystone
[database]
# ...
[keystone_authtoken]
# ...
[nova]
# ...
[agent]
# ...
See the Installation Tutorials and Guides and Configuration Reference for your OpenStack re-
lease to obtain the appropriate additional configuration for the [DEFAULT], [database],
[keystone_authtoken], [nova], and [agent] sections.
3. In the macvtap_agent.ini file, configure the layer-2 agent.
[macvtap]
physical_interface_mappings = macvtap:MACVTAP_INTERFACE
[securitygroup]
firewall_driver = noop
Replace MACVTAP_INTERFACE with the name of the underlying interface that handles
Macvtap mechanism driver interfaces. If using a prerequisite deployment example, replace
MACVTAP_INTERFACE with the name of the underlying interface that handles overlay networks.
For example, eth1.
4. Start the following services:
• Macvtap agent
This mechanism driver simply changes the virtual network interface driver for instances. Thus, you can
reference the Create initial networks content for the prerequisite deployment example.
This mechanism driver simply changes the virtual network interface driver for instances. Thus, you can
reference the Verify network operation content for the prerequisite deployment example.
This mechanism driver simply removes the Linux bridge handling security groups on the compute nodes.
Thus, you can reference the network traffic flow scenarios for the prerequisite deployment example.
The Networking service uses the MTU of the underlying physical network to calculate the MTU for
virtual network components including instance network interfaces. By default, it assumes a standard
1500-byte MTU for the underlying physical network.
The Networking service only references the underlying physical network MTU. Changing the underly-
ing physical network device MTU requires configuration of physical network devices such as switches
and routers.
Jumbo frames
The Networking service supports underlying physical networks using jumbo frames and also enables in-
stances to use jumbo frames minus any overlay protocol overhead. For example, an underlying physical
network with a 9000-byte MTU yields an 8950-byte MTU for instances using a VXLAN network with
IPv4 endpoints. Using IPv6 endpoints for overlay networks adds 20 bytes of overhead for any protocol.
The Networking service supports the following underlying physical network architectures. Case 1 refers
to the most common architecture. In general, architectures should avoid cases 2 and 3.
Note: After you adjust MTU configuration options in neutron.conf and ml2_conf.ini, you
should update the mtu attribute for all existing networks that need a new MTU. (Network MTU update is
available for all core plugins that implement the net-mtu-writable API extension.)
Case 1
For typical underlying physical network architectures that implement a single MTU value, you can lever-
age jumbo frames using two options, one in the neutron.conf file and the other in the ml2_conf.
ini file. Most environments should use this configuration.
For example, referencing an underlying physical network with a 9000-byte MTU:
1. In the neutron.conf file:

[DEFAULT]
global_physnet_mtu = 9000

2. In the ml2_conf.ini file:

[ml2]
path_mtu = 9000
Case 2
Some underlying physical network architectures contain multiple layer-2 networks with different MTU
values. You can configure each flat or VLAN provider network in the bridge or interface mapping
options of the layer-2 agent to reference a unique MTU value.
For example, referencing a 4000-byte MTU for provider2, a 1500-byte MTU for provider3, and
a 9000-byte MTU for other networks using the Open vSwitch agent:
1. In the neutron.conf file:

[DEFAULT]
global_physnet_mtu = 9000

2. In the openvswitch_agent.ini file:

[ovs]
bridge_mappings = provider1:eth1,provider2:eth2,provider3:eth3

3. In the ml2_conf.ini file:

[ml2]
physical_network_mtus = provider2:4000,provider3:1500
path_mtu = 9000
Case 3
Some underlying physical network architectures contain a unique layer-2 network for overlay networks
using protocols such as VXLAN and GRE.
For example, referencing a 4000-byte MTU for overlay networks and a 9000-byte MTU for other net-
works:
1. In the neutron.conf file:

[DEFAULT]
global_physnet_mtu = 9000

2. In the ml2_conf.ini file:

[ml2]
path_mtu = 4000
Note: Other networks including provider networks and flat or VLAN self-service networks
assume the value of the global_physnet_mtu option.
The DHCP agent provides an appropriate MTU value to instances using IPv4, while the L3 agent pro-
vides an appropriate MTU value to instances using IPv6. IPv6 uses RA via the L3 agent because the
DHCP agent only supports IPv4. Instances using IPv4 and IPv6 should obtain the same MTU value
regardless of method.
The network segment range service exposes the segment range management to be administered via the
Neutron API. In addition, it introduces the ability for the administrator to control the segment ranges
globally or on a per-tenant basis.
Before Stein, network segment ranges were configured as an entry in ML2 config file ml2_conf.ini
that was statically defined for tenant network allocation and therefore had to be managed as part of
the host deployment and management. When a regular tenant user creates a network, Neutron assigns
the next free segmentation ID (VLAN ID, VNI etc.) from the configured segment ranges. Only an
administrator can assign a specific segment ID via the provider extension.
The network segment range management service provides the following capabilities that the administra-
tor may be interested in:
1. To list the network segment ranges defined by the operators in the ML2 config file, so that
the admin can use this information when making segment range allocations.
2. To dynamically create and assign network segment ranges, which can help with the distribution of
the underlying network connection mapping for privacy or dedicated business connection needs.
This includes:
• global shared network segment ranges
• tenant-specific network segment ranges
3. To dynamically update a network segment range to offer the ability to adapt to the connection
mapping changes.
4. To dynamically manage network segment ranges when there are no segment ranges defined within
the ML2 config file ml2_conf.ini; no restart of the Neutron server is required in this
situation.
5. To check the availability and usage statistics of network segment ranges.
How it works
A network segment range manages a set of segments from which self-service networks can be allocated.
The network segment range management service is admin-only.
As a regular project in an OpenStack cloud, you cannot create a network segment range of your own;
you simply create networks in the usual way.
If you are an admin, you can create a network segment range which can be shared (i.e. used by any
regular project) or tenant-specific (i.e. assigned on a per-tenant basis). Your network segment ranges
will not be visible to any other regular projects. Other CRUD operations are also supported.
When a tenant allocates a segment, it is first allocated from an available segment range assigned to
the tenant, and then from a shared range if no tenant-specific allocation is possible.
A set of default network segment ranges are created out of the values defined in the ML2
config file: network_vlan_ranges for ml2_type_vlan, vni_ranges for ml2_type_vxlan,
tunnel_id_ranges for ml2_type_gre and vni_ranges for ml2_type_geneve. They will be
reloaded when Neutron server starts or restarts. The default network segment ranges are
read-only, but will be treated as any other shared ranges on segment allocation.
The administrator can use the default network segment range information to make shared and/or per-
tenant range creation and assignment.
Example configuration
Controller node
[DEFAULT]
# ...
service_plugins = ...,network_segment_range,...
1. Source the administrative project credentials and list the enabled extensions.
2. Use the command openstack extension list --network to verify that the Neutron
Network Segment Range extension with Alias network-segment-range is enabled.
Workflow
At a high level, the basic workflow for a network segment range creation is the following:
1. The Cloud administrator:
• Lists the existing network segment ranges.
• Creates a shared or a tenant-specific network segment range based on the requirement (see
the example after this list).
2. A regular tenant creates a network in the regular way. The network created will automatically allocate
a segment from the segment ranges assigned to the tenant, or from a shared range if no tenant-specific
range is available.
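A minimal sketch of the admin steps above, assuming the VXLAN network type; the range name
demo_range is illustrative, while the 120-140 range and the project ID match the tenant example
shown later in this section:

$ openstack network segment range list
$ openstack network segment range create --private \
  --project 7011dc7fccac4efda89dc3b7f0d0975a \
  --network-type vxlan --minimum 120 --maximum 140 demo_range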
At a high level, the basic workflow for a network segment range update is the following:
1. The Cloud administrator:
• Lists the existing network segment ranges and identifies the one that needs to be updated.
• Updates the network segment range based on the requirement (see the example after this list).
2. A regular tenant creates a network in the regular way. The network created will automatically allocate
a segment from the updated network segment ranges available.
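A hedged example of such an update, reusing the illustrative demo_range from the sketch above:

$ openstack network segment range set --minimum 100 --maximum 150 demo_range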
The network segment ranges with Default as True are the ranges specified by the operators in the
ML2 config file. Besides, there are also shared and tenant specific network segment ranges created by
the admin previously.
The admin is also able to check/show the detailed information (e.g. availability and usage statistics) of
a network segment range:
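For example, assuming the illustrative demo_range from above:

$ openstack network segment range show demo_range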
Now, as the demo project (source the demo-openrc client environment script for the demo project, as
described at https://docs.openstack.org/keystone/latest/install/keystone-openrc-rdo.html), create a network in
the regular way.
$ source demo-openrc
$ openstack network create test_net
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2019-02-25T23:20:36Z |
Then, switch back to the admin to check the segmentation ID of the tenant network created.
$ source admin-openrc
$ openstack network show test_net
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2019-02-25T23:20:36Z |
| description | |
| dns_domain | |
| id | 39e5b95c-ad7a-40b5-9ec1-a4b4a8a43f14 |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| is_default | False |
| is_vlan_transparent | None |
| location | None |
| mtu | 1450 |
| name | test_net |
| port_security_enabled | True |
| project_id | 7011dc7fccac4efda89dc3b7f0d0975a |
| provider:network_type | vxlan |
| provider:physical_network | None |
| provider:segmentation_id | 137 |
| qos_policy_id | None |
| revision_number | 2 |
| router:external | Internal |
| segments | None |
The tenant network created automatically allocates a segment with segmentation ID 137 from the net-
work segment range with segmentation ID range 120-140 that is assigned to the tenant.
If there is no more available segment in the network segment range assigned to this tenant, the segment
allocation falls back to the shared segment ranges to check whether a segment is available there. If
there is still no segment available, the allocation fails as follows:
In this case, the admin is advised to check the availability and usage statistics of the related network
segment ranges in order to take further actions (e.g. enlarging a segment range etc.).
Known limitations
• This service plugin is only compatible with ML2 core plugin for now. However, it is possible for
other core plugins to support this feature with a follow-on effort.
This page serves as a guide for how to use the OVS with DPDK datapath functionality available in the
Networking service as of the Mitaka release.
The basics
Open vSwitch (OVS) provides support for a Data Plane Development Kit (DPDK) datapath since OVS
2.2, and a DPDK-backed vhost-user virtual interface since OVS 2.4. The DPDK datapath provides
lower latency and higher performance than the standard kernel OVS datapath, while DPDK-backed
vhost-user interfaces can connect guests to this datapath. For more information on DPDK, refer to
the DPDK website.
OVS with DPDK, or OVS-DPDK, can be used to provide high-performance networking between in-
stances on OpenStack compute nodes.
Prerequisites
Once OVS and neutron are correctly configured with DPDK support, vhost-user interfaces are com-
pletely transparent to the guest (except in case of multiqueue configuration described below). However,
guests must request huge pages. This can be done through flavors. For example:
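A minimal sketch, assuming a flavor named m1.large; the hw:mem_page_size extra spec requests
huge pages for instances using this flavor:

$ openstack flavor set m1.large --property hw:mem_page_size=large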
For more information about the syntax for hw:mem_page_size, refer to the Flavors guide.
Note: vhost-user requires file descriptor-backed shared memory. Currently, the only way to request
this is by requesting large pages. This is why instances spawned on hosts with OVS-DPDK must request
large pages. The aggregate flavor affinity filter can be used to associate flavors with large page support
to hosts with OVS-DPDK support.
Create and add vhost-user network interfaces to instances in the same fashion as conventional in-
terfaces. These interfaces can use the kernel virtio-net driver or a DPDK-compatible driver in the
guest.
To use this feature, the following should be set in the flavor extra specs (flavor keys):
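A hedged example, again assuming the m1.large flavor; hw:vif_multiqueue_enabled is the
extra spec that enables multiqueue:

$ openstack flavor set m1.large --property hw:vif_multiqueue_enabled=true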
This setting can be overridden by the image metadata property if the feature is enabled in the extra specs:
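A hedged example, where IMAGE_NAME is a placeholder for your guest image:

$ openstack image set --property hw_vif_multiqueue_enabled=true IMAGE_NAME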
Support for virtio-net multiqueue needs to be present in the kernel of the guest VM and is available
starting from Linux kernel 3.8.
Check the pre-set maximum number of combined channels in the channel configuration. If OVS and the
flavor are configured successfully, the reported maximum should be greater than 1:
$ ethtool -l INTERFACE_NAME
To increase number of current combined channels run following command in guest VM:
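A minimal sketch, where INTERFACE_NAME is the guest interface and QUEUES_NR is the desired
number of combined channels:

$ ethtool -L INTERFACE_NAME combined QUEUES_NR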
The number of queues should typically match the number of vCPUs defined for the instance. In newer
kernel versions this is configured automatically.
Known limitations
• This feature is only supported when using the libvirt compute driver, and the KVM/QEMU hy-
pervisor.
• Huge pages are required for each instance running on hosts with OVS-DPDK. If huge pages are
not present in the guest, the interface will appear but will not function.
• Expect performance degradation of services using tap devices: these devices do not support
DPDK. Example services include DVR.
• When the ovs_use_veth option is set to True, any traffic sent from a DHCP namespace will
have an incorrect TCP checksum. This means that if enable_isolated_metadata is set to
True and metadata service is reachable through the DHCP namespace, responses from metadata
will be dropped due to an invalid checksum. In such cases, ovs_use_veth should be switched
to False and Open vSwitch (OVS) internal ports should be used instead.
The purpose of this page is to describe how to enable Open vSwitch hardware offloading functional-
ity available in OpenStack (using OpenStack Networking). This functionality was first introduced in
the OpenStack Pike release. This page intends to serve as a guide for how to configure OpenStack
Networking and OpenStack Compute to enable Open vSwitch hardware offloading.
The basics
Open vSwitch is a production quality, multilayer virtual switch licensed under the open source Apache
2.0 license. It is designed to enable massive network automation through programmatic extension,
while still supporting standard management interfaces and protocols. Open vSwitch (OVS) allows Vir-
tual Machines (VM) to communicate with each other and with the outside world. The OVS software
based solution is CPU intensive, affecting system performance and preventing fully utilizing available
bandwidth.
The following terms are used in this section:
• PF: Physical Function. The physical Ethernet controller that supports SR-IOV.
• VF: Virtual Function. The virtual PCIe device created from a physical Ethernet controller.
• Representor Port: Virtual network interface similar to an SR-IOV port that represents a Nova
instance.
• First Compute Node: OpenStack Compute node that can host Compute instances (Virtual
Machines).
• Second Compute Node: OpenStack Compute node that can host Compute instances (Virtual
Machines).
Prerequisites
In order to enable Open vSwitch hardware offloading, the following steps are required:
1. Enable SR-IOV
2. Configure NIC to switchdev mode (relevant Nodes)
3. Enable Open vSwitch hardware offloading
Note: Throughout this guide, enp3s0f0 is used as the PF and eth3 is used as the representor port.
These ports may vary in different environments.
Note: Throughout this guide, we use systemctl to restart OpenStack services. This is correct for
systemd-based operating systems. Other methods should be used to restart services in other environments.
Create the VFs for the network interface that will be used for SR-IOV. We use enp3s0f0 as PF, which
is also used as the interface for the VLAN provider network and has access to the private networks of
all nodes.
Note: The following steps detail how to create VFs using Mellanox ConnectX-4 and SR-IOV Ethernet
cards on an Intel system. Steps may be different for the hardware of your choice.
1. Ensure SR-IOV and VT-d are enabled on the system. Enable IOMMU in Linux by adding
intel_iommu=on to kernel parameters, for example, using GRUB.
2. On each Compute node, create the VFs:
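A minimal sketch, creating four VFs on the enp3s0f0 PF used throughout this guide (the number of
VFs is illustrative):

# echo '4' > /sys/class/net/enp3s0f0/device/sriov_numvfs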
Note: A network interface can be used both for PCI passthrough, using the PF, and SR-IOV,
using the VFs. If the PF is used, the VF number stored in the sriov_numvfs file is lost. If the
PF is attached again to the operating system, the number of VFs assigned to this interface will be
zero. To keep the number of VFs always assigned to this interface, update a relevant file according
to your OS. See some examples below:
In Ubuntu, modifying the /etc/network/interfaces file:
auto enp3s0f0
iface enp3s0f0 inet dhcp
pre-up echo '4' > /sys/class/net/enp3s0f0/device/sriov_numvfs
On Red Hat based systems, a similar effect can be achieved with an interface hook script such as
/sbin/ifup-local:

#!/bin/sh
if [[ "$1" == "enp3s0f0" ]]
then
echo '4' > /sys/class/net/enp3s0f0/device/sriov_numvfs
fi
Warning: Alternatively, you can create VFs by passing the max_vfs to the kernel module
of your network interface. However, the max_vfs parameter has been deprecated, so the PCI
/sys interface is the preferred method.
The maximum number of VFs the PF can support can be checked with:

# cat /sys/class/net/enp3s0f0/device/sriov_totalvfs
8
3. Verify that the VFs have been created and are in up state:
Note: The PCI bus number of the PF (03:00.0) and VFs (03:00.2 .. 03:00.5) will be used later.
If the interfaces are down, set them to up before launching a guest, otherwise the instance will
fail to spawn:
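A hedged example, using the PF name from this guide:

# ip link show enp3s0f0
# ip link set enp3s0f0 up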
1. Change the e-switch mode from legacy to switchdev on the PF device. This will also create the
VF representor network devices in the host OS.
Note: This should be done for all relevant VFs (in this example 0000:03:00.2 .. 0000:03:00.5)
2. Enable Open vSwitch hardware offloading, set PF to switchdev mode and bind VFs back.
Note: This should be done for all relevant VFs (in this example 0000:03:00.2 .. 0000:03:00.5)
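A minimal sketch of these two steps, assuming a Mellanox mlx5 NIC with the PF at PCI address
0000:03:00.0 and VFs 0000:03:00.2 .. 0000:03:00.5; the Open vSwitch service name may differ per
distribution (for example openvswitch-switch on Ubuntu):

# echo 0000:03:00.2 > /sys/bus/pci/drivers/mlx5_core/unbind
# devlink dev eswitch set pci/0000:03:00.0 mode switchdev
# ethtool -K enp3s0f0 hw-tc-offload on
# ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
# systemctl restart openvswitch
# echo 0000:03:00.2 > /sys/bus/pci/drivers/mlx5_core/bind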
Note: The flow aging time of OVS is given in milliseconds and can be controlled with:
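For example, a hedged sketch setting the maximum flow idle time to 30 seconds:

# ovs-vsctl set Open_vSwitch . other_config:max-idle=30000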
[ml2]
tenant_network_types = vlan
type_drivers = vlan
mechanism_drivers = openvswitch
[DEFAULT]
core_plugin = ml2
[filter_scheduler]
enabled_filters = PciPassthroughFilter
[pci]
#VLAN Configuration passthrough_whitelist example
passthrough_whitelist = {"address":"*:03:00.*","physical_network":"physnet2"}
[ml2]
tenant_network_types = vxlan
type_drivers = vxlan
mechanism_drivers = openvswitch
[DEFAULT]
core_plugin = ml2
[filter_scheduler]
enabled_filters = PciPassthroughFilter
[pci]
#VXLAN Configuration passthrough_whitelist example
passthrough_whitelist = {"address":"*:03:00.*","physical_network":null}
Note: In this example we will bring up two instances on different Compute nodes and send
ICMP echo packets between them. Then we will check TCP packets on a representor port
and we will see that only the first packet will be shown there. All the rest will be offloaded.
Note: In this example, we used Mellanox Image with NIC Drivers that can be downloaded from
http://www.mellanox.com/repository/solutions/openstack/images/mellanox_eth.img
3. Repeat steps above and create a second instance on Second Compute Node
# openstack port create --network private --vnic-type=direct \
  --binding-profile '{"capabilities": ["switchdev"]}' direct_port2
# openstack server create --flavor m1.small --image mellanox_fedora \
  --nic port-id=direct_port2 vm2
Note: You can use the --availability-zone nova:compute_node_1 option to set the desired
Compute Node.
5. Connect to Second Compute Node and find representor port of the instance
compute_node2# ls -l /sys/class/net/
lrwxrwxrwx 1 root root 0 Sep 11 10:54 eth0 -> ../../devices/virtual/net/eth0
6. Check traffic on the representor port. Verify that only the first ICMP packet appears.
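A hedged example, using the representor port eth3 from this guide:

compute_node2# tcpdump -nnn -i eth3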
Historically, Open vSwitch (OVS) could not interact directly with iptables to implement security groups.
Thus, the OVS agent and Compute service use a Linux bridge between each instance (VM) and the OVS
integration bridge br-int to implement security groups. The Linux bridge device contains the iptables
rules pertaining to the instance. In general, additional components between instances and physical net-
work infrastructure cause scalability and performance problems. To alleviate such problems, the OVS
agent includes an optional firewall driver that natively implements security groups as flows in OVS rather
than the Linux bridge device and iptables. This increases scalability and performance.
L2 agents can be configured to use differing firewall drivers. There is no requirement that they all be the
same. If an agent lacks a firewall driver configuration, it will default to what is configured on its server.
This also means there is no requirement that the server has any firewall driver configured at all, as long
as the agents are configured correctly.
Prerequisites
The native OVS firewall implementation requires kernel and user space support for conntrack, thus
requiring minimum versions of the Linux kernel and Open vSwitch. All cases require Open vSwitch
version 2.5 or newer.
• Kernel version 4.3 or newer includes conntrack support.
• Kernel version 3.3, but less than 4.3, does not include conntrack support and requires building the
OVS modules.
• On nodes running the Open vSwitch agent, edit the openvswitch_agent.ini file and enable
the firewall driver.
[securitygroup]
firewall_driver = openvswitch
For more information, see the Open vSwitch Firewall Driver and the video.
If GRE tunnels from VM to VM are going to be used, the native OVS firewall implementation requires
nf_conntrack_proto_gre module to be loaded in the kernel on nodes running the Open vSwitch
agent. It can be loaded with the command:
# modprobe nf_conntrack_proto_gre
Some Linux distributions have files that can be used to automatically load kernel modules at boot time,
for example, /etc/modules. Check with your distribution for further information.
This is not necessary in order to use the gre tunnel network type in Neutron.
Both OVS and iptables firewall drivers should always behave in the same way if the same rules are
configured for the security group. But in some cases that is not true and there may be slight differences
between those drivers.
References
QoS is defined as the ability to guarantee certain network requirements like bandwidth, latency, jitter,
and reliability in order to satisfy a Service Level Agreement (SLA) between an application provider and
end users.
Network devices such as switches and routers can mark traffic so that it is handled with a higher priority
to fulfill the QoS conditions agreed under the SLA. In other cases, certain network traffic such as Voice
over IP (VoIP) and video streaming needs to be transmitted with minimal bandwidth constraints. On a
system without network QoS management, all traffic will be transmitted in a best-effort manner making
it impossible to guarantee service delivery to customers.
QoS is an advanced service plug-in. QoS is decoupled from the rest of the OpenStack Networking code
on multiple levels and it is available through the ml2 extension driver.
Details about the DB models, API extension, and use cases are out of the scope of this guide but can be
found in the Neutron QoS specification.
QoS supported rule types are now available as VALID_RULE_TYPES in QoS rule types:
• bandwidth_limit: Bandwidth limitations on networks, ports or floating IPs.
• dscp_marking: Marking network traffic with a DSCP value.
• minimum_bandwidth: Minimum bandwidth constraints on certain types of traffic.
Any QoS driver can claim support for some QoS rule types by providing a driver property called
supported_rules; the QoS driver manager dynamically recalculates the rule types that the loaded
QoS drivers support.
The following table shows the Networking back ends, QoS supported rules, and traffic directions (from
the VM point of view).
1. https://bugs.launchpad.net/neutron/+bug/1460741
2. https://bugs.launchpad.net/neutron/+bug/1896587
3. https://bugs.launchpad.net/neutron/+bug/1889631
Note:
(1) Max burst parameter is skipped because it is not supported by the IP tool.
(2) Placement based enforcement works for both egress and ingress directions, but dataplane enforce-
ment depends on the backend.
Note:
(1) Since Newton
(2) Since Stein
In the simplest case, the property can be represented by a simple Python list defined on the class.
For an ml2 plug-in, the list of supported QoS rule types and parameters is defined as a common subset of
rules supported by all active mechanism drivers. A QoS rule is always attached to a QoS policy. When
a rule is created or updated:
• The QoS plug-in will check if this rule and parameters are supported by any active mechanism
driver if the QoS policy is not attached to any port or network.
• The QoS plug-in will check if this rule and parameters are supported by the mechanism drivers
managing those ports if the QoS policy is attached to any port or network.
Valid DSCP mark values are even numbers between 0 and 56, except 2-6, 42, 44, and 50-54. The full
list of valid DSCP marks is:
0, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40, 46, 48, 56
L3 QoS support
The Neutron L3 services have implemented their own QoS extensions. Currently only bandwidth limit
QoS is provided. This is the L3 QoS extension list:
• Floating IP bandwidth limit: the rate limit is applied per floating IP address independently.
• Gateway IP bandwidth limit: the rate limit is applied in the router namespace gateway port (or in
the SNAT namespace in case of DVR edge router). The rate limit applies to the gateway IP; that
means all traffic using this gateway IP will be limited. This rate limit does not apply to the floating
IP traffic.
L3 services that provide QoS extensions:
• L3 router: implements the rate limit using Linux TC.
• OVN L3: implements the rate limit using the OVN QoS metering rules.
The following table shows the L3 service, the QoS supported extension, and traffic directions (from the
VM point of view) for bandwidth limiting.
Configuration
To enable the service on a cloud with the architecture described in Networking architecture, follow the
steps below:
On the controller nodes:
1. Add the QoS service to the service_plugins setting in /etc/neutron/neutron.
conf. For example:
service_plugins = router,metering,qos
service_plugins = router,qos
In /etc/neutron/plugins/ml2/ml2_conf.ini, add qos to the extension_drivers setting
in the [ml2] section. For example:

[ml2]
extension_drivers = port_security,qos
5. Edit the configuration file for the agent you are using and set the extensions to include qos in
the [agent] section of the configuration file. The agent configuration file will reside in /etc/
neutron/plugins/ml2/<agent_name>_agent.ini where agent_name is the name
of the agent being used (for example openvswitch). For example:
[agent]
extensions = qos
[agent]
extensions = qos
2. Optionally, in order to enable QoS for floating IPs, set the extensions option in the [agent]
section of /etc/neutron/l3_agent.ini to include fip_qos. If dvr is enabled, this has
to be done for all the L3 agents. For example:
[agent]
extensions = fip_qos
Note: Since the Stein release, floating IPs associated with a neutron port or with port forwarding can
all have bandwidth limits. These neutron server side and agent side extension settings enable it for both.
1. Optionally, in order to enable QoS for router gateway IPs, set the extensions option in the
[agent] section of /etc/neutron/l3_agent.ini to include gateway_ip_qos. Set
this to all the dvr_snat or legacy L3 agents. For example:
[agent]
extensions = gateway_ip_qos
The gateway_ip_qos extension should work together with fip_qos in the L3 agent for centralized
routers; then all L3 IPs with a bound QoS policy can be limited under the QoS bandwidth limit
rules:
[agent]
extensions = fip_qos, gateway_ip_qos
2. As rate limits do not work on Open vSwitch's internal ports, optionally, as a workaround, to
make QoS bandwidth limits work on router gateway ports, set ovs_use_veth to True in the
DEFAULT section in /etc/neutron/l3_agent.ini
[DEFAULT]
ovs_use_veth = True
Note: QoS currently works with ml2 only (SR-IOV, Open vSwitch, and linuxbridge are drivers enabled
for QoS).
When using overlay networks (e.g., VxLAN), the DSCP marking rule only applies to the inner header,
and during encapsulation, the DSCP mark is not automatically copied to the outer header.
1. In order to set the DSCP value of the outer header, modify the dscp configuration option in /
etc/neutron/plugins/ml2/<agent_name>_agent.ini where <agent_name> is
the name of the agent being used (e.g., openvswitch):
[agent]
dscp = 8
2. In order to copy the DSCP field of the inner header to the outer header, change
the dscp_inherit configuration option to true in /etc/neutron/plugins/ml2/
<agent_name>_agent.ini where <agent_name> is the name of the agent being used
(e.g., openvswitch):
[agent]
dscp_inherit = true
If the dscp_inherit option is set to true, the previous dscp option is overwritten.
If projects are trusted to administer their own QoS policies in your cloud, neutron's policy.yaml file
can be modified to allow this.
Modify /etc/neutron/policy.yaml policy entries as follows:
"get_policy": "rule:regular_user",
"create_policy": "rule:regular_user",
"update_policy": "rule:regular_user",
"delete_policy": "rule:regular_user",
"get_rule_type": "rule:regular_user",
"get_policy_bandwidth_limit_rule": "rule:regular_user",
"create_policy_bandwidth_limit_rule": "rule:regular_user",
"delete_policy_bandwidth_limit_rule": "rule:regular_user",
"update_policy_bandwidth_limit_rule": "rule:regular_user",
"get_policy_dscp_marking_rule": "rule:regular_user",
"create_dscp_marking_rule": "rule:regular_user",
"delete_dscp_marking_rule": "rule:regular_user",
"update_dscp_marking_rule": "rule:regular_user",
"get_policy_minimum_bandwidth_rule": "rule:regular_user",
"create_policy_minimum_bandwidth_rule": "rule:regular_user",
"delete_policy_minimum_bandwidth_rule": "rule:regular_user",
"update_policy_minimum_bandwidth_rule": "rule:regular_user",
User workflow
QoS policies are only created by admins with the default policy.yaml. Therefore, you should have
the cloud operator set them up on behalf of the cloud projects.
If projects are trusted to create their own policies, check the trusted projects policy.yaml configura-
tion section.
First, create a QoS policy and its bandwidth limit rule:
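A minimal sketch, assuming the policy name bw-limiter and illustrative limit values:

$ openstack network qos policy create bw-limiter
$ openstack network qos rule create --type bandwidth-limit --max-kbps 3000 \
  --max-burst-kbits 2400 --egress bw-limiter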
Note: The QoS implementation requires a burst value to ensure proper behavior of bandwidth limit rules
in the Open vSwitch and Linux bridge agents. Configuring the proper burst value is very important. If the
burst value is set too low, bandwidth usage will be throttled even with a proper bandwidth limit setting.
This issue is discussed in various documentation sources, for example in Juniper's documentation. For
TCP traffic it is recommended to set the burst value to 80% of the desired bandwidth limit value. For
example, if the bandwidth limit is set to 1000kbps then a sufficient burst value would be 800kbit. If the
configured burst value is too low, the achieved bandwidth limit will be lower than expected. If the
configured burst value is too high, too few packets will be limited and the achieved bandwidth limit
will be higher than expected. If you do not provide a value, it defaults to 80% of the bandwidth limit,
which works for typical TCP traffic.
Second, associate the created policy with an existing neutron port. In order to do this, the user extracts
the ID of the port to be associated with the already created policy. In the next example, we will assign the
bw-limiter policy to the VM with IP address 192.0.2.1.
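A hedged sketch of this association, where PORT_ID is the ID found for the 192.0.2.1 fixed IP:

$ openstack port list --fixed-ip ip-address=192.0.2.1
$ openstack port set --qos-policy bw-limiter PORT_ID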
In order to detach a port from the QoS policy, simply update the port configuration again.
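A hedged example of the detach operation:

$ openstack port unset --qos-policy PORT_ID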
| binding_profile       |                                      |
| binding_vif_details   |                                      |
| binding_vif_type      | unbound                              |
You can attach networks to a QoS policy. The meaning of this is that any compute port connected to
the network will use the network policy by default unless the port has a specific policy attached to it.
Internal network owned ports like DHCP and internal router ports are excluded from network policy
application.
In order to attach a QoS policy to a network, update an existing network, or initially create the network
attached to the policy.
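A hedged example of both approaches, assuming network names my-network and my-new-network:

$ openstack network set --qos-policy bw-limiter my-network
$ openstack network create --qos-policy bw-limiter my-new-network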
The created policy can be associated with an existing floating IP. In order to do this, the user extracts the
ID of the floating IP to be associated with the already created policy. In the next example, we will assign
the bw-limiter policy to the floating IP address 172.16.100.18.
In order to detach a floating IP from the QoS policy, simply update the floating IP configuration.
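A hedged sketch of attaching and later detaching the policy, where FLOATING_IP_ID is the ID of the
172.16.100.18 floating IP:

$ openstack floating ip set --qos-policy bw-limiter FLOATING_IP_ID
$ openstack floating ip unset --qos-policy FLOATING_IP_ID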
The QoS bandwidth limit rules attached to a floating IP will become active when you associate the
latter with a port. For example, to associate the previously created floating IP 172.16.100.12 to the
instance port with uuid a7f25e73-4288-4a16-93b9-b71e6fd00862 and fixed IP 192.168.
222.5:
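A hedged example of this association, using the IDs given above:

$ openstack floating ip set --port a7f25e73-4288-4a16-93b9-b71e6fd00862 \
  --fixed-ip-address 192.168.222.5 172.16.100.12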
Note: The QoS policy attached to a floating IP is not applied to a port; it is applied to the associated
floating IP only. Thus the ID of the QoS policy attached to a floating IP will not be visible in a port's
qos_policy_id field after associating a floating IP to the port. It is only visible in the floating IP
attributes.
Note: For now, the L3 agent floating IP QoS extension only supports bandwidth_limit rules.
Other rule types (like DSCP marking) will be silently ignored for floating IPs. A QoS policy that does
not contain any bandwidth_limit rules will have no effect when attached to a floating IP.
If a floating IP is bound to a port, and both have QoS bandwidth rules bound to them, the L3 agent floating
IP QoS extension ignores the port QoS behavior and installs the rules from the QoS policy associated
with the floating IP on the appropriate device in the router namespace.
Each project can have at most one default QoS policy, although it is not mandatory. If a default QoS
policy is defined, all new networks created within this project will have this policy assigned, as long as
no other QoS policy is explicitly attached during the creation process. If the default QoS policy is unset,
no change to existing networks will be made.
In order to set a QoS policy as default, the parameter --default must be used. To unset this QoS
policy as default, the parameter --no-default must be used.
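A hedged example, assuming an existing policy named bw-limiter:

$ openstack network qos policy set --default bw-limiter
$ openstack network qos policy set --no-default bw-limiter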
Administrator enforcement
Administrators are able to enforce policies on project ports or networks. As long as the policy is not
shared, the project is not able to detach any policy attached to a network or port.
If the policy is shared, the project is able to attach or detach such policy from its own ports and networks.
Rule modification
You can modify rules at runtime. Rule modifications will be propagated to any attached port.
Just like with bandwidth limiting, create a policy for DSCP marking rule:
You can create, update, list, delete, and show DSCP markings with the neutron client:
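A minimal sketch using the openstack client, assuming a policy named dscp-marking and an
illustrative DSCP mark of 26:

$ openstack network qos policy create dscp-marking
$ openstack network qos rule create --type dscp-marking --dscp-mark 26 dscp-marking
$ openstack network qos rule list dscp-marking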
A policy with a minimum bandwidth ensures best efforts are made to provide no less than the specified
bandwidth to each port on which the rule is applied. However, as this feature is not yet integrated with
the Compute scheduler, minimum bandwidth cannot be guaranteed.
It is also possible to combine several rules in one policy, as long as the type or direction of each rule
is different. For example, you can specify two bandwidth-limit rules, one with egress and one
with ingress direction.
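A hedged example on an assumed policy named combined-policy:

$ openstack network qos rule create --type bandwidth-limit --max-kbps 50000 \
  --egress combined-policy
$ openstack network qos rule create --type bandwidth-limit --max-kbps 10000 \
  --ingress combined-policy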
Most Networking Quality of Service (QoS) features are implemented solely by OpenStack Neutron and
they are already documented in the QoS configuration chapter of the Networking Guide. Some more
complex QoS features necessarily involve the scheduling of a cloud server, therefore their implementa-
tion is shared between OpenStack Nova, Neutron and Placement. As of the OpenStack Stein release, the
Guaranteed Minimum Bandwidth feature is one of the latter.
This Networking Guide chapter does not aim to replace Nova or Placement documentation in any way,
but it still hopes to give an overall OpenStack-level guide to understanding and configuring a deployment
to use the Guaranteed Minimum Bandwidth feature.
A guarantee of minimum available bandwidth can be enforced on two levels:
• Scheduling a server on a compute host where the bandwidth is available. To be more precise:
scheduling one or more ports of a server on a compute host's physical network interfaces where
the bandwidth is available.
• Queueing network packets on a physical network interface to provide the guaranteed bandwidth.
In short the enforcement has two levels:
• (server) placement and
• data plane.
Since the data plane enforcement is already documented in the QoS chapter, here we only document the
placement-level enforcement.
Limitations
• A pre-created port with a minimum-bandwidth rule must be passed when booting a server
(openstack server create). Passing a network with a minimum-bandwidth rule at boot
is not supported because of technical reasons (in this case the port is created too late for Neutron
to affect scheduling).
• Bandwidth guarantees for ports can only be requested on networks backed by a physical network
(physnet).
• In Stein there is no support for networks with multiple physnets. However some simpler multi-
segment networks are still supported:
– Networks with multiple segments all having the same physnet name.
– Networks with only one physnet segment (the other segments being tunneled segments).
• If you mix ports with and without bandwidth guarantees on the same physical interface then the
ports without a guarantee may starve. Therefore mixing them is not recommended. Instead it is
recommended to separate them by Nova host aggregates.
• Changing the guarantee of a QoS policy (adding/deleting a minimum_bandwidth rule, or
changing the min_kbps field of a minimum_bandwidth rule) is only possible while the pol-
icy is not in effect, that is, while ports with the QoS policy are not yet used by Nova. Requests to change
guarantees of in-use policies are rejected.
• Changing the QoS policy of a port to one with new minimum_bandwidth rules changes place-
ment allocations from the Victoria release. If the VM was booted with a port without a QoS
policy and minimum_bandwidth rules, the port update succeeds but the placement allocations
do not change. The same is true if the port has no binding:profile, and thus no placement
allocation record exists for it. But if the VM was booted with a port with a QoS policy and
minimum_bandwidth rules, the update is possible and the allocations are changed in Place-
ment as well.
Note: As it is possible to update a port to remove the QoS policy, updating it back to have QoS
policy with minimum_bandwidth rule will not result in placement allocation record, only
the dataplane enforcement will happen.
Note: Updating the minimum_bandwidth rule of a QoS policy that is attached to a port which is
bound to a VM is still not possible.
• The first data-plane-only Guaranteed Minimum Bandwidth implementation (for SR-IOV egress
traffic) was released in the Newton release of Neutron. Because of the known lack of placement-
level enforcement it was marked as best effort (5th bullet point). Since placement-level enforce-
ment was not implemented bandwidth may have become overallocated and the system level re-
source inventory may have become inconsistent. Therefore for users of the data-plane-only im-
plementation a migration/healing process is mandatory (see section On Healing of Allocations)
to bring the system level resource inventory to a consistent state. Further operations that would
reintroduce inconsistency (e.g. migrating a server with minimum_bandwidth QoS rule, but no
resource allocation in Placement) are rejected now in a backward-incompatible way.
• The Guaranteed Minimum Bandwidth feature is not complete in the Stein release. Not all Nova
server lifecycle operations can be executed on a server with bandwidth guarantees. Since Stein
(Nova API microversion 2.72+) you can boot and delete a server with a guarantee and detach a
port with a guarantee. Since Train you can also migrate and resize a server with a guarantee.
Support for further server move operations (for example evacuate, live-migrate and unshelve after
shelve-offload) is to be implemented later. For the definitive documentation please refer to the
Port with Resource Request chapter of the OpenStack Compute API Guide.
• If an SR-IOV physical function is configured for use by the neutron-openvswitch-agent, and the
same physical function's virtual functions are configured for use by the neutron-sriov-agent, then
the available bandwidth must be statically split between the corresponding resource providers by
administrative choice. For example a 10 Gbps SR-IOV capable physical NIC could be treated as
two independent NICs - a 5 Gbps NIC (technically the physical function of the NIC) added to
an Open vSwitch bridge, and another 5 Gbps NIC whose virtual functions can be handed out to
servers by neutron-sriov-agent.
Placement pre-requisites
Placement must support microversion 1.29. This was first released in Rocky.
Nova pre-requisites
Nova must support microversion 2.72. This was first released in Stein.
Not all Nova virt drivers are supported, please refer to the Virt Driver Support section of the Nova Admin
Guide.
Neutron pre-requisites
In release Stein the following agent-based ML2 mechanism drivers are supported:
• Open vSwitch (openvswitch) vnic_types: normal, direct
• SR-IOV (sriovnicswitch) vnic_types: direct, macvtap
neutron-server config
The placement service plugin synchronizes the agents' resource provider information from neutron-
server to Placement.
Since neutron-server talks to Placement you need to configure how neutron-server should find Placement
and authenticate to it.
/etc/neutron/neutron.conf (on controller nodes):
[DEFAULT]
service_plugins = placement,...
auth_strategy = keystone
[placement]
auth_type = password
auth_url = https://controller/identity
password = secret
project_domain_name = Default
project_name = service
user_domain_name = Default
username = placement
[ovs_driver]
vnic_type_prohibit_list = direct
[sriov_driver]
#vnic_type_prohibit_list = direct
neutron-openvswitch-agent config
Set the agent configuration as the authentic source of the resources available. Set it on a per-bridge basis
by ovs.resource_provider_bandwidths. The format is: bridge:egress:ingress,..
. You may set only one direction and omit the other.
Note: egress / ingress is meant from the perspective of a cloud server. That is egress = cloud
server upload, ingress = download.
Egress and ingress available bandwidth values are in kilobit/sec (kbps).
If desired, resource provider inventory fields can be tweaked on a per-agent basis by setting ovs.
resource_provider_inventory_defaults. Valid values are all the optional parameters of
the update resource provider inventory call.
/etc/neutron/plugins/ml2/ovs_agent.ini (on compute and network nodes):
[ovs]
bridge_mappings = physnet0:br-physnet0,...
resource_provider_bandwidths = br-physnet0:10000000:10000000,...
#resource_provider_inventory_defaults = step_size:1000,...
neutron-sriov-agent config
[sriov_nic]
physical_device_mappings = physnet0:ens5,physnet0:ens6,...
# as admin
$ openstack network agent list --agent-type open-vswitch --host devstack0
+--------------------------------------+--------------------+-----------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host      | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+-----------+-------------------+-------+-------+---------------------------+
| 5e57b85f-b017-419a-8745-9c406e149f9e | Open vSwitch agent | devstack0 | None              | :-)   | UP    | neutron-openvswitch-agent |
+--------------------------------------+--------------------+-----------+-------------------+-------+-------+---------------------------+
Re-reading the resource related subset of configuration on SIGHUP is not implemented. The agent must
be restarted to pick up and send changed configuration.
Neutron-server propagates the information further to Placement for the resources of each agent via
Placement's HTTP REST API. To avoid overloading Placement, this synchronization generally does not
happen on every received heartbeat message. Instead the re-synchronization of the resources of one
agent is triggered by:
• The creation of a network agent record (as queried by openstack network agent list).
Please note that deleting an agent record and letting the next heartbeat to re-create it can be used
to trigger synchronization without restarting an agent.
• The restart of that agent (technically start_flag being present in the heartbeat message).
# as admin
$ openstack network agent show -f value -c resources_synced \
  5e57b85f-b017-419a-8745-9c406e149f9e
True
Sample usage
Physnets and QoS policies (together with their rules) are usually pre-created by a cloud admin:
# as admin
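A minimal sketch of what the admin might pre-create; the network, subnet and policy names and the
bandwidth values are illustrative, and physnet0 follows the agent configuration above:

$ openstack network create net0 --provider-network-type vlan \
  --provider-physical-network physnet0 --provider-segment 100
$ openstack subnet create subnet0 --network net0 --subnet-range 10.0.4.0/24
$ openstack network qos policy create policy0
$ openstack network qos rule create policy0 --type minimum-bandwidth \
  --min-kbps 1000000 --egress
$ openstack network qos rule create policy0 --type minimum-bandwidth \
  --min-kbps 1000000 --ingress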
Then a normal user can use the pre-created policy to create ports and boot servers with those ports:
# as an unprivileged user
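A hedged sketch of the user step, reusing the port name port-normal-qos that appears in the
Debugging section below; the flavor and image names are assumptions:

$ openstack port create port-normal-qos --network net0 --qos-policy policy0
$ openstack server create server0 --flavor m1.small --image IMAGE_NAME \
  --nic port-id=port-normal-qos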
On Healing of Allocations
Since Placement carries a global view of a cloud deployment's resources (what is available, what is used)
it may in some conditions get out of sync with reality.
One important case is when the data-plane-only Minimum Guaranteed Bandwidth feature was used be-
fore Stein (first released in Newton). Since before Stein guarantees were not enforced during server
placement, the available resources may have become overallocated without notice. In this case Place-
ment's view and the reality of resource usage should be made consistent during/after an upgrade to Stein.
Another case stems from OpenStack not having distributed transactions to allocate resources provided
by multiple OpenStack components (here Nova and Neutron). There are known race conditions in which
Placement's view may get out of sync with reality. The design knowingly minimizes the race condition
windows, but there are known problems:
• If a QoS policy is modified after Nova reads a port's resource_request but before the port is
bound, its state before the modification will be applied.
• If a bound port with a resource allocation is deleted, the port's allocation is leaked. https://bugs.
launchpad.net/nova/+bug/1820588
Note: Deleting a bound port has no known use case. Please consider detaching the interface first, for
example with openstack server remove port.
Debugging
# as admin
$ openstack --os-placement-api-version 1.17 trait list | awk '/CUSTOM_/ { print $2 }' | sort
CUSTOM_PHYSNET_PHYSNET0
CUSTOM_VNIC_TYPE_DIRECT
CUSTOM_VNIC_TYPE_DIRECT_PHYSICAL
CUSTOM_VNIC_TYPE_MACVTAP
CUSTOM_VNIC_TYPE_NORMAL
• Do the physical network interface resource providers have the proper trait associations and inven-
tories?
# as admin
$ openstack --os-placement-api-version 1.17 resource provider trait list RP-UUID
$ openstack --os-placement-api-version 1.17 resource provider inventory list RP-UUID
# as admin
$ openstack port show port-normal-qos | grep resource_request
# as admin
$ openstack --os-placement-api-version 1.17 resource provider allocation show SERVER-UUID
• Does the allocation have a part on the expected physical network interface resource provider?
# as admin
$ openstack --os-placement-api-version 1.17 resource provider show --allocations RP-UUID
• Did placement manage to produce an allocation candidate list to nova during scheduling?
# as admin
$ openstack port show port-normal-qos | grep binding.profile.*allocation
Links
The Role-Based Access Control (RBAC) policy framework enables both operators and users to grant
access to resources for specific projects.
Currently, the access that can be granted using this feature is supported by:
• Regular port creation permissions on networks (since Liberty).
• Binding QoS policies permissions to networks or ports (since Mitaka).
• Attaching router gateways to networks (since Mitaka).
• Binding security groups to ports (since Stein).
• Assigning address scopes to subnet pools (since Ussuri).
• Assigning subnet pools to subnets (since Ussuri).
• Assigning address groups to security group rules (since Wallaby).
Sharing an object with a specific project is accomplished by creating a policy entry that permits the
target project the access_as_shared action on that object.
Create the policy entry using the openstack network rbac create command (in this example,
the ID of the project we want to share with is b87b2fc13e0248a4a031d38e06dc191d):
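A hedged example of this command, where NETWORK_ID is a placeholder for the ID of the network
being shared:

$ openstack network rbac create --target-project \
  b87b2fc13e0248a4a031d38e06dc191d --action access_as_shared \
  --type network NETWORK_ID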
The target-project parameter specifies the project that requires access to the network. The
action parameter specifies what the project is allowed to do. The type parameter says that the
target object is a network. The final parameter is the ID of the network we are granting access to.
Project b87b2fc13e0248a4a031d38e06dc191d will now be able to see the network when run-
ning openstack network list and openstack network show and will also be able to cre-
ate ports on that network. No other users (other than admins and the owner) will be able to see the
network.
To remove access for that project, delete the policy that allows it using the openstack network
rbac delete command:
If that project has ports on the network, the server will prevent the policy from being deleted until the
ports have been deleted:
This process can be repeated any number of times to share a network with an arbitrary number of
projects.
Create the RBAC policy entry using the openstack network rbac create command (in this
example, the ID of the project we want to share with is be98b82f8fdf46b696e9e01cebc33fd9):
The target-project parameter specifies the project that requires access to the QoS policy. The
action parameter specifies what the project is allowed to do. The type parameter says that the target
object is a QoS policy. The final parameter is the ID of the QoS policy we are granting access to.
Project be98b82f8fdf46b696e9e01cebc33fd9 will now be able to see the QoS policy
when running openstack network qos policy list and openstack network qos
policy show and will also be able to bind it to its ports or networks. No other users (other than
admins and the owner) will be able to see the QoS policy.
To remove access for that project, delete the RBAC policy that allows it using the openstack
network rbac delete command:
If that project has ports or networks with the QoS policy applied to them, the server will not delete the
RBAC policy until the QoS policy is no longer in use:
This process can be repeated any number of times to share a qos-policy with an arbitrary number of
projects.
Create the RBAC policy entry using the openstack network rbac create command (in this
example, the ID of the project we want to share with is 32016615de5d43bb88de99e7f2e26a1e):
The target-project parameter specifies the project that requires access to the security group. The
action parameter specifies what the project is allowed to do. The type parameter says that the target
object is a security group. The final parameter is the ID of the security group we are granting access to.
Project 32016615de5d43bb88de99e7f2e26a1e will now be able to see the security group when
running openstack security group list and openstack security group show
and will also be able to bind it to its ports. No other users (other than admins and the owner) will
be able to see the security group.
To remove access for that project, delete the RBAC policy that allows it using the openstack
network rbac delete command:
If that project has ports with the security group applied to them, the server will not delete the RBAC
policy until the security group is no longer in use:
This process can be repeated any number of times to share a security-group with an arbitrary number of
projects.
Create the RBAC policy entry using the openstack network rbac create command (in this
example, the ID of the project we want to share with is 32016615de5d43bb88de99e7f2e26a1e):
The target-project parameter specifies the project that requires access to the address scope. The
action parameter specifies what the project is allowed to do. The type parameter says that the target
object is an address scope. The final parameter is the ID of the address scope we are granting access to.
Project 32016615de5d43bb88de99e7f2e26a1e will now be able to see the address scope when
running openstack address scope list and openstack address scope show and
will also be able to assign it to its subnet pools. No other users (other than admins and the owner)
will be able to see the address scope.
To remove access for that project, delete the RBAC policy that allows it using the openstack
network rbac delete command:
If that project has subnet pools with the address scope applied to them, the server will not delete the
RBAC policy until the address scope is no longer in use:
$ openstack network rbac delete d54b1482-98c4-44aa-9115-ede80387ffe0
RBAC policy on object c19cb654-3489-4160-9c82-8a3015483643
cannot be removed because other objects depend on it.
This process can be repeated any number of times to share an address scope with an arbitrary number of
projects.
Create the RBAC policy entry using the openstack network rbac create command (in this
example, the ID of the project we want to share with is 32016615de5d43bb88de99e7f2e26a1e):
$ openstack network rbac create --target-project \
32016615de5d43bb88de99e7f2e26a1e --action access_as_shared \
--type subnetpool 11f79287-bc17-46b2-bfd0-2562471eb631
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| action | access_as_shared |
| id | d54b1482-98c4-44aa-9115-ede80387ffe0 |
| location | ... |
| name | None |
| object_id | 11f79287-bc17-46b2-bfd0-2562471eb631 |
| object_type | subnetpool |
| project_id | 290ccedbcf594ecc8e76eff06f964f7e |
+-------------------+--------------------------------------+
The target-project parameter specifies the project that requires access to the subnet pool. The
action parameter specifies what the project is allowed to do. The type parameter says that the target
object is a subnet pool. The final parameter is the ID of the subnet pool we are granting access to.
Project 32016615de5d43bb88de99e7f2e26a1e will now be able to see the subnet pool when
running openstack subnet pool list and openstack subnet pool show and will
also be able to assign it to its subnets. No other users (other than admins and the owner) will be able to
see the subnet pool.
To remove access for that project, delete the RBAC policy that allows it using the openstack
network rbac delete command:
If that project has subnets with the subnet pool applied to them, the server will not delete the RBAC
policy until the subnet pool is no longer in use:
This process can be repeated any number of times to share a subnet pool with an arbitrary number of
projects.
Create the RBAC policy entry using the openstack network rbac create command (in this
example, the ID of the project we want to share with is bbd82892525d4372911390b984ed3265):
The target-project parameter specifies the project that requires access to the address group. The
action parameter specifies what the project is allowed to do. The type parameter says that the target
object is an address group. The final parameter is the ID of the address group we are granting access to.
Project bbd82892525d4372911390b984ed3265 will now be able to see the address group when
running openstack address group list and openstack address group show and
will also be able to assign it to its security group rules. No other users (other than admins and the
owner) will be able to see the address group.
To remove access for that project, delete the RBAC policy that allows it using the openstack
network rbac delete command:
If that project has security group rules with the address group applied to them, the server will not delete
the RBAC policy until the address group is no longer in use:
This process can be repeated any number of times to share an address group with an arbitrary number of
projects.
As introduced in other guide entries, neutron provides a means of making an object (address-scope,
network, qos-policy, security-group, subnetpool) available to every project. This is
accomplished using the shared flag on the supported object:
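For example, a sketch using a network (the network name is a placeholder):
$ openstack network create --share shared-net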
This is the equivalent of creating a policy on the network that permits every project to perform the action
access_as_shared on that network. Neutron treats them as the same thing, so the policy entry for
that network should be visible using the openstack network rbac list command:
Use the openstack network rbac show command to see the details:
The output shows that the entry allows the action access_as_shared on object
84a7e627-573b-49da-af66-c9a65244f3ce of type network to target_tenant *, which is a
wildcard that represents all projects.
Currently, the shared flag is just a mapping to the underlying RBAC policies for a network. Setting the
flag to True on a network creates a wildcard RBAC entry. Setting it to False removes the wildcard
entry.
When you run openstack network list or openstack network show, the shared flag
is calculated by the server based on the calling project and the RBAC entries for each network. For
QoS objects use openstack network qos policy list or openstack network qos
policy show respectively. If there is a wildcard entry, the shared flag is always set to True.
If there are only entries that share with specific projects, only the projects the object is shared to will see
the flag as True and the rest will see the flag as False.
To make a network available as an external network for specific projects rather than all projects, use the
access_as_external action.
1. Create a network that you want to be available as an external network:
2. Create a policy entry using the openstack network rbac create com-
mand (in this example, the ID of the project we want to share with is
838030a7bf3c4d04b4b054c0f0b2b17c):
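A sketch of the command; NETWORK-UUID is a placeholder for the network created in step 1:
$ openstack network rbac create --target-project \
  838030a7bf3c4d04b4b054c0f0b2b17c --action access_as_external \
  --type network NETWORK-UUID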
The target-project parameter specifies the project that requires access to the network. The
action parameter specifies what the project is allowed to do. The type parameter indicates that
the target object is a network. The final parameter is the ID of the network we are granting external
access to.
Now project 838030a7bf3c4d04b4b054c0f0b2b17c is able to see the network when running
openstack network list and openstack network show and can attach router gateway
ports to that network. No other users (other than admins and the owner) are able to see the network.
To remove access for that project, delete the policy that allows it using the openstack network
rbac delete command:
If that project has router gateway ports attached to that network, the server prevents the policy from
being deleted until the ports have been deleted:
This process can be repeated any number of times to make a network available as external to an arbitrary
number of projects.
If a network is marked as external during creation, a wildcard RBAC policy granting everyone access is
now implicitly created for it, to preserve the behavior that existed before this feature was added.
In the output of such a network's creation, the standard router:external attribute is External as
expected, and a wildcard policy is visible in the RBAC policy listings.
You can modify or delete this policy with the same constraints as any other RBAC
access_as_external policy.
The default policy.yaml file will not allow regular users to share objects with every other project
using a wildcard; however, it will allow them to share objects with specific project IDs.
If an operator wants to prevent normal users from doing this, the "create_rbac_policy": entry
in policy.yaml can be adjusted from "" to "rule:admin_only".
Note: Use of this feature requires the OpenStack client version 3.3 or newer.
Before routed provider networks, the Networking service could not present a multi-segment layer-3
network as a single entity. Thus, each operator typically chose one of the following architectures:
• Single large layer-2 network
• Multiple smaller layer-2 networks
Single large layer-2 networks become complex at scale and involve significant failure domains.
Multiple smaller layer-2 networks scale better and shrink failure domains, but leave network selection
to the user. Without additional information, users cannot easily differentiate these networks.
A routed provider network enables a single provider network to represent multiple layer-2 networks
(broadcast domains) or segments and enables the operator to present one network to users. However, the
particular IP addresses available to an instance depend on the segment of the network available on the
particular compute node. A Neutron port can be associated with only one network segment, but there is
an exception for OVN distributed services such as OVN Metadata.
Similar to conventional networking, layer-2 (switching) handles transit of traffic between ports on the
same segment and layer-3 (routing) handles transit of traffic between segments.
Each segment requires at least one subnet that explicitly belongs to that segment. The association be-
tween a segment and a subnet distinguishes a routed provider network from other types of networks.
The Networking service enforces that either zero or all subnets on a particular network associate with a
segment. For example, attempting to create a subnet without a segment on a network containing subnets
with segments generates an error.
The Networking service does not provide layer-3 services between segments. Instead, it relies on phys-
ical network infrastructure to route subnets. Thus, both the Networking service and physical network
infrastructure must contain configuration for routed provider networks, similar to conventional provider
networks. In the future, implementation of dynamic routing protocols may ease configuration of routed
networks.
Prerequisites
Routed provider networks require additional prerequisites over conventional provider networks. We
recommend using the following procedure:
1. Begin with segments. The Networking service defines a segment using the following components:
• Unique physical network name
• Segmentation type
• Segmentation ID
For example, provider1, VLAN, and 2016. See the API reference for more information.
Within a network, use a unique physical network name for each segment which enables reuse of
the same segmentation details between subnets. For example, using the same VLAN ID across
all segments of a particular provider network. Similar to conventional provider networks, the
operator must provision the layer-2 physical network infrastructure accordingly.
2. Implement routing between segments.
The Networking service does not provision routing among segments. The operator must imple-
ment routing among segments of a provider network. Each subnet on a segment must contain the
gateway address of the router interface on that particular subnet. For example:
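For example, using the subnets created later in this guide (the gateway addresses are assumed to be the physical routers' interfaces on each segment):
• segment1: 203.0.113.0/24 with gateway 203.0.113.1, and fd00:203:0:113::/64 with gateway fd00:203:0:113::1
• segment2: 198.51.100.0/24 with gateway 198.51.100.1, and fd00:198:51:100::/64 with gateway fd00:198:51:100::1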
Note: Coordination between the Networking service and the Compute scheduler is not necessary
for IPv6 subnets as a consequence of their large address spaces.
Note: The coordination between the Networking service and the Compute scheduler requires the
following minimum API micro-versions.
• Compute service API: 2.41
• Placement API: 1.1
Example configuration
Controller node
1. Enable the segments service plug-in by appending segments to the list of service_plugins
in the neutron.conf file on all nodes running the neutron-server service:
[DEFAULT]
# ...
service_plugins = ...,segments
2. Add a placement section to the neutron.conf file with authentication credentials for the
Compute service placement API:
[placement]
www_authenticate_uri = http://192.0.2.72/identity
project_domain_name = Default
project_name = service
user_domain_name = Default
password = apassword
username = nova
auth_url = http://192.0.2.72/identity_admin
auth_type = password
region_name = RegionOne
• Configure the layer-2 agent on each node to map one or more segments to the appropriate physical
network bridge or interface and restart the agent.
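For example, a sketch for the Open vSwitch agent in openvswitch_agent.ini (the bridge names are placeholders):
[ovs]
bridge_mappings = provider1:br-provider1,provider2:br-provider2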
The following steps create a routed provider network with two segments. Each segment contains one
IPv4 subnet and one IPv6 subnet.
1. Source the administrative project credentials.
2. Create a VLAN provider network which includes a default segment. In this example, the network
uses the provider1 physical network with VLAN ID 2016.
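A sketch of the command; the network name multisegment1 matches the output shown in the following steps:
$ openstack network create --share \
  --provider-physical-network provider1 \
  --provider-network-type vlan --provider-segment 2016 \
  multisegment1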
3. Rename the default segment to segment1 so that it can be referenced by name in the steps below.
4. Create a second segment on the provider network. In this example, the segment uses the
provider2 physical network with VLAN ID 2017.
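Sketches for steps 3 and 4; the segment UUID is a placeholder obtained from openstack network segment list:
$ openstack network segment set --name segment1 SEGMENT-UUID
$ openstack network segment create \
  --physical-network provider2 \
  --network-type vlan --segment 2017 \
  --network multisegment1 segment2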
5. Verify that the network contains the segment1 and segment2 segments.
6. Create subnets on the segment1 segment. In this example, the IPv4 subnet uses 203.0.113.0/24
and the IPv6 subnet uses fd00:203:0:113::/64.
$ openstack subnet create \
--network multisegment1 --network-segment segment1 \
--ip-version 4 --subnet-range 203.0.113.0/24 \
multisegment1-segment1-v4
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| allocation_pools | 203.0.113.2-203.0.113.254 |
| cidr | 203.0.113.0/24 |
| enable_dhcp | True |
| gateway_ip | 203.0.113.1 |
| id | c428797a-6f8e-4cb1-b394-c404318a2762 |
| ip_version | 4 |
| name | multisegment1-segment1-v4 |
| network_id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 |
| revision_number | 1 |
| segment_id | 43e16869-ad31-48e4-87ce-acf756709e18 |
| tags | [] |
+-------------------+--------------------------------------+
Note: By default, IPv6 subnets on provider networks rely on physical network infrastructure for
stateless address autoconfiguration (SLAAC) and router advertisement.
7. Create subnets on the segment2 segment. In this example, the IPv4 subnet uses 198.51.100.0/24
and the IPv6 subnet uses fd00:198:51:100::/64.
$ openstack subnet create \
--network multisegment1 --network-segment segment2 \
--ip-version 4 --subnet-range 198.51.100.0/24 \
multisegment1-segment2-v4
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| allocation_pools | 198.51.100.2-198.51.100.254 |
| cidr | 198.51.100.0/24 |
| enable_dhcp | True |
| gateway_ip | 198.51.100.1 |
| id | 242755c2-f5fd-4e7d-bd7a-342ca95e50b2 |
| ip_version | 4 |
| name | multisegment1-segment2-v4 |
| network_id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 |
| revision_number | 1 |
| segment_id | 053b7925-9a89-4489-9992-e164c8cc8763 |
| tags | [] |
+-------------------+--------------------------------------+
8. Verify that each IPv4 subnet associates with at least one DHCP agent.
+--------------------------------------+------------+------+-------------------+-------+-------+---------------------+
| ID                                   | Agent Type | Host | Availability Zone | Alive | State | Binary              |
+--------------------------------------+------------+------+-------------------+-------+-------+---------------------+
9. Verify that inventories were created for each segment IPv4 subnet in the Compute service place-
ment API (for the sake of brevity, only one of the segments is shown in this example).
$ SEGMENT_ID=053b7925-9a89-4489-9992-e164c8cc8763
$ openstack resource provider inventory list $SEGMENT_ID
+----------------+------------------+----------+----------+-----------+----------+-------+
| resource_class | allocation_ratio | max_unit | reserved | step_size | min_unit | total |
+----------------+------------------+----------+----------+-----------+----------+-------+
10. Verify that host aggregates were created for each segment in the Compute service (for the sake of
brevity, only one of the segments is shown in this example).
+----+----------------------------------------------------------+-------------------+
| ID | Name                                                     | Availability Zone |
+----+----------------------------------------------------------+-------------------+
| 10 | Neutron segment id 053b7925-9a89-4489-9992-e164c8cc8763 | None              |
+----+----------------------------------------------------------+-------------------+
11. Launch one or more instances. Each instance obtains IP addresses according to the segment it
uses on the particular compute node.
Note: If a fixed IP is specified by the user in the port create request, that particular IP is allocated
immediately to the port. However, creating a port and passing it to an instance yields a different
behavior than conventional networks. If the fixed IP is not specified on the port create request,
the Networking service defers assignment of IP addresses to the port until the particular compute
node becomes apparent. For example:
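A sketch of this workflow (image and flavor names are placeholders); the port only receives its fixed IP addresses once the compute node, and therefore the segment, is known:
$ openstack port create --network multisegment1 port1
$ openstack server create --flavor m1.tiny --image cirros \
  --nic port-id=PORT1-UUID test-server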
Migration of existing non-routed networks is only possible if there is only one segment and one subnet
on the network. To migrate a candidate network, update the subnet, setting the ID of the existing network
segment as its segment_id.
Note: In the case where there are multiple subnets or segments it is not possible to safely migrate. The
reason for this is that in non-routed networks, addresses from the subnets' allocation pools are assigned
to ports without considering to which network segment the port is bound.
Example
The following steps migrate an existing non-routed network with one subnet and one segment to a routed
one.
1. Source the administrative project credentials.
2. Get the id of the current network segment on the network that is being migrated.
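The segment ID can be obtained and applied to the subnet with commands along these lines (network and subnet names are placeholders, and this assumes a client that supports --network-segment on subnet set):
$ openstack network segment list --network my-network
$ openstack subnet set --network-segment SEGMENT-UUID my-subnet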
6. Verify that the subnet is now associated with the desired network segment.
Service function chain (SFC) essentially refers to the software-defined networking (SDN) version of
policy-based routing (PBR). In many cases, SFC involves security, although it can include a variety of
other features.
Fundamentally, SFC routes packets through one or more service functions instead of conventional rout-
ing that routes packets using destination IP address. Service functions essentially emulate a series of
physical network devices with cables linking them together.
A basic example of SFC involves routing packets from one location to another through a firewall that
lacks a next hop IP address from a conventional routing perspective. A more complex example involves
an ordered series of service functions, each implemented using multiple instances (VMs). Packets must
flow through one instance and a hashing algorithm distributes flows across multiple instances at each
hop.
Architecture
All OpenStack Networking services and OpenStack Compute instances connect to a virtual network via
ports making it possible to create a traffic steering model for service chaining using only ports. Including
these ports in a port chain enables steering of traffic through one or more instances providing service
functions.
A port chain, or service function path, consists of the following:
• A set of ports that define the sequence of service functions.
• A set of flow classifiers that specify the classified traffic flows entering the chain.
If a service function involves a pair of ports, the first port acts as the ingress port of the service function
and the second port acts as the egress port. If both ports use the same value, they function as a single
virtual bidirectional port.
A port chain is a unidirectional service chain. The first port acts as the head of the service function chain
and the second port acts as the tail of the service function chain. A bidirectional service function chain
consists of two unidirectional port chains.
A flow classifier can only belong to one port chain to prevent ambiguity as to which chain should
handle packets in the flow. A check prevents such ambiguity. However, you can associate multiple flow
classifiers with a port chain because multiple flows can request the same service function path.
Resources
Port chain
• id - Port chain ID
• project_id - Project ID
• name - Readable name
• description - Readable description
• port_pair_groups - List of port pair group IDs
• flow_classifiers - List of flow classifier IDs
• chain_parameters - Dictionary of chain parameters
A port chain consists of a sequence of port pair groups. Each port pair group is a hop in the port chain.
A group of port pairs represents service functions providing equivalent functionality. For example, a
group of firewall service functions.
A flow classifier identifies a flow. A port chain can contain multiple flow classifiers. Omitting the flow
classifier effectively prevents steering of traffic through the port chain.
The chain_parameters attribute contains one or more parameters for the port chain. Currently, it
only supports a correlation parameter that defaults to mpls for consistency with Open vSwitch (OVS)
capabilities. Future values for the correlation parameter may include the network service header (NSH).
Port pair
• id - Port pair ID
• project_id - Project ID
• name - Readable name
• description - Readable description
• ingress - Ingress port
• egress - Egress port
• service_function_parameters - Dictionary of service function parameters
A port pair represents a service function instance that includes an ingress and egress port. A service
function containing a bidirectional port uses the same ingress and egress port.
The service_function_parameters attribute includes one or more parameters for the service
function. Currently, it only supports a correlation parameter that determines association of a packet with
a chain. This parameter defaults to none for legacy service functions that lack support for correlation
such as the NSH. If set to none, the data plane implementation must provide service function proxy
functionality.
Flow classifier
• id - Flow classifier ID
• project_id - Project ID
• name - Readable name
• description - Readable description
• ethertype - Ethertype (IPv4/IPv6)
• protocol - IP protocol
Operations
The following example uses the openstack command-line interface (CLI) to create a port chain con-
sisting of three service function instances to handle HTTP (TCP) traffic flows from 192.0.2.11:1000 to
198.51.100.11:80.
• Instance 1
– Name: vm1
– Function: Firewall
– Port pair: [p1, p2]
• Instance 2
– Name: vm2
– Function: Firewall
– Port pair: [p3, p4]
• Instance 3
– Name: vm3
– Function: Intrusion detection system (IDS)
– Port pair: [p5, p6]
Note: The example network net1 must exist before creating ports on it.
1. Source the credentials of the project that owns the net1 network.
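2. Create ports p1 through p6 on the net1 network and record their UUIDs. A sketch for the first port (repeat for p2 through p6):
$ openstack port create --network net1 p1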
3. Launch service function instance vm1 using ports p1 and p2, vm2 using ports p3 and p4, and
vm3 using ports p5 and p6.
$ openstack server create --nic port-id=P1_ID --nic port-id=P2_ID vm1
$ openstack server create --nic port-id=P3_ID --nic port-id=P4_ID vm2
$ openstack server create --nic port-id=P5_ID --nic port-id=P6_ID vm3
Replace P1_ID, P2_ID, P3_ID, P4_ID, P5_ID, and P6_ID with the UUIDs of the respective
ports.
Note: This command requires additional options to successfully launch an instance. See the CLI
reference for more information.
Alternatively, you can launch each instance with one network interface and attach additional ports
later.
4. Create flow classifier FC1 that matches the appropriate packet headers.
$ openstack sfc flow classifier create \
--description "HTTP traffic from 192.0.2.11 to 198.51.100.11" \
--ethertype IPv4 \
--source-ip-prefix 192.0.2.11/32 \
--destination-ip-prefix 198.51.100.11/32 \
--protocol tcp \
--source-port 1000:1000 \
--destination-port 80:80 FC1
Note: When using the (default) OVS driver, the --logical-source-port parameter is also
required.
5. Create port pair PP1 with ports p1 and p2, PP2 with ports p3 and p4, and PP3 with ports p5
and p6.
$ openstack sfc port pair create \
--description "Firewall SF instance 1" \
--ingress p1 \
--egress p2 PP1
6. Create port pair group PPG1 with port pairs PP1 and PP2, and PPG2 with port pair PP3.
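A sketch for PPG1 (repeat analogously for PPG2 with PP3):
$ openstack sfc port pair group create \
  --port-pair PP1 --port-pair PP2 PPG1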
Note: You can repeat the --port-pair option for multiple port pairs of functionally equivalent
service functions.
7. Create port chain PC1 with port pair groups PPG1 and PPG2 and flow classifier FC1.
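A sketch of the command:
$ openstack sfc port chain create \
  --port-pair-group PPG1 --port-pair-group PPG2 \
  --flow-classifier FC1 PC1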
Note: You can repeat the --port-pair-group option to specify additional port pair groups
in the port chain. A port chain must contain at least one port pair group.
You can repeat the --flow-classifier option to specify multiple flow classifiers for a port
chain. Each flow classifier identifies a flow.
• Use the openstack sfc port chain set command to dynamically add or remove port
pair groups or flow classifiers on a port chain.
– For example, add port pair group PPG3 to port chain PC1, or add another flow classifier;
SFC steers traffic matching the additional flow classifier to the port pair groups in the port
chain.
• Use the openstack sfc port pair group set command to perform dynamic scale-
out or scale-in operations by adding or removing port pairs on a port pair group.
SFC performs load balancing/distribution over the additional service functions in the port pair
group.
8.2.27 SR-IOV
This page describes how to enable the SR-IOV functionality available in OpenStack (using OpenStack
Networking) and serves as a guide for configuring OpenStack Networking and OpenStack Compute to
create SR-IOV ports. This functionality was first introduced in the OpenStack Juno release.
The basics
PCI-SIG Single Root I/O Virtualization and Sharing (SR-IOV) functionality is available in OpenStack
since the Juno release. The SR-IOV specification defines a standardized mechanism to virtualize PCIe
devices. This mechanism can virtualize a single PCIe Ethernet controller to appear as multiple PCIe
devices. Each device can be directly assigned to an instance, bypassing the hypervisor and virtual
switch layer. As a result, users are able to achieve low latency and near line-rate speed.
The following terms are used throughout this document:
Term Definition
PF Physical Function. The physical Ethernet controller that supports SR-IOV.
VF Virtual Function. The virtual PCIe device created from a physical Ethernet controller.
SR-IOV agent
The SR-IOV agent allows you to set the admin state of ports, configure port security (enable and disable
spoof checking), and configure QoS rate limiting and minimum bandwidth. You must include the SR-
IOV agent on each compute node using SR-IOV ports.
Note: The SR-IOV agent was optional before Mitaka, and was not enabled by default before Liberty.
Note: The ability to control port security and QoS rate limit settings was added in Liberty.
Note: Throughout this guide, eth3 is used as the PF and physnet2 is used as the provider network
configured as a VLAN range. These ports may vary in different environments.
Create the VFs for the network interface that will be used for SR-IOV. We use eth3 as PF, which is
also used as the interface for the VLAN provider network and has access to the private networks of all
machines.
Note: The steps detail how to create VFs using Mellanox ConnectX-4 and newer/Intel SR-IOV Ethernet
cards on an Intel system. Steps may differ for different hardware configurations.
Note: On some PCI devices, observe that when changing the amount of VFs you receive the
error Device or resource busy. In this case, you must first set sriov_numvfs to 0,
then set it to your new value.
Note: A network interface could be used both for PCI passthrough, using the PF, and SR-IOV,
using the VFs. If the PF is used, the VF number stored in the sriov_numvfs file is lost. If
the PF is attached again to the operating system, the number of VFs assigned to this interface
will be zero. To keep the number of VFs always assigned to this interface, modify the interfaces
configuration file adding an ifup script command.
On Ubuntu, modify the /etc/network/interfaces file:
auto eth3
iface eth3 inet dhcp
pre-up echo '4' > /sys/class/net/eth3/device/sriov_numvfs
On Red Hat based systems, modify the /sbin/ifup-local file:
#!/bin/sh
if [ "$1" = "eth3" ]
then
    echo '4' > /sys/class/net/eth3/device/sriov_numvfs
fi
Warning: Alternatively, you can create VFs by passing the max_vfs to the kernel module
of your network interface. However, the max_vfs parameter has been deprecated, so the PCI
SYS interface is the preferred method.
To determine the maximum number of VFs a PF can support:
# cat /sys/class/net/eth3/device/sriov_totalvfs
63
4. Verify that the VFs have been created and are in up state. For example:
If the interfaces are down, set them to up before launching a guest, otherwise the instance will
fail to spawn:
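A sketch of the relevant commands (device names depend on your hardware):
# ip link show eth3
# ip link set eth3 up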
Note: The suggested way of making PCI SYS settings persistent is through the sysfsutils
tool. However, this is not available by default on many major distributions.
1. Configure which PCI devices the nova-compute service may use. Edit the nova.conf file:
[pci]
passthrough_whitelist = { "devname": "eth3", "physical_network": "physnet2"}
This tells the Compute service that all VFs belonging to eth3 are allowed to be passed through
to instances and belong to the provider network physnet2.
Alternatively the [pci] passthrough_whitelist parameter also supports allowing de-
vices by:
• PCI address: The address uses the same syntax as in lspci and an asterisk (*) can be used
to match anything.
[pci]
passthrough_whitelist = { "address": "[[[[<domain>]:]<bus>]:][<slot>][.[<function>]]", "physical_network": "physnet2" }
For example, to match any domain, bus 0a, slot 00, and all functions:
[pci]
passthrough_whitelist = { "address": "*:0a:00.*", "physical_network": "physnet2" }
• PCI vendor and product ID, as displayed by the Linux utility lspci:
[pci]
passthrough_whitelist = { "vendor_id": "<id>", "product_id": "<id>", "physical_network": "physnet2" }
If the device defined by the PCI address or devname corresponds to an SR-IOV PF, all VFs under
the PF will match the entry. Multiple [pci] passthrough_whitelist entries per host are
supported.
In order to enable SR-IOV to request trusted mode, the [pci] passthrough_whitelist
parameter also supports a trusted tag.
Note: This capability is only supported starting with version 18.0.0 (Rocky) release of the com-
pute service configured to use the libvirt driver.
Important: There are security implications of enabling trusted ports. Trusted VFs can be set
into VF promiscuous mode, which enables them to receive unmatched and multicast traffic sent to
the physical function.
For example, to allow users to request SR-IOV devices with trusted capabilities on device eth3:
[pci]
passthrough_whitelist = { "devname": "eth3", "physical_network": "physnet2", "trusted":"true" }
The ports will have to be created with a binding profile to match the trusted tag, see Launching
instances with SR-IOV ports.
2. Restart the nova-compute service for the changes to go into effect.
1. Add sriovnicswitch as mechanism driver. Edit the ml2_conf.ini file on each controller:
[ml2]
mechanism_drivers = openvswitch,sriovnicswitch
2. Ensure your physnet is configured for the chosen network type. Edit the ml2_conf.ini file on
each controller:
[ml2_type_vlan]
network_vlan_ranges = physnet2
3. Add the plugin.ini file as a parameter to the neutron-server service. Edit the appropri-
ate initialization script to configure the neutron-server service to load the plugin configura-
tion file:
--config-file /etc/neutron/neutron.conf
--config-file /etc/neutron/plugin.ini
Configure the nova-scheduler service to use the PciPassthroughFilter. Edit the nova.conf file on each
controller node running the nova-scheduler service:
[filter_scheduler]
enabled_filters = AvailabilityZoneFilter, ComputeFilter, ComputeCapabilitiesFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter, PciPassthroughFilter
available_filters = nova.scheduler.filters.all_filters
Enable the neutron SR-IOV agent. Edit the sriov_agent.ini file on each compute node. For example:
[securitygroup]
firewall_driver = neutron.agent.firewall.NoopFirewallDriver
[sriov_nic]
physical_device_mappings = physnet2:eth3
exclude_devices =
The exclude_devices parameter is empty, therefore, all the VFs associated with eth3 may
be configured by the agent. To exclude specific VFs, add them to the exclude_devices
parameter as follows:
exclude_devices = eth1:0000:07:00.2;0000:07:00.3,eth2:0000:05:00.1;0000:05:00.2
Ensure the neutron SR-IOV agent runs successfully:
# neutron-sriov-nic-agent \
--config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/sriov_agent.ini
Forwarding DataBase (FDB) population is an L2 agent extension for the OVS and Linux bridge agents. Its
objective is to update the FDB table for existing instances that use normal ports. This enables communication
between SR-IOV instances and normal instances. The use cases of the FDB population extension are:
• Direct port and normal port instances reside on the same compute node.
• Direct port instance that uses floating IP address and network node are located on the same host.
For additional information describing the problem, refer to: Virtual switching technologies and Linux
bridge.
1. Edit the ovs_agent.ini or linuxbridge_agent.ini file on each compute node. For
example:
[agent]
extensions = fdb
2. Add the FDB section and the shared_physical_device_mappings parameter. This pa-
rameter maps each physical port to its physical network name. Each physical network can be
mapped to several ports:
[FDB]
shared_physical_device_mappings = physnet1:p1p1, physnet1:p1p2
Once configuration is complete, you can launch instances with SR-IOV ports.
1. If it does not already exist, create a network and subnet for the chosen physnet. This is the network
to which SR-IOV ports will be attached. For example:
2. Get the id of the network where you want the SR-IOV port to be created:
3. Create the SR-IOV port. vnic-type=direct is used here, but other options include normal,
direct-physical, and macvtap:
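A sketch of the port creation; NET-UUID is the network ID from step 2 and the port name is a placeholder:
$ openstack port create --network NET-UUID --vnic-type direct sriov-port1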
Alternatively, to request that the SR-IOV port accept trusted capabilities, the binding profile should
be enhanced with the trusted tag.
5. Create the instance. Specify the SR-IOV port created in step 3 for the NIC:
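A sketch (image and flavor names are placeholders):
$ openstack server create --flavor m1.large --image IMAGE \
  --nic port-id=SRIOV-PORT-UUID test-sriov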
Note: There are two ways to attach VFs to an instance. You can create an SR-IOV port or use
the pci_alias in the Compute service. For more information about using pci_alias, refer
to nova-api configuration.
In contrast to newer generation Mellanox NICs, ConnectX-3 family network adapters expose a single
PCI device (PF) in the system regardless of the number of physical ports. When the device is dual
port and SR-IOV is enabled and configured, some inconsistencies can be observed in the Linux networking
subsystem.
Note: In the example below enp4s0 represents PF net device associated with physical port 1 and
enp4s0d1 represents PF net device associated with physical port 2.
Example: A system with ConnectX-3 dual port device and a total of four VFs configured, two VFs
assigned to port one and two VFs assigned to port two.
$ ip link show
31: enp4s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop master ovs-system state DOWN mode DEFAULT group default qlen 1000
    link/ether f4:52:14:01:d9:e1 brd ff:ff:ff:ff:ff:ff
    vf 0 MAC 00:00:00:00:00:00, vlan 4095, spoof checking off, link-state auto
    vf 1 MAC 00:00:00:00:00:00, vlan 4095, spoof checking off, link-state auto
The ip command identifies each PF-associated net device as having four VFs each.
Note: The Mellanox mlx4 driver allows ip commands to perform configuration of all VFs from either PF net device.
To allow the neutron SR-IOV agent to properly identify the VFs that belong to the correct PF network device
(and thus to the correct network port), the administrator is required to provide the exclude_devices
configuration option in sriov_agent.ini.
Step 1: derive the VF to Port mapping from mlx4 driver configuration file: /etc/modprobe.d/
mlnx.conf or /etc/modprobe.d/mlx4.conf
Where:
num_vfs=n1,n2,n3 - The driver will enable n1 VFs on physical port 1, n2 VFs on physical port 2
and n3 dual port VFs (applies only to dual port HCA when all ports are Ethernet ports).
probe_vfs=m1,m2,m3 - the driver probes m1 single port VFs on physical port 1, m2 single port VFs
on physical port 2 (applies only if such a port exists) and m3 dual port VFs. Those VFs are attached to the
hypervisor (applies only if all ports are configured as Ethernet).
The VFs will be enumerated in the following order:
1. port 1 VFs
2. port 2 VFs
3. dual port VFs
In our example, where two VFs are assigned to each physical port:
[sriov_nic]
physical_device_mappings = physnet1:enp4s0,physnet2:enp4s0d1
exclude_devices = enp4s0:0000:04:00.3;0000:04:00.4,enp4s0d1:0000:04:00.1;
,→0000:04:00.2
The support for SR-IOV with InfiniBand allows a Virtual PCI device (VF) to be directly mapped to
the guest, allowing higher performance and advanced features such as RDMA (remote direct memory
access). To use this feature, you must:
1. Use InfiniBand enabled network adapters.
2. Run InfiniBand subnet managers to enable InfiniBand fabric.
All InfiniBand networks must have a subnet manager running for the network to function. This is
true even when doing a simple network of two machines with no switch and the cards are plugged
in back-to-back. A subnet manager is required for the link on the cards to come up. It is possible
to have more than one subnet manager. In this case, one of them will act as the primary, and any
other will act as a backup that will take over when the primary subnet manager fails.
3. Install the ebrctl utility on the compute nodes.
Check that ebrctl is listed somewhere in /etc/nova/rootwrap.d/*:
If ebrctl does not appear in any of the rootwrap files, add the following to the /etc/nova/
rootwrap.d/compute.filters file in the [Filters] section:
[Filters]
ebrctl: CommandFilter, ebrctl, root
Known limitations
• When using Quality of Service (QoS), max_burst_kbps (burst over max_kbps) is not sup-
ported. In addition, max_kbps is rounded to Mbps.
• Security groups are not supported when using SR-IOV, thus, the firewall driver must be disabled.
This can be done in the neutron.conf file.
[securitygroup]
firewall_driver = neutron.agent.firewall.NoopFirewallDriver
• SR-IOV is not integrated into the OpenStack Dashboard (horizon). Users must use the CLI or API
to configure SR-IOV interfaces.
• Live migration support has been added to the Libvirt Nova virt-driver in the Train release for
instances with neutron SR-IOV ports. Indirect mode SR-IOV interfaces (vnic-type: macvtap or
virtio-forwarder) can now be migrated transparently to the guest. Direct mode SR-IOV interfaces
(vnic-type: direct or direct-physical) are detached before the migration and reattached after
the migration, so this is not transparent to the guest. To avoid loss of network connectivity when
live migrating with direct mode SR-IOV, the user should create a failover bond in the guest with a
port type that supports transparent live migration, e.g. vnic-type normal or indirect mode SR-IOV.
Note: SR-IOV features may require a specific NIC driver version, depending on the vendor. Intel
NICs, for example, require ixgbe version 4.4.6 or greater, and ixgbevf version 3.2.2 or greater.
• Attaching SR-IOV ports to existing servers is supported starting with the Victoria release.
Subnet pools have been available since the Kilo release. It is a simple feature that has the potential
to improve your workflow considerably. It also provides a building block from which other new features
will be built into OpenStack Networking.
To see if your cloud has this feature available, you can check that it is listed in the supported aliases.
You can do this with the OpenStack client.
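For example, a sketch; the subnet pool support is expected to appear under the subnet_allocation alias:
$ openstack extension list --network | grep subnet_allocation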
Before Kilo, Networking had no automation around the addresses used to create a subnet. To create one,
you had to come up with the addresses on your own without any help from the system. There are valid
use cases for this but if you are interested in the following capabilities, then subnet pools might be for
you.
First, wouldn't it be nice if you could turn your pool of addresses over to Neutron to take care of? When
you need to create a subnet, you just ask for addresses to be allocated from the pool. You do not have to
worry about what you have already used and what addresses are in your pool. Subnet pools can do this.
Second, subnet pools can manage addresses across projects. The addresses are guaranteed not to overlap.
If the addresses come from an externally routable pool then you know that all of the projects have
addresses which are routable and unique. This can be useful in the following scenarios.
1. IPv6 since OpenStack Networking has no IPv6 floating IPs.
2. Routing directly to a project network from an external network.
A subnet pool manages a pool of addresses from which subnets can be allocated. It ensures that there is
no overlap between any two subnets allocated from the same pool.
As a regular project in an OpenStack cloud, you can create a subnet pool of your own and use it to
manage your own pool of addresses. This does not require any admin privileges. Your pool will not be
visible to any other project.
If you are an admin, you can create a pool which can be accessed by any regular project. Being a shared
resource, there is a quota mechanism to arbitrate access.
Quotas
Subnet pools have a quota system which is a little bit different than other quotas in Neutron. Other
quotas in Neutron count discrete instances of an object against a quota. Each time you create something
like a router, network, or a port, it uses one from your total quota.
With subnets, the resource is the IP address space. Some subnets take more of it than others. For ex-
ample, 203.0.113.0/24 uses 256 addresses in one subnet but 198.51.100.224/28 uses only 16. If address
space is limited, the quota system can encourage efficient use of the space.
With IPv4, the default_quota can be set to the number of absolute addresses any given project is al-
lowed to consume from the pool. For example, with a quota of 128, I might get 203.0.113.128/26,
203.0.113.224/28, and still have room to allocate 48 more addresses in the future.
With IPv6 it is a little different. It is not practical to count individual addresses. To avoid ridiculously
large numbers, the quota is expressed in the number of /64 subnets which can be allocated. For example,
with a default_quota of 3, I might get 2001:db8:c18e:c05a::/64, 2001:db8:221c:8ef3::/64, and still have
room to allocate one more prefix in the future.
Beginning with Mitaka, a subnet pool can be marked as the default. This is handled with a new extension.
An administrator can mark a pool as default. Only one pool from each address family can be marked
default.
Demo
If you have access to an OpenStack Kilo or later based neutron, you can play with this feature now. Give
it a try. All of the following commands work equally as well with IPv6 addresses.
First, as admin, create a shared subnet pool:
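A sketch of such a command; the name and prefix match the pool shown in the listing below, while the default prefix length of 26 is an assumption for illustration:
$ openstack subnet pool create --share \
  --pool-prefix 203.0.113.0/24 \
  --default-prefix-length 26 \
  demo-subnetpool4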
The default_prefix_length defines the subnet size you will get if you do not specify
--prefix-length when creating a subnet.
Do essentially the same thing for IPv6 and there are now two subnet pools. Regular projects can see
them. (the output is trimmed a bit for display)
$ openstack subnet pool list
+--------------------------------------+------------------+--------------------+
| ID                                   | Name             | Prefixes           |
+--------------------------------------+------------------+--------------------+
| 2b7cc19f-0114-4ef4-ad86-c1bb91fcd1f9 | demo-subnetpool  | 2001:db8:a583::/48 |
| d3aefb76-2527-43d4-bc21-0ec253908545 | demo-subnetpool4 | 203.0.113.0/24     |
+--------------------------------------+------------------+--------------------+
You can request a specific subnet from the pool. You need to specify a subnet that falls within the pool's
prefixes. If the subnet is not already allocated, the request succeeds. You can leave off the IP version
because it is deduced from the subnet pool.
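For example, a sketch requesting a specific range (the network name is a placeholder):
$ openstack subnet create --subnet-pool demo-subnetpool4 \
  --subnet-range 203.0.113.128/26 \
  --network demo-net demo-subnet1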
The subnet onboard feature allows you to take existing subnets that have been created outside of a
subnet pool and move them into an existing subnet pool. This enables you to begin using subnet pools
and address scopes if you haven't allocated existing subnets from subnet pools. It also allows you to
move individual subnets between subnet pools, and by extension, move them between address scopes.
How it works
One of the fundamental constraints of subnet pools is that all subnets of the same address family (IPv4,
IPv6) on a network must be allocated from the same subnet pool. Because of this constraint, subnets
must be moved, or onboarded, into a subnet pool as a group at the network level rather than being
handled individually. As such, the onboarding of subnets requires users to supply the UUID of the
network the subnet(s) to onboard are associated with, and the UUID of the target subnet pool to perform
the operation.
To test that subnet onboard is supported in your environment, execute the following command:
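A sketch; the feature is expected to be listed under the subnet_onboard alias:
$ openstack extension list --network | grep subnet_onboard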
Support for subnet onboard exists in the ML2 plugin as of the Stein release. If you require subnet
onboard but your current environment does not support it, consider upgrading to a release that supports
subnet onboard. When using third-party plugins with neutron, check with the supplier of the plugin
regarding support for subnet onboard.
Demo
Suppose an administrator has an existing provider network in their environment that was created without
allocating its subnets from a subnet pool.
The administrator has created a subnet pool named routable-prefixes and wants to onboard the
subnets associated with network provider-net-1. The administrator now wants to manage the
address space for provider networks using a subnet pool, but doesn't have the prefixes used by these
provider networks under the management of a subnet pool or address scope.
The administrator can use the following command to bring these subnets under the management of a
subnet pool:
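A sketch, assuming the standard onboard syntax of the OpenStack client:
$ openstack network onboard subnets provider-net-1 routable-prefixes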
The subnets on provider-net-1 should now all have their subnetpool_id updated to match the
UUID of the routable-prefixes subnet pool:
The subnet pool will also now show the onboarded prefix(es) in its prefix list:
Service subnets enable operators to define valid port types for each subnet on a network without limiting
networks to one subnet or manually creating ports with a specific subnet ID. Using this feature, operators
can ensure that ports for instances and router interfaces, for example, always use different subnets.
Operation
Define one or more service types for one or more subnets on a particular network. Each service type
must correspond to a valid device owner within the port model in order for it to be used.
During IP allocation, the IPAM driver returns an address from a subnet with a service type matching the
port device owner. If no subnets match, or all matching subnets lack available IP addresses, the IPAM
driver attempts to use a subnet without any service types to preserve compatibility. If all subnets on
a network have a service type, the IPAM driver cannot preserve compatibility. However, this feature
enables strict IP allocation from subnets with a matching device owner. If multiple subnets contain the
same service type, or a subnet without a service type exists, the IPAM driver selects the first subnet
with a matching service type. For example, a floating IP agent gateway port uses the following selection
process:
• network:floatingip_agent_gateway
• None
Note: Ports with the device owner network:dhcp are exempt from the above IPAM logic for sub-
nets with dhcp_enabled set to True. This preserves the existing automatic DHCP port creation
behaviour for DHCP-enabled subnets.
Creating or updating a port with a specific subnet skips this selection process and explicitly uses the
given subnet.
Usage
Example 1 - Proof-of-concept
The following example is not typical of an actual deployment. It is shown to allow users to experiment
with configuring service subnets.
1. Create a network.
2. Create a subnet on the network with one or more service types. For example, the compute:nova
service type enables instances to use this subnet.
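A sketch covering steps 1 and 2; the subnet range is an assumption for illustration:
$ openstack network create demo-net1
$ openstack subnet create demo-subnet1 --subnet-range 192.0.2.0/24 \
  --service-type 'compute:nova' --network demo-net1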
3. Optionally, create another subnet on the network with a different service type. For example, the
compute:foo arbitrary service type.
$ openstack subnet create demo-subnet2 --subnet-range 198.51.100.0/24 \
  --service-type 'compute:foo' --network demo-net1
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| id | ea139dcd-17a3-4f0a-8cca-dff8b4e03f8a |
| ip_version | 4 |
| cidr | 198.51.100.0/24 |
| name | demo-subnet2 |
| network_id | b5b729d8-31cc-4d2c-8284-72b3291fec02 |
| revision_number | 1 |
| service_types | ['compute:foo'] |
| tags | [] |
| tenant_id | a8b3054cc1214f18b1186b291525650f |
+-------------------+--------------------------------------+
4. Launch an instance using the network. For example, using the cirros image and m1.tiny
flavor.
$ openstack server create demo-instance1 --flavor m1.tiny \
--image cirros --nic net-id=b5b729d8-31cc-4d2c-8284-72b3291fec02
+--------------------------------------+--------------------------------------+
| Field                                | Value                                |
+--------------------------------------+--------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                               |
| OS-EXT-AZ:availability_zone          |                                      |
| OS-EXT-SRV-ATTR:host                 | None                                 |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | None                                 |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000009                    |
| ...                                  | ...                                  |
+--------------------------------------+--------------------------------------+
5. Check the instance status. The Networks field contains an IP address from the subnet having
the compute:nova service type.
The following example outlines how you can configure service subnets in a DVR-enabled deployment,
with the goal of minimizing public IP address consumption. This example uses three subnets on the
same external network:
• 192.0.2.0/24 for instance floating IP addresses
• 198.51.100.0/24 for floating IP agent gateway IPs configured on compute nodes
• 203.0.113.0/25 for all other IP allocations on the external network
This example uses again the private network, demo-net1 (b5b729d8-31cc-4d2c-8284-72b3291fec02)
which was created in Example 1 - Proof-of-concept.
1. Create an external network:
2. Create a subnet on the external network for the instance floating IP addresses. This uses the
network:floatingip service type.
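A sketch; the subnet name is a placeholder and the range follows the plan above:
$ openstack subnet create demo-float-subnet --network demo-ext-net \
  --service-type 'network:floatingip' --subnet-range 192.0.2.0/24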
3. Create a subnet on the external network for the floating IP agent gateway IP ad-
dresses, which are configured by DVR on compute nodes. This will use the
network:floatingip_agent_gateway service type.
4. Create a subnet on the external network for all other IP addresses allocated on the external net-
work. This will not use any service type. It acts as a fall back for allocations that do not match
either of the above two service subnets.
5. Create a router:
7. Set the external gateway for the router, which will create an interface and allocate an IP address
on demo-ext-net:
8. Launch an instance on a private network and retrieve the neutron port ID that was allocated. As
above, use the cirros image and m1.tiny flavor:
9. Associate a floating IP with the instance port and verify it was allocated an IP address from the
correct subnet:
10. As the admin user, verify the neutron routers are allocated IP addresses from their correct subnets.
Use openstack port list to find ports associated with the routers.
First, the router gateway external port:
The general principle is that L2 connectivity will be bound to a single rack. Everything outside the
switches of the rack will be routed using BGP. To perform the BGP announcement, neutron-dynamic-
routing is leveraged.
To achieve this, on each rack, servers are set up with a different management network using a VLAN ID per
rack (the light green and orange networks below). Note that a unique VLAN ID per rack isn't mandatory; it is
also possible to use the same VLAN ID on all racks. The point here is only to isolate L2 segments (typically,
routing between the switches of each rack will be done over BGP, without L2 connectivity).
On the OpenStack side, a provider network must be setup, which is using a different subnet range and
vlan ID for each rack. This includes:
• an address scope
• some network segments for that network, which are attached to a named physical network
• a subnet pool using that address scope
• one provider network subnet per segment (each subnet+segment pair matches one rack physical
network name)
A segment is attached to a specific VLAN and physical network name. In the above figure, the provider
network is represented by two subnets: the dark green and the red ones. The dark green subnet is on
one network segment, and the red one on another. Both subnets are of the subnet service type
network:floatingip_agent_gateway, so that they cannot be used by virtual machines directly.
On top of all of this, a floating IP subnet without a segment is added, which spans all of the racks.
This subnet must have the below service types:
• network:routed
• network:floatingip
• network:router_gateway
Since the network:routed subnet isn't bound to a segment, it can be used on all racks. As the service
types network:floatingip and network:router_gateway are used for the provider network, the subnet can
only be used for floating IPs and router gateways, meaning that the subnets using segments will be used
as floating IP gateways (i.e. the next hop to reach these floating IPs / router external gateways).
On the controller side (ie: API and RPC server), only the Neutron Dynamic Routing Python library must
be installed (for example, in the Debian case, that would be the neutron-dynamic-routing-common and
python3-neutron-dynamic-routing packages). On top of that, segments and bgp must be added to the list
of plugins in service_plugins. For example in neutron.conf:
[DEFAULT]
service_plugins=router,metering,qos,trunk,segments,bgp
The neutron-bgp-agent must be installed. It is best to install it twice per rack, on any machine (it doesn't
matter much where). Each of these BGP agents will then establish a session with one switch, and
advertise all of the BGP configuration.
A peer that represents the network equipment must be created. Then a matching BGP speaker needs to
be created. Then, the BGP speaker must be associated to a dynamic-routing-agent (in our example, the
dynamic-routing agents run on compute 1 and 4). Finally, the peer is added to the BGP speaker, so the
speaker initiates a BGP session to the network equipment.
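A sketch of this sequence using the neutron-dynamic-routing client commands; the IP address, AS numbers, names, and agent UUID are placeholders:
$ openstack bgp peer create --peer-ip 192.0.2.1 \
  --remote-as 64510 rack1-switch-1
$ openstack bgp speaker create --local-as 64511 --ip-version 4 \
  rack1-bgp-speaker
$ openstack bgp dragent add speaker DRAGENT-UUID rack1-bgp-speaker
$ openstack bgp speaker add peer rack1-bgp-speaker rack1-switch-1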
It is possible to repeat this operation for a second machine on the same rack, if the deployment is using
bonding (and then LACP between both switches), as per the figure above. It also can be done on each
rack. One way to deploy is to select two computers in each rack (for example, one compute node and
one network node), and install the neutron-dynamic-routing-agent on each of them, so they can talk to
both switches of the rack. All of this depends on what the configuration is on the switch side. It may be
that you only need to talk to two ToR switches in the whole deployment. The thing you must know is that
you can deploy as many dynamic-routing agents as needed, and that one agent can talk to a single device.
Before setting-up the provider network, the physical network name must be set in each
host, according to the rack names. On the compute or network nodes, this is done in
/etc/neutron/plugins/ml2/openvswitch_agent.ini using the bridge_mappings directive:
[ovs]
bridge_mappings = physnet-rack1:br-ex
All of the physical networks created this way must be added in the configuration of the neutron-server
as well (i.e. this is used by both neutron-api and neutron-rpc-server). For example, with 3 racks, here is
how /etc/neutron/plugins/ml2/ml2_conf.ini should look:
[ml2_type_flat]
flat_networks = physnet-rack1,physnet-rack2,physnet-rack3
[ml2_type_vlan]
network_vlan_ranges = physnet-rack1,physnet-rack2,physnet-rack3
Once this is done, the provider network can be created, using physnet-rack1 as the physical network.
Everything that is in the provider network's scope will be advertised through BGP. Here is how to create
the address scope:
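A sketch; the scope name matches the one used when creating the subnet pool below:
$ openstack address scope create --share --ip-version 4 provider-addr-scope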
Then, the network can be created using the physical network name set above:
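A sketch; the VLAN ID is a placeholder, and the network name matches the one used later for the router gateway:
$ openstack network create --share --external \
  --provider-physical-network physnet-rack1 \
  --provider-network-type vlan --provider-segment 2016 \
  provider-network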
This automatically creates a network AND a segment. By default, this segment has no name,
which isn't convenient. The name can be changed:
The 2nd segment, which will be attached to our provider network, is created this way:
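The names below (provider-addr-scope, provider-network, segment-rack1, segment-rack2) are placeholders, and flat provider segments are assumed (VLAN segments would additionally need segmentation IDs); a minimal sketch of this sequence:
$ openstack address scope create --share --ip-version 4 provider-addr-scope
$ openstack network create --share --provider-physical-network physnet-rack1 \
    --provider-network-type flat provider-network
$ openstack network segment list --network provider-network
$ openstack network segment set --name segment-rack1 SEGMENT_ID
$ openstack network segment create --physical-network physnet-rack2 \
    --network-type flat --network provider-network segment-rack2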
Setting up the provider subnets for the BGP next HOP routing
These subnets will be in use in different racks, depending on what physical network is in use in the
machines. In order to use the address scope, subnet pools must be used. Here is how to create the subnet
pool with the two ranges to use later when creating the subnets:
$ # Create the provider subnet pool which includes all ranges for all racks
$ openstack subnet pool create \
    --pool-prefix 10.1.0.0/24 \
    --pool-prefix 10.2.0.0/24 \
    --address-scope provider-addr-scope \
    --share \
    provider-subnet-pool
Then, this is how to create the two subnets. In this example, we are keeping .1 for the gateway, .2 for
the DHCP server, and .253 and .254 for the switches, as these last addresses will be used by the switches
for the BGP announcements:
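A minimal sketch for the first rack (the subnet name and exact allocation pool are illustrative); the same pattern applies to the other rack with the 10.2.0.0/24 range and its segment:
$ openstack subnet create \
    --service-type 'network:routed' \
    --service-type 'network:floatingip_agent_gateway' \
    --subnet-pool provider-subnet-pool \
    --subnet-range 10.1.0.0/24 \
    --allocation-pool start=10.1.0.2,end=10.1.0.252 \
    --gateway 10.1.0.1 \
    --network provider-network \
    --network-segment segment-rack1 \
    provider-subnet-rack1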
Note the service types. network:floatingip_agent_gateway makes sure that these subnets will be in use
only as gateways (ie: the next BGP hop). The above can be repeated for each new rack.
This is to be repeated each time a new subnet must be created for floating IPs and router gateways. First,
the range is added in the subnet pool, then the subnet itself is created:
$ # Add a new prefix in the subnet pool for the floating IPs:
$ openstack subnet pool set \
--pool-prefix 203.0.113.0/24 \
provider-subnet-pool
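Then the floating IP subnet itself can be created; a minimal sketch (the subnet name is illustrative, and --no-dhcp is assumed since these addresses are only consumed as floating IPs and router gateways):
$ openstack subnet create \
    --service-type 'network:routed' \
    --service-type 'network:floatingip' \
    --service-type 'network:router_gateway' \
    --subnet-pool provider-subnet-pool \
    --subnet-range 203.0.113.0/24 \
    --no-dhcp \
    --network provider-network \
    provider-subnet-ips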
The service type network:routed ensures we're using BGP through the provider network to advertise the
IPs. network:floatingip and network:router_gateway limit the use of these IPs to floating IPs and router
gateways.
The provider network needs to be added to each of the BGP speakers. This means each time a new rack
is set up, the provider network must be added to the two BGP speakers of that rack.
In this example, we've selected two compute nodes that are also running an instance of the neutron-
dynamic-routing-agent daemon.
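A minimal sketch of adding the provider network to a speaker, reusing the speaker name from the earlier sketch:
$ openstack bgp speaker add network bgp-speaker-rack1 provider-network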
This can be done by each customer. A subnet pool isn't mandatory, but it is nice to have. Typically, the
customer network will not be advertised through BGP (but this can be done if needed).
$ # Self-service subnet:
$ openstack subnet create \
$ # Set the router's default gateway. This will use one public IP.
$ openstack router set \
--external-gateway provider-network tenant-router
Because of the way Neutron works, for each new port associated with an IP address, a GARP is issued
to inform the switch about the new MAC / IP association. Unfortunately, this confuses the switches,
which may then think they should use their local ARP table to route the packet rather than handing it to
the next HOP. The definitive solution would be to patch Neutron to make it stop sending GARP for
any port on a subnet with the network:routed service type. Such a patch would be hard to write, but
luckily, there is a workaround that works (at least with Cumulus switches). Here's how.
In /etc/network/switchd.conf we change this:
This restarts the switch ASIC, so it may be a dangerous thing to do without switch redundancy (so be
careful when doing it). The completely safe procedure, if you have two switches per rack, looks
like this:
# make sure that this switch is not the primary clag switch, otherwise the
# secondary switch will also shut down all interfaces when losing contact
# with the primary switch.
clagctl priority 16535
Verification
If everything goes well, the floating IPs are advertised over BGP through the provider network. Here is
an example with 4 VMs deployed on 2 racks. Neutron here picks up IPs on the segmented network as
the next hop.
8.2.32 Trunking
The network trunk service allows multiple networks to be connected to an instance using a single virtual
NIC (vNIC). Multiple networks can be presented to an instance by connecting it to a single port.
Operation
Network trunking consists of a service plug-in and a set of drivers that manage trunks on different layer-
2 mechanism drivers. Users can create a port, associate it with a trunk, and launch an instance on that
port. Users can dynamically attach and detach additional networks without disrupting operation of the
instance.
Every trunk has a parent port and can have any number of subports. The parent port is the port that the
trunk is associated with. Users create instances and specify the parent port of the trunk when launching
instances attached to a trunk.
The network presented by the subport is the network of the associated port. When creating a subport,
a segmentation-id may be required by the driver. segmentation-id defines the segmentation
ID on which the subport network is presented to the instance. segmentation-type may be required
by certain drivers like OVS. At this time the following segmentation-type values are supported:
• vlan uses VLAN for segmentation.
• inherit uses the segmentation-type from the network the subport is connected to if
no segmentation-type is specified for the subport. Note that using the inherit type
requires the provider extension to be enabled and only works when the connected networks
segmentation-type is vlan.
Note: The segmentation-type and segmentation-id parameters are optional in the Net-
working API. However, all drivers as of the Newton release require both to be provided when adding a
subport to a trunk. Future drivers may be implemented without this requirement.
The segmentation-type and segmentation-id specified by the user on the subports is in-
tentionally decoupled from the segmentation-type and ID of the networks. For example, it is
possible to configure the Networking service with tenant_network_types = vxlan and still
create subports with segmentation_type = vlan. The Networking service performs remapping
as necessary.
Example configuration
The ML2 plug-in supports trunking with the following mechanism drivers:
• Open vSwitch (OVS)
• Linux bridge
• Open Virtual Network (OVN)
When using a segmentation-type of vlan, the OVS and Linux bridge drivers present the network
of the parent port as the untagged VLAN and all subports as tagged VLANs.
Controller node
[DEFAULT]
service_plugins = trunk
1. Source the administrative project credentials and list the enabled extensions.
2. Use the command openstack extension list --network to verify that the Trunk
Extension and Trunk port details extensions are enabled.
Workflow
At a high level, the basic steps to launching an instance on a trunk are the following:
1. Create networks and subnets for the trunk and subports
2. Create the trunk
3. Add subports to the trunk
4. Launch an instance on the trunk
Create the appropriate networks for the trunk and subports that will be added to the trunk. Create subnets
on these networks to ensure the desired layer-3 connectivity over the trunk.
• Create the trunk using --parent-port to reference the port from the previous step:
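A minimal sketch, assuming the parent port is the trunk-parent port shown in the output below:
$ openstack network trunk create --parent-port trunk-parent trunk1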
Subports can be added to a trunk in two ways: creating the trunk with subports or adding subports to an
existing trunk.
• Create trunk with subports:
This method entails creating the trunk with subports specified at trunk creation.
$ openstack port create --network project-net-A trunk-parent
+-------------------+--------------------------------------------------------------------------+
| Field             | Value                                                                    |
+-------------------+--------------------------------------------------------------------------+
| admin_state_up    | UP                                                                       |
| binding_vif_type  | unbound                                                                  |
| binding_vnic_type | normal                                                                   |
| fixed_ips         | ip_address='192.0.2.7',subnet_id='8b957198-d3cf-4953-8449-ad4e4dd712cc'  |
| id                | 73fb9d54-43a7-4bb1-a8dc-569e0e0a0a38                                     |
| mac_address       | fa:16:3e:dd:c4:d1                                                        |
| name              | trunk-parent                                                             |
| network_id        | 1b47d3e7-cda5-48e4-b0c8-d20bd7e35f55                                     |
| revision_number   | 1                                                                        |
| tags              | []                                                                       |
+-------------------+--------------------------------------------------------------------------+
+-------------------+-----------------------------------------------------------------------------+
| Field             | Value                                                                       |
+-------------------+-----------------------------------------------------------------------------+
| admin_state_up    | UP                                                                          |
| binding_vif_type  | unbound                                                                     |
| binding_vnic_type | normal                                                                      |
| fixed_ips         | ip_address='198.51.100.8',subnet_id='2a860e2c-922b-437b-a149-b269a8c9b120'  |
| id                | 91f9dde8-80a4-4506-b5da-c287feb8f5d8                                        |
| mac_address       | fa:16:3e:ba:f0:4d                                                           |
+-------------------+-----------------------------------------------------------------------------+
+-----------------+--------------------------------------------------------------------------------------------------+
| Field           | Value                                                                                            |
+-----------------+--------------------------------------------------------------------------------------------------+
| admin_state_up  | UP                                                                                               |
| id              | 61d8e620-fe3a-4d8f-b9e6-e1b0dea6d9e3                                                             |
| name            | trunk1                                                                                           |
| port_id         | 73fb9d54-43a7-4bb1-a8dc-569e0e0a0a38                                                             |
| revision_number | 1                                                                                                |
| sub_ports       | port_id='73fb9d54-43a7-4bb1-a8dc-569e0e0a0a38', segmentation_id='100', segmentation_type='vlan'  |
| tags            | []                                                                                               |
+-----------------+--------------------------------------------------------------------------------------------------+
• When using the OVN driver, additional logical switch port information is available using the
following commands:
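For instance, a minimal sketch using the OVN northbound CLI (run wherever ovn-nbctl can reach the OVN northbound database):
$ ovn-nbctl list Logical_Switch_Port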
• Launch the instance by specifying port-id using the value of port_id from the trunk details.
Launching an instance on a subport is not supported.
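A minimal sketch, where PARENT_PORT_ID is the port_id value from the trunk details above and the flavor and image names are illustrative:
$ openstack server create --flavor m1.tiny --image cirros \
    --nic port-id=PARENT_PORT_ID trunk-instance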
When configuring instances to use a subport, ensure that the interface on the instance is set to use the
MAC address assigned to the port by the Networking service. Instances are not made aware of changes
made to the trunk after they are active. For example, when a subport with a segmentation-type of
vlan is added to a trunk, any operations specific to the instance operating system that allow the instance
to send and receive traffic on the new VLAN must be handled outside of the Networking service.
When creating subports, the MAC address of the trunk parent port can be set on the subport. This will
allow VLAN subinterfaces inside an instance launched on a trunk to be configured without explicitly
setting a MAC address. Although unique MAC addresses can be used for subports, this can present
issues with ARP spoof protections and the native OVS firewall driver. If the native OVS firewall driver
is to be used, we recommend that the MAC address of the parent port be re-used on all subports.
Trunk states
• ACTIVE
The trunk is ACTIVE when both the logical and physical resources have been created. This means
that all operations within the Networking and Compute services have completed and the trunk is
ready for use.
• DOWN
A trunk is DOWN when it is first created without an instance launched on it, or when the instance
associated with the trunk has been deleted.
• DEGRADED
A trunk can be in a DEGRADED state when a temporary failure during the provisioning process is
encountered. This includes situations where a subport add or remove operation fails. When in a
degraded state, the trunk is still usable and some subports may be usable as well. Operations that
cause the trunk to go into a DEGRADED state can be retried to fix temporary failures and move the
trunk into an ACTIVE state.
• ERROR
A trunk is in ERROR state if the request leads to a conflict or an error that cannot be fixed by
retrying the request. The ERROR status can be encountered if the network is not compatible with
the trunk configuration or the binding process leads to a persistent failure. When a trunk is in
ERROR state, it must be brought to a sane state (ACTIVE), or else requests to add subports will
be rejected.
• BUILD
A trunk is in BUILD state while the resources associated with the trunk are in the process of being
provisioned. Once the trunk and all of the subports have been provisioned successfully, the trunk
transitions to ACTIVE. If there was a partial failure, the trunk transitions to DEGRADED.
When admin_state is set to DOWN, the user is blocked from performing operations on the
trunk. admin_state is set by the user and should not be used to monitor the health of the trunk.
• In neutron-ovs-agent, the iptables_hybrid firewall driver and trunk ports are not
compatible with each other. The iptables_hybrid firewall does not filter the traffic of
subports. Instead, use other firewall drivers such as openvswitch.
• See bugs for more information.
This document is a guide to deploying neutron using WSGI. There are two ways to deploy using WSGI:
uwsgi and Apache mod_wsgi.
Please note that if you intend to use the uwsgi mode, you should install the Apache mod_proxy_uwsgi
module. For example, on deb-based systems:
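A minimal sketch, assuming the module is packaged as libapache2-mod-proxy-uwsgi on your release:
# apt-get install libapache2-mod-proxy-uwsgi
# a2enmod proxy_uwsgi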
WSGI Application
[uwsgi]
chmod-socket = 666
socket = /var/run/uwsgi/neutron-api.socket
lazy-apps = true
add-header = Connection: close
buffer-size = 65535
hook-master-start = unix_signal:15 gracefully_kill_them_all
thunder-lock = true
plugins = python
enable-threads = true
worker-reload-mercy = 90
exit-on-reload = false
die-on-term = true
master = true
processes = 2
wsgi-file = <path-to-neutron-bin-dir>/neutron-api
Start neutron-api:
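For example, a minimal sketch assuming the uwsgi configuration above was saved as a .ini file:
$ # the .ini path below is an example; point it at wherever the file was saved
$ uwsgi --ini /etc/neutron/neutron-api-uwsgi.ini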
Listen 9696
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\" %D(us)" neutron_combined
<Directory /usr/local/bin>
Require all granted
</Directory>
<VirtualHost *:9696>
WSGIDaemonProcess neutron-server processes=1 threads=1 user=stack display-name=%{GROUP}
WSGIProcessGroup neutron-server
WSGIScriptAlias / <path-to-neutron-bin-dir>/neutron-api
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
ErrorLogFormat "%M"
ErrorLog /var/log/neutron/neutron.log
CustomLog /var/log/neutron/neutron_access.log neutron_combined
</VirtualHost>
WSGISocketPrefix /var/run/apache2
For deb-based systems, copy or symlink the file to /etc/apache2/sites-available. Then enable the
neutron site:
# a2ensite neutron
# systemctl reload apache2.service
For rpm-based systems copy the file to /etc/httpd/conf.d. Then enable the neutron site:
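A minimal sketch (there is no a2ensite equivalent; reloading httpd picks up the new configuration file):
# systemctl reload httpd.service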
When the Neutron API is served by a web server (like Apache2), it is difficult to start an RPC listener thread.
So, start the Neutron RPC server process to serve this job:
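For example, a minimal sketch (the config file paths are the usual defaults and may differ in your deployment):
$ neutron-rpc-server --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini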
Neutron will attempt to spawn a number of child processes for handling API and RPC requests. The
number of API workers is set to the number of CPU cores, further limited by available memory, and the
number of RPC workers is set to half that number.
It is strongly recommended that all deployers set these values themselves, via the api_workers and
rpc_workers configuration parameters.
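For example, a minimal sketch of explicit worker settings in neutron.conf (the values are illustrative; tune them as discussed below):
[DEFAULT]
api_workers = 4
rpc_workers = 2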
For a cloud with a high load against a relatively small number of objects, a smaller value for api_workers
(somewhere around 4-8) will provide better performance than many workers. For a cloud with a high load
against lots of different objects, the more workers the better. Budget neutron-server to use about 2GB of
RAM in steady state.
For rpc_workers, there needs to be enough to keep up with incoming events from the various neutron
agents. Signs that there are too few can be agent heartbeats arriving late, nova vif bindings timing out
on the hypervisors, or rpc message timeout exceptions in agent logs.
The following deployment examples provide building blocks of increasing architectural complexity us-
ing the Networking service reference architecture which implements the Modular Layer 2 (ML2) plug-in
and either the Open vSwitch (OVS) or Linux bridge mechanism drivers. Both mechanism drivers sup-
port the same basic features such as provider networks, self-service networks, and routers. However,
more complex features often require a particular mechanism driver. Thus, you should consider the re-
quirements (or goals) of your cloud before choosing a mechanism driver.
After choosing a mechanism driver, the deployment examples generally include the following building
blocks:
1. Provider (public/external) networks using IPv4 and IPv6
2. Self-service (project/private/internal) networks including routers using IPv4 and IPv6
3. High-availability features
4. Other features such as BGP dynamic routing
8.3.1 Prerequisites
Prerequisites, typically hardware requirements, generally increase with each building block. Each build-
ing block depends on proper deployment and operation of prior building blocks. For example, the first
building block (provider networks) only requires one controller and two compute nodes, the second
building block (self-service networks) adds a network node, and the high-availability building blocks
typically add a second network node for a total of five nodes. Each building block could also require
additional infrastructure or changes to existing infrastructure such as networks.
For basic configuration of prerequisites, see the latest Install Tutorials and Guides.
Note: Example commands using the openstack client assume version 3.2.0 or higher.
Nodes
Note: You can virtualize these nodes for demonstration, training, or proof-of-concept purposes. How-
ever, you must use physical hosts for evaluation of performance or scaling.
The deployment examples refer to one or more of the following networks and network interfaces:
• Management: Handles API requests from clients and control plane traffic for OpenStack services
including their dependencies.
• Overlay: Handles self-service networks using an overlay protocol such as VXLAN or GRE.
• Provider: Connects virtual and physical networks at layer-2. Typically uses physical network
infrastructure for switching/routing traffic to external networks such as the Internet.
Note: For best performance, 10+ Gbps physical network infrastructure should support jumbo frames.
For illustration purposes, the configuration examples typically reference the following IP address ranges:
• Provider network 1:
– IPv4: 203.0.113.0/24
– IPv6: fd00:203:0:113::/64
• Provider network 2:
– IPv4: 192.0.2.0/24
– IPv6: fd00:192:0:2::/64
• Self-service networks:
– IPv4: 198.51.100.0/24 in /24 segments
– IPv6: fd00:198:51::/48 in /64 segments
You may change them to work with your particular network infrastructure.
The Linux bridge mechanism driver uses only Linux bridges and veth pairs as interconnection devices.
A layer-2 agent manages Linux bridges on each compute node and any other node that provides layer-3
(routing), DHCP, metadata, or other network services.
nftables replaces iptables, ip6tables, arptables and ebtables, in order to provide a single API for all
Netfilter operations. nftables provides a backwards compatibility set of tools for those replaced
binaries that present the legacy API to the user while using the new packet classification framework. As
reported in LP#1915341 and LP#1922892, the tool ebtables-nft is not totally compatible with the
legacy API and returns some errors. To use the Linux Bridge mechanism driver on newer operating systems
that use nftables by default, it is necessary to switch back to the legacy API tools.
Since LP#1922127 and LP#1922892 were fixed, the Neutron Linux Bridge mechanism driver is compatible
with the nftables binaries using the legacy API.
Note: To clear up possible terminology confusion, these are the three available Netfilter
framework alternatives:
• The legacy binaries (iptables, ip6tables, arptables and ebtables) that use the
legacy API.
• The new nftables binaries that use the legacy API, to help in the transition to this new frame-
work. Those binaries replicate the same commands as the legacy ones but use the new frame-
work. The binaries have the same names, ending in -nft.
• The new nftables framework using the new API. All Netfilter operations are executed using
this new API and one single binary, nft.
Currently we support the first two options. The migration (total or partial) to the new API is tracked in
LP#1508155.
In order to use the nftables binaries with the legacy API, execute the following commands.
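A minimal sketch for deb-based systems using update-alternatives (binary paths may vary by distribution):
# update-alternatives --set iptables /usr/sbin/iptables-nft
# update-alternatives --set ip6tables /usr/sbin/ip6tables-nft
# update-alternatives --set ebtables /usr/sbin/ebtables-nft
# update-alternatives --set arptables /usr/sbin/arptables-nft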
The ipset tool is not compatible with nftables. To disable it, enable_ipset must be set to
False in the ML2 plugin configuration file /etc/neutron/plugins/ml2/ml2_conf.ini.
[securitygroup]
# ...
enable_ipset = False
The provider networks architecture example provides layer-2 connectivity between instances and the
physical network infrastructure using VLAN (802.1q) tagging. It supports one untagged (flat) network
and up to 4095 tagged (VLAN) networks. The actual quantity of VLAN networks depends on the
physical network infrastructure. For more information on provider networks, see Provider networks.
Prerequisites
• OpenStack Networking Linux bridge layer-2 agent, DHCP agent, metadata agent, and any depen-
dencies.
Note: Larger deployments typically deploy the DHCP and metadata agents on a subset of compute
nodes to increase performance and redundancy. However, too many agents can overwhelm the message
bus. Also, to further simplify any deployment, you can omit the metadata agent and use a configuration
drive to provide metadata to instances.
Architecture
The following figure shows components and connectivity for one untagged (flat) network. In this par-
ticular case, the instance resides on the same compute node as the DHCP agent for the network. If the
DHCP agent resides on another compute node, the latter only contains a DHCP namespace and Linux
bridge with a port on the provider physical network interface.
The following figure describes virtual connectivity among components for two tagged (VLAN) net-
works. Essentially, each network uses a separate bridge that contains a port on the VLAN sub-interface
on the provider physical network interface. Similar to the single untagged network case, the DHCP
agent may reside on a different compute node.
Note: These figures omit the controller node because it does not handle instance network traffic.
Example configuration
Use the following example configuration as a template to deploy provider networks in your environment.
Controller node
1. Install the Networking service components that provides the neutron-server service and
ML2 plug-in.
2. In the neutron.conf file:
• Configure common options:
[DEFAULT]
core_plugin = ml2
auth_strategy = keystone
[database]
# ...
[keystone_authtoken]
# ...
[nova]
# ...
[agent]
# ...
See the Installation Tutorials and Guides and Configuration Reference for your Open-
Stack release to obtain the appropriate additional configuration for the [DEFAULT],
[database], [keystone_authtoken], [nova], and [agent] sections.
• Disable service plug-ins because provider networks do not require any. However, this breaks
portions of the dashboard that manage the Networking service. See the latest Install Tutorials
and Guides for more information.
[DEFAULT]
service_plugins =
• Enable two DHCP agents per network so both compute nodes can provide DHCP service
to provider networks.
[DEFAULT]
dhcp_agents_per_network = 2
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_vlan]
network_vlan_ranges = provider
Compute nodes
[DEFAULT]
core_plugin = ml2
auth_strategy = keystone
[database]
# ...
[keystone_authtoken]
# ...
[nova]
# ...
[agent]
# ...
See the Installation Tutorials and Guides and Configuration Reference for your OpenStack re-
lease to obtain the appropriate additional configuration for the [DEFAULT], [database],
[keystone_authtoken], [nova], and [agent] sections.
3. In the linuxbridge_agent.ini file, configure the Linux bridge agent:
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE
[vxlan]
enable_vxlan = False
[securitygroup]
firewall_driver = iptables
Replace PROVIDER_INTERFACE with the name of the underlying interface that handles
provider networks. For example, eth1.
4. In the dhcp_agent.ini file, configure the DHCP agent:
[DEFAULT]
interface_driver = linuxbridge
enable_isolated_metadata = True
force_metadata = True
Note: The force_metadata option forces the DHCP agent to provide a host route to the
metadata service on 169.254.169.254 regardless of whether the subnet contains an interface
on a router, thus maintaining similar and predictable metadata behavior among subnets.
5. In the metadata_agent.ini file, configure the metadata agent:
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
The value of METADATA_SECRET must match the value of the same option in the [neutron]
section of the nova.conf file.
6. Start the following services:
• Linux bridge agent
• DHCP agent
• Metadata agent
The configuration supports one flat or multiple VLAN provider networks. For simplicity, the following
procedure creates one flat provider network.
1. Source the administrative project credentials.
2. Create a flat network.
Note: The share option allows any project to use this network. To limit access to provider
networks, see Role-Based Access Control (RBAC).
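A minimal sketch, assuming the flat physical network is named provider as in the ML2 configuration above (the network name provider1 is illustrative):
$ openstack network create --share --provider-physical-network provider \
    --provider-network-type flat provider1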
Important: Enabling DHCP causes the Networking service to provide DHCP which can interfere
with existing DHCP services on the physical network infrastructure. Use the --no-dhcp option
to have the subnet managed by existing DHCP services.
Note: The Networking service uses the layer-3 agent to provide router advertisement. Provider
networks rely on physical network infrastructure for layer-3 services rather than the layer-3 agent.
Thus, the physical network infrastructure must provide router advertisement on provider networks
for proper operation of IPv6.
# ip netns
qdhcp-8b868082-e312-4110-8627-298109d4401c
4. Launch an instance with an interface on the provider network. For example, a CirrOS image using
flavor ID 1.
6. On the controller node or any host with access to the provider network, ping the IPv4 and IPv6
addresses of the instance.
$ ping -c 4 203.0.113.13
PING 203.0.113.13 (203.0.113.13) 56(84) bytes of data.
64 bytes from 203.0.113.13: icmp_req=1 ttl=63 time=3.18 ms
64 bytes from 203.0.113.13: icmp_req=2 ttl=63 time=0.981 ms
64 bytes from 203.0.113.13: icmp_req=3 ttl=63 time=1.06 ms
64 bytes from 203.0.113.13: icmp_req=4 ttl=63 time=0.929 ms
$ ping6 -c 4 fd00:203:0:113:f816:3eff:fe58:be4e
PING fd00:203:0:113:f816:3eff:fe58:be4e(fd00:203:0:113:f816:3eff:fe58:be4e) 56 data bytes
64 bytes from fd00:203:0:113:f816:3eff:fe58:be4e icmp_seq=1 ttl=64 time=1.25 ms
64 bytes from fd00:203:0:113:f816:3eff:fe58:be4e icmp_seq=2 ttl=64 time=0.683 ms
64 bytes from fd00:203:0:113:f816:3eff:fe58:be4e icmp_seq=3 ttl=64 time=0.762 ms
The following sections describe the flow of network traffic in several common scenarios. North-south
network traffic travels between an instance and external network such as the Internet. East-west network
traffic travels between instances on the same or different networks. In all scenarios, the physical network
infrastructure handles switching and routing among provider networks and external networks such as the
Internet. Each case references one or more of the following components:
• Provider network 1 (VLAN)
– VLAN ID 101 (tagged)
– IP address ranges 203.0.113.0/24 and fd00:203:0:113::/64
– Gateway (via physical network infrastructure)
Instances on the same network communicate directly between compute nodes containing those instances.
• Instance 1 resides on compute node 1 and uses provider network 1.
• Instance 2 resides on compute node 2 and uses provider network 1.
• Instance 1 sends a packet to instance 2.
The following steps involve compute node 1:
1. The instance 1 interface (1) forwards the packet to the provider bridge instance port (2) via veth
pair.
2. Security group rules (3) on the provider bridge handle firewalling and connection tracking for the
packet.
3. The VLAN sub-interface port (4) on the provider bridge forwards the packet to the physical net-
work interface (5).
4. The physical network interface (5) adds VLAN tag 101 to the packet and forwards it to the physical
network infrastructure switch (6).
The following steps involve the physical network infrastructure:
1. The switch forwards the packet from compute node 1 to compute node 2 (7).
The following steps involve compute node 2:
1. The physical network interface (8) removes VLAN tag 101 from the packet and forwards it to the
VLAN sub-interface port (9) on the provider bridge.
2. Security group rules (10) on the provider bridge handle firewalling and connection tracking for
the packet.
3. The provider bridge instance port (11) forwards the packet to the instance 2 interface (12) via
veth pair.
Note: Both instances reside on the same compute node to illustrate how VLAN tagging enables multiple
logical layer-2 networks to use the same physical layer-2 network.
This architecture example augments Linux bridge: Provider networks to support a nearly limitless quan-
tity of entirely virtual networks. Although the Networking service supports VLAN self-service net-
works, this example focuses on VXLAN self-service networks. For more information on self-service
networks, see Self-service networks.
Note: The Linux bridge agent lacks support for other overlay protocols such as GRE and Geneve.
Prerequisites
Note: You can keep the DHCP and metadata agents on each compute node or move them to the network
node.
Architecture
The following figure shows components and connectivity for one self-service network and one untagged
(flat) provider network. In this particular case, the instance resides on the same compute node as the
DHCP agent for the network. If the DHCP agent resides on another compute node, the latter only
contains a DHCP namespace and Linux bridge with a port on the overlay physical network interface.
Example configuration
Use the following example configuration as a template to add support for self-service networks to an
existing operational environment that supports provider networks.
Controller node
[DEFAULT]
service_plugins = router
allow_overlapping_ips = True
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
[ml2]
mechanism_drivers = linuxbridge,l2population
[ml2_type_vxlan]
vni_ranges = VNI_START:VNI_END
Network node
[DEFAULT]
core_plugin = ml2
auth_strategy = keystone
[database]
# ...
[keystone_authtoken]
# ...
[nova]
# ...
[agent]
# ...
See the Installation Tutorials and Guides and Configuration Reference for your OpenStack re-
lease to obtain the appropriate additional configuration for the [DEFAULT], [database],
[keystone_authtoken], [nova], and [agent] sections.
3. In the linuxbridge_agent.ini file, configure the layer-2 agent.
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE
[vxlan]
enable_vxlan = True
l2_population = True
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
[securitygroup]
firewall_driver = iptables
Warning: By default, Linux uses UDP port 8472 for VXLAN tunnel traffic. This default
value doesn't follow the IANA standard, which assigned UDP port 4789 for VXLAN com-
munication. As a consequence, if this node is part of a mixed deployment, where nodes with
both OVS and Linux bridge must communicate over VXLAN tunnels, it is recommended that
a line containing udp_dstport = 4789 be added to the [vxlan] section of all the Linux
bridge agents. OVS follows the IANA standard.
Replace PROVIDER_INTERFACE with the name of the underlying interface that handles
provider networks. For example, eth1.
Replace OVERLAY_INTERFACE_IP_ADDRESS with the IP address of the interface that han-
dles VXLAN overlays for self-service networks.
4. In the l3_agent.ini file, configure the layer-3 agent.
[DEFAULT]
interface_driver = linuxbridge
Compute nodes
[vxlan]
enable_vxlan = True
l2_population = True
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
Warning: By default, Linux uses UDP port 8472 for VXLAN tunnel traffic. This default
value doesn't follow the IANA standard, which assigned UDP port 4789 for VXLAN com-
munication. As a consequence, if this node is part of a mixed deployment, where nodes with
both OVS and Linux bridge must communicate over VXLAN tunnels, it is recommended that
a line containing udp_dstport = 4789 be added to the [vxlan] section of all the Linux
bridge agents. OVS follows the IANA standard.
The configuration supports multiple VXLAN self-service networks. For simplicity, the following pro-
cedure creates one self-service network and a router with a gateway on the flat provider network. The
router uses NAT for IPv4 network traffic and directly routes IPv6 network traffic.
Note: IPv6 connectivity with self-service networks often requires addition of static routes to nodes and
physical network infrastructure.
7. Create a router.
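A minimal sketch (the router, provider network, and self-service subnet names are illustrative):
$ openstack router create router1
$ openstack router set --external-gateway provider1 router1
$ openstack router add subnet router1 selfservice-subnet1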
# ip netns
qdhcp-8b868082-e312-4110-8627-298109d4401c
qdhcp-8fbc13ca-cfe0-4b8a-993b-e33f37ba66d1
# ip netns
qrouter-17db2a15-e024-46d0-9250-4cd4d336a2cc
4. Create the appropriate security group rules to allow ping and SSH access to instances using the
network.
5. Launch an instance with an interface on the self-service network. For example, a CirrOS image
using flavor ID 1.
Warning: The IPv4 address resides in a private IP address range (RFC1918). Thus, the Net-
working service performs source network address translation (SNAT) for the instance to access
external networks such as the Internet. Access from external networks such as the Internet to
the instance requires a floating IPv4 address. The Networking service performs destination
network address translation (DNAT) from the floating IPv4 address to the instance IPv4 ad-
dress on the self-service network. On the other hand, the Networking service architecture for
IPv6 lacks support for NAT due to the significantly larger address space and complexity of
NAT. Thus, floating IP addresses do not exist for IPv6 and the Networking service only per-
forms routing for IPv6 subnets on self-service networks. In other words, you cannot rely on
NAT to hide instances with IPv4 and IPv6 addresses or only IPv6 addresses and must properly
implement security groups to restrict access.
7. On the controller node or any host with access to the provider network, ping the IPv6 address of
the instance.
$ ping6 -c 4 fd00:192:0:2:f816:3eff:fe30:9cb0
PING fd00:192:0:2:f816:3eff:fe30:9cb0(fd00:192:0:2:f816:3eff:fe30:9cb0) 56 data bytes
64 bytes from fd00:192:0:2:f816:3eff:fe30:9cb0: icmp_seq=1 ttl=63 time=2.08 ms
64 bytes from fd00:192:0:2:f816:3eff:fe30:9cb0: icmp_seq=2 ttl=63 time=1.88 ms
64 bytes from fd00:192:0:2:f816:3eff:fe30:9cb0: icmp_seq=3 ttl=63 time=1.55 ms
64 bytes from fd00:192:0:2:f816:3eff:fe30:9cb0: icmp_seq=4 ttl=63 time=1.62 ms
8. Optionally, enable IPv4 access from external networks such as the Internet to the instance.
1. Create a floating IPv4 address on the provider network.
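A minimal sketch of creating the floating IP and associating it with the instance (the network and instance names are illustrative):
$ openstack floating ip create provider1
$ openstack server add floating ip selfservice-instance1 203.0.113.16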
3. On the controller node or any host with access to the provider network, ping the floating
IPv4 address of the instance.
$ ping -c 4 203.0.113.16
PING 203.0.113.16 (203.0.113.16) 56(84) bytes of data.
64 bytes from 203.0.113.16: icmp_seq=1 ttl=63 time=3.41 ms
64 bytes from 203.0.113.16: icmp_seq=2 ttl=63 time=1.67 ms
64 bytes from 203.0.113.16: icmp_seq=3 ttl=63 time=1.47 ms
64 bytes from 203.0.113.16: icmp_seq=4 ttl=63 time=1.59 ms
The following sections describe the flow of network traffic in several common scenarios. North-south
network traffic travels between an instance and external network such as the Internet. East-west network
traffic travels between instances on the same or different networks. In all scenarios, the physical network
infrastructure handles switching and routing among provider networks and external networks such as the
Internet. Each case references one or more of the following components:
• Provider network (VLAN)
– VLAN ID 101 (tagged)
• Self-service network 1 (VXLAN)
– VXLAN ID (VNI) 101
• Self-service network 2 (VXLAN)
– VXLAN ID (VNI) 102
• Self-service router
– Gateway on the provider network
– Interface on self-service network 1
– Interface on self-service network 2
• Instance 1
• Instance 2
For instances with a fixed IPv4 address, the network node performs SNAT on north-south traffic passing
from self-service to external networks such as the Internet. For instances with a fixed IPv6 address, the
network node performs conventional routing of traffic between self-service and external networks.
• The instance resides on compute node 1 and uses self-service network 1.
• The instance sends a packet to a host on the Internet.
The following steps involve compute node 1:
1. The instance interface (1) forwards the packet to the self-service bridge instance port (2) via veth
pair.
2. Security group rules (3) on the self-service bridge handle firewalling and connection tracking for
the packet.
3. The self-service bridge forwards the packet to the VXLAN interface (4) which wraps the packet
using VNI 101.
4. The underlying physical interface (5) for the VXLAN interface forwards the packet to the network
node via the overlay network (6).
The following steps involve the network node:
1. The underlying physical interface (7) for the VXLAN interface forwards the packet to the VXLAN
interface (8) which unwraps the packet.
2. The self-service bridge router port (9) forwards the packet to the self-service network interface
(10) in the router namespace.
• For IPv4, the router performs SNAT on the packet which changes the source IP address to
the router IP address on the provider network and sends it to the gateway IP address on the
provider network via the gateway interface on the provider network (11).
• For IPv6, the router sends the packet to the next-hop IP address, typically the gateway IP
address on the provider network, via the provider gateway interface (11).
3. The router forwards the packet to the provider bridge router port (12).
4. The VLAN sub-interface port (13) on the provider bridge forwards the packet to the provider
physical network interface (14).
5. The provider physical network interface (14) adds VLAN tag 101 to the packet and forwards it to
the Internet via physical network infrastructure (15).
Note: Return traffic follows similar steps in reverse. However, without a floating IPv4 address, hosts on
the provider or external networks cannot originate connections to instances on the self-service network.
For instances with a floating IPv4 address, the network node performs SNAT on north-south traffic
passing from the instance to external networks such as the Internet and DNAT on north-south traffic
passing from external networks to the instance. Floating IP addresses and NAT do not apply to IPv6.
Thus, the network node routes IPv6 traffic in this scenario.
• The instance resides on compute node 1 and uses self-service network 1.
• A host on the Internet sends a packet to the instance.
The following steps involve the network node:
1. The physical network infrastructure (1) forwards the packet to the provider physical network in-
terface (2).
2. The provider physical network interface removes VLAN tag 101 and forwards the packet to the
VLAN sub-interface on the provider bridge.
3. The provider bridge forwards the packet to the self-service router gateway port on the provider
network (5).
• For IPv4, the router performs DNAT on the packet which changes the destination IP address
to the instance IP address on the self-service network and sends it to the gateway IP address
on the self-service network via the self-service interface (6).
• For IPv6, the router sends the packet to the next-hop IP address, typically the gateway IP
address on the self-service network, via the self-service interface (6).
4. The router forwards the packet to the self-service bridge router port (7).
5. The self-service bridge forwards the packet to the VXLAN interface (8) which wraps the packet
using VNI 101.
6. The underlying physical interface (9) for the VXLAN interface forwards the packet to the compute
node via the overlay network (10).
The following steps involve the compute node:
1. The underlying physical interface (11) for the VXLAN interface forwards the packet to the
VXLAN interface (12) which unwraps the packet.
2. Security group rules (13) on the self-service bridge handle firewalling and connection tracking for
the packet.
3. The self-service bridge instance port (14) forwards the packet to the instance interface (15) via
veth pair.
Note: Egress instance traffic flows similar to north-south scenario 1, except SNAT changes the source
IP address of the packet to the floating IPv4 address rather than the router IP address on the provider
network.
Instances with a fixed IPv4/IPv6 or floating IPv4 address on the same network communicate directly
between compute nodes containing those instances.
By default, the VXLAN protocol lacks knowledge of target location and uses multicast to discover it.
After discovery, it stores the location in the local forwarding database. In large deployments, the discov-
ery process can generate a significant amount of network traffic that all nodes must process. To eliminate the
latter and generally increase efficiency, the Networking service includes the layer-2 population mecha-
nism driver that automatically populates the forwarding database for VXLAN interfaces. The example
configuration enables this driver. For more information, see ML2 plug-in.
• Instance 1 resides on compute node 1 and uses self-service network 1.
• Instance 2 resides on compute node 2 and uses self-service network 1.
• Instance 1 sends a packet to instance 2.
The following steps involve compute node 1:
1. The instance 1 interface (1) forwards the packet to the self-service bridge instance port (2) via
veth pair.
2. Security group rules (3) on the self-service bridge handle firewalling and connection tracking for
the packet.
3. The self-service bridge forwards the packet to the VXLAN interface (4) which wraps the packet
using VNI 101.
4. The underlying physical interface (5) for the VXLAN interface forwards the packet to compute
node 2 via the overlay network (6).
The following steps involve compute node 2:
1. The underlying physical interface (7) for the VXLAN interface forwards the packet to the VXLAN
interface (8) which unwraps the packet.
2. Security group rules (9) on the self-service bridge handle firewalling and connection tracking for
the packet.
3. The self-service bridge instance port (10) forwards the packet to the instance 2 interface (11) via
veth pair.
Instances using a fixed IPv4/IPv6 address or floating IPv4 address communicate via router on the net-
work node. The self-service networks must reside on the same router.
• Instance 1 resides on compute node 1 and uses self-service network 1.
• Instance 2 resides on compute node 1 and uses self-service network 2.
• Instance 1 sends a packet to instance 2.
Note: Both instances reside on the same compute node to illustrate how VXLAN enables multiple
overlays to use the same layer-3 network.
This architecture example augments the self-service deployment example with a high-availability mech-
anism using the Virtual Router Redundancy Protocol (VRRP) via keepalived and provides failover
of routing for self-service networks. It requires a minimum of two network nodes because VRRP creates
one master (active) instance and at least one backup instance of each router.
During normal operation, keepalived on the master router periodically transmits heartbeat pack-
ets over a hidden network that connects all VRRP routers for a particular project. Each project with
VRRP routers uses a separate hidden network. By default this network uses the first value in the
tenant_network_types option in the ml2_conf.ini file. For additional control, you can
specify the self-service network type and physical network name for the hidden network using the
l3_ha_network_type and l3_ha_network_name options in the neutron.conf file.
If keepalived on the backup router stops receiving heartbeat packets, it assumes failure of the master
router and promotes the backup router to master router by configuring IP addresses on the interfaces in
the qrouter namespace. In environments with more than one backup router, keepalived on the
backup router with the next highest priority promotes that backup router to master router.
Note: This high-availability mechanism configures VRRP using the same priority for all routers. There-
fore, VRRP promotes the backup router with the highest IP address to the master router.
Warning: There is a known bug with keepalived v1.2.15 and earlier which can cause packet
loss when max_l3_agents_per_router is set to 3 or more. Therefore, we recommend that
you upgrade to keepalived v1.2.16 or greater when using this feature.
Interruption of VRRP heartbeat traffic between network nodes, typically due to a network interface or
physical network infrastructure failure, triggers a failover. Restarting the layer-3 agent, or failure of it,
does not trigger a failover providing keepalived continues to operate.
Consider the following attributes of this high-availability mechanism to determine practicality in your
environment:
• Instance network traffic on self-service networks using a particular router only traverses the master
instance of that router. Thus, resource limitations of a particular network node can impact all
master instances of routers on that network node without triggering failover to another network
node. However, you can configure the scheduler to distribute the master instance of each router
uniformly across a pool of network nodes to reduce the chance of resource contention on any
particular network node.
• Only supports self-service networks using a router. Provider networks operate at layer-2 and rely
on physical network infrastructure for redundancy.
• For instances with a floating IPv4 address, maintains state of network connections during failover
as a side effect of 1:1 static NAT. The mechanism does not actually implement connection track-
ing.
For production deployments, we recommend at least three network nodes with sufficient resources to
handle network traffic for the entire environment if one network node fails. Also, the remaining two
nodes can continue to provide redundancy.
Warning: This high-availability mechanism is not compatible with the layer-2 population mech-
anism. You must disable layer-2 population in the linuxbridge_agent.ini file and restart
the Linux bridge agent on all existing network and compute nodes prior to deploying the example
configuration.
Prerequisites
Note: You can keep the DHCP and metadata agents on each compute node or move them to the network
nodes.
Architecture
The following figure shows components and connectivity for one self-service network and one untagged
(flat) network. The master router resides on network node 1. In this particular case, the instance resides
on the same compute node as the DHCP agent for the network. If the DHCP agent resides on another
compute node, the latter only contains a DHCP namespace and Linux bridge with a port on the overlay
physical network interface.
Example configuration
Use the following example configuration as a template to add support for high-availability using VRRP
to an existing operational environment that supports self-service networks.
Controller node
[DEFAULT]
l3_ha = True
Network node 1
No changes.
Network node 2
1. Install the Networking service Linux bridge layer-2 agent and layer-3 agent.
2. In the neutron.conf file, configure common options:
[DEFAULT]
core_plugin = ml2
auth_strategy = keystone
[database]
# ...
[keystone_authtoken]
# ...
[nova]
# ...
[agent]
# ...
See the Installation Tutorials and Guides and Configuration Reference for your OpenStack re-
lease to obtain the appropriate additional configuration for the [DEFAULT], [database],
[keystone_authtoken], [nova], and [agent] sections.
3. In the linuxbridge_agent.ini file, configure the layer-2 agent.
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE
[vxlan]
enable_vxlan = True
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
[securitygroup]
firewall_driver = iptables
Warning: By default, Linux uses UDP port 8472 for VXLAN tunnel traffic. This default
value doesn't follow the IANA standard, which assigned UDP port 4789 for VXLAN com-
munication. As a consequence, if this node is part of a mixed deployment, where nodes with
both OVS and Linux bridge must communicate over VXLAN tunnels, it is recommended that
a line containing udp_dstport = 4789 be added to the [vxlan] section of all the Linux
bridge agents. OVS follows the IANA standard.
Replace PROVIDER_INTERFACE with the name of the underlying interface that handles
provider networks. For example, eth1.
Replace OVERLAY_INTERFACE_IP_ADDRESS with the IP address of the interface that han-
dles VXLAN overlays for self-service networks.
4. In the l3_agent.ini file, configure the layer-3 agent.
[DEFAULT]
interface_driver = linuxbridge
Compute nodes
No changes.
Similar to the self-service deployment example, this configuration supports multiple VXLAN self-
service networks. After enabling high-availability, all additional routers use VRRP. The following pro-
cedure creates an additional self-service network and router. The Networking service also supports
adding high-availability to existing routers. However, the procedure requires administratively disabling
and enabling each router which temporarily interrupts network connectivity for self-service networks
with interfaces on that router.
1. Source regular (non-administrative) project credentials.
2. Create a self-service network.
5. Create a router.
3. On each network node, verify creation of a qrouter namespace with the same ID.
Network node 1:
# ip netns
qrouter-b6206312-878e-497c-8ef7-eb384f8add96
Network node 2:
# ip netns
qrouter-b6206312-878e-497c-8ef7-eb384f8add96
Note: The namespace for router 1 from Linux bridge: Self-service networks should only appear
on network node 1 because of creation prior to enabling VRRP.
4. On each network node, show the IP address of interfaces in the qrouter namespace. With the
exception of the VRRP interface, only one namespace belonging to the master router instance
contains IP addresses on the interfaces.
Network node 1:
# ip netns exec qrouter-b6206312-878e-497c-8ef7-eb384f8add96 ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ha-eb820380-40@if21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
Network node 2:
# ip netns exec qrouter-b6206312-878e-497c-8ef7-eb384f8add96 ip addr show
5. Launch an instance with an interface on the additional self-service network. For example, a Cir-
rOS image using flavor ID 1.
1. Begin a continuous ping of both the floating IPv4 address and IPv6 address of the instance.
While performing the next three steps, you should see a minimal, if any, interruption of connec-
tivity to the instance.
2. On the network node with the master router, administratively disable the overlay network inter-
face.
3. On the other network node, verify promotion of the backup router to master router by noting
addition of IP addresses to the interfaces in the qrouter namespace.
4. On the original network node in step 2, administratively enable the overlay network interface.
Note that the master router remains on the network node in step 3.
The health of your keepalived instances can be automatically monitored via a bash script that verifies
connectivity to all available and configured gateway addresses. In the event that connectivity is lost, the
master router is rescheduled to another node.
If all routers lose connectivity simultaneously, the process of selecting a new master router will be
repeated in a round-robin fashion until one or more routers have their connectivity restored.
To enable this feature, edit the l3_agent.ini file:
[DEFAULT]
ha_vrrp_health_check_interval = 30
This high-availability mechanism simply augments Linux bridge: Self-service networks with failover of
layer-3 services to another router if the master router fails. Thus, you can reference Self-service network
traffic flow for normal operation.
The Open vSwitch (OVS) mechanism driver uses a combination of OVS and Linux bridges as inter-
connection devices. However, optionally enabling the OVS native implementation of security groups
removes the dependency on Linux bridges.
We recommend using Open vSwitch version 2.4 or higher. Optional features may require a higher
minimum version.
This architecture example provides layer-2 connectivity between instances and the physical network
infrastructure using VLAN (802.1q) tagging. It supports one untagged (flat) network and up to 4095
tagged (VLAN) networks. The actual quantity of VLAN networks depends on the physical network
infrastructure. For more information on provider networks, see Provider networks.
Warning: Linux distributions often package older releases of Open vSwitch that can introduce is-
sues during operation with the Networking service. We recommend using at least the latest long-term
stable (LTS) release of Open vSwitch for the best experience and support from Open vSwitch. See
http://www.openvswitch.org for available releases and the installation instructions for more details.
Prerequisites
Note: Larger deployments typically deploy the DHCP and metadata agents on a subset of compute
nodes to increase performance and redundancy. However, too many agents can overwhelm the message
bus. Also, to further simplify any deployment, you can omit the metadata agent and use a configuration
drive to provide metadata to instances.
Architecture
The following figure shows components and connectivity for one untagged (flat) network. In this par-
ticular case, the instance resides on the same compute node as the DHCP agent for the network. If the
DHCP agent resides on another compute node, the latter only contains a DHCP namespace with a port
on the OVS integration bridge.
The following figure describes virtual connectivity among components for two tagged (VLAN) net-
works. Essentially, all networks use a single OVS integration bridge with different internal VLAN tags.
The internal VLAN tags almost always differ from the network VLAN assignment in the Networking
service. Similar to the untagged network case, the DHCP agent may reside on a different compute node.
Note: These figures omit the controller node because it does not handle instance network traffic.
Example configuration
Use the following example configuration as a template to deploy provider networks in your environment.
Controller node
1. Install the Networking service components that provide the neutron-server service and ML2
plug-in.
2. In the neutron.conf file:
• Configure common options:
[DEFAULT]
core_plugin = ml2
auth_strategy = keystone
[database]
# ...
[keystone_authtoken]
# ...
[nova]
# ...
[agent]
# ...
See the Installation Tutorials and Guides and Configuration Reference for your Open-
Stack release to obtain the appropriate additional configuration for the [DEFAULT],
[database], [keystone_authtoken], [nova], and [agent] sections.
• Disable service plug-ins because provider networks do not require any. However, this breaks
portions of the dashboard that manage the Networking service. See the latest Install Tutorials
and Guides for more information.
[DEFAULT]
service_plugins =
• Enable two DHCP agents per network so both compute nodes can provide DHCP service
to provider networks.
[DEFAULT]
dhcp_agents_per_network = 2
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = openvswitch
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_vlan]
network_vlan_ranges = provider
Compute nodes
1. Install the Networking service OVS layer-2 agent, DHCP agent, and metadata agent.
2. Install OVS.
3. In the neutron.conf file, configure common options:
[DEFAULT]
core_plugin = ml2
auth_strategy = keystone
[database]
# ...
[keystone_authtoken]
# ...
[nova]
# ...
[agent]
# ...
See the Installation Tutorials and Guides and Configuration Reference for your OpenStack re-
lease to obtain the appropriate additional configuration for the [DEFAULT], [database],
[keystone_authtoken], [nova], and [agent] sections.
4. In the openvswitch_agent.ini file, configure the OVS agent:
[ovs]
bridge_mappings = provider:br-provider
[securitygroup]
firewall_driver = iptables_hybrid
5. In the dhcp_agent.ini file, configure the DHCP agent:
[DEFAULT]
interface_driver = openvswitch
enable_isolated_metadata = True
force_metadata = True
Note: The force_metadata option forces the DHCP agent to provide a host route to the
metadata service on 169.254.169.254 regardless of whether the subnet contains an interface
on a router, thus maintaining similar and predictable metadata behavior among subnets.
6. In the metadata_agent.ini file, configure the metadata agent:
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET
The value of METADATA_SECRET must match the value of the same option in the [neutron]
section of the nova.conf file.
7. Start the following services:
• OVS
8. Create the OVS provider bridge br-provider:
$ ovs-vsctl add-br br-provider
9. Add the provider network interface as a port on the OVS provider bridge br-provider:
$ ovs-vsctl add-port br-provider PROVIDER_INTERFACE
Replace PROVIDER_INTERFACE with the name of the underlying interface that handles
provider networks. For example, eth1.
10. Start the following services:
• OVS agent
• DHCP agent
• Metadata agent
The configuration supports one flat or multiple VLAN provider networks. For simplicity, the following
procedure creates one flat provider network.
1. Source the administrative project credentials.
2. Create a flat network.
Note: The share option allows any project to use this network. To limit access to provider
networks, see Role-Based Access Control (RBAC).
Important: Enabling DHCP causes the Networking service to provide DHCP which can interfere
with existing DHCP services on the physical network infrastructure. Use the --no-dhcp option
to have the subnet managed by existing DHCP services.
Note: The Networking service uses the layer-3 agent to provide router advertisement. Provider
networks rely on physical network infrastructure for layer-3 services rather than the layer-3 agent.
Thus, the physical network infrastructure must provide router advertisement on provider networks
for proper operation of IPv6.
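The network and subnet creation commands that the preceding notes refer to are not included in
this copy. A minimal sketch, assuming the hypothetical names provider1 and provider1-v4 and the
IPv4 range used later in this example; adjust names and addresses to your environment:
$ openstack network create --share --provider-physical-network provider \
  --provider-network-type flat provider1
$ openstack subnet create --network provider1 --subnet-range 203.0.113.0/24 \
  --gateway 203.0.113.1 provider1-v4
Add the --no-dhcp option to the subnet creation command if existing DHCP services already manage
the physical network, as noted above.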
4. Launch an instance with an interface on the provider network. For example, a CirrOS image using
flavor ID 1.
$ openstack server create --flavor 1 --image cirros \
--nic net-id=NETWORK_ID provider-instance1
6. On the controller node or any host with access to the provider network, ping the IPv4 and IPv6
addresses of the instance.
$ ping -c 4 203.0.113.13
PING 203.0.113.13 (203.0.113.13) 56(84) bytes of data.
64 bytes from 203.0.113.13: icmp_req=1 ttl=63 time=3.18 ms
64 bytes from 203.0.113.13: icmp_req=2 ttl=63 time=0.981 ms
64 bytes from 203.0.113.13: icmp_req=3 ttl=63 time=1.06 ms
64 bytes from 203.0.113.13: icmp_req=4 ttl=63 time=0.929 ms
$ ping6 -c 4 fd00:203:0:113:f816:3eff:fe58:be4e
PING fd00:203:0:113:f816:3eff:fe58:be4e(fd00:203:0:113:f816:3eff:fe58:be4e) 56 data bytes
The following sections describe the flow of network traffic in several common scenarios. North-south
network traffic travels between an instance and external network such as the Internet. East-west network
traffic travels between instances on the same or different networks. In all scenarios, the physical network
infrastructure handles switching and routing among provider networks and external networks such as the
Internet. Each case references one or more of the following components:
• Provider network 1 (VLAN)
– VLAN ID 101 (tagged)
– IP address ranges 203.0.113.0/24 and fd00:203:0:113::/64
– Gateway (via physical network infrastructure)
East-west scenario 1: Instances on the same network
Instances on the same network communicate directly between compute nodes containing those instances.
• Instance 1 resides on compute node 1 and uses provider network 1.
• Instance 2 resides on compute node 2 and uses provider network 1.
• Instance 1 sends a packet to instance 2.
The following steps involve compute node 1:
1. The instance 1 interface (1) forwards the packet to the security group bridge instance port (2) via
veth pair.
2. Security group rules (3) on the security group bridge handle firewalling and connection tracking
for the packet.
3. The security group bridge OVS port (4) forwards the packet to the OVS integration bridge security
group port (5) via veth pair.
4. The OVS integration bridge adds an internal VLAN tag to the packet.
5. The OVS integration bridge int-br-provider patch port (6) forwards the packet to the OVS
provider bridge phy-br-provider patch port (7).
6. The OVS provider bridge swaps the internal VLAN tag with actual VLAN tag 101.
7. The OVS provider bridge provider network port (8) forwards the packet to the physical network
interface (9).
8. The physical network interface forwards the packet to the physical network infrastructure switch
(10).
The following steps involve the physical network infrastructure:
1. The switch forwards the packet from compute node 1 to compute node 2 (11).
The following steps involve compute node 2:
1. The physical network interface (12) forwards the packet to the OVS provider bridge provider
network port (13).
2. The OVS provider bridge phy-br-provider patch port (14) forwards the packet to the OVS
integration bridge int-br-provider patch port (15).
3. The OVS integration bridge swaps the actual VLAN tag 101 with the internal VLAN tag.
4. The OVS integration bridge security group port (16) forwards the packet to the security group
bridge OVS port (17).
5. Security group rules (18) on the security group bridge handle firewalling and connection tracking
for the packet.
6. The security group bridge instance port (19) forwards the packet to the instance 2 interface (20)
via veth pair.
Note: Both instances reside on the same compute node to illustrate how VLAN tagging enables multiple
logical layer-2 networks to use the same physical layer-2 network.
3. The OVS integration bridge swaps the actual VLAN tag 102 with the internal VLAN tag.
4. The OVS integration bridge security group port (20) removes the internal VLAN tag and forwards
the packet to the security group bridge OVS port (21).
5. Security group rules (22) on the security group bridge handle firewalling and connection tracking
for the packet.
6. The security group bridge instance port (23) forwards the packet to the instance 2 interface (24)
via veth pair.
This architecture example augments Open vSwitch: Provider networks to support a nearly limitless
quantity of entirely virtual networks. Although the Networking service supports VLAN self-service
networks, this example focuses on VXLAN self-service networks. For more information on self-service
networks, see Self-service networks.
Prerequisites
Note: You can keep the DHCP and metadata agents on each compute node or move them to the network
node.
Architecture
The following figure shows components and connectivity for one self-service network and one untagged
(flat) provider network. In this particular case, the instance resides on the same compute node as the
DHCP agent for the network. If the DHCP agent resides on another compute node, the latter only
contains a DHCP namespace with a port on the OVS integration bridge.
Example configuration
Use the following example configuration as a template to add support for self-service networks to an
existing operational environment that supports provider networks.
Controller node
[DEFAULT]
service_plugins = router
allow_overlapping_ips = True
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population
[ml2_type_vxlan]
vni_ranges = VNI_START:VNI_END
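For example, assuming VNIs 1 through 1000 are set aside for self-service networks (an illustrative
range, not a requirement):
[ml2_type_vxlan]
vni_ranges = 1:1000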
Network node
1. Install the Networking service OVS layer-2 agent and layer-3 agent.
2. Install OVS.
3. In the neutron.conf file, configure common options:
[DEFAULT]
core_plugin = ml2
auth_strategy = keystone
[database]
# ...
[keystone_authtoken]
# ...
[nova]
# ...
[agent]
# ...
See the Installation Tutorials and Guides and Configuration Reference for your OpenStack re-
lease to obtain the appropriate additional configuration for the [DEFAULT], [database],
[keystone_authtoken], [nova], and [agent] sections.
4. Start the following services:
• OVS
5. Create the OVS provider bridge br-provider:
6. Add the provider network interface as a port on the OVS provider bridge br-provider:
Replace PROVIDER_INTERFACE with the name of the underlying interface that handles
provider networks. For example, eth1.
7. In the openvswitch_agent.ini file, configure the layer-2 agent.
[ovs]
bridge_mappings = provider:br-provider
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
[agent]
tunnel_types = vxlan
l2_population = True
[securitygroup]
firewall_driver = iptables_hybrid
[DEFAULT]
interface_driver = openvswitch
Compute nodes
[agent]
tunnel_types = vxlan
l2_population = True
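Only the [agent] fragment survives here; the compute nodes also need the overlay endpoint address
in the [ovs] section of the openvswitch_agent.ini file, mirroring the network node configuration
above. A minimal sketch, where OVERLAY_INTERFACE_IP_ADDRESS follows the same placeholder convention
used elsewhere in this guide:
[ovs]
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
[agent]
tunnel_types = vxlan
l2_population = True
[securitygroup]
firewall_driver = iptables_hybrid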
The configuration supports multiple VXLAN self-service networks. For simplicity, the following pro-
cedure creates one self-service network and a router with a gateway on the flat provider network. The
router uses NAT for IPv4 network traffic and directly routes IPv6 network traffic.
Note: IPv6 connectivity with self-service networks often requires addition of static routes to nodes and
physical network infrastructure.
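The commands that create the self-service network and its subnet are not reproduced in this copy.
A minimal sketch, assuming the names selfservice1 and selfservice1-v4 (placeholders; the addresses
match the verification steps below):
$ openstack network create selfservice1
$ openstack subnet create --network selfservice1 --subnet-range 192.0.2.0/24 selfservice1-v4
The router created in the next step is then typically attached to this subnet with
openstack router add subnet and given an external gateway on the provider network with
openstack router set --external-gateway.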
7. Create a router.
+-----------+-----------+
| Field | Value |
+-----------+-----------+
| direction | ingress |
| ethertype | IPv6 |
| protocol | ipv6-icmp |
+-----------+-----------+
5. Launch an instance with an interface on the self-service network. For example, a CirrOS image
using flavor ID 1.
+--------------------------------------+-----------------------+--------+----------------------------------------------------------+--------+---------+
| ID                                   | Name                  | Status | Networks                                                 | Image  | Flavor  |
+--------------------------------------+-----------------------+--------+----------------------------------------------------------+--------+---------+
| c055cdb0-ebb4-4d65-957c-35cbdbd59306 | selfservice-instance1 | ACTIVE | selfservice1=192.0.2.4, fd00:192:0:2:f816:3eff:fe30:9cb0 | cirros | m1.tiny |
+--------------------------------------+-----------------------+--------+----------------------------------------------------------+--------+---------+
Warning: The IPv4 address resides in a private IP address range (RFC1918). Thus, the Net-
working service performs source network address translation (SNAT) for the instance to access
external networks such as the Internet. Access from external networks such as the Internet to
the instance requires a floating IPv4 address. The Networking service performs destination
network address translation (DNAT) from the floating IPv4 address to the instance IPv4 ad-
dress on the self-service network. On the other hand, the Networking service architecture for
IPv6 lacks support for NAT due to the significantly larger address space and complexity of
NAT. Thus, floating IP addresses do not exist for IPv6 and the Networking service only per-
forms routing for IPv6 subnets on self-service networks. In other words, you cannot rely on
NAT to hide instances with IPv4 and IPv6 addresses or only IPv6 addresses and must properly
implement security groups to restrict access.
7. On the controller node or any host with access to the provider network, ping the IPv6 address of
the instance.
$ ping6 -c 4 fd00:192:0:2:f816:3eff:fe30:9cb0
PING fd00:192:0:2:f816:3eff:fe30:9cb0(fd00:192:0:2:f816:3eff:fe30:9cb0) 56 data bytes
64 bytes from fd00:192:0:2:f816:3eff:fe30:9cb0: icmp_seq=1 ttl=63 time=2.08 ms
64 bytes from fd00:192:0:2:f816:3eff:fe30:9cb0: icmp_seq=2 ttl=63 time=1.88 ms
8. Optionally, enable IPv4 access from external networks such as the Internet to the instance.
1. Create a floating IPv4 address on the provider network.
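The commands for this step and the association that follows were not carried over. A sketch,
assuming a provider network named provider1 (a placeholder) and the instance shown earlier; the
floating IP value matches the ping below:
$ openstack floating ip create provider1
$ openstack server add floating ip selfservice-instance1 203.0.113.16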
3. On the controller node or any host with access to the provider network, ping the floating
IPv4 address of the instance.
$ ping -c 4 203.0.113.16
PING 203.0.113.16 (203.0.113.16) 56(84) bytes of data.
64 bytes from 203.0.113.16: icmp_seq=1 ttl=63 time=3.41 ms
64 bytes from 203.0.113.16: icmp_seq=2 ttl=63 time=1.67 ms
64 bytes from 203.0.113.16: icmp_seq=3 ttl=63 time=1.47 ms
64 bytes from 203.0.113.16: icmp_seq=4 ttl=63 time=1.59 ms
10. Test IPv4 and IPv6 connectivity to the Internet or other external network.
The following sections describe the flow of network traffic in several common scenarios. North-south
network traffic travels between an instance and external network such as the Internet. East-west network
traffic travels between instances on the same or different networks. In all scenarios, the physical network
infrastructure handles switching and routing among provider networks and external networks such as the
Internet. Each case references one or more of the following components:
• Provider network (VLAN)
– VLAN ID 101 (tagged)
• Self-service network 1 (VXLAN)
– VXLAN ID (VNI) 101
• Self-service network 2 (VXLAN)
– VXLAN ID (VNI) 102
• Self-service router
– Gateway on the provider network
– Interface on self-service network 1
– Interface on self-service network 2
• Instance 1
• Instance 2
For instances with a fixed IPv4 address, the network node performs SNAT on north-south traffic passing
from self-service to external networks such as the Internet. For instances with a fixed IPv6 address, the
network node performs conventional routing of traffic between self-service and external networks.
• The instance resides on compute node 1 and uses self-service network 1.
• The instance sends a packet to a host on the Internet.
The following steps involve compute node 1:
1. The instance interface (1) forwards the packet to the security group bridge instance port (2) via
veth pair.
2. Security group rules (3) on the security group bridge handle firewalling and connection tracking
for the packet.
3. The security group bridge OVS port (4) forwards the packet to the OVS integration bridge security
group port (5) via veth pair.
4. The OVS integration bridge adds an internal VLAN tag to the packet.
5. The OVS integration bridge exchanges the internal VLAN tag for an internal tunnel ID.
6. The OVS integration bridge patch port (6) forwards the packet to the OVS tunnel bridge patch
port (7).
7. The OVS tunnel bridge (8) wraps the packet using VNI 101.
8. The underlying physical interface (9) for overlay networks forwards the packet to the network
node via the overlay network (10).
The following steps involve the network node:
1. The underlying physical interface (11) for overlay networks forwards the packet to the OVS tunnel
bridge (12).
2. The OVS tunnel bridge unwraps the packet and adds an internal tunnel ID to it.
3. The OVS tunnel bridge exchanges the internal tunnel ID for an internal VLAN tag.
4. The OVS tunnel bridge patch port (13) forwards the packet to the OVS integration bridge patch
port (14).
5. The OVS integration bridge port for the self-service network (15) removes the internal VLAN tag
and forwards the packet to the self-service network interface (16) in the router namespace.
• For IPv4, the router performs SNAT on the packet which changes the source IP address to
the router IP address on the provider network and sends it to the gateway IP address on the
provider network via the gateway interface on the provider network (17).
• For IPv6, the router sends the packet to the next-hop IP address, typically the gateway IP
address on the provider network, via the provider gateway interface (17).
6. The router forwards the packet to the OVS integration bridge port for the provider network (18).
7. The OVS integration bridge adds the internal VLAN tag to the packet.
8. The OVS integration bridge int-br-provider patch port (19) forwards the packet to the OVS
provider bridge phy-br-provider patch port (20).
9. The OVS provider bridge swaps the internal VLAN tag with actual VLAN tag 101.
10. The OVS provider bridge provider network port (21) forwards the packet to the physical network
interface (22).
11. The physical network interface forwards the packet to the Internet via physical network infras-
tructure (23).
Note: Return traffic follows similar steps in reverse. However, without a floating IPv4 address, hosts on
the provider or external networks cannot originate connections to instances on the self-service network.
For instances with a floating IPv4 address, the network node performs SNAT on north-south traffic
passing from the instance to external networks such as the Internet and DNAT on north-south traffic
passing from external networks to the instance. Floating IP addresses and NAT do not apply to IPv6.
Thus, the network node routes IPv6 traffic in this scenario.
• The instance resides on compute node 1 and uses self-service network 1.
• A host on the Internet sends a packet to the instance.
The following steps involve the network node:
1. The physical network infrastructure (1) forwards the packet to the provider physical network in-
terface (2).
2. The provider physical network interface forwards the packet to the OVS provider bridge provider
network port (3).
3. The OVS provider bridge swaps actual VLAN tag 101 with the internal VLAN tag.
4. The OVS provider bridge phy-br-provider port (4) forwards the packet to the OVS integra-
tion bridge int-br-provider port (5).
5. The OVS integration bridge port for the provider network (6) removes the internal VLAN tag and
forwards the packet to the provider network interface (6) in the router namespace.
• For IPv4, the router performs DNAT on the packet which changes the destination IP address
to the instance IP address on the self-service network and sends it to the gateway IP address
on the self-service network via the self-service interface (7).
• For IPv6, the router sends the packet to the next-hop IP address, typically the gateway IP
address on the self-service network, via the self-service interface (8).
6. The router forwards the packet to the OVS integration bridge port for the self-service network (9).
7. The OVS integration bridge adds an internal VLAN tag to the packet.
8. The OVS integration bridge exchanges the internal VLAN tag for an internal tunnel ID.
9. The OVS integration bridge patch-tun patch port (10) forwards the packet to the OVS tunnel
bridge patch-int patch port (11).
10. The OVS tunnel bridge (12) wraps the packet using VNI 101.
11. The underlying physical interface (13) for overlay networks forwards the packet to the network
node via the overlay network (14).
The following steps involve the compute node:
1. The underlying physical interface (15) for overlay networks forwards the packet to the OVS tunnel
bridge (16).
2. The OVS tunnel bridge unwraps the packet and adds an internal tunnel ID to it.
3. The OVS tunnel bridge exchanges the internal tunnel ID for an internal VLAN tag.
4. The OVS tunnel bridge patch-int patch port (17) forwards the packet to the OVS integration
bridge patch-tun patch port (18).
5. The OVS integration bridge removes the internal VLAN tag from the packet.
6. The OVS integration bridge security group port (19) forwards the packet to the security group
bridge OVS port (20) via veth pair.
7. Security group rules (21) on the security group bridge handle firewalling and connection tracking
for the packet.
8. The security group bridge instance port (22) forwards the packet to the instance interface (23) via
veth pair.
Note: Egress instance traffic flows similar to north-south scenario 1, except SNAT changes the source
IP address of the packet to the floating IPv4 address rather than the router IP address on the provider
network.
Instances with a fixed IPv4/IPv6 address or floating IPv4 address on the same network communicate
directly between compute nodes containing those instances.
By default, the VXLAN protocol lacks knowledge of target location and uses multicast to discover it.
After discovery, it stores the location in the local forwarding database. In large deployments, the
discovery process can generate a significant amount of network traffic that all nodes must process.
To eliminate the latter and generally increase efficiency, the Networking service includes the
layer-2 population mechanism driver that automatically populates the forwarding database for VXLAN interfaces. The example
configuration enables this driver. For more information, see ML2 plug-in.
• Instance 1 resides on compute node 1 and uses self-service network 1.
• Instance 2 resides on compute node 2 and uses self-service network 1.
• Instance 1 sends a packet to instance 2.
The following steps involve compute node 1:
1. The instance 1 interface (1) forwards the packet to the security group bridge instance port (2) via
veth pair.
2. Security group rules (3) on the security group bridge handle firewalling and connection tracking
for the packet.
3. The security group bridge OVS port (4) forwards the packet to the OVS integration bridge security
group port (5) via veth pair.
4. The OVS integration bridge adds an internal VLAN tag to the packet.
5. The OVS integration bridge exchanges the internal VLAN tag for an internal tunnel ID.
6. The OVS integration bridge patch port (6) forwards the packet to the OVS tunnel bridge patch
port (7).
7. The OVS tunnel bridge (8) wraps the packet using VNI 101.
8. The underlying physical interface (9) for overlay networks forwards the packet to compute node
2 via the overlay network (10).
The following steps involve compute node 2:
1. The underlying physical interface (11) for overlay networks forwards the packet to the OVS tunnel
bridge (12).
2. The OVS tunnel bridge unwraps the packet and adds an internal tunnel ID to it.
3. The OVS tunnel bridge exchanges the internal tunnel ID for an internal VLAN tag.
4. The OVS tunnel bridge patch-int patch port (13) forwards the packet to the OVS integration
bridge patch-tun patch port (14).
5. The OVS integration bridge removes the internal VLAN tag from the packet.
6. The OVS integration bridge security group port (15) forwards the packet to the security group
bridge OVS port (16) via veth pair.
7. Security group rules (17) on the security group bridge handle firewalling and connection tracking
for the packet.
8. The security group bridge instance port (18) forwards the packet to the instance 2 interface (19)
via veth pair.
Instances using a fixed IPv4/IPv6 address or floating IPv4 address communicate via router on the net-
work node. The self-service networks must reside on the same router.
• Instance 1 resides on compute node 1 and uses self-service network 1.
• Instance 2 resides on compute node 1 and uses self-service network 2.
• Instance 1 sends a packet to instance 2.
Note: Both instances reside on the same compute node to illustrate how VXLAN enables multiple
overlays to use the same layer-3 network.
10. The OVS integration bridge patch-tun patch port (19) forwards the packet to the OVS tunnel
bridge patch-int patch port (20).
11. The OVS tunnel bridge (21) wraps the packet using VNI 102.
12. The underlying physical interface (22) for overlay networks forwards the packet to the compute
node via the overlay network (23).
The following steps involve the compute node:
1. The underlying physical interface (24) for overlay networks forwards the packet to the OVS tunnel
bridge (25).
2. The OVS tunnel bridge unwraps the packet and adds an internal tunnel ID to it.
3. The OVS tunnel bridge exchanges the internal tunnel ID for an internal VLAN tag.
4. The OVS tunnel bridge patch-int patch port (26) forwards the packet to the OVS integration
bridge patch-tun patch port (27).
5. The OVS integration bridge removes the internal VLAN tag from the packet.
6. The OVS integration bridge security group port (28) forwards the packet to the security group
bridge OVS port (29) via veth pair.
7. Security group rules (30) on the security group bridge handle firewalling and connection tracking
for the packet.
8. The security group bridge instance port (31) forwards the packet to the instance interface (32) via
veth pair.
This architecture example augments the self-service deployment example with a high-availability mech-
anism using the Virtual Router Redundancy Protocol (VRRP) via keepalived and provides failover
of routing for self-service networks. It requires a minimum of two network nodes because VRRP creates
one master (active) instance and at least one backup instance of each router.
During normal operation, keepalived on the master router periodically transmits heartbeat pack-
ets over a hidden network that connects all VRRP routers for a particular project. Each project with
VRRP routers uses a separate hidden network. By default this network uses the first value in the
tenant_network_types option in the ml2_conf.ini file. For additional control, you can
specify the self-service network type and physical network name for the hidden network using the
l3_ha_network_type and l3_ha_network_name options in the neutron.conf file.
If keepalived on the backup router stops receiving heartbeat packets, it assumes failure of the master
router and promotes the backup router to master router by configuring IP addresses on the interfaces in
the qrouter namespace. In environments with more than one backup router, keepalived on the
backup router with the next highest priority promotes that backup router to master router.
Note: This high-availability mechanism configures VRRP using the same priority for all routers. There-
fore, VRRP promotes the backup router with the highest IP address to the master router.
Warning: There is a known bug with keepalived v1.2.15 and earlier which can cause packet
loss when max_l3_agents_per_router is set to 3 or more. Therefore, we recommend that
you upgrade to keepalived v1.2.16 or greater when using this feature.
Interruption of VRRP heartbeat traffic between network nodes, typically due to a network interface or
physical network infrastructure failure, triggers a failover. Restarting the layer-3 agent, or failure of it,
does not trigger a failover providing keepalived continues to operate.
Consider the following attributes of this high-availability mechanism to determine practicality in your
environment:
• Instance network traffic on self-service networks using a particular router only traverses the master
instance of that router. Thus, resource limitations of a particular network node can impact all
master instances of routers on that network node without triggering failover to another network
node. However, you can configure the scheduler to distribute the master instance of each router
uniformly across a pool of network nodes to reduce the chance of resource contention on any
particular network node.
• Only supports self-service networks using a router. Provider networks operate at layer-2 and rely
on physical network infrastructure for redundancy.
• For instances with a floating IPv4 address, maintains state of network connections during failover
as a side effect of 1:1 static NAT. The mechanism does not actually implement connection track-
ing.
For production deployments, we recommend at least three network nodes with sufficient resources to
handle network traffic for the entire environment if one network node fails. Also, the remaining two
nodes can continue to provide redundancy.
Prerequisites
Note: You can keep the DHCP and metadata agents on each compute node or move them to the network
nodes.
Architecture
The following figure shows components and connectivity for one self-service network and one untagged
(flat) network. The primary router resides on network node 1. In this particular case, the instance resides
on the same compute node as the DHCP agent for the network. If the DHCP agent resides on another
compute node, the latter only contains a DHCP namespace and Linux bridge with a port on the overlay
physical network interface.
Example configuration
Use the following example configuration as a template to add support for high-availability using VRRP
to an existing operational environment that supports self-service networks.
Controller node
[DEFAULT]
l3_ha = True
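Optionally, the same [DEFAULT] section can bound how many layer-3 agents host each HA router; the
max_l3_agents_per_router option mentioned in the keepalived warning above lives here. A sketch,
assuming three network nodes:
[DEFAULT]
l3_ha = True
max_l3_agents_per_router = 3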
Network node 1
No changes.
Network node 2
1. Install the Networking service OVS layer-2 agent and layer-3 agent.
2. Install OVS.
3. In the neutron.conf file, configure common options:
[DEFAULT]
core_plugin = ml2
auth_strategy = keystone
[database]
# ...
[keystone_authtoken]
# ...
[nova]
# ...
[agent]
# ...
See the Installation Tutorials and Guides and Configuration Reference for your OpenStack re-
lease to obtain the appropriate additional configuration for the [DEFAULT], [database],
[keystone_authtoken], [nova], and [agent] sections.
4. Start the following services:
• OVS
6. Add the provider network interface as a port on the OVS provider bridge br-provider:
Replace PROVIDER_INTERFACE with the name of the underlying interface that handles
provider networks. For example, eth1.
7. In the openvswitch_agent.ini file, configure the layer-2 agent.
[ovs]
bridge_mappings = provider:br-provider
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
[agent]
tunnel_types = vxlan
l2_population = true
[securitygroup]
firewall_driver = iptables_hybrid
[DEFAULT]
interface_driver = openvswitch
Compute nodes
No changes.
Similar to the self-service deployment example, this configuration supports multiple VXLAN self-
service networks. After enabling high-availability, all additional routers use VRRP. The following pro-
cedure creates an additional self-service network and router. The Networking service also supports
adding high-availability to existing routers. However, the procedure requires administratively disabling
and enabling each router which temporarily interrupts network connectivity for self-service networks
with interfaces on that router.
1. Source regular (non-administrative) project credentials.
2. Create a self-service network.
$ openstack network create selfservice2
+-------------------------+--------------+
| Field | Value |
+-------------------------+--------------+
| admin_state_up | UP |
| mtu | 1450 |
| name | selfservice2 |
| port_security_enabled | True |
5. Create a router.
3. On each network node, verify creation of a qrouter namespace with the same ID.
Network node 1:
# ip netns
qrouter-b6206312-878e-497c-8ef7-eb384f8add96
Network node 2:
# ip netns
qrouter-b6206312-878e-497c-8ef7-eb384f8add96
Note: The namespace for router 1 from Open vSwitch: Self-service networks should only appear
on network node 1 because of creation prior to enabling VRRP.
4. On each network node, show the IP address of interfaces in the qrouter namespace. With the
exception of the VRRP interface, only one namespace belonging to the master router instance
contains IP addresses on the interfaces.
Network node 1:
# ip netns exec qrouter-b6206312-878e-497c-8ef7-eb384f8add96 ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ha-eb820380-40@if21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
Network node 2:
# ip netns exec qrouter-b6206312-878e-497c-8ef7-eb384f8add96 ip addr show
5. Launch an instance with an interface on the additional self-service network. For example, a Cir-
rOS image using flavor ID 1.
1. Begin a continuous ping of both the floating IPv4 address and IPv6 address of the instance.
While performing the next three steps, you should see a minimal, if any, interruption of connec-
tivity to the instance.
2. On the network node with the master router, administratively disable the overlay network inter-
face.
3. On the other network node, verify promotion of the backup router to master router by noting
addition of IP addresses to the interfaces in the qrouter namespace.
4. On the original network node in step 2, administratively enable the overlay network interface.
Note that the master router remains on the network node in step 3.
The health of your keepalived instances can be automatically monitored via a bash script that verifies
connectivity to all available and configured gateway addresses. In the event that connectivity is lost, the
master router is rescheduled to another node.
If all routers lose connectivity simultaneously, the process of selecting a new master router will be
repeated in a round-robin fashion until one or more routers have their connectivity restored.
To enable this feature, edit the l3_agent.ini file:
[DEFAULT]
ha_vrrp_health_check_interval = 30
This high-availability mechanism simply augments Open vSwitch: Self-service networks with failover
of layer-3 services to another router if the primary router fails. Thus, you can reference Self-service
network traffic flow for normal operation.
This architecture example augments the self-service deployment example with the Distributed Virtual
Router (DVR) high-availability mechanism that provides connectivity between self-service and provider
networks on compute nodes rather than network nodes for specific scenarios. For instances with a
floating IPv4 address, routing between self-service and provider networks resides completely on the
compute nodes to eliminate single point of failure and performance issues with network nodes. Routing
also resides completely on the compute nodes for instances with a fixed or floating IPv4 address using
self-service networks on the same distributed virtual router. However, instances with a fixed IP address
still rely on the network node for routing and SNAT services between self-service and provider networks.
Consider the following attributes of this high-availability mechanism to determine practicality in your
environment:
• Only provides connectivity to an instance via the compute node on which the instance resides
if the instance resides on a self-service network with a floating IPv4 address. Instances on self-
service networks with only an IPv6 address or both IPv4 and IPv6 addresses rely on the network
node for IPv6 connectivity.
• The instance of a router on each compute node consumes an IPv4 address on the provider network
on which it contains a gateway.
Prerequisites
Note: Consider adding at least one additional network node to provide high-availability for instances
with a fixed IP address. See Distributed Virtual Routing with VRRP for more information.
Architecture
The following figure shows components and connectivity for one self-service network and one untagged
(flat) network. In this particular case, the instance resides on the same compute node as the DHCP agent
for the network. If the DHCP agent resides on another compute node, the latter only contains a DHCP
namespace with a port on the OVS integration bridge.
Example configuration
Use the following example configuration as a template to add support for high-availability using DVR
to an existing operational environment that supports self-service networks.
Controller node
[DEFAULT]
router_distributed = True
Note: For a large-scale cloud, if your deployment is running DVR with DHCP, we recommend setting
host_dvr_for_dhcp = False to achieve higher L3 agent router processing performance. When this option
is set to False, DNS functionality is not available via the DHCP namespace (dnsmasq); instead, a
different nameserver has to be configured, for example, by specifying a value in dns_nameservers
for subnets.
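For example, a nameserver can be attached to an existing subnet roughly as follows (the subnet
name selfservice2-v4 and the resolver address are placeholders):
$ openstack subnet set --dns-nameserver 203.0.113.8 selfservice2-v4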
Network node
[agent]
enable_distributed_routing = True
2. In the l3_agent.ini file, configure the layer-3 agent to provide SNAT services.
[DEFAULT]
agent_mode = dvr_snat
Compute nodes
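The compute node configuration itself was dropped from this copy. As a hedged sketch, distributed
routing is normally enabled in the layer-2 agent and the local layer-3 agent runs in dvr mode;
verify the exact options against your release:
In the openvswitch_agent.ini file:
[agent]
enable_distributed_routing = True
In the l3_agent.ini file:
[DEFAULT]
agent_mode = dvr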
Similar to the self-service deployment example, this configuration supports multiple VXLAN self-
service networks. After enabling high-availability, all additional routers use distributed routing. The
following procedure creates an additional self-service network and router. The Networking service also
supports adding distributed routing to existing routers.
1. Source regular (non-administrative) project credentials.
2. Create a self-service network.
5. Create a router.
3. On each compute node, verify creation of a qrouter namespace with the same ID.
Compute node 1:
# ip netns
qrouter-78d2f628-137c-4f26-a257-25fc20f203c1
Compute node 2:
# ip netns
qrouter-78d2f628-137c-4f26-a257-25fc20f203c1
4. On the network node, verify creation of the snat and qrouter namespaces with the same ID.
# ip netns
snat-78d2f628-137c-4f26-a257-25fc20f203c1
qrouter-78d2f628-137c-4f26-a257-25fc20f203c1
Note: The namespace for router 1 from Open vSwitch: Self-service networks should also appear
on network node 1 because of creation prior to enabling distributed routing.
5. Launch an instance with an interface on the additional self-service network. For example, a Cir-
rOS image using flavor ID 1.
+--------------------------------------+-----------------------+--------+----------------------------------------------------------+--------+---------+
| ID                                   | Name                  | Status | Networks                                                 | Image  | Flavor  |
+--------------------------------------+-----------------------+--------+----------------------------------------------------------+--------+---------+
| bde64b00-77ae-41b9-b19a-cd8e378d9f8b | selfservice-instance2 | ACTIVE | selfservice2=fd00:192:0:2:f816:3eff:fe71:e93e, 192.0.2.4 | cirros | m1.tiny |
+--------------------------------------+-----------------------+--------+----------------------------------------------------------+--------+---------+
9. On the compute node containing the instance, verify creation of the fip namespace with the same
ID as the provider network.
# ip netns
fip-4bfa3075-b4b2-4f7d-b88e-df1113942d43
The following sections describe the flow of network traffic in several common scenarios. North-south
network traffic travels between an instance and external network such as the Internet. East-west network
traffic travels between instances on the same or different networks. In all scenarios, the physical network
infrastructure handles switching and routing among provider networks and external networks such as the
Internet. Each case references one or more of the following components:
• Provider network (VLAN)
– VLAN ID 101 (tagged)
• Self-service network 1 (VXLAN)
– VXLAN ID (VNI) 101
• Self-service network 2 (VXLAN)
– VXLAN ID (VNI) 102
• Self-service router
– Gateway on the provider network
– Interface on self-service network 1
– Interface on self-service network 2
• Instance 1
• Instance 2
This section only contains flow scenarios that benefit from distributed virtual routing or that differ from
conventional operation. For other flow scenarios, see Network traffic flow.
Similar to North-south scenario 1: Instance with a fixed IP address, except the router namespace on the
network node becomes the SNAT namespace. The network node still contains the router namespace, but
it serves no purpose in this case.
For instances with a floating IPv4 address using a self-service network on a distributed router, the
compute node containing the instance performs SNAT on north-south traffic passing from the instance
to external networks such as the Internet and DNAT on north-south traffic passing from external
networks to the instance. Floating IP addresses and NAT do not apply to IPv6. Thus, the network node
routes IPv6 traffic in this scenario.
• Instance 1 resides on compute node 1 and uses self-service network 1.
• A host on the Internet sends a packet to the instance.
The following steps involve the compute node:
1. The physical network infrastructure (1) forwards the packet to the provider physical network in-
terface (2).
2. The provider physical network interface forwards the packet to the OVS provider bridge provider
network port (3).
3. The OVS provider bridge swaps actual VLAN tag 101 with the internal VLAN tag.
4. The OVS provider bridge phy-br-provider port (4) forwards the packet to the OVS integra-
tion bridge int-br-provider port (5).
5. The OVS integration bridge port for the provider network (6) removes the internal VLAN tag
and forwards the packet to the provider network interface (7) in the floating IP namespace. This
interface responds to any ARP requests for the instance floating IPv4 address.
6. The floating IP namespace routes the packet (8) to the distributed router namespace (9) using a
pair of IP addresses on the DVR internal network. This namespace contains the instance floating
IPv4 address.
7. The router performs DNAT on the packet which changes the destination IP address to the instance
IP address on the self-service network via the self-service network interface (10).
8. The router forwards the packet to the OVS integration bridge port for the self-service network
(11).
9. The OVS integration bridge adds an internal VLAN tag to the packet.
10. The OVS integration bridge removes the internal VLAN tag from the packet.
11. The OVS integration bridge security group port (12) forwards the packet to the security group
bridge OVS port (13) via veth pair.
12. Security group rules (14) on the security group bridge handle firewalling and connection tracking
for the packet.
13. The security group bridge instance port (15) forwards the packet to the instance interface (16) via
veth pair.
Note: Egress traffic follows similar steps in reverse, except SNAT changes the source IPv4 address of
the packet to the floating IPv4 address.
Instances with fixed IPv4/IPv6 address or floating IPv4 address on the same compute node communicate
via router on the compute node. Instances on different compute nodes communicate via an instance of
the router on each compute node.
Note: This scenario places the instances on different compute nodes to show the most complex situa-
tion.
3. The security group bridge OVS port (4) forwards the packet to the OVS integration bridge security
group port (5) via veth pair.
4. The OVS integration bridge adds an internal VLAN tag to the packet.
5. The OVS integration bridge port for self-service network 1 (6) removes the internal VLAN tag
and forwards the packet to the self-service network 1 interface in the distributed router namespace
(7).
6. The distributed router namespace routes the packet to self-service network 2.
7. The self-service network 2 interface in the distributed router namespace (8) forwards the packet
to the OVS integration bridge port for self-service network 2 (9).
8. The OVS integration bridge adds an internal VLAN tag to the packet.
9. The OVS integration bridge exchanges the internal VLAN tag for an internal tunnel ID.
10. The OVS integration bridge patch-tun port (10) forwards the packet to the OVS tunnel bridge
patch-int port (11).
11. The OVS tunnel bridge (12) wraps the packet using VNI 101.
12. The underlying physical interface (13) for overlay networks forwards the packet to compute node
2 via the overlay network (14).
The following steps involve compute node 2:
1. The underlying physical interface (15) for overlay networks forwards the packet to the OVS tunnel
bridge (16).
2. The OVS tunnel bridge unwraps the packet and adds an internal tunnel ID to it.
3. The OVS tunnel bridge exchanges the internal tunnel ID for an internal VLAN tag.
4. The OVS tunnel bridge patch-int patch port (17) forwards the packet to the OVS integration
bridge patch-tun patch port (18).
5. The OVS integration bridge removes the internal VLAN tag from the packet.
6. The OVS integration bridge security group port (19) forwards the packet to the security group
bridge OVS port (20) via veth pair.
7. Security group rules (21) on the security group bridge handle firewalling and connection tracking
for the packet.
8. The security group bridge instance port (22) forwards the packet to the instance 2 interface (23)
via veth pair.
Note: Routing between self-service networks occurs on the compute node containing the instance
sending the packet. In this scenario, routing occurs on compute node 1 for packets from instance 1 to
instance 2 and on compute node 2 for packets from instance 2 to instance 1.
8.4 Operations
Network IP Availability is an information-only API extension that allows a user or process to determine
the number of IP addresses that are consumed across networks and the allocation pools of their subnets.
This extension was added to neutron in the Mitaka release.
This section illustrates how you can get the Network IP address availability through the command-line
interface.
Get Network IP address availability for all IPv4 networks:
$ openstack ip availability list
+--------------------------------------+--------------+----------------------+----------+
| Network ID                           | Network Name | Total IPs            | Used IPs |
+--------------------------------------+--------------+----------------------+----------+
| 363a611a-b08b-4281-b64e-198d90cb94fd | private      | 18446744073709551614 |        3 |
| c92d0605-caf2-4349-b1b8-8d5f9ac91df8 | public       | 18446744073709551614 |        1 |
+--------------------------------------+--------------+----------------------+----------+
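To inspect a single network instead of the whole list, the same extension provides a show command;
NETWORK_ID below is the network to inspect. Partial example output follows.
$ openstack ip availability show NETWORK_ID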
+------------+--------------------------------------+
| Field      | Value                                |
+------------+--------------------------------------+
| network_id | 0bf90de6-fc0f-4dba-b80d-96670dfb331a |
Various virtual networking resources support tags for use by external systems or any other clients of the
Networking service API.
All resources that support standard attributes are applicable for tagging. This includes:
• networks
• subnets
• subnetpools
• ports
• routers
• floatingips
• logs
• security-groups
• security-group-rules
• segments
• policies
• trunks
• network_segment_ranges
Use cases
The following use cases refer to adding tags to networks, but the same can be applicable to any other
supported Networking service resource:
1. Ability to map different networks in different OpenStack locations to one logically same network
(for multi-site OpenStack).
2. Ability to map IDs from different management/orchestration systems to OpenStack networks in
mixed environments. For example, in the Kuryr project, the Docker network ID is mapped to the
Neutron network ID.
3. Ability to leverage tags by deployment tools.
4. Ability to tag information about provider networks (for example, high-bandwidth, low-latency,
and so on).
The API allows searching/filtering of the GET /v2.0/networks API. The following query parame-
ters are supported:
• tags
• tags-any
• not-tags
• not-tags-any
To request the list of networks that have a single tag, the tags argument should be set to the desired
tag name. Example:
GET /v2.0/networks?tags=red
To request the list of networks that have two or more tags, the tags argument should be set to the list of
tags, separated by commas. In this case, the tags given must all be present for a network to be included
in the query result. Example that returns networks that have the red and blue tags:
GET /v2.0/networks?tags=red,blue
To request the list of networks that have one or more of a list of given tags, the tags-any argument
should be set to the list of tags, separated by commas. In this case, as long as one of the given tags is
present, the network will be included in the query result. Example that returns the networks that have
the red or the blue tag:
GET /v2.0/networks?tags-any=red,blue
To request the list of networks that do not have one or more tags, the not-tags argument should be set
to the list of tags, separated by commas. In this case, only the networks that do not have any of the given
tags will be included in the query results. Example that returns the networks that do not have either red
or blue tag:
GET /v2.0/networks?not-tags=red,blue
To request the list of networks that do not have at least one of a list of tags, the not-tags-any
argument should be set to the list of tags, separated by commas. In this case, only the networks that
do not have at least one of the given tags will be included in the query result. Example that returns the
networks that do not have the red tag, or do not have the blue tag:
GET /v2.0/networks?not-tags-any=red,blue
The tags, tags-any, not-tags, and not-tags-any arguments can be combined to build more
complex queries. Example:
GET /v2.0/networks?tags=red,blue&tags-any=green,orange
The above example returns any networks that have the red and blue tags, plus at least one of green and
orange.
Complex queries may have contradictory parameters. Example:
GET /v2.0/networks?tags=blue&not-tags=blue
In this case, we should let the Networking service find these networks. Obviously, there are no such
networks and the service will return an empty list.
User workflow
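Only truncated command output survives in this copy. Tags are managed with the set and unset
commands of the OpenStack client; a sketch using the network named net that appears in the output
below (the tag names red and blue match the query examples above):
$ openstack network set --tag red net
$ openstack network set --tag blue net
$ openstack network unset --tag red net
The truncated network show output that follows reflects this network.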
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | UP                                   |
| availability_zone_hints   |                                      |
| availability_zones        | nova                                 |
| created_at                | 2018-07-11T09:44:50Z                 |
| description               |                                      |
| dns_domain                | None                                 |
| id                        | ab442634-1cc9-49e5-bd49-0dac9c811f69 |
| ipv4_address_scope        | None                                 |
| ipv6_address_scope        | None                                 |
| is_default                | None                                 |
| is_vlan_transparent       | None                                 |
| mtu                       | 1450                                 |
| name                      | net                                  |
| port_security_enabled     | True                                 |
| project_id                | e6710680bfd14555891f265644e1dd5c     |
| provider:network_type     | vxlan                                |
| provider:physical_network | None                                 |
| provider:segmentation_id  | 1047                                 |
| qos_policy_id             | None                                 |
| revision_number           | 5                                    |
| router:external           | Internal                             |
| segments                  | None                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
Get list of resources with tag filters from networks. The networks are: test-net1 with red tag, test-net2
with red and blue tags, test-net3 with red, blue, and green tags, and test-net4 with green tag.
Get list of resources with tags filter:
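The filtered listing output was not carried over. With the OpenStack client, the query parameters
described above map onto network list options roughly as follows; option names are a hedged
assumption and should be verified against your client version:
$ openstack network list --tags red
$ openstack network list --tags red,blue
$ openstack network list --any-tags red,blue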
Limitations
Filtering resources with a tag whose name contains a comma is not supported. Thus, do not apply
such tag names to resources.
Future support
In future releases, the Networking service may support setting tags for additional resources.
The Networking service provides a purge mechanism to delete the following network resources for a
project:
• Networks
• Subnets
• Ports
• Router interfaces
• Routers
• Floating IP addresses
• Security groups
Typically, one uses this mechanism to delete networking resources for a defunct project regardless of its
existence in the Identity service.
Usage
1. Source the necessary project credentials. The administrative project can delete resources for all
other projects. A regular project can delete its own network resources and those belonging to other
projects for which it has sufficient access.
2. Delete the network resources for a particular project.
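The purge command itself is not shown above. With the neutron client used elsewhere in this chapter,
the step is roughly the following sketch, where PROJECT_ID is the ID of the project to clean up:
$ neutron purge PROJECT_ID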
A quota limits the number of available resources. A default quota might be enforced for all projects.
When you try to create more resources than the quota allows, an error occurs:
Per-project quota configuration is also supported by the quota extension API. See Configure per-project
quotas for details.
In the Networking default quota mechanism, all projects have the same quota values, such as the number
of resources that a project can create.
The quota value is defined in the OpenStack Networking /etc/neutron/neutron.conf config-
uration file. This example shows the default quota values:
[quotas]
# number of networks allowed per tenant, and minus means unlimited
quota_network = 10
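The file defines similar options for the other core resources; the values below are illustrative
defaults only and may differ in your release:
[quotas]
# number of subnets allowed per tenant, and minus means unlimited
quota_subnet = 100
# number of ports allowed per tenant, and minus means unlimited
quota_port = 500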
OpenStack Networking also supports quotas for L3 resources: router and floating IP. Add these lines to
the quotas section in the /etc/neutron/neutron.conf file:
[quotas]
# number of routers allowed per tenant, and minus means unlimited
quota_router = 10
# number of floating IPs allowed per tenant, and minus means unlimited
quota_floatingip = 50
OpenStack Networking also supports quotas for security group resources: number of security groups
and number of rules. Add these lines to the quotas section in the /etc/neutron/neutron.
conf file:
[quotas]
# number of security groups per tenant, and minus means unlimited
quota_security_group = 10
# number of security rules allowed per tenant, and minus means unlimited
quota_security_group_rule = 100
OpenStack Networking also supports per-project quota limit by quota extension API.
Todo: This document needs to be migrated to using openstack commands rather than the deprecated
neutron commands.
1. Configure Networking to show per-project quotas by setting the quota_driver option in the
neutron.conf file:
[quotas]
quota_driver = neutron.db.quota_db.DbQuotaDriver
When you set this option, the output for Networking commands shows quotas.
2. List Networking extensions.
To list the Networking extensions, run this command:
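One way to list them is with the unified OpenStack client (a hedged example; the rest of this
section uses the deprecated neutron client):
$ openstack extension list --network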
The command shows the quotas extension, which provides per-project quota management sup-
port.
Note: Many of the extensions shown below are supported in the Mitaka release and later.
Note: Only some plug-ins support per-project quotas. Specifically, Open vSwitch, Linux Bridge,
and VMware NSX support them, but new versions of other plug-ins might bring additional func-
tionality. See the documentation for each plug-in.
$ neutron quota-list
+------------+---------+------+--------+--------+----------------------------------+
| floatingip | network | port | router | subnet | tenant_id                        |
+------------+---------+------+--------+--------+----------------------------------+
| 25         | 10      | 30   | 10     | 10     | bff5c9455ee24231b5bc713c1b96d422 |
+------------+---------+------+--------+--------+----------------------------------+
The following command shows the output for a non-administrative user.
$ neutron quota-show
+---------------------+-------+
| Field | Value |
+---------------------+-------+
| floatingip | 50 |
| network | 10 |
| port | 50 |
| rbac_policy | 10 |
| router | 10 |
| security_group | 10 |
| security_group_rule | 100 |
| subnet | 10 |
| subnetpool | -1 |
+---------------------+-------+
You can update quotas for multiple resources through one command.
$ neutron quota-update --tenant_id 6f88036c45344d9999a1f971e4882723 \
  --subnet 5 --port 20
+---------------------+-------+
| Field | Value |
+---------------------+-------+
| floatingip | 50 |
| network | 5 |
| port | 20 |
| rbac_policy | 10 |
| router | 10 |
| security_group | 10 |
| security_group_rule | 100 |
| subnet | 5 |
| subnetpool | -1 |
+---------------------+-------+
To update the limits for an L3 resource such as router or floating IP, you must define new values
for the quotas after the -- directive.
This example updates the limit of the number of floating IPs for the specified project.
$ neutron quota-update --tenant_id 6f88036c45344d9999a1f971e4882723 \
  --floatingip 20
+---------------------+-------+
| Field | Value |
+---------------------+-------+
| floatingip | 20 |
| network | 5 |
| port | 20 |
| rbac_policy | 10 |
| router | 10 |
| security_group | 10 |
| security_group_rule | 100 |
| subnet | 5 |
| subnetpool | -1 |
+---------------------+-------+
You can update the limits of multiple resources, including both L2 and L3 resources, in one command:
$ neutron quota-update --tenant_id 6f88036c45344d9999a1f971e4882723 \
--network 3 --subnet 3 --port 3 --floatingip 3 --router 3
+---------------------+-------+
| Field | Value |
+---------------------+-------+
| floatingip | 3 |
| network | 3 |
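The reset command that the next sentence refers to is presumably the quota-delete command of the
deprecated neutron client; a sketch using the same project ID as above:
$ neutron quota-delete --tenant_id 6f88036c45344d9999a1f971e4882723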
After you run this command, you can see that quota values for the project are reset to the default
values.
Note: Listing default quotas with the OpenStack command-line client provides all quotas for
networking and other services. Previously, neutron quota-show --tenant_id would list only
networking quotas.
8.5 Migration
8.5.1 Database
The upgrade of the Networking service database is implemented with Alembic migration chains. The
migrations in the alembic/versions contain the changes needed to migrate from older Networking
service releases to newer ones.
Since Liberty, Networking maintains two parallel Alembic migration branches.
The first branch is called expand and is used to store expansion-only migration rules. These rules are
strictly additive and can be applied while the Neutron server is running.
The second branch is called contract and is used to store those migration rules that are not safe to apply
while Neutron server is running.
The intent of separate branches is to allow invoking those safe migrations from the expand branch while
the Neutron server is running and therefore reducing downtime needed to upgrade the service.
A database management command-line tool uses the Alembic library to manage the migration.
The database management command-line tool is called neutron-db-manage. Pass the --help
option to the tool for usage information.
The tool takes some options followed by some commands:
The tool needs to access the database connection string, which is provided in the neutron.conf
configuration file in an installation. The tool automatically reads from /etc/neutron/neutron.
conf if it is present. If the configuration is in a different location, use the following command:
$ neutron-db-manage --database-connection \
  mysql+pymysql://USER:PASSWORD@DB_HOST/neutron?charset=utf8 <commands>
The branches, current, and history commands all accept a --verbose option, which, when passed,
will instruct neutron-db-manage to display more verbose output for the specified command:
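For example, using one of the commands named above:
$ neutron-db-manage history --verbose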
Note: The tool usage examples below do not show the options. It is assumed that you use the options
that you need for your environment.
In new deployments, you start with an empty database and then upgrade to the latest database version
using the following command:
After installing a new version of the Neutron server, upgrade the database using the following command:
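Both of these steps use the same upgrade target; a sketch:
$ neutron-db-manage upgrade heads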
In existing deployments, check the current database version using the following command:
$ neutron-db-manage current
To check if any contract migrations are pending and therefore if offline migration is required, use the
following command:
$ neutron-db-manage has_offline_migrations
Note: Offline migration requires all Neutron server instances in the cluster to be shutdown before you
apply any contract scripts.
To generate a script of the command instead of operating immediately on the database, use the following
command:
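A sketch of such an invocation, assuming the --sql option of the upgrade subcommand described in the
note below:

$ neutron-db-manage upgrade heads --sql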
Note: The --sql option causes the command to generate a script. The script can be run later (online
or offline), perhaps after verifying and/or modifying it.
To look for differences between the schema generated by the upgrade command and the schema defined
by the models, use the revision --autogenerate command:
Note: This generates a prepopulated template with the changes needed to match the database state with
the models.
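An illustrative invocation (the revision message is arbitrary):

$ neutron-db-manage revision -m "description of revision" --autogenerate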
Two networking models exist in OpenStack. The first is called legacy networking (nova-network) and
it is a sub-process embedded in the Compute project (nova). This model has some limitations, such as
the inability to create complex network topologies, to extend its back-end implementation to
vendor-specific technologies, and to provide project-specific networking elements. These limitations
are the main reasons the OpenStack Networking (neutron) model was created.
This section describes the process of migrating clouds based on the legacy networking model to the
OpenStack Networking model. This process requires additional changes to both compute and network-
ing to support the migration. This document describes the overall process and the features required in
both Networking and Compute.
The current process as designed is a minimally viable migration with the goal of deprecating and then re-
moving legacy networking. Both the Compute and Networking teams agree that a one-button migration
process from legacy networking to OpenStack Networking (neutron) is not an essential requirement for
the deprecation and removal of the legacy networking at a future date. This section includes a process
and tools which are designed to solve a simple use case migration.
Users are encouraged to take these tools, test them, provide feedback, and then expand on the feature set
to suit their own deployments; deployers that refrain from participating in this process intending to wait
for a path that better suits their use case are likely to be disappointed.
The migration process from the legacy nova-network networking service to OpenStack Networking
(neutron) has some limitations and impacts on the operational state of the cloud. It is critical to under-
stand them in order to decide whether or not this process is acceptable for your cloud and all users.
Management impact
The Networking REST API is publicly read-only until after the migration is complete. During the
migration, Networking REST API is read-write only to nova-api, and changes to Networking are only
allowed via nova-api.
The Compute REST API is available throughout the entire process, although there is a brief period
where it is made read-only during a database migration. The Networking REST API will need to ex-
pose (to nova-api) all details necessary for reconstructing the information previously held in the legacy
networking database.
Compute needs a per-hypervisor has_transitioned boolean change in the data model to be used during
the migration process. This flag is no longer required once the process is complete.
Operations impact
In order to support a wide range of deployment options, the migration process described here requires a
rolling restart of hypervisors. The rate and timing of specific hypervisor restarts is under the control of
the operator.
The migration may be paused, even for an extended period of time (for example, while testing or inves-
tigating issues) with some hypervisors on legacy networking and some on Networking, and Compute
API remains fully functional. Individual hypervisors may be rolled back to legacy networking during
this stage of the migration, although this requires an additional restart.
In order to support the widest range of deployer needs, the process described here is easy to automate
but is not already automated. Deployers should expect to perform multiple manual steps or write some
simple scripts in order to perform this migration.
Performance impact
During the migration, nova-network API calls will go through an additional internal conversion to Net-
working calls. This will have different and likely poorer performance characteristics compared with
either the pre-migration or post-migration APIs.
1. Start neutron-server in intended final config, except with REST API restricted to read-write only
by nova-api.
2. Make the Compute REST API read-only.
3. Run a DB dump/restore tool that creates Networking data structures representing current legacy
networking config.
4. Enable a nova-api proxy that recreates internal Compute objects from Networking information
(via the Networking REST API).
5. Make Compute REST API read-write again. This means legacy networking DB is now unused,
new changes are now stored in the Networking DB, and no rollback is possible from here without
losing those new changes.
Note: At this moment the Networking DB is the source of truth, but nova-api is the only public read-
write API.
Next, you'll need to migrate each hypervisor. To do that, follow these steps:
1. Disable the hypervisor. This would be a good time to live migrate or evacuate the compute node,
if supported.
2. Disable nova-compute.
3. Enable the Networking agent.
4. Set the has_transitioned flag in the Compute hypervisor database/config.
5. Reboot the hypervisor (or run smart live transition tool if available).
This section describes the process of migrating from a classic router to an L3 HA router, which is
available starting from the Mitaka release.
Similar to the classic scenario, all network traffic on a project network that requires routing actively
traverses only one network node regardless of the quantity of network nodes providing HA for the
router. Therefore, this high-availability implementation primarily addresses failure situations instead of
bandwidth constraints that limit performance. However, it supports random distribution of routers on
different network nodes to reduce the chances of bandwidth constraints and to improve scaling.
This section references parts of Linux bridge: High availability using VRRP and Open vSwitch: High
availability using VRRP. For details regarding needed infrastructure and configuration to allow actual
L3 HA deployment, read the relevant guide before continuing with the migration process.
Migration
The migration process is quite simple: it involves turning down the router by setting the router's
admin_state_up attribute to False, upgrading the router to L3 HA, and then setting the router's
admin_state_up attribute back to True.
Warning: Once the migration starts, south-north connections (instances to internet) will be severed.
New connections will be able to start only when the migration is complete.
4. Set the admin_state_up to True. After this, south-north connections can start.
+-------------------------+--------------------------------------+
| admin_state_up          | UP                                   |
| distributed             | False                                |
| external_gateway_info   |                                      |
| ha                      | True                                 |
| id                      | 6b793b46-d082-4fd5-980f-a6f80cbb0f2a |
| name                    | router1                              |
| project_id              | bb8b84ab75be4e19bd0dfe02f6c3f5c1     |
| routes                  |                                      |
| status                  | ACTIVE                               |
| tags                    | []                                   |
+-------------------------+--------------------------------------+
L3 HA to Legacy
To return to classic mode, turn down the router again, turn off L3 HA, and then bring the router back up.
Warning: Once the migration starts, south-north connections (instances to internet) will be severed.
New connections will be able to start only when the migration is complete.
4. Set the admin_state_up to True. After this, south-north connections can start.
$ openstack router set router1 --enable
| admin_state_up          | UP                                   |
| distributed             | False                                |
| external_gateway_info   |                                      |
(continues on next page)
8.6 Miscellaneous
Most OpenStack deployments use the libvirt toolkit for interacting with the hypervisor. Specifically,
OpenStack Compute uses libvirt for tasks such as booting and terminating virtual machine instances.
When OpenStack Compute boots a new instance, libvirt provides OpenStack with the VIF associated
with the instance, and OpenStack Compute plugs the VIF into a virtual device provided by OpenStack
Networking. The libvirt toolkit itself does not provide any networking functionality in OpenStack
deployments.
However, libvirt is capable of providing networking services to the virtual machines that it manages. In
particular, libvirt can be configured to provide networking functionality akin to a simplified, single-node
version of OpenStack. Users can use libvirt to create layer 2 networks that are similar to OpenStack
Networking's networks, confined to a single node.
By default, libvirt's networking functionality is enabled, and libvirt creates a network when the system
boots. To implement this network, libvirt leverages some of the same technologies that OpenStack
Networking does. In particular, libvirt uses:
• Linux bridging for implementing a layer 2 network
• dnsmasq for providing IP addresses to virtual machines using DHCP
• iptables to implement SNAT so instances can connect out to the public internet, and to ensure that
virtual machines are permitted to communicate with dnsmasq using DHCP
By default, libvirt creates a network named default. The details of this network may vary by distribution;
on Ubuntu this network involves:
• a Linux bridge named virbr0 with an IP address of 192.0.2.1/24
• a dnsmasq process that listens on the virbr0 interface and hands out IP addresses in the range
192.0.2.2-192.0.2.254
*nat
-A POSTROUTING -s 192.0.2.0/24 -d 224.0.0.0/24 -j RETURN
-A POSTROUTING -s 192.0.2.0/24 -d 255.255.255.255/32 -j RETURN
-A POSTROUTING -s 192.0.2.0/24 ! -d 192.0.2.0/24 -p tcp -j MASQUERADE --to-ports 1024-65535
-A POSTROUTING -s 192.0.2.0/24 ! -d 192.0.2.0/24 -p udp -j MASQUERADE --to-ports 1024-65535
-A POSTROUTING -s 192.0.2.0/24 ! -d 192.0.2.0/24 -j MASQUERADE
*mangle
-A POSTROUTING -o virbr0 -p udp -m udp --dport 68 -j CHECKSUM --checksum-fill
*filter
-A INPUT -i virbr0 -p udp -m udp --dport 53 -j ACCEPT
-A INPUT -i virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
-A INPUT -i virbr0 -p udp -m udp --dport 67 -j ACCEPT
-A INPUT -i virbr0 -p tcp -m tcp --dport 67 -j ACCEPT
-A FORWARD -d 192.0.2.0/24 -o virbr0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -s 192.0.2.0/24 -i virbr0 -j ACCEPT
-A FORWARD -i virbr0 -o virbr0 -j ACCEPT
-A FORWARD -o virbr0 -j REJECT --reject-with icmp-port-unreachable
-A FORWARD -i virbr0 -j REJECT --reject-with icmp-port-unreachable
-A OUTPUT -o virbr0 -p udp -m udp --dport 68 -j ACCEPT
The following shows the dnsmasq process that libvirt manages as it appears in the output of ps:
Although OpenStack does not make use of libvirt's networking, this networking will not interfere with
OpenStack's behavior and can be safely left enabled. However, libvirt's networking can be a nuisance
when debugging OpenStack networking issues. Because libvirt creates an additional bridge, dnsmasq
process, and iptables ruleset, these may distract an operator engaged in network troubleshooting. Unless
you need to start up virtual machines using libvirt directly, you can safely disable libvirt's network.
To view the defined libvirt networks and their state:
# virsh net-list
Name State Autostart Persistent
----------------------------------------------------------
default active yes yes
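To deactivate the network, a sketch using the standard virsh command (the network name default is
the one shown in the listing above):

# virsh net-destroy default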
Deactivating the network will remove the virbr0 bridge, terminate the dnsmasq process, and remove
the iptables rules.
To prevent the network from automatically starting on boot:
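An illustrative command, again assuming the default libvirt network name:

# virsh net-autostart default --disable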
Description
Automated removal of empty bridges has been disabled to fix a race condition between the Compute
(nova) and Networking (neutron) services. Previously, it was possible for a bridge to be deleted during
the time when the only instance using it was rebooted.
Usage
Use this script to remove empty bridges on compute nodes by running the following command:
$ neutron-linuxbridge-cleanup
Important: Do not use this tool while an instance is being created or migrated, because it throws an
error when the bridge does not exist.
Note: Using this script can still trigger the original race condition. Only run this script if you have
evacuated all instances off a compute node and you want to clean up the bridges. In addition to evacu-
ating all instances, you should fence off the compute node where you are going to run this script so new
instances do not get scheduled on it.
Enabling VPNaaS
This section describes the settings for the reference implementation. Vendor plugins or drivers may
have a different setup procedure and may provide their own manuals.
1. Enable the VPNaaS plug-in in the /etc/neutron/neutron.conf file by appending
vpnaas to service_plugins in [DEFAULT]:
[DEFAULT]
# ...
service_plugins = vpnaas
Note: vpnaas is just an example for the reference implementation; the value depends on the plugin
that you are going to use. Set a suitable plugin for your own deployment.
[service_providers]
service_provider = VPN:strongswan:neutron_vpnaas.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default
Note: There are several kinds of service drivers. Depending upon the Linux distribution, you may
need to override this value. Select libreswan for RHEL/CentOS; the configuration will look like this:
service_provider = VPN:openswan:neutron_vpnaas.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default.
Use the appropriate service driver for your deployment.
[AGENT]
extensions = vpnaas
[vpnagent]
vpn_device_driver = neutron_vpnaas.services.vpn.device_drivers.strongswan_ipsec.StrongSwanDriver
Note: There are several kinds of device drivers. Depending upon the Linux distribution, you may
need to override this value. Select LibreSwanDriver for RHEL/CentOS; the configuration will look like this:
vpn_device_driver = neutron_vpnaas.services.vpn.device_drivers.libreswan_ipsec.LibreSwanDriver.
Use the appropriate device driver for your deployment.
Note: In order to run the above command, you need to have the neutron-vpnaas package installed on
the controller node.
IPsec site-to-site connections will support multiple local subnets, in addition to the current multiple peer
CIDRs. The multiple local subnet feature is triggered by not specifying a local subnet, when creating a
VPN service. Backwards compatibility is maintained with single local subnets, by providing the subnet
in the VPN service creation.
To support multiple local subnets, a new capability called End Point Groups has been added. Each
endpoint group will define one or more endpoints of a specific type, and can be used to specify both
local and peer endpoints for IPsec connections. The endpoint groups separate "what gets connected"
from "how to connect" for a VPN service, and can be used for different flavors of VPN in the future.
Refer to Multiple Local Subnets for more detail.
Create the IKE policy, IPsec policy, VPN service, local endpoint group and peer endpoint group. Then,
create an IPsec site connection that applies the above policies and service.
1. Create an IKE policy:
$ openstack vpn ike policy create ikepolicy
+-------------------------------+----------------------------------------+
| Field                         | Value                                  |
+-------------------------------+----------------------------------------+
| Authentication Algorithm      | sha1                                   |
| Description                   |                                        |
| Encryption Algorithm          | aes-128                                |
| ID                            | 735f4691-3670-43b2-b389-f4d81a60ed56   |
| IKE Version                   | v1                                     |
| Lifetime                      | {u'units': u'seconds', u'value': 3600} |
| Name                          | ikepolicy                              |
| Perfect Forward Secrecy (PFS) | group5                                 |
| Phase1 Negotiation Mode       | main                                   |
| Project                       | 095247cb2e22455b9850c6efff407584       |
| project_id                    | 095247cb2e22455b9850c6efff407584       |
+-------------------------------+----------------------------------------+
Note: Do not specify the --peer-cidr option in this case. Peer CIDR(s) are provided by a peer
endpoint group.
Create the IKE policy, IPsec policy, and VPN service. Then, create an IPsec site connection that applies
the above policies and service.
1. Create an IKE policy:
$ openstack vpn ike policy create ikepolicy1
+-------------------------------+----------------------------------------+
| Field                         | Value                                  |
+-------------------------------+----------------------------------------+
| Authentication Algorithm      | sha1                                   |
| Description                   |                                        |
| Encryption Algorithm          | aes-128                                |
| ID                            | 99e4345d-8674-4d73-acb4-0e2524425e34   |
| IKE Version                   | v1                                     |
| Lifetime                      | {u'units': u'seconds', u'value': 3600} |
| Name                          | ikepolicy1                             |
| Perfect Forward Secrecy (PFS) | group5                                 |
| Phase1 Negotiation Mode       | main                                   |
| Project                       | 095247cb2e22455b9850c6efff407584       |
| project_id                    | 095247cb2e22455b9850c6efff407584       |
+-------------------------------+----------------------------------------+
• https://blog.russellbryant.net/2016/12/19/comparing-openstack-neutron-ml2ovs-and-ovn-control-plane/
• https://blog.russellbryant.net/2016/11/11/ovn-logical-flows-and-ovn-trace/
• https://blog.russellbryant.net/2016/09/29/ovs-2-6-and-the-first-release-of-ovn/
• http://galsagie.github.io/2015/11/23/ovn-l3-deepdive/
• http://blog.russellbryant.net/2015/10/22/openstack-security-groups-using-ovn-acls/
• http://galsagie.github.io/sdn/openstack/ovs/2015/05/30/ovn-deep-dive/
• http://blog.russellbryant.net/2015/05/14/an-ez-bake-ovn-for-openstack/
• http://galsagie.github.io/sdn/openstack/ovs/2015/04/26/ovn-containers/
• http://blog.russellbryant.net/2015/04/21/ovn-and-openstack-status-2015-04-21/
• http://blog.russellbryant.net/2015/04/08/ovn-and-openstack-integration-development-update/
• http://dani.foroselectronica.es/category/openstack/ovn/
8.7.2 Features
Open Virtual Network (OVN) offers the following virtual network services:
• Layer-2 (switching)
Native implementation. Replaces the conventional Open vSwitch (OVS) agent.
• Layer-3 (routing)
Native implementation that supports distributed routing. Replaces the conventional Neutron L3
agent. This includes transparent L3 HA routing support, based on BFD monitoring integrated in
core OVN.
• DHCP
Native distributed implementation. Replaces the conventional Neutron DHCP agent. Note that
the native implementation does not yet support DNS features.
• DPDK
OVN and the OVN mechanism driver may be used with OVS using either the Linux kernel data-
path or the DPDK datapath.
• Trunk driver
Uses OVN's parent port and port tagging functionality to support the trunk service plugin. You
have to enable the trunk service plugin in the Neutron configuration files to use this feature.
• VLAN tenant networks
The OVN driver supports VLAN tenant networks when used with OVN version 2.11 (or higher).
• DNS
Native implementation. Since version 2.8, OVN contains a built-in DNS implementation.
• Port Forwarding
The OVN driver supports port forwarding as an extension of floating IPs. Enable the
port_forwarding service plugin in neutron configuration files to use this feature.
• Packet Logging
Packet logging service is designed as a Neutron plug-in that captures network packets for relevant
resources when the registered events occur. OVN supports this feature based on security groups.
• Segments
Allows for Network segments ranges to be used with OVN. Requires OVN version 20.06 or higher.
• Routed provider networks
Allows for multiple localnet ports to be attached to a single Logical Switch entry. This work also
assumes that only a single localnet port (of the same Logical Switch) is actually mapped to a given
hypervisor. Requires OVN version 20.06 or higher.
The following Neutron API extensions are supported with OVN:
8.7.3 Routing
North/South
North/South traffic flows through the active chassis for each router for SNAT traffic, and also for FIPs.
Distributed Floating IP
In the following diagram we can see how VMs with no floating IP (VM1, VM6) still communicate
through the gateway nodes using SNAT on the edge routers R1 and R2, while VM3, VM4, and VM5,
which have an assigned floating IP, send their traffic directly through the local provider
bridge/interface to the external network.
L3HA support
The OVN driver implements L3 high availability in a transparent way; you don't need to enable any
config flags. As soon as you have more than one chassis capable of acting as an L3 gateway to the
specific external network attached to the router, OVN will schedule the router gateway port to multiple
chassis, making use of the gateway_chassis column in OVN's Logical_Router_Port table.
In order to have external connectivity, either:
• some gateway nodes have ovn-cms-options with the value enable-chassis-as-gw in the
Open_vSwitch table's external_ids column, or
• if no gateway node has the external_ids column set with that value, then all nodes are eligible
to host gateway chassis.
Example of how to enable a chassis to host gateways:
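A minimal sketch, run on each node that should act as a gateway (the option value is the one described
above):

# ovs-vsctl set Open_vSwitch . external-ids:ovn-cms-options=enable-chassis-as-gw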
At the low level, this functionality is implemented mostly by OpenFlow rules with bundle active_passive
outputs. The ARP responder and router enablement/disablement are handled by ovn-controller.
Gratuitous ARPs for FIPs and router external addresses are periodically sent by ovn-controller itself.
BFD monitoring
OVN monitors the availability of the chassis via the BFD protocol, which is encapsulated on top of the
Geneve tunnels established from chassis to chassis.
Each chassis that is marked as a gateway chassis will monitor all the other gateway chassis in the de-
ployment as well as compute node chassis, to let the gateways enable/disable routing of packets and
ARP responses / announcements.
Each compute node chassis will monitor each gateway chassis via BFD to automatically steer external
traffic (snat/dnat) through the active chassis for a given router.
The gateway nodes monitor each other in a star topology. Compute nodes don't monitor each other
because that's not necessary.
Compute nodes' BFD monitoring of the gateway nodes will detect that the tunnel endpoint going to
gateway node 1 is down, so traffic that needs to get into the external network through the router will
be directed to the lower-priority chassis for R1. R2 stays the same because Gateway Node 2 was already
the highest-priority chassis for R2.
Gateway node 2 will detect that tunnel endpoint to gateway node 1 is down, so it will become responsible
for the external leg of R1, and its ovn-controller will populate flows for the external ARP responder,
traffic forwarding (N/S) and periodic gratuitous ARPs.
Gateway node 2 will also bind the external port of the router (represented as a chassis-redirect port on
the South Bound database).
If Gateway node 1 is still alive, failure over interface 2 will be detected because it is not seeing any
other nodes.
There is not yet a mechanism to detect external network failure, so as a good practice to detect network
failure we recommend that all interfaces be handled over a single bonded interface with VLANs.
Supported failure modes are:
• gateway chassis becomes disconnected from network (tunneling interface)
• ovs-vswitchd is stopped (it's responsible for BFD signaling)
• ovn-controller is stopped, as ovn-controller will remove itself as a registered chassis.
Note: As a side note, it's also important to understand that, as with the VRRP or CARP protocols,
this detection mechanism only works for link failures, not for routing failures.
Failback
L3HA behaviour is preemptive in OVN (at least for the time being), since that balances the routers
back to the original chassis and avoids any of the gateway nodes becoming a bottleneck.
East/West
East/West traffic with the OVN driver is completely distributed, which means that routing happens
internally on the compute nodes without the need to go through the gateway nodes.
Traffic going through a virtual router from one virtual network/subnet to another flows directly from
compute node to compute node, encapsulated as usual, while all the routing operations such as
decreasing the TTL or switching MAC addresses are handled in OpenFlow at the source host of the
packet.
Traffic across a subnet happens as described in the following diagram; although this kind of
communication doesn't make use of routing at all (just encapsulation), it has been included for
completeness.
Traffic goes directly from instance to instance through br-int in the case of both instances living in the
same host (VM1 and VM2), or via encapsulation when living on different hosts (VM3 and VM4).
How to enable it
In order to enable IGMP snooping with the OVN driver the following configuration needs to be set in
the /etc/neutron/neutron.conf file of the controller nodes:
# OVN does reuse the OVS option, therefore the option group is [ovs]
[ovs]
igmp_snooping_enable = True
...
Upon restarting the Neutron service all existing networks (Logical_Switch, in OVN terms) will be up-
dated in OVN to enable or disable IGMP snooping based on the igmp_snooping_enable configu-
ration value.
Note: Currently the OVN driver does not configure an IGMP querier in OVN, so ovn-controller will
not send IGMP group membership queries to retrieve IGMP membership reports from active members.
To find more information about the IGMP groups learnt by OVN, use the command below (populated
only when igmp_snooping_enable is True):
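A sketch of such a query, assuming the IGMP_Group table of the OVN southbound database and access
to the node hosting it:

# ovn-sbctl list IGMP_Group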
Note: Since the IGMP querier is not yet supported in the OVN driver, restarting the ovn-controller
service(s) will result in OVN unlearning the IGMP groups and broadcasting all the multicast traffic.
This behavior can have an impact when updating/upgrading the OVN services.
Extra information
When multicast IP traffic is sent to a multicast group address in the 224.0.0.X range, the multicast
traffic will be flooded even when IGMP snooping is enabled. See RFC 4541, section 2.1.2:
The OVN project documentation includes an in-depth tutorial on using OVN with OpenStack.
OpenStack and OVN Tutorial
The reference architecture defines the minimum environment necessary to deploy OpenStack with Open
Virtual Network (OVN) integration for the Networking service in production with sufficient expecta-
tions of scale and performance. For evaluation purposes, you can deploy this environment using the
Installation Guide or Vagrant. Any scaling or performance evaluations should use bare metal instead of
virtual machines.
Layout
Note: For functional evaluation only, you can combine the controller and database nodes.
• OVS database service (ovsdb-server) with OVS local configuration (conf.db) database
• OVN metadata agent (ovn-metadata-agent)
The gateway nodes contain the following components:
• Three network interfaces for management, overlay networks and provider networks.
• OVN controller service (ovn-controller)
• OVS data plane service (ovs-vswitchd)
• OVS database service (ovsdb-server) with OVS local configuration (conf.db) database
Note: Each OVN metadata agent provides the metadata service locally on the compute nodes in a
lightweight way. Each network being accessed by the instances of the compute node will have a
corresponding ovn-metadata-$net_uuid namespace, and inside it an haproxy will funnel the requests
to the ovn-metadata-agent over a unix socket.
Such a namespace can be very helpful for debugging purposes, to access the local instances on the
compute node. If you log in as root on such a compute node, you can execute:
ip netns exec ovn-metadata-$net_uuid ssh [email protected]
Hardware layout
Service layout
The reference architecture deploys the Networking service with OVN integration as described in the
following scenarios:
With the OVN driver, all the E/W traffic which traverses a virtual router is completely distributed,
going from compute node to compute node without passing through the gateway nodes.
N/S traffic that needs SNAT (without floating IPs) will always pass through the centralized gateway
nodes, although as soon as you have more than one gateway node the OVN driver will make use of the
HA capabilities of OVN.
In this architecture, all the N/S router traffic (SNAT and floating IPs) goes through the gateway nodes.
The compute nodes don't need connectivity to the external network, although it could be provided if we
wanted to have direct connectivity to such a network from some instances.
For external connectivity, gateway nodes have to set ovn-cms-options with
enable-chassis-as-gw in the Open_vSwitch table's external_ids column, as shown in the L3HA
support example above.
In this architecture, the floating IP N/S traffic flows directly from/to the compute nodes through the
specific provider network bridge. In this case compute nodes need connectivity to the external network.
Each compute node contains the following network components:
Note: The Networking service creates a unique network namespace for each virtual network that
enables the metadata service.
Several external connections can optionally be created via provider bridges. Those can be used for
direct VM connectivity to the specific networks or for the use of distributed floating IPs.
OVN stores configuration data in a collection of OVS database tables. The following commands show
the contents of the most common database tables in the northbound and southbound databases. The
example database output in this section uses these commands with various output filters.
Note: By default, you must run these commands from the node containing the OVN databases.
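The listing commands themselves are not reproduced here; as a sketch, the northbound and southbound
databases can be inspected with the ovn-nbctl and ovn-sbctl utilities, for example:

# ovn-nbctl show
# ovn-sbctl show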
When you add a compute node to the environment, the OVN controller service on it connects to the
OVN southbound database and registers the node as a chassis.
_uuid : 9be8639d-1d0b-4e3d-9070-03a655073871
encaps : [2fcefdf4-a5e7-43ed-b7b2-62039cc7e32e]
external_ids : {ovn-bridge-mappings=""}
hostname : "compute1"
name : "410ee302-850b-4277-8610-fa675d620cb7"
vtep_logical_switches: []
The encaps field value refers to tunnel endpoint information for the compute node.
_uuid : 2fcefdf4-a5e7-43ed-b7b2-62039cc7e32e
ip : "10.0.0.32"
options : {}
type : geneve
Security Groups/Rules
When a Neutron Security Group is created, the equivalent Port Group in OVN
(pg-<security_group_id>) is created. This Port Group references the Neutron SG id in its external_ids
column.
When a Neutron Port is created, the equivalent Logical Port in OVN is added to those Port Groups
associated to the Neutron Security Groups this port belongs to.
When a Neutron Port is deleted, the associated Logical Port in OVN is deleted. Since the schema
includes a weak reference to the port, when the LSP gets deleted, it is automatically deleted from any
Port Group entry where it was previously present.
Every time a security group rule is created, instead of figuring out the ports affected by its SG and
inserting an ACL row which will be referenced by different Logical Switches, we just reference it from
the associated Port Group.
OVN operations
1. Creating a security group will cause the OVN mechanism driver to create a port group in the
Port_Group table of the northbound DB:
_uuid : e96c5994-695d-4b9c-a17b-c7375ad281e2
acls : [33c3c2d0-bc7b-421b-ace9-10884851521a, c22170ec-da5d-4a59-b118-f7f0e370ebc4]
external_ids : {"neutron:security_group_id"="ccbeffee-7b98-4b6f-adf7-d42027ca6447"}
name : pg_ccbeffee_7b98_4b6f_adf7_d42027ca6447
ports : []
And it also creates the default ACLs for egress traffic in the ACL table of the northbound DB:
_uuid : 33c3c2d0-bc7b-421b-ace9-10884851521a
action : allow-related
direction : from-lport
external_ids : {"neutron:security_group_rule_id"="655b0d7e-144e-4bd8-9243-10a261b91041"}
log : false
match : "inport == @pg_ccbeffee_7b98_4b6f_adf7_d42027ca6447 && ip4"
meter : []
name : []
priority : 1002
severity : []
_uuid : c22170ec-da5d-4a59-b118-f7f0e370ebc4
action : allow-related
direction : from-lport
external_ids : {"neutron:security_group_rule_id"="a303a34f-5f19-494f-a9e2-e23f246bfcad"}
log : false
match : "inport == @pg_ccbeffee_7b98_4b6f_adf7_d42027ca6447 && ip6"
meter : []
name : []
priority : 1002
severity : []
When a port doesn't belong to any Security Group and port security is enabled, we, by default, drop all
the traffic to/from that port. In order to implement this through Port Groups, we'll create a special Port
Group with a fixed name (neutron_pg_drop) which holds the ACLs to drop all the traffic.
This PG is created automatically once before neutron-server forks into workers.
Networks
Provider networks
A provider (external) network bridges instances to physical network infrastructure that provides layer-
3 services. In most cases, provider networks implement layer-2 segmentation using VLAN IDs. A
provider network maps to a provider bridge on each compute node that supports launching instances on
the provider network. You can create more than one provider bridge, each one requiring a unique name
and underlying physical network interface to prevent switching loops. Provider networks and bridges
can use arbitrary names, but each mapping must reference valid provider network and bridge names.
Each provider bridge can contain one flat (untagged) network and up to the maximum number of
vlan (tagged) networks that the physical network infrastructure supports, typically around 4000.
Creating a provider network involves several commands at the host, OVS, and Networking service levels
that yield a series of operations at the OVN level to create the virtual network components. The following
example creates a flat provider network provider using the provider bridge br-provider and
binds a subnet to it.
1. On each compute node, create the provider bridge, map the provider network to it, and add the
underlying physical or logical (typically a bond) network interface to it.
# ovs-vsctl --may-exist add-br br-provider -- set bridge br-provider \
    protocols=OpenFlow13
# ovs-vsctl set Open_vSwitch . external-ids:ovn-bridge-mappings=provider:br-provider
# ovs-vsctl --may-exist add-port br-provider INTERFACE_NAME
4. On the controller node, create the provider network in the Networking service. In this case,
instances and routers in other projects can use the network.
$ openstack network create --external --share \
--provider-physical-network provider --provider-network-type flat \
provider
+---------------------------+--------------------------------------+
(continues on next page)
OVN operations
The OVN mechanism driver and OVN perform the following operations during creation of a provider
network.
1. The mechanism driver translates the network into a logical switch in the OVN northbound
database.
_uuid : 98edf19f-2dbc-4182-af9b-79cafa4794b6
acls : []
external_ids : {"neutron:network_name"=provider}
load_balancer : []
name : "neutron-e4abf6df-f8cf-49fd-85d4-3ea399f4d645"
ports : [92ee7c2f-cd22-4cac-a9d9-68a374dc7b17]
2. In addition, because the provider network is handled by a separate bridge, the following logical
port is created in the OVN northbound database.
_uuid : 92ee7c2f-cd22-4cac-a9d9-68a374dc7b17
addresses : [unknown]
enabled : []
external_ids : {}
name : "provnet-e4abf6df-f8cf-49fd-85d4-3ea399f4d645"
options : {network_name=provider}
parent_name : []
port_security : []
tag : []
type : localnet
up : false
3. The OVN northbound service translates these objects into datapath bindings, port bindings, and
the appropriate multicast groups in the OVN southbound database.
• Datapath bindings
_uuid : f1f0981f-a206-4fac-b3a1-dc2030c9909f
external_ids : {logical-switch="98edf19f-2dbc-4182-af9b-79cafa4794b6"}
tunnel_key : 109
• Port bindings
_uuid : 8427506e-46b5-41e5-a71b-a94a6859e773
chassis : []
datapath : f1f0981f-a206-4fac-b3a1-dc2030c9909f
logical_port : "provnet-e4abf6df-f8cf-49fd-85d4-
,→3ea399f4d645"
mac : [unknown]
options : {network_name=provider}
parent_port : []
tag : []
tunnel_key : 1
type : localnet
• Logical flows
action=(next;)
table= 1( ls_in_port_sec_ip), priority= 0, match=(1),
action=(next;)
table= 2( ls_in_port_sec_nd), priority= 0, match=(1),
action=(next;)
table= 3( ls_in_pre_acl), priority= 0, match=(1),
action=(next;)
table= 4( ls_in_pre_lb), priority= 0, match=(1),
(continues on next page)
• Multicast groups
_uuid : 0102f08d-c658-4d0a-a18a-ec8adcaddf4f
datapath : f1f0981f-a206-4fac-b3a1-dc2030c9909f
name : _MC_unknown
ports : [8427506e-46b5-41e5-a71b-a94a6859e773]
tunnel_key : 65534
_uuid : fbc38e51-ac71-4c57-a405-e6066e4c101e
datapath : f1f0981f-a206-4fac-b3a1-dc2030c9909f
name : _MC_flood
ports : [8427506e-46b5-41e5-a71b-a94a6859e773]
tunnel_key : 65535
The provider network requires at least one subnet that contains the IP address allocation available for
instances, default gateway IP address, and metadata such as name resolution.
1. On the controller node, create a subnet bound to the provider network provider.
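An illustrative command; the subnet range and allocation pool are assumptions chosen to match the
203.0.113.x addresses that appear in the output below, and the name provider-v4 is hypothetical:

$ openstack subnet create --network provider --subnet-range 203.0.113.0/24 \
  --allocation-pool start=203.0.113.101,end=203.0.113.250 provider-v4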
If using DHCP to manage instance IP addresses, adding a subnet causes a series of operations in the
Networking service and OVN.
• The Networking service schedules the network on appropriate number of DHCP agents. The
example environment contains three DHCP agents.
• Each DHCP agent spawns a network namespace with a dnsmasq process using an IP address
from the subnet allocation.
• The OVN mechanism driver creates a logical switch port object in the OVN northbound database
for each dnsmasq process.
OVN operations
The OVN mechanism driver and OVN perform the following operations during creation of a subnet on
the provider network.
1. If the subnet uses DHCP for IP address management, create logical ports for each DHCP agent
serving the subnet and bind them to the logical switch. In this example, the subnet contains
two DHCP agents.
_uuid : 5e144ab9-3e08-4910-b936-869bbbf254c8
addresses : ["fa:16:3e:57:f9:ca 203.0.113.101"]
enabled : true
external_ids : {"neutron:port_name"=""}
name : "6ab052c2-7b75-4463-b34f-fd3426f61787"
options : {}
parent_name : []
port_security : []
tag : []
type : ""
up : true
_uuid : 38cf8b52-47c4-4e93-be8d-06bf71f6a7c9
addresses : ["fa:16:3e:e0:eb:6d 203.0.113.102"]
enabled : true
external_ids : {"neutron:port_name"=""}
name : "94aee636-2394-48bc-b407-8224ab6bb1ab"
options : {}
parent_name : []
port_security : []
tag : []
type : ""
up : true
_uuid : 924500c4-8580-4d5f-a7ad-8769f6e58ff5
acls : []
external_ids : {"neutron:network_name"=provider}
load_balancer : []
name : "neutron-670efade-7cd0-4d87-8a04-27f366eb8941"
ports : [38cf8b52-47c4-4e93-be8d-06bf71f6a7c9,
5e144ab9-3e08-4910-b936-869bbbf254c8,
a576b812-9c3e-4cfb-9752-5d8500b3adf9]
2. The OVN northbound service creates port bindings for these logical ports and adds them to the
appropriate multicast group.
• Port bindings
_uuid : 030024f4-61c3-4807-859b-07727447c427
chassis : fc5ab9e7-bc28-40e8-ad52-2949358cc088
datapath : bd0ab2b3-4cf4-4289-9529-ef430f6a89e6
logical_port : "6ab052c2-7b75-4463-b34f-fd3426f61787"
mac : ["fa:16:3e:57:f9:ca 203.0.113.101"]
options : {}
parent_port : []
tag : []
tunnel_key : 2
type : ""
_uuid : cc5bcd19-bcae-4e29-8cee-3ec8a8a75d46
chassis : 6a9d0619-8818-41e6-abef-2f3d9a597c03
datapath : bd0ab2b3-4cf4-4289-9529-ef430f6a89e6
logical_port : "94aee636-2394-48bc-b407-8224ab6bb1ab"
mac : ["fa:16:3e:e0:eb:6d 203.0.113.102"]
options : {}
parent_port : []
tag : []
tunnel_key : 3
type : ""
• Multicast groups
_uuid : 39b32ccd-fa49-4046-9527-13318842461e
datapath : bd0ab2b3-4cf4-4289-9529-ef430f6a89e6
name : _MC_flood
ports : [030024f4-61c3-4807-859b-07727447c427,
904c3108-234d-41c0-b93c-116b7e352a75,
cc5bcd19-bcae-4e29-8cee-3ec8a8a75d46]
tunnel_key : 65535
3. The OVN northbound service translates the logical ports into additional logical flows in the OVN
southbound database.
• The OVN controller service translates these logical flows into flows on the integration bridge.
cookie=0x0, duration=17.731s, table=0, n_packets=3, n_bytes=258,
idle_age=16, priority=100,in_port=7
actions=load:0x2->NXM_NX_REG5[],load:0x4->OXM_OF_METADATA[],
load:0x2->NXM_NX_REG6[],resubmit(,16)
cookie=0x0, duration=17.730s, table=0, n_packets=15, n_bytes=954,
idle_age=2, priority=100,in_port=8,vlan_tci=0x0000/0x1000
actions=load:0x1->NXM_NX_REG5[],load:0x4->OXM_OF_METADATA[],
load:0x1->NXM_NX_REG6[],resubmit(,16)
cookie=0x0, duration=17.730s, table=0, n_packets=0, n_bytes=0,
idle_age=17, priority=100,in_port=8,dl_vlan=0
actions=strip_vlan,load:0x1->NXM_NX_REG5[],
load:0x4->OXM_OF_METADATA[],load:0x1->NXM_NX_REG6[],
resubmit(,16)
cookie=0x0, duration=17.732s, table=16, n_packets=0, n_bytes=0,
idle_age=17, priority=100,metadata=0x4,
dl_src=01:00:00:00:00:00/01:00:00:00:00:00
actions=drop
cookie=0x0, duration=17.732s, table=16, n_packets=0, n_bytes=0,
idle_age=17, priority=100,metadata=0x4,vlan_tci=0x1000/0x1000
actions=drop
cookie=0x0, duration=17.732s, table=16, n_packets=3, n_bytes=258,
idle_age=16, priority=50,reg6=0x2,metadata=0x4
,→actions=resubmit(,17)
cookie=0x0, duration=17.732s, table=16, n_packets=0, n_bytes=0,
idle_age=17, priority=50,reg6=0x3,metadata=0x4
,→actions=resubmit(,17)
cookie=0x0, duration=17.732s, table=16, n_packets=15, n_bytes=954,
idle_age=2, priority=50,reg6=0x1,metadata=0x4
,→actions=resubmit(,17)
cookie=0x0, duration=21.714s, table=17, n_packets=18, n_
,→bytes=1212,
idle_age=6, priority=0,metadata=0x4 actions=resubmit(,18)
cookie=0x0, duration=21.714s, table=18, n_packets=18, n_
,→bytes=1212,
idle_age=6, priority=0,metadata=0x4 actions=resubmit(,19)
cookie=0x0, duration=21.714s, table=19, n_packets=18, n_
,→bytes=1212,
idle_age=6, priority=0,metadata=0x4 actions=resubmit(,20)
cookie=0x0, duration=21.714s, table=20, n_packets=18, n_
,→bytes=1212,
idle_age=6, priority=0,metadata=0x4 actions=resubmit(,21)
cookie=0x0, duration=21.714s, table=21, n_packets=0, n_bytes=0,
idle_age=21, priority=100,ip,reg0=0x1/0x1,metadata=0x4
actions=ct(table=22,zone=NXM_NX_REG5[0..15])
cookie=0x0, duration=21.714s, table=21, n_packets=0, n_bytes=0,
idle_age=21, priority=100,ipv6,reg0=0x1/0x1,metadata=0x4
actions=ct(table=22,zone=NXM_NX_REG5[0..15])
cookie=0x0, duration=21.714s, table=21, n_packets=18, n_
,→bytes=1212,
idle_age=6, priority=0,metadata=0x4 actions=resubmit(,22)
(continues on next page)
idle_age=6, priority=100,metadata=0x4,
dl_dst=01:00:00:00:00:00/01:00:00:00:00:00
actions=load:0xffff->NXM_NX_REG7[],resubmit(,32)
(continues on next page)
resubmit(,34),load:0xffff->NXM_NX_REG7[]
cookie=0x0, duration=17.730s, table=33, n_packets=0, n_bytes=0,
idle_age=17, priority=100,reg7=0x1,metadata=0x4
actions=load:0x1->NXM_NX_REG5[],resubmit(,34)
cookie=0x0, duration=17.697s, table=33, n_packets=0, n_bytes=0,
idle_age=17, priority=100,reg7=0x3,metadata=0x4
actions=load:0x1->NXM_NX_REG7[],resubmit(,33)
cookie=0x0, duration=17.731s, table=34, n_packets=3, n_bytes=258,
idle_age=16, priority=100,reg6=0x2,reg7=0x2,metadata=0x4
actions=drop
cookie=0x0, duration=17.730s, table=34, n_packets=15, n_bytes=954,
idle_age=2, priority=100,reg6=0x1,reg7=0x1,metadata=0x4
actions=drop
cookie=0x0, duration=21.714s, table=48, n_packets=18, n_
,→bytes=1212,
idle_age=6, priority=0,metadata=0x4 actions=resubmit(,49)
cookie=0x0, duration=21.714s, table=49, n_packets=18, n_
,→bytes=1212,
Self-service networks
A self-service (project) network includes only virtual components, thus enabling projects to manage
them without additional configuration of the underlying physical network. The OVN mechanism driver
supports Geneve and VLAN network types with a preference toward Geneve. Projects can choose to
isolate self-service networks, connect two or more together via routers, or connect them to provider
networks via routers with appropriate capabilities. Similar to provider networks, self-service networks
can use arbitrary names.
Note: Similar to provider networks, self-service VLAN networks map to a unique bridge on each
compute node that supports launching instances on those networks. Self-service VLAN networks also
require several commands at the host and OVS levels. The following example assumes use of Geneve
self-service networks.
Creating a self-service network involves several commands at the Networking service level that yield a
series of operations at the OVN level to create the virtual network components. The following example
creates a Geneve self-service network and binds a subnet to it. The subnet uses DHCP to distribute IP
addresses to instances.
1. On the controller node, source the credentials for a regular (non-privileged) project. The following
example uses the demo project.
2. On the controller node, create a self-service network in the Networking service.
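An illustrative command, assuming the network name selfservice used throughout this example:

$ openstack network create selfservice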
OVN operations
The OVN mechanism driver and OVN perform the following operations during creation of a self-service
network.
1. The mechanism driver translates the network into a logical switch in the OVN northbound
database.
_uuid : 0ab40684-7cf8-4d6c-ae8b-9d9143762d37
acls : []
external_ids : {"neutron:network_name"="selfservice"}
name : "neutron-d5aadceb-d8d6-41c8-9252-c5e0fe6c26a5"
ports : []
2. The OVN northbound service translates this object into new datapath bindings and logical flows
in the OVN southbound database.
• Datapath bindings
_uuid : 0b214af6-8910-489c-926a-fd0ed16a8251
external_ids : {logical-switch="15e2c80b-1461-4003-9869-80416cd97de5"}
tunnel_key : 5
• Logical flows
action=(drop;)
table= 1( ls_in_port_sec_ip), priority= 0, match=(1),
action=(next;)
table= 2( ls_in_port_sec_nd), priority= 0, match=(1),
action=(next;)
table= 3( ls_in_pre_acl), priority= 0, match=(1),
action=(next;)
table= 4( ls_in_pre_lb), priority= 0, match=(1),
action=(next;)
table= 5( ls_in_pre_stateful), priority= 100, match=(reg0[0]
,→== 1),
action=(ct_next;)
table= 5( ls_in_pre_stateful), priority= 0, match=(1),
action=(next;)
table= 6( ls_in_acl), priority= 0, match=(1),
action=(next;)
table= 7( ls_in_lb), priority= 0, match=(1),
action=(next;)
table= 8( ls_in_stateful), priority= 100, match=(reg0[2]
,→== 1),
action=(ct_lb;)
table= 8( ls_in_stateful), priority= 100, match=(reg0[1]
,→== 1),
action=(ct_commit; next;)
table= 8( ls_in_stateful), priority= 0, match=(1),
action=(next;)
table= 9( ls_in_arp_rsp), priority= 0, match=(1),
action=(next;)
table=10( ls_in_l2_lkup), priority= 100, match=(eth.
,→mcast),
action=(outport = "_MC_flood"; output;)
Datapath: 0b214af6-8910-489c-926a-fd0ed16a8251 Pipeline: egress
table= 0( ls_out_pre_lb), priority= 0, match=(1),
action=(next;)
table= 1( ls_out_pre_acl), priority= 0, match=(1),
action=(next;)
table= 2(ls_out_pre_stateful), priority= 100, match=(reg0[0]
,→== 1),
action=(ct_next;)
(continues on next page)
action=(ct_lb;)
table= 5( ls_out_stateful), priority= 0, match=(1),
action=(next;)
table= 6( ls_out_port_sec_ip), priority= 0, match=(1),
action=(next;)
table= 7( ls_out_port_sec_l2), priority= 100, match=(eth.
,→mcast),
action=(output;)
A self-service network requires at least one subnet. In most cases, the environment provides suitable
values for IP address allocation for instances, default gateway IP address, and metadata such as name
resolution.
1. On the controller node, create a subnet bound to the self-service network selfservice.
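An illustrative command; the subnet range is an assumption chosen to match the 192.168.1.x addresses
in the output below, and the name selfservice-v4 matches the subnet referenced later in this chapter:

$ openstack subnet create --network selfservice --subnet-range 192.168.1.0/24 selfservice-v4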
OVN operations
The OVN mechanism driver and OVN perform the following operations during creation of a subnet on
a self-service network.
1. If the subnet uses DHCP for IP address management, create logical ports for each DHCP agent
serving the subnet and bind them to the logical switch. In this example, the subnet contains
two DHCP agents.
_uuid : 1ed7c28b-dc69-42b8-bed6-46477bb8b539
addresses : ["fa:16:3e:94:db:5e 192.168.1.2"]
enabled : true
external_ids : {"neutron:port_name"=""}
name : "0cfbbdca-ff58-4cf8-a7d3-77daaebe3056"
options : {}
parent_name : []
port_security : []
tag : []
type : ""
up : true
_uuid : ae10a5e0-db25-4108-b06a-d2d5c127d9c4
addresses : ["fa:16:3e:90:bd:f1 192.168.1.3"]
enabled : true
external_ids : {"neutron:port_name"=""}
name : "74930ace-d939-4bca-b577-fccba24c3fca"
options : {}
parent_name : []
port_security : []
tag : []
type : ""
up : true
_uuid : 0ab40684-7cf8-4d6c-ae8b-9d9143762d37
acls : []
external_ids : {"neutron:network_name"="selfservice"}
name : "neutron-d5aadceb-d8d6-41c8-9252-c5e0fe6c26a5"
ports : [1ed7c28b-dc69-42b8-bed6-46477bb8b539,
ae10a5e0-db25-4108-b06a-d2d5c127d9c4]
2. The OVN northbound service creates port bindings for these logical ports and adds them to the
appropriate multicast group.
• Port bindings
_uuid : 3e463ca0-951c-46fd-b6cf-05392fa3aa1f
chassis : 6a9d0619-8818-41e6-abef-2f3d9a597c03
datapath : 0b214af6-8910-489c-926a-fd0ed16a8251
logical_port : "a203b410-97c1-4e4a-b0c3-558a10841c16"
mac : ["fa:16:3e:a1:dc:58 192.168.1.3"]
options : {}
(continues on next page)
_uuid : fa7b294d-2a62-45ae-8de3-a41c002de6de
chassis : d63e8ae8-caf3-4a6b-9840-5c3a57febcac
datapath : 0b214af6-8910-489c-926a-fd0ed16a8251
logical_port : "39b23721-46f4-4747-af54-7e12f22b3397"
mac : ["fa:16:3e:1a:b4:23 192.168.1.2"]
options : {}
parent_port : []
tag : []
tunnel_key : 1
type : ""
• Multicast groups
_uuid : c08d0102-c414-4a47-98d9-dd3fa9f9901c
datapath : 0b214af6-8910-489c-926a-fd0ed16a8251
name : _MC_flood
ports : [3e463ca0-951c-46fd-b6cf-05392fa3aa1f,
fa7b294d-2a62-45ae-8de3-a41c002de6de]
tunnel_key : 65535
3. The OVN northbound service translates the logical ports into logical flows in the OVN southbound
database.
9(tap39b23721-46): addr:00:00:00:00:b0:5d
config: PORT_DOWN
state: LINK_DOWN
speed: 0 Mbps now, 0 Mbps max
• The OVN controller service translates these objects into flows on the integration bridge.
cookie=0x0, duration=21.074s, table=0, n_packets=8, n_bytes=648,
idle_age=11, priority=100,in_port=9
actions=load:0x2->NXM_NX_REG5[],load:0x5->OXM_OF_METADATA[],
load:0x1->NXM_NX_REG6[],resubmit(,16)
cookie=0x0, duration=21.076s, table=16, n_packets=0, n_bytes=0,
idle_age=21, priority=100,metadata=0x5,
dl_src=01:00:00:00:00:00/01:00:00:00:00:00
actions=drop
cookie=0x0, duration=21.075s, table=16, n_packets=0, n_bytes=0,
idle_age=21, priority=100,metadata=0x5,vlan_tci=0x1000/0x1000
actions=drop
cookie=0x0, duration=21.076s, table=16, n_packets=0, n_bytes=0,
idle_age=21, priority=50,reg6=0x2,metadata=0x5
actions=resubmit(,17)
cookie=0x0, duration=21.075s, table=16, n_packets=8, n_bytes=648,
(continues on next page)
Routers
Create a router
1. On the controller node, source the credentials for a regular (non-privileged) project. The following
example uses the demo project.
2. On the controller node, create a router in the Networking service.
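An illustrative command, assuming the router name router used in the output below:

$ openstack router create router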
OVN operations
The OVN mechanism driver and OVN perform the following operations when creating a router.
1. The OVN mechanism driver translates the router into a logical router object in the OVN north-
bound database.
_uuid : 1c2e340d-dac9-496b-9e86-1065f9dab752
default_gw : []
enabled : []
external_ids : {"neutron:router_name"="router"}
name : "neutron-a24fd760-1a99-4eec-9f02-24bb284ff708"
ports : []
static_routes : []
2. The OVN northbound service translates this object into logical flows and datapath bindings in the
OVN southbound database.
• Datapath bindings
_uuid : 4a7485c6-a1ef-46a5-b57c-5ddb6ac15aaa
external_ids : {logical-router="1c2e340d-dac9-496b-9e86-1065f9dab752"}
tunnel_key : 3
• Logical flows
action=(drop;)
(continues on next page)
3. The OVN controller service on each compute node translates these objects into flows on the inte-
gration bridge br-int.
# ovs-ofctl dump-flows br-int
cookie=0x0, duration=6.402s, table=16, n_packets=0, n_bytes=0,
idle_age=6, priority=100,metadata=0x5,vlan_tci=0x1000/0x1000
actions=drop
cookie=0x0, duration=6.402s, table=16, n_packets=0, n_bytes=0,
idle_age=6, priority=100,metadata=0x5,
dl_src=01:00:00:00:00:00/01:00:00:00:00:00
actions=drop
cookie=0x0, duration=6.402s, table=17, n_packets=0, n_bytes=0,
idle_age=6, priority=100,ip,metadata=0x5,nw_dst=127.0.0.0/8
actions=drop
cookie=0x0, duration=6.402s, table=17, n_packets=0, n_bytes=0,
idle_age=6, priority=100,ip,metadata=0x5,nw_dst=0.0.0.0/8
actions=drop
cookie=0x0, duration=6.402s, table=17, n_packets=0, n_bytes=0,
idle_age=6, priority=100,ip,metadata=0x5,nw_dst=224.0.0.0/4
actions=drop
cookie=0x0, duration=6.402s, table=17, n_packets=0, n_bytes=0,
idle_age=6, priority=50,ip,metadata=0x5,nw_dst=224.0.0.0/4
actions=drop
cookie=0x0, duration=6.402s, table=17, n_packets=0, n_bytes=0,
idle_age=6, priority=100,ip,metadata=0x5,nw_src=255.255.255.255
actions=drop
cookie=0x0, duration=6.402s, table=17, n_packets=0, n_bytes=0,
idle_age=6, priority=100,ip,metadata=0x5,nw_src=127.0.0.0/8
actions=drop
cookie=0x0, duration=6.402s, table=17, n_packets=0, n_bytes=0,
idle_age=6, priority=100,ip,metadata=0x5,nw_src=0.0.0.0/8
actions=drop
cookie=0x0, duration=6.402s, table=17, n_packets=0, n_bytes=0,
idle_age=6, priority=90,arp,metadata=0x5,arp_op=2
actions=push:NXM_NX_REG0[],push:NXM_OF_ETH_SRC[],
push:NXM_NX_ARP_SHA[],push:NXM_OF_ARP_SPA[],
(continues on next page)
Self-service networks, particularly subnets, must interface with a router to enable connectivity with other
self-service and provider networks.
1. On the controller node, add the self-service network subnet selfservice-v4 to the router
router.
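An illustrative command, assuming the router and subnet names used in this example:

$ openstack router add subnet router selfservice-v4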
OVN operations
The OVN mechanism driver and OVN perform the following operations when adding a subnet as an
interface on a router.
1. The OVN mechanism driver translates the operation into logical objects and devices in the OVN
northbound database and performs a series of operations on them.
• Create a logical port.
_uuid : 4c9e70b1-fff0-4d0d-af8e-42d3896eb76f
addresses : ["fa:16:3e:0c:55:62 192.168.1.1"]
enabled : true
external_ids : {"neutron:port_name"=""}
name : "5b72d278-5b16-44a6-9aa0-9e513a429506"
options : {router-port="lrp-5b72d278-5b16-44a6-9aa0-9e513a429506"}
parent_name : []
port_security : []
tag : []
type : router
up : false
_uuid : 0ab40684-7cf8-4d6c-ae8b-9d9143762d37
acls : []
external_ids : {"neutron:network_name"="selfservice"}
name : "neutron-d5aadceb-d8d6-41c8-9252-
,→c5e0fe6c26a5"
ports : [1ed7c28b-dc69-42b8-bed6-46477bb8b539,
4c9e70b1-fff0-4d0d-af8e-42d3896eb76f,
ae10a5e0-db25-4108-b06a-d2d5c127d9c4]
_uuid : f60ccb93-7b3d-4713-922c-37104b7055dc
enabled : []
external_ids : {}
mac : "fa:16:3e:0c:55:62"
name : "lrp-5b72d278-5b16-44a6-9aa0-9e513a429506"
network : "192.168.1.1/24"
peer : []
_uuid : 1c2e340d-dac9-496b-9e86-1065f9dab752
default_gw : []
enabled : []
external_ids : {"neutron:router_name"="router"}
name : "neutron-a24fd760-1a99-4eec-9f02-
,→24bb284ff708"
ports : [f60ccb93-7b3d-4713-922c-37104b7055dc]
static_routes : []
2. The OVN northbound service translates these objects into logical flows, datapath bindings, and
the appropriate multicast groups in the OVN southbound database.
• Logical flows in the logical router datapath
• Port bindings
_uuid : 0f86395b-a0d8-40fd-b22c-4c9e238a7880
chassis : []
datapath : 4a7485c6-a1ef-46a5-b57c-5ddb6ac15aaa
logical_port : "lrp-5b72d278-5b16-44a6-9aa0-9e513a429506"
mac : []
options : {peer="5b72d278-5b16-44a6-9aa0-9e513a429506
,→"}
parent_port : []
tag : []
tunnel_key : 1
type : patch
_uuid : 8d95ab8c-c2ea-4231-9729-7ecbfc2cd676
chassis : []
datapath : 4aef86e4-e54a-4c83-bb27-d65c670d4b51
logical_port : "5b72d278-5b16-44a6-9aa0-9e513a429506"
mac : ["fa:16:3e:0c:55:62 192.168.1.1"]
options : {peer="lrp-5b72d278-5b16-44a6-9aa0-
,→9e513a429506"}
parent_port : []
tag : []
tunnel_key : 3
type : patch
• Multicast groups
_uuid : 4a6191aa-d8ac-4e93-8306-b0d8fbbe4e35
datapath : 4aef86e4-e54a-4c83-bb27-d65c670d4b51
name : _MC_flood
ports : [8d95ab8c-c2ea-4231-9729-7ecbfc2cd676,
be71fac3-9f04-41c9-9951-f3f7f1fa1ec5,
da5c1269-90b7-4df2-8d76-d4575754b02d]
tunnel_key : 65535
In addition, if the self-service network contains ports with IP addresses (typically instances or
DHCP servers), OVN creates a logical flow for each port, similar to the following example.
3. On each compute node, the OVN controller service creates patch ports, similar to the following
example.
7(patch-f112b99a-): addr:4e:01:91:2a:73:66
config: 0
(continues on next page)
4. On all compute nodes, the OVN controller service creates the following additional flows:
cookie=0x0, duration=6.667s, table=0, n_packets=0, n_bytes=0,
idle_age=6, priority=100,in_port=8
actions=load:0x9->OXM_OF_METADATA[],load:0x1->NXM_NX_REG6[],
resubmit(,16)
cookie=0x0, duration=6.667s, table=0, n_packets=0, n_bytes=0,
idle_age=6, priority=100,in_port=7
actions=load:0x7->OXM_OF_METADATA[],load:0x4->NXM_NX_REG6[],
resubmit(,16)
cookie=0x0, duration=6.674s, table=16, n_packets=0, n_bytes=0,
idle_age=6, priority=50,reg6=0x4,metadata=0x7
actions=resubmit(,17)
cookie=0x0, duration=6.674s, table=16, n_packets=0, n_bytes=0,
idle_age=6, priority=50,reg6=0x1,metadata=0x9,
dl_dst=fa:16:3e:fa:76:8f
actions=resubmit(,17)
cookie=0x0, duration=6.674s, table=16, n_packets=0, n_bytes=0,
idle_age=6, priority=50,reg6=0x1,metadata=0x9,
dl_dst=01:00:00:00:00:00/01:00:00:00:00:00
actions=resubmit(,17)
cookie=0x0, duration=6.674s, table=17, n_packets=0, n_bytes=0,
idle_age=6, priority=100,ip,metadata=0x9,nw_src=192.168.1.1
actions=drop
cookie=0x0, duration=6.673s, table=17, n_packets=0, n_bytes=0,
idle_age=6, priority=100,ip,metadata=0x9,nw_src=192.168.1.255
actions=drop
cookie=0x0, duration=6.673s, table=17, n_packets=0, n_bytes=0,
idle_age=6, priority=90,arp,reg6=0x1,metadata=0x9,
arp_tpa=192.168.1.1,arp_op=1
actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],
mod_dl_src:fa:16:3e:fa:76:8f,load:0x2->NXM_OF_ARP_OP[],
move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],
load:0xfa163efa768f->NXM_NX_ARP_SHA[],
move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],
load:0xc0a80101->NXM_OF_ARP_SPA[],load:0x1->NXM_NX_REG7[],
load:0->NXM_NX_REG6[],load:0->NXM_OF_IN_PORT[],resubmit(,32)
cookie=0x0, duration=6.673s, table=17, n_packets=0, n_bytes=0,
idle_age=6, priority=90,icmp,metadata=0x9,nw_dst=192.168.1.1,
icmp_type=8,icmp_code=0
actions=move:NXM_OF_IP_SRC[]->NXM_OF_IP_DST[],mod_nw_src:192.168.
,→1.1,
load:0xff->NXM_NX_IP_TTL[],load:0->NXM_OF_ICMP_TYPE[],
load:0->NXM_NX_REG6[],load:0->NXM_OF_IN_PORT[],resubmit(,18)
cookie=0x0, duration=6.674s, table=17, n_packets=0, n_bytes=0,
idle_age=6, priority=60,ip,metadata=0x9,nw_dst=192.168.1.1
actions=drop
cookie=0x0, duration=6.674s, table=20, n_packets=0, n_bytes=0,
idle_age=6, priority=24,ip,metadata=0x9,nw_dst=192.168.1.0/24
5. On compute nodes not containing a port on the network, the OVN controller also creates additional
flows.
actions=resubmit(,20)
cookie=0x0, duration=6.673s, table=19, n_packets=0, n_bytes=0,
idle_age=6, priority=110,icmp6,metadata=0x7,icmp_type=135,icmp_
,→code=0
actions=resubmit(,20)
cookie=0x0, duration=6.674s, table=19, n_packets=0, n_bytes=0,
idle_age=6, priority=100,ip,metadata=0x7
actions=load:0x1->NXM_NX_REG0[0],resubmit(,20)
cookie=0x0, duration=6.670s, table=19, n_packets=0, n_bytes=0,
idle_age=6, priority=100,ipv6,metadata=0x7
actions=load:0x1->NXM_NX_REG0[0],resubmit(,20)
cookie=0x0, duration=6.674s, table=19, n_packets=0, n_bytes=0,
idle_age=6, priority=0,metadata=0x7
actions=resubmit(,20)
cookie=0x0, duration=6.673s, table=20, n_packets=0, n_bytes=0,
idle_age=6, priority=0,metadata=0x7
actions=resubmit(,21)
cookie=0x0, duration=6.674s, table=21, n_packets=0, n_bytes=0,
idle_age=6, priority=100,ipv6,reg0=0x1/0x1,metadata=0x7
actions=ct(table=22,zone=NXM_NX_REG5[0..15])
cookie=0x0, duration=6.670s, table=21, n_packets=0, n_bytes=0,
idle_age=6, priority=100,ip,reg0=0x1/0x1,metadata=0x7
actions=ct(table=22,zone=NXM_NX_REG5[0..15])
cookie=0x0, duration=6.674s, table=21, n_packets=0, n_bytes=0,
idle_age=6, priority=0,metadata=0x7
actions=resubmit(,22)
cookie=0x0, duration=6.674s, table=22, n_packets=0, n_bytes=0,
idle_age=6, priority=65535,ct_state=-new+est-rel-inv+trk,
,→metadata=0x7
actions=resubmit(,23)
cookie=0x0, duration=6.673s, table=22, n_packets=0, n_bytes=0,
idle_age=6, priority=65535,ct_state=-new-est+rel-inv+trk,
,→metadata=0x7
actions=resubmit(,23)
cookie=0x0, duration=6.673s, table=22, n_packets=0, n_bytes=0,
idle_age=6, priority=65535,ct_state=+inv+trk,metadata=0x7
actions=drop
cookie=0x0, duration=6.673s, table=22, n_packets=0, n_bytes=0,
idle_age=6, priority=65535,icmp6,metadata=0x7,icmp_type=135,
icmp_code=0
actions=resubmit(,23)
cookie=0x0, duration=6.673s, table=22, n_packets=0, n_bytes=0,
idle_age=6, priority=65535,icmp6,metadata=0x7,icmp_type=136,
icmp_code=0
actions=resubmit(,23)
cookie=0x0, duration=6.674s, table=22, n_packets=0, n_bytes=0,
idle_age=6, priority=2002,udp,reg6=0x3,metadata=0x7,
nw_dst=255.255.255.255,tp_src=68,tp_dst=67
actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
cookie=0x0, duration=6.674s, table=22, n_packets=0, n_bytes=0,
idle_age=6, priority=2002,udp,reg6=0x3,metadata=0x7,
actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
cookie=0x0, duration=6.673s, table=22, n_packets=0, n_bytes=0,
idle_age=6, priority=2002,ct_state=+new+trk,ip,reg6=0x3,
,→metadata=0x7
actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
cookie=0x0, duration=6.674s, table=22, n_packets=0, n_bytes=0,
idle_age=6, priority=2001,ip,reg6=0x3,metadata=0x7
actions=drop
cookie=0x0, duration=6.673s, table=22, n_packets=0, n_bytes=0,
idle_age=6, priority=2001,ipv6,reg6=0x3,metadata=0x7
actions=drop
cookie=0x0, duration=6.674s, table=22, n_packets=0, n_bytes=0,
idle_age=6, priority=1,ipv6,metadata=0x7
actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
cookie=0x0, duration=6.673s, table=22, n_packets=0, n_bytes=0,
idle_age=6, priority=1,ip,metadata=0x7
actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
cookie=0x0, duration=6.673s, table=22, n_packets=0, n_bytes=0,
idle_age=6, priority=0,metadata=0x7
actions=resubmit(,23)
cookie=0x0, duration=6.673s, table=23, n_packets=0, n_bytes=0,
idle_age=6, priority=0,metadata=0x7
actions=resubmit(,24)
cookie=0x0, duration=6.674s, table=24, n_packets=0, n_bytes=0,
idle_age=6, priority=100,ipv6,reg0=0x2/0x2,metadata=0x7
actions=ct(commit,zone=NXM_NX_REG5[0..15]),resubmit(,25)
cookie=0x0, duration=6.674s, table=24, n_packets=0, n_bytes=0,
idle_age=6, priority=100,ip,reg0=0x2/0x2,metadata=0x7
actions=ct(commit,zone=NXM_NX_REG5[0..15]),resubmit(,25)
cookie=0x0, duration=6.673s, table=24, n_packets=0, n_bytes=0,
idle_age=6, priority=100,ipv6,reg0=0x4/0x4,metadata=0x7
actions=ct(table=25,zone=NXM_NX_REG5[0..15],nat)
cookie=0x0, duration=6.670s, table=24, n_packets=0, n_bytes=0,
idle_age=6, priority=100,ip,reg0=0x4/0x4,metadata=0x7
actions=ct(table=25,zone=NXM_NX_REG5[0..15],nat)
cookie=0x0, duration=6.674s, table=24, n_packets=0, n_bytes=0,
idle_age=6, priority=0,metadata=0x7
actions=resubmit(,25)
cookie=0x0, duration=6.673s, table=25, n_packets=0, n_bytes=0,
idle_age=6, priority=50,arp,metadata=0x7,arp_tpa=192.168.1.11,
arp_op=1
actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],
mod_dl_src:fa:16:3e:b6:91:70,load:0x2->NXM_OF_ARP_OP[],
move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],
load:0xfa163eb69170->NXM_NX_ARP_SHA[],
move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],
load:0xc0a8010b->NXM_OF_ARP_SPA[],
move:NXM_NX_REG6[]->NXM_NX_REG7[],load:0->NXM_NX_REG6[],
load:0->NXM_OF_IN_PORT[],resubmit(,32)
cookie=0x0, duration=6.670s, table=25, n_packets=0, n_bytes=0,
idle_age=6, priority=50,arp,metadata=0x7,arp_tpa=192.168.1.3,arp_
,→op=1
nw_src=192.168.1.11
actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
cookie=0x0, duration=6.670s, table=52, n_packets=0, n_bytes=0,
6. On compute nodes containing a port on the network, the OVN controller also creates an additional
flow.
cookie=0x0, duration=13.358s, table=52, n_packets=0, n_bytes=0,
idle_age=13, priority=2002,ct_state=+new+trk,ipv6,reg7=0x3,
metadata=0x7,ipv6_src=::
actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
Instances
Launching an instance causes the same series of operations regardless of the network. The following
example uses the provider network named provider, the cirros image, the m1.tiny flavor, the
default security group, and the mykey key.
1. On the controller node, source the credentials for a regular (non-privileged) project. The following
example uses the demo project.
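For instance, assuming a demo-openrc credentials file as used in the installation guide:
$ . demo-openrc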
2. On the controller node, launch an instance using the UUID of the provider network.
$ openstack server create --flavor m1.tiny --image cirros \
--nic net-id=0243277b-4aa8-46d8-9e10-5c9ad5e01521 \
--security-group default --key-name mykey provider-instance
+--------------------------------------+-----------------------------------------------+
| Property                             | Value                                         |
+--------------------------------------+-----------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                        |
| OS-EXT-AZ:availability_zone          | nova                                          |
OVN operations
The OVN mechanism driver and OVN perform the following operations when launching an instance.
1. The OVN mechanism driver creates a logical port for the instance.
_uuid : cc891503-1259-47a1-9349-1c0293876664
addresses : ["fa:16:3e:1c:ca:6a 203.0.113.103"]
enabled : true
external_ids : {"neutron:port_name"=""}
name : "cafd4862-c69c-46e4-b3d2-6141ce06b205"
options : {}
parent_name : []
port_security : ["fa:16:3e:1c:ca:6a 203.0.113.103"]
tag : []
type : ""
up : true
2. The OVN mechanism driver updates the appropriate Address Set entry with the address of this
instance:
_uuid : d0becdea-e1ed-48c4-9afc-e278cdef4629
addresses : ["203.0.113.103"]
external_ids : {"neutron:security_group_name"=default}
name : "as_ip4_90a78a43_b549_4bee_8822_21fcccab58dc"
3. The OVN mechanism driver creates ACL entries for this port and any other ports in the project.
_uuid : f8d27bfc-4d74-4e73-8fac-c84585443efd
action : drop
direction : from-lport
external_ids : {"neutron:lport"="cafd4862-c69c-46e4-b3d2-
,→6141ce06b205"}
log : false
match : "inport == \"cafd4862-c69c-46e4-b3d2-
,→6141ce06b205\" && ip"
priority : 1001
_uuid : a61d0068-b1aa-4900-9882-e0671d1fc131
action : allow
direction : to-lport
external_ids : {"neutron:lport"="cafd4862-c69c-46e4-b3d2-
,→6141ce06b205"}
log : false
match : "outport == \"cafd4862-c69c-46e4-b3d2-
,→6141ce06b205\" && ip4 && ip4.src == 203.0.113.0/24 && udp && udp.
,→src == 67 && udp.dst == 68"
priority : 1002
_uuid : a5a787b8-7040-4b63-a20a-551bd73eb3d1
action : allow-related
direction : from-lport
external_ids : {"neutron:lport"="cafd4862-c69c-46e4-b3d2-
,→6141ce06b205"}
log : false
match : "inport == \"cafd4862-c69c-46e4-b3d2-
,→6141ce06b205\" && ip6"
_uuid : 7b3f63b8-e69a-476c-ad3d-37de043232b2
action : allow-related
direction : to-lport
external_ids : {"neutron:lport"="cafd4862-c69c-46e4-b3d2-
,→6141ce06b205"}
log : false
match : "outport == \"cafd4862-c69c-46e4-b3d2-
,→6141ce06b205\" && ip4 && ip4.src = $as_ip4_90a78a43_b5649_4bee_8822_
,→21fcccab58dc"
priority : 1002
_uuid : 36dbb1b1-cd30-4454-a0bf-923646eb7c3f
action : allow
direction : from-lport
external_ids : {"neutron:lport"="cafd4862-c69c-46e4-b3d2-
,→6141ce06b205"}
log : false
match : "inport == \"cafd4862-c69c-46e4-b3d2-
,→6141ce06b205\" && ip4 && (ip4.dst == 255.255.255.255 || ip4.dst ==
,→203.0.113.0/24) && udp && udp.src == 68 && udp.dst == 67"
priority : 1002
_uuid : 05a92f66-be48-461e-a7f1-b07bfbd3e667
action : allow-related
direction : from-lport
external_ids : {"neutron:lport"="cafd4862-c69c-46e4-b3d2-
,→6141ce06b205"}
log : false
match : "inport == \"cafd4862-c69c-46e4-b3d2-
,→6141ce06b205\" && ip4"
priority : 1002
_uuid : 37f18377-d6c3-4c44-9e4d-2170710e50ff
action : drop
direction : to-lport
external_ids : {"neutron:lport"="cafd4862-c69c-46e4-b3d2-
,→6141ce06b205"}
log : false
match : "outport == \"cafd4862-c69c-46e4-b3d2-
,→6141ce06b205\" && ip"
priority : 1001
_uuid : 6d4db3cf-c1f1-4006-ad66-ae582a6acd21
action : allow-related
direction : to-lport
external_ids : {"neutron:lport"="cafd4862-c69c-46e4-b3d2-
,→6141ce06b205"}
log : false
match : "outport == \"cafd4862-c69c-46e4-b3d2-
,→6141ce06b205\" && ip6 && ip6.src = $as_ip6_90a78a43_b5649_4bee_8822_
,→21fcccab58dc"
priority : 1002
4. The OVN mechanism driver updates the logical switch information with the UUIDs of these
objects.
_uuid : 924500c4-8580-4d5f-a7ad-8769f6e58ff5
acls : [05a92f66-be48-461e-a7f1-b07bfbd3e667,
36dbb1b1-cd30-4454-a0bf-923646eb7c3f,
37f18377-d6c3-4c44-9e4d-2170710e50ff,
7b3f63b8-e69a-476c-ad3d-37de043232b2,
a5a787b8-7040-4b63-a20a-551bd73eb3d1,
a61d0068-b1aa-4900-9882-e0671d1fc131,
f8d27bfc-4d74-4e73-8fac-c84585443efd]
external_ids : {"neutron:network_name"=provider}
name : "neutron-670efade-7cd0-4d87-8a04-27f366eb8941"
ports : [38cf8b52-47c4-4e93-be8d-06bf71f6a7c9,
5e144ab9-3e08-4910-b936-869bbbf254c8,
a576b812-9c3e-4cfb-9752-5d8500b3adf9,
cc891503-1259-47a1-9349-1c0293876664]
5. The OVN northbound service creates port bindings for the logical ports and adds them to the
appropriate multicast group.
• Port bindings
_uuid : e73e3fcd-316a-4418-bbd5-a8a42032b1c3
chassis : fc5ab9e7-bc28-40e8-ad52-2949358cc088
datapath : bd0ab2b3-4cf4-4289-9529-ef430f6a89e6
logical_port : "cafd4862-c69c-46e4-b3d2-6141ce06b205"
mac : ["fa:16:3e:1c:ca:6a 203.0.113.103"]
options : {}
parent_port : []
tag : []
tunnel_key : 4
type : ""
• Multicast groups
_uuid : 39b32ccd-fa49-4046-9527-13318842461e
datapath : bd0ab2b3-4cf4-4289-9529-ef430f6a89e6
name : _MC_flood
ports : [030024f4-61c3-4807-859b-07727447c427,
904c3108-234d-41c0-b93c-116b7e352a75,
cc5bcd19-bcae-4e29-8cee-3ec8a8a75d46,
e73e3fcd-316a-4418-bbd5-a8a42032b1c3]
tunnel_key : 65535
6. The OVN northbound service translates the Address Set change into the new Address Set in the
OVN southbound database.
_uuid : 2addbee3-7084-4fff-8f7b-15b1efebdaff
addresses : ["203.0.113.103"]
name : "as_ip4_90a78a43_b549_4bee_8822_21fcccab58dc"
7. The OVN northbound service translates the ACL and logical port objects into logical flows in the
OVN southbound database.
Datapath: bd0ab2b3-4cf4-4289-9529-ef430f6a89e6 Pipeline: ingress
table= 0( ls_in_port_sec_l2), priority= 50,
match=(inport == "cafd4862-c69c-46e4-b3d2-6141ce06b205" &&
8. The OVN controller service on each compute node translates these objects into flows on the inte-
gration bridge br-int. Exact flows depend on whether the compute node containing the instance
also contains a DHCP agent on the subnet.
• On the compute node containing the instance, the Compute service creates a port that con-
nects the instance to the integration bridge and OVN creates the following flows:
idle_age=15, priority=100,in_port=9
actions=load:0x3->NXM_NX_REG5[],load:0x4->OXM_OF_METADATA[],
load:0x4->NXM_NX_REG6[],resubmit(,16)
cookie=0x0, duration=191.687s, table=16, n_packets=175, n_
,→bytes=15270,
idle_age=15, priority=50,reg6=0x4,metadata=0x4,
dl_src=fa:16:3e:1c:ca:6a
actions=resubmit(,17)
cookie=0x0, duration=191.687s, table=17, n_packets=2, n_bytes=684,
idle_age=112, priority=90,udp,reg6=0x4,metadata=0x4,
dl_src=fa:16:3e:1c:ca:6a,nw_src=0.0.0.0,
nw_dst=255.255.255.255,tp_src=68,tp_dst=67
idle_age=15, priority=90,arp,reg6=0x4,metadata=0x4,
dl_src=fa:16:3e:1c:ca:6a,arp_spa=203.0.113.103,
arp_sha=fa:16:3e:1c:ca:6a
actions=resubmit(,19)
cookie=0x0, duration=191.687s, table=18, n_packets=0, n_bytes=0,
idle_age=191, priority=80,icmp6,reg6=0x4,metadata=0x4,
icmp_type=136,icmp_code=0
actions=drop
cookie=0x0, duration=191.687s, table=18, n_packets=0, n_bytes=0,
idle_age=191, priority=80,icmp6,reg6=0x4,metadata=0x4,
icmp_type=135,icmp_code=0
actions=drop
cookie=0x0, duration=191.687s, table=18, n_packets=0, n_bytes=0,
idle_age=191, priority=80,arp,reg6=0x4,metadata=0x4
actions=drop
cookie=0x0, duration=75.033s, table=19, n_packets=0, n_bytes=0,
idle_age=75, priority=110,icmp6,metadata=0x4,icmp_type=135,
icmp_code=0
actions=resubmit(,20)
cookie=0x0, duration=75.032s, table=19, n_packets=0, n_bytes=0,
idle_age=75, priority=110,icmp6,metadata=0x4,icmp_type=136,
icmp_code=0
actions=resubmit(,20)
cookie=0x0, duration=75.032s, table=19, n_packets=34, n_
,→bytes=5170,
idle_age=49, priority=100,ip,metadata=0x4
actions=load:0x1->NXM_NX_REG0[0],resubmit(,20)
cookie=0x0, duration=75.032s, table=19, n_packets=0, n_bytes=0,
idle_age=75, priority=100,ipv6,metadata=0x4
actions=load:0x1->NXM_NX_REG0[0],resubmit(,20)
cookie=0x0, duration=75.032s, table=22, n_packets=0, n_bytes=0,
idle_age=75, priority=65535,icmp6,metadata=0x4,icmp_type=136,
icmp_code=0
actions=resubmit(,23)
cookie=0x0, duration=75.032s, table=22, n_packets=0, n_bytes=0,
idle_age=75, priority=65535,icmp6,metadata=0x4,icmp_type=135,
icmp_code=0
actions=resubmit(,23)
cookie=0x0, duration=75.032s, table=22, n_packets=13, n_
,→bytes=1118,
idle_age=44, priority=50,metadata=0x4,dl_dst=fa:16:3e:1c:ca:6a
actions=load:0x4->NXM_NX_REG7[],resubmit(,32)
cookie=0x0, duration=221031.310s, table=33, n_packets=72, n_
,→bytes=6292,
• For each compute node that only contains a DHCP agent on the subnet, OVN creates the
following flows:
cookie=0x0, duration=189.649s, table=16, n_packets=0, n_bytes=0,
idle_age=189, priority=50,reg6=0x4,metadata=0x4,
dl_src=fa:16:3e:1c:ca:6a
actions=resubmit(,17)
cookie=0x0, duration=189.650s, table=17, n_packets=0, n_bytes=0,
idle_age=189, priority=90,udp,reg6=0x4,metadata=0x4,
dl_src=fa:14:3e:1c:ca:6a,nw_src=0.0.0.0,
nw_dst=255.255.255.255,tp_src=68,tp_dst=67
actions=resubmit(,18)
cookie=0x0, duration=189.649s, table=17, n_packets=0, n_bytes=0,
idle_age=189, priority=90,ip,reg6=0x4,metadata=0x4,
dl_src=fa:16:3e:1c:ca:6a,nw_src=203.0.113.103
arp_op=1
actions=move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],
mod_dl_src:fa:16:3e:1c:ca:6a,load:0x2->NXM_OF_ARP_OP[],
move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],
load:0xfa163ed63dca->NXM_NX_ARP_SHA[],
move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],
load:0xc0a81268->NXM_OF_ARP_SPA[],
move:NXM_NX_REG6[]->NXM_NX_REG7[],load:0->NXM_NX_REG6[],
load:0->NXM_OF_IN_PORT[],resubmit(,32)
cookie=0x0, duration=79.450s, table=26, n_packets=8, n_bytes=1258,
idle_age=57, priority=50,metadata=0x4,dl_dst=fa:16:3e:1c:ca:6a
actions=load:0x4->NXM_NX_REG7[],resubmit(,32)
cookie=0x0, duration=182.952s, table=33, n_packets=74, n_
,→bytes=7040,
idle_age=18, priority=100,reg7=0x4,metadata=0x4
actions=load:0x1->NXM_NX_REG7[],resubmit(,33)
cookie=0x0, duration=79.451s, table=49, n_packets=0, n_bytes=0,
idle_age=79, priority=110,icmp6,metadata=0x4,icmp_type=135,
icmp_code=0
actions=resubmit(,50)
cookie=0x0, duration=79.450s, table=49, n_packets=0, n_bytes=0,
idle_age=57, priority=100,ip,metadata=0x4
actions=load:0x1->NXM_NX_REG0[0],resubmit(,50)
cookie=0x0, duration=79.450s, table=49, n_packets=0, n_bytes=0,
idle_age=79, priority=100,ipv6,metadata=0x4
actions=load:0x1->NXM_NX_REG0[0],resubmit(,50)
cookie=0x0, duration=79.450s, table=52, n_packets=0, n_bytes=0,
idle_age=79, priority=65535,ct_state=-new-est+rel-inv+trk,
metadata=0x4
actions=resubmit(,53)
cookie=0x0, duration=79.450s, table=52, n_packets=6, n_bytes=510,
idle_age=57, priority=65535,ct_state=-new+est-rel-inv+trk,
metadata=0x4
actions=resubmit(,53)
cookie=0x0, duration=79.450s, table=52, n_packets=0, n_bytes=0,
idle_age=79, priority=65535,icmp6,metadata=0x4,icmp_type=135,
icmp_code=0
actions=resubmit(,53)
cookie=0x0, duration=79.450s, table=52, n_packets=0, n_bytes=0,
idle_age=79, priority=65535,icmp6,metadata=0x4,icmp_type=136,
icmp_code=0
actions=resubmit(,53)
cookie=0x0, duration=79.450s, table=52, n_packets=0, n_bytes=0,
idle_age=79, priority=65535,ct_state=+inv+trk,metadata=0x4
actions=drop
cookie=0x0, duration=79.452s, table=52, n_packets=0, n_bytes=0,
idle_age=79, priority=2002,udp,reg7=0x4,metadata=0x4,
nw_src=203.0.113.0/24,tp_src=67,tp_dst=68
actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
cookie=0x0, duration=79.450s, table=52, n_packets=0, n_bytes=0,
idle_age=79, priority=2002,ct_state=+new+trk,ip,reg7=0x4,
metadata=0x4,nw_src=203.0.113.103
actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
cookie=0x0, duration=71.483s, table=52, n_packets=0, n_bytes=0,
idle_age=71, priority=2002,ct_state=+new+trk,ipv6,reg7=0x4,
metadata=0x4
actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
cookie=0x0, duration=79.450s, table=52, n_packets=0, n_bytes=0,
idle_age=79, priority=2001,ipv6,reg7=0x4,metadata=0x4
actions=drop
cookie=0x0, duration=79.450s, table=52, n_packets=0, n_bytes=0,
idle_age=79, priority=2001,ip,reg7=0x4,metadata=0x4
actions=drop
cookie=0x0, duration=79.453s, table=52, n_packets=0, n_bytes=0,
idle_age=79, priority=1,ipv6,metadata=0x4
actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
cookie=0x0, duration=79.450s, table=52, n_packets=12, n_
,→bytes=2654,
idle_age=57, priority=1,ip,metadata=0x4
actions=load:0x1->NXM_NX_REG0[1],resubmit(,53)
cookie=0x0, duration=79.452s, table=54, n_packets=0, n_bytes=0,
idle_age=79, priority=90,ip,reg7=0x4,metadata=0x4,
To launch an instance on a self-service network, follow the same steps as launching an instance on the
provider network, but using the UUID of the self-service network.
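A sketch of such a launch, assuming a self-service network UUID of SELFSERVICE_NET_ID and otherwise
the same flavor, image, security group, and key as before:
$ openstack server create --flavor m1.tiny --image cirros \
  --nic net-id=SELFSERVICE_NET_ID \
  --security-group default --key-name mykey selfservice-instance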
OVN operations
The OVN mechanism driver and OVN perform the following operations when launching an instance.
1. The OVN mechanism driver creates a logical port for the instance.
_uuid : c754d1d2-a7fb-4dd0-b14c-c076962b06b9
addresses : ["fa:16:3e:15:7d:13 192.168.1.5"]
enabled : true
external_ids : {"neutron:port_name"=""}
name : "eaf36f62-5629-4ec4-b8b9-5e562c40e7ae"
options : {}
parent_name : []
port_security : ["fa:16:3e:15:7d:13 192.168.1.5"]
tag : []
type : ""
up : true
2. The OVN mechanism driver updates the appropriate Address Set object(s) with the address of the
new instance:
_uuid : d0becdea-e1ed-48c4-9afc-e278cdef4629
addresses : ["192.168.1.5", "203.0.113.103"]
external_ids : {"neutron:security_group_name"=default}
name : "as_ip4_90a78a43_b549_4bee_8822_21fcccab58dc"
3. The OVN mechanism driver creates ACL entries for this port and any other ports in the project.
_uuid : 00ecbe8f-c82a-4e18-b688-af2a1941cff7
action : allow
direction : from-lport
external_ids : {"neutron:lport"="eaf36f62-5629-4ec4-b8b9-
,→5e562c40e7ae"}
log : false
match : "inport == \"eaf36f62-5629-4ec4-b8b9-
,→5e562c40e7ae\" && ip4 && (ip4.dst == 255.255.255.255 || ip4.dst ==
_uuid : 2bf5b7ed-008e-4676-bba5-71fe58897886
action : allow-related
direction : from-lport
external_ids : {"neutron:lport"="eaf36f62-5629-4ec4-b8b9-
,→5e562c40e7ae"}
log : false
match : "inport == \"eaf36f62-5629-4ec4-b8b9-
,→5e562c40e7ae\" && ip4"
priority : 1002
_uuid : 330b4e27-074f-446a-849b-9ab0018b65c5
action : allow
direction : to-lport
external_ids : {"neutron:lport"="eaf36f62-5629-4ec4-b8b9-
,→5e562c40e7ae"}
log : false
match : "outport == \"eaf36f62-5629-4ec4-b8b9-
,→5e562c40e7ae\" && ip4 && ip4.src == 192.168.1.0/24 && udp && udp.
,→src == 67 && udp.dst == 68"
priority : 1002
_uuid : 683f52f2-4be6-4bd7-a195-6c782daa7840
action : allow-related
direction : from-lport
external_ids : {"neutron:lport"="eaf36f62-5629-4ec4-b8b9-
,→5e562c40e7ae"}
log : false
match : "inport == \"eaf36f62-5629-4ec4-b8b9-
,→5e562c40e7ae\" && ip6"
priority : 1002
_uuid : 8160f0b4-b344-43d5-bbd4-ca63a71aa4fc
action : drop
direction : to-lport
external_ids : {"neutron:lport"="eaf36f62-5629-4ec4-b8b9-
,→5e562c40e7ae"}
log : false
match : "outport == \"eaf36f62-5629-4ec4-b8b9-
,→5e562c40e7ae\" && ip"
priority : 1001
_uuid : 97c6b8ca-14ea-4812-8571-95d640a88f4f
action : allow-related
direction : to-lport
priority : 1002
_uuid : 9cfd8eb5-5daa-422e-8fe8-bd22fd7fa826
action : allow-related
direction : to-lport
external_ids : {"neutron:lport"="eaf36f62-5629-4ec4-b8b9-
,→5e562c40e7ae"}
log : false
match : "outport == \"eaf36f62-5629-4ec4-b8b9-
,→5e562c40e7ae\" && ip4 && ip4.src == 0.0.0.0/0 && icmp4"
priority : 1002
_uuid : f72c2431-7a64-4cea-b84a-118bdc761be2
action : drop
direction : from-lport
external_ids : {"neutron:lport"="eaf36f62-5629-4ec4-b8b9-
,→5e562c40e7ae"}
log : false
match : "inport == \"eaf36f62-5629-4ec4-b8b9-
,→5e562c40e7ae\" && ip"
priority : 1001
_uuid : f94133fa-ed27-4d5e-a806-0d528e539cb3
action : allow-related
direction : to-lport
external_ids : {"neutron:lport"="eaf36f62-5629-4ec4-b8b9-
,→5e562c40e7ae"}
log : false
match : "outport == \"eaf36f62-5629-4ec4-b8b9-
,→5e562c40e7ae\" && ip4 && ip4.src == $as_ip4_90a78a43_b549_4bee_8822_
,→21fcccab58dc"
priority : 1002
_uuid : 7f7a92ff-b7e9-49b0-8be0-0dc388035df3
action : allow-related
direction : to-lport
external_ids : {"neutron:lport"="eaf36f62-5629-4ec4-b8b9-
,→5e562c40e7ae"}
log : false
match : "outport == \"eaf36f62-5629-4ec4-b8b9-
,→5e562c40e7ae\" && ip6 && ip6.src == $as_ip4_90a78a43_b549_4bee_8822_
,→21fcccab58dc"
priority : 1002
4. The OVN mechanism driver updates the logical switch information with the UUIDs of these
objects.
_uuid : 15e2c80b-1461-4003-9869-80416cd97de5
acls : [00ecbe8f-c82a-4e18-b688-af2a1941cff7,
2bf5b7ed-008e-4676-bba5-71fe58897886,
330b4e27-074f-446a-849b-9ab0018b65c5,
5. With address sets, it is no longer necessary for the OVN mechanism driver to create separate ACLs
for the other instances in the project; that is handled automatically via the address sets.
6. The OVN northbound service translates the updated Address Set object(s) into updated Address
Set objects in the OVN southbound database:
_uuid : 2addbee3-7084-4fff-8f7b-15b1efebdaff
addresses : ["192.168.1.5", "203.0.113.103"]
name : "as_ip4_90a78a43_b549_4bee_8822_21fcccab58dc"
7. The OVN northbound service adds a Port Binding for the new Logical Switch Port object:
_uuid : 7a558e7b-ed7a-424f-a0cf-ab67d2d832d7
chassis : b67d6da9-0222-4ab1-a852-ab2607610bf8
datapath : 3f6e16b5-a03a-48e5-9b60-7b7a0396c425
logical_port : "e9cb7857-4cb1-4e91-aae5-165a7ab5b387"
mac : ["fa:16:3e:b6:91:70 192.168.1.5"]
options : {}
parent_port : []
tag : []
tunnel_key : 3
type : ""
8. The OVN northbound service updates the flooding multicast group for the logical datapath with
the new port binding:
_uuid : c08d0102-c414-4a47-98d9-dd3fa9f9901c
datapath : 0b214af6-8910-489c-926a-fd0ed16a8251
name : _MC_flood
ports : [3e463ca0-951c-46fd-b6cf-05392fa3aa1f,
794a6f03-7941-41ed-b1c6-0e00c1e18da0,
fa7b294d-2a62-45ae-8de3-a41c002de6de]
tunnel_key : 65535
9. The OVN northbound service adds Logical Flows based on the updated Address Set, ACL and
Logical_Switch_Port objects:
10. The OVN controller service on each compute node translates these objects into flows on the inte-
gration bridge br-int. Exact flows depend on whether the compute node containing the instance
also contains a DHCP agent on the subnet.
• On the compute node containing the instance, the Compute service creates a port that con-
nects the instance to the integration bridge and OVN creates the following flows:
idle_age=0, priority=100,ip,metadata=0x5
actions=load:0x1->NXM_NX_REG0[0],resubmit(,20)
cookie=0x0, duration=47.068s, table=19, n_packets=0, n_bytes=0,
idle_age=47, priority=100,ipv6,metadata=0x5
actions=load:0x1->NXM_NX_REG0[0],resubmit(,20)
cookie=0x0, duration=47.068s, table=22, n_packets=15, n_
,→bytes=1392,
idle_age=0, priority=65535,ct_state=-new+est-rel-inv+trk,
metadata=0x5
actions=resubmit(,23)
cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
idle_age=47, priority=65535,ct_state=-new-est+rel-inv+trk,
metadata=0x5
actions=resubmit(,23)
cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
idle_age=47, priority=65535,icmp6,metadata=0x5,icmp_type=135,
icmp_code=0
actions=resubmit(,23)
cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
idle_age=47, priority=65535,icmp6,metadata=0x5,icmp_type=136,
icmp_code=0
actions=resubmit(,23)
cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
idle_age=47, priority=65535,ct_state=+inv+trk,metadata=0x5
actions=drop
cookie=0x0, duration=47.068s, table=22, n_packets=0, n_bytes=0,
idle_age=47, priority=2002,ct_state=+new+trk,ipv6,reg6=0x3,
metadata=0x5
actions=load:0x1->NXM_NX_REG0[1],resubmit(,23)
cookie=0x0, duration=47.068s, table=22, n_packets=16, n_
,→bytes=1922,
idle_age=2, priority=2002,ct_state=+new+trk,ip,reg6=0x3,
metadata=0x5
idle_age=1, priority=50,metadata=0x5,dl_dst=fa:16:3e:15:7d:13
actions=load:0x3->NXM_NX_REG7[],resubmit(,32)
cookie=0x0, duration=469.575s, table=33, n_packets=74, n_
,→bytes=7040,
idle_age=305, priority=100,reg7=0x4,metadata=0x4
actions=load:0x1->NXM_NX_REG7[],resubmit(,33)
cookie=0x0, duration=179.460s, table=34, n_packets=2, n_bytes=684,
idle_age=84, priority=100,reg6=0x3,reg7=0x3,metadata=0x5
actions=drop
cookie=0x0, duration=47.069s, table=49, n_packets=0, n_bytes=0,
idle_age=47, priority=110,icmp6,metadata=0x5,icmp_type=135,
icmp_code=0
actions=resubmit(,50)
cookie=0x0, duration=47.068s, table=49, n_packets=0, n_bytes=0,
idle_age=47, priority=110,icmp6,metadata=0x5,icmp_type=136,
icmp_code=0
actions=resubmit(,50)
cookie=0x0, duration=47.068s, table=49, n_packets=34, n_
,→bytes=4455,
idle_age=0, priority=100,ip,metadata=0x5
actions=load:0x1->NXM_NX_REG0[0],resubmit(,50)
idle_age=0, priority=90,ip,reg7=0x3,metadata=0x5,
dl_dst=fa:16:3e:15:7d:13,nw_dst=192.168.1.11
actions=resubmit(,55)
idle_age=1, priority=100,reg7=0x3,metadata=0x5
actions=output:12
• For each compute node that only contains a DHCP agent on the subnet, OVN creates the
following flows:
cookie=0x0, duration=192.587s, table=16, n_packets=0, n_bytes=0,
idle_age=192, priority=50,reg6=0x3,metadata=0x5,
dl_src=fa:16:3e:15:7d:13
actions=resubmit(,17)
cookie=0x0, duration=192.587s, table=17, n_packets=0, n_bytes=0,
idle_age=192, priority=90,ip,reg6=0x3,metadata=0x5,
dl_src=fa:16:3e:15:7d:13,nw_src=192.168.1.5
actions=resubmit(,18)
cookie=0x0, duration=192.587s, table=17, n_packets=0, n_bytes=0,
idle_age=192, priority=90,udp,reg6=0x3,metadata=0x5,
dl_src=fa:16:3e:15:7d:13,nw_src=0.0.0.0,
nw_dst=255.255.255.255,tp_src=68,tp_dst=67
actions=resubmit(,18)
cookie=0x0, duration=192.587s, table=17, n_packets=0, n_bytes=0,
idle_age=192, priority=80,ipv6,reg6=0x3,metadata=0x5,
dl_src=fa:16:3e:15:7d:13
actions=drop
cookie=0x0, duration=192.587s, table=17, n_packets=0, n_bytes=0,
idle_age=192, priority=80,ip,reg6=0x3,metadata=0x5,
dl_src=fa:16:3e:15:7d:13
actions=drop
cookie=0x0, duration=192.587s, table=18, n_packets=0, n_bytes=0,
idle_age=192, priority=90,arp,reg6=0x3,metadata=0x5,
dl_src=fa:16:3e:15:7d:13,arp_spa=192.168.1.5,
arp_sha=fa:16:3e:15:7d:13
actions=resubmit(,19)
cookie=0x0, duration=192.587s, table=18, n_packets=0, n_bytes=0,
• For each compute node that contains neither the instance nor a DHCP agent on the subnet,
OVN creates the following flows:
Configuration Settings
The following configuration parameter needs to be set in the Neutron ML2 plugin configuration file
under the ovn section to enable DPDK support.
vhost_sock_dir This is the directory path in which the vswitch daemon on each compute node creates
the virtio socket. Follow the instructions in INSTALL.DPDK.md in the Open vSwitch source tree to
learn how to configure DPDK support in vswitch daemons.
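A minimal sketch of the corresponding ml2_conf.ini entry, assuming the default vhost-user socket
directory of an OVS DPDK build (adjust the path to your environment):
[ovn]
vhost_sock_dir = /var/run/openvswitch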
Compute nodes configured with OVS DPDK should set datapath_type to netdev for the integration
bridge (managed by OVN) and for all other bridges connected to the integration bridge via patch ports.
The command below can be used to set the datapath_type.
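A sketch of that command for the integration bridge, assuming it is named br-int:
$ ovs-vsctl set Bridge br-int datapath_type=netdev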
8.7.8 Troubleshooting
The following sections describe common problems that you might encounter during or after the installation
of the OVN ML2 driver with DevStack, and possible solutions to these problems.
Disable AppArmor
On Ubuntu you might encounter libvirt permission errors when trying to create OVS ports after
launching a VM (visible in the nova-compute log). Disabling AppArmor might help with this problem;
see https://help.ubuntu.com/community/AppArmor for instructions on how to disable it.
By default OVN creates tunnels between compute nodes using the Geneve protocol. Older kernels
(< 3.18) don't support the Geneve module, so tunneling can't work. You can check it with the
command lsmod | grep openvswitch (geneve should show up in the result list).
For more information about which upstream Kernel version is required for support of each tunnel type,
see the answer to Why do tunnels not work when using a kernel module other than the one packaged
with Open vSwitch? in the OVS FAQ.
MTU configuration
This problem is not unique to OVN but is amplified by the possibly larger size of the Geneve header
compared to other common tunneling protocols (such as VXLAN). If you are using VMs as compute nodes,
make sure that you either lower the MTU size on the virtual interface or enable fragmentation on it.
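For example, a sketch of lowering the MTU on a hypothetical virtual interface named eth0 inside such a
VM (the exact value depends on your underlay MTU and encapsulation overhead):
# ip link set dev eth0 mtu 1400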
The purpose of this page is to describe how SR-IOV works with OVN. Prior to reading this document,
it is recommended to first read the basic guide for SR-IOV.
External ports
In order for SR-IOV to work with the Neutron driver, we leverage the external ports feature from
the OVN project. When virtual machines are booted on hypervisors supporting SR-IOV NICs, the local
ovn-controllers are unable to reply to the VMs' DHCP, internal DNS, IPv6 router solicitation requests,
and so on, since the hypervisor is bypassed in the SR-IOV case. OVN therefore introduced the idea of
external ports, which are able to reply on behalf of those VM ports from a node external to the
hypervisor that the VMs are running on.
The OVN Neutron driver will create a port of the type external for ports with the following VNIC
types set:
• direct
• direct-physical
• macvtap
Also, ports of the type external will be scheduled on the gateway nodes (controller or networker
nodes) in HA mode by the OVN Neutron driver. Check the OVN Database information section for more
information.
There are very few differences between setting up an environment for SR-IOV for the OVS and OVN
Neutron drivers. As mentioned at the beginning of this document, the instructions from the basic
guide for SR-IOV are required for getting SR-IOV working with the OVN driver.
The only differences required for an OVN deployment are:
• When configuring the mechanism_drivers in the ml2_conf.ini file we should specify ovn
driver instead of the openvswitch driver
• Disabling the Neutron DHCP agent
• Deploying the OVN Metadata agent on the gateway nodes (controller or networker nodes)
Before getting into the port information, recall that the previous sections talk about gateway nodes.
The OVN Neutron driver identifies a gateway node by the ovn-cms-options=enable-chassis-as-gw
and ovn-bridge-mappings options in the external_ids column of the Chassis table in the
OVN Southbound database:
For more information about both of these options, please take a look at the ovn-controller documentation.
These options can be set by running the following command locally on each gateway node (note, the
ovn-bridge-mappings will need to be adapted to your environment):
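A sketch of such a command, assuming a bridge mapping of physical network provider to bridge
br-provider (adapt both values to your environment):
$ ovs-vsctl set Open_vSwitch . \
  external-ids:ovn-cms-options="enable-chassis-as-gw" \
  external-ids:ovn-bridge-mappings="provider:br-provider"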
As mentioned in the External ports section, every time a Neutron port with one of those VNIC types is
created, the OVN driver will create a port of the type external in the OVN Northbound database. These
ports can be found by issuing the following command:
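One possible query, assuming ovn-nbctl access to the OVN Northbound database:
$ ovn-nbctl find Logical_Switch_Port type=external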
The ha_chassis_group column indicates which HA Chassis Group the port belongs to; to find that
group, run:
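One possible query, again assuming ovn-nbctl access to the Northbound database:
$ ovn-nbctl list HA_Chassis_Group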
name : default_ha_chassis_group
Note: For now, the OVN driver only has one HA Chassis Group created called
default_ha_chassis_group. All external ports in the system will belong to this group.
The chassis that are members of the default_ha_chassis_group HA Chassis Group are listed
in the ha_chassis column. Those are the gateway nodes (controller or networker nodes) in the
deployment, and they are where the external ports will be scheduled. In order to find which gateway
node the external ports are scheduled on, use the following command:
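For example, a query against the HA_Chassis table of the Northbound database (a sketch, assuming
ovn-nbctl access):
$ ovn-nbctl list HA_Chassis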
_uuid : 72c7671e-dd48-4100-9741-c47221672961
chassis_name : "a0cb9d55-a6da-4f84-857f-d4b674088c8c"
external_ids : {}
priority : 32766
Note the priority column in the previous output: the chassis with the highest priority in that list
is the chassis that will have the external ports scheduled on it. In the example above, the chassis
with the UUID 1a462946-ccfd-46a6-8abf-9dca9eb558fb is the one.
Whenever the chassis with the highest priority goes down, the ports will be automatically scheduled on
the next highest-priority chassis that is alive, so the external ports are HA out of the box.
Known limitations
The current SR-IOV implementation for the OVN Neutron driver has a few known limitations that should
be addressed in the future:
1. At the moment, all external ports will be scheduled on a single gateway node, since there is only
one HA Chassis Group for all of those ports.
2. Routing on VLAN tenant networks will not work with SR-IOV. This is because the external ports
are not co-located with the logical router's gateway ports; for more information, take a look
at bug #1875852.
The purpose of this page is to describe how router availability zones work with OVN. Prior to
reading this document, it is recommended to first read the ML2/OVS driver Availability Zones guide.
How to configure it
Unlike the ML2/OVS driver for Neutron, availability zones for the OVN driver are not configured
via a configuration file. Since ML2/OVN does not rely on an external agent such as the L3
agent, certain nodes (e.g. gateway/networker nodes) won't have any Neutron configuration file present.
For this reason, OVN uses the local OVSDB for configuring the availability zones that the instance of
ovn-controller running on that hypervisor belongs to.
The configuration is done via the ovn-cms-options entry in external_ids column of the local
Open_vSwitch table:
$ ovs-vsctl set Open_vSwitch . external-ids:ovn-cms-options="enable-chassis-as-gw,availability-zones=az-0:az-1:az-2"
The above command adds two configurations to the ovn-cms-options option: the
enable-chassis-as-gw option, which tells the OVN driver that this is a gateway/networker node,
and the availability-zones option, which specifies three availability zones: az-0, az-1 and az-2.
Note that the syntax used to specify the availability zones is the availability-zones keyword,
followed by an equal sign (=) and a colon-separated list of the availability zones that this local
ovn-controller instance belongs to.
To confirm the specific ovn-controller availability zones, check the Availability Zone column in
the output of the command below:
$ openstack network agent list
+--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------+
| ID                                   | Agent Type                   | Host           | Availability Zone | Alive | State | Binary         |
+--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------+
| 2d1924b2-99a4-4c6c-a4f2-0be64c0cec8c | OVN Controller Gateway agent | gateway-host-0 | az0, az1, az2     | :-)   | UP    | ovn-controller |
+--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------+
Note: If you know the UUID of the agent, the openstack network agent show <UUID> command can
also be used.
It's also possible to set the default availability zones via the /etc/neutron/neutron.conf configuration file:
[DEFAULT]
default_availability_zones = az-0,az-2
...
When scheduling the gateway ports of a router, the OVN driver will take into consideration the router
availability zones and make sure that the ports are scheduled on the nodes belonging to those availability
zones.
Note that in the router object we have two attributes related to availability zones:
availability_zones and availability_zone_hints:
This distinction makes more sense in the ML2/OVS driver which relies on the L3 agent for its
router placement (see the ML2/OVS driver Availability Zones guide for more information). In
ML2/OVN the ovn-controller service will be running on all nodes of the cluster so the
availability_zone_hints will always match the availability_zones attribute.
In order to check the availability zones of a router via the OVN Northbound database, one can look for
the neutron:availability_zone_hints key in the external_ids column for its entry in
the Logical_Router table:
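A possible way to do this, assuming ovn-nbctl access (look for the neutron:availability_zone_hints
key in the external_ids column of the router's entry):
$ ovn-nbctl list Logical_Router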
To check the availability zones of the Chassis, look at the ovn-cms-options key in the
other_config column (or external_ids for an older version of OVN) of the Chassis table in
the OVN Southbound database:
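A possible query, assuming ovn-sbctl access to the Southbound database:
$ ovn-sbctl list Chassis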
As mentioned in the Using router availability zones section, the scheduling of the gateway router ports
will take into consideration the availability zones that the router belongs to. We can confirm this behavior
by looking in the Gateway_Chassis table from the OVN Southbound database:
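A query consistent with the table named above (a sketch, assuming ovn-sbctl access):
$ ovn-sbctl list Gateway_Chassis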
options : {}
priority : 2
_uuid : c1b7763b-1784-4e5a-a948-853662faeddc
chassis_name : "1cde2542-69f9-4598-b20b-d4f68304deb0"
external_ids : {}
name : lrp-5a40eeca-5233-4029-a470-9018aa8b3de9_1cde2542-
,→69f9-4598-b20b-d4f68304deb0
options : {}
priority : 1
Each entry in this table represents an instance of the gateway port (L3 HA; for more information see
Routing in OVN). The chassis_name column indicates which Chassis that port instance is scheduled
onto. If we correlate each entry with its chassis_name, we will see that this port has been
scheduled only to Chassis matching the router's availability zones.
The Routed Provider Networks feature is used to present a multi-segmented layer-3 network as a single
entity in Neutron.
After creating a provider network with multiple segments as described in the Neutron documentation,
each segment connects to the provider Logical_Switch entry as Logical_Switch_Port entries with
the localnet port type.
For example, in the OVN Northbound database, this is how a VLAN Provider Network with two seg-
ments (VLAN: 100, 200) is related to their Logical_Switch counterpart:
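One way to retrieve the localnet ports of such a network, assuming ovn-nbctl access to the
Northbound database:
$ ovn-nbctl find Logical_Switch_Port type=localnet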
As you can see, the two localnet ports are configured with a VLAN tag and are related to a single
Logical_Switch entry. When ovn-controller sees that a port in that network has been bound to the
node it is running on, it will create a patch port to the provider bridge according to the bridge mappings
configuration.
For example, when a port in the multisegment network gets bound to compute-1, ovn-controller will
create a patch-port between br-int and br-provider1.
An important note here is that, on a given hypervisor, only ports belonging to the same segment should
be present. It is not allowed to mix ports from different segments on the same hypervisor for the
same network (Logical_Switch).
Note: Contents here have been moved from the unified version of Administration Guide. They will be
merged into the Networking Guide gradually.
The Networking service, code-named neutron, provides an API that lets you define network connectivity
and addressing in the cloud. The Networking service enables operators to leverage different networking
technologies to power their cloud networking. The Networking service also provides an API to configure
and manage a variety of network services ranging from L3 forwarding and NAT to edge firewalls, and
IPsec VPN.
For a detailed description of the Networking API abstractions and their attributes, see the OpenStack
Networking API v2.0 Reference.
Note: If you use the Networking service, do not run the Compute nova-network service (like you do
in traditional Compute deployments). When you configure networking, see the Compute-related topics
in this Networking section.
Networking API
Networking is a virtual network service that provides a powerful API to define the network connectivity
and IP addressing that devices from other services, such as Compute, use.
The Compute API has a virtual server abstraction to describe computing resources. Similarly, the Net-
working API has virtual network, subnet, and port abstractions to describe networking resources.
Networking resources
• Network: An isolated L2 segment, analogous to VLAN in the physical networking world.
• Subnet: A block of v4 or v6 IP addresses and associated configuration state.
• Port: A connection point for attaching a single device, such as the NIC of a virtual server, to a
virtual network. Also describes the associated network configuration, such as the MAC and
IP addresses to be used on that port.
To configure rich network topologies, you can create and configure networks and subnets and instruct
other OpenStack services like Compute to attach virtual devices to ports on these networks.
In particular, Networking supports each project having multiple private networks and enables projects to
choose their own IP addressing scheme, even if those IP addresses overlap with those that other projects
use.
The Networking service:
• Enables advanced cloud networking use cases, such as building multi-tiered web applications and
enabling migration of applications to the cloud without changing IP addresses.
• Offers flexibility for administrators to customize network offerings.
• Enables developers to extend the Networking API. Over time, the extended functionality becomes
part of the core Networking API.
OpenStack Networking supports SSL for the Networking API server. By default, SSL is disabled but
you can enable it in the neutron.conf file.
Set these options to configure SSL:
use_ssl = True Enables SSL on the networking API server.
ssl_cert_file = PATH_TO_CERTFILE Certificate file that is used when you securely start the
Networking API server.
ssl_key_file = PATH_TO_KEYFILE Private key file that is used when you securely start the
Networking API server.
ssl_ca_file = PATH_TO_CAFILE Optional. CA certificate file that is used when you securely
start the Networking API server. This file verifies connecting clients. Set this option when API
clients must authenticate to the API server by using SSL certificates that are signed by a trusted
CA.
tcp_keepidle = 600 The value of TCP_KEEPIDLE, in seconds, for each server socket when
starting the API server. Not supported on OS X.
retry_until_window = 30 Number of seconds to keep retrying to listen.
backlog = 4096 Number of backlog requests with which to configure the socket.
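A minimal neutron.conf sketch combining these options (paths and values are placeholders, not
recommendations):
[DEFAULT]
use_ssl = True
ssl_cert_file = /etc/neutron/ssl/server.crt
ssl_key_file = /etc/neutron/ssl/server.key
ssl_ca_file = /etc/neutron/ssl/ca.crt
tcp_keepidle = 600
retry_until_window = 30
backlog = 4096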
Allowed-address-pairs
Allowed-address-pairs enables you to specify mac_address and ip_address(cidr) pairs that pass
through a port regardless of subnet. This enables the use of protocols such as VRRP, which floats an IP
address between two instances to enable fast data plane failover.
Note: Currently, only the ML2, Open vSwitch, and VMware NSX plug-ins support the allowed-
address-pairs extension.
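For example, a sketch of adding an allowed address pair to an existing port with the openstack client
(the VRRP address, MAC, and PORT_ID are placeholders):
$ openstack port set --allowed-address \
  ip-address=203.0.113.250,mac-address=fa:16:3e:aa:bb:cc PORT_ID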
Virtual-Private-Network-as-a-Service (VPNaaS)
The VPNaaS extension enables OpenStack projects to extend private networks across the internet.
VPNaaS is a service. It is a parent object that associates a VPN with a specific subnet and router. Only
one VPN service object can be created for each router and each subnet. However, each VPN service
object can have any number of IP security connections.
The Internet Key Exchange (IKE) policy specifies the authentication and encryption algorithms to use
during phase one and two negotiation of a VPN connection. The IP security policy specifies the authen-
tication and encryption algorithm and encapsulation mode to use for the established VPN connection.
Note that you cannot update the IKE and IPSec parameters for live tunnels.
You can set parameters for site-to-site IPsec connections, including peer CIDRs, MTU, authentication
mode, peer address, DPD settings, and status.
The current implementation of the VPNaaS extension provides:
• Site-to-site VPN that connects two private networks.
• Multiple VPN connections per project.
• IKEv1 policy support with 3des, aes-128, aes-256, or aes-192 encryption.
• IPSec policy support with 3des, aes-128, aes-192, or aes-256 encryption, sha1 authentication,
ESP, AH, or AH-ESP transform protocol, and tunnel or transport mode encapsulation.
• Dead Peer Detection (DPD) with hold, clear, restart, disabled, or restart-by-peer actions.
The VPNaaS driver plugin can be configured in the neutron configuration file. You can then enable the
service.
Before you deploy Networking, it is useful to understand the Networking services and how they interact
with the OpenStack components.
Overview
• plug-in agent (neutron-*-agent): Runs on each hypervisor to perform local vSwitch configuration.
The agent that runs depends on the plug-in that you use. Certain plug-ins do not require an agent.
• dhcp agent (neutron-dhcp-agent): Provides DHCP services to project networks. Required by certain
plug-ins.
• l3 agent (neutron-l3-agent): Provides L3/NAT forwarding to provide external network access for VMs
on project networks. Required by certain plug-ins.
• metering agent (neutron-metering-agent): Provides L3 traffic metering for project networks.
These agents interact with the main neutron process through RPC (for example, RabbitMQ or Qpid) or
through the standard Networking API. In addition, Networking integrates with OpenStack components
in a number of ways:
• Networking relies on the Identity service (keystone) for the authentication and authorization of all
API requests.
• Compute (nova) interacts with Networking through calls to its standard API. As part of creating
a VM, the nova-compute service communicates with the Networking API to plug each virtual
NIC on the VM into a particular network.
• The dashboard (horizon) integrates with the Networking API, enabling administrators and project
users to create and manage network services through a web-based GUI.
OpenStack Networking uses the NSX plug-in to integrate with an existing VMware vCenter deploy-
ment. When installed on the network nodes, the NSX plug-in enables a NSX controller to centrally
manage configuration settings and push them to managed network nodes. Network nodes are consid-
ered managed when they are added as hypervisors to the NSX controller.
The diagrams below depict some VMware NSX deployment examples. The first diagram illustrates the
traffic flow between VMs on separate Compute nodes, and the second diagram between two VMs on a
single compute node. Note the placement of the VMware NSX plug-in and the neutron-server service on
the network node. The green arrow indicates the management relationship between the NSX controller
and the network node.
For configuration options, see Networking configuration options in the Configuration Reference. These
sections explain how to configure specific plug-ins.
core_plugin = bigswitch
service_plugins = neutron.plugins.bigswitch.l3_router_plugin.L3RestProxy
server = CONTROLLER_IP:PORT
For database configuration, see Install Networking Services in the Installation Tutorials and
Guides. (The link defaults to the Ubuntu version.)
4. Restart the neutron-server to apply the settings:
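The restart command itself is distribution specific; a sketch for a systemd-based system:
# systemctl restart neutron-server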
1. Install the Brocade-modified Python netconf client (ncclient) library, which is available at
https://github.com/brocade/ncclient:
core_plugin = brocade
[SWITCH]
username = ADMIN
password = PASSWORD
address = SWITCH_MGMT_IP_ADDRESS
ostype = NOS
For database configuration, see Install Networking Services in any of the Installation Tutorials and
Guides in the OpenStack Documentation index. (The link defaults to the Ubuntu version.)
The instructions in this section refer to the VMware NSX-mh platform, formerly known as Nicira NVP.
1. Install the NSX plug-in:
core_plugin = vmware
core_plugin = vmware
rabbit_host = 192.168.203.10
allow_overlapping_ips = True
3. To configure the NSX-mh controller cluster for OpenStack Networking, locate the [default]
section in the /etc/neutron/plugins/vmware/nsx.ini file and add the following en-
tries:
• To establish and configure the connection with the controller cluster you must set some
parameters, including NSX-mh API endpoints, access credentials, and optionally specify
settings for HTTP timeouts, redirects and retries in case of connection failures:
nsx_user = ADMIN_USER_NAME
nsx_password = NSX_USER_PASSWORD
http_timeout = HTTP_REQUEST_TIMEOUT # (seconds) default 75 seconds
retries = HTTP_REQUEST_RETRIES # default 2
redirects = HTTP_REQUEST_MAX_REDIRECTS # default 2
nsx_controllers = API_ENDPOINT_LIST # comma-separated list
To ensure correct operations, the nsx_user user must have administrator credentials on
the NSX-mh platform.
A controller API endpoint consists of the IP address and port for the controller; if you omit
the port, port 443 is used. If multiple API endpoints are specified, it is up to the user to ensure
that all these endpoints belong to the same controller cluster. The OpenStack Networking
VMware NSX-mh plug-in does not perform this check, and results might be unpredictable.
When you specify multiple API endpoints, the plug-in takes care of load balancing requests
on the various API endpoints.
• The UUID of the NSX-mh transport zone that should be used by default when a project
creates a network. You can get this value from the Transport Zones page for the NSX-mh
manager:
Alternatively, the transport zone identifier can be retrieved by querying the NSX-mh API:
/ws.v1/transport-zone
default_tz_uuid = TRANSPORT_ZONE_UUID
• default_l3_gw_service_uuid = GATEWAY_SERVICE_UUID
Warning: Ubuntu packaging currently does not update the neutron init script to point to
the NSX-mh configuration file. Instead, you must manually update /etc/default/
neutron-server to add this line:
NEUTRON_PLUGIN_CONFIG = /etc/neutron/plugins/vmware/nsx.ini
For database configuration, see Install Networking Services in the Installation Tutorials and
Guides.
4. Restart neutron-server to apply settings:
Warning: The neutron NSX-mh plug-in does not implement initial re-synchronization of
Neutron resources. Therefore resources that might already exist in the database when Neutron
is switched to the NSX-mh plug-in will not be created on the NSX-mh backend upon restart.
[DEFAULT]
default_tz_uuid = d3afb164-b263-4aaa-a3e4-48e0e09bb33c
default_l3_gw_service_uuid=5c8622cc-240a-40a1-9693-e6a5fca4e3cf
nsx_user=admin
nsx_password=changeme
nsx_controllers=10.127.0.100,10.127.0.200:8888
Note: To debug nsx.ini configuration issues, run this command from the host that runs neutron-
server:
# neutron-check-nsx-config PATH_TO_NSX.INI
This command tests whether neutron-server can log into all of the NSX-mh controllers and the
SQL server, and whether all UUID values are correct.
core_plugin = plumgrid
[PLUMgridDirector]
director_server = "PLUMgrid-director-ip-address"
director_server_port = "PLUMgrid-director-port"
username = "PLUMgrid-director-admin-username"
password = "PLUMgrid-director-admin-password"
For database configuration, see Install Networking Services in the Installation Tutorials and
Guides.
3. Restart the neutron-server service to apply the settings:
Plug-ins typically have requirements for particular software that must be run on each node that han-
dles data packets. This includes any node that runs nova-compute and nodes that run dedicated
OpenStack Networking service agents such as neutron-dhcp-agent, neutron-l3-agent or
neutron-metering-agent.
A data-forwarding node typically has a network interface with an IP address on the management network
and another interface on the data network.
This section shows you how to install and configure a subset of the available plug-ins, which might
include the installation of switching software (for example, Open vSwitch) and agents used to
communicate with the neutron-server process running elsewhere in the data center.
If you use the NSX plug-in, you must also install Open vSwitch on each data-forwarding node. However,
you do not need to install an additional agent on each node.
Warning: It is critical that you run an Open vSwitch version that is compatible with the current
version of the NSX Controller software. Do not use the Open vSwitch version that is installed by
default on Ubuntu. Instead, use the Open vSwitch version that is provided on the VMware support
portal for your NSX Controller version.
1. Ensure that each data-forwarding node has an IP address on the management network, and an IP
address on the data network that is used for tunneling data traffic. For full details on configuring
your forwarding node, see the NSX Administration Guide.
2. Use the NSX Administrator Guide to add the node as a Hypervisor by using the NSX Man-
ager GUI. Even if your forwarding node has no VMs and is only used for services agents like
neutron-dhcp-agent , it should still be added to NSX as a Hypervisor.
3. After following the NSX Administrator Guide, use the page for this Hypervisor in the NSX Man-
ager GUI to confirm that the node is properly connected to the NSX Controller Cluster and that
the NSX Controller Cluster can see the br-int integration bridge.
The DHCP service agent is compatible with all existing plug-ins and is required for all deployments
where VMs should automatically receive IP addresses through DHCP.
To install and configure the DHCP agent
1. You must configure the host running the neutron-dhcp-agent as a data forwarding node according
to the requirements for your plug-in.
2. Install the DHCP agent:
3. Update any options in the /etc/neutron/dhcp_agent.ini file that depend on the plug-in
in use. See the sub-sections.
Important: If you reboot a node that runs the DHCP agent, you must run the
neutron-ovs-cleanup command before the neutron-dhcp-agent service starts.
On Red Hat, SUSE, and Ubuntu based systems, the neutron-ovs-cleanup service runs the
neutron-ovs-cleanup command automatically. However, on Debian-based systems, you
must manually run this command or write your own system script that runs on boot before the
neutron-dhcp-agent service starts.
The Networking DHCP agent can use the dnsmasq driver, which supports stateful and stateless DHCPv6
for subnets created with --ipv6_address_mode set to dhcpv6-stateful or dhcpv6-stateless.
For example:
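A sketch of creating such a subnet with the openstack client, assuming a network named selfservice
and an example prefix:
$ openstack subnet create --ip-version 6 \
  --ipv6-ra-mode dhcpv6-stateful --ipv6-address-mode dhcpv6-stateful \
  --network selfservice --subnet-range 2001:db8:1234::/64 selfservice-v6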
If no dnsmasq process for the subnet's network is launched, Networking will launch a new one on the
subnet's DHCP port in the qdhcp-XXX namespace. If a dnsmasq process is already running, it is restarted
with a new configuration.
Networking will update the dnsmasq process and restart it when the subnet gets updated.
Note: For dhcp-agent to operate in IPv6 mode, use at least dnsmasq v2.63.
After a certain, configured timeframe, networks uncouple from DHCP agents when the agents are no
longer in use. You can configure the DHCP agent to automatically detach from a network when the
agent is out of service, or no longer needed.
This feature applies to all plug-ins that support DHCP scaling. For more information, see the DHCP
agent configuration options listed in the OpenStack Configuration Reference.
These DHCP agent options are required in the /etc/neutron/dhcp_agent.ini file for the OVS
plug-in:
[DEFAULT]
enable_isolated_metadata = True
interface_driver = openvswitch
These DHCP agent options are required in the /etc/neutron/dhcp_agent.ini file for the NSX
plug-in:
[DEFAULT]
enable_metadata_network = True
enable_isolated_metadata = True
interface_driver = openvswitch
These DHCP agent options are required in the /etc/neutron/dhcp_agent.ini file for the
Linux-bridge plug-in:
[DEFAULT]
enable_isolated_metadata = True
interface_driver = linuxbridge
Configure L3 agent
The OpenStack Networking service has a widely used API extension to allow administrators and projects
to create routers to interconnect L2 networks, and floating IPs to make ports on private networks publicly
accessible.
Many plug-ins rely on the L3 service agent to implement the L3 functionality. However, the following
plug-ins already have built-in L3 capabilities:
• Big Switch/Floodlight plug-in, which supports both the open source Floodlight controller and the
proprietary Big Switch controller.
Note: Only the proprietary BigSwitch controller implements L3 functionality. When using
Floodlight as your OpenFlow controller, L3 functionality is not available.
Warning: Do not configure or use neutron-l3-agent if you use one of these plug-ins.
2. To uplink the node that runs neutron-l3-agent to the external network, create a bridge
named br-ex and attach the NIC for the external network to this bridge.
For example, with Open vSwitch and NIC eth1 connected to the external network, run:
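A minimal sketch of the Open vSwitch commands, assuming eth1 is the NIC attached to the external network:
# ovs-vsctl add-br br-ex
# ovs-vsctl add-port br-ex eth1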
When eth1 is added as a port on the br-ex bridge, external communication over that NIC is interrupted. To
avoid this, edit the /etc/network/interfaces file to contain the following information:
## External bridge
auto br-ex
iface br-ex inet static
address 192.27.117.101
netmask 255.255.240.0
gateway 192.27.127.254
dns-nameservers 8.8.8.8
Note: The external bridge configuration address is the external IP address. This address and
gateway should be configured in /etc/network/interfaces.
Do not manually configure an IP address on the NIC connected to the external network for the
node running neutron-l3-agent. Rather, you must have a range of IP addresses from the
external network that can be used by OpenStack Networking for routers that uplink to the external
network. This range must be large enough to have an IP address for each router in the deployment,
as well as each floating IP.
3. The neutron-l3-agent uses the Linux IP stack and iptables to perform L3 forward-
ing and NAT. In order to support multiple routers with potentially overlapping IP addresses,
neutron-l3-agent defaults to using Linux network namespaces to provide isolated forward-
ing contexts. As a result, the IP addresses of routers are not visible simply by running the ip
addr list or ifconfig command on the node. Similarly, you cannot directly ping fixed
IPs.
To do either of these things, you must run the command within a particular network namespace for
the router. The namespace has the name qrouter-ROUTER_UUID. These example commands
run in the router namespace with UUID 47af3868-0fa8-4447-85f6-1304de32153b:
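For instance, a sketch of the namespace-scoped commands; FIXED_IP is a placeholder for an address on one of the router's subnets:
# ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ip addr list
# ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ping FIXED_IP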
Important: If you reboot a node that runs the L3 agent, you must run the
neutron-ovs-cleanup command before the neutron-l3-agent service starts.
On Red Hat, SUSE and Ubuntu based systems, the neutron-ovs-cleanup service runs the
neutron-ovs-cleanup command automatically. However, on Debian-based systems, you
must manually run this command or write your own system script that runs on boot before the
neutron-l3-agent service starts.
How routers are assigned to L3 agents By default, a router is assigned to the L3 agent
with the least number of routers (LeastRoutersScheduler). This can be changed by altering the
router_scheduler_driver setting in the configuration file.
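For example, the default scheduler corresponds to the following neutron.conf setting; this is an illustrative sketch rather than a required change:
[DEFAULT]
router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.LeastRoutersScheduler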
2. If you use one of the following plug-ins, you need to configure the metering agent with these lines
as well:
• An OVS-based plug-in such as OVS, NSX, NEC, BigSwitch/Floodlight:
interface_driver = openvswitch
• A plug-in that uses LinuxBridge:
interface_driver = linuxbridge
3. To use the reference implementation, you must set:
driver = iptables
4. Set the service_plugins option in the /etc/neutron/neutron.conf file on the host that runs
neutron-server:
service_plugins = metering
If this option is already defined, add metering to the list, using a comma as separator. For
example:
service_plugins = router,metering
Before you install the OpenStack Networking Hyper-V L2 agent on a Hyper-V compute node, ensure
the compute node has been configured correctly using these instructions.
To install the OpenStack Networking Hyper-V agent and configure the node
1. Download the OpenStack Networking code from the repository:
> cd C:\OpenStack\
> git clone https://opendev.org/openstack/neutron
> cd C:\OpenStack\neutron\
> python setup.py install
[DEFAULT]
control_exchange = neutron
policy_file = C:\etc\policy.yaml
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_host = IP_ADDRESS
rabbit_port = 5672
rabbit_userid = guest
rabbit_password = <password>
logdir = C:\OpenStack\Log
logfile = neutron-hyperv-agent.log
[AGENT]
polling_interval = 2
physical_network_vswitch_mappings = *:YOUR_BRIDGE_NAME
enable_metrics_collection = true
[SECURITYGROUP]
firewall_driver = hyperv.neutron.security_groups_driver.HyperVSecurityGroupsDriver
enable_security_group = true
This table shows examples of Networking commands that enable you to complete basic operations on
agents.
• List all available agents:
  $ openstack network agent list
• Show information of a given agent:
  $ openstack network agent show AGENT_ID
• Update the admin status and description for a specified agent. The command can be used to enable
  and disable agents by setting the --admin-state-up parameter to False or True:
  $ neutron agent-update --admin-state-up False AGENT_ID
• Delete a given agent. Consider disabling the agent before deletion:
  $ openstack network agent delete AGENT_ID
function get_id () {
    # Run the given command and print the value from the "id" row of its table output.
    echo `"$@" | awk '/ id / { print $4 }'`
}
$ source .bashrc
For example:
• If you are using the template driver, specify the following parameters in your Com-
pute catalog template file (default_catalog.templates), along with the region
($REGION) and IP address of the Networking server ($IP).
catalog.$REGION.network.publicURL = http://$IP:9696
catalog.$REGION.network.adminURL = http://$IP:9696
catalog.$REGION.network.internalURL = http://$IP:9696
catalog.$REGION.network.name = Network Service
For example:
catalog.$Region.network.publicURL = http://10.211.55.17:9696
catalog.$Region.network.adminURL = http://10.211.55.17:9696
catalog.$Region.network.internalURL = http://10.211.55.17:9696
catalog.$Region.network.name = Network Service
For information about how to create service entries and users, see the Ocata Installation Tutorials and
Guides for your distribution.
Compute
If you use Networking, do not run the Compute nova-network service (like you do in traditional
Compute deployments). Instead, Compute delegates most network-related decisions to Networking.
Note: Uninstall nova-network and reboot any physical nodes that have been running
nova-network before using them to run Networking. Inadvertently running the nova-network
process while using Networking can cause problems, as can stale iptables rules pushed down by previ-
ously running nova-network.
Compute proxies project-facing API calls to manage security groups and floating IPs to Networking
APIs. However, operator-facing tools, such as nova-manage, are not proxied and should not be used.
Warning: When you configure networking, you must use this guide. Do not rely on Compute
networking documentation or past experience with Compute. If a nova command or configuration
option related to networking is not mentioned in this guide, the command is probably not supported
for use with Networking. In particular, you cannot use CLI tools like nova-manage and nova to
manage networks or IP addressing, including both fixed and floating IPs, with Networking.
To ensure that Compute works properly with Networking (rather than the legacy nova-network
mechanism), you must adjust settings in the nova.conf configuration file.
Each time you provision or de-provision a VM in Compute, nova-* services communicate with
Networking using the standard API. For this to happen, you must configure the following items in the
nova.conf file (used by each nova-compute and nova-api instance).
The Networking service provides security group functionality using a mechanism that is more flexible
and powerful than the security group capabilities built into Compute. Therefore, if you use Networking,
you should always disable built-in security groups and proxy all security group calls to the Networking
API. If you do not, security policies will conflict by being simultaneously applied by both services.
To proxy security groups to Networking, use the following configuration values in the nova.conf file:
nova.conf security group settings
firewall_driver: Update to nova.virt.firewall.NoopFirewallDriver, so that nova-compute does not
perform iptables-based filtering itself.
Configure metadata
The Compute service allows VMs to query metadata associated with a VM by making a web request
to a special 169.254.169.254 address. Networking supports proxying those requests to nova-api, even
when the requests are made from isolated networks, or from multiple networks that use overlapping IP
addresses.
To enable proxying the requests, you must update the following fields in the [neutron] section of
nova.conf.
nova.conf metadata settings
service_metadata_proxy: Update to true, otherwise nova-api will not properly respond to requests from
the neutron-metadata-agent.
metadata_proxy_shared_secret: Update to a string password value. You must also configure the same
value in the metadata_agent.ini file, to authenticate requests made for metadata. The default value of
an empty string in both files will allow metadata to function, but will not be secure if any non-trusted
entities have access to the metadata APIs exposed by nova-api.
Example values for the above settings, assuming a cloud controller node running Compute and Net-
working with an IP address of 192.168.1.2:
[DEFAULT]
use_neutron = True
firewall_driver=nova.virt.firewall.NoopFirewallDriver
[neutron]
url=http://192.168.1.2:9696
auth_strategy=keystone
admin_tenant_name=service
admin_username=neutron
admin_password=password
admin_auth_url=http://192.168.1.2:5000/v2.0
service_metadata_proxy=true
metadata_proxy_shared_secret=foo
This section describes advanced configuration options for various system components, for example,
options whose defaults work but that you might want to customize. After installing from packages,
$NEUTRON_CONF_DIR is /etc/neutron.
L3 metering agent
You can run an L3 metering agent that enables layer-3 traffic metering. In general, you should launch
the metering agent on all nodes that run the L3 agent:
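The agent is typically started with both the main Networking configuration file and the metering agent configuration file; a sketch of the invocation, with the file paths as placeholders:
$ neutron-metering-agent --config-file NEUTRON_CONFIG_FILE --config-file L3_METERING_CONFIG_FILE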
You must configure a driver that matches the plug-in that runs on the service. The driver adds metering
to the routing interface.
• Open vSwitch: set interface_driver = openvswitch in $NEUTRON_CONF_DIR/metering_agent.ini
• Linux Bridge: set interface_driver = linuxbridge in $NEUTRON_CONF_DIR/metering_agent.ini
L3 metering driver
You must configure any driver that implements the metering abstraction. Currently the only available
implementation uses iptables for metering.
driver = iptables
To enable L3 metering, you must set the following option in the neutron.conf file on the host that
runs neutron-server:
service_plugins = metering
This section is fully described at the High-availability for DHCP in the Networking Guide.
You can manage OpenStack Networking services by using the service command. For example:
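A sketch of typical invocations, assuming the neutron-server service name used by most distributions:
# service neutron-server stop
# service neutron-server status
# service neutron-server start
# service neutron-server restart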
After installing and configuring Networking (neutron), projects and administrators can perform create-
read-update-delete (CRUD) API networking operations. This is performed using the Networking
API directly with either the neutron command-line interface (CLI) or the openstack CLI. The
neutron CLI is a wrapper around the Networking API. Every Networking API call has a correspond-
ing neutron command.
The openstack CLI is a common interface for all OpenStack projects; however, not every API operation
has been implemented. For the list of available commands, see Command List.
The neutron CLI includes a number of options. For details, see Create and manage networks.
To learn about advanced capabilities available through the neutron command-line interface (CLI),
read the networking section Create and manage networks in the OpenStack End User Guide.
This table shows example openstack commands that enable you to complete basic network opera-
tions:
• Creates a network:
  $ openstack network create net1
• Creates a subnet that is associated with net1:
  $ openstack subnet create subnet1 --subnet-range 10.0.0.0/24 --network net1
• Lists ports for a specified project:
  $ openstack port list
• Lists ports for a specified project and displays the ID and Fixed IP Addresses:
  $ openstack port list -c ID -c "Fixed IP Addresses"
• Shows information for a specified port:
  $ openstack port show PORT_ID
Note: The device_owner field describes who owns the port. A port whose device_owner begins
with:
• network is created by Networking.
• compute is created by Compute.
Administrative operations
The administrator can run any openstack command on behalf of projects by specifying an Identity
project in the command, as follows:
For example:
Note: To view all project IDs in Identity, run the following command as an Identity service admin user:
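A sketch of the commands referred to above, with PROJECT_ID and net1 as placeholders:
$ openstack project list
$ openstack network create --project PROJECT_ID net1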
This table shows example CLI commands that enable you to complete advanced network operations:
• Creates a network that all projects can use:
  $ openstack network create --share public-net
• Creates a subnet with a specified gateway IP address:
  $ openstack subnet create subnet1 --gateway 10.0.0.254 --network net1
• Creates a subnet that has no gateway IP address:
  $ openstack subnet create subnet1 --no-gateway --network net1
• Creates a subnet with DHCP disabled:
  $ openstack subnet create subnet1 --network net1 --no-dhcp
• Specifies a set of host routes:
  $ openstack subnet create subnet1 --network net1 --host-route destination=40.0.1.0/24,gateway=40.0.0.2
• Creates a subnet with a specified set of dns name servers:
  $ openstack subnet create subnet1 --network net1 --dns-nameserver 8.8.4.4
• Displays all ports and IPs allocated on a network:
  $ openstack port list --network NET_ID
Note: During port creation and update, specific extra-dhcp-options can be left blank. For example,
router and classless-static-route. This causes dnsmasq to have an empty option in the
opts file related to the network. For example:
tag:tag0,option:classless-static-route,
tag:tag0,option:router,
This table shows example openstack commands that enable you to complete basic VM networking
operations:
• Checks available networks:
  $ openstack network list
• Boots a VM with a single NIC on a selected Networking network:
  $ openstack server create --image IMAGE --flavor FLAVOR --nic net-id=NET_ID VM_NAME
• Searches for ports with a device_id that matches the Compute instance UUID (see Create and delete VMs):
  $ openstack port list --server VM_ID
• Searches for ports, but shows only the mac_address of the port:
  $ openstack port list -c "MAC Address" --server VM_ID
• Temporarily disables a port from sending traffic:
  $ openstack port set PORT_ID --disable
Note:
• When you boot a Compute VM, a port on the network that corresponds to the VM NIC is auto-
matically created and associated with the default security group. You can configure security group
rules to enable users to access the VM.
This table shows example openstack commands that enable you to complete advanced VM creation
operations:
• Boots a VM with multiple NICs:
  $ openstack server create --image IMAGE --flavor FLAVOR --nic net-id=NET_ID --nic net-id=NET2-ID VM_NAME
• Boots a VM with a specific IP address. Note that you cannot use the --max or --min parameters
  in this case:
  $ openstack server create --image IMAGE --flavor FLAVOR --nic net-id=NET_ID,v4-fixed-ip=IP-ADDR VM_NAME
• Boots a VM that connects to all networks that are accessible to the project who submits the request
  (without the --nic option):
  $ openstack server create --image IMAGE --flavor FLAVOR
Note: Cloud images that distribution vendors offer usually have only one active NIC configured. When
you boot with multiple NICs, you must configure additional interfaces on the image or the NICs are not
reachable.
The following Debian/Ubuntu-based example shows how to set up the interfaces within the instance in
the /etc/network/interfaces file. You must apply this configuration to the image.
auto eth0
iface eth0 inet dhcp
auto eth1
iface eth1 inet dhcp
You must configure security group rules depending on the type of plug-in you are using. If you are using
a plug-in that:
• Implements Networking security groups, you can configure security group rules directly by using
the openstack security group rule create command. This example (shown after this list)
enables ping and ssh access to your VMs.
• Does not implement Networking security groups, you can configure security group rules by us-
ing the openstack security group rule create or euca-authorize command.
These openstack commands enable ping and ssh access to your VMs.
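A minimal sketch of such rules, assuming they are added to the project's default security group; the group name and remote prefix are placeholders:
$ openstack security group rule create --protocol icmp --ingress default
$ openstack security group rule create --protocol tcp --dst-port 22 --remote-ip 0.0.0.0/0 default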
Note: If your plug-in implements Networking security groups, you can also leverage Compute security
groups by setting use_neutron = True in the nova.conf file. After you set this option, all
Compute security group commands are proxied to Networking.
Several plug-ins implement API extensions that provide capabilities similar to what was available in
nova-network. These plug-ins are likely to be of interest to the OpenStack community.
Provider networks
Networks can be categorized as either project networks or provider networks. Project networks are
created by normal users and details about how they are physically realized are hidden from those users.
Provider networks are created with administrative credentials, specifying the details of how the network
is physically realized, usually to match some existing network in the data center.
Provider networks enable administrators to create networks that map directly to the physical networks
in the data center. This is commonly used to give projects direct access to a public network that can be
used to reach the Internet. It might also be used to integrate with VLANs in the network that already
have a defined meaning (for example, enable a VM from the marketing department to be placed on the
same VLAN as bare-metal marketing hosts in the same data center).
The provider extension allows administrators to explicitly manage the relationship between Networking
virtual networks and underlying physical mechanisms such as VLANs and tunnels. When this extension
is supported, Networking client users with administrative privileges see additional provider attributes on
all virtual networks and are able to specify these attributes in order to create provider networks.
The provider extension is supported by the Open vSwitch and Linux Bridge plug-ins. Configuration of
these plug-ins requires familiarity with this extension.
Terminology
A number of terms are used in the provider extension and in the configuration of plug-ins supporting the
provider extension:
The ML2, Open vSwitch, and Linux Bridge plug-ins support VLAN networks, flat networks, and local
networks. Only the ML2 and Open vSwitch plug-ins currently support GRE and VXLAN networks,
provided that the required features exist in the host's Linux kernel, Open vSwitch, and iproute2 packages.
Provider attributes
The provider extension extends the Networking network resource with these attributes:
To view or set provider extended attributes, a client must be authorized for the
extension:provider_network:view and extension:provider_network:set
actions in the Networking policy configuration. The default Networking configuration authorizes both
actions for users with the admin role. An authorized client or an administrative user can view and set
the provider extended attributes through Networking API calls. See the section called Authentication
and authorization for details on policy configuration.
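For illustration, a hedged example of how an administrative user might create a VLAN provider network with the openstack CLI; the physical network name and segmentation ID are assumptions for this sketch:
$ openstack network create --share --provider-network-type vlan \
  --provider-physical-network physnet1 --provider-segment 1000 provider-vlan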
The Networking API provides abstract L2 network segments that are decoupled from the technology
used to implement the L2 network. Networking includes an API extension that provides abstract L3
routers that API users can dynamically provision and configure. These Networking routers can connect
multiple L2 Networking networks and can also provide a gateway that connects one or more private L2
networks to a shared external network. For example, a public network for access to the Internet. See the
OpenStack Configuration Reference for details on common models of deploying Networking L3 routers.
The L3 router provides basic NAT capabilities on gateway ports that uplink the router to external net-
works. This router SNATs all traffic by default and supports floating IPs, which creates a static one-to-
one mapping from a public IP on the external network to a private IP on one of the other subnets attached
to the router. This allows a project to selectively expose VMs on private networks to other hosts on the
external network (and often to all hosts on the Internet). You can allocate and map floating IPs from one
port to another, as needed.
Basic L3 operations
External networks are visible to all users. However, the default policy settings enable only administrative
users to create, update, and delete external networks.
This table shows example openstack commands that enable you to complete basic L3 operations:
• An internal router port can have only one IPv4 subnet and multiple IPv6 subnets that belong to the
  same network ID. When you call router-interface-add with an IPv6 subnet, this operation
  adds the interface to an existing internal port with the same network ID. If a port with the same
  network ID does not exist, a new port is created.
• Connects a router to an external network, which enables that router to act as a NAT gateway for
  external connectivity:
  $ openstack router set --external-gateway EXT_NET_ID router1
Security groups
Security groups and security group rules allow administrators and projects to specify the type of traffic
and direction (ingress/egress) that is allowed to pass through a port. A security group is a container for
security group rules.
When a port is created in Networking, it is associated with a security group. If a security group is not
specified, the port is associated with a default security group. By default, this group drops all ingress
traffic and allows all egress. Rules can be added to this group to change the behavior.
To use the Compute security group APIs or use Compute to orchestrate the creation of ports for instances
on specific security groups, you must complete additional configuration. You must configure the /
etc/nova/nova.conf file and set the use_neutron=True option on every node that runs nova-
compute, nova-conductor and nova-api. After you make this change, restart those nova services to pick
up this change. Then, you can use both the Compute and OpenStack Network security group APIs at
the same time.
Note:
• To use the Compute security group API with Networking, the Networking plug-in must implement
the security group API. The following plug-ins currently implement this: ML2, Open vSwitch,
Linux Bridge, NEC, and VMware NSX.
• You must configure the correct firewall driver in the securitygroup section of the plug-
in/agent configuration file. Some plug-ins and agents, such as Linux Bridge Agent and Open
vSwitch Agent, use the no-operation driver as the default, which results in non-working security
groups.
• When using the security group API through Compute, security groups are applied to all ports on
an instance. The reason for this is that Compute security group APIs are instance-based rather than
port-based, as Networking's are.
• When creating or updating a port with a specified security group, the admin tenant can use the
security groups of other tenants.
This table shows example neutron commands that enable you to complete basic security group opera-
tions:
Each vendor can choose to implement additional API extensions to the core API. This section describes
the extensions for each plug-in.
The VMware NSX QoS extension rate-limits network ports to guarantee a specific amount of bandwidth
for each port. This extension, by default, is only accessible by a project with an admin role but is
configurable through the policy.yaml file. To use this extension, create a queue and specify the
min/max bandwidth rates (kbps) and optionally set the QoS Marking and DSCP value (if your network
fabric uses these values to make forwarding decisions). Once created, you can associate a queue with
a network. Then, when ports are created on that network they are automatically created and associated
with the specific queue size that was associated with the network. Because a single queue size for every
port on a network might not be optimal, a scaling factor from the nova flavor rxtx_factor is passed
in from Compute when creating the port to scale the queue.
Lastly, if you want to set a specific baseline QoS policy for the amount of bandwidth a single port can
use (unless a network queue is specified with the network a port is created on) a default queue can be
created in Networking which then causes ports created to be associated with a queue of that size times
the rxtx scaling factor. Note that after a network or default queue is specified, queues are added to ports
that are subsequently created but are not added to existing ports.
This table shows example neutron commands that enable you to complete basic queue operations:
Provider networks can be implemented in different ways by the underlying NSX platform.
The FLAT and VLAN network types use bridged transport connectors. These network types enable the
attachment of a large number of ports. To handle the increased scale, the NSX plug-in can back a single
OpenStack Network with a chain of NSX logical switches. You can specify the maximum number of
ports on each logical switch in this chain with the max_lp_per_bridged_ls parameter, which has a
default value of 5,000.
The recommended value for this parameter varies with the NSX version running in the back-end, as
shown in the following table.
Recommended values for max_lp_per_bridged_ls
In addition to these network types, the NSX plug-in also supports a special l3_ext network type, which
maps external networks to specific NSX gateway services as discussed in the next section.
NSX exposes its L3 capabilities through gateway services which are usually configured out
of band from OpenStack. To use NSX with L3 capabilities, first create an L3 gateway
service in the NSX Manager. Next, in /etc/neutron/plugins/vmware/nsx.ini set
default_l3_gw_service_uuid to this value. By default, routers are mapped to this gateway
service.
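A sketch of the relevant nsx.ini fragment; the section name and placeholder UUID are assumptions for this example:
[DEFAULT]
default_l3_gw_service_uuid = L3_GW_SERVICE_UUID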
Starting with the Havana release, the VMware NSX plug-in provides an asynchronous mechanism for
retrieving the operational status for neutron resources from the NSX back-end; this applies to network,
port, and router resources.
The back-end is polled periodically and the status for every resource is retrieved; then the status in the
Networking database is updated only for the resources for which a status change occurred. As opera-
tional status is now retrieved asynchronously, performance for GET operations is consistently improved.
Data to retrieve from the back-end is divided into chunks in order to avoid expensive API requests;
this is achieved by leveraging the NSX API's response paging capabilities. The minimum chunk size can be
specified using a configuration option; the actual chunk size is then determined dynamically according
to: total number of resources to retrieve, interval between two synchronization task runs, minimum delay
between two subsequent requests to the NSX back-end.
The operational status synchronization can be tuned or disabled using the configuration options reported
in this table; it is however worth noting that the default values work fine in most cases.
When running multiple OpenStack Networking server instances, the status synchronization task should
not run on every node; doing so sends unnecessary traffic to the NSX back-end and performs unnecessary
DB operations. Set the state_sync_interval configuration option to a non-zero value exclusively
on a node designated for back-end status synchronization.
The fields=status parameter in Networking API requests always triggers an explicit query to the
NSX back end, even when you enable asynchronous state synchronization. For example, GET /v2.
0/networks/NET_ID?fields=status&fields=name.
Big Switch allows router rules to be added to each project router. These rules can be used to enforce
routing policies such as denying traffic between subnets or traffic to external networks. By enforcing
these at the router level, network segmentation policies can be enforced across many VMs that have
differing security groups.
Each project router has a set of router rules associated with it. Each router rule has the attributes in this
table. Router rules and their attributes can be set using the neutron router-update command,
through the horizon interface or the Networking API.
The order of router rules has no effect. Overlapping rules are evaluated using longest prefix matching on
the source and destination fields. The source field is matched first so it always takes higher precedence
over the destination field. In other words, longest prefix matching is used on the destination field only if
there are multiple matching rules with the same source.
Router rules are configured with a router update operation in OpenStack Networking. The update over-
rides any previous rules so all rules must be provided at the same time.
Update a router with rules to permit traffic by default but block traffic from external networks to the
10.10.10.0/24 subnet:
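A sketch of the kind of neutron router-update call this implies; ROUTER_UUID is a placeholder and the exact attribute syntax should be checked against your Big Switch plug-in version:
$ neutron router-update ROUTER_UUID --router_rules type=dict list=true \
  source=any,destination=any,action=permit \
  source=external,destination=10.10.10.0/24,action=deny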
L3 metering
The L3 metering API extension enables administrators to configure IP ranges and assign a specified
label to them to be able to measure traffic that goes through a virtual router.
The L3 metering extension is decoupled from the technology that implements the measurement. Two
abstractions have been added: One is the metering label that can contain metering rules. Because a
metering label is associated with a project, all virtual routers in this project are associated with this
label.
Logging settings
Networking components use the Python logging module for logging. Logging configuration can be
provided in neutron.conf or as command-line options. Command-line options override the settings in
neutron.conf.
To configure logging for Networking components, use one of these methods:
• Provide logging settings in a logging configuration file.
See Python logging how-to to learn more about logging.
• Provide logging settings in neutron.conf.
[DEFAULT]
# Default log level is WARNING
# Show debugging output in logs (sets DEBUG log level output)
# debug = False
# use_syslog = False
# syslog_log_facility = LOG_USER
Notifications
Notifications can be sent when Networking resources such as network, subnet and port are created,
updated or deleted.
Notification options
To support the DHCP agent, the rpc_notifier driver must be set. To set up notifications, edit the
notification options in neutron.conf:
Setting cases
These options configure the Networking server to send notifications through logging and RPC. The
logging options are described in OpenStack Configuration Reference . RPC notifications go to
notifications.info queue bound to a topic exchange defined by control_exchange in
neutron.conf.
Notification System Options
A notification can be sent when a network, subnet, or port is created, updated or deleted. The notification
system options are:
• notification_driver Defines the driver or drivers to handle the sending of a notification.
The six available options are:
– messaging Send notifications using the 1.0 message format.
– messagingv2 Send notifications using the 2.0 message format (with a message en-
velope).
– routing Configurable routing notifier (by priority or event_type).
– log Publish notifications using Python logging infrastructure.
– test Store notifications in memory for test verification.
– noop Disable sending notifications entirely.
• default_notification_level Is used to form topic names or to set a logging level.
• default_publisher_id Is a part of the notification payload.
• notification_topics AMQP topic used for OpenStack notifications. They can
be comma-separated values. The actual topic names will be the values of
default_notification_level.
• control_exchange This is an option defined in oslo.messaging. It is the default exchange
under which topics are scoped. May be overridden by an exchange name specified in the
transport_url option. It is a string value.
Below is a sample neutron.conf configuration file:
notification_driver = messagingv2
default_notification_level = INFO
host = myhost.com
default_publisher_id = $host
notification_topics = notifications
control_exchange = openstack
Networking uses the Identity service as the default authentication service. When the Identity service
is enabled, users who submit requests to the Networking service must provide an authentication token
in X-Auth-Token request header. Users obtain this token by authenticating with the Identity service
endpoint. For more information about authentication with the Identity service, see OpenStack Identity
service API v3 Reference. When the Identity service is enabled, it is not mandatory to specify the project
ID for resources in create requests because the project ID is derived from the authentication token.
The default authorization settings only allow administrative users to create resources on behalf of a
different project. Networking uses information received from Identity to authorize user requests. Net-
working handles two kinds of authorization policies:
• Operation-based policies specify access criteria for specific operations, possibly with fine-
grained control over specific attributes.
• Resource-based policies specify whether access to specific resource is granted or not according
to the permissions configured for the resource (currently available only for the network resource).
The actual authorization policies enforced in Networking might vary from deployment to deploy-
ment.
The policy engine reads entries from the policy.yaml file. The actual location of this file might vary
from distribution to distribution. Entries can be updated while the system is running, and no service
restart is required. Every time the policy file is updated, the policies are automatically reloaded. Cur-
rently the only way of updating such policies is to edit the policy file. In this section, the terms policy and
rule refer to objects that are specified in the same way in the policy file. There are no syntax differences
between a rule and a policy. A policy is something that is matched directly from the Networking policy
engine. A rule is an element in a policy, which is evaluated. For instance in "create_subnet":
"rule:admin_or_network_owner", create_subnet is a policy, and admin_or_network_owner is
a rule.
Policies are triggered by the Networking policy engine whenever one of them matches a Networking API
operation or a specific attribute being used in a given operation. For instance the create_subnet
policy is triggered every time a POST /v2.0/subnets request is sent to the Networking server;
on the other hand create_network:shared is triggered every time the shared attribute is explic-
itly specified (and set to a value different from its default) in a POST /v2.0/networks request.
It is also worth mentioning that policies can also be related to specific API extensions; for instance
extension:provider_network:set is triggered if the attributes defined by the Provider Net-
work extensions are specified in an API request.
An authorization policy can be composed of one or more rules. If more than one rule is specified, the
evaluation succeeds if any of the rules evaluates successfully; if an API operation matches multiple
policies, then all the policies must evaluate successfully. Authorization rules are also recursive: once a
rule is matched, it can resolve to another rule, until a terminal rule is reached.
The Networking policy engine currently defines the following kinds of terminal rules:
• Role-based rules evaluate successfully if the user who submits the request has the specified role.
For instance "role:admin" is successful if the user who submits the request is an administra-
tor.
• Field-based rules evaluate successfully if a field of the resource specified in the current request
matches a specific value. For instance "field:networks:shared=True" is successful if
the shared attribute of the network resource is set to true.
• Generic rules compare an attribute in the resource with an attribute extracted from the user's
security credentials and evaluate successfully if the comparison is successful. For instance
"tenant_id:%(tenant_id)s" is successful if the project identifier in the resource is equal
to the project identifier of the user submitting the request.
This extract is from the default policy.yaml file:
• A rule that evaluates successfully if the current user is an administrator or the owner of the resource
specified in the request (project identifier is equal).
• The default policy that is always evaluated if an API operation does not match any of the policies
in policy.yaml.
"default": "rule:admin_or_owner"
"create_subnet": "rule:admin_or_network_owner"
"get_subnet": "rule:admin_or_owner or rule:shared"
"update_subnet": "rule:admin_or_network_owner"
"delete_subnet": "rule:admin_or_network_owner"
"create_network": ""
• This policy restricts the ability to manipulate the shared attribute for a network to administrators
only.
"update_network": "rule:admin_or_owner"
"delete_network": "rule:admin_or_owner"
"create_port": ""
"create_port:mac_address": "rule:admin_or_network_owner"
"create_port:fixed_ips": "rule:admin_or_network_owner"
• This policy restricts the ability to manipulate the mac_address attribute for a port only to admin-
istrators and the owner of the network where the port is attached.
"get_port": "rule:admin_or_owner"
"update_port": "rule:admin_or_owner"
"delete_port": "rule:admin_or_owner"
In some cases, some operations are restricted to administrators only. This example shows you how to
modify a policy file to permit projects to define networks, see their resources, and permit administrative
users to perform all other operations:
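A minimal, hedged sketch of such a policy file, reusing the rule names shown above; adjust it to your deployment rather than treating it as the canonical example:
"admin_or_owner": "role:admin or tenant_id:%(tenant_id)s"
"admin_only": "role:admin"
"default": "rule:admin_only"
"create_network": ""
"get_network": "rule:admin_or_owner"
"get_port": "rule:admin_or_owner"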
NINE
CONFIGURATION GUIDE
This section provides a list of all configuration options for various neutron services. These are auto-
generated from neutron code when this documentation is built.
Configuration filenames used below are filenames usually used, but there is no restriction on configura-
tion filename in neutron and you can use arbitrary file names.
9.1.1 neutron.conf
DEFAULT
state_path
Type string
Default /var/lib/neutron
Where to store Neutron state files. This directory must be writable by the agent.
bind_host
Type host address
Default 0.0.0.0
The host IP to bind to.
bind_port
Type port number
Default 9696
Minimum Value 0
Maximum Value 65535
The port to bind to
api_extensions_path
Type string
Default ''
The path for API extensions. Note that this can be a colon-separated list of paths. For exam-
ple: api_extensions_path = extensions:/path/to/more/exts:/even/more/exts. The __path__ of neu-
tron.extensions is appended to this, so if your extensions are in there you don't need to specify
them here.
auth_strategy
Type string
Default keystone
The type of authentication to use
core_plugin
Type string
Default <None>
The core plugin Neutron will use
service_plugins
Type list
Default []
The service plugins Neutron will use
base_mac
Type string
Default fa:16:3e:00:00:00
The base MAC address Neutron will use for VIFs. The first 3 octets will remain unchanged. If
the 4th octet is not 00, it will also be used. The others will be randomly generated.
allow_bulk
Type boolean
Default True
Allow the usage of the bulk API
pagination_max_limit
Type string
Default -1
The maximum number of items returned in a single response. A value of 'infinite' or a negative integer
means no limit.
default_availability_zones
Type list
Default []
Default value of availability zone hints. The availability zone aware schedulers use this when
the resources availability_zone_hints is empty. Multiple availability zones can be specified by a
comma separated string. This value can be empty. In this case, even if availability_zone_hints
for a resource is empty, availability zone is considered for high availability while scheduling the
resource.
max_dns_nameservers
Type integer
Default 5
Maximum number of DNS nameservers per subnet
max_subnet_host_routes
Type integer
Default 20
Maximum number of host routes per subnet
ipv6_pd_enabled
Type boolean
Default False
Enables IPv6 Prefix Delegation for automatic subnet CIDR allocation. Set to True to enable
IPv6 Prefix Delegation for subnet allocation in a PD-capable environment. Users making subnet
creation requests for IPv6 subnets without providing a CIDR or subnetpool ID will be given a
CIDR via the Prefix Delegation mechanism. Note that enabling PD will override the behavior of
the default IPv6 subnetpool.
dhcp_lease_duration
Type integer
Default 86400
DHCP lease duration (in seconds). Use -1 to tell dnsmasq to use infinite lease times.
dns_domain
Type string
Default openstacklocal
Domain to use for building the hostnames
external_dns_driver
Type string
Default <None>
Driver for external DNS integration.
dhcp_agent_notification
Type boolean
Default True
Allow sending resource operation notification to DHCP agent
allow_overlapping_ips
Type boolean
Default False
Allow overlapping IP support in Neutron. Attention: the following parameter MUST be set to
False if Neutron is being used in conjunction with Nova security groups.
host
Type host address
Default example.domain
This option has a sample default set, which means that its actual default value may vary from the
one documented above.
Hostname to be used by the Neutron server, agents and services running on this machine. All the
agents and services running on this machine must use the same host value.
network_link_prefix
Type string
Default <None>
This string is prepended to the normal URL that is returned in links to the OpenStack Network
API. If it is empty (the default), the URLs are returned unchanged.
notify_nova_on_port_status_changes
Type boolean
Default True
Send notification to nova when port status changes
notify_nova_on_port_data_changes
Type boolean
Default True
Send notification to nova when port data (fixed_ips/floatingip) changes so nova can update its
cache.
send_events_interval
Type integer
Default 2
Number of seconds between sending events to nova if there are any events to send.
setproctitle
Type string
Default on
Set process name to match child worker role. Available options are: off - retains the previous
behavior; on - renames processes to neutron-server: role (original string); brief - renames the
same as on, but without the original string, such as neutron-server: role.
ipam_driver
Type string
Default internal
Neutron IPAM (IP address management) driver to use. By default, the reference implementation
of the Neutron IPAM driver is used.
vlan_transparent
Type boolean
Default False
If True, then allow plugins that support it to create VLAN transparent networks.
filter_validation
Type boolean
Default True
If True, then allow plugins to decide whether to perform validations on filter parameters. Filter
validation is enabled if this config is turned on and it is supported by all plugins
global_physnet_mtu
Type integer
Default 1500
MTU of the underlying physical network. Neutron uses this value to calculate MTU for all virtual
network components. For flat and VLAN networks, neutron uses this value without modifica-
tion. For overlay networks such as VXLAN, neutron automatically subtracts the overlay protocol
overhead from this value. Defaults to 1500, the standard value for Ethernet.
http_retries
Type integer
Default 3
Minimum Value 0
Number of times client connections (nova, ironic) should be retried on a failed HTTP call. 0 (zero)
means connection is attempted only once (not retried). Setting to any positive integer means that
on failure the connection is retried that many times. For example, setting to 3 means total attempts
to connect will be 4.
enable_traditional_dhcp
Type boolean
Default True
If False, neutron-server will disable the following DHCP-agent related functions: 1. DHCP provisioning
block, 2. DHCP scheduler API extension, 3. Network scheduling mechanism, 4. DHCP
RPC/notification.
backlog
Type integer
Default 4096
Number of backlog requests to configure the socket with
retry_until_window
Type integer
Default 30
Number of seconds to keep retrying to listen
use_ssl
Type boolean
Default False
Enable SSL on the API server
periodic_interval
Type integer
Default 40
Seconds between running periodic tasks.
api_workers
Type integer
Default <None>
Number of separate API worker processes for service. If not specified, the default is equal to the
number of CPUs available for best performance, capped by potential RAM usage.
rpc_workers
Type integer
Default <None>
Number of RPC worker processes for service. If not specified, the default is equal to half the
number of API workers.
rpc_state_report_workers
Type integer
Default 1
Number of RPC worker processes dedicated to state reports queue.
periodic_fuzzy_delay
Type integer
Default 5
Range of seconds to randomly delay when starting the periodic task scheduler to reduce stamped-
ing. (Disable by setting to 0)
rpc_response_max_timeout
Type integer
Default 600
Maximum seconds to wait for a response from an RPC call.
interface_driver
Type string
Default <None>
The driver used to manage the virtual interface.
metadata_proxy_socket
Type string
Default $state_path/metadata_proxy
Location for Metadata Proxy UNIX domain socket.
metadata_proxy_user
Type string
Default ''
User (uid or name) running metadata proxy after its initialization (if empty: agent effective user).
metadata_proxy_group
Type string
Default ''
Group (gid or name) running metadata proxy after its initialization (if empty: agent effective
group).
agent_down_time
Type integer
Default 75
Seconds to regard the agent is down; should be at least twice report_interval, to be sure the agent
is down for good.
dhcp_load_type
Type string
Default networks
Valid Values networks, subnets, ports
Representing the resource type whose load is being reported by the agent. This can be networks,
subnets or ports. When specified (Default is networks), the server will extract particular load
sent as part of its agent configuration object from the agent report state, which is the number
of resources being consumed, at every report_interval. dhcp_load_type can be used in combination
with network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.WeightScheduler.
When the network_scheduler_driver is WeightScheduler, dhcp_load_type can be configured to
represent the choice for the resource being balanced. Example: dhcp_load_type=networks
enable_new_agents
Type boolean
Default True
Agent starts with admin_state_up=False when enable_new_agents=False. In that case, users' re-
sources will not be scheduled automatically to the agent until the admin changes admin_state_up to
True.
max_routes
Type integer
Default 30
Maximum number of routes per router
enable_snat_by_default
Type boolean
Default True
Define the default value of enable_snat if not provided in external_gateway_info.
network_scheduler_driver
Type string
Default neutron.scheduler.dhcp_agent_scheduler.
WeightScheduler
Driver to use for scheduling network to DHCP agent
network_auto_schedule
Type boolean
Default True
Allow auto scheduling networks to DHCP agent.
allow_automatic_dhcp_failover
Type boolean
Default True
Automatically remove networks from offline DHCP agents.
dhcp_agents_per_network
Type integer
Default 1
Minimum Value 1
Number of DHCP agents scheduled to host a tenant network. If this number is greater than 1, the
scheduler automatically assigns multiple DHCP agents for a given tenant network, providing high
availability for DHCP service.
enable_services_on_agents_with_admin_state_down
Type boolean
Default False
Enable services on an agent with admin_state_up False. If this option is False, when ad-
min_state_up of an agent is turned False, services on it will be disabled. Agents with ad-
min_state_up False are not selected for automatic scheduling regardless of this option. But manual
scheduling to such agents is available if this option is True.
dvr_base_mac
Type string
Default fa:16:3f:00:00:00
The base mac address used for unique DVR instances by Neutron. The first 3 octets will remain
unchanged. If the 4th octet is not 00, it will also be used. The others will be randomly gener-
ated. The dvr_base_mac must be different from base_mac to avoid mixing them up with MACs
allocated for tenant ports. A 4 octet example would be dvr_base_mac = fa:16:3f:4f:00:00. The
default is 3 octets.
router_distributed
Type boolean
Default False
System-wide flag to determine the type of router that tenants can create. Only admin can override.
enable_dvr
Type boolean
Default True
Determine if setup is configured for DVR. If False, DVR API extension will be disabled.
host_dvr_for_dhcp
Type boolean
Default True
Flag to determine if hosting a DVR local router to the DHCP agent is desired. If False, any L3
function supported by the DHCP agent instance will not be possible, for instance: DNS.
router_scheduler_driver
Type string
Default neutron.scheduler.l3_agent_scheduler.
LeastRoutersScheduler
Driver to use for scheduling router to a default L3 agent
router_auto_schedule
Type boolean
Default True
Allow auto scheduling of routers to L3 agent.
allow_automatic_l3agent_failover
Type boolean
Default False
Automatically reschedule routers from offline L3 agents to online L3 agents.
api_paste_config
Type string
Default api-paste.ini
File name for the paste.deploy config for api service
wsgi_log_format
Type string
Default %(client_ip)s "%(request_line)s" status:
%(status_code)s len: %(body_length)s time:
%(wall_seconds).7f
A python format string that is used as the template to generate log lines. The following values can
be formatted into it: client_ip, date_time, request_line, status_code, body_length, wall_seconds.
tcp_keepidle
Type integer
Default 600
Sets the value of TCP_KEEPIDLE in seconds for each server socket. Not supported on OS X.
wsgi_default_pool_size
Type integer
Default 100
Size of the pool of greenthreads used by wsgi
max_header_line
Type integer
Default 16384
Maximum line size of message headers to be accepted. max_header_line may need to be increased
when using large tokens (typically those generated when keystone is configured to use PKI tokens
with big service catalogs).
wsgi_keep_alive
Type boolean
Default True
If False, closes the client socket connection explicitly.
client_socket_timeout
Type integer
Default 900
Timeout for client connections socket operations. If an incoming connection is idle for this number
of seconds it will be closed. A value of 0 means wait forever.
debug
Type boolean
Default False
log_date_format
Type string
Default %Y-%m-%d %H:%M:%S
Defines the format string for %(asctime)s in log records. Default: the value above . This option is
ignored if log_config_append is set.
log_file
Type string
Default <None>
(Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr
as defined by use_stderr. This option is ignored if log_config_append is set.
log_dir
Type string
Default <None>
(Optional) The base directory used for relative log_file paths. This option is ignored if
log_config_append is set.
watch_log_file
Type boolean
Default False
Uses logging handler designed to watch file system. When log file is moved or removed this
handler will open a new log file with specified path instantaneously. It makes sense only if log_file
option is specified and Linux platform is used. This option is ignored if log_config_append is set.
use_syslog
Type boolean
Default False
Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to
honor RFC5424. This option is ignored if log_config_append is set.
use_journal
Type boolean
Default False
Enable journald for logging. If running in a systemd environment you may wish to enable journal
support. Doing so will use the journal native protocol which includes structured metadata in
addition to log messages.This option is ignored if log_config_append is set.
syslog_log_facility
Type string
Default LOG_USER
Syslog facility to receive log lines. This option is ignored if log_config_append is set.
use_json
Type boolean
Default False
Use JSON formatting for logging. This option is ignored if log_config_append is set.
use_stderr
Type boolean
Default False
Log output to standard error. This option is ignored if log_config_append is set.
use_eventlog
Type boolean
Default False
Log output to Windows Event Log.
log_rotate_interval
Type integer
Default 1
The amount of time before the log files are rotated. This option is ignored unless log_rotation_type
is set to interval.
log_rotate_interval_type
Type string
Default days
Valid Values Seconds, Minutes, Hours, Days, Weekday, Midnight
Rotation interval type. The time of the last file change (or the time when the service was started)
is used when scheduling the next rotation.
max_logfile_count
Type integer
Default 30
Maximum number of rotated log files.
max_logfile_size_mb
Type integer
Default 200
Log file maximum size in MB. This option is ignored if log_rotation_type is not set to size.
log_rotation_type
Type string
Default none
Valid Values interval, size, none
Log rotation type.
Possible values
logging_context_format_string
Type string
Default %(asctime)s.%(msecs)03d %(process)d %(levelname)s
%(name)s [%(request_id)s %(user_identity)s]
%(instance)s%(message)s
Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter
logging_default_format_string
Type string
Default %(asctime)s.%(msecs)03d %(process)d %(levelname)s
%(name)s [-] %(instance)s%(message)s
Format string to use for log messages when context is undefined. Used by
oslo_log.formatters.ContextFormatter
logging_debug_format_suffix
Type string
Default %(funcName)s %(pathname)s:%(lineno)d
Additional data to append to log message when logging level for the message is DEBUG. Used
by oslo_log.formatters.ContextFormatter
logging_exception_prefix
Type string
Default %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s
%(instance)s
Prefix each line of exception output with this format. Used by
oslo_log.formatters.ContextFormatter
logging_user_identity_format
Type string
Default %(user)s %(tenant)s %(domain)s %(user_domain)s
%(project_domain)s
Defines the format string for %(user_identity)s that is used in logging_context_format_string.
Used by oslo_log.formatters.ContextFormatter
default_log_levels
Type list
Default ['amqp=WARN', 'amqplib=WARN', 'boto=WARN',
'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO',
'oslo.messaging=INFO', 'oslo_messaging=INFO',
'iso8601=WARN', 'requests.packages.urllib3.
connectionpool=WARN', 'urllib3.connectionpool=WARN',
'websocket=WARN', 'requests.packages.
urllib3.util.retry=WARN', 'urllib3.util.
retry=WARN', 'keystonemiddleware=WARN', 'routes.
middleware=WARN', 'stevedore=WARN', 'taskflow=WARN',
'keystoneauth=WARN', 'oslo.cache=INFO',
'oslo_policy=INFO', 'dogpile.core.dogpile=INFO']
List of package logging levels in logger=LEVEL pairs. This option is ignored if
log_config_append is set.
publish_errors
Type boolean
Default False
Enables or disables publication of error events.
instance_format
Type string
conn_pool_min_size
Type integer
Default 2
rpc_response_timeout
Type integer
Default 60
Seconds to wait for a response from a call.
transport_url
Type string
Default rabbit://
The network address and optional user credentials for connecting to the messaging backend, in
URL format. The expected format is:
driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query
Example: rabbit://rabbitmq:[email protected]:5672//
For full details on the fields in the URL see the documentation of oslo_messaging.TransportURL
at https://docs.openstack.org/oslo.messaging/latest/reference/transport.html
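As an illustration of the multi-host form described above (the host names, user name and password
below are placeholders, not values from this guide):
    transport_url = rabbit://openstack:RABBIT_PASS@controller1:5672,openstack:RABBIT_PASS@controller2:5672/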
control_exchange
Type string
Default neutron
The default exchange under which topics are scoped. May be overridden by an exchange name
specified in the transport_url option.
rpc_ping_enabled
Type boolean
Default False
Add an endpoint to answer to ping calls. Endpoint is named oslo_rpc_server_ping
agent
root_helper
Type string
Default sudo
Root helper application. Use sudo neutron-rootwrap /etc/neutron/rootwrap.conf to use the real
root filter facility. Change to sudo to skip the filtering and just run the command directly.
use_helper_for_ns_read
Type boolean
Default True
Use the root helper when listing the namespaces on a system. This may not be required depending
on the security configuration. If the root helper is not required, set this to False for a performance
improvement.
root_helper_daemon
Type string
Default <None>
Root helper daemon application to use when possible.
Use sudo neutron-rootwrap-daemon /etc/neutron/rootwrap.conf to run rootwrap in daemon mode
which has been reported to improve performance at scale. For more information on running
rootwrap in daemon mode, see:
https://docs.openstack.org/oslo.rootwrap/latest/user/usage.html#daemon-mode
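Putting the two commands quoted above together, an [agent] section that enables the rootwrap
daemon might look like the following sketch:
    [agent]
    root_helper = sudo neutron-rootwrap /etc/neutron/rootwrap.conf
    root_helper_daemon = sudo neutron-rootwrap-daemon /etc/neutron/rootwrap.conf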
report_interval
Type floating point
Default 30
Seconds between nodes reporting state to server; should be less than agent_down_time, best if it
is half or less than agent_down_time.
log_agent_heartbeats
Type boolean
Default False
Log agent heartbeats
comment_iptables_rules
Type boolean
Default True
Add comments to iptables rules. Set to false to disallow the addition of comments to generated
iptables rules that describe each rule's purpose. The system must support the iptables comments
module for addition of comments.
debug_iptables_rules
Type boolean
Default False
Duplicate every iptables difference calculation to ensure the format being generated matches the
format of iptables-save. This option should not be turned on for production systems because it
imposes a performance penalty.
check_child_processes_action
Type string
Default respawn
Valid Values respawn, exit
Action to be executed when a child process dies
check_child_processes_interval
Type integer
Default 60
Interval between checks of child process liveness (seconds), use 0 to disable
kill_scripts_path
Type string
Default /etc/neutron/kill_scripts/
Location of scripts used to kill external processes. Names of scripts here must follow the pattern:
<process-name>-kill, where <process-name> is the name of the process which should be killed
using this script. For example, the kill script for the dnsmasq process should be named
dnsmasq-kill. If the path is set to None, then the default kill command will be used to stop
processes.
availability_zone
Type string
Default nova
Availability zone of this node
cors
allowed_origin
Type list
Default <None>
Indicate whether this resource may be shared with the domain received in the request's origin
header. Format: <protocol>://<host>[:<port>], no trailing slash. Example: https://horizon.example.com
allow_credentials
Type boolean
Default True
Indicate that the actual request can include user credentials
expose_headers
Type list
Default ['X-Auth-Token', 'X-Subject-Token',
'X-Service-Token', 'X-OpenStack-Request-ID',
'OpenStack-Volume-microversion']
Indicate which headers are safe to expose to the API. Defaults to HTTP Simple Headers.
max_age
Type integer
Default 3600
Maximum cache age of CORS preflight requests.
allow_methods
Type list
Default ['GET', 'PUT', 'POST', 'DELETE', 'PATCH']
Indicate which methods can be used during the actual request.
allow_headers
Type list
Default ['X-Auth-Token', 'X-Identity-Status', 'X-Roles',
'X-Service-Catalog', 'X-User-Id', 'X-Tenant-Id',
'X-OpenStack-Request-ID']
Indicate which header field names may be used during the actual request.
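A minimal [cors] sketch, reusing the https://horizon.example.com origin from the allowed_origin
description above; the other values are only illustrative:
    [cors]
    allowed_origin = https://horizon.example.com
    allow_credentials = true
    max_age = 3600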
database
engine
Type string
Default ''
Database engine for which script will be generated when using offline migration.
sqlite_synchronous
Type boolean
Default True
If True, SQLite uses synchronous mode.
backend
Type string
Default sqlalchemy
connection
Type string
Default <None>
The SQLAlchemy connection string to use to connect to the database.
slave_connection
Type string
Default <None>
The SQLAlchemy connection string to use to connect to the slave database.
mysql_sql_mode
Type string
Default TRADITIONAL
The SQL mode to be used for MySQL sessions. This option, including the default, overrides any
server-set SQL mode. To use whatever SQL mode is set by the server configuration, set this to no
value. Example: mysql_sql_mode=
mysql_enable_ndb
Type boolean
Default False
If True, transparently enables support for handling MySQL Cluster (NDB).
connection_recycle_time
Type integer
Default 3600
Connections which have been present in the connection pool longer than this number of seconds
will be replaced with a new one the next time they are checked out from the pool.
max_pool_size
Type integer
Default 5
Maximum number of SQL connections to keep open in a pool. Setting a value of 0 indicates no
limit.
max_retries
Type integer
Default 10
Maximum number of database connection retries during startup. Set to -1 to specify an infinite
retry count.
retry_interval
Type integer
Default 10
Interval between retries of opening a SQL connection.
max_overflow
Type integer
Default 50
If set, use this value for max_overflow with SQLAlchemy.
connection_debug
Type integer
Default 0
Minimum Value 0
Maximum Value 100
Verbosity of SQL debugging information: 0=None, 100=Everything.
connection_trace
Type boolean
Default False
Add Python stack traces to SQL as comment strings.
pool_timeout
Type integer
Default <None>
If set, use this value for pool_timeout with SQLAlchemy.
use_db_reconnect
Type boolean
Default False
Enable the experimental use of database reconnect on connection lost.
db_retry_interval
Type integer
Default 1
Seconds between retries of a database transaction.
db_inc_retry_interval
Type boolean
Default True
If True, increases the interval between retries of a database operation up to db_max_retry_interval.
db_max_retry_interval
Type integer
Default 10
If db_inc_retry_interval is set, the maximum seconds between retries of a database operation.
db_max_retries
Type integer
Default 20
Maximum retries in case of connection error or deadlock error before error is raised. Set to -1 to
specify an infinite retry count.
connection_parameters
Type string
Default ''
Optional URL parameters to append onto the connection URL at connect time; specify as
param1=value1&param2=value2&...
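A sketch of a [database] section, assuming the MySQL/pymysql backend and the controller host
and NEUTRON_DBPASS placeholder conventions used elsewhere in this guide:
    [database]
    connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
    max_pool_size = 5
    max_retries = 10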
ironic
auth_url
Type unknown type
Default <None>
Authentication URL
auth_type
Type unknown type
Default <None>
Authentication type to load
cafile
Type string
Default <None>
PEM encoded Certificate Authority to use when verifying HTTPs connections.
certfile
Type string
Default <None>
PEM encoded client certificate cert file
collect_timing
Type boolean
Default False
Collect per-API call timing information.
default_domain_id
Type unknown type
Default <None>
Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project
domain in v3 and ignored in v2 authentication.
default_domain_name
Type unknown type
Default <None>
Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and
project domain in v3 and ignored in v2 authentication.
domain_id
Type unknown type
Default <None>
Domain ID to scope to
domain_name
Type unknown type
Default <None>
Domain name to scope to
insecure
Type boolean
Default False
Verify HTTPS connections.
keyfile
Type string
Default <None>
PEM encoded client certificate key file
password
Type unknown type
Default <None>
User's password
project_domain_id
Type unknown type
Default <None>
Domain ID containing project
project_domain_name
Type unknown type
Default <None>
Domain name containing project
project_id
Type unknown type
Default <None>
Project ID to scope to
project_name
Type unknown type
Default <None>
Project name to scope to
split_loggers
Type boolean
Default False
Log requests to multiple loggers.
enable_notifications
Type boolean
Default False
Send notification events to ironic. (For example on relevant port status changes.)
keystone_authtoken
www_authenticate_uri
Type string
Default <None>
Complete public Identity API endpoint. This endpoint should not be an admin endpoint, as it
should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to
authenticate. Although this endpoint should ideally be unversioned, client support in the wild
varies. If you're using a versioned v2 endpoint here, then this should not be the same endpoint the
service user utilizes for validating tokens, because normal end users may not be able to reach that
endpoint.
auth_uri
Type string
Default <None>
Complete public Identity API endpoint. This endpoint should not be an admin endpoint, as it
should be accessible by all end users. Unauthenticated clients are redirected to this endpoint to
authenticate. Although this endpoint should ideally be unversioned, client support in the wild
varies. If you're using a versioned v2 endpoint here, then this should not be the same endpoint the
service user utilizes for validating tokens, because normal end users may not be able to reach that
endpoint. This option is deprecated in favor of www_authenticate_uri and will be removed in the
S release.
Warning: This option is deprecated for removal since Queens. Its value may be silently
ignored in the future.
auth_version
Type string
Default <None>
API version of the Identity API endpoint.
interface
Type string
Default internal
Interface to use for the Identity API endpoint. Valid values are public, internal (default) or admin.
delay_auth_decision
Type boolean
Default False
Do not handle authorization requests within the middleware, but delegate the authorization deci-
sion to downstream WSGI components.
http_connect_timeout
Type integer
Default <None>
Request timeout value for communicating with Identity API server.
http_request_max_retries
Type integer
Default 3
How many times are we trying to reconnect when communicating with Identity API Server.
cache
Type string
Default <None>
Request environment key where the Swift cache object is stored. When auth_token middleware
is deployed with a Swift cache, use this option to have the middleware share a caching backend
with swift. Otherwise, use the memcached_servers option instead.
certfile
Type string
Default <None>
Required if identity server requires client certificate
keyfile
Type string
Default <None>
Required if identity server requires client certificate
cafile
Type string
Default <None>
A PEM encoded Certificate Authority to use when verifying HTTPs connections. Defaults to
system CAs.
insecure
Type boolean
Default False
Verify HTTPS connections.
region_name
Type string
Default <None>
The region in which the identity server can be found.
memcached_servers
Type list
Default <None>
Optionally specify a list of memcached server(s) to use for caching. If left undefined, tokens will
instead be cached in-process.
token_cache_time
Type integer
Default 300
In order to prevent excessive effort spent validating tokens, the middleware caches previously-seen
tokens for a configurable duration (in seconds). Set to -1 to disable caching completely.
memcache_security_strategy
Type string
Default None
Valid Values None, MAC, ENCRYPT
(Optional) If defined, indicate whether token data should be authenticated or authenticated and
encrypted. If MAC, token data is authenticated (with HMAC) in the cache. If ENCRYPT, token
data is encrypted and authenticated in the cache. If the value is not one of these options or empty,
auth_token will raise an exception on initialization.
memcache_secret_key
Type string
Default <None>
(Optional, mandatory if memcache_security_strategy is defined) This string is used for key deriva-
tion.
memcache_pool_dead_retry
Type integer
Default 300
(Optional) Number of seconds memcached server is considered dead before it is tried again.
memcache_pool_maxsize
Type integer
Default 10
(Optional) Maximum total number of open connections to every memcached server.
memcache_pool_socket_timeout
Type integer
Default 3
(Optional) Socket timeout in seconds for communicating with a memcached server.
memcache_pool_unused_timeout
Type integer
Default 60
(Optional) Number of seconds a connection to memcached is held unused in the pool before it is
closed.
memcache_pool_conn_get_timeout
Type integer
Default 10
(Optional) Number of seconds that an operation will wait to get a memcached client connection
from the pool.
memcache_use_advanced_pool
Type boolean
Default False
(Optional) Use the advanced (eventlet safe) memcached client pool. The advanced pool will only
work under python 2.x.
include_service_catalog
Type boolean
Default True
(Optional) Indicate whether to set the X-Service-Catalog header. If False, middleware will not ask
for service catalog on token validation and will not set the X-Service-Catalog header.
enforce_token_bind
Type string
Default permissive
Used to control the use and type of token binding. Can be set to: disabled, to not check token
binding; permissive (default), to validate binding information if the bind type is of a form known
to the server and ignore it if not; strict, like permissive but the token will be rejected if the bind
type is unknown; required, where any form of token binding is needed; or finally the name of a
binding method that must be present in tokens.
service_token_roles
Type list
Default ['service']
A choice of roles that must be present in a service token. Service tokens are allowed to request
that an expired token can be used and so this check should tightly control that only actual services
should be sending this token. Roles here are applied as an ANY check so any role in this list must
be present. For backwards compatibility reasons this currently only affects the allow_expired
check.
service_token_roles_required
Type boolean
Default False
For backwards compatibility reasons we must let valid service tokens pass that don't pass the
service_token_roles check as valid. Setting this true will become the default in a future release
and should be enabled if possible.
service_type
Type string
Default <None>
The name or type of the service as it appears in the service catalog. This is used to validate tokens
that have restricted access rules.
auth_type
Type unknown type
Default <None>
Authentication type to load
auth_section
Type unknown type
Default <None>
Config Section from which to load plugin specific options
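A sketch of a [keystone_authtoken] section in the style used elsewhere in this guide. The controller
host, service project and NEUTRON_PASS are placeholders, and the user and project options are
loaded through the password plugin selected by auth_type:
    [keystone_authtoken]
    www_authenticate_uri = http://controller:5000
    auth_url = http://controller:5000
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = neutron
    password = NEUTRON_PASS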
nova
region_name
Type string
Default <None>
Name of nova region to use. Useful if keystone manages more than one region.
endpoint_type
Type string
Default public
Valid Values public, admin, internal
Type of the nova endpoint to use. This endpoint will be looked up in the keystone catalog and
should be one of public, internal or admin.
live_migration_events
Type boolean
Default False
When this option is enabled, during the live migration, the OVS agent will only send the
vif-plugged-event when the destination host interface is bound. This option also prevents any other
agent (like DHCP) from sending this event to Nova when the port is provisioned. This option can
be enabled if the Nova patch https://review.opendev.org/c/openstack/nova/+/767368 is in place.
This option is temporary and will be removed in Y, and the behavior will then be True.
Warning: This option is deprecated for removal. Its value may be silently ignored in the
future.
Reason In Y the Nova patch https://review.opendev.org/c/openstack/nova/+/
767368 will be in the code even when running a Nova server in X.
auth_url
Type unknown type
Default <None>
Authentication URL
auth_type
Type unknown type
Default <None>
Authentication type to load
cafile
Type string
Default <None>
PEM encoded Certificate Authority to use when verifying HTTPs connections.
certfile
Type string
Default <None>
PEM encoded client certificate cert file
collect_timing
Type boolean
Default False
Collect per-API call timing information.
default_domain_id
Type unknown type
Default <None>
Optional domain ID to use with v3 and v2 parameters. It will be used for both the user and project
domain in v3 and ignored in v2 authentication.
default_domain_name
Type unknown type
Default <None>
Optional domain name to use with v3 API and v2 parameters. It will be used for both the user and
project domain in v3 and ignored in v2 authentication.
domain_id
Type unknown type
Default <None>
Domain ID to scope to
domain_name
Type unknown type
Default <None>
Domain name to scope to
insecure
Type boolean
Default False
Verify HTTPS connections.
keyfile
Type string
Default <None>
PEM encoded client certificate key file
password
Type unknown type
Default <None>
User's password
project_domain_id
Type unknown type
Default <None>
Domain ID containing project
project_domain_name
Type unknown type
Default <None>
Domain name containing project
project_id
Type unknown type
Default <None>
Project ID to scope to
project_name
Type unknown type
Default <None>
Project name to scope to
split_loggers
Type boolean
Default False
Log requests to multiple loggers.
system_scope
Type unknown type
Default <None>
Scope for system operations
tenant_id
Type unknown type
Default <None>
Tenant ID
tenant_name
Type unknown type
Default <None>
Tenant Name
timeout
Type integer
Default <None>
Timeout value for http requests
trust_id
Type unknown type
Default <None>
Trust ID
user_domain_id
Type unknown type
Default <None>
User's domain ID
user_domain_name
Type unknown type
Default <None>
User's domain name
user_id
Type unknown type
Default <None>
User ID
username
Type unknown type
Default <None>
Username
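A sketch of a [nova] section for notifying Compute, following the placeholder conventions used
elsewhere in this guide (controller host, RegionOne, NOVA_PASS):
    [nova]
    auth_url = http://controller:5000
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    region_name = RegionOne
    project_name = service
    username = nova
    password = NOVA_PASS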
oslo_concurrency
disable_process_locking
Type boolean
Default False
Enables or disables inter-process locks.
lock_path
Type string
Default <None>
Directory to use for lock files. For security, the specified directory should only be writable
by the user running the processes that need locking. Defaults to environment variable
OSLO_LOCK_PATH. If external locks are used, a lock path must be set.
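For example, a commonly used lock directory (the path below is an assumption, not mandated by
this option):
    [oslo_concurrency]
    lock_path = /var/lib/neutron/tmp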
oslo_messaging_amqp
container_name
Type string
Default <None>
Name for the AMQP container. Must be globally unique. Defaults to a generated UUID
idle_timeout
Type integer
Default 0
Timeout for inactive connections (in seconds)
trace
Type boolean
Default False
Debug: dump AMQP frames to stdout
ssl
Type boolean
Default False
Attempt to connect via SSL. If no other ssl-related parameters are given, it will use the system's
CA bundle to verify the server's certificate.
ssl_ca_file
Type string
Default ''
CA certificate PEM file used to verify the server's certificate
ssl_cert_file
Type string
Default ''
Self-identifying certificate PEM file for client authentication
ssl_key_file
Type string
Default ''
Private key PEM file used to sign ssl_cert_file certificate (optional)
ssl_key_password
Type string
Default <None>
Password for decrypting ssl_key_file (if encrypted)
ssl_verify_vhost
Type boolean
Default False
By default SSL checks that the name in the server's certificate matches the hostname in the
transport_url. In some configurations it may be preferable to use the virtual hostname instead,
for example if the server uses the Server Name Indication TLS extension (rfc6066) to provide a
certificate per virtual host. Set ssl_verify_vhost to True if the server's SSL certificate uses the
virtual host name instead of the DNS name.
sasl_mechanisms
Type string
Default ''
Space separated list of acceptable SASL mechanisms
sasl_config_dir
Type string
Default ''
Path to directory that contains the SASL configuration
sasl_config_name
Type string
Default ''
Name of configuration file (without .conf suffix)
sasl_default_realm
Type string
Default ''
SASL realm to use if no realm present in username
connection_retry_interval
Type integer
Default 1
Minimum Value 1
Seconds to pause before attempting to re-connect.
connection_retry_backoff
Type integer
Default 2
Minimum Value 0
Increase the connection_retry_interval by this many seconds after each unsuccessful failover at-
tempt.
connection_retry_interval_max
Type integer
Default 30
Minimum Value 1
Maximum limit for connection_retry_interval + connection_retry_backoff
link_retry_delay
Type integer
Default 10
Minimum Value 1
Time to pause between re-connecting an AMQP 1.0 link that failed due to a recoverable error.
default_reply_retry
Type integer
Default 0
Minimum Value -1
The maximum number of attempts to re-send a reply message which failed due to a recoverable
error.
default_reply_timeout
Type integer
Default 30
Minimum Value 5
The deadline for an rpc reply message delivery.
default_send_timeout
Type integer
Default 30
Minimum Value 5
The deadline for an rpc cast or call message delivery. Only used when caller does not provide a
timeout expiry.
default_notify_timeout
Type integer
Default 30
Minimum Value 5
The deadline for a sent notification message delivery. Only used when caller does not provide a
timeout expiry.
default_sender_link_timeout
Type integer
Default 600
Minimum Value 1
The duration to schedule a purge of idle sender links. Detach link after expiry.
addressing_mode
Type string
Default dynamic
Indicates the addressing mode used by the driver. Permitted values: legacy - use legacy
non-routable addressing; routable - use routable addresses; dynamic - use legacy addresses if the
message bus does not support routing, otherwise use routable addressing.
pseudo_vhost
Type boolean
Default True
Enable virtual host support for those message buses that do not natively support virtual hosting
(such as qpidd). When set to true the virtual host name will be added to all message bus addresses,
effectively creating a private subnet per virtual host. Set to False if the message bus supports
virtual hosting using the hostname field in the AMQP 1.0 Open performative as the name of the
virtual host.
server_request_prefix
Type string
Default exclusive
address prefix used when sending to a specific server
broadcast_prefix
Type string
Default broadcast
address prefix used when broadcasting to all servers
group_request_prefix
Type string
Default unicast
address prefix when sending to any server in group
rpc_address_prefix
Type string
Default openstack.org/om/rpc
Address prefix for all generated RPC addresses
notify_address_prefix
Type string
Default openstack.org/om/notify
Address prefix for all generated Notification addresses
multicast_address
Type string
Default multicast
Appended to the address prefix when sending a fanout message. Used by the message bus to
identify fanout messages.
unicast_address
Type string
Default unicast
Appended to the address prefix when sending to a particular RPC/Notification server. Used by the
message bus to identify messages sent to a single destination.
anycast_address
Type string
Default anycast
Appended to the address prefix when sending to a group of consumers. Used by the message bus
to identify messages that should be delivered in a round-robin fashion across consumers.
default_notification_exchange
Type string
Default <None>
Exchange name used in notification addresses. Exchange name resolution precedence: Tar-
get.exchange if set else default_notification_exchange if set else control_exchange if set else no-
tify
default_rpc_exchange
Type string
Default <None>
Exchange name used in RPC addresses. Exchange name resolution precedence: Target.exchange
if set else default_rpc_exchange if set else control_exchange if set else rpc
reply_link_credit
Type integer
Default 200
Minimum Value 1
Window size for incoming RPC Reply messages.
rpc_server_credit
Type integer
Default 100
Minimum Value 1
Window size for incoming RPC Request messages
notify_server_credit
Type integer
Default 100
Minimum Value 1
Window size for incoming Notification messages
pre_settled
Type multi-valued
Default rpc-cast
Default rpc-reply
Send messages of this type pre-settled. Pre-settled messages will not receive acknowledgement
from the peer. Note well: pre-settled messages may be silently discarded if the delivery fails.
Permitted values: rpc-call - send RPC Calls pre-settled; rpc-reply - send RPC Replies pre-settled;
rpc-cast - send RPC Casts pre-settled; notify - send Notifications pre-settled
oslo_messaging_kafka
kafka_max_fetch_bytes
Type integer
Default 1048576
Max fetch bytes of Kafka consumer
kafka_consumer_timeout
Type floating point
Default 1.0
Default timeout(s) for Kafka consumers
pool_size
Type integer
Default 10
Pool Size for Kafka Consumers
Warning: This option is deprecated for removal. Its value may be silently ignored in the
future.
Reason Driver no longer uses connection pool.
conn_pool_min_size
Type integer
Default 2
The pool size limit for connections expiration policy
Warning: This option is deprecated for removal. Its value may be silently ignored in the
future.
Reason Driver no longer uses connection pool.
conn_pool_ttl
Type integer
Default 1200
The time-to-live in sec of idle connections in the pool
Warning: This option is deprecated for removal. Its value may be silently ignored in the
future.
Reason Driver no longer uses connection pool.
consumer_group
Type string
Default oslo_messaging_consumer
Group id for Kafka consumer. Consumers in one group will coordinate message consumption
producer_batch_timeout
Type floating point
Default 0.0
Upper bound on the delay for KafkaProducer batching in seconds
producer_batch_size
Type integer
Default 16384
Size of batch for the producer async send
oslo_messaging_notifications
driver
Type multi-valued
Default ''
The driver(s) to handle sending notifications. Possible values are messaging, messagingv2,
routing, log, test, noop
transport_url
Type string
Default <None>
A URL representing the messaging driver to use for notifications. If not set, we fall back to the
same configuration used for RPC.
topics
Type list
Default ['notifications']
AMQP topic used for OpenStack notifications.
retry
Type integer
Default -1
The maximum number of attempts to re-send a notification message which failed to be delivered
due to a recoverable error. 0 - No retry, -1 - indefinite
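A minimal sketch enabling notifications with one of the drivers listed above; the topic shown is
simply the default:
    [oslo_messaging_notifications]
    driver = messagingv2
    topics = notifications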
oslo_messaging_rabbit
amqp_durable_queues
Type boolean
Default False
Use durable queues in AMQP.
amqp_auto_delete
Type boolean
Default False
Auto-delete queues in AMQP.
ssl
Type boolean
Default False
Connect over SSL.
ssl_version
Type string
Default ''
SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2,
SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions.
ssl_key_file
Type string
Default ''
SSL key file (valid only if SSL enabled).
ssl_cert_file
Type string
Default ''
SSL cert file (valid only if SSL enabled).
ssl_ca_file
Type string
Default ''
SSL certification authority file (valid only if SSL enabled).
heartbeat_in_pthread
Type boolean
Default True
Run the health check heartbeat thread through a native python thread by default. If this option is
equal to False then the health check heartbeat will inherit the execution model from the parent pro-
cess. For example if the parent process has monkey patched the stdlib by using eventlet/greenlet
then the heartbeat will be run through a green thread.
Warning: This option is deprecated for removal. Its value may be silently ignored in the
future.
kombu_reconnect_delay
Type floating point
Default 1.0
How long to wait before reconnecting in response to an AMQP consumer cancel notification.
kombu_compression
Type string
Default <None>
EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will not be used. This
option may not be available in future versions.
kombu_missing_consumer_retry_timeout
Type integer
Default 60
How long to wait for a missing client before abandoning the attempt to send it its replies. This
value should not be longer than rpc_response_timeout.
kombu_failover_strategy
Type string
Default round-robin
Valid Values round-robin, shuffle
Determines how the next RabbitMQ node is chosen in case the one we are currently connected to
becomes unavailable. Takes effect only if more than one RabbitMQ node is provided in config.
rabbit_login_method
Type string
Default AMQPLAIN
Valid Values PLAIN, AMQPLAIN, RABBIT-CR-DEMO
The RabbitMQ login method.
rabbit_retry_interval
Type integer
Default 1
How frequently to retry connecting with RabbitMQ.
rabbit_retry_backoff
Type integer
Default 2
How long to backoff for between retries when connecting to RabbitMQ.
rabbit_interval_max
Type integer
Default 30
Maximum interval of RabbitMQ connection retries. Default is 30 seconds.
rabbit_ha_queues
Type boolean
Default False
Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this option, you must wipe
the RabbitMQ database. In RabbitMQ 3.0, queue mirroring is no longer controlled by the x-ha-
policy argument when declaring a queue. If you just want to make sure that all queues (except
those with auto-generated names) are mirrored across all nodes, run: rabbitmqctl set_policy HA
'^(?!amq\.).*' '{"ha-mode": "all"}'
rabbit_transient_queues_ttl
Type integer
Default 1800
Minimum Value 1
Positive integer representing duration in seconds for queue TTL (x-expires). Queues which are
unused for the duration of the TTL are automatically deleted. The parameter affects only reply
and fanout queues.
rabbit_qos_prefetch_count
Type integer
Default 0
Specifies the number of messages to prefetch. Setting to zero allows unlimited messages.
heartbeat_timeout_threshold
Type integer
Default 60
Number of seconds after which the Rabbit broker is considered down if heartbeats keep-alive fails
(0 disables heartbeat).
heartbeat_rate
Type integer
Default 2
How many times during the heartbeat_timeout_threshold we check the heartbeat.
direct_mandatory_flag
Type boolean
Default True
(DEPRECATED) Enable/Disable the RabbitMQ mandatory flag for direct send. The direct send
is used as reply, so the MessageUndeliverable exception is raised in case the client queue does
not exist. The MessageUndeliverable exception is used to loop for a timeout to give the sender a
chance to recover. This flag is deprecated and it will no longer be possible to deactivate this
functionality.
Warning: This option is deprecated for removal. Its value may be silently ignored in the
future.
Reason Mandatory flag no longer deactivable.
enable_cancel_on_failover
Type boolean
Default False
Enable the x-cancel-on-ha-failover flag so that the RabbitMQ server will cancel and notify
consumers when the queue is down
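An illustrative [oslo_messaging_rabbit] sketch combining several of the options above; the CA file
path is a placeholder and the other values are examples only:
    [oslo_messaging_rabbit]
    ssl = true
    ssl_ca_file = /etc/ssl/certs/rabbitmq-ca.pem
    heartbeat_timeout_threshold = 60
    rabbit_ha_queues = true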
oslo_middleware
enable_proxy_headers_parsing
Type boolean
Default False
Whether the application is behind a proxy or not. This determines if the middleware should parse
the headers or not.
oslo_policy
enforce_scope
Type boolean
Default False
This option controls whether or not to enforce scope when evaluating policies. If True, the scope
of the token used in the request is compared to the scope_types of the policy being enforced.
If the scopes do not match, an InvalidScope exception will be raised. If False, a message
will be logged informing operators that policies are being invoked with mismatching scope.
enforce_new_defaults
Type boolean
Default False
This option controls whether or not to use old deprecated defaults when evaluating policies. If
True, the old deprecated defaults are not going to be evaluated. This means if any existing token
is allowed for old defaults but is disallowed for new defaults, it will be disallowed. It is encouraged
to enable this flag along with the enforce_scope flag so that you can get the benefits of new
defaults and scope_type together
policy_file
Type string
Default policy.yaml
The relative or absolute path of a file that maps roles to permissions for a given service. Relative
paths must be specified in relation to the configuration file setting this option.
policy_default_rule
Type string
Default default
Default rule. Enforced when a requested rule is not found.
policy_dirs
Type multi-valued
Default policy.d
Directories where policy configuration files are stored. They can be relative to any directory in
the search path defined by the config_dir option, or absolute paths. The file defined by policy_file
must exist for these directories to be searched. Missing or empty directories are ignored.
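A sketch of an [oslo_policy] section opting into the new defaults and scope enforcement described
above:
    [oslo_policy]
    policy_file = policy.yaml
    enforce_scope = true
    enforce_new_defaults = true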
remote_content_type
Type string
Default application/x-www-form-urlencoded
Valid Values application/x-www-form-urlencoded, application/json
Content Type to send and receive data for REST based policy check
remote_ssl_verify_server_crt
Type boolean
Default False
server identity verification for REST based policy check
remote_ssl_ca_crt_file
Type string
Default <None>
Absolute path to ca cert file for REST based policy check
remote_ssl_client_crt_file
Type string
Default <None>
Absolute path to client cert for REST based policy check
remote_ssl_client_key_file
Type string
Default <None>
Absolute path to client key file for REST based policy check
privsep
Configuration options for the oslo.privsep daemon. Note that this group name can be changed by
the consuming service. Check the service's docs to see if this is the case.
user
Type string
Default <None>
User that the privsep daemon should run as.
group
Type string
Default <None>
Group that the privsep daemon should run as.
capabilities
Type unknown type
Default []
List of Linux capabilities retained by the privsep daemon.
thread_pool_size
Type integer
Default multiprocessing.cpu_count()
Minimum Value 1
This option has a sample default set, which means that its actual default value may vary from the
one documented above.
The number of threads available for privsep to concurrently run processes. Defaults to the number
of CPU cores in the system.
helper_command
Type string
Default <None>
Command to invoke to start the privsep daemon if not using the fork method. If not specified,
a default is generated using sudo privsep-helper and arguments designed to recreate the current
configuration. This command must accept suitable privsep_context and privsep_sock_path argu-
ments.
quotas
default_quota
Type integer
Default -1
Default number of resources allowed per tenant. A negative value means unlimited.
quota_network
Type integer
Default 100
Number of networks allowed per tenant. A negative value means unlimited.
quota_subnet
Type integer
Default 100
Number of subnets allowed per tenant. A negative value means unlimited.
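For example, raising the per-tenant limits (the numbers are arbitrary illustrations, not
recommendations):
    [quotas]
    default_quota = -1
    quota_network = 200
    quota_subnet = 200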
ssl
ca_file
Type string
Default <None>
CA certificate file to use to verify connecting clients.
cert_file
Type string
Default <None>
Certificate file to use when starting the server securely.
key_file
Type string
Default <None>
Private key file to use when starting the server securely.
version
Type string
Default <None>
SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and SSLv23. SSLv2,
SSLv3, TLSv1_1, and TLSv1_2 may be available on some distributions.
ciphers
Type string
Default <None>
Sets the list of available ciphers. The value should be a string in the OpenSSL cipher list format.
9.1.2 ml2_conf.ini
DEFAULT
debug
Type boolean
Default False
Mutable This option can be changed without restarting.
If set to true, the logging level will be set to DEBUG instead of the default INFO level.
log_config_append
Type string
Default <None>
Mutable This option can be changed without restarting.
The name of a logging configuration file. This file is appended to any existing logging config-
uration files. For details about logging configuration files, see the Python logging module doc-
umentation. Note that when logging configuration files are used then all logging configuration
is set in the configuration file and other logging configuration options are ignored (for example,
log-date-format).
log_date_format
Type string
Default %Y-%m-%d %H:%M:%S
Defines the format string for %(asctime)s in log records. Default: the value above. This option is
ignored if log_config_append is set.
log_file
Type string
Default <None>
(Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr
as defined by use_stderr. This option is ignored if log_config_append is set.
log_dir
Type string
Default <None>
(Optional) The base directory used for relative log_file paths. This option is ignored if
log_config_append is set.
watch_log_file
Type boolean
Default False
Uses logging handler designed to watch file system. When log file is moved or removed this
handler will open a new log file with specified path instantaneously. It makes sense only if log_file
option is specified and Linux platform is used. This option is ignored if log_config_append is set.
use_syslog
Type boolean
Default False
Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to
honor RFC5424. This option is ignored if log_config_append is set.
use_journal
Type boolean
Default False
Enable journald for logging. If running in a systemd environment you may wish to enable journal
support. Doing so will use the journal native protocol which includes structured metadata in
addition to log messages. This option is ignored if log_config_append is set.
syslog_log_facility
Type string
Default LOG_USER
Syslog facility to receive log lines. This option is ignored if log_config_append is set.
use_json
Type boolean
Default False
Use JSON formatting for logging. This option is ignored if log_config_append is set.
use_stderr
Type boolean
Default False
Log output to standard error. This option is ignored if log_config_append is set.
use_eventlog
Type boolean
Default False
Log output to Windows Event Log.
log_rotate_interval
Type integer
Default 1
The amount of time before the log files are rotated. This option is ignored unless log_rotation_type
is set to interval.
log_rotate_interval_type
Type string
Default days
Valid Values Seconds, Minutes, Hours, Days, Weekday, Midnight
Rotation interval type. The time of the last file change (or the time when the service was started)
is used when scheduling the next rotation.
max_logfile_count
Type integer
Default 30
Maximum number of rotated log files.
max_logfile_size_mb
Type integer
Default 200
Log file maximum size in MB. This option is ignored if log_rotation_type is not set to size.
log_rotation_type
Type string
Default none
Valid Values interval, size, none
Log rotation type.
logging_context_format_string
Type string
Default %(asctime)s.%(msecs)03d %(process)d %(levelname)s
%(name)s [%(request_id)s %(user_identity)s]
%(instance)s%(message)s
Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter
ml2
type_drivers
Type list
Default ['local', 'flat', 'vlan', 'gre', 'vxlan', 'geneve']
List of network type driver entrypoints to be loaded from the neutron.ml2.type_drivers namespace.
tenant_network_types
Type list
Default ['local']
Ordered list of network_types to allocate as tenant networks. The default value local is useful for
single-box testing but provides no connectivity between hosts.
mechanism_drivers
Type list
Default []
An ordered list of networking mechanism driver entrypoints to be loaded from the neu-
tron.ml2.mechanism_drivers namespace.
extension_drivers
Type list
Default []
An ordered list of extension driver entrypoints to be loaded from the neu-
tron.ml2.extension_drivers namespace. For example: extension_drivers = port_security,qos
path_mtu
Type integer
Default 0
Maximum size of an IP packet (MTU) that can traverse the underlying physical network infras-
tructure without fragmentation when using an overlay/tunnel protocol. This option allows speci-
fying a physical network MTU value that differs from the default global_physnet_mtu value.
physical_network_mtus
Type list
Default []
A list of mappings of physical networks to MTU values. The format of the mapping is <phys-
net>:<mtu val>. This mapping allows specifying a physical network MTU value that differs from
the default global_physnet_mtu value.
external_network_type
Type string
Default <None>
Default network type for external networks when no provider attributes are specified. By default it
is None, which means that if provider attributes are not specified while creating external networks
then they will have the same type as tenant networks. Allowed values for external_network_type
config option depend on the network type values configured in type_drivers config option.
overlay_ip_version
Type integer
Default 4
IP version of all overlay (tunnel) network endpoints. Use a value of 4 for IPv4 or 6 for IPv6.
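A sketch of an [ml2] section for a self-service (VXLAN) deployment in the spirit of this guide; the
driver choices are illustrative, not the only supported combination:
    [ml2]
    type_drivers = flat,vlan,vxlan
    tenant_network_types = vxlan
    mechanism_drivers = linuxbridge,l2population
    extension_drivers = port_security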
ml2_type_flat
flat_networks
Type list
Default *
List of physical_network names with which flat networks can be created. Use default * to allow
flat networks with arbitrary physical_network names. Use an empty list to disable flat networks.
ml2_type_geneve
vni_ranges
Type list
Default []
Comma-separated list of <vni_min>:<vni_max> tuples enumerating ranges of Geneve VNI IDs
that are available for tenant network allocation
max_header_size
Type integer
Default 30
Geneve encapsulation header size is dynamic, this value is used to calculate the maximum MTU
for the driver. The default size for this field is 30, which is the size of the Geneve header without
any additional option headers.
ml2_type_gre
tunnel_id_ranges
Type list
Default []
Comma-separated list of <tun_min>:<tun_max> tuples enumerating ranges of GRE tunnel IDs
that are available for tenant network allocation
ml2_type_vlan
network_vlan_ranges
Type list
Default []
List of <physical_network>:<vlan_min>:<vlan_max> or <physical_network> specifying physi-
cal_network names usable for VLAN provider and tenant networks, as well as ranges of VLAN
tags on each available for allocation to tenant networks.
ml2_type_vxlan
vni_ranges
Type list
Default []
Comma-separated list of <vni_min>:<vni_max> tuples enumerating ranges of VXLAN VNI IDs
that are available for tenant network allocation
vxlan_group
Type string
Default <None>
Multicast group for VXLAN. When configured, will enable sending all broadcast traffic to this
multicast group. When left unconfigured, will disable multicast VXLAN mode.
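Example type-driver sections, assuming a physical network named provider; the VLAN and VNI
ranges are placeholders:
    [ml2_type_flat]
    flat_networks = provider
    [ml2_type_vlan]
    network_vlan_ranges = provider:100:199
    [ml2_type_vxlan]
    vni_ranges = 1:1000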
ovs_driver
vnic_type_prohibit_list
Type list
Default []
Comma-separated list of VNIC types for which support is administratively prohibited by the
mechanism driver. Please note that the supported vnic_types depend on your network interface
card, on the kernel version of your operating system, and on other factors, like OVS version. In
case of the ovs mechanism driver the valid vnic_types are normal and direct. Note that direct is
supported only from kernel 4.8 and from ovs 2.8.0. Binding a DIRECT (SR-IOV) port allows
offloading the OVS flows to the SR-IOV NIC using tc. This enables hardware offload via tc and
allows managing the VF with the OpenFlow control plane using a representor net-device.
securitygroup
firewall_driver
Type string
Default <None>
Driver for security groups firewall in the L2 agent
enable_security_group
Type boolean
Default True
Controls whether the neutron security group API is enabled in the server. It should be false when
using no security groups or using the nova security group API.
enable_ipset
Type boolean
Default True
Use ipset to speed-up the iptables based security groups. Enabling ipset support requires that ipset
is installed on L2 agent node.
permitted_ethertypes
Type list
Default []
Comma-separated list of ethertypes to be permitted, in hexadecimal (starting with 0x). For exam-
ple, 0x4008 to permit InfiniBand.
sriov_driver
vnic_type_prohibit_list
Type list
Default []
Comma-separated list of VNIC types for which support is administratively prohibited by the
mechanism driver. Please note that the supported vnic_types depend on your network interface
card, on the kernel version of your operating system, and on other factors. In case of sriov mech-
anism driver the valid VNIC types are direct, macvtap and direct-physical.
9.1.3 linuxbridge_agent.ini
DEFAULT
rpc_response_max_timeout
Type integer
Default 600
Maximum seconds to wait for a response from an RPC call.
debug
Type boolean
Default False
Mutable This option can be changed without restarting.
If set to true, the logging level will be set to DEBUG instead of the default INFO level.
log_config_append
Type string
Default <None>
Mutable This option can be changed without restarting.
The name of a logging configuration file. This file is appended to any existing logging config-
uration files. For details about logging configuration files, see the Python logging module doc-
umentation. Note that when logging configuration files are used then all logging configuration
is set in the configuration file and other logging configuration options are ignored (for example,
log-date-format).
log_date_format
Type string
Default %Y-%m-%d %H:%M:%S
Defines the format string for %(asctime)s in log records. Default: the value above. This option is
ignored if log_config_append is set.
log_file
Type string
Default <None>
(Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr
as defined by use_stderr. This option is ignored if log_config_append is set.
log_dir
Type string
Default <None>
(Optional) The base directory used for relative log_file paths. This option is ignored if
log_config_append is set.
watch_log_file
Type boolean
Default False
Uses logging handler designed to watch file system. When log file is moved or removed this
handler will open a new log file with specified path instantaneously. It makes sense only if log_file
option is specified and Linux platform is used. This option is ignored if log_config_append is set.
use_syslog
Type boolean
Default False
Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to
honor RFC5424. This option is ignored if log_config_append is set.
use_journal
Type boolean
Default False
Enable journald for logging. If running in a systemd environment you may wish to enable journal
support. Doing so will use the journal native protocol which includes structured metadata in
addition to log messages. This option is ignored if log_config_append is set.
syslog_log_facility
Type string
Default LOG_USER
Syslog facility to receive log lines. This option is ignored if log_config_append is set.
use_json
Type boolean
Default False
Use JSON formatting for logging. This option is ignored if log_config_append is set.
use_stderr
Type boolean
Default False
Log output to standard error. This option is ignored if log_config_append is set.
use_eventlog
Type boolean
Default False
Log output to Windows Event Log.
log_rotate_interval
Type integer
Default 1
The amount of time before the log files are rotated. This option is ignored unless log_rotation_type
is set to interval.
log_rotate_interval_type
Type string
Default days
Valid Values Seconds, Minutes, Hours, Days, Weekday, Midnight
Rotation interval type. The time of the last file change (or the time when the service was started)
is used when scheduling the next rotation.
max_logfile_count
Type integer
Default 30
Maximum number of rotated log files.
max_logfile_size_mb
Type integer
Default 200
Log file maximum size in MB. This option is ignored if log_rotation_type is not set to size.
log_rotation_type
Type string
Default none
Valid Values interval, size, none
Log rotation type.
logging_context_format_string
Type string
Default %(asctime)s.%(msecs)03d %(process)d %(levelname)s
%(name)s [%(request_id)s %(user_identity)s]
%(instance)s%(message)s
Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter
logging_default_format_string
Type string
Default %(asctime)s.%(msecs)03d %(process)d %(levelname)s
%(name)s [-] %(instance)s%(message)s
Format string to use for log messages when context is undefined. Used by
oslo_log.formatters.ContextFormatter
logging_debug_format_suffix
Type string
Default %(funcName)s %(pathname)s:%(lineno)d
Additional data to append to log message when logging level for the message is DEBUG. Used
by oslo_log.formatters.ContextFormatter
logging_exception_prefix
Type string
Default %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s
%(instance)s
Prefix each line of exception output with this format. Used by
oslo_log.formatters.ContextFormatter
logging_user_identity_format
Type string
Default %(user)s %(tenant)s %(domain)s %(user_domain)s
%(project_domain)s
Defines the format string for %(user_identity)s that is used in logging_context_format_string.
Used by oslo_log.formatters.ContextFormatter
default_log_levels
Type list
Default ['amqp=WARN', 'amqplib=WARN', 'boto=WARN',
'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO',
'oslo.messaging=INFO', 'oslo_messaging=INFO',
'iso8601=WARN', 'requests.packages.urllib3.
connectionpool=WARN', 'urllib3.connectionpool=WARN',
'websocket=WARN', 'requests.packages.
urllib3.util.retry=WARN', 'urllib3.util.
retry=WARN', 'keystonemiddleware=WARN', 'routes.
middleware=WARN', 'stevedore=WARN', 'taskflow=WARN',
'keystoneauth=WARN', 'oslo.cache=INFO',
'oslo_policy=INFO', 'dogpile.core.dogpile=INFO']
List of package logging levels in logger=LEVEL pairs. This option is ignored if
log_config_append is set.
publish_errors
Type boolean
Default False
Enables or disables publication of error events.
instance_format
Type string
Default "[instance: %(uuid)s] "
The format for an instance that is passed with the log message.
instance_uuid_format
Type string
Default "[instance: %(uuid)s] "
The format for an instance UUID that is passed with the log message.
rate_limit_interval
Type integer
Default 0
Interval, number of seconds, of log rate limiting.
rate_limit_burst
Type integer
Default 0
Maximum number of logged messages per rate_limit_interval.
rate_limit_except_level
Type string
Default CRITICAL
Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty
string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string
means that all levels are filtered.
fatal_deprecations
Type boolean
Default False
Enables or disables fatal status of deprecations.
agent
polling_interval
Type integer
Default 2
The number of seconds the agent will wait between polling for local device changes.
quitting_rpc_timeout
Type integer
Default 10
Set new timeout in seconds for new rpc calls after agent receives SIGTERM. If value is set to 0,
rpc timeout won't be changed
dscp
Type integer
Default <None>
Minimum Value 0
Maximum Value 63
The DSCP value to use for outer headers during tunnel encapsulation.
dscp_inherit
Type boolean
Default False
If set to True, the DSCP value of tunnel interfaces is overwritten and set to inherit. The DSCP
value of the inner header is then copied to the outer header.
extensions
Type list
Default []
Extensions list to use
linux_bridge
physical_interface_mappings
Type list
Default []
Comma-separated list of <physical_network>:<physical_interface> tuples mapping physical
network names to the agent's node-specific physical network interfaces to be used for flat and
VLAN networks. All physical networks listed in network_vlan_ranges on the server should have
mappings to appropriate interfaces on each agent.
bridge_mappings
Type list
Default []
List of <physical_network>:<physical_bridge>
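A sketch of a [linux_bridge] section, with PROVIDER_INTERFACE_NAME standing in for the node's
actual provider interface, in the placeholder style used elsewhere in this guide:
    [linux_bridge]
    physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME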
network_log
rate_limit
Type integer
Default 100
Minimum Value 100
Maximum packets logging per second.
burst_limit
Type integer
Default 25
Minimum Value 25
Maximum number of packets per rate_limit.
local_output_log_base
Type string
Default <None>
Output logfile path on agent side, default syslog file.
securitygroup
firewall_driver
Type string
Default <None>
Driver for security groups firewall in the L2 agent
enable_security_group
Type boolean
Default True
Controls whether the neutron security group API is enabled in the server. It should be false when
using no security groups or using the nova security group API.
enable_ipset
Type boolean
Default True
Use ipset to speed-up the iptables based security groups. Enabling ipset support requires that ipset
is installed on L2 agent node.
permitted_ethertypes
Type list
Default []
Comma-separated list of ethertypes to be permitted, in hexadecimal (starting with 0x). For exam-
ple, 0x4008 to permit InfiniBand.
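A [securitygroup] sketch enabling the iptables-based firewall; the driver class shown is the in-tree
iptables firewall driver and is given only as an example:
    [securitygroup]
    enable_security_group = true
    enable_ipset = true
    firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver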
vxlan
enable_vxlan
Type boolean
Default True
Enable VXLAN on the agent. Can be enabled when the agent is managed by the ml2 plugin using
the linuxbridge mechanism driver
ttl
Type integer
Default <None>
TTL for vxlan interface protocol packets.
tos
Type integer
Default <None>
TOS for vxlan interface protocol packets. This option is deprecated in favor of the dscp option in
the AGENT section and will be removed in a future release. To convert the TOS value to DSCP,
divide by 4.
Warning: This option is deprecated for removal. Its value may be silently ignored in the
future.
vxlan_group
Type string
Default 224.0.0.1
Multicast group(s) for vxlan interface. A range of group addresses may be specified by using
CIDR notation. Specifying a range allows different VNIs to use different group addresses, reduc-
ing or eliminating spurious broadcast traffic to the tunnel endpoints. To reserve a unique group
for each possible (24-bit) VNI, use a /8 such as 239.0.0.0/8. This setting must be the same on all
the agents.
local_ip
Type ip address
Default <None>
IP address of local overlay (tunnel) network endpoint. Use either an IPv4 or IPv6 address that
resides on one of the host network interfaces. The IP version of this value must match the value of
the overlay_ip_version option in the ML2 plug-in configuration file on the neutron server node(s).
udp_srcport_min
Type port number
Default 0
Minimum Value 0
Maximum Value 65535
The minimum of the UDP source port range used for VXLAN communication.
udp_srcport_max
Type port number
Default 0
Minimum Value 0
Maximum Value 65535
The maximum of the UDP source port range used for VXLAN communication.
udp_dstport
Type port number
Default <None>
Minimum Value 0
Maximum Value 65535
The UDP port used for VXLAN communication. By default, the Linux kernel doesn't use the
IANA assigned standard value, so if you want to use it, this option must be set to 4789. It is not
set by default because of backward compatibility.
l2_population
Type boolean
Default False
Extension to use alongside the ml2 plugin's l2population mechanism driver. It enables the plugin
to populate the VXLAN forwarding table.
arp_responder
Type boolean
Default False
Enable local ARP responder which provides local responses instead of performing ARP broadcast
into the overlay. Enabling local ARP responder is not fully compatible with the allowed-address-
pairs extension.
multicast_ranges
Type list
Default []
Optional comma-separated list of <multicast address>:<vni_min>:<vni_max> triples describing
how to assign a multicast address to VXLAN according to its VNI ID.
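A [vxlan] sketch for the Linux bridge agent, with OVERLAY_INTERFACE_IP_ADDRESS standing in
for the node's overlay endpoint address, in the placeholder style used elsewhere in this guide:
    [vxlan]
    enable_vxlan = true
    local_ip = OVERLAY_INTERFACE_IP_ADDRESS
    l2_population = true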
9.1.4 macvtap_agent.ini
DEFAULT
debug
Type boolean
Default False
Mutable This option can be changed without restarting.
If set to true, the logging level will be set to DEBUG instead of the default INFO level.
log_config_append
Type string
Default <None>
Mutable This option can be changed without restarting.
The name of a logging configuration file. This file is appended to any existing logging config-
uration files. For details about logging configuration files, see the Python logging module doc-
umentation. Note that when logging configuration files are used then all logging configuration
is set in the configuration file and other logging configuration options are ignored (for example,
log-date-format).
log_date_format
Type string
Default %Y-%m-%d %H:%M:%S
Defines the format string for %(asctime)s in log records. Default: the value above. This option is
ignored if log_config_append is set.
log_file
Type string
Default <None>
(Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr
as defined by use_stderr. This option is ignored if log_config_append is set.
log_dir
Type string
Default <None>
(Optional) The base directory used for relative log_file paths. This option is ignored if
log_config_append is set.
watch_log_file
Type boolean
Default False
Uses logging handler designed to watch file system. When log file is moved or removed this
handler will open a new log file with specified path instantaneously. It makes sense only if log_file
option is specified and Linux platform is used. This option is ignored if log_config_append is set.
use_syslog
Type boolean
Default False
Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to
honor RFC5424. This option is ignored if log_config_append is set.
use_journal
Type boolean
Default False
Enable journald for logging. If running in a systemd environment you may wish to enable journal
support. Doing so will use the journal native protocol which includes structured metadata in
addition to log messages. This option is ignored if log_config_append is set.
syslog_log_facility
Type string
Default LOG_USER
Syslog facility to receive log lines. This option is ignored if log_config_append is set.
use_json
Type boolean
Default False
Use JSON formatting for logging. This option is ignored if log_config_append is set.
use_stderr
Type boolean
Default False
Log output to standard error. This option is ignored if log_config_append is set.
use_eventlog
Type boolean
Default False
Log output to Windows Event Log.
log_rotate_interval
Type integer
Default 1
The amount of time before the log files are rotated. This option is ignored unless log_rotation_type
is set to interval.
log_rotate_interval_type
Type string
Default days
Valid Values Seconds, Minutes, Hours, Days, Weekday, Midnight
Rotation interval type. The time of the last file change (or the time when the service was started)
is used when scheduling the next rotation.
max_logfile_count
Type integer
Default 30
Maximum number of rotated log files.
max_logfile_size_mb
Type integer
Default 200
Log file maximum size in MB. This option is ignored if log_rotation_type is not set to size.
log_rotation_type
Type string
Default none
Valid Values interval, size, none
Log rotation type.
logging_context_format_string
Type string
Default %(asctime)s.%(msecs)03d %(process)d %(levelname)s
%(name)s [%(request_id)s %(user_identity)s]
%(instance)s%(message)s
Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter
logging_default_format_string
Type string
Default %(asctime)s.%(msecs)03d %(process)d %(levelname)s
%(name)s [-] %(instance)s%(message)s
Format string to use for log messages when context is undefined. Used by
oslo_log.formatters.ContextFormatter
logging_debug_format_suffix
Type string
Default %(funcName)s %(pathname)s:%(lineno)d
Additional data to append to log message when logging level for the message is DEBUG. Used
by oslo_log.formatters.ContextFormatter
logging_exception_prefix
Type string
Default %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s
Prefix each line of exception output with this format. Used by oslo_log.formatters.ContextFormatter
rate_limit_interval
Type integer
Default 0
Interval, number of seconds, of log rate limiting.
rate_limit_burst
Type integer
Default 0
Maximum number of logged messages per rate_limit_interval.
rate_limit_except_level
Type string
Default CRITICAL
Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty
string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string
means that all levels are filtered.
fatal_deprecations
Type boolean
Default False
Enables or disables fatal status of deprecations.
agent
polling_interval
Type integer
Default 2
The number of seconds the agent will wait between polling for local device changes.
quitting_rpc_timeout
Type integer
Default 10
Set new timeout in seconds for new rpc calls after agent receives SIGTERM. If the value is set to
0, the rpc timeout won't be changed.
dscp
Type integer
Default <None>
Minimum Value 0
Maximum Value 63
The DSCP value to use for outer headers during tunnel encapsulation.
dscp_inherit
Type boolean
Default False
If set to True, the DSCP value of tunnel interfaces is overwritten and set to inherit. The DSCP
value of the inner header is then copied to the outer header.
macvtap
physical_interface_mappings
Type list
Default []
Comma-separated list of <physical_network>:<physical_interface> tuples mapping physical net-
work names to the agent's node-specific physical network interfaces to be used for flat and VLAN
networks. All physical networks listed in network_vlan_ranges on the server should have map-
pings to appropriate interfaces on each agent.
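A hedged example of the [macvtap] group, assuming a physical network named physnet1 attached to interface eth1 on this node (both names are illustrative):
    [macvtap]
    # physnet1 must also appear in network_vlan_ranges on the server
    physical_interface_mappings = physnet1:eth1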
securitygroup
firewall_driver
Type string
Default <None>
Driver for security groups firewall in the L2 agent
enable_security_group
Type boolean
Default True
Controls whether the neutron security group API is enabled in the server. It should be false when
using no security groups or using the nova security group API.
enable_ipset
Type boolean
Default True
Use ipset to speed-up the iptables based security groups. Enabling ipset support requires that ipset
is installed on L2 agent node.
permitted_ethertypes
Type list
Default []
Comma-separated list of ethertypes to be permitted, in hexadecimal (starting with 0x). For exam-
ple, 0x4008 to permit InfiniBand.
9.1.5 openvswitch_agent.ini
DEFAULT
rpc_response_max_timeout
Type integer
Default 600
Maximum seconds to wait for a response from an RPC call.
debug
Type boolean
Default False
Mutable This option can be changed without restarting.
If set to true, the logging level will be set to DEBUG instead of the default INFO level.
log_config_append
Type string
Default <None>
Mutable This option can be changed without restarting.
The name of a logging configuration file. This file is appended to any existing logging config-
uration files. For details about logging configuration files, see the Python logging module doc-
umentation. Note that when logging configuration files are used then all logging configuration
is set in the configuration file and other logging configuration options are ignored (for example,
log-date-format).
log_date_format
Type string
Default %Y-%m-%d %H:%M:%S
Defines the format string for %(asctime)s in log records. Default: the value above . This option is
ignored if log_config_append is set.
log_file
Type string
Default <None>
(Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr
as defined by use_stderr. This option is ignored if log_config_append is set.
log_dir
Type string
Default <None>
(Optional) The base directory used for relative log_file paths. This option is ignored if
log_config_append is set.
watch_log_file
Type boolean
Default False
Uses logging handler designed to watch file system. When log file is moved or removed this
handler will open a new log file with specified path instantaneously. It makes sense only if log_file
option is specified and Linux platform is used. This option is ignored if log_config_append is set.
use_syslog
Type boolean
Default False
Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to
honor RFC5424. This option is ignored if log_config_append is set.
use_journal
Type boolean
Default False
Enable journald for logging. If running in a systemd environment you may wish to enable journal
support. Doing so will use the journal native protocol which includes structured metadata in
addition to log messages. This option is ignored if log_config_append is set.
syslog_log_facility
Type string
Default LOG_USER
Syslog facility to receive log lines. This option is ignored if log_config_append is set.
use_json
Type boolean
Default False
Use JSON formatting for logging. This option is ignored if log_config_append is set.
use_stderr
Type boolean
Default False
Log output to standard error. This option is ignored if log_config_append is set.
use_eventlog
Type boolean
Default False
Log output to Windows Event Log.
log_rotate_interval
Type integer
Default 1
The amount of time before the log files are rotated. This option is ignored unless log_rotation_type
is set to interval.
log_rotate_interval_type
Type string
Default days
Valid Values Seconds, Minutes, Hours, Days, Weekday, Midnight
Rotation interval type. The time of the last file change (or the time when the service was started)
is used when scheduling the next rotation.
max_logfile_count
Type integer
Default 30
Maximum number of rotated log files.
max_logfile_size_mb
Type integer
Default 200
Log file maximum size in MB. This option is ignored if log_rotation_type is not set to size.
log_rotation_type
Type string
Default none
Valid Values interval, size, none
Log rotation type.
logging_context_format_string
Type string
Default %(asctime)s.%(msecs)03d %(process)d %(levelname)s
%(name)s [%(request_id)s %(user_identity)s]
%(instance)s%(message)s
Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter
logging_default_format_string
Type string
Default %(asctime)s.%(msecs)03d %(process)d %(levelname)s
%(name)s [-] %(instance)s%(message)s
Format string to use for log messages when context is undefined. Used by
oslo_log.formatters.ContextFormatter
logging_debug_format_suffix
Type string
Default %(funcName)s %(pathname)s:%(lineno)d
Additional data to append to log message when logging level for the message is DEBUG. Used
by oslo_log.formatters.ContextFormatter
logging_exception_prefix
Type string
Default %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s
%(instance)s
Prefix each line of exception output with this format. Used by
oslo_log.formatters.ContextFormatter
logging_user_identity_format
Type string
Default %(user)s %(tenant)s %(domain)s %(user_domain)s
%(project_domain)s
Defines the format string for %(user_identity)s that is used in logging_context_format_string.
Used by oslo_log.formatters.ContextFormatter
default_log_levels
Type list
Default ['amqp=WARN', 'amqplib=WARN', 'boto=WARN',
'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO',
'oslo.messaging=INFO', 'oslo_messaging=INFO',
'iso8601=WARN', 'requests.packages.urllib3.
connectionpool=WARN', 'urllib3.connectionpool=WARN',
'websocket=WARN', 'requests.packages.
urllib3.util.retry=WARN', 'urllib3.util.
retry=WARN', 'keystonemiddleware=WARN', 'routes.
middleware=WARN', 'stevedore=WARN', 'taskflow=WARN',
'keystoneauth=WARN', 'oslo.cache=INFO',
'oslo_policy=INFO', 'dogpile.core.dogpile=INFO']
List of package logging levels in logger=LEVEL pairs. This option is ignored if
log_config_append is set.
publish_errors
Type boolean
Default False
Enables or disables publication of error events.
instance_format
Type string
Default "[instance: %(uuid)s] "
The format for an instance that is passed with the log message.
instance_uuid_format
Type string
Default "[instance: %(uuid)s] "
The format for an instance UUID that is passed with the log message.
rate_limit_interval
Type integer
Default 0
Interval, number of seconds, of log rate limiting.
rate_limit_burst
Type integer
Default 0
Maximum number of logged messages per rate_limit_interval.
rate_limit_except_level
Type string
Default CRITICAL
Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty
string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string
means that all levels are filtered.
fatal_deprecations
Type boolean
Default False
Enables or disables fatal status of deprecations.
agent
minimize_polling
Type boolean
Default True
Minimize polling by monitoring ovsdb for interface changes.
ovsdb_monitor_respawn_interval
Type integer
Default 30
The number of seconds to wait before respawning the ovsdb monitor after losing communication
with it.
tunnel_types
Type list
Default []
Network types supported by the agent (gre, vxlan and/or geneve).
vxlan_udp_port
Type port number
Default 4789
Minimum Value 0
Maximum Value 65535
The UDP port to use for VXLAN tunnels.
veth_mtu
Type integer
Default 9000
MTU size of veth interfaces
l2_population
Type boolean
Default False
Use ML2 l2population mechanism driver to learn remote MAC and IPs and improve tunnel scal-
ability.
arp_responder
Type boolean
Default False
Enable local ARP responder if it is supported. Requires OVS 2.1 and ML2 l2population driver.
Allows the switch (when supporting an overlay) to respond to an ARP request locally without
performing a costly ARP broadcast into the overlay. NOTE: If enable_distributed_routing is set
to True then arp_responder will automatically be set to True in the agent, regardless of the setting
in the config file.
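As a sketch only, the tunnel-related options in the [agent] group of openvswitch_agent.ini might be combined as follows (enabling l2_population here assumes the l2population mechanism driver is also enabled on the server side):
    [agent]
    tunnel_types = vxlan
    vxlan_udp_port = 4789
    l2_population = true
    # Requires OVS >= 2.1 and the ML2 l2population driver
    arp_responder = true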
dont_fragment
Type boolean
Default True
Set or un-set the don't fragment (DF) bit on the outgoing IP packet carrying the GRE/VXLAN tunnel.
enable_distributed_routing
Type boolean
Default False
Make the l2 agent run in DVR mode.
drop_flows_on_start
Type boolean
Default False
Reset flow table on start. Setting this to True will cause brief traffic interruption.
tunnel_csum
Type boolean
Default False
Set or un-set the tunnel header checksum on outgoing IP packet carrying GRE/VXLAN tunnel.
baremetal_smartnic
Type boolean
Default False
Enable the agent to process Smart NIC ports.
explicitly_egress_direct
Type boolean
Default False
When set to True, the accepted egress unicast traffic will not use action NORMAL. The accepted
egress packets will be taken care of in the final egress table's direct output flows for unicast traffic.
extensions
Type list
Default []
Extensions list to use
network_log
rate_limit
Type integer
Default 100
Minimum Value 100
Maximum packets logging per second.
burst_limit
Type integer
Default 25
Minimum Value 25
Maximum number of packets per rate_limit.
local_output_log_base
Type string
Default <None>
Output logfile path on agent side, default syslog file.
ovs
integration_bridge
Type string
Default br-int
Integration bridge to use. Do not change this parameter unless you have a good reason to. This
is the name of the OVS integration bridge. There is one per hypervisor. The integration bridge
acts as a virtual patch bay. All VM VIFs are attached to this bridge and then patched according to
their network connectivity.
tunnel_bridge
Type string
Default br-tun
Tunnel bridge to use.
int_peer_patch_port
Type string
Default patch-tun
Peer patch port in integration bridge for tunnel bridge.
of_inactivity_probe
Type floating point
Default 10.0
The inactivity_probe interval in seconds for the local switch connection to the controller. A value
of 0 disables inactivity probes.
ovsdb_connection
Type string
Default tcp:127.0.0.1:6640
The connection string for the OVSDB backend. Will be used for all ovsdb commands and by
ovsdb-client when monitoring
ssl_key_file
Type string
Default <None>
The SSL private key file to use when interacting with OVSDB. Required when using an ssl:
prefixed ovsdb_connection
ssl_cert_file
Type string
Default <None>
The SSL certificate file to use when interacting with OVSDB. Required when using an ssl: pre-
fixed ovsdb_connection
ssl_ca_cert_file
Type string
Default <None>
The Certificate Authority (CA) certificate to use when interacting with OVSDB. Required when
using an ssl: prefixed ovsdb_connection
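For example, a hypothetical SSL-secured OVSDB connection in the [ovs] group (all file paths below are assumptions, not required locations):
    [ovs]
    ovsdb_connection = ssl:127.0.0.1:6640
    ssl_key_file = /etc/neutron/ovsdb-key.pem
    ssl_cert_file = /etc/neutron/ovsdb-cert.pem
    ssl_ca_cert_file = /etc/neutron/ovsdb-cacert.pem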
ovsdb_debug
Type boolean
Default False
Enable OVSDB debug logs
securitygroup
firewall_driver
Type string
Default <None>
Driver for security groups firewall in the L2 agent
enable_security_group
Type boolean
Default True
Controls whether the neutron security group API is enabled in the server. It should be false when
using no security groups or using the nova security group API.
enable_ipset
Type boolean
Default True
Use ipset to speed-up the iptables based security groups. Enabling ipset support requires that ipset
is installed on L2 agent node.
permitted_ethertypes
Type list
Default []
Comma-separated list of ethertypes to be permitted, in hexadecimal (starting with 0x). For exam-
ple, 0x4008 to permit InfiniBand.
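A minimal [securitygroup] sketch, assuming the native Open vSwitch firewall driver is wanted (openvswitch is one of the in-tree driver choices; adjust to your deployment):
    [securitygroup]
    firewall_driver = openvswitch
    enable_security_group = true
    # Permit the InfiniBand ethertype in addition to the defaults (value taken from the option description above)
    permitted_ethertypes = 0x4008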
9.1.6 sriov_agent.ini
DEFAULT
debug
Type boolean
Default False
Mutable This option can be changed without restarting.
If set to true, the logging level will be set to DEBUG instead of the default INFO level.
log_config_append
Type string
Default <None>
Mutable This option can be changed without restarting.
The name of a logging configuration file. This file is appended to any existing logging config-
uration files. For details about logging configuration files, see the Python logging module doc-
umentation. Note that when logging configuration files are used then all logging configuration
is set in the configuration file and other logging configuration options are ignored (for example,
log-date-format).
log_date_format
Type string
Default %Y-%m-%d %H:%M:%S
Defines the format string for %(asctime)s in log records. Default: the value above . This option is
ignored if log_config_append is set.
log_file
Type string
Default <None>
(Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr
as defined by use_stderr. This option is ignored if log_config_append is set.
log_dir
Type string
Default <None>
(Optional) The base directory used for relative log_file paths. This option is ignored if
log_config_append is set.
watch_log_file
Type boolean
Default False
Uses logging handler designed to watch file system. When log file is moved or removed this
handler will open a new log file with specified path instantaneously. It makes sense only if log_file
option is specified and Linux platform is used. This option is ignored if log_config_append is set.
use_syslog
Type boolean
Default False
Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to
honor RFC5424. This option is ignored if log_config_append is set.
use_journal
Type boolean
Default False
Enable journald for logging. If running in a systemd environment you may wish to enable journal
support. Doing so will use the journal native protocol which includes structured metadata in
addition to log messages. This option is ignored if log_config_append is set.
syslog_log_facility
Type string
Default LOG_USER
Syslog facility to receive log lines. This option is ignored if log_config_append is set.
use_json
Type boolean
Default False
Use JSON formatting for logging. This option is ignored if log_config_append is set.
use_stderr
Type boolean
Default False
Log output to standard error. This option is ignored if log_config_append is set.
use_eventlog
Type boolean
Default False
Log output to Windows Event Log.
log_rotate_interval
Type integer
Default 1
The amount of time before the log files are rotated. This option is ignored unless log_rotation_type
is set to interval.
log_rotate_interval_type
Type string
Default days
Valid Values Seconds, Minutes, Hours, Days, Weekday, Midnight
Rotation interval type. The time of the last file change (or the time when the service was started)
is used when scheduling the next rotation.
max_logfile_count
Type integer
Default 30
Maximum number of rotated log files.
max_logfile_size_mb
Type integer
Default 200
Log file maximum size in MB. This option is ignored if log_rotation_type is not set to size.
log_rotation_type
Type string
Default none
Valid Values interval, size, none
Log rotation type.
logging_context_format_string
Type string
Default %(asctime)s.%(msecs)03d %(process)d %(levelname)s
%(name)s [%(request_id)s %(user_identity)s]
%(instance)s%(message)s
Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter
logging_default_format_string
Type string
Default %(asctime)s.%(msecs)03d %(process)d %(levelname)s
%(name)s [-] %(instance)s%(message)s
Format string to use for log messages when context is undefined. Used by
oslo_log.formatters.ContextFormatter
logging_debug_format_suffix
Type string
Default %(funcName)s %(pathname)s:%(lineno)d
Additional data to append to log message when logging level for the message is DEBUG. Used
by oslo_log.formatters.ContextFormatter
logging_exception_prefix
Type string
Default %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s
%(instance)s
Prefix each line of exception output with this format. Used by
oslo_log.formatters.ContextFormatter
logging_user_identity_format
Type string
Default %(user)s %(tenant)s %(domain)s %(user_domain)s
%(project_domain)s
Defines the format string for %(user_identity)s that is used in logging_context_format_string.
Used by oslo_log.formatters.ContextFormatter
rate_limit_except_level
Type string
Default CRITICAL
Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty
string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string
means that all levels are filtered.
fatal_deprecations
Type boolean
Default False
Enables or disables fatal status of deprecations.
agent
extensions
Type list
Default []
Extensions list to use
sriov_nic
physical_device_mappings
Type list
Default []
Comma-separated list of <physical_network>:<network_device> tuples mapping physical net-
work names to the agent's node-specific physical network device interfaces of SR-IOV physical
function to be used for VLAN networks. All physical networks listed in network_vlan_ranges on
the server should have mappings to appropriate interfaces on each agent.
exclude_devices
Type list
Default []
Comma-separated list of <network_device>:<vfs_to_exclude> tuples, mapping network_device
to the agent's node-specific list of virtual functions that should not be used for virtual networking.
vfs_to_exclude is a semicolon-separated list of virtual functions to exclude from network_device.
The network_device in the mapping should appear in the physical_device_mappings list.
resource_provider_bandwidths
Type list
Default []
Comma-separated list of <network_device>:<egress_bw>:<ingress_bw> tuples, showing the
available bandwidth for the given device in the given direction. The direction is meant from
VM perspective. Bandwidth is measured in kilobits per second (kbps). The device must appear
in physical_device_mappings as the value. But not all devices in physical_device_mappings must
be listed here. For a device not listed here we neither create a resource provider in placement
nor report inventories against. An omitted direction means we do not report an inventory for the
corresponding class.
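For illustration, a hedged [sriov_nic] example where the SR-IOV physical function ens785f0 backs physnet2 and reports 10 Gbps in each direction to placement (the device name, network name and bandwidth figures are assumptions):
    [sriov_nic]
    physical_device_mappings = physnet2:ens785f0
    # Bandwidth is expressed in kilobits per second: <device>:<egress>:<ingress> from the VM perspective
    resource_provider_bandwidths = ens785f0:10000000:10000000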
resource_provider_hypervisors
Type dict
Default {}
Mapping of network devices to hypervisors: <network_device>:<hypervisor>, hypervisor name
is used to locate the parent of the resource provider tree. Only needs to be set in the rare case
when the hypervisor name is different from the DEFAULT.host config option value as known by
the nova-compute managing that hypervisor.
resource_provider_inventory_defaults
Type dict
Default {'allocation_ratio': 1.0, 'min_unit': 1,
'step_size': 1, 'reserved': 0}
Key:value pairs to specify defaults used while reporting resource provider inventories. Possible
keys with their types: allocation_ratio:float, max_unit:int, min_unit:int, reserved:int, step_size:int.
See also: https://docs.openstack.org/api-ref/placement/#update-resource-provider-inventories
9.1.7 ovn.ini
DEFAULT
debug
Type boolean
Default False
Mutable This option can be changed without restarting.
If set to true, the logging level will be set to DEBUG instead of the default INFO level.
log_config_append
Type string
Default <None>
Mutable This option can be changed without restarting.
The name of a logging configuration file. This file is appended to any existing logging config-
uration files. For details about logging configuration files, see the Python logging module doc-
umentation. Note that when logging configuration files are used then all logging configuration
is set in the configuration file and other logging configuration options are ignored (for example,
log-date-format).
log_date_format
Type string
Default %Y-%m-%d %H:%M:%S
Defines the format string for %(asctime)s in log records. Default: the value above . This option is
ignored if log_config_append is set.
log_file
Type string
Default <None>
(Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr
as defined by use_stderr. This option is ignored if log_config_append is set.
log_dir
Type string
Default <None>
(Optional) The base directory used for relative log_file paths. This option is ignored if
log_config_append is set.
watch_log_file
Type boolean
Default False
Uses logging handler designed to watch file system. When log file is moved or removed this
handler will open a new log file with specified path instantaneously. It makes sense only if log_file
option is specified and Linux platform is used. This option is ignored if log_config_append is set.
use_syslog
Type boolean
Default False
Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to
honor RFC5424. This option is ignored if log_config_append is set.
use_journal
Type boolean
Default False
Enable journald for logging. If running in a systemd environment you may wish to enable journal
support. Doing so will use the journal native protocol which includes structured metadata in
addition to log messages. This option is ignored if log_config_append is set.
syslog_log_facility
Type string
Default LOG_USER
Syslog facility to receive log lines. This option is ignored if log_config_append is set.
use_json
Type boolean
Default False
Use JSON formatting for logging. This option is ignored if log_config_append is set.
use_stderr
Type boolean
Default False
Log output to standard error. This option is ignored if log_config_append is set.
use_eventlog
Type boolean
Default False
Log output to Windows Event Log.
log_rotate_interval
Type integer
Default 1
The amount of time before the log files are rotated. This option is ignored unless log_rotation_type
is set to interval.
log_rotate_interval_type
Type string
Default days
Valid Values Seconds, Minutes, Hours, Days, Weekday, Midnight
Rotation interval type. The time of the last file change (or the time when the service was started)
is used when scheduling the next rotation.
max_logfile_count
Type integer
Default 30
Maximum number of rotated log files.
max_logfile_size_mb
Type integer
Default 200
Log file maximum size in MB. This option is ignored if log_rotation_type is not set to size.
log_rotation_type
Type string
Default none
Valid Values interval, size, none
Log rotation type.
logging_context_format_string
Type string
Default %(asctime)s.%(msecs)03d %(process)d %(levelname)s
%(name)s [%(request_id)s %(user_identity)s]
%(instance)s%(message)s
Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter
logging_default_format_string
Type string
Default %(asctime)s.%(msecs)03d %(process)d %(levelname)s
%(name)s [-] %(instance)s%(message)s
Format string to use for log messages when context is undefined. Used by
oslo_log.formatters.ContextFormatter
logging_debug_format_suffix
Type string
Default %(funcName)s %(pathname)s:%(lineno)d
Additional data to append to log message when logging level for the message is DEBUG. Used
by oslo_log.formatters.ContextFormatter
logging_exception_prefix
Type string
Default %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s
%(instance)s
Prefix each line of exception output with this format. Used by
oslo_log.formatters.ContextFormatter
logging_user_identity_format
Type string
Default %(user)s %(tenant)s %(domain)s %(user_domain)s
%(project_domain)s
Defines the format string for %(user_identity)s that is used in logging_context_format_string.
Used by oslo_log.formatters.ContextFormatter
default_log_levels
Type list
Default ['amqp=WARN', 'amqplib=WARN', 'boto=WARN',
'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO',
'oslo.messaging=INFO', 'oslo_messaging=INFO',
'iso8601=WARN', 'requests.packages.urllib3.
connectionpool=WARN', 'urllib3.connectionpool=WARN',
'websocket=WARN', 'requests.packages.
urllib3.util.retry=WARN', 'urllib3.util.
retry=WARN', 'keystonemiddleware=WARN', 'routes.
middleware=WARN', 'stevedore=WARN', 'taskflow=WARN',
'keystoneauth=WARN', 'oslo.cache=INFO',
'oslo_policy=INFO', 'dogpile.core.dogpile=INFO']
List of package logging levels in logger=LEVEL pairs. This option is ignored if
log_config_append is set.
publish_errors
Type boolean
Default False
Enables or disables publication of error events.
instance_format
Type string
Default "[instance: %(uuid)s] "
The format for an instance that is passed with the log message.
instance_uuid_format
Type string
Default "[instance: %(uuid)s] "
The format for an instance UUID that is passed with the log message.
rate_limit_interval
Type integer
Default 0
Interval, number of seconds, of log rate limiting.
rate_limit_burst
Type integer
Default 0
Maximum number of logged messages per rate_limit_interval.
ovn
ovn_nb_connection
Type string
Default tcp:127.0.0.1:6641
The connection string for the OVN_Northbound OVSDB. Use tcp:IP:PORT for TCP connec-
tion. Use ssl:IP:PORT for SSL connection. The ovn_nb_private_key, ovn_nb_certificate and
ovn_nb_ca_cert are mandatory. Use unix:FILE for unix domain socket connection.
ovn_nb_private_key
Type string
Default ''
The PEM file with private key for SSL connection to OVN-NB-DB
ovn_nb_certificate
Type string
Default ''
The PEM file with certificate that certifies the private key specified in ovn_nb_private_key
ovn_nb_ca_cert
Type string
Default ''
The PEM file with CA certificate that OVN should use to verify certificates presented to it by SSL
peers
ovn_sb_connection
Type string
Default tcp:127.0.0.1:6642
The connection string for the OVN_Southbound OVSDB. Use tcp:IP:PORT for TCP connec-
tion. Use ssl:IP:PORT for SSL connection. The ovn_sb_private_key, ovn_sb_certificate and
ovn_sb_ca_cert are mandatory. Use unix:FILE for unix domain socket connection.
ovn_sb_private_key
Type string
Default ''
The PEM file with private key for SSL connection to OVN-SB-DB
ovn_sb_certificate
Type string
Default ''
The PEM file with certificate that certifies the private key specified in ovn_sb_private_key
ovn_sb_ca_cert
Type string
Default ''
The PEM file with CA certificate that OVN should use to verify certificates presented to it by SSL
peers
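As an example, [ovn] connection settings for SSL-protected northbound and southbound databases might look like the following (the IP address and PEM file paths are placeholders):
    [ovn]
    ovn_nb_connection = ssl:192.0.2.10:6641
    ovn_sb_connection = ssl:192.0.2.10:6642
    ovn_nb_private_key = /etc/neutron/ovn-nb-key.pem
    ovn_nb_certificate = /etc/neutron/ovn-nb-cert.pem
    ovn_nb_ca_cert = /etc/neutron/ovn-nb-cacert.pem
    ovn_sb_private_key = /etc/neutron/ovn-sb-key.pem
    ovn_sb_certificate = /etc/neutron/ovn-sb-cert.pem
    ovn_sb_ca_cert = /etc/neutron/ovn-sb-cacert.pem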
ovsdb_connection_timeout
Type integer
Default 180
Timeout in seconds for the OVSDB connection transaction
ovsdb_retry_max_interval
Type integer
Default 180
Max interval in seconds between each retry to get the OVN NB and SB IDLs
ovsdb_probe_interval
Type integer
Default 60000
Minimum Value 0
The probe interval for the OVSDB session in milliseconds. If this is zero, it disables the
connection keepalive feature. If non-zero the value will be forced to at least 1000 milliseconds.
Defaults to 60 seconds.
neutron_sync_mode
Type string
Default log
Valid Values off, log, repair, migrate
The synchronization mode of OVN_Northbound OVSDB with Neutron DB. off - synchronization
is off. log - during neutron-server startup, check whether OVN is in sync with the Neutron
database and log warnings for any inconsistencies found so that an admin can investigate. repair -
during neutron-server startup, automatically create resources found in Neutron but not in OVN,
and also remove resources from OVN that are no longer in Neutron. migrate - intended for OVS
to OVN migration; it syncs the DB just like repair mode but additionally fixes the Neutron DB
resources from OVS to OVN.
ovn_l3_mode
Type boolean
Default True
Whether to use OVN native L3 support. Do not change the value for existing deployments that
contain routers.
Warning: This option is deprecated for removal. Its value may be silently ignored in the
future.
Reason This option is no longer used. Native L3 support in OVN is always used.
ovn_l3_scheduler
Type string
Default leastloaded
Valid Values leastloaded, chance
The OVN L3 Scheduler type used to schedule router gateway ports on hypervisors/chassis.
leastloaded - the chassis with the fewest gateway ports is selected; chance - a chassis is selected
at random.
enable_distributed_floating_ip
Type boolean
Default False
Enable distributed floating IP support. If True, the NAT action for floating IPs will be done locally
and not in the centralized gateway. This saves the path to the external network. This requires the
user to configure the physical network map (i.e. ovn-bridge-mappings) on each compute node.
vif_type
Type string
Default ovs
Valid Values ovs, vhostuser
Type of VIF to be used for ports. Valid values are ovs and vhostuser; the default is ovs.
Warning: This option is deprecated for removal. Its value may be silently ignored in the
future.
Reason The port VIF type is now determined based on the OVN chassis infor-
mation when the port is bound to a host.
vhost_sock_dir
Type string
Default /var/run/openvswitch
The directory in which vhost virtio socket is created by all the vswitch daemons
dhcp_default_lease_time
Type integer
Default 43200
Default lease time (in seconds) to use with OVN's native DHCP service.
ovsdb_log_level
Type string
Default INFO
Valid Values CRITICAL, ERROR, WARNING, INFO, DEBUG
The log level used for OVSDB
ovn_metadata_enabled
Type boolean
Default False
Whether to use metadata service.
dns_servers
Type list
Default []
Comma-separated list of the DNS servers which will be used as forwarders if a subnet's
dns_nameservers field is empty. If both the subnet's dns_nameservers and this option are empty,
then the DNS resolvers on the host running the neutron server will be used.
ovn_dhcp4_global_options
Type dict
Default {}
Dictionary of global DHCPv4 options which will be automatically set on each subnet upon
creation and on all existing subnets when Neutron starts. An empty value for a DHCP option will
cause that option to be unset globally. Examples: ntp_server:1.2.3.4,wpad:1.2.3.5 sets ntp_server
and wpad; ntp_server:,wpad:1.2.3.5 unsets ntp_server and sets wpad. See the ovn-nb(5) man page
for available options.
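Reusing the values from the examples above, the option could be set as follows in the [ovn] group (the addresses are illustrative):
    [ovn]
    # Set ntp_server and wpad globally for all DHCPv4-enabled subnets
    ovn_dhcp4_global_options = ntp_server:1.2.3.4,wpad:1.2.3.5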
ovn_dhcp6_global_options
Type dict
Default {}
Dictionary of global DHCPv6 options which will be automatically set on each subnet upon
creation and on all existing subnets when Neutron starts. An empty value for a DHCP option will
cause that option to be unset globally. Examples: ntp_server:1.2.3.4,wpad:1.2.3.5 sets ntp_server
and wpad; ntp_server:,wpad:1.2.3.5 unsets ntp_server and sets wpad. See the ovn-nb(5) man page
for available options.
ovn_emit_need_to_frag
Type boolean
Default False
Configure OVN to emit "need to frag" packets in case of MTU mismatch. Before enabling this
configuration make sure that it's supported by the host kernel (version >= 5.2), or check the
output of the following command: ovs-appctl -t ovs-vswitchd dpif/show-dp-features br-int | grep
"Check pkt length action".
ovs
ovsdb_timeout
Type integer
Default 10
Timeout in seconds for ovsdb commands. If the timeout expires, ovsdb commands will fail with
ALARMCLOCK error.
bridge_mac_table_size
Type integer
Default 50000
The maximum number of MAC addresses to learn on a bridge managed by the Neutron OVS
agent. Values outside a reasonable range (10 to 1,000,000) might be overridden by Open vSwitch
according to the documentation.
igmp_snooping_enable
Type boolean
Default False
Enable IGMP snooping for integration bridge. If this option is set to True, support for Internet
Group Management Protocol (IGMP) is enabled in integration bridge. Setting this option to True
will also enable Open vSwitch mcast-snooping-disable-flood-unregistered flag. This option will
disable flooding of unregistered multicast packets to all ports. The switch will send unregistered
multicast packets only to ports connected to multicast routers.
9.1.8 dhcp_agent.ini
DEFAULT
ovs_integration_bridge
Type string
Default br-int
Name of Open vSwitch bridge to use
Warning: This option is deprecated for removal. Its value may be silently ignored in the
future.
Reason This variable is a duplicate of OVS.integration_bridge. To be removed
in W.
ovs_use_veth
Type boolean
Default False
Uses veth for an OVS interface or not. Support kernels with limited namespace support (e.g.
RHEL 6.5) and rate limiting on the router's gateway port so long as ovs_use_veth is set to True.
interface_driver
Type string
Default <None>
The driver used to manage the virtual interface.
rpc_response_max_timeout
Type integer
Default 600
Maximum seconds to wait for a response from an RPC call.
resync_interval
Type integer
Default 5
The DHCP agent will resync its state with Neutron to recover from any transient notification or
RPC errors. The interval is maximum number of seconds between attempts. The resync can be
done more often based on the events triggered.
resync_throttle
Type integer
Default 1
Throttle the number of resync state events between the local DHCP state and Neutron to only
once per resync_throttle seconds. The value of throttle introduces a minimum interval between
resync state events. Otherwise the resync may end up in a busy-loop. The value must be less than
resync_interval.
dhcp_driver
Type string
Default neutron.agent.linux.dhcp.Dnsmasq
The driver used to manage the DHCP server.
enable_isolated_metadata
Type boolean
Default False
The DHCP server can assist with providing metadata support on isolated networks. Setting this
value to True will cause the DHCP server to append specific host routes to the DHCP request.
The metadata service will only be activated when the subnet does not contain any router port. The
guest instance must be configured to request host routes via DHCP (Option 121). This option
doesn't have any effect when force_metadata is set to True.
force_metadata
Type boolean
Default False
In some cases the Neutron router is not present to provide the metadata IP but the DHCP server
can be used to provide this info. Setting this value will force the DHCP server to append specific
host routes to the DHCP request. If this option is set, then the metadata service will be activated
for all the networks.
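A hedged dhcp_agent.ini sketch that serves metadata on isolated networks (the interface_driver value openvswitch is one of the in-tree driver aliases; pick the one matching your L2 agent):
    [DEFAULT]
    interface_driver = openvswitch
    dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
    enable_isolated_metadata = true
    force_metadata = false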
enable_metadata_network
Type boolean
Default False
Allows for serving metadata requests coming from a dedicated metadata access network whose
CIDR is 169.254.169.254/16 (or larger prefix), and is connected to a Neutron router from which
the VMs send metadata:1 request. In this case DHCP Option 121 will not be injected in
VMs, as they will be able to reach 169.254.169.254 through a router. This option requires en-
able_isolated_metadata = True.
num_sync_threads
Type integer
Default 4
Number of threads to use during sync process. Should not exceed connection pool size configured
on server.
bulk_reload_interval
Type integer
Default 0
Minimum Value 0
Time to sleep between reloading the DHCP allocations. This will only be invoked if the value is
not 0. If a network has N updates in X seconds then we will reload once with the port changes in
the X seconds and not N times.
dhcp_confs
Type string
Default $state_path/dhcp
Location to store DHCP server config files.
dnsmasq_config_file
Type string
Default ''
Override the default dnsmasq settings with this file.
dnsmasq_dns_servers
Type list
Default []
Comma-separated list of the DNS servers which will be used as forwarders.
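An illustrative setting, assuming two upstream resolvers (the addresses are placeholders):
    [DEFAULT]
    dnsmasq_dns_servers = 203.0.113.8,203.0.113.9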
dnsmasq_base_log_dir
Type string
Default <None>
Base log dir for dnsmasq logging. The log contains DHCP and DNS log information and is useful
for debugging issues with either DHCP or DNS. If this section is null, disable dnsmasq log.
dnsmasq_local_resolv
Type boolean
Default False
Enables the dnsmasq service to provide name resolution for instances via DNS resolvers on the
host running the DHCP agent. Effectively removes the no-resolv option from the dnsmasq pro-
cess arguments. Adding custom DNS resolvers to the dnsmasq_dns_servers option disables this
feature.
dnsmasq_lease_max
Type integer
Default 16777216
Limit number of leases to prevent a denial-of-service.
dhcp_broadcast_reply
Type boolean
Default False
Use broadcast in DHCP replies.
dhcp_renewal_time
Type integer
Default 0
DHCP renewal time T1 (in seconds). If set to 0, it will default to half of the lease time.
dhcp_rebinding_time
Type integer
Default 0
DHCP rebinding time T2 (in seconds). If set to 0, it will default to 7/8 of the lease time.
dnsmasq_enable_addr6_list
Type boolean
Default False
Enable dhcp-host entry with list of addresses when port has multiple IPv6 addresses in the same
subnet.
debug
Type boolean
Default False
Mutable This option can be changed without restarting.
If set to true, the logging level will be set to DEBUG instead of the default INFO level.
log_config_append
Type string
Default <None>
Mutable This option can be changed without restarting.
The name of a logging configuration file. This file is appended to any existing logging config-
uration files. For details about logging configuration files, see the Python logging module doc-
umentation. Note that when logging configuration files are used then all logging configuration
is set in the configuration file and other logging configuration options are ignored (for example,
log-date-format).
log_date_format
Type string
Default %Y-%m-%d %H:%M:%S
Defines the format string for %(asctime)s in log records. Default: the value above . This option is
ignored if log_config_append is set.
log_file
Type string
Default <None>
(Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr
as defined by use_stderr. This option is ignored if log_config_append is set.
log_dir
Type string
Default <None>
(Optional) The base directory used for relative log_file paths. This option is ignored if
log_config_append is set.
watch_log_file
Type boolean
Default False
Uses logging handler designed to watch file system. When log file is moved or removed this
handler will open a new log file with specified path instantaneously. It makes sense only if log_file
option is specified and Linux platform is used. This option is ignored if log_config_append is set.
use_syslog
Type boolean
Default False
Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to
honor RFC5424. This option is ignored if log_config_append is set.
use_journal
Type boolean
Default False
Enable journald for logging. If running in a systemd environment you may wish to enable journal
support. Doing so will use the journal native protocol which includes structured metadata in
addition to log messages. This option is ignored if log_config_append is set.
syslog_log_facility
Type string
Default LOG_USER
Syslog facility to receive log lines. This option is ignored if log_config_append is set.
use_json
Type boolean
Default False
Use JSON formatting for logging. This option is ignored if log_config_append is set.
use_stderr
Type boolean
Default False
Log output to standard error. This option is ignored if log_config_append is set.
use_eventlog
Type boolean
Default False
Log output to Windows Event Log.
log_rotate_interval
Type integer
Default 1
The amount of time before the log files are rotated. This option is ignored unless log_rotation_type
is set to interval.
log_rotate_interval_type
Type string
Default days
Valid Values Seconds, Minutes, Hours, Days, Weekday, Midnight
Rotation interval type. The time of the last file change (or the time when the service was started)
is used when scheduling the next rotation.
max_logfile_count
Type integer
Default 30
Maximum number of rotated log files.
max_logfile_size_mb
Type integer
Default 200
Log file maximum size in MB. This option is ignored if log_rotation_type is not set to size.
log_rotation_type
Type string
Default none
Valid Values interval, size, none
Log rotation type.
logging_context_format_string
Type string
Default %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s
Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter
agent
availability_zone
Type string
Default nova
Availability zone of this node
report_interval
Type floating point
Default 30
Seconds between nodes reporting state to server; should be less than agent_down_time, best if it
is half or less than agent_down_time.
log_agent_heartbeats
Type boolean
Default False
Log agent heartbeats
ovs
ovsdb_connection
Type string
Default tcp:127.0.0.1:6640
The connection string for the OVSDB backend. Will be used for all ovsdb commands and by
ovsdb-client when monitoring
ssl_key_file
Type string
Default <None>
The SSL private key file to use when interacting with OVSDB. Required when using an ssl:
prefixed ovsdb_connection
ssl_cert_file
Type string
Default <None>
The SSL certificate file to use when interacting with OVSDB. Required when using an ssl: pre-
fixed ovsdb_connection
ssl_ca_cert_file
Type string
Default <None>
The Certificate Authority (CA) certificate to use when interacting with OVSDB. Required when
using an ssl: prefixed ovsdb_connection
ovsdb_debug
Type boolean
Default False
Enable OVSDB debug logs
ovsdb_timeout
Type integer
Default 10
Timeout in seconds for ovsdb commands. If the timeout expires, ovsdb commands will fail with
ALARMCLOCK error.
bridge_mac_table_size
Type integer
Default 50000
The maximum number of MAC addresses to learn on a bridge managed by the Neutron OVS
agent. Values outside a reasonable range (10 to 1,000,000) might be overridden by Open vSwitch
according to the documentation.
igmp_snooping_enable
Type boolean
Default False
Enable IGMP snooping for integration bridge. If this option is set to True, support for Internet
Group Management Protocol (IGMP) is enabled in integration bridge. Setting this option to True
will also enable Open vSwitch mcast-snooping-disable-flood-unregistered flag. This option will
disable flooding of unregistered multicast packets to all ports. The switch will send unregistered
multicast packets only to ports connected to multicast routers.
9.1.9 l3_agent.ini
DEFAULT
ovs_integration_bridge
Type string
Default br-int
Name of Open vSwitch bridge to use
Warning: This option is deprecated for removal. Its value may be silently ignored in the
future.
Reason This variable is a duplicate of OVS.integration_bridge. To be removed
in W.
ovs_use_veth
Type boolean
Default False
Uses veth for an OVS interface or not. Support kernels with limited namespace support (e.g.
RHEL 6.5) and rate limiting on the router's gateway port so long as ovs_use_veth is set to True.
interface_driver
Type string
Default <None>
The driver used to manage the virtual interface.
rpc_response_max_timeout
Type integer
Default 600
Maximum seconds to wait for a response from an RPC call.
agent_mode
Type string
Default legacy
Valid Values dvr, dvr_snat, legacy, dvr_no_external
The working mode for the agent. Allowed modes are: legacy - this preserves the existing behavior
where the L3 agent is deployed on a centralized networking node to provide L3 services like
DNAT, and SNAT. Use this mode if you do not want to adopt DVR. dvr - this mode enables DVR
functionality and must be used for an L3 agent that runs on a compute host. dvr_snat - this enables
centralized SNAT support in conjunction with DVR. This mode must be used for an L3 agent
running on a centralized node (or in single-host deployments, e.g. devstack). dvr_no_external -
this mode enables only East/West DVR routing functionality for a L3 agent that runs on a compute
host, the North/South functionality such as DNAT and SNAT will be provided by the centralized
network node that is running in dvr_snat mode. This mode should be used when there is no
external network connectivity on the compute host.
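For example, under the assumption of a DVR deployment, the same option takes different values on different hosts (a sketch, not a prescription):
    # l3_agent.ini on a centralized network node providing SNAT
    [DEFAULT]
    agent_mode = dvr_snat

    # l3_agent.ini on a compute host
    [DEFAULT]
    agent_mode = dvr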
metadata_port
Type port number
Default 9697
Minimum Value 0
Maximum Value 65535
TCP Port used by Neutron metadata namespace proxy.
handle_internal_only_routers
Type boolean
Default True
Indicates that this L3 agent should also handle routers that do not have an external network gate-
way configured. This option should be True only for a single agent in a Neutron deployment, and
may be False for all agents if all routers must have an external network gateway.
ipv6_gateway
Type string
Default ''
With IPv6, the network used for the external gateway does not need to have an associated subnet,
since the automatically assigned link-local address (LLA) can be used. However, an IPv6 gateway
address is needed for use as the next-hop for the default route. If no IPv6 gateway address is
configured here, (and only then) the neutron router will be configured to get its default route from
router advertisements (RAs) from the upstream router; in which case the upstream router must
also be configured to send these RAs. The ipv6_gateway, when configured, should be the LLA of
the interface on the upstream router. If a next-hop using a global unique address (GUA) is desired,
it needs to be done via a subnet allocated to the network and not through this parameter.
prefix_delegation_driver
Type string
Default dibbler
Driver used for ipv6 prefix delegation. This needs to be an entry point defined in the neu-
tron.agent.linux.pd_drivers namespace. See setup.cfg for entry points included with the neutron
source.
enable_metadata_proxy
Type boolean
Default True
Allow running metadata proxy.
metadata_access_mark
Type string
Default 0x1
Iptables mangle mark used to mark metadata valid requests. This mark will be masked with 0xffff
so that only the lower 16 bits will be used.
external_ingress_mark
Type string
Default 0x2
Iptables mangle mark used to mark ingress from external network. This mark will be masked with
0xffff so that only the lower 16 bits will be used.
radvd_user
Type string
Default ''
The username passed to radvd, used to drop root privileges and change user ID to username and
group ID to the primary group of username. If no user is specified (by default), the user executing
the L3 agent will be passed. If root is specified, because radvd is spawned as root, no username
parameter will be passed.
cleanup_on_shutdown
Type boolean
Default False
Delete all routers on L3 agent shutdown. For L3 HA routers it includes a shutdown of keepalived
and the state change monitor. NOTE: Setting to True could affect the data plane when stopping or
restarting the L3 agent.
keepalived_use_no_track
Type boolean
Default True
If a keepalived version without support for the no_track option is used, this should be set to False.
Support for this option was introduced in keepalived 2.x.
Warning: This option is deprecated for removal. Its value may be silently ignored in the
future.
Reason By keepalived version detection introduced by https://review.opendev.
org/757620 there is no need for this config option. To be removed in X.
periodic_interval
Type integer
Default 40
Seconds between running periodic tasks.
api_workers
Type integer
Default <None>
Number of separate API worker processes for service. If not specified, the default is equal to the
number of CPUs available for best performance, capped by potential RAM usage.
rpc_workers
Type integer
Default <None>
Number of RPC worker processes for service. If not specified, the default is equal to half the
number of API workers.
rpc_state_report_workers
Type integer
Default 1
Number of RPC worker processes dedicated to state reports queue.
periodic_fuzzy_delay
Type integer
Default 5
Range of seconds to randomly delay when starting the periodic task scheduler to reduce stamped-
ing. (Disable by setting to 0)
ha_confs_path
Type string
Default $state_path/ha_confs
Location to store keepalived config files
ha_vrrp_auth_type
Type string
Default PASS
Valid Values AH, PASS
VRRP authentication type
ha_vrrp_auth_password
Type string
Default <None>
VRRP authentication password
ha_vrrp_advert_int
Type integer
Default 2
The advertisement interval in seconds
ha_keepalived_state_change_server_threads
Type integer
Default (1 + <num_of_cpus>) / 2
Minimum Value 1
This option has a sample default set, which means that its actual default value may vary from the
one documented above.
Number of concurrent threads for keepalived server connection requests. More threads create a
higher CPU load on the agent node.
ha_vrrp_health_check_interval
Type integer
Default 0
The VRRP health check interval in seconds. Values > 0 enable VRRP health checks. Setting it to
0 disables VRRP health checks. Recommended value is 5. This will cause pings to be sent to the
gateway IP address(es) - requires ICMP_ECHO_REQUEST to be enabled on the gateway(s). If a
gateway fails, all routers will be reported as primary, and a primary election will be repeated in a
round-robin fashion, until one of the routers restores the gateway connection.
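A hypothetical L3 HA configuration combining these options (the password is a placeholder; the health check interval of 5 follows the recommendation above):
    [DEFAULT]
    ha_vrrp_auth_type = PASS
    ha_vrrp_auth_password = CHANGE_ME
    ha_vrrp_advert_int = 2
    ha_vrrp_health_check_interval = 5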
pd_confs
Type string
Default $state_path/pd
Location to store IPv6 PD files.
vendor_pen
Type string
Default 8888
A decimal value as the Vendor's Registered Private Enterprise Number as required by RFC 3315
DUID-EN.
ra_confs
Type string
Default $state_path/ra
Location to store IPv6 RA config files
min_rtr_adv_interval
Type integer
Default 30
MinRtrAdvInterval setting for radvd.conf
max_rtr_adv_interval
Type integer
Default 100
MaxRtrAdvInterval setting for radvd.conf
agent
availability_zone
Type string
Default nova
Availability zone of this node
report_interval
Type floating point
Default 30
Seconds between nodes reporting state to server; should be less than agent_down_time, best if it
is half or less than agent_down_time.
log_agent_heartbeats
Type boolean
Default False
Log agent heartbeats
extensions
Type list
Default []
Extensions list to use
network_log
rate_limit
Type integer
Default 100
Minimum Value 100
Maximum packets logging per second.
burst_limit
Type integer
Default 25
Minimum Value 25
Maximum number of packets per rate_limit.
local_output_log_base
Type string
Default <None>
Output logfile path on agent side, default syslog file.
ovs
ovsdb_connection
Type string
Default tcp:127.0.0.1:6640
The connection string for the OVSDB backend. Will be used for all ovsdb commands and by
ovsdb-client when monitoring
ssl_key_file
Type string
Default <None>
The SSL private key file to use when interacting with OVSDB. Required when using an ssl:
prefixed ovsdb_connection
ssl_cert_file
Type string
Default <None>
The SSL certificate file to use when interacting with OVSDB. Required when using an ssl: pre-
fixed ovsdb_connection
ssl_ca_cert_file
Type string
Default <None>
The Certificate Authority (CA) certificate to use when interacting with OVSDB. Required when
using an ssl: prefixed ovsdb_connection
ovsdb_debug
Type boolean
Default False
Enable OVSDB debug logs
ovsdb_timeout
Type integer
Default 10
Timeout in seconds for ovsdb commands. If the timeout expires, ovsdb commands will fail with
ALARMCLOCK error.
bridge_mac_table_size
Type integer
Default 50000
The maximum number of MAC addresses to learn on a bridge managed by the Neutron OVS
agent. Values outside a reasonable range (10 to 1,000,000) might be overridden by Open vSwitch
according to the documentation.
igmp_snooping_enable
Type boolean
Default False
Enable IGMP snooping for integration bridge. If this option is set to True, support for Internet
Group Management Protocol (IGMP) is enabled in integration bridge. Setting this option to True
will also enable Open vSwitch mcast-snooping-disable-flood-unregistered flag. This option will
disable flooding of unregistered multicast packets to all ports. The switch will send unregistered
multicast packets only to ports connected to multicast routers.
9.1.10 metadata_agent.ini
DEFAULT
metadata_proxy_socket
Type string
Default $state_path/metadata_proxy
Location for Metadata Proxy UNIX domain socket.
metadata_proxy_user
Type string
Default ''
User (uid or name) running metadata proxy after its initialization (if empty: agent effective user).
metadata_proxy_group
Type string
Default ''
Group (gid or name) running metadata proxy after its initialization (if empty: agent effective
group).
auth_ca_cert
Type string
Default <None>
Certificate Authority public key (CA cert) file for ssl
nova_metadata_host
Type host address
Default 127.0.0.1
IP address or DNS name of Nova metadata server.
nova_metadata_port
Type port number
Default 8775
Minimum Value 0
Maximum Value 65535
TCP Port used by Nova metadata server.
metadata_proxy_shared_secret
Type string
Default ''
When proxying metadata requests, Neutron signs the Instance-ID header with a shared secret
to prevent spoofing. You may select any string for a secret, but it must match here and in the
configuration used by the Nova Metadata Server. NOTE: Nova uses the same config key, but in
[neutron] section.
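For example, a minimal metadata_agent.ini pointing at a Nova metadata API on a controller host (the host name and secret are placeholders; per the note above, the same secret must also be configured for Nova, in the [neutron] section of nova.conf):
    [DEFAULT]
    nova_metadata_host = controller
    nova_metadata_port = 8775
    metadata_proxy_shared_secret = METADATA_SECRET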
nova_metadata_protocol
Type string
Default http
Valid Values http, https
Protocol to access nova metadata, http or https
nova_metadata_insecure
Type boolean
Default False
Allow to perform insecure SSL (https) requests to nova metadata
nova_client_cert
Type string
Default ''
Client certificate for nova metadata api server.
nova_client_priv_key
Type string
Default ''
Private key of client certificate.
metadata_proxy_socket_mode
Type string
Default deduce
Valid Values deduce, user, group, all
Metadata Proxy UNIX domain socket mode, 4 values allowed: deduce: deduce mode from meta-
data_proxy_user/group values, user: set metadata proxy socket mode to 0o644, to use when meta-
data_proxy_user is agent effective user or root, group: set metadata proxy socket mode to 0o664,
to use when metadata_proxy_group is agent effective group or root, all: set metadata proxy socket
mode to 0o666, to use otherwise.
metadata_workers
Type integer
Default <num_of_cpus> / 2
This option has a sample default set, which means that its actual default value may vary from the
one documented above.
Number of separate worker processes for metadata server (defaults to 2 when used with ML2/OVN
and half of the number of CPUs with other backend drivers)
metadata_backlog
Type integer
Default 4096
Number of backlog requests to configure the metadata server socket with
rpc_response_max_timeout
Type integer
Default 600
Maximum seconds to wait for a response from an RPC call.
debug
Type boolean
Default False
Mutable This option can be changed without restarting.
If set to true, the logging level will be set to DEBUG instead of the default INFO level.
log_config_append
Type string
Default <None>
Mutable This option can be changed without restarting.
The name of a logging configuration file. This file is appended to any existing logging config-
uration files. For details about logging configuration files, see the Python logging module doc-
umentation. Note that when logging configuration files are used then all logging configuration
is set in the configuration file and other logging configuration options are ignored (for example,
log-date-format).
log_date_format
Type string
Default %Y-%m-%d %H:%M:%S
Defines the format string for %(asctime)s in log records. Default: the value above . This option is
ignored if log_config_append is set.
log_file
Type string
Default <None>
(Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr
as defined by use_stderr. This option is ignored if log_config_append is set.
log_dir
Type string
Default <None>
(Optional) The base directory used for relative log_file paths. This option is ignored if
log_config_append is set.
watch_log_file
Type boolean
Default False
Uses logging handler designed to watch file system. When log file is moved or removed this
handler will open a new log file with specified path instantaneously. It makes sense only if log_file
option is specified and Linux platform is used. This option is ignored if log_config_append is set.
use_syslog
Type boolean
Default False
Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to
honor RFC5424. This option is ignored if log_config_append is set.
use_journal
Type boolean
Default False
Enable journald for logging. If running in a systemd environment you may wish to enable journal
support. Doing so will use the journal native protocol which includes structured metadata in
addition to log messages. This option is ignored if log_config_append is set.
syslog_log_facility
Type string
Default LOG_USER
Syslog facility to receive log lines. This option is ignored if log_config_append is set.
use_json
Type boolean
Default False
Use JSON formatting for logging. This option is ignored if log_config_append is set.
use_stderr
Type boolean
Default False
Log output to standard error. This option is ignored if log_config_append is set.
use_eventlog
Type boolean
Default False
Log output to Windows Event Log.
log_rotate_interval
Type integer
Default 1
The amount of time before the log files are rotated. This option is ignored unless log_rotation_type
is set to interval.
log_rotate_interval_type
Type string
Default days
Valid Values Seconds, Minutes, Hours, Days, Weekday, Midnight
Rotation interval type. The time of the last file change (or the time when the service was started)
is used when scheduling the next rotation.
max_logfile_count
Type integer
Default 30
Maximum number of rotated log files.
max_logfile_size_mb
Type integer
Default 200
Log file maximum size in MB. This option is ignored if log_rotation_type is not set to size.
log_rotation_type
Type string
Default none
Valid Values interval, size, none
Log rotation type.
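As an illustration, size-based rotation could be configured by combining the options above (paths and values are placeholders):
    [DEFAULT]
    log_dir = /var/log/neutron
    log_file = neutron-metadata-agent.log
    log_rotation_type = size
    max_logfile_size_mb = 200
    max_logfile_count = 30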
logging_context_format_string
Type string
Default %(asctime)s.%(msecs)03d %(process)d %(levelname)s
%(name)s [%(request_id)s %(user_identity)s]
%(instance)s%(message)s
Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter
logging_default_format_string
Type string
Default %(asctime)s.%(msecs)03d %(process)d %(levelname)s
%(name)s [-] %(instance)s%(message)s
Format string to use for log messages when context is undefined. Used by
oslo_log.formatters.ContextFormatter
logging_debug_format_suffix
Type string
Default %(funcName)s %(pathname)s:%(lineno)d
Additional data to append to log message when logging level for the message is DEBUG. Used
by oslo_log.formatters.ContextFormatter
logging_exception_prefix
Type string
Default %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s
%(instance)s
Prefix each line of exception output with this format. Used by
oslo_log.formatters.ContextFormatter
logging_user_identity_format
Type string
Default %(user)s %(tenant)s %(domain)s %(user_domain)s
%(project_domain)s
Defines the format string for %(user_identity)s that is used in logging_context_format_string.
Used by oslo_log.formatters.ContextFormatter
default_log_levels
Type list
Default ['amqp=WARN', 'amqplib=WARN', 'boto=WARN',
'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO',
'oslo.messaging=INFO', 'oslo_messaging=INFO',
'iso8601=WARN', 'requests.packages.urllib3.
connectionpool=WARN', 'urllib3.connectionpool=WARN',
'websocket=WARN', 'requests.packages.
urllib3.util.retry=WARN', 'urllib3.util.
retry=WARN', 'keystonemiddleware=WARN', 'routes.
middleware=WARN', 'stevedore=WARN', 'taskflow=WARN',
'keystoneauth=WARN', 'oslo.cache=INFO',
'oslo_policy=INFO', 'dogpile.core.dogpile=INFO']
List of package logging levels in logger=LEVEL pairs. This option is ignored if
log_config_append is set.
publish_errors
Type boolean
Default False
Enables or disables publication of error events.
instance_format
Type string
Default "[instance: %(uuid)s] "
The format for an instance that is passed with the log message.
instance_uuid_format
Type string
Default "[instance: %(uuid)s] "
The format for an instance UUID that is passed with the log message.
rate_limit_interval
Type integer
Default 0
Interval, number of seconds, of log rate limiting.
rate_limit_burst
Type integer
Default 0
Maximum number of logged messages per rate_limit_interval.
rate_limit_except_level
Type string
Default CRITICAL
Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty
string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string
means that all levels are filtered.
fatal_deprecations
Type boolean
Default False
Enables or disables fatal status of deprecations.
agent
report_interval
Type floating point
Default 30
Seconds between nodes reporting state to server; should be less than agent_down_time, best if it
is half or less than agent_down_time.
log_agent_heartbeats
Type boolean
Default False
Log agent heartbeats
cache
config_prefix
Type string
Default cache.oslo
Prefix for building the configuration dictionary for the cache region. This should not need to be
changed unless there is another dogpile.cache region with the same configuration name.
expiration_time
Type integer
Default 600
Default TTL, in seconds, for any cached item in the dogpile.cache region. This applies to any
cached method that doesn't have an explicit cache expiration time defined for it.
backend
Type string
Default dogpile.cache.null
Valid Values oslo_cache.memcache_pool, oslo_cache.dict, oslo_cache.mongo,
oslo_cache.etcd3gw, dogpile.cache.pymemcache, dogpile.cache.memcached,
dogpile.cache.pylibmc, dogpile.cache.bmemcached, dogpile.cache.dbm,
dogpile.cache.redis, dogpile.cache.memory, dogpile.cache.memory_pickle,
dogpile.cache.null
Cache backend module. For eventlet-based environments or environments with hundreds of
threaded servers, Memcache with pooling (oslo_cache.memcache_pool) is recommended. For
environments with less than 100 threaded servers, Memcached (dogpile.cache.memcached) or
Redis (dogpile.cache.redis) is recommended. Test environments with a single instance of the
server can use the dogpile.cache.memory backend.
backend_argument
Type multi-valued
Default ''
Arguments supplied to the backend module. Specify this option once per argument to be passed
to the dogpile.cache backend. Example format: <argname>:<value>.
proxies
Type list
Default []
Proxy classes to import that will affect the way the dogpile.cache backend functions. See the
dogpile.cache documentation on changing-backend-behavior.
enabled
Type boolean
Default False
Global toggle for caching.
debug_cache_backend
Type boolean
Default False
Extra debugging from the cache backend (cache keys, get/set/delete/etc calls). This is only really
useful if you need to see the specific cache-backend get/set/delete calls with the keys/values.
Typically this should be left set to false.
memcache_servers
Type list
Default ['localhost:11211']
Memcache servers in the format of host:port. (dogpile.cache.memcached and
oslo_cache.memcache_pool backends only). If a given host refers to an IPv6 address or a given
domain resolves to an IPv6 address, you should prefix the given address with the address family
(inet6), e.g. inet6:[::1]:11211, inet6:[fd12:3456:789a:1::1]:11211,
inet6:[controller-0.internalapi]:11211. If the address family is not given, the
default address family inet (IPv4) is used.
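For instance, a pooled memcached configuration for this agent could combine the options in this section as follows (server addresses are placeholders):
    [cache]
    enabled = true
    backend = oslo_cache.memcache_pool
    memcache_servers = 10.0.0.11:11211,inet6:[fd12:3456:789a:1::1]:11211
    memcache_pool_maxsize = 10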
memcache_dead_retry
Type integer
Default 300
Number of seconds memcached server is considered dead before it is tried again.
(dogpile.cache.memcache and oslo_cache.memcache_pool backends only).
memcache_socket_timeout
Type floating point
Default 1.0
Timeout in seconds for every call to a server. (dogpile.cache.memcache and
oslo_cache.memcache_pool backends only).
memcache_pool_maxsize
Type integer
Default 10
Max total number of open connections to every memcached server. (oslo_cache.memcache_pool
backend only).
memcache_pool_unused_timeout
Type integer
Default 60
Number of seconds a connection to memcached is held unused in the pool before it is closed.
(oslo_cache.memcache_pool backend only).
memcache_pool_connection_get_timeout
Type integer
Default 10
Number of seconds that an operation will wait to get a memcache client connection.
memcache_pool_flush_on_reconnect
Type boolean
Default False
Global toggle if memcache will be flushed on reconnect. (oslo_cache.memcache_pool backend
only).
tls_enabled
Type boolean
Default False
Global toggle for TLS usage when communicating with the caching servers.
tls_cafile
Type string
Default <None>
Path to a file of concatenated CA certificates in PEM format necessary to establish the caching
servers' authenticity. If tls_enabled is False, this option is ignored.
tls_certfile
Type string
Default <None>
Path to a single file in PEM format containing the client's certificate as well as any number of CA
certificates needed to establish the certificate's authenticity. This file is only required when client
side authentication is necessary. If tls_enabled is False, this option is ignored.
tls_keyfile
Type string
Default <None>
Path to a single file containing the client's private key. Otherwise the private key will be taken
from the file specified in tls_certfile. If tls_enabled is False, this option is ignored.
tls_allowed_ciphers
Type string
Default <None>
Set the available ciphers for sockets created with the TLS context. It should be a string in the
OpenSSL cipher list format. If not specified, all OpenSSL enabled ciphers will be available.
The Neutron metering service enables operators to account for the traffic in and out of the OpenStack
environment. The concept is quite simple: operators can create metering labels and decide whether the
labels are applied to all projects (tenants) or only to a specific one. Then, the operator needs to create
traffic rules in the metering labels. The traffic rules are used to match traffic in/out of the OpenStack
environment, and the accounting of packets and bytes is sent to the notification queue for further
processing by Ceilometer (or some other system that is consuming that queue). The message sent in the
queue is of type event. Therefore, it requires an event processing configuration to be added/enabled in
Ceilometer.
The metering agent has the following configurations (a minimal sample snippet follows this list):
• driver: the driver used to implement the metering rules. The default is
neutron.services.metering.drivers.noop, which means that nothing is executed on the
networking host. The only driver implemented so far is
neutron.services.metering.drivers.iptables.iptables_driver.IptablesMeteringDriver.
Therefore, only iptables is supported so far;
• measure_interval: the interval in seconds used to gather the bytes and packets information
from the network plane. The default value is 30 seconds;
• report_interval: the interval in seconds used to generate the report (message) of the data
that is gathered. The default value is 300 seconds.
• granular_traffic_data: defines whether the metering agent driver should present traffic data
in a granular fashion, instead of grouping all of the traffic data for all projects and routers to which
the labels were assigned. The default value is False for backward compatibility.
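A minimal metering_agent.ini snippet enabling the iptables driver with granular data, using only the options described above, could look like this:
    [DEFAULT]
    driver = neutron.services.metering.drivers.iptables.iptables_driver.IptablesMeteringDriver
    measure_interval = 30
    report_interval = 300
    granular_traffic_data = True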
The first_update and last_update timestamps represent the moment when the first and last
data collection happened within the report interval. On the other hand, the time represents the
difference between those two timestamps.
The tenant_id is only consistent when labels are not shared. Otherwise, it will contain the project
id of the last router of the last project processed when the agent was started up. In other words, it is
better not to use it when dealing with shared labels.
All of the messages generated in this configuration mode are sent to the message bus as l3.meter
events.
The message will also contain some attributes that can be found in the legacy mode such as bytes,
pkts, time, first_update, last_update, and host. The following is an example of a
JSON message with all of the possible attributes.
{
"resource_id": "router-f0f745d9a59c47fdbbdd187d718f9e41-label-
,→00c714f1-49c8-462c-8f5d-f05f21e035c7",
"project_id": "f0f745d9a59c47fdbbdd187d718f9e41",
"first_update": 1591058790,
"bytes": 0,
"label_id": "00c714f1-49c8-462c-8f5d-f05f21e035c7",
"label_name": "test1",
"last_update": 1591059037,
"host": "<hostname>",
"time": 247,
"pkts": 0,
"label_shared": true
}
The resource_id is a unique identifier for the resource being monitored. Here we consider a
resource to be any of the granularities that we handle.
Sample of metering_agent.ini
The following lists all of the possible configuration options one can use in the metering agent ini file.
DEFAULT
ovs_integration_bridge
Type string
Default br-int
Name of Open vSwitch bridge to use
Warning: This option is deprecated for removal. Its value may be silently ignored in the
future.
Reason This variable is a duplicate of OVS.integration_bridge. To be removed
in W.
ovs_use_veth
Type boolean
Default False
Uses veth for an OVS interface or not. Supports kernels with limited namespace support (e.g.
RHEL 6.5) and rate limiting on the router's gateway port, so long as ovs_use_veth is set to True.
interface_driver
Type string
Default <None>
The driver used to manage the virtual interface.
rpc_response_max_timeout
Type integer
Default 600
Maximum seconds to wait for a response from an RPC call.
driver
Type string
Default neutron.services.metering.drivers.noop.noop_driver.
NoopMeteringDriver
Metering driver
measure_interval
Type integer
Default 30
Interval between two metering measures
report_interval
Type integer
Default 300
Interval between two metering reports
granular_traffic_data
Type boolean
Default False
Defines if the metering agent driver should present traffic data in a granular fashion, instead of
grouping all of the traffic data for all projects and routers to which the labels were assigned. The
default value is False for backward compatibility.
debug
Type boolean
Default False
Mutable This option can be changed without restarting.
If set to true, the logging level will be set to DEBUG instead of the default INFO level.
log_config_append
Type string
Default <None>
Mutable This option can be changed without restarting.
The name of a logging configuration file. This file is appended to any existing logging config-
uration files. For details about logging configuration files, see the Python logging module doc-
umentation. Note that when logging configuration files are used then all logging configuration
is set in the configuration file and other logging configuration options are ignored (for example,
log-date-format).
log_date_format
Type string
Default %Y-%m-%d %H:%M:%S
Defines the format string for %(asctime)s in log records. Default: the value above. This option is
ignored if log_config_append is set.
log_file
Type string
Default <None>
(Optional) Name of log file to send logging output to. If no default is set, logging will go to stderr
as defined by use_stderr. This option is ignored if log_config_append is set.
log_dir
Type string
Default <None>
(Optional) The base directory used for relative log_file paths. This option is ignored if
log_config_append is set.
watch_log_file
Type boolean
Default False
Uses logging handler designed to watch file system. When log file is moved or removed this
handler will open a new log file with specified path instantaneously. It makes sense only if log_file
option is specified and Linux platform is used. This option is ignored if log_config_append is set.
use_syslog
Type boolean
Default False
Use syslog for logging. Existing syslog format is DEPRECATED and will be changed later to
honor RFC5424. This option is ignored if log_config_append is set.
use_journal
Type boolean
Default False
Enable journald for logging. If running in a systemd environment you may wish to enable journal
support. Doing so will use the journal native protocol which includes structured metadata in
addition to log messages. This option is ignored if log_config_append is set.
syslog_log_facility
Type string
Default LOG_USER
Syslog facility to receive log lines. This option is ignored if log_config_append is set.
use_json
Type boolean
Default False
Use JSON formatting for logging. This option is ignored if log_config_append is set.
use_stderr
Type boolean
Default False
Log output to standard error. This option is ignored if log_config_append is set.
use_eventlog
Type boolean
Default False
Log output to Windows Event Log.
log_rotate_interval
Type integer
Default 1
The amount of time before the log files are rotated. This option is ignored unless log_rotation_type
is set to interval.
log_rotate_interval_type
Type string
Default days
Valid Values Seconds, Minutes, Hours, Days, Weekday, Midnight
Rotation interval type. The time of the last file change (or the time when the service was started)
is used when scheduling the next rotation.
max_logfile_count
Type integer
Default 30
Maximum number of rotated log files.
max_logfile_size_mb
Type integer
Default 200
Log file maximum size in MB. This option is ignored if log_rotation_type is not set to size.
log_rotation_type
Type string
Default none
Valid Values interval, size, none
Log rotation type.
logging_context_format_string
Type string
Default %(asctime)s.%(msecs)03d %(process)d %(levelname)s
%(name)s [%(request_id)s %(user_identity)s]
%(instance)s%(message)s
Format string to use for log messages with context. Used by oslo_log.formatters.ContextFormatter
logging_default_format_string
Type string
Default %(asctime)s.%(msecs)03d %(process)d %(levelname)s
%(name)s [-] %(instance)s%(message)s
Format string to use for log messages when context is undefined. Used by
oslo_log.formatters.ContextFormatter
logging_debug_format_suffix
Type string
Default %(funcName)s %(pathname)s:%(lineno)d
Additional data to append to log message when logging level for the message is DEBUG. Used
by oslo_log.formatters.ContextFormatter
logging_exception_prefix
Type string
Default %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s
%(instance)s
Prefix each line of exception output with this format. Used by
oslo_log.formatters.ContextFormatter
logging_user_identity_format
Type string
Default %(user)s %(tenant)s %(domain)s %(user_domain)s
%(project_domain)s
Defines the format string for %(user_identity)s that is used in logging_context_format_string.
Used by oslo_log.formatters.ContextFormatter
default_log_levels
Type list
Default ['amqp=WARN', 'amqplib=WARN', 'boto=WARN',
'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO',
'oslo.messaging=INFO', 'oslo_messaging=INFO',
'iso8601=WARN', 'requests.packages.urllib3.
connectionpool=WARN', 'urllib3.connectionpool=WARN',
'websocket=WARN', 'requests.packages.
urllib3.util.retry=WARN', 'urllib3.util.
retry=WARN', 'keystonemiddleware=WARN', 'routes.
middleware=WARN', 'stevedore=WARN', 'taskflow=WARN',
'keystoneauth=WARN', 'oslo.cache=INFO',
'oslo_policy=INFO', 'dogpile.core.dogpile=INFO']
List of package logging levels in logger=LEVEL pairs. This option is ignored if
log_config_append is set.
publish_errors
Type boolean
Default False
Enables or disables publication of error events.
instance_format
Type string
Default "[instance: %(uuid)s] "
The format for an instance that is passed with the log message.
instance_uuid_format
Type string
Default "[instance: %(uuid)s] "
The format for an instance UUID that is passed with the log message.
rate_limit_interval
Type integer
Default 0
Interval, number of seconds, of log rate limiting.
rate_limit_burst
Type integer
Default 0
Maximum number of logged messages per rate_limit_interval.
rate_limit_except_level
Type string
Default CRITICAL
Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or empty
string. Logs with level greater or equal to rate_limit_except_level are not filtered. An empty string
means that all levels are filtered.
fatal_deprecations
Type boolean
Default False
Enables or disables fatal status of deprecations.
agent
report_interval
Type floating point
Default 30
Seconds between nodes reporting state to server; should be less than agent_down_time, best if it
is half or less than agent_down_time.
log_agent_heartbeats
Type boolean
Default False
Log agent heartbeats
ovs
ovsdb_connection
Type string
Default tcp:127.0.0.1:6640
The connection string for the OVSDB backend. Will be used for all ovsdb commands and by
ovsdb-client when monitoring
ssl_key_file
Type string
Default <None>
The SSL private key file to use when interacting with OVSDB. Required when using an ssl:
prefixed ovsdb_connection
ssl_cert_file
Type string
Default <None>
The SSL certificate file to use when interacting with OVSDB. Required when using an ssl: pre-
fixed ovsdb_connection
ssl_ca_cert_file
Type string
Default <None>
The Certificate Authority (CA) certificate to use when interacting with OVSDB. Required when
using an ssl: prefixed ovsdb_connection
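For example, an SSL OVSDB connection pairs an ssl: prefixed connection string with the three certificate options (the address and file paths are placeholders):
    [ovs]
    ovsdb_connection = ssl:127.0.0.1:6640
    ssl_key_file = /etc/neutron/ovsdb-key.pem
    ssl_cert_file = /etc/neutron/ovsdb-cert.pem
    ssl_ca_cert_file = /etc/neutron/ovsdb-cacert.pem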
ovsdb_debug
Type boolean
Default False
Enable OVSDB debug logs
ovsdb_timeout
Type integer
Default 10
Timeout in seconds for ovsdb commands. If the timeout expires, ovsdb commands will fail with
ALARMCLOCK error.
bridge_mac_table_size
Type integer
Default 50000
The maximum number of MAC addresses to learn on a bridge managed by the Neutron OVS
agent. Values outside a reasonable range (10 to 1,000,000) might be overridden by Open vSwitch
according to the documentation.
igmp_snooping_enable
Type boolean
Default False
Enable IGMP snooping for integration bridge. If this option is set to True, support for Internet
Group Management Protocol (IGMP) is enabled in integration bridge. Setting this option to True
will also enable Open vSwitch mcast-snooping-disable-flood-unregistered flag. This option will
disable flooding of unregistered multicast packets to all ports. The switch will send unregistered
multicast packets only to ports connected to multicast routers.
Warning: JSON formatted policy file is deprecated since Neutron 18.0.0 (Wallaby). Use the
oslopolicy-convert-json-to-yaml tool to migrate your existing JSON-formatted policy file to
YAML in a backward-compatible way.
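A conversion run might look like the following (file paths are placeholders; check the tool's --help output for the exact flags available in your oslo.policy release):
    oslopolicy-convert-json-to-yaml --namespace neutron \
      --policy-file /etc/neutron/policy.json \
      --output-file /etc/neutron/policy.yaml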
Neutron, like most OpenStack projects, uses a policy language to restrict permissions on REST API
actions.
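Operators can override individual policies in a policy file (typically /etc/neutron/policy.yaml). The sketch below simply restates two defaults from the listing that follows, to illustrate the file format; it does not change any behavior:
    # /etc/neutron/policy.yaml
    "create_network:shared": "role:admin and system_scope:all"
    "get_metering_label": "role:reader and system_scope:all"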
The following is an overview of all available policies in neutron.
9.2.1 neutron
context_is_admin
Default role:admin
Rule for cloud admin access
owner
Default tenant_id:%(tenant_id)s
Rule for resource owner access
admin_or_owner
Default rule:context_is_admin or rule:owner
Rule for admin or owner access
context_is_advsvc
Default role:advsvc
Rule for advsvc role access
admin_or_network_owner
Default rule:context_is_admin or tenant_id:%(network:tenant_id)s
Rule for admin or network owner access
admin_owner_or_network_owner
• GET /address-groups/{id}
Scope Types
• system
• project
Get an address group
shared_address_scopes
Default field:address_scopes:shared=True
Definition of a shared address scope
create_address_scope
Default (role:admin and system_scope:all) or (role:member
and project_id:%(project_id)s)
Operations
• POST /address-scopes
Scope Types
• system
• project
Create an address scope
create_address_scope:shared
Default role:admin and system_scope:all
Operations
• POST /address-scopes
Scope Types
• system
• project
Create a shared address scope
get_address_scope
Default (role:reader and system_scope:all) or
(role:reader and project_id:%(project_id)s) or
rule:shared_address_scopes
Operations
• GET /address-scopes
• GET /address-scopes/{id}
Scope Types
• system
• project
update_agent
Default role:admin and system_scope:all
Operations
• PUT /agents/{id}
Scope Types
• system
Update an agent
delete_agent
Default role:admin and system_scope:all
Operations
• DELETE /agents/{id}
Scope Types
• system
Delete an agent
create_dhcp-network
Default role:admin and system_scope:all
Operations
• POST /agents/{agent_id}/dhcp-networks
Scope Types
• system
Add a network to a DHCP agent
get_dhcp-networks
Default role:reader and system_scope:all
Operations
• GET /agents/{agent_id}/dhcp-networks
Scope Types
• system
List networks on a DHCP agent
delete_dhcp-network
Default role:admin and system_scope:all
Operations
• DELETE /agents/{agent_id}/dhcp-networks/
{network_id}
Scope Types
• system
• system
List L3 agents hosting a router
get_auto_allocated_topology
Default (role:reader and system_scope:all) or (role:reader
and project_id:%(project_id)s)
Operations
• GET /auto-allocated-topology/{project_id}
Scope Types
• system
• project
Get a project's auto-allocated topology
delete_auto_allocated_topology
Default (role:admin and system_scope:all) or (role:member
and project_id:%(project_id)s)
Operations
• DELETE /auto-allocated-topology/{project_id}
Scope Types
• system
• project
Delete a project's auto-allocated topology
get_availability_zone
Default role:reader and system_scope:all
Operations
• GET /availability_zones
Scope Types
• system
List availability zones
create_flavor
Default role:admin and system_scope:all
Operations
• POST /flavors
Scope Types
• system
Create a flavor
get_flavor
Scope Types
• system
Get a service profile
update_service_profile
Default role:admin and system_scope:all
Operations
• PUT /service_profiles/{id}
Scope Types
• system
Update a service profile
delete_service_profile
Default role:admin and system_scope:all
Operations
• DELETE /service_profiles/{id}
Scope Types
• system
Delete a service profile
get_flavor_service_profile
Default (role:reader and system_scope:all) or (role:reader
and project_id:%(project_id)s)
Scope Types
• system
• project
Get a flavor associated with a given service profile. There are no corresponding GET operations
in the API currently. This rule is currently referred to only in the DELETE of flavor_service_profile.
create_flavor_service_profile
Default role:admin and system_scope:all
Operations
• POST /flavors/{flavor_id}/service_profiles
Scope Types
• system
Associate a flavor with a service profile
delete_flavor_service_profile
Default role:admin and system_scope:all
Operations
• DELETE /flavors/{flavor_id}/service_profiles/
{profile_id}
Scope Types
• system
Disassociate a flavor from a service profile
create_floatingip
Default (role:admin and system_scope:all) or (role:member
and project_id:%(project_id)s)
Operations
• POST /floatingips
Scope Types
• system
• project
Create a floating IP
create_floatingip:floating_ip_address
Default role:admin and system_scope:all
Operations
• POST /floatingips
Scope Types
• system
• project
Create a floating IP with a specific IP address
get_floatingip
Default (role:reader and system_scope:all) or (role:reader
and project_id:%(project_id)s)
Operations
• GET /floatingips
• GET /floatingips/{id}
Scope Types
• system
• project
Get a floating IP
update_floatingip
Default (role:admin and system_scope:all) or (role:member
and project_id:%(project_id)s)
Operations
• PUT /floatingips/{id}
Scope Types
• system
• project
Update a floating IP
delete_floatingip
Default (role:admin and system_scope:all) or (role:member
and project_id:%(project_id)s)
Operations
• DELETE /floatingips/{id}
Scope Types
• system
• project
Delete a floating IP
get_floatingip_pool
Default (role:reader and system_scope:all) or (role:reader
and project_id:%(project_id)s)
Operations
• GET /floatingip_pools
Scope Types
• system
• project
Get floating IP pools
create_floatingip_port_forwarding
Default (role:admin and system_scope:all) or
(role:member and project_id:%(project_id)s) or
rule:ext_parent_owner
Operations
• POST /floatingips/{floatingip_id}/port_forwardings
Scope Types
• system
• project
Create a floating IP port forwarding
get_floatingip_port_forwarding
• POST /routers/{router_id}/conntrack_helpers
Scope Types
• system
• project
Create a router conntrack helper
get_router_conntrack_helper
Default (role:reader and system_scope:all) or
(role:reader and project_id:%(project_id)s) or
rule:ext_parent_owner
Operations
• GET /routers/{router_id}/conntrack_helpers
• GET /routers/{router_id}/conntrack_helpers/
{conntrack_helper_id}
Scope Types
• system
• project
Get a router conntrack helper
update_router_conntrack_helper
Default (role:admin and system_scope:all) or
(role:member and project_id:%(project_id)s) or
rule:ext_parent_owner
Operations
• PUT /routers/{router_id}/conntrack_helpers/
{conntrack_helper_id}
Scope Types
• system
• project
Update a router conntrack helper
delete_router_conntrack_helper
Default (role:admin and system_scope:all) or
(role:member and project_id:%(project_id)s) or
rule:ext_parent_owner
Operations
• DELETE /routers/{router_id}/conntrack_helpers/
{conntrack_helper_id}
Scope Types
• system
• project
Delete a router conntrack helper
get_loggable_resource
Default role:reader and system_scope:all
Operations
• GET /log/loggable-resources
Scope Types
• system
Get loggable resources
create_log
Default role:admin and system_scope:all
Operations
• POST /log/logs
Scope Types
• system
Create a network log
get_log
Default role:reader and system_scope:all
Operations
• GET /log/logs
• GET /log/logs/{id}
Scope Types
• system
Get a network log
update_log
Default role:admin and system_scope:all
Operations
• PUT /log/logs/{id}
Scope Types
• system
Update a network log
delete_log
Default role:admin and system_scope:all
Operations
• DELETE /log/logs/{id}
Scope Types
• system
Delete a network log
create_metering_label
Default role:admin and system_scope:all
Operations
• POST /metering/metering-labels
Scope Types
• system
• project
Create a metering label
get_metering_label
Default role:reader and system_scope:all
Operations
• GET /metering/metering-labels
• GET /metering/metering-labels/{id}
Scope Types
• system
• project
Get a metering label
delete_metering_label
Default role:admin and system_scope:all
Operations
• DELETE /metering/metering-labels/{id}
Scope Types
• system
• project
Delete a metering label
create_metering_label_rule
Default role:admin and system_scope:all
Operations
• POST /metering/metering-label-rules
Scope Types
• system
• project
Create a metering label rule
get_metering_label_rule
Default role:reader and system_scope:all
Operations
• GET /metering/metering-label-rules
• GET /metering/metering-label-rules/{id}
Scope Types
• system
• project
Get a metering label rule
delete_metering_label_rule
Default role:admin and system_scope:all
Operations
• DELETE /metering/metering-label-rules/{id}
Scope Types
• system
• project
Delete a metering label rule
external
Default field:networks:router:external=True
Definition of an external network
create_network
Default (role:admin and system_scope:all) or (role:member
and project_id:%(project_id)s)
Operations
• POST /networks
Scope Types
• system
• project
Create a network
create_network:shared
Default role:admin and system_scope:all
Operations
• POST /networks
Scope Types
• system
Create a shared network
create_network:router:external
Default role:admin and system_scope:all
Operations
• POST /networks
Scope Types
• system
Create an external network
create_network:is_default
Default role:admin and system_scope:all
Operations
• POST /networks
Scope Types
• system
Specify is_default attribute when creating a network
create_network:port_security_enabled
Default (role:admin and system_scope:all) or (role:member
and project_id:%(project_id)s)
Operations
• POST /networks
Scope Types
• system
• project
Specify port_security_enabled attribute when creating a network
create_network:segments
Default role:admin and system_scope:all
Operations
• POST /networks
Scope Types
• system
Specify segments attribute when creating a network
create_network:provider:network_type
• GET /networks
• GET /networks/{id}
Scope Types
• system
• project
Get router:external attribute of a network
get_network:segments
Default role:reader and system_scope:all
Operations
• GET /networks
• GET /networks/{id}
Scope Types
• system
Get segments attribute of a network
get_network:provider:network_type
Default role:reader and system_scope:all
Operations
• GET /networks
• GET /networks/{id}
Scope Types
• system
Get provider:network_type attribute of a network
get_network:provider:physical_network
Default role:reader and system_scope:all
Operations
• GET /networks
• GET /networks/{id}
Scope Types
• system
Get provider:physical_network attribute of a network
get_network:provider:segmentation_id
Default role:reader and system_scope:all
Operations
• GET /networks
• GET /networks/{id}
Scope Types
• system
Get provider:segmentation_id attribute of a network
update_network
Default (role:admin and system_scope:all) or (role:member
and project_id:%(project_id)s)
Operations
• PUT /networks/{id}
Scope Types
• system
• project
Update a network
update_network:segments
Default role:admin and system_scope:all
Operations
• PUT /networks/{id}
Scope Types
• system
Update segments attribute of a network
update_network:shared
Default role:admin and system_scope:all
Operations
• PUT /networks/{id}
Scope Types
• system
Update shared attribute of a network
update_network:provider:network_type
Default role:admin and system_scope:all
Operations
• PUT /networks/{id}
Scope Types
• system
Update provider:network_type attribute of a network
update_network:provider:physical_network
Operations
• PUT /network_segment_ranges/{id}
Scope Types
• system
Update a network segment range
delete_network_segment_range
Default role:admin and system_scope:all
Operations
• DELETE /network_segment_ranges/{id}
Scope Types
• system
Delete a network segment range
network_device
Default field:port:device_owner=~^network:
Definition of port with network device_owner
admin_or_data_plane_int
Default rule:context_is_admin or role:data_plane_integrator
Rule for data plane integration
create_port
Default (role:admin and system_scope:all) or (role:member
and project_id:%(project_id)s)
Operations
• POST /ports
Scope Types
• system
• project
Create a port
create_port:device_owner
Default not rule:network_device or role:admin
and system_scope:all or role:admin and
project_id:%(project_id)s or rule:context_is_advsvc
or rule:network_owner
Operations
• POST /ports
Scope Types
• system
• project
Specify device_owner attribute when creating a port
create_port:mac_address
Default rule:context_is_advsvc or rule:network_owner or
role:admin and system_scope:all or role:admin and
project_id:%(project_id)s
Operations
• POST /ports
Scope Types
• system
• project
Specify mac_address attribute when creating a port
create_port:fixed_ips
Default rule:context_is_advsvc or rule:network_owner or
role:admin and system_scope:all or role:admin and
project_id:%(project_id)s or rule:shared
Operations
• POST /ports
Scope Types
• system
• project
Specify fixed_ips information when creating a port
create_port:fixed_ips:ip_address
Default rule:context_is_advsvc or rule:network_owner or
role:admin and system_scope:all or role:admin and
project_id:%(project_id)s
Operations
• POST /ports
Scope Types
• system
• project
Specify IP address in fixed_ips when creating a port
create_port:fixed_ips:subnet_id
Default rule:context_is_advsvc or rule:network_owner or
role:admin and system_scope:all or role:admin and
project_id:%(project_id)s or rule:shared
Operations
• POST /ports
Scope Types
• system
• project
Specify subnet ID in fixed_ips when creating a port
create_port:port_security_enabled
Default rule:context_is_advsvc or rule:network_owner or
role:admin and system_scope:all or role:admin and
project_id:%(project_id)s
Operations
• POST /ports
Scope Types
• system
• project
Specify port_security_enabled attribute when creating a port
create_port:binding:host_id
Default role:admin and system_scope:all
Operations
• POST /ports
Scope Types
• system
Specify binding:host_id attribute when creating a port
create_port:binding:profile
Default role:admin and system_scope:all
Operations
• POST /ports
Scope Types
• system
Specify binding:profile attribute when creating a port
create_port:binding:vnic_type
Default (role:admin and system_scope:all) or (role:member
and project_id:%(project_id)s)
Operations
• POST /ports
Scope Types
• system
• project
Specify binding:vnic_type attribute when creating a port
create_port:allowed_address_pairs
Default role:admin and system_scope:all or role:admin and
project_id:%(project_id)s or rule:network_owner
Operations
• POST /ports
Scope Types
• project
• system
Specify allowed_address_pairs attribute when creating a port
create_port:allowed_address_pairs:mac_address
Default role:admin and system_scope:all or role:admin and
project_id:%(project_id)s or rule:network_owner
Operations
• POST /ports
Scope Types
• project
• system
Specify mac_address of allowed_address_pairs attribute when creating a port
create_port:allowed_address_pairs:ip_address
Default role:admin and system_scope:all or role:admin and
project_id:%(project_id)s or rule:network_owner
Operations
• POST /ports
Scope Types
• project
• system
Specify ip_address of allowed_address_pairs attribute when creating a port
get_port
Default rule:context_is_advsvc or (role:reader
and system_scope:all) or (role:reader and
project_id:%(project_id)s)
Operations
• GET /ports
• GET /ports/{id}
Scope Types
• project
• system
Get a port
get_port:binding:vif_type
Default role:reader and system_scope:all
Operations
• GET /ports
• GET /ports/{id}
Scope Types
• system
Get binding:vif_type attribute of a port
get_port:binding:vif_details
Default role:reader and system_scope:all
Operations
• GET /ports
• GET /ports/{id}
Scope Types
• system
Get binding:vif_details attribute of a port
get_port:binding:host_id
Default role:reader and system_scope:all
Operations
• GET /ports
• GET /ports/{id}
Scope Types
• system
Get binding:host_id attribute of a port
get_port:binding:profile
Default role:reader and system_scope:all
Operations
• GET /ports
• GET /ports/{id}
Scope Types
• system
Get binding:profile attribute of a port
get_port:resource_request
Default role:reader and system_scope:all
Operations
• GET /ports
• GET /ports/{id}
Scope Types
• system
Get resource_request attribute of a port
update_port
Default (role:admin and system_scope:all) or
(role:member and project_id:%(project_id)s) or
rule:context_is_advsvc
Operations
• PUT /ports/{id}
Scope Types
• system
• project
Update a port
update_port:device_owner
Default not rule:network_device or rule:context_is_advsvc
or rule:network_owner or role:admin
and system_scope:all or role:admin and
project_id:%(project_id)s
Operations
• PUT /ports/{id}
Scope Types
• system
• project
Update device_owner attribute of a port
update_port:mac_address
Default role:admin and system_scope:all or
rule:context_is_advsvc
Operations
• PUT /ports/{id}
Scope Types
• system
• project
Update mac_address attribute of a port
update_port:fixed_ips
Default rule:context_is_advsvc or rule:network_owner or
role:admin and system_scope:all or role:admin and
project_id:%(project_id)s
Operations
• PUT /ports/{id}
Scope Types
• system
• project
Specify fixed_ips information when updating a port
update_port:fixed_ips:ip_address
Default rule:context_is_advsvc or rule:network_owner or
role:admin and system_scope:all or role:admin and
project_id:%(project_id)s
Operations
• PUT /ports/{id}
Scope Types
• system
• project
Specify IP address in fixed_ips information when updating a port
update_port:fixed_ips:subnet_id
Default rule:context_is_advsvc or rule:network_owner or
role:admin and system_scope:all or role:admin and
project_id:%(project_id)s or rule:shared
Operations
• PUT /ports/{id}
Scope Types
• system
• project
Specify subnet ID in fixed_ips information when updating a port
update_port:port_security_enabled
Operations
• PUT /ports/{id}
Scope Types
• system
• project
Update allowed_address_pairs attribute of a port
update_port:allowed_address_pairs:mac_address
Default role:admin and system_scope:all or role:admin and
project_id:%(project_id)s or rule:network_owner
Operations
• PUT /ports/{id}
Scope Types
• system
• project
Update mac_address of allowed_address_pairs attribute of a port
update_port:allowed_address_pairs:ip_address
Default role:admin and system_scope:all or role:admin and
project_id:%(project_id)s or rule:network_owner
Operations
• PUT /ports/{id}
Scope Types
• system
• project
Update ip_address of allowed_address_pairs attribute of a port
update_port:data_plane_status
Default role:admin and system_scope:all or
role:data_plane_integrator
Operations
• PUT /ports/{id}
Scope Types
• system
• project
Update data_plane_status attribute of a port
delete_port
Operations
• DELETE /qos/policies/{id}
Scope Types
• system
Delete a QoS policy
get_rule_type
Default (role:reader and system_scope:all) or (role:reader
and project_id:%(project_id)s)
Operations
• GET /qos/rule-types
• GET /qos/rule-types/{rule_type}
Scope Types
• system
• project
Get available QoS rule types
get_policy_bandwidth_limit_rule
Default (role:reader and system_scope:all) or (role:reader
and project_id:%(project_id)s)
Operations
• GET /qos/policies/{policy_id}/bandwidth_limit_rules
• GET /qos/policies/{policy_id}/
bandwidth_limit_rules/{rule_id}
Scope Types
• system
• project
Get a QoS bandwidth limit rule
create_policy_bandwidth_limit_rule
Default role:admin and system_scope:all
Operations
• POST /qos/policies/{policy_id}/
bandwidth_limit_rules
Scope Types
• system
Create a QoS bandwidth limit rule
update_policy_bandwidth_limit_rule
Default role:admin and system_scope:all
Operations
• PUT /qos/policies/{policy_id}/
bandwidth_limit_rules/{rule_id}
Scope Types
• system
Update a QoS bandwidth limit rule
delete_policy_bandwidth_limit_rule
Default role:admin and system_scope:all
Operations
• DELETE /qos/policies/{policy_id}/
bandwidth_limit_rules/{rule_id}
Scope Types
• system
Delete a QoS bandwidth limit rule
get_policy_dscp_marking_rule
Default (role:reader and system_scope:all) or (role:reader
and project_id:%(project_id)s)
Operations
• GET /qos/policies/{policy_id}/dscp_marking_rules
• GET /qos/policies/{policy_id}/dscp_marking_rules/
{rule_id}
Scope Types
• system
• project
Get a QoS DSCP marking rule
create_policy_dscp_marking_rule
Default role:admin and system_scope:all
Operations
• POST /qos/policies/{policy_id}/dscp_marking_rules
Scope Types
• system
Create a QoS DSCP marking rule
update_policy_dscp_marking_rule
Default role:admin and system_scope:all
Operations
• PUT /qos/policies/{policy_id}/dscp_marking_rules/
{rule_id}
Scope Types
• system
Update a QoS DSCP marking rule
delete_policy_dscp_marking_rule
Default role:admin and system_scope:all
Operations
• DELETE /qos/policies/{policy_id}/
dscp_marking_rules/{rule_id}
Scope Types
• system
Delete a QoS DSCP marking rule
get_policy_minimum_bandwidth_rule
Default (role:reader and system_scope:all) or (role:reader
and project_id:%(project_id)s)
Operations
• GET /qos/policies/{policy_id}/
minimum_bandwidth_rules
• GET /qos/policies/{policy_id}/
minimum_bandwidth_rules/{rule_id}
Scope Types
• system
• project
Get a QoS minimum bandwidth rule
create_policy_minimum_bandwidth_rule
Default role:admin and system_scope:all
Operations
• POST /qos/policies/{policy_id}/
minimum_bandwidth_rules
Scope Types
• system
Create a QoS minimum bandwidth rule
update_policy_minimum_bandwidth_rule
Default role:admin and system_scope:all
Operations
• PUT /qos/policies/{policy_id}/
minimum_bandwidth_rules/{rule_id}
Scope Types
• system
Update a QoS minimum bandwidth rule
delete_policy_minimum_bandwidth_rule
Default role:admin and system_scope:all
Operations
• DELETE /qos/policies/{policy_id}/
minimum_bandwidth_rules/{rule_id}
Scope Types
• system
Delete a QoS minimum bandwidth rule
get_alias_bandwidth_limit_rule
Default rule:get_policy_bandwidth_limit_rule
Operations
• GET /qos/alias_bandwidth_limit_rules/{rule_id}/
Get a QoS bandwidth limit rule through alias
update_alias_bandwidth_limit_rule
Default rule:update_policy_bandwidth_limit_rule
Operations
• PUT /qos/alias_bandwidth_limit_rules/{rule_id}/
Update a QoS bandwidth limit rule through alias
delete_alias_bandwidth_limit_rule
Default rule:delete_policy_bandwidth_limit_rule
Operations
• DELETE /qos/alias_bandwidth_limit_rules/{rule_id}/
Delete a QoS bandwidth limit rule through alias
get_alias_dscp_marking_rule
Default rule:get_policy_dscp_marking_rule
Operations
• GET /qos/alias_dscp_marking_rules/{rule_id}/
Get a QoS DSCP marking rule through alias
update_alias_dscp_marking_rule
Default rule:update_policy_dscp_marking_rule
Operations
• PUT /qos/alias_dscp_marking_rules/{rule_id}/
Update a QoS DSCP marking rule through alias
delete_alias_dscp_marking_rule
Default rule:delete_policy_dscp_marking_rule
Operations
• DELETE /qos/alias_dscp_marking_rules/{rule_id}/
Delete a QoS DSCP marking rule through alias
get_alias_minimum_bandwidth_rule
Default rule:get_policy_minimum_bandwidth_rule
Operations
• GET /qos/alias_minimum_bandwidth_rules/{rule_id}/
Get a QoS minimum bandwidth rule through alias
update_alias_minimum_bandwidth_rule
Default rule:update_policy_minimum_bandwidth_rule
Operations
• PUT /qos/alias_minimum_bandwidth_rules/{rule_id}/
Update a QoS minimum bandwidth rule through alias
delete_alias_minimum_bandwidth_rule
Default rule:delete_policy_minimum_bandwidth_rule
Operations
• DELETE /qos/alias_minimum_bandwidth_rules/
{rule_id}/
Delete a QoS minimum bandwidth rule through alias
get_quota
Default role:reader and system_scope:all
Operations
• GET /quota
• GET /quota/{id}
Scope Types
• system
Get a resource quota
update_quota
Default role:admin and system_scope:all
Operations
• PUT /quota/{id}
Scope Types
• system
Update a resource quota
delete_quota
Default role:admin and system_scope:all
Operations
• DELETE /quota/{id}
Scope Types
• system
Delete a resource quota
restrict_wildcard
Default (not field:rbac_policy:target_tenant=*) or
rule:admin_only
Definition of a wildcard target_tenant
create_rbac_policy
Default (role:admin and system_scope:all) or (role:member
and project_id:%(project_id)s)
Operations
• POST /rbac-policies
Scope Types
• system
• project
Create an RBAC policy
create_rbac_policy:target_tenant
Default role:admin and system_scope:all or (not
field:rbac_policy:target_tenant=*)
Operations
• POST /rbac-policies
Scope Types
• system
• project
Specify target_tenant when creating an RBAC policy
update_rbac_policy
Default (role:admin and system_scope:all) or (role:member
and project_id:%(project_id)s)
Operations
• PUT /rbac-policies/{id}
Scope Types
• project
• system
Update an RBAC policy
update_rbac_policy:target_tenant
Default role:admin and system_scope:all or (not
field:rbac_policy:target_tenant=*)
Operations
• PUT /rbac-policies/{id}
Scope Types
• system
• project
Update target_tenant attribute of an RBAC policy
get_rbac_policy
Default (role:reader and system_scope:all) or (role:reader
and project_id:%(project_id)s)
Operations
• GET /rbac-policies
• GET /rbac-policies/{id}
Scope Types
• project
• system
Get an RBAC policy
delete_rbac_policy
Default (role:admin and system_scope:all) or (role:member
and project_id:%(project_id)s)
Operations
• DELETE /rbac-policies/{id}
Scope Types
• project
• system
Delete an RBAC policy
create_router
• POST /routers
Scope Types
• system
• project
Specify network_id in external_gateway_info information when creating a router
create_router:external_gateway_info:enable_snat
Default role:admin and system_scope:all
Operations
• POST /routers
Scope Types
• system
Specify enable_snat in external_gateway_info information when creating a router
create_router:external_gateway_info:external_fixed_ips
Default role:admin and system_scope:all
Operations
• POST /routers
Scope Types
• system
Specify external_fixed_ips in external_gateway_info information when creating
a router
get_router
Default (role:reader and system_scope:all) or (role:reader
and project_id:%(project_id)s)
Operations
• GET /routers
• GET /routers/{id}
Scope Types
• system
• project
Get a router
get_router:distributed
Default role:reader and system_scope:all
Operations
• GET /routers
• GET /routers/{id}
Scope Types
• system
Get distributed attribute of a router
get_router:ha
Default role:reader and system_scope:all
Operations
• GET /routers
• GET /routers/{id}
Scope Types
• system
Get ha attribute of a router
update_router
Default (role:admin and system_scope:all) or (role:member
and project_id:%(project_id)s)
Operations
• PUT /routers/{id}
Scope Types
• system
• project
Update a router
update_router:distributed
Default role:admin and system_scope:all
Operations
• PUT /routers/{id}
Scope Types
• system
Update distributed attribute of a router
update_router:ha
Default role:admin and system_scope:all
Operations
• PUT /routers/{id}
Scope Types
• system
Update ha attribute of a router
update_router:external_gateway_info
• DELETE /routers/{id}
Scope Types
• system
• project
Delete a router
add_router_interface
Default (role:admin and system_scope:all) or (role:member
and project_id:%(project_id)s)
Operations
• PUT /routers/{id}/add_router_interface
Scope Types
• system
• project
Add an interface to a router
remove_router_interface
Default (role:admin and system_scope:all) or (role:member
and project_id:%(project_id)s)
Operations
• PUT /routers/{id}/remove_router_interface
Scope Types
• system
• project
Remove an interface from a router
admin_or_sg_owner
Default rule:context_is_admin or tenant_id:%(security_group:tenant_id)s
Rule for admin or security group owner access
admin_owner_or_sg_owner
Default rule:owner or rule:admin_or_sg_owner
Rule for resource owner, admin or security group owner access
create_security_group
Default (role:admin and system_scope:all) or (role:member
and project_id:%(project_id)s)
Operations
• POST /security-groups
Scope Types
• system
• project
Create a security group
get_security_group
Default (role:reader and system_scope:all) or (role:reader
and project_id:%(project_id)s)
Operations
• GET /security-groups
• GET /security-groups/{id}
Scope Types
• system
• project
Get a security group
update_security_group
Default (role:admin and system_scope:all) or (role:member
and project_id:%(project_id)s)
Operations
• PUT /security-groups/{id}
Scope Types
• system
• project
Update a security group
delete_security_group
Default (role:admin and system_scope:all) or (role:member
and project_id:%(project_id)s)
Operations
• DELETE /security-groups/{id}
Scope Types
• system
• project
Delete a security group
create_security_group_rule
Default (role:admin and system_scope:all) or (role:member
and project_id:%(project_id)s)
Operations
• POST /security-group-rules
Scope Types
• system
• project
Create a security group rule
get_security_group_rule
Default (role:reader and system_scope:all) or (role:reader
and project_id:%(project_id)s) or rule:sg_owner
Operations
• GET /security-group-rules
• GET /security-group-rules/{id}
Scope Types
• system
• project
Get a security group rule
delete_security_group_rule
Default (role:admin and system_scope:all) or (role:member
and project_id:%(project_id)s)
Operations
• DELETE /security-group-rules/{id}
Scope Types
• system
• project
Delete a security group rule
create_segment
Default role:admin and system_scope:all
Operations
• POST /segments
Scope Types
• system
Create a segment
get_segment
Default role:reader and system_scope:all
Operations
• GET /segments
• GET /segments/{id}
Scope Types
• system
Get a segment
update_segment
Default role:admin and system_scope:all
Operations
• PUT /segments/{id}
Scope Types
• system
Update a segment
delete_segment
Default role:admin and system_scope:all
Operations
• DELETE /segments/{id}
Scope Types
• system
Delete a segment
get_service_provider
Default (role:reader and system_scope:all) or (role:reader
and project_id:%(project_id)s)
Operations
• GET /service-providers
Scope Types
• system
• project
Get service providers
create_subnet
Default (role:admin and system_scope:all) or (role:member
and project_id:%(project_id)s) or rule:network_owner
Operations
• POST /subnets
Scope Types
• system
• project
Create a subnet
create_subnet:segment_id
Default role:admin and system_scope:all
Operations
• POST /subnets
Scope Types
• system
Specify segment_id attribute when creating a subnet
create_subnet:service_types
Default role:admin and system_scope:all
Operations
• POST /subnets
Scope Types
• system
Specify service_types attribute when creating a subnet
get_subnet
Default (role:reader and system_scope:all) or (role:reader
and project_id:%(project_id)s) or rule:shared
Operations
• GET /subnets
• GET /subnets/{id}
Scope Types
• system
• project
Get a subnet
get_subnet:segment_id
Default role:reader and system_scope:all
Operations
• GET /subnets
• GET /subnets/{id}
Scope Types
• system
Get segment_id attribute of a subnet
update_subnet
Default (role:admin and system_scope:all) or (role:member
and project_id:%(project_id)s) or rule:network_owner
Operations
• PUT /subnets/{id}
Scope Types
• system
• project
Update a subnet
update_subnet:segment_id
Default role:admin and system_scope:all
Operations
• PUT /subnets/{id}
Scope Types
• system
Update segment_id attribute of a subnet
update_subnet:service_types
Default role:admin and system_scope:all
Operations
• PUT /subnets/{id}
Scope Types
• system
Update service_types attribute of a subnet
delete_subnet
Default (role:admin and system_scope:all) or (role:member
and project_id:%(project_id)s) or rule:network_owner
Operations
• DELETE /subnets/{id}
Scope Types
• system
• project
Delete a subnet
shared_subnetpools
Default field:subnetpools:shared=True
Definition of a shared subnetpool
create_subnetpool
Default (role:admin and system_scope:all) or (role:member
and project_id:%(project_id)s)
Operations
• POST /subnetpools
Scope Types
• project
• system
Create a subnetpool
create_subnetpool:shared
Default role:admin and system_scope:all
Operations
• POST /subnetpools
Scope Types
• system
Create a shared subnetpool
create_subnetpool:is_default
Default role:admin and system_scope:all
Operations
• POST /subnetpools
Scope Types
• system
Specify is_default attribute when creating a subnetpool
get_subnetpool
Default (role:reader and system_scope:all) or
(role:reader and project_id:%(project_id)s) or
rule:shared_subnetpools
Operations
• GET /subnetpools
• GET /subnetpools/{id}
Scope Types
• system
• project
Get a subnetpool
update_subnetpool
Default (role:admin and system_scope:all) or (role:member
and project_id:%(project_id)s)
Operations
• PUT /subnetpools/{id}
Scope Types
• system
• project
Update a subnetpool
update_subnetpool:is_default
Default role:admin and system_scope:all
Operations
• PUT /subnetpools/{id}
Scope Types
• system
Update is_default attribute of a subnetpool
delete_subnetpool
Default (role:admin and system_scope:all) or (role:member
and project_id:%(project_id)s)
Operations
• DELETE /subnetpools/{id}
Scope Types
• system
• project
Delete a subnetpool
onboard_network_subnets
Default (role:admin and system_scope:all) or (role:member
and project_id:%(project_id)s)
Operations
• PUT /subnetpools/{id}/onboard_network_subnets
Scope Types
• system
• project
Onboard existing subnet into a subnetpool
add_prefixes
Default (role:admin and system_scope:all) or (role:member
and project_id:%(project_id)s)
Operations
• PUT /subnetpools/{id}/add_prefixes
Scope Types
• system
• project
Add prefixes to a subnetpool
remove_prefixes
Default (role:admin and system_scope:all) or (role:member
and project_id:%(project_id)s)
Operations
• PUT /subnetpools/{id}/remove_prefixes
Scope Types
• system
• project
Remove unallocated prefixes from a subnetpool
create_trunk
Default (role:admin and system_scope:all) or (role:member
and project_id:%(project_id)s)
Operations
• POST /trunks
Scope Types
• project
• system
Create a trunk
get_trunk
Default (role:reader and system_scope:all) or (role:reader
and project_id:%(project_id)s)
Operations
• GET /trunks
• GET /trunks/{id}
Scope Types
• project
• system
Get a trunk
update_trunk
Default (role:admin and system_scope:all) or (role:member
and project_id:%(project_id)s)
Operations
• PUT /trunks/{id}
Scope Types
• project
• system
Update a trunk
delete_trunk
Default (role:admin and system_scope:all) or (role:member
and project_id:%(project_id)s)
Operations
• DELETE /trunks/{id}
Scope Types
• project
• system
Delete a trunk
get_subports
Default (role:reader and system_scope:all) or (role:reader
and project_id:%(project_id)s)
Operations
• GET /trunks/{id}/get_subports
Scope Types
• project
• system
List subports attached to a trunk
add_subports
Default (role:admin and system_scope:all) or (role:member
and project_id:%(project_id)s)
Operations
• PUT /trunks/{id}/add_subports
Scope Types
• project
• system
Add subports to a trunk
remove_subports
Default (role:admin and system_scope:all) or (role:member
and project_id:%(project_id)s)
Operations
• PUT /trunks/{id}/remove_subports
Scope Types
• project
• system
Delete subports from a trunk
TEN
10.1 neutron-debug
The neutron-debug client is an extension to the neutron command-line interface (CLI) that
provides network debugging (probe) commands for OpenStack Networking.
This chapter documents neutron-debug version 2.3.0.
For help on a specific neutron-debug command, enter:
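$ neutron-debug help SUBCOMMAND
(Here SUBCOMMAND is a placeholder for one of the subcommands listed below, for example probe-create.)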
Subcommands
probe-create Create probe port - create port and interface within a network namespace.
probe-list List all probes.
probe-clear Clear all probes.
probe-delete Delete probe - delete port then delete the namespace.
probe-exec Execute commands in the namespace of the probe.
ping-all ping-all is an all-in-one command to ping all fixed IPs in a specified network.
Create probe port - create port and interface, then place it into the created network namespace.
Positional arguments
List probes.
Remove a probe.
Positional arguments
All-in-one command to ping all fixed IPs in a specified network. A probe creation is not needed for this
command. A new probe is created automatically. It will, however, need to be deleted manually when it
is no longer needed. When there are multiple networks, the newly created probe will be attached to a
random network and thus the ping will take place from within that random network.
Positional arguments
Optional arguments
Create a probe namespace within the network identified by NET_ID. The namespace will have the name
of qprobe-<UUID of the probe port>
Note: For the following examples to function, the security group rules may need to be modified to allow SSH (TCP port 22) or ping (ICMP) traffic into the network.
Ping the DHCP server for this network using dhcping to verify it is working.
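As a hedged illustration (the network ID, probe port ID and target address below are placeholders for values from your own environment), a typical probe session might look like:
$ neutron-debug probe-create <NET_ID>
$ neutron-debug probe-exec <PROBE_PORT_ID> "ping -c 4 <INSTANCE_IP>"
$ neutron-debug probe-delete <PROBE_PORT_ID>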
10.2 neutron-sanity-check
The neutron-sanity-check client is a tool that performs various sanity checks against the Networking service.
This chapter documents neutron-sanity-check version 10.0.0.
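The tool reads the standard oslo.config options, so it is typically pointed at the Neutron configuration files; the paths below are only an assumption about a common layout:
$ neutron-sanity-check --config-file /etc/neutron/neutron.conf \
    --config-file /etc/neutron/plugins/ml2/ml2_conf.ini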
10.3 neutron-status
The neutron-status command provides routines for checking the status of a Neutron deployment.
Categories are:
• upgrade
Detailed descriptions are below.
You can also run with a category argument such as upgrade to see a list of all commands in that
category:
neutron-status upgrade
These sections describe the available categories and arguments for neutron-status.
Command details
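For example, the upgrade readiness checks can typically be run with:
$ neutron-status upgrade check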
History of Checks
16.0.0 (Ussuri)
• A check was added for NIC Switch agents to ensure nodes are running with kernel 3.13 or newer. This check serves as a notification for operators to ensure this requirement is fulfilled on relevant nodes.
ELEVEN
OVN DRIVER
This document details an in-place migration strategy from ML2/OVS to ML2/OVN in either ovs-firewall
or ovs-hybrid mode for a TripleO OpenStack deployment.
For non-TripleO deployments, please refer to the file migration/README.rst and the ansible playbook migration/migrate-to-ovn.yml.
11.1.1 Overview
The migration process is orchestrated through the shell script ovn_migration.sh, which is provided with
the OVN driver.
The administrator uses ovn_migration.sh to perform readiness steps and migration from the undercloud
node. The readiness steps, such as host inventory production, DHCP and MTU adjustments, prepare the
environment for the procedure.
Subsequent steps start the migration via Ansible.
Plan for a 24-hour wait after the setup-mtu-t1 step to allow VMs to catch up with the new MTU size.
The default neutron ML2/OVS configuration has a dhcp_lease_duration of 86400 seconds (24h).
Also, if there are instances using static IP assignment, the administrator should be ready to update the MTU of those instances to the new value, which is 8 bytes less than the ML2/OVS (VXLAN) MTU value. For example, a typical 1500 MTU provider network that gives VXLAN tenant networks a 1450-byte MTU will need to change to 1442 under Geneve. Likewise, on the same underlay, a GRE-encapsulated tenant network would use a 1458-byte MTU, but again 1442 bytes under Geneve.
If there are instances which use DHCP but don't support lease update during the T1 period, the administrator will need to reboot them to ensure that the MTU is updated inside those instances.
1. Install python-networking-ovn-migration-tool.
2. Create a working directory on the undercloud, and copy the ansible playbooks
$ mkdir ~/ovn_migration
$ cd ~/ovn_migration
$ cp -rfp /usr/share/ansible/networking-ovn-migration/playbooks .
3. Create the ~/overcloud-deploy-ovn.sh script in your $HOME. This script must source your
stackrc file, and then execute an openstack overcloud deploy with your original de-
ployment parameters, plus the following environment files, added to the end of the command in
the following order:
When your network topology is DVR and your compute nodes have connectivity to the external
network:
-e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-dvr-ha.yaml \
-e $HOME/ovn-extras.yaml
When your compute nodes don't have external connectivity and you don't use DVR:
-e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-ha.yaml \
-e $HOME/ovn-extras.yaml
Make sure that all users have execution privileges on the script, because it will be called by
ovn_migration.sh/ansible during the migration process.
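The exact contents of this script depend on your original deployment command; as a minimal sketch (assuming the DVR case above, with the placeholder standing in for your original deployment parameters), it might look like:
#!/bin/bash
source ~/stackrc
openstack overcloud deploy --templates \
    <your original deployment parameters> \
    -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-dvr-ha.yaml \
    -e $HOME/ovn-extras.yaml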
4. To configure the parameters of your migration you can set the environment variables that will be
used by ovn_migration.sh. You can skip setting any values matching the defaults.
• STACKRC_FILE - must point to your stackrc file in your undercloud. Default: ~/stackrc
• OVERCLOUDRC_FILE - must point to your overcloudrc file in your undercloud. Default:
~/overcloudrc
• OVERCLOUD_OVN_DEPLOY_SCRIPT - must point to the script described in step 3. Default: ~/overcloud-deploy-ovn.sh
• UNDERCLOUD_NODE_USER - user used on the undercloud nodes. Default: heat-admin
• STACK_NAME - Name or ID of the heat stack. Default: overcloud. If the stack being migrated differs from the default, set this environment variable to that stack name or ID.
• PUBLIC_NETWORK_NAME - Name of your public network. Default: public. To support migration validation, this network must have available floating IPs, and those floating IPs must be pingable from the undercloud. If that's not possible, please set VALIDATE_MIGRATION to False.
• IMAGE_NAME - Name/ID of the glance image to use for booting a test server. Default: cirros. If the image does not exist, it will automatically be downloaded and used during the pre-validation / post-validation process.
• VALIDATE_MIGRATION - Create migration resources to validate the migration. Before starting the migration, the migration script boots a server and validates that it is still reachable after the migration. Default: True.
• SERVER_USER_NAME - User name to use for logging into the migration instances. De-
fault: cirros.
• DHCP_RENEWAL_TIME - DHCP renewal time in seconds to configure in DHCP agent
configuration file. This renewal time is used only temporarily during migration to ensure a
synchronized MTU switch across the networks. Default: 30
For example:
$ export PUBLIC_NETWORK_NAME=my-public-network
$ ovn_migration.sh .........
5. Run ovn_migration.sh generate-inventory to generate the hosts inventory:
$ ovn_migration.sh generate-inventory
At this step the script will inspect the TripleO ansible inventory and generate an inventory of hosts,
specifically tagged to work with the migration playbooks.
6. Run ovn_migration.sh setup-mtu-t1
$ ovn_migration.sh setup-mtu-t1
Warning: If you are using VXLAN or GRE networks, this 24-hour wait step is critical. If
you are using VLAN tenant networks you can proceed to the next step without delay.
Warning: If you have any instance with static IP assignment on VXLAN or GRE tenant networks, you must manually modify the configuration of those instances to set the new Geneve MTU, which is the current VXLAN MTU minus 8 bytes. For instance, if the VXLAN-based MTU was 1450, change it to 1442. If your instances don't honor the T1 parameter of DHCP, they will need to be rebooted.
Note: Please note that migrating a deployment which uses VLAN for tenant/project networks is not recommended at this time because of a bug in core OVN; full support is being worked on here: https://mail.openvswitch.org/pipermail/ovs-dev/2018-May/347594.html
One way to verify that the T1 parameter has propagated to existing VMs is to connect to one of the compute nodes and run tcpdump on one of the VM taps attached to a tenant network. If T1 propagation was a success, you should see DHCP requests happen on an interval of approximately 30 seconds.
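A hedged example of such a capture (the tap interface name is a placeholder that depends on your environment) is:
$ sudo tcpdump -i <tap-interface> -n port 67 or port 68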
Note: This verification is not possible with cirros VMs. The cirros udhcpc implementation does not obey DHCP option 58 (T1). Please try this verification on a port that belongs to a full Linux VM. We recommend that you check all the different types of workloads your system runs (Windows, different flavors of Linux, etc.).
7. Wait at least 24 hours before continuing, so that instances can renew their DHCP leases and pick up the new T1 value.
8. Run ovn_migration.sh reduce-mtu. This step lowers the MTU of the pre-migration VXLAN and GRE networks. The tool ignores non-VXLAN/GRE networks, so if you use VLAN for tenant networks it is fine if you find this step not doing anything.
$ ovn_migration.sh reduce-mtu
This step will go network by network reducing the MTU, and tagging with adapted_mtu the networks which have already been handled.
Every time a network is updated, all the existing L3/DHCP agents connected to that network will update their internal leg MTU, and instances will start fetching the new MTU as the DHCP T1 timer expires. As explained before, instances not obeying the DHCP T1 parameter will need to be restarted, and instances with static IP assignment will need to be updated manually.
9. Make TripleO prepare the new container images for OVN.
If your deployment didn't have a containers-prepare-parameter.yaml, you can create one with:
$ test -f $HOME/containers-prepare-parameter.yaml || \
openstack tripleo container image prepare default \
--output-env-file $HOME/containers-prepare-parameter.yaml
If you had to create the file, please make sure it is included at the end of your $HOME/overcloud-deploy-ovn.sh and $HOME/overcloud-deploy.sh.
Change the neutron_driver in the containers-prepare-parameter.yaml file to ovn:
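The generated file will contain many other keys; only the neutron_driver entry needs to change. A hedged sketch of the relevant portion (the surrounding keys in your file will differ):
parameter_defaults:
  ContainerImagePrepare:
  - push_destination: true
    set:
      # ... other image parameters generated for your deployment ...
      neutron_driver: ovn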
Note: It is important to provide the full path to your containers-prepare-parameter.yaml, otherwise the command will finish very quickly and won't work (the current version does not seem to output any error).
During this step TripleO will build a list of containers, pull them from the remote registry and
push them to your deployment local registry.
10. Run ovn_migration.sh start-migration to kick start the migration process.
$ ovn_migration.sh start-migration
• Update the overcloud stack to deploy OVN alongside reference implementation services
using a temporary bridge br-migration instead of br-int.
• Start the migration process:
1. generate the OVN northbound database by running the neutron-ovn-db-sync util
2. clone the existing resources from br-int to br-migration, so OVN can find the same resource UUIDs over br-migration
3. re-assign ovn-controller to br-int instead of br-migration
4. clean up network namespaces (fip, snat, qrouter, qdhcp)
5. remove any unnecessary patch ports on br-int
6. remove the br-tun and br-migration ovs bridges
7. delete the qr-*, ha- and qg-* ports from br-int (via neutron netns cleanup)
• Delete neutron agents and neutron HA internal networks from the database via API.
• Validate connectivity on pre-migration resources.
• Delete pre-migration resources.
• Create post-migration resources.
• Validate connectivity on post-migration resources.
• Cleanup post-migration resources.
• Re-run the deployment tool to update OVN on br-int; this step ensures that the TripleO database is updated with the final integration bridge.
• Run an extra validation round to ensure the final state of the system is fully operational.
Migration is complete !!!
This is a list of some of the currently known gaps between ML2/OVS and OVN. It is not a complete list, but it is enough to be used as a starting point for implementors working on closing these gaps. A TODO list for OVN is located at [1].
• QoS DSCP support
Currently ML2/OVS supports QoS DSCP tagging and egress bandwidth limiting. Those are basic QoS features that, while integrated in the OVS/OVN C core, are not integrated (or fully tested) in the neutron OVN mechanism driver.
• QoS for Layer 3 IPs
Currently the Neutron L3 agent supports floating IP and gateway IP bandwidth limiting based on Linux TC. Networking-ovn L3 had a prototype implementation [2] based on the Open vSwitch meter utility [3], but that prototype has been abandoned. Meters are supported in the userspace datapath only, or on kernel versions 4.15+ [4].
[1] https://github.com/ovn-org/ovn/blob/master/TODO.rst
[2] https://review.opendev.org/#/c/539826/
[3] https://github.com/openvswitch/ovs/commit/66d89287269ca7e2f7593af0920e910d7f9bcc38
[4] https://github.com/torvalds/linux/blob/master/net/openvswitch/meter.h
11.2.1 References
11.3.1 IP version 4
11.3.2 IP version 6
In OVN, the DHCP options are stored in a table called DHCP_Options in the OVN Northbound database.
Let's add a DHCP option to a Neutron port:
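$ openstack port set --extra-dhcp-option name=server-ip-address,value=192.0.2.50 <PORT_ID>
(The port ID and the 192.0.2.50 value above are placeholders; substitute a port and a TFTP server address from your own environment.)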
For DHCP, the columns that we care about are dhcpv4_options and dhcpv6_options. These columns hold the UUIDs of entries in the DHCP_Options table with the DHCP information for this port.
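To inspect these entries you can query the OVN Northbound database directly, for example (assuming access to ovn-nbctl on a node with the OVN Northbound database; the port identifier is a placeholder):
$ ovn-nbctl list Logical_Switch_Port <PORT_ID>
$ ovn-nbctl list DHCP_Options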
Here you can see that the option tftp_server_address has been set in the options column. Note that the tftp_server_address option is the OVN-translated name for server-ip-address (option 150). Take a look at the table in this document to find out more about the supported options and their counterpart names in OVN.
TWELVE
API REFERENCE
THIRTEEN
13.1 Introduction
This document describes how features are listed in General Feature Support and Provider Network
Support.
13.1.1 Goals
The object of this document is to inform users whether or not features are complete, well documented, stable, and tested. This approach ensures a good user experience for those well-maintained features.
Note: Tests are specific to particular combinations of technologies. The plugins chosen for deployment
make a big difference to whether or not features will work.
13.1.2 Concepts
• Immature
• Mature
• Required
• Deprecated (scheduled to be removed in a future release)
Immature
Immature features do not have enough functionality to satisfy real world use cases.
An immature feature is a feature being actively developed, which is only partially functional and up-
stream tested, most likely introduced in a recent release, and that will take time to mature thanks to
feedback from downstream QA.
Users of these features will likely identify gaps and/or defects that were not identified during specifica-
tion and code review.
Mature
Required
Required features are core networking principles that have been thoroughly tested and have been imple-
mented in real world use cases.
In addition, they satisfy the same criteria as any mature feature.
Note: Any new drivers must prove that they support all required features before they are merged into
neutron.
Deprecated
Deprecated features are no longer supported, and only security-related fixes or development will happen for them.
The deployment rating shows only the state of the tests for each feature on a particular deployment.
Important: Despite the obvious parallels that could be drawn, this list is unrelated to the DefCore effort. See InteropWG.
Warning: Please note that while this document is still being maintained, it is slowly being updated to re-group and classify features using the definitions described here: Introduction.
This document covers the maturity and support of the Neutron API and its API extensions. Details about
the API can be found at Networking API v2.0.
When considering which capabilities should be marked as mature, the following general guiding principles were applied:
• Inclusivity - people have shown the ability to make effective use of a wide range of network plugins and drivers with broadly varying feature sets. Aiming to keep the requirements as inclusive as possible avoids second-guessing how a user wants to use their networks.
• Bootstrapping - a practical use-case test is to consider that the starting point for the network deployment is an empty data center with new machines and network connectivity. Then look at what the minimum features required of the network service are in order to get user instances running and connected over the network.
• Reality - there are many networking drivers and plugins compatible with neutron, each with their own supported feature set.
Summary
Details
• Networks Status: mandatory.
API Alias: core
CLI commands:
– openstack network *
Notes: The ability to create, modify and delete networks. https://docs.openstack.org/api-ref/
network/v2/#networks
Driver Support:
– Linux Bridge: complete
– Networking ODL: complete
– OVN: complete
– Open vSwitch: complete
• Subnets Status: mandatory.
API Alias: core
CLI commands:
– openstack subnet *
Notes: The ability to create and manipulate subnets and subnet pools. https://docs.openstack.org/
api-ref/network/v2/#subnets
Driver Support:
– Linux Bridge: complete
– Networking ODL: complete
– OVN: complete
– Open vSwitch: complete
• Ports Status: mandatory.
API Alias: core
CLI commands:
– openstack port *
Notes: The ability to create and manipulate ports. https://docs.openstack.org/api-ref/network/v2/
#ports
Driver Support:
– Linux Bridge: complete
– Networking ODL: complete
– OVN: complete
– Open vSwitch: complete
• Routers Status: mandatory.
API Alias: router
CLI commands:
– openstack router *
Warning: Please note that while this document is still being maintained, it is slowly being updated to re-group and classify features using the definitions described here: Introduction.
This document covers the maturity and support for various network isolation technologies.
When considering which capabilities should be marked as mature, the following general guiding principles were applied:
• Inclusivity - people have shown the ability to make effective use of a wide range of network plugins and drivers with broadly varying feature sets. Aiming to keep the requirements as inclusive as possible avoids second-guessing how a user wants to use their networks.
• Bootstrapping - a practical use-case test is to consider that the starting point for the network deployment is an empty data center with new machines and network connectivity. Then look at what the minimum features required of the network service are in order to get user instances running and connected over the network.
• Reality - there are many networking drivers and plugins compatible with neutron, each with their own supported feature set.
Summary
Details
• VLAN provider network support Status: mature.
Driver Support:
– Linux Bridge: complete
– Networking ODL: unknown
– OVN: complete
– Open vSwitch: complete
• VXLAN provider network support Status: mature.
Driver Support:
– Linux Bridge: complete
– Networking ODL: complete
– OVN: missing
– Open vSwitch: complete
• GRE provider network support Status: immature.
Driver Support:
– Linux Bridge: unknown
– Networking ODL: complete
– OVN: missing
– Open vSwitch: complete
• Geneve provider network support Status: immature.
Driver Support:
– Linux Bridge: unknown
– Networking ODL: missing
– OVN: complete
– Open vSwitch: complete
Notes:
• This document is a continuous work in progress
FOURTEEN
CONTRIBUTOR GUIDE
This document describes Neutron for contributors of the project, and assumes that you are already fa-
miliar with Neutron from an end-user perspective.
For general information on contributing to OpenStack, please check out the contributor guide to get
started. It covers all the basics that are common to all OpenStack projects: the accounts you need, the
basics of interacting with our Gerrit review system, how we communicate as a community, etc.
The sections below cover the more project-specific information you need to get started with Neutron.
Communication
– agenda: https://etherpad.openstack.org/p/neutron-ci-meetings
• Neutron QoS team meeting:
This is the meeting of the Neutron Quality of Service subteam.
– time: http://eavesdrop.openstack.org/#Neutron_QoS_Meeting
• Neutron L3 team meeting:
This is the meeting of the Neutron L3 subteam where all issues related to IPAM, L3 agents, etc.
are discussed.
– time: http://eavesdrop.openstack.org/#Neutron_L3_Sub-team_Meeting
– agenda: https://etherpad.openstack.org/p/neutron-l3-subteam
The list of current Neutron core reviewers is available on gerrit. The overall structure of the Neutron team is described in Neutron teams.
The Neutron team uses RFEs (Requests for Enhancements) to propose new features. An RFE should be submitted as a Launchpad bug first (see the section Reporting a Bug). The title of an RFE bug should start with the [RFE] tag. Such RFEs need to be discussed and approved by the Neutron drivers team. In some cases an additional spec proposed to the Neutron specs repo may be necessary. The complete process is described in detail in the Blueprints guide.
Task Tracking
We track our tasks in Launchpad. If you're looking for a smaller, easier work item to pick up and get started on, search for the Low hanging fruit tag. The list of all official tags which the Neutron team uses is available on bugs. Every week, one of our team members acts as the bug deputy, and at the end of the week that person usually sends a report about new bugs to the [email protected] mailing list or talks about it at our team meeting. This is also a good place to look for some work to do.
Reporting a Bug
You found an issue and want to make sure we are aware of it? You can do so on Launchpad. More info
about Launchpad usage can be found on the OpenStack docs page.
All changes proposed to Neutron or one of the Neutron stadium projects require two +2 votes from Neutron core reviewers before one of the core reviewers can approve a patch by giving it a Workflow +1 vote. More detailed guidelines for reviewers of Neutron patches are available in the Code reviews guide.
Neutron's PTL duties are described very well in the All common PTL duties guide. In addition to what is described in that guide, Neutron's PTL duties are:
• triage new RFEs and prepare the Neutron drivers team meeting,
• maintain a list of the stadium projects' health - whether each project has active team members and whether it is following community and Neutron guidelines and goals,
• maintain a list of the stadium projects' lieutenants - check whether those people are still active in the projects, whether their contact data is correct, and whether there is someone new who is active in the stadium project and could be added to this list.
Over the past few years, the Neutron team has followed a mentoring approach for:
• new contributors,
• potential new core reviewers,
• future PTLs.
The Neutron PTL's responsibility is to identify potential new core reviewers and help with their mentoring process. Mentoring of new contributors and potential core reviewers can of course be delegated to other members of the Neutron team. Mentoring of future PTLs is the responsibility of the Neutron PTL.
In the Policies Guide, you will find documented policies for developing with Neutron. This includes the
processes we use for blueprints and specs, bugs, contributor onboarding, core reviewer memberships,
and other procedural items.
The Neutron team uses the neutron-specs repository for its specification reviews. Detailed information
can be found on the wiki. Please also find additional information in the reviews.rst file.
The Neutron team does not enforce deadlines for specs. These can be submitted throughout the release
cycle. The drivers team will review this on a regular basis throughout the release, and based on the load
for the milestones, will assign these into milestones or move them to the backlog for selection into a
future release.
Please note that we use a template for spec submissions. It is not required to fill out all sections in the
template. Review of the spec may require filling in information left out by the submitter.
The neutron-specs repository is meant only for specs from Neutron itself and the advanced services repositories; this includes VPNaaS, for example. Other sub-projects are encouraged to fold their specs into their own devref code in their sub-project gerrit repositories. Please see additional comments in the Neutron teams section for reviewer requirements of the neutron-specs repository.
In Liberty the team introduced the concept of feature requests. Feature requests are tracked as Launchpad
bugs, by tagging them with a set of tags starting with rfe, enabling the submission and review of feature
requests before code is submitted. This allows the team to verify the validity of a feature request before
the process of submitting a neutron-spec is undertaken, or code is written. It also allows the community
to express interest in a feature by subscribing to the bug and posting a comment in Launchpad. The rfe
tag should not be used for work that is already well-defined and has an assignee. If you are intending
to submit code immediately, a simple bug report will suffice. Note the temptation to game the system
exists, but given the history in Neutron for this type of activity, it will not be tolerated and will be called
out as such in public on the mailing list.
RFEs can be submitted by anyone and by having the community vote on them in Launchpad, we can
gauge interest in features. The drivers team will evaluate these on a weekly basis along with the specs.
RFEs will be evaluated in the current cycle against existing project priorities and available resources.
The workflow for the life of an RFE in Launchpad is as follows:
• The bug is submitted and will by default land in the New state. Anyone can make a bug an RFE
by adding the rfe tag.
• As soon as a member of the neutron-drivers team acknowledges the bug, the rfe tag will be re-
placed with the rfe-confirmed tag. No assignee, or milestone is set at this time. The importance
will be set to Wishlist to signal the fact that the report is indeed a feature or enhancement and there
is no severity associated to it.
• A member of the neutron-drivers team replaces the rfe-confirmed tag with the rfe-triaged tag when he/she thinks it is ready to be discussed in the drivers meeting. The bug will be in this state while the discussion is ongoing.
• The neutron-drivers team will evaluate the RFE and may advise the submitter to file a spec in
neutron-specs to elaborate on the feature request, in case the RFE requires extra scrutiny, more
design discussion, etc.
• The PTL will work with the Lieutenant for the area being identified by the RFE to evaluate re-
sources against the current workload.
• A member of the Neutron release team (or the PTL) will register a matching Launchpad blueprint
to be used for milestone tracking purposes, and for identifying the responsible assignee and ap-
prover. If the RFE has a spec the blueprint will have a pointer to the spec document, which will
become available on specs.o.o. once it is approved and merged. The blueprint will then be linked
to the original RFE bug report as a pointer to the discussion that led to the approval of the RFE.
The blueprint submitter will also need to identify the following:
– Priority: there will be only two priorities to choose from, High and Low. It is worth noting
that priority is not to be confused with importance, which is a property of Launchpad Bugs.
Priority gives an indication of how promptly a work item should be tackled to allow it to
complete. High priority is to be chosen for work items that must make substantial progress
in the span of the targeted release, and deal with the following aspects:
* Reach out to other reviewers for feedback in areas that may step out of the zone of
her/his confidence.
* Escalate issues, and raise warnings to the release team/PTL if the effort shows slow
progress. Approver and assignee are key parts to land a blueprint: should the approver and/or assignee be unable to continue the commitment during the release cycle, it is the Approver's responsibility to reach out to the release team/PTL so that replacements can be identified.
The Neutron team will review the status of blueprints targeted for the milestone during their
weekly meeting to ensure a smooth progression of the work planned. Blueprints for which re-
sources cannot be identified will have to be deferred.
• In either case (a spec being required or not), once the discussion has happened and there is positive
consensus on the RFE, the report is approved, and its tag will move from rfe-triaged to rfe-
approved.
• An RFE can occasionally be marked as rfe-postponed if the team identifies a dependency between the proposed RFE and other pending tasks that prevent the RFE from being worked on immediately.
• Once an RFE is approved, it needs volunteers. Approved RFEs that do not have an assignee but
sound relatively simple or limited in scope (e.g. the addition of a new API with no ramification in
the plugin backends), should be promoted during team meetings or the ML so that volunteers can
pick them up and get started with neutron development. The team will regularly scan rfe-approved
or rfe-postponed RFEs to see what their latest status is and mark them incomplete if no assignees
can be found, or they are no longer relevant.
• As for setting the milestone (both for RFE bugs or blueprints), the current milestone is always cho-
sen, assuming that work will start as soon as the feature is approved. Work that fails to complete
by the defined milestone will roll over automatically until it gets completed or abandoned.
• If the code fails to merge, the bug report may be marked as incomplete, unassigned and untargeted,
and it will be garbage collected by the Launchpad Janitor if no-one takes over in time. Renewed
interest in the feature will have to go through RFE submission process once again.
In summary:
State - Meaning
New - This is where all RFEs start, as filed by the community.
Incomplete - Drivers/LTs - Move to this state to mean: more information is needed before proceeding.
Confirmed - Drivers/LTs - Move to this state to mean: yeah, I see that you filed it.
Triaged - Drivers/LTs - Move to this state to mean: discussion is ongoing.
Won't Fix - Drivers/LTs - Move to this state to reject an RFE.
Once the triaging (discussion) is complete and the RFE is approved, the tag goes from rfe to rfe-approved, and at this point the bug report goes through the usual state transitions. Note that the importance will be set to wishlist, to reflect the fact that the bug report is indeed not a bug, but a new feature or enhancement. This will also help RFEs that are not followed up by a blueprint stand out in the Launchpad milestone dashboards.
The drivers team will be discussing the following bug reports during their IRC meeting:
• New RFEs
• Incomplete RFEs
• Confirmed RFEs
• Triaged RFEs
Before we dive into the guidelines for writing a good RFE, it is worth mentioning that depending on
your level of engagement with the Neutron project and your role (user, developer, deployer, operator,
etc.), you are more than welcome to have a preliminary discussion of a potential RFE by reaching out
to other people involved in the project. This usually happens by posting mails on the relevant mailing
lists (e.g. openstack-discuss - include [neutron] in the subject) or on #openstack-neutron IRC channel on
Freenode. If current ongoing code reviews are related to your feature, posting comments/questions on
gerrit may also be a way to engage. Some amount of interaction with Neutron developers will give you
an idea of the plausibility and form of your RFE before you submit it. That said, this is not mandatory.
When you submit a bug report on https://bugs.launchpad.net/neutron/+filebug, there are two fields that
must be filled: summary and further information. The summary must be brief enough to fit in one line:
if you can't describe it in a few words it may mean that you are either trying to capture more than one
RFE at once, or that you are having a hard time defining what you are trying to solve at all.
The further information section must be a description of what you would like to see implemented in Neutron. The description should provide enough details for a knowledgeable developer to understand what the existing problem in the current platform is that needs to be addressed, or what the enhancement is that would make the platform more capable, from both a functional and a non-functional standpoint. To this aim it is important to describe why you believe the RFE should be accepted, and to motivate why Neutron is a poorer platform without it. The description should be self-contained, and no external references should be necessary to further explain the RFE.
In other words, when you write an RFE you should ask yourself the following questions:
• What is it that I (specify what user - a user can be a human or another system) cannot do today when interacting with Neutron? On the other hand, is there a Neutron component X that is unable to accomplish something?
• Is there something that you would like Neutron to handle better, i.e. in a more scalable or more reliable way?
• What is it that I would like to see happen after the RFE is accepted and implemented?
• Why do you think it is important?
Once you are happy with what you wrote, add rfe as a tag, and submit. Do not worry, we are here to help
you get it right! Happy hacking.
There are occasions when a spec will be approved and the code will not land in the cycle it was targeted
at. For these cases, the work flow to get the spec into the next release is as follows:
• During the RC window, the PTL will create a directory named <release> under the backlog direc-
tory in the neutron specs repo, and he/she will move all specs that did not make the release to this
directory.
• Anyone can propose a patch to neutron-specs which moves a spec from the previous release into
the new release directory.
The specs which are moved in this way can be fast-tracked into the next release. Please note that it is
required to re-propose the spec for the new release.
Documentation
The above process involves two places where any given feature can start to be documented - namely in
the RFE bug, and in the spec - and in addition to those Neutron has a substantial developer reference
guide (aka devref), and user-facing docs such as the networking guide. So it might be asked:
• What is the relationship between all of those?
• What is the point of devref documentation, if everything has already been described in the spec?
The answers have been beautifully expressed in an openstack-dev post:
1. RFE: I want X
2. Spec: I plan to implement X like this
3. devref: How X is implemented and how to extend it
4. OS docs: API and guide for using X
Once a feature X has been implemented, we shouldn't have to go back to its RFE bug or spec to find
information on it. The devref may reuse a lot of content from the spec, but the spec is not maintained
and the implementation may differ in some ways from what was intended when the spec was agreed.
The devref should be kept current with refactorings, etc., of the implementation.
Devref content should be added as part of the implementation of a new feature. Since the spec is
not maintained after the feature is implemented, the devref should include a maintained version of the
information from the spec.
If a feature requires OS docs (4), the feature patch shall include the new, or updated, documentation
changes. If the feature is purely a developer facing thing, (4) is not needed.
Neutron Bugs
Neutron (client, core, VPNaaS) maintains all of its bugs in the following Launchpad projects:
• Launchpad Neutron
• Launchpad python-neutronclient
The Neutron Bugs team in Launchpad is used to allow access to the projects above. Members of the
above group have the ability to set bug priorities, target bugs to releases, and other administrative tasks
around bugs. The administrators of this group are the members of the neutron-drivers-core gerrit group.
Non administrators of this group include anyone who is involved with the Neutron project and has a
desire to assist with bug triage.
If you would like to join this Launchpad group, it's best to reach out to a member of the above-mentioned
neutron-drivers-core team in #openstack-neutron on Freenode and let them know why you would like to
be a member. The team is more than happy to add additional bug triage capability, but it helps to know
who is requesting access, and IRC is a quick way to make the connection.
As outlined below, the bug deputy is a volunteer who wants to help with defect management. Permissions will have to be granted assuming that people sign up for the deputy role. The permission won't be given freely; a person must show some degree of prior involvement.
Neutron maintains the notion of a bug deputy. The bug deputy plays an important role in the Neutron
community. As a large project, Neutron is routinely fielding many bug reports. The bug deputy is
responsible for acting as a first contact for these bug reports and performing initial screening/triaging.
The bug deputy is expected to communicate with the various Neutron teams when a bug has been triaged.
In addition, the bug deputy should be reporting High and Critical priority bugs.
To avoid burnout, and to give a chance to everyone to gain experience in defect management, the Neutron
bug deputy is a rotating role. The rotation will be set on a period (typically one or two weeks) determined
by the team during the weekly Neutron IRC meeting and/or according to holidays. During the Neutron
IRC meeting we will expect a volunteer to step up for the period. Members of the Neutron core team are
invited to fill in the role, however non-core Neutron contributors who are interested are also encouraged
to take up the role.
This contributor is going to be the bug deputy for the period, and he/she will be asked to report to the
team during the subsequent IRC meeting. The PTL will also work with the team to assess that everyone
gets his/her fair share at fulfilling this duty. It is reasonable to expect some imbalance from time to time,
and the team will work together to resolve it to ensure that everyone is 100% effective and well rounded
in their role as custodian of Neutron quality. Should the duty load be too much in busy times of the
release, the PTL and the team will work together to assess whether more than one deputy is necessary
in a given period.
The presence of a bug deputy does not mean the rest of the team is simply off the hook for the period,
in fact the bug deputy will have to actively work with the Lieutenants/Drivers, and these should help in
getting the bug report moving down the resolution pipeline.
During the period a member acts as bug deputy, he/she is expected to watch bugs filed against the Neu-
tron projects (as listed above) and do a first screening to determine potential severity, tagging, logstash
queries, other affected projects, affected releases, etc.
From time to time bugs will be filed and auto-assigned by members of the core team to get them to a
swift resolution. Obviously, the deputy is exempt from screening these.
Finally, the PTL will work with the deputy to produce a brief summary of the issues of the week to be
shared with the larger team during the weekly IRC meeting and tracked in the meeting notes. If for some
reason the deputy is not going to attend the team meeting to report, the deputy should consider sending
a brief report to the openstack-discuss@ mailing list in advance of the meeting.
If you are interested in serving as the Neutron bug deputy, there are several steps you will need to follow
in order to be prepared.
• Request to be added to the neutron-bugs team in Launchpad. This request will be approved when
you are assigned a bug deputy slot.
• Read this page in full. Keep this document in mind at all times as it describes the duties of the bug
deputy and how to triage bugs particularly around setting the importance and tags of bugs.
• Sign up for neutron bug emails from LaunchPad.
– Navigate to the LaunchPad Neutron bug list.
– On the right hand side, click on Subscribe to bug mail.
– In the pop-up that is displayed, keep the recipient as Yourself, and your subscription some-
thing useful like Neutron Bugs. You can choose either option for how much mail you get,
but keep in mind that getting mail for all changes - while informative - will result in several
dozen emails per day at least.
– Do the same for the LaunchPad python-neutronclient bug list.
• Configure the information you get from LaunchPad to make visible additional information, espe-
cially the age of the bugs. You accomplish that by clicking the little gear on the left hand side of
the screen at the top of the bugs list. This provides an overview of information for each bug on a
single page.
• Optional: Set up your mail client to highlight bug email that indicates a new bug has been
filed, since those are the ones you will be wanting to triage. Filter based on email from
@bugs.launchpad.net with [NEW] in the subject line.
• Volunteer during the course of the Neutron team meeting, when volunteers to be bug deputy are
requested (usually towards the beginning of the meeting).
• View your scheduled week on the Neutron Meetings page.
• During your shift, if it is feasible for your timezone, plan on attending the Neutron Drivers meet-
ing. That way if you have tagged any bugs as RFE, you can be present to discuss them.
• Scan New bugs to triage. If a bug doesn't have enough info to triage, ask for more info and mark it Incomplete. If you can confirm it by yourself, mark it Confirmed. Otherwise, find someone familiar with the topic and ask for their help.
• Scan Incomplete bugs to see if more info was provided. If it was, move the bug back to New.
• Repeat the above routines for bugs filed in your week at least. If you can, do the same for older
bugs.
• Take a note of bugs you processed. At the end of your week, post a report on openstack-discuss
mailing list.
Many plugins and drivers have backend code that exists in another repository. These repositories may
have their own Launchpad projects for bugs. The teams working on the code in these repos assume full
responsibility for bug handling in those projects. For this reason, bugs whose solution would exist solely
in the plugin/driver repo should not have Neutron in the affected projects section. However, you should
add Neutron (Or any other project) to that list only if you expect that a patch is needed to that repo in
order to solve the bug.
It's also worth adding that some of these projects are part of the so-called Neutron stadium. Because of that, their releases are managed centrally by the Neutron release team; requests for releases need to be funnelled and screened properly before they can happen. The release request process is described here.
When screening bug reports, the first step for the bug deputy is to assess how well written the bug report
is, and whether there is enough information for anyone else besides the bug submitter to reproduce the
bug and come up with a fix. There is plenty of information on the OpenStack Bugs on how to write
a good bug report and to learn how to tell a good bug report from a bad one. Should the bug report
not adhere to these best practices, the bug deputy's first step would be to redirect the submitter to this
section, invite him/her to supply the missing information, and mark the bug report as Incomplete. For
future submissions, the reporter can then use the template provided below to ensure speedy triaging.
Done often enough, this practice should (ideally) ensure that in the long run, only good bug reports are
going to be filed.
The more information you provide, the higher the chance of speedy triaging and resolution: identifying
the problem is half the solution. To this aim, when writing a bug report, please consider supplying the
following details and following these suggestions:
• Summary (Bug title): keep it small, possibly one line. If you cannot describe the issue in less than
100 characters, you are probably submitting more than one bug at once.
• Further information (Bug description): unlike other bug trackers, Launchpad does not provide a structured way of submitting bug-related information; everything goes in this section. Therefore, you are invited to break down the description into the following fields:
– High level description: provide a brief sentence (a couple of lines) about what you are trying to accomplish, or would like to accomplish differently; the why is important, but can be omitted if obvious (not to you of course).
– Pre-conditions: what is the initial state of your system? Please consider enumerating re-
sources available in the system, if useful in diagnosing the problem. Who are you? A
regular user or a super-user? Are you describing service-to-service interaction?
– Step-by-step reproduction steps: these can be actual neutron client commands or raw API
requests; grab the output if you think it is useful. Please consider using paste.o.o for long outputs, as Launchpad formats the description field poorly, making the reading experience somewhat painful.
– Expected output: what did you hope to see? How would you have expected the system to
behave? A specific error/success code? The output in a specific format? Or more than a user
was supposed to see, or less?
– Actual output: did the system silently fail (in this case log traces are useful)? Did you get a
different response from what you expected?
– Version:
– Environment: what services are you running (core services like DB and AMQP broker, as
well as Nova/hypervisor if it matters), and which type of deployment (clustered servers); if
you are running DevStack, is it a single node? Is it multi-node? Are you reporting an issue
in your own environment or something you encountered in the OpenStack CI Infrastructure,
aka the Gate?
– Perceived severity: what would you consider the importance to be?
• Tags (Affected component): try to use the existing tags by relying on auto-completion. Please refrain from creating new ones; if you need new official tags, please reach out to the PTL. If you would like a fix to be backported, please add a backport-potential tag. This does not mean you are going to get the backport, as the stable team needs to follow the stable branch policy for merging fixes to stable branches.
• Attachments: consider attaching logs, truncated log snippets are rarely useful. Be proactive, and
consider attaching redacted configuration files if you can, as that will speed up the resolution
process greatly.
• If you cannot reproduce or fully assess the issue in your own environment, consider reaching out to the Lieutenant, the go-to person for the affected component; he/she may be able to help: assign the bug to him/her for further screening. If the bug already has an
assignee, check that a patch is in progress. Sometimes more than one patch is required to address
an issue, make sure that there is at least one patch that Closes the bug or document/question what
it takes to mark the bug as fixed.
• If the bug indicates test or gate failure, look at the failures for that test over time using OpenStack
Health or OpenStack Logstash. This can help to validate whether the bug identifies an issue that
is occurring all of the time, some of the time, or only for the bug submitter.
• If the bug is the result of a misuse of the system, mark the bug either as Won't Fix, or Opinion if you are still on the fence and need other people's input.
• Assign the importance after reviewing the proposed severity. Bugs that obviously break core and
widely used functionality should get assigned as High or Critical importance. The same applies
to bugs that were filed for gate failures.
• Choose a milestone, if you can. Targeted bugs are especially important close to the end of the
release.
• (Optional). Add comments explaining the issue and possible strategy of fixing/working around
the bug. Also, as good as some are at adding all thoughts to bugs, it is still helpful to share the
in-progress items that might not be captured in a bug description or during our weekly meeting. In
order to provide some guidance and reduce ramp up time as we rotate, tagging bugs with needs-
attention can be useful to quickly identify what reports need further screening/eyes on.
Check for bugs with the timeout-abandon tag:
• Search for any bugs with the timeout-abandon tag: Timeout abandon. This tag indicates that the bug had a patch associated with it that was automatically abandoned after timing out with negative feedback.
• For each bug with this tag, determine if the bug is still valid and update the status accordingly.
For example, if another patch fixed the bug, ensure it's marked as Fix Released. Or, if that was the only patch for the bug and it's still valid, mark it as Confirmed.
• After ensuring the bug report is in the correct state, remove the timeout-abandon tag.
You are done! Iterate.
More can be found at this Launchpad page. In a nutshell, in order to make a bug report expire automati-
cally, it needs to be unassigned, untargeted, and marked as Incomplete.
The OpenStack community has had Bug Days but they have not been wildly successful. In order to
keep the list of open bugs set to a manageable number (more like <100, rather than closer to 1000),
at the end of each release (in feature freeze and/or during less busy times), the PTL with the help
of team will go through the list of open (namely new, opinion, in progress, confirmed, triaged) bugs,
and do a major sweep to have the Launchpad Janitor pick them up. This gives 60 days grace period
to reporters/assignees to come back and revive the bug. Assuming that, under normal operation, bugs are properly reported, acknowledged and fix-proposed, losing unaddressed issues is not going to be a major issue, but brief stats will be collected to assess how the team is doing over time.
Tagging Bugs
Launchpad's Bug Tracker allows you to create ad-hoc groups of bugs with tagging.
In the Neutron team, we have a list of agreed tags that we may apply to bugs reported against various
aspects of Neutron itself. The list of approved tags used to be available on the wiki, however the section
has been moved here, to improve collaborative editing, and keep the information more current. By using
a standard set of tags, each explained on this page, we can avoid confusion. A bug report can have more
than one tag at any given time.
New tags, or changes in the meaning of existing tags (or deletion), are to be proposed via patch to
this section. After discussion, and approval, a member of the bug team will create/delete the tag in
Launchpad. Each tag covers an area with an identified go-to contact or Lieutenant, who can provide
further insight. Bug queries are provided below for convenience, more will be added over time if needed.
Access Control
API
API Reference
Baremetal
DB
• DB - All bugs
• DB - In progress
Deprecation
DNS
DOC
Fullstack
Functional Tests
FWAAS
Gate Failure
IPV6
L2 Population
L3 BGP
L3 DVR Backlog
L3 HA
• L3 HA - All bugs
• L3 HA - In progress
L3 IPAM DHCP
Lib
LinuxBridge
Load Impact
Logging
Metering
Needs Attention
OPNFV
Operators/Operations (ops)
OSLO
OVN
OVS
OVS Firewall
OVSDB Lib
QoS
RFE
RFE-Confirmed
RFE-Triaged
RFE-Approved
RFE-Postponed
SRIOV-PCI PASSTHROUGH
SG-FW
Tempest
Troubleshooting
Unit test
Usability
• UX - All bugs
• UX - In progress
VPNAAS
Backport/RC potential
The list of all Backport/RC potential bugs for stable releases can be found on Launchpad. A pointer to the Launchpad page with the list of such bugs for any stable release can be built by using the link:
https://bugs.launchpad.net/neutron/+bugs?field.tag={STABLE_BRANCH}-backport-potential
where STABLE_BRANCH is always the name of one of the 3 latest releases.
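For example, assuming a hypothetical stable/wallaby branch, the corresponding query built from the formula above would be:
https://bugs.launchpad.net/neutron/+bugs?field.tag=wallaby-backport-potential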
Contributor Onboarding
Contributing to Neutron
Work within Neutron is discussed on the openstack-discuss mailing list, as well as in the #openstack-
neutron IRC channel. While these are great channels for engaging Neutron, the bulk of discussion of
patches and code happens in gerrit itself.
With regards to gerrit, code reviews are a great way to learn about the project. There is also a list of
low or wishlist priority bugs which are ideal for a new contributor to take on. If you haven't done so, you should set up a Neutron development environment so you can actually run the code. Devstack is the usual, convenient way to set up such an environment. See devstack.org or NeutronDevstack for more information on using Neutron with devstack.
Helping with documentation can also be a useful first step for a newcomer. Here is a list of tagged
documentation and API reference bugs:
• Documentation bugs
• Api-ref bugs
The Neutron Core Reviewer Team is responsible for many things related to Neutron. A lot of these
things include mundane tasks such as the following:
• Ensuring the bug count is low
• Curating the gate and triaging failures
• Working on integrating shared code from projects such as Oslo
• Ensuring documentation is up to date and remains relevant
• Ensuring the level of testing for Neutron is adequate and remains relevant as features are added
• Helping new contributors with questions as they peel back the covers of Neutron
• Answering questions and participating in mailing list discussions
• Interfacing with other OpenStack teams and ensuring they are going in the same parallel direction
• Reviewing and merging code into the neutron tree
In essence, core reviewers share the following common ideals:
1. They share responsibility in the project's success.
2. They have made a long-term, recurring time investment to improve the project.
3. They spend their time doing what needs to be done to ensure the project's success, not necessarily what is the most interesting or fun.
A core reviewer's responsibility doesn't end with merging code. The above lists add context around these responsibilities.
As Neutron has grown in complexity, it has become impossible for any one person to know enough to
merge changes across the entire codebase. Areas of expertise have developed organically, and it is not
uncommon for existing cores to defer to these experts when changes are proposed. Existing cores should
be aware of the implications when they do merge changes outside the scope of their knowledge. It is
with this in mind we propose a new system built around Lieutenants through a model of trust.
In order to scale development and responsibility in Neutron, we have adopted a Lieutenant system.
The PTL is the leader of the Neutron project, and ultimately responsible for decisions made in the
project. The PTL has designated Lieutenants in place to help run portions of the Neutron project. The
Lieutenants are in charge of their own areas, and they can propose core reviewers for their areas as well.
The core reviewer addition and removal policies are in place below. The Lieutenants for each system,
while responsible for their area, ultimately report to the PTL. The PTL may opt to have regular one on
one meetings with the lieutenants. The PTL will resolve disputes in the project that arise between areas
of focus, core reviewers, and other projects. Please note Lieutenants should be leading their own area of
focus, not doing all the work themselves.
As was mentioned in the previous section, a core's responsibilities do not end with merging code. They
are responsible for bug triage and gate issues among other things. Lieutenants have an increased respon-
sibility to ensure gate and bug triage for their area of focus is under control.
Neutron Lieutenants
Sub-project Lieutenants
Neutron also consists of several plugins, drivers, and agents that are developed effectively as sub-projects
within Neutron in their own git repositories. Lieutenants are also named for these sub-projects to identify
a clear point of contact and leader for that area. The Lieutenant is also responsible for updating the core
review team for the sub-project's repositories.
Existing core reviewers have been reviewing code for a varying number of cycles. With the new plan of Lieutenants and ownership, it's fair to try to understand how they fit into the new model. Existing
core reviewers seem to mostly focus in particular areas and are cognizant of their own strengths and
weaknesses. These members may not be experts in all areas, but know their limits, and will not exceed
those limits when reviewing changes outside their area of expertise. The model is built on trust, and
when that trust is broken, responsibilities will be taken away.
Lieutenant Responsibilities
In the hierarchy of Neutron responsibilities, Lieutenants are expected to partake in the following addi-
tional activities compared to other core reviewers:
• Ensuring feature requests for their areas have adequate testing and documentation coverage.
• Gate triage and resolution. Lieutenants are expected to work to keep the Neutron gate running
smoothly by triaging issues, filing elastic recheck queries, and closing gate bugs.
• Triaging bugs for the specific areas.
Neutron Teams
Given all of the above, Neutron has a number of core reviewer teams with responsibility over the areas
of code listed below:
Neutron core reviewers have merge rights to the following git repositories:
• openstack/neutron
• openstack/python-neutronclient
Please note that as we adapt to the system above, with cores specializing in particular areas, we expect this broad core team to shrink as people naturally evolve into an area of specialization.
The plugin decomposition effort has led to having many drivers with code in separate repositories with
their own core reviewer teams. For each one of these repositories in the following repository list, there
is a core team associated with it:
• Neutron project team
These teams are also responsible for handling their own specs/RFEs/features if they choose to use them.
However, by choosing to be a part of the Neutron project, they submit to oversight and veto by the
Neutron PTL if any issues arise.
Neutron specs core reviewers have +2 rights to the following git repositories:
• openstack/neutron-specs
The Neutron specs core reviewer team is responsible for reviewing specs targeted to all Neutron git
repositories (Neutron + Advanced Services). It is worth noting that specs reviewers have the following
attributes which are potentially different than code reviewers:
• Broad understanding of cloud and networking technologies
• Broad understanding of core OpenStack projects and technologies
• An understanding of the effect approved specs have on the team's development capacity for each cycle
Specs core reviewers may match core members of the above mentioned groups, but the group can be
extended to other individuals, if required.
Drivers Team
The drivers team is the group of people who have full rights to the specs repo. This team, which matches the Launchpad Neutron Drivers team, is instituted to ensure a consistent architectural vision for the Neutron project, and to continue to disaggregate and share the responsibilities of the Neutron PTL. The team is in charge of reviewing and commenting on RFEs, and of working with specification contributors to provide guidance on the processes that govern contributions to the Neutron project as a whole. The team meets
regularly to go over RFEs and discuss the project roadmap. Anyone is welcome to join and/or read the
meeting notes.
Release Team
The release team is a group of people with some additional gerrit permissions primarily aimed at allow-
ing release management of Neutron sub-projects. These permissions include:
• Ability to push signed tags to sub-projects whose releases are managed by the Neutron release
team as opposed to the OpenStack release team.
• Ability to push merge commits for Neutron or other sub-projects.
• Ability to approve changes in all Neutron git repositories. This is required as the team needs to be
able to quickly unblock things if needed, especially at release time.
While everyone is encouraged to review changes for these repositories, members of the Neutron core
reviewer group have the ability to +2/-2 and +A changes to these repositories. This is an extra level
of responsibility not to be taken lightly. Correctly merging code requires not only understanding the
code itself, but also how the code affects things like documentation, testing, and interactions with other
projects. It also means you pay attention to release milestones and understand whether a patch you're merging is marked for the release, which is especially critical during feature freeze.
The bottom line here is merging code is a responsibility Neutron core reviewers have.
A new Neutron core reviewer may be proposed at any time on the openstack-discuss mailing list. Typically, the Lieutenant for a given area will propose a new core reviewer for their specific area of coverage, though the Neutron PTL may propose new core reviewers as well. The proposal is typically made after discussions with existing core reviewers. Once a proposal has been made, three existing Neutron core reviewers from the Lieutenant's area of focus must respond to the email with a +1. If the member is being added by a Lieutenant from an area of focus with fewer than three members, a simple majority will
be used to determine if the vote is successful. Another Neutron core reviewer from the same area of
focus can vote -1 to veto the proposed new core reviewer. The PTL will mediate all disputes for core
reviewer additions.
The PTL may remove a Neutron core reviewer at any time. Typically when a member has decreased their
involvement with the project through a drop in reviews and participation in general project development,
the PTL will propose their removal and remove them. Please note there is no voting or vetoing of core
reviewer removal. Members who have previously been a core reviewer may be fast-tracked back into
a core reviewer role if their involvement picks back up and the existing core reviewers support their
re-instatement.
This page provides guidelines for spotting and assessing neutron gate failures. Some hints for triaging
failures are also provided.
Grenade is used in the Neutron gate to test every patch proposed to Neutron to ensure it will not break
the upgrade process. Upgrading from the N-1 branch to the N branch is constantly being tested. So if you send a patch to the Neutron master branch, Grenade jobs will first deploy Neutron from the last stable release and then upgrade it to the master branch with your patch applied. Details about how Grenade works are available
in the documentation.
In Neutron, CI jobs that use Grenade run in the multinode configuration, which means that OpenStack is deployed on 2 VMs:
• one called controller, which is an all-in-one node, so it runs neutron-server as well as the neutron-ovs-agent and nova-compute services,
• one called compute1, which runs only services like nova-compute and neutron-ovs-agent.
Neutron supports a neutron-server in the N version always working with agents that run the N-1 version. To test such a scenario, all our Grenade jobs upgrade OpenStack services only on the controller node. Services which run on the compute1 node always run the old release during that job.
Debugging of failures in the Grenade job is very similar to debugging any other Tempest based job. The difference is that in the logs of the Grenade job there are always logs/old and logs/new directories, which contain the Devstack logs from each run of Devstack's stack.sh script. In the logs/grenade.sh_log.txt file there is a full log of the grenade.sh run, and you should always start checking failures from that file. Logs of the Neutron services for the old and new versions are in the same files, like, for example, logs/screen-q-svc.txt for neutron-server logs. In that log you will find when the service was restarted - that is the moment when it was upgraded by Grenade and started running the new version.
As a first step of troubleshooting a failing gate job, you should always check the logs of the job as
described above. Unfortunately, sometimes when a tempest/functional/fullstack job is failing, it might
be hard to reproduce in a local environment, and it might also be hard to understand the reason for such
a failure from only reading the logs of the failed job. In such cases there are some additional ways to
debug the job directly on the test node in a live setting.
This can be done in two ways:
1. Using the remote_pdb python module and telnet to directly access the python debugger while
in the failed test.
To achieve this, you need to send a Do not merge patch to gerrit with changes as described
below:
• Add an iptables rule to accept incoming telnet connections to remote_pdb. This can be done in one of the ansible roles used in the test job, for example in the neutron/roles/configure_functional_tests file for functional tests:
sudo iptables -I openstack-INPUT -p tcp -m state --state NEW -m tcp --dport 44444 -j ACCEPT
• Increase the OS_TEST_TIMEOUT value so that the test waits longer while remote_pdb is active, which makes debugging easier. This change can also be done in the ansible role mentioned above:
export OS_TEST_TIMEOUT=999999
Please note that the overall job will be limited by the job timeout, and that cannot be changed
from within the job.
• To make it easier to find the IP address of the test node, you should add a command to the ansible role so that it prints the IPs configured on the test node. For example:
hostname -I
• Add the package remote_pdb to the test-requirements.txt file. That way it will
be automatically installed in the venv of the test before it is run:
$ tail -1 test-requirements.txt
remote_pdb
• Finally, you need to import and call the remote_pdb module in the part of your test code
where you want to start the debugger:
$ diff --git a/neutron/tests/fullstack/test_connectivity.py b/neutron/tests/fullstack/test_connectivity.py
index c8650b0..260207b 100644
--- a/neutron/tests/fullstack/test_connectivity.py
+++ b/neutron/tests/fullstack/test_connectivity.py
@@ -189,6 +189,8 @@ class TestLinuxBridgeConnectivitySameNetwork(BaseConnectivitySameNetworkTest):
        ]

    def test_connectivity(self):
Please note that discovery of the test node's public IP addresses is necessary because, by default, remote_pdb binds only to the 127.0.0.1 IP address. The above is just one possible method; there are other ways to do this as well. A hedged sketch of the kind of lines such a patch adds is shown below.
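For orientation only, the added lines typically look like the following. This is a hypothetical sketch, not the exact content of the referenced change; the port number simply matches the iptables rule above, and the class and helper come from the surrounding test file:

import remote_pdb


class TestLinuxBridgeConnectivitySameNetwork(BaseConnectivitySameNetworkTest):

    def test_connectivity(self):
        # Pause the test and wait for a telnet connection on port 44444,
        # the port opened by the iptables rule above. Binding to 0.0.0.0
        # makes the debugger reachable from outside the test node.
        remote_pdb.RemotePdb('0.0.0.0', 44444).set_trace()
        self._test_connectivity()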
When all the above changes are done, you must commit them and go to the Zuul status page to find the status of the tests for your Do not merge patch. Open the console log for your job and wait there until remote_pdb is started. You then need to find the IP address of the test node in the console log; this is necessary to connect via telnet and start debugging. The console log will contain a line similar to: RemotePdb session open at <IP>:44444, waiting for connection ...
An example of such a Do not merge patch as described above can be found at https://review.opendev.org/#/c/558259/.
Please note that after adding new packages to the test-requirements.txt file, the requirements-check job for your test patch will fail, but that is not important for debugging.
2. If root access to the test node is necessary, for example, to check if VMs have really been spawned,
or if router/dhcp namespaces have been configured properly, etc., you can ask a member of the
infra-team to hold the job for troubleshooting. You can ask someone to help with that on the
openstack-infra IRC channel. In that case, the infra-team will need to add your SSH key to
the test node, and configure things so that if the job fails, the node will not be destroyed. You will
then be able to SSH to it and debug things further. Please remember to tell the infra-team when
you finish debugging so they can unlock and destroy the node being held.
The above two solutions can be used together. For example, you should be able to connect to the test
node with both methods:
• using remote_pdb to connect via telnet;
• using SSH to connect as a root to the test node.
You can then ask the infra-team to add your key to the specific node on which you have already started
your remote_pdb session.
The elastic recheck page has all the current open ER queries. To file one, please see the ER Wiki.
Code reviews are a critical component of all OpenStack projects. Neutron accepts patches from many
diverse people with diverse backgrounds, employers, and experience levels. Code reviews provide a
way to enforce a level of consistency across the project, and also allow for the careful onboarding of contributions from new contributors.
Neutron follows the code review guidelines as set forth for all OpenStack projects. It is expected that all
reviewers are following the guidelines set forth on that page.
In addition, the following rules apply:
• Any change that requires a new feature from Neutron runtime dependencies requires special re-
view scrutiny to make sure such a change does not break a supported platform (examples of those platforms are the latest Ubuntu LTS or CentOS). Runtime dependencies include but are not limited to: the kernel, daemons and tools as defined in oslo.rootwrap filter files, runlevel management systems, as well as other elements of the Neutron execution environment.
Note: For some components, the list of supported platforms can be wider than usual. For example, the Open vSwitch agent is expected to run successfully in a Win32 runtime environment.
1. All such changes must be tagged with UpgradeImpact in their commit messages.
2. Reviewers are then advised to make an effort to check if the newly proposed runtime depen-
dency is fulfilled on supported platforms.
3. Specifically, reviewers and authors are advised to use existing gate and experimental plat-
form specific jobs to validate those patches. To trigger experimental jobs, use the usual
protocol (posting check experimental comment in Gerrit). CI will then execute and
report back a baseline of Neutron tests for platforms of interest and will provide feedback
on the effect of the runtime change required.
4. If the review identifies that the proposed change would break a supported platform, advise the author to rework the patch so that it no longer breaks the platform. One common way of achieving that is gracefully falling back to alternative means on older platforms; another is hiding the new code behind a conditional, potentially controlled with an oslo.config option (a minimal sketch of this approach appears after this list).
Note: Neutron team retains the right to remove any platform conditionals in future re-
leases. Platform owners are expected to accommodate in due course, or otherwise see their
platforms broken. The team also retains the right to discontinue support for unresponsive
platforms.
5. The change should also include a new sanity check that would help interested parties identify their platform limitations in a timely manner.
• Special attention should also be paid to changes in Neutron that can impact the Stadium and the
wider family of networking-related projects (referred to as sub-projects below). These changes
include:
1. Renaming or removal of methods.
2. Addition or removal of positional arguments.
3. Renaming or removal of constants.
To mitigate the risk of impacting the sub-projects with these changes, the following measures are
suggested:
1. Use of the online tool codesearch to ascertain how the proposed changes will affect the code
of the sub-projects.
2. Review the results of the non-voting check and third-party CI jobs executed by the sub-projects against the proposed change, which are returned by Zuul in the change's Gerrit page.
When impacts are identified as a result of the above steps, every effort must be made to work with
the affected sub-projects to resolve the issues.
• Any change that modifies or introduces a new API should have test coverage in neutron-tempest-
plugin or tempest test suites. There should be at least one API test added for a new feature, but it
is preferred that both API and scenario tests be added where it is appropriate.
Scenario tests should cover not only the base level of new functionality, but also standard ways in
which the functionality can be used. For example, if the feature adds a new kind of networking
(like e.g. trunk ports) then tests should make sure that instances can use IPs provided by that
networking, can be migrated, etc.
It is also preferred that some negative test cases, like API tests ensuring that the correct HTTP error is returned when invalid data is provided, be added where appropriate.
• It is usually enough for mechanical changes, such as translation imports or imports of updated CI templates, to have only one +2 Code-Review vote to be approved. If there is any uncertainty
about a specific patch, it is better to wait for review from another core reviewer before approving
the patch.
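To make the fallback approach from item 4 of the runtime-dependency rules above more concrete, here is a minimal, hypothetical sketch; the option and helper names are invented, and only oslo.config itself is real:

from oslo_config import cfg

OPTS = [
    cfg.BoolOpt('use_new_kernel_feature',
                default=False,
                help='Enable code paths that require a newer kernel.'),
]
cfg.CONF.register_opts(OPTS)


def _apply_rule_with_new_feature(rule):
    pass  # would use the new, faster mechanism


def _apply_rule_legacy(rule):
    pass  # would gracefully fall back on older platforms


def apply_rule(rule):
    # The new code path is only taken when the operator explicitly enables it,
    # so older supported platforms keep working unchanged.
    if cfg.CONF.use_new_kernel_feature:
        _apply_rule_with_new_feature(rule)
    else:
        _apply_rule_legacy(rule)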
In addition to code reviews, Neutron also maintains a BP specification git repository. Detailed instruc-
tions for the use of this repository are provided here. It is expected that Neutron core team members
are actively reviewing specifications which are pushed out for review to the specification repository. In
addition, there is a neutron-drivers team, composed of a handful of Neutron core reviewers, who can
approve and merge Neutron specs.
Some guidelines around this process are provided below:
• Once a specification has been pushed, it is expected that it will not be approved for at least 3 days
after a first Neutron core reviewer has reviewed it. This allows for additional cores to review the
specification.
• For blueprints which the core team deems of High or Critical importance, core reviewers may be
assigned based on their subject matter expertise.
• Specification priority will be set by the PTL with review by the core team once the specification
is approved.
Stackalytics provides some nice interfaces to track review statistics. The links are provided below. These
statistics are used to track not only Neutron core reviewer statistics, but also to track review statistics for
potential future core members.
• 30 day review stats
• 60 day review stats
• 90 day review stats
• 180 day review stats
This page lists things to cover before a Neutron release and will serve as a guide for next release man-
agers.
Server
Major release
A Major release is cut once per development cycle and has an assigned name (Victoria, Wallaby, ...).
Prior to a major release:
1. consider blocking all patches that are not targeted for the new release;
2. consider blocking trivial patches to keep the gate clean;
3. revise the current list of blueprints and bugs targeted for the release; roll over anything that does not fit there, or won't make it (note that no new features land in master after the so-called feature freeze is declared by the release team; there is a feature freeze exception (FFE) process described in more detail in the release engineering documentation: http://docs.openstack.org/project-team-guide/release-management.html);
4. start collecting state for targeted features from the team. For example, propose a post-mortem
patch for neutron-specs as in: https://review.opendev.org/c/openstack/neutron-specs/+/286413/
5. revise deprecation warnings collected in latest Zuul runs: some of them may indicate a problem
that should be fixed prior to release (see deprecations.txt file in those log directories); also, check
whether any Launchpad bugs with the deprecation tag need a clean-up or a follow-up in the context
of the release being planned;
6. check that release notes and sample configuration files render correctly, arrange clean-up if
needed;
7. ensure all doc links are valid by running tox -e linkcheck and addressing any broken links.
New major release process contains several phases:
1. master branch is blocked for patches that are not targeted for the release;
2. the whole team is expected to work on closing remaining pieces targeted for the release;
3. once the team is ready to release the first release candidate (RC1), either the PTL or one of the release liaisons proposes a patch for the openstack/releases repo. For example, see: https://review.opendev.org/c/openstack/releases/+/753039/
4. once the openstack/releases patch lands, release team creates a new stable branch using hash
values specified in the patch;
5. at this point, master branch is open for patches targeted to the next release; PTL unblocks all
patches that were blocked in step 1;
6. if additional patches are identified that are critical for the release and must be shipped in the final
major build, corresponding bugs are tagged with <release>-rc-potential in Launchpad, fixes are
prepared and land in master branch, and are then backported to the newly created stable branch;
7. if patches landed in the release stable branch as per the previous step, a new release candidate that
would include those patches should be requested by PTL in openstack/releases repo;
8. eventually, the latest release candidate requested by PTL becomes the final major release of the
project.
The release candidate (RC) process allows for stabilization of the final release.
The following technical steps should be taken before the final release is cut off:
1. the latest alembic scripts are tagged with a milestone label. For example, see: https://review.opendev.org/c/openstack/neutron/+/755285/
In the new stable branch, you should make sure that:
1. the .gitreview file points to the new branch; https://review.opendev.org/c/openstack/neutron/+/754738/
2. if the branch uses constraints to manage gated dependency versions, the default constraints file name points to the corresponding stable branch in the openstack/requirements repo; https://review.opendev.org/c/openstack/neutron/+/754739/
3. job templates are updated to use versions for that branch; https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/756585/ and https://review.opendev.org/c/openstack/neutron/+/759856/
4. all CI jobs running against the master branch of another project are dropped; https://review.opendev.org/c/openstack/neutron/+/756695/
5. neutron itself is capped in requirements in the new branch; https://review.opendev.org/c/openstack/requirements/+/764022/
6. all new Neutron features without an API extension which have new tempest tests (in tempest or in neutron-tempest-plugin) must have a new item in the available_features list of the tempest configuration.
After the release is cut, some housekeeping is also in order:
1. abandon stale reviews, for example using <neutron>/tools/abandon_old_reviews.sh;
2. declutter Launchpad.
Minor release
A Minor release is created from an existing stable branch after the initial major release, and usually
contains bug fixes and small improvements only. The minor release frequency should follow the release
schedule for the current series. For example, assuming the current release is Rocky, stable branch
releases should coincide with milestones R1, R2, R3 and the final release. Stable branches can also be released more frequently if needed, for example if a major bug fix has merged recently.
The following steps should be taken before claiming a successful minor release:
1. a patch for openstack/releases repo is proposed and merged.
The minor version number should always be bumped when the new release contains a patch which introduces, for example:
1. a new OVO version for an object,
2. a new configuration option,
3. a requirement change,
4. an API-visible change.
The above list doesn't cover all possible cases. Those are only examples of changes which require a bump of the minor version number; there can also be other types of changes requiring the same.
Changes that require the minor version number to be bumped should always have a release note added. In other cases, only the patch version number needs to be bumped.
Client
Most tips from the Server section apply to client releases too. Several things to note though:
1. when preparing for a major release, pay special attention to client bits that are targeted for the
release. The global openstack/requirements freeze happens long before the first RC release of server components. So if you plan to land server patches that depend on a new client, make sure you don't miss the requirements freeze. After the freeze is in effect, there is no easy way to land more client patches for the planned target. All this may push an affected feature to the next development
cycle.
Neutron Third-party CI
As of the Liberty summit, Neutron no longer requires a third-party CI, but it is strongly encouraged, as
internal neutron refactoring can break external plugins and drivers at any time.
Neutron expects any Third Party CI system that interacts with gerrit to follow the requirements set by the Infrastructure team [1] as well as the Neutron Third Party CI guidelines below. Please ping the PTL in #openstack-neutron or send an email to the openstack-discuss ML (with subject [neutron]) with any questions. Be aware that the Infrastructure documentation as well as this document are living documents and undergo changes. Track changes to the infrastructure documentation using this url [2] (and please review the patches), and check this document on a regular basis for updates.
If your code is a neutron plugin or driver, you should run against every neutron change submitted, except
for docs, tests, tools, and top-level setup files. You can skip your CI runs for such exceptions by using
skip-if and all-files-match-any directives in Zuul. You can see a programmatic example of
the exceptions here [3].
If your code is in a neutron-*aas repo, you should run against the tests for that repo. You may also run
against every neutron change, if your service driver is using neutron interfaces that are not provided by
your service plugin (e.g. firewall/fwaas_plugin_v2.py). If you are using only plugin interfaces, it should
be safe to test against only the service repo tests.
The tests to run include: Network API tests (http://opendev.org/openstack/tempest/tree/tempest/api/network), network scenario tests (the test_network_* tests), and any tests written specifically for your setup.
Run with the test filter: network. This will include all neutron-specific tests as well as any other tests that are tagged as requiring networking. An example tempest setup for devstack-gate:
export DEVSTACK_GATE_NEUTRON=1
export DEVSTACK_GATE_TEMPEST_REGEX='(?!.*\[.*\bslow\b.*\])((network)|(neutron))'
[1] http://ci.openstack.org/third_party.html
[2] https://review.opendev.org/#/q/status:open+project:openstack-infra/system-config+branch:master+topic:third-party,n,z
[3] https://github.com/openstack-infra/project-config/blob/master/dev/zuul/layout.yaml
The Neutron team encourages you NOT to vote -1 with a third-party CI. False negatives are noisy to the community and have given -1 votes from third-party CIs a bad reputation, to the point of people ignoring them entirely. Failure messages are still useful to those doing refactors, and provide you feedback on the state of your plugin.
If you insist on voting, note that by default the infra team will not allow voting by new third-party CI systems. The way to get your third-party CI system to vote is to talk with the Neutron PTL, who will let infra know the system is ready to vote. The requirements for a new system to be given voting rights are as follows:
• A new system must be up and running for a month, with a track record of voting on the sandbox
system.
• A new system must correctly run and pass tests on patches for the third party driver/plugin for a
month.
• A new system must have a logfile setup and retention setup similar to the below.
Once the system has been running for a month, the owner of the third party CI system can contact the
Neutron PTL to have a conversation about getting voting rights upstream.
The general process to get these voting rights is outlined here. Please follow that, taking note of the
guidelines Neutron also places on voting for its CI systems.
A third-party system can have its voting rights removed as well. If the system becomes unstable (stops running or voting, or starts providing inaccurate results), the Neutron PTL or any core reviewer will make
an attempt to contact the owner and copy the openstack-discuss mailing list. If no response is received
within 2 days, the Neutron PTL will remove voting rights for the third party CI system. If a response
is received, the owner will work to correct the issue. If the issue cannot be addressed in a reasonable
amount of time, the voting rights will be temporarily removed.
Third-Party CI systems MUST provide logs and configuration data to help developers troubleshoot test
failures. A third-party CI that DOES NOT post logs should be a candidate for removal, and new CI
systems MUST post logs before they can be awarded voting privileges.
Third party CI systems should follow the filesystem layout convention of the OpenStack CI system.
Please store your logs so that they are viewable in a web browser, in a directory structure. Requiring the user to download a giant tarball is not acceptable, and will be reason not to allow your system to vote from the start, or to cancel its voting rights if this changes while the system is running.
At the root of the results - there should be the following:
• console.html.gz - contains the output of stdout of the test run
• local.conf / localrc - contains the setup used for this run
• logs - contains the output of detail test log of the test run
The above logs must be a directory, which contains the following:
• Log files for each screen session that DevStack creates and launches an OpenStack component in
• Test result files
• testr_results.html.gz
• tempest.txt.gz
https://wiki.openstack.org/wiki/Neutron_Plugins_and_Drivers#Existing_Plugin_and_Drivers
References
This document provides guidelines on what to do in case your patch fails one of the Jenkins CI jobs. In
order to discover potential bugs hidden in the code or tests themselves, it's very helpful to check failed
scenarios to investigate the cause of the failure. Sometimes the failure will be caused by the patch being
tested, while other times the failure can be caused by a previously untracked bug. Such failures are
usually related to tests that interact with a live system, like functional, fullstack and tempest jobs.
Before issuing a recheck on your patch, make sure that the gate failure is not caused by your patch.
A failed job can also be caused by an infra issue, for example being unable to fetch things from external resources like git or pip due to an outage. Such failures outside of the OpenStack world are not worth tracking in Launchpad, and you can recheck leaving a couple of words about what went wrong. Data about gate stability
is collected and visualized via Grafana.
Please, do not recheck without providing the bug number for the failed job. For example, do not just
put an empty recheck comment but find the related bug number and put a recheck bug ###### comment
instead. If a bug does not exist yet, create one so other team members can have a look. It helps us
maintain better visibility of gate failures. You can find how to troubleshoot gate failures in the Gate
Failure Triage documentation.
This section contains information on policies and procedures for the so-called Neutron Stadium. The Neutron Stadium is the list of projects that show up in the OpenStack Governance Document.
The list includes projects that the Neutron PTL and core team are directly involved in, and manage on a
day-to-day basis. To do so, the PTL and team ensure that common practices and guidelines are followed throughout the Stadium, for all aspects that pertain to software development, from inception to coding, testing, documentation, and more.
The Stadium is not intended to be a VIP club for OpenStack networking projects, or an upper tier within OpenStack. It is simply the list of projects the Neutron team and PTL claim responsibility for
when producing Neutron deliverables throughout the release cycles.
For more details on the Stadium, and what it takes for a project to be considered an integral part of the
Stadium, please read on.
Stadium Governance
Background
Neutron grew to become a big monolithic codebase, and its core team had a tough time making progress
on a number of fronts, like adding new features, ensuring stability, etc. During the Kilo timeframe,
a decomposition effort started, where the codebase got disaggregated into separate repos, like the high
level services, and the various third-party solutions for L2 and L3 services, and the Stadium was officially
born.
These initiatives gave the various individual teams in charge of the smaller projects the opportunity to iterate faster and reduce the time to feature. This has been due to the increased autonomy and implicit
trust model that made the lack of oversight of the PTL and the Neutron drivers/core team acceptable
for a small number of initiatives. When the proposed arrangement allowed projects to be automatically
enlisted as a Neutron project based simply on description, and desire for affiliation, the number of
projects included in the Stadium started to grow rapidly, which created a number of challenges for the
PTL and the drivers team.
In fact, it became harder and harder to ensure consistency in the APIs, architecture, design, implemen-
tation and testing of the overarching project; all aspects of software development, like documentation,
integration, release management, maintenance, and upgrades started to be neglected for some projects, and that led to some unhappy experiences.
The point about uniform APIs is particularly important, because the Neutron platform is so flexible that a project can take a totally different turn in the way it exposes functionality, making it virtually impossible for the PTL and the drivers team to ensure that good API design principles are being followed over time.
In a situation where each project is on its own, that might be acceptable, but allowing independent API
evolution while still under the Neutron umbrella is counterproductive.
These challenges led the Neutron team to find a better balance between autonomy and consistency and
lay down criteria that more clearly identify when a project can be eligible for inclusion in the Neutron
governance.
This document describes these criteria, documents the steps involved in maintaining the integrity of the Stadium, and explains how to ensure this integrity is maintained over time when modifications to the governance are required.
In order to be considered part of the Stadium, a project must show a track record of alignment with the
Neutron core project. This means showing proof of adoption of practices as led by the Neutron core
team. Some of these practices are typically already followed by the most mature OpenStack projects:
• Exhaustive documentation: it is expected that each project will have developer, user/operator, and API documentation available.
• Exhaustive OpenStack CI coverage: unit, functional, and tempest coverage using OpenStack CI
(upstream) resources so that Grafana and OpenStack Health support is available. Access to CI
resources and historical data by the team is key to ensuring stability and robustness of a project.
In particular, it is of paramount importance to ensure that DB models/migrations are tested func-
tionally to prevent data inconsistency issues or unexpected DB logic errors due to schema/models
mismatch. For more details, please look at the following resources:
– https://review.opendev.org/#/c/346091/
– https://review.opendev.org/#/c/346272/
– https://review.opendev.org/#/c/346083/
More Database related information can be found on:
– Alembic Migrations
– Neutron Database Layer
Bear in mind that many projects have been transitioning their codebase and tests to fully support
Python 3+, and it is important that each Stadium project supports Python 3+ the same way Neu-
tron core does. For more information on how to do testing, please refer to the Neutron testing
documentation.
• Good release footprint, according to the chosen release model.
• Adherence to deprecation and stable backports policies.
• Demonstrated ability to do upgrades and/or rolling upgrades, where applicable. This means having
grenade support on top of the CI coverage as described above.
• Client bindings and CLI developed according to the OpenStack Client plugin model.
On top of the above-mentioned criteria, the following are also taken into consideration:
• A project must use, adopt and implement open software and technologies.
• A project must integrate with Neutron via one of the supported, advertised and maintained public
Python APIs. REST API does not qualify (the project python-neutronclient is an exception).
• It adopts neutron-lib (with related hacking rules applied), and has proof of good decoupling from
Neutron core internals.
• It provides an API that adopts API guidelines as set by the Neutron core team, and that relies on
an open implementation.
• It adopts modular interfaces to provide networking services: this means that L2/7 services are
provided in the form of ML2 mech drivers and service plugins respectively. A service plugin
can expose a driver interface to support multiple backend technologies, and/or adopt the flavor
framework as necessary.
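As a rough illustration of the modular-interface point above, a minimal ML2 mechanism driver skeleton might look like the following; the class and backend are invented, and only the neutron-lib base class is real:

from neutron_lib.plugins.ml2 import api


class FooMechanismDriver(api.MechanismDriver):
    """Hypothetical driver pushing L2 state to a 'foo' backend."""

    def initialize(self):
        # Called once when neutron-server starts; set up backend clients here.
        pass

    def create_port_postcommit(self, context):
        # 'context' is a PortContext; context.current is the port dict that
        # was just committed to the Neutron DB and should be mirrored to the
        # backend.
        pass

Such a driver would typically be registered through a neutron.ml2.mechanism_drivers entry point in the sub-project's setup.cfg.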
When a project is to be considered part of the Stadium, proof of compliance with the aforementioned practices will typically have to be demonstrated for at least two OpenStack releases. Application for
inclusion is to be considered only within the first milestone of each OpenStack cycle, which is the
time when the PTL and Neutron team do release planning, and have the most time available to discuss
governance issues.
Projects that are part of the Neutron Stadium typically have the first milestone to get their house in order, during which time reassessment happens; if removed because of a substantial failure to meet the criteria, a project cannot reapply within the same release cycle in which it was evicted.
The process for proposing a repo into openstack/ and under the Neutron governance is to propose a
patch to the openstack/governance repository. For example, to propose networking-foo, one would add
the following entry under Neutron in reference/projects.yaml:
- repo: openstack/networking-foo
tags:
- name: release:independent
Typically this is a patch that the PTL, in collaboration with the project's point of contact, will shepherd through the review process. This step is undertaken once it is clear that all criteria are met. The next
section provides an informal checklist that shows what steps a project needs to go through in order to
enable the PTL and the TC to vote positively on the proposed inclusion.
Once a project is included, it abides by the Neutron RFE submission process, where specifications to
neutron-specs are required for major API as well as major architectural changes that may require core
Neutron platform enhancements.
Checklist
• How to integrate documentation into docs.o.o: The documentation website has a section for
project developer documentation. Each project in the Neutron Stadium must have an entry under
the Networking Sub Projects section that points to the developer documentation for the project,
available at https://docs.openstack.org/<your-project>/latest/. This is a two-step process that involves the following:
– Build the artefacts: this can be done by following example https://review.opendev.org/#/c/293399/.
– Publish the artefacts: this can be done by following example https://review.opendev.org/#/c/216448/.
More information can also be found on the project creator guide.
• How to integrate into Grafana: Grafana is a great tool that provides the ability to display historical
series, like failure rates of OpenStack CI jobs. A few examples that added dashboards over time
are:
– Neutron.
– Networking-OVN.
– Networking-Midonet.
Any subproject must have a Grafana dashboard that shows failure rates for at least Gate and Check
queues.
• How to integrate into neutron-lib's CI: there are a number of steps required to integrate with neutron-lib CI and adopt neutron-lib in general. One step is to validate that neutron-lib master is working with the master of a given project that uses neutron-lib. For example, a patch introduced such support for the Neutron project. Any subproject that wants to do the same would need to adopt the following few lines:
1. https://review.opendev.org/#/c/338603/4/jenkins/jobs/projects.yaml@4685
2. https://review.opendev.org/#/c/338603/3/zuul/layout.yaml@8501
3. https://review.opendev.org/#/c/338603/4/grafana/neutron.yaml@39
Lines 1 and 2 add a job to the periodic queue for the project, whereas line 3 introduces the failure rate trend for the periodic job to spot failure spikes, etc. Make sure your project has the following:
1. https://review.opendev.org/#/c/357086/
2. https://review.opendev.org/#/c/359143/
• How to port api-ref over to neutron-lib: to publish the subproject API reference into the Networking API guide, you must contribute the API documentation into neutron-lib's api-ref directory as done in the WADL/REST transition patch. Once this is done successfully, a link to the subproject API will show under the published table of contents. An RFE bug tracking this effort effectively initiates the request for Stadium inclusion, where all the aspects as outlined in this document are reviewed by the PTL.
• How to port API definitions over to neutron-lib: the most basic steps to port API definitions over to neutron-lib are demonstrated in the following patches:
– https://review.opendev.org/#/c/353131/
– https://review.opendev.org/#/c/353132/
The neutron-lib patch introduces the elements that define the API, and testing coverage validates
that the resource and actions maps use valid keywords. API reference documentation is provided
alongside the definition to keep everything in one place. The neutron patch uses the Neutron
extension framework to plug the API definition on top of the Neutron API backbone. The change
can only merge when there is a released version of neutron-lib.
• How to integrate into the openstack release: every project in the Stadium must have release notes.
In order to set up release notes, please see the patches below for an example on how to set up reno:
– https://review.opendev.org/#/c/320904/
– https://review.opendev.org/#/c/243085/
For release documentation related to Neutron, please check the Neutron Policies. Once everything is set up and your project is released, make sure you see an entry on the release page (e.g. Pike). Make sure you release according to the project's declared release model.
• How to port OpenStack Client over to python-neutronclient: client API bindings and client command line interface support must be developed in python-neutronclient under the osc module. If your project requires one or both, consider looking at the following examples on how to contribute these to python-neutronclient according to the OSC framework and guidelines:
– https://review.opendev.org/#/c/340624/
– https://review.opendev.org/#/c/340763/
– https://review.opendev.org/#/c/352653/
More information on how to develop python-openstackclient plugins can be found on the follow-
ing links:
– https://docs.openstack.org/python-openstackclient/latest/contributor/plugins.html
– https://docs.openstack.org/python-openstackclient/latest/contributor/humaninterfaceguide.html
It is worth prefixing the commands being added with the keyword network to avoid potential clash
with other commands with similar names. This is only required if the command object name is
highly likely to have an ambiguous meaning.
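As a hedged sketch of what such an OSC plugin command can look like (the resource and attribute names are invented; osc-lib is the real plugin library):

from osc_lib.command import command


class ShowNetworkFoo(command.ShowOne):
    """Show details of a (hypothetical) 'network foo' resource."""

    def get_parser(self, prog_name):
        parser = super(ShowNetworkFoo, self).get_parser(prog_name)
        parser.add_argument('foo', metavar='<foo>',
                            help='Name or ID of the foo resource to display.')
        return parser

    def take_action(self, parsed_args):
        # A real command would look the resource up through
        # self.app.client_manager here; static data keeps the sketch simple.
        foo = {'id': 'fake-id', 'name': parsed_args.foo}
        return zip(*sorted(foo.items()))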
Sub-Project Guidelines
This document provides guidance for those who maintain projects that consume main neutron or neutron
advanced services repositories as a dependency. It is not meant to describe projects that are not tightly
coupled with Neutron code.
Code Reuse
At all times, avoid using any Neutron symbols that are explicitly marked as private (those have an
underscore at the start of their names).
Try to avoid copy-pasting code from Neutron in order to extend it. Instead, rely on the enormous number of different plugin entry points provided by Neutron (L2 agent extensions, API extensions, service plugins, core plugins, ML2 mechanism drivers, etc.).
Requirements
Neutron dependency
Subprojects usually depend on neutron repositories by using the -e https:// scheme to define such a dependency. The dependency must not be present in requirements lists though, and instead belongs in the tox.ini deps section. This is because future pbr library releases do not guarantee that -e https:// dependencies will work. A hedged sketch of such a tox.ini section is shown below.
You may still put a versioned neutron dependency in your requirements list to indicate the dependency for anyone who packages your subproject.
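A hedged sketch of such a tox.ini deps section; the branch and URL are illustrative and should be adjusted to your subproject:

[testenv]
deps =
  -r{toxinidir}/requirements.txt
  -r{toxinidir}/test-requirements.txt
  # The -e dependency on neutron lives here, NOT in requirements.txt:
  -e git+https://opendev.org/openstack/neutron.git#egg=neutron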
Explicit dependencies
Each neutron project maintains its own lists of requirements. Subprojects that depend on neutron while
directly using some of those libraries that neutron maintains as its dependencies must not rely on the fact
that neutron will pull the needed dependencies for them. Direct library usage requires that the library is mentioned in the requirements lists of the subproject.
The reason to duplicate those dependencies is that the neutron team does not stick to any backwards compatibility strategy in regards to requirements lists, and is free to drop any of those dependencies at any time, breaking anyone who relies on those libraries being pulled in by neutron itself.
At all times, subprojects that use neutron as a dependency should make sure their dependencies do not conflict with neutron's.
Core neutron projects maintain their requirements lists by utilizing a so-called proposal bot. To keep your subproject in sync with neutron, it is highly recommended that you register your project in the openstack/requirements:projects.txt file to enable the bot to update requirements for you.
Once a subproject opts in to global requirements synchronization, it should enable check-requirements jobs in project-config. For example, see this patch.
Stable branches
Stable branches for subprojects should be created at the same time as the corresponding neutron stable branches. This is to avoid situations where a postponed cut-off results in a stable branch that contains patches that belong to the next release. This would require reverting patches, and that is something you should avoid.
Make sure your neutron dependency uses the corresponding stable branch for neutron, not master.
Note that to keep requirements in sync with core neutron repositories in stable branches, you should
make sure that your project is registered in openstack/requirements:projects.txt for the branch in ques-
tion.
Subproject stable branches are supervised by the horizontal neutron-stable-maint team.
More info on stable branch process can be found on the following page.
Merges into stable branches are handled by members of the neutron-stable-maint gerrit group. The
reason for this is to ensure consistency among stable branches, and compliance with policies for stable
backports.
For sub-projects that participate in the Neutron Stadium effort and that also create and utilize stable branches, there is an expectation around what is allowed to be merged in these stable branches. The Stadium projects should follow the stable branch policies as defined on the Stable Branch wiki.
This means that, among other things, no features are allowed to be backported into stable branches.
Releases
It is suggested that sub-projects cut new releases from time to time, especially for stable branches. It will make the life of packagers and other consumers of your code easier.
All subproject releases are managed by the global OpenStack Release Managers team. The neutron-release team handles only the following operations:
• Make stable branches end of life
To release a sub-project, follow these steps:
• For projects which have not moved to post-versioning, we need to push an alpha tag to avoid pbr
complaining. A member of the neutron-release group will handle this.
• A sub-project owner should modify setup.cfg to remove the version (if you have one), which
moves your project to post-versioning, similar to all the other Neutron projects. You can skip this step if you don't have a version in setup.cfg.
• A sub-project owner proposes a patch to openstack/releases repository with the intended git hash.
The Neutron release liaison should be added in Gerrit to the list of reviewers for the patch.
Note: New major tag versions should conform to SemVer requirements, meaning no year numbers should be used as a major version. The switch to SemVer is advised at the earliest convenience for all new major releases.
Note: Before Ocata, when releasing the very first release in a stable series, a sub-project owner
would need to request a new stable branch creation during Gerrit review, but not anymore. See
the following email for more details.
• The Neutron release liaison votes with +1 for the openstack/releases patch.
• The releases will now be on PyPI. A sub-project owner should verify this by going to a URL similar to this.
• A sub-project owner should next go to Launchpad and release this version using the Release Now
button for the release itself.
• If a sub-project uses the delay-release option, a sub-project owner should update any bugs that
were fixed with this release to Fix Released in Launchpad. This step is not necessary if the sub-
project uses the direct-release option, which is the default.1
• The new release will be available on OpenStack Releases.
• A sub-project owner should add the next milestone to the Launchpad series, or if a new series is
required, create the new series and a new milestone.
Note: You need to be careful when picking a git commit to base new releases on. In most cases, you'll want to tag the merge commit that merges your last commit into the branch. This bug shows an instance
where this mistake was caught. Notice the difference between the incorrect commit and the correct one
which is the merge commit. git log 6191994..22dd683 --oneline shows that the first one
misses a handful of important commits that the second one catches. This is the nature of merging to
master.
References
In the Developer Guide, you will find information on Neutron's lower level programming APIs. There
are sections that cover the core pieces of Neutron, including its database, message queue, and scheduler
components. There are also subsections that describe specific plugins inside Neutron. Finally, the
developer guide includes information about Neutron testing infrastructure.
14.4.1 Effective Neutron: 100 specific ways to improve your Neutron contribu-
tions
There are a number of skills that make a great Neutron developer: writing good code, reviewing ef-
fectively, listening to peer feedback, etc. The objective of this document is to describe, by means of
examples, the pitfalls and the good and bad practices that we as a project encounter on a daily basis, and that make us either go slower or accelerate while contributing to Neutron.
By reading and collaboratively contributing to such a knowledge base, your development and review
cycle becomes shorter, because you will learn (and teach to others after you) what to watch out for, and
how to be proactive in order to prevent negative feedback, minimize programming errors, write better tests, and so on and so forth; in a nutshell, how to become an effective Neutron developer.
The notes below are meant to be free-form and brief by design. They are not meant to replace or
duplicate OpenStack documentation, or any project-wide documentation initiative like peer-review notes
or the team guide. For this reason, references are acceptable and should be favored, if the shortcut is
deemed useful to expand on the distilled information. We will try to keep these notes tidy by breaking
them down into sections if it makes sense. Feel free to add, adjust, remove as you see fit. Please do
so, taking into consideration yourself and other Neutron developers as readers. Capture your experience during development and review, and add any comment that you believe will make your life and others' easier.
Happy hacking!
Plugin development
Document common pitfalls as well as good practices done during plugin development.
• Use mixin classes as last resort. They can be a powerful tool to add behavior but their strength is
also a weakness, as they can introduce unpredictable behavior to the MRO, amongst other issues.
• In lieu of mixins, if you need to add behavior that is relevant for ML2, consider using the extension
manager.
• If you make changes to the DB class methods, like calling methods that can be inherited, think about what effect that may have on plugins that have controller backends.
• If you make changes to the ML2 plugin or components used by the ML2 plugin, think about the effect that may have on other plugins.
• When adding behavior to the L2 and L3 db base classes, do not assume that there is an agent on
the other side of the message broker that interacts with the server. Plugins may not rely on agents
at all.
• Be mindful of required capabilities when you develop plugin extensions. The extension description provides the ability to specify the list of required capabilities for the extension you are developing. By declaring this list, the server will not start up if the requirements are not met, which avoids leading the system to experience undetermined behavior at runtime. A hedged sketch of such a descriptor follows below.
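A hedged sketch of such a descriptor, with invented names; the base class and the get_required_extensions() hook come from neutron-lib:

from neutron_lib.api import extensions


class Fooextension(extensions.ExtensionDescriptor):

    def get_name(self):
        return "Foo extension"

    def get_alias(self):
        return "foo-extension"

    def get_description(self):
        return "Adds the hypothetical foo attribute to ports."

    def get_updated(self):
        return "2021-01-01T00:00:00-00:00"

    def get_required_extensions(self):
        # Unmet requirements are detected when extensions are loaded,
        # rather than surfacing as undefined behavior at runtime.
        return ["port-security"]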
Database interaction
Document common pitfalls as well as good practices done during database development.
• first() does not raise an exception.
• Do not use delete() to remove objects. A delete query does not load the object so no sqlalchemy
events can be triggered that would do things like recalculate quotas or update revision numbers of
parent objects. For more details on all of the things that can go wrong using bulk delete operations,
see the Warning sections in the link above.
• For PostgreSQL, if you're using GROUP BY, everything in the SELECT list must be an aggregate (SUM(), COUNT(), etc.) or used in the GROUP BY.
The incorrect variant:
q = query(Object.id, Object.name,
func.count(Object.number)).group_by(Object.name)

The correct variant:

q = query(Object.id, Object.name,
func.count(Object.number)).group_by(Object.id, Object.name)
• Beware of the InvalidRequestError exception. There is even a Neutron bug registered for it. Bear
in mind that this error may also occur when nesting transaction blocks, and the innermost block
raises an error without proper rollback. Consider if savepoints can fit your use case.
• When designing data models that are related to each other, be careful about how you model the relationships' loading strategy. For instance, a joined relationship can be much more efficient than other strategies (some examples include router gateways or network availability zones).
• If you add a relationship to a Neutron object that will be referenced in the majority of cases where
the object is retrieved, be sure to use the lazy=joined parameter to the relationship so the related
objects are loaded as part of the same query. Otherwise, the default method is select, which emits a new DB query to retrieve each related object, adversely impacting performance. For example, see
patch 88665 which resulted in a significant improvement since router retrieval functions always
include the gateway interface.
• Conversely, do not use lazy=joined if the relationship is only used in corner cases because the
JOIN statement comes at a cost that may be significant if the relationship contains many objects.
For example, see patch 168214 which reduced a subnet retrieval by ~90% by avoiding a join to
the IP allocation table.
• When writing extensions to existing objects (e.g. Networks), ensure that they are written in a way
that the data on the object can be calculated without an additional DB lookup. If that's not possible, ensure the DB lookup is performed once in bulk during a list operation. Otherwise a list call for 1000 objects will change from a constant small number of DB queries to 1000 DB queries. For
example, see patch 257086 which changed the availability zone code from the incorrect style to a
database friendly one.
• Beware of ResultProxy.inserted_primary_key which returns a list of last inserted primary keys not
the last inserted primary key:
result = session.execute(mymodel.insert().values(**values))
# result.inserted_primary_key is a list even if we inserted a unique row!
• Beware of pymysql, which can silently unwrap a single-element list (and thereby hide a wrong use of ResultProxy.inserted_primary_key, for example). An insert that passes a single-element list where an integer is expected should crash, and it does crash at least with the mysql and postgresql backends, but it succeeds with pymysql because the list is silently converted into its single element. A hedged illustration of the pitfall is sketched below.
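A hedged illustration of both pitfalls, with invented model names (mymodel as in the snippet above, othermodel assumed for the second insert):

result = session.execute(mymodel.insert().values(**values))

# inserted_primary_key is a sequence of primary-key values, not a scalar:
new_id = result.inserted_primary_key[0]     # correct
wrong = result.inserted_primary_key         # still a list-like object

# Passing the sequence where an integer column is expected should fail loudly,
# and it does with the mysql and postgresql drivers; pymysql may silently
# unwrap the single element and hide the bug:
session.execute(othermodel.insert().values(parent_id=wrong))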
System development
Document common pitfalls as well as good practices done when invoking system commands and inter-
acting with linux utils.
• When a patch requires a new platform tool or a new feature in an existing tool, check if common platforms ship packages with the aforementioned feature. Also, tag such a patch with UpgradeImpact to raise its visibility (as these patches are brought to the attention of the core team during team meetings). More details are in the review guidelines.
• When a patch or the code depends on a new feature in the kernel or in any platform tools (dnsmasq,
ip, Open vSwitch etc.), consider introducing a new sanity check to validate deployments for the
expected features. Note that sanity checks must not check for version numbers of underlying
platform tools because distributions may decide to backport needed features into older versions.
Instead, sanity checks should validate actual features by attempting to use them.
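A minimal sketch of the feature-probing idea; this is illustrative only, not an actual Neutron sanity check, and it requires root privileges:
import subprocess
import uuid

def netns_supported():
    # Exercise the capability (create and delete a throwaway namespace)
    # rather than parsing the iproute2 version string.
    name = 'sanity-' + uuid.uuid4().hex[:8]
    try:
        subprocess.run(['ip', 'netns', 'add', name],
                       check=True, capture_output=True)
    except (OSError, subprocess.CalledProcessError):
        return False
    subprocess.run(['ip', 'netns', 'delete', name], capture_output=True)
    return True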
Document common pitfalls as well as good practices done when using eventlet and monkey patching.
• Do not use with_lockmode(update) on SQL queries without protecting the operation with a
lockutils semaphore. For some SQLAlchemy database drivers that operators may choose (e.g.
MySQLdb) it may result in a temporary deadlock by yielding to another coroutine while hold-
ing the DB lock. The following wiki provides more details: https://wiki.openstack.org/wiki/
OpenStack_and_SQLAlchemy#MySQLdb_.2B_eventlet_.3D_sad
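A rough sketch of the recommended pattern using oslo.concurrency; the Port model and the query are hypothetical:
from oslo_concurrency import lockutils

@lockutils.synchronized('port-allocation')
def _allocate_port(session, port_id):
    # The semaphore keeps other coroutines out of this critical section, so
    # a blocking DB driver cannot deadlock by yielding to another coroutine
    # while the row lock taken below is still held.
    return (session.query(Port)            # hypothetical model
            .filter_by(id=port_id)
            .with_for_update()
            .one())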
Document common pitfalls as well as good practices done when writing tests, any test. For anything
more elaborate, please visit the testing section.
• Prefer low level testing over full path testing (e.g. not testing the database via client calls). The former is to be favored in unit testing, whereas the latter is to be favored in functional testing.
• Prefer specific assertions (assert(Not)In, assert(Not)IsInstance, assert(Not)IsNone, etc) over
generic ones (assertTrue/False, assertEqual) because they raise more meaningful errors:
def test_specific(self):
self.assertIn(3, [1, 2])
# raise meaningful error: "MismatchError: 3 not in [1, 2]"
def test_generic(self):
self.assertTrue(3 in [1, 2])
# raise meaningless error: "AssertionError: False is not true"
• Use the pattern self.assertEqual(expected, observed), not the opposite; it helps reviewers understand which is the expected and which is the observed value in non-trivial assertions. The expected and observed values are also labeled in the output when the assertion fails.
• Prefer specific assertions (assertTrue, assertFalse) over assertEqual(True/False, observed).
• Don't write tests that don't test the intended code. This might seem silly but it's easy to do with a lot of mocks in place. Ensure that your tests break as expected before your code change.
• Avoid heavy use of the mock library to test your code. If your code requires more than one
mock to ensure that it does the correct thing, it needs to be refactored into smaller, testable units.
Otherwise we depend on fullstack/tempest/api tests to test all of the real behavior and we end up
with code containing way too many hidden dependencies and side effects.
• All behavior changes to fix bugs should include a test that prevents a regression. If you made a change and it didn't break a test, it means the code was not adequately tested in the first place; that's not an excuse to leave it untested.
• Test the failure cases. Use a mock side effect to throw the necessary exceptions to test your except
clauses.
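A self-contained illustration of the side_effect pattern; the function under test is a toy example written just for this sketch:
from unittest import mock

import testtools


def fetch_with_fallback(client, default):
    # Toy function under test: fall back to a default when the backend
    # times out.
    try:
        return client.get()
    except TimeoutError:
        return default


class TestFailurePaths(testtools.TestCase):
    def test_fetch_falls_back_on_timeout(self):
        client = mock.Mock()
        # side_effect raises the exception needed to exercise the except
        # clause of the code under test.
        client.get.side_effect = TimeoutError
        self.assertEqual('default', fetch_with_fallback(client, 'default'))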
• Dont mimic existing tests that violate these guidelines. We are attempting to replace all of these
so more tests like them create more work. If you need help writing a test, reach out to the testing
lieutenants and the team on IRC.
• Mocking open() is a dangerous practice because it can lead to unexpected bugs like bug 1503847. In fact, when the built-in open method is mocked during tests, some utilities (like debtcollector) may still rely on the real thing, and may end up using the mock rather than what they are really looking for. If you must, consider using OpenFixture, but it is better not to mock open() at all.
Documentation
The documentation for Neutron that exists in this repository is broken down into the following directories
based on content:
• doc/source/admin/ - feature-specific configuration documentation aimed at operators.
• doc/source/configuration - stubs for auto-generated configuration files. Only needs updating if
new config files are added.
• doc/source/contributor/internals - developer documentation for lower-level technical details.
• doc/source/contributor/policies - neutron team policies and best practices.
• doc/source/install - install-specific documentation for standing-up network-enabled nodes.
Additional documentation resides in the neutron-lib repository:
• api-ref - API reference documentation for Neutron resource and API extensions.
Backward compatibility
Document common pitfalls as well as good practices done when extending the RPC Interfaces.
• Make yourself familiar with Upgrade review guidelines.
Deprecation
Sometimes we want to refactor things in a non-backward compatible way. In most cases you can
use debtcollector to mark things for deprecation. Config items have deprecation options supported by
oslo.config.
The deprecation process must follow the standard deprecation requirements. In terms of neutron devel-
opment, this means:
• A launchpad bug to track the deprecation.
• A patch to mark the deprecated items. If the deprecation affects users (config items, API changes)
then a release note must be included.
• Wait at least one cycle and at least three months linear time.
• A patch that removes the deprecated items. Make sure to refer to the original launchpad bug in
the commit message of this patch.
Scalability issues
Document common pitfalls as well as good practices done when writing code that needs to process a lot
of data.
Document common pitfalls as well as good practices done when instrumenting your code.
• Make yourself familiar with OpenStack logging guidelines to avoid littering the logs with traces
logged at inappropriate levels.
• The logger should only be passed unicode values. For example, do not pass it exceptions or other
objects directly (LOG.error(exc), LOG.error(port), etc.). See https://docs.openstack.org/oslo.log/
latest/user/migration.html#no-more-implicit-conversion-to-unicode-str for more details.
• Don't pass exceptions into LOG.exception: the exception is already implicitly included in the log message by the Python logging module.
• Don't use LOG.exception when there is no exception registered in the current thread context: Python 3.x versions before 3.5 are known to fail on it.
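A short sketch contrasting correct and incorrect calls; driver and port_id are illustrative names:
import logging

LOG = logging.getLogger(__name__)

def bind(driver, port_id):
    try:
        driver.bind_port(port_id)
    except Exception:
        # Good: pass a format string plus arguments; LOG.exception appends
        # the traceback of the active exception by itself.
        LOG.exception("Failed to bind port %s", port_id)
        # Bad (avoid): LOG.error(exc) or LOG.exception(exc) pass a non-text
        # object and rely on implicit conversion to a string.
        raise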
Project interfaces
Document common pitfalls as well as good practices done when writing code that is used to interface
with other projects, like Keystone or Nova.
Document common pitfalls as well as good practices done when writing docstrings.
• Do not make multiple changes in one patch unless absolutely necessary. Cleaning up nearby
functions or fixing a small bug you noticed while working on something else makes the patch
very difficult to review. It also makes cherry-picking and reverting very difficult. Even apparently
minor changes such as reformatting whitespace around your change can burden reviewers and
cause merge conflicts.
• If a fix or feature requires code refactoring, submit the refactoring as a separate patch from the one that changes the logic. Otherwise it's difficult for a reviewer to tell the difference between mistakes in the refactor and changes required for the fix/feature. If it's a bug fix, try to implement the fix before the refactor to avoid making cherry-picks to stable branches difficult.
• Consider your reviewers' time before submitting your patch. A patch that requires many hours or days to review will sit in the todo list until someone has many hours or days free (which may never happen). If you can deliver your patch in small but incrementally understandable and testable pieces you will be more likely to attract reviewers.
Reviewer comments
• Acknowledge them one by one by either clicking Done or by replying extensively. If you do not, the reviewer won't know whether you thought it was not important, or you simply forgot. If the reply satisfies the reviewer, consider capturing the input in the code/document itself so that it is there for reviewers of newer patchsets to see (and for other developers when the patch merges).
• Watch for the feedback on your patches. Acknowledge it promptly and act on it quickly, so that
the reviewer remains engaged. If you disappear for a week after you posted a patchset, it is very
likely that the patch will end up being neglected.
• Do not take negative feedback personally. Neutron is a large project with lots of contributors with
different opinions on how things should be done. Many come from widely varying cultures and
languages so the English, text-only feedback can unintentionally come across as harsh. Getting a
-1 means reviewers are trying to help get the patch into a state that can be merged, it doesn't just mean they are trying to block it. It's very rare to get a patch merged on the first iteration that makes everyone happy.
Code Review
IRC
• IRC is a place where you can speak with many of the Neutron developers and core reviewers. For more information you should visit the OpenStack IRC wiki. The Neutron IRC channel is #openstack-neutron.
• There are weekly IRC meetings related to many different projects/teams in Neutron. A full list
of these meetings and their date/time can be found in OpenStack IRC Meetings. It is important
to attend these meetings in the area of your contribution and possibly mention your work and
patches.
• When you have questions regarding an idea or a specific patch of yours, it can be helpful to find a relevant person on IRC and speak with them about it. You can find a user's IRC nickname in their launchpad account.
• Being available on IRC is useful, since reviewers can contact you directly to quickly clarify a
review issue. This speeds up the feedback loop.
• Each area of Neutron or sub-project of Neutron has a specific lieutenant in charge of it. You can most likely find these lieutenants on IRC; it is advised, however, to try and send public questions to the channel rather than to a specific person if possible. (This increases the chances of getting faster answers to your questions.) A list of the areas and lieutenants' nicknames can be found at Core Reviewers.
Commit messages
Document common pitfalls as well as good practices done when writing commit messages. For more
details see Git commit message best practices. This is the TL;DR version with the important points for
committing to Neutron.
• One liners are bad, unless the change is trivial.
• Use UpgradeImpact when the change could cause issues during the upgrade from one version
to the next.
• APIImpact should be used when the api-ref in neutron-lib must be updated to reflect the change,
and only as a last resort. Rather, the ideal workflow includes submitting a corresponding neutron-
lib api-ref change along with the implementation, thereby removing the need to use APIImpact.
• Make sure the commit message doesn't have any spelling/grammar errors. This is the first thing reviewers read, and such errors can be distracting enough to invite -1s.
• Describe what the change accomplishes. If it's a bug fix, explain how this code will fix the problem. If it's part of a feature implementation, explain what component of the feature the patch implements. Do not just describe the bug; that's what launchpad is for.
• Use the Closes-Bug: #BUG-NUMBER tag if the patch addresses a bug. Submitting a bugfix without a launchpad bug reference is unacceptable, even if it's trivial. Launchpad is how bugs are tracked, so fixes without a launchpad bug are a nightmare when users report the bug from an older version and the Neutron team can't tell if/why/how it's been fixed. Launchpad is also how backports are identified and tracked, so patches without a bug report cannot be picked to stable branches.
• Use the Implements: blueprint NAME-OF-BLUEPRINT or Partially-Implements: blueprint
NAME-OF-BLUEPRINT for features so reviewers can determine if the code matches the spec
that was agreed upon. This also updates the blueprint on launchpad so it's easy to see all patches
that are related to a feature.
• If its not immediately obvious, explain what the previous code was doing that was incorrect. (e.g.
code assumed it would never get None from a function call)
• Be specific in your commit message about what the patch does and why it does this. For example,
Fixes incorrect logic in security groups is not helpful because the code diff already shows that you
are modifying security groups. The message should be specific enough that a reviewer looking at
the code can tell if the patch does what the commit says in the most appropriate manner. If the
reviewer has to guess why you did something, lots of your time will be wasted explaining why
certain changes were made.
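An illustrative commit message putting these points together; the function name and bug number are made up:
    Fix DHCP agent crash when a network has no subnets

    The agent assumed get_subnets() never returns an empty list and
    indexed into it directly, raising IndexError during a resync. Guard
    the lookup and skip port setup for subnet-less networks.

    Closes-Bug: #1234567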
Document common pitfalls as well as good practices done when dealing with OpenStack CI.
• When you submit a patch, consider checking its status in the queue. If you see job failures, you might as well save time and try to figure out in advance why it is failing.
• Excessive use of recheck to get tests to pass is discouraged. Please examine the logs for the failing
test(s) and make sure your change has not tickled anything that might be causing a new failure or
race condition. Getting your change in could make it even harder to debug what is actually broken
later on.
This page describes how to set up a working Python development environment that can be used in developing Neutron on Ubuntu, Fedora or Mac OS X. These instructions assume you're already familiar with Git and Gerrit, which is a code repository mirror and code review toolset; however, if you aren't, please see this Git tutorial for an introduction to using Git and this guide for a tutorial on using Gerrit and Git for code contribution to OpenStack projects.
Following these instructions will allow you to run the Neutron unit tests. If you want to be able to run
Neutron in a full OpenStack environment, you can use the excellent DevStack project to do so. There is
a wiki page that describes setting up Neutron using DevStack.
In the .gitignore files, add patterns to exclude files created by tools integrated into the project's recommended workflow, such as test frameworks, rendered documentation and package builds.
Don't add patterns to exclude files created by personal tooling preferences, for example editors, IDEs or the operating system. These should instead be maintained outside the repository, for example in a ~/.gitignore file registered via git's core.excludesFile setting:
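git config --global core.excludesFile '~/.gitignore'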
Testing Neutron
The vagrant directory contains a set of vagrant configurations which will help you deploy Neutron with
ovn driver for testing or development purposes.
We provide a sparse multinode architecture with clear separation between services. In the future we will
include all-in-one and multi-gateway architectures.
Vagrant prerequisites
Those are the prerequisites for using the vagrant file definitions
1. Install VirtualBox and Vagrant. Alternatively you can use parallels or libvirt vagrant plugin.
2. Install plug-ins for Vagrant:
3. On Linux hosts, you can enable instances to access external networks such as the Internet by en-
abling IP forwarding and configuring SNAT from the IP address range of the provider network
interface (typically vboxnet1) on the host to the external network interface on the host. For exam-
ple, if the eth0 network interface on the host provides external network connectivity:
# sysctl -w net.ipv4.ip_forward=1
# sysctl -p
# iptables -t nat -A POSTROUTING -s 10.10.0.0/16 -o eth0 -j MASQUERADE
Sparse architecture
The Vagrant scripts deploy OpenStack with Open Virtual Network (OVN) using four nodes (five if you
use the optional ovn-vtep node) to implement a minimal variant of the reference architecture:
1. ovn-db: Database node containing the OVN northbound (NB) and southbound (SB) databases via
the Open vSwitch (OVS) database and ovn-northd services.
2. ovn-controller: Controller node containing the Identity service, Image service, control plane por-
tion of the Compute service, control plane portion of the Networking service including the ovn
ML2 driver, and the dashboard. In addition, the controller node is configured as an NFS server to
support instance live migration between the two compute nodes.
3. ovn-compute1 and ovn-compute2: Two compute nodes containing the Compute hypervisor,
ovn-controller service for OVN, metadata agents for the Networking service, and OVS
services. In addition, the compute nodes are configured as NFS clients to support instance live
migration between them.
4. ovn-vtep: Optional. A node to run the HW VTEP simulator. This node is not started by default
but can be started by running vagrant up ovn-vtep after doing a normal vagrant up.
During deployment, Vagrant creates three VirtualBox networks:
1. Vagrant management network for deployment and VM access to external networks such as the
Internet. Becomes the VM eth0 network interface.
2. OpenStack management network for the OpenStack control plane, OVN control plane, and OVN
overlay networks. Becomes the VM eth1 network interface.
3. OVN provider network that connects OpenStack instances to external networks such as the Inter-
net. Becomes the VM eth2 network interface.
Requirements
The default configuration requires approximately 12 GB of RAM and supports launching approximately
four OpenStack instances using the m1.tiny flavor. You can change the amount of resources for each
VM in the instances.yml file.
Deployment
• For evaluating large MTUs, adjust the mtu option. You must also change the MTU on the
equivalent vboxnet interfaces on the host to the same value after Vagrant creates them.
For example:
$ vagrant up
5. After the process completes, you can use the vagrant status command to determine the VM
status:
$ vagrant status
Current machine states:
Note: If you prefer to use the VM console, the password for the root account is vagrant.
Since ovn-controller is set as the primary in the Vagrantfile, the command vagrant ssh
(without specifying the name) will connect ssh to that virtual machine.
7. Access OpenStack services via command-line tools on the ovn-controller node or via the
dashboard from the host by pointing a web browser at the IP address of the ovn-controller
node.
Note: By default, OpenStack includes two accounts: admin and demo, both using password
password.
8. After completing your tasks, you can destroy the VMs:
$ vagrant destroy
Introduction
Neutron has a pluggable architecture, with a number of extension points. This documentation covers
aspects relevant to contributing new Neutron v2 core (aka monolithic) plugins, ML2 mechanism drivers,
and L3 service plugins. This document will initially cover a number of process-oriented aspects of the
contribution process, and proceed to provide a how-to guide that shows how to go from 0 LOCs to
successfully contributing new extensions to Neutron. In the remainder of this guide, we will try to use
practical examples as much as we can so that people have working solutions they can start from.
This guide is for a developer who wants to have a degree of visibility within the OpenStack Networking
project. If you are a developer who wants to provide a Neutron-based solution without interacting with
the Neutron community, you are free to do so, but you can stop reading now, as this guide is not for you.
Plugins and drivers for non-reference implementations are known as third-party code. This includes code
for supporting vendor products, as well as code for supporting open-source networking implementations.
Before the Kilo release these plugins and drivers were included in the Neutron tree. During the Kilo
cycle the third-party plugins and drivers underwent the first phase of a process called decomposition.
During this phase, each plugin and driver moved the bulk of its logic to a separate git repository, while
leaving a thin shim in the neutron tree together with the DB models and migrations (and perhaps some
config examples).
During the Liberty cycle the decomposition concept was taken to its conclusion by allowing third-party
code to exist entirely out of tree. Further extension mechanisms have been provided to better support
external plugins and drivers that alter the API and/or the data model.
In the Mitaka cycle we will require all third-party code to be moved out of the neutron tree completely.
Outside the tree can be anything that is publicly available: it may be a repo on opendev.org for instance,
a tarball, a pypi package, etc. A plugin/drivers maintainer team self-governs in order to promote sharing,
reuse, innovation, and release of the out-of-tree deliverable. It should not be required for any member
of the core team to be involved with this process, although core members of the Neutron team can
participate in whichever capacity is deemed necessary to facilitate out-of-tree development.
This guide is aimed at you as the maintainer of code that integrates with Neutron but resides in a separate
repository.
Contribution Process
If you want to extend OpenStack Networking with your technology, and you want to do it within the vis-
ibility of the OpenStack project, follow the guidelines and examples below. We'll describe best practices
for:
• Design and Development;
• Testing and Continuous Integration;
• Defect Management;
• Backport Management for plugin specific code;
• DevStack Integration;
• Documentation;
Once you have everything in place you may want to add your project to the list of Neutron sub-projects.
See Adding or removing projects to the Stadium for details.
Assuming you have a working repository, any development to your own repo does not need any
blueprint, specification or bugs against Neutron. However, if your project is a part of the Neutron
Stadium effort, you are expected to participate in the principles of the Four Opens, meaning your de-
sign should be done in the open. Thus, it is encouraged to file documentation for changes in your own
repository.
If your code is hosted on opendev.org then the gerrit review system is automatically provided. Contrib-
utors should follow the review guidelines similar to those of Neutron. However, you as the maintainer
have the flexibility to choose who can approve/merge changes in your own repo.
It is recommended (but not required, see policies) that you set up a third-party CI system. This will
provide a vehicle for checking the third-party code against Neutron changes. See Testing and Continuous
Integration below for more detailed recommendations.
Design documents can still be supplied in form of Restructured Text (RST) documents, within the same
third-party library repo. If changes to the common Neutron code are required, an RFE may need to be
filed. However, every case is different and you are invited to seek guidance from Neutron core reviewers
about what steps to follow.
The following strategies are recommendations only, since third-party CI testing is not an enforced re-
quirement. However, these strategies are employed by the majority of the plugin/driver contributors that
actively participate in the Neutron development community, since they have learned from experience
how quickly their code can fall out of sync with the rapidly changing Neutron core code base.
• You should run unit tests in your own external library (e.g. on opendev.org where Jenkins setup is
for free).
• Your third-party CI should validate third-party integration with Neutron via functional testing. The
third-party CI is a communication mechanism. The objective of this mechanism is as follows:
– it communicates to you when someone has contributed a change that potentially breaks your
code. It is then up to you maintaining the affected plugin/driver to determine whether the
failure is transient or real, and resolve the problem if it is.
– it communicates to a patch author that they may be breaking a plugin/driver. If they have the
time/energy/relationship with the maintainer of the plugin/driver in question, then they can
(at their discretion) work to resolve the breakage.
– it communicates to the community at large whether a given plugin/driver is being actively
maintained.
– A maintainer that is perceived to be responsive to failures in their third-party CI jobs is likely
to generate community goodwill.
It is worth noting that if the plugin/driver repository is hosted on opendev.org, due to current
openstack-infra limitations, it is not possible to have third-party CI systems participating in the
gate pipeline for the repo. This means that the only validation provided during the merge process
to the repo is through unit tests. Post-merge hooks can still be exploited to provide third-party
CI feedback, and alert you of potential issues. As mentioned above, third-party CI systems will
continue to validate Neutron core commits. This will allow them to detect when incompatible
changes occur, whether they are in Neutron or in the third-party repo.
Defect Management
Bugs affecting third-party code should not be filed in the Neutron project on launchpad. Bug tracking can be done in any system you choose, but by creating a third-party project in launchpad, bugs that affect both Neutron and your code can be more easily tracked using launchpad's "also affects project" feature.
Security Issues
Here are some answers to how to handle security issues in your repo, taken from this mailing list mes-
sage:
• How should your security issues be managed?
The OpenStack Vulnerability Management Team (VMT) follows a documented process which can ba-
sically be reused by any project-team when needed.
• Should the OpenStack security team be involved?
The OpenStack VMT directly oversees vulnerability reporting and disclosure for a subset of OpenStack
source code repositories. However, they are still quite happy to answer any questions you might have
about vulnerability management for your own projects even if they're not part of that set. Feel free to
reach out to the VMT in public or in private.
Also, the VMT is an autonomous subgroup of the much larger OpenStack Security project-team. They're
a knowledgeable bunch and quite responsive if you want to get their opinions or help with security-
related issues (vulnerabilities or otherwise).
• Does a CVE need to be filed?
It can vary widely. If a commercial distribution such as Red Hat is redistributing a vulnerable version
of your software, then they may assign one anyway even if you don't request one yourself. Or the
reporter may request one; the reporter may even be affiliated with an organization who has already
assigned/obtained a CVE before they initiate contact with you.
• Do the maintainers need to publish OSSN or equivalent documents?
OpenStack Security Advisories (OSSA) are official publications of the OpenStack VMT and only
cover VMT-supported software. OpenStack Security Notes (OSSN) are published by editors within
the OpenStack Security project-team on more general security topics and may even cover issues in
non-OpenStack software commonly used in conjunction with OpenStack, so it's at their discretion as to
whether they would be able to accommodate a particular issue with an OSSN.
However, these are all fairly arbitrary labels, and what really matters in the grand scheme of things is
that vulnerabilities are handled seriously, fixed with due urgency and care, and announced widely not
just on relevant OpenStack mailing lists but also preferably somewhere with broader distribution like
the Open Source Security mailing list. The goal is to get information on your vulnerabilities, mitigating
measures and fixes into the hands of the people using your software in a timely manner.
• Anything else to consider here?
The OpenStack VMT is in the process of trying to reinvent itself so that it can better scale within the
context of the Big Tent. This includes making sure the policy/process documentation is more consum-
able and reusable even by project-teams working on software outside the scope of our charter. It's a work
in progress, and any input is welcome on how we can make this function well for everyone.
This section applies only to third-party maintainers who had code in the Neutron tree during the Kilo
and earlier releases. It will be obsolete once the Kilo release is no longer supported.
If a change made to out-of-tree third-party code needs to be back-ported to in-tree code in a stable
branch, you may submit a review without a corresponding master branch change. The change will be
evaluated by core reviewers for stable branches to ensure that the backport is justified and that it does
not affect Neutron core code stability.
When developing and testing a new or existing plugin or driver, the aid provided by DevStack is in-
credibly valuable: DevStack can help get all the software bits installed, and configured correctly, and
more importantly in a predictable way. For DevStack integration there are a few options available, and
they may or may not make sense depending on whether you are contributing a new or existing plugin or
driver.
If you are contributing a new plugin, the approach to choose should be based on Extras.d Hooks externally hosted plugins. With the extras.d hooks, the DevStack integration is co-located with the third-party
integration library, and it leads to the greatest level of flexibility when dealing with DevStack based
dev/test deployments.
One final consideration is worth making for third-party CI setups: if Devstack Gate is used, it does
provide hook functions that can be executed at specific times of the devstack-gate-wrap script run. For
example, the Neutron Functional job uses them. For more details see devstack-vm-gate-wrap.sh.
Documentation
For a layout of how the documentation directory is structured, see the effective neutron guide.
The how-to below assumes that the third-party library will be hosted on opendev.org. This lets you
tap in the entire OpenStack CI infrastructure and can be a great place to start from to contribute your
new or existing driver/plugin. The list of steps below is a summarized version of what you can find at http://docs.openstack.org/infra/manual/creators.html. They are meant to be the bare minimum you have to complete in order to get off the ground.
• Create a public repository: this can be a personal opendev.org repo or any publicly available git
repo, e.g. https://github.com/john-doe/foo.git. This would be a temporary buffer
to be used to feed the one on opendev.org.
• Initialize the repository: if you are starting afresh, you may optionally want to use cookiecut-
ter to get a skeleton project. You can learn how to use cookiecutter on https://opendev.org/
openstack-dev/cookiecutter. If you want to build the repository from an existing Neutron module,
you may want to skip this step now, build the history first (next step), and come back here to
initialize the remainder of the repository with other files being generated by the cookiecutter (like
tox.ini, setup.cfg, setup.py, etc.).
• Create a repository on opendev.org. For this you need the help of the OpenStack infra team.
It is worth noting that you only get one shot at creating the repository on opendev.org. This is
the time you get to choose whether you want to start from a clean slate, or you want to import
the repo created during the previous step. In the latter case, you can do so by specifying the
upstream section for your project in project-config/gerrit/project.yaml. Steps are documented on
the Repository Creators Guide.
• Ask for a Launchpad user to be assigned to the core team created. Steps are documented in this
section.
• Fix, fix, fix: at this point you have an external base to work on. You can develop against the new
opendev.org project, the same way you work with any other OpenStack project: you have pep8,
docs, and python CI jobs that validate your patches when posted to Gerrit. For instance, one thing
you would need to do is to define an entry point for your plugin or driver in your own setup.cfg
similarly as to how it is done in the setup.cfg for ODL.
• Define an entry point for your plugin or driver in setup.cfg
• Create third-party CI account: if you do not already have one, follow instructions for third-party
CI to get one.
Internationalization support
oslo.i18n
• Each subproject repository should have its own oslo.i18n integration wrapper module
${MODULE_NAME}/_i18n.py. The detail is found at https://docs.openstack.org/oslo.i18n/
latest/user/usage.html.
Warning: Do not use _() in the builtins namespace, which is registered by gettext.install() in neutron/__init__.py. It is now deprecated as described in the oslo.i18n documentation.
You need to create or edit the following files to start translation support:
• setup.cfg
• babel.cfg
We have a good example for an oslo project at https://review.opendev.org/#/c/98248/.
Add the following to setup.cfg:
[extract_messages]
keywords = _ gettext ngettext l_ lazy_gettext
mapping_file = babel.cfg
output_file = ${MODULE_NAME}/locale/${MODULE_NAME}.pot
[compile_catalog]
directory = ${MODULE_NAME}/locale
domain = ${MODULE_NAME}
[update_catalog]
domain = ${MODULE_NAME}
output_dir = ${MODULE_NAME}/locale
input_file = ${MODULE_NAME}/locale/${MODULE_NAME}.pot
Add the following to babel.cfg:
[python: **.py]
Enable Translation
To update and import translations, you need to make a change in project-config. A good example is
found at https://review.opendev.org/#/c/224222/. After doing this, the necessary jobs will be run and
push/pull a message catalog to/from the translation infrastructure.
Configuration Files
The data_files in the [files] section of setup.cfg of Neutron shall not contain any third-
party references. These shall be located in the same section of the third-party repo's own setup.cfg
file.
• Note: Care should be taken when naming sections in configuration files. When the Neutron
service or an agent starts, oslo.config loads sections from all specified config files. This means
that if a section [foo] exists in multiple config files, duplicate settings will collide. It is therefore
recommended to prefix section names with a third-party string, e.g. [vendor_foo].
Since Mitaka, configuration files are not maintained in the git repository but should be generated as
follows:
tox -e genconfig
If a tox environment is unavailable, then you can run the following script instead to generate the config-
uration files:
./tools/generate_config_file_samples.sh
It is advised that subprojects do not keep their configuration files in their respective trees and instead
generate them using a similar approach as Neutron does.
ToDo: Inclusion in OpenStack documentation? Is there a recommended way to have third-party con-
fig options listed in the configuration guide in docs.openstack.org?
A third-party repo may contain database models for its own tables. Although these tables are in the
Neutron database, they are independently managed entirely within the third-party code. Third-party
code shall never modify neutron core tables in any way.
Each repo has its own expand and contract alembic migration branches. A third-party repo's alembic migration branches may operate only on tables that are owned by the repo.
• Note: Care should be taken when adding new tables. To prevent collision of table names it is
required to prefix them with a vendor/plugin string.
• Note: A third-party maintainer may opt to use a separate database for their tables. This may
complicate cases where there are foreign key constraints across schemas for DBMS that do not
support this well. Third-party maintainer discretion advised.
The database tables owned by a third-party repo can have references to fields in neutron core tables.
However, the alembic branch for a plugin/driver repo shall never update any part of a table that it does
not own.
Note: What happens when a referenced item changes?
• Q: If a driver's table has a reference (for example a foreign key) to a neutron core table, and the
referenced item is changed in neutron, what should you do?
• A: Fortunately, this should be an extremely rare occurrence. Neutron core reviewers will not allow
such a change unless there is a very carefully thought-out design decision behind it. That design
will include how to address any third-party code affected. (This is another good reason why you
should stay actively involved with the Neutron developer community.)
The neutron-db-manage alembic wrapper script for neutron detects alembic branches for installed
third-party repos, and the upgrade command automatically applies to all of them. A third-party repo
must register its alembic migrations at installation time. This is done by providing an entrypoint in
setup.cfg as follows:
For a third-party repo named networking-foo, add the alembic_migrations directory as an entry-
point in the neutron.db.alembic_migrations group:
[entry_points]
neutron.db.alembic_migrations =
networking-foo = networking_foo.db.migration:alembic_migrations
DB Model/Migration Testing
Here is a template functional test third-party maintainers can use to develop tests for model-vs-migration
sync in their repos. It is recommended that each third-party CI sets up such a test, and runs it regularly
against Neutron master.
Entry Points
The Python setuptools installs all entry points for packages in one global namespace for an environment.
Thus each third-party repo can define its package's own [entry_points] in its own setup.cfg
file.
For example, for the networking-foo repo:
[entry_points]
console_scripts =
neutron-foo-agent = networking_foo.cmd.eventlet.agents.foo:main
neutron.core_plugins =
foo_monolithic = networking_foo.plugins.monolithic.plugin:FooPluginV2
neutron.service_plugins =
foo_l3 = networking_foo.services.l3_router.l3_foo:FooL3ServicePlugin
neutron.ml2.type_drivers =
foo_type = networking_foo.plugins.ml2.drivers.foo:FooType
neutron.ml2.mechanism_drivers =
foo_ml2 = networking_foo.plugins.ml2.drivers.foo:FooDriver
neutron.ml2.extension_drivers =
foo_ext = networking_foo.plugins.ml2.drivers.foo:FooExtensionDriver
• Note: It is advisable to include foo in the names of these entry points to avoid conflicts with other
third-party packages that may get installed in the same environment.
API Extensions
Service Providers
If your project uses service provider(s) the same way VPNAAS does, you specify your service provider
in your project_name.conf file like so:
[service_providers]
# Must be in form:
# service_provider=<service_type>:<name>:<driver>[:default][,...]
In order for Neutron to load this correctly, make sure you do the following in your code:
This is typically required when you instantiate your service plugin class.
Interface Drivers
Interface (VIF) drivers for the reference implementations are defined in neutron/agent/linux/
interface.py. Third-party interface drivers shall be defined in a similar location within their own
repo.
The entry point for the interface driver is a Neutron config option. It is up to the installer to configure
this item in the [default] section. For example:
[default]
interface_driver = networking_foo.agent.linux.interface.FooInterfaceDriver
Rootwrap Filters
If a third-party repo needs a rootwrap filter for a command that is not used by Neutron core, then the
filter shall be defined in the third-party repo.
For example, to add rootwrap filters for commands in the repo networking-foo:
• In the repo, create the file: etc/neutron/rootwrap.d/foo.filters
• In the repos setup.cfg add the filters to data_files:
[files]
data_files =
etc/neutron/rootwrap.d =
etc/neutron/rootwrap.d/foo.filters
Extending python-neutronclient
The maintainer of a third-party component may wish to add extensions to the Neutron CLI client. Thanks
to https://review.opendev.org/148318 this can now be accomplished. See Client Command Extensions.
The Neutron main tree serves as a library for multiple subprojects that rely on different modules from the neutron.* namespace to accommodate their needs. Specifically, advanced service repositories and open source or vendor plugin/driver repositories do so.
Neutron modules differ in their API stability a lot, and there is no part of it that is explicitly marked to
be consumed by other projects.
That said, there are modules that other projects should definitely avoid relying on.
Breakages
Neutron API is not very stable, and there are cases when a desired change in neutron tree is expected
to trigger breakage for one or more external repositories under the neutron tent. Below you can find a
list of known incompatible changes that could or are known to trigger those breakages. The changes are
listed in reverse chronological order (newer at the top).
• change: QoS plugin refactor
– commit: I863f063a0cfbb464cedd00bddc15dd853cbb6389
– solution: implement the new abstract methods in neutron.extensions.qos.QoSPluginBase.
– severity: Low (some out-of-tree plugins might be affected).
• change: Consume ConfigurableMiddleware from oslo_middleware.
– commit: If7360608f94625b7d0972267b763f3e7d7624fee
– solution: switch to oslo_middleware.base.ConfigurableMiddleware; stop using neu-
tron.wsgi.Middleware and neutron.wsgi.Debug.
– severity: Low (some out-of-tree plugins might be affected).
• change: Consume sslutils and wsgi modules from oslo.service.
– commit: Ibfdf07e665fcfcd093a0e31274e1a6116706aec2
– solution: switch to using oslo_service.wsgi.Router; stop using neutron.wsgi.Router.
– severity: Low (some out-of-tree plugins might be affected).
The client command extension adds support for extending the neutron client while considering ease of
creation.
The full document can be found in the python-neutronclient repository: https://docs.openstack.org/
python-neutronclient/latest/contributor/client_command_extensions.html
Introduction
The migrations in the alembic/versions directory contain the changes needed to migrate from older Neutron re-
leases to newer versions. A migration occurs by executing a script that details the changes needed to
upgrade the database. The migration scripts are ordered so that multiple scripts can run sequentially to
update the database.
The scripts are executed by Neutron's migration wrapper, neutron-db-manage, which uses the Alem-
bic library to manage the migration. Pass the --help option to the wrapper for usage information.
The wrapper takes some options followed by some commands:
The wrapper needs to be provided with the database connection string, which is usually provided in the
neutron.conf configuration file in an installation. The wrapper automatically reads from /etc/
neutron/neutron.conf if it is present. If the configuration is in a different location:
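For example (using the standard oslo.config option to point at the file):
neutron-db-manage --config-file /path/to/neutron.conf <commands>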
The branches, current, and history commands all accept a --verbose option, which, when
passed, will instruct neutron-db-manage to display more verbose output for the specified com-
mand:
For some commands the wrapper needs to know the entrypoint of the core plugin for the installation.
This can be read from the configuration file(s) or specified using the --core_plugin option:
When giving examples below of using the wrapper the options will not be shown. It is assumed you will
use the options that you need for your environment.
For new deployments you will start with an empty database. You then upgrade to the latest database
version via:
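neutron-db-manage upgrade heads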
For existing deployments the database will already be at some version. To check the current database
version:
neutron-db-manage current
After installing a new version of Neutron server, upgrading the database is the same command:
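neutron-db-manage upgrade heads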
Migration Branches
Various Sub-Projects can be installed with Neutron. Each sub-project registers its own alembic branch
which is responsible for migrating the schemas of the tables owned by the sub-project.
The neutron-db-manage script detects which sub-projects have been installed by enumerating the
neutron.db.alembic_migrations entrypoints. For more details see the Entry Points section of
Contributing extensions to Neutron.
The neutron-db-manage script runs the given alembic command against all installed sub-projects. (An
exception is the revision command, which is discussed in the Developers section below.)
2. Offline/Online Migrations
Developers
A database migration script is required when you submit a change to Neutron or a sub-project that alters
the database model definition. The migration script is a special python file that includes code to upgrade
the database to match the changes in the model definition. Alembic will execute these scripts in order
to provide a linear migration path between revisions. The neutron-db-manage command can be used to
generate migration scripts for you to complete. The operations in the template are those supported by
the Alembic migration library.
When, as a developer, you want to work with the Neutron DB schema and alembic migrations only, it
can be rather tedious to rely on devstack just to get an up-to-date neutron-db-manage installed. This
section describes how to work on the schema and migration scripts with just the unit test virtualenv
and mysql. You can also operate on a separate test database so you don't mess up the installed Neutron
database.
This only needs to be done once since it is a system install. If you have run devstack on your system
before, then the mysql service is already installed and you can skip this step.
Mysql must be configured as installed by devstack, and the following script accomplishes this without
actually running devstack:
Run this from the root of the neutron repo. It assumes an up-to-date clone of the devstack repo is in
../devstack.
Note that you must know the mysql root password. It is derived from (in order of precedence):
• $MYSQL_PASSWORD in your environment
• $MYSQL_PASSWORD in ../devstack/local.conf
• $MYSQL_PASSWORD in ../devstack/localrc
• default of secretmysql from tools/configure_for_func_testing.sh
Rather than using the neutron database when working on schema and alembic migration script changes,
we can work on a test database. In the examples below, we use a database named testdb.
To create the database:
You will often need to clear it to re-run operations from a blank database:
To work on the test database instead of the neutron database, point to it with the
--database-connection option:
You may find it convenient to set up an alias (in your .bashrc) for this:
From the root of the neutron (or sub-project) repo directory, run:
Now you can use the test-db-manage alias in place of neutron-db-manage in the script auto-
generation instructions below.
When you are done, exit the virtualenv:
deactivate
Script Auto-generation
This section describes how to auto-generate an alembic migration script for a model change. You may
either use the system installed devstack environment, or a virtualenv + testdb environment as described
in Running neutron-db-manage without devstack.
Stop the neutron service. Work from the base directory of the neutron (or sub-project) repo. Check
out the master branch and do git pull to ensure it is fully up to date. Check out your development
branch and rebase to master.
NOTE: Make sure you have not updated the CONTRACT_HEAD or EXPAND_HEAD yet at this point.
Start with an empty database and upgrade to heads:
The database schema is now created without your model changes. The alembic revision
--autogenerate command will look for differences between the schema generated by the upgrade
command and the schema defined by the models, including your model updates:
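A typical invocation (options omitted, per the note above):
neutron-db-manage revision -m "description of revision" --autogenerate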
This generates a prepopulated template with the changes needed to match the database state with the
models. You should inspect the autogenerated template to ensure that the proper models have been
altered. When running the above command you will probably get the following error message:
Multiple heads are present; please specify the head revision on which the
new revision should be based, or perform a merge.
This is alembic telling you that it does not know which branch (contract or expand) to generate the
revision for. You must decide, based on whether you are doing contracting or expanding changes to the
schema, and provide either the --contract or --expand option. If you have both types of changes,
you must run the command twice, once with each option, and then manually edit the generated revision
scripts to separate the migration operations.
In rare circumstances, you may want to start with an empty migration template and manually author the
changes necessary for an upgrade. You can create a blank file for a branch via:
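neutron-db-manage revision -m "description of revision" --expand
neutron-db-manage revision -m "description of revision" --contract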
NOTE: If you use the above command, you should check that the migration is created in a directory that is named after the current release. If not, please raise the issue with the development team (IRC, mailing list, launchpad bug).
NOTE: The description of revision text should be a simple English sentence. The first 30 characters of
the description will be used in the file name for the script, with underscores substituted for spaces. If the
truncation occurs at an awkward point in the description, you can modify the script file name manually
before committing.
The timeline on each alembic branch should remain linear and not interleave with other branches, so
that there is a clear path when upgrading. To verify that alembic branches maintain linear timelines, you
can run this command:
neutron-db-manage check_migration
If this command reports an error, you can troubleshoot by showing the migration timelines using the
history command:
neutron-db-manage history
Under the obsolete branchless design, a migration script indicates a specific version of the schema and includes directives that apply all necessary changes to the database at once. If we look for example at the script 2d2a8a565438_hierarchical_binding.py, we will see:
# .../alembic_migrations/versions/2d2a8a565438_hierarchical_binding.py
def upgrade():
op.create_table(
'ml2_port_binding_levels',
sa.Column('port_id', sa.String(length=36), nullable=False),
sa.Column('host', sa.String(length=255), nullable=False),
# ... more columns ...
)
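    # ... the original script continues with op.drop_constraint(),
    # op.drop_column() and data-migration directives ...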
The above script contains directives that fall under both the expand and contract categories, as well as some data migrations. The op.create_table directive is an expand; it may be run safely while the old version of the application still runs, as the old code simply doesn't look for this table. The op.drop_constraint and op.drop_column directives are contract directives (the drop column more so than the drop constraint); running at least the op.drop_column directives means that the old version of the application will fail, as it will attempt to access these columns which no longer exist.
The data migrations in this script are adding new rows to the newly added
ml2_port_binding_levels table.
Under the new migration script directory structure, the above script would be stated as two scripts; an
expand and a contract script:
# expansion operations
# .../alembic_migrations/versions/liberty/expand/2bde560fc638_hierarchical_binding.py
def upgrade():
op.create_table(
'ml2_port_binding_levels',
sa.Column('port_id', sa.String(length=36), nullable=False),
sa.Column('host', sa.String(length=255), nullable=False),
# ... more columns ...
)
# contraction operations
# .../alembic_migrations/versions/liberty/contract/4405aedc050e_hierarchical_binding.py
def upgrade():
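    # ... the contract directives (op.drop_constraint(), op.drop_column())
    # and the data migrations go here ...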
The two scripts would be present in different subdirectories and also part of entirely separate versioning
streams. The expand operations are in the expand script, and the contract operations are in the contract
script.
For the time being, data migration rules also belong to the contract branch. The expectation is that eventually live data migrations will move into middleware that is aware of the different database schema elements to converge on, but Neutron is not there yet.
Scripts that contain only expansion or contraction rules do not require a split into two parts.
If a contraction script depends on a script from expansion stream, the following directive should be
added in the contraction script:
depends_on = ('<expansion-revision>',)
In some cases, we have to have expand operations in contract migrations. For example, the table networksegments was renamed in a contract migration, so all operations with this table are required to be in the contract branch as well. For such cases, we use contract_creation_exceptions, which should be implemented as part of such migrations. This is needed to get functional tests to pass.
Usage:
def contract_creation_exceptions():
"""Docstring should explain why we allow such exception for contract
branch.
"""
return {
sqlalchemy_obj_type: ['name']
# For example: sa.Column: ['subnets.segment_id']
}
After the first step is done, you can stop neutron-server, apply remaining non-expansive migration rules,
if any:
neutron-db-manage has_offline_migrations
If you are not interested in applying safe migration rules while the service is running, you can still upgrade the database the old way, by stopping the service, and then applying all available rules:
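neutron-db-manage upgrade heads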
It will apply all the rules from both the expand and the contract branches, in proper order.
When a named release (liberty, mitaka, etc.) is done for neutron or a sub-project, the alembic revision scripts at the head of each branch for that release must be tagged. This is referred to as a milestone revision tag.
For example, here is a patch that tags the liberty milestone revisions for the neutron-fwaas sub-project.
Note that each branch (expand and contract) is tagged.
Tagging milestones allows neutron-db-manage to upgrade the schema to a milestone release, e.g.:
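neutron-db-manage upgrade liberty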
Introduction
CLI tool neutron-status upgrade check contains checks which perform a release-specific
readiness check before restarting services with new code. For more details see neutron-status command-
line client page.
The Neutron upgrade checks script allows stadium and 3rd party projects to add their own checks. The
neutron-status script detects which sub-projects have been installed by enumerating the
neutron.status.upgrade.checks entrypoints. For more details see the Entry Points section
of Contributing extensions to Neutron. Checks can be run in random order and should be independent
from each other.
The recommended entry point name is a repository name: For example, neutron-fwaas for FWaaS and
networking-sfc for SFC:
neutron.status.upgrade.checks =
neutron-fwaas = neutron_fwaas.upgrade.checks:Checks
14.4.9 Testing
Testing Neutron
2) Putting as much thought into your testing strategy as you do to the rest of your code. Use different
layers of testing as appropriate to provide high quality coverage. Are you touching an agent? Test
it against an actual system! Are you adding a new API? Test it for race conditions against a real
database! Are you adding a new cross-cutting feature? Test that it does what it's supposed to do
when run on a real cloud!
Do you feel the need to verify your change manually? If so, the next few sections attempt to guide you
through Neutron's different test infrastructures to help you make intelligent decisions and best exploit
Neutrons test offerings.
Definitions
We will talk about three classes of tests: unit, functional and integration. Each respective category typically targets a larger scope of code. Other than that broad categorization, here are a few more characteristics:
• Unit tests - Should be able to run on your laptop, directly following a git clone of the project. The
underlying system must not be mutated; mocks can be used to achieve this. A unit test typically
targets a function or class.
• Functional tests - Run against a pre-configured environment
(tools/configure_for_func_testing.sh). Typically test a component such as an agent using
no mocks.
• Integration tests - Run against a running cloud, often target the API level, but also scenarios,
user stories or grenade. You may find such tests under tests/fullstack, and in the Tempest, Rally,
Grenade and neutron-tempest-plugin (neutron_tempest_plugin/api|scenario) projects.
Tests in the Neutron tree are typically organized by the testing infrastructure used, and not by the scope
of the test. For example, many tests under the unit directory invoke an API call and assert that the
expected output was received. The scope of such a test is the entire Neutron server stack, and clearly not
a specific function such as in a typical unit test.
Testing Frameworks
The different frameworks are listed below. The intent is to list the capabilities of each testing framework
as to help the reader understand when should each tool be used. Remember that when adding code that
touches many areas of Neutron, each area should be tested with the appropriate framework. Overlap
between different test layers is often desirable and encouraged.
Unit Tests
Unit tests (neutron/tests/unit/) are meant to cover as much code as possible. They are designed to test the
various pieces of the Neutron tree to make sure any new changes don't break existing functionality. Unit
tests have no requirements nor make changes to the system they are running on. They use an in-memory
sqlite database to test DB interaction.
At the start of each test run:
• RPC listeners are mocked away.
• The fake Oslo messaging driver is used.
With that in mind, consider the following unit test, which exercises the DVR MAC address mixin against
the in-memory database:
def test_get_dvr_mac_address_list(self):
self._create_dvr_mac_entry('host_1', 'mac_1')
self._create_dvr_mac_entry('host_2', 'mac_2')
mac_list = self.mixin.get_dvr_mac_address_list(self.ctx)
self.assertEqual(2, len(mac_list))
It inserts two new host MAC addresses, invokes the method under test and asserts its output. The test has
many things going for it:
• It targets the method under test correctly, not taking on a larger scope than is necessary.
• It does not use mocks to assert that methods were called; it simply invokes the method and asserts
its output (in this case, that the list method returns two records).
This is allowed by the fact that the method was built to be testable: the method has clear input and
output with no side effects.
You can get oslo.db to generate a file-based sqlite database by setting
OS_TEST_DBAPI_ADMIN_CONNECTION to a file-based URL as described in this mailing
list post. This file will be created but (confusingly) won't be the actual file used for the database. To find
the actual file, set a break point in your test method and inspect self.engine.url.
$ OS_TEST_DBAPI_ADMIN_CONNECTION=sqlite:///sqlite.db .tox/py38/bin/python -m \
    testtools.run neutron.tests.unit...
...
(Pdb) self.engine.url
sqlite:////tmp/iwbgvhbshp.db
$ sqlite3 /tmp/iwbgvhbshp.db
Functional Tests
Functional tests (neutron/tests/functional/) are intended to validate actual system interaction. Mocks
should be used sparingly, if at all. Care should be taken to ensure that existing system resources are not
modified and that resources created in tests are properly cleaned up both on test success and failure.
Let's examine the benefits of the functional testing framework. Neutron offers a library called ip_lib that
wraps around the ip binary. One of its methods is called device_exists which accepts a device name and
a namespace and returns True if the device exists in the given namespace. It's easy to build a test that
targets the method directly, and such a test would be considered a unit test. However, what framework
should such a test use? A test using the unit tests framework could not mutate state on the system, and
so could not actually create a device and assert that it now exists. Such a test would look roughly like
this:
• It would mock execute, a method that executes shell commands against the system to return an IP
device named foo.
• It would then assert that when device_exists is called with foo, it returns True, but when called
with a different device name it returns False.
• It would most likely assert that execute was called using something like: ip link show foo.
The value of such a test is arguable. Remember that new tests are not free, they need to be maintained.
Code is often refactored, reimplemented and optimized.
• There are other ways to find out if a device exists (Such as by looking at /sys/class/net), and in
such a case the test would have to be updated.
• Methods are mocked using their name. When methods are renamed, moved or removed, their
mocks must be updated. This slows down development for avoidable reasons.
• Most importantly, the test does not assert the behavior of the method. It merely asserts that the
code is as written.
When adding a functional test for device_exists, several framework level methods were added. These
methods may now be used by other tests as well. One such method creates a virtual device in a names-
pace, and ensures that both the namespace and the device are cleaned up at the end of the test run
regardless of success or failure using the addCleanup method. The test generates details for a temporary
device, asserts that a device by that name does not exist, creates that device, asserts that it now exists,
deletes it, and asserts that it no longer exists. Such a test avoids all three issues mentioned above if it
were written using the unit testing framework.
Functional tests are also used to target a larger scope, such as agents. Many good examples exist: see the
OVS, L3 and DHCP agents' functional tests. Such tests target a top level agent method and assert that the
system interaction that was supposed to be performed was indeed performed. For example, to test the
DHCP agent's top level method that accepts network attributes and configures dnsmasq for that network,
the test:
• Instantiates an instance of the DHCP agent class (but does not start its process).
• Calls its top level function with prepared data.
• Creates a temporary namespace and device, and calls dhclient from that namespace.
• Asserts that the device successfully obtained the expected IP address.
Test exceptions
Fullstack Tests
Why?
The idea behind fullstack testing is to fill a gap between unit + functional tests and Tempest. Tempest
tests are expensive to run, and target black box API tests exclusively. Tempest requires an OpenStack
deployment to run against, which can be difficult to configure and set up. Full stack testing addresses
these issues by taking care of the deployment itself, according to the topology that the test requires.
Developers further benefit from full stack testing as it can sufficiently simulate a real environment and
provide a rapidly reproducible way to verify code while you're still writing it.
More details can be found in FullStack Testing guide.
Tempest is the integration test suite of OpenStack; more details can be found in Tempest testing.
API Tests
Tests for other resources should be contributed to the Neutron repository. Scenario tests should be
similarly split up between Tempest and Neutron according to the API they're targeting.
To create an API test, the testing class must at least inherit from the
neutron_tempest_plugin.api.base.BaseNetworkTest base class. As some tests may require certain
extensions to be enabled, the base class provides the required_extensions class attribute, which can
be used by subclasses to define a list of required extensions for a particular test class.
Scenario Tests
Some scenario tests require an advanced image (such as Ubuntu or CentOS) instead of CirrOS in order
to pass. To enable them, configure Tempest along these lines in tempest.conf:
[compute]
image_ref = <uuid of advanced image>
[neutron_plugin_options]
image_is_advanced = True
Rally Tests
Rally tests (rally-jobs/plugins) use the Rally infrastructure to exercise a Neutron deployment. Guidelines
for writing a good Rally test can be found in the Rally plugin documentation. There are also some
examples in tree. The process for adding Rally plugins to Neutron requires three steps: 1) write a plugin
and place it under rally-jobs/plugins/; this is your Rally scenario; 2) (optional) add a setup file under
rally-jobs/extra/; this is any devstack configuration required to make sure your environment can
successfully process your scenario requests; 3) edit neutron-neutron.yaml; this is your scenario contract
or SLA.
Grenade Tests
Grenade is a tool to test the upgrade process between OpenStack releases. It does not actually introduce
any new tests; rather, it uses Tempest tests to verify the upgrade process between releases. Neutron
runs a couple of Grenade jobs in the check and gate queues - see the CI Testing summary.
You can run Grenade tests locally on a virtual machine (or machines). It is pretty similar to deploying
OpenStack using Devstack. Everything is described in the project's wiki and documentation.
More info about how to troubleshoot Grenade failures in the CI jobs can be found in the Troubleshooting
Grenade jobs document.
Development Process
It is expected that any new changes that are proposed for merge come with tests for that feature or code
area. Any bug fixes that are submitted must also have tests to prove that they stay fixed! In addition,
before proposing for merge, all of the current tests should be passing.
The structure of the unit test tree should match the structure of the code tree:
Unit test modules should have the same path under neutron/tests/unit/ as the module they target has under
neutron/, and their name should be the name of the target module prefixed by test_. This requirement is
intended to make it easier for developers to find the unit tests for a given module.
Similarly, when a test module targets a package, that module's name should be the name of the package
prefixed by test_, with the same path as when a test targets a module, e.g.:
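For example (the first mapping is a real module in the tree; the second uses purely hypothetical names):
neutron/agent/linux/ip_lib.py   ->  neutron/tests/unit/agent/linux/test_ip_lib.py
neutron/foo/bar/ (a package)    ->  neutron/tests/unit/foo/test_bar.py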
The following command can be used to validate whether the unit test tree is structured according to the
above requirements:
./tools/check_unit_test_structure.sh
Where appropriate, exceptions can be added to the above script. If code is not part of the Neutron
namespace, for example, it's probably reasonable to exclude its unit tests from the check.
Note: At no time should the production code import anything from the testing subtree (neutron.tests).
There are distributions that split out the neutron.tests modules into a separate package that is not installed
by default, causing any code that relies on the presence of those modules to fail. For example, RDO is
one of those distributions.
Running Tests
Before submitting a patch for review you should always ensure all tests pass; a tox run is triggered by
the Jenkins gate executed on Gerrit for each patch pushed for review.
Neutron, like other OpenStack projects, uses tox for managing the virtual environments for running test
cases. It uses Testr for managing the running of the test cases.
Tox handles the creation of a series of virtualenvs that target specific versions of Python.
Testr handles the parallel execution of series of test cases as well as the tracking of long-running tests
and other things.
For more information on the standard Tox-based test infrastructure used by OpenStack and how to do
some common test/debugging procedures with Testr, see this wiki page: https://wiki.openstack.org/wiki/
Testr
Running pep8 and unit tests is as easy as executing this in the root directory of the Neutron source code:
tox
To run only pep8:
tox -e pep8
Since pep8 includes running pylint on all files, it can take quite some time to run. To restrict the pylint
check to only the files altered by the latest patch changes:
tox -e pep8 HEAD~1
To run only the unit tests:
tox -e py38
Many changes span across both the neutron and neutron-lib repos, and tox will always build the test en-
vironment using the published module versions specified in requirements.txt and lower-constraints.txt.
To run tox tests against a different version of neutron-lib, use the TOX_ENV_SRC_MODULES envi-
ronment variable to point at a local package repo.
For example, to run against the master branch of neutron-lib:
cd $SRC
git clone https://opendev.org/openstack/neutron-lib
cd $NEUTRON_DIR
env TOX_ENV_SRC_MODULES=$SRC/neutron-lib tox -r -e py38
To run against a change of your own, repeat the same steps, but use the directory with your changes, not
a fresh clone.
To run against a particular gerrit change of the lib (substituting the desired gerrit refs for this example):
cd $SRC
git clone https://opendev.org/openstack/neutron-lib
cd neutron-lib
git fetch https://opendev.org/openstack/neutron-lib refs/changes/13/635313/
,→6 && git checkout FETCH_HEAD
cd $NEUTRON_DIR
env TOX_ENV_SRC_MODULES=$SRC/neutron-lib tox -r -e py38
Note that the -r is needed to re-create the tox virtual envs, and will also be needed to restore them to
standard when not using this method.
Any pip installable package can be overridden with this environment variable, not just neutron-lib.
To specify multiple packages to override, specify them as a space separated list to
TOX_ENV_SRC_MODULES.
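For example, a hedged illustration in which the second package and both paths are arbitrary:
env TOX_ENV_SRC_MODULES="$SRC/neutron-lib $SRC/oslo.db" tox -r -e py38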
Functional Tests
To run functional tests that do not require sudo privileges or system-specific dependencies:
tox -e functional
To run all the functional tests, including those requiring sudo privileges and system-specific dependen-
cies, the procedure defined by tools/configure_for_func_testing.sh should be followed.
IMPORTANT: configure_for_func_testing.sh relies on DevStack to perform extensive modification to
the underlying host. Execution of the script requires sudo privileges and it is recommended that the
following commands be invoked only on a clean and disposable VM. A VM that has had DevStack
previously installed on it is also fine.
The -i option is optional and instructs the script to use DevStack to install and configure all of Neutron's
package dependencies. It is not necessary to provide this option if DevStack has already been used to
deploy Neutron to the target host.
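Assuming a Neutron checkout next to a DevStack checkout, the usual sequence looks roughly like this
(a sketch rather than an authoritative reference; the dsvm- prefixed tox environment is the one that
expects the configured host):
$ ./tools/configure_for_func_testing.sh ../devstack -i
$ tox -e dsvm-functional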
Fullstack Tests
To run the fullstack tests, you may use:
tox -e dsvm-fullstack
Since fullstack tests often require the same resources and dependencies as the functional tests, using the
configuration script tools/configure_for_func_testing.sh is advised (as described above). Before running
the script, you must first set the following environment variable so things are set up correctly:
export VENV=dsvm-fullstack
When running fullstack tests on a clean VM for the first time, it is important to make sure all of Neutron's
package dependencies have been met. As mentioned in the functional test section above, this can be done
by running the configure script with the -i argument:
./tools/configure_for_func_testing.sh ../devstack -i
You can also run ./stack.sh, and if successful, it will also have verified that the package dependencies
have been met. When running on a new VM it is suggested to set the following environment variable as
well, to make sure that all requirements (including database and message bus) are installed and set up:
export IS_GATE=False
To run the api or scenario tests, deploy Tempest, neutron-tempest-plugin and Neutron with DevStack
and then run the following command, from the tempest directory:
$ export DEVSTACK_GATE_TEMPEST_REGEX="neutron"
$ tox -e all-plugin $DEVSTACK_GATE_TEMPEST_REGEX
If you want to limit the number of tests, or run an individual test, you can do, for instance:
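A hedged illustration (the test path is an arbitrary example):
$ tox -e all-plugin neutron_tempest_plugin.api.admin.test_quotas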
If you want to use a special config for Neutron, such as using advanced images (Ubuntu or CentOS) to
test advanced features, you may need to add configuration in tempest/etc/tempest.conf:
[neutron_plugin_options]
image_is_advanced = True
The Neutron tempest plugin configs are under the neutron_plugin_options scope of tempest.conf.
For running individual test modules, cases or tests, you just need to pass the dot-separated path you want
as an argument to it.
For example, the following would run only a single test or test case:
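A hedged illustration (the dotted paths are examples only):
$ tox -e py38 neutron.tests.unit.agent.linux.test_ip_lib
$ tox -e py38 neutron.tests.unit.agent.linux.test_ip_lib.TestIpWrapper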
If you want to pass other arguments to stestr, you can do the following:
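A hedged sketch of forwarding extra arguments; whether they reach stestr directly depends on how
posargs are wired in the tox.ini of your tree:
$ tox -e py38 -- --until-failure neutron.tests.unit.agent.linux.test_ip_lib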
Coverage
Neutron has a fast growing code base and there are plenty of areas that need better coverage.
To get a grasp of the areas where tests are needed, you can check current unit tests coverage by running:
$ tox -ecover
Since the coverage command can only show unit test coverage, a coverage document is maintained that
shows test coverage per area of code in: doc/source/devref/testing_coverage.rst. You could also rely on
Zuul logs, which are generated post-merge (not every project builds coverage results). To access them, do
the following:
the following:
• Check out the latest merge commit
• Go to: http://logs.openstack.org/<first-2-digits-of-sha1>/<sha1>/post/neutron-coverage/.
• Spec is a work in progress to provide a better landing page.
Debugging
By default, calls to pdb.set_trace() will be ignored when tests are run. For pdb statements to work,
invoke tox as follows:
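A hedged illustration of one way to do this, running the test module through testtools directly in the
venv so that stdin stays attached (mirroring the venv pattern shown next):
$ tox -e venv -- python -m testtools.run [test module path]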
Tox-created virtual environments (venvs) can also be activated after a tox run and reused for debugging:
$ tox -e venv
$ . .tox/venv/bin/activate
$ python -m testtools.run [test module path]
Tox packages and installs the Neutron source tree in a given venv on every invocation, but if modifications
need to be made between invocations (e.g. adding more pdb statements), it is recommended that the
source tree be installed in the venv in editable mode:
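A minimal sketch, assuming the venv from the previous snippet is still activated and you are in the root
of the Neutron tree:
$ pip install -e .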
Editable mode ensures that changes made to the source tree are automatically reflected in the venv, and
that such changes are not overwritten during the next tox run.
Post-mortem Debugging
References
How?
Full stack tests set up their own Neutron processes (Server & agents). They assume a working Rabbit
and MySQL server before the run starts. Instructions on how to run fullstack tests on a VM are available
below.
Each test defines its own topology (What and how many servers and agents should be running).
Since the test runs on the machine itself, full stack testing enables white box testing. This means that
you can, for example, create a router through the API and then assert that a namespace was created for
it.
Full stack tests run in the Neutron tree with Neutron resources alone. You may use the Neutron API (The
Neutron server is set to NOAUTH so that Keystone is out of the picture). VMs may be simulated with
a container-like class: neutron.tests.fullstack.resources.machine.FakeFullstackMachine. An example of
its usage may be found at: neutron/tests/fullstack/test_connectivity.py.
Full stack testing can simulate multi node testing by starting an agent multiple times. Specifically,
each node would have its own copy of the OVS/LinuxBridge/DHCP/L3 agents, all configured with the
same host value. Each OVS agent is connected to its own pair of br-int/br-ex, and those bridges are
then interconnected. For the LinuxBridge agent, each agent is started in its own namespace, called host-
<some_random_value>. Such namespaces are connected to each other with a central OVS bridge.
Segmentation at the database layer is guaranteed by creating a database per test. The messaging layer
achieves segmentation by utilizing a RabbitMQ feature called vhosts. In short, just as a MySQL server
can serve multiple databases, a RabbitMQ server can serve multiple messaging domains. Exchanges and
queues in one vhost are segmented from those in another vhost.
Please note that if the change you would like to test using fullstack tests involves a change to python-
neutronclient as well as neutron, then you should make sure your fullstack tests are in a separate third
change that depends on the python-neutronclient change using the Depends-On tag in the commit mes-
sage. You will need to wait for the next release of python-neutronclient, and a minimum version bump
for python-neutronclient in the global requirements, before your fullstack tests will work in the gate.
This is because tox uses the version of python-neutronclient listed in the upper-constraints.txt file in the
openstack/requirements repository.
When?
1) You'd like to test the interaction between Neutron components (server and agents) and have
already tested each component in isolation via unit or functional tests. You should have many unit
tests, fewer tests to test a component and even fewer to test their interaction. Edge cases should
not be tested with full stack testing.
2) You'd like to increase coverage by testing features that require multi node testing such as l2pop,
L3 HA and DVR.
3) You'd like to test agent restarts. We've found bugs in the OVS, DHCP and L3 agents and haven't
found an effective way to test these scenarios. Full stack testing can help here as the full stack
infrastructure can restart an agent during the test.
Example
Neutron offers a Quality of Service API, initially offering bandwidth capping at the port
level. In the reference implementation, it does this by utilizing an OVS feature. neu-
tron.tests.fullstack.test_qos.TestBwLimitQoSOvs.test_bw_limit_qos_policy_rule_lifecycle is a positive
example of how the fullstack testing infrastructure should be used. It creates a network, subnet, QoS
policy & rule and a port utilizing that policy. It then asserts that the expected bandwidth limitation is
present on the OVS bridge connected to that port. The test is a true integration test, in the sense that it
invokes the API and then asserts that Neutron interacted with the hypervisor appropriately.
Fullstack tests can be run locally. That makes it much easier to understand exactly how they work, debug
issues in the existing tests or write new ones. To run fullstack tests locally, you should clone the Devstack
(https://opendev.org/openstack/devstack/) and Neutron (https://opendev.org/openstack/neutron)
repositories. When the repositories are available locally, the first thing which needs to be done is
preparation of the environment. There is a simple script in Neutron to do that:
$ export VENV=dsvm-fullstack
$ tools/configure_for_func_testing.sh /opt/stack/devstack -i
This will prepare needed files, install required packages, etc. When it is done you should see a message
like:
That means that all went well and you should be ready to run fullstack tests locally. Of course there are
many tests there and running all of them can take a pretty long time, so let's try to run just one:
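A hedged example invocation, using the test name that appears in the output below:
$ tox -e dsvm-fullstack neutron.tests.fullstack.test_qos.TestBwLimitQoSOvs.test_bw_limit_qos_policy_rule_lifecycle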
dsvm-fullstack develop-inst: /opt/stack/neutron
{0} neutron.tests.fullstack.test_qos.TestBwLimitQoSOvs.test_bw_limit_qos_policy_rule_lifecycle(ingress) [40.395436s] ... ok
{1} neutron.tests.fullstack.test_qos.TestBwLimitQoSOvs.test_bw_limit_qos_policy_rule_lifecycle(egress) [43.277898s] ... ok
Stopping rootwrap daemon process with pid=12657
Running upgrade for neutron ...
OK
/usr/lib/python3.8/subprocess.py:942: ResourceWarning: subprocess 13475 is still running
======
Totals
======
Ran: 2 tests in 43.3367 sec.
- Passed: 2
- Skipped: 0
- Expected Fail: 0
- Unexpected Success: 0
- Failed: 0
Sum of execute time for each test: 83.6733 sec.
==============
Worker Balance
==============
- Worker 0 (1 tests) => 0:00:40.395436
- Worker 1 (1 tests) => 0:00:43.277898
_______________________________ summary _______________________________
dsvm-fullstack: commands succeeded
congratulations :)
That means that our test ran successfully. Now you can start hacking, write new fullstack tests or
debug failing ones as needed.
If you need to debug a fullstack test locally you can use the remote_pdb module for that. First, you need
to install the remote_pdb module in the virtual environment created for fullstack testing by tox.
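A hedged sketch, assuming tox has already created the dsvm-fullstack venv under .tox/:
$ .tox/dsvm-fullstack/bin/pip install remote-pdb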
Then you need to set a breakpoint in your code. For example, let's do that in the
neutron.tests.fullstack.test_qos.TestBwLimitQoSOvs.test_bw_limit_qos_policy_rule_lifecycle test:
def test_bw_limit_qos_policy_rule_lifecycle(self):
import remote_pdb; remote_pdb.set_trace(port=1234)
new_limit = BANDWIDTH_LIMIT + 100
From that point you can start debugging your code in the same way you usually do with the pdb module.
Each fullstack test spawns its own, isolated environment with the needed services. So, for example, it
can be neutron-server, neutron-ovs-agent or neutron-dhcp-agent. Often there
is a need to check the logs of some of those processes. That is of course possible when running fullstack
tests locally. By default, logs are stored in /opt/stack/logs/dsvm-fullstack-logs. The
logs directory can be defined by the environment variable OS_LOG_PATH. In that directory there are
directories with names matching the names of the tests, for example:
$ ls -l
total 224
drwxr-xr-x 2 vagrant vagrant  4096 Nov 26 16:49 TestBwLimitQoSOvs.test_bw_limit_qos_policy_rule_lifecycle_egress_
-rw-rw-r-- 1 vagrant vagrant 94928 Nov 26 16:50 TestBwLimitQoSOvs.test_bw_limit_qos_policy_rule_lifecycle_egress_.txt
For each test there is a directory and a txt file with the same name. The txt file contains the log from the
test runner, so you can check exactly what the test did when it was run. This file contains logs from all
runs of the same test: if you run the test 10 times, you will have the logs from all 10 runs. In the directory
with the same name there are logs from the neutron services run during the test, for example:
$ ls -l TestBwLimitQoSOvs.test_bw_limit_qos_policy_rule_lifecycle_ingress_/
total 1836
-rw-rw-r-- 1 vagrant vagrant 333371 Nov 26 16:40 neutron-openvswitch-agent--2020-11-26--16-40-38-818499.log
Here each file is from one run of one service. The name of the file contains a timestamp of when
the service was started.
Sometimes there is a need to investigate the reason a test failed in the gate. After every
neutron-fullstack job run, logs are available on the Zuul job page. In the directory
controller/logs/dsvm-fullstack-logs you can find exactly the same files with logs from
each test case as mentioned above.
You can also check, for example, the journal log from the node where the tests were run. All those
logs are available in the file controller/logs/devstack.journal.xz in the job's logs. In
controller/logs/devstack.journal.README.txt there are also instructions on how to
download and check those journal logs locally.
Test Coverage
The intention is to track merged features or areas of code that lack certain types of tests. This document
may be used both by developers that want to contribute tests, and operators that are considering adopting
a feature.
Coverage
Note that while both API and scenario tests target a deployed OpenStack cloud, API tests are under the
Neutron tree and scenario tests are under the Tempest tree.
It is the expectation that API changes involve API tests, agent features or modifications involve func-
tional tests, and Neutron-wide features involve fullstack or scenario tests as appropriate.
The table references tests that explicitly target a feature, and not a job that is configured to run against a
specific backend (Thereby testing it implicitly). So, for example, while the Linux bridge agent has a job
that runs the API and scenario tests with the Linux bridge agent configured, it does not have functional
tests that target the agent explicitly. The gate column is about running API/scenario tests with Neutron
configured in a certain way, such as what L2 agent to use or what type of routers to create.
• V - Merged
• Blank - Not applicable
• X - Absent or lacking
• Patch number - Currently in review
• A name - That person has committed to work on an item
• Implicit - The code is executed, yet no assertions are made
• Prefix delegation doesn't have functional tests for the dibbler and pd layers, nor for the L3 agent
changes. This has been an area of repeated regressions.
• The functional job now compiles OVS 2.5 from source, enabling testing features that we previ-
ously could not.
Missing Infrastructure
The following section details missing test types. If you want to pick up an action item, please contact
amuller for more context and guidance.
• The Neutron team would like Rally to persist results over a window of time, graph and visualize
this data, so that reviewers could compare average runs against a proposed patch.
• It's possible to test RPC methods via the unit tests infrastructure. This was proposed in patch
162811. The goal is to provide developers with a lightweight way to rapidly run tests that target the
RPC layer, so that a patch that modifies an RPC method's signature could be verified quickly and
locally.
• Neutron currently runs a partial-grenade job that verifies that an OVS version from the latest stable
release works with neutron-server from master. We would like to expand this to DHCP and L3
agents as well.
This section contains a template for a test which checks that the Python models for database tables
are synchronized with the alembic migrations that create the database schema. This test should be
implemented in all driver/plugin repositories that were split out from Neutron.
This test compares models with the result of existing migrations. It is based on ModelsMigrationsSync
which is provided by oslo.db and was adapted for Neutron. It compares core Neutron models and vendor
specific models with migrations from Neutron core and migrations from the driver/plugin repo. This test
is functional - it runs against MySQL and PostgreSQL dialects. The detailed description of this test can
be found in Neutron Database Layer section - Tests to verify that database migrations and models are in
sync.
First, add a module that exposes the models' metadata, e.g. networking_foo/db/migration/models/head.py
(the imports below are indicative; model_base is assumed to come from neutron-lib and the models from
the networking_foo package):
from neutron_lib.db import model_base
from networking_foo import models  # noqa

def get_metadata():
    return model_base.BASEV2.metadata
The test uses external.py from Neutron. This file contains lists of table names, which were moved out
of Neutron:
VPNAAS_TABLES = [...]
...
Also the test uses VERSION_TABLE; it is the name of the table in the database which contains the revision
id of the head migration. It is preferred to keep this variable in networking_foo/db/migration/
alembic_migrations/__init__.py so it will be easy to use in the test.
Create a module networking_foo/tests/functional/db/test_migrations.py with
the following content:
# The imports below are indicative; adjust them to the actual layout of the
# driver/plugin repository.
from neutron.db.migration.alembic_migrations import external
from neutron.tests.functional.db import test_migrations
from neutron.tests.unit import testlib_api

from networking_foo.db.migration.models import head

# EXTERNAL_TABLES should contain all names of tables that are not related to
# the current repo.
EXTERNAL_TABLES = set(external.TABLES) - set(external.REPO_FOO_TABLES)


class _TestModelsMigrationsFoo(test_migrations._TestModelsMigrations):

    def get_metadata(self):
        return head.get_metadata()


class TestModelsMigrationsMysql(testlib_api.MySQLTestCaseMixin,
                                _TestModelsMigrationsFoo,
                                testlib_api.SqlTestCaseLight):
    pass


class TestModelsMigrationsPsql(testlib_api.PostgreSQLTestCaseMixin,
                               _TestModelsMigrationsFoo,
                               testlib_api.SqlTestCaseLight):
    pass
The test also needs some extra dependencies in the repository's functional test requirements file, for
example:
psutil>=3.2.2 # BSD
psycopg2
PyMySQL>=0.6.2 # MIT License
Neutron has a service plugin to inject random delays and Deadlock exceptions into normal Neutron
operations. The service plugin is called Loki and is located under neutron.services.loki.loki_plugin.
To enable the plugin, just add loki to the list of service_plugins in your neutron-server neutron.conf file.
The plugin will inject a Deadlock exception on database flushes with a 1/50 probability and a delay of 1
second with a 1/200 probability when SQLAlchemy objects are loaded into the persistent state from the
DB. The goal is to ensure the code is tolerant of these transient delays/failures that will be experienced
in busy production (and Galera) systems.
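For example, enabling the plugin as described above is just a matter of adding loki to service_plugins
in neutron.conf (the other plugins listed here are illustrative):
[DEFAULT]
service_plugins = router,qos,loki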
In the upstream Neutron CI there are various tempest and neutron-tempest-plugin jobs running. Each of
those jobs runs with a slightly different configuration of Neutron services. Below is a summary of those
jobs.
• neutron-tempest-plugin-api - runs neutron_tempest_plugin.api; Python 3.6, 1 node, openvswitch
L2 agent, openvswitch firewall driver, legacy L3 agent, L3 HA: False, L3 DVR: False, enable_dvr:
True; runs in the gate queue.
• neutron-tempest-plugin-designate-scenario - runs neutron_tempest_plugin.scenario.test_dns_integration;
Python 3.6, 1 node, openvswitch L2 agent, openvswitch firewall driver, legacy L3 agent, L3 HA:
False, L3 DVR: False, enable_dvr: True; does not run in the gate queue.
• neutron-tempest-dvr-ha-multinode-full (non-voting) - runs tempest.api (without slow tests) and
tempest.scenario; Python 3.6, 3 nodes, openvswitch L2 agent, openvswitch firewall driver, L3
agents in dvr and dvr_snat modes, L3 HA: True, L3 DVR: True, enable_dvr: True; does not run in
the gate queue.
• neutron-tempest-slow-py3 - runs tempest slow tests; Python 3.6, 2 nodes, openvswitch L2 agent,
openvswitch firewall driver, legacy L3 agent, L3 HA: False, L3 DVR: False, enable_dvr: True; runs
in the gate queue.
• neutron-tempest-ipv6-only - runs tempest smoke + IPv6 tests; Python 3.6, 1 node, openvswitch
L2 agent, openvswitch firewall driver, legacy L3 agent, L3 HA: False, L3 DVR: False, enable_dvr:
True; runs in the gate queue.
In the upstream Neutron CI there are various Grenade jobs running. Each of those jobs runs with a slightly
different configuration of Neutron services. Below is a summary of those jobs.
In the upstream Neutron CI there is also a queue called experimental. It includes jobs which do not
need to be run on every patch and/or jobs which aren't stable enough to be run always. Those jobs can
be run by leaving a check experimental comment on the patch in Gerrit.
Currently that queue includes the jobs listed below.
Columns description
• L2 agent - agent used on nodes in test job,
• firewall driver - driver configured in L2 agents config,
• L3 agent mode - mode(s) configured for L3 agent(s) on test nodes,
• L3 HA - value of l3_ha option set in neutron.conf,
• L3 DVR - value of router_distributed option set in neutron.conf,
• enable_dvr - value of enable_dvr option set in neutron.conf
This document describes how to test OpenStack with OVN using DevStack. We will start by describing
how to test on a single host.
$ sudo su - stack
$ git clone https://opendev.org/openstack/devstack.git
$ git clone https://opendev.org/openstack/neutron.git
$ cd devstack
$ cp ../neutron/devstack/ovn-local.conf.sample local.conf
5. Run DevStack.
This is going to take a while. It installs a bunch of packages, clones a bunch of git repos, and installs
everything from these git repos.
$ ./stack.sh
Once DevStack completes successfully, you should see output that looks something like this:
Environment Variables
Once DevStack finishes successfully, we're ready to start interacting with OpenStack APIs. OpenStack
provides a set of command line tools for interacting with these APIs. DevStack provides a file you can
source to set up the right environment variables to make the OpenStack command line tools work.
$ . openrc
If you're curious which environment variables are set, they generally start with an OS_ prefix:
$ env | grep OS
OS_REGION_NAME=RegionOne
OS_IDENTITY_API_VERSION=2.0
OS_PASSWORD=password
OS_AUTH_URL=http://192.168.122.8:5000/v2.0
OS_USERNAME=demo
OS_TENANT_NAME=demo
OS_VOLUME_API_VERSION=2
OS_CACERT=/opt/stack/data/CA/int-ca/ca-chain.pem
OS_NO_CACHE=1
By default, DevStack creates networks called private and public. Run the following command to
see the existing networks:
A Neutron network is implemented as an OVN logical switch. OVN driver creates logical switches with
a name in the format neutron-<network UUID>. We can use ovn-nbctl to list the configured logical
switches and see that their names correlate with the output from openstack network list:
$ ovn-nbctl ls-list
71206f5c-b0e6-49ce-b572-eb2e964b2c4e (neutron-40080dad-0064-480a-b1b0-592ae51c1471)
8d8270e7-fd51-416f-ae85-16565200b8a4 (neutron-7ec986dd-aae4-40b5-86cf-8668feeeab67)
Booting VMs
In this section we'll go through the steps to create two VMs that have a virtual NIC attached to the
private Neutron network.
DevStack uses libvirt as the Nova backend by default. If KVM is available, it will be used. Otherwise,
it will just run qemu emulated guests. This is perfectly fine for our testing, as we only need these VMs
to be able to send and receive a small amount of traffic so performance is not very important.
1. Get the Network UUID.
Start by getting the UUID for the private network from the output of openstack network
list from earlier and save it off:
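A hedged one-liner for doing that (the shell variable name is arbitrary):
$ PRIVATE_NET_ID=$(openstack network show private -c id -f value)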
3. Choose a flavor.
We need minimal resources for these test VMs, so the m1.nano flavor is sufficient.
4. Choose an image.
DevStack imports the CirrOS image by default, which is perfect for our testing. It's a very small test
image.
$ openstack image list
+--------------------------------------+--------------------------+--------
,→+
| ID | Name | Status
,→|
+--------------------------------------+--------------------------+--------
,→+
| 849a8db2-3754-4cf6-9271-491fa4ff7195 | cirros-0.3.5-x86_64-disk | active
,→|
+--------------------------------------+--------------------------+--------
,→+
5. Set up a security rule so that we can access the VMs we will boot up next.
By default, DevStack does not allow users to access VMs. To enable that, we will need to add a rule. We
will allow both ICMP and SSH.
$ openstack security group rule create --ingress --ethertype IPv4 --dst-port 22 --protocol tcp default
$ openstack security group rule create --ingress --ethertype IPv4 --protocol ICMP default
$ openstack security group rule list
+--------------------------------------+-------------+-----------+------------+-----------------------+--------------------------------------+
| ID                                   | IP Protocol | IP Range  | Port Range | Remote Security Group | Security Group                       |
+--------------------------------------+-------------+-----------+------------+-----------------------+--------------------------------------+
| ...                                  |             |           |            |                       |                                      |
| ade97198-db44-429e-9b30-24693d86d9b1 | tcp         | 0.0.0.0/0 | 22:22      | None                  | a47b14da-5607-404a-8de4-3a0f1ad3649c |
| d0861a98-f90e-4d1a-abfb-827b416bc2f6 | icmp        | 0.0.0.0/0 |            | None                  | a47b14da-5607-404a-8de4-3a0f1ad3649c |
+--------------------------------------+-------------+-----------+------------+-----------------------+--------------------------------------+
+-----------------------------+-----------------------------------------------------------------+
| Field                       | Value                                                           |
+-----------------------------+-----------------------------------------------------------------+
| OS-DCF:diskConfig           | MANUAL                                                          |
| OS-EXT-AZ:availability_zone |                                                                 |
| OS-EXT-STS:power_state      | NOSTATE                                                         |
| OS-EXT-STS:task_state       | scheduling                                                      |
| OS-EXT-STS:vm_state         | building                                                        |
| OS-SRV-USG:launched_at      | None                                                            |
| OS-SRV-USG:terminated_at    | None                                                            |
| accessIPv4                  |                                                                 |
| accessIPv6                  |                                                                 |
| addresses                   |                                                                 |
| adminPass                   | BzAWWA6byGP6                                                    |
| config_drive                |                                                                 |
| created                     | 2017-03-09T16:56:08Z                                            |
| flavor                      | m1.nano (42)                                                    |
| hostId                      |                                                                 |
| id                          | d8b8084e-58ff-44f4-b029-a57e7ef6ba61                            |
| image                       | cirros-0.3.5-x86_64-disk (849a8db2-3754-4cf6-9271-491fa4ff7195) |
| key_name                    | demo                                                            |
| name                        | test1                                                           |
| progress                    | 0                                                               |
| ...                         | ...                                                             |
+-----------------------------+-----------------------------------------------------------------+
Once both VMs have been started, they will have a status of ACTIVE:
$ openstack server list
+--------------------------------------+-------+--------+----------------------------------------------------------+--------------------------+
| ID                                   | Name  | Status | Networks                                                 | Image Name               |
+--------------------------------------+-------+--------+----------------------------------------------------------+--------------------------+
| 170d4f37-9299-4a08-b48b-2b90fce8e09b | test2 | ACTIVE | private=fd5d:9d1b:457c:0:f816:3eff:fe24:49df, 10.0.0.3   | cirros-0.3.5-x86_64-disk |
| d8b8084e-58ff-44f4-b029-a57e7ef6ba61 | test1 | ACTIVE | private=fd5d:9d1b:457c:0:f816:3eff:fe3f:953d, 10.0.0.10  | cirros-0.3.5-x86_64-disk |
+--------------------------------------+-------+--------+----------------------------------------------------------+--------------------------+
Our two VMs have addresses of 10.0.0.3 and 10.0.0.10. If we list Neutron ports, there are two
new ports with these addresses associated with them:
$ openstack port list
+--------------------------------------+------+-------------------+------------------------------------------------------------+--------+
| ID                                   | Name | MAC Address       | Fixed IP Addresses                                         | Status |
+--------------------------------------+------+-------------------+------------------------------------------------------------+--------+
...
Now we can look at OVN using ovn-nbctl to see the logical switch ports that were created for these
two Neutron ports. The first part of the output is the OVN logical switch port UUID. The second part
in parentheses is the logical switch port name. Neutron sets the logical switch port name equal to the
Neutron port ID.
VM Connectivity
We can connect to our VMs by associating a floating IP address from the public network.
Devstack does not wire up the public network by default so we must do that before connecting to this
floating IP address.
Now you should be able to connect to the VM via its floating IP address. First, ping the address.
$ ping -c 1 172.24.4.8
PING 172.24.4.8 (172.24.4.8) 56(84) bytes of data.
64 bytes from 172.24.4.8: icmp_seq=1 ttl=63 time=0.823 ms
After completing the earlier instructions for setting up devstack, you can use a second VM to emulate an
additional compute node. This is important for OVN testing as it exercises the tunnels created by OVN
between the hypervisors.
Just as before, create a throwaway VM but make sure that this VM has a different host name. Having the
same host name for both VMs will confuse Nova and will not produce two hypervisors when you query
nova hypervisor list later. Once the VM is set up, create the stack user:
$ sudo su - stack
$ git clone https://opendev.org/openstack/devstack.git
$ git clone https://opendev.org/openstack/neutron.git
OVN comes with another sample configuration file that can be used for this:
$ cd devstack
$ cp ../neutron/devstack/ovn-compute-local.conf.sample local.conf
You must set SERVICE_HOST in local.conf. The value should be the IP address of the main DevStack
host. You must also set HOST_IP to the IP address of this new host. See the text in the sample
configuration file for more information. Once that is complete, run DevStack:
$ cd devstack
$ ./stack.sh
This should complete in less time than before, as it's only running a single OpenStack service (nova-
compute) along with OVN (ovn-controller, ovs-vswitchd, ovsdb-server). The final output will look
something like this:
Now go back to your main DevStack host. You can use admin credentials to verify that the additional
hypervisor has been added to the deployment:
$ cd devstack
$ . openrc admin
$ ./tools/discover_hosts.sh
$ openstack hypervisor list
+----+------------------------+-----------------+---------------+-------+
| ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
+----+------------------------+-----------------+---------------+-------+
| 1 | centos7-ovn-devstack | QEMU | 172.16.189.6 | up |
| 2 | centos7-ovn-devstack-2 | QEMU | 172.16.189.30 | up |
+----+------------------------+-----------------+---------------+-------+
You can also look at OVN and OVS to see that the second host has shown up. For example, there will
be a second entry in the Chassis table of the OVN_Southbound database. You can use the ovn-sbctl
utility to list chassis, their configuration, and the ports bound to each of them:
$ ovn-sbctl show
Chassis "ddc8991a-d838-4758-8d15-71032da9d062"
hostname: "centos7-ovn-devstack"
Encap vxlan
ip: "172.16.189.6"
options: {csum="true"}
Encap geneve
ip: "172.16.189.6"
options: {csum="true"}
Port_Binding "97c970b0-485d-47ec-868d-783c2f7acde3"
Port_Binding "e003044d-334a-4de3-96d9-35b2d2280454"
Port_Binding "cr-lrp-08d1f28d-cc39-4397-b12b-7124080899a1"
Chassis "b194d07e-0733-4405-b795-63b172b722fd"
hostname: "centos7-ovn-devstack-2.os1.phx2.redhat.com"
Encap geneve
ip: "172.16.189.30"
options: {csum="true"}
Encap vxlan
ip: "172.16.189.30"
options: {csum="true"}
You can also see a tunnel created to the other compute node:
$ ovs-vsctl show
...
Bridge br-int
fail_mode: secure
...
Port "ovn-b194d0-0"
Provider Networks
Neutron has a provider networks API extension that lets you specify some additional attributes on a
network. These attributes let you map a Neutron network to a physical network in your environment.
The OVN ML2 driver is adding support for this API extension. It currently supports flat and vlan
networks.
Here is how you can test it:
First you must create an OVS bridge that provides connectivity to the provider network on every host
running ovn-controller. For trivial testing this could just be a dummy bridge. In a real environment, you
would want to add a local network interface to the bridge, as well.
$ ovs-vsctl add-br br-provider
ovn-controller on each host must be configured with a mapping between a network name and the bridge
that provides connectivity to that network. In this case we'll create a mapping from the network name
providernet to the bridge br-provider.
$ ovs-vsctl set open . \
external-ids:ovn-bridge-mappings=providernet:br-provider
If you want to enable this chassis to host a gateway router for external connectivity, then set
ovn-cms-options to enable-chassis-as-gw.
$ ovs-vsctl set open . \
external-ids:ovn-cms-options="enable-chassis-as-gw"
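At this point you can create a Neutron provider network that uses the mapping. A hedged example in
which the network name and type are illustrative:
$ openstack network create --provider-network-type flat \
    --provider-physical-network providernet provider-net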
Observe that the OVN ML2 driver created a special logical switch port of type localnet on the logical
switch to model the connection to the physical network.
$ ovn-nbctl show
...
switch 5bbccbbd-f5ca-411b-bad9-01095d6f1316 (neutron-729dbbee-db84-4a3d-afc3-82c0b3701074)
If VLAN is used, there will be a VLAN tag shown on the localnet port as well.
Finally, create a Neutron port on the provider network.
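A hedged example (the port name is arbitrary and the network name matches the example created above):
$ openstack port create --network provider-net provider-port1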
Skydive
Skydive is an open source real-time network topology and protocols analyzer. It aims to provide a
comprehensive way of understanding what is happening in the network infrastructure. Skydive works
by utilizing agents to collect host-local information, and sending this information to a central agent for
further analysis. It utilizes elasticsearch to store the data.
To enable Skydive support with OVN and devstack, enable it on the control and compute nodes.
On the control node, enable it as follows:
Troubleshooting
If you run into any problems, take a look at our Troubleshooting page.
Additional Resources
See the documentation and other references linked from the OVN information page.
Tempest testing
Tempest is the integration test suite of OpenStack; for details see the Tempest Testing Project.
Tempest makes it possible to add project-specific plugins, and for networking this is neutron-tempest-
plugin.
neutron-tempest-plugin covers API and scenario tests not just for core Neutron functionality, but for
stadium projects as well. For reference please read Testing Neutron’s related sections
API Tests
To create an API test, the testing class must at least inherit from the
neutron_tempest_plugin.api.base.BaseNetworkTest base class. As some tests may require certain
extensions to be enabled, the base class provides the required_extensions class attribute, which can
be used by subclasses to define a list of required extensions for a particular test class.
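A minimal sketch of such a class (the class name and the extension alias are illustrative):
from neutron_tempest_plugin.api import base


class RoutersTestExample(base.BaseNetworkTest):

    # The class is skipped unless all listed extensions are enabled.
    required_extensions = ['router']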
Scenario Tests
Some scenario tests require an advanced image (such as Ubuntu or CentOS) instead of CirrOS in order
to pass. To run all of the scenario tests with the advanced image, set the following in tempest.conf:
[compute]
image_ref = <uuid of advanced image>
[neutron_plugin_options]
default_image_is_advanced = True
To use the advanced image only for the tests that really need it and cirros for the rest to keep test
execution as fast as possible:
[compute]
image_ref = <uuid of cirros image>
[neutron_plugin_options]
advanced_image_ref = <uuid of advanced image>
advanced_image_flavor_ref = <suitable flavor for the advance image>
advanced_image_ssh_user = <username for the advanced image>
Zuul is the gating system behind OpenStack; for details see: Zuul - A Project Gating System.
Zuul job definitions are written in YAML, with Ansible underneath. Job definitions can be inherited. The
parents of the networking projects' job definitions come from the devstack Zuul job config and from
Tempest, and are defined in the neutron-tempest-plugin zuul.d folder and in the neutron zuul.d folder.
Where to look
Tempest is executed with different configurations; for details check the page Tempest jobs running in
Neutron CI.
When Zuul reports job results back to a review, it gives links to the results as well.
The logs can be checked online if you select the Logs tab on the logs page.
• job-output.txt is the full log, which contains not just test execution logs, but the devstack
console output as well.
• test_results.html is the clickable html test report.
• controller and compute (in the case of a multinode job) are directory trees containing
the relevant files (configuration files, logs etc.) created in the job. For example, under con-
troller/logs/etc/neutron/ you can check how Neutron services were configured, and in the file con-
troller/logs/tempest_conf.txt you can check the tempest configuration file.
• Service log files are in the files controller/logs/screen-`*`.txt, so for example the
neutron L2 agent logs are in the file controller/logs/screen-q-agt.txt.
Downloading logs
$ chmod +x download-logs.sh
$ ./download-logs.sh
2020-12-07T18:12:09+01:00 | Querying https://zuul.opendev.org/api/tenant/openstack/build/8caed05f5ba441b4be2b061d1d421e4e for manifest
2020-12-07T18:12:11+01:00 | Saving logs to /tmp/zuul-logs.c8ZhLM
2020-12-07T18:12:11+01:00 | Getting logs from https://3612101d6c142bf9c77a-c96c299047b55dcdeaefef8e344ceab6.ssl.cf1.rackcdn.com/694539/11/check/tempest-slow-py3/8caed05/
2020-12-07T18:12:11+01:00 | compute1/logs/apache/access_log.txt [ 0001/0337 ]
...
$ ls /tmp/zuul-logs.c8ZhLM/
compute1
controller
For executing Tempest locally you need a working DevStack; to make it worse, if you have to debug a test
executed in a multinode job, you need a multinode setup as well.
For DevStack documentation please refer to this page: DevStack
To have Tempest installed and a proper configuration file generated for it, enable tempest as a service in
your local.conf:
ENABLED_SERVICES+=tempest
or
enable_service tempest
To use specific config options for tempest you can add those as well to local.conf:
[[test-config|/opt/stack/tempest/etc/tempest.conf]]
[network-feature-enabled]
qos_placement_physnet=physnet1
To make DevStack set up neutron and neutron-tempest-plugin as well, enable their devstack plugins:
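A hedged local.conf snippet using the standard enable_plugin syntax:
enable_plugin neutron https://opendev.org/openstack/neutron
enable_plugin neutron-tempest-plugin https://opendev.org/openstack/neutron-tempest-plugin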
If you need a special image for the tests you can set that too in local.conf:
IMAGE_URLS="http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-i386-disk.
,→img,https://cloud-images.ubuntu.com/releases/bionic/release/ubuntu-18.04-
,→server-cloudimg-amd64.img"
ADVANCED_IMAGE_NAME=ubuntu-18.04-server-cloudimg-amd64
ADVANCED_INSTANCE_TYPE=ds512M
ADVANCED_INSTANCE_USER=ubuntu
If DevStack succeeds you can find tempest and neutron-tempest-plugin under the /opt/stack/ directory
(with all other project folders which are set to be installed from git).
Tempest's configuration file is under the /opt/stack/tempest/etc/ folder; you can check there if
everything is as expected.
You can check whether neutron-tempest-plugin is known as a tempest plugin by tempest:
$ tempest list-plugins
+---------------+----------------------------------------------------+
| Name          | EntryPoint                                         |
+---------------+----------------------------------------------------+
| neutron_tests | neutron_tempest_plugin.plugin:NeutronTempestPlugin |
+---------------+----------------------------------------------------+
To execute a given test or group of tests you can use a regex, or you can use the idempotent id of a test
or the tag associated with the test:
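A hedged illustration using tempest run (the regex values are arbitrary examples):
$ tempest run --regex neutron_tempest_plugin.scenario.test_connectivity
$ tempest run --regex smoke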
Subnet Pools
Learn about subnet pools by watching the summit talk given in Vancouver [1].
Subnet pools were added in Kilo. They are relatively simple. A SubnetPool has any number of
SubnetPoolPrefix objects associated to it. These prefixes are in CIDR format. Each CIDR is a piece of the
address space that is available for allocation.
Subnet Pools support IPv6 just as well as IPv4.
The Subnet model object now has a subnetpool_id attribute whose default is null for backward compat-
ibility. The subnetpool_id attribute stores the UUID of the subnet pool that acted as the source for the
address range of a particular subnet.
When creating a subnet, the subnetpool_id can be optionally specified. If it is, the cidr field is not
required. If cidr is specified, it will be allocated from the pool assuming the pool includes it and hasn't
already allocated any part of it. If cidr is left out, then the prefixlen attribute can be specified. If it is not,
the default prefix length will be taken from the subnet pool. Think of it this way: the allocation logic
always needs to know the size of the subnet desired. It can pull it from a specific CIDR, prefixlen, or
default. A specific CIDR is optional and the allocation will try to honor it if provided. The request will
fail if it can't honor it.
Subnet pools do not allow overlap of subnets.
[1] http://www.youtube.com/watch?v=QqP8yBUUXBM&t=6m12s
A quota mechanism was provided for subnet pools. It is different than other quota mechanisms in
Neutron because it doesn't count instances of first class objects. Instead it counts how much of the
address space is used.
For IPv4, it made reasonable sense to count quota in terms of individual addresses. So, if you're allowed
exactly one /24, your quota should be set to 256. Three /26s would be 192. This mechanism encourages
more efficient use of the IPv4 space which will be increasingly important when working with globally
routable addresses.
For IPv6, the smallest viable subnet in Neutron is a /64. There is no reason to allocate a subnet of any
other size for use on a Neutron network. It would look pretty funny to set a quota of
4611686018427387904 to allow one /64 subnet. To avoid this, we count IPv6 quota in terms of /64s.
So, a quota of 3 allows three /64 subnets. When we need to allocate something smaller in the future, we
will need to ensure that the code can handle non-integer quota consumption.
Allocation
Allocation is done in a way that aims to minimize fragmentation of the pool. The relevant code is
here [2]. First, the available prefixes are computed using a set difference: pool - allocations. The result is
compacted [3] and then sorted by size. The subnet is then allocated from the smallest available prefix that
is large enough to accommodate the request.
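The following is a standalone sketch of that idea using the netaddr library; the pool and allocation values
are made up, and Neutron's real implementation lives in neutron/ipam/subnet_alloc.py:
import netaddr

# Illustrative only: mimics the "pool minus allocations" computation described
# above; it is not the code Neutron actually runs.
pool = netaddr.IPSet(['10.0.0.0/16'])
allocations = netaddr.IPSet(['10.0.0.0/24', '10.0.1.0/24'])

available = pool - allocations                  # set difference
prefixes = sorted(available.iter_cidrs(),       # compacted CIDRs
                  key=lambda c: c.prefixlen,
                  reverse=True)                 # smallest prefixes first
# Allocate a /24 from the smallest prefix that can accommodate the request.
candidate = next(c for c in prefixes if c.prefixlen <= 24)
print(next(candidate.subnet(24)))               # e.g. 10.0.2.0/24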
Address Scopes
Before subnet pools or address scopes, it was impossible to tell if a network address was routable in a
certain context because the address was given explicitly on subnet create and wasn't validated against
any other addresses. Address scopes are meant to solve this by putting control over the address space
in the hands of an authority: the address scope owner. It makes use of the already existing SubnetPool
concept for allocation.
Address scopes are the thing within which address overlap is not allowed and thus provide more flexible
control as well as decoupling of address overlap from tenancy.
Prior to the Mitaka release, there was implicitly only a single shared address scope. Arbitrary address
overlap was allowed, making it pretty much a free-for-all. To make things seem somewhat sane, normal
users are not able to use routers to cross-plug networks from different projects and NAT was used
between internal networks and external networks. It was almost as if each project had a private address
scope.
The problem is that this model cannot support use cases where NAT is not desired or supported (e.g.
IPv6) or we want to allow different projects to cross-plug their networks.
An AddressScope covers only one address family. But, they work equally well for IPv4 and IPv6.
[2] neutron/ipam/subnet_alloc.py (_allocate_any_subnet)
[3] http://pythonhosted.org/netaddr/api.html#netaddr.IPSet.compact
Routing
The reference implementation honors address scopes. Within an address scope, addresses route freely
(barring any FW rules or other external restrictions). Between scopes, routing is prevented unless address
translation is used.
For now, floating IPs are the only place where traffic crosses scope boundaries. When a floating IP is
associated to a fixed IP, the fixed IP is allowed to access the address scope of the floating IP by way of
a 1:1 NAT rule. That means the fixed IP can access not only the external network, but also any internal
networks that are in the same address scope as the external network. This is diagrammed as follows:
+----------------------+ +---------------------------+
| address scope 1 | | address scope 2 |
| | | |
| +------------------+ | | +------------------+ |
| | internal network | | | | external network | |
| +-------------+----+ | | +--------+---------+ |
| | | | | |
| +-------+--+ | | +------+------+ |
| | fixed ip +----------------+ floating IP | |
| +----------+ | | +--+--------+-+ |
+----------------------+ | | | |
| +------+---+ +--+-------+ |
| | internal | | internal | |
| +----------+ +----------+ |
+---------------------------+
Due to the asymmetric routes in DVR, and the fact that DVR local routers do not know the information of
the floating IPs that reside on other hosts, there is a limitation in the DVR multiple hosts scenario. With
DVR on multiple hosts, when the destination of the traffic is an internal fixed IP on a different host, a fixed
IP with a floating IP associated can't cross the scope boundary to access the internal networks that are in
the same address scope as the external network. See https://bugs.launchpad.net/neutron/+bug/1682228
RPC
The L3 agent in the reference implementation needs to know the address scope for each port on each
router in order to map ingress traffic correctly.
Each subnet from the same address family on a network is required to be from the same subnet pool.
Therefore, the address scope will also be the same. If this were not the case, it would be more difficult
to match ingress traffic on a port with the appropriate scope. It may be counter-intuitive, but L3 address
scopes need to be anchored to some sort of non-L3 thing in the topology (e.g. an L2 interface) in order
to determine the scope of ingress traffic. For now, we use ports/networks. In the future, we may be able
to distinguish by something else, such as the remote MAC address.
The address scope id is set on each port in a dict under the address_scopes attribute. The scope is distinct
per address family. If the attribute does not appear, it is assumed to be null for both families. A value of
null means that the addresses are in the implicit address scope, which holds all addresses that don't have
an explicit one. All subnets that existed in Neutron before address scopes existed fall here.
Here is an example of how the json will look in the context of a router port:

    "address_scopes": {
        "4": "d010a0ea-660e-4df4-86ca-ae2ed96da5c1",
        "6": null
    },
To implement floating IPs crossing scope boundaries, the L3 agent needs to know the target scope of
the floating IP. The fixed address is not enough to disambiguate because, theoretically, there could be
overlapping addresses from different scopes. The scope is computed [4] from the floating IP's fixed port and
attached to the floating IP dict under the fixed_ip_address_scope attribute. Here's what the json looks
like (trimmed):
{
...
"floating_ip_address": "172.24.4.4",
"fixed_ip_address": "172.16.0.3",
"fixed_ip_address_scope": "d010a0ea-660e-4df4-86ca-ae2ed96da5c1",
...
}
Model
The model for subnet pools and address scopes can be found in neutron/db/models_v2.py and
neutron/db/address_scope_db.py. This document won't go over all of the details. It is worth noting how
they relate to existing Neutron objects. The existing Neutron subnet now optionally references a single
subnet pool.
L3 Agent
The L3 agent is limited in its support for multiple address scopes. Within a router in the reference
implementation, traffic is marked on ingress with the address scope corresponding to the network it is
coming from. If that traffic would route to an interface in a different address scope, the traffic is blocked
unless an exception is made.
One exception is made for floating IP traffic. When traffic is headed to a floating IP, DNAT is applied and
the traffic is allowed to route to the private IP address potentially crossing the address scope boundary.
When traffic flows from an internal port to the external network and a floating IP is assigned, that traffic
is also allowed.
Another exception is made for traffic from an internal network to the external network when SNAT is
enabled. In this case, SNAT to the router's fixed IP address is applied to the traffic. However, SNAT is not
used if the external network has an explicit address scope assigned and it matches the internal network's.
In that case, traffic routes straight through without NAT. The internal network's addresses are viable on
the external network in this case.
[4] neutron/db/l3_db.py (_get_sync_floating_ips)
The reference implementation has limitations. Even with multiple address scopes, a router implemen-
tation is unable to connect to two networks with overlapping IP addresses. There are two reasons for
this.
First, a single routing table is used inside the namespace. An implementation using multiple routing
tables has been in the works but there are some unresolved issues with it.
Second, the default SNAT feature cannot be supported with the current Linux conntrack implementation
unless a double NAT is used (one NAT to get from the address scope to an intermediate address specific
to the scope, and a second NAT to get from that intermediate address to an external address). A single NAT
won't work if there are duplicate addresses across the scopes.
Due to these complications, the router will still refuse to connect to overlapping subnets. We can look
into an implementation that overcomes these limitations in the future.
Agent extensions
All reference agents utilize a common extension mechanism that allows for the introduction and en-
abling of a core resource extension without needing to change agent code. This mechanism allows
multiple agent extensions to be run by a single agent simultaneously. The mechanism may be especially
interesting to third parties whose extensions lie outside the neutron tree.
Under this framework, an agent may expose its API to each of its extensions thereby allowing an exten-
sion to access resources internal to the agent. At layer 2, for instance, upon each port event the agent is
then able to trigger a handle_port method in its extensions.
Interactions with the agent API object are in the following order:
1. The agent initializes the agent API object.
2. The agent passes the agent API object into the extension manager.
3. The manager passes the agent API object into each extension.
4. An extension calls the new agent API object method to receive, for instance, bridge wrappers with
cookies allocated.
+-----------+
| Agent API +--------------------------------------------------+
+-----+-----+ |
| +-----------+ |
|1 +--+ Extension +--+ |
| | +-----------+ | |
+---+-+-+---+ 2 +--------------+ 3 | | 4 |
| Agent +-----+ Ext. manager +-----+--+ .... +--+-----+
+-----------+ +--------------+ | |
| +-----------+ |
+--+ Extension +--+
+-----------+
Each extension is referenced through a stevedore entry point defined within a specific namespace. For
example, L2 extensions are referenced through the neutron.agent.l2.extensions namespace.
The relevant modules are:
• neutron_lib.agent.extension: This module defines an abstract extension interface for all agent ex-
tensions across L2 and L3.
• neutron_lib.agent.l2_extension:
• neutron_lib.agent.l3_extension: These modules subclass
neutron_lib.agent.extension.AgentExtension and define a layer-specific abstract extension interface.
• neutron.agent.agent_extensions_manager: This module contains a manager that allows extensions
to load themselves at runtime.
• neutron.agent.l2.l2_agent_extensions_manager:
• neutron.agent.l3.l3_agent_extensions_manager: Each of these modules passes core resource
events to loaded extensions.
Every agent can pass an agent API object into its extensions in order to expose its internals to them in a
controlled way. To accommodate different agents, each extension may define a consume_api() method
that will receive this object.
This agent API object is part of neutron's public interface for third parties. All changes to the interface
will be managed in a backwards-compatible way.
At this time, on the L2 side, only the L2 Open vSwitch agent provides an agent API object to extensions.
See L2 agent extensions. For L3, see L3 agent extensions.
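For illustration, a third-party extension might look roughly like the following sketch; it assumes the neutron_lib.agent.l2_extension.L2AgentExtension interface described above, and the port-handling logic is purely hypothetical:

    from neutron_lib.agent import l2_extension


    class ExampleL2Extension(l2_extension.L2AgentExtension):
        """Minimal extension that records the agent API object and port events."""

        def consume_api(self, agent_api):
            # Steps 2 and 3 of the sequence above: the agent API object is
            # handed in through the extension manager.
            self.agent_api = agent_api

        def initialize(self, connection, driver_type):
            # Called once the agent has loaded the extension.
            self.driver_type = driver_type

        def handle_port(self, context, data):
            # Step 4: called by the agent on each port event it relays.
            self.last_port_id = data.get('port_id')

        def delete_port(self, context, data):
            pass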
The relevant modules are:
• neutron_lib.agent.extension
• neutron_lib.agent.l2_extension
• neutron_lib.agent.l3_extension
• neutron.agent.agent_extensions_manager
• neutron.agent.l2.l2_agent_extensions_manager
• neutron.agent.l3.l3_agent_extensions_manager
API Extensions
API extensions are the standard way of introducing new functionality to the Neutron project; they allow
plugins to determine whether they wish to support the functionality or not.
Examples
The easiest way to demonstrate how an API extension is written is to study an existing API extension
and explain its different layers.
https://wiki.openstack.org/wiki/Neutron/SecurityGroups
API Extension
The API extension is the front end portion of the code, which handles defining a RESTful API that
is used by projects.
Database API
The Security Group API extension adds a number of methods to the database layer of Neutron.
Agent RPC
This portion of the code handles processing requests from projects, after they have been stored in the
database. It involves messaging all the L2 agents running on the compute nodes, and modifying the
IPTables rules on each hypervisor.
• Plugin RPC classes
– SecurityGroupServerRpcMixin - defines the RPC API that the plugin uses to communicate
with the agents running on the compute nodes
– SecurityGroupServerRpcMixin - Defines the API methods used to fetch data from the
database, in order to return responses to agents via the RPC API
• Agent RPC classes
– The SecurityGroupServerRpcApi defines the API methods that can be called by agents, back
to the plugin that runs on the Neutron controller
– The SecurityGroupAgentRpcCallbackMixin defines methods that a plugin uses to call back
to an agent after performing an action called by an agent.
IPTables Driver
• The IptablesFirewallDriver has a method to convert security group rules into iptables
statements.
Resources that inherit from the HasStandardAttributes DB class can automatically have the extensions
written for standard attributes (e.g. timestamps, revision number, etc) extend their resources by defining
the api_collections on their model. These are used by extensions for standard attr resources to generate
the extended resources map.
Any new addition of a resource to the standard attributes collection must be accompanied by a new
extension to ensure that it is discoverable via the API. If it's a completely new resource, the extension
describing that resource will suffice. If it's an existing resource that was released in a previous cycle and
is having the standard attributes added for the first time, then a dummy extension needs to be added indicating
that the resource now has standard attributes. This ensures that an API caller can always discover whether an
attribute will be available.
For example, if Flavors were migrated to include standard attributes, we need a new flavor-standardattr
extension. Then as an API caller, I will know that flavors will have timestamps by checking for flavor-
standardattr and timestamps.
Current API resources extended by standard attr extensions:
• subnets: neutron.db.models_v2.Subnet
• trunks: neutron.services.trunk.models.Trunk
• routers: neutron.db.l3_db.Router
• segments: neutron.db.segments_db.NetworkSegment
• security_group_rules: neutron.db.models.securitygroup.SecurityGroupRule
• networks: neutron.db.models_v2.Network
• policies: neutron.db.qos.models.QosPolicy
• subnetpools: neutron.db.models_v2.SubnetPool
• ports: neutron.db.models_v2.Port
• security_groups: neutron.db.models.securitygroup.SecurityGroup
• floatingips: neutron.db.l3_db.FloatingIP
• network_segment_ranges: neutron.db.models.network_segment_range.NetworkSegmentRange
This section will cover the internals of Neutron's HTTP API, and the classes in Neutron that can be used
to create extensions to the Neutron API.
Python web applications interface with webservers through the Python Web Server Gateway Interface
(WSGI), defined in PEP 333.
Startup
Neutron's WSGI server is started from the server module, and the entry point serve_wsgi is called to
build an instance of the NeutronApiService, which is then returned to the server module; the server module
spawns an Eventlet GreenPool that will run the WSGI application and respond to requests from clients.
WSGI Application
During the building of the NeutronApiService, the _run_wsgi function creates a WSGI application using
the load_paste_app function inside config.py, which parses api-paste.ini, in order to create a WSGI
app using Paste's deploy.
The api-paste.ini file defines the WSGI applications and routes - using the Paste INI file format.
The INI file directs paste to instantiate the APIRouter class of Neutron, which contains several methods
that map Neutron resources (such as Ports, Networks, Subnets) to URLs, and the controller for each
resource.
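As a rough illustration of what Paste's deploy does under the hood (outside of Neutron's own wrappers; the path and application name are assumptions, not guaranteed to match every deployment):

    from paste import deploy

    # Load the composite WSGI application defined in api-paste.ini.
    wsgi_app = deploy.loadapp('config:/etc/neutron/api-paste.ini', name='neutron')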
Further reading
When writing code for an extension, service plugin, or any other part of Neutron you must not call core
plugin methods that mutate state while you have a transaction open on the session that you pass into the
core plugin method.
The create and update methods for ports, networks, and subnets in ML2 all have a precommit phase
and postcommit phase. During the postcommit phase, the data is expected to be fully persisted to the
database and ML2 drivers will use this time to relay information to a backend outside of Neutron. Calling
the ML2 plugin within a transaction would violate this semantic because the data would not be persisted
to the DB; and, were a failure to occur that caused the whole transaction to be rolled back, the backend
would become inconsistent with the state in Neutrons DB.
To prevent this, these methods are protected with a decorator that will raise a RuntimeError if they
are called with a context that has a session in an active transaction. The decorator can be found at
neutron.common.utils.transaction_guard and may be used in other places in Neutron to protect functions
that are expected to be called outside of a transaction.
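A minimal sketch of how such a guard is applied, assuming the neutron.common.utils.transaction_guard decorator described above (the service driver and its method are hypothetical, and self._core_plugin is assumed to be set elsewhere):

    from neutron.common import utils


    class ExampleServiceDriver(object):

        @utils.transaction_guard
        def create_widget_port(self, context, port):
            # Raises RuntimeError if 'context' already has a session in an
            # active transaction, so it is safe to call ML2 core plugin
            # methods (and their postcommit phase) from here.
            return self._core_plugin.create_port(context, port)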
As more functionality is added to Neutron over time, efforts to improve performance become more
difficult, given the rising complexity of the code. Identifying performance bottlenecks is frequently not
straightforward, because they arise as a result of complex interactions of different code components.
To help community developers improve Neutron performance, a Python decorator has been implemented.
Decorating a method or a function with it will result in profiling data being added to the
corresponding Neutron component log file. These data are generated using cProfile, which is part of the
Python standard library.
Once a method or function has been decorated, every one of its executions will add data, grouped in 3
sections, to the corresponding log file:
1. The top calls (sorted by CPU cumulative time) made by the decorated method or function. The
number of calls included in this section can be controlled by a configuration option, as explained
in Setting up Neutron for code profiling. Following is a summary example of this section:
Oct 20 01:52:40.759379 ubuntu-bionic-vexxhost-sjc1-0012393267 neutron-server[19578]:
DEBUG neutron.profiling.profiled_decorator [None req-dc2d428f-4531-4f07-a12d-56843b5f9374
c_rally_8af8f2b4_YbhFJ6Ge c_rally_8af8f2b4_fqvy1XJp] os-profiler parent trace-id
c5b30c7f-100b-4e1c-8f07-b2c38f41ad65 trace-id 6324fa85-ea5f-4ae2-9d89-2aabff0dddfc
16928 millisecs elapsed for neutron.plugins.ml2.plugin.create_port((<neutron.plugins.ml2.plugin.Ml2Plugin
object at 0x7f0b4e6ca978>, <neutron_lib.context.Context object at 0x7f0b4bcee240>,
{'port': {'tenant_id': '421ab52e126e45af81a3eb1962613e18', 'network_id':
'dc59577a-9589-4617-82b5-6ee31dbdb15d', 'fixed_ips': [{'ip_address': '1.1.5.177',
'subnet_id': 'e15ec947-9edd-4793-bf0f-c463c7ff2f62'}], 'admin_state_
2. Callers section: all functions or methods that called each function or method in the resulting
profiling data. This is restricted by the configured number of top calls to log, as explained in
Setting up Neutron for code profiling. Following is a summary example of this section:
Oct 20 01:52:40.767003 ubuntu-bionic-vexxhost-sjc1-0012393267 neutron-server[19578]: Ordered by: cumulative time
Oct 20 01:52:40.767003 ubuntu-bionic-vexxhost-sjc1-0012393267 neutron-server[19578]: List reduced from 1861 to 100 due to restriction <100>
Oct 20 01:52:40.767003 ubuntu-bionic-vexxhost-sjc1-0012393267 neutron-server[19578]: Function                 was called by...
Oct 20 01:52:40.767003 ubuntu-bionic-vexxhost-sjc1-0012393267 neutron-server[19578]:                          ncalls  tottime  cumtime
Oct 20 01:52:40.767003 ubuntu-bionic-vexxhost-sjc1-0012393267 neutron-server[19578]: /usr/local/lib/python3.6/dist-packages/neutron_lib/db/api.py:132(wrapped)  <-  2/0  0.000  0.000  /usr/local/lib/python3.6/dist-packages/neutron_lib/db/api.py:224(wrapped)
Oct 20 01:52:40.767003 ubuntu-bionic-vexxhost-sjc1-0012393267 neutron-server[19578]: /opt/stack/neutron/neutron/common/utils.py:678(inner)  <-
Oct 20 01:52:40.767003 ubuntu-bionic-vexxhost-sjc1-0012393267 neutron-server[19578]: /usr/local/lib/python3.6/dist-packages/sqlalchemy/orm/strategies.py:1317(<genexpr>)  <-  3  0.000  0.000  /opt/stack/osprofiler/osprofiler/profiler.py:426(_notify)
Oct 20 01:52:40.767003 ubuntu-bionic-vexxhost-sjc1-0012393267 neutron-server[19578]:   1  0.000  16.883  /usr/local/lib/python3.6/dist-packages/neutron_lib/db/api.py:132(wrapped)
    1  0.000  16.704  /usr/local/lib/python3.6/dist-packages/sqlalchemy/orm/query.py:3281(one)
Oct 20 01:52:40.767003 ubuntu-bionic-vexxhost-sjc1-0012393267 neutron-server[19578]:   0  0.000  0.000  /usr/local/lib/python3.6/dist-packages/sqlalchemy/orm/query.py:3337(__iter__)
Oct 20 01:52:40.767003 ubuntu-bionic-vexxhost-sjc1-0012393267 neutron-server[19578]:   1  0.000  0.000  /usr/local/lib/python3.6/dist-packages/sqlalchemy/orm/query.py:3362(_execute_and_instances)
    1  0.000  0.000  /usr/local/lib/python3.6/dist-packages/sqlalchemy/orm/strategies.py:2033(load_scalar_from_joined_new_row)
Oct 20 01:52:40.767003 ubuntu-bionic-vexxhost-sjc1-0012393267 neutron-server[19578]:   1/0  0.000  0.000  /usr/local/lib/python3.6/dist-packages/sqlalchemy/pool/base.py:840(_checkin)
Oct 20 01:52:40.767003 ubuntu-bionic-vexxhost-sjc1-0012393267 neutron-server[19578]:   1/0  0.000  0.000  /usr/local/lib/python3.6/dist-packages/webob/request.py:1294(send)
Oct 20 01:52:40.767003 ubuntu-bionic-vexxhost-sjc1-0012393267 neutron-server[19578]: /opt/stack/osprofiler/osprofiler/sqlalchemy.py:84(handler)  <-  16/0  0.000  0.000  /usr/local/lib/python3.6/dist-packages/sqlalchemy/event/attr.py:316(__call__)
3. Callees section: a list of all functions or methods that were called by the indicated function or
method. Again, this is restricted by the configured number of top calls to log. Following is a
summary example of this section:
Oct 20 01:52:40.788842 ubuntu-bionic-vexxhost-sjc1-0012393267 neutron-server[19578]: Ordered by: cumulative time
Oct 20 01:52:40.788842 ubuntu-bionic-vexxhost-sjc1-0012393267 neutron-server[19578]: List reduced from 1861 to 100 due to restriction <100>
Oct 20 01:52:40.788842 ubuntu-bionic-vexxhost-sjc1-0012393267 neutron-server[19578]: Function                 called...
Oct 20 01:52:40.788842 ubuntu-bionic-vexxhost-sjc1-0012393267 neutron-server[19578]:                          ncalls  tottime  cumtime
    1  0.000  16.928  /usr/local/lib/python3.6/dist-packages/neutron_lib/db/api.py:224(wrapped)
Oct 20 01:52:40.788842 ubuntu-bionic-vexxhost-sjc1-0012393267 neutron-server[19578]:   1  0.000  0.000  /usr/local/lib/python3.6/dist-packages/sqlalchemy/orm/session.py:2986(is_active)
Oct 20 01:52:40.788842 ubuntu-bionic-vexxhost-sjc1-0012393267 neutron-server[19578]: /usr/local/lib/python3.6/dist-packages/sqlalchemy/orm/strategies.py:1317(<genexpr>)  ->  1  0.000  0.000  /usr/local/lib/python3.6/dist-packages/sqlalchemy/engine/default.py:579(do_execute)
Oct 20 01:52:40.788842 ubuntu-bionic-vexxhost-sjc1-0012393267 neutron-server[19578]:   2  0.000  0.000  /usr/local/lib/python3.6/dist-packages/sqlalchemy/engine/default.py:1078(post_exec)
    1  0.000  0.000  /usr/local/lib/python3.6/dist-packages/sqlalchemy/event/base.py:266(__getattr__)
Oct 20 01:52:40.788842 ubuntu-bionic-vexxhost-sjc1-0012393267 neutron-server[19578]:   15/3  0.000  0.000  /usr/local/lib/python3.6/dist-packages/sqlalchemy/orm/loading.py:35(instances)
Oct 20 01:52:40.788842 ubuntu-bionic-vexxhost-sjc1-0012393267 neutron-server[19578]:   1  0.000  0.000  /usr/local/lib/python3.6/dist-packages/sqlalchemy/orm/strategies.py:1317(<listcomp>)
4. For each decorated method or function execution, only the top 50 calls by cumulative CPU time
are logged. This can be changed by adding the following line to the [default] section of /etc/
neutron/neutron.conf:
code_profiling_calls_to_log = 100
Code profiling is enabled for the neutron-rally-task job in Neutron's check queue in Zuul.
Taking advantage of the fact that os-profiler is enabled for this job, the data logged by the
profiled_decorator.profile decorator includes the os-profiler parent trace-id
and trace-id, as can be seen here:
Oct 20 01:52:40.759379 ubuntu-bionic-vexxhost-sjc1-0012393267 neutron-server[19578]:
DEBUG neutron.profiling.profiled_decorator [None req-dc2d428f-4531-4f07-a12d-56843b5f9374
c_rally_8af8f2b4_YbhFJ6Ge c_rally_
Community developers wanting to use this to correlate data from os-profiler and the
profiled_decorator.profile decorator can submit a DNM (Do Not Merge) patch, decorating
the functions and methods they want to profile and optionally:
1. Configure the number of calls to be logged in the neutron-rally-task job definition, as
described in Setting up Neutron for code profiling.
2. Increase the timeout parameter value of the neutron-rally-task job in the .zuul yaml
file. The value used for the Neutron gate might be too short when logging large quantities of
profiling data.
The profiled_decorator.profile and os-profiler data will be found in the
neutron-rally-task log files and HTML report respectively.
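A minimal sketch of such a DNM patch, assuming the neutron.profiling.profiled_decorator.profile decorator referenced above (the class and the decorated method here are only examples, not a proposal for what to profile):

    from neutron.profiling import profiled_decorator


    class Ml2PluginBeingProfiled(object):

        @profiled_decorator.profile
        def create_port(self, context, port):
            # Every execution now appends the three profiling sections
            # described above to the neutron-server log.
            pass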
This section contains some common information that will be useful for developers that need to do some
db changes.
For columns it is possible to set default or server_default. What is the difference between them and why
should they be used?
The explanation is quite simple:
• default - the default value that SQLAlchemy will specify in queries for creating instances of a
given model;
• server_default - the default value for a column that SQLAlchemy will specify in DDL.
Summarizing, default is useless in migrations and only server_default should be used. To keep migrations
in sync with the models, the server_default parameter should also be added to the model. If a default value
in the database is not needed, server_default should not be used. The declarative approach can be bypassed
(i.e. default may be omitted in the model) if the default is enforced through business logic.
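A small sketch of the difference on a column definition (the table and column names are illustrative, not from the Neutron tree):

    import sqlalchemy as sa

    example = sa.Table(
        'example', sa.MetaData(),
        sa.Column('id', sa.String(36), primary_key=True),
        # default: filled in by SQLAlchemy when it generates the INSERT, so it
        # never shows up in the table definition (useless in migrations).
        sa.Column('client_side_flag', sa.Boolean(), default=False),
        # server_default: emitted as part of the DDL ("... DEFAULT false"), so
        # it also applies to rows inserted outside SQLAlchemy; this is the one
        # migrations and models should agree on.
        sa.Column('admin_state_up', sa.Boolean(),
                  server_default=sa.sql.false(), nullable=False),
    )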
Database migrations
For details on the neutron-db-manage wrapper and alembic migrations, see Alembic Migrations.
class neutron.tests.functional.db.test_migrations._TestModelsMigrations
Test for checking that the models state and the migrations are equal.
For the opportunistic testing you need to set up a db named openstack_citest with user
openstack_citest and password openstack_citest on localhost. The test will then use that db and
user/password combo to run the tests.
For PostgreSQL on Ubuntu this can be done with the following commands (the exact statements may vary slightly between PostgreSQL versions):

    sudo -u postgres psql
    postgres=# create user openstack_citest with createdb login password 'openstack_citest';
    postgres=# create database openstack_citest with owner openstack_citest;
For MySQL on Ubuntu this can be done with the following commands:
mysql -u root
>create database openstack_citest;
>grant all privileges on openstack_citest.* to
openstack_citest@localhost identified by 'openstack_citest';
Output is a list that contains information about differences between db and models. Output example:
[('add_table',
Table('bat', MetaData(bind=None),
Column('info', String(), table=<bat>), schema=None)),
('remove_table',
Table(u'bar', MetaData(bind=None),
Column(u'data', VARCHAR(), table=<bar>), schema=None)),
('add_column',
None,
'foo',
Column('data', Integer(), table=<foo>)),
('remove_column',
None,
'foo',
Column(u'old_data', VARCHAR(), table=None)),
[('modify_nullable',
None,
'foo',
u'x',
{'existing_server_default': None,
'existing_type': INTEGER()},
True,
False)]]
This class also contains tests for branches, e.g. that the correct operations are used in the contract and
expand branches.
db_sync(engine)
Run migration scripts with the given engine instance.
This method must be implemented in subclasses and run migration scripts for a DB the given
engine is connected to.
filter_metadata_diff(diff)
Filter changes before assert in test_models_sync().
Allow subclasses to whitelist/blacklist changes. By default, no filtering is performed; changes are
returned as is.
Parameters: diff - a list of differences (see compare_metadata() docs for details on format)
Returns: a list of differences
get_engine()
Return the engine instance to be used when running tests.
This method must be implemented in subclasses and return an engine instance to be used
when running tests.
get_metadata()
Return the metadata instance to be used for schema comparison.
This method must be implemented in subclasses and return the metadata instance attached
to the BASE model.
include_object(object_, name, type_, reflected, compare_to)
Return True for objects that should be compared.
Parameters
• object_ - a SchemaItem object such as a Table or Column object
• name - the name of the object
• type_ - a string describing the type of object (e.g. table)
• reflected - True if the given object was produced based on table reflection, False if it's
from a local MetaData object
• compare_to - the object being compared against, if available, else None
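A concrete subclass fills in the abstract methods above. The sketch below assumes the engine is provided by an opportunistic DB fixture and that neutron.db.migration.models.head exposes the metadata of the declarative BASE; the migration invocation itself is elided:

    from neutron.db.migration.models import head
    from neutron.tests.functional.db import test_migrations


    class ExampleModelsMigrations(test_migrations._TestModelsMigrations):

        def db_sync(self, engine):
            # Run the alembic migration scripts against the database that
            # 'engine' is connected to (in the real tests this invokes the
            # neutron-db-manage / alembic upgrade machinery).
            pass

        def get_engine(self):
            # Engine pointing at the openstack_citest database set up above.
            return self.engine

        def get_metadata(self):
            # Metadata attached to the BASE model, holding all Neutron models.
            return head.get_metadata()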
There are many attributes that we would like to store in the database which are common across many
Neutron objects (e.g. tags, timestamps, rbac entries). We have previously been handling this by dupli-
cating the schema to every table via model mixins. This means that a DB migration is required for each
object that wants to adopt one of these common attributes. This becomes even more cumbersome when
the relationship between the attribute and the object is many-to-one because each object then needs its
own table for the attributes (assuming referential integrity is a concern).
To address this issue, the standardattribute table is available. Any model can add support for this ta-
ble by inheriting the HasStandardAttributes mixin in neutron.db.standard_attr. This mixin will add a
standard_attr_id BigInteger column to the model with a foreign key relationship to the standardattribute
table. The model will then be able to access any columns of the standardattribute table and any tables
related to it.
A model that inherits HasStandardAttributes must implement the property api_collections, which is a
list of API resources that the new object may appear under. In most cases, this will only be one (e.g.
ports for the Port model). This is used by all of the service plugins that add standard attribute fields to
determine which API responses need to be populated.
A model that supports the tag mechanism must implement the property collection_resource_map, which is
a dict mapping collection_name to resource_name for API resources. The model must also set
tag_support to a value of True.
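A minimal sketch of a model opting in, assuming the mixins described above (the resource, table and collection names are made up for illustration):

    import sqlalchemy as sa

    from neutron.db import standard_attr
    from neutron_lib.db import model_base


    class Widget(standard_attr.HasStandardAttributes, model_base.BASEV2,
                 model_base.HasId):

        __tablename__ = 'widgets'
        name = sa.Column(sa.String(255))

        # Collections this resource appears under in the API; used by the
        # standard-attr service plugins to extend API responses.
        api_collections = ['widgets']

        # Needed to enable the tag mechanism for this resource.
        collection_resource_map = {'widgets': 'widget'}
        tag_support = True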
The introduction of a new standard attribute only requires one column addition to the standardattribute
table for one-to-one relationships, or a new table for one-to-many or one-to-zero relationships. Then all
of the models using the HasStandardAttributes mixin will automatically gain access to the new attribute.
Any attributes that will apply to every neutron resource (e.g. timestamps) can be added directly to the
standardattribute table. For things that will frequently be NULL for most entries (e.g. a column to store
an error reason), a new table should be added and joined to in a query to prevent a bunch of NULL
entries in the database.
This document is intended to track and notify developers that db models in neutron will be centralized
and moved to a new tree under neutron/db/models. This was discussed in [1]. The reason for relocating
db models is to solve the cyclic import issue encountered while implementing oslo versioned objects for
resources in neutron.
The reason behind this relocation is that the mixin classes and the db models for some resources in neutron
live in the same module. The mixin classes contain methods that provide the functionality of fetching,
adding, updating and deleting data via queries. These queries will be replaced with the use of versioned
objects, and the definition of a versioned object will use the db models. So the object files will import the
models, and the mixins need to import those objects, which would end up in a cyclic import.
We have decided to move all model definitions to neutron/db/models/ with no further nesting after that
point. A deprecation mechanism has already been added to avoid breaking third party plugins that use
those models. All relocated models need to use the deprecation method, which will generate a warning
and return the new class in place of the old one. Some examples of relocated models are [2] and [3]. In
the future, if you define new models, please make sure they are separated from mixins and placed under
the tree neutron/db/models/.
References
In Neutron subnets, DNS nameservers are given priority when created or updated. This means that if you
create a subnet with multiple DNS servers, the order will be retained and guests will receive the DNS
servers in the order in which you created them. The same applies to update operations on subnets that
add, remove, or update DNS servers.
When the subnet is shown after such an update, the order of the DNS nameservers reflects the change.
New virtual machines deployed to this subnet will receive the DNS nameservers in this new priority order.
Existing virtual machines that have already been deployed will not be immediately affected by changing
the DNS nameserver order on the neutron subnet. Virtual machines that are configured to get their IP
address via DHCP will detect the DNS nameserver order change when their DHCP lease expires or when
the virtual machine is restarted. Existing virtual machines configured with a static IP address will never
detect the updated DNS nameserver order.
Since the Mitaka release, neutron has an interface defined to interact with an external DNS service. This
interface is based on an abstract driver that can be used as the base class to implement concrete drivers
to interact with various DNS services. The reference implementation of such a driver integrates neutron
with OpenStack Designate.
This integration allows users to publish dns_name and dns_domain attributes associated with floating IP
addresses, ports, and networks in an external DNS service.
To support integration with an external DNS service, the dns_name and dns_domain attributes were
added to floating ips, ports and networks. The dns_name specifies the name to be associated with a cor-
responding IP address, both of which will be published to an existing domain with the name dns_domain
in the external DNS service.
Specifically, floating ips, ports and networks are extended as follows:
• Floating ips have a dns_name and a dns_domain attribute.
• Ports have a dns_name attribute.
• Networks have a dns_domain attribute.
• Refer to oslo_i18n documentation for the general mechanisms that should be used: https://docs.
openstack.org/oslo.i18n/latest/user/usage.html
• Each stadium project should NOT consume the _i18n module from neutron-lib or neutron.
• It is recommended that you create a {package_name}/_i18n.py file in your repo, and use that.
Your localization strings will also live in your repo.
L2 agent extensions
L2 agent extensions are part of a generalized L2/L3 extension framework. See agent extensions.
• neutron.plugins.ml2.drivers.openvswitch.agent.ovs_agent_extension_api
Open vSwitch agent API object includes two methods that return wrapped and hardened bridge objects
with cookie values allocated for calling extensions:
1. request_int_br
2. request_tun_br
Bridge objects returned by those methods already have new default cookie values allocated for extension
flows. All flow management methods (add_flow, mod_flow, ...) enforce those allocated cookies.
• neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_agent_extension_api
The Linux bridge agent extension API object includes a method that returns an instance of the
IptablesManager class, which is used by the L2 agent to manage security group rules:
1. get_iptables_manager
L2 Agent Networking
This Agent uses the Open vSwitch virtual switch to create L2 connectivity for instances, along with
bridges created in conjunction with OpenStack Nova for filtering.
ovs-neutron-agent can be configured to use different networking technologies to create project isolation.
These technologies are implemented as ML2 type drivers which are used in conjunction with the Open
vSwitch mechanism driver.
VLAN Tags
GRE Tunnels
GRE tunneling is documented in depth in the "Networking in too much detail" article by RedHat.
VXLAN Tunnels
VXLAN is an overlay technology which encapsulates MAC frames at layer 2 into a UDP header. More
information can be found in The VXLAN wiki page.
Geneve Tunnels
Geneve uses UDP as its transport protocol and is dynamic in size using extensible option headers. It
is important to note that currently it is only supported in newer kernels. (kernel >= 3.18, OVS version
>=2.4) More information can be found in the Geneve RFC document.
Bridge Management
In order to make the agent capable of handling more than one tunneling technology, to decouple the
requirements of segmentation technology from project isolation, and to preserve backward compatibility
for OVS agents working without tunneling, the agent relies on a tunneling bridge, or br-tun, and the well
known integration bridge, or br-int.
All VM VIFs are plugged into the integration bridge. VM VIFs on a given virtual network share a
common local VLAN (i.e. not propagated externally). The VLAN id of this local VLAN is mapped to
the physical networking details realizing that virtual network.
For virtual networks realized as VXLAN/GRE tunnels, a Logical Switch (LS) identifier is used to dif-
ferentiate project traffic on inter-HV tunnels. A mesh of tunnels is created to other Hypervisors in the
cloud. These tunnels originate and terminate on the tunneling bridge of each hypervisor, leaving br-int
unaffected. Port patching is done to connect local VLANs on the integration bridge to inter-hypervisor
tunnels on the tunnel bridge.
For each virtual network realized as a VLAN or flat network, a veth or a pair of patch ports is used
to connect the local VLAN on the integration bridge with the physical network bridge, with flow rules
adding, modifying, or stripping VLAN tags as necessary, thus preserving backward compatibility with
the way the OVS agent used to work prior to the tunneling capability (for more details, please look at
https://review.opendev.org/#/c/4367).
Bear in mind that this design decision may be overhauled in the future to support existing VLAN-tagged
traffic (coming from NFV VMs, for instance) and/or to deal with potential QinQ support natively
available in Open vSwitch.
Rationale
At the time the first design for the OVS agent came up, trunking in OpenStack was merely a pipe
dream. Since then, a lot has happened in the OpenStack platform, and many deployments have gone into
production since early 2012.
In order to address the vlan-aware-vms use case on top of Open vSwitch, the following aspects must be
taken into account:
• Design complexity: starting afresh is always an option, but a complete rearchitecture is only desirable
under some circumstances. After all, customers want solutions yesterday. It is noteworthy that
the OVS agent design is already relatively complex, as it accommodates a number of deployment
options, especially in relation to security rules and/or acceleration.
• Upgrade complexity: being able to retrofit the existing design means that an existing deployment
does not need to go through a forklift upgrade in order to expose new functionality; alternatively,
the desire of avoiding a migration requires a more complex solution that is able to support multiple
modes of operations;
• Design reusability: ideally, a proposed design can easily apply to the various technology backends
that the Neutron L2 agent supports: Open vSwitch and Linux Bridge.
• Performance penalty: no solution is appealing enough if it is unable to satisfy the stringent re-
quirement of high packet throughput, at least in the long term.
• Feature compatibility: VLAN transparency is, for better or for worse, intertwined with vlan awareness.
The former is about making the platform not interfere with the tag associated with the packets
sent by the VM, and letting the underlay figure out where the packet needs to be sent out; the latter
is about making the platform use the vlan tag associated with a packet to determine where the
packet needs to go. Ideally, a design choice to satisfy the awareness use case will not have a
negative impact on solving the transparency use case. Having said that, the two features are still
meant to be mutually exclusive in their application, and plugging subports into networks whose
vlan-transparency flag is set to True might have unexpected results. In fact, it would be impossible,
from the platform's point of view, to discern which tagged packets are meant to be treated
transparently and which ones are meant to be used for demultiplexing (in order to reach the right
destination). The outcome might only be predictable if two layers of vlan tags are stacked up
together, making guest support even more crucial for the combined use case.
It is clear by now that an acceptable solution must be assessed with these issues in mind. The potential
solutions worth enumerating are:
• VLAN interfaces: in layman's terms, these interfaces allow the traffic to be demuxed before it hits the
integration bridge, where the traffic will get isolated and sent off to the right destination. This
solution is proven to work for both iptables-based and native ovs security rules (credit to Rawlin
Peters). This solution has the following design implications:
– Design complexity: this requires relatively small changes to the existing OVS design, and it
can work with both iptables and native ovs security rules.
– Upgrade complexity: in order to employ this solution no major upgrade is necessary and
thus no potential dataplane disruption is involved.
– Design reusability: VLAN interfaces can easily be employed for both Open vSwitch and
Linux Bridge.
– Performance penalty: using VLAN interfaces means that the kernel must be involved. For
Open vSwitch, being able to use a fast path like DPDK would be an unresolved issue (Kernel
NIC interfaces are not on the roadmap for distros and OVS, and most likely will never be).
Even in the absence of an extra bridge, i.e. when using native ovs firewall, and with the
advent of userspace connection tracking that would allow the stateful firewall driver to work
with DPDK, the performance gap between a pure userspace DPDK capable solution and a
kernel based solution will be substantial, at least under certain traffic conditions.
– Feature compatibility: in order to keep the design simple once VLAN interfaces are adopted,
and yet enable VLAN transparency, Open vSwitch needs to support QinQ, which is currently
lacking as of 2.5 and with no ongoing plan for integration.
• Going full openflow: in layman's terms, this means programming the dataplane using OpenFlow
in order to provide tenant isolation and packet processing. This solution has the following design
implications:
– Design complexity: this requires a big rearchitecture of the current Neutron L2 agent solu-
tion.
– Upgrade complexity: existing deployments will be unable to work correctly unless one of
the following actions takes place: a) the agent handles both the old and the new way of wiring the
data path; or b) a dataplane migration is forced during a release upgrade, which may cause
(potentially unrecoverable) dataplane disruption.
– Design reusability: a solution for Linux Bridge will still be required to avoid widening the
gap between Open vSwitch (e.g. OVS has DVR but LB does not).
– Performance penalty: using OpenFlow will allow the agent to leverage the user space and fast
processing given by DPDK, but at a considerable engineering cost nonetheless. Security rules
will have to be provided by a learn-based firewall to fully exploit the capabilities of DPDK,
at least until user space connection tracking becomes available in OVS.
– Feature compatibility: with the adoption of Open Flow, tenant isolation will no longer be
provided by means of local vlan provisioning, thus making the requirement of QinQ support
no longer strictly necessary for Open vSwitch.
• Per trunk port OVS bridge: in layman's terms, this is similar to the first option, in that an extra
layer of mux/demux is introduced between the VM and the integration bridge (br-int), but instead
of using vlan interfaces, a combination of a new per-port OVS bridge and patch ports to wire this
new bridge with br-int will be used. This solution has the following design implications:
– Design complexity: the complexity of this solution can be considered in between the above
mentioned options in that some work is already available since Mitaka and the data path
wiring logic can be partially reused.
– Upgrade complexity: if two separate code paths are assumed to be maintained in the OVS
agent to handle regular ports and ports participating in a trunk, with no ability to convert from
one to the other (and vice versa), no migration is required. This is done at the cost of some
loss of flexibility and maintenance complexity.
– Design reusability: a solution to support vlan trunking for the Linux Bridge mech driver will
still be required to avoid widening the gap with Open vSwitch (e.g. OVS has DVR but LB
does not).
– Performance penalty: from a performance standpoint, the adoption of a trunk bridge relieves
the agent from employing kernel interfaces, thus unlocking the full potential of fast packet
processing. That said, this is only doable in combination with a native ovs firewall. At the
time of writing the only DPDK enabled firewall driver is the learn based one available in the
networking-ovs-dpdk repo;
– Feature compatibility: the existing local provisioning logic will not be affected by the intro-
duction of a trunk bridge, therefore use cases where VMs are connected to a vlan transparent
network via a regular port will still require QinQ support from OVS.
To summarize:
• VLAN interfaces (A) are compelling because they will lead to a relatively contained engineering cost
at the expense of performance. The Open vSwitch community will need to be involved in order to
deliver vlan transparency. Irrespective of whether this strategy is chosen for Open vSwitch or not,
this is still the only viable approach for Linux Bridge and thus is pursued to address Linux Bridge
support for VLAN trunking. To some extent, this option can also be considered a fallback strategy
for OVS deployments that are unable to adopt DPDK.
• Open Flow (B) is compelling because it will allow Neutron to unlock the full potential of Open
vSwitch, at the expense of development and operations effort. The development is confined within
the boundaries of the Neutron community in order to address vlan awareness and transparency (as
two distinct use cases, i.e. to be adopted separately). Stateful firewall (based on ovs conntrack)
limits the adoption for DPDK at the time of writing, but a learn-based firewall can be a suitable
alternative. Obviously this solution is not compliant with the iptables firewall.
• Trunk Bridges (C) try to bring the best of options A and B together as far as OVS development
and performance are concerned, but they come at the expense of maintenance complexity and loss
of flexibility. A Linux Bridge solution would still be required, and QinQ support will still be
needed to address vlan transparency.
All things considered, as far as OVS is concerned, option (C) is the most promising in the medium term.
Trunks and ports within trunks will have to be managed differently and, to start with, it is sensible to
restrict the ability to update ports (i.e. convert them) once they are bound to a particular bridge
(integration vs trunk). Security rules via iptables are obviously not supported, and never will be.
Option (A) for OVS could be pursued in conjunction with Linux Bridge support, if the effort is seen as
particularly low-hanging fruit. However, a working solution based on this option positions the OVS agent
as a sub-optimal platform for performance sensitive applications in comparison to other accelerated or
SDN-controller based solutions. Since further data plane performance improvement is hindered by the
extra use of kernel resources, this option is not at all appealing in the long term.
Embracing option (B) in the long run may be complicated by the adoption of option (C). The devel-
opment and maintenance complexity involved in Option (C) and (B) respectively poses the existential
question as to whether investing in the agent-based architecture is an effective strategy, especially if the
end result would look a lot like other maturing alternatives.
This implementation doesn't require any modification of the vif-drivers, since Nova will plug the vif of
the VM the same way as it does for traditional ports.
A VM is spawned passing to Nova the port-id of a parent port associated with a trunk. Nova/libvirt will
create the tap interface and will plug it into br-int or into the firewall bridge if using iptables firewall. In
the external-ids of the port Nova will store the port ID of the parent port. The OVS agent detects that
a new vif has been plugged. It gets the details of the new port and wires it. The agent configures it in
the same way as a traditional port: packets coming out from the VM will be tagged using the internal
VLAN ID associated to the network, packets going to the VM will be stripped of the VLAN ID. After
wiring it successfully the OVS agent will send a message notifying Neutron server that the parent port is
up. Neutron will send back to Nova an event to signal that the wiring was successful. If the parent port
is associated with one or more subports the agent will process them as described in the next paragraph.
Subport creation
If a subport is added to a parent port but no VM was booted using that parent port yet, no L2 agent will
process it (because at that point the parent port is not bound to any host). When a subport is created for
a parent port and a VM that uses that parent port is already running, the OVS agent will create a VLAN
interface on the VM tap using the VLAN ID specified in the subport segmentation id. There's a small
possibility that a race might occur: the firewall bridge might be created and plugged while the vif is not
there yet. The OVS agent needs to check if the vif exists before trying to create a subinterface. Let's see
how the models differ when using the iptables firewall or the ovs native firewall.
Iptables Firewall
+----------------------------+
| VM |
| eth0 eth0.100 |
+-----+-----------------+----+
|
|
+---+---+ +-----+-----+
| tap1 |-------| tap1.100 |
+---+---+ +-----+-----+
| |
| |
+---+---+ +---+---+
| qbr1 | | qbr2 |
+---+---+ +---+---+
| |
| |
+-----+-----------------+----+
| port 1 port 2 |
| (tag 3) (tag 5) |
| br-int |
+----------------------------+
Let's assume the subport is on network2 and uses segmentation ID 100. In the case of hybrid plugging
the OVS agent will have to create the firewall bridge (qbr2), create tap1.100 and plug it into qbr2. It will
connect qbr2 to br-int and set the subport ID in the external-ids of port 2.
Inbound traffic from the VM point of view
The untagged traffic will flow from port 1 to eth0 through qbr1. For the traffic coming out of port 2, the
internal VLAN ID of network2 will be stripped. The packet will then go untagged through qbr2 where
iptables rules will filter the traffic. The tag 100 will be pushed by tap1.100 and the packet will finally
get to eth0.100.
Outbound traffic from the VM point of view
The untagged traffic will flow from eth0 to port 1, going through qbr1 where firewall rules will be
applied. Traffic tagged with VLAN 100 will leave eth0.100 and go through tap1.100, where the VLAN 100
tag is stripped. It will reach qbr2, where iptables rules will be applied, and go to port 2. The internal VLAN
of network2 will be pushed by br-int when the packet enters port 2 because it's a tagged port.
OVS Firewall
+----------------------------+
| VM |
| eth0 eth0.100 |
+-----+-----------------+----+
|
|
+---+---+ +-----+-----+
| tap1 |-------| tap1.100 |
+---+---+ +-----+-----+
| |
| |
| |
+-----+-----------------+----+
| port 1 port 2 |
| (tag 3) (tag 5) |
| br-int |
+----------------------------+
When a subport is created, the OVS agent will create the VLAN interface tap1.100 and plug it into br-int.
Let's assume the subport is on network2.
Inbound traffic from the VM point of view
The traffic will flow untagged from port 1 to eth0. The traffic going out from port 2 will be stripped of
the VLAN ID assigned to network2. It will be filtered by the rules installed by the firewall and reach
tap1.100. tap1.100 will tag the traffic using VLAN 100. It will then reach the VM's eth0.100.
Outbound traffic from the VM point of view
The untagged traffic will flow and reach port 1 where it will be tagged using the VLAN ID associated to
the network. Traffic tagged with VLAN 100 will leave eth0.100 reach tap1.100 where VLAN 100 will
be stripped. It will then reach port2. It will be filtered by the rules installed by the firewall on port 2.
Then the packets will be tagged using the internal VLAN associated to network2 by br-int since port 2
is a tagged port.
Deleting a port that is an active parent in a trunk is forbidden. If the parent port has no trunk associated
(it's a normal port), it can be deleted. The OVS agent doesn't need to perform any action; the deletion
will simply result in the removal of the port data from the DB.
Trunk deletion
When Nova deletes a VM, it deletes the VM's corresponding Neutron ports only if they were created by
Nova when booting the VM. In the vlan-aware-vm case the parent port is passed to Nova, so the port
data will remain in the DB after the VM deletion. Nova will delete the VIF of the VM (in the example,
tap1) as part of the VM termination. The OVS agent will detect that deletion and notify the Neutron
server that the parent port is down. The OVS agent will clean up the corresponding subports as explained
in the next paragraph.
The deletion of a trunk that is used by a VM is not allowed. The trunk can be deleted (leaving the parent
port intact) when the parent port is not used by any VM. After the trunk is deleted, the parent port can
also be deleted.
Subport deletion
Removing a subport that is associated with a parent port that was not used to boot any VM is a no-op
from the OVS agent's perspective. When a subport associated with a parent port that was used to boot a
VM is deleted, the OVS agent will take care of removing the firewall bridge (if using the iptables firewall)
and the port on br-int.
This implementation is based on this etherpad. Credits to Bence Romsics. The IDs used for bridge and
port names are truncated.
+--------------------------------+
| VM |
| eth0 eth0.100 |
+-----+--------------------+-----+
|
|
+-----+--------------------------+
| tap1 |
| tbr-trunk-id |
| |
| tpt-parent-id spt-subport-id |
| (tag 100) |
+-----+-----------------+--------+
| |
| |
| |
+-----+-----------------+---------+
| tpi-parent-id spi-subport-id |
| (tag 3) (tag 5) |
|             br-int              |
+---------------------------------+
• tpt-parent-id: trunk bridge side of the patch port that implements a trunk.
• tpi-parent-id: int bridge side of the patch port that implements a trunk.
• spt-subport-id: trunk bridge side of the patch port that implements a subport.
• spi-subport-id: int bridge side of the patch port that implements a subport.
Trunk creation
A VM is spawned passing to Nova the port-id of a parent port associated with a trunk. Neutron will
pass to Nova the bridge into which to plug the vif as part of the vif details. The os-vif driver creates the
trunk bridge tbr-trunk-id in plug() if it does not exist. It will create the tap interface tap1 and plug it into
tbr-trunk-id, setting the parent port ID in the external-ids. The OVS agent will be monitoring the creation
of ports on the trunk bridges. When it detects that a new port has been created on the trunk bridge, it
will do the following:
A patch port is created to connect the trunk bridge to the integration bridge. tpt-parent-id, the trunk
bridge side of the patch, is not associated with any tag; it will carry untagged traffic. tpi-parent-id, the
br-int side of the patch port, is tagged with VLAN 3. We assume that the trunk is on network1, which on this
host is associated with VLAN 3. The OVS agent will set the trunk ID in the external-ids of tpt-parent-id
and tpi-parent-id. If the parent port is associated with one or more subports, the agent will process them
as described in the next paragraph.
Subport creation
If a subport is added to a parent port but no VM was booted using that parent port yet, the agent won't
process the subport (because at this point there's no node associated with the parent port). When a subport
is added to a parent port that is used by a VM, the OVS agent will create a new patch port:
This patch port connects the trunk bridge to the integration bridge. spt-subport-id, the trunk bridge side
of the patch, is tagged using VLAN 100. We assume that the segmentation ID of the subport is 100.
spi-subport-id, the br-int side of the patch port, is tagged with VLAN 5. We assume that the subport is
on network2, which on this host uses VLAN 5. The OVS agent will set the subport ID in the external-ids
of spt-subport-id and spi-subport-id.
Inbound traffic from the VM point of view
The traffic coming out of tpi-parent-id will be stripped by br-int of VLAN 3. It will reach tpt-parent-id
untagged and from there tap1. The traffic coming out of spi-subport-id will be stripped by br-int of
VLAN 5. It will reach spt-subport-id where it will be tagged with VLAN 100 and it will then get to tap1
tagged.
Outbound traffic from the VM point of view
The untagged traffic coming from tap1 will reach tpt-parent-id and from there tpi-parent-id, where it
will be tagged using VLAN 3. The traffic tagged with VLAN 100 from tap1 will reach spt-subport-id.
VLAN 100 will be stripped since spt-subport-id is a tagged port, and the packet will reach spi-subport-id,
where it is tagged using VLAN 5.
Deleting a port that is an active parent in a trunk is forbidden. If the parent port has no trunk associated,
it can be deleted. The OVS agent doesn't need to perform any action.
Trunk deletion
When Nova deletes a VM, it deletes the VM's corresponding Neutron ports only if they were created by
Nova when booting the VM. In the vlan-aware-vm case the parent port is passed to Nova, so the port
data will remain in the DB after the VM deletion. Nova will delete the port on the trunk bridge where
the VM is plugged. The L2 agent will detect that and delete the trunk bridge. It will notify the Neutron
server that the parent port is down.
The deletion of a trunk that is used by a VM is not allowed. The trunk can be deleted (leaving the parent
port intact) when the parent port is not used by any VM. After the trunk is deleted, the parent port can
also be deleted.
Subport deletion
The OVS agent will delete the patch port pair corresponding to the subport deleted.
Agent resync
During resync the agent should check that all the trunks and subports are still valid. It will delete the stale
trunks and subports using the procedure specified in the previous paragraphs, according to the
implementation.
Further Reading
This agent uses the Linux Bridge to provide L2 connectivity for VM instances running on the compute
node to the public network. A graphical illustration of the deployment can be found in the Networking
Guide.
In most common deployments, there is a compute and a network node. On both the compute and
the network node, the Linux Bridge Agent will manage virtual switches, connectivity among them,
and interaction via virtual ports with other network components such as namespaces and underlying
interfaces. Additionally, on the compute node, the Linux Bridge Agent will manage security groups.
Three use cases and their packet flow are documented as follows:
1. Linux Bridge: Provider networks
2. Linux Bridge: Self-service networks
3. Linux Bridge: High availability using VRRP
SR-IOV (Single Root I/O Virtualization) is a specification that allows a PCIe device to appear to be
multiple separate physical PCIe devices. SR-IOV works by introducing the idea of physical functions
(PFs) and virtual functions (VFs). Physical functions (PFs) are full-featured PCIe functions. Virtual
functions (VFs) are lightweight functions that lack configuration resources.
SR-IOV supports VLANs for L2 network isolation; other networking technologies such as
VXLAN/GRE may be supported in the future.
SR-IOV NIC agent manages configuration of SR-IOV Virtual Functions that connect VM instances
running on the compute node to the public network.
In most common deployments, there are compute and network nodes. A compute node can support
VM connectivity via an SR-IOV enabled NIC. The SR-IOV NIC agent manages the admin state of
Virtual Functions. Quality of service is partially implemented with the bandwidth limit and minimum
bandwidth rules. In the future it will manage additional settings, such as additional quality of service
rules, rate limit settings, spoofcheck and more. The network node will usually be deployed with either
Open vSwitch or Linux Bridge to support network node functionality.
Further Reading
L3 agent extensions
L3 agent extensions are part of a generalized L2/L3 extension framework. See agent extensions.
The L3 agent extension API object includes several methods that expose router information to L3 agent
extensions (a short usage sketch follows the list):
1. get_routers_in_project
2. get_router_hosting_port
3. is_router_in_namespace
4. get_router_info
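A minimal sketch of an extension consuming this API. The class below is hypothetical and, to keep the
sketch self-contained, does not derive from the real base class (production extensions derive from
neutron_lib.agent.l3_extension.L3AgentExtension); it only illustrates how the agent hands over the API
object and how the methods listed above can be used.
class ExampleL3AgentExtension(object):
    """Hypothetical sketch of an L3 agent extension using the agent API."""

    def consume_api(self, agent_api):
        # The L3 agent hands over the extension API object described above.
        self.agent_api = agent_api

    def add_router(self, context, data):
        # Look up the RouterInfo object for the router that was just added.
        router_info = self.agent_api.get_router_info(data['id'])
        if router_info is not None:
            # Extension-specific configuration of the router would go here.
            pass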
This page discusses the usage of Neutron with Layer 3 functionality enabled.
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| allocation_pools | 10.0.0.2-10.0.0.254 |
| cidr | 10.0.0.0/24 |
| created_at | 2016-11-08T21:55:22Z |
| description | |
| dns_nameservers | |
| enable_dhcp | True |
| gateway_ip | 10.0.0.1 |
| host_routes | |
| id | c5c9f5c2-145d-46d2-a513-cf675530eaed |
| ip_version | 4 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| name | private-subnet |
| network_id | 713bae25-8276-4e0a-a453-e59a1d65425a |
| project_id | 35e3820f7490493ca9e3a5e685393298 |
| revision_number | 2 |
| service_types | |
| subnetpool_id | b1f81d96-d51d-41f3-96b5-a0da16ad7f0d |
+-------------------------+------------------------------------------------------------------------+
| Field                   | Value                                                                  |
+-------------------------+------------------------------------------------------------------------+
| admin_state_up          | UP                                                                     |
| availability_zone_hints |                                                                        |
| availability_zones      | nova                                                                   |
| created_at              | 2016-11-08T21:55:30Z                                                   |
| description             |                                                                        |
| distributed             | False                                                                  |
| external_gateway_info   | {"network_id": "6ece2847-971b-487a-9c7b-184651ebbc82",                 |
|                         | "enable_snat": true, "external_fixed_ips": [{"subnet_id":              |
|                         | "0d9c4261-4046-462f-9d92-64fb89bc3ae6", "ip_address": "172.24.4.7"},   |
|                         | {"subnet_id": "9e90b059-da97-45b8-8cb8-f9370217e181",                  |
|                         | "ip_address": "2001:db8::1"}]}                                         |
| flavor_id               | None                                                                   |
| ha                      | False                                                                  |
| project_id              | 35e3820f7490493ca9e3a5e685393298                                       |
| revision_number         | 8                                                                      |
| routes                  |                                                                        |
| status                  | ACTIVE                                                                 |
| updated_at              | 2016-11-08T21:55:51Z                                                   |
+-------------------------+------------------------------------------------------------------------+
vagrant@bionic64:~/devstack$ openstack port list --router router1
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+--------+
| ID                                   | Name | MAC Address       | Fixed IP Addresses                                                               | Status |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+--------+
| 420abb60-2a5a-4e80-90a3-3ff47742dc53 |      | fa:16:3e:2d:5c:4e | ip_address='172.24.4.7', subnet_id='0d9c4261-4046-462f-9d92-64fb89bc3ae6'        | ACTIVE |
|                                      |      |                   | ip_address='2001:db8::1', subnet_id='9e90b059-da97-45b8-8cb8-f9370217e181'       |        |
| b42d789d-c9ed-48a1-8822-839c4599301e |      | fa:16:3e:0a:ff:24 | ip_address='10.0.0.1', subnet_id='c5c9f5c2-145d-46d2-a513-cf675530eaed'          | ACTIVE |
| e3b7fede-277e-4c72-b66c-418a582b61ca |      | fa:16:3e:13:dd:42 | ip_address='2001:db8:8000::1', subnet_id='6fa3bab9-103e-45d5-872c-91f21b52ceda'  | ACTIVE |
+--------------------------------------+------+-------------------+----------------------------------------------------------------------------------+--------+
See the Networking Guide for more detail on the creation of networks, subnets, and routers.
router1 in the Neutron logical network is realized through a port (qr-0ba8700e-da) in Open vSwitch
attached to br-int.
The neutron-l3-agent uses the Linux IP stack and iptables to perform L3 forwarding and NAT. In order
to support multiple routers with potentially overlapping IP addresses, neutron-l3-agent defaults to using
Linux network namespaces to provide isolated forwarding contexts. As a result, the IP addresses of
routers will not be visible simply by running ip addr list or ifconfig on the node. Similarly, you will not
be able to directly ping fixed IPs.
To do either of these things, you must run the command within a particular router's network namespace.
The namespace will have the name qrouter-<UUID of the router>.
For example:
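A minimal sketch (the router UUID and the fixed IP are placeholders for real values from your deployment):
ip netns exec qrouter-<UUID of the router> ip addr list
ip netns exec qrouter-<UUID of the router> ping <fixed-ip>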
Provider Networking
L3 agent extensions
Further Reading
Live-migration
Let's consider a VM with one port migrating from host1 with nova-compute1, neutron-l2-agent1 and
neutron-l3-agent1 to host2 with nova-compute2, neutron-l2-agent2 and neutron-l3-agent2.
Since the VM that is about to migrate is hosted by nova-compute1, nova sends the live-migration order
to nova-compute1 through RPC.
Nova Live Migration consists of the following stages:
• Pre-live-migration
• Live-migration-operation
• Post-live-migration
Pre-live-migration actions
Nova-compute1 will first ask nova-compute2 to perform pre-live-migration actions with a synchronous
RPC call. Nova-compute2 will use the neutron REST API to retrieve the list of the VM's ports. Then, it
calls its vif driver to create the VM's port (VIF) using plug_vifs().
In case Open vSwitch hybrid plug is used, neutron-l2-agent2 will detect this new VIF, request the
device details from the neutron server and configure it accordingly. However, the port's status won't
change, since this port is not bound to nova-compute2.
Nova-compute1 calls setup_networks_on_hosts. This updates the Neutron port's binding:profile with the
information of the target host. The port update RPC message sent out by Neutron server will be received
by neutron-l3-agent2, which proactively sets up the DVR router.
If pre-live-migration fails, nova rolls back and the port is removed from host2. If pre-live-migration
succeeds, nova proceeds with live-migration-operation.
Live-migration-operation
Once nova-compute2 has performed pre-live-migration actions, nova-compute1 can start the live-
migration. This results in the creation of the VM and its corresponding tap interface on node 2.
In case Open vSwitch normal plug, Linux bridge or MacVTap is being used, neutron-l2-agent2 will
detect this new tap device and configure it accordingly. However, the port's status won't change, since
this port is not bound to nova-compute2.
As soon as the instance is active on host2, the original instance on host1 gets removed and with it
the corresponding tap device. Assuming OVS-hybrid plug is NOT used, Neutron-l2-agent1 detects the
removal and tells the neutron server to set the port's status to DOWN with RPC messages.
There is no rollback if a failure happens in the live-migration-operation stage. TBD: Errors are handled
by the post-live-migration stage.
• Some host devices that are specified in the instance definition are not present on the target host.
Migration fails before it really starts. This can happen with the MacVTap agent. See bug
https://bugs.launchpad.net/bugs/1550400
Post-live-migration actions
If neutron hasn't already processed the REST call update_port(binding=host2), the port status will
effectively move to BUILD and then to DOWN. Otherwise, the port is bound to host2, and neutron won't
change the port status since it's not bound to the host that is sending RPC messages.
There is no rollback if failure happens in post-live-migration stage. In the case of an error, the instance
is set into ERROR state.
Post-Copy Migration
Usually, live migration is executed as pre-copy migration. The instance is active on host1 until nearly
all memory has been copied to host2. If a certain threshold of copied memory is met, the instance on
the source gets paused, the rest of the memory is copied over and the instance is started on the target.
The challenge with this approach is that migration might take an infinite amount of time when the
instance is writing heavily to memory.
This issue gets solved with post-copy migration. At some point in time, the instance on host2 will be
set to active, although still a huge amount of memory pages reside only on host1. The phase that starts
now is called the post_copy phase. If the instance tries to access a memory page that has not yet been
transferred, libvirt/qemu takes care of moving this page to the target immediately. New pages will only
be written to the source. With this approach the migration operation takes a finite amount of time.
Today, the rebinding of the port from host1 to host2 happens in the post_live_migration phase, after
the migration has finished. This is fine for the pre-copy case, as the time window between the activation
of the instance on the target and the binding of the port to the target is pretty small. This becomes more
problematic for the post-copy migration case. The instance becomes active on the target pretty early,
but the port binding still happens after the migration has finished. During this time window, the instance
might not be reachable via the network. This should be solved with bug
https://bugs.launchpad.net/nova/+bug/1605016
Flow Diagram
OVS-Hybrid plug
The sequence with RPC messages from neutron-l2-agent processed first is described in the following
UML sequence diagram
The extension manager for ML2 was introduced in Juno (more details can be found in the approved
spec). The feature allows for extending ML2 resources without actually having to introduce cross-cutting
concerns to ML2. The mechanism has been applied to a number of use cases, and extensions
that currently use this framework are available under ml2/extensions.
This extension is an information-only API that allows a user or process to determine the number of IPs
that are consumed across networks and their subnets' allocation pools. Each network and embedded
subnet returns with values for used_ips and total_ips, making it easy to determine how much of your
network's IP space is consumed.
This API provides the ability for network administrators to periodically list usage (manual or automated)
in order to preemptively add new network capacity when thresholds are exceeded.
Important Note:
This API tracks a network's consumable IPs. What's the distinction? After a network and its subnets are
created, consumable IPs are:
• Consumed in the subnet's allocations (derives used IPs)
API Specification
GET /v2.0/network-ip-availabilities
Example response
Response:
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
{
"network_ip_availabilities": [
{
"network_id": "f944c153-3f46-417b-a3c2-487cd9a456b9",
"network_name": "net1",
"subnet_ip_availability": [
{
"cidr": "10.0.0.0/24",
"ip_version": 4,
"subnet_id": "46b1406a-8373-454c-8eb8-500a09eb77fb",
"subnet_name": "",
"total_ips": 253,
"used_ips": 3
}
],
"tenant_id": "test-project",
"total_ips": 253,
"used_ips": 3
},
{
"network_id": "47035bae-4f29-4fef-be2e-2941b72528a8",
"network_name": "net2",
"subnet_ip_availability": [],
"tenant_id": "test-project",
"total_ips": 0,
"used_ips": 0
},
{
"network_id": "2e3ea0cd-c757-44bf-bb30-42d038687e3f",
"network_name": "net3",
"subnet_ip_availability": [
Availability by network ID
GET /v2.0/network-ip-availabilities/{network_uuid}
Example response
Response:
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
{
"network_ip_availability": {
"network_id": "f944c153-3f46-417b-a3c2-487cd9a456b9",
"network_name": "net1",
"subnet_ip_availability": [
{
"cidr": "10.0.0.0/24",
"ip_version": 4,
"subnet_name": "",
"subnet_id": "46b1406a-8373-454c-8eb8-500a09eb77fb",
"total_ips": 253,
"used_ips": 3
}
],
"tenant_id": "test-project",
"total_ips": 253,
"used_ips": 3
}
}
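As an illustration of how the list response shown earlier (GET /v2.0/network-ip-availabilities) might be
consumed, here is a small, hypothetical helper that flags networks whose address utilization crosses a
threshold; the function name and threshold are assumptions, not part of the API:
def networks_over_threshold(body, threshold=0.8):
    """Return IDs of networks using at least `threshold` of their IP space."""
    over = []
    for net in body.get('network_ip_availabilities', []):
        total = net['total_ips']
        if total and float(net['used_ips']) / total >= threshold:
            over.append(net['network_id'])
    return over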
Objects in neutron
Object versioning is a key concept in achieving rolling upgrades. Since its initial implementation by the
nova community, a versioned object model has been pushed to an oslo library so that its benefits can be
shared across projects.
Oslo VersionedObjects (aka OVO) is a database facade, where you define the middle layer between
software and the database schema. In this layer, a versioned object per database resource is created
with a strict data definition and version number. With OVO, when you change the database schema,
the version of the object also changes and a backward compatible translation is provided. This allows
different versions of software to communicate with one another (via RPC).
OVO is also commonly used for RPC payload versioning. OVO creates versioned dictionary messages
by defining a strict structure and keeping strong typing. Because of it, you can be sure of what is sent
and how to use the data on the receiving end.
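For instance, serializing an object (using the DNSNameServer object from the examples below) produces
a versioned dictionary along these lines; this is only a sketch, and the exact set of keys besides
versioned_object.version and versioned_object.data depends on the OVO library:
dns = DNSNameServer(context, address='10.0.0.2', subnet_id='xxx', order=1)
primitive = dns.obj_to_primitive()
# primitive now looks roughly like:
# {'versioned_object.name': 'DNSNameServer',
#  'versioned_object.version': '1.0',
#  'versioned_object.data': {'address': '10.0.0.2', 'subnet_id': 'xxx',
#                            'order': 1}}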
Usage of objects
CRUD operations
# to update fields:
dns = DNSNameServer.get_object(context, address='asd', subnet_id='xxx')
dns.order = 2
dns.update()
# if you don't care about keeping the object, you can execute the update
# without fetch of the object state from the underlying persistent layer
count = DNSNameServer.update_objects(
context, {'order': 3}, address='asd', subnet_id='xxx')
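For completeness, a sketch of the remaining basic calls on the same object (the field values are
illustrative):
# to create a new object and persist it:
dns = DNSNameServer(context, address='10.0.0.2', subnet_id='xxx', order=1)
dns.create()

# to fetch a single object, or all objects matching a filter:
dns = DNSNameServer.get_object(context, address='10.0.0.2', subnet_id='xxx')
dnses = DNSNameServer.get_objects(context, subnet_id='xxx')

# to delete the object:
dns.delete()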
The NeutronDbObject class has strict validation on which field sorting and filtering can hap-
pen. When calling get_objects(), count(), update_objects(), delete_objects()
and objects_exist(), validate_filters() is invoked to see if it's a supported filter
criterion (which is by default non-synthetic fields only). Additional filters can be defined using
register_filter_hook_on_model(). This will add the requested string to valid filter names
in object implementation. It is optional.
In order to disable filter validation, validate_filters=False needs to be passed as an argument
in aforementioned methods. It was added because the default behaviour of the neutron API is to accept
everything at the API level and filter it out at the DB layer. This can be used by out-of-tree extensions.
register_filter_hook_on_model() is a complementary implementation in
the NeutronDbObject layer to the DB layer's neutron_lib.db.model_query.
register_hook(), which adds support for extra filtering during construction of the SQL
query. When an extension defines an extra query hook, it needs to be registered using the object's
register_filter_hook_on_model(), if it is not already included in the object's fields.
To limit or paginate results, a Pager object can be used. It accepts sorts (a list of (key, direction)
tuples), limit, page_reverse and marker keywords.
# filtering
# sorting
# direction True == ASC, False == DESC
direction = False
pager = Pager(sorts=[('order', direction)])
dnses = DNSNameServer.get_objects(context, _pager=pager, subnet_id='xxx')
fields = {
'id': common_types.UUIDField(),
'name': obj_fields.StringField(),
'subnetpool_id': common_types.UUIDField(nullable=True),
'ip_version': common_types.IPVersionEnumField()
}
VERSION is mandatory and defines the version of the object. Initially, set the VERSION field
to 1.0. Change VERSION if fields or their types are modified. When you change the version
of objects being exposed via RPC, add method obj_make_compatible(self, primitive,
target_version). For example, if a new version introduces a new parameter, it needs to be re-
moved for previous versions:
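A sketch of such a method, assuming a hypothetical new_parameter field added in version 1.1
(convert_version_to_tuple comes from oslo_utils.versionutils):
from oslo_utils import versionutils

def obj_make_compatible(self, primitive, target_version):
    _target_version = versionutils.convert_version_to_tuple(target_version)
    if _target_version < (1, 1):
        # new_parameter did not exist in 1.0, so drop it from the primitive
        primitive.pop('new_parameter', None)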
In the following example the object has changed an attribute definition. For example, in version 1.1
description is allowed to be None but not in version 1.0:
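A sketch for that case, assuming version 1.0 required a non-None description:
from oslo_utils import versionutils

def obj_make_compatible(self, primitive, target_version):
    _target_version = versionutils.convert_version_to_tuple(target_version)
    if _target_version < (1, 1) and primitive.get('description') is None:
        # 1.0 consumers cannot handle a null description; fall back to ''
        primitive['description'] = ''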
Using the first example as reference, this is how the unit test can be implemented:
def test_object_version_degradation_1_1_to_1_0(self):
    OVO_obj_1_1 = self._method_to_create_this_OVO()
    OVO_obj_1_0 = OVO_obj_1_1.obj_to_primitive(target_version='1.0')
    self.assertNotIn('new_parameter', OVO_obj_1_0['versioned_object.data'])
Note: Standard Attributes are automatically added to OVO fields in the base class. Attributes like
description, created_at, updated_at and revision_number are added this way.
primary_keys is used to define the list of fields that uniquely identify the object. In the case of database-backed
objects, it's usually mapped onto SQL primary keys. For immutable object fields that cannot be
changed, there is a fields_no_update list, which contains primary_keys by default.
If there is a situation where a field needs to be named differently in an object than in the database
schema, you can use fields_need_translation. This dictionary contains the name of the field
in the object definition (the key) and the name of the field in the database (the value). This allows having
a different object layer representation for database persisted data. For example, in IP allocation pools:
fields_need_translation = {
'start': 'first_ip', # field_ovo: field_db
'end': 'last_ip'
}
synthetic_fields may come in handy here. This object property can define a list of object fields that
don't belong to the object's database model and that hence have to be implemented in some custom
way. Some of those fields map to orm.relationships defined on models, while others are
completely untangled from the database layer.
When exposing existing orm.relationships as an ObjectField-typed field, you can use
the foreign_keys object property that defines a link between two object types. When
used, it allows the objects framework to automatically instantiate child objects, and fill the relevant
parent fields, based on orm.relationships defined on parent models. In order to
automatically populate the synthetic_fields, the foreign_keys property is introduced.
The load_synthetic_db_fields() method from NeutronDbObject (see
https://opendev.org/openstack/neutron/tree/neutron/objects/base.py?h=stable/ocata#n516) uses
foreign_keys to match the foreign key in the related object with the local field that the foreign key
refers to. See the simplified examples:
class DNSNameServerSqlModel(model_base.BASEV2):
    address = sa.Column(sa.String(128), nullable=False, primary_key=True)
    subnet_id = sa.Column(sa.String(36),
                          sa.ForeignKey('subnets.id', ondelete="CASCADE"),
                          primary_key=True)


@obj_base.VersionedObjectRegistry.register
class DNSNameServerOVO(base.NeutronDbObject):
    VERSION = '1.0'

    db_model = DNSNameServerSqlModel

    fields = {
        'address': obj_fields.StringField(),
        'subnet_id': common_types.UUIDField(),
    }


@obj_base.VersionedObjectRegistry.register
class SubnetOVO(base.NeutronDbObject):
    VERSION = '1.0'

    db_model = SubnetSqlModel  # the Subnet DB model (definition not shown)

    fields = {
        'id': common_types.UUIDField(),  # HasId from model class
        'project_id': obj_fields.StringField(nullable=True),  # HasProject from model class
        'subnet_name': obj_fields.StringField(nullable=True),
        'dns_nameservers': obj_fields.ListOfObjectsField('DNSNameServer',
                                                         nullable=True),
        'allocation_pools': obj_fields.ListOfObjectsField('IPAllocationPoolOVO',
                                                          nullable=True)
    }


@obj_base.VersionedObjectRegistry.register
class IPAllocationPoolOVO(base.NeutronDbObject):
    VERSION = '1.0'

    db_model = IPAllocationPoolSqlModel

    foreign_keys = {'SubnetOVO': {'subnet_id': 'id'}}

    fields = {
        'subnet_id': common_types.UUIDField()
    }
Note: foreign_keys is declared in the related object IPAllocationPoolOVO, in the same way as it is
done in the SQL model IPAllocationPoolSqlModel: sa.ForeignKey('subnets.id')
Note: Only a single foreign key is allowed (usually the parent ID); you cannot link through multiple
model attributes.
It is important to remember the nullable parameter. In the SQLAlchemy model, the nullable
parameter defaults to True, while for OVO fields, nullable defaults to False. Make sure you
correctly map database column nullability properties to the relevant object fields.
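For example (a sketch with a hypothetical model and object pair):
class ExampleSqlModel(model_base.BASEV2):
    id = sa.Column(sa.String(36), primary_key=True)
    # SQLAlchemy columns are nullable=True unless stated otherwise
    description = sa.Column(sa.String(255))


@obj_base.VersionedObjectRegistry.register
class ExampleOVO(base.NeutronDbObject):
    VERSION = '1.0'

    db_model = ExampleSqlModel

    fields = {
        'id': common_types.UUIDField(),
        # OVO fields default to nullable=False, so mirror the column explicitly
        'description': obj_fields.StringField(nullable=True),
    }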
Synthetic fields
synthetic_fields is a list of fields that are not directly backed by corresponding SQL table
attributes of the object. Synthetic fields are not limited in the types that can be used to implement them.
fields = {
    # field that contains another single NeutronDbObject of the
    # NetworkDhcpAgentBinding type
    'dhcp_agents': obj_fields.ObjectField('NetworkDhcpAgentBinding',
                                          nullable=True),
    'shared': obj_fields.BooleanField(default=False),
    'subnets': obj_fields.ListOfObjectsField('Subnet', nullable=True)
}

# All three fields do not belong to the corresponding SQL table, and will be
# implemented in some object-specific way.
synthetic_fields = ['dhcp_agents', 'shared', 'subnets']
Sometimes you may want to expose a field on an object that is not mapped into a corresponding database
model attribute or its orm.relationship, or you may want to expose orm.relationship data in
a format that is not directly mapped onto a child object type. In this case, here is what you need to do
to implement custom getters and setters for the custom field. A custom method to load the synthetic
fields can be helpful if the field is not directly defined in the database, if an OVO class is not suitable to
load the data, or if the related object contains only the ID of the parent object and a property of it, for
example subnet_id and is_external.
In order to implement the custom method to load the synthetic field, you need to provide a loading method
in the OVO class and override the base class methods from_db_object() and obj_load_attr().
The first one is responsible for loading the fields into object attributes when calling get_object(),
get_objects(), create() and update(). The second is responsible for loading an attribute when
it is not set in the object. Also, when you need to create the related object with attributes passed in the
constructor, the create() and update() methods need to be overwritten. Additionally, the is_external
attribute can be exposed as a boolean instead of as an object-typed field. When a field is changed but
doesn't need to be saved into the database, obj_reset_changes() can be called to tell the OVO library
to ignore that. Let's see an example:
@obj_base.VersionedObjectRegistry.register
class ExternalSubnet(base.NeutronDbObject):
VERSION = '1.0'
fields = {'subnet_id': common_types.UUIDField(),
'is_external': obj_fields.BooleanField()}
primary_keys = ['subnet_id']
foreign_keys = {'Subnet': {'subnet_id': 'id'}}
@obj_base.VersionedObjectRegistry.register
class Subnet(base.NeutronDbObject):
VERSION = '1.0'
fields = {'external': obj_fields.BooleanField(nullable=True),}
synthetic_fields = ['external']
def create(self):
fields = self.get_changes()
with db_api.context_manager.writer.using(context):
if 'external' in fields:
ExternalSubnet(context, subnet_id=self.id,
is_external=fields['external']).create()
# Call to super() to create the SQL record for the object, and
# reload its fields from the database, if needed.
super(Subnet, self).create()
def update(self):
fields = self.get_changes()
with db_api.context_manager.writer.using(context):
if 'external' in fields:
# delete the old ExternalSubnet record, if present
obj_db_api.delete_objects(
self.obj_context, ExternalSubnet.db_model,
subnet_id=self.id)
# create the new intended ExternalSubnet object
ExternalSubnet(context, subnet_id=self.id,
is_external=fields['external']).create()
# calling super().update() will reload the synthetic fields
# and also will update any changed non-synthetic fields, if any
super(Subnet, self).update()
else:
# perform extra operation to fetch the data from DB
external_obj = ExternalSubnet.get_object(context,
subnet_id=self.id)
external = external_obj.is_external if external_obj else None
In the above example, the get_object(s) methods do not have to be overwritten, because
from_db_object() takes care of loading the synthetic fields in a custom way.
Standard attributes
The standard attributes are added automatically in the metaclass DeclarativeObject. If adding a
standard attribute, it has to be added in neutron/objects/extensions/standardattributes.
py. It will be added to all relevant objects that use the standardattributes model. Be careful
when adding something to the above, because it could trigger a change in the object's VERSION. For
more on how standard attributes work, see the standard attributes documentation.
RBAC is currently implemented for resources like Subnet (*), Network and QosPolicy. (*) Subnet is a
special case, because access control of Subnet depends on Network RBAC entries.
The RBAC support for objects is defined in neutron/objects/rbac_db.py. It defines new
base class NeutronRbacObject. The new class wraps standard NeutronDbObject methods like
create(), update() and to_dict(). It checks if the shared attribute is defined in the fields
dictionary and adds it to synthetic_fields. Also, rbac_db_model is required to be defined in
Network and QosPolicy classes.
NeutronRbacObject is a common place to handle all operations on the RBAC entries, like
getting the information whether a resource is shared or not, and creating and updating the entries. By
wrapping the NeutronDbObject methods, it manipulates the shared attribute when the create() and
update() methods are called.
The example of defining the Network OVO:
fields = {
'id': common_types.UUIDField(),
'project_id': obj_fields.StringField(nullable=True),
'name': obj_fields.StringField(nullable=True),
# share is required to be added to fields
'shared': obj_fields.BooleanField(default=False),
}
Note: The shared field is not added to the synthetic_fields, because NeutronRbacObject
requires adding it by itself; otherwise ObjectActionError is raised (see
https://opendev.org/openstack/neutron/tree/neutron/objects/rbac_db.py?h=stable/ocata#n291).
One of the methods to extend neutron resources is to add an arbitrary value to the dictionary representing
the data by providing an extend_(subnet|port|network)_dict() function and defining a loading
method.
From the DB perspective, all the data will be loaded, including all declared fields from DB relationships.
The current implementation for core resources (Port, Subnet, Network, etc.) is that the DB result is parsed
by make_<resource>_dict() and extend_<resource>_dict(). When the extension is enabled,
extend_<resource>_dict() takes the DB results and declares new fields in the resulting dict. When
the extension is not enabled, the data will be fetched, but will not be populated into the resulting dict,
because extend_<resource>_dict() will not be called.
Plugins can still use objects for some work, but then convert them to dicts and work as they please,
extending the dict as they wish.
For example:
class TestSubnetExtension(model_base.BASEV2):
subnet_id = sa.Column(sa.String(36),
sa.ForeignKey('subnets.id', ondelete="CASCADE"),
primary_key=True)
value = sa.Column(sa.String(64))
subnet = orm.relationship(
    models_v2.Subnet,
    # here is the definition of loading the extension with the Subnet model:
    backref=orm.backref('extension', cascade='delete', uselist=False))
@oslo_obj_base.VersionedObjectRegistry.register_if(False)
class TestSubnetExtensionObject(obj_base.NeutronDbObject):
# Version 1.0: Initial version
VERSION = '1.0'
db_model = TestSubnetExtension
fields = {
'subnet_id': common_types.UUIDField(),
'value': obj_fields.StringField(nullable=True)
}
primary_keys = ['subnet_id']
foreign_keys = {'Subnet': {'subnet_id': 'id'}}
@obj_base.VersionedObjectRegistry.register
class Subnet(base.NeutronDbObject):
# Version 1.0: Initial version
VERSION = '1.0'
fields = {
    'id': common_types.UUIDField(),
    'extension': obj_fields.ObjectField(TestSubnetExtensionObject.__name__,
                                        nullable=True),
}
synthetic_fields = ['extension']
The above example is the ideal situation, where all extensions have objects adopted and enabled in core
neutron resources.
With the introduction of the OVO work in tree, the interface between base plugin code and registered
extension functions hasn't been changed. Those still receive a SQLAlchemy model, not an object. This is
achieved by capturing the corresponding database model on get_***/create/update, and exposing
it via <object>.db_obj.
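For example, an extension function that still expects a model can be fed from an object along these lines
(a sketch; the object lookup and the extension function shown are illustrative):
from neutron.objects import subnet as subnet_obj

subnet = subnet_obj.Subnet.get_object(context, id=subnet_id)
# The versioned object captured the SQLAlchemy model when it was loaded,
# so legacy extension code that expects a model instance can still be called.
extend_subnet_dict(result_dict, subnet.db_obj)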
While the code to check object versions is meant to remain for a long period of time, in the interest of
not accruing too much cruft over time, it is not intended to be permanent. OVO downgrade code
should account for code that is within the upgrade window of any major OpenStack distribution. The
longest currently known is for Ubuntu Cloud Archive, which is to upgrade four versions, meaning that
during the upgrade the control nodes would be running a release that is four releases newer than what is
running on the computes.
Known fast forward upgrade windows are:
• Red Hat OpenStack Platform (RHOSP): X -> X+3 [7]
[7] https://access.redhat.com/support/policy/updates/openstack/platform/
All objects can support tenant_id and project_id filters and fields at the same time; it is automat-
ically enabled for all objects that have a project_id field. The base NeutronDbObject class has
support for exposing tenant_id in dictionary access to the object fields (subnet['tenant_id'])
and in the to_dict() method. There is a tenant_id read-only property for every object that has
project_id in fields. It is not exposed in the obj_to_primitive() method, which means
that tenant_id will not be sent over the RPC callback wire. When talking about filtering/sorting by
tenant_id, the filters should be converted to expose the project_id field. This means that in the
long run the API layer should translate it, but as a temporary workaround it can be done at the DB layer
before passing filters to the object's get_objects() method, for example:
def convert_filters(result):
if 'tenant_id' in result:
result['project_id'] = result.pop('tenant_id')
return result
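A hypothetical call site would then look like this (the Network object and the project ID value are
illustrative):
from neutron.objects import network as network_obj

filters = convert_filters({'tenant_id': '35e3820f7490493ca9e3a5e685393298'})
networks = network_obj.Network.get_objects(context, **filters)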
References
The OVS driver has the same API as the current iptables firewall driver, keeping the state of security
groups and ports inside of the firewall. Class SGPortMap was created to keep state consistent, and
maps from ports to security groups and vice-versa. Every port and security group is represented by its
own object encapsulating the necessary information.
Note: Open vSwitch firewall driver uses register 5 for identifying the port related to the flow and
register 6 which identifies the network, used in particular for conntrack zones.
Ingress/Egress Terminology
In this document, the terms ingress and egress are relative to a VM instance connected to OVS (or
a netns connected to OVS):
• ingress applies to traffic that will ultimately go into a VM (or into a netns), assuming it is not
dropped
• egress applies to traffic coming from a VM (or from a netns)
(diagram: ingress arrows point toward the VM or netns interface; egress arrows point away from it)
Note that these terms are used differently in OVS code and documentation, where they are relative to the
OVS bridge, with ingress applying to traffic as it comes into the OVS bridge, and egress applying
to traffic as it leaves the OVS bridge.
There are two main calls performed by the firewall driver in order to either create or update a port with
security groups - prepare_port_filter and update_port_filter. Both methods rely on the
security group objects that are already defined in the driver and work similarly to their iptables counter-
parts. The definition of the objects will be described later in this document. prepare_port_filter
must be called only once during port creation, and it defines the initial rules for the port. When the port is
updated, all filtering rules are removed, and new rules are generated based on the available information
about security groups in the driver.
Security group rules can be defined in the firewall driver by calling
update_security_group_rules, which rewrites all the rules for a given security group.
If a remote security group is changed, then update_security_group_members is called to
determine the set of IP addresses that should be allowed for this remote security group. Calling this
method will not have any effect on existing instance ports. In other words, if the port is using security
groups and its rules are changed by calling one of the above methods, then no new rules are generated
for this port. update_port_filter must be called for the changes to take effect.
All the machinery above is controlled by security group RPC methods, which means the firewall driver
doesn't have any logic about which port should be updated based on the provided changes; it only
performs actions when called from the controller.
OpenFlow rules
At first, every connection is split into ingress and egress processes based on the input or output port
respectively. Each port contains the initial hardcoded flows for ARP, DHCP and established connections,
which are accepted by default. To detect established connections, a flow must be marked by conntrack
first with an action=ct() rule. An accepted flow means that ingress packets for the connection are
directly sent to the port, and egress packets are left to be normally switched by the integration bridge.
Note: There is a new config option explicitly_egress_direct; if it is set to True, egress unicast
traffic is sent to the local port directly, or to the patch bridge port if the destination is on a remote host.
So there is no NORMAL action for egress in such a scenario. This option is used to overcome egress
packet flooding when the openflow firewall is enabled.
Connections that are not matched by the above rules are sent to either the ingress or egress filtering table,
depending on its direction. The reason the rules are based on security group rules in separate tables is to
make it easy to detect these rules during removal.
Security group rules are treated differently for those without a remote group ID and those with a remote
group ID. A security group rule without a remote group ID is expanded into several OpenFlow rules by
the method create_flows_from_rule_and_port. A security group rule with a remote group
ID is expressed by three sets of flows. The first two are conjunctive flows which will be described in the
next section. The third set matches on the conjunction IDs and does accept actions.
The OpenFlow spec says a packet should not match against multiple flows at the same priority [1]. The
firewall driver uses 8 levels of priorities to achieve this. The method flow_priority_offset
calculates a priority for a given security group rule. The use of priorities is essential with conjunction
flows, which will be described later in the conjunction flows examples.
With a security group rule with a remote group ID, flows that match on nw_src for remote_group_id
addresses and match on dl_dst for port MAC addresses are needed (for ingress rules; likewise for egress
rules). Without conjunction, this results in O(n*m) flows where n and m are number of ports in the
remote group ID and the port security group, respectively.
A conj_id is allocated for each (remote_group_id, security_group_id, direction, ethertype,
flow_priority_offset) tuple. The class ConjIdMap handles the mapping. The same conj_id is shared
between security group rules if multiple rules belong to the same tuple above.
Conjunctive flows consist of 2 dimensions. Flows that belong to the dimension 1 of 2 are gener-
ated by the method create_flows_for_ip_address and are in charge of IP address based
filtering specified by their remote group IDs. Flows that belong to the dimension 2 of 2 are
generated by the method create_flows_from_rule_and_port and modified by the method
substitute_conjunction_actions, which represents the portion of the rule other than its re-
mote group ID.
[1] Although OVS seems to magically handle overlapping flows in some cases, we shouldn't rely on that.
Those dimension 2 of 2 flows are per port and contain no remote group information. When there are
multiple security group rules for a port, those flows can overlap. To avoid such a situation, flows are
sorted and fed to merge_port_ranges or merge_common_rules methods to rearrange them.
The following example presents two ports on the same host. They have different security groups, and
ICMP traffic is allowed from the first security group to the second security group. The ports have the
following attributes:
Port 1
- plugged to the port 1 in OVS bridge
- IP address: 192.168.0.1
- MAC address: fa:16:3e:a4:22:10
- security group 1: can send ICMP packets out
- allowed address pair: 10.0.0.1/32, fa:16:3e:8c:84:13
Port 2
- plugged to the port 2 in OVS bridge
- IP address: 192.168.0.2
- MAC address: fa:16:3e:24:57:c7
- security group 2:
- can receive ICMP packets from security group 1
- can receive TCP packets from security group 1
- can receive TCP packets to port 80 from security group 2
- can receive IP packets from security group 3
- allowed address pair: 10.1.0.0/24, fa:16:3e:8c:84:14
Port 3
- patch bridge port (e.g. patch-tun) in OVS bridge
The following table, table 71 (BASE_EGRESS), implements ARP spoofing protection, IP spoofing
protection, allows traffic related to IP address allocations (dhcp, dhcpv6, slaac, ndp) for egress traffic,
and allows ARP replies. It also identifies untracked connections, which are processed later with
information obtained from conntrack. Notice the zone=NXM_NX_REG6[0..15] in the actions when
obtaining information from conntrack. It means every port has its own conntrack zone defined by the value
in register 6 (the OVSDB port tag identifying the network). It's there to avoid accepting established
traffic that belongs to a different port with the same conntrack parameters.
The very first rule in table 71 (BASE_EGRESS) is a rule removing conntrack information for a use
case where a Neutron logical port is placed directly on the hypervisor. In such a case the kernel does a
conntrack lookup before the packet reaches the Open vSwitch bridge. Tracked packets are sent back for
processing by the same table after the conntrack information is cleared.
Rules below allow ICMPv6 traffic for multicast listeners, neighbour solicitation and neighbour adver-
tisement.
table=71, priority=95,icmp6,reg5=0x1,in_port=1,dl_src=fa:16:3e:a4:22:11,
,→ipv6_src=fe80::11,icmp_type=130 actions=resubmit(,94)
table=71, priority=95,icmp6,reg5=0x1,in_port=1,dl_src=fa:16:3e:a4:22:11,
,→ipv6_src=fe80::11,icmp_type=131 actions=resubmit(,94)
table=71, priority=95,icmp6,reg5=0x1,in_port=1,dl_src=fa:16:3e:a4:22:11,
,→ipv6_src=fe80::11,icmp_type=132 actions=resubmit(,94)
table=71, priority=95,icmp6,reg5=0x1,in_port=1,dl_src=fa:16:3e:a4:22:11,
,→ipv6_src=fe80::11,icmp_type=135 actions=resubmit(,94)
table=71, priority=95,icmp6,reg5=0x1,in_port=1,dl_src=fa:16:3e:a4:22:11,
,→ipv6_src=fe80::11,icmp_type=136 actions=resubmit(,94)
table=71, priority=95,icmp6,reg5=0x2,in_port=2,dl_src=fa:16:3e:a4:22:22,
,→ipv6_src=fe80::22,icmp_type=130 actions=resubmit(,94)
table=71, priority=95,icmp6,reg5=0x2,in_port=2,dl_src=fa:16:3e:a4:22:22,
,→ipv6_src=fe80::22,icmp_type=131 actions=resubmit(,94)
table=71, priority=95,icmp6,reg5=0x2,in_port=2,dl_src=fa:16:3e:a4:22:22,
,→ipv6_src=fe80::22,icmp_type=132 actions=resubmit(,94)
table=71, priority=95,icmp6,reg5=0x2,in_port=2,dl_src=fa:16:3e:a4:22:22,
,→ipv6_src=fe80::22,icmp_type=135 actions=resubmit(,94)
table=71, priority=95,icmp6,reg5=0x2,in_port=2,dl_src=fa:16:3e:a4:22:22,
,→ipv6_src=fe80::22,icmp_type=136 actions=resubmit(,94)
table=71, priority=95,arp,reg5=0x1,in_port=1,dl_src=fa:16:3e:a4:22:10,arp_
,→spa=192.168.0.1 actions=resubmit(,94)
table=71, priority=95,arp,reg5=0x1,in_port=1,dl_src=fa:16:3e:8c:84:13,arp_
,→spa=10.0.0.1 actions=resubmit(,94)
table=71, priority=95,arp,reg5=0x2,in_port=2,dl_src=fa:16:3e:24:57:c7,arp_
,→spa=192.168.0.2 actions=resubmit(,94)
table=71, priority=95,arp,reg5=0x2,in_port=2,dl_src=fa:16:3e:8c:84:14,arp_
,→spa=10.1.0.0/24 actions=resubmit(,94)
DHCP and DHCPv6 traffic is allowed to instance but DHCP servers are blocked on instances.
table=71, priority=80,udp,reg5=0x1,in_port=1,tp_src=68,tp_dst=67
,→actions=resubmit(,73)
table=71, priority=80,udp6,reg5=0x1,in_port=1,tp_src=546,tp_dst=547
,→actions=resubmit(,73)
table=71, priority=70,udp,reg5=0x1,in_port=1,tp_src=67,tp_dst=68
,→actions=resubmit(,93)
table=71, priority=70,udp6,reg5=0x1,in_port=1,tp_src=547,tp_dst=546
,→actions=resubmit(,93)
table=71, priority=80,udp,reg5=0x2,in_port=2,tp_src=68,tp_dst=67
,→actions=resubmit(,73)
table=71, priority=80,udp6,reg5=0x2,in_port=2,tp_src=546,tp_dst=547
,→actions=resubmit(,73)
table=71, priority=70,udp,reg5=0x2,in_port=2,tp_src=67,tp_dst=68
,→actions=resubmit(,93)
table=71, priority=70,udp6,reg5=0x2,in_port=2,tp_src=547,tp_dst=546
,→actions=resubmit(,93)
The following rules obtain conntrack information for valid IP and MAC address combinations. All other
packets are dropped.
table=71, priority=65,ip,reg5=0x1,in_port=1,dl_src=fa:16:3e:a4:22:10,nw_
,→src=192.168.0.1 actions=ct(table=72,zone=NXM_NX_REG6[0..15])
table=71, priority=65,ip,reg5=0x1,in_port=1,dl_src=fa:16:3e:8c:84:13,nw_
,→src=10.0.0.1 actions=ct(table=72,zone=NXM_NX_REG6[0..15])
table=71, priority=65,ip,reg5=0x2,in_port=2,dl_src=fa:16:3e:24:57:c7,nw_
,→src=192.168.0.2 actions=ct(table=72,zone=NXM_NX_REG6[0..15])
table=71, priority=65,ip,reg5=0x2,in_port=2,dl_src=fa:16:3e:8c:84:14,nw_
,→src=10.1.0.0/24 actions=ct(table=72,zone=NXM_NX_REG6[0..15])
table=71, priority=65,ipv6,reg5=0x1,in_port=1,dl_src=fa:16:3e:a4:22:10,
,→ipv6_src=fe80::f816:3eff:fea4:2210 actions=ct(table=72,zone=NXM_NX_
,→REG6[0..15])
table=71, priority=65,ipv6,reg5=0x2,in_port=2,dl_src=fa:16:3e:24:57:c7,
,→ipv6_src=fe80::f816:3eff:fe24:57c7 actions=ct(table=72,zone=NXM_NX_
,→REG6[0..15])
table=71, priority=10,reg5=0x1,in_port=1 actions=resubmit(,93)
table=71, priority=10,reg5=0x2,in_port=2 actions=resubmit(,93)
table=71, priority=0 actions=drop
table 72 (RULES_EGRESS) accepts only established or related connections, and implements rules
defined by security groups. As this egress connection might also be an ingress connection for some other
port, its not switched yet but eventually processed by the ingress pipeline.
All established or new connections defined by security group rules are accepted, which will be ex-
plained later. All invalid packets are dropped. In the case below we allow all ICMP egress traffic.
table=72, priority=75,ct_state=+est-rel-rpl,icmp,reg5=0x1
,→actions=resubmit(,73)
table=72, priority=75,ct_state=+new-est,icmp,reg5=0x1 actions=resubmit(,73)
table=72, priority=50,ct_state=+inv+trk actions=resubmit(,93)
Important in the flows below is ct_mark=0x1. Flows that were marked as no longer valid by a rule
introduced later will have this value. Those are typically connections that were allowed by some
security group rule that has since been removed.
All other connections that are not marked and are established or related are allowed.
table=72, priority=50,ct_state=+est-rel+rpl,ct_zone=644,ct_mark=0,reg5=0x1
,→actions=resubmit(,94)
table=72, priority=50,ct_state=+est-rel+rpl,ct_zone=644,ct_mark=0,reg5=0x2
,→actions=resubmit(,94)
table=72, priority=50,ct_state=-new-est+rel-inv,ct_zone=644,ct_mark=0,
,→reg5=0x1 actions=resubmit(,94)
table=72, priority=50,ct_state=-new-est+rel-inv,ct_zone=644,ct_mark=0,
,→reg5=0x2 actions=resubmit(,94)
The following flows mark established connections that weren't matched in the previous flows, which
means they don't have an accepting security group rule anymore.
table=73, priority=100,reg6=0x284,dl_dst=fa:16:3e:a4:22:10
,→actions=load:0x1->NXM_NX_REG5[],resubmit(,81)
table=73, priority=100,reg6=0x284,dl_dst=fa:16:3e:8c:84:13
,→actions=load:0x1->NXM_NX_REG5[],resubmit(,81)
table=73, priority=100,reg6=0x284,dl_dst=fa:16:3e:24:57:c7
,→actions=load:0x2->NXM_NX_REG5[],resubmit(,81)
table=73, priority=100,reg6=0x284,dl_dst=fa:16:3e:8c:84:14
,→actions=load:0x2->NXM_NX_REG5[],resubmit(,81)
table=73, priority=90,ct_state=+new-est,reg5=0x1 actions=ct(commit,
,→zone=NXM_NX_REG6[0..15]),resubmit(,91)
table=82, priority=71,ct_state=+new-est,ip,reg6=0x284,nw_src=192.168.0.1
,→actions=conjunction(19,1/2)
table=82, priority=71,ct_state=+new-est,ip,reg6=0x284,nw_src=10.0.0.1
,→actions=conjunction(19,1/2)
table=82, priority=71,ct_state=+est-rel-rpl,icmp,reg5=0x2
,→actions=conjunction(18,2/2)
table=82, priority=71,ct_state=+new-est,icmp,reg5=0x2
,→actions=conjunction(19,2/2)
table=82, priority=71,conj_id=18,ct_state=+est-rel-rpl,ip,reg5=0x2
,→actions=strip_vlan,output:2
table=82, priority=71,conj_id=19,ct_state=+new-est,ip,reg5=0x2
,→actions=ct(commit,zone=NXM_NX_REG6[0..15]),strip_vlan,output:2,resubmit(,
,→92)
There are some more security group rules with remote group IDs. Next we look at the TCP-related ones.
An excerpt of the flows that correspond to those rules is:
table=82, priority=73,ct_state=+est-rel-rpl,tcp,reg5=0x2,tp_dst=0x60/
,→0xffe0 actions=conjunction(22,2/2)
table=82, priority=73,ct_state=+new-est,tcp,reg5=0x2,tp_dst=0x60/0xffe0
,→actions=conjunction(23,2/2)
table=82, priority=73,ct_state=+est-rel-rpl,tcp,reg5=0x2,tp_dst=0x40/
,→0xfff0 actions=conjunction(22,2/2)
table=82, priority=73,ct_state=+new-est,tcp,reg5=0x2,tp_dst=0x40/0xfff0
,→actions=conjunction(23,2/2)
table=82, priority=73,ct_state=+est-rel-rpl,tcp,reg5=0x2,tp_dst=0x58/
,→0xfff8 actions=conjunction(22,2/2)
table=82, priority=73,ct_state=+new-est,tcp,reg5=0x2,tp_dst=0x58/0xfff8
,→actions=conjunction(23,2/2)
table=82, priority=73,ct_state=+est-rel-rpl,tcp,reg5=0x2,tp_dst=0x54/
,→0xfffc actions=conjunction(22,2/2)
table=82, priority=73,ct_state=+new-est,tcp,reg5=0x2,tp_dst=0x54/0xfffc
,→actions=conjunction(23,2/2)
table=82, priority=73,ct_state=+est-rel-rpl,tcp,reg5=0x2,tp_dst=0x52/
,→0xfffe actions=conjunction(22,2/2)
table=82, priority=73,ct_state=+new-est,tcp,reg5=0x2,tp_dst=0x52/0xfffe
,→actions=conjunction(23,2/2)
table=82, priority=73,ct_state=+est-rel-rpl,tcp,reg5=0x2,tp_dst=80
,→actions=conjunction(22,2/2),conjunction(14,2/2)
table=82, priority=73,ct_state=+est-rel-rpl,tcp,reg5=0x2,tp_dst=81
,→actions=conjunction(22,2/2)
table=82, priority=73,ct_state=+new-est,tcp,reg5=0x2,tp_dst=80
,→actions=conjunction(23,2/2),conjunction(15,2/2)
table=82, priority=73,ct_state=+new-est,tcp,reg5=0x2,tp_dst=81
,→actions=conjunction(23,2/2)
Only dimension 2/2 flows are shown here, as the others are similar to the previous ICMP example. There
are many more flows, but only the port ranges that cover 64 to 127 are shown for brevity.
The conjunction IDs 14 and 15 correspond to packets from the security group 1, and the conjunction IDs
22 and 23 correspond to those from the security group 2. These flows come from the corresponding
security group rules, whose port ranges are merged (by merge_port_ranges, mentioned earlier) before
translating to flows, so that there is only one matching flow even when the TCP destination port is 80.
The remaining flow is an L4-protocol-agnostic rule.
table=82, priority=70,ct_state=+est-rel-rpl,ip,reg5=0x2
,→actions=conjunction(24,2/2)
Any IP packet that matches the previous TCP flows matches one of these flows, but the corresponding
security group rules have different remote group IDs. Unlike the above TCP example, there's no
convenient way of expressing protocol != TCP or icmp_code != 1. So the OVS firewall uses a
different priority than the previous TCP flows so as not to mix them up.
The mechanism for dropping connections that are not allowed anymore is the same as in table 72
(RULES_EGRESS).
table=82, priority=50,ct_state=-new-est+rel-inv,ct_zone=644,ct_mark=0,
,→reg5=0x2 actions=strip_vlan,output:2
table=82, priority=40,ct_state=-est,reg5=0x1 actions=resubmit(,93)
table=82, priority=40,ct_state=+est,reg5=0x1 actions=ct(commit,zone=NXM_NX_
,→REG6[0..15],exec(load:0x1->NXM_NX_CT_MARK[]))
table=82, priority=40,ct_state=-est,reg5=0x2 actions=resubmit(,93)
table=82, priority=40,ct_state=+est,reg5=0x2 actions=ct(commit,zone=NXM_NX_
,→REG6[0..15],exec(load:0x1->NXM_NX_CT_MARK[]))
table=82, priority=0 actions=drop
Note: Conntrack zones on a single node are now based on the network to which a port is plugged in.
That makes a difference between hypervisor-only traffic and east-west traffic. For example, if a port
has a VIP that was migrated to a port on a different node, then the new port won't contain conntrack
information about previous traffic that happened with the VIP.
There are three tables where packets are sent once after going through the OVS firewall pipeline. The
tables can be used by other mechanisms that are supposed to work with the OVS firewall, typically L2
agent extensions.
Egress pipeline
Ingress pipeline
The first packet of each connection accepted by the ingress pipeline is sent to table 92
(ACCEPTED_INGRESS_TRAFFIC). The default action in this table is DROP because at this point the
packets have already been delivered to their destination port. This integration point is essentially
provided for the logging extension.
Packets are sent to table 93 (DROPPED_TRAFFIC) if processing by the ingress filtering concluded
that they should be dropped.
During an upgrade, the agent will need to re-plug each instance's tap device into the integration bridge
while trying not to break existing connections. One of the following approaches can be taken:
1) Pause the running instance in order to prevent a short period of time where its network interface does
not have firewall rules. This can happen due to the firewall driver calling OVS to obtain information
about the OVS port. Once the instance is paused and no traffic is flowing, we can delete the qvo interface
from the integration bridge, detach the tap device from the qbr bridge and plug the tap device back into
the integration bridge. Once this is done, the firewall rules are applied for the OVS tap interface and the
instance is started from its paused state.
2) Set drop rules for the instances tap interface, delete the qbr bridge and related veths, plug the tap
device into the integration bridge, apply the OVS firewall rules and finally remove the drop rules for the
instance.
3) Compute nodes can be upgraded one at a time. A free node can be switched to use the OVS firewall,
and instances from other nodes can be live-migrated to it. Once the first node is evacuated, its firewall
driver can then be switched to the OVS driver.
Note: During the upgrade to the openvswitch firewall, the security rules still work for ports previously
controlled by the iptables hybrid driver. But it will not work the other way around, if one tries to replace
the openvswitch firewall with iptables.
Neutron supports using Open vSwitch + DPDK vhost-user interfaces directly in the OVS ML2 driver
and agent. The current implementation relies on multiple configuration values and includes runtime
verification of Open vSwitch's capability to provide these interfaces.
The OVS agent detects the capability of the underlying Open vSwitch installation and passes that infor-
mation over RPC via the agent configurations dictionary. The ML2 driver uses this information to select
the proper VIF type and binding details.
Platform requirements
• OVS 2.4.0+
• DPDK 2.0+
Configuration
[OVS]
datapath_type=netdev
vhostuser_socket_dir=/var/run/openvswitch
When OVS is running with DPDK support enabled, and the datapath_type is set to netdev, then
the OVS ML2 driver will use the vhost-user VIF type and pass the necessary binding details to
use OVS+DPDK and vhost-user sockets. This includes the vhostuser_socket_dir setting, which
must match the directory passed to ovs-vswitchd on startup.
The networking-ovs-dpdk repo will continue to exist and undergo active development. This feature just
removes the necessity for a separate ML2 driver and OVS agent in the networking-ovs-dpdk repo. The
networking-ovs-dpdk project also provides a devstack plugin which also allows automated CI, a Puppet
module, and an OpenFlow-based security group implementation.
Salvatore Orlando: How to write a Neutron Plugin (if you really need to)
Plugin API
• filters a dictionary with keys that are valid keys for a network as
listed in the RESOURCE_ATTRIBUTE_MAP object in neutron/api/
v2/attributes.py. Values in this dictionary are an iterable containing
values that will be used for an exact match comparison for that value. Each
result returned by this function will have matched one of the values for each
key in filters.
• fields a list of strings that are valid keys in a network dictionary as
listed in the RESOURCE_ATTRIBUTE_MAP object in neutron/api/
v2/attributes.py. Only these fields will be returned.
get_networks_count(context, filters=None)
Return the number of networks.
The result depends on the identity of the user making the request (as indicated by the context)
as well as any filters.
Parameters
• context neutron api request context
• filters a dictionary with keys that are valid keys for a network as
listed in the RESOURCE_ATTRIBUTE_MAP object in neutron/api/
v2/attributes.py. Values in this dictionary are an iterable containing
values that will be used for an exact match comparison for that value. Each
result returned by this function will have matched one of the values for each
key in filters.
NOTE: this method is optional, as it was not part of the originally defined plugin API.
• filters a dictionary with keys that are valid keys for a port as
listed in the RESOURCE_ATTRIBUTE_MAP object in neutron/api/
v2/attributes.py. Values in this dictionary are an iterable containing
values that will be used for an exact match comparison for that value. Each
result returned by this function will have matched one of the values for each
key in filters.
• fields a list of strings that are valid keys in a port dictionary as
listed in the RESOURCE_ATTRIBUTE_MAP object in neutron/api/
v2/attributes.py. Only these fields will be returned.
get_ports_count(context, filters=None)
Return the number of ports.
The result depends on the identity of the user making the request (as indicated by the context)
as well as any filters.
Parameters
• context neutron api request context
• filters a dictionary with keys that are valid keys for a network as
listed in the RESOURCE_ATTRIBUTE_MAP object in neutron/api/
v2/attributes.py. Values in this dictionary are an iterable containing
values that will be used for an exact match comparison for that value. Each
result returned by this function will have matched one of the values for each
key in filters.
Note: this method is optional, as it was not part of the originally defined plugin API.
has_native_datastore()
Return True if the plugin uses Neutrons native datastore.
Note: plugins like ML2 should override this method and return True.
rpc_state_report_workers_supported()
Return whether the plugin supports state report RPC workers.
Note: this method is optional, as it was not part of the originally defined plugin API.
rpc_workers_supported()
Return whether the plugin supports multiple RPC workers.
A plugin that supports multiple RPC workers should override the start_rpc_listeners method
to ensure that this method returns True and that start_rpc_listeners is called at the appropriate
time. Alternately, a plugin can override this method to customize detection of support for
multiple RPC workers.
Note: this method is optional, as it was not part of the originally defined plugin API.
start_rpc_listeners()
Start the RPC listeners.
Most plugins start RPC listeners implicitly on initialization. In order to support multiple
process RPC, the plugin needs to expose control over when this is started.
Note: this method is optional, as it was not part of the originally defined plugin API.
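A sketch of how a plugin might implement this, based on the common pattern of creating an RPC
consumer (the callbacks class and topic name are hypothetical, and the rpc helper module may differ by
release):
from neutron_lib import rpc as n_rpc

# Method of the plugin class; MyPluginRpcCallbacks and the topic are placeholders.
def start_rpc_listeners(self):
    self.endpoints = [MyPluginRpcCallbacks()]
    self.conn = n_rpc.Connection()
    self.conn.create_consumer('my-plugin-topic', self.endpoints, fanout=False)
    return self.conn.consume_in_threads()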
start_rpc_state_reports_listener()
Start the RPC listeners consuming state reports queue.
This optional method creates rpc consumer for REPORTS queue only.
Note: this method is optional, as it was not part of the originally defined plugin API.
As with most OpenStack projects, Neutron leverages oslo_policy. However, since Neutron loves to be
special and complicate every developer's life, it also augments oslo_policy capabilities by:
• A wrapper module with its own API: neutron.policy;
• The ability to add fine-grained checks on attributes for resources in request bodies;
• The ability to use the policy engine to filter out attributes in responses;
• Adding some custom rule checks beyond those defined in oslo_policy.
This document discusses Neutron-specific aspects of policy enforcement, and in particular how the
enforcement logic is wired into API processing. For any other information please refer to the developer
documentation for oslo_policy.
Authorization workflow
The Neutron API controllers perform policy checks in two phases during the processing of an API
request:
• Request authorization, immediately before dispatching the request to the plugin layer for POST,
PUT, and DELETE, and immediately after returning from the plugin layer for GET requests;
• Response filtering, when building the response to be returned to the API consumer.
Request authorization
The aim of this step is to authorize processing for a request or reject it with an error status code.
This step uses the neutron.policy.enforce routine. This routine raises oslo_policy.
PolicyNotAuthorized when policy enforcement fails. The Neutron REST API controllers catch
this exception and return:
• A 403 response code on a POST request or a PUT request for an object owned by the project
submitting the request;
• A 403 response for failures while authorizing API actions such as add_router_interface;
• A 404 response for DELETE, GET and all other PUT requests.
For DELETE operations the resource must first be fetched. This is done by invoking the same
_item3 method used for processing GET requests. This is also true for PUT operations, since
1 Oslo policy module
2 Oslo policy developer
3 API controller item method
the Neutron API implements PATCH semantics for PUTs. The criteria to evaluate are built by the
_build_match_rule4 routine, which takes the following input parameters:
• The action to be performed, in the <operation>_<resource> form, e.g.:
create_network
• The data to use for performing checks. For POST operations this could be a partial specification
of the object, whereas it is always a full specification for GET, PUT, and DELETE requests, as
resource data are retrieved before dispatching the call to the plugin layer.
• The collection name for the resource specified in the previous parameter; for instance, for a
network it would be networks.
The _build_match_rule routine returns an oslo_policy.RuleCheck instance built in the
following way:
• Always add a check for the action being performed. This will match a policy like create_network
in policy.yaml;
• Return for GET operations; more detailed checks will be performed anyway when building the
response;
• For each attribute which has been explicitly specified in the request, create a rule matching policy
names in the form <operation>_<resource>:<attribute>, and link it with the
previous rule with an And relationship (using oslo_policy.AndCheck); this step will be
performed only if the enforce_policy flag is set to True in the resource attribute descriptor
(usually found in a data structure called RESOURCE_ATTRIBUTE_MAP);
• If the attribute is a composite one then further rules will be created; these will match policy names
in the form <operation>_<resource>:<attribute>:<sub_attribute>. An And
relationship will be used in this case too.
As all the rules to verify are linked by And relationships, all the policy checks should succeed in order
for a request to be authorized. Rule verification is performed by oslo_policy with no customization
from the Neutron side.
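To make the composition concrete, here is a hedged sketch that builds an equivalent match rule by hand; it assumes the RuleCheck and AndCheck classes re-exported by oslo_policy.policy (which is how neutron.policy uses them), and the action and attribute names are examples only.
from oslo_policy import policy as oslo_policy

action = 'create_network'
# base check: matches the "create_network" entry in policy.yaml
match_rule = oslo_policy.RuleCheck('rule', action)
# an explicitly specified attribute with enforce_policy=True adds an
# AND-ed "create_network:shared" check
match_rule = oslo_policy.AndCheck(
    [match_rule, oslo_policy.RuleCheck('rule', '%s:shared' % action)])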
Response Filtering
Some Neutron extensions, like the provider networks one, add attributes to resources which are
not meant to be consumed by all clients. This might be because these attributes contain implementation
details, or are meant only to be used when exchanging information between services, such as
Nova and Neutron.
For this reason the policy engine is invoked again when building API responses. This is achieved by the
_exclude_attributes_by_policy5 method in neutron.api.v2.base.Controller.
For each attribute in the response returned by the plugin layer, this method first checks if the
is_visible flag is True. In that case it proceeds to check policies for the attribute; if the policy
check fails, the attribute is added to a list of attributes that should be removed from the response
before returning it to the API client.
4 Policy engine's build_match_rule method
5 exclude_attributes_by_policy method
The neutron.policy module exposes a simple API whose main goal is to allow the REST API
controllers to implement the authorization workflow discussed in this document. It is a bad practice to
call the policy engine from within the plugin layer, as this would make request authorization dependent
on configured plugins, and therefore make API behaviour dependent on the plugin itself, which defies
the Neutron tenet of being backend agnostic.
The neutron.policy API exposes the following routines:
• init Initializes the policy engine, loading rules from the policy file(s). This method can
safely be called several times.
• reset Clears all the rules currently configured in the policy engine. It is called in unit tests and
at the end of the initialization of core API router6 in order to ensure rules are loaded after all the
extensions are loaded.
• refresh Combines init and reset. Called when a SIGHUP signal is sent to an API worker.
• set_rules Explicitly set policy engines rules. Used only in unit tests.
• check Perform a check using the policy engine. Builds match rules as described in this
document, and then evaluates the resulting rule using oslo_policy's policy engine. Returns True
if the check succeeds, False otherwise.
• enforce Operates like the check routine but raises if the check in oslo_policy fails.
• check_is_admin Enforce the predefined context_is_admin rule; used to determine the
is_admin property for a neutron context.
• check_is_advsvc Enforce the predefined context_is_advsvc rule; used to determine the
is_advsvc property for a neutron context.
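For orientation, a hedged usage sketch of the check and enforce routines from a controller-like caller; the action and target values are illustrative.
from neutron import policy

def authorize_create_network(context, network_data):
    policy.init()  # safe to call several times
    action = 'create_network'
    # non-raising variant: returns True or False
    allowed = policy.check(context, action, network_data)
    # raising variant: raises oslo_policy.PolicyNotAuthorized on failure
    policy.enforce(context, action, network_data)
    return allowed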
Neutron provides two additional policy rule classes in order to support the augmented authorization
capabilities it provides. They both extend oslo_policy.RuleCheck and are registered using the
oslo_policy.register decorator.
The OwnerCheck class is registered for rules matching the tenant_id keyword and overrides the
generic check performed by oslo_policy in this case. It is used for those cases where Neutron needs
to check whether the project submitting a request for a new resource owns the parent resource of the
one being created. Current usages of OwnerCheck include, for instance, creating and updating a subnet.
This class also supports owner checks on extension parent resources introduced by service plugins,
such as the router and floatingip owner checks for the router service plugin. Developers can register
the extension resource name and service plugin name which were registered in neutron-lib into
EXT_PARENT_RESOURCE_MAPPING, which is located in neutron_lib.services.constants.
The check, performed in the __call__ method, works as follows:
6 Policy reset in neutron.api.v2.router
• verify if the target field is already in the target data. If yes, then simply verify whether the value
for the target field in target data is equal to the value for the same field in credentials, just like
oslo_policy.GenericCheck would do. This is also the most frequent case as the target
field is usually tenant_id;
• if the previous check failed, extract a parent resource type and a parent field
name from the target field. For instance networks:tenant_id identifies the
tenant_id attribute of the network resource. For extension parent resource case,
ext_parent:tenant_id identifies the tenant_id attribute of the registered extension re-
source in EXT_PARENT_RESOURCE_MAPPING;
• if no parent resource or target field could be identified raise a PolicyCheckError exception;
• Retrieve a parent foreign key from the _RESOURCE_FOREIGN_KEYS data structure in
neutron.policy. This foreign key is simply the attribute acting as a primary key in the parent
resource. A PolicyCheckError exception will be raised if such parent foreign key cannot be
retrieved;
• Using the core plugin, retrieve an instance of the resource having parent foreign key as an identi-
fier;
• Finally, verify whether the target field in this resource matches the one in the initial request data.
For instance, for a port create request, verify whether the tenant_id of the port data structure
matches the tenant_id of the network where this port is being created.
The FieldCheck class is registered with the policy engine for rules matching the field keyword, and
provides a way to perform fine grained checks on resource attributes. For instance, using this class of
rules it is possible to specify a rule for granting every project read access to shared resources.
In policy.yaml, a FieldCheck rule is specified as field:<resource>:<field>=<value>.
This will result in the initialization of a FieldCheck that will check for <field> in the target resource
data, and return True if it is equal to <value>, or return False if <field> either is not equal to
<value> or does not exist at all.
When developing REST APIs for Neutron it is important to be aware of how the policy engine will
authorize these requests. This is true both for APIs served by Neutron core and for the APIs served by
the various Neutron stadium services.
• If an attribute of a resource might be subject to authorization checks then the enforce_policy
attribute should be set to True. While setting this flag to True for each attribute is a viable
strategy, it is worth noting that this will require a call to the policy engine for each attribute, thus
considerably increasing the time required to complete policy checks for a resource. This could
result in a scalability issue, especially in the case of list operations retrieving a large number of
resources;
• Some resource attributes, even if not directly used in policy checks might still be required by the
policy engine. This is for instance the case of the tenant_id attribute. For these attributes the
required_by_policy attribute should always be set to True. This will ensure that the attribute
is included in the resource data sent to the policy engine for evaluation;
• The tenant_id attribute is a fundamental one in Neutron API request authorization. The default
policy, admin_or_owner, uses it to validate if a project owns the resource it is trying to operate
on. To this aim, if a resource without a tenant_id is created, it is important to ensure that ad-hoc
authZ policies are specified for this resource.
• There is still only one check which is hardcoded in Neutron's API layer: the check to verify
that a project owns the network on which it is creating a port. This check is hardcoded and is
always executed when creating a port, unless the network is shared. Unfortunately a solution for
performing this check in an efficient way through the policy engine has not yet been found. Due
to its nature, there is no way to override this check using the policy engine.
• It is strongly advised to not perform policy checks in the plugin or in the database management
classes. This might lead to divergent API behaviours across plugins. Also, it might leave the Neu-
tron DB in an inconsistent state if a request is not authorized after it has already been dispatched
to the backend.
Notes
• No authorization checks are performed for requests coming from the RPC over AMQP channel.
For all these requests a neutron admin context is built, and the plugins will process them as such.
• For PUT and DELETE requests a 404 error is returned on request authorization failures rather than
a 403, unless the project submitting the request owns the resource to update or delete. This is to
avoid conditions in which an API client might try to find out other projects' resource identifiers
by sending out PUT and DELETE requests for random resource identifiers.
• There is no way at the moment to specify an OR relationship between two attributes of a given
resource (eg.: port.name == 'meh' or port.status == 'DOWN'), unless the rule
with the or condition is explicitly added to the policy.yaml file.
• OwnerCheck performs a plugin access; this will likely require a database access, but since the
behaviour is implementation specific it might also imply a round-trip to the backend. This class
of checks, when it involves retrieving attributes for parent resources, should be used very sparingly.
• In order for OwnerCheck rules to work, parent resources should have an entry in neutron.
policy._RESOURCE_FOREIGN_KEYS; moreover the resource must be managed by the core
plugin (i.e. the one defined in the core_plugin configuration variable).
Policy-in-Code support
• get_<resources> (get plural) is unnecessary. The neutron API layer uses the singular form policy
get_<resource> when listing resources7 8.
• Member actions for individual resources must be defined. For example,
add_router_interface of router resource.
• All policies with attributes on create, update and delete actions must be defined.
An <action>_<resource>:<attribute>(:<sub_attribute>) policy is required for
attributes with enforce_policy in the API definitions. Note that, from the documentation
perspective, it is recommended to define it even if the rule is the same as for <action>_<resource>.
• For a policy with attributes of get actions like get_<resource>:<attribute>(:<sub_attribute>),
the following guideline is applied:
– A policy with an attribute must be defined if the policy is different from the policy for
get_<resource> (without attributes).
– If a policy with an attribute is the same as for get_<resource>, there is no need to define it
explicitly. This is for simplicity: all attributes of a target resource are checked in the process
of Response Filtering, so defining them all would lead to very long policy definitions for get
actions in our documentation, which would not be helpful for operators either.
– If an attribute is marked as enforce_policy, it is recommended to define the corre-
sponding policy with the attribute, for clarity. For example, if an attribute is marked as
enforce_policy in the API definitions, the neutron API allows only admin users to set
such an attribute but allows regular users to retrieve its value. If policies for
the attribute differ across the types of operations, it is better to define all of them
explicitly.
Policy-in-code support in neutron is a bit different from other projects because the neutron server needs
to load policies in code from multiple projects. Each neutron related project should register the following
two entry points oslo.policy.policies and neutron.policies in setup.cfg like below:
oslo.policy.policies =
neutron = neutron.conf.policies:list_rules
neutron.policies =
neutron = neutron.conf.policies:list_rules
The above two entries are the same, but they have different purposes.
• The first entry point is a normal entry point defined by oslo.policy and it is used to generate a
sample policy file910 .
• The second one is specific to neutron. It is used by neutron.policy module to load policies
of neutron related projects.
The oslo.policy.policies entry point is used by all projects which adopt oslo.policy, so it cannot be
used to determine which projects are neutron related projects; this is why the second entry point is required.
7 https://github.com/openstack/neutron/blob/051b6b40f3921b9db4f152a54f402c402cbf138c/neutron/pecan_wsgi/hooks/policy_enforcement.py#L173
8 https://github.com/openstack/neutron/blob/051b6b40f3921b9db4f152a54f402c402cbf138c/neutron/pecan_wsgi/hooks/policy_enforcement.py#L143
9 https://docs.openstack.org/oslo.policy/latest/user/usage.html#sample-file-generation
10 https://docs.openstack.org/oslo.policy/latest/cli/index.html#oslopolicy-sample-generator
The recommended entry point name is a repository name: For example, networking-sfc for SFC:
oslo.policy.policies =
neutron-sfc = neutron_sfc.policies:list_rules
neutron.policies =
neutron-sfc = neutron_sfc.policies:list_rules
Apart from registering the neutron.policies entry point, the other steps to be done in each neutron related
project for policy-in-code support are the same as for all OpenStack projects.
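As a rough sketch, a list_rules entry point simply returns oslo_policy rule objects; the rule names, check strings and paths below are illustrative, not Neutron's actual defaults.
from oslo_policy import policy

def list_rules():
    return [
        policy.RuleDefault(
            'context_is_admin', 'role:admin',
            description='Rule for full admin access'),
        policy.DocumentedRuleDefault(
            name='create_widget',
            check_str='role:admin or rule:context_is_admin',
            description='Create a widget',
            operations=[{'method': 'POST', 'path': '/widgets'}]),
    ]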
References
We use the STATUS field on objects to indicate when a resource is ready by setting it to ACTIVE so
external systems know when it's safe to use that resource. Knowing when to set the status to ACTIVE is
simple when there is only one entity responsible for provisioning a given object. When that entity has
finished provisioning, we just update the STATUS directly to ACTIVE. However, there are resources in
Neutron that require provisioning by multiple asynchronous entities before they are ready to be used so
managing the transition to the ACTIVE status becomes more complex. To handle these cases, Neutron
has the provisioning_blocks module to track the entities that are still provisioning a resource.
The main example of this is with ML2, the L2 agents and the DHCP agents. When a port is created and
bound to a host, it's placed in the DOWN status. The L2 agent now has to set up flows, security group
rules, etc. for the port, and the DHCP agent has to set up a DHCP reservation for the port's IP and MAC.
Before the transition to ACTIVE, both agents must complete their work or the port user (e.g. Nova) may
attempt to use the port and not have connectivity. To solve this, the provisioning_blocks module is used
to track the provisioning state of each agent and the status is only updated when both complete.
To make use of the provisioning_blocks module, provisioning components should be added whenever
there is work to be done by another entity before an object's status can transition to ACTIVE. This is
accomplished by calling the add_provisioning_component method for each entity. Then as each entity
finishes provisioning the object, provisioning_complete must be called to lift the provisioning block.
When the last provisioning block is removed, the provisioning_blocks module will trigger a call-
back notification containing the object ID for the object's resource type with the event PROVISION-
ING_COMPLETE. A subscriber to this event can now update the status of this object to ACTIVE or
perform any other necessary actions.
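For illustration, a hedged sketch of how a provisioning component could use these helpers; the entity name is made up, and the callback wiring is simplified compared to what ML2 actually does.
from neutron.db import provisioning_blocks
from neutron_lib.callbacks import registry
from neutron_lib.callbacks import resources

CUSTOM_ENTITY = 'my-backend'  # illustrative entity name

def block_port_until_backend_ready(context, port_id):
    # called while the port is being created/updated
    provisioning_blocks.add_provisioning_component(
        context, port_id, resources.PORT, CUSTOM_ENTITY)

def backend_finished_wiring(context, port_id):
    # called when the backend reports that the port is wired
    provisioning_blocks.provisioning_complete(
        context, port_id, resources.PORT, CUSTOM_ENTITY)

def set_port_active(resource, event, trigger, payload=None):
    # invoked once the last provisioning block for a port is lifted;
    # here the status would be updated to ACTIVE
    pass

registry.subscribe(set_port_active, resources.PORT,
                   provisioning_blocks.PROVISIONING_COMPLETE)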
A normal state transition will look something like the following:
1. Request comes in to create an object
2. Logic on the Neutron server determines which entities are required to provision the object and
adds a provisioning component for each entity for that object.
3. A notification is emitted to the entities so they start their work.
4. Object is returned to the API caller in the DOWN (or BUILD) state.
5. Each entity tells the server when it has finished provisioning the object. The server calls provi-
sioning_complete for each entity that finishes.
6. When provisioning_complete is called on the last remaining entity, the provisioning_blocks mod-
ule will emit an event indicating that provisioning has completed for that object.
7. A subscriber to this event on the server will then update the status of the object to ACTIVE to
indicate that it is fully provisioned.
For a more concrete example, see the section below.
ML2 makes use of the provisioning_blocks module to prevent the status of ports from being transitioned
to ACTIVE until both the L2 agent and the DHCP agent have finished wiring a port.
When a port is created or updated, the following happens to register the DHCP agent's provisioning
blocks:
1. The subnet_ids are extracted from the fixed_ips field of the port and then ML2 checks to see if
DHCP is enabled on any of the subnets.
2. The configuration for the DHCP agents hosting the network are looked up to ensure that at least
one of them is new enough to report back that it has finished setting up the port reservation.
3. If either of the preconditions above fail, a provisioning block for the DHCP agent is not added and
any existing DHCP agent blocks for that port are cleared to ensure the port isn't blocked waiting
for an event that will never happen.
4. If the preconditions pass, a provisioning block is added for the port under the DHCP entity.
When a port is created or updated, the following happens to register the L2 agent's provisioning blocks:
1. If the port is not bound, nothing happens because we don't know yet if an L2 agent is involved so
we have to wait until a port update that binds it.
2. Once the port is bound, the agent based mechanism drivers will check if they have an agent on the
bound host and if the VNIC type belongs to the mechanism driver, a provisioning block is added
for the port under the L2 Agent entity.
Once the DHCP agent has finished setting up the reservation, it calls dhcp_ready_on_ports via the RPC
API with the port ID. The DHCP RPC handler receives this and calls provisioning_complete in the
provisioning module with the port ID and the DHCP entity to remove the provisioning block.
Once the L2 agent has finished setting up the port, it calls the normal update_device_list (or
update_device_up) via the RPC API. The RPC callbacks handler calls provisioning_complete with the
port ID and the L2 Agent entity to remove the provisioning block.
On the provisioning_complete call that removes the last record, the provisioning_blocks module emits
a callback PROVISIONING_COMPLETE event with the port ID. A function subscribed to this in ML2
then calls update_port_status to set the port to ACTIVE.
At this point the normal notification is emitted to Nova allowing the VM to be unpaused.
In the event that the DHCP or L2 agent is down, the port will not transition to the ACTIVE status (as is
the case now if the L2 agent is down). Agents must account for this by telling the server that wiring has
been completed after configuring everything during startup. This ensures that ports created on offline
agents (or agents that crash and restart) eventually become active.
To account for server instability, the notifications about port wiring being complete must use RPC calls so
the agent gets a positive acknowledgement from the server and it must keep retrying until either the port
is deleted or it is successful.
If an ML2 driver immediately places a bound port in the ACTIVE state (e.g. after calling a backend in
update_port_postcommit), this patch will not have any impact on that process.
Quality of Service
The Quality of Service (QoS) advanced service is designed as a service plugin. The service is decoupled from the
rest of Neutron code on multiple levels (see below).
QoS extends core resources (ports, networks) without using mixins inherited from plugins but through
an ml2 extension driver.
Details about the DB models, API extension, and use cases can be found here: qos spec .
• neutron.extensions.qos: base extension + API controller definition. Note that rules are subat-
tributes of policies and hence embedded into their URIs.
• neutron.extensions.qos_fip: base extension + API controller definition. Adds qos_policy_id to
floating IP, enabling users to set/update the binding QoS policy of a floating IP.
• neutron.services.qos.qos_plugin: QoSPlugin, service plugin that implements qos extension, re-
ceiving and handling API calls to create/modify policies and rules.
• neutron.services.qos.drivers.manager: the manager that passes object actions down to every en-
abled QoS driver and issues RPC calls when any of the drivers require RPC push notifications.
• neutron.services.qos.drivers.base: the interface class for pluggable QoS drivers that are used to
update backends about new {create, update, delete} events on any rule or policy change, including
precommit events that some backends could need for synchronization reasons. The drivers also
declare which QoS rules, VIF drivers and VNIC types are supported.
• neutron.core_extensions.base: Contains an interface class to implement core resource
(port/network) extensions. Core resource extensions are then easily integrated into interested plu-
gins. We may need to have a core resource extension manager that would utilize those extensions,
to avoid plugin modifications for every new core resource extension.
• neutron.core_extensions.qos: Contains QoS core resource extension that conforms to the interface
described above.
• neutron.plugins.ml2.extensions.qos: Contains ml2 extension driver that handles core resource up-
dates by reusing the core_extensions.qos module mentioned above. In the future, we would like to
see a plugin-agnostic core resource extension manager that could be integrated into other plugins
with ease.
The neutron.extensions.qos.QoSPluginBase class uses method proxies for methods relating to QoS
policy rules. Each of these methods is generic in the sense that it is intended to handle
any rule type. For example, QoSPluginBase has a create_policy_rule method instead of both cre-
ate_policy_dscp_marking_rule and create_policy_bandwidth_limit_rule methods. The logic behind
the proxies allows a call to a plugin's create_policy_dscp_marking_rule to be handled by the cre-
ate_policy_rule method, which will receive a QosDscpMarkingRule object as an argument in order
to execute behavior specific to the DSCP marking rule type. This approach allows new rule types to be
introduced without requiring a plugin to modify code as a result. As would be expected, any subclass of
QoSPluginBase must override the base class's abc.abstractmethod methods, even if only to raise
NotImplemented.
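A hedged illustration of the idea (not the actual QoSPluginBase code): a single generic method receives the rule object class and keys its behaviour off that class, so adding a new rule type does not require a new plugin method.
class ExampleQoSPlugin(object):
    """Illustrative only; real plugins derive from QoSPluginBase."""

    def create_policy_rule(self, context, rule_cls, policy_id, rule_data):
        # rule_cls is e.g. QosDscpMarkingRule or QosBandwidthLimitRule;
        # type-specific behaviour can branch on the class (or its
        # rule_type attribute) instead of on a per-type method name
        rule = rule_cls(context, qos_policy_id=policy_id, **rule_data)
        rule.create()
        return rule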
Each QoS driver has a property called supported_rule_types, through which the driver exposes the rules it is able
to handle.
For a list of all rule types, see: neutron.services.qos.qos_consts.VALID_RULE_TYPES.
The list of supported QoS rule types exposed by neutron is calculated as the common subset of rules
supported by all active QoS drivers.
Note: the list of supported rule types reported by core plugin is not enforced when accessing QoS rule
resources. This is mostly because then we would not be able to create rules while at least one of the QoS
drivers in the gate lacks support for the rules we're trying to test.
Database models
QoS design defines the following two conceptual resources to apply QoS rules for a port, a network or a
floating IP:
• QoS policy
• QoS rule (type specific)
Each QoS policy contains zero or more QoS rules. A policy is then applied to a network or a port,
making all rules of the policy applied to the corresponding Neutron resource.
When applied through a network association, policy rules could apply or not to neutron internal ports
(like router, dhcp, etc..). The QosRule base object provides a default should_apply_to_port method
which could be overridden. In the future we may want to have a flag in QoSNetworkPolicyBinding or
QosRule to enforce such type of application (for example when limiting all the ingress of router devices
on an external network automatically).
Each project can have at most one default QoS policy, although it is not mandatory. If a default QoS
policy is defined, all new networks created within this project will have this policy assigned, as long as
no other QoS policy is explicitly attached during the creation process. If the default QoS policy is unset,
no change to existing networks will be made.
From the database point of view, the following objects are defined in the schema:
• QosPolicy: directly maps to the conceptual policy resource.
Implementation is done in a way that will allow adding a new rule list field with little or no modifications
in the policy object itself. This is achieved by smart introspection of existing available rule object
definitions and automatic definition of those fields on the policy class.
Note that rules are loaded in a non lazy way, meaning they are all fetched from the database on policy
fetch.
For Qos<type>Rule objects, an extendable approach was taken to allow easy addition of objects for new
rule types. To accommodate this, fields common to all types are put into a base class called QosRule
that is then inherited into type-specific rule implementations that, ideally, only define additional fields
and some other minor things.
Note that the QosRule base class is not registered with the oslo.versionedobjects registry, because it's not
expected that generic rules should be instantiated (and to suggest just that, the base rule class is marked
as ABC).
QoS objects rely on some primitive database API functions that are added in:
• neutron_lib.db.api: those can be reused to fetch other models that do not have corresponding
versioned objects yet, if needed.
• neutron.db.qos.api: contains database functions that are specific to QoS models.
RPC communication
Details on RPC communication implemented in reference backend driver are discussed in a separate
page.
The flow of updates is as follows:
• if a port that is bound to the agent is attached to a QoS policy, then ML2 plugin detects the change
by relying on ML2 QoS extension driver, and notifies the agent about a port change. The agent
proceeds with the notification by calling to get_device_details() and getting the new port dict that
contains a new qos_policy_id. Each device details dict is passed into l2 agent extension manager
that passes it down into every enabled extension, including QoS. QoS extension sees that there is
a new unknown QoS policy for a port, so it uses ResourcesPullRpcApi to fetch the current state
of the policy (with all the rules included) from the server. After that, the QoS extension applies
the rules by calling into QoS driver that corresponds to the agent.
• For floating IPs, a fip_qos L3 agent extension was implemented. This extension receives and
processes router updates. For each update, it goes over each floating IP associated to the router. If
a floating IP has a QoS policy associated to it, the extension uses ResourcesPullRpcApi to fetch
the policy details from the Neutron server. If the policy includes bandwidth_limit rules, the
extension applies them to the appropriate router device by directly calling the l3_tc_lib.
• on existing QoS policy update (it includes any policy or its rules change), server pushes the new
policy object state through ResourcesPushRpcApi interface. The interface fans out the serialized
(dehydrated) object to any agent that is listening for QoS policy updates. If an agent has seen the
policy before (it is attached to one of the ports/floating IPs it maintains), then it proceeds with applying
the updates to the port/floating IP. Otherwise, the agent silently ignores the update.
Agent backends
At the moment, QoS is supported by Open vSwitch, SR-IOV and Linux bridge ml2 drivers.
Each agent backend defines a QoS driver that implements the QosAgentDriver interface:
• Open vSwitch (QosOVSAgentDriver);
• SR-IOV (QosSRIOVAgentDriver);
• Linux bridge (QosLinuxbridgeAgentDriver).
For the Networking back ends, QoS supported rules, and traffic directions (from the VM point of view),
please see the table: Networking back ends, supported rules, and traffic direction.
Open vSwitch
The Open vSwitch DSCP marking implementation relies on the recent addition of the
ovs_agent_extension_api OVSAgentExtensionAPI to request access to the integration bridge functions:
• add_flow
• mod_flow
• delete_flows
• dump_flows_for
The DSCP markings are in fact configured on the port by means of openflow rules.
Note: As of the Ussuri release, QoS rules can be applied to direct ports with hardware offload capa-
bility (switchdev); this requires Open vSwitch version 2.11.0 or newer and a Linux kernel based on kernel
5.4.0 or newer.
SR-IOV
SR-IOV bandwidth limit and minimum bandwidth implementation relies on the new pci_lib function:
• set_vf_rate
As the name of the function suggests, the limit is applied on a Virtual Function (VF). This function
has a parameter called rate_type and its value can be set to rate or min_tx_rate, which is for enforcing
bandwidth limit or minimum bandwidth respectively.
The ip link interface has the following limitation for bandwidth limit: it uses Mbps as the unit of bandwidth
measurement, not kbps, and does not support floating point numbers. So if the limit is set to something less
than 1000 kbps, it's set to 1 Mbps only. If the limit is set to something that is not divisible into 1000 kbps
chunks, then the effective limit is rounded to the nearest integer Mbps value (for example, a limit of
2300 kbps becomes 2 Mbps).
Linux bridge
• delete_tbf_bw_limit
The ingress bandwidth limit is configured on the tap port by setting a simple tc-tbf queueing discipline
(qdisc) on the port. It requires the value of the HZ parameter configured in the kernel on the host. This value is
necessary to calculate the minimal burst value which is set in tc. Details about how it is calculated can
be found here. This solution is similar to the Open vSwitch implementation.
The Linux bridge DSCP marking implementation relies on the linuxbridge_extension_api to request
access to the IptablesManager class and to manage chains in the mangle table in iptables.
The QoS framework is flexible enough to support any third-party vendor. To integrate a third party driver
(that just wants to be aware of the QoS create/update/delete API calls), one needs to implement neu-
tron.services.qos.drivers.base and register the driver during the core plugin or mechanism driver load;
see the neutron.services.qos.drivers.openvswitch.driver register method for an example.
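As a rough sketch of such an integration (the class, name and capability values below are placeholders, and the DriverBase constructor arguments should be checked against neutron.services.qos.drivers.base):
from neutron.services.qos.drivers import base

DRIVER = None

class MyVendorQoSDriver(base.DriverBase):
    """Placeholder driver that only relays QoS API calls to a backend."""

    def create_policy(self, context, policy):
        pass  # relay the new policy to the vendor backend

    def update_policy(self, context, policy):
        pass

    def delete_policy(self, context, policy):
        pass

def register():
    """Called while loading the core plugin or mechanism driver."""
    global DRIVER
    if not DRIVER:
        # constructor arguments (name, supported rules, VIF/VNIC types,
        # RPC needs) are assumptions; see DriverBase for the real signature
        DRIVER = MyVendorQoSDriver(
            name='myvendor', vif_types=None, vnic_types=None,
            supported_rules={'bandwidth_limit': {}},
            requires_rpc_notifications=False)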
Note: All the functionality MUST be implemented by the vendor; Neutron's QoS framework will just
act as an interface to relay the received QoS API requests and help with database persistence for the
API operations.
Note: The L3 agent fip_qos extension does not have a driver implementation; it directly uses the
l3_tc_lib for all types of routers.
Configuration
Testing strategy
All the code added or extended as part of the effort got reasonable unit test coverage.
Neutron objects
Base unit test classes to validate neutron objects were implemented in a way that allows code reuse when
introducing a new object type.
There are two test classes that are utilized for that:
• BaseObjectIfaceTestCase: class to validate basic object operations (mostly CRUD) with database
layer isolated.
• BaseDbObjectTestCase: class to validate the same operations with models in place and database
layer unmocked.
Every new object implemented on top of one of those classes is expected to either inherit existing test
cases as is, or reimplement them, if it makes sense in terms of how those objects are implemented. Specific
test classes can obviously extend the set of test cases as they see fit (e.g. you may need to define new
test cases for those additional methods that you may add to your object implementations on top of base
semantics common to all neutron objects).
Functional tests
API tests
API tests for basic CRUD operations for ports, networks, policies, and rules were added in:
• neutron-tempest-plugin.api.test_qos
Most resources exposed by the Neutron API are subject to quota limits. The Neutron API exposes an
extension for managing such quotas. Quota limits are enforced at the API layer, before the request is
dispatched to the plugin.
Default values for quota limits are specified in neutron.conf. Admin users can override those default
values on a per-project basis. Limits are stored in the Neutron database; if no limit is found for a
given resource and project, then the default value for such resource is used. Configuration-based quota
management, where every project gets the same quota limit specified in the configuration file, has been
deprecated as of the Liberty release.
Please note that Neutron supports neither specification of quota limits per user nor quota manage-
ment for hierarchical multitenancy (as a matter of fact Neutron does not support hierarchical multite-
nancy at all). Also, quota limits are currently not enforced on RPC interfaces listening on the AMQP
bus.
Plugin and ML2 drivers are not supposed to enforce quotas for resources they manage. However, the
subnet_allocation1 extension is an exception and will be discussed below.
The quota management and enforcement mechanisms discussed here apply to every resource which has
been registered with the Quota engine, regardless of whether such resource belongs to the core Neutron
API or one of its extensions.
For a reservation to be successful, the total amount of resources requested, plus the total amount of
resources reserved, plus the total amount of resources already stored in the database should not exceed
the projects quota limit.
Finally, both quota management and enforcement rely on a quota driver2 , whose task is basically to
perform database operations.
Quota Management
From a performance perspective, having a table tracking resource usage has some advantages, albeit not
fundamental. Indeed the time required for executing queries to explicitly count objects will increase with
the number of records in the table. On the other hand, using TrackedResource will fetch a single record,
but has the drawback of having to execute an UPDATE statement once the operation is completed.
Nevertheless, CountableResource instances do not simply perform a SELECT query on the relevant
table for a resource, but invoke a plugin method, which might execute several statements and sometimes
even interacts with the backend before returning. Resource usage tracking also becomes important for
operational correctness when coupled with the concept of resource reservation, discussed in another
section of this chapter.
Tracking quota usage is not as simple as updating a counter every time resources are created or deleted.
Indeed a quota-limited resource in Neutron can be created in several ways. While a RESTful API request
is the most common one, resources can be created by RPC handlers listening on the AMQP bus, such as
those which create DHCP ports, or by plugin operations, such as those which create router ports.
To this aim, TrackedResource instances are initialised with a reference to the model class for the resource
for which they track usage data. During object initialisation, SqlAlchemy event handlers are installed
for this class. The event handler is executed after a record is inserted or deleted. As a result, usage data for
that resource will be marked as dirty once the operation completes, so that the next time usage data
is requested, it will be synchronised by counting resource usage from the database. Even if this solution
has some drawbacks, listed in the exceptions and caveats section, it is more reliable than solutions such
as:
• Updating the usage counters with the new correct value every time an operation completes.
• Having a periodic task synchronising quota usage data with actual data in the Neutron DB.
Finally, regardless of whether CountableResource or TrackedResource is used, the quota engine always
invokes its count() method to retrieve resource usage. Therefore, from the perspective of the Quota
engine there is absolutely no difference between CountableResource and TrackedResource.
Quota Enforcement
Before dispatching a request to the plugin, the Neutron base controller5 attempts to make a reserva-
tion for requested resource(s). Reservations are made by calling the make_reservation method in neu-
tron.quota.QuotaEngine. The process of making a reservation is fairly straightforward:
• Get current resource usages. This is achieved by invoking the count method on every requested
resource, and then retrieving the amount of reserved resources.
• Fetch current quota limits for requested resources, by invoking the _get_tenant_quotas method.
• Fetch expired reservations for selected resources. This amount will be subtracted from resource
usage. As in most cases there wont be any expired reservation, this approach actually requires less
DB operations than doing a sum of non-expired, reserved resources for each request.
• For each resource calculate its headroom, and verify the requested amount of resource is less than
the headroom.
• If the above is true for all resources, the reservation is saved in the DB, otherwise an OverQuo-
taLimit exception is raised.
The quota engine is able to make a reservation for multiple resources. However, it is worth noting that
because of the current structure of the Neutron API layer, there will not be any practical case in which a
5 Base controller class: http://opendev.org/openstack/neutron/tree/neutron/api/v2/base.py#n50
reservation for multiple resources is made. For this reason, performance optimisations avoiding repeated
queries for every resource are not part of the current implementation.
In order to ensure correct operations, a row-level lock is acquired in the transaction which cre-
ates the reservation. The lock is acquired when reading usage data. In case of write-set certifica-
tion failures, which can occur in active/active clusters such as MySQL galera, the decorator neu-
tron_lib.db.api.retry_db_errors will retry the transaction if a DBDeadLock exception is raised. While
non-locking approaches are possible, it has been found that, since a non-locking algorithm increases
the chances of collision, the cost of handling a DBDeadlock is still lower than the cost of retrying the
operation when a collision is detected. A study in this direction was conducted for IP allocation oper-
ations, but the same principles apply here as well6. Nevertheless, moving away from DB-level locks is
something that must happen for quota enforcement in the future.
Committing and cancelling a reservation is as simple as deleting the reservation itself. When a reserva-
tion is committed, the resources which were committed are now stored in the database, so the reservation
itself should be deleted. The Neutron quota engine simply removes the record when cancelling a reser-
vation (ie: the request failed to complete), and also marks quota usage info as dirty when the reservation
is committed (ie: the request completed correctly). Reservations are committed or cancelled by respec-
tively calling the commit_reservation and cancel_reservation methods in neutron.quota.QuotaEngine.
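A hedged sketch of that reservation lifecycle as seen from an API-controller-like caller; the QUOTAS singleton mirrors neutron.quota, but the exact method signatures and the reservation object's attributes are assumptions here.
from neutron import quota

def create_with_reservation(context, plugin, project_id, create_func):
    # reserve one port for the project (deltas are per-resource amounts)
    reservation = quota.QUOTAS.make_reservation(
        context, project_id, {'port': 1}, plugin)
    try:
        result = create_func(context)
    except Exception:
        # request failed: just drop the reservation record
        quota.QUOTAS.cancel_reservation(context, reservation.reservation_id)
        raise
    # request succeeded: remove the reservation and mark usage dirty
    quota.QUOTAS.commit_reservation(context, reservation.reservation_id)
    return result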
Reservations are not perennial. Eternal reservations would eventually exhaust projects' quotas because
they would never be removed when an API worker crashes whilst in the middle of an operation. Reser-
vation expiration is currently set to 120 seconds, and is not configurable, not yet at least. Expired
reservations are not counted when calculating resource usage. While creating a reservation, if any ex-
pired reservation is found, all expired reservations for that project and resource will be removed from the
database, thus avoiding build-up of expired reservations.
By default plugins do not leverage resource tracking. Having the plugin explicitly declare which re-
sources should be tracked is a precise design choice aimed at limiting as much as possible the chance of
introducing errors in existing plugins.
For this reason a plugin must declare which resource it intends to track. This can be achieved using
the tracked_resources decorator available in the neutron.quota.resource_registry module. The decorator
should ideally be applied to the plugins __init__ method.
The decorator accepts a list of keyword arguments as input. The name of each argument must be a resource
name, and the value of the argument must be a DB model class. For example:
@resource_registry.tracked_resources(network=models_v2.Network,
                                     port=models_v2.Port,
                                     subnet=models_v2.Subnet,
                                     subnetpool=models_v2.SubnetPool)
This will ensure network, port, subnet and subnetpool resources are tracked. In theory, it is possible to use
this decorator multiple times, and not exclusively to __init__ methods. However, this would eventually
lead to code readability and maintainability problems, so developers are strongly encouraged to apply this
decorator exclusively to the plugin's __init__ method (or any other method which is called by the plugin
only once during its initialization).
6 http://lists.openstack.org/pipermail/openstack-dev/2015-February/057534.html
Neutron unfortunately does not have a layer which is called before dispatching the operation to the
plugin and which can be leveraged both from the RESTful and the RPC over AMQP APIs. In particular the RPC
handlers call straight into the plugin, without doing any request authorisation or quota enforcement.
Therefore RPC handlers must explicitly indicate if they are going to call the plugin to create or delete any
sort of resources. This is achieved in a simple way, by ensuring modified resources are marked as dirty
after the RPC handler execution terminates. To this aim developers can use the mark_resources_dirty
decorator available in the module neutron.quota.resource_registry.
The decorator would scan the whole list of registered resources, and store the dirty status for their usage
trackers in the database for those resources for which items have been created or destroyed during the
plugin operation.
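A hedged sketch of an RPC handler using that decorator; the handler, plugin call and payload are illustrative.
from neutron.quota import resource_registry

class ExampleRpcHandler(object):

    def __init__(self, plugin):
        self.plugin = plugin

    @resource_registry.mark_resources_dirty
    def create_dhcp_port(self, context, port_data):
        # the handler calls straight into the plugin, so the decorator
        # marks usage trackers dirty for any resource created here
        return self.plugin.create_port(context, {'port': port_data})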
References
Retrying Operations
Inside of the neutron_lib.db.api module there is a decorator called retry_if_session_inactive. This should
be used to protect any functions that perform DB operations. This decorator will capture any deadlock
errors, RetryRequests, connection errors, and unique constraint violations that are thrown by the function
it is protecting.
This decorator will not retry an operation if the function it is applied to is called within an active session.
This is because the majority of the exceptions it captures put the session into a partially rolled back
state so it is no longer usable. It is important to ensure there is a decorator outside of the start of the
transaction. The decorators are safe to nest if a function is sometimes called inside of another transaction.
If a function being protected does not take context as an argument, the retry_db_errors decorator
may be used instead. It retries the same exceptions and has the same anti-nesting behav-
ior as retry_if_session_inactive, but it does not check if a session is attached to any context keywords.
(retry_if_session_inactive just uses retry_db_errors internally after checking the session.)
Idempotency on Failures
The function that is being decorated should always fully clean up whenever it encounters an exception,
so it's safe to retry the operation. So if a function creates a DB object, commits, then creates another, the
function must have a cleanup handler to remove the first DB object in the case that the second one fails.
Assume any DB operation can throw a retriable error.
You may see some retry decorators at the API layers in Neutron; however, we are trying to eliminate
them because each API operation has many independent steps that make ensuring idempotency on
partial failures very difficult.
Argument Mutation
A decorated function should not mutate any complex arguments which are passed into it. If it does, it
should have an exception handler that reverts the change so it's safe to retry.
The decorator will automatically create deep copies of sets, lists, and dicts which are passed through it,
but it will leave the other arguments alone.
One of the difficulties with detecting race conditions to create a DB record with a unique constraint is
determining where to put the exception handler because a constraint violation can happen immediately
on flush or it may not happen until the transaction is being committed on the exit of the
session context manager. So we would end up with code that looks something like this:
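An illustrative sketch of that pattern (the model, helper functions and user-facing exceptions are placeholders):
from neutron_lib.db import api as db_api
from oslo_db import exception as db_exc

def create_port(context, ip_address, mac_address):
    _ensure_mac_not_in_use(context, mac_address)
    _ensure_ip_not_in_use(context, ip_address)
    try:
        with db_api.CONTEXT_WRITER.using(context):
            port_obj = Port(ip=ip_address, mac=mac_address)
            context.session.add(port_obj)
            try:
                # the constraint violation may surface immediately...
                context.session.flush()
            except db_exc.DBDuplicateEntry:
                raise SomeUserFacingConflictError()
    except db_exc.DBDuplicateEntry:
        # ...or only when the transaction is committed on exit of the
        # session context manager
        raise SomeUserFacingConflictError()
    return port_obj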
So we end up with an exception handler that has to understand where things went wrong and convert
them into appropriate exceptions for the end-users. This distracts significantly from the main purpose of
create_port.
Since the retry decorator will automatically catch and retry DB duplicate errors for us, we can allow it
to retry on this race condition, which will allow the original validation logic to be re-executed and raise
the appropriate error. This keeps validation logic in one place and makes the code cleaner.
@db_api.retry_if_session_inactive()
def create_port(context, ip_address, mac_address):
    _ensure_mac_not_in_use(context, mac_address)
    _ensure_ip_not_in_use(context, ip_address)
    with db_api.CONTEXT_READER.using(context):
        port_obj = Port(ip=ip_address, mac=mac_address)
        do_expensive_thing(...)
        do_extra_other_thing(...)
        return port_obj
Nesting
Once the decorator retries an operation the maximum number of times, it will attach a flag to the excep-
tion it raises further up that will prevent decorators around the calling functions from retrying the error
again. This prevents an exponential increase in the number of retries if they are layered.
Usage
@db_api.retry_if_session_inactive()
def create_elephant(context, elephant_details):
    ...

@db_api.retry_if_session_inactive()
def atomic_bulk_create_elephants(context, elephants):
    with db_api.CONTEXT_WRITER.using(context):
        for elephant in elephants:
            # note that if create_elephant throws a retriable
            # exception, the decorator around it will not retry
            # because the session is active. The decorator around
            # atomic_bulk_create_elephants will be responsible for
            # retrying the entire operation.
            create_elephant(context, elephant)
Neutron uses the oslo.messaging library to provide an internal communication channel between Neutron
services. This communication is typically done via AMQP, but those details are mostly hidden by the
use of oslo.messaging and it could be some other protocol in the future.
RPC APIs are defined in Neutron in two parts: client side and server side.
Client Side
class ClientAPI(object):
    """Client side RPC interface definition."""
This class defines the client side interface for an rpc API. The interface has 2 methods. The first method
existed in version 1.0 of the interface. The second method was added in version 1.1. When the newer
method is called, it specifies that the remote side must implement at least version 1.1 to handle this
request.
Server Side
import oslo_messaging
class ServerAPI(object):

    target = oslo_messaging.Target(version='1.1')
This class implements the server side of the interface. The oslo_messaging.Target() defined says that
this class currently implements version 1.1 of the interface.
Versioning
Note that changes to rpc interfaces must always be done in a backwards compatible way. The server side
should always be able to handle older clients (within the same major version series, such as 1.X).
It is possible to bump the major version number and drop some code only needed for backwards
compatibility. For more information about how to do that, see https://wiki.openstack.org/wiki/
RpcMajorVersionUpdates.
Example Change
As an example minor API change, let's assume we want to add a new parameter to
my_remote_method_2. First, we add the argument on the server side. To be backwards compatible,
the new argument must have a default value set so that the interface will still work even if the argument
is not supplied. Also, the interface's minor version number must be incremented. So, the new server side
code would look like this:
import oslo_messaging
class ServerAPI(object):

    target = oslo_messaging.Target(version='1.2')
We can now update the client side to pass the new argument. The client must also specify that version
1.2 is required for this method call to be successful. The updated client side would look like this:
import oslo_messaging
class ClientAPI(object):
    """Client side RPC interface definition."""
As discussed before, RPC APIs are defined in two parts: a client side and a server side. Several of these
pairs exist in the Neutron code base. The code base is being updated with documentation on every rpc
interface implementation that indicates where the corresponding server or client code is located.
Example: DHCP
The DHCP agent includes a client API, neutron.agent.dhcp.agent.DhcpPluginAPI. The DHCP agent
uses this class to call remote methods back in the Neutron server. The server side is defined in neu-
tron.api.rpc.handlers.dhcp_rpc.DhcpRpcCallback. It is up to the Neutron plugin in use to decide whether
the DhcpRpcCallback interface should be exposed.
Similarly, there is an RPC interface defined that allows the Neutron plugin to re-
motely invoke methods in the DHCP agent. The client side is defined in neu-
tron.api.rpc.agentnotifiers.dhcp_rpc_agent_api.DhcpAgentNotifyAPI. The server side of this interface
that runs in the DHCP agent is neutron.agent.dhcp.agent.DhcpAgent.
More Info
Neutron already has a callback system for in-process resource callbacks where publishers and sub-
scribers are able to publish and subscribe for resource events.
This system is different, and is intended to be used for inter-process callbacks, via the messaging fanout
mechanisms.
In Neutron, agents may need to subscribe to specific resource details which may change over time. And
the purpose of this messaging callback system is to allow agent subscription to those resources without
the need to extend or modify existing RPC calls, or to create new RPC messages.
A few resources which can benefit from this system:
• QoS policies;
• Security Groups.
Using a remote publisher/subscriber pattern, the information about such resources could be published
using fanout messages to all interested nodes, minimizing messaging requests from agents to server
since the agents get subscribed for their whole lifecycle (unless they unsubscribe).
Within an agent, there could be multiple subscriber callbacks to the same resource events; the resource
updates would be dispatched to the subscriber callbacks from a single message. Any update would come
in a single message, doing only a single oslo versioned objects deserialization on each receiving agent.
This publishing/subscription mechanism is highly dependent on the format of the resources passed
around. This is why the library only allows versioned objects to be published and subscribed. Oslo
versioned objects allow object version down/up conversion.23
For the VOs versioning schema look here:4
versioned_objects serialization/deserialization with the obj_to_primitive(target_version=..) and primi-
tive_to_obj()1 methods are used internally to convert/retrieve objects before/after messaging.
Serialized versioned objects look like:
{'versioned_object.version': '1.0',
 'versioned_object.name': 'QoSPolicy',
 'versioned_object.data': {'rules': [
     {'versioned_object.version': '1.0',
      'versioned_object.name': 'QoSBandwidthLimitRule',
      # rule fields elided
      }],
  # remaining policy fields elided
  }}
In this section we assume the standard Neutron upgrade process, which means upgrading the server first
and then upgrading the agents (see the upgrade strategy documentation for more information).
We provide an automatic method which avoids manual pinning and unpinning of versions by the admin-
istrator, which could be prone to error.
2 https://github.com/openstack/oslo.versionedobjects/blob/ce00f18f7e9143b5175e889970564813189e3e6d/oslo_versionedobjects/base.py#L474
3 https://github.com/openstack/oslo.versionedobjects/blob/ce00f18f7e9143b5175e889970564813189e3e6d/oslo_versionedobjects/tests/test_objects.py#L114
4 https://github.com/openstack/oslo.versionedobjects/blob/ce00f18f7e9143b5175e889970564813189e3e6d/oslo_versionedobjects/base.py#L248
1 https://github.com/openstack/oslo.versionedobjects/blob/ce00f18f7e9143b5175e889970564813189e3e6d/oslo_versionedobjects/tests/test_objects.py#L410
Resource pull requests will always be ok because the underlying resource RPC does provide the version
of the requested resource id / ids. The server will be upgraded first, so it will always be able to satisfy
any version the agents request.
Agents will subscribe to the neutron-vo-<resource_type>-<version> fanout queue which carries updated
objects for the version they know about. The versions they know about depend on the runtime Neutron
versioned objects they started with.
When the server upgrades, it should be able to instantly calculate a census of agent versions per object
(we will define a mechanism for this in a later section). It will use the census to send fanout messages
on all the versions a resource type spans.
For example, if neutron-server knew it has rpc-callback aware agents with versions 1.0, and versions 1.2
of resource type A, any update would be sent to neutron-vo-A_1.0 and neutron-vo-A_1.2.
TODO(mangelajo): Verify that after upgrade is finished any unused messaging resources (queues, ex-
changes, and so on) are released as older agents go away and neutron-server stops producing new mes-
sage casts. Otherwise document the need for a neutron-server restart after rolling upgrade has finished
if we want the queues cleaned up.
We add a row to the agent db for tracking agent known objects and version numbers. This resembles the
implementation of the configuration column.
Agents now report at start time not only their configuration, but also their subscribed object type / version
pairs, which are stored in the database and made available to any neutron-server requesting them.
There was a subset of Liberty agents depending on QosPolicy that required QosPolicy: 1.0 if the qos
plugin is installed. We were able to identify those by the binary name (included in the report):
• neutron-openvswitch-agent
• neutron-sriov-nic-agent
This transition was handled in the Mitaka version, but it's not handled anymore in Newton, since only
single major version step upgrades are supported.
Version discovery
With the above mechanism in place, and considering the exception of neutron-openvswitch-agent and neutron-sriov-agent requiring QosPolicy 1.0, we discover the subset of versions to be sent on every push notification.
Agents that are in down state are excluded from this calculation. We use an extended timeout for agents in this calculation to make sure we are on the safe side, especially if the deployer marked agents with low timeouts.
Starting at Mitaka, any agent interested in versioned objects via this API should report its resource/version tuples of interest (the resource type / version pairs it is subscribed to).
The plugins interested in this RPC mechanism must inherit AgentDbMixin, since this mechanism is
only intended to be used from agents at the moment, while it could be extended to be consumed from
other components if necessary.
The AgentDbMixin provides:
Caching mechanism
The version subset per object is cached to avoid DB requests on every push given that we assume that
all old agents are already registered at the time of upgrade.
The cached subset is re-evaluated (to cut down the version sets as agents upgrade) after neutron.api.rpc.callbacks.version_manager.VERSIONS_TTL.
As a fast path to update this cache on all neutron-servers when upgraded agents come up (or old agents
revive after a long timeout or even a downgrade) the server registering the new status update notifies the
other servers about the new consumer resource versions via cast.
All notifications for all calculated version sets must be sent, as non-upgraded agents would otherwise
not receive them.
It is safe to send notifications to any fanout queue as they will be discarded if no agent is listening.
neutron-vo-<resource_class_name>-<version>
In the future, we may want to get oslo messaging to support subscribing topics dynamically, then we
may want to use:
neutron-vo-<resource_class_name>-<resource_id>-<version> instead,
or something equivalent which would allow fine granularity for the receivers to only get interesting
information to them.
Subscribing to resources
Imagine that you have agent A, which just got to handle a new port that has an associated security group and QoS policy.
The agent code processing port updates may look like:
def process_resource_updates(context, resource_type, resource_list, event_type):
    # send to the right handler which will update any control plane
    # details related to the updated resources...
    pass

def subscribe_resources():
    registry.register(process_resource_updates, resources.SEC_GROUP)
    registry.register(process_resource_updates, resources.QOS_POLICY)

def port_update(port):
    # pull the port's security group and QoS policy here and wire them up
    ...
On the server side, resource updates could come from anywhere: a service plugin, an extension, anything that updates, creates, or destroys a resource that is of interest to subscribed agents.
A callback is expected to receive a list of resources. When resources in the list belong to the same resource type, a single push RPC message is sent; if the list contains objects of different resource types, resources of each type are grouped and sent separately, one push RPC message per type. On the receiver side, resources in a list always belong to the same type. In other words, a server-side push of a list of heterogeneous objects will result in N messages on the bus and N client-side callback invocations, where N is the number of unique resource types in the given list, e.g. L(A, A, B, C, C, C) would be fragmented into L1(A, A), L2(B), L3(C, C, C), and each list pushed separately.
Note: there is no guarantee in terms of order in which separate resource lists will be delivered to con-
sumers.
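A hedged sketch of that per-type fragmentation (the helper name is illustrative; obj_name() is the standard oslo.versionedobjects way to obtain a resource type name):

import itertools


def fragment_by_type(resource_list):
    def type_of(resource):
        return resource.obj_name()

    # one homogeneous sub-list per resource type; each sub-list is then
    # pushed as its own RPC message
    ordered = sorted(resource_list, key=type_of)
    return [list(group) for _, group in itertools.groupby(ordered, type_of)]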
The server/publisher side may look like:
def create_qos_policy(...):
    policy = fetch_policy(...)
    update_the_db(...)
    registry.push([policy], events.CREATED)

def update_qos_policy(...):
    policy = fetch_policy(...)
    update_the_db(...)
    registry.push([policy], events.UPDATED)

def delete_qos_policy(...):
    policy = fetch_policy(...)
    update_the_db(...)
    registry.push([policy], events.DELETED)
References
Segments extension
Neutron has an extension that allows CRUD operations on the /segments resource in the API, that
corresponds to the NetworkSegment entity in the DB layer. The extension is implemented as a
service plug-in.
Note: The segments service plug-in is not configured by default. To configure it, add segments to
the service_plugins parameter in neutron.conf
Core plug-ins can coordinate with the segments service plug-in by subscribing callbacks to events as-
sociated to the SEGMENT resource. Currently, the segments plug-in notifies subscribers of the following
events:
• PRECOMMIT_CREATE
• AFTER_CREATE
• BEFORE_DELETE
• PRECOMMIT_DELETE
• AFTER_DELETE
As of this writing, ML2 and OVN register callbacks to receive events from the segments service plug-
in. The ML2 plug-in defines the callback _handle_segment_change to process all the relevant
segments events.
Service Extensions
Starting with the Kilo release, these services are split into separate repositories, and more extensions are being developed as well. Service plugins are a clean way of adding functionality in a cohesive manner while keeping it decoupled from the guts of the framework. The aforementioned features are developed as extensions (also known as service plugins), and more capabilities are being added to Neutron following the same pattern. For those that are deemed orthogonal to any network service (e.g. tags, timestamps, auto_allocate, etc.), there is an informal mechanism to have these loaded automatically at server startup. If you consider adding an entry to the dictionary, please be kind and reach out to your PTL or a member of the drivers team for approval.
1. http://opendev.org/openstack/neutron-fwaas/
2. http://opendev.org/openstack/neutron-vpnaas/
There are many cases where a service may want to create a resource managed by the core plugin (e.g.
ports, networks, subnets). This can be achieved by importing the plugins directory and getting a direct
reference to the core plugin:
from neutron_lib.plugins import directory

plugin = directory.get_plugin()
plugin.create_port(context, port_dict)
However, there is an important caveat. Calls to the core plugin in almost every case should not be made
inside of an ongoing transaction. This is because many plugins (including ML2), can be configured to
make calls to a backend after creating or modifying an object. If the call is made inside of a transaction
and the transaction is rolled back after the core plugin call, the backend will not be notified that the
change was undone. This will lead to consistency errors between the core plugin and its configured
backend(s).
ML2 has a guard against certain methods being called with an active DB transaction to help prevent
developers from accidentally making this mistake. It will raise an error that says explicitly that the
method should not be called within a transaction.
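A hedged sketch of the pattern this implies: finish the service plugin's own database work first, then call the core plugin. The _update_service_tables() helper is hypothetical; directory.get_plugin() and the CONTEXT_WRITER usage follow neutron-lib.

from neutron_lib.db import api as db_api
from neutron_lib.plugins import directory


def create_service_port(context, port_dict):
    with db_api.CONTEXT_WRITER.using(context):
        _update_service_tables(context)   # hypothetical service-plugin bookkeeping
    # the transaction above is committed; it is now safe to call the core plugin
    plugin = directory.get_plugin()
    return plugin.create_port(context, port_dict)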
A usual Neutron setup consists of multiple services and agents running on one or multiple nodes (though
some exotic setups potentially may not need any agents). Each of those services provides some of the
networking or API services. Among those of special interest:
1. neutron-server that provides API endpoints and serves as a single point of access to the database.
It usually runs on nodes called Controllers.
2. Layer2 agent that can utilize Open vSwitch, Linuxbridge or other vendor specific technology to
provide network segmentation and isolation for project networks. The L2 agent should run on
every node where it is deemed responsible for wiring and securing virtual interfaces (usually both
Compute and Network nodes).
3. Layer3 agent that runs on Network node and provides East-West and North-South routing plus
some advanced services such as VPNaaS.
For the purpose of this document, we refer to all services, servers and agents that run on any node simply as services.
Entry points
Entry points for services are defined in setup.cfg under the console_scripts section. Those entry points should generally point to main() functions located under the neutron/cmd/ path.
Note: some existing vendor/plugin agents still maintain their entry points in other locations. Developers
responsible for those agents are welcome to apply the guideline above.
Neutron extensively utilizes the eventlet library to provide an asynchronous concurrency model to its services. To utilize it correctly, the following should be kept in mind.
If a service utilizes the eventlet library, then it should not call eventlet.monkey_patch() directly but instead maintain its entry point main() function under neutron/cmd/eventlet/. In that case, the standard Python library will be automatically patched for the service on entry point import (monkey patching is done inside the python package file).
Note: an entry point main() function may just be an indirection to a real callable located elsewhere, as
is done for reference services such as DHCP, L3 and the neutron-server.
For more info on the rationale behind the code tree setup, see the corresponding cross-project spec.
Only the neutron-server connects to the neutron database. Agents may never connect directly to the
database, as this would break the ability to do rolling upgrades.
Configuration Options
In addition to database access, configuration options are segregated between neutron-server and agents.
Both services and agents may load the main `neutron.conf` since this file should contain the
oslo.messaging configuration for internal Neutron RPCs and may contain host specific configuration
such as file paths. In addition `neutron.conf` contains the database, Keystone, and Nova creden-
tials and endpoints strictly for neutron-server to use.
In addition neutron-server may load a plugin specific configuration file, yet the agents should not. As
the plugin configuration is primarily site wide options and the plugin provides the persistence layer for
Neutron, agents should be instructed to act upon these values via RPC.
Each individual agent may have its own configuration file. This file should be loaded after the main
`neutron.conf` file, so the agent configuration takes precedence. The agent specific configuration
may contain configurations which vary between hosts in a Neutron deployment such as the local_ip
for an L2 agent. If any agent requires access to additional external services beyond the neutron RPC,
those endpoints should be defined in the agent-specific configuration file (e.g. nova metadata for meta-
data agent).
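As an illustration of that load order (the file paths are the conventional ones, not mandated), oslo.config gives later --config-file arguments precedence, so the agent-specific file overrides the main neutron.conf:

from oslo_config import cfg

# when the same option appears in several config files, the file parsed last
# wins, so the agent file takes precedence over the main neutron.conf
cfg.CONF(['--config-file', '/etc/neutron/neutron.conf',
          '--config-file', '/etc/neutron/plugins/ml2/openvswitch_agent.ini'],
         project='neutron')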
Tag service plugin allows users to set tags on their resources. Tagging resources can be used by external
systems or any other clients of the Neutron REST API (and NOT backend drivers).
The following use cases refer to adding tags to networks, but the same can be applicable to any other
Neutron resource:
1) Ability to map different networks in different OpenStack locations to one logically same network
(for Multi site OpenStack)
2) Ability to map Ids from different management/orchestration systems to OpenStack networks in
mixed environments, for example for project Kuryr, map docker network id to neutron network id
3) Leverage tags by deployment tools
4) allow operators to tag information about provider networks (e.g. high-bandwidth, low-latency,
etc)
5) new features like get-me-a-network or a similar port scheduler could choose a network for a port
based on tags
Which Resources
The tag system uses the standardattr mechanism, so it targets resources that have that mechanism. Some resources with standard attributes do not fit the tag support use cases (e.g. security_group_rule). If tag support is added for a new resource, the resource model should inherit HasStandardAttributes and must implement the api_parent and tag_support properties. The change must also include a release note for API users.
Current API resources extended by tag extensions:
• floatingips
• networks
• network_segment_ranges
• policies
• ports
• routers
• security_groups
• subnetpools
• subnets
• trunks
Model
A tag is not a standalone resource; it is always related to an existing resource. The following shows the tag model:
+------------------+ +------------------+
| Network | | Tag |
+------------------+ +------------------+
| standard_attr_id +------> | standard_attr_id |
| | | tag |
| | | |
+------------------+ +------------------+
The Tag model has only two columns, and the tag column is just a string. Tags are defined per resource: a tag is unique within a resource, but the same tag value can be used across different resources.
API
The following shows the basic API for tags. A tag is regarded as a subresource of a resource, so the API always includes the id of the resource the tag relates to.
Add a single tag on a network
PUT /v2.0/networks/{network_id}/tags/{tag}
Returns 201 Created. If the tag already exists, no error is raised; it just returns 201 Created, since the OpenStack Development Mailing List discussion concluded that a PUT should have no issue updating an existing tag.
Replace set of tags on a network
PUT /v2.0/networks/{network_id}/tags
{
'tags': ['foo', 'bar', 'baz']
}
Response
{
'tags': ['foo', 'bar', 'baz']
}
GET /v2.0/networks/{network_id}/tags/{tag}
DELETE /v2.0/networks/{network_id}/tags/{tag}
DELETE /v2.0/networks/{network_id}/tags
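For illustration, a hedged sketch of driving these endpoints with the requests library; the token, endpoint and network id below are placeholders you would obtain from Keystone and a prior network listing:

import requests

TOKEN = 'placeholder-keystone-token'
NEUTRON = 'http://controller:9696'       # placeholder Neutron endpoint
network_id = 'placeholder-network-uuid'
headers = {'X-Auth-Token': TOKEN}
base = '%s/v2.0/networks/%s/tags' % (NEUTRON, network_id)

requests.put('%s/production' % base, headers=headers)      # add a single tag
requests.put(base, json={'tags': ['foo', 'bar', 'baz']},
             headers=headers)                               # replace the set
requests.get('%s/foo' % base, headers=headers)              # check a tag
requests.delete('%s/foo' % base, headers=headers)           # remove a tag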
PUT and DELETE for collections are the motivation for extending the API framework.
Note: Much of this document discusses upgrade considerations for the Neutron reference implementation using Neutron's agents. It is expected that each Neutron plugin provides its own documentation that discusses upgrade considerations specific to that choice of backend. For example, OVN does not use Neutron agents, but does have a local controller that runs on each compute node. OVN supports rolling upgrades, but information about how that works should be covered in the documentation for the OVN Neutron plugin.
Upgrade strategy
Rolling upgrade
Rolling upgrades imply that during some interval of time there will be services of different code versions
running and interacting in the same cloud. It puts multiple constraints onto the software.
1. older services should be able to talk with newer services.
2. older services should not require the database to have older schema (otherwise newer services that
require the newer schema would not work).
More info on rolling upgrades in OpenStack.
Those requirements are achieved in Neutron by:
1. If the Neutron backend makes use of Neutron agents, the Neutron server has backwards compatibility code to deal with older messaging payloads.
2. isolating database access to a single service (neutron-server).
To simplify the matter, it is always assumed that the order of service upgrades is as follows:
1. first, all neutron-servers are upgraded.
2. then, if applicable, neutron agents are upgraded.
This approach allows us to avoid backwards compatibility code on agent side and is in line with other
OpenStack projects that support rolling upgrades (specifically, nova).
Server upgrade
Neutron-server is the very first component that should be upgraded to the new code. It is also the only component that relies on the new database schema being present; other components communicate with the cloud through AMQP and hence do not depend on a particular database state.
Database upgrades are implemented with alembic migration chains.
Database upgrade is split into two parts:
1. neutron-db-manage upgrade --expand
2. neutron-db-manage upgrade --contract
Each part represents a separate alembic branch.
The former step can be executed while old neutron-server code is running. The latter step requires all neutron-server instances to be shut down. Once it is complete, neutron-servers can be started again.
Note: Full shutdown of neutron-server instances can be skipped depending on whether there are pend-
ing contract scripts not applied to the database:
$ neutron-db-manage has_offline_migrations
The command will return a message if there are pending contract scripts.
Agents upgrade
Note: This section does not apply when the cloud does not use AMQP agents to provide networking
services to instances. In that case, other backend specific upgrade instructions may also apply.
Once neutron-server services are restarted with the new database schema and the new code, it is time to upgrade Neutron agents.
Note that in the meantime, neutron-server should be able to serve AMQP messages sent by older versions
of agents which are part of the cloud.
The recommended order of agent upgrade (per node) is:
1. first, L2 agents (openvswitch, linuxbridge, sr-iov).
2. then, all other agents (L3, DHCP, Metadata, ...).
The rationale of the agent upgrade order is that the L2 agent is usually responsible for wiring ports for other agents to use, so it is better to allow it to do its job first and then proceed with other agents that will use the already configured ports for their needs.
Each network/compute node can have its own upgrade schedule that is independent of other nodes.
AMQP considerations
Since it is always assumed that the neutron-server component is upgraded before agents, only the former should handle both old and new RPC versions.
The implication of that is that no code that handles UnsupportedVersion oslo.messaging exceptions
belongs to agent code.
Notifications
For notifications that are issued by neutron-server to listening agents, special consideration is needed to
support rolling upgrades. In this case, a newer controller sends newer payload to older agents.
Until we have a proper RPC version pinning feature to enforce the older payload format during upgrade (as is implemented in other projects like nova), we keep our agents resistant to unknown arguments sent as part of server notifications. This is achieved by consistently capturing those unknown arguments with keyword arguments and ignoring them on the agent side, and by not enforcing newer RPC entry point versions on the server side.
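A hedged illustration of that convention; the class and handler below are made up, the point is the **kwargs catch-all on the agent-side endpoint:

class ExampleAgentRpcCallback(object):

    def port_update(self, context, port=None, **kwargs):
        # kwargs silently absorbs any new argument a newer neutron-server may
        # start sending, instead of raising TypeError on the older agent
        self._handle_port_update(port)   # hypothetical handler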
This approach is not ideal, because it makes the RPC API less strict. That is why other approaches should be considered for notifications in the future.
More information about RPC versioning.
Interface signature
An RPC interface is defined by its name, version, and (named) arguments that it accepts. There are no
strict guarantees that arguments will have expected types or meaning, as long as they are serializable.
To provide better compatibility guarantees for rolling upgrades, RPC interfaces could also define a specific format for the arguments they accept. In the OpenStack world, it is usually implemented using the oslo.versionedobjects library, relying on the library to define the serialized form for arguments that are passed over the AMQP wire.
Note that Neutron has not adopted oslo.versionedobjects library for its RPC interfaces yet (except for
QoS feature).
More information about RPC callbacks used for QoS.
Networking backends
Backend software upgrade should not result in any data plane disruptions. Meaning, e.g. Open vSwitch
L2 agent should not reset flows or rewire ports; Neutron L3 agent should not delete namespaces left by
older version of the agent; Neutron DHCP agent should not require immediate DHCP lease renewal; etc.
The same considerations apply to setups that do not rely on agents. Meaning, e.g., an OpenDaylight or OVN controller should not break data plane connectivity during its upgrade process.
Upgrade testing
Review guidelines
There are several upgrade related gotchas that should be tracked by reviewers.
First things first, a general advice to reviewers: make sure new code does not violate requirements set
by global OpenStack deprecation policy.
Now to specifics:
1. Configuration options:
• options should not be dropped from the tree without waiting for the deprecation period (currently it is one development cycle long), and a deprecation message should be issued if the deprecated option is used.
• option values should not change their meaning between releases.
2. Data plane:
• agent restart should not result in data plane disruption (no Open vSwitch ports reset; no
network namespaces deleted; no device names changed).
3. RPC versioning:
• no RPC version major number should be bumped before all agents have had a chance to upgrade (meaning, at least one release cycle is needed before compatibility code to handle old clients is stripped from the tree).
• no compatibility code should be added to agent side of AMQP interfaces.
• server code should be able to handle all previous versions of agents, unless the major version
of an interface is bumped.
• no RPC interface arguments should change their meaning, or names.
• new arguments added to RPC interfaces should not be mandatory. It means that the server should be able to handle old requests without the new argument specified. Also, if the argument is not passed, the old behaviour from before the addition of the argument should be retained.
• minimal client version must not be bumped for server initiated notification changes for at
least one cycle.
4. Database migrations:
• migration code should be split into two branches (contract, expand) as needed. No code that
is unsafe to execute while neutron-server is running should be added to expand branch.
• if possible, contract migrations should be minimized or avoided to reduce the time when API
endpoints must be down during database upgrade.
The primary job of the Neutron OVN ML2 driver is to translate requests for resources into OVN's data model. Resources are created in OVN by updating the appropriate tables in the OVN northbound database (an ovsdb database). This document looks at the mappings between the data that exists in Neutron and what the resulting entries in the OVN northbound DB would look like.
Network
Neutron Network:
id
name
subnets
admin_state_up
status
tenant_id
Once a network is created, we should create an entry in the Logical Switch table.
Subnet
Neutron Subnet:
id
name
ip_version
network_id
cidr
gateway_ip
allocation_pools
dns_nameservers
host_routes
tenant_id
enable_dhcp
ipv6_ra_mode
ipv6_address_mode
Once a subnet is created, we should create an entry in the DHCP Options table with the DHCPv4 or
DHCPv6 options.
Port
Neutron Port:
id
name
network_id
admin_state_up
mac_address
fixed_ips
device_id
device_owner
tenant_id
status
When a port is created, we should create an entry in the Logical Switch Ports table in the OVN northbound DB. If the port has extra DHCP options defined, we should also create an entry in the DHCP Options table in the OVN northbound DB.
Router
Neutron Router:
id
name
admin_state_up
status
tenant_id
external_gw_info:
network_id
external_fixed_ips: list of dicts
ip_address
subnet_id
Router Port
Security Groups
Neutron Port:
id
security_group: id
network_id
Security groups map three Neutron objects to one OVN-NB object; this enables us to do the mapping in various ways, depending on OVN capabilities.
The current implementation uses the first option in this list for simplicity, but all options are kept here for future reference.
1) For every <neutron port, security rule> pair, define an ACL entry:
Leads to many ACL entries.
acl.match = sg_rule converted
example: ((inport==port.id) && (ip.proto == "tcp") &&
(1024 <= tcp.src <= 4095) && (ip.src==192.168.0.1/16))
2) For every <neutron port, security group> pair, define an ACL entry:
Reduces the number of ACL entries.
Means we have to manage the match field in case a specific rule changes.
Which option to pick depends on OVN match field length capabilities, and the trade-off between the better performance of fewer ACL entries and the complexity of managing them.
If the default behaviour for unmatched entries is not drop, a rule with the lowest priority must be added to drop all traffic (match==1).
Spoofing protection rules are added by OVN internally, and we need to ignore these automatically added rules in Neutron.
DHCPv4
OVN implements a native DHCPv4 support which caters to the common use case of providing an IP
address to a booting instance by providing stateless replies to DHCPv4 requests based on statically
configured address mappings. To do this it allows a short list of DHCPv4 options to be configured and
applied at each compute host running ovn-controller.
The OVN northbound db provides a DHCP_Options table to store the DHCP options; each logical switch port references the appropriate entry in this table.
DHCPv6
OVN implements native DHCPv6 support similar to DHCPv4. When a v6 subnet is created, the OVN ML2 driver will insert a new entry into the DHCP_Options table only when the subnet ipv6_address_mode is not slaac and enable_dhcp is True.
When the logical switch port's VIF is attached to or removed from the OVN integration bridge, ovn-northd updates Logical_Switch_Port.up to True or False accordingly.
In order for the OVN Neutron ML2 driver to update the corresponding neutron port's status to ACTIVE or DOWN in the db, it needs to monitor the OVN Northbound db. A neutron worker is created for this purpose.
The implementation of the ovn worker can be found here - networking_ovn.ovsdb.worker.OvnWorker.
The Neutron service will create n API workers, m RPC workers and 1 OVN worker (all these workers are separate processes).
API workers and RPC workers create an ovsdb IDL client object (ovs.db.idl.Idl) to connect to the OVN_Northbound db. See the networking_ovn.ovsdb.impl_idl_ovn.OvsdbNbOvnIdl and ovsdbapp.backend.ovs_idl.connection.Connection classes for more details.
The OVN worker creates a networking_ovn.ovsdb.ovsdb_monitor.OvnIdl object (which inherits from ovs.db.idl.Idl) to connect to the OVN_Northbound db. On receiving OVN_Northbound db updates from the ovsdb-server, the notify function of OvnIdl is called by the parent class.
The OvnIdl.notify() function passes the received events to the ovsdb_monitor.OvnDbNotifyHandler class, which checks for any changes in Logical_Switch_Port.up and updates the neutron port status accordingly.
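A hedged sketch of that wiring, built on ovs.db.idl; the OvnDbNotifyHandler constructor and notify() signatures are assumed here, and the class name is illustrative:

from ovs.db import idl

from networking_ovn.ovsdb import ovsdb_monitor


class SketchOvnIdl(idl.Idl):

    def __init__(self, driver, remote, schema):
        super(SketchOvnIdl, self).__init__(remote, schema)
        self.notify_handler = ovsdb_monitor.OvnDbNotifyHandler(driver)

    def notify(self, event, row, updates=None):
        # called by the parent Idl for every OVN_Northbound update; the handler
        # filters for Logical_Switch_Port.up changes and adjusts port status
        self.notify_handler.notify(event, row, updates)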
ovsdb locks
If there are multiple neutron servers running, then each neutron server will have one ovn worker which
listens for the notify events. When the Logical_Switch_Port.up is updated by ovn-northd, we do not
want all the neutron servers to handle the event and update the neutron port status. In order for only one
neutron server to handle the events, ovsdb locks are used.
At start, each neutron server's OVN worker will try to acquire a lock with id neutron_ovn_event_lock. The OVN worker that has acquired the lock will handle the notify events.
In case the neutron server with the lock dies, ovsdb-server will assign the lock to another neutron server
in the queue.
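A hedged sketch of how the lock gates event handling (set_lock() and has_lock are part of the ovs.db.idl.Idl API; the surrounding wiring and process_event() are illustrative):

from ovs.db import idl

OVN_EVENT_LOCK_NAME = 'neutron_ovn_event_lock'


class LockedIdl(idl.Idl):

    def __init__(self, remote, schema):
        super(LockedIdl, self).__init__(remote, schema)
        # ovsdb-server grants the lock to exactly one client at a time
        self.set_lock(OVN_EVENT_LOCK_NAME)

    def notify(self, event, row, updates=None):
        if not self.has_lock:
            # another neutron-server holds the lock and will handle the event
            return
        process_event(event, row, updates)   # hypothetical event handler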
More details about the ovsdb locks can be found in [1] and [2].
[1] https://tools.ietf.org/html/draft-pfaff-ovsdb-proto-04#section-4.1.8
[2] https://github.com/openvswitch/ovs/blob/branch-2.4/python/ovs/db/idl.py#L67
One thing to note is that the OVN worker (with OvnIdl) does not carry out any transactions to the OVN Northbound db.
Since the API and RPC workers are not configured with any locks, using the ovsdb lock on the OVN_Northbound and OVN_Southbound DBs by the OVN workers will not have any side effects on the transactions done by these API and RPC workers.
When neutron server starts, ovn worker would receive a dump of all logical switch ports as events.
ovsdb_monitor.OvnDbNotifyHandler would sync up if there are any inconsistencies in the port status.
The OVN Neutron ML2 driver has a need to acquire chassis information (hostname and physnets combi-
nations). This is required initially to support routed networks. Thus, the plugin will initiate and maintain
a connection to the OVN SB DB during startup.
Introduction
OpenStack Nova presents a metadata API to VMs similar to what is available on Amazon EC2. Neutron
is involved in this process because the source IP address is not enough to uniquely identify the source
of a metadata request since networks can have overlapping IP addresses. Neutron is responsible for
intercepting metadata API requests and adding HTTP headers which uniquely identify the source of the
request before forwarding it to the metadata API server.
The purpose of this document is to propose a design for how to enable this functionality when OVN is
used as the backend for OpenStack Neutron.
The following blog post describes how VMs access the metadata API through Neutron today.
https://www.suse.com/communities/blog/vms-get-access-metadata-neutron/
In summary, we run a metadata proxy in either the router namespace or the DHCP namespace. The DHCP namespace can be used when there is no router connected to the network. The one downside to the DHCP namespace approach is that it requires pushing a static route to the VM through DHCP so that it knows to route metadata requests to the DHCP server IP address.
• Instance sends an HTTP request for metadata to 169.254.169.254
• This request either hits the router or DHCP namespace depending on the route in the instance
• The metadata proxy service in the namespace adds the following info to the request:
– Instance IP (X-Forwarded-For header)
– Router or Network-ID (X-Neutron-Network-Id or X-Neutron-Router-Id header)
• The metadata proxy service sends this request to the metadata agent (outside the namespace) via
a UNIX domain socket.
• The neutron-metadata-agent service forwards the request to the Nova metadata API service by
adding some new headers (instance ID and Tenant ID) to the request [0].
For proper operation, Neutron and Nova must be configured to communicate together with a shared se-
cret. Neutron uses this secret to sign the Instance-ID header of the metadata request to prevent spoofing.
This secret is configured through metadata_proxy_shared_secret on both nova and neutron configuration
files (optional).
[0] https://opendev.org/openstack/neutron/src/commit/f73f39f2cfcd4eace2bda14c99ead9a8cc8560f4/
neutron/agent/metadata/agent.py#L175
The current metadata API approach does not translate directly to OVN. There are no Neutron agents in use with OVN. Further, OVN does not use network namespaces of its own that we could take advantage of in the way the original implementation uses the router and DHCP namespaces.
We must use a modified approach that fits the OVN model. This section details a proposed approach.
The proposed approach would be similar to the isolated network case in the current ML2+OVS imple-
mentation. Therefore, we would be running a metadata proxy (haproxy) instance on every hypervisor
for each network a VM on that host is connected to.
The downside of this approach is that we will be running more metadata proxies than we are now in the case of routed networks (one per virtual router), but since haproxy is very lightweight and the proxies will be idling most of the time, it should not be a big issue overall. However, the major benefit of this approach is that we do not have to implement any scheduling logic to distribute metadata proxies across the nodes, nor any HA logic. This can be evolved in the future as explained below in this document.
Also, this approach relies on a new feature in OVN that we must implement first so that an OVN port can
be present on every chassis (similar to localnet ports). This new type of logical port would be localport
and we will never forward packets over a tunnel for these ports. We would only send packets to the local
instance of a localport.
Step 1 - Create a port for the metadata proxy
When using the DHCP agent today, Neutron automatically creates a port for the DHCP agent to use. We could do the same thing for use with the metadata proxy (haproxy). We will create an OVN localport which will be present on every chassis, and this port will have the same MAC/IP address on every host. Eventually, we can share the same neutron port for both DHCP and metadata.
Step 2 - Routing metadata API requests to the correct Neutron port
This works similarly to the current approach.
We would program OVN to include a static route in DHCP responses that routes metadata API requests
to the localport that is hosting the metadata API proxy.
Also, in case DHCP is not enabled or the client ignores the route info, we will program a static route in the OVN logical router which will still get metadata requests directed to the right place.
If the DHCP route does not work and the network is isolated, VMs will not get metadata, but this already happens with the current implementation, so this approach does not introduce a regression.
Step 3 - Management of the namespaces and haproxy instances
We propose a new agent called neutron-ovn-metadata-agent. We will run this agent on every hypervisor, and it will be responsible for spawning the haproxy instances and for managing the OVS interfaces, network namespaces and haproxy processes used to proxy metadata API requests.
Step 4 - Metadata API request processing
Similar to the existing neutron metadata agent, neutron-ovn-metadata-agent
must act as an intermediary between haproxy and the Nova metadata API service.
neutron-ovn-metadata-agent is the process that will have access to the host networks
where the Nova metadata API exists. Each haproxy will be in a network namespace not able to reach
the appropriate host network. Haproxy will add the necessary headers to the metadata API request and
then forward it to neutron-ovn-metadata-agent over a UNIX domain socket, which matches
the behavior of the current metadata agent.
In neutron-ovn-metadata-agent:
• On startup:
– Do a full sync. Ensure we have all the required metadata proxies running. For that,
the agent would watch the Port_Binding table of the OVN Southbound database and
look for all rows with the chassis column set to the host the agent is running on. For
all those entries, make sure a metadata proxy instance is spawned for every datapath
(Neutron network) those ports are attached to. The agent will keep record of the list
of networks it currently has proxies running on by updating the external-ids key
neutron-metadata-proxy-networks of the OVN Chassis record in the OVN
Southbound database that corresponds to this host. As an example, this key would look like
neutron-metadata-proxy-networks=NET1_UUID,NET4_UUID meaning that
this chassis is hosting one or more VMs connected to networks 1 and 4 so we should have a
metadata proxy instance running for each. Ensure any running metadata proxies no longer
needed are torn down.
• Open and maintain a connection to the OVN Northbound database (using the ovsdbapp library).
On first connection, and anytime a reconnect happens:
– Do a full sync.
• Register a callback for creates/updates/deletes to Logical_Switch_Port rows to detect when metadata proxies should be started or torn down. neutron-ovn-metadata-agent will watch the OVN Southbound database (Port_Binding table) to detect when a port gets bound to its chassis. At that point, the agent will make sure that there is a metadata proxy attached to the OVN localport for the network which this port is connected to.
• When a new network is created, we must create an OVN localport for use as a metadata proxy.
This port will be owned by network:dhcp so that it gets auto deleted upon the removal of the
network and it will remain DOWN and not bound to any chassis. The metadata port will be created
regardless of the DHCP setting of the subnets within the network as long as the metadata service
is enabled.
• When a network is deleted, we must tear down the metadata proxy instance (if present) on the host and delete the corresponding OVN localport (which will happen automatically as it is owned by network:dhcp).
Launching a metadata proxy includes:
• Creating a network namespace:
• Creating a VETH pair (OVS upgrades that upgrade the kernel module will make internal ports go away; they are then brought back by OVS scripts, which may cause some disruption. Therefore, veth pairs are preferred over internal ports):
Alternatives Considered
We have been building some features useful to OpenStack directly into OVN. DHCP and DNS are key examples of things we have replaced by building them into ovn-controller. The metadata API case has some key differences that make this a less attractive solution:
The metadata API is an OpenStack-specific feature. DHCP and DNS, by contrast, are more clearly useful outside of OpenStack. Building metadata API proxy support into ovn-controller means embedding an HTTP and TCP stack into ovn-controller. This is a significant degree of undesired complexity.
This option has been ruled out for these reasons.
In this approach, we would spawn a metadata proxy per virtual router or per network (if isolated), thus reducing the number of metadata proxy instances running in the cloud. However, scheduling and HA have to be considered. Also, we would not need the OVN localport implementation.
neutron-ovn-metadata-agent would run on any host that we wish to be able to host metadata
API proxies. These hosts must also be running ovn-controller.
Each of these hosts will have a Chassis record in the OVN southbound database created by ovn-
controller. The Chassis table has a column called external_ids which can be used for general meta-
data however we see fit. neutron-ovn-metadata-agent will update its corresponding Chassis
record with an external-id of neutron-metadata-proxy-host=true to indicate that this OVN
chassis is one capable of hosting metadata proxy instances.
Once we have a way to determine hosts capable of hosting metadata API proxies, we can add logic to the
ovn ML2 driver that schedules metadata API proxies. This would be triggered by Neutron API requests.
The output of the scheduling process would be setting an external_ids key on a Logi-
cal_Switch_Port in the OVN northbound database that corresponds with a metadata proxy. The key
could be something like neutron-metadata-proxy-chassis=CHASSIS_HOSTNAME.
neutron-ovn-metadata-agent on each host would also be watching for updates to these Logi-
cal_Switch_Port rows. When it detects that a metadata proxy has been scheduled locally, it will kick off
the process to spawn the local haproxy instance and get it plugged into OVN.
HA must also be considered. We must know when a host goes down so that all metadata proxies
scheduled to that host can be rescheduled. This is almost the exact same problem we have with L3 HA.
When a host goes down, we need to trigger rescheduling gateways to other hosts. We should ensure that
the approach used for rescheduling L3 gateways can be utilized for rescheduling metadata proxies, as
well.
In neutron-server (ovn mechanism driver):
Introduce a new ovn driver configuration option:
• [ovn] isolated_metadata=[True|False]
Events that trigger scheduling a new metadata proxy:
• If isolated_metadata is True
– When a new network is created, we must create an OVN logical port for use as a metadata
proxy and then schedule this to one of the neutron-ovn-metadata-agent instances.
• If isolated_metadata is False
– When a network is attached to or removed from a logical router, ensure that at least one of
the networks has a metadata proxy port already created. If not, pick a network and create a
metadata proxy port and then schedule it to an agent. At this point, we need to update the
static route for metadata API.
Events that trigger unscheduling an existing metadata proxy:
• When a network is deleted, delete the metadata proxy port if it exists and unschedule it from a
neutron-ovn-metadata-agent.
To schedule a new metadata proxy:
• Determine the list of available OVN Chassis that can host metadata proxies by reading the
Chassis table of the OVN Southbound database. Look for chassis that have an external-id
of neutron-metadata-proxy-host=true.
• Of the available OVN chassis, choose the one least loaded, or currently hosting the fewest number
of metadata proxies.
• Set neutron-metadata-proxy-chassis=CHASSIS_HOSTNAME as an external-id on
the Logical_Switch_Port in the OVN Northbound database that corresponds to the neutron port
used for this metadata proxy. CHASSIS_HOSTNAME maps to the hostname row of a Chassis
record in the OVN Southbound database.
This approach has been ruled out for its complexity, although we have analyzed its details deeply because, eventually and depending on the implementation of L3 HA, we will want to evolve to it.
Other References
This document presents the problem and proposes a solution for the data consistency issue between the Neutron and OVN databases. Although the focus of this document is OVN, this problem is common enough to be present in other ML2 drivers (e.g. OpenDaylight, Big Switch, etc.). Some of them already have a mechanism in place for dealing with it.
Problem description
In a common Neutron deployment model there can be multiple Neutron API workers processing requests. For each request, the worker will update the Neutron database and then invoke the ML2 driver to translate the information to that specific SDN data model.
There are at least two situations that could lead to some inconsistency between the Neutron and the SDN
databases, for example:
In Neutron:
with neutron_db_transaction:
    update_neutron_db()
    ml2_driver.update_port_precommit()
ml2_driver.update_port_postcommit()
Imagine the case where a port is being updated twice and each request is being handled by a different API worker. The method responsible for updating the resource in OVN (update_port_postcommit) is not atomic and is invoked outside of the Neutron database transaction. This could lead to a problem where the order in which the updates are committed to the Neutron database is different from the order in which they are committed to the OVN database, resulting in an inconsistency.
This problem has been reported at bug #1605089.
Another situation is when the changes are already committed in Neutron but an exception is raised upon trying to update the OVN database (e.g. lost connectivity to the ovsdb-server). We currently do not have a good way of handling this problem. Obviously it would be possible to try to immediately roll back the changes in the Neutron database and raise an exception, but that rollback itself is an operation that could also fail.
Plus, rollbacks are not very straightforward when it comes to updates or deletes. In a case where a VM is being torn down and OVN fails to delete a port, re-creating that port in Neutron does not necessarily fix the problem. The decommissioning of a VM involves many other things; in fact, we could make things even worse by leaving some dirty data around. I believe this is a problem that would be better dealt with by other methods.
Proposed change
In order to fix the problems presented in the Problem description section, this document proposes a solution based on Neutron's revision_number attribute. In summary, every resource in Neutron has an attribute called revision_number which gets incremented on each update made to that resource. For example:
This document proposes a solution that will use the revision_number attribute for three things:
1. Perform a compare-and-swap operation based on the resource version
2. Guarantee the order of the updates (Problem 1)
3. Detect when resources in Neutron and OVN are out of sync
But, before any of the points above can be done, we need to change the ovn driver code to:
To be able to compare the version of the resource in Neutron against the version in OVN we first need
to know which version the OVN resource is present at.
Fortunately, each table in the OVNDB contains a special column called external_ids which external
systems (like Neutron) can use to store information about its own resources that corresponds to the
entries in OVNDB.
So, every time a resource is created or updated in OVNDB by the ovn driver, the Neutron revision_number corresponding to that change will be stored in the external_ids column of that resource. That allows the ovn driver to look at both databases and detect whether the version in OVN is up to date with Neutron or not.
As stated in Problem 1, simultaneous updates to a single resource will race and, with the current code, the order in which these updates are applied is not guaranteed to be the correct order. That means that, if two or more updates arrive, we cannot prevent an older version of an update from being applied after a newer one.
This document proposes creating a special OVSDB command that runs as part of the same transaction that is updating a resource in OVNDB, to prevent changes with a lower revision_number from being applied when the resource in OVN is already at a higher revision_number.
This new OVSDB command needs to basically do two things:
1. Add a verify operation to the external_ids column in OVNDB so that if another client modifies
that column mid-operation the transaction will be restarted.
A better explanation of what verify does is described at the doc string of the Transaction class in the
OVS code itself, I quote:
Because OVSDB handles multiple clients, it can happen that between the time that OVSDB client A reads a column and writes a new value, OVSDB client B has written that column. Client A's write should not ordinarily overwrite client B's, especially if the column in question is a map column that contains several more or less independent data items. If client A adds a verify operation before it writes the column, then the transaction fails in case client B modifies it first. Client A will then see the new value of the column and compose a new transaction based on the new contents written by client B.
2. Compare the revision_number from the update against what is presently stored in OVNDB. If
the version in OVNDB is already higher than the version in the update, abort the transaction.
So basically this new command is responsible for guarding the OVN resource by not allowing old changes to be applied on top of new ones. Here is a scenario where two concurrent updates come in the wrong order, and how the solution above deals with it:
Neutron worker 1 (NW-1): Updates a port with address A (revision_number: 2)
Neutron worker 2 (NW-2): Updates a port with address B (revision_number: 3)
TXN 1: NW-2 transaction is committed first and the OVN resource now has RN 3
TXN 2: NW-1 transaction detects the change in the external_ids column and is restarted
TXN 2: NW-1 the new command now sees that the OVN resource is at RN 3, which is higher than the
update version (RN 2) and aborts the transaction.
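A hedged sketch of such a command built on ovsdbapp; the class name, the external_ids key and the exception used to abort are illustrative, not the actual ovn driver code:

from ovsdbapp.backend.ovs_idl import command

REV_KEY = 'neutron:revision_number'   # assumed external_ids key


class CheckRevisionNumberCommand(command.BaseCommand):

    def __init__(self, api, port_uuid, revision_number):
        super(CheckRevisionNumberCommand, self).__init__(api)
        self.port_uuid = port_uuid
        self.revision_number = revision_number

    def run_idl(self, txn):
        row = self.api.lookup('Logical_Switch_Port', self.port_uuid)
        # restart the whole transaction if another client touched external_ids
        row.verify('external_ids')
        ovn_rev = int(row.external_ids.get(REV_KEY, -1))
        if ovn_rev > self.revision_number:
            # a real implementation would use a dedicated exception to abort
            # the transaction rather than a bare RuntimeError
            raise RuntimeError('stale update: OVN already at %d' % ovn_rev)
        external_ids = dict(row.external_ids)
        external_ids[REV_KEY] = str(self.revision_number)
        row.external_ids = external_ids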
There is a bit more needed for the above to work with the current ovn driver code; basically we need to tidy up the code to do two more things.
1. Consolidate changes to a resource in a single transaction.
This is important regardless of this spec: having all changes to a resource done in a single transaction minimizes the risk of having half-applied changes written to the database in case of an eventual problem. This should be done already, but it is important to keep it in mind in case we find more examples like that as we code.
2. When doing partial updates, use the OVNDB as the source of comparison to create the deltas.
Being able to do a partial update of a resource is important for performance reasons; it is a way to minimize the number of changes that will be performed in the database.
Right now, some of the update() methods in the ovn driver create the deltas using the current and original parameters that are passed to them. The current parameter is, as the name says, the current version of the object present in the Neutron DB. The original parameter is the previous version (current - 1) of that object.
The problem with creating the deltas by comparing these two objects is that only the data in the Neutron DB is used. We need to stop using the original object and instead create the delta based on the current version in the Neutron DB against the data stored in the OVNDB, to be able to detect the real differences between the two databases.
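A hedged sketch of what building the delta against the OVNDB contents could look like; desired_ovn_columns() is a hypothetical mapping of the current Neutron object to OVN NB columns:

def compute_delta(current_neutron_obj, ovn_row):
    desired = desired_ovn_columns(current_neutron_obj)   # hypothetical mapper
    # only the columns that actually differ from what OVN stores are updated
    return {column: value for column, value in desired.items()
            if getattr(ovn_row, column, None) != value}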
So in summary, to guarantee the correctness of the updates, this document proposes to:
1. Create a new OVSDB command that is responsible for comparing revision numbers and aborting the transaction when needed.
2. Consolidate changes to a resource in a single transaction (should be done already).
3. When doing partial updates, create the deltas based on the current version in the Neutron DB and the OVNDB.
When things are working as expected, the above changes should ensure that the Neutron DB and OVNDB are in sync. But what happens when things go bad? As per Problem 2, things like temporarily losing connectivity with the OVNDB could cause changes to fail to be committed and the databases to get out of sync. We need to be able to detect the resources that were affected by these failures and fix them.
We already have the means to do it: similar to what the ovn_db_sync.py script does, we could fetch all the data from both databases and compare each resource. But, depending on the size of the deployment, this can be really slow and costly.
This document proposes an optimization for this problem to make it efficient enough so that we can run
it periodically (as a periodic task) and not manually as a script anymore.
First, we need to create an additional table in the Neutron database that would serve as a cache for the
revision numbers in OVNDB.
The new table schema could look like this:
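A hedged sketch of such a model, based only on the description that follows (placeholder revision value, foreign key with SET NULL); the table and class names are illustrative:

import sqlalchemy as sa
from neutron_lib.db import model_base


class OVNRevisionNumbers(model_base.BASEV2):
    __tablename__ = 'ovn_revision_numbers'
    standard_attr_id = sa.Column(
        sa.BigInteger,
        sa.ForeignKey('standardattributes.id', ondelete='SET NULL'),
        nullable=True)
    resource_uuid = sa.Column(sa.String(36), primary_key=True)
    resource_type = sa.Column(sa.String(36), primary_key=True)
    revision_number = sa.Column(sa.BigInteger, default=0)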
For the different actions (create, update and delete), this table will be used as follows:
1. Create:
In the create_*_precommit() method, we will create an entry in the new table within the same Neutron
transaction. The revision_number column for the new entry will have a placeholder value until the
resource is successfully created in OVNDB.
In case we fail to create the resource in OVN (but succeed in Neutron) we still have the entry logged
in the new table and this problem can be detected by fetching all resources where the revision_number
column value is equal to the placeholder value.
The pseudo-code will look something like this:
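Since the pseudo-code itself is not reproduced here, a hedged sketch of the create flow described above follows; the constant and helper names are illustrative:

INITIAL_REV_NUM = -1   # placeholder until the OVN NB write succeeds


def create_port_precommit(context, port):
    # runs inside the same Neutron DB transaction that creates the port
    create_initial_revision_row(          # hypothetical helper
        context, port['id'], resource_type='ports',
        revision_number=INITIAL_REV_NUM)


def create_port_postcommit(context, port):
    ovn_client.create_port(port)          # hypothetical OVN NB write
    bump_revision(context, port['id'],    # only reached if the write succeeded
                  revision_number=port['revision_number'])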
2. Update:
For updates it is simpler: we need to bump the revision number for that resource after the OVN transaction is committed, in the update_*_postcommit() method. That way, if an update fails to be applied to OVN, the inconsistencies can be detected by a JOIN between the new table and the standardattributes table where the revision_number columns do not match.
The pseudo-code will look something like this:
3. Delete:
The standard_attr_id column in the new table is a foreign key with ONDELETE=SET NULL. That means that, when Neutron deletes a resource, the standard_attr_id column in the new table will be set to NULL.
If deleting a resource succeeds in Neutron but fails in OVN, the inconsistency can be detected by looking at all resources that have a standard_attr_id equal to NULL.
The pseudo-code will look something like this:
With the above optimization it is possible to create a periodic task that can run quite frequently to detect and fix the inconsistencies caused by random backend failures.
Note: There is no lock linking both database updates in the postcommit() methods. So, it is true that the method bumping the revision_number column in the new table in the Neutron DB could still race, but that should be fine because this table acts like a cache and the real revision_number has been written in OVNDB.
The mechanism that will detect and fix the out-of-sync resources should detect this inconsistency as well
and, based on the revision_number in OVNDB, decide whether to sync the resource or only bump the
revision_number in the cache table (in case the resource is already at the right version).
References
• There is a chain of patches with a proof of concept for this approach; they start at: https://review.openstack.org/#/c/517049/
Alternatives
Journaling
An alternative solution to this problem is journaling. The basic idea is to create another table in the
Neutron database and log every operation (create, update and delete) instead of passing it directly to the
SDN controller.
A separate thread (or multiple instances of it) is then responsible for reading this table and applying the operations to the SDN backend.
This approach has been used and validated by drivers such as networking-odl.
An attempt to implement this approach in ovn driver can be found here.
Some things to keep in mind about this approach:
• The code can get quite complex as this approach is not only about applying the changes to the SDN backend asynchronously. The dependencies between each resource as well as their operations also need to be computed. For example, before attempting to create a router port, the router that this port belongs to needs to be created. Or, before attempting to delete a network, all the dependent resources on it (subnets, ports, etc.) need to be processed first.
• The number of journal threads running can cause problems. In my tests I had three controllers, each one with 24 CPU cores (Intel Xeon E5-2620 with hyperthreading enabled) and 64GB RAM. Running 1 journal thread per Neutron API worker caused ovsdb-server to misbehave when under heavy pressure1. Running multiple journal threads seems to be causing other types of problems in other drivers as well.
• When under heavy pressure1, I noticed that the journal threads could come to a halt (or really slow down) while the API workers were handling a lot of requests. This resulted in some operations taking more than a minute to be processed. This behaviour can be seen in this screenshot.
• Given that the 1 journal thread per Neutron API worker approach is problematic, determining the right number of journal threads is also difficult. In my tests, I noticed that 3 journal threads per controller worked better, but that number was purely based on trial and error. In production this number should probably be calculated based on the environment; perhaps something like TripleO (or any upper layer) would be in a better position to make that decision.
• At least temporarily, the data in the Neutron database is duplicated between the normal tables and
the journal one.
• Some operations like creating a new resource via Neutron's API will return HTTP 201, which indicates that the resource has been created and is ready to be used, but as these resources are created asynchronously one could argue that the HTTP codes are now misleading. As a note, the resource will be created in the Neutron database by the time the HTTP request returns, but it may not be present in the SDN backend yet.
1. I ran the tests using Browbeat, which basically orchestrates OpenStack Rally and monitors the machines' resource usage.
Given all considerations, this approach is still valid, and the fact that it has already been used by other ML2 drivers makes it more open for collaboration and code sharing.
Introduction
Load balancing is essential for enabling simple or automatic delivery scaling and availability, since application delivery, scaling and availability are considered vital features of any cloud. Octavia is an open source, operator-scale load balancing solution designed to work with OpenStack.
The purpose of this document is to propose a design for how we can use OVN as the backend for OpenStack's LoadBalancer API provided by Octavia.
The OVN native LoadBalancer currently supports L4 protocols, with support for L7 protocols aimed for in future releases. Currently it also does not have any monitoring facility. However, it does not need any extra hardware/VM/container for deployment, which is a major positive point when compared with Amphorae. Also, it does not need any special network to handle the LoadBalancer's requests, as they are taken care of by OpenFlow rules directly. And, though OVN does not have support for TLS, it is in the works and once implemented can be integrated with Octavia.
The following section details how OVN can be used as an Octavia driver.
The OVN Driver for Octavia runs under the scope of Octavia. Octavia API receives and forwards calls
to the OVN Driver.
Step 1 - Creating a LoadBalancer
The Octavia API receives and issues a LoadBalancer creation request on a network to the OVN provider driver. The OVN driver creates a LoadBalancer in the OVN NorthBound DB and asynchronously updates the Octavia DB with the status response. A VIP port is created in Neutron when the LoadBalancer creation is complete. The VIP information, however, is not updated in the NorthBound DB until the Members are associated with the LoadBalancer's Pool.
Step 2 - Creating LoadBalancer entities (Pools, Listeners, Members)
Once a LoadBalancer is created by OVN in its NorthBound DB, users can now create Pools, Listeners and Members associated with the LoadBalancer using the Octavia API. With the creation of each entity, the LoadBalancer's external_ids column in the NorthBound DB is updated and corresponding logical and OpenFlow rules are added for handling them.
Step 3 - LoadBalancer request processing
When a user sends a request to the VIP IP address, the OVN pipeline takes care of load balancing the VIP request to one of the backend members. More information about this can be found in the ovn-northd man pages.
• On startup: Open and maintain a connection to the OVN Northbound DB (using the ovsdbapp
library). On first connection, and anytime a reconnect happens:
– Do a full sync.
• Register a callback when a new interface is added to a router or deleted from a router.
• When a new LoadBalancer L1 is created, create a row in OVN's Load_Balancer table and
update its entries for name and network references. If the network on which the LoadBalancer is
created is associated with a router, say R1, then add the router reference to the LoadBalancer's
external_ids and associate the LoadBalancer with the router. Also associate the LoadBalancer L1
with all the networks which have an interface on the router R1. This is required so that Logical
Flows for inter-network communication while using the LoadBalancer L1 are possible. Also, dur-
ing this time, a new port is created via Neutron which acts as a VIP port. The information of this
new port is not visible in OVN's NorthBound DB until a member is added to the LoadBalancer. A
rough sketch of the Northbound operations involved follows this list.
• If a new network interface is added to the router R1 described above, all the LoadBalancers on that
network are associated with the router R1 and all the LoadBalancers on the router are associated
with the new network.
• If a network interface is removed from the router R1, then all the LoadBalancers which have
been created solely on that network (identified using the ls_ref attribute in the LoadBalancer's
external_ids) are removed from the router. Similarly, those LoadBalancers which are associated
with the network but not actually created on that network are removed from the network.
• A LoadBalancer can either be deleted with all its child entities using the cascade option, or
its members/pools/listeners can be deleted individually. When the LoadBalancer is deleted, its
references and associations are removed from all networks and routers. This might change in the
future once the association of LoadBalancers with networks/routers is changed from strong to
weak [3]. The VIP port is also deleted when the LoadBalancer is deleted.
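Purely as an illustration of the kind of Northbound operations this involves (not the actual driver code), ovsdbapp exposes commands along the lines below; the names, addresses and the networks_with_interface_on_r1 variable are made up for the example:

# Create a LoadBalancer row in OVN's Load_Balancer table (illustrative values)
nb_api.lb_add('lb1', '10.0.0.100:80', ['10.0.0.10:8080'],
              protocol='tcp').execute(check_error=True)

# Associate the LoadBalancer with router R1 and with every logical switch
# (network) that has an interface on R1, as described above
nb_api.lr_lb_add('R1', 'lb1').execute(check_error=True)
for switch in networks_with_interface_on_r1:  # assumed to be precomputed
    nb_api.ls_lb_add(switch, 'lb1').execute(check_error=True)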
The OVN Northbound schema [5] has a table to store LoadBalancers. The table looks like this:
"Load_Balancer": {
"columns": {
"name": {"type": "string"},
"vips": {
"type": {"key": "string", "value": "string",
"min": 0, "max": "unlimited"}},
"protocol": {
"type": {"key": {"type": "string",
"enum": ["set", ["tcp", "udp"]]},
"min": 0, "max": 1}},
"external_ids": {
"type": {"key": "string", "value": "string",
"min": 0, "max": "unlimited"}}},
"isRoot": true},
2. Create a pool:
3. Create a member:
5. Create a listener:
Limitations
Support Matrix
A detailed matrix of the operations supported by the OVN Provider driver in Octavia can be found at
https://docs.openstack.org/octavia/latest/user/feature-classification/index.html
Other References
This document presents the problem and proposes a solution for handling OVSDB events in a distributed
fashion in the ovn driver.
Problem description
In the ovn driver, the OVSDB Monitor class is responsible for listening to the OVSDB events and performing
certain actions on them. We use it extensively for various tasks, including critical ones such as monitoring
for port binding events (in order to notify Neutron/Nova that a port has been bound to a certain chassis).
Currently, this class uses a distributed OVSDB lock to ensure that only one instance handles those events
at a time.
The problem with this approach is that it creates a bottleneck: even if we have multiple Neutron
Workers running at the moment, only one is actively handling those events. This problem is highlighted
even more when working with technologies such as containers, which rely on creating multiple
ports at a time and waiting for them to be bound.
Proposed change
In order to fix this problem, this document proposes using a Consistent Hash Ring to split the load of
handling events across multiple Neutron Workers.
A new table called ovn_hash_ring will be created in the Neutron Database where the Neutron
Workers capable of handling OVSDB events will be registered. The table will use the following schema:
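A minimal sketch of what such a table could look like, assuming a node identifier plus the hostname and heartbeat timestamp columns referred to later in this document (the column set and model base are illustrative, not the actual migration):

import datetime

import sqlalchemy as sa
from neutron_lib.db import model_base


class OVNHashRing(model_base.BASEV2):
    """Sketch of the ovn_hash_ring table (illustrative columns)."""
    __tablename__ = 'ovn_hash_ring'

    # Unique ID of the Neutron Worker; used as the node ID in the ring
    node_uuid = sa.Column(sa.String(36), primary_key=True)
    # Hostname of the machine running the worker (used for clean-up on SIGTERM)
    hostname = sa.Column(sa.String(255), nullable=False)
    # Heartbeat timestamps; stale entries are excluded from the ring
    created_at = sa.Column(sa.DateTime(), default=datetime.datetime.utcnow)
    updated_at = sa.Column(sa.DateTime(), default=datetime.datetime.utcnow)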
This table will be used to form the Consistent Hash Ring. Fortunately, we have an implementation
already available in OpenStack's tooz library. It was contributed by the Ironic team, which also uses this data
structure to spread the API request load across multiple Ironic Conductors.
Here's how a Consistent Hash Ring from tooz works:

from tooz import hashring

hring = hashring.HashRing({'worker1', 'worker2', 'worker3'})

# Returns set(['worker3'])
hring[b'event-id-1']

# Returns set(['worker1'])
hring[b'event-id-2']
Every instance of the OVSDB Monitor class will be listening to a series of events from the OVSDB
database, and each instance will have a unique ID registered in the database which will be part of the
Consistent Hash Ring.
When an event arrives, each OVSDB Monitor instance will hash that event's UUID; the ring will return
one instance ID, which is then compared with the instance's own ID, and if they match, that instance will
process the event.
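As an illustration of that check, each monitor instance could do something along these lines (a sketch only; the attribute and method names are assumptions, not the actual implementation):

def should_process(self, event_uuid):
    """Return True if this worker is responsible for the given event."""
    # Hash the event UUID; the ring returns the set of responsible node IDs
    responsible_nodes = self.hash_ring[event_uuid.encode('utf-8')]
    # Only process the event when our own registered ID is among them
    return self.node_uuid in responsible_nodes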
A new maintenance task will be created in the ovn driver which will update the updated_at column of
the ovn_hash_ring table for the entries matching its hostname, indicating that all Neutron Workers
running on that hostname are alive.
Note that only a single maintenance instance runs on each machine, so writes to the Neutron database
are kept to a minimum.
When forming the ring, the code should only consider entries whose updated_at value is newer than a
given timeout. Entries that haven't been updated within that time won't be part of the ring. If the ring
already exists, it will be re-balanced.
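A sketch of how the ring could be (re)built from that table, assuming the illustrative OVNHashRing model above, an SQLAlchemy session and a configurable timeout (none of these names come from the actual implementation):

import datetime

from tooz import hashring


def build_hash_ring(session, timeout_seconds=60):
    """Build a ring from the nodes whose heartbeat is recent enough."""
    limit = datetime.datetime.utcnow() - datetime.timedelta(
        seconds=timeout_seconds)
    # Only nodes that heartbeated within the timeout become part of the ring
    nodes = session.query(OVNHashRing.node_uuid).filter(
        OVNHashRing.updated_at >= limit).all()
    return hashring.HashRing({node_uuid for (node_uuid,) in nodes})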
Apart from heartbeating, we need to make sure that nodes are removed from the ring when the service
is stopped or killed.
When the neutron-server service is stopped, all nodes sharing the same hostname as the machine
where the service is running will be removed from the ovn_hash_ring table. This is done by han-
dling the SIGTERM signal: when it arrives, the ovn driver should invoke the clean-up method and
then let the process halt.
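A minimal sketch of such a handler, assuming a hypothetical remove_nodes_by_hostname() helper that deletes the matching rows from ovn_hash_ring:

import signal
import socket
import sys


def install_sigterm_cleanup(remove_nodes_by_hostname):
    """Register a SIGTERM handler that unregisters this host's nodes."""
    def _handler(signum, frame):
        # Remove every ovn_hash_ring entry registered by this machine
        remove_nodes_by_hostname(socket.gethostname())
        # Let the process halt once the clean-up is done
        sys.exit(0)

    signal.signal(signal.SIGTERM, _handler)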
Unfortunately, nothing can be done in case of a SIGKILL; this will leave the nodes in the database and
they will be part of the ring until the timeout is reached or the service is restarted. This can introduce a
window of time in which some events may be lost. The current implementation shares the same
problem: if the instance holding the OVSDB lock is killed abruptly, events will be lost until the
lock moves on to the next instance that is alive. One could argue that the current implementation
aggravates the problem, because all events are lost, whereas with the distributed mechanism only some events
are lost. As far as distributed systems go, that's a normal scenario and things are soon corrected.
This section contains some ideas that can be added on top of this work to further improve it:
• Listen to changes to the Chassis table in the OVSDB and force a ring re-balance when a Chassis
is added or removed from it.
• Cache the ring for a short while to minimize the database reads when the service is under heavy
load.
• To further minimize or avoid event losses, it would be possible to cache the last X events to be
reprocessed in case a node times out and the ring re-balances.
Problem Description
Currently, if a single network node is active in the system, the gateway chassis for the routers will be
scheduled on that node. However, when a new node is added to the system, neither rescheduling nor
rebalancing occurs automatically. This leaves routers created on the first node without HA.
Side effects of this behavior include:
• Skewed load across the network nodes due to the lack of router rescheduling.
• If the active node where the gateway chassis for a router is scheduled goes down, then, because of
the lack of HA, the North-South traffic from that router will be hampered.
Gateway scheduling has been proposed in [2]; however, rebalancing and rescheduling were not part
of that solution. This specification clarifies what rescheduling and rebalancing are. Rescheduling happens
automatically on every event triggered by the addition or deletion of a chassis. Rebalancing is
triggered only by manual operator action.
In order to provide proper rescheduling of the gateway ports during the addition or deletion of a chassis,
the following approach can be considered:
• Identify the number of chassis on which each router has been scheduled.
– Consider a router for scheduling if its number of chassis is less than MAX_GW_CHASSIS.
MAX_GW_CHASSIS is defined in [0].
• Find the list of chassis where the router is scheduled and reschedule it up to MAX_GW_CHASSIS gate-
ways using the list of available candidates. Do not modify the primary chassis association, so as not
to interrupt network flows. A rough sketch of this selection follows the list.
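A rough sketch of that selection logic, assuming a list of currently assigned chassis (primary first) and a list of eligible candidates; the function name, MAX_GW_CHASSIS value and data shapes are illustrative, not the scheduler's actual code:

MAX_GW_CHASSIS = 5  # illustrative; the real value is defined in [0]


def reschedule_router_gateways(current_chassis, candidate_chassis):
    """Return the new gateway chassis list for a router (sketch)."""
    # Keep the existing chassis untouched, including the primary one
    scheduled = list(current_chassis)
    for chassis in candidate_chassis:
        if len(scheduled) >= MAX_GW_CHASSIS:
            break
        if chassis not in scheduled:
            # Newly added chassis are appended, i.e. given a lower priority
            scheduled.append(chassis)
    return scheduled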
Rescheduling is an event-triggered operation which will occur whenever a chassis is added or removed.
When that happens, schedule_unhosted_gateways() [1] will be called to host the unhosted gate-
ways. Routers without gateway ports are excluded from this operation because they are not connected
to provider networks. More information about this can be found in the
gateway_chassis table definition in the OVN NorthBound DB [5].
Chassis which have the enable-chassis-as-gw flag set in their OVN Southbound database
table are the ones eligible for hosting the routers. Rescheduling of a router depends on the
priorities currently set. Each chassis is given a specific priority for the router's gateway, and the priority increases with
increasing value (i.e. 1 < 2 < 3). The chassis with the highest priority hosts the gateway port. The other chassis are
selected as backups.
There are two approaches for rescheduling supported by the ovn driver right now:
• Least loaded - select the least-loaded chassis first.
• Random - select a chassis randomly.
A few points to consider for the design:
• If there are two chassis, C1 and C2, where the routers are already balanced, and a new chassis C3 is
added, then routers should be rescheduled only from C1 to C3 and from C2 to C3. Rescheduling from
C1 to C2 and vice versa should not be allowed.
• When rescheduling a router's chassis, the primary chassis for a gateway router will be left
untouched. However, for the scenario where all routers are scheduled on the only chassis which
is available as a gateway, the addition of a second gateway chassis would schedule the router
gateway ports at a lower priority on the new chassis.
The following scenarios are possible and have been considered in the design:
• Case #1:
– System has only one chassis C1 and all router gateway ports are scheduled on it. We
add a new chassis C2.
– Behavior: All the routers scheduled on C1 will also be scheduled on C2 with priority 1.
• Case #2:
Rebalancing is the second part of the design; it assigns a new primary to already scheduled router
gateway ports. Downtime is expected in this operation. Rebalancing of routers can be achieved using an
external CLI script; a similar approach has been implemented for DHCP rescheduling [4]. The primary
gateway chassis can be moved only to another, previously scheduled gateway chassis. Rebalancing
occurs only if the number of primary gateway ports scheduled on a given chassis for a given provider network
is higher than the average number of hosted primary gateway ports per chassis per provider network.
This is determined by the formula:
avg_gw_per_chassis = num_gw_by_provider_net / num_chassis_with_provider_net
Where:
• avg_gw_per_chassis - average number of scheduled primary gateway chassis within the same
provider network.
• num_gw_by_provider_net - number of primary chassis gateways scheduled on a given
provider network.
• num_chassis_with_provider_net - number of chassis that have connectivity to a given provider
network.
The rebalancing occurs only if:
num_gw_by_provider_net_by_chassis > avg_gw_per_chassis
Where:
• num_gw_by_provider_net_by_chassis - number of primary gateways hosted on a given
chassis for a given provider network.
• avg_gw_per_chassis - average number of scheduled primary gateway chassis within the same
provider network.
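A small sketch of that check, directly following the two formulas above (the function is illustrative, not the rebalancing script itself):

def should_rebalance(num_gw_by_provider_net_by_chassis,
                     num_gw_by_provider_net,
                     num_chassis_with_provider_net):
    """Return True if this chassis hosts more primary gateways than average."""
    avg_gw_per_chassis = (num_gw_by_provider_net /
                          num_chassis_with_provider_net)
    return num_gw_by_provider_net_by_chassis > avg_gw_per_chassis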
The following scenarios are possible and have been considered in the design:
• Case #1:
– System has only two chassis C1 and C2. Chassis host the same number of gateways.
– Behavior: Rebalancing doesn't occur.
• Case #2:
– System has only two chassis C1 and C2. C1 hosts 3 gateways. C2 hosts 2 gateways.
– Behavior: Rebalancing doesn't occur, so as not to continuously move gateways between
chassis in a loop.
• Case #3:
– System has two chassis C1 and C2. In the meantime, a third chassis C3 has been added to the
system.
– Behavior: Rebalancing should occur. Gateways from C1 and C2 should be moved to
C3, up to avg_gw_per_chassis.
• Case #4:
– System has two chassis C1 and C2. C1 is connected to provnet1, but C2 is connected
to provnet2.
– Behavior: Rebalancing shouldn't occur because of the lack of chassis within the same provider
network.
References
ML2/OVN supports Port Forwarding (PF) across the North/South data plane. Specific L4 ports of a
Floating IP (FIP) can be directed to a specific FixedIP:PortNumber of a VM, so that different services
running in a VM can be isolated and can communicate with external networks easily.
OVN's native load balancing (LB) feature is used to provide this functionality. An OVN load balancer
is expressed in the OVN Northbound load_balancer table and holds all the mappings for a given FIP+protocol. All
PFs for the same FIP+protocol are kept as Virtual IP (VIP) mappings inside an LB entry. See the diagram
below for an example of how that looks:
FIP:PORT = PRIVATE_IP:PRIV_PORT

+---------------------+          +----------------------------------+
| Floating IP AA      |          | Load Balancer AA UDP             |
|                     |          |                                  |
| +-----------------+ |     +--->| AA:portA => internal IP1:portX   |
| | Port Forwarding | |     |    |                                  |
| |                 | |     +--->| AA:portB => internal IP2:portX   |
| | External PortA  +-------+    |                                  |
| +-----------------+ |          +----------------------------------+
+---------------------+
The OVN LB entries have names that include the ID of the FIP and a protocol suffix. The protocol
portion is needed because a single FIP can have multiple UDP and TCP port forwarding entries, while
a given LB entry can be for either one protocol or the other (not both). Based on that, the format used to
specify an LB entry is:
pf-floatingip-<NEUTRON_FIP_ID>-<PROTOCOL>
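For illustration, a FIP that carries both TCP and UDP forwardings would map to two separate LB entries; the FIP ID below is made up:

# Hypothetical Neutron FIP ID, used only for this example
fip_id = '7da0d3a3-7d82-4b2e-a1cc-c34b351b1d1b'

lb_name_tcp = 'pf-floatingip-{}-{}'.format(fip_id, 'tcp')
lb_name_udp = 'pf-floatingip-{}-{}'.format(fip_id, 'udp')
# pf-floatingip-7da0d3a3-7d82-4b2e-a1cc-c34b351b1d1b-tcp
# pf-floatingip-7da0d3a3-7d82-4b2e-a1cc-c34b351b1d1b-udp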
A revision value is present in external_ids of each OVN load balancer entry. That number is synchro-
nized with floating IP entries (NOT the port forwarding!) of the Neutron database.
In order to differentiate a load balancer entry that was created by port forwarding vs load balancer entries
maintained by ovn-octavia-provider, the external_ids field also has an owner value:
external_ids = {
ovn_const.OVN_DEVICE_OWNER_EXT_ID_KEY: PORT_FORWARDING_PLUGIN,
ovn_const.OVN_FIP_EXT_ID_KEY: pf_obj.floatingip_id,
ovn_const.OVN_ROUTER_NAME_EXT_ID_KEY: rtr_name,
'neutron:revision_number': fip_obj.revision_number,
}
The following Neutron registry (callback) events trigger the OVN backend to map port forwarding into LB entries:
@registry.receives(PORT_FORWARDING_PLUGIN, [events.AFTER_INIT])
def register(self, resource, event, trigger, payload=None):
    registry.subscribe(self._handle_notification, PORT_FORWARDING,
                       events.AFTER_CREATE)
    registry.subscribe(self._handle_notification, PORT_FORWARDING,
                       events.AFTER_UPDATE)
    registry.subscribe(self._handle_notification, PORT_FORWARDING,
                       events.AFTER_DELETE)
ML2/OVN supports network logging, based on security groups. Unlike ML2/OVS, the driver for this
functionality leverages the Northbound database to manage affected security group rules. Thus, there is
no need for an agent.
It is good to keep in mind that OpenStack Security Groups (SG) and their rules (SGR) map 1:1 into
OVN's Port Groups (PG) and Access Control Lists (ACL):
Just like SGs have a list of SGRs, PGs have a list of ACLs. PGs also have a list of logical ports, but
that is not really relevant in this context. With regard to Neutron ports, network logging entries (NLE)
can filter on Neutron ports, also known as targets. When that is the case, the underlying implementation
finds the corresponding SGs from the Neutron port. So it all comes back to SGs and the affected SGRs, or PGs
and ACLs as far as OVN is concerned.
For more info on port groups, see: https://docs.openstack.org/networking-ovn/latest/contributor/design/acl_optimizations.html
In order to enable network logging, the Neutron OVN driver relies on 2 tables of the Northbound
database: Meter and ACL.
Meter Table
Meters are how network logging events get throttled, so they do not negatively affect the control plane.
Logged events are sent to the ovn-controller that runs locally on each compute node. Thus, the throttle
keeps ovn-controller from getting overwhelmed. Note that the meters used for network logging do not
rate-limit the datapath; they only affect the logs themselves. With the addition of fair meters, multiple
ACLs can refer to the same meter without competing with each other for which logs get rate-limited. This
attribute is a prerequisite for this feature, as the design aspires to keep the complexity associated with
the management of meters outside OpenStack. The benefit of ACLs sharing a fair meter is that a noisy
neighbor (ACL) will not consume all the available capacity set for the meter.
For more info on fair meters, see: https://github.com/ovn-org/ovn/commit/880dca99eaf73db7e783999c29386d03c82093bf
Below is an example of a meter configuration in OVN. You can locate the fair, unit, burst_size, and rate
attributes:
ACL Table
As mentioned before, ACLs are OVN's counterpart to OpenStack's SGRs. Moreover, there are a few
attributes in each ACL that make it able to provide the network logging feature. Let's use the example
below to point out the relevant fields:
The first command creates a network log for a given SG. The second shows an SGR from that SG.
The third shell command is where we can see how the ACL with the meter information gets populated.
These are the attributes pertinent to network logging (a sketch of how a driver could set them follows the list):
• log: a boolean that dictates whether a log will be generated. Even if the NLE applies to the SGR
via its associated SG, this may be false if the action is not a match. That would be the case if the
NLE specified event DROP, in this example.
• meter: this is the name of the fair meter. It is the same for all ACLs.
• name: This is a string composed of the prefix neutron- and the id of the NLE. It will be part of the
generated logs.
• severity: this is the log severity that will be used by the ovn-controller. It is currently hard-coded
in Neutron, but can be made configurable in future releases.
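A minimal sketch of setting those columns on an existing ACL row via ovsdbapp, assuming a Northbound API handle (nb_api), an already known acl_uuid, a network_log_id and a meter name; the exact calls in the Neutron driver may differ:

# Populate the logging-related columns of one ACL row (values illustrative)
nb_api.db_set(
    'ACL', acl_uuid,
    ('log', True),                          # generate a log on matches
    ('meter', 'network_log_meter'),         # name of the shared fair meter
    ('name', 'neutron-' + network_log_id),  # prefix + NLE id, shown in the logs
    ('severity', 'info'),                   # severity used by ovn-controller
).execute(check_error=True)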
If we poke the SGR with packets that match its criteria, the ovn-controller local to where the ACL is
enforced will log something that looks like this:
2021-02-16T11:59:00.640Z|00045|acl_log(ovn_pinctrl0)|INFO|
name="neutron-2e456c7f-154e-40a8-bb10-f88ba51b90b5",
verdict=allow, severity=info: icmp,vlan_tci=0x0000,
dl_src=fa:16:3e:24:dc:88,dl_dst=fa:16:3e:15:6d:e0,
nw_src=10.0.0.12,nw_dst=10.0.0.11,nw_tos=0,nw_ecn=0,nw_ttl=64,
icmp_type=8,icmp_code=0
It is beyond the scope of this document to talk about what happens after the logs are generated by the
ovn-controllers. Harvesting the log files across compute nodes is something for which a project like
Monasca may be a good fit.
OVN Tools
This document offers details on Neutron tools available for assisting with using the Open Virtual Net-
work (OVN) backend.
Overview
As described in the ovn-migration blueprint, Neutron's OVN ML2 plugin was merged into the Neutron
repository as of the Ussuri release. With that, special care must be taken to apply Neutron changes to
the proper stable branches of the networking-ovn repo.
Note: These scripts are generic enough to work on any patch file, but particularly handy with the
networking-ovn migration.
tools/files_in_patch.py
Use this tool to list the files that are changed in a patch file (see the example at the end of this section).
tools/download_gerrit_change.py
This tool is needed by migrate_names.py (see below), but it can be used independently. Given a
Gerrit change id, it will fetch the latest patchset of the change from review.opendev.org as a patch file.
The output can be stdout or an optional filename.
$ ./tools/download_gerrit_change.py --help
Usage: download_gerrit_change.py [OPTIONS] GERRIT_CHANGE
Options:
-o, --output_patch TEXT Output patch file. Default: stdout
-g, --gerrit_url TEXT The url to Gerrit server [default:
https://review.opendev.org/]
-t, --timeout INTEGER Timeout, in seconds [default: 10]
--help Show this message and exit.
tools/migrate_names.py
Use this tool to modify the names of the files in a patch file so it can be converted to/from the legacy
networking-ovn and Neutron repositories.
The mapping of how the files are renamed is based on migrate_names.txt, which is located in
the same directory where migrate_names.py is installed. That behavior can be modified via the
--mapfile option. More information on how the map is parsed is provided in the header section of
that file.
$ ./tools/migrate_names.py --help
Usage: migrate_names.py [OPTIONS]
Options:
-i, --input_patch TEXT    input_patch patch file or gerrit change
-o, --output_patch TEXT   Output patch file. Default: stdout
-m, --mapfile PATH        Data file that specifies mapping to be applied to
                          input [default: /home/user/openstack/neutron.git
                          /tools/migrate_names.txt]
--reverse / --no-reverse  Map filenames from networking-ovn to Neutron repo
--help                    Show this message and exit.
$ ./tools/migrate_names.py -i 701646 > /tmp/ovn_change.patch
$ ./tools/migrate_names.py -o /tmp/reverse.patch -i /tmp/ovn_change.patch --reverse
$ diff /tmp/reverse.patch /tmp/ovn_change.patch | grep .py
< --- a/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py
< +++ b/neutron/plugins/ml2/drivers/ovn/mech_driver/mech_driver.py
> --- a/networking_ovn/ml2/mech_driver.py
$ ./tools/files_in_patch.py /tmp/ovn_change.patch
networking_ovn/ml2/mech_driver.py
networking_ovn/ml2/trunk_driver.py
networking_ovn/tests/unit/ml2/test_mech_driver.py
networking_ovn/tests/unit/ml2/test_trunk_driver.py
14.7 Dashboards
Gerrit Dashboards
Grafana Dashboards
Look for the neutron and networking-* dashboards by name at the following link:
Grafana
For instance:
• Neutron
• Neutron-lib