Red Hat OpenStack Platform 16.1 Networking Guide
Networking Guide
OpenStack Team
[email protected]
Legal Notice
Copyright © 2022 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is
available at
http://creativecommons.org/licenses/by-sa/3.0/
. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must
provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift,
Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States
and other countries.
Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.
XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.
MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.
Node.js ® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the
official Joyent Node.js open source or commercial project.
The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other
countries and are used with the OpenStack Foundation's permission. We are not affiliated with,
endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
Abstract
A cookbook for common OpenStack Networking tasks.
Table of Contents

MAKING OPEN SOURCE MORE INCLUSIVE

PROVIDING FEEDBACK ON RED HAT DOCUMENTATION

CHAPTER 1. INTRODUCTION TO OPENSTACK NETWORKING
    1.1. MANAGING YOUR RHOSP NETWORKS
    1.2. NETWORKING SERVICE COMPONENTS
    1.3. MODULAR LAYER 2 (ML2) NETWORKING
    1.4. ML2 NETWORK TYPES
    1.5. MODULAR LAYER 2 (ML2) MECHANISM DRIVERS
    1.6. OPEN VSWITCH
    1.7. OPEN VIRTUAL NETWORK (OVN)
    1.8. MODULAR LAYER 2 (ML2) TYPE AND MECHANISM DRIVER COMPATIBILITY
    1.9. EXTENSION DRIVERS FOR THE RHOSP NETWORKING SERVICE

CHAPTER 2. WORKING WITH ML2/OVN
    2.1. LIST OF COMPONENTS IN THE RHOSP OVN ARCHITECTURE
    2.2. THE OVN-CONTROLLER SERVICE ON COMPUTE NODES
    2.3. OVN METADATA AGENT ON COMPUTE NODES
    2.4. THE OVN COMPOSABLE SERVICE
    2.5. LAYER 3 HIGH AVAILABILITY WITH OVN
    2.6. LIMITATIONS OF THE ML2/OVN MECHANISM DRIVER
        2.6.1. ML2/OVS features not yet supported by ML2/OVN
        2.6.2. Core OVN limitations
    2.7. LIMIT FOR NON-SECURE PORTS WITH ML2/OVN
    2.8. ML2/OVS TO ML2/OVN IN-PLACE MIGRATION: VALIDATED AND PROHIBITED SCENARIOS
        2.8.1. Validated ML2/OVS to ML2/OVN migration scenarios
        2.8.2. ML2/OVS to ML2/OVN in-place migration scenarios that have not been verified
        2.8.3. ML2/OVS to ML2/OVN in-place migration and security group rules
    2.9. USING ML2/OVS INSTEAD OF THE DEFAULT ML2/OVN IN A NEW RHOSP 16.1 DEPLOYMENT
    2.10. KEEPING ML2/OVS AFTER AN UPGRADE INSTEAD OF THE DEFAULT ML2/OVN
    2.11. DEPLOYING A CUSTOM ROLE WITH ML2/OVN
    2.12. SR-IOV WITH ML2/OVN AND NATIVE OVN DHCP

CHAPTER 3. MANAGING PROJECT NETWORKS
    3.1. VLAN PLANNING
    3.2. TYPES OF NETWORK TRAFFIC
    3.3. IP ADDRESS CONSUMPTION
    3.4. VIRTUAL NETWORKING
    3.5. ADDING NETWORK ROUTING
    3.6. EXAMPLE NETWORK PLAN
    3.7. CREATING A NETWORK
    3.8. WORKING WITH SUBNETS
    3.9. CREATING A SUBNET
    3.10. ADDING A ROUTER
    3.11. PURGING ALL RESOURCES AND DELETING A PROJECT
    3.12. DELETING A ROUTER
    3.13. DELETING A SUBNET
    3.14. DELETING A NETWORK

CHAPTER 4. CONNECTING VM INSTANCES TO PHYSICAL NETWORKS
    4.1. OVERVIEW OF THE OPENSTACK NETWORKING TOPOLOGY

CHAPTER 5. MANAGING FLOATING IP ADDRESSES
    5.1. CREATING FLOATING IP POOLS
    5.2. ASSIGNING A SPECIFIC FLOATING IP
    5.3. CREATING AN ADVANCED NETWORK
    5.4. ASSIGNING A RANDOM FLOATING IP
    5.5. CREATING MULTIPLE FLOATING IP POOLS
    5.6. BRIDGING THE PHYSICAL NETWORK
    5.7. ADDING AN INTERFACE
    5.8. DELETING AN INTERFACE

CHAPTER 6. TROUBLESHOOTING NETWORKS
    6.1. BASIC PING TESTING
    6.2. VIEWING CURRENT PORT STATUS
    6.3. TROUBLESHOOTING CONNECTIVITY TO VLAN PROVIDER NETWORKS
    6.4. REVIEWING THE VLAN CONFIGURATION AND LOG FILES
    6.5. PERFORMING BASIC ICMP TESTING WITHIN THE ML2/OVN NAMESPACE
    6.6. TROUBLESHOOTING FROM WITHIN PROJECT NETWORKS (ML2/OVS)
    6.7. PERFORMING ADVANCED ICMP TESTING WITHIN THE NAMESPACE (ML2/OVS)
    6.8. CREATING ALIASES FOR OVN TROUBLESHOOTING COMMANDS
    6.9. MONITORING OVN LOGICAL FLOWS
    6.10. MONITORING OPENFLOWS
    6.11. LISTING THE OVN DATABASE TABLES IN AN ML2/OVN DEPLOYMENT
    6.12. VALIDATING YOUR ML2/OVN DEPLOYMENT
    6.13. SELECTING ML2/OVN LOG DEBUG AND INFO MODES
    6.14. ML2/OVN LOG FILES

CHAPTER 7. CONFIGURING PHYSICAL SWITCHES FOR OPENSTACK NETWORKING
    7.1. PLANNING YOUR PHYSICAL NETWORK ENVIRONMENT
    7.2. CONFIGURING A CISCO CATALYST SWITCH
        7.2.1. About trunk ports
        7.2.2. Configuring trunk ports for a Cisco Catalyst switch
        7.2.3. About access ports
        7.2.4. Configuring access ports for a Cisco Catalyst switch
        7.2.5. About LACP port aggregation
        7.2.6. Configuring LACP on the physical NIC
        7.2.7. Configuring LACP for a Cisco Catalyst switch
        7.2.8. About MTU settings
        7.2.9. Configuring MTU settings for a Cisco Catalyst switch
        7.2.10. About LLDP discovery
        7.2.11. Configuring LLDP for a Cisco Catalyst switch

CHAPTER 8. CONFIGURING MAXIMUM TRANSMISSION UNIT (MTU) SETTINGS
    8.1. MTU OVERVIEW
    8.2. CONFIGURING MTU SETTINGS IN DIRECTOR
    8.3. REVIEWING THE RESULTING MTU CALCULATION

CHAPTER 9. CONFIGURING QUALITY OF SERVICE (QOS) POLICIES

CHAPTER 10. CONFIGURING BRIDGE MAPPINGS
    10.1. OVERVIEW OF BRIDGE MAPPINGS
    10.2. TRAFFIC FLOW
    10.3. CONFIGURING BRIDGE MAPPINGS
    10.4. MAINTAINING BRIDGE MAPPINGS FOR OVS
        10.4.1. Cleaning up OVS patch ports manually
        10.4.2. Cleaning up OVS patch ports automatically

CHAPTER 11. VLAN-AWARE INSTANCES
    11.1. VLAN TRUNKS AND VLAN TRANSPARENT NETWORKS
    11.2. ENABLING VLAN TRANSPARENCY IN ML2/OVN DEPLOYMENTS
    11.3. REVIEWING THE TRUNK PLUG-IN
    11.4. CREATING A TRUNK CONNECTION
    11.5. ADDING SUBPORTS TO THE TRUNK
    11.6. CONFIGURING AN INSTANCE TO USE A TRUNK
    11.7. CONFIGURING NETWORKING SERVICE RPC TIMEOUT
    11.8. UNDERSTANDING TRUNK STATES

CHAPTER 12. CONFIGURING RBAC POLICIES
    12.1. OVERVIEW OF RBAC POLICIES
    12.2. CREATING RBAC POLICIES
    12.3. REVIEWING RBAC POLICIES
    12.4. DELETING RBAC POLICIES
    12.5. GRANTING RBAC POLICY ACCESS FOR EXTERNAL NETWORKS

CHAPTER 13. CONFIGURING DISTRIBUTED VIRTUAL ROUTING (DVR)
    13.1. UNDERSTANDING DISTRIBUTED VIRTUAL ROUTING (DVR)
        13.1.1. Overview of Layer 3 routing
        13.1.2. Routing flows
        13.1.3. Centralized routing
    13.2. DVR OVERVIEW
    13.3. DVR KNOWN ISSUES AND CAVEATS
    13.4. SUPPORTED ROUTING ARCHITECTURES
    13.5. DEPLOYING DVR WITH ML2 OVS
    13.6. MIGRATING CENTRALIZED ROUTERS TO DISTRIBUTED ROUTING
    13.7. DEPLOYING ML2/OVN OPENSTACK WITH DISTRIBUTED VIRTUAL ROUTING (DVR) DISABLED
        13.7.1. Additional resources

CHAPTER 14. PROJECT NETWORKING WITH IPV6
    14.1. IPV6 SUBNET OPTIONS
    14.2. CREATE AN IPV6 SUBNET USING STATEFUL DHCPV6

CHAPTER 15. MANAGING PROJECT QUOTAS
    15.1. CONFIGURING PROJECT QUOTAS
    15.2. L3 QUOTA OPTIONS

CHAPTER 16. DEPLOYING ROUTED PROVIDER NETWORKS
    16.1. ADVANTAGES OF ROUTED PROVIDER NETWORKS
    16.2. FUNDAMENTALS OF ROUTED PROVIDER NETWORKS
    16.3. LIMITATIONS OF ROUTED PROVIDER NETWORKS
    16.4. PREPARING FOR A ROUTED PROVIDER NETWORK
    16.5. CREATING A ROUTED PROVIDER NETWORK
    16.6. MIGRATING A NON-ROUTED NETWORK TO A ROUTED PROVIDER NETWORK

CHAPTER 17. CONFIGURING ALLOWED ADDRESS PAIRS
    17.1. OVERVIEW OF ALLOWED ADDRESS PAIRS
    17.2. CREATING A PORT AND ALLOWING ONE ADDRESS PAIR
    17.3. ADDING ALLOWED ADDRESS PAIRS

CHAPTER 18. COMMON ADMINISTRATIVE NETWORKING TASKS
    18.1. CONFIGURING THE L2 POPULATION DRIVER
    18.2. TUNING KEEPALIVED TO AVOID VRRP PACKET LOSS
    18.3. SPECIFYING THE NAME THAT DNS ASSIGNS TO PORTS
    18.4. ASSIGNING DHCP ATTRIBUTES TO PORTS
    18.5. LOADING KERNEL MODULES
    18.6. CONFIGURING SHARED SECURITY GROUPS

CHAPTER 19. CONFIGURING LAYER 3 HIGH AVAILABILITY (HA)
    19.1. RHOSP NETWORKING SERVICE WITHOUT HIGH AVAILABILITY (HA)
    19.2. OVERVIEW OF LAYER 3 HIGH AVAILABILITY (HA)
    19.3. LAYER 3 HIGH AVAILABILITY (HA) FAILOVER CONDITIONS
    19.4. PROJECT CONSIDERATIONS FOR LAYER 3 HIGH AVAILABILITY (HA)
    19.5. HIGH AVAILABILITY (HA) CHANGES TO THE RHOSP NETWORKING SERVICE
    19.6. ENABLING LAYER 3 HIGH AVAILABILITY (HA) ON RHOSP NETWORKING SERVICE NODES
    19.7. REVIEWING HIGH AVAILABILITY (HA) RHOSP NETWORKING SERVICE NODE CONFIGURATIONS

CHAPTER 20. REPLACING NETWORKER NODES
    20.1. PREPARING TO REPLACE NETWORK NODES
    20.2. REPLACING A NETWORKER NODE
    20.3. RESCHEDULING NODES AND CLEANING UP THE NETWORKING SERVICE

CHAPTER 21. IDENTIFYING VIRTUAL DEVICES WITH TAGS
    21.1. OVERVIEW OF VIRTUAL DEVICE TAGGING
    21.2. TAGGING VIRTUAL DEVICES
MAKING OPEN SOURCE MORE INCLUSIVE
PROVIDING FEEDBACK ON RED HAT DOCUMENTATION
2. Ensure that you see the Feedback button in the upper right corner of the document.
6. Optional: Add your email address so that the documentation team can contact you for
clarification on your issue.
7. Click Submit.
CHAPTER 1. INTRODUCTION TO OPENSTACK NETWORKING
Inside project networks, you can use pools of floating IP addresses or a single floating IP
address to direct ingress traffic to your VM instances. Using bridge mappings, you can associate
a physical network name (an interface label) to a bridge created with OVS or OVN to allow
provider network traffic to reach the physical network.
Routed provider networks (RPNs) simplify the cloud for end users because they see only one network. For cloud operators, RPNs deliver scalability and fault tolerance. For example, if a major error occurs, only one segment is affected instead of the entire network failing.
RHOSP uses VRRP to make project routers and floating IP addresses highly available. As an alternative to centralized routing, Distributed Virtual Routing (DVR) offers a routing design based on VRRP that deploys the L3 agent and schedules routers on every Compute node.
For more information, see Using availability zones to make network resources highly available .
In a VLAN transparent network, you set up VLAN tagging in the VM instances. The VLAN tags
are transferred over the network and consumed by the VM instances on the same VLAN, and
ignored by other instances and devices. VLAN trunks support VLAN-aware instances by
combining VLANs into a single trunked port.
API server
The RHOSP networking API includes support for Layer 2 networking and IP Address
Management (IPAM), as well as an extension for a Layer 3 router construct that enables routing
between Layer 2 networks and gateways to external networks. RHOSP networking includes a
growing list of plug-ins that enable interoperability with various commercial and open source
network technologies, including routers, switches, virtual switches and software-defined
networking (SDN) controllers.
Messaging queue
Accepts and routes RPC requests between agents to complete API operations. The message queue is used in the ML2 plug-in for RPC between the neutron server and the neutron agents that run on each hypervisor, for example in the ML2 mechanism drivers for Open vSwitch and Linux bridge.
The ML2 framework distinguishes between the two kinds of drivers that can be configured:
Type drivers
Define how an RHOSP network is technically realized.
Each available network type is managed by an ML2 type driver, which maintains any required type-specific network state. Type drivers validate the type-specific information for provider networks and are responsible for allocating a free segment in project networks. Examples of type drivers are GENEVE, GRE, and VXLAN.
Mechanism drivers
Define the mechanism to access an RHOSP network of a certain type.
The mechanism driver takes the information established by the type driver and applies it to the
networking mechanisms that have been enabled. Examples of mechanism drivers are Open Virtual
Networking (OVN) and Open vSwitch (OVS).
Mechanism drivers can employ L2 agents, and by using RPC interact directly with external devices or
controllers. You can use multiple mechanism and type drivers simultaneously to access different
ports of the same virtual network.
Additional resources
Section 1.8, “Modular Layer 2 (ML2) type and mechanism driver compatibility”
Flat
All virtual machine (VM) instances reside on the same network, which can also be shared with the
hosts. No VLAN tagging or other network segregation occurs.
VLAN
With RHOSP networking users can create multiple provider or project networks using VLAN IDs
(802.1Q tagged) that correspond to VLANs present in the physical network. This allows instances to
communicate with each other across the environment. They can also communicate with dedicated
servers, firewalls, load balancers and other network infrastructure on the same Layer 2 VLAN.
You can use VLANs to segment network traffic for computers running on the same switch. This
means that you can logically divide your switch by configuring the ports to be members of different
networks — they are basically mini-LANs that you can use to separate traffic for security reasons.
For example, if your switch has 24 ports in total, you can assign ports 1-6 to VLAN200, and ports 7-18
to VLAN201. As a result, computers connected to VLAN200 are completely separate from those on
VLAN201; they cannot communicate directly, and if they want to communicate, the traffic must pass through a
router as if they were two separate physical switches. Firewalls can also be useful for governing which
VLANs can communicate with each other.
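For example, a provider network that maps to VLAN 200 on the physical network physnet1 can be created with a command like the following; the VLAN ID, physical network label, and network name are illustrative assumptions rather than values from this guide:

$ openstack network create \
  --provider-network-type vlan \
  --provider-physical-network physnet1 \
  --provider-segment 200 \
  vlan200-net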
GENEVE tunnels
GENEVE recognizes and accommodates the changing capabilities and needs of different devices in
network virtualization. It provides a framework for tunneling rather than being prescriptive about the
entire system. GENEVE flexibly defines the content of the metadata that is added during encapsulation
and tries to adapt to various virtualization scenarios. It uses UDP as its transport protocol and is dynamic
in size using extensible option headers. GENEVE supports unicast, multicast, and broadcast. The GENEVE
type driver is compatible with the ML2/OVN mechanism driver.
VXLAN and GRE tunnels
VXLAN and GRE use network overlays to support private communication between instances. An
RHOSP networking router is required to enable traffic to traverse outside of the GRE or VXLAN
project network. A router is also required to connect directly-connected project networks with
external networks, including the internet; the router provides the ability to connect to instances
directly from an external network using floating IP addresses. VXLAN and GRE type drivers are
compatible with the ML2/OVS mechanism driver.
Additional resources
Section 1.8, “Modular Layer 2 (ML2) type and mechanism driver compatibility”
Red Hat chose ML2/OVN as the default mechanism driver for all new deployments starting with RHOSP
16.0 because it offers immediate advantages over the ML2/OVS mechanism driver for most customers
today. Those advantages multiply with each release while we continue to enhance and improve the ML2/OVN feature set.
If your existing Red Hat OpenStack Platform (RHOSP) deployment uses the ML2/OVS mechanism
driver, you should start now to evaluate the benefits and feasibility of replacing the OVS driver with the
ML2/OVN mechanism driver. See the guide Migrating the Networking Service to the ML2 OVN
Mechanism Driver.
You enable mechanism drivers using the Orchestration service (heat) parameter,
NeutronMechanismDrivers. Here is an example from a heat custom environment file:
parameter_defaults:
  ...
  NeutronMechanismDrivers: ansible,ovn,baremetal
  ...
The order in which you specify the mechanism drivers matters. In the earlier example, if you want to bind
a port using the baremetal mechanism driver, then you must specify baremetal before ansible.
Otherwise, the ansible driver will bind the port, because it precedes baremetal in the list of values for
NeutronMechanismDrivers.
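For example, if you want the baremetal mechanism driver to bind bare-metal ports, an ordering such as the following (an illustration of the rule above, not a setting quoted from this guide) lists baremetal before ansible:

parameter_defaults:
  NeutronMechanismDrivers: baremetal,ansible,ovn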
Additional resources
Neutron in Component, Plug-In, and Driver Support in Red Hat OpenStack Platform
For more information about network interface bonds, see Network Interface Bonding in the Advanced
Overcloud Customization guide.
NOTE
To mitigate the risk of network loops in OVS, only a single interface or a single bond can
be a member of a given bridge. If you require multiple bonds or interfaces, you can
configure multiple bridges.
A physical network comprises physical wires, switches, and routers. A virtual network extends a physical
network into a hypervisor or container platform, bridging VMs or containers into the physical network.
An OVN logical network is a network implemented in software that is insulated from physical networks by
tunnels or other encapsulations. This allows IP and other address spaces used in logical networks to
overlap with those used on physical networks without causing conflicts. Logical network topologies can
be arranged without regard for the topologies of the physical networks on which they run. Thus, VMs
that are part of a logical network can migrate from one physical machine to another without network
disruption.
The encapsulation layer prevents VMs and containers connected to a logical network from
communicating with nodes on physical networks. For clustering VMs and containers, this can be
acceptable or even desirable, but in many cases VMs and containers do need connectivity to physical
networks. OVN provides multiple forms of gateways for this purpose. An OVN deployment consists of
several components:
You can programmatically list available extensions by performing a GET on the /extensions URI. Note that this is
a versioned request; that is, an extension available in one API version might not be available in another.
The ML2 plug-in also supports extension drivers that allows other pluggable drivers to extend the core
resources implemented in the ML2 plug-in for network objects. Examples of extension drivers include
support for QoS, port security, and so on.
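For example, you can list the extensions that the Networking API exposes with the OpenStack client; this is a generic illustration rather than a step from this guide:

$ openstack extension list --network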
CHAPTER 2. WORKING WITH ML2/OVN
Earlier RHOSP versions used the Open vSwitch (OVS) mechanism driver by default, but Red Hat
recommends the ML2/OVN mechanism driver for most deployments.
If you upgrade from an RHOSP 13 ML2/OVS deployment to RHOSP 16, Red Hat recommends migrating
from ML2/OVS to ML2/OVN after the upgrade. In some cases, ML2/OVN might not meet your
requirements. In these cases you can deploy RHOSP with ML2/OVS.
The ovn-controller service expects certain key-value pairs in the external_ids column of the
Open_vSwitch table; puppet-ovn uses puppet-vswitch to populate these fields. The following
example shows the key-value pairs that puppet-vswitch configures in the external_ids column:
hostname=<HOST NAME>
ovn-encap-ip=<IP OF THE NODE>
ovn-encap-type=geneve
ovn-remote=tcp:OVN_DBS_VIP:6642
OpenStack guest instances access the Networking metadata service available at the link-local IP
address: 169.254.169.254. The neutron-ovn-metadata-agent has access to the host networks where the
Compute metadata API exists. Each HAProxy is in a network namespace that is not able to reach the
appropriate host network. HAProxy adds the necessary headers to the metadata API request and then
forwards the request to the neutron-ovn-metadata-agent over a UNIX domain socket.
The OVN Networking service creates a unique network namespace for each virtual network that enables
the metadata service. Each network accessed by the instances on the Compute node has a
corresponding metadata namespace (ovnmeta-<datapath_uuid>).
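For example, on a Compute node you can list these namespaces with a standard command such as the following; the exact namespace names depend on the datapath UUIDs in your deployment:

# ip netns list | grep ovnmeta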
In a default OSP 16.1 deployment, the ML2/OVN composable service runs on Controller nodes. You can
optionally create a custom Networker role and run the OVN composable service on dedicated
Networker nodes.
The OVN composable service ovn-dbs is deployed in a container called ovn-dbs-bundle. In a default
installation ovn-dbs is included in the Controller role and runs on Controller nodes. Because the service
is composable, you can assign it to another role, such as a Networker role.
If you assign the OVN composable service to another role, ensure that the service is co-located on the
same node as the pacemaker service, which controls the OVN database containers.
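The following is a hypothetical excerpt from a custom roles data file that illustrates this co-location constraint; the service list is abbreviated and the exact role composition in your environment may differ:

- name: Networker
  ServicesDefault:
    - OS::TripleO::Services::OVNDBs
    - OS::TripleO::Services::OVNController
    - OS::TripleO::Services::Pacemaker
    ...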
NOTE
L3HA uses OVN to balance the routers back to the original gateway nodes to avoid any
nodes becoming a bottleneck.
BFD monitoring
OVN uses the Bidirectional Forwarding Detection (BFD) protocol to monitor the availability of the
gateway nodes. This protocol is encapsulated on top of the Geneve tunnels established from node to
node.
Each gateway node monitors all the other gateway nodes in a star topology in the deployment. Gateway
nodes also monitor the compute nodes to let the gateways enable and disable routing of packets and
ARP responses and announcements.
Each compute node uses BFD to monitor each gateway node and automatically steers external traffic,
such as source and destination Network Address Translation (SNAT and DNAT), through the active
gateway node for a given router. Compute nodes do not need to monitor other compute nodes.
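If you want to inspect the BFD sessions on a node, a general Open vSwitch command such as the following can show them; this is standard OVS tooling, not a step prescribed by this guide:

# ovs-appctl bfd/show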
NOTE
External network failures are not detected as would happen with an ML2-OVS
configuration.
The gateway node becomes disconnected from the network (tunneling interface).
NOTE
This BFD monitoring mechanism only works for link failures, not for routing failures.
Distributed virtual routing (DVR) with OVN on VLAN project (tenant) networks: FIP traffic does not pass to a VLAN tenant network with ML2/OVN and DVR. DVR is enabled by default in new ML2/OVN deployments and in ML2/OVN deployments that were migrated from ML2/OVS deployments that had DVR enabled. If you need VLAN tenant networks with OVN, you can disable DVR. To disable DVR, include the following lines in an environment file:

parameter_defaults:
  NeutronEnableDVR: false

For more information, see https://bugzilla.redhat.com/show_bug.cgi?id=1704596 and https://bugzilla.redhat.com/show_bug.cgi?id=1766930.
In some large ML2/OVN RHOSP deployments, a flow chain limit inside ML2/OVN can drop ARP
requests that are targeted to ports where the security plug-in is disabled.
There is no documented maximum limit for the actual number of logical switch ports that ML2/OVN can
support, but the limit approximates 4,000 ports.
Attributes that contribute to the approximated limit are the number of resubmits in the OpenFlow
pipeline that ML2/OVN generates, and changes to the overall logical topology.
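The ports in question are those created with port security disabled. Such a port can be created, for example, as follows; the network and port names are placeholders:

$ openstack port create --network private --disable-port-security nonsecure-port-1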
SR-IOV was not present in the starting environment or added during or after the migration.
Workloads used only SR-IOV virtual function (VF) ports. SR-IOV physical function (PF) ports caused
migration failure.
2.8.2. ML2/OVS to ML2/OVN in-place migration scenarios that have not been
verified
You cannot perform an in-place ML2/OVS to ML2/OVN migration in the following scenarios until Red
Hat announces that the underlying issues are resolved.
For example, the default security group includes rules that allow egress to the DHCP server. If you
deleted those rules in your ML2/OVS deployment, ML2/OVS automatically adds implicit rules that allow
egress to the DHCP server. Those implicit rules are not supported by ML2/OVN, so in your target
ML2/OVN environment, DHCP and metadata traffic would not reach the DHCP server and the instance
would not boot. In this case, to restore DHCP access, you could add the following rules:
# Allow VM to contact dhcp server (ipv6, non-slaac). Be aware that the remote-ip may vary depending on your use case!
openstack security group rule create --egress --ethertype IPv6 --protocol udp --dst-port 547 --remote-ip ff02::1:2 ${SEC_GROUP_ID}

# Allow VM to contact metadata server (ipv6)
openstack security group rule create --egress --ethertype IPv6 --protocol tcp --remote-ip fe80::a9fe:a9fe ${SEC_GROUP_ID}
Procedure
parameter_defaults:
  ContainerImagePrepare:
  - set:
      ...
      neutron_driver: ovs
Example
parameter_defaults:
  ...
  NeutronNetworkType: 'vxlan'
4. Run the openstack overcloud deploy command and include the core heat templates,
environment files, and the files that you modified.
IMPORTANT
The order of the environment files is important because the parameters and
resources defined in subsequent environment files take precedence.
If instead you choose to keep using ML2/OVS after the upgrade, follow Red Hat’s upgrade procedure as
documented, and do not perform the ML2/OVS-to-ML2/OVN migration.
Networker
Run the OVN composable services on dedicated networker nodes.
Networker with SR-IOV
Run the OVN composable services on dedicated networker nodes with SR-IOV.
Controller with SR-IOV
Run the OVN composable services on SR-IOV capable controller nodes.
Limitations
The following limitations apply to the use of SR-IOV with ML2/OVN and native OVN DHCP in this
release.
All external ports are scheduled on a single gateway node because there is only one HA Chassis
Group for all of the ports.
North/south routing on VF(direct) ports on VLAN tenant networks does not work with SR-IOV
because the external ports are not colocated with the logical router’s gateway ports. See
https://bugs.launchpad.net/neutron/+bug/1875852.
Prerequisites
You know how to deploy custom roles. For more information see Composable Services and
Custom Roles in the Advanced Overcloud Customization guide.
Procedure
1. Log in to the undercloud host as the stack user and source the stackrc file.
$ source stackrc
2. Choose the custom roles file that is appropriate for your deployment. Use it directly in the
deploy command if it suits your needs as-is. Or you can generate your own custom roles file that
combines other custom roles files.
3. [Optional] Generate a new custom roles data file that combines one of these custom roles files
with other custom roles files. Follow the instructions in Creating a roles_data file . Include the
appropriate source role files depending on your deployment.
4. [Optional] To identify specific nodes for the role, you can create a specific hardware flavor and
assign the flavor to specific nodes. Then use an environment file to define the flavor for the role
and to specify a node count. For more information, see the example in Creating a new role.
Deployment Settings

Networker role:

ControllerParameters:
  OVNCMSOptions: ""
ControllerSriovParameters:
  OVNCMSOptions: ""
NetworkerParameters:
  OVNCMSOptions: "enable-chassis-as-gw"
NetworkerSriovParameters:
  OVNCMSOptions: ""

Networker role with SR-IOV:

ControllerParameters:
  OVNCMSOptions: ""
ControllerSriovParameters:
  OVNCMSOptions: ""
NetworkerParameters:
  OVNCMSOptions: ""
NetworkerSriovParameters:
  OVNCMSOptions: "enable-chassis-as-gw"

Co-located control and networker with SR-IOV:

OS::TripleO::Services::NeutronDhcpAgent: OS::Heat::None

ControllerParameters:
  OVNCMSOptions: ""
ControllerSriovParameters:
  OVNCMSOptions: "enable-chassis-as-gw"
NetworkerParameters:
  OVNCMSOptions: ""
NetworkerSriovParameters:
  OVNCMSOptions: ""
7. Deploy the overcloud. Include the environment file in your deployment command with the -e
option. Include the custom roles data file in your deployment command with the -r option. For
example: -r Networker.yaml or -r mycustomrolesfile.yaml.
Verification steps
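The container listing that follows is the kind of output gathered on a Compute or Networker node with a command similar to this sketch; the node and the exact container name are deployment-specific assumptions:

$ sudo podman ps | grep ovn_metadata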
a65125d9588d undercloud-0.ctlplane.localdomain:8787/rh-osbs/rhosp16-openstack-
neutron-metadata-agent-ovn:16.1_20200813.1 kolla_start 23 hours ago Up 21 hours
ago ovn_metadata_agent
2. Ensure that Controller nodes with OVN services or dedicated Networker nodes have been
configured as gateways for OVS.
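You can read this setting from the local Open vSwitch database; a command along the following lines is one way to do it (an assumption, not quoted from the original procedure):

$ sudo ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options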
external_ids:ovn-cms-options
enable-chassis-as-gw
f54cbbf4523a undercloud-0.ctlplane.localdomain:8787/rh-osbs/rhosp16-openstack-neutron-
sriov-agent:16.2_20200813.1
kolla_start 23 hours ago Up 21 hours ago neutron_sriov_agent
Additional resources
Composable services and custom roles in the Advanced Overcloud Customization guide.
CHAPTER 3. MANAGING PROJECT NETWORKS
For example, it is ideal that your management or API traffic is not on the same network as systems that
serve web traffic. Traffic between VLANs travels through a router where you can implement firewalls to
govern traffic flow.
You must plan your VLANs as part of your overall plan that includes traffic isolation, high availability, and
IP address utilization for the various types of virtual networking resources in your deployment.
NOTE
The maximum number of VLANs in a single network, or in one OVS agent for a network
node, is 4094. In situations where you require more than the maximum number of VLANs,
you can create several provider networks (VXLAN networks) and several network nodes,
one per network. Each node can contain up to 4094 private networks.
NOTE
You do not require all of the isolated VLANs in this section for every OpenStack
deployment. For example, if your cloud users do not create ad hoc virtual networks on
demand, then you may not require a project network. If you want each VM to connect
directly to the same switch as any other physical system, connect your Compute nodes
directly to a provider network and configure your instances to use that provider network
directly.
Provisioning network - This VLAN is dedicated to deploying new nodes using director over
PXE boot. OpenStack Orchestration (heat) installs OpenStack onto the overcloud bare metal
servers. These servers attach to the physical network to receive the platform installation image
from the undercloud infrastructure.
Internal API network - The OpenStack services use the Internal API network for
communication, including API communication, RPC messages, and database communication. In
addition, this network is used for operational messages between controller nodes. When
planning your IP address allocation, note that each API service requires its own IP address.
Specifically, you must plan IP addresses for each of the following services:
vip-msg (amqp)
vip-keystone-int
vip-glance-int
vip-cinder-int
vip-nova-int
vip-neutron-int
vip-horizon-int
vip-heat-int
vip-ceilometer-int
vip-swift-int
vip-keystone-pub
vip-glance-pub
vip-cinder-pub
vip-nova-pub
vip-neutron-pub
vip-horizon-pub
vip-heat-pub
vip-ceilometer-pub
vip-swift-pub
NOTE
When using High Availability, Pacemaker moves VIP addresses between the physical
nodes.
Storage - Block Storage, NFS, iSCSI, and other storage services. Isolate this network to
separate physical Ethernet links for performance reasons.
Storage Management - OpenStack Object Storage (swift) uses this network to synchronise
data objects between participating replica nodes. The proxy service acts as the intermediary
interface between user requests and the underlying storage layer. The proxy receives incoming
requests and locates the necessary replica to retrieve the requested data. Services that use a
Ceph back end connect over the Storage Management network, since they do not interact with
Ceph directly but rather use the front end service. Note that the RBD driver is an exception; this
traffic connects directly to Ceph.
Project networks - Neutron provides each project with their own networks using either VLAN
segregation (where each project network is a network VLAN), or tunneling using VXLAN or
GRE. Network traffic is isolated within each project network. Each project network has an IP
subnet associated with it, and multiple project networks may use the same addresses.
External - The External network hosts the public API endpoints and connections to the
Dashboard (horizon). You can also use this network for SNAT. In a production deployment, it is
common to use a separate network for floating IP addresses and NAT.
Physical nodes - Each physical NIC requires one IP address. It is common practice to dedicate
physical NICs to specific functions. For example, allocate management and NFS traffic to
distinct physical NICs, sometimes with multiple NICs connecting across to different switches for
redundancy purposes.
Virtual IPs (VIPs) for High Availability - Plan to allocate between one and three VIPs for each
network that controller nodes share.
Project networks - Each project network requires a subnet that it can use to allocate IP
addresses to instances.
Virtual routers - Each router interface plugging into a subnet requires one IP address. If you
want to use DHCP, each router interface requires two IP addresses.
Instances - Each instance requires an address from the project subnet that hosts the instance.
If you require ingress traffic, you must allocate a floating IP address to the instance from the
designated external network.
Management traffic - Includes OpenStack Services and API traffic. All services share a small
number of VIPs. API, RPC and database services communicate on the internal API VIP.
2. Select your virtual router name in the Routers list, and click Add Interface.
In the Subnet list, select the name of your new subnet. You can optionally specify an IP address
for the interface in this field.
When creating networks, it is important to know that networks can host multiple subnets. This is useful if
you intend to host distinctly different systems in the same network, and prefer a measure of isolation
between them. For example, you can designate that only webserver traffic is present on one subnet,
while database traffic traverses another. Subnets are isolated from each other, and any instance that
wants to communicate with another subnet must have their traffic directed by a router. Consider placing
systems that require a high volume of traffic amongst themselves in the same subnet, so that they do
not require routing, and can avoid the subsequent latency and load.
3. Click the Next button, and specify the following values in the Subnet tab:
Enable DHCP - Enables DHCP services for this subnet. You can use DHCP to automate the
distribution of IP settings to your instances.
IPv6 Address - Configuration Modes. If you create an IPv6 network, you must specify how
to allocate IPv6 addresses and additional information:
No Options Specified - Select this option if you want to set IP addresses manually, or if
you use a non OpenStack-aware method for address allocation.
DHCPv6 stateful - Instances receive IPv6 addresses as well as additional options (for
example, DNS) from the OpenStack Networking DHCPv6 service. Use this
configuration to create a subnet with ra_mode set to dhcpv6-stateful and
address_mode set to dhcpv6-stateful.
Allocation Pools - Range of IP addresses that you want DHCP to assign. For example, the
value 192.168.22.100,192.168.22.150 considers all IP addresses in that range as available for
allocation.
DNS Name Servers - IP addresses of the DNS servers available on the network. DHCP
distributes these addresses to the instances for name resolution.
IMPORTANT
For strategic services such as DNS, it is a best practice not to host them on
your cloud. For example, if your cloud hosts DNS and your cloud becomes
inoperable, DNS is unavailable and the cloud components cannot do lookups
on each other.
Host Routes - Static host routes. First, specify the destination network in CIDR format,
followed by the next hop that you want to use for routing (for example, 192.168.23.0/24,
10.1.31.1). Provide this value if you need to distribute static routes to instances.
5. Click Create.
You can view the complete network in the Networks tab. You can also click Edit to change any
options as needed. When you create instances, you can configure them to use this subnet,
and they receive any specified DHCP options.
You can create subnets only in pre-existing networks. Remember that project networks in OpenStack
Networking can host multiple subnets. This is useful if you intend to host distinctly different systems in
the same network, and prefer a measure of isolation between them.
For example, you can designate that only webserver traffic is present on one subnet, while database
traffic traverse another.
Subnets are isolated from each other, and any instance that wants to communicate with another subnet
must have their traffic directed by a router. Therefore, you can lessen network latency and load by
grouping systems in the same subnet that require a high volume of traffic between each other.
1. In the dashboard, select Project > Network > Networks, and click the name of your network in
the Networks view.
Enable DHCP - Enables DHCP services for this subnet. You can use DHCP to automate the
distribution of IP settings to your instances.
IPv6 Address - Configuration Modes. If you create an IPv6 network, you must specify how
to allocate IPv6 addresses and additional information:
No Options Specified - Select this option if you want to set IP addresses manually, or if
you use a non OpenStack-aware method for address allocation.
DHCPv6 stateful - Instances receive IPv6 addresses as well as additional options (for
example, DNS) from the OpenStack Networking DHCPv6 service. Use this
configuration to create a subnet with ra_mode set to dhcpv6-stateful and
address_mode set to dhcpv6-stateful.
Allocation Pools - Range of IP addresses that you want DHCP to assign. For example, the
value 192.168.22.100,192.168.22.150 considers all IP addresses in that range as available for
allocation.
DNS Name Servers - IP addresses of the DNS servers available on the network. DHCP
distributes these addresses to the instances for name resolution.
Host Routes - Static host routes. First, specify the destination network in CIDR format,
followed by the next hop that you want to use for routing (for example, 192.168.23.0/24,
10.1.31.1). Provide this value if you need to distribute static routes to instances.
4. Click Create.
You can view the subnet in the Subnets list. You can also click Edit to change any options as
needed. When you create instances, you can configure them to use this subnet, and they
receive any specified DHCP options.
The default gateway of a router defines the next hop for any traffic received by the router. Its network is
typically configured to route traffic to the external physical network using a virtual bridge.
1. In the dashboard, select Project > Network > Routers, and click Create Router.
2. Enter a descriptive name for the new router, and click Create router.
3. Click Set Gateway next to the entry for the new router in the Routers list.
4. In the External Network list, specify the network that you want to receive traffic destined for an
external location.
IMPORTANT
The default routes for subnets must not be overwritten. When the default route for a
subnet is removed, the L3 agent automatically removes the corresponding route in the
router namespace too, and network traffic cannot flow to and from the associated
subnet. If the existing router namespace route has been removed, to fix this problem,
perform these steps:
For example, to purge the resources of the test-project project, and then delete the project, run the
following commands:
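A minimal sketch of such commands, assuming the standard OpenStack client and that you want to delete the project in a separate step, looks like this:

$ openstack project purge --keep-project --project test-project
$ openstack project delete test-project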
To remove its interfaces and delete a router, complete the following steps:
1. In the dashboard, select Project > Network > Routers, and click the name of the router that you
want to delete.
2. Select the interfaces of type Internal Interface, and click Delete Interfaces.
3. From the Routers list, select the target router and click Delete Routers.
To delete a network in your project, together with any dependent interfaces, complete the following
steps:
To remove an interface, find the ID number of the network that you want to delete by clicking on
your target network in the Networks list, and looking at the ID field. All the subnets associated
with the network share this value in the Network ID field.
2. Navigate to Project > Network > Routers, click the name of your virtual router in the Routers
list, and locate the interface attached to the subnet that you want to delete.
You can distinguish this subnet from the other subnets by the IP address that served as the
gateway IP. You can further validate the distinction by ensuring that the network ID of the
interface matches the ID that you noted in the previous step.
3. Click the Delete Interface button for the interface that you want to delete.
4. Select Project > Network > Networks, and click the name of your network.
5. Click the Delete Subnet button for the subnet that you want to delete.
NOTE
If you are still unable to remove the subnet at this point, ensure it is not already
being used by any instances.
6. Select Project > Network > Networks, and select the network you would like to delete.
CHAPTER 4. CONNECTING VM INSTANCES TO PHYSICAL NETWORKS
Neutron server - This service runs the OpenStack Networking API server, which provides the
API for end-users and services to interact with OpenStack Networking. This server also
integrates with the underlying database to store and retrieve project network, router, and
loadbalancer details, among others.
Neutron agents - These are the services that perform the network functions for OpenStack
Networking:
neutron-l3-agent - performs layer 3 routing between project private networks, the external
network, and others.
Compute node - This node hosts the hypervisor that runs the virtual machines, also known as
instances. A Compute node must be wired directly to the network in order to provide external
connectivity for instances. This node is typically where the l2 agents run, such as neutron-
openvswitch-agent.
Network node - The server that runs the OpenStack Networking agents.
The steps in this chapter apply to an environment that contains these three node types. If your
deployment has both the Controller and Network node roles on the same physical node, then you must
perform the steps from both sections on that server. This also applies for a High Availability (HA)
environment, where all three nodes might be running the Controller node and Network node services
with HA. As a result, you must complete the steps in sections applicable to Controller and Network nodes
on all three nodes.
Procedure
1. On the undercloud host, logged in as the stack user, create a custom YAML environment file.
Example
$ vi /home/stack/templates/my-modules-environment.yaml
TIP
The Red Hat OpenStack Platform Orchestration service (heat) uses a set of plans called
templates to install and configure your environment. You can customize aspects of the
overcloud with a custom environment file , which is a special type of template that provides
customization for your orchestration templates.
Example
parameter_defaults:
  NeutronBridgeMappings: 'physnet1:br-net1,physnet2:br-net2'
3. In the custom NIC configuration template for the Controller and Compute nodes, configure the
bridges with interfaces attached.
Example
...
- type: ovs_bridge
  name: br-net1
  mtu: 1500
  use_dhcp: false
  members:
  - type: interface
    name: eth0
    mtu: 1500
    use_dhcp: false
    primary: true
- type: ovs_bridge
  name: br-net2
  mtu: 1500
  use_dhcp: false
  members:
  - type: interface
    name: eth1
    mtu: 1500
    use_dhcp: false
    primary: true
...
4. Run the openstack overcloud deploy command and include the templates and the
environment files, including this modified custom NIC template and the new environment file.
IMPORTANT
The order of the environment files is important because the parameters and
resources defined in subsequent environment files take precedence.
Example
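A deployment command along these lines fits this step; the exact set of environment files depends on your deployment, and only the custom environment file name created earlier in this procedure is taken from this guide:

$ openstack overcloud deploy --templates \
  -e <your-environment-files> \
  -e /home/stack/templates/my-modules-environment.yaml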
Verification
1. Create an external network (public1) as a flat network and associate it with the configured
physical network (physnet1).
Configure it as a shared network (using --share) to let other users create VM instances that
connect to the external network directly.
Example
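A command of the following form matches this step; the network name public01 is assumed so that it matches the instance creation command later in this verification:

$ openstack network create \
  --share \
  --external \
  --provider-physical-network physnet1 \
  --provider-network-type flat \
  public01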
Example
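The external network also needs a subnet before instances can use it; the following sketch uses illustrative addresses from the 192.0.2.0/24 documentation range and an assumed subnet name:

$ openstack subnet create \
  --no-dhcp \
  --network public01 \
  --subnet-range 192.0.2.0/24 \
  --allocation-pool start=192.0.2.50,end=192.0.2.100 \
  public_subnet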
Example
$ openstack server create --image rhel --flavor my_flavor --network public01 my_instance
1. Packets leave the eth0 interface of the instance and arrive at the linux bridge qbr-xx.
2. Bridge qbr-xx is connected to br-int using veth pair qvb-xx <-> qvo-xxx. This is because the
bridge is used to apply the inbound/outbound firewall rules defined by the security group.
3. Interface qvb-xx is connected to the qbr-xx linux bridge, and qvoxx is connected to the br-int
Open vSwitch (OVS) bridge.
# brctl show
qbr269d4d73-e7 8000.061943266ebb no qvb269d4d73-e7
tap269d4d73-e7
# ovs-vsctl show
Bridge br-int
fail_mode: secure
Interface "qvof63599ba-8f"
Port "qvo269d4d73-e7"
tag: 5
Interface "qvo269d4d73-e7"
NOTE
Port qvo-xx is tagged with the internal VLAN tag associated with the flat provider
network. In this example, the VLAN tag is 5. When the packet reaches qvo-xx, the VLAN
tag is appended to the packet header.
The packet is then moved to the br-ex OVS bridge using the patch-peer int-br-ex <-> phy-br-ex.
# ovs-vsctl show
Bridge br-int
fail_mode: secure
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
Bridge br-ex
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port br-ex
Interface br-ex
type: internal
When this packet reaches phy-br-ex on br-ex, an OVS flow inside br-ex strips the VLAN tag (5) and
forwards it to the physical interface.
In the following example, the output shows the port number of phy-br-ex as 2.
2(phy-br-ex): addr:ba:b5:7b:ae:5c:a2
config: 0
state: 0
speed: 0 Mbps now, 0 Mbps max
The following output shows any packet that arrives on phy-br-ex (in_port=2) with a VLAN tag of 5
(dl_vlan=5). In addition, an OVS flow in br-ex strips the VLAN tag and forwards the packet to the
physical interface.
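An illustrative flow entry of this kind, trimmed for readability, looks like the following; the exact cookies, counters, and priorities differ in a real deployment:

# ovs-ofctl dump-flows br-ex
 cookie=0x0, ... priority=4,in_port=2,dl_vlan=5 actions=strip_vlan,NORMAL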
If the physical interface is another VLAN-tagged interface, then the physical interface adds the tag to
the packet.
3. The packet moves to br-int via the patch-peer phy-br-ex <--> int-br-ex.
In the following example, int-br-ex uses port number 15. See the entry containing 15(int-br-ex):
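Illustrative output of the kind referred to here, with a placeholder MAC address, looks like this:

# ovs-ofctl show br-int
 15(int-br-ex): addr:12:4e:44:a9:50:f4
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max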
1. When the packet arrives at int-br-ex, an OVS flow rule within the br-int bridge amends the
packet to add the internal VLAN tag 5. See the entry for actions=mod_vlan_vid:5:
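An illustrative flow entry matching this description, trimmed for readability, looks like the following:

# ovs-ofctl dump-flows br-int
 cookie=0x0, ... priority=3,in_port=15,vlan_tci=0x0000 actions=mod_vlan_vid:5,NORMAL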
1. The second rule manages packets that arrive on int-br-ex (in_port=15) with no VLAN tag
(vlan_tci=0x0000): This rule adds VLAN tag 5 to the packet
(actions=mod_vlan_vid:5,NORMAL) and forwards it to qvoxxx.
2. qvoxxx accepts the packet and forwards it to qvbxx, after stripping away the VLAN tag.
NOTE
VLAN tag 5 is an example VLAN that was used on a test Compute node with a flat
provider network; this value was assigned automatically by neutron-openvswitch-agent.
This value may be different for your own flat provider network, and can differ for the
same network on two separate Compute nodes.
Procedure
1. Review the bridge_mappings:
Verify that the physical network name you use (for example, physnet1) is consistent with the contents
of the bridge_mapping configuration as shown in this example:
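For example, on a node running the OVS agent you can check the setting like this; the configuration file path varies between containerized and non-containerized deployments, and the mapping value shown is illustrative:

# grep bridge_mappings /etc/neutron/plugins/ml2/openvswitch_agent.ini
bridge_mappings = physnet1:br-ex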
Confirm that the network is created as external, and uses the flat type:
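The abbreviated output that follows is the kind returned by a command such as this; the network name is an assumption:

$ openstack network show public01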
...
| provider:network_type | flat |
| router:external | True |
...
Run the ovs-vsctl show command, and verify that br-int and br-ex are connected using a patch-peer
int-br-ex <--> phy-br-ex.
# ovs-vsctl show
Bridge br-int
fail_mode: secure
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
Bridge br-ex
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port br-ex
Interface br-ex
type: internal
Run ovs-ofctl dump-flows br-ex and ovs-ofctl dump-flows br-int and review whether the flows strip
the internal VLAN IDs for outgoing packets, and add VLAN IDs for incoming packets. This flow is first
added when you spawn an instance to this network on a specific Compute node.
If this flow is not created after spawning the instance, verify that the network is created as flat, is
external, and that the physical_network name is correct. In addition, review the
bridge_mapping settings.
Finally, review the ifcfg-br-ex and ifcfg-ethx configuration. Ensure that ethX is added as a port
within br-ex, and that the br-ex and ethX interfaces show an UP flag in the output of the ip a command.
Bridge br-ex
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port "eth1"
Interface "eth1"
The following example demonstrates that eth1 is configured as an OVS port, and that the kernel knows
to transfer all packets from the interface, and send them to the OVS bridge br-ex. This can be observed
in the entry: master ovs-system.
# ip a
5: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state
UP qlen 1000
Additional resources
Prerequisites
Your Network nodes and Compute nodes are connected to a physical network using a physical
interface.
This example uses Network nodes and Compute nodes that are connected to a physical
network, physnet1, using a physical interface, eth1.
The switch ports that these interfaces connect to must be configured to trunk the required
VLAN ranges.
Procedure
1. On the undercloud host, logged in as the stack user, create a custom YAML environment file.
Example
$ vi /home/stack/templates/my-modules-environment.yaml
TIP
The Red Hat OpenStack Platform Orchestration service (heat) uses a set of plans called
templates to install and configure your environment. You can customize aspects of the
overcloud with a custom environment file , which is a special type of template that provides
customization for your orchestration templates.
Example
parameter_defaults:
NeutronTypeDrivers: vxlan,flat,vlan
3. Configure the NeutronNetworkVLANRanges setting to reflect the physical network and VLAN
ranges in use:
Example
parameter_defaults:
NeutronTypeDrivers: 'vxlan,flat,vlan'
NeutronNetworkVLANRanges: 'physnet1:171:172'
4. Create an external network bridge (br-ex), and associate a port ( eth1) with it.
This example configures eth1 to use br-ex:
Example
parameter_defaults:
NeutronTypeDrivers: 'vxlan,flat,vlan'
NeutronNetworkVLANRanges: 'physnet1:171:172'
NeutronBridgeMappings: 'datacentre:br-ex,tenant:br-int'
5. Run the openstack overcloud deploy command and include the core templates and the
environment files, including this new environment file.
IMPORTANT
The order of the environment files is important because the parameters and
resources defined in subsequent environment files take precedence.
Example
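A representative invocation, assuming the environment file created earlier in this procedure, looks like the following; replace <other_environment_files> with the environment files that are part of your deployment:
$ openstack overcloud deploy --templates \
  -e <other_environment_files> \
  -e /home/stack/templates/my-modules-environment.yaml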
Verification
1. Create the external networks as type vlan, and associate them with the configured
physical_network.
When you create the external networks, use the --share option so that users in other projects
can share the external networks and can connect VM instances to them directly.
Run the following example command to create two networks: one for VLAN 171, and another for
VLAN 172:
Example
$ openstack network create --external --provider-network-type vlan \
--provider-physical-network physnet1 \
--provider-segment 171 \
--share \
provider-vlan171
2. Create a number of subnets and configure them to use the external network.
You can use either openstack subnet create or the dashboard to create these subnets. Ensure
that the external subnet details you have received from your network administrator are correctly
associated with each VLAN.
In this example, VLAN 171 uses subnet 10.65.217.0/24 and VLAN 172 uses 10.65.218.0/24:
Example
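Commands similar to the following create the two subnets; the gateway addresses and subnet names are placeholders that you must replace with the details from your network administrator:
$ openstack subnet create --network provider-vlan171 \
  --subnet-range 10.65.217.0/24 \
  --dhcp --gateway 10.65.217.254 \
  subnet-provider-171
$ openstack subnet create --network provider-vlan172 \
  --subnet-range 10.65.218.0/24 \
  --dhcp --gateway 10.65.218.254 \
  subnet-provider-172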
Additional resources
1. Packets leaving the eth0 interface of the instance arrive at the linux bridge qbr-xx connected to
the instance.
3. qvbxx is connected to the linux bridge qbr-xx and qvoxx is connected to the Open vSwitch
bridge br-int.
# brctl show
bridge name bridge id STP enabled interfaces
qbr84878b78-63 8000.e6b3df9451e0 no qvb84878b78-63
tap84878b78-63
options: {peer=phy-br-ex}
Port "qvo86257b61-5d"
tag: 3
Interface "qvo86257b61-5d"
Port "qvo84878b78-63"
tag: 2
Interface "qvo84878b78-63"
qvoxx is tagged with the internal VLAN tag associated with the VLAN provider network. In this
example, the internal VLAN tag 2 is associated with the VLAN provider network provider-171
and VLAN tag 3 is associated with the VLAN provider network provider-172. When the packet
reaches qvoxx, this VLAN tag is added to the packet header.
The packet is then moved to the br-ex OVS bridge using the patch-peer int-br-ex <--> phy-br-ex.
Example patch-peer on br-int:
Bridge br-int
fail_mode: secure
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}
Bridge br-ex
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port br-ex
Interface br-ex
type: internal
When this packet reaches phy-br-ex on br-ex, an OVS flow inside br-ex replaces the internal
VLAN tag with the actual VLAN tag associated with the VLAN provider network.
The output of the following command shows that the port number of phy-br-ex is 4:
The following command shows any packet that arrives on phy-br-ex (in_port=4) which has VLAN tag 2
(dl_vlan=2). Open vSwitch replaces the VLAN tag with 171 ( actions=mod_vlan_vid:171,NORMAL) and
forwards the packet to the physical interface. The command also shows any packet that arrives on phy-
br-ex (in_port=4) which has VLAN tag 3 ( dl_vlan=3). Open vSwitch replaces the VLAN tag with 172
(actions=mod_vlan_vid:172,NORMAL) and forwards the packet to the physical interface. The
neutron-openvswitch-agent adds these rules.
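An illustrative excerpt of this output (cookies, durations, and packet counters are placeholders) might look like the following:
# ovs-ofctl dump-flows br-ex
 cookie=0x0, duration=6527.527s, table=0, n_packets=29211, n_bytes=2725576, idle_age=0, priority=4,in_port=4,dl_vlan=2 actions=mod_vlan_vid:171,NORMAL
 cookie=0x0, duration=2939.172s, table=0, n_packets=117, n_bytes=8296, idle_age=58, priority=4,in_port=4,dl_vlan=3 actions=mod_vlan_vid:172,NORMAL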
Your VLAN provider network may require a different configuration. Also, the configuration requirement
for a network may differ between two different Compute nodes.
The output of the following command shows int-br-ex with port number 18:
The output of the following command shows the flow rules on br-int.
A packet with VLAN tag 172 from the external network reaches the br-ex bridge via eth1 on the
physical node.
The packet moves to br-int via the patch-peer phy-br-ex <-> int-br-ex.
The flow actions (actions=mod_vlan_vid:3,NORMAL) replace the VLAN tag 172 with internal
VLAN tag 3 and forwards the packet to the instance with normal Layer 2 processing.
Additional resources
Procedure
1. Verify that physical network name is used consistently. In this example, physnet1 is used consistently
while creating the network, and within the bridge_mapping configuration:
2. Confirm that the network was created as external, is type vlan, and uses the correct
segmentation_id value:
3. Run ovs-vsctl show and verify that br-int and br-ex are connected using the patch-peer int-br-ex
<→ phy-br-ex.
This connection is created while restarting neutron-openvswitch-agent, provided that the
bridge_mapping is correctly configured in /etc/neutron/plugins/ml2/openvswitch_agent.ini.
Recheck the bridge_mapping setting if this is not created even after restarting the service.
4. To review the flow of outgoing packets, run ovs-ofctl dump-flows br-ex and ovs-ofctl dump-flows
br-int, and verify that the flows map the internal VLAN IDs to the external VLAN ID ( segmentation_id).
For incoming packets, map the external VLAN ID to the internal VLAN ID.
This flow is added by the neutron OVS agent when you spawn an instance to this network for the first
time. If this flow is not created after spawning the instance, ensure that the network is created as vlan, is
external, and that the physical_network name is correct. In addition, re-check the bridge_mapping
settings.
5. Finally, re-check the ifcfg-br-ex and ifcfg-ethx configuration. Ensure that br-ex includes port ethX,
and that both ifcfg-br-ex and ifcfg-ethx have an UP flag in the output of the ip a command.
For example, the following output shows that eth1 is a port in br-ex:
Bridge br-ex
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port "eth1"
Interface "eth1"
The following command shows that eth1 has been added as a port, and that the kernel is configured to
move all packets from the interface to the OVS bridge br-ex. This is demonstrated by the entry: master
ovs-system.
# ip a
5: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state
UP qlen 1000
Additional resources
IMPORTANT
You should thoroughly test and understand any multicast snooping configuration before
applying it to a production environment. Misconfiguration can break multicasting or cause
erratic network behavior.
Prerequisites
That is, the physical router must send IGMP query packets on the provider network to solicit
regular IGMP reports from multicast group members to maintain the snooping cache in OVS
(and for physical networking).
An RHOSP Networking service security group rule must be in place to allow inbound IGMP to
the VM instances (or port security disabled).
In this example, a rule is created for the ping_ssh security group:
Example
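A command along these lines creates the rule; ping_ssh is assumed to be an existing security group in your project:
$ openstack security group rule create --protocol igmp --ingress ping_ssh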
Procedure
1. On the undercloud host, logged in as the stack user, create a custom YAML environment file.
Example
$ vi /home/stack/templates/my-ovs-environment.yaml
TIP
The Orchestration service (heat) uses a set of plans called templates to install and configure
your environment. You can customize aspects of the overcloud with a custom environment file,
which is a special type of template that provides customization for your heat templates.
parameter_defaults:
NeutronEnableIgmpSnooping: true
...
IMPORTANT
Ensure that you add a whitespace character between the colon (:) and true.
3. Run the openstack overcloud deploy command and include the core heat templates,
environment files, and this new custom environment file.
IMPORTANT
The order of the environment files is important as the parameters and resources
defined in subsequent environment files take precedence.
Example
Verification
Example
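The attributes shown in the sample output come from the OVS bridge record; one way to display them, assuming the integration bridge br-int on a Compute node, is:
# ovs-vsctl list bridge br-int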
Sample output
...
mcast_snooping_enable: true
...
other_config: {mac-table-size="50000", mcast-snooping-disable-flood-unregistered=True}
...
Additional resources
Neutron in Component, Plug-In, and Driver Support in Red Hat OpenStack Platform
IMPORTANT
Prerequisites
Procedure
1. Configure security to allow multicast traffic to the appropriate VM instances. For instance,
create a pair of security group rules to allow IGMP traffic from the IGMP querier to enter and
exit the VM instances, and a third rule to allow multicast traffic.
Example
A security group mySG allows IGMP traffic to enter and exit the VM instances.
As an alternative to setting security group rules, some operators choose to selectively disable
port security on the network. If you choose to disable port security, consider and plan for any
related security risks.
Example
parameter_defaults:
NeutronEnableIgmpSnooping: True
3. Include the environment file in the openstack overcloud deploy command with any other
environment files that are relevant to your environment and deploy the overcloud.
-e ovn-extras.yaml \
…
Replace <other_overcloud_environment_files> with the list of environment files that are part
of your existing deployment.
Verification
1. Verify that the multicast snooping is enabled. List the northbound database Logical_Switch
table.
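For example, on a node where the ovn-nbctl command is available:
$ ovn-nbctl list Logical_Switch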
Sample output
_uuid : d6a2fbcd-aaa4-4b9e-8274-184238d66a15
other_config : {mcast_flood_unregistered="false", mcast_snoop="true"}
...
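The next sample output shows entries from the southbound IGMP_Group table; a command such as the following, run where ovn-sbctl is available, produces output of this form:
$ ovn-sbctl list IGMP_Group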
Sample output
_uuid : 2d6cae4c-bd82-4b31-9c63-2d17cbeadc4e
address : "225.0.0.120"
chassis : 34e25681-f73f-43ac-a3a4-7da2a710ecd3
datapath : eaf0f5cc-a2c8-4c30-8def-2bc1ec9dcabc
ports : [5eaf9dd5-eae5-4749-ac60-4c1451901c56, 8a69efc5-38c5-48fb-bbab-
30f2bf9b8d45]
...
Additional resources
Neutron in Component, Plug-In, and Driver Support in Red Hat OpenStack Platform
enable_isolated_metadata = True
CHAPTER 5. MANAGING FLOATING IP ADDRESSES
NOTE
OpenStack Networking allocates floating IP addresses to all projects (tenants) from the
same IP ranges in CIDR format. As a result, all projects can consume floating IPs from
every floating IP subnet. You can manage this behavior using quotas for specific projects.
For example, you can set the default to 10 for ProjectA and ProjectB, while setting the
quota for ProjectC to 0.
Procedure
When you create an external subnet, you can also define the floating IP allocation pool.
If the subnet hosts only floating IP addresses, consider disabling DHCP allocation with the --no-
dhcp option in the openstack subnet create command.
Example
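The following command sketches a subnet that hosts only floating IP addresses; the network name, address range, and allocation pool are placeholders for your own values:
$ openstack subnet create --no-dhcp \
  --network public \
  --subnet-range 192.0.2.0/24 \
  --allocation-pool start=192.0.2.190,end=192.0.2.199 \
  public_subnet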
Verification
You can verify that the pool is configured properly by assigning a random floating IP to an
instance. (See the link that follows.)
Additional resources
Procedure
Allocate a floating IP address to an instance by using the openstack server add floating ip
command.
Example
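For example, using the instance name and floating IP address that appear in the sample output later in this section:
$ openstack server add floating ip prod-serv1 192.0.2.200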
Validation steps
Confirm that your floating IP is associated with your instance by using the openstack server
show command.
Example
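For example, using the instance name from the previous step:
$ openstack server show prod-serv1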
Sample output
+-----------------------------+------------------------------------------+
| Field | Value |
+-----------------------------+------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2021-08-11T14:45:37.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | public=198.51.100.56,192.0.2.200 |
| | |
| config_drive | |
| created | 2021-08-11T14:44:54Z |
| flavor | review-ephemeral |
| | (8130dd45-78f6-44dc-8173-4d6426b8e520) |
| hostId | 2308c8d8f60ed5394b1525122fb5bf8ea55c78b8 |
| | 0ec6157eca4488c9 |
| id | aef3ca09-887d-4d20-872d-1d1b49081958 |
| image | rhel8 |
| | (20724bfe-93a9-4341-a5a3-78b37b3a5dfb) |
| key_name | example-keypair |
| name | prod-serv1 |
| progress                    | 0                                        |
| project_id | bd7a8c4a19424cf09a82627566b434fa |
| properties | |
| security_groups | name='default' |
| status | ACTIVE |
| updated | 2021-08-11T14:45:37Z |
| user_id | 4b7e19a0d723310fd92911eb2fe59743a3a5cd32 |
| | 45f76ffced91096196f646b5 |
| volumes_attached | |
+-----------------------------+------------------------------------------+
Additional resources
Procedure
1. In the dashboard, select Admin > Networks > Create Network > Project.
2. Select the project that you want to host the new network from the Project drop-down list.
Local - Traffic remains on the local Compute host and is effectively isolated from any
external networks.
Flat - Traffic remains on a single network and can also be shared with the host. No VLAN
tagging or other network segregation takes place.
VLAN - Create a network using a VLAN ID that corresponds to a VLAN present in the
physical network. This option allows instances to communicate with systems on the same
layer 2 VLAN.
GRE - Use a network overlay that spans multiple nodes for private communication between
instances. Traffic egressing the overlay must be routed.
VXLAN - Similar to GRE, and uses a network overlay to span multiple nodes for private
communication between instances. Traffic egressing the overlay must be routed.
Additional resources
Prerequisites
Procedure
1. Enter the following command to allocate a floating IP address from the pool. In this example, the
network is named public.
Example
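For example:
$ openstack floating ip create public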
Sample output
In the following example, the newly allocated floating IP is 192.0.2.200. You can assign it to an
instance.
+---------------------+--------------------------------------------------+
| Field | Value |
+---------------------+--------------------------------------------------+
| fixed_ip_address | None |
| floating_ip_address | 192.0.2.200 |
| floating_network_id | f0dcc603-f693-4258-a940-0a31fd4b80d9 |
| id | 6352284c-c5df-4792-b168-e6f6348e2620 |
| port_id | None |
| router_id | None |
| status | ACTIVE |
+---------------------+--------------------------------------------------+
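To identify the instance to assign the floating IP address to, you can list your instances; the following sample output corresponds to a command such as:
$ openstack server list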
Sample output
+-------------+-------------+--------+-------------+-------+-------------+
| ID | Name | Status | Networks | Image | Flavor |
+-------------+-------------+--------+-------------+-------+-------------+
| aef3ca09-88 | prod-serv1 | ACTIVE | public=198. | rhel8 | review- |
| 7d-4d20-872 | | | 51.100.56 | | ephemeral |
| d-1d1b49081 | | | | | |
| 958 | | | | | |
| | | | | | |
+-------------+-------------+--------+-------------+-------+-------------+
Example
Validation steps
Enter the following command to confirm that your floating IP is associated with your instance.
Example
Sample output
+-----------------------------+------------------------------------------+
| Field | Value |
+-----------------------------+------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2021-08-11T14:45:37.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | public=198.51.100.56,192.0.2.200 |
| | |
| config_drive | |
| created | 2021-08-11T14:44:54Z |
| flavor | review-ephemeral |
| | (8130dd45-78f6-44dc-8173-4d6426b8e520) |
| hostId | 2308c8d8f60ed5394b1525122fb5bf8ea55c78b8 |
| | 0ec6157eca4488c9 |
| id | aef3ca09-887d-4d20-872d-1d1b49081958 |
| image | rhel8 |
| | (20724bfe-93a9-4341-a5a3-78b37b3a5dfb) |
| key_name | example-keypair |
| name | prod-serv1 |
| progress                    | 0                                        |
| project_id | bd7a8c4a19424cf09a82627566b434fa |
| properties | |
| security_groups | name='default' |
| status | ACTIVE |
| updated | 2021-08-11T14:45:37Z |
| user_id | 4b7e19a0d723310fd92911eb2fe59743a3a5cd32 |
| | 45f76ffced91096196f646b5 |
| volumes_attached | |
+-----------------------------+------------------------------------------+
Additional resources
Procedure
Additional resources
In this procedure, the example physical interface, eth0, is mapped to the bridge, br-ex; the virtual bridge
acts as the intermediary between the physical network and any virtual networks.
As a result, all traffic traversing eth0 uses the configured Open vSwitch to reach instances.
To map a physical NIC to the virtual Open vSwitch bridge, complete the following steps:
Procedure
1. Remove the IP address settings (IPADDR, NETMASK, GATEWAY, and any DNS entries) from the ifcfg-eth0 file, and configure eth0 as an OVS port that attaches to the br-ex bridge:
# vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
ONBOOT=yes
2. Configure the br-ex virtual bridge with the IP address details that were previously allocated to eth0:
# vi /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=192.168.120.10
NETMASK=255.255.255.0
GATEWAY=192.168.120.1
DNS1=192.168.120.1
ONBOOT=yes
You can now assign floating IP addresses to instances and make them available to the physical
network.
Additional resources
To add a router interface and connect the new interface to a subnet, complete these steps:
NOTE
This procedure uses the Network Topology feature. Using this feature, you can see a
graphical representation of all your virtual routers and networks while you perform
network management tasks.
2. Locate the router that you want to manage, hover your mouse over it, and click Add Interface.
2. Click the name of the router that hosts the interface that you want to delete.
3. Select the interface type (Internal Interface), and click Delete Interfaces.
NOTE
The ping command is an ICMP operation. To use ping, you must allow ICMP traffic to
traverse any intermediary firewalls.
Ping tests are most useful when run from the machine experiencing network issues, so it may be
necessary to connect to the command line via the VNC management console if the machine seems to
be completely offline.
For example, the following ping test command validates multiple layers of network infrastructure in
order to succeed; name resolution, IP routing, and network switching must all function correctly:
$ ping www.example.com
You can terminate the ping command with Ctrl-c, after which a summary of the results is presented.
Zero percent packet loss indicates that the connection was stable and did not time out.
The results of a ping test can be very revealing, depending on which destination you test. For example, in
the following diagram VM1 is experiencing some form of connectivity issue. The possible destinations
are numbered in blue, and the conclusions drawn from a successful or failed result are presented:
1. The internet - a common first step is to send a ping test to an internet location, such as
www.example.com.
Success: This test indicates that all the various network points in between the machine and
the Internet are functioning correctly. This includes the virtual and physical network
infrastructure.
Failure: There are various ways in which a ping test to a distant internet location can fail. If
other machines on your network are able to successfully ping the internet, that proves the
internet connection is working, and the issue is likely within the configuration of the local
machine.
2. Physical router - This is the router interface that the network administrator designates to direct
traffic onward to external destinations.
Success: Ping tests to the physical router can determine whether the local network and
underlying switches are functioning. These packets do not traverse the router, so they do
not prove whether there is a routing issue present on the default gateway.
Failure: This indicates that the problem lies between VM1 and the default gateway. The
router/switches might be down, or you may be using an incorrect default gateway. Compare
the configuration with that on another server that you know is functioning correctly. Try
pinging another server on the local network.
3. Neutron router - This is the virtual SDN (Software-defined Networking) router that Red Hat
OpenStack Platform uses to direct the traffic of virtual machines.
Failure: Confirm whether ICMP traffic is permitted in the security group of the instance.
Check that the Networking node is online, confirm that all the required services are running,
and review the L3 agent log (/var/log/neutron/l3-agent.log).
4. Physical switch - The physical switch manages traffic between nodes on the same physical
network.
Success: Traffic sent by a VM to the physical switch must pass through the virtual network
infrastructure, indicating that this segment is functioning correctly.
Failure: Check that the physical switch port is configured to trunk the required VLANs.
5. VM2 - Attempt to ping a VM on the same subnet, on the same Compute node.
Success: The NIC driver and basic IP configuration on VM1 are functional.
Failure: Validate the network configuration on VM1. Alternatively, the firewall on VM2 might
be blocking ping traffic. In addition, verify the virtual switching configuration and review the
Open vSwitch log files.
Procedure
1. To view all the ports that attach to the router named r1, run the following command:
Sample output
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                            |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
| b58d26f0-cc03-43c1-ab23-ccdb1018252a |      | fa:16:3e:94:a7:df | {"subnet_id": "a592fdba-babd-48e0-96e8-2dd9117614d3", "ip_address": "192.168.200.1"} |
| c45e998d-98a1-4b23-bb41-5d24797a12a4 |      | fa:16:3e:ee:6a:f7 | {"subnet_id": "43f8f625-c773-4f18-a691-fd4ebfb3be54", "ip_address": "172.24.4.225"}  |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
2. To view the details of each port, run the following command. Include the port ID of the port that
you want to view. The result includes the port status, indicated in the following example as
having an ACTIVE state:
Sample output
+-----------------------+---------------------------------------------------------------------------------------+
| Field                 | Value                                                                                 |
+-----------------------+---------------------------------------------------------------------------------------+
| admin_state_up        | True                                                                                  |
| allowed_address_pairs |                                                                                       |
| binding:host_id       | node.example.com                                                                      |
| binding:profile       | {}                                                                                    |
| binding:vif_details   | {"port_filter": true, "ovs_hybrid_plug": true}                                        |
| binding:vif_type      | ovs                                                                                   |
| binding:vnic_type     | normal                                                                                |
| device_id             | 49c6ebdc-0e62-49ad-a9ca-58cea464472f                                                  |
| device_owner          | network:router_interface                                                              |
| extra_dhcp_opts       |                                                                                       |
| fixed_ips             | {"subnet_id": "a592fdba-babd-48e0-96e8-2dd9117614d3", "ip_address": "192.168.200.1"} |
| id                    | b58d26f0-cc03-43c1-ab23-ccdb1018252a                                                  |
| mac_address           | fa:16:3e:94:a7:df                                                                     |
| name                  |                                                                                       |
| network_id            | 63c24160-47ac-4140-903d-8f9a670b0ca4                                                  |
| security_groups       |                                                                                       |
| status                | ACTIVE                                                                                |
| tenant_id             | d588d1112e0f496fb6cac22f9be45d49                                                      |
+-----------------------+---------------------------------------------------------------------------------------+
Procedure
$ ping 192.168.120.254
a. Confirm that you have network flow for the associated VLAN.
It is possible that the VLAN ID has not been set. In this example, OpenStack Networking is
configured to trunk VLAN 120 to the provider network. (See --
provider:segmentation_id=120 in the example in step 1.)
b. Confirm the VLAN flow on the bridge interface using the command, ovs-ofctl dump-flows
<bridge-name>.
In this example the bridge is named br-ex:
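An illustrative excerpt that confirms a flow for VLAN 120 might look like the following; the port number, internal VLAN ID, cookies, and counters are placeholders:
# ovs-ofctl dump-flows br-ex
 cookie=0x0, duration=987.521s, table=0, n_packets=67897, n_bytes=14065247, idle_age=0, priority=1 actions=NORMAL
 cookie=0x0, duration=986.979s, table=0, n_packets=8, n_bytes=648, idle_age=977, priority=4,in_port=1,dl_vlan=1 actions=mod_vlan_vid:120,NORMAL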
Verify the registration and status of Red Hat OpenStack Platform (RHOSP) Networking service
(neutron) agents.
Procedure
1. Use the openstack network agent list command to verify that the RHOSP Networking service
agents are up and registered with the correct host names.
5. Validate the OVS agent configuration file bridge mappings, to confirm that the bridge
mapped to phy-eno1 exists and is properly connected to eno1.
Prerequisites
RHOSP deployment, with ML2/OVN as the Networking service (neutron) default mechanism
driver.
Procedure
1. Log in to the overcloud using your Red Hat OpenStack Platform credentials.
2. Run the openstack server list command to obtain the name of a VM instance.
3. Run the openstack server show command to determine the Compute node on which the
instance is running.
Example
Sample output
+----------------------+-------------------------------------------------+
| Field | Value |
+----------------------+-------------------------------------------------+
| OS-EXT-SRV-ATTR:host | compute0.overcloud.example.com |
| addresses | finance-network1=192.0.2.2; provider- |
| | storage=198.51.100.13 |
+----------------------+-------------------------------------------------+
Example
$ ssh [email protected]
5. Run the ip netns list command to see the OVN metadata namespaces.
Sample output
ovnmeta-07384836-6ab1-4539-b23a-c581cf072011 (id: 1)
ovnmeta-df9c28ea-c93a-4a60-b913-1e611d6f15aa (id: 0)
6. Using the metadata namespace run an ip netns exec command to ping the associated network.
Example
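For example, using one of the ovnmeta namespaces listed above and the finance-network1 address from the earlier sample output (the namespace-to-network mapping shown here is illustrative):
# ip netns exec ovnmeta-07384836-6ab1-4539-b23a-c581cf072011 ping -c 4 192.0.2.2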
Sample output
Additional resources
Prerequisites
RHOSP deployment, with ML2/OVS as the Networking service (neutron) default mechanism
driver.
Procedure
1. Determine which network namespace contains the network, by listing all of the project networks
using the openstack network list command:
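Example
$ openstack network list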
In this output, note the ID of the web-servers network (9cb32fe0-d7fb-432c-b116-
f483c6497b08). The network ID is appended to the network namespace name, which
enables you to identify the namespace in the next step.
Sample output
+--------------------------------------+-------------+--------------------------------------------------------+
| id                                   | name        | subnets                                                |
+--------------------------------------+-------------+--------------------------------------------------------+
| 9cb32fe0-d7fb-432c-b116-f483c6497b08 | web-servers | 453d6769-fcde-4796-a205-66ee01680bba 192.168.212.0/24  |
| a0cc8cdd-575f-4788-a3e3-5df8c6d0dd81 | private     | c1e58160-707f-44a7-bf94-8694f29e74d3 10.0.0.0/24       |
+--------------------------------------+-------------+--------------------------------------------------------+
2. List all the network namespaces using the ip netns list command:
# ip netns list
The output contains a namespace that matches the web-servers network ID.
Sample output
qdhcp-9cb32fe0-d7fb-432c-b116-f483c6497b08
qrouter-31680a1c-9b3e-4906-bd69-cb39ed5faa01
qrouter-62ed467e-abae-4ab4-87f4-13a9937fbd6b
qdhcp-a0cc8cdd-575f-4788-a3e3-5df8c6d0dd81
qrouter-e9281608-52a6-4576-86a6-92955df46f56
3. Examine the configuration of the web-servers network by running commands within the
namespace, prefixing the troubleshooting commands with ip netns exec <namespace>.
In this example, the route -n command is used.
Example
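A representative command, using the qdhcp namespace that matches the web-servers network ID from the previous step, is:
# ip netns exec qdhcp-9cb32fe0-d7fb-432c-b116-f483c6497b08 route -n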
Sample output
Prerequisites
RHOSP deployment, with ML2/OVS as the Networking service (neutron) default mechanism
driver.
Procedure
Example
Example
Example
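The commands for these steps are sketched below under the assumption that both the capture and the ping run inside one of the qrouter namespaces shown earlier; adjust the namespace name and destination address for your environment:
# ip netns exec qrouter-62ed467e-abae-4ab4-87f4-13a9937fbd6b tcpdump -qnntpi any icmp
# ip netns exec qrouter-62ed467e-abae-4ab4-87f4-13a9937fbd6b ping 192.168.200.20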
3. In the terminal running the tcpdump session, observe detailed results of the ping test.
Sample output
tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
IP (tos 0xc0, ttl 64, id 55447, offset 0, flags [none], proto ICMP (1), length 88)
172.24.4.228 > 172.24.4.228: ICMP host 192.168.200.20 unreachable, length 68
IP (tos 0x0, ttl 64, id 22976, offset 0, flags [DF], proto UDP (17), length 60)
172.24.4.228.40278 > 192.168.200.21: [bad udp cksum 0xfa7b -> 0xe235!] UDP, length 32
NOTE
When you perform a tcpdump analysis of traffic, you see the responding packets heading
to the router interface rather than to the VM instance. This is expected behavior, as the
qrouter performs Destination Network Address Translation (DNAT) on the return
packets.
Prerequisites
Red Hat OpenStack Platform deployment with ML2/OVN as the default mechanism driver.
Procedure
1. With your OpenStack credentials, log in to the overcloud node where you want to run the ovn
commands.
2. Create a shell script file that contains the ovn commands that you want to run.
Example
vi ~/bin/ovn-alias.sh
Example
EXTERNAL_ID=\
$(sudo ovs-vsctl get open . external_ids:ovn-remote | awk -F: '{print $2}')
export NBDB=tcp:${EXTERNAL_ID}:6641
export SBDB=tcp:${EXTERNAL_ID}:6642
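The alias definitions themselves would follow these exports; a minimal sketch, assuming the OVN controller runs in a podman container named ovn_controller, is:
# The container and command names below are assumptions; adjust them for your deployment.
alias ovn-sbctl='sudo podman exec ovn_controller ovn-sbctl --db=$SBDB'
alias ovn-nbctl='sudo podman exec ovn_controller ovn-nbctl --db=$NBDB'
alias ovn-trace='sudo podman exec ovn_controller ovn-trace --db=$SBDB'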
Example
# source ovn-alias.sh
Validation
Example
# ovn-nbctl show
Sample output
Prerequisites
RHOSP deployment with ML2/OVN as the Networking service (neutron) default mechanism
driver.
Procedure
OVN ports are logical entities that reside somewhere on a network, not physical ports on a single
switch.
OVN gives each table in the pipeline a name in addition to its number. The name describes the
purpose of that stage in the pipeline.
The actions supported in OVN logical flows extend beyond those of OpenFlow. You can
implement higher level features, such as DHCP, in the OVN logical flow syntax.
ovn-trace
The ovn-trace command can simulate how a packet travels through the OVN logical flows, or help you
determine why a packet is dropped. Provide the ovn-trace command with the following parameters:
DATAPATH
The logical switch or logical router where the simulated packet starts.
MICROFLOW
The simulated packet, in the syntax used by the ovn-sb database.
This example displays the --minimal output option on a simulated packet and shows that the packet
reaches its destination:
$ ovn-trace --minimal sw0 'inport == "sw0-port1" && eth.src == 00:00:00:00:00:01 && eth.dst ==
00:00:00:00:00:02'
# reg14=0x1,vlan_tci=0x0000,dl_src=00:00:00:00:00:01,dl_dst=00:00:00:00:00:02,dl_type=0x0000
output("sw0-port2");
In more detail, the --summary output for this same simulated packet shows the full execution pipeline:
$ ovn-trace --summary sw0 'inport == "sw0-port1" && eth.src == 00:00:00:00:00:01 && eth.dst ==
00:00:00:00:00:02'
# reg14=0x1,vlan_tci=0x0000,dl_src=00:00:00:00:00:01,dl_dst=00:00:00:00:00:02,dl_type=0x0000
ingress(dp="sw0", inport="sw0-port1") {
outport = "sw0-port2";
output;
egress(dp="sw0", inport="sw0-port1", outport="sw0-port2") {
output;
/* output to "sw0-port2", type "" */;
};
};
The packet enters the sw0 network from the sw0-port1 port and runs the ingress pipeline.
The outport variable is set to sw0-port2 indicating that the intended destination for this packet
is sw0-port2.
The packet is output from the ingress pipeline, which brings it to the egress pipeline for sw0
with the outport variable set to sw0-port2.
The output action is executed in the egress pipeline, which outputs the packet to the current
value of the outport variable, which is sw0-port2.
Additional resources
Prerequisites
RHOSP deployment with ML2/OVN as the Networking service (neutron) default mechanism
driver.
Procedure
Example
Sample output
List the tables in the OVN northbound and southbound database tables to gain insight to the
configuration and to troubleshoot issues with Red Hat OpenStack Platform (RHOSP) networks.
Prerequisites
RHOSP deployment with ML2/OVN as the Networking service (neutron) default mechanism
driver.
Procedure
Example
# ovn-sbctl list
Example
# ovn-nbctl list
Prerequisites
New deployment of RHOSP, with ML2/OVN as the Networking service (neutron) default
mechanism driver.
Procedure
NETWORK_ID=\
$(openstack network create internal_network | awk '/\| id/ {print $4}')
As shown in the following sample, the output should list OVN containers including openstack-
ovn-northd, openstack-neutron-metadata-agent-ovn, openstack-ovn-controller,
openstack-ovn-sb-db-server, and openstack-ovn-nb-db-server.
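One way to list those containers, assuming a podman-based deployment, is:
$ sudo podman ps | grep ovn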
Sample output
090ac17ac010 registry.redhat.io/rhosp15-rhel8/openstack-ovn-northd:15.0-77
dumb-init --singl... 11 hours ago Up 11 hours ago ovn_northd
e4031dea8094 registry.redhat.io/rhosp15-rhel8/openstack-neutron-metadata-agent-
ovn:15.0-74 dumb-init --singl... 11 hours ago Up 11 hours ago ovn_metadata_agent
a2983bc0f06f registry.redhat.io/rhosp15-rhel8/openstack-ovn-controller:15.0-76 dumb-
init --singl... 11 hours ago Up 11 hours ago ovn_controller
5b8dfbef6260 registry.redhat.io/rhosp15-rhel8/openstack-ovn-sb-db-server:15.0-78
dumb-init --singl... 11 hours ago Up 11 hours ago ovn_south_db_server
cf7bcb3731ad registry.redhat.io/rhosp15-rhel8/openstack-ovn-nb-db-server:15.0-76
dumb-init --singl... 11 hours ago Up 11 hours ago ovn_north_db_server
Example
# ovn-sbctl list
5. Attempt to ping an instance from an OVN metadata interface that is on the same layer 2
network.
For more information, see Section 6.5, “Performing basic ICMP testing within the ML2/OVN
namespace”.
6. If you need to contact Red Hat for support, perform the steps described in this Red Hat
Solution, How to collect all required logs for Red Hat Support to investigate an OpenStack
issue.
Additional resources
Prerequisites
New deployment of Red Hat OpenStack Platform 16.0 or higher, with ML2/OVN as the default
mechanism driver.
Additional resources
First, you must decide which physical NICs on your Compute node you want to carry which types of
traffic. Then, when the NIC is cabled to a physical switch port, you must configure the switch port to allow
trunked or general traffic.
For example, the following diagram depicts a Compute node with two NICs, eth0 and eth1. Each NIC is
cabled to a Gigabit Ethernet port on a physical switch, with eth0 carrying instance traffic, and eth1
providing connectivity for OpenStack services:
NOTE
This diagram does not include any additional redundant NICs required for fault tolerance.
Additional resources
With OpenStack Networking you can connect instances to the VLANs that already exist on your physical
network. The term trunk is used to describe a port that allows multiple VLANs to traverse through the
same port. Using these ports, VLANs can span across multiple switches, including virtual switches. For
example, traffic tagged as VLAN110 in the physical network reaches the Compute node, where the
8021q module directs the tagged traffic to the appropriate VLAN on the vSwitch.
IMPORTANT
These values are examples. You must change the values in this example to match
those in your environment. Copying and pasting these values into your switch
configuration without adjustment can result in an unexpected outage.
interface GigabitEthernet1/0/12
description Trunk to Compute Node
spanning-tree portfast trunk
switchport trunk encapsulation dot1q
switchport mode trunk
switchport trunk native vlan 2
switchport trunk allowed vlan 2,110,111
Field Description
interface GigabitEthernet1/0/12 The switch port that the NIC of the X node
connects to. Ensure that you replace the
GigabitEthernet1/0/12 value with the correct
port value for your environment. Use the show
interface command to view a list of ports.
description Trunk to Compute Node A unique and descriptive value that you can use
to identify this interface.
spanning-tree portfast trunk If your environment uses STP, set this value to
instruct Port Fast that this port is used to trunk
traffic.
switchport trunk encapsulation dot1q Enables the 802.1q trunking standard (rather
than ISL). This value varies depending on the
configuration that your switch supports.
switchport mode trunk Configures this port as a trunk port, rather than
an access port, meaning that it allows VLAN
traffic to pass through to the virtual switches.
Field Description
switchport trunk native vlan 2 Set a native VLAN to instruct the switch where
to send untagged (non-VLAN) traffic.
switchport trunk allowed vlan 2,110,111 Defines which VLANs are allowed through the
trunk.
IMPORTANT
These values are examples. You must change the values in this example to match
those in your environment. Copying and pasting these values into your switch
configuration without adjustment can result in an unexpected outage.
interface GigabitEthernet1/0/13
description Access port for Compute Node
switchport mode access
switchport access vlan 200
spanning-tree portfast
Field Description
interface GigabitEthernet1/0/13 The switch port that the NIC of the X node
connects to. Ensure that you replace the
GigabitEthernet1/0/13 value with the correct
port value for your environment. Use the show
interface command to view a list of ports.
description Access port for Compute A unique and descriptive value that you can use
Node to identify this interface.
Field Description
switchport access vlan 200 Configures the port to allow traffic on VLAN
200. You must configure your Compute node
with an IP address from this VLAN.
spanning-tree portfast If using STP, set this value to instruct STP not to
attempt to initialize this as a trunk, allowing for
quicker port handshakes during initial
connections (such as server reboot).
Additional resources
Procedure
- type: linux_bond
  name: bond1
  mtu: 9000
  bonding_options: {get_param: BondInterfaceOvsOptions}
  members:
    - type: interface
      name: nic3
      mtu: 9000
      primary: true
    - type: interface
      name: nic4
      mtu: 9000

BondInterfaceOvsOptions:
  "mode=802.3ad"
Additional resources
Procedure
1. Physically connect both NICs on the Compute node to the switch (for example, ports 12 and 13).
interface port-channel1
switchport access vlan 100
switchport mode access
spanning-tree guard root
sw01# config t
Enter configuration commands, one per line. End with CNTL/Z.
interface GigabitEthernet1/0/13
switchport access vlan 100
switchport mode access
speed 1000
duplex full
channel-group 10 mode active
channel-protocol lacp
4. Review your new port channel. The resulting output lists the new port-channel Po1, with
member ports Gi1/0/12 and Gi1/0/13:
NOTE
You must change MTU settings from end-to-end on all hops that the traffic is expected
to pass through, including any virtual switches.
Additional resources
2. MTU settings are changed switch-wide on 3750 switches, and not for individual interfaces. Run
the following commands to configure the switch to use jumbo frames of 9000 bytes. You might
prefer to configure the MTU settings for individual interfaces, if your switch supports this
feature.
sw01# config t
Enter configuration commands, one per line. End with CNTL/Z.
NOTE
IMPORTANT
Reloading the switch causes a network outage for any devices that are
dependent on the switch. Therefore, reload the switch only during a scheduled
maintenance period.
sw01# reload
Proceed with reload? [confirm]
4. After the switch reloads, confirm the new jumbo MTU size.
The exact output may differ depending on your switch model. For example, System MTU might
apply to non-Gigabit interfaces, and Jumbo MTU might describe all Gigabit interfaces.
Procedure
1. Run the lldp run command to enable LLDP globally on your Cisco Catalyst switch:
sw01# config t
Enter configuration commands, one per line. End with CNTL/Z.
NOTE
IMPORTANT
These values are examples. You must change the values in this example to match
those in your environment. Copying and pasting these values into your switch
configuration without adjustment can result in an unexpected outage.
interface Ethernet1/12
description Trunk to Compute Node
switchport mode trunk
switchport trunk allowed vlan 2,110,111
switchport trunk native vlan 2
end
Procedure
Using the example from the Figure 7.1, “Sample network layout” diagram, Ethernet1/13 (on a
Cisco Nexus switch) is configured as an access port for eth1. This configuration assumes that
your physical node has an ethernet cable connected to interface Ethernet1/13 on the physical
switch.
IMPORTANT
These values are examples. You must change the values in this example to match
those in your environment. Copying and pasting these values into your switch
configuration without adjustment can result in an unexpected outage.
interface Ethernet1/13
description Access port for Compute Node
Additional resources
Procedure
- type: linux_bond
  name: bond1
  mtu: 9000
  bonding_options: {get_param: BondInterfaceOvsOptions}
  members:
    - type: interface
      name: nic3
      mtu: 9000
      primary: true
    - type: interface
      name: nic4
      mtu: 9000

BondInterfaceOvsOptions:
  "mode=802.3ad"
Additional resources
Procedure
1. Physically connect the Compute node NICs to the switch (for example, ports 12 and 13).
3. Configure ports 1/12 and 1/13 as access ports, and as members of a channel group.
Depending on your deployment, you can deploy trunk interfaces rather than access interfaces.
For example, for Cisco UCI the NICs are virtual interfaces, so you might prefer to configure
access ports exclusively. Often these interfaces contain VLAN tagging configurations.
interface Ethernet1/12
description Access port for Compute Node
switchport mode access
switchport access vlan 200
channel-group 10 mode active
interface Ethernet1/13
description Access port for Compute Node
switchport mode access
switchport access vlan 200
channel-group 10 mode active
NOTE
When you use PXE to provision nodes on Cisco switches, you might need to set the
options no lacp graceful-convergence and no lacp suspend-individual to bring up the
ports and boot the server. For more information, see your Cisco switch documentation.
NOTE
You must change MTU settings from end-to-end on all hops that the traffic is expected
to pass through, including any virtual switches.
Additional resources
Procedure
Run the following commands to configure interface 1/12 to use jumbo frames of 9000 bytes:
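A sketch of an interface-level MTU configuration follows; the exact syntax depends on your NX-OS version, and 9216 is a common interface MTU value used to accommodate a 9000-byte payload:
interface ethernet 1/12
  mtu 9216
  exit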
Procedure
You can enable LLDP for individual interfaces on Cisco Nexus 7000-series switches:
NOTE
IMPORTANT
These values are examples. You must change the values in this example to match those in
your environment. Copying and pasting these values into your switch configuration
without adjustment can result in an unexpected outage.
Procedure
Use the following configuration syntax to allow traffic for VLANs 100 and 200 to pass through
to your instances.
auto bridge
iface bridge
bridge-vlan-aware yes
bridge-ports glob swp1-2
bridge-vids 100 200
IMPORTANT
These values are examples. You must change the values in this example to match those in
your environment. Copying and pasting these values into your switch configuration
without adjustment can result in an unexpected outage.
Procedure
Using the example from the Figure 7.1, “Sample network layout” diagram, swp1 (on a Cumulus
Linux switch) is configured as an access port.
auto bridge
iface bridge
bridge-vlan-aware yes
bridge-ports glob swp1-2
bridge-vids 100 200
auto swp1
iface swp1
bridge-access 100
auto swp2
iface swp2
bridge-access 200
With Link Aggregation Control Protocol (LACP), you can bundle multiple physical NICs together to form a single logical channel, also known as a dynamic bond, for load-balancing and fault tolerance. You must configure LACP at both physical ends:
on the physical NICs, and on the physical switch ports.
Additional resources
NOTE
You must change MTU settings from end-to-end on all hops that the traffic is expected
to pass through, including any virtual switches.
Additional resources
Procedure
auto swp1
iface swp1
mtu 9000
NOTE
Procedure
To view all LLDP neighbors on all ports/interfaces, run the following command:
IMPORTANT
These values are examples. You must change the values in this example to match those in
your environment. Copying and pasting these values into your switch configuration
without adjustment can result in an unexpected outage.
Procedure
This configuration assumes that your physical node has an ethernet cable connected to
interface 24 on the physical switch. In this example, DATA and MNGT are the VLAN names.
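A sketch of the corresponding EXOS configuration follows; the VLAN tags 110 and 111 for DATA and MNGT are assumptions that you must replace with your own VLAN IDs:
create vlan DATA tag 110
create vlan MNGT tag 111
configure vlan DATA add ports 24 tagged
configure vlan MNGT add ports 24 tagged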
IMPORTANT
These values are examples. You must change the values in this example to match those in
your environment. Copying and pasting these values into your switch configuration
without adjustment can result in an unexpected outage.
Procedure
For example:
Additional resources
Procedure
- type: linux_bond
  name: bond1
  mtu: 9000
  bonding_options: {get_param: BondInterfaceOvsOptions}
  members:
    - type: interface
      name: nic3
      mtu: 9000
      primary: true
    - type: interface
      name: nic4
      mtu: 9000

BondInterfaceOvsOptions:
  "mode=802.3ad"
Additional resources
Procedure
In this example, the Compute node has two NICs using VLAN 100:
For example:
NOTE
You might need to adjust the timeout period in the LACP negotiation script. For
more information, see
https://gtacknowledge.extremenetworks.com/articles/How_To/LACP-
configured-ports-interfere-with-PXE-DHCP-on-servers
NOTE
You must change MTU settings from end-to-end on all hops that the traffic is expected
to pass through, including any virtual switches.
Additional resources
Procedure
Run the commands in this example to enable jumbo frames on an Extreme Networks EXOS
switch and configure support for forwarding IP packets with 9000 bytes:
Example
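A sketch of the EXOS commands follows; the VLAN name Default is a placeholder for your own VLAN:
enable jumbo-frame ports all
configure ip-mtu 9000 vlan Default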
Procedure
In this example, LLDP is enabled on an Extreme Networks EXOS switch. 11 represents the port
string:
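For example:
enable lldp ports 11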
Procedure
If using a Juniper EX series switch running Juniper JunOS, use the following configuration
syntax to allow traffic for VLANs 110 and 111 to pass through to your instances.
This configuration assumes that your physical node has an ethernet cable connected to
interface ge-1/0/12 on the physical switch.
IMPORTANT
These values are examples. You must change the values in this example to match
those in your environment. Copying and pasting these values into your switch
configuration without adjustment can result in an unexpected outage.
ge-1/0/12 {
description Trunk to Compute Node;
unit 0 {
family ethernet-switching {
port-mode trunk;
vlan {
members [110 111];
}
native-vlan-id 2;
}
}
}
IMPORTANT
These values are examples. You must change the values in this example to match those in
your environment. Copying and pasting these values into your switch configuration
without adjustment can result in an unexpected outage.
Procedure
This configuration assumes that your physical node has an ethernet cable connected to interface ge-
1/0/13 on the physical switch.
ge-1/0/13 {
description Access port for Compute Node
unit 0 {
family ethernet-switching {
port-mode access;
vlan {
members 200;
}
native-vlan-id 2;
}
}
}
Additional resources
Procedure
- type: linux_bond
  name: bond1
  mtu: 9000
  bonding_options: {get_param: BondInterfaceOvsOptions}
  members:
    - type: interface
      name: nic3
      mtu: 9000
      primary: true
    - type: interface
      name: nic4
      mtu: 9000

BondInterfaceOvsOptions:
  "mode=802.3ad"
Additional resources
Procedure
1. Physically connect the Compute node’s two NICs to the switch (for example, ports 12 and 13).
chassis {
aggregated-devices {
ethernet {
device-count 1;
}
}
}
3. Configure switch ports 12 (ge-1/0/12) and 13 (ge-1/0/13) to join the port aggregate ae1:
interfaces {
ge-1/0/12 {
gigether-options {
802.3ad ae1;
}
}
ge-1/0/13 {
gigether-options {
802.3ad ae1;
}
}
}
NOTE
For Red Hat OpenStack Platform director deployments, in order to PXE boot
from the bond, you must configure one of the bond members as lacp force-up
to ensure that only one bond member comes up during introspection and first
boot. The bond member that you configure with lacp force-up must be the same
bond member that has the MAC address in instackenv.json (the MAC address
known to ironic must be the same MAC address configured with force-up).
interfaces {
ae1 {
aggregated-ether-options {
lacp {
active;
}
}
}
}
interfaces {
ae1 {
vlan-tagging;
native-vlan-id 2;
unit 100 {
vlan-id 100;
}
}
}
6. Review your new port channel. The resulting output lists the new port aggregate ae1 with
member ports ge-1/0/12 and ge-1/0/13:
NOTE
NOTE
You must change MTU settings from end-to-end on all hops that the traffic is expected
to pass through, including any virtual switches.
Additional resources
NOTE
The MTU value is calculated differently depending on whether you are using Juniper or
Cisco devices. For example, 9216 on Juniper would equal to 9202 for Cisco. The extra
bytes are used for L2 headers, where Cisco adds this automatically to the MTU value
specified, but the usable MTU will be 14 bytes smaller than specified when using Juniper.
So in order to support an MTU of 9000 on the VLANs, the MTU of 9014 would have to be
configured on Juniper.
Procedure
1. For Juniper EX series switches, MTU settings are set for individual interfaces. These commands
configure jumbo frames on the ge-1/0/14 and ge-1/0/15 ports:
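A sketch in Junos set syntax follows; the value 9216 reflects the header accounting described in the note above, so adjust it to your target MTU:
set interfaces ge-1/0/14 mtu 9216
set interfaces ge-1/0/15 mtu 9216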
NOTE
2. If using a LACP aggregate, you will need to set the MTU size there, and not on the member
NICs. For example, this setting configures the MTU size for the ae1 aggregate:
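For example, in set syntax:
set interfaces ae1 mtu 9216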
Procedure
Use the following to enable LLDP globally on your Juniper EX 4200 switch:
lldp {
    interface all {
        enable;
    }
}
Use the following to enable LLDP for the single interface ge-1/0/14:
lldp {
    interface ge-1/0/14 {
        enable;
    }
}
NOTE
You can use the openstack network show <network_name> command to view the
largest possible MTU values that OpenStack Networking calculates. net-mtu is a neutron
API extension that is not present in some implementations. The MTU value that you
require can be advertised to DHCPv4 clients for automatic configuration, if supported by
the instance, as well as to IPv6 clients through Router Advertisement (RA) packets. To
send Router Advertisements, the network must be attached to a router.
You must configure MTU settings consistently from end-to-end. This means that the MTU setting must
be the same at every point the packet passes through, including the VM, the virtual network
infrastructure, the physical network, and the destination server.
For example, the circles in the following diagram indicate the various points where an MTU value must
be adjusted for traffic between an instance and a physical server. You must change the MTU value for
every interface that handles network traffic to accommodate packets of a particular MTU size. This is
necessary if traffic travels from the instance 192.168.200.15 through to the physical server 10.20.15.25:
Inconsistent MTU values can result in several network issues, the most common being random packet
loss that results in connection drops and slow network performance. Such issues are problematic to
troubleshoot because you must identify and examine every possible network point to ensure it has the
correct MTU value.
- type: ovs_bridge
  name: br-isolated
  use_dhcp: false
  mtu: 9000  # <--- Set MTU
  members:
    - type: ovs_bond
      name: bond1
      mtu: 9000  # <--- Set MTU
      ovs_options: {get_param: BondInterfaceOvsOptions}
      members:
        - type: interface
          name: ens15f0
          mtu: 9000  # <--- Set MTU
          primary: true
        - type: interface
          name: enp131s0f0
          mtu: 9000  # <--- Set MTU
- type: vlan
  device: bond1
  vlan_id: {get_param: InternalApiNetworkVlanID}
  mtu: 9000  # <--- Set MTU
  addresses:
    - ip_netmask: {get_param: InternalApiIpSubnet}
- type: vlan
  device: bond1
  mtu: 9000  # <--- Set MTU
  vlan_id: {get_param: TenantNetworkVlanID}
  addresses:
    - ip_netmask: {get_param: TenantIpSubnet}
You can apply QoS policies to individual ports. You can also apply QoS policies to a project network,
where ports with no specific policy attached inherit the policy.
NOTE
Internal network owned ports, such as DHCP and internal router ports, are excluded from
network policy application.
You can apply, modify, or remove QoS policies dynamically. However, for guaranteed minimum
bandwidth QoS policies, you can only apply modifications when there are no instances that use any of
the ports the policy is assigned to.
NOTE
Currently, the Modular Layer 2 plug-in with the Open Virtual Network mechanism driver
(ML2/OVN) does not support QoS policies.
QoS policies can be enforced in various contexts, including virtual machine instance placements, floating
IP assignments, and gateway IP assignments.
Depending on the enforcement context and on the mechanism driver you use, a QoS rule affects egress
traffic (upload from instance), ingress traffic (download to instance), or both.
Table 9.1. Supported traffic direction by driver (all QoS rule types)
Bandwidth limit: Egress [1][2] and ingress (ML2/OVS); Egress only [3] (ML2/SR-IOV); Egress and ingress (ML2/OVN)
[1] The OVS egress bandwidth limit is performed in the TAP interface and is traffic policing, not traffic
shaping.
[2] In RHOSP 16.2.2 and later, the OVS egress bandwidth limit is supported in hardware offloaded ports
by applying the QoS policy in the network interface using ip link commands.
[3] The mechanism drivers ignore the max_burst_kbps parameter because they do not support it.
[5] The OVS egress minimum bandwidth is supported in hardware offloaded ports by applying the QoS
policy in the network interface using ip link commands.
[6] https://bugzilla.redhat.com/show_bug.cgi?id=2060310
Table 9.2. Supported traffic direction by driver for placement reporting and scheduling (minimum
bandwidth only)
Table 9.3. Supported traffic direction by driver for enforcement types (bandwidth limit only)
ML2/OVS ML2/OVN
[1] https://bugzilla.redhat.com/show_bug.cgi?id=2064185
Additional resources
Creating and applying a guaranteed minimum bandwidth QoS policy and rule
Creating and applying a DSCP marking QoS policy and rule for egress traffic
Procedure
1. Identify the ID of the project you want to create the QoS policy for:
+----------------------------------+---------+
| ID                               | Name    |
+----------------------------------+---------+
| 8c409e909fb34d69bc896ab358317d60 | admin   |
| 92b6c16c7c7244378a062be0bfd55fa0 | service |
+----------------------------------+---------+
(overcloud) $ openstack network qos rule create --type <rule-type> [rule properties]
<policy_name>
4. Configure a port or network to apply the policy to. You can update an existing port or network,
or create a new port or network to apply the policy to:
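A minimal sketch of this step, assuming a policy named <policy_name> (substitute your own port, network, and policy identifiers):

(overcloud) $ openstack port set --qos-policy <policy_name> <port_name_or_id>
(overcloud) $ openstack network set --qos-policy <policy_name> <network_name_or_id>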
The network back end, ML2/OVS or ML2/SR-IOV, attempts to guarantee that each port on which the
rule is applied has no less than the specified network bandwidth.
Resource allocation scheduling means that the Compute service (nova) chooses a host with enough
bandwidth on which to spawn a VM instance, after having queried the Placement service to retrieve the
amount of bandwidth on the different network interfaces and Compute nodes.
You can use QoS minimum bandwidth rules for either purpose or for both.
The following table identifies the Modular Layer 2 (ML2) mechanism drivers that support minimum
bandwidth QoS policies.
Table 9.5. ML2 mechanism drivers that support minimum bandwidth QoS
Additional resources
1. Enable the QoS service plug-in for the RHOSP Networking service (neutron).
Currently, the Modular Layer 2 plug-in with the Open Virtual Network mechanism driver (ML2/OVN)
does not support minimum bandwidth QoS rules.
Prerequisites
You must be the stack user with access to the RHOSP undercloud.
Do not mix ports with and without bandwidth guarantees on the same physical interface, as the
ports without a guarantee might be denied necessary resources (starvation).
TIP
Create host aggregates to separate ports with bandwidth guarantees from those ports without
bandwidth guarantees.
Procedure
$ source ~/stackrc
Example
$ vi /home/stack/templates/my-neutron-environment.yaml
4. Enable the QoS service plug-in for the RHOSP Networking service.
Your environment file must contain the keyword parameter_defaults. On a new line below
parameter_defaults, add qos to the NeutronServicePlugins parameter.
Example
parameter_defaults:
NeutronServicePlugins: 'qos'
5. Run the deployment command and include the core heat templates, other environment files,
and this new custom environment file.
The order of the environment files is important as the parameters and resources defined in
subsequent environment files take precedence.
Example
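A minimal sketch of the deployment command, assuming the environment file path used earlier in this procedure; include your other environment files as appropriate:

$ openstack overcloud deploy --templates \
  -e <other_environment_files> \
  -e /home/stack/templates/my-neutron-environment.yaml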
Example
$ source ~/overcloudrc
7. Identify the ID of the project you want to create the QoS policy for:
Sample output
+----------------------------------+----------+
| ID | Name |
+----------------------------------+----------+
| 4b0b98f8c6c040f38ba4f7146e8680f5 | auditors |
| 519e6344f82e4c079c8e2eabb690023b | services |
| 80bf5732752a41128e612fe615c886c6 | demo |
| 98a2f53c20ce4d50a40dac4a38016c69 | admin |
+----------------------------------+----------+
8. Using the project ID from the previous step, create a QoS policy for the project.
Example
In this example, a QoS policy named guaranteed_min_bw is created for the admin project:
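A minimal sketch of the policy-creation command, using the admin project ID from the previous step (verify the options against your client version):

(overcloud) $ openstack network qos policy create --project 98a2f53c20ce4d50a40dac4a38016c69 guaranteed_min_bw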
Example
In this example, QoS rules for ingress and egress with a minimum bandwidth of 40000000 kbps
are created for the policy named guaranteed_min_bw:
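A minimal sketch of the two rule-creation commands implied here, one per traffic direction (minimum-bandwidth rules take the rate in kbps):

(overcloud) $ openstack network qos rule create --type minimum-bandwidth --min-kbps 40000000 --ingress guaranteed_min_bw
(overcloud) $ openstack network qos rule create --type minimum-bandwidth --min-kbps 40000000 --egress guaranteed_min_bw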
Example
Verification
ML2/SR-IOV
Using root access, log in to the Compute node, and show the details of the virtual functions that
are held in the physical function.
Example
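A minimal sketch of such a check, assuming a physical function device named ens6 (substitute your own device):

# ip -details link show ens6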
Sample output
ML2/OVS
Using root access, log in to the compute node, show the tc rules and classes on the physical
bridge interface.
Example
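A minimal sketch of such a check, assuming the physical bridge interface name is <physical_bridge_interface>:

# tc class show dev <physical_bridge_interface>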
Sample output
class htb 1:11 parent 1:fffe prio 0 rate 4Gbit ceil 34359Mbit burst 9000b cburst 8589b
class htb 1:1 parent 1:fffe prio 0 rate 72Kbit ceil 34359Mbit burst 9063b cburst 8589b
class htb 1:fffe root rate 34359Mbit ceil 34359Mbit burst 8589b cburst 8589b
Additional resources
1. Enabling the QoS and the Placement service plug-ins for the RHOSP Networking service
(neutron).
3. Configure the resource provider ingress and egress bandwidths for the relevant agents on each
Compute node.
Prerequisites
agent-resources-synced
port-resource-request
qos-bw-minimum-ingress
You must be the stack user with access to the RHOSP undercloud.
You can only modify a minimum bandwidth QoS policy when there are no instances using any of
the ports the policy is assigned to. The Networking service cannot update the Placement API
usage information if a port is bound.
Procedure
$ source ~/stackrc
Example
$ vi /home/stack/templates/my-neutron-environment.yaml
4. Enable the QoS and Placement service plug-ins for the RHOSP Networking service.
Your environment file must contain the keyword parameter_defaults. On a new line below
parameter_defaults, add qos and placement to the NeutronServicePlugins parameter.
Example
parameter_defaults:
NeutronServicePlugins: 'qos,placement'
ML2/SR-IOV
Example
parameter_defaults:
NeutronServicePlugins: 'qos,placement'
ExtraConfig:
Neutron::agents::ml2::sriov::resource_provider_hypervisors:
"ens5:%{hiera('fqdn_canonical')},ens6:%{hiera('fqdn_canonical')}"
ML2/OVS
Example
parameter_defaults:
NeutronServicePlugins: 'qos,placement'
ExtraConfig:
Neutron::agents::ml2::ovs::resource_provider_hypervisors
6. Optional: To mark vnic_types as not supported when multiple ML2 mechanism drivers support
them by default and multiple agents are being tracked in the Placement service, also add the
following configuration to your environment file:
parameter_defaults:
...
NeutronOvsVnicTypeBlacklist: direct
# NeutronSriovVnicTypeBlacklist: direct
7. Configure the resource provider ingress and egress bandwidths for the relevant agents on each
Compute node that needs to provide a minimum bandwidth.
You can configure only ingress or egress, or both, using the following formats:
NeutronOvsResourceProviderBandwidths: <bridge0>:<egress_kbps>:,<bridge1>:<egress_kbps>:,...,<bridgeN>:<egress_kbps>:
NeutronOvsResourceProviderBandwidths: <bridge0>::<ingress_kbps>,<bridge1>::<ingress_kbps>,...,<bridgeN>::<ingress_kbps>
NeutronOvsResourceProviderBandwidths: <bridge0>:<egress_kbps>:<ingress_kbps>,<bridge1>:<egress_kbps>:<ingress_kbps>,...,<bridgeN>:<egress_kbps>:<ingress_kbps>
parameter_defaults:
...
NeutronBridgeMappings: physnet0:br-physnet0
NeutronOvsResourceProviderBandwidths: br-physnet0:10000000:10000000
parameter_defaults:
...
NeutronML2PhysicalNetworkMtus: physnet0:ens5,physnet0:ens6
NeutronSriovResourceProviderBandwidths:
ens5:40000000:40000000,ens6:40000000:40000000
8. Run the deployment command and include the core heat templates, other environment files,
and this new custom environment file.
The order of the environment files is important as the parameters and resources defined in
subsequent environment files take precedence.
Example
$ source ~/overcloudrc
10. Identify the ID of the project you want to create the QoS policy for:
Sample output
+----------------------------------+----------+
| ID | Name |
+----------------------------------+----------+
| 4b0b98f8c6c040f38ba4f7146e8680f5 | auditors |
| 519e6344f82e4c079c8e2eabb690023b | services |
| 80bf5732752a41128e612fe615c886c6 | demo |
| 98a2f53c20ce4d50a40dac4a38016c69 | admin |
+----------------------------------+----------+
11. Using the project ID from the previous step, create a QoS policy for the project.
Example
In this example, a QoS policy named guaranteed_min_bw is created for the admin project:
Example
In this example, QoS rules for ingress and egress with a minimum bandwidth of 40000000 kbps
are created for the policy named guaranteed_min_bw:
Example
In this example, the guaranteed_min_bw policy is applied to the port with ID 56x9aiw1-2v74-144x-
c2q8-ed8w423a6s12:
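A minimal sketch of the command this example describes, using the port ID quoted above:

(overcloud) $ openstack port set --qos-policy guaranteed_min_bw 56x9aiw1-2v74-144x-c2q8-ed8w423a6s12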
Verification
$ source ~/stackrc
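The listing that follows can be produced with a command along these lines (a sketch; it requires the osc-placement plug-in mentioned later in this guide):

$ openstack resource provider list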
Sample output
+--------------------------------------+-----------------------------------------------------+------------+--------------------------------------+--------------------------------------+
| uuid                                 | name                                                | generation | root_provider_uuid                   | parent_provider_uuid                 |
+--------------------------------------+-----------------------------------------------------+------------+--------------------------------------+--------------------------------------+
| 31d3d88b-bc3a-41cd-9dc0-fda54028a882 | dell-r730-014.localdomain                           | 28         | 31d3d88b-bc3a-41cd-9dc0-fda54028a882 | None                                 |
| 6b15ddce-13cf-4c85-a58f-baec5b57ab52 | dell-r730-063.localdomain                           | 18         | 6b15ddce-13cf-4c85-a58f-baec5b57ab52 | None                                 |
| e2f5082a-c965-55db-acb3-8daf9857c721 | dell-r730-063.localdomain:NIC Switch agent          | 0          | 6b15ddce-13cf-4c85-a58f-baec5b57ab52 | 6b15ddce-13cf-4c85-a58f-baec5b57ab52 |
| d2fb0ef4-2f45-53a8-88be-113b3e64ba1b | dell-r730-014.localdomain:NIC Switch agent          | 0          | 31d3d88b-bc3a-41cd-9dc0-fda54028a882 | 31d3d88b-bc3a-41cd-9dc0-fda54028a882 |
| f1ca35e2-47ad-53a0-9058-390ade93b73e | dell-r730-063.localdomain:NIC Switch agent:enp6s0f1 | 13         | 6b15ddce-13cf-4c85-a58f-baec5b57ab52 | e2f5082a-c965-55db-acb3-8daf9857c721 |
| e518d381-d590-5767-8f34-c20def34b252 | dell-r730-014.localdomain:NIC Switch agent:enp6s0f1 | 19         | 31d3d88b-bc3a-41cd-9dc0-fda54028a882 | d2fb0ef4-2f45-53a8-88be-113b3e64ba1b |
+--------------------------------------+-----------------------------------------------------+------------+--------------------------------------+--------------------------------------+
Example
In this example, the bandwidth provided by interface enp6s0f1 on the host dell-r730-014 is
checked, using the resource provider UUID, e518d381-d590-5767-8f34-c20def34b252:
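A minimal sketch of that check, using the resource provider UUID quoted above:

$ openstack resource provider inventory list e518d381-d590-5767-8f34-c20def34b252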
Sample output
+----------------------------+------------------+----------+------------+----------+-----------+----------+
| resource_class             | allocation_ratio | min_unit | max_unit   | reserved | step_size | total    |
+----------------------------+------------------+----------+------------+----------+-----------+----------+
| NET_BW_EGR_KILOBIT_PER_SEC | 1.0              | 1        | 2147483647 | 0        | 1         | 10000000 |
| NET_BW_IGR_KILOBIT_PER_SEC | 1.0              | 1        | 2147483647 | 0        | 1         | 10000000 |
+----------------------------+------------------+----------+------------+----------+-----------+----------+
5. To check claims against the resource provider when instances are running, run the following
command:
Example
In this example, claims against the resource provider are checked on the host, dell-r730-014,
using the resource provider UUID, e518d381-d590-5767-8f34-c20def34b252:
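A minimal sketch of such a check; the --allocations option of the resource provider show command is assumed to be available in your osc-placement version:

$ openstack resource provider show --allocations e518d381-d590-5767-8f34-c20def34b252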
Sample output
{3cbb9e07-90a8-4154-8acd-b6ec2f894a83: {resources: {NET_BW_EGR_KILOBIT_PER_SEC: 1000000, NET_BW_IGR_KILOBIT_PER_SEC: 1000000}},
 8848b88b-4464-443f-bf33-5d4e49fd6204: {resources: {NET_BW_EGR_KILOBIT_PER_SEC: 1000000, NET_BW_IGR_KILOBIT_PER_SEC: 1000000}},
 9a29e946-698b-4731-bc28-89368073be1a: {resources: {NET_BW_EGR_KILOBIT_PER_SEC: 1000000, NET_BW_IGR_KILOBIT_PER_SEC: 1000000}},
 a6c83b86-9139-4e98-9341-dc76065136cc: {resources: {NET_BW_EGR_KILOBIT_PER_SEC: 3000000, NET_BW_IGR_KILOBIT_PER_SEC: 3000000}},
 da60e33f-156e-47be-a632-870172ec5483: {resources: {NET_BW_EGR_KILOBIT_PER_SEC: 1000000, NET_BW_IGR_KILOBIT_PER_SEC: 1000000}},
 eb582a0e-8274-4f21-9890-9a0d55114663: {resources: {NET_BW_EGR_KILOBIT_PER_SEC: 3000000, NET_BW_IGR_KILOBIT_PER_SEC: 3000000}}}
Additional resources
Procedure
parameter_defaults:
NeutronSriovAgentExtensions: 'qos'
b. To apply this configuration, deploy the overcloud, adding your custom environment file to
the stack along with your other environment files:
For more information, see "Modifying the Overcloud Environment" in the Director
Installation and Usage guide.
2. Identify the ID of the project you want to create the QoS policy for:
(overcloud) $ openstack network qos rule create --type bandwidth-limit --max-kbps 3000 --max-burst-kbits 300 bw-limiter
Procedure
1. If you are using ML2/OVS with a tunneling protocol (VXLAN or GRE), perform the following
steps:
$ source ~/stackrc
Example
$ vi /home/stack/templates/my-neutron-environment.yaml
d. In the YAML environment file under parameter_defaults, add the following lines:
parameter_defaults:
  ControllerExtraConfig:
    neutron::config::server_config:
      agent/dscp_inherit:
        value: true
When dscp_inherit is true, the Networking service copies the DSCP value of the inner
header to the outer header.
e. Run the deployment command and include the core heat templates, environment files,
and this new custom environment file.
IMPORTANT
Example
Example
$ source ~/overcloudrc
Example
Example
In this example, a DSCP rule is created using DSCP mark 18 and is applied to the qos-web-
servers policy:
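A minimal sketch of the rule-creation command this example describes:

(overcloud) $ openstack network qos rule create --type dscp-marking --dscp-mark 18 qos-web-servers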
Sample output
Example
Example
Verification
Example
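A minimal sketch of a verification command that produces a listing like the sample output below, assuming the qos-web-servers policy from the earlier example:

(overcloud) $ openstack network qos rule list qos-web-servers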
Sample output
+-----------+--------------------------------------+
| dscp_mark | id |
+-----------+--------------------------------------+
| 18 | d7f976ec-7fab-4e60-af70-f59bf88198e6 |
+-----------+--------------------------------------+
Additional resources
Action                                      Command
List the available QoS policies             $ openstack network qos policy list
Show details of a specific QoS policy       $ openstack network qos policy show <policy_name>
List the available QoS rule types           $ openstack network qos rule type list
List the rules of a specific QoS policy     $ openstack network qos rule list <policy_name>
Show details of a specific QoS rule type    $ openstack network qos rule type show <rule_type>
For example, you can now create a QoS policy that allows for lower-priority network traffic, and have it
only apply to certain projects. Run the following command to assign the bw-limiter policy to the project,
demo:
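A minimal sketch of such an assignment, assuming <demo_project_id> is the ID of the demo project:

(overcloud) $ openstack network rbac create --type qos_policy --action access_as_shared --target-project <demo_project_id> bw-limiter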
CHAPTER 10. CONFIGURING BRIDGE MAPPINGS
bridge_mappings = datacentre:br-ex
Bridge mappings allow provider network traffic to reach the physical network. Traffic leaves the
provider network from the qg-xxx interface of the router and arrives at br-int. For OVS, a patch port
between br-int and br-ex then allows the traffic to pass through the bridge of the provider network and
out to the physical network. OVN creates a patch port on a hypervisor only when there is a VM bound to
the hypervisor that requires the port.
You configure the bridge mapping on the network node on which the router is scheduled. Router traffic
can egress using the correct physical network, as represented by the provider network.
NOTE
The Networking service supports only one bridge for each physical network. Do not map
more than one physical network to the same bridge.
The return packet from the external network arrives on br-ex and moves to br-int using phy-br-ex <->
int-br-ex. When the packet is going through br-ex to br-int, the packet’s external VLAN ID is replaced by
an internal VLAN tag in br-int, and this allows qg-xxx to accept the packet.
In the case of egress packets, the packet’s internal VLAN tag is replaced with an external VLAN tag in br-ex
(or in the external bridge that is defined in the network_vlan_ranges parameter).
You can customize aspects of your initial networking configuration, such as bridge mappings, by using
the NeutronBridgeMappings parameter in a customized environment file. You call the environment file
in the openstack overcloud deploy command.
Prerequisites
You must configure bridge mappings on the network node on which the router is scheduled.
For both ML2/OVS and ML2/OVN DVR configurations, you must configure bridge mappings
for the compute nodes, too.
Procedure
1. Create a custom environment file and add the NeutronBridgeMappings heat parameter with
values that are appropriate for your site.
parameter_defaults:
NeutronBridgeMappings: "datacentre:br-ex,tenant:br-tenant"
NOTE
When the NeutronBridgeMappings parameter is not used, the default maps the
external bridge on hosts (br-ex) to a physical name (datacentre).
2. To apply this configuration, deploy the overcloud, adding your custom environment file to the
stack along with your other environment files.
3. You are ready for the next steps, which are the following:
a. Using the network VLAN ranges, create the provider networks that represent the
corresponding external networks. (You use the physical name when creating neutron
provider networks or floating IP networks.)
b. Connect the external networks to your project networks with router interfaces.
Additional resources
Manual port cleanup - requires careful removal of the superfluous patch ports. No outages of
network connectivity are required.
Automated port cleanup - performs an automated cleanup, but requires an outage, and requires
that the necessary bridge mappings be re-added. Choose this option during scheduled
maintenance windows when network connectivity outages can be tolerated.
NOTE
When OVN bridge mappings are removed, the OVN controller automatically cleans up
any associated patch ports.
Prerequisites
The patch ports that you are cleaning up must be Open Virtual Switch (OVS) ports.
You can identify the patch ports to clean up by their naming convention:
In br-$external_bridge patch ports are named phy-<external bridge name> (for example,
phy-br-ex2).
In br-int patch ports are named int-<external bridge name> (for example, int-br-ex2).
Procedure
1. Use ovs-vsctl to remove the OVS patch ports associated with the removed bridge mapping
entry:
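A minimal sketch of step 1, assuming the bridge and patch port names from the naming convention above (br-ex2, phy-br-ex2, int-br-ex2):

# ovs-vsctl del-port br-ex2 phy-br-ex2
# ovs-vsctl del-port br-int int-br-ex2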
2. Restart neutron-openvswitch-agent:
NOTE
When OVN bridge mappings are removed, the OVN controller automatically cleans up
any associated patch ports.
Prerequisites
The patch ports that you are cleaning up must be Open Virtual Switch (OVS) ports.
Use the flag --ovs_all_ports to remove all patch ports from br-int, cleaning up tunnel ends from
br-tun, and patch ports from bridge to bridge.
The neutron-ovs-cleanup command unplugs all patch ports (instances, qdhcp/qrouter, among
others) from all OVS bridges.
Procedure
IMPORTANT
# /usr/bin/neutron-ovs-cleanup
--config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini
--log-file /var/log/neutron/ovs-cleanup.log --ovs_all_ports
NOTE
After a restart, the OVS agent does not interfere with any connections that are
not present in bridge_mappings. So, if you have br-int connected to br-ex2, and
br-ex2 has some flows on it, removing br-int from the bridge_mappings
configuration does not disconnect the two bridges when you restart the OVS
agent or the node.
Additional resources
CHAPTER 11. VLAN-AWARE INSTANCES
In ML2/OVN deployments you can support VLAN-aware instances using VLAN transparent networks.
As an alternative in ML2/OVN or ML2/OVS deployments, you can support VLAN-aware instances using
trunks.
In a VLAN transparent network, you set up VLAN tagging in the VM instances. The VLAN tags are
transferred over the network and consumed by the VM instances on the same VLAN, and ignored by
other instances and devices. In a VLAN transparent network, the VLANs are managed in the VM
instance. You do not need to set up the VLAN in the OpenStack Networking Service (neutron).
VLAN trunks support VLAN-aware instances by combining VLANs into a single trunked port. For
example, a project data network can use VLANs or tunneling (VXLAN, GRE, or Geneve) segmentation,
while the instances see the traffic tagged with VLAN IDs. Network packets are tagged immediately
before they are injected to the instance and do not need to be tagged throughout the entire network.
The following table compares certain features of VLAN transparent networks and VLAN trunks.
              Transparent                                 Trunk
VLAN ID       Flexible. You can set the VLAN ID in        Fixed. Instances must use the VLAN ID
              the instance.                               configured in the trunk.
Prerequisites
Deployment of Red Hat OpenStack Platform 16.1 or higher, with ML2/OVN as the mechanism
driver.
Provider network of type VLAN or Geneve. Do not use VLAN transparency in deployments with
flat type provider networks.
Ensure that the external switch supports 802.1q VLAN stacking using ethertype 0x8100 on both
VLANs. OVN VLAN transparency does not support 802.1ad QinQ with outer provider VLAN
ethertype set to 0x88A8 or 0x9100.
Procedure
parameter_defaults:
EnableVLANTransparency: True
2. Include the environment file in the openstack overcloud deploy command with any other
environment files that are relevant to your environment and deploy the overcloud.
-e ovn-extras.yaml \
…
Replace <other_overcloud_environment_files> with the list of environment files that are part
of your existing deployment.
3. Create networks using the --transparent-vlan argument. Also set the network MTU to 4 bytes
less than the MTU of the underlay network to accommodate the extra tagging required by
VLAN transparency.
Example
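A minimal sketch of the network-creation step, assuming a Geneve provider network and an illustrative MTU value (confirm that your client version accepts the --mtu option at creation time):

$ openstack network create --provider-network-type geneve --transparent-vlan --mtu 1442 vlan-transparent-net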
Example
Verification
1. Ping between two VMs on the VLAN using the vlan50 IP address.
2. Use tcpdump on eth0 to see if the packets arrive with the VLAN tag intact.
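A minimal sketch of such a capture inside the instance (-e prints the link-level headers, including any 802.1Q tag):

$ sudo tcpdump -e -n -i eth0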
Additional resources
On the controller node, confirm that the trunk plug-in is enabled in the /var/lib/config-
data/neutron/etc/neutron/neutron.conf file:
service_plugins=router,qos,trunk
1. Identify the network that contains the instances that require access to the trunked VLANs. In
this example, this is the public network:
2. Create the parent trunk port, and attach it to the network that the instance connects to. In this
example, create a neutron port named parent-trunk-port on the public network. This trunk is the
parent port, as you can use it to create subports.
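A minimal sketch of the port-creation command this step describes:

$ openstack port create --network public parent-trunk-port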
| id | 20b6fdf8-0d43-475a-a0f1-ec8f757a4a39 |
| mac_address | fa:16:3e:33:c4:75 |
| name | parent-trunk-port |
| network_id | 871a6bd8-4193-45d7-a300-dcb2420e7cc3 |
| project_id | 745d33000ac74d30a77539f8920555e7 |
| project_id | 745d33000ac74d30a77539f8920555e7 |
| revision_number |4 |
| security_groups | 59e2af18-93c6-4201-861b-19a8a8b79b23 |
| status | DOWN |
| updated_at | 2016-10-20T02:02:33Z |
+-----------------------+-----------------------------------------------------------------------------+
3. Create a trunk using the port that you created in step 2. In this example the trunk is named
parent-trunk.
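A minimal sketch of the trunk-creation command this step describes:

$ openstack network trunk create --parent-port parent-trunk-port parent-trunk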
| status | DOWN |
| sub_ports | |
| tenant_id | 745d33000ac74d30a77539f8920555e7 |
| updated_at | 2016-10-20T02:05:17Z |
+-----------------+--------------------------------------+
NOTE
If you receive the error HttpException: Conflict, confirm that you are creating
the subport on a different network to the one that has the parent trunk port. This
example uses the public network for the parent trunk port, and private for the
subport.
2. Associate the port with the trunk (parent-trunk), and specify the VLAN ID (55):
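A minimal sketch of this step, assuming the subport was created with the name subport-trunk-port (a hypothetical name for this example):

$ openstack network trunk set --subport port=subport-trunk-port,segmentation-type=vlan,segmentation-id=55 parent-trunk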
Prerequisites
If you are performing live migrations of your Compute nodes, ensure that the RHOSP
Networking service RPC response timeout is appropriately set for your RHOSP deployment.
The RPC response timeout value can vary between sites and is dependent on the system speed.
The general recommendation is to set the value to at least 120 seconds per 100 trunk ports.
The best practice is to measure the trunk port bind process time for your RHOSP deployment,
and then set the RHOSP Networking service RPC response timeout appropriately. Try to keep
the RPC response timeout value low, but also provide enough time for the RHOSP Networking
service to receive an RPC response. For more information, see Section 11.7, “Configuring
Networking service RPC timeout”.
Procedure
1. Review the configuration of your network trunk, using the network trunk command.
Example
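A minimal sketch of the command that produces the listing below:

$ openstack network trunk list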
Sample output
+---------------------+--------------+---------------------+-------------+
| ID | Name | Parent Port | Description |
+---------------------+--------------+---------------------+-------------+
| 0e4263e2-5761-4cf6- | parent-trunk | 20b6fdf8-0d43-475a- | |
| ab6d-b22884a0fa88 | | a0f1-ec8f757a4a39 | |
+---------------------+--------------+---------------------+-------------+
Example
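A minimal sketch of the command that produces the details below:

$ openstack network trunk show parent-trunk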
Sample output
+-----------------+------------------------------------------------------+
| Field | Value |
+-----------------+------------------------------------------------------+
| admin_state_up | UP |
| created_at | 2021-10-20T02:05:17Z |
| description | |
| id | 0e4263e2-5761-4cf6-ab6d-b22884a0fa88 |
| name | parent-trunk |
| port_id | 20b6fdf8-0d43-475a-a0f1-ec8f757a4a39 |
| revision_number | 2 |
| status | DOWN |
Example
openstack server create --image cirros --flavor m1.tiny --security-group default --key-name sshaccess --nic port-id=20b6fdf8-0d43-475a-a0f1-ec8f757a4a39 testInstance
Sample output
+--------------------------------------+---------------------------------+
| Property | Value |
+--------------------------------------+---------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-SRV-ATTR:host |- |
| OS-EXT-SRV-ATTR:hostname | testinstance |
| OS-EXT-SRV-ATTR:hypervisor_hostname | - |
| OS-EXT-SRV-ATTR:instance_name | |
| OS-EXT-SRV-ATTR:kernel_id | |
| OS-EXT-SRV-ATTR:launch_index |0 |
| OS-EXT-SRV-ATTR:ramdisk_id | |
| OS-EXT-SRV-ATTR:reservation_id | r-juqco0el |
| OS-EXT-SRV-ATTR:root_device_name | - |
| OS-EXT-SRV-ATTR:user_data |- |
| OS-EXT-STS:power_state |0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at |- |
| OS-SRV-USG:terminated_at |- |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | uMyL8PnZRBwQ |
| config_drive | |
| created | 2021-08-20T03:02:51Z |
| description |- |
| flavor | m1.tiny (1) |
| hostId | |
| host_status | |
| id | 88b7aede-1305-4d91-a180-67e7eac |
| | 8b70d |
| image | cirros (568372f7-15df-4e61-a05f |
| | -10954f79a3c4) |
| key_name | sshaccess |
| locked | False |
| metadata | {} |
| name | testInstance |
| os-extended-volumes:volumes_attached | [] |
| progress |0 |
| security_groups | default |
| status | BUILD |
| tags | [] |
| tenant_id | 745d33000ac74d30a77539f8920555e |
| |7 |
| updated | 2021-08-20T03:02:51Z |
| user_id | 8c4aea738d774967b4ef388eb41fef5 |
| |e |
+--------------------------------------+---------------------------------+
Additional resources
The RPC response timeout value can vary between sites and is dependent on the system speed. The
general recommendation is to set the value to at least 120 seconds per 100 trunk ports.
If your site uses trunk ports, the best practice is to measure the trunk port bind process time for your
RHOSP deployment, and then set the RHOSP Networking service RPC response timeout appropriately.
Try to keep the RPC response timeout value low, but also provide enough time for the RHOSP
Networking service to receive an RPC response.
By using a manual hieradata override, rpc_response_timeout, you can set the RPC response timeout
value for the RHOSP Networking service.
Procedure
1. On the undercloud host, logged in as the stack user, create a custom YAML environment file.
Example
$ vi /home/stack/templates/my-modules-environment.yaml
TIP
The RHOSP Orchestration service (heat) uses a set of plans called templates to install and
configure your environment. You can customize aspects of the overcloud with a custom
environment file, which is a special type of template that provides customization for your heat
templates.
2. In the YAML environment file under ExtraConfig, set the appropriate value (in seconds) for
rpc_response_timeout. (The default value is 60 seconds.)
Example
parameter_defaults:
ExtraConfig:
neutron::rpc_response_timeout: 120
NOTE
The RHOSP Orchestration service (heat) updates all RHOSP nodes with the
value you set in the custom environment file, however this value only impacts the
RHOSP Networking components.
3. Run the openstack overcloud deploy command and include the core heat templates,
environment files, and this new custom environment file.
IMPORTANT
The order of the environment files is important as the parameters and resources
defined in subsequent environment files take precedence.
Example
Additional resources
DOWN: The virtual and physical resources for the trunk are not in sync. This can be a temporary
state during negotiation.
BUILD: There has been a request and the resources are being provisioned. After successful
completion the trunk returns to ACTIVE.
DEGRADED: The provisioning request did not complete, so the trunk has only been partially
provisioned. It is recommended to remove the subports and try again.
ERROR: The provisioning request was unsuccessful. Remove the resource that caused the error
to return the trunk to a healthier state. Do not add more subports while in the ERROR state, as
this can cause more issues.
CHAPTER 12. CONFIGURING RBAC POLICIES
As a result, cloud administrators can remove the ability for some projects to create networks and can
instead allow them to attach to pre-existing networks that correspond to their project.
3. Create a RBAC entry for the web-servers network that grants access to the auditors project
(4b0b98f8c6c040f38ba4f7146e8680f5):
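A minimal sketch of the command this step describes, using the web-servers network ID shown in the output below:

$ openstack network rbac create --type network --action access_as_shared --target-project 4b0b98f8c6c040f38ba4f7146e8680f5 fa9bb72f-b81a-4572-9c7f-7237e5fcabd3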
| object_id | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 |
| object_type | network |
| target_project | 4b0b98f8c6c040f38ba4f7146e8680f5 |
| project_id | 98a2f53c20ce4d50a40dac4a38016c69 |
+----------------+--------------------------------------+
As a result, users in the auditors project can connect instances to the web-servers network.
2. Run the openstack network rbac show command to view the details of a specific RBAC entry:
2. Run the openstack network rbac delete command to delete the RBAC, using the ID of the
RBAC that you want to delete:
Complete the steps in the following example procedure to create a RBAC for the web-servers network
and grant access to the engineering project (c717f263785d4679b16a122516247deb):
As a result, users in the engineering project are able to view the network or connect instances to
it:
CHAPTER 13. CONFIGURING DISTRIBUTED VIRTUAL ROUTING (DVR)
Each model has advantages and disadvantages. Use this document to carefully plan whether centralized
routing or DVR better suits your needs.
New default RHOSP deployments use DVR and the Modular Layer 2 plug-in with the Open Virtual
Network mechanism driver (ML2/OVN).
East-West routing - routing of traffic between different networks in the same project. This
traffic does not leave the OpenStack deployment. This definition applies to both IPv4 and IPv6
subnets.
North-South routing with floating IPs - Floating IP addressing is a one-to-one NAT that can
be modified and that floats between instances. While floating IPs are modeled as a one-to-one
association between the floating IP and a neutron port, they are implemented by association
with a neutron router that performs the NAT translation. The floating IPs themselves are taken
from the uplink network that provides the router with external connectivity. As a result,
instances can communicate with external resources (such as endpoints on the internet) or the
other way around. Floating IPs are an IPv4 concept and do not apply to IPv6. It is assumed that
the IPv6 addressing used by projects uses Global Unicast Addresses (GUAs) with no overlap
across the projects, and therefore can be routed without NAT.
North-South routing without floating IPs (also known as SNAT) - The Networking service
offers a default port address translation (PAT) service for instances that do not have allocated
floating IPs. With this service, instances can communicate with external endpoints through the
router, but not the other way around. For example, an instance can browse a website on the
internet, but a web browser outside cannot browse a website hosted within the instance. SNAT
is applied for IPv4 traffic only. In addition, Networking service networks that are assigned GUAs
prefixes do not require NAT on the Networking service router external gateway port to access
the outside world.
project’s virtual routers, managed by the neutron L3 agent, are all deployed in a dedicated node or
cluster of nodes (referred to as the Network node, or Controller node). This means that each time a
routing function is required (east/west, floating IPs or SNAT), traffic would traverse through a dedicated
node in the topology. This introduced multiple challenges and resulted in sub-optimal traffic flows. For
example:
Traffic between instances flows through a Controller node - when two instances need to
communicate with each other using L3, traffic has to hit the Controller node. Even if the
instances are scheduled on the same Compute node, traffic still has to leave the Compute
node, flow through the Controller, and route back to the Compute node. This negatively
impacts performance.
Instances with floating IPs receive and send packets through the Controller node - the external
network gateway interface is available only at the Controller node, so whether the traffic is
originating from an instance, or destined to an instance from the external network, it has to flow
through the Controller node. Consequently, in large environments the Controller node is subject
to heavy traffic load. This would affect performance and scalability, and also requires careful
planning to accommodate enough bandwidth in the external network gateway interface. The
same requirement applies for SNAT traffic.
To better scale the L3 agent, the Networking service can use the L3 HA feature, which distributes the
virtual routers across multiple nodes. In the event that a Controller node is lost, the HA router will
failover to a standby on another node and there will be packet loss until the HA router failover
completes.
North-South traffic with floating IP is distributed and routed on the Compute nodes. This
requires the external network to be connected to every Compute node.
North-South traffic without floating IP is not distributed and still requires a dedicated Controller
node.
The L3 agent on the Controller node uses the dvr_snat mode so that the node serves only
SNAT traffic.
The neutron metadata agent is distributed and deployed on all Compute nodes. The metadata
proxy service is hosted on all the distributed routers.
On ML2/OVS DVR deployments, network traffic for the Red Hat OpenStack Platform Load-
balancing service (octavia) goes through the Controller and network nodes, instead of the
compute nodes.
With an ML2/OVS mechanism driver network back end and DVR, it is possible to create VIPs.
However, the IP address assigned to a bound port using allowed_address_pairs should match
the virtual port IP address (/32).
If you use a CIDR format IP address for the bound port allowed_address_pairs instead, port
forwarding is not configured in the back end, and traffic fails for any IP in the CIDR expecting to
reach the bound IP port.
SNAT (source network address translation) traffic is not distributed, even when DVR is enabled.
SNAT does work, but all ingress/egress traffic must traverse through the centralized Controller
node.
In ML2/OVS deployments, IPv6 traffic is not distributed, even when DVR is enabled. All
ingress/egress traffic goes through the centralized Controller node. If you use IPv6 routing
extensively with ML2/OVS, do not use DVR.
Note that in ML2/OVN deployments, all east/west traffic is always distributed, and north/south
traffic is distributed when DVR is configured.
In ML2/OVS deployments, DVR is not supported in conjunction with L3 HA. If you use DVR with
Red Hat OpenStack Platform 16.1 director, L3 HA is disabled. This means that routers are still
scheduled on the Network nodes (and load-shared between the L3 agents), but if one agent
fails, all routers hosted by this agent fail as well. This affects only SNAT traffic. The
allow_automatic_l3agent_failover feature is recommended in such cases, so that if one
network node fails, the routers are rescheduled to a different node.
DHCP servers, which are managed by the neutron DHCP agent, are not distributed and are still
deployed on the Controller node. The DHCP agent is deployed in a highly available
configuration on the Controller nodes, regardless of the routing design (centralized or DVR).
Compute nodes require an interface on the external network attached to an external bridge.
They use this interface to attach to a VLAN or flat network for an external router gateway, to
host floating IPs, and to perform SNAT for VMs that use floating IPs.
In ML2/OVS deployments, each Compute node requires one additional IP address. This is due
to the implementation of the external gateway port and the floating IP network namespace.
VLAN, GRE, and VXLAN are all supported for project data separation. When you use GRE or
VXLAN, you must enable the L2 Population feature. The Red Hat OpenStack Platform director
enforces L2 Population during installation.
Configure the interface connected to the physical network for external network traffic on both
the Compute and Controller nodes.
Create a bridge on Compute and Controller nodes, with an interface for external network traffic.
You also configure the Networking service (neutron) to match the provisioned networking environment
and allow traffic to use the bridge.
The default settings are provided as guidelines only. They are not expected to work in production or test
environments, which might require customization for network isolation, dedicated NICs, or any number of
other variable factors. When you set up an environment, you must correctly configure the bridge mapping
type parameters used by the L2 agents, and the external-facing bridges for other agents, such as the L3
agent.
The following example procedure shows how to configure a proof-of-concept environment using the
typical defaults.
Procedure
NOTE
If you customize the network configuration of the Compute node, you may need
to add the appropriate configuration to your custom files instead.
b. $ cd <local_copy_of_templates_directory>
c. Run the process-templates script to render the templates to a temporary output directory:
$ ./tools/process-templates.py -r <roles_data.yaml> \
-n <network_data.yaml> -o <temporary_output_directory>
3. If needed, customize the Compute template to include an external bridge that matches the
Controller nodes, and name the custom file path in
OS::TripleO::Compute::Net::SoftwareConfig in an environment file.
NOTE
The external bridge configuration for the L3 agent was deprecated in Red Hat
OpenStack Platform 13 and removed in Red Hat OpenStack Platform 15.
Procedure
4. You cannot transition an L3 HA router to distributed directly. Instead, for each router, disable
the L3 HA option, and then enable the distributed option:
Example
Example
Example
Example
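A minimal sketch of the four commands implied by the Example placeholders above, assuming a router named router1:

$ openstack router set --disable router1
$ openstack router set --no-ha router1
$ openstack router set --distributed router1
$ openstack router set --enable router1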
Additional resources
In a DVR topology, compute nodes with floating IP addresses route traffic between virtual machine
instances and the network that provides the router with external connectivity (north-south traffic).
Traffic between instances (east-west traffic) is also distributed.
You can optionally deploy with DVR disabled. This disables north-south DVR, requiring north-south
traffic to traverse a controller or networker node. East-west routing is always distributed in an
ML2/OVN deployment, even when DVR is disabled.
Prerequisites
Procedure
parameter_defaults:
NeutronEnableDVR: false
2. To apply this configuration, deploy the overcloud, adding your custom environment file to the
stack along with your other environment files. For example:
CHAPTER 14. PROJECT NETWORKING WITH IPV6
NOTE
RHOSP does not support IPv6 prefix delegation in ML2/OVN deployments. You must set
the Global Unicast Address prefix manually.
NOTE
OpenStack Networking supports only EUI-64 IPv6 address assignment for SLAAC. This allows for
simplified IPv6 networking, as hosts self-assign addresses based on the base 64-bits plus the MAC
address. You cannot create subnets with a different netmask and address_assign_type of SLAAC.
For example, you can create an IPv6 subnet using Stateful DHCPv6 in a network named database-servers
in a project named QA.
Procedure
1. Retrieve the project ID of the Project where you want to create the IPv6 subnet. These values
are unique between OpenStack deployments, so your values differ from the values in this
example.
2. Retrieve a list of all networks present in OpenStack Networking (neutron), and note the name of
the network where you want to host the IPv6 subnet:
3. Include the project ID, network name, and ipv6 address mode in the openstack subnet create
command:
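A minimal sketch of the subnet-creation command, using the project ID and network from this example; the subnet range and subnet name are illustrative placeholders:

$ openstack subnet create --ip-version 6 --ipv6-address-mode dhcpv6-stateful --project 25837c567ed5458fbb441d39862e1399 --network database-servers --subnet-range <ipv6_subnet_range> <subnet_name>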
| name | |
| network_id | 6aff6826-4278-4a35-b74d-b0ca0cbba340 |
| tenant_id | 25837c567ed5458fbb441d39862e1399 |
+-------------------+--------------------------------------------------------------+
Validation steps
1. Validate this configuration by reviewing the network list. Note that the entry for database-
servers now reflects the newly created IPv6 subnet:
Result
As a result of this configuration, instances that the QA project creates can receive a DHCP IPv6
address when added to the database-servers subnet:
Additional resources
To find the Router Advertisement mode and address mode combinations to achieve a particular result in
an IPv6 subnet, see IPv6 subnet options in the Networking Guide.
CHAPTER 15. MANAGING PROJECT QUOTAS
Procedure
You can set project quotas for various network components in the /var/lib/config-
data/neutron/etc/neutron/neutron.conf file.
For example, to limit the number of routers that a project can create, change the quota_router
value:
quota_router = 10
For a listing of the quota settings, see sections that immediately follow.
Here are quota options available for managing the number of security groups that projects can create:
CHAPTER 16. DEPLOYING ROUTED PROVIDER NETWORKS
RPNs simplify the cloud for end users because they see only one network. For cloud operators, RPNs
deliver scalability and fault tolerance. For example, if a major error occurs, only one segment is impacted
instead of the entire network failing.
Before routed provider networks (RPNs), operators typically had to choose from one of the following
architectures:
Single, large layer 2 networks become complex when scaling and reduce fault tolerance (increase failure
domains).
Multiple, smaller layer 2 networks scale better and shrink failure domains, but can introduce complexity
for end users.
In Red Hat OpenStack Platform 16.1.1 and later, you can deploy RPNs using the ML2/OVS or
the SR-IOV mechanism drivers.
Additional resources
With RPNs, the IP addresses available to virtual machine (VM) instances depend on the segment of the
network available on the particular compute node. The Networking service port can be associated with
only one network segment.
Similar to conventional networking, layer 2 (switching) handles transit of traffic between ports on the
same network segment and layer 3 (routing) handles transit of traffic between segments.
The Networking service does not provide layer 3 services between segments. Instead, it relies on
physical network infrastructure to route subnets. Thus, both the Networking service and physical
network infrastructure must contain configuration for routed provider networks, similar to conventional
provider networks.
Because the Compute service (nova) scheduler is not network segment aware, when you deploy RPNs,
you must map each leaf or rack segment or DCN edge site to a Compute service host-aggregate or
availability zone.
If you require a DHCP-metadata service, you must define an availability zone for each edge site or
network segment, to ensure that the local DHCP agent is deployed.
Additional resources
Routed provider networks are supported only by the ML2/OVS and SR-IOV mechanism drivers.
Open Virtual Network (OVN) is not supported.
OVS-DPDK (without DHCP) support for remote and edge deployments is in Technology
Preview in Red Hat OpenStack Platform 16.1.4 and later.
When using SR-IOV or PCI pass-through, physical network (physnet) names must be the same
in central and remote sites or segments. You cannot reuse segment IDs.
The Compute service (nova) scheduler is not segment-aware. (You must map each segment or
edge site to a Compute host-aggregate or availability zone.) Currently, there are only two VM
instance boot options available:
Boot using port-id and no IP address, specifying Compute availability zone (segment or
edge site).
Boot using network-id, specifying the Compute availability zone (segment or edge site).
Cold or live migration works only when you specify the destination Compute availability zone
(segment or edge site).
Procedure
1. Within a network, use a unique physical network name for each segment. This enables reuse of
the same segmentation details between subnets.
For example, use the same VLAN ID across all segments of a particular provider network.
You deploy a DHCP agent and a Networking service metadata agent on the Compute nodes by
using a custom roles file.
Here is an example:
###############################################################################
# Role: ComputeSriov                                                          #
###############################################################################
- name: ComputeSriov
  description: |
    Compute SR-IOV Role
  CountDefault: 1
  networks:
    External:
      subnet: external_subnet
    InternalApi:
      subnet: internal_api_subnet
    Tenant:
      subnet: tenant_subnet
    Storage:
      subnet: storage_subnet
  RoleParametersDefault:
    TunedProfileName: "cpu-partitioning"
  update_serial: 25
  ServicesDefault:
    - OS::TripleO::Services::Aide
    - OS::TripleO::Services::AuditD
    - OS::TripleO::Services::BootParams
    - OS::TripleO::Services::CACerts
    ...
    - OS::TripleO::Services::NeutronDhcpAgent
    - OS::TripleO::Services::NeutronMetadataAgent
    ...
parameter_defaults:
....
NeutronEnableIsolatedMetadata: 'True'
....
5. Ensure that the RHOSP Placement service command-line client (the python3-osc-placement
package) is installed on the undercloud.
This package is available on the undercloud in RHOSP 16.1.6 and later. For earlier versions of
RHOSP you must install the package manually. To check which version of RHOSP you are
running, enter the following command on the undercloud:
$ cat /etc/rhosp-release
Red Hat OpenStack Platform release 16.1.5 GA (Train)
To install the python3-osc-placement package, log in to the undercloud as root, and run this command:
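A minimal sketch of that installation command:

# dnf install python3-osc-placement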
Additional resources
When you perform this procedure, you create an RPN with two network segments. Each segment
contains one IPv4 subnet and one IPv6 subnet.
Prerequisites
Procedure
Example
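A minimal sketch of a network-creation command consistent with the sample output below (VLAN ID, physical network, and name as shown):

$ openstack network create --share --provider-physical-network provider1 --provider-network-type vlan --provider-segment 128 multisegment1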
Sample output
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | UP |
| id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| l2_adjacency | True |
| mtu | 1500 |
| name | multisegment1 |
| port_security_enabled | True |
| provider:network_type | vlan |
| provider:physical_network | provider1 |
| provider:segmentation_id | 128 |
| revision_number |1 |
| router:external | Internal |
| shared | True |
| status | ACTIVE |
| subnets | |
| tags | [] |
+---------------------------+--------------------------------------+
Sample output
+--------------------------------------+----------+--------------------------------------+--------------
+---------+
| ID | Name | Network | Network Type |
Segment |
+--------------------------------------+----------+--------------------------------------+--------------
+---------+
| 43e16869-ad31-48e4-87ce-acf756709e18 | None | 6ab19caa-dda9-4b3d-abc4-
5b8f435b98d9 | vlan | 128 |
+--------------------------------------+----------+--------------------------------------+--------------
+---------+
Example
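A minimal sketch of a segment-creation command consistent with the sample output below:

$ openstack network segment create --physical-network provider2 --network-type vlan --segment 129 --network multisegment1 segment2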
Sample output
+------------------+--------------------------------------+
| Field | Value |
+------------------+--------------------------------------+
| description | None |
| headers | |
| id | 053b7925-9a89-4489-9992-e164c8cc8763 |
| name | segment2 |
| network_id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 |
| network_type | vlan |
| physical_network | provider2 |
| revision_number | 1 |
| segmentation_id | 129 |
| tags | [] |
+------------------+--------------------------------------+
4. Verify that the network contains the segment1 and segment2 segments:
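A minimal sketch of the verification command:

$ openstack network segment list --network multisegment1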
Sample output
+--------------------------------------+----------+--------------------------------------+--------------+-----
----+
| ID | Name | Network | Network Type | Segment |
+--------------------------------------+----------+--------------------------------------+--------------+-----
----+
| 053b7925-9a89-4489-9992-e164c8cc8763 | segment2 | 6ab19caa-dda9-4b3d-abc4-
5b8f435b98d9 | vlan | 129 |
| 43e16869-ad31-48e4-87ce-acf756709e18 | segment1 | 6ab19caa-dda9-4b3d-abc4-
5b8f435b98d9 | vlan | 128 |
+--------------------------------------+----------+--------------------------------------+--------------+-----
----+
5. Create one IPv4 subnet and one IPv6 subnet on the segment1 segment.
In this example, the IPv4 subnet uses 203.0.113.0/24:
Example
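A minimal sketch of a subnet-creation command consistent with the sample output below (the segment is referenced by the ID shown in the earlier steps):

$ openstack subnet create --network multisegment1 --network-segment 43e16869-ad31-48e4-87ce-acf756709e18 --ip-version 4 --subnet-range 203.0.113.0/24 multisegment1-segment1-v4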
Sample output
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| allocation_pools | 203.0.113.2-203.0.113.254 |
| cidr | 203.0.113.0/24 |
| enable_dhcp | True |
| gateway_ip | 203.0.113.1 |
| id | c428797a-6f8e-4cb1-b394-c404318a2762 |
| ip_version |4 |
| name | multisegment1-segment1-v4 |
| network_id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 |
| revision_number | 1 |
| segment_id | 43e16869-ad31-48e4-87ce-acf756709e18 |
| tags | [] |
+-------------------+--------------------------------------+
Example
Sample output
+-------------------+------------------------------------------------------+
| Field | Value |
+-------------------+------------------------------------------------------+
| allocation_pools | fd00:203:0:113::2-fd00:203:0:113:ffff:ffff:ffff:ffff |
| cidr | fd00:203:0:113::/64 |
| enable_dhcp | True |
| gateway_ip | fd00:203:0:113::1 |
| id | e41cb069-9902-4c01-9e1c-268c8252256a |
| ip_version |6 |
| ipv6_address_mode | slaac |
| ipv6_ra_mode | None |
| name | multisegment1-segment1-v6 |
| network_id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 |
| revision_number | 1 |
| segment_id | 43e16869-ad31-48e4-87ce-acf756709e18 |
| tags | [] |
+-------------------+------------------------------------------------------+
NOTE
6. Create one IPv4 subnet and one IPv6 subnet on the segment2 segment.
In this example, the IPv4 subnet uses 198.51.100.0/24:
Example
Sample output
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| allocation_pools | 198.51.100.2-198.51.100.254 |
| cidr | 198.51.100.0/24 |
| enable_dhcp | True |
| gateway_ip | 198.51.100.1 |
| id | 242755c2-f5fd-4e7d-bd7a-342ca95e50b2 |
| ip_version |4 |
| name | multisegment1-segment2-v4 |
| network_id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 |
| revision_number | 1 |
| segment_id | 053b7925-9a89-4489-9992-e164c8cc8763 |
| tags | [] |
+-------------------+--------------------------------------+
Example
Sample output
+-------------------+--------------------------------------------------------+
| Field | Value |
+-------------------+--------------------------------------------------------+
| allocation_pools | fd00:198:51:100::2-fd00:198:51:100:ffff:ffff:ffff:ffff |
| cidr | fd00:198:51:100::/64 |
| enable_dhcp | True |
| gateway_ip | fd00:198:51:100::1 |
| id | b884c40e-9cfe-4d1b-a085-0a15488e9441 |
| ip_version |6 |
| ipv6_address_mode | slaac |
| ipv6_ra_mode | None |
| name | multisegment1-segment2-v6 |
| network_id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 |
| revision_number | 1 |
| segment_id | 053b7925-9a89-4489-9992-e164c8cc8763 |
| tags | [] |
+-------------------+--------------------------------------------------------+
Verification
1. Verify that each IPv4 subnet associates with at least one DHCP agent:
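A minimal sketch of such a check (the filtering options are assumptions; adjust to your client version):

$ openstack network agent list --agent-type dhcp --network multisegment1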
Sample output
+--------------------------------------+------------+-------------+-------------------+-------+-------+------
--------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary
|
+--------------------------------------+------------+-------------+-------------------+-------+-------+------
--------------+
| c904ed10-922c-4c1a-84fd-d928abaf8f55 | DHCP agent | compute0001 | nova | :-)
| UP | neutron-dhcp-agent |
| e0b22cc0-d2a6-4f1c-b17c-27558e20b454 | DHCP agent | compute0101 | nova | :-)
| UP | neutron-dhcp-agent |
+--------------------------------------+------------+-------------+-------------------+-------+-------+------
--------------+
2. Verify that inventories were created for each segment IPv4 subnet in the Compute service
placement API.
Run this command for all segment IDs:
$ SEGMENT_ID=053b7925-9a89-4489-9992-e164c8cc8763
$ openstack resource provider inventory list $SEGMENT_ID
Sample output
In this sample output, only one of the segments is shown:
+----------------+------------------+----------+----------+-----------+----------+-------+
| resource_class | allocation_ratio | max_unit | reserved | step_size | min_unit | total |
+----------------+------------------+----------+----------+-----------+----------+-------+
| IPV4_ADDRESS | 1.0 | 1| 2| 1| 1 | 30 |
+----------------+------------------+----------+----------+-----------+----------+-------+
3. Verify that host aggregates were created for each segment in the Compute service:
Sample output
In this example, only one of the segments is shown:
+----+---------------------------------------------------------+-------------------+
| Id | Name | Availability Zone |
+----+---------------------------------------------------------+-------------------+
| 10 | Neutron segment id 053b7925-9a89-4489-9992-e164c8cc8763 | None |
+----+---------------------------------------------------------+-------------------+
4. Launch one or more instances. Each instance obtains IP addresses according to the segment it
uses on the particular compute node.
NOTE
If a fixed IP is specified by the user in the port create request, that particular IP is
allocated immediately to the port. However, creating a port and passing it to an
instance yields a different behavior than conventional networks. If the fixed IP is
not specified on the port create request, the Networking service defers
assignment of IP addresses to the port until the particular compute node
becomes apparent. For example, when you run this command:
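A minimal sketch of that command, consistent with the sample output below:

$ openstack port create --network multisegment1 port1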
Sample output
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| admin_state_up | UP |
| binding_vnic_type | normal |
| id | 6181fb47-7a74-4add-9b6b-f9837c1c90c4 |
| ip_allocation | deferred |
| mac_address | fa:16:3e:34:de:9b |
| name | port1 |
| network_id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 |
| port_security_enabled | True |
| revision_number |1 |
| security_groups | e4fcef0d-e2c5-40c3-a385-9c33ac9289c5 |
| status | DOWN |
| tags | [] |
+-----------------------+--------------------------------------+
Additional resources
Prerequisites
The non-routed network you are migrating must contain only one segment and only one subnet.
IMPORTANT
Procedure
1. For the network that is being migrated, obtain the ID of the current network segment.
Example
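The command is not shown in this excerpt; a hedged example that lists the segments of the network being migrated, assuming the network name my_network, is:
$ openstack network segment list --network my_network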
Sample output
+--------------------------------------+------+--------------------------------------+--------------+---------+
| ID                                   | Name | Network                              | Network Type | Segment |
+--------------------------------------+------+--------------------------------------+--------------+---------+
| 81e5453d-4c9f-43a5-8ddf-feaf3937e8c7 | None | 45e84575-2918-471c-95c0-018b961a2984 | flat         | None    |
+--------------------------------------+------+--------------------------------------+--------------+---------+
2. For the network that is being migrated, obtain the ID of the current subnet.
Example
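A hedged example of the command, again assuming the network name my_network:
$ openstack subnet list --network my_network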
Sample output
+--------------------------------------+-----------+--------------------------------------+---------------+
| ID                                   | Name      | Network                              | Subnet        |
+--------------------------------------+-----------+--------------------------------------+---------------+
| 71d931d2-0328-46ae-93bc-126caf794307 | my_subnet | 45e84575-2918-471c-95c0-018b961a2984 | 172.24.4.0/24 |
+--------------------------------------+-----------+--------------------------------------+---------------+
3. Verify that the current segment_id of the subnet has a value of None.
Example
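A hedged example, using the subnet name my_subnet from the previous step:
$ openstack subnet show my_subnet -c segment_id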
Sample output
+------------+-------+
| Field | Value |
+------------+-------+
| segment_id | None |
+------------+-------+
4. Change the value of the subnet segment_id to the network segment ID.
Here is an example:
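A hedged example, using the segment ID and the subnet name from the previous steps:
$ openstack subnet set --network-segment 81e5453d-4c9f-43a5-8ddf-feaf3937e8c7 my_subnet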
Verification
Verify that the subnet is now associated with the desired network segment.
Example
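A hedged example of the verification command:
$ openstack subnet show my_subnet -c segment_id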
Sample output
+------------+--------------------------------------+
| Field | Value |
+------------+--------------------------------------+
| segment_id | 81e5453d-4c9f-43a5-8ddf-feaf3937e8c7 |
+------------+--------------------------------------+
Additional resources
You define allowed address pairs by using the openstack port command of the Red Hat OpenStack
Platform command-line client.
IMPORTANT
Be aware that you should not use the default security group with a wider IP address range
in an allowed address pair. Doing so can allow a single port to bypass security groups for
all other ports within the same network.
For example, this command impacts all ports in the network and bypasses all security
groups:
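The command itself is not shown in this excerpt. An illustrative sketch of the pattern that this warning describes, with a hypothetical port ID placeholder, is the following; do not run it against production ports:
$ openstack port set --allowed-address ip-address=0.0.0.0/0 <port-id>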
NOTE
With an ML2/OVN mechanism driver network back end, it is possible to create VIPs.
However, the IP address assigned to a bound port using allowed_address_pairs should
match the virtual port IP address (/32).
If you use a CIDR-format IP address for the bound port allowed_address_pairs instead,
port forwarding is not configured in the back end, and traffic fails for any IP in the CIDR
that expects to reach the bound IP port.
Additional resources
IMPORTANT
Do not use the default security group with a wider IP address range in an allowed address
pair. Doing so can allow a single port to bypass security groups for all other ports within
the same network.
Procedure
Use the following command to create a port and allow one address pair:
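The command is not shown in this excerpt; a hedged example, with placeholder values for the network, the paired address, and the port name, is:
$ openstack port create --network <network> --allowed-address ip-address=<IP>,mac-address=<MAC> <port-name>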
Additional resources
IMPORTANT
Do not use the default security group with a wider IP address range in an allowed address
pair. Doing so can allow a single port to bypass security groups for all other ports within
the same network.
Procedure
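The procedure step is not shown in this excerpt; a hedged example of adding an allowed address pair to an existing port, with placeholder values, is:
$ openstack port set --allowed-address ip-address=<IP>,mac-address=<MAC> <port>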
NOTE
You cannot set an allowed-address pair that matches the mac_address and
ip_address of the port itself. Such a setting has no effect, because traffic that
matches the port's mac_address and ip_address is already allowed to pass
through the port.
Additional resources
1. Enable the L2 population driver by adding it to the list of mechanism drivers. You must also enable at
least one tunneling driver: GRE or VXLAN, or both. Add the appropriate configuration options to the
ml2_conf.ini file:
[ml2]
type_drivers = local,flat,vlan,gre,vxlan,geneve
mechanism_drivers = l2population
NOTE
Neutron’s Linux Bridge ML2 driver and agent were deprecated in Red Hat OpenStack
Platform 11. The Open vSwitch (OVS) plugin is the OpenStack Platform director default, and
Red Hat recommends it for general usage.
2. Enable L2 population in the openvswitch_agent.ini file. Enable it on each node that contains the L2
agent:
[agent]
l2_population = True
NOTE
To also enable the ARP responder feature, add the arp_responder flag to the [agent] section:
[agent]
l2_population = True
arp_responder = True
If the number of highly available (HA) routers on a single host is high, when an HA router failover
occurs, the Virtual Router Redundancy Protocol (VRRP) messages might overflow the IRQ queues.
This overflow stops Open vSwitch (OVS) from responding and forwarding those VRRP messages.
To avoid VRRP packet overload, you must increase the VRRP advertisement interval using the
ha_vrrp_advert_int parameter in the ExtraConfig section for the Controller role.
Procedure
1. Log in to the undercloud as the stack user, and source the stackrc file to enable the director
command line tools.
Example
$ source ~/stackrc
Example
$ vi /home/stack/templates/my-neutron-environment.yaml
TIP
The Red Hat OpenStack Platform Orchestration service (heat) uses a set of plans called
templates to install and configure your environment. You can customize aspects of the
overcloud with a custom environment file, which is a special type of template that provides
customization for your heat templates.
3. In the YAML environment file, increase the VRRP advertisement interval using the
ha_vrrp_advert_int argument with a value specific for your site. (The default is 2 seconds.)
You can also set values for gratuitous ARP messages:
ha_vrrp_garp_master_repeat
The number of gratuitous ARP messages to send at one time after the transition to the
master state. (The default is 5 messages.)
ha_vrrp_garp_master_delay
The delay before the second set of gratuitous ARP messages is sent after a lower-priority
advertisement is received in the master state. (The default is 5 seconds.)
Example
parameter_defaults:
ControllerExtraConfig:
neutron::agents::l3::ha_vrrp_advert_int: 7
neutron::config::l3_agent_config:
DEFAULT/ha_vrrp_garp_master_repeat:
value: 5
DEFAULT/ha_vrrp_garp_master_delay:
value: 5
4. Run the openstack overcloud deploy command and include the core heat templates,
environment files, and this new custom environment file.
IMPORTANT
The order of the environment files is important because the parameters and
resources defined in subsequent environment files take precedence.
Example
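A minimal sketch of the deployment command, assuming the custom file created earlier in this procedure; substitute your own environment files:
$ openstack overcloud deploy --templates \
  -e <your-environment-files> \
  -e /home/stack/templates/my-neutron-environment.yaml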
Additional resources
You enable the dns_domain for ports extension by declaring the RHOSP Orchestration (heat)
NeutronPluginExtensions parameter in a YAML-formatted environment file. Using a corresponding
parameter, NeutronDnsDomain, you specify your domain name, which overrides the default value,
openstacklocal. After redeploying your overcloud, you can use the OpenStack Client port commands,
port set or port create, with --dns-name to assign a port name.
Also, when the dns_domain for ports extension is enabled, the Compute service automatically populates
the dns_name attribute with the hostname attribute of the instance during the boot of VM instances.
At the end of the boot process, dnsmasq recognizes the allocated ports by their instance hostname.
Procedure
1. Log in to the undercloud as the stack user, and source the stackrc file to enable the director
command line tools.
Example
$ source ~/stackrc
NOTE
Values inside parentheses are sample values that are used in the example
commands in this procedure. Substitute these sample values with ones that are
appropriate for your site.
Example
$ vi /home/stack/templates/my-neutron-environment.yaml
TIP
The undercloud includes a set of Orchestration service templates that form the plan for your
overcloud creation. You can customize aspects of the overcloud with environment files, which
are YAML-formatted files that override parameters and resources in the core Orchestration
service template collection. You can include as many environment files as necessary.
3. In the environment file, add a parameter_defaults section. Under this section, add the
dns_domain for ports extension, dns_domain_ports.
Example
parameter_defaults:
NeutronPluginExtensions: "qos,port_security,dns_domain_ports"
NOTE
If you set dns_domain_ports, ensure that the deployment does not also use
dns_domain, the DNS Integration extension. These extensions are incompatible,
and both extensions cannot be defined simultaneously.
4. Also in the parameter_defaults section, add your domain name (example.com) using the
NeutronDnsDomain parameter.
Example
parameter_defaults:
NeutronPluginExtensions: "qos,port_security,dns_domain_ports"
NeutronDnsDomain: "example.com"
5. Run the openstack overcloud deploy command and include the core Orchestration templates,
environment files, and this new environment file.
IMPORTANT
The order of the environment files is important because the parameters and
resources defined in subsequent environment files take precedence.
Example
$ openstack overcloud deploy --templates \
-e [your-environment-files] \
-e /home/stack/templates/my-neutron-environment.yaml
Verification
1. Log in to the overcloud, and create a new port (new_port) on a network (public). Assign a DNS
name (my_port) to the port.
Example
$ source ~/overcloudrc
$ openstack port create --network public --dns-name my_port new_port
Example
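The command that produces the output below is not included in this excerpt; a hedged assumption is that it displays the port you just created:
$ openstack port show new_port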
Output
+-------------------------+----------------------------------------------+
| Field | Value |
+-------------------------+----------------------------------------------+
| dns_assignment | fqdn='my_port.example.com', |
| | hostname='my_port', |
| | ip_address='10.65.176.113' |
| dns_domain | example.com |
| dns_name | my_port |
| name | new_port |
+-------------------------+----------------------------------------------+
Under dns_assignment, the fully qualified domain name (fqdn) value for the port contains a
concatenation of the DNS name (my_port) and the domain name (example.com) that you set
earlier with NeutronDnsDomain.
3. Create a new VM instance (my_vm) using the port (new_port) that you just created.
Example
$ openstack server create --image rhel --flavor m1.small --port new_port my_vm
Example
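The command for the output below is not shown; a hedged assumption is that it displays the same port again after the instance boots:
$ openstack port show new_port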
Output
+-------------------------+----------------------------------------------+
| Field | Value |
+-------------------------+----------------------------------------------+
| dns_assignment | fqdn='my_vm.example.com', |
| | hostname='my_vm', |
| | ip_address='10.65.176.113' |
| dns_domain | example.com |
| dns_name | my_vm |
| name | new_port |
+-------------------------+----------------------------------------------+
Note that the Compute service changes the dns_name attribute from its original value
(my_port) to the name of the instance with which the port is associated (my_vm).
Additional resources
The value of the extra_dhcp_opt attribute is an array of DHCP option objects, where each object
contains an opt_name and an opt_value. IPv4 is the default version, but you can change this to IPv6 by
including a third option, ip-version=6.
When a VM instance starts, the RHOSP Networking service supplies port information to the instance
using the DHCP protocol. If you add DHCP information to a port that is already connected to a running
instance, the instance uses the new DHCP port information only after it is restarted.
Some of the more common DHCP port attributes are: bootfile-name, dns-server, domain-name, mtu,
server-ip-address, and tftp-server. For the complete set of acceptable values for opt_name, refer to
the DHCP specification.
Prerequisites
Procedure
$ source ~/stackrc
Example
$ vi /home/stack/templates/my-octavia-environment.yaml
4. Your environment file must contain the keyword parameter_defaults. Under this keyword,
add the extra DHCP option extension, extra_dhcp_opt.
Example
parameter_defaults:
NeutronPluginExtensions: "qos,port_security,extra_dhcp_opt"
5. Run the deployment command and include the core heat templates, environment files, and this
new custom environment file.
The order of the environment files is important because the parameters and resources
defined in subsequent environment files take precedence.
Example
Verification
Example
$ source ~/overcloudrc
2. Create a new port (new_port) on a network (public). Assign a valid attribute from the DHCP
specification to the new port.
Example
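The command is not shown in this excerpt. A hedged example that sets the two options appearing in the sample output below, assuming the --extra-dhcp-option argument of the openstack port create command, is:
$ openstack port create --network public \
  --extra-dhcp-option name=domain-name,value=test.domain \
  --extra-dhcp-option name=ntp-server,value=192.0.2.123 new_port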
Example
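A hedged example of displaying the options on the new port, which yields the sample output that follows:
$ openstack port show new_port -c extra_dhcp_opts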
Sample output
+-----------------+--------------------------------------------------------------------+
| Field | Value |
+-----------------+--------------------------------------------------------------------+
| extra_dhcp_opts | ip_version='4', opt_name='domain-name', opt_value='test.domain' |
| | ip_version='4', opt_name='ntp-server', opt_value='192.0.2.123' |
+-----------------+--------------------------------------------------------------------+
Additional resources
Dynamic Host Configuration Protocol (DHCP) and Bootstrap Protocol (BOOTP) Parameters
By using a special Orchestration service (heat) parameter, ExtraKernelModules, you can ensure that
heat stores configuration information about the required kernel modules needed for features like GRE
tunneling. Later, during normal module management, these required kernel modules are loaded.
Procedure
1. On the undercloud host, logged in as the stack user, create a custom YAML environment file.
Example
$ vi /home/stack/templates/my-modules-environment.yaml
TIP
Heat uses a set of plans called templates to install and configure your environment. You can
customize aspects of the overcloud with a custom environment file, which is a special type of
template that provides customization for your heat templates.
Example
parameter_defaults:
  ComputeParameters:
    ExtraKernelModules:
      nf_conntrack_proto_gre: {}
  ControllerParameters:
    ExtraKernelModules:
      nf_conntrack_proto_gre: {}
3. Run the openstack overcloud deploy command and include the core heat templates,
environment files, and this new custom environment file.
IMPORTANT
The order of the environment files is important as the parameters and resources
defined in subsequent environment files take precedence.
Example
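A minimal sketch of the deployment command, assuming the custom file created earlier in this procedure:
$ openstack overcloud deploy --templates \
  -e <your-environment-files> \
  -e /home/stack/templates/my-modules-environment.yaml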
Verification
If heat has properly loaded the module, you should see output when you run the lsmod
command on the Compute node:
Example
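A hedged example of the check on the Compute node:
$ sudo lsmod | grep nf_conntrack_proto_gre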
Additional resources
Prerequisites
You have at least two RHOSP projects that you want to share.
In one of the projects, the current project, you have created a security group that you want to
share with another project, the target project.
In this example, the ping_ssh security group is created:
Example
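The command is not shown in this excerpt; a hedged example is:
$ openstack security group create ping_ssh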
Procedure
1. Log in to the overcloud for the current project that contains the security group.
3. Obtain the name or ID of the security group that you want to share between RHOSP projects.
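A hedged example of listing the security groups to find the name or ID:
$ openstack security group list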
4. Using the identifiers from the previous steps, create an RBAC policy using the openstack
network rbac create command.
In this example, the ID of the target project is 32016615de5d43bb88de99e7f2e26a1e. The ID of
the security group is 5ba835b7-22b0-4be6-bdbe-e0722d1b5f24:
Example
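The command is not shown in this excerpt; reconstructed from the identifiers given above, it looks like this:
$ openstack network rbac create --target-project 32016615de5d43bb88de99e7f2e26a1e \
  --action access_as_shared --type security_group 5ba835b7-22b0-4be6-bdbe-e0722d1b5f24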
--target-project
specifies the project that requires access to the security group.
TIP
You can share data between all projects by using the --target-all-projects argument instead
of --target-project <target-project>. By default, only the admin user has this privilege.
--action access_as_shared
specifies what the project is allowed to do.
--type
indicates that the target object is a security group.
5ba835b7-22b0-4be6-bdbe-e0722d1b5f24
is the ID of the particular security group to which access is being granted.
The target project is able to access the security group when running the OpenStack Client security
group commands, in addition to being able to bind to its ports. No other users (other than
administrators and the owner) are able to access the security group.
TIP
To remove access for the target project, delete the RBAC policy that allows it using the openstack
network rbac delete command.
Additional resources
CHAPTER 19. CONFIGURING LAYER 3 HIGH AVAILABILITY (HA)
In a typical deployment, projects create virtual routers, which are scheduled to run on physical
Networking service Layer 3 (L3) agent nodes. This becomes an issue when you lose an L3 agent node
and the dependent virtual machines subsequently lose connectivity to external networks. Any floating IP
addresses are also unavailable. In addition, connectivity is lost between any networks that the router
hosts.
NOTE
To deploy Layer 3 (L3) HA, you must maintain similar configuration on the redundant
Networking service nodes, including floating IP ranges and access to external networks.
In the following diagram, the active Router1 and Router2 routers are running on separate physical L3
Networking service agent nodes. L3 HA has scheduled backup virtual routers on the corresponding
nodes, ready to resume service in the case of a physical node failure. When the L3 agent node fails, L3
HA reschedules the affected virtual router and floating IP addresses to a working node:
During a failover event, instance TCP sessions through floating IPs remain unaffected, and migrate to
the new L3 node without disruption. Only SNAT traffic is affected by failover events.
Additional resources
The Networking service L3 agent node shuts down or otherwise loses power because of a
hardware failure.
The L3 agent node becomes isolated from the physical network and loses connectivity.
NOTE
Manually stopping the L3 agent service does not induce a failover event.
Internal VRRP messages are transported within a separate internal network, created
automatically for each project. This process occurs transparently to the user.
When implementing high availability (HA) routers on ML2/OVS, each L3 agent spawns haproxy
and neutron-keepalived-state-change-monitor processes for each router. Each process
consumes approximately 20MB of memory. By default, each HA router resides on three L3
agents and consumes resources on each of the nodes. Therefore, when sizing your RHOSP
networks, ensure that you have allocated enough memory to support the number of HA routers
that you plan to implement.
Layer 3 (L3) HA assigns the active role randomly, regardless of the scheduler used by the
Networking service (whether random or leastrouter).
The database schema has been modified to handle allocation of virtual IP addresses (VIPs)
to virtual routers.
A new keepalived manager has been added, providing load-balancing and HA capabilities.
Prerequisites
Procedure
1. Log in to the undercloud as the stack user, and source the stackrc file to enable the director
command line tools.
Example
$ source ~/stackrc
Example
$ vi /home/stack/templates/my-neutron-environment.yaml
TIP
The Orchestration service (heat) uses a set of plans called templates to install and configure
your environment. You can customize aspects of the overcloud with a custom environment file,
which is a special type of template that provides customization for your heat templates.
3. Set the NeutronL3HA parameter to true in the YAML environment file. This ensures HA is
enabled even if director did not set it by default.
parameter_defaults:
NeutronL3HA: 'true'
Example
parameter_defaults:
NeutronL3HA: 'true'
ControllerExtraConfig:
neutron::server::max_l3_agents_per_router: 2
In this example, if you deploy four Networking service nodes, only two L3 agents protect each
HA virtual router: one active, and one standby.
If you set the value of max_l3_agents_per_router to be greater than the number of available
network nodes, you can scale out the number of standby routers by adding new L3 agents. For
every new L3 agent node that you deploy, the Networking service schedules additional standby
versions of the virtual routers until the max_l3_agents_per_router limit is reached.
5. Run the openstack overcloud deploy command and include the core heat templates,
environment files, and this new custom environment file.
IMPORTANT
The order of the environment files is important because the parameters and
resources defined in subsequent environment files take precedence.
Example
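A minimal sketch of the deployment command, assuming the custom file created earlier in this procedure:
$ openstack overcloud deploy --templates \
  -e <your-environment-files> \
  -e /home/stack/templates/my-neutron-environment.yaml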
NOTE
When NeutronL3HA is set to true, all virtual routers that are created default to
HA routers. When you create a router, you can override the HA option by including
the --no-ha option in the openstack router create command:
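A hedged example, with a hypothetical router name:
$ openstack router create --no-ha router1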
Additional resources
Procedure
Run the ip address command within the virtual router namespace to return a high availability
(HA) device in the result, prefixed with ha-.
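A hedged example, with a placeholder for the router ID; sample output follows:
# ip netns exec qrouter-<router-id> ip address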
<snip>
2794: ha-45249562-ec: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc
noqueue state DOWN group default
link/ether 12:34:56:78:2b:5d brd ff:ff:ff:ff:ff:ff
inet 169.254.0.2/24 brd 169.254.0.255 scope global ha-54b92d86-4f
With Layer 3 HA enabled, virtual routers and floating IP addresses are protected against individual node
failure.
Prerequisites
Your RHOSP deployment is highly available with three or more Networker nodes.
Procedure
$ source ~/stackrc
The overcloud stack and its subsequent child stacks should have a status of either
CREATE_COMPLETE or UPDATE_COMPLETE.
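A hedged way to check the status of the overcloud stack and its child stacks:
$ openstack stack list --nested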
4. Ensure that you have a recent backup image of the undercloud node by running the Relax-and-
Recover tool.
For more information, see the Backing up and restoring the undercloud and control plane nodes
guide.
6. Open an interactive bash shell on the container and check the status of the Galera cluster:
# pcs status
Sample output
7. Log on to the RHOSP director node and check the nova-compute service:
Prerequisites
Your RHOSP deployment is highly available with three or more Networker nodes.
The node that you add must be able to connect to the other nodes in the cluster over the
network.
You have performed the steps described in Section 20.1, “Preparing to replace network nodes”
Procedure
Example
$ source ~/stackrc
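The listing command is not shown in this excerpt; the sample output below (with some columns trimmed) is typical of the following command:
$ openstack baremetal node list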
Sample output
+--------------------------------------+------+--------------------------------------+
| UUID                                 | Name | Instance UUID                        |
+--------------------------------------+------+--------------------------------------+
| 36404147-7c8a-41e6-8c72-6af1e339da2a | None | 7bee57cf-4a58-4eaf-b851-f3203f6e5e05 |
4. Set the node into maintenance mode by using the baremetal node maintenance set
command.
Example
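A hedged example, using the node UUID from the previous sample output:
$ openstack baremetal node maintenance set 36404147-7c8a-41e6-8c72-6af1e339da2a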
5. Create a JSON file to add the new node to the node pool that RHOSP director manages.
Example
{
"nodes":[
{
"mac":[
"dd:dd:dd:dd:dd:dd"
],
"cpu":"4",
"memory":"6144",
"disk":"40",
"arch":"x86_64",
"pm_type":"ipmi",
"pm_user":"admin",
"pm_password":"p@55w0rd!",
"pm_addr":"192.168.24.207"
}
]
}
For more information, see Adding nodes to the overcloud in the Director Installation and Usage
guide.
6. Run the openstack overcloud node import command to register the new node.
Example
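A hedged example, assuming the JSON file from the previous step is saved as newnode.json:
$ openstack overcloud node import newnode.json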
7. After registering the new node, launch the introspection process by using the following
commands:
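Hedged examples of the introspection commands, with a placeholder for the new node:
$ openstack baremetal node manage <node>
$ openstack overcloud node introspect --all-manageable --provide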
8. Tag the new node with the Networker profile by using the openstack baremetal node set
command.
Example
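A hedged example, with a placeholder for the new node UUID:
$ openstack baremetal node set --property capabilities='profile:networker,boot_option:local' <node-uuid>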
Example
parameters:
NetworkerRemovalPolicies:
[{'resource_list': ['1']}]
10. Create a ~/templates/node-count-networker.yaml environment file and set the total count of
Networker nodes in the file.
Example
parameter_defaults:
OvercloudNetworkerFlavor: networker
NetworkerCount: 3
11. Run the openstack overcloud deploy command and include the core heat templates,
environment files, and the environment files that you modified.
IMPORTANT
The order of the environment files is important because the parameters and
resources defined in subsequent environment files take precedence.
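A minimal sketch of the deployment command, assuming the files created in the previous steps; substitute your own environment files:
$ openstack overcloud deploy --templates \
  -e <your-environment-files> \
  -e /home/stack/templates/node-count-networker.yaml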
RHOSP director removes the old Networker node, creates a new one, and updates the
overcloud stack.
Verification
2. Verify that the new Networker node is listed, and the old one is removed.
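The command is not shown in this excerpt; the sample output below (with some columns trimmed) is typical of:
$ openstack server list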
Sample output
+--------------------------------------+------------------------+--------+
| ID | Name | Status |
+--------------------------------------+------------------------+--------+
| 861408be-4027-4f53-87a6-cd3cf206ba7a | overcloud-compute-0 | ACTIVE |
| 0966e9ae-f553-447a-9929-c4232432f718 | overcloud-compute-1 | ACTIVE |
| 9c08fa65-b38c-4b2e-bd47-33870bff06c7 | overcloud-compute-2 | ACTIVE |
| a7f0f5e1-e7ce-4513-ad2b-81146bc8c5af | overcloud-controller-0 | ACTIVE |
| cfefaf60-8311-4bc3-9416-6a824a40a9ae | overcloud-controller-1 | ACTIVE |
| 97a055d4-aefd-481c-82b7-4a5f384036d2 | overcloud-controller-2 | ACTIVE |
| 844c9a88-713a-4ff1-8737-6410bf551d4f | overcloud-networker-0 | ACTIVE |
| c2e40164-c659-4849-a28f-507eb7edb79f | overcloud-networker-2 | ACTIVE |
| 425a0828-b42f-43b0-940c-7fb02522753a | overcloud-networker-3 | ACTIVE |
+--------------------------------------+------------------------+--------+
Additional resources
Adding nodes to the overcloud in the Director Installation and Usage guide
Registering Nodes for the Overcloud in the Director Installation and Usage guide
Prerequisites
Your RHOSP deployment is highly available with three or more Networker nodes.
Procedure
Example
$ source ~/overcloudrc
3. Verify that the RHOSP Networking service processes exist, and are marked out-of-service (xxx)
for the overcloud-networker-1.
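The command is not shown in this excerpt; a hedged example that limits the listing to the replaced node and to the columns shown below is:
$ openstack network agent list --host overcloud-networker-1 -c ID -c Host -c Alive -c Binary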
Sample output
+--------------------------------------+-----------------------+-------+-------------------------------+
| ID                                   | Host                  | Alive | Binary                        |
+--------------------------------------+-----------------------+-------+-------------------------------+
| 26316f47-4a30-4baf-ba00-d33c9a9e0844 | overcloud-networker-1 | xxx   | ovn-controller                |
+--------------------------------------+-----------------------+-------+-------------------------------+
+--------------------------------------+-----------------------+-------+------------------------+
| ID                                   | Host                  | Alive | Binary                 |
+--------------------------------------+-----------------------+-------+------------------------+
| 8377-66d75323e466c-b838-1149e10441ee | overcloud-networker-1 | xxx   | neutron-metadata-agent |
| b55d-797668c336707-a2cf-cba875eeda21 | overcloud-networker-1 | xxx   | neutron-l3-agent       |
| 9dcb-00a9e32ecde42-9458-01cfa9742862 | overcloud-networker-1 | xxx   | neutron-ovs-agent      |
| be83-e4d9329846540-9ae6-1540947b2ffd | overcloud-networker-1 | xxx   | neutron-dhcp-agent     |
+--------------------------------------+-----------------------+-------+------------------------+
Additional resources
CHAPTER 21. IDENTIFYING VIRTUAL DEVICES WITH TAGS
Procedure
To tag virtual devices, use the tag parameters, --block-device and --nic, when creating
instances.
Here is an example:
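The command is not included in this excerpt. An illustrative sketch, assuming the nova client with a compute API microversion that supports device tagging, and hypothetical image, flavor, network, and volume identifiers; the tags (nfv1, database-server) match the metadata example that follows:
$ nova boot --flavor m1.small --image rhel \
  --nic net-id=<network-id>,tag=nfv1 \
  --block-device source=volume,dest=volume,id=<volume-id>,tag=database-server \
  tagged-vm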
The resulting tags are added to the existing instance metadata and are available through both
the metadata API, and on the configuration drive.
{
"devices": [
{
"type": "nic",
"bus": "pci",
"address": "0030:00:02.0",
"mac": "aa:00:00:00:01",
"tags": ["nfv1"]
},
{
"type": "disk",
"bus": "pci",
"address": "0030:00:07.0",
"serial": "disk-vol-227",
"tags": ["database-server"]
}
]
}
The device tag metadata is available using GET /openstack/latest/meta_data.json from the
metadata API.
If the configuration drive is enabled, and mounted under /configdrive in the instance operating
system, the metadata is also present in /configdrive/openstack/latest/meta_data.json.