
Red Hat OpenStack Platform 16.1

Networking Guide

An advanced guide to Red Hat OpenStack Platform Networking

Last Updated: 2022-07-11


Red Hat OpenStack Platform 16.1 Networking Guide
An advanced guide to Red Hat OpenStack Platform Networking

OpenStack Team
[email protected]
Legal Notice
Copyright © 2022 Red Hat, Inc.

The text of and illustrations in this document are licensed by Red Hat under a Creative Commons
Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is
available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must
provide the URL for the original version.

Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert,
Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.

Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift,
Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States
and other countries.

Linux ® is the registered trademark of Linus Torvalds in the United States and other countries.

Java ® is a registered trademark of Oracle and/or its affiliates.

XFS ® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States
and/or other countries.

MySQL ® is a registered trademark of MySQL AB in the United States, the European Union and
other countries.

Node.js ® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the
official Joyent Node.js open source or commercial project.

The OpenStack ® Word Mark and OpenStack logo are either registered trademarks/service marks
or trademarks/service marks of the OpenStack Foundation, in the United States and other
countries and are used with the OpenStack Foundation's permission. We are not affiliated with,
endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.

All other trademarks are the property of their respective owners.

Abstract
A cookbook for common OpenStack Networking tasks.
Table of Contents

MAKING OPEN SOURCE MORE INCLUSIVE

PROVIDING FEEDBACK ON RED HAT DOCUMENTATION

CHAPTER 1. INTRODUCTION TO OPENSTACK NETWORKING
1.1. MANAGING YOUR RHOSP NETWORKS
1.2. NETWORKING SERVICE COMPONENTS
1.3. MODULAR LAYER 2 (ML2) NETWORKING
1.4. ML2 NETWORK TYPES
1.5. MODULAR LAYER 2 (ML2) MECHANISM DRIVERS
1.6. OPEN VSWITCH
1.7. OPEN VIRTUAL NETWORK (OVN)
1.8. MODULAR LAYER 2 (ML2) TYPE AND MECHANISM DRIVER COMPATIBILITY
1.9. EXTENSION DRIVERS FOR THE RHOSP NETWORKING SERVICE

CHAPTER 2. WORKING WITH ML2/OVN
2.1. LIST OF COMPONENTS IN THE RHOSP OVN ARCHITECTURE
2.2. THE OVN-CONTROLLER SERVICE ON COMPUTE NODES
2.3. OVN METADATA AGENT ON COMPUTE NODES
2.4. THE OVN COMPOSABLE SERVICE
2.5. LAYER 3 HIGH AVAILABILITY WITH OVN
2.6. LIMITATIONS OF THE ML2/OVN MECHANISM DRIVER
2.6.1. ML2/OVS features not yet supported by ML2/OVN
2.6.2. Core OVN limitations
2.7. LIMIT FOR NON-SECURE PORTS WITH ML2/OVN
2.8. ML2/OVS TO ML2/OVN IN-PLACE MIGRATION: VALIDATED AND PROHIBITED SCENARIOS
2.8.1. Validated ML2/OVS to ML2/OVN migration scenarios
2.8.2. ML2/OVS to ML2/OVN in-place migration scenarios that have not been verified
2.8.3. ML2/OVS to ML2/OVN in-place migration and security group rules
2.9. USING ML2/OVS INSTEAD OF THE DEFAULT ML2/OVN IN A NEW RHOSP 16.1 DEPLOYMENT
2.10. KEEPING ML2/OVS AFTER AN UPGRADE INSTEAD OF THE DEFAULT ML2/OVN
2.11. DEPLOYING A CUSTOM ROLE WITH ML2/OVN
2.12. SR-IOV WITH ML2/OVN AND NATIVE OVN DHCP

CHAPTER 3. MANAGING PROJECT NETWORKS
3.1. VLAN PLANNING
3.2. TYPES OF NETWORK TRAFFIC
3.3. IP ADDRESS CONSUMPTION
3.4. VIRTUAL NETWORKING
3.5. ADDING NETWORK ROUTING
3.6. EXAMPLE NETWORK PLAN
3.7. CREATING A NETWORK
3.8. WORKING WITH SUBNETS
3.9. CREATING A SUBNET
3.10. ADDING A ROUTER
3.11. PURGING ALL RESOURCES AND DELETING A PROJECT
3.12. DELETING A ROUTER
3.13. DELETING A SUBNET
3.14. DELETING A NETWORK

CHAPTER 4. CONNECTING VM INSTANCES TO PHYSICAL NETWORKS
4.1. OVERVIEW OF THE OPENSTACK NETWORKING TOPOLOGY
4.2. PLACEMENT OF OPENSTACK NETWORKING SERVICES
4.3. CONFIGURING FLAT PROVIDER NETWORKS
4.4. HOW DOES THE FLAT PROVIDER NETWORK PACKET FLOW WORK?
4.5. TROUBLESHOOTING INSTANCE-PHYSICAL NETWORK CONNECTIONS ON FLAT PROVIDER NETWORKS
4.6. CONFIGURING VLAN PROVIDER NETWORKS
4.7. HOW DOES THE VLAN PROVIDER NETWORK PACKET FLOW WORK?
4.8. TROUBLESHOOTING INSTANCE-PHYSICAL NETWORK CONNECTIONS ON VLAN PROVIDER NETWORKS
4.9. ENABLING MULTICAST SNOOPING FOR PROVIDER NETWORKS IN AN ML2/OVS DEPLOYMENT
4.10. ENABLING MULTICAST IN AN ML2/OVN DEPLOYMENT
4.11. ENABLING COMPUTE METADATA ACCESS
4.12. FLOATING IP ADDRESSES

CHAPTER 5. MANAGING FLOATING IP ADDRESSES
5.1. CREATING FLOATING IP POOLS
5.2. ASSIGNING A SPECIFIC FLOATING IP
5.3. CREATING AN ADVANCED NETWORK
5.4. ASSIGNING A RANDOM FLOATING IP
5.5. CREATING MULTIPLE FLOATING IP POOLS
5.6. BRIDGING THE PHYSICAL NETWORK
5.7. ADDING AN INTERFACE
5.8. DELETING AN INTERFACE

CHAPTER 6. TROUBLESHOOTING NETWORKS
6.1. BASIC PING TESTING
6.2. VIEWING CURRENT PORT STATUS
6.3. TROUBLESHOOTING CONNECTIVITY TO VLAN PROVIDER NETWORKS
6.4. REVIEWING THE VLAN CONFIGURATION AND LOG FILES
6.5. PERFORMING BASIC ICMP TESTING WITHIN THE ML2/OVN NAMESPACE
6.6. TROUBLESHOOTING FROM WITHIN PROJECT NETWORKS (ML2/OVS)
6.7. PERFORMING ADVANCED ICMP TESTING WITHIN THE NAMESPACE (ML2/OVS)
6.8. CREATING ALIASES FOR OVN TROUBLESHOOTING COMMANDS
6.9. MONITORING OVN LOGICAL FLOWS
6.10. MONITORING OPENFLOWS
6.11. LISTING THE OVN DATABASE TABLES IN AN ML2/OVN DEPLOYMENT
6.12. VALIDATING YOUR ML2/OVN DEPLOYMENT
6.13. SELECTING ML2/OVN LOG DEBUG AND INFO MODES
6.14. ML2/OVN LOG FILES

CHAPTER 7. CONFIGURING PHYSICAL SWITCHES FOR OPENSTACK NETWORKING
7.1. PLANNING YOUR PHYSICAL NETWORK ENVIRONMENT
7.2. CONFIGURING A CISCO CATALYST SWITCH
7.2.1. About trunk ports
7.2.2. Configuring trunk ports for a Cisco Catalyst switch
7.2.3. About access ports
7.2.4. Configuring access ports for a Cisco Catalyst switch
7.2.5. About LACP port aggregation
7.2.6. Configuring LACP on the physical NIC
7.2.7. Configuring LACP for a Cisco Catalyst switch
7.2.8. About MTU settings
7.2.9. Configuring MTU settings for a Cisco Catalyst switch
7.2.10. About LLDP discovery
7.2.11. Configuring LLDP for a Cisco Catalyst switch
7.3. CONFIGURING A CISCO NEXUS SWITCH
7.3.1. About trunk ports
7.3.2. Configuring trunk ports for a Cisco Nexus switch
7.3.3. About access ports
7.3.4. Configuring access ports for a Cisco Nexus switch
7.3.5. About LACP port aggregation
7.3.6. Configuring LACP on the physical NIC
7.3.7. Configuring LACP for a Cisco Nexus switch
7.3.8. About MTU settings
7.3.9. Configuring MTU settings for a Cisco Nexus 7000 switch
7.3.10. About LLDP discovery
7.3.11. Configuring LLDP for a Cisco Nexus 7000 switch
7.4. CONFIGURING A CUMULUS LINUX SWITCH
7.4.1. About trunk ports
7.4.2. Configuring trunk ports for a Cumulus Linux switch
7.4.3. About access ports
7.4.4. Configuring access ports for a Cumulus Linux switch
7.4.5. About LACP port aggregation
7.4.6. About MTU settings
7.4.7. Configuring MTU settings for a Cumulus Linux switch
7.4.8. About LLDP discovery
7.4.9. Configuring LLDP for a Cumulus Linux switch
7.5. CONFIGURING A EXTREME EXOS SWITCH
7.5.1. About trunk ports
7.5.2. Configuring trunk ports on an Extreme Networks EXOS switch
7.5.3. About access ports
7.5.4. Configuring access ports for an Extreme Networks EXOS switch
7.5.5. About LACP port aggregation
7.5.6. Configuring LACP on the physical NIC
7.5.7. Configuring LACP on an Extreme Networks EXOS switch
7.5.8. About MTU settings
7.5.9. Configuring MTU settings on an Extreme Networks EXOS switch
7.5.10. About LLDP discovery
7.5.11. Configuring LLDP settings on an Extreme Networks EXOS switch
7.6. CONFIGURING A JUNIPER EX SERIES SWITCH
7.6.1. About trunk ports
7.6.2. Configuring trunk ports for a Juniper EX Series switch
7.6.3. About access ports
7.6.4. Configuring access ports for a Juniper EX Series switch
7.6.5. About LACP port aggregation
7.6.6. Configuring LACP on the physical NIC
7.6.7. Configuring LACP for a Juniper EX Series switch
7.6.8. About MTU settings
7.6.9. Configuring MTU settings for a Juniper EX Series switch
7.6.10. About LLDP discovery
7.6.11. Configuring LLDP for a Juniper EX Series switch

CHAPTER 8. CONFIGURING MAXIMUM TRANSMISSION UNIT (MTU) SETTINGS
8.1. MTU OVERVIEW
8.2. CONFIGURING MTU SETTINGS IN DIRECTOR
8.3. REVIEWING THE RESULTING MTU CALCULATION

CHAPTER 9. CONFIGURING QUALITY OF SERVICE (QOS) POLICIES
9.1. QOS RULES
9.2. CREATING AND APPLYING A QOS POLICY AND RULE
9.3. CONTROLLING MINIMUM BANDWIDTH BY USING QOS POLICIES
9.3.1. Using network back-end policies to enforce a minimum bandwidth
9.3.2. Scheduling instances by using minimum bandwidth QoS policies
9.4. CREATING AND APPLYING A BANDWIDTH LIMIT QOS POLICY AND RULE
9.5. DSCP MARKING FOR EGRESS TRAFFIC
9.6. HOW TO VERIFY THE APPLICATION OF YOUR QOS POLICY AND RULES
9.7. RBAC FOR QOS POLICIES

CHAPTER 10. CONFIGURING BRIDGE MAPPINGS
10.1. OVERVIEW OF BRIDGE MAPPINGS
10.2. TRAFFIC FLOW
10.3. CONFIGURING BRIDGE MAPPINGS
10.4. MAINTAINING BRIDGE MAPPINGS FOR OVS
10.4.1. Cleaning up OVS patch ports manually
10.4.2. Cleaning up OVS patch ports automatically

CHAPTER 11. VLAN-AWARE INSTANCES
11.1. VLAN TRUNKS AND VLAN TRANSPARENT NETWORKS
11.2. ENABLING VLAN TRANSPARENCY IN ML2/OVN DEPLOYMENTS
11.3. REVIEWING THE TRUNK PLUG-IN
11.4. CREATING A TRUNK CONNECTION
11.5. ADDING SUBPORTS TO THE TRUNK
11.6. CONFIGURING AN INSTANCE TO USE A TRUNK
11.7. CONFIGURING NETWORKING SERVICE RPC TIMEOUT
11.8. UNDERSTANDING TRUNK STATES

CHAPTER 12. CONFIGURING RBAC POLICIES
12.1. OVERVIEW OF RBAC POLICIES
12.2. CREATING RBAC POLICIES
12.3. REVIEWING RBAC POLICIES
12.4. DELETING RBAC POLICIES
12.5. GRANTING RBAC POLICY ACCESS FOR EXTERNAL NETWORKS

CHAPTER 13. CONFIGURING DISTRIBUTED VIRTUAL ROUTING (DVR)
13.1. UNDERSTANDING DISTRIBUTED VIRTUAL ROUTING (DVR)
13.1.1. Overview of Layer 3 routing
13.1.2. Routing flows
13.1.3. Centralized routing
13.2. DVR OVERVIEW
13.3. DVR KNOWN ISSUES AND CAVEATS
13.4. SUPPORTED ROUTING ARCHITECTURES
13.5. DEPLOYING DVR WITH ML2 OVS
13.6. MIGRATING CENTRALIZED ROUTERS TO DISTRIBUTED ROUTING
13.7. DEPLOYING ML2/OVN OPENSTACK WITH DISTRIBUTED VIRTUAL ROUTING (DVR) DISABLED
13.7.1. Additional resources

CHAPTER 14. PROJECT NETWORKING WITH IPV6
14.1. IPV6 SUBNET OPTIONS
14.2. CREATE AN IPV6 SUBNET USING STATEFUL DHCPV6

CHAPTER 15. MANAGING PROJECT QUOTAS
15.1. CONFIGURING PROJECT QUOTAS
15.2. L3 QUOTA OPTIONS
15.3. FIREWALL QUOTA OPTIONS
15.4. SECURITY GROUP QUOTA OPTIONS
15.5. MANAGEMENT QUOTA OPTIONS

CHAPTER 16. DEPLOYING ROUTED PROVIDER NETWORKS
16.1. ADVANTAGES OF ROUTED PROVIDER NETWORKS
16.2. FUNDAMENTALS OF ROUTED PROVIDER NETWORKS
16.3. LIMITATIONS OF ROUTED PROVIDER NETWORKS
16.4. PREPARING FOR A ROUTED PROVIDER NETWORK
16.5. CREATING A ROUTED PROVIDER NETWORK
16.6. MIGRATING A NON-ROUTED NETWORK TO A ROUTED PROVIDER NETWORK

CHAPTER 17. CONFIGURING ALLOWED ADDRESS PAIRS
17.1. OVERVIEW OF ALLOWED ADDRESS PAIRS
17.2. CREATING A PORT AND ALLOWING ONE ADDRESS PAIR
17.3. ADDING ALLOWED ADDRESS PAIRS

CHAPTER 18. COMMON ADMINISTRATIVE NETWORKING TASKS
18.1. CONFIGURING THE L2 POPULATION DRIVER
18.2. TUNING KEEPALIVED TO AVOID VRRP PACKET LOSS
18.3. SPECIFYING THE NAME THAT DNS ASSIGNS TO PORTS
18.4. ASSIGNING DHCP ATTRIBUTES TO PORTS
18.5. LOADING KERNEL MODULES
18.6. CONFIGURING SHARED SECURITY GROUPS

CHAPTER 19. CONFIGURING LAYER 3 HIGH AVAILABILITY (HA)
19.1. RHOSP NETWORKING SERVICE WITHOUT HIGH AVAILABILITY (HA)
19.2. OVERVIEW OF LAYER 3 HIGH AVAILABILITY (HA)
19.3. LAYER 3 HIGH AVAILABILITY (HA) FAILOVER CONDITIONS
19.4. PROJECT CONSIDERATIONS FOR LAYER 3 HIGH AVAILABILITY (HA)
19.5. HIGH AVAILABILITY (HA) CHANGES TO THE RHOSP NETWORKING SERVICE
19.6. ENABLING LAYER 3 HIGH AVAILABILITY (HA) ON RHOSP NETWORKING SERVICE NODES
19.7. REVIEWING HIGH AVAILABILITY (HA) RHOSP NETWORKING SERVICE NODE CONFIGURATIONS

CHAPTER 20. REPLACING NETWORKER NODES
20.1. PREPARING TO REPLACE NETWORK NODES
20.2. REPLACING A NETWORKER NODE
20.3. RESCHEDULING NODES AND CLEANING UP THE NETWORKING SERVICE

CHAPTER 21. IDENTIFYING VIRTUAL DEVICES WITH TAGS
21.1. OVERVIEW OF VIRTUAL DEVICE TAGGING
21.2. TAGGING VIRTUAL DEVICES

MAKING OPEN SOURCE MORE INCLUSIVE


Red Hat is committed to replacing problematic language in our code, documentation, and web
properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the
enormity of this endeavor, these changes will be implemented gradually over several upcoming releases.
For more details, see our CTO Chris Wright’s message .


PROVIDING FEEDBACK ON RED HAT DOCUMENTATION


We appreciate your input on our documentation. Tell us how we can make it better.

Using the Direct Documentation Feedback (DDF) function


Use the Add Feedback DDF function for direct comments on specific sentences, paragraphs, or code
blocks.

1. View the documentation in the Multi-page HTML format.

2. Ensure that you see the Feedback button in the upper right corner of the document.

3. Highlight the part of text that you want to comment on.

4. Click Add Feedback.

5. Complete the Add Feedback field with your comments.

6. Optional: Add your email address so that the documentation team can contact you for
clarification on your issue.

7. Click Submit.


CHAPTER 1. INTRODUCTION TO OPENSTACK NETWORKING


The Networking service (neutron) is the software-defined networking (SDN) component of Red Hat
OpenStack Platform (RHOSP). The RHOSP Networking service manages internal and external traffic to
and from virtual machine instances and provides core services such as routing, segmentation, DHCP,
and metadata. It provides the API for virtual networking capabilities and management of switches,
routers, ports, and firewalls.

1.1. MANAGING YOUR RHOSP NETWORKS


With the Red Hat OpenStack Platform (RHOSP) Networking service (neutron) you can effectively meet
your site’s networking goals. You can:

Provide connectivity to VM instances within a project.


Project networks primarily enable general (non-privileged) projects to manage networks
without involving administrators. These networks are entirely virtual and require virtual routers
to interact with other project networks and external networks such as the Internet. Project
networks also usually provide DHCP and metadata services to instances. RHOSP supports the
following project network types: flat, VLAN, VXLAN, GRE, and GENEVE.

For more information, see Managing project networks.
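
For example, a project user might create a self-service network, subnet, and router with commands such as the following. This is only an illustrative sketch; the names and CIDR shown here are placeholders, not values taken from this guide:

$ openstack network create internal-net
$ openstack subnet create --network internal-net --subnet-range 192.168.10.0/24 internal-subnet
$ openstack router create internal-router
$ openstack router add subnet internal-router internal-subnet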

Connect VM instances to networks outside of a project.


Provider networks provide connectivity like project networks. But only administrative
(privileged) users can manage those networks because they interface with the physical network
infrastructure. RHOSP supports the following provider network types: flat and VLAN.

Inside project networks, you can use pools of floating IP addresses or a single floating IP
address to direct ingress traffic to your VM instances. Using bridge mappings, you can associate
a physical network name (an interface label) to a bridge created with OVS or OVN to allow
provider network traffic to reach the physical network.

For more information, see Connecting VM instances to physical networks .
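
As an illustration only, allocating a floating IP address and declaring a bridge mapping through director might look like the following. The network name public, the server name, the example IP, and the datacentre:br-ex mapping are assumptions about a particular environment:

$ openstack floating ip create public
$ openstack server add floating ip my-instance 203.0.113.17

parameter_defaults:
  NeutronBridgeMappings: 'datacentre:br-ex'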

Create a network that is optimized for the edge.


Operators can create routed provider networks (RPNs) that are typically used in edge
deployments, and rely on multiple layer 2 network segments instead of traditional networks that
have only one segment.

RPNs simplify the cloud for end users because they see only one network. For cloud operators,
RPNs deliver scalability and fault tolerance. For example, if a major error occurs, only one
segment is impacted instead of the entire network failing.

For more information, see Deploying routed provider networks .

Make your network resources highly available.


You can use availability zones (AZs) and Virtual Router Redundancy Protocol (VRRP) to keep
your network resources highly available. Operators group network nodes that are attached to
different power sources on different AZs. Next, operators schedule crucial services such as
DHCP, L3, FW, and so on to be on separate AZs.

RHOSP uses VRRP to make project routers and floating IP addresses highly available. As an alternative to centralized routing, Distributed Virtual Routing (DVR) offers a routing design based on VRRP that deploys the L3 agent and schedules routers on every Compute node.


For more information, see Using availability zones to make network resources highly available .

Secure your network at the port level.


Security groups provide a container for virtual firewall rules that control ingress (inbound to
instances) and egress (outbound from instances) network traffic at the port level. Security
groups use a default deny policy and only contain rules that allow specific traffic. Each port can
reference one or more security groups in an additive fashion. The firewall driver translates
security group rules to a configuration for the underlying packet filtering technology such as
iptables.

For more information, see Configuring shared security groups .
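
A minimal sketch of the additive model described above, using illustrative names (web and my-instance are placeholders):

$ openstack security group create web --description "Allow inbound web traffic"
$ openstack security group rule create --ingress --ethertype IPv4 --protocol tcp --dst-port 443 web
$ openstack server add security group my-instance web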

Manage port traffic.


With allowed address pairs you identify a specific MAC address, IP address, or both to allow
network traffic to pass through a port regardless of the subnet. When you define allowed
address pairs, you are able to use protocols like VRRP (Virtual Router Redundancy Protocol)
that float an IP address between two VM instances to enable fast data plane failover.

For more information, see Configuring allowed address pairs.
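
For example, to allow a VRRP virtual IP address to pass through an existing port (the port name and address below are placeholders):

$ openstack port set --allowed-address ip-address=192.168.10.100 vrrp-port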

Optimize large overlay networks.


Using the L2 Population driver you can enable broadcast, multicast, and unicast traffic to scale
out on large overlay networks.

For more information, see Configuring the L2 population driver .

Set ingress and egress limits for traffic on VM instances.


You can offer varying service levels for instances by using quality of service (QoS) policies to
apply rate limits to egress and ingress traffic. You can apply QoS policies to individual ports. You
can also apply QoS policies to a project network, where ports with no specific policy attached
inherit the policy.

For more information, see Configuring Quality of Service (QoS) policies .
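
A brief sketch of applying an egress bandwidth limit to a port; the policy and port names and the rate values are illustrative, and the QoS chapter covers the full set of supported rule types:

$ openstack network qos policy create bw-limiter
$ openstack network qos rule create --type bandwidth-limit --max-kbps 3000 --max-burst-kbits 300 --egress bw-limiter
$ openstack port set --qos-policy bw-limiter port1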

Manage the amount of network resources RHOSP projects can create.


With the Networking service quota options you can set limits on the amount of network
resources project users can create. This includes resources such as ports, subnets, networks,
and so on.

For more information, see Managing project quotas.
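
For example, an administrator might cap the number of ports, networks, and subnets a project can create; the values shown are arbitrary:

$ openstack quota set --ports 100 --networks 20 --subnets 20 <project>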

Optimize your VM instances for Network Functions Virtualization (NFV).


Instances can send and receive VLAN-tagged traffic over a single virtual NIC. This is particularly
useful for NFV applications (VNFs) that expect VLAN-tagged traffic, allowing a single virtual
NIC to serve multiple customers or services.

In a VLAN transparent network, you set up VLAN tagging in the VM instances. The VLAN tags
are transferred over the network and consumed by the VM instances on the same VLAN, and
ignored by other instances and devices. VLAN trunks support VLAN-aware instances by
combining VLANs into a single trunked port.

For more information, see VLAN-aware instances.
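
As a sketch of the trunk model, a parent port carries untagged traffic while subports map VLAN IDs to additional networks; the port names and VLAN ID here are placeholders, and the full workflow is in the VLAN-aware instances chapter:

$ openstack network trunk create --parent-port parent-port my-trunk
$ openstack network trunk set --subport port=child-port,segmentation-type=vlan,segmentation-id=101 my-trunk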

Control which projects can attach instances to a shared network.


Using role-based access control (RBAC) policies in the RHOSP Networking service, cloud
administrators can remove the ability for some projects to create networks and can instead
allow them to attach to pre-existing networks that correspond to their project.


For more information, see Configuring RBAC policies .
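
For example, to share an existing network with a single project rather than making it globally shared (the IDs are placeholders):

$ openstack network rbac create --target-project <project-id> --action access_as_shared --type network <network-id>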

1.2. NETWORKING SERVICE COMPONENTS


The Red Hat OpenStack Platform (RHOSP) Networking service (neutron) includes the following
components:

API server
The RHOSP networking API includes support for Layer 2 networking and IP Address
Management (IPAM), as well as an extension for a Layer 3 router construct that enables routing
between Layer 2 networks and gateways to external networks. RHOSP networking includes a
growing list of plug-ins that enable interoperability with various commercial and open source
network technologies, including routers, switches, virtual switches and software-defined
networking (SDN) controllers.

Modular Layer 2 (ML2) plug-in and agents


ML2 plugs and unplugs ports, creates networks or subnets, and provides IP addressing.

Messaging queue
Accepts and routes RPC requests between agents to complete API operations. The message queue is used in the ML2 plug-in for RPC between the neutron server and the neutron agents that run on each hypervisor, and in the ML2 mechanism drivers for Open vSwitch and Linux bridge.

1.3. MODULAR LAYER 2 (ML2) NETWORKING


Modular Layer 2 (ML2) is the Red Hat OpenStack Platform (RHOSP) networking core plug-in. The ML2
modular design enables the concurrent operation of mixed network technologies through mechanism
drivers. Open Virtual Network (OVN) is the default mechanism driver used with ML2.

The ML2 framework distinguishes between the two kinds of drivers that can be configured:

Type drivers
Define how an RHOSP network is technically realized.
Each available network type is managed by an ML2 type driver, and type drivers maintain any required type-specific network state. Type drivers validate the type-specific information for provider networks and are responsible for allocating a free segment in project networks. Examples of type drivers are GENEVE, GRE, VXLAN, and so on.

Mechanism drivers
Define the mechanism to access an RHOSP network of a certain type.
The mechanism driver takes the information established by the type driver and applies it to the
networking mechanisms that have been enabled. Examples of mechanism drivers are Open Virtual
Networking (OVN) and Open vSwitch (OVS).

Mechanism drivers can employ L2 agents, and by using RPC interact directly with external devices or
controllers. You can use multiple mechanism and type drivers simultaneously to access different
ports of the same virtual network.

Additional resources

Section 1.8, “Modular Layer 2 (ML2) type and mechanism driver compatibility”


1.4. ML2 NETWORK TYPES


Multiple network segment types can be operated concurrently. In addition, these network segments can
interconnect using ML2 support for multi-segmented networks. Ports are automatically bound to the
segment with connectivity; it is not necessary to bind ports to a specific segment. Depending on the
mechanism driver, ML2 supports the following network segment types:

Flat
All virtual machine (VM) instances reside on the same network, which can also be shared with the
hosts. No VLAN tagging or other network segregation occurs.
VLAN
With RHOSP networking users can create multiple provider or project networks using VLAN IDs
(802.1Q tagged) that correspond to VLANs present in the physical network. This allows instances to
communicate with each other across the environment. They can also communicate with dedicated
servers, firewalls, load balancers and other network infrastructure on the same Layer 2 VLAN.
You can use VLANs to segment network traffic for computers running on the same switch. This
means that you can logically divide your switch by configuring the ports to be members of different
networks — they are basically mini-LANs that you can use to separate traffic for security reasons.

For example, if your switch has 24 ports in total, you can assign ports 1-6 to VLAN200, and ports 7-18 to VLAN201. As a result, computers connected to VLAN200 are completely separate from those on VLAN201; they cannot communicate directly, and for traffic to pass between them it must go through a router, as if they were two separate physical switches. Firewalls can also be useful for governing which VLANs can communicate with each other.

GENEVE tunnels
GENEVE recognizes and accommodates the changing capabilities and needs of different devices in network virtualization. It provides a framework for tunneling rather than being prescriptive about the entire system. GENEVE flexibly defines the content of the metadata that is added during encapsulation and tries to adapt to various virtualization scenarios. It uses UDP as its transport protocol and is dynamic in size, using extensible option headers. GENEVE supports unicast, multicast, and broadcast. The GENEVE type driver is compatible with the ML2/OVN mechanism driver.
VXLAN and GRE tunnels
VXLAN and GRE use network overlays to support private communication between instances. An
RHOSP networking router is required to enable traffic to traverse outside of the GRE or VXLAN
project network. A router is also required to connect directly-connected project networks with
external networks, including the internet; the router provides the ability to connect to instances
directly from an external network using floating IP addresses. VXLAN and GRE type drivers are
compatible with the ML2/OVS mechanism driver.
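
To make the segment types concrete, an administrator might create networks of specific types with commands like the following. This is a sketch only: physnet1 and the segment ID are assumptions about the local physical network configuration, and provider attributes require administrative privileges:

$ openstack network create --provider-network-type vlan --provider-physical-network physnet1 --provider-segment 200 vlan-provider-net
$ openstack network create --provider-network-type geneve geneve-project-net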

Additional resources

Section 1.8, “Modular Layer 2 (ML2) type and mechanism driver compatibility”

1.5. MODULAR LAYER 2 (ML2) MECHANISM DRIVERS


Modular Layer 2 (ML2) plug-ins are implemented as mechanisms with a common code base. This
approach enables code reuse and eliminates much of the complexity around code maintenance and
testing.

Red Hat chose ML2/OVN as the default mechanism driver for all new deployments starting with RHOSP
16.0 because it offers immediate advantages over the ML2/OVS mechanism driver for most customers
today. Those advantages multiply with each release while we continue to enhance and improve the

12
CHAPTER 1. INTRODUCTION TO OPENSTACK NETWORKING

ML2/OVN feature set.

If your existing Red Hat OpenStack Platform (RHOSP) deployment uses the ML2/OVS mechanism
driver, you should start now to evaluate the benefits and feasibility of replacing the OVS driver with the
ML2/OVN mechanism driver. See the guide Migrating the Networking Service to the ML2 OVN
Mechanism Driver.

You enable mechanism drivers using the Orchestration service (heat) parameter,
NeutronMechanismDrivers. Here is an example from a heat custom environment file:

parameter_defaults:
  ...
  NeutronMechanismDrivers: ansible,ovn,baremetal
  ...

The order in which you specify the mechanism drivers matters. In the earlier example, if you want to bind
a port using the baremetal mechanism driver, then you must specify baremetal before ansible.
Otherwise, the ansible driver will bind the port, because it precedes baremetal in the list of values for
NeutronMechanismDrivers.

Additional resources

Migrating the Networking Service to the ML2 OVN Mechanism Driver

Neutron in Component, Plug-In, and Driver Support in Red Hat OpenStack Platform

Environment files in the Advanced Overcloud Customization guide

Including environment files in overcloud creation in the Advanced Overcloud Customization guide

1.6. OPEN VSWITCH


Open vSwitch (OVS) is a software-defined networking (SDN) virtual switch similar to the Linux software
bridge. OVS provides switching services to virtualized networks with support for industry standard
OpenFlow and sFlow. OVS can also integrate with physical switches using layer 2 features, such as STP,
LACP, and 802.1Q VLAN tagging. Open vSwitch version 1.11.0-1.el6 or later also supports tunneling with
VXLAN and GRE.

For more information about network interface bonds, see Network Interface Bonding in the Advanced
Overcloud Customization guide.

NOTE

To mitigate the risk of network loops in OVS, only a single interface or a single bond can
be a member of a given bridge. If you require multiple bonds or interfaces, you can
configure multiple bridges.

1.7. OPEN VIRTUAL NETWORK (OVN)


Open Virtual Network (OVN) is a system to support logical network abstraction in virtual machine and container environments. Sometimes called open source virtual networking for Open vSwitch, OVN complements the existing capabilities of OVS to add native support for logical network abstractions, such as logical L2 and L3 overlays, security groups, and services such as DHCP.


A physical network comprises physical wires, switches, and routers. A virtual network extends a physical
network into a hypervisor or container platform, bridging VMs or containers into the physical network.
An OVN logical network is a network implemented in software that is insulated from physical networks by
tunnels or other encapsulations. This allows IP and other address spaces used in logical networks to
overlap with those used on physical networks without causing conflicts. Logical network topologies can
be arranged without regard for the topologies of the physical networks on which they run. Thus, VMs
that are part of a logical network can migrate from one physical machine to another without network
disruption.

The encapsulation layer prevents VMs and containers connected to a logical network from
communicating with nodes on physical networks. For clustering VMs and containers, this can be
acceptable or even desirable, but in many cases VMs and containers do need connectivity to physical
networks. OVN provides multiple forms of gateways for this purpose. An OVN deployment consists of
several components:

Cloud Management System (CMS)


integrates OVN into a physical network by managing the OVN logical network elements and
connecting the OVN logical network infrastructure to physical network elements. Some examples
include OpenStack and OpenShift.
OVN databases
store data representing the OVN logical and physical networks.
Hypervisors
run Open vSwitch and translate the OVN logical network into OpenFlow on a physical or virtual
machine.
Gateways
extend a tunnel-based OVN logical network into a physical network by forwarding packets between
tunnels and the physical network infrastructure.

1.8. MODULAR LAYER 2 (ML2) TYPE AND MECHANISM DRIVER COMPATIBILITY
Refer to the following table when planning your Red Hat OpenStack Platform data networks to
determine the network types each Modular Layer 2 (ML2) mechanism driver supports.

Table 1.1. Network types supported by ML2 mechanism drivers

Mechanism driver              Flat    GRE    VLAN    VXLAN    GENEVE
Open Virtual Network (OVN)    Yes     No     Yes     No       Yes
Open vSwitch (OVS)            Yes     Yes    Yes     Yes      No

1.9. EXTENSION DRIVERS FOR THE RHOSP NETWORKING SERVICE


The Red Hat OpenStack Platform (RHOSP) Networking service (neutron) is extensible. Extensions
serve two purposes: they allow the introduction of new features in the API without requiring a version
change, and they allow the introduction of vendor-specific niche functionality. Applications can
programmatically list available extensions by performing a GET on the /extensions URI. Note that this is
a versioned request; that is, an extension available in one API version might not be available in another.
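
For example, you can list the extensions that your deployment exposes with the openstack client, which is equivalent to performing a GET on the /extensions URI:

$ openstack extension list --network -c Name -c Alias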

The ML2 plug-in also supports extension drivers that allow other pluggable drivers to extend the core
resources implemented in the ML2 plug-in for network objects. Examples of extension drivers include
support for QoS, port security, and so on.


CHAPTER 2. WORKING WITH ML2/OVN


Red Hat OpenStack Platform (RHOSP) networks are managed by the Networking service (neutron).
The core of the Networking service is the Modular Layer 2 (ML2) plug-in, and the default mechanism driver for the RHOSP ML2 plug-in is the Open Virtual Networking (OVN) mechanism driver.

Earlier RHOSP versions used the Open vSwitch (OVS) mechanism driver by default, but Red Hat
recommends the ML2/OVN mechanism driver for most deployments.

If you upgrade from an RHOSP 13 ML2/OVS deployment to RHOSP 16, Red Hat recommends migrating
from ML2/OVS to ML2/OVN after the upgrade. In some cases, ML2/OVN might not meet your
requirements. In these cases you can deploy RHOSP with ML2/OVS.

2.1. LIST OF COMPONENTS IN THE RHOSP OVN ARCHITECTURE


The RHOSP OVN architecture replaces the OVS Modular Layer 2 (ML2) mechanism driver with the
OVN ML2 mechanism driver to support the Networking API. OVN provides networking services for the
Red Hat OpenStack Platform.

The OVN architecture consists of the following components and services:

ML2 plugin with OVN mechanism driver


The ML2 plug-in translates the OpenStack-specific networking configuration into the platform-
neutral OVN logical networking configuration. It typically runs on the Controller node.
OVN Northbound (NB) database (ovn-nb)
This database stores the logical OVN networking configuration from the OVN ML2 plugin. It typically
runs on the Controller node and listens on TCP port 6641.
OVN Northbound service (ovn-northd)
This service converts the logical networking configuration from the OVN NB database to the logical
data path flows and populates these on the OVN Southbound database. It typically runs on the
Controller node.
OVN Southbound (SB) database (ovn-sb)
This database stores the converted logical data path flows. It typically runs on the Controller node
and listens on TCP port 6642.
OVN controller (ovn-controller)
This controller connects to the OVN SB database and acts as the Open vSwitch controller to control and monitor network traffic. It runs on all Compute and gateway nodes where OS::TripleO::Services::OVNController is defined.
OVN metadata agent (ovn-metadata-agent)
This agent creates the HAProxy instances, OVS interfaces, and network namespaces used to proxy metadata API requests. The agent runs on all Compute and gateway nodes where OS::TripleO::Services::OVNMetadataAgent is defined.
OVS database server (OVSDB)
Hosts the OVN Northbound and Southbound databases. Also interacts with ovs-vswitchd to host
the OVS database conf.db.
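
As a quick sanity check (not a substitute for the validation steps in the troubleshooting chapter), you can confirm on a Controller node that the NB and SB databases are listening on their default ports:

$ sudo ss -tlnp | grep -E '6641|6642'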

NOTE

The schema file for the NB database is located in /usr/share/ovn/ovn-nb.ovsschema, and the SB database schema file is in /usr/share/ovn/ovn-sb.ovsschema.


2.2. THE OVN-CONTROLLER SERVICE ON COMPUTE NODES


The ovn-controller service runs on each Compute node and connects to the OVN southbound (SB)
database server to retrieve the logical flows. The ovn-controller translates these logical flows into
physical OpenFlow flows and adds the flows to the OVS bridge (br-int). To communicate with ovs-
vswitchd and install the OpenFlow flows, the ovn-controller connects to the local ovsdb-server
(which hosts conf.db) using the UNIX socket path that was passed when ovn-controller was started (for
example unix:/var/run/openvswitch/db.sock).

The ovn-controller service expects certain key-value pairs in the external_ids column of the
Open_vSwitch table; puppet-ovn uses puppet-vswitch to populate these fields. The following
example shows the key-value pairs that puppet-vswitch configures in the external_ids column:

hostname=<HOST NAME>
ovn-encap-ip=<IP OF THE NODE>
ovn-encap-type=geneve
ovn-remote=tcp:OVN_DBS_VIP:6642
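
You can inspect these values on a Compute node with ovs-vsctl, for example:

$ sudo ovs-vsctl get Open_vSwitch . external_ids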

2.3. OVN METADATA AGENT ON COMPUTE NODES

The OVN metadata agent is configured in the tripleo-heat-templates/deployment/ovn/ovn-metadata-container-puppet.yaml file and included in the default Compute role through
OS::TripleO::Services::OVNMetadataAgent. As such, the OVN metadata agent with default
parameters is deployed as part of the OVN deployment.

OpenStack guest instances access the Networking metadata service available at the link-local IP
address: 169.254.169.254. The neutron-ovn-metadata-agent has access to the host networks where the
Compute metadata API exists. Each HAProxy is in a network namespace that is not able to reach the
appropriate host network. HAProxy adds the necessary headers to the metadata API request and then
forwards the request to the neutron-ovn-metadata-agent over a UNIX domain socket.

The OVN Networking service creates a unique network namespace for each virtual network that enables
the metadata service. Each network accessed by the instances on the Compute node has a
corresponding metadata namespace (ovnmeta-<datapath_uuid>).
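
For example, to list the metadata namespaces that the agent has created on a Compute node:

$ sudo ip netns list | grep ovnmeta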

2.4. THE OVN COMPOSABLE SERVICE


Red Hat OpenStack Platform usually consists of nodes in pre-defined roles, such as nodes in Controller
roles, Compute roles, and different storage role types. Each of these default roles contains a set of
services that are defined in the core heat template collection.

In a default OSP 16.1 deployment, the ML2/OVN composable service runs on Controller nodes. You can
optionally create a custom Networker role and run the OVN composable service on dedicated
Networker nodes.

The OVN composable service ovn-dbs is deployed in a container called ovn-dbs-bundle. In a default
installation ovn-dbs is included in the Controller role and runs on Controller nodes. Because the service
is composable, you can assign it to another role, such as a Networker role.

If you assign the OVN composable service to another role, ensure that the service is co-located on the
same node as the pacemaker service, which controls the OVN database containers.

Related information

Deploying a Custom Role with ML2/OVN

SR-IOV with ML2/OVN and native OVN DHCP

2.5. LAYER 3 HIGH AVAILABILITY WITH OVN


OVN supports Layer 3 high availability (L3 HA) without any special configuration. OVN automatically
schedules the router port to all available gateway nodes that can act as an L3 gateway on the specified
external network. OVN L3 HA uses the gateway_chassis column in the OVN Logical_Router_Port
table. Most functionality is managed by OpenFlow rules with bundled active_passive outputs. The ovn-
controller handles the Address Resolution Protocol (ARP) responder and router enablement and
disablement. Gratuitous ARPs for FIPs and router external addresses are also periodically sent by the
ovn-controller.
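
One way to observe this scheduling is to list the gateway chassis entries in the Northbound database, for example. This assumes you run ovn-nbctl against the NB database address (the VIP placeholder below), as described in the aliases section of the troubleshooting chapter:

$ ovn-nbctl --db=tcp:<OVN_NB_DB_IP>:6641 list Gateway_Chassis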

NOTE

L3HA uses OVN to balance the routers back to the original gateway nodes to avoid any
nodes becoming a bottleneck.

BFD monitoring


OVN uses the Bidirectional Forwarding Detection (BFD) protocol to monitor the availability of the
gateway nodes. This protocol is encapsulated on top of the Geneve tunnels established from node to
node.

Each gateway node monitors all the other gateway nodes in a star topology in the deployment. Gateway
nodes also monitor the compute nodes to let the gateways enable and disable routing of packets and
ARP responses and announcements.

Each compute node uses BFD to monitor each gateway node and automatically steers external traffic,
such as source and destination Network Address Translation (SNAT and DNAT), through the active
gateway node for a given router. Compute nodes do not need to monitor other compute nodes.

NOTE

External network failures are not detected as would happen with an ML2-OVS
configuration.

L3 HA for OVN supports the following failure modes:

The gateway node becomes disconnected from the network (tunneling interface).

ovs-vswitchd stops (ovs-vswitchd is responsible for BFD signaling)

ovn-controller stops (ovn-controller removes itself as a registered node).

NOTE

This BFD monitoring mechanism only works for link failures, not for routing failures.

2.6. LIMITATIONS OF THE ML2/OVN MECHANISM DRIVER


Some features available with the ML2/OVS mechanism driver are not yet supported with the ML2/OVN
mechanism driver.

2.6.1. ML2/OVS features not yet supported by ML2/OVN

Feature: Distributed virtual routing (DVR) with OVN on VLAN project (tenant) networks

Notes: FIP traffic does not pass to a VLAN tenant network with ML2/OVN and DVR. DVR is enabled by
default in new ML2/OVN deployments and in ML2/OVN deployments that were migrated from ML2/OVS
deployments that had DVR enabled. If you need VLAN tenant networks with OVN, you can disable DVR.
To disable DVR, include the following lines in an environment file:

parameter_defaults:
  NeutronEnableDVR: false

Track this feature: https://bugzilla.redhat.com/show_bug.cgi?id=1704596,
https://bugzilla.redhat.com/show_bug.cgi?id=1766930

Feature: Provisioning Baremetal Machines with OVN DHCP

Notes: The built-in DHCP server on OVN presently cannot provision baremetal nodes. It cannot serve
DHCP for the provisioning networks. Chainbooting iPXE requires tagging (--dhcp-match in dnsmasq),
which is not supported in the OVN DHCP server.

Track this feature: https://bugzilla.redhat.com/show_bug.cgi?id=1622154

2.6.2. Core OVN limitations


North/south routing on VF(direct) ports on VLAN tenant networks does not work with SR-IOV because
the external ports are not colocated with the logical router’s gateway ports. See
https://bugs.launchpad.net/neutron/+bug/1875852.

2.7. LIMIT FOR NON-SECURE PORTS WITH ML2/OVN


Ports might become unreachable if you disable the port security plug-in extension in Red Hat OpenStack Platform (RHOSP) deployments with the default ML2/OVN mechanism driver and a large number
of ports.

In some large ML2/OVN RHOSP deployments, a flow chain limit inside ML2/OVN can drop ARP
requests that are targeted to ports where the security plug-in is disabled.

There is no documented maximum limit for the actual number of logical switch ports that ML2/OVN can
support, but the limit approximates 4,000 ports.

Attributes that contribute to the approximated limit are the number of resubmits in the OpenFlow
pipeline that ML2/OVN generates, and changes to the overall logical topology.
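
For reference, a port typically becomes "non-secure" when port security is disabled on it, for example (the port name is a placeholder):

$ openstack port set --disable-port-security <port>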

2.8. ML2/OVS TO ML2/OVN IN-PLACE MIGRATION: VALIDATED AND PROHIBITED SCENARIOS
Red Hat continues to test and refine in-place migration scenarios. Work with your Red Hat Technical
Account Manager or Global Professional Services to determine whether your OVS deployment meets
the criteria for a valid in-place migration scenario.

2.8.1. Validated ML2/OVS to ML2/OVN migration scenarios


DVR to DVR
Start: RHOSP 16.1.1 or later with OVS with DVR.
End: Same RHOSP version and release with OVN with DVR.

SR-IOV was not present in the starting environment or added during or after the migration.

Centralized routing + SR-IOV with virtual function (VF) ports only


Start: RHOSP 16.1.1 or later with OVS (no DVR) and SR-IOV.
End: RHOSP 16.1.1 with OVN (no DVR) and SR-IOV.

Workloads used only SR-IOV virtual function (VF) ports. SR-IOV physical function (PF) ports caused
migration failure.

2.8.2. ML2/OVS to ML2/OVN in-place migration scenarios that have not been
verified
You cannot perform an in-place ML2/OVS to ML2/OVN migration in the following scenarios until Red
Hat announces that the underlying issues are resolved.

OVS deployment uses iptables_hybrid firewall driver


You can migrate from the ML2/OVS to the ML2/OVN mechanism driver with the openvswitch
firewall driver, but not with the iptables_hybrid firewall driver. Migration with the iptables_hybrid
firewall driver is not supported. For more information, see
https://bugzilla.redhat.com/show_bug.cgi?id=2011450.
OVS deployment uses network functions virtualization (NFV)
Red Hat supports new deployments with ML2/OVN and NFV, but has not successfully tested
migration of an ML2/OVS and NFV deployment to ML2/OVN. To track progress on this issue, see
https://bugzilla.redhat.com/show_bug.cgi?id=1925290.
SR-IOV with physical function (PF) ports
Migration tests failed when any workload uses an SR-IOV PF port. To track progress on this issue,
see https://bugzilla.redhat.com/show_bug.cgi?id=1879546.
OVS uses trunk ports
If your ML2/OVS deployment uses trunk ports, do not perform an ML2/OVS to ML2/OVN migration.
The migration does not properly set up the trunked ports in the OVN environment. To track progress
on this issue, see https://bugzilla.redhat.com/show_bug.cgi?id=1857652.
DVR with VLAN project (tenant) networks
Do not migrate to ML2/OVN with DVR and VLAN project networks. You can migrate to ML2/OVN
with centralized routing. To track progress on this issue, see
https://bugzilla.redhat.com/show_bug.cgi?id=1766930.

2.8.3. ML2/OVS to ML2/OVN in-place migration and security group rules


Ensure that any custom security group rules in your originating ML2/OVS deployment are compatible
with the target ML2/OVN deployment.

For example, the default security group includes rules that allow egress to the DHCP server. If you
deleted those rules in your ML2/OVS deployment, ML2/OVS automatically adds implicit rules that allow
egress to the DHCP server. Those implicit rules are not supported by ML2/OVN, so in your target
ML2/OVN environment, DHCP and metadata traffic would not reach the DHCP server and the instance
would not boot. In this case, to restore DHCP access, you could add the following rules:

# Allow VM to contact dhcp server (ipv4)
openstack security group rule create --egress --ethertype IPv4 --protocol udp --dst-port 67 ${SEC_GROUP_ID}

# Allow VM to contact metadata server (ipv4)
openstack security group rule create --egress --ethertype IPv4 --protocol tcp --remote-ip 169.254.169.254 ${SEC_GROUP_ID}

# Allow VM to contact dhcp server (ipv6, non-slaac). Be aware that the remote-ip may vary depending on your use case!
openstack security group rule create --egress --ethertype IPv6 --protocol udp --dst-port 547 --remote-ip ff02::1:2 ${SEC_GROUP_ID}

# Allow VM to contact metadata server (ipv6)
openstack security group rule create --egress --ethertype IPv6 --protocol tcp --remote-ip fe80::a9fe:a9fe ${SEC_GROUP_ID}

2.9. USING ML2/OVS INSTEAD OF THE DEFAULT ML2/OVN IN A NEW RHOSP 16.1 DEPLOYMENT
In Red Hat OpenStack Platform (RHOSP) 16.0 and later deployments, the Modular Layer 2 plug-in with
Open Virtual Network (ML2/OVN) is the default mechanism driver for the RHOSP Networking service.
You can change this setting if your application requires the ML2/OVS mechanism driver.

Procedure

1. Log in to your undercloud as the stack user.

2. In the template file, /home/stack/templates/containers-prepare-parameter.yaml, use ovs instead of ovn as the value of the neutron_driver parameter:

parameter_defaults:
  ContainerImagePrepare:
  - set:
      ...
      neutron_driver: ovs

3. In the environment file, /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml, ensure that the NeutronNetworkType parameter includes vxlan or gre instead of geneve.

Example

parameter_defaults:
  ...
  NeutronNetworkType: 'vxlan'

4. Run the openstack overcloud deploy command and include the core heat templates,
environment files, and the files that you modified.

IMPORTANT

The order of the environment files is important because the parameters and
resources defined in subsequent environment files take precedence.

$ openstack overcloud deploy --templates \
  -e <your_environment_files> \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml \
  -e /home/stack/templates/containers-prepare-parameter.yaml

Additional resources


Environment files in the Advanced Overcloud Customization guide

Including environment files in overcloud creation in the Advanced Overcloud Customization guide

2.10. KEEPING ML2/OVS AFTER AN UPGRADE INSTEAD OF THE DEFAULT ML2/OVN
In Red Hat OpenStack Platform (RHOSP) 16.0 and later deployments, the Modular Layer 2 plug-in with
Open Virtual Network (ML2/OVN) is the default mechanism driver for the RHOSP Networking service.
If you upgrade from an earlier version of RHOSP that used ML2/OVS, you can migrate from ML2/OVS to ML2/OVN after the upgrade.

If instead you choose to keep using ML2/OVS after the upgrade, follow Red Hat’s upgrade procedure as
documented, and do not perform the ML2/OVS-to-ML2/OVN migration.

Additional resources

Framework for Upgrades (13 to 16.1) guide

Migrating the Networking Service to the ML2 OVN Mechanism Driver

2.11. DEPLOYING A CUSTOM ROLE WITH ML2/OVN


In a default OSP 16.1 deployment, the ML2/OVN composable service runs on Controller nodes. You can
optionally use supported custom roles like those described in the following examples.

Networker
Run the OVN composable services on dedicated networker nodes.
Networker with SR-IOV
Run the OVN composable services on dedicated networker nodes with SR-IOV.
Controller with SR-IOV
Run the OVN composable services on SR-IOV capable controller nodes.

You can also generate your own custom roles.

Limitations
The following limitations apply to the use of SR-IOV with ML2/OVN and native OVN DHCP in this
release.

All external ports are scheduled on a single gateway node because there is only one HA Chassis
Group for all of the ports.

North/south routing on VF(direct) ports on VLAN tenant networks does not work with SR-IOV
because the external ports are not colocated with the logical router’s gateway ports. See
https://bugs.launchpad.net/neutron/+bug/1875852.

Prerequisites

You know how to deploy custom roles. For more information see Composable Services and
Custom Roles in the Advanced Overcloud Customization guide.

Procedure


1. Log in to the undercloud host as the stack user and source the stackrc file.

$ source stackrc

2. Choose the custom roles file that is appropriate for your deployment. Use it directly in the
deploy command if it suits your needs as-is. Or you can generate your own custom roles file that
combines other custom roles files.

Deployment                                       Role               Role File

Networker role                                   Networker          Networker.yaml

Networker role with SR-IOV                       NetworkerSriov     NetworkerSriov.yaml

Co-located control and networker with SR-IOV     ControllerSriov    ControllerSriov.yaml

3. [Optional] Generate a new custom roles data file that combines one of these custom roles files with other custom roles files. Follow the instructions in Creating a roles_data file. Include the appropriate source role files depending on your deployment, as shown in the example command that follows this step.
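
Example

The following is a minimal sketch of generating a combined roles data file with the openstack overcloud roles generate command. The output path and the role names are assumptions for illustration; include the roles that apply to your deployment.

$ openstack overcloud roles generate \
  -o /home/stack/templates/roles_data_networker.yaml \
  Controller Compute Networker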

4. [Optional] To identify specific nodes for the role, you can create a specific hardware flavor and assign the flavor to specific nodes. Then use an environment file to define the flavor for the role and to specify a node count. For more information, see the example in Creating a new role.

5. Create an environment file as appropriate for your deployment.

Deployment                      Sample Environment File

Networker role                  neutron-ovn-dvr-ha.yaml

Networker role with SR-IOV      ovn-sriov.yaml

6. Include the following settings as appropriate for your deployment.

Deployment: Networker role

Settings:

  ControllerParameters:
    OVNCMSOptions: ""
  ControllerSriovParameters:
    OVNCMSOptions: ""
  NetworkerParameters:
    OVNCMSOptions: "enable-chassis-as-gw"
  NetworkerSriovParameters:
    OVNCMSOptions: ""

Deployment: Networker role with SR-IOV

Settings:

  OS::TripleO::Services::NeutronDhcpAgent: OS::Heat::None

  ControllerParameters:
    OVNCMSOptions: ""
  ControllerSriovParameters:
    OVNCMSOptions: ""
  NetworkerParameters:
    OVNCMSOptions: ""
  NetworkerSriovParameters:
    OVNCMSOptions: "enable-chassis-as-gw"

Deployment: Co-located control and networker with SR-IOV

Settings:

  OS::TripleO::Services::NeutronDhcpAgent: OS::Heat::None

  ControllerParameters:
    OVNCMSOptions: ""
  ControllerSriovParameters:
    OVNCMSOptions: "enable-chassis-as-gw"
  NetworkerParameters:
    OVNCMSOptions: ""
  NetworkerSriovParameters:
    OVNCMSOptions: ""

7. Deploy the overcloud. Include the environment file in your deployment command with the -e option. Include the custom roles data file in your deployment command with the -r option, for example, -r Networker.yaml or -r mycustomrolesfile.yaml. A sample deployment command follows.
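
Example

The following is a minimal sketch of such a deployment command. The roles data file and environment file names are assumptions based on the Networker example above; replace them with the files that you prepared for your deployment.

$ openstack overcloud deploy --templates \
  -r /home/stack/templates/roles_data_networker.yaml \
  -e <your_environment_files> \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovn-dvr-ha.yaml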

Verification steps

1. Ensure that ovn_metadata_agent is running on Controller and Networker nodes.

[heat-admin@controller-0 ~]$ sudo podman ps | grep ovn_metadata

Expect output similar to the following example.

a65125d9588d undercloud-0.ctlplane.localdomain:8787/rh-osbs/rhosp16-openstack-
neutron-metadata-agent-ovn:16.1_20200813.1 kolla_start 23 hours ago Up 21 hours
ago ovn_metadata_agent

2. Ensure that Controller nodes with OVN services or dedicated Networker nodes have been
configured as gateways for OVS.

[heat-admin@controller-0 ~]$ sudo ovs-vsctl get Open_Vswitch . external_ids:ovn-cms-options

Expect output similar to the following example.


external_ids:ovn-cms-options
enable-chassis-as-gw

Additional verification steps for SR-IOV deployments

1. Ensure that neutron_sriov_agent is running on compute nodes.

[heat-admin@compute-0 ~]$ sudo podman ps | grep neutron_sriov_agent

Expect output similar to the following example.

f54cbbf4523a undercloud-0.ctlplane.localdomain:8787/rh-osbs/rhosp16-openstack-neutron-
sriov-agent:16.2_20200813.1
kolla_start 23 hours ago Up 21 hours ago neutron_sriov_agent

2. Ensure that network-available SR-IOV NICs have been successfully detected.

[heat-admin@controller-0 ~]$ sudo podman exec -uroot galera-bundle-podman-0 mysql nova -e 'select hypervisor_hostname,pci_stats from compute_nodes;'

Expect output similar to the following example.

computesriov-1.localdomain {... {"dev_type": "type-PF", "physical_network": "datacentre", "trusted": "true"}, "count": 1}, ... {"dev_type": "type-VF", "physical_network": "datacentre", "trusted": "true", "parent_ifname": "enp7s0f3"}, "count": 5}, ...}
computesriov-0.localdomain {... {"dev_type": "type-PF", "physical_network": "datacentre", "trusted": "true"}, "count": 1}, ... {"dev_type": "type-VF", "physical_network": "datacentre", "trusted": "true", "parent_ifname": "enp7s0f3"}, "count": 5}, ...}

Additional resources

Composable services and custom roles in the Advanced Overcloud Customization guide.

2.12. SR-IOV WITH ML2/OVN AND NATIVE OVN DHCP


You can deploy a custom role to use SR-IOV in an ML2/OVN deployment with native OVN DHCP. See
Deploying a Custom Role with ML2/OVN .

Limitations
The following limitations apply to the use of SR-IOV with ML2/OVN and native OVN DHCP in this
release.

All external ports are scheduled on a single gateway node because there is only one HA Chassis
Group for all of the ports.

North/south routing on VF(direct) ports on VLAN tenant networks does not work with SR-IOV
because the external ports are not colocated with the logical router’s gateway ports. See
https://bugs.launchpad.net/neutron/+bug/1875852.

Additional resources

Composable services and custom roles in the Advanced Overcloud Customization guide.


CHAPTER 3. MANAGING PROJECT NETWORKS


Project networks help you to isolate network traffic for cloud computing. Steps to create a project
network include planning and creating the network, and adding subnets and routers.

3.1. VLAN PLANNING


When you plan your Red Hat OpenStack Platform deployment, you start with a number of subnets, from
which you allocate individual IP addresses. When you use multiple subnets you can segregate traffic
between systems into VLANs.

For example, it is ideal that your management or API traffic is not on the same network as systems that
serve web traffic. Traffic between VLANs travels through a router where you can implement firewalls to
govern traffic flow.

You must plan your VLANs as part of your overall plan that includes traffic isolation, high availability, and
IP address utilization for the various types of virtual networking resources in your deployment.

NOTE

The maximum number of VLANs in a single network, or in one OVS agent for a network
node, is 4094. In situations where you require more than the maximum number of VLANs,
you can create several provider networks (VXLAN networks) and several network nodes,
one per network. Each node can contain up to 4094 private networks.

3.2. TYPES OF NETWORK TRAFFIC


You can allocate separate VLANs for the different types of network traffic that you want to host. For
example, you can have separate VLANs for each of these types of networks. Only the External network
must be routable to the external physical network. In this release, director provides DHCP services.

NOTE

You do not require all of the isolated VLANs in this section for every OpenStack
deployment. For example, if your cloud users do not create ad hoc virtual networks on
demand, then you may not require a project network. If you want each VM to connect
directly to the same switch as any other physical system, connect your Compute nodes
directly to a provider network and configure your instances to use that provider network
directly.

Provisioning network - This VLAN is dedicated to deploying new nodes using director over
PXE boot. OpenStack Orchestration (heat) installs OpenStack onto the overcloud bare metal
servers. These servers attach to the physical network to receive the platform installation image
from the undercloud infrastructure.

Internal API network - The OpenStack services use the Internal API network for
communication, including API communication, RPC messages, and database communication. In
addition, this network is used for operational messages between controller nodes. When
planning your IP address allocation, note that each API service requires its own IP address.
Specifically, you must plan IP addresses for each of the following services:

vip-msg (amqp)

vip-keystone-int


vip-glance-int

vip-cinder-int

vip-nova-int

vip-neutron-int

vip-horizon-int

vip-heat-int

vip-ceilometer-int

vip-swift-int

vip-keystone-pub

vip-glance-pub

vip-cinder-pub

vip-nova-pub

vip-neutron-pub

vip-horizon-pub

vip-heat-pub

vip-ceilometer-pub

vip-swift-pub

NOTE

When using High Availability, Pacemaker moves VIP addresses between the physical
nodes.

Storage - Block Storage, NFS, iSCSI, and other storage services. Isolate this network to
separate physical Ethernet links for performance reasons.

Storage Management - OpenStack Object Storage (swift) uses this network to synchronise
data objects between participating replica nodes. The proxy service acts as the intermediary
interface between user requests and the underlying storage layer. The proxy receives incoming
requests and locates the necessary replica to retrieve the requested data. Services that use a
Ceph back end connect over the Storage Management network, since they do not interact with
Ceph directly but rather use the front end service. Note that the RBD driver is an exception; this
traffic connects directly to Ceph.

Project networks - Neutron provides each project with their own networks using either VLAN
segregation (where each project network is a network VLAN), or tunneling using VXLAN or
GRE. Network traffic is isolated within each project network. Each project network has an IP
subnet associated with it, and multiple project networks may use the same addresses.

External - The External network hosts the public API endpoints and connections to the
Dashboard (horizon). You can also use this network for SNAT. In a production deployment, it is
common to use a separate network for floating IP addresses and NAT.

Provider networks - Use provider networks to attach instances to existing network infrastructure. You can use provider networks to map directly to an existing physical network in the data center, using flat networking or VLAN tags. This allows an instance to share the same layer-2 network as a system external to the OpenStack Networking infrastructure.

3.3. IP ADDRESS CONSUMPTION


The following systems consume IP addresses from your allocated range:

Physical nodes - Each physical NIC requires one IP address. It is common practice to dedicate
physical NICs to specific functions. For example, allocate management and NFS traffic to
distinct physical NICs, sometimes with multiple NICs connecting across to different switches for
redundancy purposes.

Virtual IPs (VIPs) for High Availability - Plan to allocate between one and three VIPs for each
network that controller nodes share.

3.4. VIRTUAL NETWORKING


The following virtual resources consume IP addresses in OpenStack Networking. These resources are
considered local to the cloud infrastructure, and do not need to be reachable by systems in the external
physical network:

Project networks - Each project network requires a subnet that it can use to allocate IP
addresses to instances.

Virtual routers - Each router interface plugging into a subnet requires one IP address. If you
want to use DHCP, each router interface requires two IP addresses.

Instances - Each instance requires an address from the project subnet that hosts the instance.
If you require ingress traffic, you must allocate a floating IP address to the instance from the
designated external network.

Management traffic - Includes OpenStack Services and API traffic. All services share a small
number of VIPs. API, RPC and database services communicate on the internal API VIP.

3.5. ADDING NETWORK ROUTING


To allow traffic to be routed to and from your new network, you must add its subnet as an interface to an
existing virtual router:

1. In the dashboard, select Project > Network > Routers.

2. Select your virtual router name in the Routers list, and click Add Interface.
In the Subnet list, select the name of your new subnet. You can optionally specify an IP address
for the interface in this field.

3. Click Add Interface.


Instances on your network can now communicate with systems outside the subnet.
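
You can also add the router interface from the command line. For example, the following command attaches a subnet to an existing virtual router; the router and subnet names are placeholders for illustration.

$ openstack router add subnet my-router my-new-subnet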


3.6. EXAMPLE NETWORK PLAN


This example shows a number of networks that accommodate multiple subnets, with each subnet being
assigned a range of IP addresses:

Table 3.1. Example subnet plan

Subnet name                              Address range                      Number of addresses    Subnet Mask

Provisioning network                     192.168.100.1 - 192.168.100.250    250                    255.255.255.0

Internal API network                     172.16.1.10 - 172.16.1.250         241                    255.255.255.0

Storage                                  172.16.2.10 - 172.16.2.250         241                    255.255.255.0

Storage Management                       172.16.3.10 - 172.16.3.250         241                    255.255.255.0

Tenant network (GRE/VXLAN)               172.16.4.10 - 172.16.4.250         241                    255.255.255.0

External network (incl. floating IPs)    10.1.2.10 - 10.1.3.222             469                    255.255.254.0

Provider network (infrastructure)        10.10.3.10 - 10.10.3.250           241                    255.255.252.0

3.7. CREATING A NETWORK


Create a network so that your instances can communicate with each other and receive IP addresses
using DHCP. For more information about external network connections, see Bridging the physical
network.

When creating networks, it is important to know that networks can host multiple subnets. This is useful if
you intend to host distinctly different systems in the same network, and prefer a measure of isolation
between them. For example, you can designate that only webserver traffic is present on one subnet,
while database traffic traverses another. Subnets are isolated from each other, and any instance that
wants to communicate with another subnet must have their traffic directed by a router. Consider placing
systems that require a high volume of traffic amongst themselves in the same subnet, so that they do
not require routing, and can avoid the subsequent latency and load.

1. In the dashboard, select Project > Network > Networks.

2. Click +Create Network and specify the following values:

Network Name - Descriptive name, based on the role that the network will perform. If you are integrating the network with an external VLAN, consider appending the VLAN ID number to the name. For example, webservers_122, if you are hosting HTTP web servers in this subnet, and your VLAN tag is 122. Or you might use internal-only if you intend to keep the network traffic private, and not integrate the network with an external network.

Admin State - Controls whether the network is immediately available. Use this field to create the network in a Down state, where it is logically present but inactive. This is useful if you do not intend to enter the network into production immediately.

Create Subnet - Determines whether to create a subnet. For example, you might not want to create a subnet if you intend to keep this network as a placeholder without network connectivity.

3. Click the Next button, and specify the following values in the Subnet tab:

Subnet Name - Enter a descriptive name for the subnet.

Network Address - Enter the address in CIDR format, which contains the IP address range and subnet mask in one value. To determine the address, calculate the number of bits masked in the subnet mask and append that value to the IP address range. For example, the subnet mask 255.255.255.0 has 24 masked bits. To use this mask with the IPv4 address range 192.168.122.0, specify the address 192.168.122.0/24.

IP Version - Specifies the internet protocol version, where valid types are IPv4 or IPv6. The IP address range in the Network Address field must match whichever version you select.

Gateway IP - IP address of the router interface for your default gateway. This address is the next hop for routing any traffic destined for an external location, and must be within the range that you specify in the Network Address field. For example, if your CIDR network address is 192.168.122.0/24, then your default gateway is likely to be 192.168.122.1.

Disable Gateway - Disables forwarding and isolates the subnet.

4. Click Next to specify DHCP options:

Enable DHCP - Enables DHCP services for this subnet. You can use DHCP to automate the
distribution of IP settings to your instances.

IPv6 Address Configuration Modes - If you create an IPv6 network, you must specify how to allocate IPv6 addresses and additional information:

No Options Specified - Select this option if you want to set IP addresses manually, or if
you use a non OpenStack-aware method for address allocation.

SLAAC (Stateless Address Autoconfiguration) - Instances generate IPv6 addresses based on Router Advertisement (RA) messages sent from the OpenStack Networking router. Use this configuration to create an OpenStack Networking subnet with ra_mode set to slaac and address_mode set to slaac.

DHCPv6 stateful - Instances receive IPv6 addresses as well as additional options (for
example, DNS) from the OpenStack Networking DHCPv6 service. Use this
configuration to create a subnet with ra_mode set to dhcpv6-stateful and
address_mode set to dhcpv6-stateful.

DHCPv6 stateless - Instances generate IPv6 addresses based on Router Advertisement (RA) messages sent from the OpenStack Networking router. Additional options (for example, DNS) are allocated from the OpenStack Networking DHCPv6 service. Use this configuration to create a subnet with ra_mode set to dhcpv6-stateless and address_mode set to dhcpv6-stateless.

Allocation Pools - Range of IP addresses that you want DHCP to assign. For example, the
value 192.168.22.100,192.168.22.150 considers all IP addresses in that range as available for
allocation.

DNS Name Servers - IP addresses of the DNS servers available on the network. DHCP
distributes these addresses to the instances for name resolution.

IMPORTANT

For strategic services such as DNS, it is a best practice not to host them on
your cloud. For example, if your cloud hosts DNS and your cloud becomes
inoperable, DNS is unavailable and the cloud components cannot do lookups
on each other.

Host Routes - Static host routes. First, specify the destination network in CIDR format,
followed by the next hop that you want to use for routing (for example, 192.168.23.0/24,
10.1.31.1). Provide this value if you need to distribute static routes to instances.

5. Click Create.
You can view the complete network in the Networks tab. You can also click Edit to change any options as needed. When you create instances, you can configure them to use this subnet, and they receive any specified DHCP options.
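
If you prefer the command line, you can create an equivalent network and subnet with the openstack client. The names and CIDR values in this sketch are placeholders for illustration.

$ openstack network create webservers_122
$ openstack subnet create --network webservers_122 \
  --subnet-range 192.168.122.0/24 \
  --gateway 192.168.122.1 \
  --dhcp \
  webservers_subnet_122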

3.8. WORKING WITH SUBNETS


Use subnets to grant network connectivity to instances. Each instance is assigned to a subnet as part of
the instance creation process, therefore it’s important to consider proper placement of instances to
best accommodate their connectivity requirements.

You can create subnets only in pre-existing networks. Remember that project networks in OpenStack
Networking can host multiple subnets. This is useful if you intend to host distinctly different systems in
the same network, and prefer a measure of isolation between them.

For example, you can designate that only webserver traffic is present on one subnet, while database
traffic traverses another.

Subnets are isolated from each other, and any instance that wants to communicate with another subnet
must have their traffic directed by a router. Therefore, you can lessen network latency and load by
grouping systems in the same subnet that require a high volume of traffic between each other.

3.9. CREATING A SUBNET


To create a subnet, follow these steps:

1. In the dashboard, select Project > Network > Networks, and click the name of your network in
the Networks view.

2. Click Create Subnet, and specify the following values:

Subnet Name - Descriptive subnet name.

Network Address - Address in CIDR format, which contains the IP address range and subnet mask in one value. To determine the CIDR address, calculate the number of bits masked in the subnet mask and append that value to the IP address range. For example, the subnet mask 255.255.255.0 has 24 masked bits. To use this mask with the IPv4 address range 192.168.122.0, specify the address 192.168.122.0/24.

IP Version - Internet protocol version, where valid types are IPv4 or IPv6. The IP address range in the Network Address field must match whichever protocol version you select.

Gateway IP - IP address of the router interface for your default gateway. This address is the next hop for routing any traffic destined for an external location, and must be within the range that you specify in the Network Address field. For example, if your CIDR network address is 192.168.122.0/24, then your default gateway is likely to be 192.168.122.1.

Disable Gateway - Disables forwarding and isolates the subnet.

3. Click Next to specify DHCP options:

Enable DHCP - Enables DHCP services for this subnet. You can use DHCP to automate the
distribution of IP settings to your instances.

IPv6 Address Configuration Modes - If you create an IPv6 network, you must specify how to allocate IPv6 addresses and additional information:

No Options Specified - Select this option if you want to set IP addresses manually, or if
you use a non OpenStack-aware method for address allocation.

SLAAC (Stateless Address Autoconfiguration) - Instances generate IPv6 addresses based on Router Advertisement (RA) messages sent from the OpenStack Networking router. Use this configuration to create an OpenStack Networking subnet with ra_mode set to slaac and address_mode set to slaac.

DHCPv6 stateful - Instances receive IPv6 addresses as well as additional options (for
example, DNS) from the OpenStack Networking DHCPv6 service. Use this
configuration to create a subnet with ra_mode set to dhcpv6-stateful and
address_mode set to dhcpv6-stateful.

DHCPv6 stateless - Instances generate IPv6 addresses based on Router Advertisement (RA) messages sent from the OpenStack Networking router. Additional options (for example, DNS) are allocated from the OpenStack Networking DHCPv6 service. Use this configuration to create a subnet with ra_mode set to dhcpv6-stateless and address_mode set to dhcpv6-stateless.


Allocation Pools - Range of IP addresses that you want DHCP to assign. For example, the
value 192.168.22.100,192.168.22.150 considers all IP addresses in that range as available for
allocation.

DNS Name Servers - IP addresses of the DNS servers available on the network. DHCP
distributes these addresses to the instances for name resolution.

Host Routes - Static host routes. First, specify the destination network in CIDR format,
followed by the next hop that you want to use for routing (for example, 192.168.23.0/24,
10.1.31.1). Provide this value if you need to distribute static routes to instances.

4. Click Create.
You can view the subnet in the Subnets list. You can also click Edit to change any options as needed. When you create instances, you can configure them to use this subnet, and they receive any specified DHCP options.
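
Equivalently, you can create a subnet from the command line and pass the same DHCP options as the dashboard fields described above. The network name, subnet name, and address values in this sketch are placeholders for illustration.

$ openstack subnet create --network web-net \
  --subnet-range 192.168.22.0/24 \
  --gateway 192.168.22.1 \
  --dhcp \
  --allocation-pool start=192.168.22.100,end=192.168.22.150 \
  --dns-nameserver 192.168.22.253 \
  --host-route destination=192.168.23.0/24,gateway=192.168.22.254 \
  web-subnet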

3.10. ADDING A ROUTER


OpenStack Networking provides routing services using an SDN-based virtual router. Routers are a
requirement for your instances to communicate with external subnets, including those in the physical
network. Routers and subnets connect using interfaces, with each subnet requiring its own interface to
the router.

The default gateway of a router defines the next hop for any traffic received by the router. Its network is
typically configured to route traffic to the external physical network using a virtual bridge.

To create a router, complete the following steps:

1. In the dashboard, select Project > Network > Routers, and click Create Router.

2. Enter a descriptive name for the new router, and click Create router.

3. Click Set Gateway next to the entry for the new router in the Routers list.

4. In the External Network list, specify the network that you want to receive traffic destined for an
external location.

5. Click Set Gateway.


After you add a router, you must configure any subnets you have created to send traffic using
this router. You do this by creating interfaces between the subnet and the router.
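
A command-line equivalent, with placeholder names, resembles the following sketch; public01 is assumed to be an existing external network.

$ openstack router create my-router
$ openstack router set --external-gateway public01 my-router
$ openstack router add subnet my-router my-subnet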

IMPORTANT

The default routes for subnets must not be overwritten. When the default route for a
subnet is removed, the L3 agent automatically removes the corresponding route in the
router namespace too, and network traffic cannot flow to and from the associated
subnet. If the existing router namespace route has been removed, to fix this problem,
perform these steps:

1. Disassociate all floating IPs on the subnet.

2. Detach the router from the subnet.

3. Re-attach the router to the subnet.

4. Re-attach all floating IPs.


3.11. PURGING ALL RESOURCES AND DELETING A PROJECT


Use the openstack project purge command to delete all resources that belong to a particular project, and then delete the project itself.

For example, to purge the resources of the test-project project, and then delete the project, run the
following commands:

# openstack project list


+----------------------------------+--------------+
| ID | Name |
+----------------------------------+--------------+
| 02e501908c5b438dbc73536c10c9aac0 | test-project |
| 519e6344f82e4c079c8e2eabb690023b | services |
| 80bf5732752a41128e612fe615c886c6 | demo |
| 98a2f53c20ce4d50a40dac4a38016c69 | admin |
+----------------------------------+--------------+

# openstack project purge --project 02e501908c5b438dbc73536c10c9aac0

3.12. DELETING A ROUTER


You can delete a router if it has no connected interfaces.

To remove its interfaces and delete a router, complete the following steps:

1. In the dashboard, select Project > Network > Routers, and click the name of the router that you
want to delete.

2. Select the interfaces of type Internal Interface, and click Delete Interfaces.

3. From the Routers list, select the target router and click Delete Routers.
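
From the command line, you can achieve the same result by removing the router interfaces and then deleting the router. The router and subnet names in this sketch are placeholders.

$ openstack router remove subnet my-router my-subnet
$ openstack router delete my-router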

3.13. DELETING A SUBNET


You can delete a subnet if it is no longer in use. However, if any instances are still configured to use the
subnet, the deletion attempt fails and the dashboard displays an error message.

Complete the following steps to delete a specific subnet in a network:

1. In the dashboard, select Project > Network > Networks.

2. Click the name of your network.

3. Select the target subnet, and click Delete Subnets.
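
The command-line equivalent is a single command; the subnet name is a placeholder.

$ openstack subnet delete web-subnet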

3.14. DELETING A NETWORK


Sometimes you must delete a network that was previously created, for example, as housekeeping or as part of a decommissioning process. You must first remove or detach any interfaces where the network is still in use before you can delete the network.

To delete a network in your project, together with any dependent interfaces, complete the following
steps:


1. In the dashboard, select Project > Network > Networks.


Remove all router interfaces associated with the target network subnets.

To remove an interface, find the ID number of the network that you want to delete by clicking on
your target network in the Networks list, and looking at the ID field. All the subnets associated
with the network share this value in the Network ID field.

2. Navigate to Project > Network > Routers, click the name of your virtual router in the Routers
list, and locate the interface attached to the subnet that you want to delete.
You can distinguish this subnet from the other subnets by the IP address that served as the
gateway IP. You can further validate the distinction by ensuring that the network ID of the
interface matches the ID that you noted in the previous step.

3. Click the Delete Interface button for the interface that you want to delete.

4. Select Project > Network > Networks, and click the name of your network.

5. Click the Delete Subnet button for the subnet that you want to delete.

NOTE

If you are still unable to remove the subnet at this point, ensure it is not already
being used by any instances.

6. Select Project > Network > Networks, and select the network you would like to delete.

7. Click Delete Networks.
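
After the dependent interfaces are removed, you can also delete a network from the command line; the network name is a placeholder.

$ openstack network delete web-net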


CHAPTER 4. CONNECTING VM INSTANCES TO PHYSICAL NETWORKS
You can directly connect your VM instances to an external network using flat and VLAN provider
networks.

4.1. OVERVIEW OF THE OPENSTACK NETWORKING TOPOLOGY


OpenStack Networking (neutron) has two categories of services distributed across a number of node
types.

Neutron server - This service runs the OpenStack Networking API server, which provides the
API for end-users and services to interact with OpenStack Networking. This server also
integrates with the underlying database to store and retrieve project network, router, and
load balancer details, among others.

Neutron agents - These are the services that perform the network functions for OpenStack
Networking:

neutron-dhcp-agent - manages DHCP IP addressing for project private networks.

neutron-l3-agent - performs layer 3 routing between project private networks, the external
network, and others.

Compute node - This node hosts the hypervisor that runs the virtual machines, also known as
instances. A Compute node must be wired directly to the network in order to provide external
connectivity for instances. This node is typically where the Layer 2 (L2) agents run, such as neutron-openvswitch-agent.

Additional resources

Placement of OpenStack Networking services

4.2. PLACEMENT OF OPENSTACK NETWORKING SERVICES


The OpenStack Networking services can either run together on the same physical server, or on separate
dedicated servers, which are named according to their roles:

Controller node - The server that runs API service.

Network node - The server that runs the OpenStack Networking agents.

Compute node - The hypervisor server that hosts the instances.

The steps in this chapter apply to an environment that contains these three node types. If your
deployment has both the Controller and Network node roles on the same physical node, then you must
perform the steps from both sections on that server. This also applies for a High Availability (HA)
environment, where all three nodes might be running the Controller node and Network node services
with HA. As a result, you must complete the steps in sections applicable to Controller and Network nodes
on all three nodes.

Additional resources

Overview of the OpenStack Networking topology


4.3. CONFIGURING FLAT PROVIDER NETWORKS


You can use flat provider networks to connect instances directly to the external network. This is useful if
you have multiple physical networks and separate physical interfaces, and intend to connect each
Compute and Network node to those external networks.

Prerequisites

You have multiple physical networks.
This example uses physical networks called physnet1 and physnet2, respectively.

You have separate physical interfaces.
This example uses separate physical interfaces, eth0 and eth1, respectively.

Procedure

1. On the undercloud host, logged in as the stack user, create a custom YAML environment file.

Example

$ vi /home/stack/templates/my-modules-environment.yaml

TIP

The Red Hat OpenStack Platform Orchestration service (heat) uses a set of plans called
templates to install and configure your environment. You can customize aspects of the
overcloud with a custom environment file , which is a special type of template that provides
customization for your orchestration templates.

2. In the YAML environment file under parameter_defaults, use the NeutronBridgeMappings parameter to specify which OVS bridges are used for accessing external networks.

Example

parameter_defaults:
  NeutronBridgeMappings: 'physnet1:br-net1,physnet2:br-net2'

3. In the custom NIC configuration template for the Controller and Compute nodes, configure the
bridges with interfaces attached.

Example

...
- type: ovs_bridge
  name: br-net1
  mtu: 1500
  use_dhcp: false
  members:
  - type: interface
    name: eth0
    mtu: 1500
    use_dhcp: false
    primary: true
- type: ovs_bridge
  name: br-net2
  mtu: 1500
  use_dhcp: false
  members:
  - type: interface
    name: eth1
    mtu: 1500
    use_dhcp: false
    primary: true
...

4. Run the openstack overcloud deploy command and include the templates and the
environment files, including this modified custom NIC template and the new environment file.

IMPORTANT

The order of the environment files is important because the parameters and
resources defined in subsequent environment files take precedence.

Example

$ openstack overcloud deploy --templates \
  -e [your-environment-files] \
  -e /home/stack/templates/my-modules-environment.yaml

Verification

1. Create an external network (public01) as a flat network and associate it with the configured
physical network (physnet1).
Configure it as a shared network (using --share) to let other users create VM instances that
connect to the external network directly.

Example

# openstack network create --share --provider-network-type flat --provider-physical-network physnet1 --external public01

2. Create a subnet (public_subnet) using the openstack subnet create command.

Example

# openstack subnet create --no-dhcp --allocation-pool start=192.168.100.20,end=192.168.100.100 --gateway 192.168.100.1 --network public01 public_subnet

3. Create a VM instance and connect it directly to the newly-created external network.

Example

$ openstack server create --image rhel --flavor my_flavor --network public01 my_instance


Additional resources

Custom network interface templates in the Advanced Overcloud Customization guide

Environment files in the Advanced Overcloud Customization guide

Including environment files in overcloud creation in the Advanced Overcloud Customization


guide

network create in the Command Line Interface Reference

subnet create in the Command Line Interface Reference

server create in the Command Line Interface Reference

4.4. HOW DOES THE FLAT PROVIDER NETWORK PACKET FLOW WORK?
This section describes in detail how traffic flows to and from an instance with flat provider network
configuration.

The flow of outgoing traffic in a flat provider network


The following diagram describes the packet flow for traffic leaving an instance and arriving directly at an
external network. After you configure the br-ex external bridge, add the physical interface to the bridge,
and spawn an instance to a Compute node, the resulting configuration of interfaces and bridges
resembles the configuration in the following diagram (if using the iptables_hybrid firewall driver):


1. Packets leave the eth0 interface of the instance and arrive at the linux bridge qbr-xx.

2. Bridge qbr-xx is connected to br-int using veth pair qvb-xx <-> qvo-xxx. This is because the
bridge is used to apply the inbound/outbound firewall rules defined by the security group.

3. Interface qvb-xx is connected to the qbr-xx linux bridge, and qvoxx is connected to the br-int
Open vSwitch (OVS) bridge.

An example configuration of the qbr-xx Linux bridge:

# brctl show
qbr269d4d73-e7 8000.061943266ebb no qvb269d4d73-e7
tap269d4d73-e7

The configuration of qvo-xx on br-int:

# ovs-vsctl show
Bridge br-int
fail_mode: secure
Interface "qvof63599ba-8f"
Port "qvo269d4d73-e7"
tag: 5
Interface "qvo269d4d73-e7"

NOTE

Port qvo-xx is tagged with the internal VLAN tag associated with the flat provider
network. In this example, the VLAN tag is 5. When the packet reaches qvo-xx, the VLAN
tag is appended to the packet header.

The packet is then moved to the br-ex OVS bridge using the patch-peer int-br-ex <-> phy-br-ex.

Example configuration of the patch-peer on br-int:

# ovs-vsctl show
Bridge br-int
fail_mode: secure
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}

Example configuration of the patch-peer on br-ex:

Bridge br-ex
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port br-ex
Interface br-ex
type: internal


When this packet reaches phy-br-ex on br-ex, an OVS flow inside br-ex strips the VLAN tag (5) and
forwards it to the physical interface.

In the following example, the output shows the port number of phy-br-ex as 2.

# ovs-ofctl show br-ex


OFPT_FEATURES_REPLY (xid=0x2): dpid:00003440b5c90dc6
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST
SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE

2(phy-br-ex): addr:ba:b5:7b:ae:5c:a2
config: 0
state: 0
speed: 0 Mbps now, 0 Mbps max

The following output shows any packet that arrives on phy-br-ex (in_port=2) with a VLAN tag of 5
(dl_vlan=5). In addition, an OVS flow in br-ex strips the VLAN tag and forwards the packet to the
physical interface.

# ovs-ofctl dump-flows br-ex


NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=4703.491s, table=0, n_packets=3620, n_bytes=333744, idle_age=0, priority=1
actions=NORMAL
cookie=0x0, duration=3890.038s, table=0, n_packets=13, n_bytes=1714, idle_age=3764,
priority=4,in_port=2,dl_vlan=5 actions=strip_vlan,NORMAL
cookie=0x0, duration=4702.644s, table=0, n_packets=10650, n_bytes=447632, idle_age=0,
priority=2,in_port=2 actions=drop

If the physical interface is another VLAN-tagged interface, then the physical interface adds the tag to
the packet.

The flow of incoming traffic in a flat provider network


This section contains information about the flow of incoming traffic from the external network until it
arrives at the interface of the instance.


1. Incoming traffic arrives at eth1 on the physical node.

2. The packet passes to the br-ex bridge.

3. The packet moves to br-int via the patch-peer phy-br-ex <--> int-br-ex.

In the following example, int-br-ex uses port number 15. See the entry containing 15(int-br-ex):

# ovs-ofctl show br-int


OFPT_FEATURES_REPLY (xid=0x2): dpid:00004e67212f644d
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST
SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE
15(int-br-ex): addr:12:4e:44:a9:50:f4
config: 0
state: 0
speed: 0 Mbps now, 0 Mbps max

Observing the traffic flow on br-int

1. When the packet arrives at int-br-ex, an OVS flow rule within the br-int bridge amends the
packet to add the internal VLAN tag 5. See the entry for actions=mod_vlan_vid:5:

# ovs-ofctl dump-flows br-int


NXST_FLOW reply (xid=0x4):


cookie=0x0, duration=5351.536s, table=0, n_packets=12118, n_bytes=510456, idle_age=0, priority=1 actions=NORMAL
cookie=0x0, duration=4537.553s, table=0, n_packets=3489, n_bytes=321696, idle_age=0, priority=3,in_port=15,vlan_tci=0x0000 actions=mod_vlan_vid:5,NORMAL
cookie=0x0, duration=5350.365s, table=0, n_packets=628, n_bytes=57892, idle_age=4538, priority=2,in_port=15 actions=drop
cookie=0x0, duration=5351.432s, table=23, n_packets=0, n_bytes=0, idle_age=5351, priority=0 actions=drop

2. The second rule manages packets that arrive on int-br-ex (in_port=15) with no VLAN tag (vlan_tci=0x0000). This rule adds VLAN tag 5 to the packet (actions=mod_vlan_vid:5,NORMAL) and forwards it to qvoxxx.

3. qvoxxx accepts the packet and forwards it to qvbxx, after stripping away the VLAN tag.

4. The packet then reaches the instance.

NOTE

VLAN tag 5 is an example VLAN that was used on a test Compute node with a flat
provider network; this value was assigned automatically by neutron-openvswitch-agent.
This value may be different for your own flat provider network, and can differ for the
same network on two separate Compute nodes.

Additional resources

Troubleshooting instance-physical network connections on flat provider networks

4.5. TROUBLESHOOTING INSTANCE-PHYSICAL NETWORK CONNECTIONS ON FLAT PROVIDER NETWORKS
The output provided in "How does the flat provider network packet flow work?" provides sufficient
debugging information for troubleshooting a flat provider network, should anything go wrong. The
following steps contain further information about the troubleshooting process.

Procedure
1. Review the bridge_mappings:

Verify that the physical network name you use (for example, physnet1) is consistent with the contents
of the bridge_mapping configuration as shown in this example:

# grep bridge_mapping /etc/neutron/plugins/ml2/openvswitch_agent.ini


bridge_mappings = physnet1:br-ex

# openstack network show provider-flat


...
| provider:physical_network | physnet1
...

2. Review the network configuration:

Confirm that the network is created as external, and uses the flat type:

# openstack network show provider-flat


...
| provider:network_type | flat |
| router:external | True |
...

3. Review the patch-peer:

Run the ovs-vsctl show command, and verify that br-int and br-ex are connected using a patch-peer
int-br-ex <--> phy-br-ex.

# ovs-vsctl show
Bridge br-int
fail_mode: secure
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}

Example configuration of the patch-peer on br-ex:

Bridge br-ex
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port br-ex
Interface br-ex
type: internal

This connection is created when you restart the neutron-openvswitch-agent service, if bridge_mapping is correctly configured in /etc/neutron/plugins/ml2/openvswitch_agent.ini. Re-check the bridge_mapping setting if the connection is not created after you restart the service.

4. Review the network flows:

Run ovs-ofctl dump-flows br-ex and ovs-ofctl dump-flows br-int and review whether the flows strip
the internal VLAN IDs for outgoing packets, and add VLAN IDs for incoming packets. This flow is first
added when you spawn an instance to this network on a specific Compute node.

If this flow is not created after spawning the instance, verify that the network is created as flat, is
external, and that the physical_network name is correct. In addition, review the
bridge_mapping settings.

Finally, review the ifcfg-br-ex and ifcfg-ethx configuration. Ensure that ethX is added as a port
within br-ex, and that ifcfg-br-ex and ifcfg-ethx have an UP flag in the output of ip a.

The following output shows eth1 is a port in br-ex:

Bridge br-ex
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port "eth1"
Interface "eth1"


The following example demonstrates that eth1 is configured as an OVS port, and that the kernel knows
to transfer all packets from the interface, and send them to the OVS bridge br-ex. This can be observed
in the entry: master ovs-system.

# ip a
5: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state
UP qlen 1000

Additional resources

How does the flat provider network packet flow work?

Configuring bridge mappings

4.6. CONFIGURING VLAN PROVIDER NETWORKS


When you connect multiple VLAN-tagged interfaces on a single NIC to multiple provider networks,
these new VLAN provider networks can connect VM instances directly to external networks.

Prerequisites

You have a physical network, with a range of VLANs.
This example uses a physical network called physnet1, with a range of VLANs, 171-172.

Your Network nodes and Compute nodes are connected to a physical network using a physical
interface.
This example uses Network nodes and Compute nodes that are connected to a physical
network, physnet1, using a physical interface, eth1.

The switch ports that these interfaces connect to must be configured to trunk the required
VLAN ranges.

Procedure

1. On the undercloud host, logged in as the stack user, create a custom YAML environment file.

Example

$ vi /home/stack/templates/my-modules-environment.yaml

TIP

The Red Hat OpenStack Platform Orchestration service (heat) uses a set of plans called
templates to install and configure your environment. You can customize aspects of the
overcloud with a custom environment file , which is a special type of template that provides
customization for your orchestration templates.

2. In the YAML environment file under parameter_defaults, use NeutronTypeDrivers to specify


your network type drivers.

Example


parameter_defaults:
  NeutronTypeDrivers: vxlan,flat,vlan

3. Configure the NeutronNetworkVLANRanges setting to reflect the physical network and VLAN
ranges in use:

Example

parameter_defaults:
  NeutronTypeDrivers: 'vxlan,flat,vlan'
  NeutronNetworkVLANRanges: 'physnet1:171:172'

4. Create an external network bridge (br-ex), and associate a port (eth1) with it.
This example configures eth1 to use br-ex:

Example

parameter_defaults:
  NeutronTypeDrivers: 'vxlan,flat,vlan'
  NeutronNetworkVLANRanges: 'physnet1:171:172'
  NeutronBridgeMappings: 'datacentre:br-ex,tenant:br-int'

5. Run the openstack overcloud deploy command and include the core templates and the
environment files, including this new environment file.

IMPORTANT

The order of the environment files is important because the parameters and
resources defined in subsequent environment files take precedence.

Example

$ openstack overcloud deploy --templates \
  -e [your-environment-files] \
  -e /home/stack/templates/my-modules-environment.yaml

Verification

1. Create the external networks as type vlan, and associate them with the configured
physical_network.
When you create the external networks, use the --share option so that users in other projects can share the external networks and can connect VM instances directly.

Run the following example command to create two networks: one for VLAN 171, and another for
VLAN 172:

Example

$ openstack network create \
  --provider-network-type vlan \
  --external \
  --provider-physical-network physnet1 \
  --provider-segment 171 \
  --share \
  provider-vlan171

$ openstack network create \
  --provider-network-type vlan \
  --external \
  --provider-physical-network physnet1 \
  --provider-segment 172 \
  --share \
  provider-vlan172

2. Create a number of subnets and configure them to use the external network.
You can use either openstack subnet create or the dashboard to create these subnets. Ensure
that the external subnet details you have received from your network administrator are correctly
associated with each VLAN.

In this example, VLAN 171 uses subnet 10.65.217.0/24 and VLAN 172 uses 10.65.218.0/24:

Example

$ openstack subnet create \
  --network provider-vlan171 \
  --subnet-range 10.65.217.0/24 \
  --dhcp \
  --gateway 10.65.217.254 \
  subnet-provider-171

$ openstack subnet create \
  --network provider-vlan172 \
  --subnet-range 10.65.218.0/24 \
  --dhcp \
  --gateway 10.65.218.254 \
  subnet-provider-172

Additional resources

Custom network interface templates in the Advanced Overcloud Customization guide

Environment files in the Advanced Overcloud Customization guide

Including environment files in overcloud creation in the Advanced Overcloud Customization


guide

network create in the Command Line Interface Reference

subnet create in the Command Line Interface Reference

4.7. HOW DOES THE VLAN PROVIDER NETWORK PACKET FLOW WORK?
This section describes in detail how traffic flows to and from an instance with VLAN provider network
configuration.


The flow of outgoing traffic in a VLAN provider network


The following diagram describes the packet flow for traffic leaving an instance and arriving directly at a
VLAN provider external network. This example uses two instances attached to the two VLAN networks
(171 and 172). After you configure br-ex, add a physical interface to it, and spawn an instance to a
Compute node, the resulting configuration of interfaces and bridges resembles the configuration in the
following diagram:

1. Packets leaving the eth0 interface of the instance arrive at the linux bridge qbr-xx connected to
the instance.

2. qbr-xx is connected to br-int using veth pair qvbxx <→ qvoxxx.

3. qvbxx is connected to the linux bridge qbr-xx and qvoxx is connected to the Open vSwitch
bridge br-int.

Example configuration of qbr-xx on the Linux bridge. This example features two instances and two corresponding linux bridges:

# brctl show
bridge name bridge id STP enabled interfaces
qbr84878b78-63 8000.e6b3df9451e0 no qvb84878b78-63
tap84878b78-63

qbr86257b61-5d 8000.3a3c888eeae6 no qvb86257b61-5d
tap86257b61-5d


The configuration of qvoxx on br-int:

options: {peer=phy-br-ex}
Port "qvo86257b61-5d"
tag: 3
Interface "qvo86257b61-5d"
Port "qvo84878b78-63"
tag: 2
Interface "qvo84878b78-63"

qvoxx is tagged with the internal VLAN tag associated with the VLAN provider network. In this
example, the internal VLAN tag 2 is associated with the VLAN provider network provider-171
and VLAN tag 3 is associated with VLAN provider network provider-172. When the packet
reaches qvoxx, this VLAN tag is added to the packet header.

The packet is then moved to the br-ex OVS bridge using patch-peer int-br-ex <→ phy-br-ex.
Example patch-peer on br-int:

Bridge br-int
fail_mode: secure
Port int-br-ex
Interface int-br-ex
type: patch
options: {peer=phy-br-ex}

Example configuration of the patch peer on br-ex:

Bridge br-ex
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port br-ex
Interface br-ex
type: internal

When this packet reaches phy-br-ex on br-ex, an OVS flow inside br-ex replaces the internal
VLAN tag with the actual VLAN tag associated with the VLAN provider network.

The output of the following command shows that the port number of phy-br-ex is 4:

# ovs-ofctl show br-ex


4(phy-br-ex): addr:32:e7:a1:6b:90:3e
config: 0
state: 0
speed: 0 Mbps now, 0 Mbps max

The following command shows any packet that arrives on phy-br-ex (in_port=4) which has VLAN tag 2
(dl_vlan=2). Open vSwitch replaces the VLAN tag with 171 (actions=mod_vlan_vid:171,NORMAL) and forwards the packet to the physical interface. The command also shows any packet that arrives on phy-br-ex (in_port=4) which has VLAN tag 3 (dl_vlan=3). Open vSwitch replaces the VLAN tag with 172
(actions=mod_vlan_vid:172,NORMAL) and forwards the packet to the physical interface. The
neutron-openvswitch-agent adds these rules.


# ovs-ofctl dump-flows br-ex


NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=6527.527s, table=0, n_packets=29211, n_bytes=2725576, idle_age=0,
priority=1 actions=NORMAL
cookie=0x0, duration=2939.172s, table=0, n_packets=117, n_bytes=8296, idle_age=58,
priority=4,in_port=4,dl_vlan=3 actions=mod_vlan_vid:172,NORMAL
cookie=0x0, duration=6111.389s, table=0, n_packets=145, n_bytes=9368, idle_age=98,
priority=4,in_port=4,dl_vlan=2 actions=mod_vlan_vid:171,NORMAL
cookie=0x0, duration=6526.675s, table=0, n_packets=82, n_bytes=6700, idle_age=2462,
priority=2,in_port=4 actions=drop

This packet is then forwarded to physical interface eth1.

The flow of incoming traffic in a VLAN provider network


The following example flow was tested on a Compute node using VLAN tag 2 for provider network
provider-171 and VLAN tag 3 for provider network provider-172. The flow uses port 18 on the integration
bridge br-int.

Your VLAN provider network may require a different configuration. Also, the configuration requirement
for a network may differ between two different Compute nodes.

The output of the following command shows int-br-ex with port number 18:

# ovs-ofctl show br-int


18(int-br-ex): addr:fe:b7:cb:03:c5:c1
config: 0
state: 0
speed: 0 Mbps now, 0 Mbps max

The output of the following command shows the flow rules on br-int.

# ovs-ofctl dump-flows br-int


NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=6770.572s, table=0, n_packets=1239, n_bytes=127795, idle_age=106, priority=1 actions=NORMAL
cookie=0x0, duration=3181.679s, table=0, n_packets=2605, n_bytes=246456, idle_age=0, priority=3,in_port=18,dl_vlan=172 actions=mod_vlan_vid:3,NORMAL
cookie=0x0, duration=6353.898s, table=0, n_packets=5077, n_bytes=482582, idle_age=0, priority=3,in_port=18,dl_vlan=171 actions=mod_vlan_vid:2,NORMAL
cookie=0x0, duration=6769.391s, table=0, n_packets=22301, n_bytes=2013101, idle_age=0, priority=2,in_port=18 actions=drop
cookie=0x0, duration=6770.463s, table=23, n_packets=0, n_bytes=0, idle_age=6770, priority=0 actions=drop

Incoming flow example


This example demonstrates the following br-int OVS flow:


cookie=0x0, duration=3181.679s, table=0, n_packets=2605, n_bytes=246456, idle_age=0, priority=3,in_port=18,dl_vlan=172 actions=mod_vlan_vid:3,NORMAL

A packet with VLAN tag 172 from the external network reaches the br-ex bridge via eth1 on the
physical node.

The packet moves to br-int via the patch-peer phy-br-ex <-> int-br-ex.

The packet matches the flow’s criteria (in_port=18,dl_vlan=172).

The flow actions (actions=mod_vlan_vid:3,NORMAL) replace the VLAN tag 172 with internal
VLAN tag 3 and forwards the packet to the instance with normal Layer 2 processing.

Additional resources

Troubleshooting instance-physical network connections on VLAN provider networks

4.8. TROUBLESHOOTING INSTANCE-PHYSICAL NETWORK CONNECTIONS ON VLAN PROVIDER NETWORKS
Refer to the packet flow described in "How does the VLAN provider network packet flow work?" when
troubleshooting connectivity in a VLAN provider network. In addition, review the following configuration
options:

Procedure
1. Verify that the physical network name is used consistently. In this example, physnet1 is used consistently
while creating the network, and within the bridge_mapping configuration:

# grep bridge_mapping /etc/neutron/plugins/ml2/openvswitch_agent.ini


bridge_mappings = physnet1:br-ex

# openstack network show provider-vlan171


...
| provider:physical_network | physnet1
...

2. Confirm that the network was created as external, is type vlan, and uses the correct
segmentation_id value:

# openstack network show provider-vlan171


...
| provider:network_type | vlan |
| provider:physical_network | physnet1 |
| provider:segmentation_id | 171 |
...

3. Run ovs-vsctl show and verify that br-int and br-ex are connected using the patch-peer int-br-ex
<→ phy-br-ex.
This connection is created while restarting neutron-openvswitch-agent, provided that the
bridge_mapping is correctly configured in /etc/neutron/plugins/ml2/openvswitch_agent.ini.
Recheck the bridge_mapping setting if this is not created even after restarting the service.

4. To review the flow of outgoing packets, run ovs-ofctl dump-flows br-ex and ovs-ofctl dump-flows


br-int, and verify that the flows map the internal VLAN IDs to the external VLAN ID (segmentation_id).
For incoming packets, map the external VLAN ID to the internal VLAN ID.
This flow is added by the neutron OVS agent when you spawn an instance to this network for the first
time. If this flow is not created after spawning the instance, ensure that the network is created as vlan, is
external, and that the physical_network name is correct. In addition, re-check the bridge_mapping
settings.
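
For reference, the two flow-inspection commands as you run them on the node that hosts the bridges:

# ovs-ofctl dump-flows br-ex
# ovs-ofctl dump-flows br-int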

5. Finally, re-check the ifcfg-br-ex and ifcfg-ethx configuration. Ensure that br-ex includes port ethX,
and that both ifcfg-br-ex and ifcfg-ethx have an UP flag in the output of the ip a command.
For example, the following output shows that eth1 is a port in br-ex:

Bridge br-ex
Port phy-br-ex
Interface phy-br-ex
type: patch
options: {peer=int-br-ex}
Port "eth1"
Interface "eth1"

The following command shows that eth1 has been added as a port, and that the kernel is configured to
move all packets from the interface to the OVS bridge br-ex. This is demonstrated by the entry: master
ovs-system.

# ip a
5: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state
UP qlen 1000

Additional resources

How does the VLAN provider network packet flow work?

4.9. ENABLING MULTICAST SNOOPING FOR PROVIDER NETWORKS IN AN ML2/OVS DEPLOYMENT

To prevent flooding multicast packets to every port in a Red Hat OpenStack Platform (RHOSP)
provider network, you must enable multicast snooping. In RHOSP deployments that use the Modular
Layer 2 plug-in with the Open vSwitch mechanism driver (ML2/OVS), you do this by declaring the
RHOSP Orchestration (heat) NeutronEnableIgmpSnooping parameter in a YAML-formatted
environment file.

IMPORTANT

You should thoroughly test and understand any multicast snooping configuration before
applying it to a production environment. Misconfiguration can break multicasting or cause
erratic network behavior.

Prerequisites

Your configuration must only use ML2/OVS provider networks.

Your physical routers must also have IGMP snooping enabled.

That is, the physical router must send IGMP query packets on the provider network to solicit
regular IGMP reports from multicast group members to maintain the snooping cache in OVS
(and for physical networking).

An RHOSP Networking service security group rule must be in place to allow inbound IGMP to
the VM instances (or port security disabled).
In this example, a rule is created for the ping_ssh security group:

Example

$ openstack security group rule create --protocol igmp --ingress ping_ssh

Procedure

1. On the undercloud host, logged in as the stack user, create a custom YAML environment file.

Example

$ vi /home/stack/templates/my-ovs-environment.yaml

TIP

The Orchestration service (heat) uses a set of plans called templates to install and configure
your environment. You can customize aspects of the overcloud with a custom environment file,
which is a special type of template that provides customization for your heat templates.

2. In the YAML environment file under parameter_defaults, set NeutronEnableIgmpSnooping to true.

parameter_defaults:
NeutronEnableIgmpSnooping: true
...

IMPORTANT

Ensure that you add a whitespace character between the colon (:) and true.

3. Run the openstack overcloud deploy command and include the core heat templates,
environment files, and this new custom environment file.

IMPORTANT

The order of the environment files is important as the parameters and resources
defined in subsequent environment files take precedence.

Example

$ openstack overcloud deploy --templates \
-e [your-environment-files] \
-e /home/stack/templates/my-ovs-environment.yaml


Verification

Verify that the multicast snooping is enabled.

Example

# sudo ovs-vsctl list bridge br-int

Sample output

...
mcast_snooping_enable: true
...
other_config: {mac-table-size="50000", mcast-snooping-disable-flood-unregistered=True}
...

Additional resources

Neutron in Component, Plug-In, and Driver Support in Red Hat OpenStack Platform

Environment files in the Advanced Overcloud Customization guide

Including environment files in overcloud creation in the Advanced Overcloud Customization


guide

Networking (neutron) Parameters in the Overcloud Parameters guide

Creating a security group in the Creating and Managing Instances guide

4.10. ENABLING MULTICAST IN AN ML2/OVN DEPLOYMENT


To support multicast traffic, modify the deployment’s security configuration to allow multicast traffic to
reach the virtual machine (VM) instances in the multicast group. To prevent multicast traffic flooding,
enable IGMP snooping.

IMPORTANT

Test and understand any multicast snooping configuration before applying it to a


production environment. Misconfiguration can break multicasting or cause erratic
network behavior.

Prerequisites

An OpenStack deployment with the ML2/OVN mechanism driver.

Procedure

1. Configure security to allow multicast traffic to the appropriate VM instances. For instance,
create a pair of security group rules to allow IGMP traffic from the IGMP querier to enter and
exit the VM instances, and a third rule to allow multicast traffic.

Example
A security group mySG allows IGMP traffic to enter and exit the VM instances.


openstack security group rule create --protocol igmp --ingress mySG

openstack security group rule create --protocol igmp --egress mySG

Another rule allows multicast traffic to reach VM instances.

openstack security group rule create --protocol udp mySG

As an alternative to setting security group rules, some operators choose to selectively disable
port security on the network. If you choose to disable port security, consider and plan for any
related security risks.

2. Set the heat parameter NeutronEnableIgmpSnooping: True in an environment file on the
undercloud node. For instance, add the following lines to ovn-extras.yaml.

Example

parameter_defaults:
NeutronEnableIgmpSnooping: True

3. Include the environment file in the openstack overcloud deploy command with any other
environment files that are relevant to your environment and deploy the overcloud.

$ openstack overcloud deploy \
--templates \
-e <other_overcloud_environment_files> \
-e ovn-extras.yaml

Replace <other_overcloud_environment_files> with the list of environment files that are part
of your existing deployment.

Verification

1. Verify that the multicast snooping is enabled. List the northbound database Logical_Switch
table.

$ ovn-nbctl list Logical_Switch

Sample output

_uuid : d6a2fbcd-aaa4-4b9e-8274-184238d66a15
other_config : {mcast_flood_unregistered="false", mcast_snoop="true"}
...

The Networking service (neutron) igmp_snooping_enable configuration is translated into the
mcast_snoop option set in the other_config column of the Logical_Switch table in the OVN
Northbound Database. Note that mcast_flood_unregistered is always “false”.

2. Show the IGMP groups.


$ ovn-sbctl list IGMP_group

Sample output

_uuid : 2d6cae4c-bd82-4b31-9c63-2d17cbeadc4e
address : "225.0.0.120"
chassis : 34e25681-f73f-43ac-a3a4-7da2a710ecd3
datapath : eaf0f5cc-a2c8-4c30-8def-2bc1ec9dcabc
ports : [5eaf9dd5-eae5-4749-ac60-4c1451901c56, 8a69efc5-38c5-48fb-bbab-
30f2bf9b8d45]
...

Additional resources

Neutron in Component, Plug-In, and Driver Support in Red Hat OpenStack Platform

Environment files in the Advanced Overcloud Customization guide

Including environment files in overcloud creation in the Advanced Overcloud Customization


guide

4.11. ENABLING COMPUTE METADATA ACCESS


Instances connected as described in this chapter are directly attached to the provider external networks,
and have external routers configured as their default gateway. No OpenStack Networking (neutron)
routers are used. This means that neutron routers cannot be used to proxy metadata requests from
instances to the nova-metadata server, which may result in failures while running cloud-init. However,
this issue can be resolved by configuring the dhcp agent to proxy metadata requests. You can enable
this functionality in /etc/neutron/dhcp_agent.ini. For example:

enable_isolated_metadata = True
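
The option belongs in the [DEFAULT] section of the file. The following is a minimal sketch of the
relevant lines, followed by a restart of the DHCP agent so that it rereads the file; the container
name neutron_dhcp is an assumption and can differ in your deployment:

[DEFAULT]
enable_isolated_metadata = True

$ sudo podman restart neutron_dhcp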

4.12. FLOATING IP ADDRESSES


You can use the same network to allocate floating IP addresses to instances, even if the floating IPs are
already associated with private networks. The addresses that you allocate as floating IPs from this
network are bound to the qrouter-xxx namespace on the Network node, and perform DNAT-SNAT to
the associated private IP address. In contrast, the IP addresses that you allocate for direct external
network access are bound directly inside the instance, and allow the instance to communicate directly
with the external network.


CHAPTER 5. MANAGING FLOATING IP ADDRESSES


In addition to having a private, fixed IP address, VM instances can have a public, or floating, IP address
to communicate with other networks. The information in this section describes how to create and
manage floating IPs with the Red Hat OpenStack Platform (RHOSP) Networking service (neutron).

5.1. CREATING FLOATING IP POOLS


You can use floating IP addresses to direct ingress network traffic to your OpenStack instances. First,
you must define a pool of validly routable external IP addresses, which you can then assign to instances
dynamically. OpenStack Networking routes all incoming traffic destined for that floating IP to the
instance that you associate with the floating IP.

NOTE

OpenStack Networking allocates floating IP addresses to all projects (tenants) from the
same IP ranges in CIDR format. As a result, all projects can consume floating IPs from
every floating IP subnet. You can manage this behavior using quotas for specific projects.
For example, you can set the default to 10 for ProjectA and ProjectB, while setting the
quota for ProjectC to 0.
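
For example, a sketch of how you might set these quotas with the openstack CLI; ProjectA and
ProjectC are placeholder project names:

$ openstack quota set --floating-ips 10 ProjectA
$ openstack quota set --floating-ips 0 ProjectC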

Procedure

When you create an external subnet, you can also define the floating IP allocation pool.

$ openstack subnet create --no-dhcp --allocation-pool \
start=IP_ADDRESS,end=IP_ADDRESS --gateway IP_ADDRESS \
--network NETWORK_NAME --subnet-range SUBNET_RANGE SUBNET_NAME

If the subnet hosts only floating IP addresses, consider disabling DHCP allocation with the --no-
dhcp option in the openstack subnet create command.

Example

$ openstack subnet create --no-dhcp --allocation-pool \
start=192.168.100.20,end=192.168.100.100 --gateway 192.168.100.1 \
--network public --subnet-range 192.168.100.0/24 public_subnet

Verification

You can verify that the pool is configured properly by assigning a random floating IP to an
instance, as described in "Assigning a random floating IP" in Additional resources.

Additional resources

subnet create in the Command Line Interface Reference

Assigning a random floating IP

5.2. ASSIGNING A SPECIFIC FLOATING IP


You can assign a specific floating IP address to a VM instance.


Procedure

Allocate a floating IP address to an instance by using the openstack server add floating ip
command.

Example

$ openstack server add floating ip prod-serv1 192.0.2.200

Validation steps

Confirm that your floating IP is associated with your instance by using the openstack server
show command.

Example

$ openstack server show prod-serv1

Sample output

+-----------------------------+------------------------------------------+
| Field | Value |
+-----------------------------+------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2021-08-11T14:45:37.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | public=198.51.100.56,192.0.2.200 |
| | |
| config_drive | |
| created | 2021-08-11T14:44:54Z |
| flavor | review-ephemeral |
| | (8130dd45-78f6-44dc-8173-4d6426b8e520) |
| hostId | 2308c8d8f60ed5394b1525122fb5bf8ea55c78b8 |
| | 0ec6157eca4488c9 |
| id | aef3ca09-887d-4d20-872d-1d1b49081958 |
| image | rhel8 |
| | (20724bfe-93a9-4341-a5a3-78b37b3a5dfb) |
| key_name | example-keypair |
| name | prod-serv1 |
| progress |0 |
| project_id | bd7a8c4a19424cf09a82627566b434fa |
| properties | |
| security_groups | name='default' |
| status | ACTIVE |
| updated | 2021-08-11T14:45:37Z |
| user_id | 4b7e19a0d723310fd92911eb2fe59743a3a5cd32 |
| | 45f76ffced91096196f646b5 |
| volumes_attached | |
+-----------------------------+------------------------------------------+


Additional resources

server add floating ip in the Command Line Interface Reference

server show in the Command Line Interface Reference

Assigning a random floating IP

5.3. CREATING AN ADVANCED NETWORK


Advanced network options are available to administrators when they create a network in the Dashboard
from the Admin view. Use these options to specify projects and to define the network type that you
want to use.

Procedure

1. In the dashboard, select Admin > Networks > Create Network > Project.

2. Select the project that you want to host the new network from the Project drop-down list.

3. Review the options in Provider Network Type:

Local - Traffic remains on the local Compute host and is effectively isolated from any
external networks.

Flat - Traffic remains on a single network and can also be shared with the host. No VLAN
tagging or other network segregation takes place.

VLAN - Create a network using a VLAN ID that corresponds to a VLAN present in the
physical network. This option allows instances to communicate with systems on the same
layer 2 VLAN.

GRE - Use a network overlay that spans multiple nodes for private communication between
instances. Traffic egressing the overlay must be routed.

VXLAN - Similar to GRE, and uses a network overlay to span multiple nodes for private
communication between instances. Traffic egressing the overlay must be routed.

4. Click Create Network.


Review the Project Network Topology to validate that the network has been successfully
created.

Additional resources

Assigning a specific floating IP

Assigning a random floating IP

5.4. ASSIGNING A RANDOM FLOATING IP


You can dynamically allocate floating IP addresses to VM instances from a pool of external IP addresses.

Prerequisites


A pool of routable external IP addresses.


For more information, see Section 5.1, “Creating floating IP pools”.

Procedure

1. Enter the following command to allocate a floating IP address from the pool. In this example, the
network is named public.

Example

$ openstack floating ip create public

Sample output
In the following example, the newly allocated floating IP is 192.0.2.200. You can assign it to an
instance.

+---------------------+--------------------------------------------------+
| Field | Value |
+---------------------+--------------------------------------------------+
| fixed_ip_address | None |
| floating_ip_address | 192.0.2.200 |
| floating_network_id | f0dcc603-f693-4258-a940-0a31fd4b80d9 |
| id | 6352284c-c5df-4792-b168-e6f6348e2620 |
| port_id | None |
| router_id | None |
| status | ACTIVE |
+---------------------+--------------------------------------------------+

2. Enter the following command to locate your instance:

$ openstack server list

Sample output

+-------------+-------------+--------+-------------+-------+-------------+
| ID | Name | Status | Networks | Image | Flavor |
+-------------+-------------+--------+-------------+-------+-------------+
| aef3ca09-88 | prod-serv1 | ACTIVE | public=198. | rhel8 | review- |
| 7d-4d20-872 | | | 51.100.56 | | ephemeral |
| d-1d1b49081 | | | | | |
| 958 | | | | | |
| | | | | | |
+-------------+-------------+--------+-------------+-------+-------------+

3. Associate the instance name or ID with the floating IP.

Example

$ openstack server add floating ip prod-serv1 192.0.2.200

Validation steps


Enter the following command to confirm that your floating IP is associated with your instance.

Example

$ openstack server show prod-serv1

Sample output

+-----------------------------+------------------------------------------+
| Field | Value |
+-----------------------------+------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2021-08-11T14:45:37.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | public=198.51.100.56,192.0.2.200 |
| | |
| config_drive | |
| created | 2021-08-11T14:44:54Z |
| flavor | review-ephemeral |
| | (8130dd45-78f6-44dc-8173-4d6426b8e520) |
| hostId | 2308c8d8f60ed5394b1525122fb5bf8ea55c78b8 |
| | 0ec6157eca4488c9 |
| id | aef3ca09-887d-4d20-872d-1d1b49081958 |
| image | rhel8 |
| | (20724bfe-93a9-4341-a5a3-78b37b3a5dfb) |
| key_name | example-keypair |
| name | prod-serv1 |
| progress |0 |
| project_id | bd7a8c4a19424cf09a82627566b434fa |
| properties | |
| security_groups | name='default' |
| status | ACTIVE |
| updated | 2021-08-11T14:45:37Z |
| user_id | 4b7e19a0d723310fd92911eb2fe59743a3a5cd32 |
| | 45f76ffced91096196f646b5 |
| volumes_attached | |
+-----------------------------+------------------------------------------+

Additional resources

floating ip create in the Command Line Interface Reference

server add floating ip in the Command Line Interface Reference

server show in the Command Line Interface Reference

Creating floating IP pools


5.5. CREATING MULTIPLE FLOATING IP POOLS


OpenStack Networking supports one floating IP pool for each L3 agent. Therefore, you must scale your
L3 agents to create additional floating IP pools.

Procedure

Make sure that in /var/lib/config-data/neutron/etc/neutron/neutron.conf the property
handle_internal_only_routers is set to True for only one L3 agent in your environment. This
option configures the L3 agent to manage only non-external routers.
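
For reference, the setting appears in the [DEFAULT] section of neutron.conf; a minimal sketch:

[DEFAULT]
handle_internal_only_routers = True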

Additional resources

Creating floating IP pools

Assigning a random floating IP

5.6. BRIDGING THE PHYSICAL NETWORK


Bridge your virtual network to the physical network to enable connectivity to and from virtual instances.

In this procedure, the example physical interface, eth0, is mapped to the bridge, br-ex; the virtual bridge
acts as the intermediary between the physical network and any virtual networks.

As a result, all traffic traversing eth0 uses the configured Open vSwitch to reach instances.

To map a physical NIC to the virtual Open vSwitch bridge, complete the following steps:

Procedure

1. Open /etc/sysconfig/network-scripts/ifcfg-eth0 in a text editor, and update the following
parameters with values appropriate for the network at your site:

IPADDR

NETMASK

GATEWAY

DNS1 (name server)


Here is an example:

# vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
ONBOOT=yes

2. Open /etc/sysconfig/network-scripts/ifcfg-br-ex in a text editor and update the virtual bridge
parameters with the IP address values that were previously allocated to eth0:

# vi /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static

IPADDR=192.168.120.10
NETMASK=255.255.255.0
GATEWAY=192.168.120.1
DNS1=192.168.120.1
ONBOOT=yes
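
After you save both files, restart networking so that the changes take effect. A sketch, which
assumes that the host uses the legacy network-scripts service to manage these ifcfg files:

# systemctl restart network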

You can now assign floating IP addresses to instances and make them available to the physical
network.

Additional resources

Configuring bridge mappings

5.7. ADDING AN INTERFACE


You can use interfaces to interconnect routers with subnets so that routers can direct any traffic that
instances send to destinations outside of their intermediate subnet.

To add a router interface and connect the new interface to a subnet, complete these steps:

NOTE

This procedure uses the Network Topology feature. Using this feature, you can see a
graphical representation of all your virtual routers and networks while you perform
network management tasks.

1. In the dashboard, select Project > Network > Network Topology.

2. Locate the router that you want to manage, hover your mouse over it, and click Add Interface.

3. Specify the Subnet that you want to connect to the router.


You can also specify an IP address. The address is useful for testing and troubleshooting
purposes, since a successful ping to this interface indicates that the traffic is routing as
expected.

4. Click Add interface.


The Network Topology diagram automatically updates to reflect the new interface connection
between the router and subnet.

5.8. DELETING AN INTERFACE


You can remove an interface to a subnet if you no longer require the router to direct traffic for the
subnet.

To delete an interface, complete the following steps:

1. In the dashboard, select Project > Network > Routers.

2. Click the name of the router that hosts the interface that you want to delete.

3. Select the interface type (Internal Interface), and click Delete Interfaces.


CHAPTER 6. TROUBLESHOOTING NETWORKS


The diagnostic process of troubleshooting network connectivity in Red Hat OpenStack Platform is
similar to the diagnostic process for physical networks. If you use VLANs, you can consider the virtual
infrastructure as a trunked extension of the physical network, rather than a wholly separate environment.
There are some differences between troubleshooting an ML2/OVS network and the default, ML2/OVN
network.

6.1. BASIC PING TESTING


The ping command is a useful tool for analyzing network connectivity problems. The results serve as a
basic indicator of network connectivity, but might not entirely exclude all connectivity issues, such as a
firewall blocking the actual application traffic. The ping command sends traffic to specific destinations,
and then reports back whether the attempts were successful.

NOTE

The ping command is an ICMP operation. To use ping, you must allow ICMP traffic to
traverse any intermediary firewalls.

Ping tests are most useful when run from the machine experiencing network issues, so it may be
necessary to connect to the command line via the VNC management console if the machine seems to
be completely offline.

For example, the following ping test command validates multiple layers of network infrastructure in
order to succeed; name resolution, IP routing, and network switching must all function correctly:

$ ping www.example.com

PING e1890.b.akamaiedge.net (125.56.247.214) 56(84) bytes of data.


64 bytes from a125-56.247-214.deploy.akamaitechnologies.com (125.56.247.214): icmp_seq=1
ttl=54 time=13.4 ms
64 bytes from a125-56.247-214.deploy.akamaitechnologies.com (125.56.247.214): icmp_seq=2
ttl=54 time=13.5 ms
64 bytes from a125-56.247-214.deploy.akamaitechnologies.com (125.56.247.214): icmp_seq=3
ttl=54 time=13.4 ms
^C

You can terminate the ping command with Ctrl-c, after which a summary of the results is presented.
Zero percent packet loss indicates that the connection was stable and did not time out.

--- e1890.b.akamaiedge.net ping statistics ---


3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 13.461/13.498/13.541/0.100 ms

The results of a ping test can be very revealing, depending on which destination you test. For example, in
the following diagram VM1 is experiencing some form of connectivity issue. The possible destinations
are numbered in blue, and the conclusions drawn from a successful or failed result are presented:


1. The internet - a common first step is to send a ping test to an internet location, such as
www.example.com.

Success: This test indicates that all the various network points in between the machine and
the Internet are functioning correctly. This includes the virtual and physical network
infrastructure.

Failure: There are various ways in which a ping test to a distant internet location can fail. If
other machines on your network are able to successfully ping the internet, that proves the
internet connection is working, and the issue is likely within the configuration of the local
machine.

2. Physical router - This is the router interface that the network administrator designates to direct
traffic onward to external destinations.

Success: Ping tests to the physical router can determine whether the local network and
underlying switches are functioning. These packets do not traverse the router, so they do
not prove whether there is a routing issue present on the default gateway.

Failure: This indicates that the problem lies between VM1 and the default gateway. The
router/switches might be down, or you may be using an incorrect default gateway. Compare
the configuration with that on another server that you know is functioning correctly. Try
pinging another server on the local network.

3. Neutron router - This is the virtual SDN (Software-defined Networking) router that Red Hat
OpenStack Platform uses to direct the traffic of virtual machines.

Success: The firewall allows ICMP traffic and the Networking node is online.

Failure: Confirm whether ICMP traffic is permitted in the security group of the instance (an
example rule is shown after this list). Check that the Networking node is online, confirm that all
the required services are running, and review the L3 agent log (/var/log/neutron/l3-agent.log).


4. Physical switch - The physical switch manages traffic between nodes on the same physical
network.

Success: Traffic sent by a VM to the physical switch must pass through the virtual network
infrastructure, indicating that this segment is functioning correctly.

Failure: Check that the physical switch port is configured to trunk the required VLANs.

5. VM2 - Attempt to ping a VM on the same subnet, on the same Compute node.

Success: The NIC driver and basic IP configuration on VM1 are functional.

Failure: Validate the network configuration on VM1. Alternatively, the firewall on VM2 might be
blocking ping traffic. In addition, verify the virtual switching configuration and review the
Open vSwitch log files.
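
For destination 3, if ICMP is not permitted, the following sketch adds a rule that allows it; the
security group name default is an assumption and might differ in your environment:

$ openstack security group rule create --protocol icmp --ingress default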

6.2. VIEWING CURRENT PORT STATUS


A basic troubleshooting task is to create an inventory of all of the ports attached to a router and
determine the port status (DOWN or ACTIVE).

Procedure

1. To view all the ports that attach to the router named r1, run the following command:

# openstack port list --router r1

Sample output

+--------------------------------------+------+-------------------+--------------------------------------------
------------------------------------------+
| id | name | mac_address | fixed_ips
|
+--------------------------------------+------+-------------------+--------------------------------------------
------------------------------------------+
| b58d26f0-cc03-43c1-ab23-ccdb1018252a | | fa:16:3e:94:a7:df | {"subnet_id": "a592fdba-
babd-48e0-96e8-2dd9117614d3", "ip_address": "192.168.200.1"} |
| c45e998d-98a1-4b23-bb41-5d24797a12a4 | | fa:16:3e:ee:6a:f7 | {"subnet_id": "43f8f625-
c773-4f18-a691-fd4ebfb3be54", "ip_address": "172.24.4.225"} |
+--------------------------------------+------+-------------------+--------------------------------------------
------------------------------------------+

2. To view the details of each port, run the following command. Include the port ID of the port that
you want to view. The result includes the port status, indicated in the following example as
having an ACTIVE state:

# openstack port show b58d26f0-cc03-43c1-ab23-ccdb1018252a

Sample output

+-----------------------+--------------------------------------------------------------------------------------
+
| Field | Value |
+-----------------------+--------------------------------------------------------------------------------------


+
| admin_state_up | True |
| allowed_address_pairs | |
| binding:host_id | node.example.com |
| binding:profile | {} |
| binding:vif_details | {"port_filter": true, "ovs_hybrid_plug": true} |
| binding:vif_type | ovs |
| binding:vnic_type | normal |
| device_id | 49c6ebdc-0e62-49ad-a9ca-58cea464472f |
| device_owner | network:router_interface |
| extra_dhcp_opts | |
| fixed_ips | {"subnet_id": "a592fdba-babd-48e0-96e8-2dd9117614d3", "ip_address":
"192.168.200.1"} |
| id | b58d26f0-cc03-43c1-ab23-ccdb1018252a |
| mac_address | fa:16:3e:94:a7:df |
| name | |
| network_id | 63c24160-47ac-4140-903d-8f9a670b0ca4
|
| security_groups | |
| status | ACTIVE |
| tenant_id | d588d1112e0f496fb6cac22f9be45d49 |
+-----------------------+--------------------------------------------------------------------------------------
+

3. Perform step 2 for each port to determine its status.
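
If a router has many ports, you can script this check. A sketch that uses the standard -c and
-f value output options of the openstack client to print the ID and status of each port:

# for port in $(openstack port list --router r1 -f value -c ID); do
    openstack port show $port -c id -c status -f value
  done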

6.3. TROUBLESHOOTING CONNECTIVITY TO VLAN PROVIDER NETWORKS

OpenStack Networking can trunk VLAN networks through to the SDN switches. Support for VLAN-
tagged provider networks means that virtual instances can integrate with server subnets in the physical
network.

Procedure

1. Ping the gateway with ping <gateway-IP-address>.


Consider this example, in which a network is created with these commands:

# openstack network create --provider-network-type vlan \
--provider-physical-network phy-eno1 --provider-segment 120 provider
# openstack subnet create --no-dhcp --allocation-pool \
start=192.168.120.1,end=192.168.120.153 --gateway 192.168.120.254 \
--network provider --subnet-range 192.168.120.0/24 public_subnet

In this example, the gateway IP address is 192.168.120.254.

$ ping 192.168.120.254

2. If the ping fails, do the following:

a. Confirm that you have network flow for the associated VLAN.
It is possible that the VLAN ID has not been set. In this example, OpenStack Networking is
configured to trunk VLAN 120 to the provider network. (See --provider-segment 120 in the
example in step 1.)

b. Confirm the VLAN flow on the bridge interface using the command, ovs-ofctl dump-flows
<bridge-name>.
In this example the bridge is named br-ex:

# ovs-ofctl dump-flows br-ex

NXST_FLOW reply (xid=0x4):


cookie=0x0, duration=987.521s, table=0, n_packets=67897, n_bytes=14065247,
idle_age=0, priority=1 actions=NORMAL
cookie=0x0, duration=986.979s, table=0, n_packets=8, n_bytes=648, idle_age=977,
priority=2,in_port=12 actions=drop

6.4. REVIEWING THE VLAN CONFIGURATION AND LOG FILES


To help validate or troubleshoot a deployment, you can:

verify the registration and status of Red Hat OpenStack Platform (RHOSP) Networking service
(neutron) agents.

validate network configuration values such as VLAN ranges.

Procedure

1. Use the openstack network agent list command to verify that the RHOSP Networking service
agents are up and registered with the correct host names.

(overcloud)[stack@undercloud~]$ openstack network agent list


+--------------------------------------+--------------------+-----------------------+-------+----------------+
| id | agent_type | host | alive | admin_state_up |
+--------------------------------------+--------------------+-----------------------+-------+----------------+
| a08397a8-6600-437d-9013-b2c5b3730c0c | Metadata agent | rhelosp.example.com | :-)
| True |
| a5153cd2-5881-4fc8-b0ad-be0c97734e6a | L3 agent | rhelosp.example.com | :-) |
True |
| b54f0be7-c555-43da-ad19-5593a075ddf0 | DHCP agent | rhelosp.example.com | :-)
| True |
| d2be3cb0-4010-4458-b459-c5eb0d4d354b | Open vSwitch agent | rhelosp.example.com |
:-) | True |
+--------------------------------------+--------------------+-----------------------+-------+----------------+

2. Review /var/log/containers/neutron/openvswitch-agent.log. Look for confirmation that the
creation process used the ovs-ofctl command to configure VLAN trunking.

3. Validate external_network_bridge in the /etc/neutron/l3_agent.ini file. If there is a hardcoded
value in the external_network_bridge parameter, you cannot use a provider network with the
L3 agent, and you cannot create the necessary flows. The external_network_bridge value
must be set to an empty string: external_network_bridge = "".
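
For reference, a minimal sketch of the expected l3_agent.ini content:

[DEFAULT]
external_network_bridge = ""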

4. Check the network_vlan_ranges value in the /etc/neutron/plugin.ini file. For provider
networks, do not specify the numeric VLAN ID. Specify IDs only when using VLAN isolated
project networks.
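
For example, a sketch of a plugin.ini entry that declares the physical network used in the
preceding section (phy-eno1) for provider use, without pinning a VLAN ID range:

[ml2_type_vlan]
network_vlan_ranges = phy-eno1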

5. Validate the bridge mappings in the OVS agent configuration file to confirm that the bridge
mapped to phy-eno1 exists and is properly connected to eno1.
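
A sketch of the corresponding entry in the OVS agent configuration file; the bridge name br-eno1
is an assumption and must match the bridge that is actually cabled to eno1:

[ovs]
bridge_mappings = phy-eno1:br-eno1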


6.5. PERFORMING BASIC ICMP TESTING WITHIN THE ML2/OVN NAMESPACE

As a basic troubleshooting step, you can attempt to ping an instance from an OVN metadata interface
that is on the same layer 2 network.

Prerequisites

RHOSP deployment, with ML2/OVN as the Networking service (neutron) default mechanism
driver.

Procedure

1. Log in to the overcloud using your Red Hat OpenStack Platform credentials.

2. Run the openstack server list command to obtain the name of a VM instance.

3. Run the openstack server show command to determine the Compute node on which the
instance is running.

Example

$ openstack server show my_instance -c OS-EXT-SRV-ATTR:host \
-c addresses

Sample output

+----------------------+-------------------------------------------------+
| Field | Value |
+----------------------+-------------------------------------------------+
| OS-EXT-SRV-ATTR:host | compute0.overcloud.example.com |
| addresses | finance-network1=192.0.2.2; provider- |
| | storage=198.51.100.13 |
+----------------------+-------------------------------------------------+

4. Log in to the Compute node host.

Example

$ ssh [email protected]

5. Run the ip netns list command to see the OVN metadata namespaces.

Sample output

ovnmeta-07384836-6ab1-4539-b23a-c581cf072011 (id: 1)
ovnmeta-df9c28ea-c93a-4a60-b913-1e611d6f15aa (id: 0)

6. Using the metadata namespace run an ip netns exec command to ping the associated network.

Example


$ sudo ip netns exec ovnmeta-df9c28ea-c93a-4a60-b913-1e611d6f15aa \
ping 192.0.2.2

Sample output

PING 192.0.2.2 (192.0.2.2) 56(84) bytes of data.


64 bytes from 192.0.2.2: icmp_seq=1 ttl=64 time=0.470 ms
64 bytes from 192.0.2.2: icmp_seq=2 ttl=64 time=0.483 ms
64 bytes from 192.0.2.2: icmp_seq=3 ttl=64 time=0.183 ms
64 bytes from 192.0.2.2: icmp_seq=4 ttl=64 time=0.296 ms
64 bytes from 192.0.2.2: icmp_seq=5 ttl=64 time=0.307 ms
^C
--- 192.0.2.2 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 122ms
rtt min/avg/max/mdev = 0.183/0.347/0.483/0.116 ms

Additional resources

server show in the Command Line Interface Reference

6.6. TROUBLESHOOTING FROM WITHIN PROJECT NETWORKS (ML2/OVS)

In Red Hat OpenStack Platform (RHOSP) ML2/OVS networks, all project traffic is contained within
network namespaces so that projects can configure networks without interfering with each other. For
example, network namespaces allow different projects to have the same subnet range of 192.168.1.0/24
without interference between them.

Prerequisites

RHOSP deployment, with ML2/OVS as the Networking service (neutron) default mechanism
driver.

Procedure

1. Determine which network namespace contains the network, by listing all of the project networks
using the openstack network list command:

$ openstack network list

In this output, note the ID of the web-servers network (9cb32fe0-d7fb-432c-b116-
f483c6497b08). The command appends the network ID to the network namespace, which
enables you to identify the namespace in the next step.

Sample output

+--------------------------------------+-------------+-------------------------------------------------------+
| id | name | subnets |
+--------------------------------------+-------------+-------------------------------------------------------+
| 9cb32fe0-d7fb-432c-b116-f483c6497b08 | web-servers | 453d6769-fcde-4796-a205-
66ee01680bba 192.168.212.0/24 |
| a0cc8cdd-575f-4788-a3e3-5df8c6d0dd81 | private | c1e58160-707f-44a7-bf94-
8694f29e74d3 10.0.0.0/24 |


| baadd774-87e9-4e97-a055-326bb422b29b | private | 340c58e1-7fe7-4cf2-96a7-


96a0a4ff3231 192.168.200.0/24 |
| 24ba3a36-5645-4f46-be47-f6af2a7d8af2 | public | 35f3d2cb-6e4b-4527-a932-
952a395c4bb3 172.24.4.224/28 |
+--------------------------------------+-------------+-------------------------------------------------------+

2. List all the network namespaces using the ip netns list command:

# ip netns list

The output contains a namespace that matches the web-servers network ID.

In this output, the namespace is qdhcp-9cb32fe0-d7fb-432c-b116-f483c6497b08.

Sample output

qdhcp-9cb32fe0-d7fb-432c-b116-f483c6497b08
qrouter-31680a1c-9b3e-4906-bd69-cb39ed5faa01
qrouter-62ed467e-abae-4ab4-87f4-13a9937fbd6b
qdhcp-a0cc8cdd-575f-4788-a3e3-5df8c6d0dd81
qrouter-e9281608-52a6-4576-86a6-92955df46f56

3. Examine the configuration of the web-servers network by running commands within the
namespace, prefixing the troubleshooting commands with ip netns exec <namespace>.
In this example, the route -n command is used.

Example

# ip netns exec qrouter-62ed467e-abae-4ab4-87f4-13a9937fbd6b route -n

Sample output

Kernel IP routing table


Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 172.24.4.225 0.0.0.0 UG 0 0 0 qg-8d128f89-87
172.24.4.224 0.0.0.0 255.255.255.240 U 0 0 0 qg-8d128f89-87
192.168.200.0 0.0.0.0 255.255.255.0 U 0 0 0 qr-8efd6357-96

6.7. PERFORMING ADVANCED ICMP TESTING WITHIN THE NAMESPACE (ML2/OVS)

You can troubleshoot Red Hat OpenStack Platform (RHOSP) ML2/OVS networks using a combination
of the tcpdump and ping commands.

Prerequisites

RHOSP deployment, with ML2/OVS as the Networking service (neutron) default mechanism
driver.

Procedure

1. Capture ICMP traffic using the tcpdump command:


Example

# ip netns exec qrouter-62ed467e-abae-4ab4-87f4-13a9937fbd6b tcpdump -qnntpi any icmp

2. In a separate command line window, perform a ping test to an external network:

Example

# ip netns exec qrouter-62ed467e-abae-4ab4-87f4-13a9937fbd6b ping www.example.com

3. In the terminal running the tcpdump session, observe detailed results of the ping test.

Sample output

tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 65535 bytes
IP (tos 0xc0, ttl 64, id 55447, offset 0, flags [none], proto ICMP (1), length 88)
172.24.4.228 > 172.24.4.228: ICMP host 192.168.200.20 unreachable, length 68
IP (tos 0x0, ttl 64, id 22976, offset 0, flags [DF], proto UDP (17), length 60)
172.24.4.228.40278 > 192.168.200.21: [bad udp cksum 0xfa7b -> 0xe235!] UDP, length 32

NOTE

When you perform a tcpdump analysis of traffic, you see the responding packets heading
to the router interface rather than to the VM instance. This is expected behavior, as the
qrouter performs Destination Network Address Translation (DNAT) on the return
packets.

6.8. CREATING ALIASES FOR OVN TROUBLESHOOTING COMMANDS


OVN database commands (such as ovn-nbctl show) run on the ovn_controller container. The container
runs on the controller node and compute nodes. To simplify your access to the commands, create and
source a script that defines aliases.

Prerequisites

Red Hat OpenStack Platform deployment with ML2/OVN as the default mechanism driver.

Procedure

1. With your OpenStack credentials, log in to the overcloud node where you want to run the ovn
commands.

2. Create a shell script file that contains the ovn commands that you want to run.

Example

vi ~/bin/ovn-alias.sh

3. Add the ovn commands, and save the script file.

Example


EXTERNAL_ID=\
$(sudo ovs-vsctl get open . external_ids:ovn-remote | awk -F: '{print $2}')
export NBDB=tcp:${EXTERNAL_ID}:6641
export SBDB=tcp:${EXTERNAL_ID}:6642

alias ovn-sbctl="sudo podman exec ovn_controller ovn-sbctl --db=$SBDB"
alias ovn-nbctl="sudo podman exec ovn_controller ovn-nbctl --db=$NBDB"
alias ovn-trace="sudo podman exec ovn_controller ovn-trace --db=$SBDB"

4. Source the script file.

Example

# source ovn-alias.sh

Validation

Run a command to confirm that your script file works properly.

Example

# ovn-nbctl show

Sample output

switch 26ce22db-1795-41bd-b561-9827cbd81778 (neutron-f8e79863-6c58-43d0-8f7d-


8ec4a423e13b) (aka internal_network)
port 1913c3ae-8475-4b60-a479-df7bcce8d9c8
addresses: ["fa:16:3e:33:c1:fc 192.168.254.76"]
port 1aabaee3-b944-4da2-bf0a-573215d3f3d9
addresses: ["fa:16:3e:16:cb:ce 192.168.254.74"]
port 7e000980-59f9-4a0f-b76a-4fdf4e86f27b
type: localport
addresses: ["fa:16:3e:c9:30:ed 192.168.254.2"]

6.9. MONITORING OVN LOGICAL FLOWS


OVN uses logical flows that are tables of flows with a priority, match, and actions. These logical flows are
distributed to the ovn-controller running on each Red Hat OpenStack Platform (RHOSP) Compute
node. Use the ovn-sbctl lflow-list command on the Controller node to view the full set of logical flows.

Prerequisites

RHOSP deployment with ML2/OVN as the Networking service (neutron) default mechanism
driver.

Procedure

1. On the Controller node, run the ovn-sbctl --db=tcp:172.17.1.10:6642 lflow-list command.

2. Inspect the output.


$ ovn-sbctl --db=tcp:172.17.1.10:6642 lflow-list


Datapath: "sw0" (d7bf4a7b-e915-4502-8f9d-5995d33f5d10) Pipeline: ingress
table=0 (ls_in_port_sec_l2 ), priority=100 , match=(eth.src[40]), action=(drop;)
table=0 (ls_in_port_sec_l2 ), priority=100 , match=(vlan.present), action=(drop;)
table=0 (ls_in_port_sec_l2 ), priority=50 , match=(inport == "sw0-port1" && eth.src ==
{00:00:00:00:00:01}), action=(next;)
table=0 (ls_in_port_sec_l2 ), priority=50 , match=(inport == "sw0-port2" && eth.src ==
{00:00:00:00:00:02}), action=(next;)
table=1 (ls_in_port_sec_ip ), priority=0 , match=(1), action=(next;)
table=2 (ls_in_port_sec_nd ), priority=90 , match=(inport == "sw0-port1" && eth.src ==
00:00:00:00:00:01 && arp.sha == 00:00:00:00:00:01), action=(next;)
table=2 (ls_in_port_sec_nd ), priority=90 , match=(inport == "sw0-port1" && eth.src ==
00:00:00:00:00:01 && ip6 && nd && ((nd.sll == 00:00:00:00:00:00 || nd.sll ==
00:00:00:00:00:01) || ((nd.tll == 00:00:00:00:00:00 || nd.tll == 00:00:00:00:00:01)))), action=
(next;)
table=2 (ls_in_port_sec_nd ), priority=90 , match=(inport == "sw0-port2" && eth.src ==
00:00:00:00:00:02 && arp.sha == 00:00:00:00:00:02), action=(next;)
table=2 (ls_in_port_sec_nd ), priority=90 , match=(inport == "sw0-port2" && eth.src ==
00:00:00:00:00:02 && ip6 && nd && ((nd.sll == 00:00:00:00:00:00 || nd.sll ==
00:00:00:00:00:02) || ((nd.tll == 00:00:00:00:00:00 || nd.tll == 00:00:00:00:00:02)))), action=
(next;)
table=2 (ls_in_port_sec_nd ), priority=80 , match=(inport == "sw0-port1" && (arp || nd)),
action=(drop;)
table=2 (ls_in_port_sec_nd ), priority=80 , match=(inport == "sw0-port2" && (arp || nd)),
action=(drop;)
table=2 (ls_in_port_sec_nd ), priority=0 , match=(1), action=(next;)
table=3 (ls_in_pre_acl ), priority=0 , match=(1), action=(next;)
table=4 (ls_in_pre_lb ), priority=0 , match=(1), action=(next;)
table=5 (ls_in_pre_stateful ), priority=100 , match=(reg0[0] == 1), action=(ct_next;)
table=5 (ls_in_pre_stateful ), priority=0 , match=(1), action=(next;)
table=6 (ls_in_acl ), priority=0 , match=(1), action=(next;)
table=7 (ls_in_qos_mark ), priority=0 , match=(1), action=(next;)
table=8 (ls_in_lb ), priority=0 , match=(1), action=(next;)
table=9 (ls_in_stateful ), priority=100 , match=(reg0[1] == 1), action=
(ct_commit(ct_label=0/1); next;)
table=9 (ls_in_stateful ), priority=100 , match=(reg0[2] == 1), action=(ct_lb;)
table=9 (ls_in_stateful ), priority=0 , match=(1), action=(next;)
table=10(ls_in_arp_rsp ), priority=0 , match=(1), action=(next;)
table=11(ls_in_dhcp_options ), priority=0 , match=(1), action=(next;)
table=12(ls_in_dhcp_response), priority=0 , match=(1), action=(next;)
table=13(ls_in_l2_lkup ), priority=100 , match=(eth.mcast), action=(outport =
"_MC_flood"; output;)
table=13(ls_in_l2_lkup ), priority=50 , match=(eth.dst == 00:00:00:00:00:01), action=
(outport = "sw0-port1"; output;)
table=13(ls_in_l2_lkup ), priority=50 , match=(eth.dst == 00:00:00:00:00:02), action=
(outport = "sw0-port2"; output;)
Datapath: "sw0" (d7bf4a7b-e915-4502-8f9d-5995d33f5d10) Pipeline: egress
table=0 (ls_out_pre_lb ), priority=0 , match=(1), action=(next;)
table=1 (ls_out_pre_acl ), priority=0 , match=(1), action=(next;)
table=2 (ls_out_pre_stateful), priority=100 , match=(reg0[0] == 1), action=(ct_next;)
table=2 (ls_out_pre_stateful), priority=0 , match=(1), action=(next;)
table=3 (ls_out_lb ), priority=0 , match=(1), action=(next;)
table=4 (ls_out_acl ), priority=0 , match=(1), action=(next;)
table=5 (ls_out_qos_mark ), priority=0 , match=(1), action=(next;)
table=6 (ls_out_stateful ), priority=100 , match=(reg0[1] == 1), action=
(ct_commit(ct_label=0/1); next;)


table=6 (ls_out_stateful ), priority=100 , match=(reg0[2] == 1), action=(ct_lb;)


table=6 (ls_out_stateful ), priority=0 , match=(1), action=(next;)
table=7 (ls_out_port_sec_ip ), priority=0 , match=(1), action=(next;)
table=8 (ls_out_port_sec_l2 ), priority=100 , match=(eth.mcast), action=(output;)
table=8 (ls_out_port_sec_l2 ), priority=50 , match=(outport == "sw0-port1" && eth.dst ==
{00:00:00:00:00:01}), action=(output;)
table=8 (ls_out_port_sec_l2 ), priority=50 , match=(outport == "sw0-port2" && eth.dst ==
{00:00:00:00:00:02}), action=(output;)

Key differences between OVN and OpenFlow include:

OVN ports are logical entities that reside somewhere on a network, not physical ports on a single
switch.

OVN gives each table in the pipeline a name in addition to its number. The name describes the
purpose of that stage in the pipeline.

The OVN match syntax supports complex Boolean expressions.

The actions supported in OVN logical flows extend beyond those of OpenFlow. You can
implement higher level features, such as DHCP, in the OVN logical flow syntax.

ovn-trace
The ovn-trace command can simulate how a packet travels through the OVN logical flows, or help you
determine why a packet is dropped. Provide the ovn-trace command with the following parameters:

DATAPATH
The logical switch or logical router where the simulated packet starts.
MICROFLOW
The simulated packet, in the syntax used by the ovn-sb database.

This example displays the --minimal output option on a simulated packet and shows that the packet
reaches its destination:

$ ovn-trace --minimal sw0 'inport == "sw0-port1" && eth.src == 00:00:00:00:00:01 && eth.dst ==
00:00:00:00:00:02'
# reg14=0x1,vlan_tci=0x0000,dl_src=00:00:00:00:00:01,dl_dst=00:00:00:00:00:02,dl_type=0x0000
output("sw0-port2");

In more detail, the --summary output for this same simulated packet shows the full execution pipeline:

$ ovn-trace --summary sw0 'inport == "sw0-port1" && eth.src == 00:00:00:00:00:01 && eth.dst ==
00:00:00:00:00:02'
# reg14=0x1,vlan_tci=0x0000,dl_src=00:00:00:00:00:01,dl_dst=00:00:00:00:00:02,dl_type=0x0000
ingress(dp="sw0", inport="sw0-port1") {
outport = "sw0-port2";
output;
egress(dp="sw0", inport="sw0-port1", outport="sw0-port2") {
output;
/* output to "sw0-port2", type "" */;
};
};

The example output shows:


The packet enters the sw0 network from the sw0-port1 port and runs the ingress pipeline.

The outport variable is set to sw0-port2 indicating that the intended destination for this packet
is sw0-port2.

The packet is output from the ingress pipeline, which brings it to the egress pipeline for sw0
with the outport variable set to sw0-port2.

The output action is executed in the egress pipeline, which outputs the packet to the current
value of the outport variable, which is sw0-port2.

Additional resources

See the ovn-trace man page for complete details.

6.10. MONITORING OPENFLOWS


You can use the ovs-ofctl dump-flows command to monitor the OpenFlow flows on a logical switch in
your Red Hat OpenStack Platform (RHOSP) network.

Prerequisites

RHOSP deployment with ML2/OVN as the Networking service (neutron) default mechanism
driver.

Procedure

1. On the Controller node, run the ovs-ofctl dump-flows command.

Example

ovs-ofctl dump-flows br-int

2. Inspect the output, which resembles the following output.

Sample output

$ ovs-ofctl dump-flows br-int


NXST_FLOW reply (xid=0x4):
cookie=0x0, duration=72.132s, table=0, n_packets=0, n_bytes=0, idle_age=72,
priority=10,in_port=1,dl_src=00:00:00:00:00:01 actions=resubmit(,1)
cookie=0x0, duration=60.565s, table=0, n_packets=0, n_bytes=0, idle_age=60,
priority=10,in_port=2,dl_src=00:00:00:00:00:02 actions=resubmit(,1)
cookie=0x0, duration=28.127s, table=0, n_packets=0, n_bytes=0, idle_age=28, priority=0
actions=drop
cookie=0x0, duration=13.887s, table=1, n_packets=0, n_bytes=0, idle_age=13,
priority=0,in_port=1 actions=output:2
cookie=0x0, duration=4.023s, table=1, n_packets=0, n_bytes=0, idle_age=4,
priority=0,in_port=2 actions=output:1

6.11. LISTING THE OVN DATABASE TABLES IN AN ML2/OVN DEPLOYMENT


List the OVN northbound and southbound database tables to gain insight into the
configuration and to troubleshoot issues with Red Hat OpenStack Platform (RHOSP) networks.

Prerequisites

RHOSP deployment with ML2/OVN as the Networking service (neutron) default mechanism
driver.

Create OVN aliases as described in Section 6.8, “Creating aliases for OVN troubleshooting commands”.

Procedure

1. List the OVN southbound database tables.

Example

# ovn-sbctl list

2. List the OVN northbound database tables.

Example

# ovn-nbctl list

6.12. VALIDATING YOUR ML2/OVN DEPLOYMENT


Validating the ML2/OVN networks on your Red Hat OpenStack Platform (RHOSP) deployment consists
of creating a test network and subnet and performing diagnostic tasks such as verifying that specific
containers are running.

Prerequisites

New deployment of RHOSP, with ML2/OVN as the Networking service (neutron) default
mechanism driver.

Procedure

1. Create a test network and subnet.

NETWORK_ID=\
$(openstack network create internal_network | awk '/\| id/ {print $4}')

openstack subnet create internal_subnet \
--network $NETWORK_ID \
--dns-nameserver 8.8.8.8 \
--subnet-range 192.168.254.0/24

If you encounter errors, perform the steps that follow.

2. Verify that the relevant containers are running:

# sudo podman ps -a | grep ovn_


As shown in the following sample, the output should list OVN containers including openstack-
ovn-northd, openstack-neutron-metadata-agent-ovn, openstack-ovn-controller,
openstack-ovn-sb-db-server, and openstack-ovn-nb-db-server.

Sample output

090ac17ac010 registry.redhat.io/rhosp15-rhel8/openstack-ovn-northd:15.0-77
dumb-init --singl... 11 hours ago Up 11 hours ago ovn_northd
e4031dea8094 registry.redhat.io/rhosp15-rhel8/openstack-neutron-metadata-agent-
ovn:15.0-74 dumb-init --singl... 11 hours ago Up 11 hours ago ovn_metadata_agent
a2983bc0f06f registry.redhat.io/rhosp15-rhel8/openstack-ovn-controller:15.0-76 dumb-
init --singl... 11 hours ago Up 11 hours ago ovn_controller
5b8dfbef6260 registry.redhat.io/rhosp15-rhel8/openstack-ovn-sb-db-server:15.0-78
dumb-init --singl... 11 hours ago Up 11 hours ago ovn_south_db_server
cf7bcb3731ad registry.redhat.io/rhosp15-rhel8/openstack-ovn-nb-db-server:15.0-76
dumb-init --singl... 11 hours ago Up 11 hours ago ovn_north_db_server

3. Inspect log files for error messages.

grep -r ERR /var/log/containers/openvswitch/ /var/log/containers/neutron/

4. Query the northbound and southbound databases to check for responsiveness.

Example

# ovn-sbctl list
# ovn-nbctl list

5. Attempt to ping an instance from an OVN metadata interface that is on the same layer 2
network.
For more information, see Section 6.5, “Performing basic ICMP testing within the ML2/OVN
namespace”.

6. If you need to contact Red Hat for support, perform the steps described in this Red Hat
Solution, How to collect all required logs for Red Hat Support to investigate an OpenStack
issue.

Additional resources

network create in the Command Line Interface Reference

subnet create in the Command Line Interface Reference

6.13. SELECTING ML2/OVN LOG DEBUG AND INFO MODES


Set ML2/OVN logging to debug mode for additional troubleshooting information. Set logging back to
info mode to use less disk space when you do not need additional debugging information.

Prerequisites

New deployment of Red Hat OpenStack Platform 16.0 or higher, with ML2/OVN as the default
mechanism driver.


Changing the ML2/OVN logging mode

Set the logging mode to debug as needed:

# podman exec -it ovn_controller ovs-appctl -t ovn-controller vlog/set dbg

Set the logging mode to info as needed:

# podman exec -it ovn_controller ovs-appctl -t ovn-controller vlog/set info

Additional resources

See ML2/OVN log files

6.14. ML2/OVN LOG FILES


Log files track events related to the deployment and operation of the ML2/OVN mechanism driver.

Table 6.1. ML2/OVN log files per node

Node                              Log                              Path: /var/log/containers/openvswitch/…

Controller, Compute, Networking   OVN controller                   …/ovn-controller.log

Controller                        OVS northbound database server   …/ovsdb-server-nb.log

Controller                        OVS southbound database server   …/ovsdb-server-sb.log

Controller                        OVN northd service               …/ovn-northd.log


CHAPTER 7. CONFIGURING PHYSICAL SWITCHES FOR OPENSTACK NETWORKING

This chapter documents the common physical switch configuration steps required for OpenStack
Networking. Vendor-specific configuration is included for certain switches.

7.1. PLANNING YOUR PHYSICAL NETWORK ENVIRONMENT


The physical network adapters in your OpenStack nodes carry different types of network traffic, such as
instance traffic, storage data, or authentication requests. The type of traffic these NICs carry affects
how you must configure the ports on the physical switch.

First, you must decide which physical NICs on your Compute node you want to carry which types of
traffic. Then, when the NIC is cabled to a physical switch port, you must configure the switch port to allow
trunked or general traffic.

For example, the following diagram depicts a Compute node with two NICs, eth0 and eth1. Each NIC is
cabled to a Gigabit Ethernet port on a physical switch, with eth0 carrying instance traffic, and eth1
providing connectivity for OpenStack services:

Figure 7.1. Sample network layout

NOTE

This diagram does not include any additional redundant NICs required for fault tolerance.

Additional resources

Network Interface Bonding in the Advanced Overcloud Customization guide

7.2. CONFIGURING A CISCO CATALYST SWITCH

7.2.1. About trunk ports


With OpenStack Networking you can connect instances to the VLANs that already exist on your physical
network. The term trunk is used to describe a port that allows multiple VLANs to traverse through the
same port. Using these ports, VLANs can span across multiple switches, including virtual switches. For
example, traffic tagged as VLAN110 in the physical network reaches the Compute node, where the
8021q module directs the tagged traffic to the appropriate VLAN on the vSwitch.

7.2.2. Configuring trunk ports for a Cisco Catalyst switch


If using a Cisco Catalyst switch running Cisco IOS, you might use the following configuration
syntax to allow traffic for VLANs 110 and 111 to pass through to your instances.
This configuration assumes that your physical node has an ethernet cable connected to
interface GigabitEthernet1/0/12 on the physical switch.

IMPORTANT

These values are examples. You must change the values in this example to match
those in your environment. Copying and pasting these values into your switch
configuration without adjustment can result in an unexpected outage.

interface GigabitEthernet1/0/12
description Trunk to Compute Node
spanning-tree portfast trunk
switchport trunk encapsulation dot1q
switchport mode trunk
switchport trunk native vlan 2
switchport trunk allowed vlan 2,110,111

Use the following list to understand these parameters:

Field Description

interface GigabitEthernet1/0/12 The switch port that the NIC of the Compute node
connects to. Ensure that you replace the
GigabitEthernet1/0/12 value with the correct
port value for your environment. Use the show
interface command to view a list of ports.

description Trunk to Compute Node A unique and descriptive value that you can use
to identify this interface.

spanning-tree portfast trunk If your environment uses STP, set this value to
instruct Port Fast that this port is used to trunk
traffic.

switchport trunk encapsulation dot1q Enables the 802.1q trunking standard (rather
than ISL). This value varies depending on the
configuration that your switch supports.

switchport mode trunk Configures this port as a trunk port, rather than
an access port, meaning that it allows VLAN
traffic to pass through to the virtual switches.


switchport trunk native vlan 2 Set a native VLAN to instruct the switch where
to send untagged (non-VLAN) traffic.

switchport trunk allowed vlan 2,110,111 Defines which VLANs are allowed through the
trunk.

7.2.3. About access ports


Not all NICs on your Compute node carry instance traffic, and so you do not need to configure all NICs
to allow multiple VLANs to pass through. Access ports require only one VLAN, and might fulfill other
operational requirements, such as transporting management traffic or Block Storage data. These ports
are commonly known as access ports and usually require a simpler configuration than trunk ports.

7.2.4. Configuring access ports for a Cisco Catalyst switch


Using the example from the Figure 7.1, “Sample network layout” diagram, GigabitEthernet1/0/13
(on a Cisco Catalyst switch) is configured as an access port for eth1.
In this configuration, your physical node has an ethernet cable connected to interface
GigabitEthernet1/0/13 on the physical switch.

IMPORTANT

These values are examples. You must change the values in this example to match
those in your environment. Copying and pasting these values into your switch
configuration without adjustment can result in an unexpected outage.

interface GigabitEthernet1/0/13
description Access port for Compute Node
switchport mode access
switchport access vlan 200
spanning-tree portfast

These settings are described below:

Field Description

interface GigabitEthernet1/0/13 The switch port that the NIC of the Compute node
connects to. Ensure that you replace the
GigabitEthernet1/0/13 value with the correct
port value for your environment. Use the show
interface command to view a list of ports.

description Access port for Compute A unique and descriptive value that you can use
Node to identify this interface.

switchport mode access Configures this port as an access port, rather
than a trunk port.


switchport access vlan 200 Configures the port to allow traffic on VLAN
200. You must configure your Compute node
with an IP address from this VLAN.

spanning-tree portfast If using STP, set this value to instruct STP not to
attempt to initialize this as a trunk, allowing for
quicker port handshakes during initial
connections (such as server reboot).

7.2.5. About LACP port aggregation


You can use Link Aggregation Control Protocol (LACP) to bundle multiple physical NICs together to
form a single logical channel. Also known as 802.3ad (or bonding mode 4 in Linux), LACP creates a
dynamic bond for load-balancing and fault tolerance. You must configure LACP at both physical ends:
on the physical NICs, and on the physical switch ports.

Additional resources

Network Interface Bonding in the Advanced Overcloud Customization guide.

7.2.6. Configuring LACP on the physical NIC


You can configure Link Aggregation Control Protocol (LACP) on a physical NIC.

Procedure

1. Edit the /home/stack/network-environment.yaml file:

- type: linux_bond
name: bond1
mtu: 9000
bonding_options: {get_param: BondInterfaceOvsOptions}
members:
- type: interface
name: nic3
mtu: 9000
primary: true
- type: interface
name: nic4
mtu: 9000

2. Configure the Open vSwitch bridge to use LACP:

BondInterfaceOvsOptions:
"mode=802.3ad"

Additional resources

Network Interface Bonding in the Advanced Overcloud Customization guide


7.2.7. Configuring LACP for a Cisco Catalyst switch


In this example, the Compute node has two NICs using VLAN 100:

Procedure

1. Physically connect both NICs on the Compute node to the switch (for example, ports 12 and 13).

2. Create the LACP port channel:

interface port-channel1
switchport access vlan 100
switchport mode access
spanning-tree guard root

3. Configure switch ports 12 (Gi1/0/12) and 13 (Gi1/0/13):

sw01# config t
Enter configuration commands, one per line. End with CNTL/Z.

sw01(config)# interface GigabitEthernet1/0/12
switchport access vlan 100
switchport mode access
speed 1000
duplex full
channel-group 10 mode active
channel-protocol lacp

interface GigabitEthernet1/0/13
switchport access vlan 100
switchport mode access
speed 1000
duplex full
channel-group 10 mode active
channel-protocol lacp

4. Review your new port channel. The resulting output lists the new port-channel Po1, with
member ports Gi1/0/12 and Gi1/0/13:

sw01# show etherchannel summary


<snip>

Number of channel-groups in use: 1


Number of aggregators: 1

Group Port-channel Protocol Ports


------+-------------+-----------+-----------------------------------------------
1 Po1(SD) LACP Gi1/0/12(D) Gi1/0/13(D)

NOTE

Remember to apply your changes by copying the running-config to the startup-


config: copy running-config startup-config.


7.2.8. About MTU settings


You must adjust your MTU size for certain types of network traffic. For example, jumbo frames (9000
bytes) are required for certain NFS or iSCSI traffic.

NOTE

You must change MTU settings from end-to-end on all hops that the traffic is expected
to pass through, including any virtual switches.

Additional resources

Configuring maximum transmission unit (MTU) settings

7.2.9. Configuring MTU settings for a Cisco Catalyst switch


Complete the steps in this example procedure to enable jumbo frames on your Cisco Catalyst 3750
switch.

1. Review the current MTU settings:

sw01# show system mtu

System MTU size is 1600 bytes


System Jumbo MTU size is 1600 bytes
System Alternate MTU size is 1600 bytes
Routing MTU size is 1600 bytes

2. MTU settings are changed switch-wide on 3750 switches, and not for individual interfaces. Run
the following commands to configure the switch to use jumbo frames of 9000 bytes. You might
prefer to configure the MTU settings for individual interfaces, if your switch supports this
feature.

sw01# config t
Enter configuration commands, one per line. End with CNTL/Z.

sw01(config)# system mtu jumbo 9000


Changes to the system jumbo MTU will not take effect until the next reload is done

NOTE

Remember to save your changes by copying the running-config to the startup-


config: copy running-config startup-config.

3. Reload the switch to apply the change.

IMPORTANT

Reloading the switch causes a network outage for any devices that are
dependent on the switch. Therefore, reload the switch only during a scheduled
maintenance period.


sw01# reload
Proceed with reload? [confirm]

4. After the switch reloads, confirm the new jumbo MTU size.
The exact output may differ depending on your switch model. For example, System MTU might
apply to non-Gigabit interfaces, and Jumbo MTU might describe all Gigabit interfaces.

sw01# show system mtu

System MTU size is 1600 bytes


System Jumbo MTU size is 9000 bytes
System Alternate MTU size is 1600 bytes
Routing MTU size is 1600 bytes

7.2.10. About LLDP discovery


The ironic-python-agent service listens for LLDP packets from connected switches. The collected
information can include the switch name, port details, and available VLANs. Similar to Cisco Discovery
Protocol (CDP), LLDP assists with the discovery of physical hardware during the director introspection
process.
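
For example, after introspection completes, you can review the LLDP data that ironic-python-agent collected for a node. The following command is illustrative only; <node_uuid> is a placeholder for one of your bare metal nodes:

(undercloud) $ openstack baremetal introspection interface list <node_uuid>

The output includes the switch chassis ID, the switch port ID, and the VLANs observed on each node interface.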

7.2.11. Configuring LLDP for a Cisco Catalyst switch

Procedure

1. Run the lldp run command to enable LLDP globally on your Cisco Catalyst switch:

sw01# config t
Enter configuration commands, one per line. End with CNTL/Z.

sw01(config)# lldp run

2. View any neighboring LLDP-compatible devices:

sw01# show lldp neighbor


Capability codes:
(R) Router, (B) Bridge, (T) Telephone, (C) DOCSIS Cable Device
(W) WLAN Access Point, (P) Repeater, (S) Station, (O) Other

Device ID Local Intf Hold-time Capability Port ID


DEP42037061562G3 Gi1/0/11 180 B,T 422037061562G3:P1

Total entries displayed: 1

NOTE

Remember to save your changes by copying the running-config to the startup-config:


copy running-config startup-config.

7.3. CONFIGURING A CISCO NEXUS SWITCH


7.3.1. About trunk ports


With OpenStack Networking you can connect instances to the VLANs that already exist on your physical
network. The term trunk is used to describe a port that allows multiple VLANs to traverse through the
same port. Using these ports, VLANs can span across multiple switches, including virtual switches. For
example, traffic tagged as VLAN110 in the physical network reaches the Compute node, where the
8021q module directs the tagged traffic to the appropriate VLAN on the vSwitch.

7.3.2. Configuring trunk ports for a Cisco Nexus switch


If you are using a Cisco Nexus switch, you might use the following configuration syntax to allow traffic for
VLANs 110 and 111 to pass through to your instances.
This configuration assumes that your physical node has an ethernet cable connected to
interface Ethernet1/12 on the physical switch.

IMPORTANT

These values are examples. You must change the values in this example to match
those in your environment. Copying and pasting these values into your switch
configuration without adjustment can result in an unexpected outage.

interface Ethernet1/12
description Trunk to Compute Node
switchport mode trunk
switchport trunk allowed vlan 2,110,111
switchport trunk native vlan 2
end

7.3.3. About access ports


Not all NICs on your Compute node carry instance traffic, and so you do not need to configure all NICs
to allow multiple VLANs to pass through. Access ports require only one VLAN, and might fulfill other
operational requirements, such as transporting management traffic or Block Storage data. These ports
are commonly known as access ports and usually require a simpler configuration than trunk ports.

7.3.4. Configuring access ports for a Cisco Nexus switch

Procedure

Using the example from the Figure 7.1, “Sample network layout” diagram, Ethernet1/13 (on a
Cisco Nexus switch) is configured as an access port for eth1. This configuration assumes that
your physical node has an ethernet cable connected to interface Ethernet1/13 on the physical
switch.

IMPORTANT

These values are examples. You must change the values in this example to match
those in your environment. Copying and pasting these values into your switch
configuration without adjustment can result in an unexpected outage.

interface Ethernet1/13
description Access port for Compute Node


switchport mode access
switchport access vlan 200

7.3.5. About LACP port aggregation


You can use Link Aggregation Control Protocol (LACP) to bundle multiple physical NICs together to
form a single logical channel. Also known as 802.3ad (or bonding mode 4 in Linux), LACP creates a
dynamic bond for load-balancing and fault tolerance. You must configure LACP at both physical ends:
on the physical NICs, and on the physical switch ports.

Additional resources

Network Interface Bonding in the Advanced Overcloud Customization guide.

7.3.6. Configuring LACP on the physical NIC


You can configure Link Aggregation Control Protocol (LACP) on a physical NIC.

Procedure

1. Edit the /home/stack/network-environment.yaml file:

- type: linux_bond
name: bond1
mtu: 9000
bonding_options: {get_param: BondInterfaceOvsOptions}
members:
- type: interface
name: nic3
mtu: 9000
primary: true
- type: interface
name: nic4
mtu: 9000

2. Configure the Open vSwitch bridge to use LACP:

BondInterfaceOvsOptions:
"mode=802.3ad"

Additional resources

Network Interface Bonding in the Advanced Overcloud Customization guide

7.3.7. Configuring LACP for a Cisco Nexus switch


In this example, the Compute node has two NICs using VLAN 100:

Procedure

1. Physically connect the Compute node NICs to the switch (for example, ports 12 and 13).

2. Confirm that LACP is enabled:


(config)# show feature | include lacp


lacp 1 enabled

3. Configure ports 1/12 and 1/13 as access ports, and as members of a channel group.
Depending on your deployment, you can deploy trunk interfaces rather than access interfaces.

For example, for Cisco UCS the NICs are virtual interfaces, so you might prefer to configure
access ports exclusively. Often these interfaces contain VLAN tagging configurations.

interface Ethernet1/12
description Access port for Compute Node
switchport mode access
switchport access vlan 200
channel-group 10 mode active

interface Ethernet1/13
description Access port for Compute Node
switchport mode access
switchport access vlan 200
channel-group 10 mode active

NOTE

When you use PXE to provision nodes on Cisco switches, you might need to set the
options no lacp graceful-convergence and no lacp suspend-individual to bring up the
ports and boot the server. For more information, see your Cisco switch documentation.

7.3.8. About MTU settings


You must adjust your MTU size for certain types of network traffic. For example, jumbo frames (9000
bytes) are required for certain NFS or iSCSI traffic.

NOTE

You must change MTU settings from end-to-end on all hops that the traffic is expected
to pass through, including any virtual switches.

Additional resources

Configuring maximum transmission unit (MTU) settings

7.3.9. Configuring MTU settings for a Cisco Nexus 7000 switch


Apply MTU settings to a single interface on 7000-series switches.

Procedure

Run the following commands to configure interface 1/12 to use jumbo frames of 9000 bytes:

interface ethernet 1/12


mtu 9216
exit


7.3.10. About LLDP discovery


The ironic-python-agent service listens for LLDP packets from connected switches. The collected
information can include the switch name, port details, and available VLANs. Similar to Cisco Discovery
Protocol (CDP), LLDP assists with the discovery of physical hardware during the director introspection
process.

7.3.11. Configuring LLDP for a Cisco Nexus 7000 switch

Procedure

You can enable LLDP for individual interfaces on Cisco Nexus 7000-series switches:

interface ethernet 1/12


lldp transmit
lldp receive
no lacp suspend-individual
no lacp graceful-convergence

interface ethernet 1/13


lldp transmit
lldp receive
no lacp suspend-individual
no lacp graceful-convergence

NOTE

Remember to save your changes by copying the running-config to the startup-config:


copy running-config startup-config.

7.4. CONFIGURING A CUMULUS LINUX SWITCH

7.4.1. About trunk ports


With OpenStack Networking you can connect instances to the VLANs that already exist on your physical
network. The term trunk is used to describe a port that allows multiple VLANs to traverse through the
same port. Using these ports, VLANs can span across multiple switches, including virtual switches. For
example, traffic tagged as VLAN110 in the physical network reaches the Compute node, where the
8021q module directs the tagged traffic to the appropriate VLAN on the vSwitch.

7.4.2. Configuring trunk ports for a Cumulus Linux switch


This configuration assumes that your physical node has transceivers connected to switch ports swp1 and
swp2 on the physical switch.

IMPORTANT

These values are examples. You must change the values in this example to match those in
your environment. Copying and pasting these values into your switch configuration
without adjustment can result in an unexpected outage.

Procedure


Use the following configuration syntax to allow traffic for VLANs 100 and 200 to pass through
to your instances.

auto bridge
iface bridge
bridge-vlan-aware yes
bridge-ports glob swp1-2
bridge-vids 100 200

7.4.3. About access ports


Not all NICs on your Compute node carry instance traffic, and so you do not need to configure all NICs
to allow multiple VLANs to pass through. Access ports require only one VLAN, and might fulfill other
operational requirements, such as transporting management traffic or Block Storage data. These ports
are commonly known as access ports and usually require a simpler configuration than trunk ports.

7.4.4. Configuring access ports for a Cumulus Linux switch


This configuration assumes that your physical node has an ethernet cable connected to the interface on
the physical switch. Cumulus Linux switches use eth for management interfaces and swp for
access/trunk ports.

IMPORTANT

These values are examples. You must change the values in this example to match those in
your environment. Copying and pasting these values into your switch configuration
without adjustment can result in an unexpected outage.

Procedure

Using the example from the Figure 7.1, “Sample network layout” diagram, swp1 (on a Cumulus
Linux switch) is configured as an access port.

auto bridge
iface bridge
bridge-vlan-aware yes
bridge-ports glob swp1-2
bridge-vids 100 200

auto swp1
iface swp1
bridge-access 100

auto swp2
iface swp2
bridge-access 200

7.4.5. About LACP port aggregation


You can use Link Aggregation Control Protocol (LACP) to bundle multiple physical NICs together to
form a single logical channel. Also known as 802.3ad (or bonding mode 4 in Linux), LACP creates a


dynamic bond for load-balancing and fault tolerance. You must configure LACP at both physical ends:
on the physical NICs, and on the physical switch ports.

Additional resources

Network Interface Bonding in the Advanced Overcloud Customization guide.

7.4.6. About MTU settings


You must adjust your MTU size for certain types of network traffic. For example, jumbo frames (9000
bytes) are required for certain NFS or iSCSI traffic.

NOTE

You must change MTU settings from end-to-end on all hops that the traffic is expected
to pass through, including any virtual switches.

Additional resources

Configuring maximum transmission unit (MTU) settings

7.4.7. Configuring MTU settings for a Cumulus Linux switch

Procedure

This example enables jumbo frames on your Cumulus Linux switch.

auto swp1
iface swp1
mtu 9000

NOTE

Remember to apply your changes by reloading the updated configuration: sudo


ifreload -a

7.4.8. About LLDP discovery


The ironic-python-agent service listens for LLDP packets from connected switches. The collected
information can include the switch name, port details, and available VLANs. Similar to Cisco Discovery
Protocol (CDP), LLDP assists with the discovery of physical hardware during the director introspection
process.

7.4.9. Configuring LLDP for a Cumulus Linux switch


By default, the LLDP service lldpd runs as a daemon and starts when the switch boots.

Procedure

To view all LLDP neighbors on all ports/interfaces, run the following command:

cumulus@switch$ netshow lldp


Local Port Speed Mode Remote Port Remote Host Summary


---------- --- --------- ----- ----- ----------- --------
eth0 10G Mgmt ==== swp6 mgmt-sw IP: 10.0.1.11/24
swp51 10G Interface/L3 ==== swp1 spine01 IP: 10.0.0.11/32
swp52 10G Interface/L3 ==== swp1 spine02 IP: 10.0.0.11/32

7.5. CONFIGURING AN EXTREME EXOS SWITCH

7.5.1. About trunk ports


With OpenStack Networking you can connect instances to the VLANs that already exist on your physical
network. The term trunk is used to describe a port that allows multiple VLANs to traverse through the
same port. Using these ports, VLANs can span across multiple switches, including virtual switches. For
example, traffic tagged as VLAN110 in the physical network reaches the Compute node, where the
8021q module directs the tagged traffic to the appropriate VLAN on the vSwitch.

7.5.2. Configuring trunk ports on an Extreme Networks EXOS switch


If using an X-670 series switch, refer to the following example to allow traffic for VLANs 110 and 111 to
pass through to your instances.

IMPORTANT

These values are examples. You must change the values in this example to match those in
your environment. Copying and pasting these values into your switch configuration
without adjustment can result in an unexpected outage.

Procedure

This configuration assumes that your physical node has an ethernet cable connected to
interface 24 on the physical switch. In this example, DATA and MNGT are the VLAN names.

#create vlan DATA tag 110


#create vlan MNGT tag 111
#configure vlan DATA add ports 24 tagged
#configure vlan MNGT add ports 24 tagged

7.5.3. About access ports


Not all NICs on your Compute node carry instance traffic, and so you do not need to configure all NICs
to allow multiple VLANs to pass through. Access ports require only one VLAN, and might fulfill other
operational requirements, such as transporting management traffic or Block Storage data. These ports
are commonly known as access ports and usually require a simpler configuration than trunk ports.

7.5.4. Configuring access ports for an Extreme Networks EXOS switch


This configuration assumes that your physical node has an ethernet cable connected to interface 10 on
the physical switch.

IMPORTANT

These values are examples. You must change the values in this example to match those in
your environment. Copying and pasting these values into your switch configuration
without adjustment can result in an unexpected outage.

Procedure

In this configuration example, on an Extreme Networks X-670 series switch, port 10 is used as an
access port for eth1.

create vlan VLANNAME tag NUMBER


configure vlan Default delete ports PORTSTRING
configure vlan VLANNAME add ports PORTSTRING untagged

For example:

#create vlan DATA tag 110


#configure vlan Default delete ports 10
#configure vlan DATA add ports 10 untagged

7.5.5. About LACP port aggregation


You can use Link Aggregation Control Protocol (LACP) to bundle multiple physical NICs together to
form a single logical channel. Also known as 802.3ad (or bonding mode 4 in Linux), LACP creates a
dynamic bond for load-balancing and fault tolerance. You must configure LACP at both physical ends:
on the physical NICs, and on the physical switch ports.

Additional resources

Network Interface Bonding in the Advanced Overcloud Customization guide.

7.5.6. Configuring LACP on the physical NIC


You can configure Link Aggregation Control Protocol (LACP) on a physical NIC.

Procedure

1. Edit the /home/stack/network-environment.yaml file:

- type: linux_bond
name: bond1
mtu: 9000
bonding_options: {get_param: BondInterfaceOvsOptions}
members:
- type: interface
name: nic3
mtu: 9000
primary: true
- type: interface
name: nic4
mtu: 9000

2. Configure the Open vSwitch bridge to use LACP:


BondInterfaceOvsOptions:
"mode=802.3ad"

Additional resources

Network Interface Bonding in the Advanced Overcloud Customization guide

7.5.7. Configuring LACP on an Extreme Networks EXOS switch

Procedure

In this example, the Compute node has two NICs using VLAN 100:

enable sharing MASTERPORT grouping ALL_LAG_PORTS lacp


configure vlan VLANNAME add ports PORTSTRING tagged

For example:

#enable sharing 11 grouping 11,12 lacp


#configure vlan DATA add port 11 untagged

NOTE

You might need to adjust the timeout period in the LACP negotiation script. For
more information, see
https://gtacknowledge.extremenetworks.com/articles/How_To/LACP-
configured-ports-interfere-with-PXE-DHCP-on-servers

7.5.8. About MTU settings


You must adjust your MTU size for certain types of network traffic. For example, jumbo frames (9000
bytes) are required for certain NFS or iSCSI traffic.

NOTE

You must change MTU settings from end-to-end on all hops that the traffic is expected
to pass through, including any virtual switches.

Additional resources

Configuring maximum transmission unit (MTU) settings

7.5.9. Configuring MTU settings on an Extreme Networks EXOS switch

Procedure

Run the commands in this example to enable jumbo frames on an Extreme Networks EXOS
switch and configure support for forwarding IP packets with 9000 bytes:

enable jumbo-frame ports PORTSTRING


configure ip-mtu 9000 vlan VLANNAME


Example

# enable jumbo-frame ports 11


# configure ip-mtu 9000 vlan DATA

7.5.10. About LLDP discovery


The ironic-python-agent service listens for LLDP packets from connected switches. The collected
information can include the switch name, port details, and available VLANs. Similar to Cisco Discovery
Protocol (CDP), LLDP assists with the discovery of physical hardware during the director introspection
process.

7.5.11. Configuring LLDP settings on an Extreme Networks EXOS switch

Procedure

In this example, LLDP is enabled on an Extreme Networks EXOS switch. 11 represents the port
string:

enable lldp ports 11

7.6. CONFIGURING A JUNIPER EX SERIES SWITCH

7.6.1. About trunk ports


With OpenStack Networking you can connect instances to the VLANs that already exist on your physical
network. The term trunk is used to describe a port that allows multiple VLANs to traverse through the
same port. Using these ports, VLANs can span across multiple switches, including virtual switches. For
example, traffic tagged as VLAN110 in the physical network reaches the Compute node, where the
8021q module directs the tagged traffic to the appropriate VLAN on the vSwitch.

7.6.2. Configuring trunk ports for a Juniper EX Series switch

Procedure

If using a Juniper EX series switch running Juniper JunOS, use the following configuration
syntax to allow traffic for VLANs 110 and 111 to pass through to your instances.
This configuration assumes that your physical node has an ethernet cable connected to
interface ge-1/0/12 on the physical switch.

IMPORTANT

These values are examples. You must change the values in this example to match
those in your environment. Copying and pasting these values into your switch
configuration without adjustment can result in an unexpected outage.

ge-1/0/12 {
description Trunk to Compute Node;
unit 0 {
family ethernet-switching {


port-mode trunk;
vlan {
members [110 111];
}
native-vlan-id 2;
}
}
}

7.6.3. About access ports


Not all NICs on your Compute node carry instance traffic, and so you do not need to configure all NICs
to allow multiple VLANs to pass through. Access ports require only one VLAN, and might fulfill other
operational requirements, such as transporting management traffic or Block Storage data. These ports
are commonly known as access ports and usually require a simpler configuration than trunk ports.

7.6.4. Configuring access ports for a Juniper EX Series switch


This example, on a Juniper EX series switch, shows ge-1/0/13 as an access port for eth1.

IMPORTANT

These values are examples. You must change the values in this example to match those in
your environment. Copying and pasting these values into your switch configuration
without adjustment can result in an unexpected outage.

Procedure
This configuration assumes that your physical node has an ethernet cable connected to interface ge-
1/0/13 on the physical switch.

ge-1/0/13 {
description Access port for Compute Node
unit 0 {
family ethernet-switching {
port-mode access;
vlan {
members 200;
}
native-vlan-id 2;
}
}
}

7.6.5. About LACP port aggregation


You can use Link Aggregation Control Protocol (LACP) to bundle multiple physical NICs together to
form a single logical channel. Also known as 802.3ad (or bonding mode 4 in Linux), LACP creates a
dynamic bond for load-balancing and fault tolerance. You must configure LACP at both physical ends:
on the physical NICs, and on the physical switch ports.
Additional resources

Network Interface Bonding in the Advanced Overcloud Customization guide.

7.6.6. Configuring LACP on the physical NIC


You can configure Link Aggregation Control Protocol (LACP) on a physical NIC.

Procedure

1. Edit the /home/stack/network-environment.yaml file:

- type: linux_bond
name: bond1
mtu: 9000
bonding_options: {get_param: BondInterfaceOvsOptions}
members:
- type: interface
name: nic3
mtu: 9000
primary: true
- type: interface
name: nic4
mtu: 9000

2. Configure the Open vSwitch bridge to use LACP:

BondInterfaceOvsOptions:
"mode=802.3ad"

Additional resources

Network Interface Bonding in the Advanced Overcloud Customization guide

7.6.7. Configuring LACP for a Juniper EX Series switch


In this example, the Compute node has two NICs using VLAN 100.

Procedure

1. Physically connect the Compute node’s two NICs to the switch (for example, ports 12 and 13).

2. Create the port aggregate:

chassis {
aggregated-devices {
ethernet {
device-count 1;
}
}
}

3. Configure switch ports 12 (ge-1/0/12) and 13 (ge-1/0/13) to join the port aggregate ae1:


interfaces {
ge-1/0/12 {
gigether-options {
802.3ad ae1;
}
}
ge-1/0/13 {
gigether-options {
802.3ad ae1;
}
}
}

NOTE

For Red Hat OpenStack Platform director deployments, in order to PXE boot
from the bond, you must configure one of the bond members as lacp force-up
to ensure that only one bond member comes up during introspection and first
boot. The bond member that you configure with lacp force-up must be the same
bond member that has the MAC address in instackenv.json (the MAC address
known to ironic must be the same MAC address configured with force-up).

4. Enable LACP on port aggregate ae1:

interfaces {
ae1 {
aggregated-ether-options {
lacp {
active;
}
}
}
}

5. Add aggregate ae1 to VLAN 100:

interfaces {
ae1 {
vlan-tagging;
native-vlan-id 2;
unit 100 {
vlan-id 100;
}
}
}

6. Review your new port channel. The resulting output lists the new port aggregate ae1 with
member ports ge-1/0/12 and ge-1/0/13:

> show lacp statistics interfaces ae1

Aggregated interface: ae1


LACP Statistics: LACP Rx LACP Tx Unknown Rx Illegal Rx


ge-1/0/12 0 0 0 0
ge-1/0/13 0 0 0 0

NOTE

Remember to apply your changes by running the commit command.

7.6.8. About MTU settings


You must adjust your MTU size for certain types of network traffic. For example, jumbo frames (9000
bytes) are required for certain NFS or iSCSI traffic.

NOTE

You must change MTU settings from end-to-end on all hops that the traffic is expected
to pass through, including any virtual switches.

Additional resources

Configuring maximum transmission unit (MTU) settings

7.6.9. Configuring MTU settings for a Juniper EX Series switch


This example enables jumbo frames on your Juniper EX4200 switch.

NOTE

The MTU value is calculated differently depending on whether you are using Juniper or
Cisco devices. For example, 9216 on Juniper is equivalent to 9202 on Cisco. The extra
bytes are used for L2 headers: Cisco adds these automatically to the MTU value that you
specify, whereas on Juniper the usable MTU is 14 bytes smaller than the value that you
specify. Therefore, to support an MTU of 9000 on the VLANs, you must configure an MTU
of 9014 on Juniper.
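
For illustration only, the arithmetic behind the values in this note:

  9000 bytes (VLAN MTU required by the instances)
+   14 bytes (L2 header overhead that Juniper does not add automatically)
= 9014 bytes (MTU to configure on the Juniper interface)

Conversely, an interface MTU of 9216 on Juniper leaves a usable MTU of 9216 - 14 = 9202 bytes, which matches the equivalent Cisco setting.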

Procedure

1. For Juniper EX series switches, MTU settings are set for individual interfaces. These commands
configure jumbo frames on the ge-1/0/14 and ge-1/0/15 ports:

set interfaces ge-1/0/14 mtu 9216


set interfaces ge-1/0/15 mtu 9216

NOTE

Remember to save your changes by running the commit command.

2. If using a LACP aggregate, you will need to set the MTU size there, and not on the member
NICs. For example, this setting configures the MTU size for the ae1 aggregate:

set interfaces ae1 mtu 9216


7.6.10. About LLDP discovery


The ironic-python-agent service listens for LLDP packets from connected switches. The collected
information can include the switch name, port details, and available VLANs. Similar to Cisco Discovery
Protocol (CDP), LLDP assists with the discovery of physical hardware during the director introspection
process.

7.6.11. Configuring LLDP for a Juniper EX Series switch


You can enable LLDP globally for all interfaces, or just for individual ones.

Procedure

Use the following to enable LLDP globally on your Juniper EX4200 switch:

lldp {
    interface all {
        enable;
    }
}

Use the following to enable LLDP for the single interface ge-1/0/14:

lldp {
    interface ge-1/0/14 {
        enable;
    }
}

NOTE

Remember to apply your changes by running the commit command.


CHAPTER 8. CONFIGURING MAXIMUM TRANSMISSION UNIT (MTU) SETTINGS

8.1. MTU OVERVIEW


OpenStack Networking can calculate the largest possible maximum transmission unit (MTU) size that
you can apply safely to instances. The MTU value specifies the maximum amount of data that a single
network packet can transfer; this number is variable depending on the most appropriate size for the
application. For example, NFS shares might require a different MTU size to that of a VoIP application.

NOTE

You can use the openstack network show <network_name> command to view the
largest possible MTU values that OpenStack Networking calculates. net-mtu is a neutron
API extension that is not present in some implementations. The MTU value that you
require can be advertised to DHCPv4 clients for automatic configuration, if supported by
the instance, as well as to IPv6 clients through Router Advertisement (RA) packets. To
send Router Advertisements, the network must be attached to a router.

You must configure MTU settings consistently from end-to-end. This means that the MTU setting must
be the same at every point the packet passes through, including the VM, the virtual network
infrastructure, the physical network, and the destination server.

For example, the circles in the following diagram indicate the various points where an MTU value must
be adjusted for traffic between an instance and a physical server. You must change the MTU value for
every interface that handles network traffic to accommodate packets of a particular MTU size. This is
necessary if traffic travels from the instance 192.168.200.15 through to the physical server 10.20.15.25:

Inconsistent MTU values can result in several network issues, the most common being random packet
loss that results in connection drops and slow network performance. Such issues are problematic to


troubleshoot because you must identify and examine every possible network point to ensure it has the
correct MTU value.

8.2. CONFIGURING MTU SETTINGS IN DIRECTOR


This example demonstrates how to set the MTU using the NIC config templates. You must set the MTU
on the bridge, bond (if applicable), interface(s), and VLAN(s):

-
type: ovs_bridge
name: br-isolated
use_dhcp: false
mtu: 9000 # <--- Set MTU
members:
-
type: ovs_bond
name: bond1
mtu: 9000 # <--- Set MTU
ovs_options: {get_param: BondInterfaceOvsOptions}
members:
-
type: interface
name: ens15f0
mtu: 9000 # <--- Set MTU
primary: true
-
type: interface
name: enp131s0f0
mtu: 9000 # <--- Set MTU
-
type: vlan
device: bond1
vlan_id: {get_param: InternalApiNetworkVlanID}
mtu: 9000 # <--- Set MTU
addresses:
-
ip_netmask: {get_param: InternalApiIpSubnet}
-
type: vlan
device: bond1
mtu: 9000 # <--- Set MTU
vlan_id: {get_param: TenantNetworkVlanID}
addresses:
-
ip_netmask: {get_param: TenantIpSubnet}
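
After you deploy the overcloud with these settings, you might want to spot-check the MTU on a node. The following commands are illustrative only and assume the interface names from the example above (bond1 and ens15f0); substitute the names used in your environment:

$ ip link show bond1 | grep -o 'mtu [0-9]*'
$ ip link show ens15f0 | grep -o 'mtu [0-9]*'

Each command should report mtu 9000 if the template was applied correctly.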

8.3. REVIEWING THE RESULTING MTU CALCULATION


You can view the calculated MTU value, which is the largest possible MTU value that instances can use.
Use this calculated MTU value to configure all interfaces involved in the path of network traffic.

# openstack network show <network>
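
For example, to display only the calculated MTU value rather than the full network details, you can restrict the output to the mtu column:

# openstack network show <network> -c mtu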


CHAPTER 9. CONFIGURING QUALITY OF SERVICE (QOS) POLICIES
You can offer varying service levels for instances by using quality of service (QoS) policies to apply rate
limits to egress and ingress traffic.

You can apply QoS policies to individual ports. You can also apply QoS policies to a project network,
where ports with no specific policy attached inherit the policy.

NOTE

Internal network owned ports, such as DHCP and internal router ports, are excluded from
network policy application.

You can apply, modify, or remove QoS policies dynamically. However, for guaranteed minimum
bandwidth QoS policies, you can only apply modifications when there are no instances that use any of
the ports the policy is assigned to.

NOTE

Currently, the Modular Layer 2 plug-in with the Open Virtual Network mechanism driver
(ML2/OVN) does not support QoS policies.

9.1. QOS RULES


You can configure the following rule types to define a quality of service (QoS) policy in the Red Hat
OpenStack Platform (RHOSP) Networking service (neutron):

Bandwidth limit (bandwidth_limit)


Provides bandwidth limitations on networks, ports, or floating IPs. If implemented, any traffic that
exceeds the specified rate is dropped.
Minimum bandwidth (minimum_bandwidth)
Provides minimum bandwidth constraints on certain types of traffic. If implemented, best efforts are
made to provide no less than the specified bandwidth to each port on which the rule is applied.
DSCP marking (dscp_marking)
Marks network traffic with a Differentiated Services Code Point (DSCP) value.

QoS policies can be enforced in various contexts, including virtual machine instance placements, floating
IP assignments, and gateway IP assignments.

Depending on the enforcement context and on the mechanism driver you use, a QoS rule affects egress
traffic (upload from instance), ingress traffic (download to instance), or both.

Table 9.1. Supported traffic direction by driver (all QoS rule types)

Rule                  Supported traffic direction by mechanism driver
                      ML2/OVS                      ML2/SR-IOV         ML2/OVN

Bandwidth limit       Egress [1][2] and ingress    Egress only [3]    Egress and ingress

Minimum bandwidth     Egress only [4][5]           Egress only        Currently, no support [6]

DSCP marking          Egress only                  N/A                Egress only

[1] The OVS egress bandwidth limit is performed in the TAP interface and is traffic policing, not traffic
shaping.

[2] In RHOSP 16.2.2 and later, the OVS egress bandwidth limit is supported in hardware offloaded ports
by applying the QoS policy in the network interface using ip link commands.

[3] The mechanism drivers ignore the max_burst_kbps parameter because they do not support it.

[4] Rule applies only to non-tunnelled networks: flat and VLAN.

[5] The OVS egress minimum bandwidth is supported in hardware offloaded ports by applying the QoS
policy in the network interface using ip link commands.

[6] https://bugzilla.redhat.com/show_bug.cgi?id=2060310

Table 9.2. Supported traffic direction by driver for placement reporting and scheduling (minimum
bandwidth only)

Enforcement type      Supported traffic direction by mechanism driver
                      ML2/OVS              ML2/SR-IOV           ML2/OVN

Placement             Egress and ingress   Egress and ingress   Currently, no support

Table 9.3. Supported traffic direction by driver for enforcement types (bandwidth limit only)

Enforcement type      Supported traffic direction by mechanism driver
                      ML2/OVS              ML2/OVN

Floating IP           Egress and ingress   Egress and ingress

Gateway IP            Egress and ingress   Currently, no support [1]

[1] https://bugzilla.redhat.com/show_bug.cgi?id=2064185

Additional resources

Creating and applying a bandwidth limit QoS policy and rule

Creating and applying a guaranteed minimum bandwidth QoS policy and rule


Creating and applying a DSCP marking QoS policy and rule for egress traffic

9.2. CREATING AND APPLYING A QOS POLICY AND RULE


To create a quality of service (QoS) policy and rule and apply the policy to a port, complete the following
steps:

Procedure

1. Identify the ID of the project you want to create the QoS policy for:

(overcloud) $ openstack project list

+----------------------------------+---------+
| ID                               | Name    |
+----------------------------------+---------+
| 8c409e909fb34d69bc896ab358317d60 | admin   |
| 92b6c16c7c7244378a062be0bfd55fa0 | service |
+----------------------------------+---------+

2. Create the new QoS policy:

(overcloud) $ openstack network qos policy create --share --project <project_ID>


<policy_name>

3. Create new rules for the QoS policy:

(overcloud) $ openstack network qos rule create --type <rule-type> [rule properties]
<policy_name>

Table 9.4. Rule properties

Property           Description

max_kbps           The maximum rate (in Kbps) that the instance can send.

max_burst_kbps     The maximum amount of data (in kbits) that the port can send in an
                   instant if the token buffer is full. The token buffer replenishes at
                   a "max_kbps" rate.

                   The burst value for TCP traffic can be set as 80% of the desired
                   bandwidth limit value. For example, if the bandwidth limit is set to
                   1000kbps, then a burst value of 800kbps is sufficient.

                   NOTE

                   If the burst value is set too low, bandwidth usage is throttled even
                   with a proper bandwidth limit setting, resulting in a lower than
                   expected bandwidth.

                   If the configured burst value is too high, too few packets could be
                   limited, resulting in a higher than expected bandwidth limit.

min-kbps           The minimum bandwidth (in Kbps) guaranteed to an instance.

ingress/egress     The direction of traffic the rule is applied to. From the perspective
                   of a cloud server, ingress indicates download, and egress indicates
                   upload.

dscp-mark          Specifies the decimal value of a DSCP mark.

4. Configure a port or network to apply the policy to. You can update an existing port or network,
or create a new port or network to apply the policy to:

Apply the policy to an existing port:

(overcloud) $ openstack port set --qos-policy <policy_name> <port_name|port_ID>

Create a new port:


(overcloud) $ openstack port create --qos-policy <policy_name> --network


<network_name|network_ID> <port_name|port_ID>

Apply the policy to an existing network:

(overcloud) $ openstack network set --qos-policy <policy_name>


<network_name|network_ID>

Create a new network:

(overcloud) $ openstack network create --qos-policy <policy_name> <network_name>
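
Example
The following commands show the complete flow with a bandwidth limit rule. The policy name bw-limiter, the port name vm-port1, and the rate values are illustrations only, not values from your environment. The burst value of 2400 kbits follows the 80% guideline from Table 9.4 for a 3000 kbps limit:

(overcloud) $ openstack network qos policy create --share \
  --project 8c409e909fb34d69bc896ab358317d60 bw-limiter

(overcloud) $ openstack network qos rule create --type bandwidth-limit \
  --max-kbps 3000 --max-burst-kbits 2400 --egress bw-limiter

(overcloud) $ openstack port set --qos-policy bw-limiter vm-port1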

9.3. CONTROLLING MINIMUM BANDWIDTH BY USING QOS POLICIES


A guaranteed minimum bandwidth QoS rule consists of two distinct contexts: back-end enforcement
and resource allocation scheduling.

The network back end, ML2/OVS or ML2/SR-IOV, attempts to guarantee that each port on which the
rule is applied has no less than the specified network bandwidth.

Resource allocation scheduling means that the Compute service (nova) chooses a host with enough
bandwidth on which to spawn a VM instance, after having queried the Placement service to retrieve the
amount of bandwidth on the different network interfaces and Compute nodes.

You can use QoS minimum bandwidth rules for either purpose or for both.

The following table identifies the Modular Layer 2 (ML2) mechanism drivers that support minimum
bandwidth QoS policies.

Table 9.5. ML2 mechanism drivers that support minimum bandwidth QoS

ML2 mechanism driver Agent VNIC types

ML2/SR-IOV sriovnicswitch direct

ML2/OVS openvswitch normal

Additional resources

Section 9.3.1, “Using network back-end policies to enforce a minimum bandwidth”

Section 9.3.2, “Scheduling instances by using minimum bandwidth QoS policies”

9.3.1. Using network back-end policies to enforce a minimum bandwidth


You can guarantee a minimum bandwidth for network traffic for ports on which Red Hat OpenStack
Platform (RHOSP) quality of service (QoS) policies are applied. These ports must be backed by a
physical network, either a flat or a VLAN network. Enforcing a minimum bandwidth on either an
ML2/OVS or ML2/SR-IOV back end consists of these steps:

1. Enable the QoS service plug-in for the RHOSP Networking service (neutron).


2. Redeploy your overcloud.

3. Create a minimum bandwidth QoS policy and configure its rules.

4. Apply the minimum bandwidth QoS policy to one or more ports.

Currently, the Modular Layer 2 plug-in with the Open Virtual Network mechanism driver (ML2/OVN)
does not support minimum bandwidth QoS rules.

Prerequisites

You must be the stack user with access to the RHOSP undercloud.

You must use the ML2/OVS or ML2/SR-IOV mechanism drivers.

Do not mix ports with and without bandwidth guarantees on the same physical interface, as the
ports without a guarantee might be denied necessary resources (starvation).

TIP

Create host aggregates to separate ports with bandwidth guarantees from those ports without
bandwidth guarantees.

Procedure

1. Log in to the undercloud host as the stack user.

2. Source the undercloud credentials file:

$ source ~/stackrc

3. Create a YAML custom environment file.

Example

$ vi /home/stack/templates/my-neutron-environment.yaml

4. Enable the QoS service plug-in for the RHOSP Networking service.
Your environment file must contain the keywords parameter_defaults. On a new line below
parameter_defaults add qos to the NeutronServicePlugins parameter.

Example

parameter_defaults:
NeutronServicePlugins: 'qos'

5. Run the deployment command and include the core heat templates, other environment files,
and this new custom environment file.
The order of the environment files is important as the parameters and resources defined in
subsequent environment files take precedence.

Example


$ openstack overcloud deploy --templates \


-e <other_environment_files> \
-e /home/stack/templates/my-neutron-environment.yaml

6. Source your credentials file.

Example

$ source ~/overcloudrc

7. Identify the ID of the project you want to create the QoS policy for:

$ openstack project list

Sample output

+----------------------------------+----------+
| ID | Name |
+----------------------------------+----------+
| 4b0b98f8c6c040f38ba4f7146e8680f5 | auditors |
| 519e6344f82e4c079c8e2eabb690023b | services |
| 80bf5732752a41128e612fe615c886c6 | demo |
| 98a2f53c20ce4d50a40dac4a38016c69 | admin |
+----------------------------------+----------+

8. Using the project ID from the previous step, create a QoS policy for the project.

Example
In this example, a QoS policy named guaranteed_min_bw is created for the admin project:

$ openstack network qos policy create --share \


--project 98a2f53c20ce4d50a40dac4a38016c69 guaranteed_min_bw

9. Configure the rules for the policy.

Example
In this example, QoS rules for ingress and egress with a minimum bandwidth of 40000000 kbps
are created for the policy named guaranteed_min_bw:

$ openstack network qos rule create \


--type minimum-bandwidth --min-kbps 40000000 \
--ingress guaranteed_min_bw

$ openstack network qos rule create \


--type minimum-bandwidth --min-kbps 40000000 \
--egress guaranteed_min_bw

10. Configure a port to apply the policy to.

Example

In this example, the guaranteed_min_bw policy is applied to port ID, 56x9aiw1-2v74-144x-


c2q8-ed8w423a6s12:

$ openstack port set --qos-policy guaranteed_min_bw \


56x9aiw1-2v74-144x-c2q8-ed8w423a6s12

Verification

ML2/SR-IOV
Using root access, log in to the Compute node, and show the details of the virtual functions that
are held in the physical function.

Example

# ip -details link show enp4s0f1

Sample output

50: enp4s0f1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq


master mx-bond state UP mode DEFAULT group default qlen 1000
link/ether 98:03:9b:9d:73:74 brd ff:ff:ff:ff:ff:ff permaddr 98:03:9b:9d:73:75 promiscuity 0
minmtu 68 maxmtu 9978
bond_slave state BACKUP mii_status UP link_failure_count 0 perm_hwaddr
98:03:9b:9d:73:75 queue_id 0 addrgenmode eui64 numtxqueues 320 numrxqueues 40
gso_max_size 65536 gso_max_segs 65535 portname p1 switchid 74739d00039b0398
vf 0 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable,
trust off, query_rss off
vf 1 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable,
trust off, query_rss off
vf 2 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable,
trust off, query_rss off
vf 3 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable,
trust off, query_rss off
vf 4 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable,
trust off, query_rss off
vf 5 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable,
trust off, query_rss off
vf 6 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable,
trust off, query_rss off
vf 7 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable,
trust off, query_rss off
vf 8 link/ether fa:16:3e:2a:d2:7f brd ff:ff:ff:ff:ff:ff, tx rate 999 (Mbps), max_tx_rate
999Mbps, spoof checking off, link-state disable, trust off, query_rss off
vf 9 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state disable,
trust off, query_rss off

ML2/OVS
Using root access, log in to the compute node, show the tc rules and classes on the physical
bridge interface.

Example

# tc class show dev mx-bond


Sample output

class htb 1:11 parent 1:fffe prio 0 rate 4Gbit ceil 34359Mbit burst 9000b cburst 8589b
class htb 1:1 parent 1:fffe prio 0 rate 72Kbit ceil 34359Mbit burst 9063b cburst 8589b
class htb 1:fffe root rate 34359Mbit ceil 34359Mbit burst 8589b cburst 8589b

Additional resources

Environment files in the Advanced Overcloud Customization guide

Including environment files in overcloud creation in the Advanced Overcloud Customization


guide

network qos policy create in the Command Line Interface Reference

network qos rule create in the Command Line Interface Reference

port set in the Command Line Interface Reference

9.3.2. Scheduling instances by using minimum bandwidth QoS policies


You can apply a minimum bandwidth QoS policy to a port to guarantee that the host on which its Red
Hat OpenStack Platform (RHOSP) VM instance is spawned has a minimum network bandwidth.
Configuration of minimum bandwidth placement enforcement consists of these steps:

1. Enabling the QoS and the Placement service plug-ins for the RHOSP Networking service
(neutron).

2. Configuring resource_provider_hypervisors for the neutron agent.

3. Configure the resource provider ingress and egress bandwidths for the relevant agents on each
Compute node.

4. Redeploy your overcloud.

5. Create a minimum bandwidth QoS policy and configure its rules.

6. Apply the minimum bandwidth QoS policy to one or more ports.

Prerequisites

The Placement service must support microversion 1.29.

The Compute service (nova) must support microversion 2.72.

The Networking service must support the following API extensions:

agent-resources-synced

port-resource-request

qos-bw-minimum-ingress

You must use the ML2/OVS or ML2/SR-IOV mechanism drivers.

You must be the stack user with access to the RHOSP undercloud.
You can only modify a minimum bandwidth QoS policy when there are no instances using any of
the ports the policy is assigned to. The Networking service cannot update the Placement API
usage information if a port is bound.

Procedure

1. Log in to the undercloud host as the stack user.

2. Source the undercloud credentials file:

$ source ~/stackrc

3. Create a YAML custom environment file.

Example

$ vi /home/stack/templates/my-neutron-environment.yaml

4. Enable the QoS and the Placement service plug-ins for the RHOSP Networking service.
Your environment file must contain the keywords parameter_defaults. On a new line below
parameter_defaults add qos and placement to the NeutronServicePlugins parameter.

Example

parameter_defaults:
NeutronServicePlugins: 'qos,placement'

5. To use minimum bandwidth placement enforcement, you must configure


resource_provider_hypervisors.
Beneath NeutronServicePlugins add the following configuration for
resource_provider_hypervisors depending on your Modular Layer 2 (ML2) mechanism driver.

ML2/SR-IOV

Example

parameter_defaults:
NeutronServicePlugins: 'qos,placement'
ExtraConfig:
Neutron::agents::ml2::sriov::resource_provider_hypervisors:
"ens5:%{hiera('fqdn_canonical')},ens6:%{hiera('fqdn_canonical')}"

ML2/OVS

Example

parameter_defaults:
NeutronServicePlugins: 'qos,placement'
ExtraConfig:
Neutron::agents::ml2::ovs::resource_provider_hypervisors

6. Optional: To mark vnic_types as not supported when multiple ML2 mechanism drivers support
them by default and multiple agents are being tracked in the Placement service, also add the
following configuration to your environment file:

parameter_defaults:
...
NeutronOvsVnicTypeBlacklist: direct
# NeutronSriovVnicTypeBlacklist: direct

7. Configure the resource provider ingress and egress bandwidths for the relevant agents on each
Compute node that needs to provide a minimum bandwidth.
You can configure only ingress or egress, or both, using the following formats:

Configure only egress bandwidth, in kbps:

NeutronOvsResourceProviderBandwidths: <bridge0>:<egress_kbps>:,<bridge1>:
<egress_kbps>:,...,<bridgeN>:<egress_kbps>:

Configure only ingress bandwidth, in kbps:

NeutronOvsResourceProviderBandwidths: <bridge0>::<ingress_kbps>,<bridge1>::
<ingress_kbps>,...,<bridgeN>::<ingress_kbps>

Configure both egress and ingress bandwidth, in kbps:

NeutronOvsResourceProviderBandwidths: <bridge0>:<egress_kbps>:<ingress_kbps>,
<bridge1>:<egress_kbps>:<ingress_kbps>,...,<bridgeN>:<egress_kbps>:
<ingress_kbps>

Example - OVS agent


To configure the resource provider ingress and egress bandwidths for the OVS agent, add
the following configuration to your network environment file:

parameter_defaults:
...
NeutronBridgeMappings: physnet0:br-physnet0
NeutronOvsResourceProviderBandwidths: br-physnet0:10000000:10000000

Example - SRIOV agent


To configure the resource provider ingress and egress bandwidths for the SRIOV agent, add
the following configuration to your network environment file:

parameter_defaults:
...
NeutronML2PhysicalNetworkMtus: physnet0:ens5,physnet0:ens6
NeutronSriovResourceProviderBandwidths:
ens5:40000000:40000000,ens6:40000000:40000000

8. Run the deployment command and include the core heat templates, other environment files,
and this new custom environment file.
The order of the environment files is important as the parameters and resources defined in
subsequent environment files take precedence.


$ openstack overcloud deploy --templates \


-e <other_environment_files> \
-e /home/stack/templates/my-network-environment.yaml

9. Source your credentials file.

Example

$ source ~/overcloudrc

10. Identify the ID of the project you want to create the QoS policy for:

$ openstack project list

Sample output

+----------------------------------+----------+
| ID | Name |
+----------------------------------+----------+
| 4b0b98f8c6c040f38ba4f7146e8680f5 | auditors |
| 519e6344f82e4c079c8e2eabb690023b | services |
| 80bf5732752a41128e612fe615c886c6 | demo |
| 98a2f53c20ce4d50a40dac4a38016c69 | admin |
+----------------------------------+----------+

11. Using the project ID from the previous step, create a QoS policy for the project.

Example
In this example, a QoS policy named guaranteed_min_bw is created for the admin project:

$ openstack network qos policy create --share \


--project 98a2f53c20ce4d50a40dac4a38016c69 guaranteed_min_bw

12. Configure the rules for the policy.

Example
In this example, QoS rules for ingress and egress with a minimum bandwidth of 40000000 kbps
are created for the policy named guaranteed_min_bw:

$ openstack network qos rule create \


--type minimum-bandwidth --min-kbps 40000000 \
--ingress guaranteed_min_bw
$ openstack network qos rule create \
--type minimum-bandwidth --min-kbps 40000000 \
--egress guaranteed_min_bw

13. Configure a port to apply the policy to.

Example
In this example, the guaranteed_min_bw policy is applied to port ID, 56x9aiw1-2v74-144x-
c2q8-ed8w423a6s12:


$ openstack port set --qos-policy guaranteed_min_bw \


56x9aiw1-2v74-144x-c2q8-ed8w423a6s12

Verification

1. Log in to the undercloud host as the stack user.

2. Source the undercloud credentials file:

$ source ~/stackrc

3. List all the available resource providers:

$ openstack --os-placement-api-version 1.17 resource provider list

Sample output

+--------------------------------------+-----------------------------------------------------+------------+--------------------------------------+--------------------------------------+
| uuid                                 | name                                                | generation | root_provider_uuid                   | parent_provider_uuid                 |
+--------------------------------------+-----------------------------------------------------+------------+--------------------------------------+--------------------------------------+
| 31d3d88b-bc3a-41cd-9dc0-fda54028a882 | dell-r730-014.localdomain                           | 28         | 31d3d88b-bc3a-41cd-9dc0-fda54028a882 | None                                 |
| 6b15ddce-13cf-4c85-a58f-baec5b57ab52 | dell-r730-063.localdomain                           | 18         | 6b15ddce-13cf-4c85-a58f-baec5b57ab52 | None                                 |
| e2f5082a-c965-55db-acb3-8daf9857c721 | dell-r730-063.localdomain:NIC Switch agent          | 0          | 6b15ddce-13cf-4c85-a58f-baec5b57ab52 | 6b15ddce-13cf-4c85-a58f-baec5b57ab52 |
| d2fb0ef4-2f45-53a8-88be-113b3e64ba1b | dell-r730-014.localdomain:NIC Switch agent          | 0          | 31d3d88b-bc3a-41cd-9dc0-fda54028a882 | 31d3d88b-bc3a-41cd-9dc0-fda54028a882 |
| f1ca35e2-47ad-53a0-9058-390ade93b73e | dell-r730-063.localdomain:NIC Switch agent:enp6s0f1 | 13         | 6b15ddce-13cf-4c85-a58f-baec5b57ab52 | e2f5082a-c965-55db-acb3-8daf9857c721 |
| e518d381-d590-5767-8f34-c20def34b252 | dell-r730-014.localdomain:NIC Switch agent:enp6s0f1 | 19         | 31d3d88b-bc3a-41cd-9dc0-fda54028a882 | d2fb0ef4-2f45-53a8-88be-113b3e64ba1b |
+--------------------------------------+-----------------------------------------------------+------------+--------------------------------------+--------------------------------------+

4. Check the bandwidth a specific resource provides.

(undercloud)$ openstack --os-placement-api-version 1.17 \


resource provider inventory list <rp_uuid>

Example
In this example, the bandwidth provided by interface enp6s0f1 on the host dell-r730-014 is
checked, using the resource provider UUID, e518d381-d590-5767-8f34-c20def34b252:


[stack@dell-r730-014 nova]$ openstack --os-placement-api-version 1.17 \


resource provider inventory list e518d381-d590-5767-8f34-c20def34b252

Sample output

+----------------------------+------------------+----------+------------+----------+-----------+----------+
| resource_class             | allocation_ratio | min_unit | max_unit   | reserved | step_size | total    |
+----------------------------+------------------+----------+------------+----------+-----------+----------+
| NET_BW_EGR_KILOBIT_PER_SEC | 1.0              | 1        | 2147483647 | 0        | 1         | 10000000 |
| NET_BW_IGR_KILOBIT_PER_SEC | 1.0              | 1        | 2147483647 | 0        | 1         | 10000000 |
+----------------------------+------------------+----------+------------+----------+-----------+----------+

5. To check claims against the resource provider when instances are running, run the following
command:

(undercloud)$ openstack --os-placement-api-version 1.17 \


resource provider show --allocations <rp_uuid>

Example
In this example, claims against the resource provider are checked on the host, dell-r730-014,
using the resource provider UUID, e518d381-d590-5767-8f34-c20def34b252:

[stack@dell-r730-014 nova]$ openstack --os-placement-api-version 1.17 resource provider


show --allocations e518d381-d590-5767-8f34-c20def34b252 -f value -c allocations

Sample output

{3cbb9e07-90a8-4154-8acd-b6ec2f894a83: {resources:
{NET_BW_EGR_KILOBIT_PER_SEC: 1000000, NET_BW_IGR_KILOBIT_PER_SEC:
1000000}}, 8848b88b-4464-443f-bf33-5d4e49fd6204: {resources:
{NET_BW_EGR_KILOBIT_PER_SEC: 1000000, NET_BW_IGR_KILOBIT_PER_SEC:
1000000}}, 9a29e946-698b-4731-bc28-89368073be1a: {resources:
{NET_BW_EGR_KILOBIT_PER_SEC: 1000000, NET_BW_IGR_KILOBIT_PER_SEC:
1000000}}, a6c83b86-9139-4e98-9341-dc76065136cc: {resources:
{NET_BW_EGR_KILOBIT_PER_SEC: 3000000, NET_BW_IGR_KILOBIT_PER_SEC:
3000000}}, da60e33f-156e-47be-a632-870172ec5483: {resources:
{NET_BW_EGR_KILOBIT_PER_SEC: 1000000, NET_BW_IGR_KILOBIT_PER_SEC:
1000000}}, eb582a0e-8274-4f21-9890-9a0d55114663: {resources:
{NET_BW_EGR_KILOBIT_PER_SEC: 3000000, NET_BW_IGR_KILOBIT_PER_SEC:
3000000}}}

Additional resources

Environment files in the Advanced Overcloud Customization guide

Including environment files in overcloud creation in the Advanced Overcloud Customization


guide

network qos policy create in the Command Line Interface Reference


network qos rule create in the Command Line Interface Reference

port set in the Command Line Interface Reference

9.4. CREATING AND APPLYING A BANDWIDTH LIMIT QOS POLICY AND RULE

You can create a QoS policy that limits the bandwidth on networks, ports or floating IPs, and drops any
traffic that exceeds the specified rate. To create and apply a bandwidth limit QoS policy and rule,
complete the following steps:

Procedure

1. If the qos extension is not already enabled for OpenStack Networking in


/etc/neutron/plugins/ml2/<agent_name>_agent.ini, then perform the following steps:

a. Create a custom environment file, and add the following configuration:

parameter_defaults:
NeutronSriovAgentExtensions: 'qos'

b. To apply this configuration, deploy the overcloud, adding your custom environment file to
the stack along with your other environment files:

(undercloud) $ openstack overcloud deploy --templates \


-e [your environment files]
-e /home/stack/templates/<custom-environment-file>.yaml

For more information, see "Modifying the Overcloud Environment" in the Director
Installation and Usage guide.

2. Identify the ID of the project you want to create the QoS policy for:

(overcloud) $ openstack project list


+----------------------------------+----------+
| ID | Name |
+----------------------------------+----------+
| 4b0b98f8c6c040f38ba4f7146e8680f5 | auditors |
| 519e6344f82e4c079c8e2eabb690023b | services |
| 80bf5732752a41128e612fe615c886c6 | demo |
| 98a2f53c20ce4d50a40dac4a38016c69 | admin |
+----------------------------------+----------+

3. Create a QoS policy named "bw-limiter" in the admin project:

(overcloud) $ openstack network qos policy create --share --project


98a2f53c20ce4d50a40dac4a38016c69 bw-limiter

4. Configure the rules for the "bw-limiter" policy:

(overcloud) $ openstack network qos rule create --type bandwidth-limit --max-kbps 3000 --max-burst-kbits 300 bw-limiter


5. Configure a port to apply the "bw-limiter" policy to:

(overcloud) $ openstack port set --qos-policy bw-limiter <port_name|port_ID>
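
You can also attach the same policy to a network instead of an individual port, so that ports created on that network without their own policy inherit the limit. This is a brief sketch; the network name is a placeholder:

(overcloud) $ openstack network set --qos-policy bw-limiter <network_name|network_ID>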

9.5. DSCP MARKING FOR EGRESS TRAFFIC


You can use differentiated services code point (DSCP) to implement quality-of-service (QoS) policies
on your network by embedding relevant values in the IP headers. The Networking service (neutron) QoS
policies can use DSCP marking to manage egress traffic on neutron ports and networks.

Procedure

1. If you are:

using ML2/OVN, skip to step 2.

using ML2/OVS without a tunneling protocol, skip to step 2.

using ML2/OVS with a tunneling protocol (VXLAN and GRE), then perform the following
steps:

a. Log in to the undercloud host as the stack user.

b. Source the undercloud credentials file:

$ source ~/stackrc

c. Create a custom YAML environment file.

Example

$ vi /home/stack/templates/my-neutron-environment.yaml

d. In the YAML environment file under parameter_defaults, add the following lines:

parameter_defaults:
  ControllerExtraConfig:
    neutron::config::server_config:
      agent/dscp_inherit:
        value: true

When dscp_inherit is true, the Networking service copies the DSCP value of the inner
header to the outer header.

e. Run the deployment command and include the core heat templates, environment files,
and this new custom environment file.

IMPORTANT

The order of the environment files is important as the parameters and


resources defined in subsequent environment files take precedence.

Example


$ openstack overcloud deploy --templates \


-e [your-environment-files] \
-e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs.yaml \
-e /home/stack/templates/my-neutron-environment.yaml

2. Source your credentials file.

Example

$ source ~/overcloudrc

3. Create a new QoS policy:

Example

openstack network qos policy create --project 98a2f53c20ce4d50a40dac4a38016c69 qos-web-servers

4. Create a DSCP rule and apply it to a policy.

Example
In this example, a DSCP rule is created using DSCP mark 18 and is applied to the qos-web-
servers policy:

openstack network qos rule create --type dscp-marking --dscp-mark 18 qos-web-servers

Sample output

Created a new dscp_marking_rule:


+-----------+--------------------------------------+
| Field | Value |
+-----------+--------------------------------------+
| dscp_mark | 18 |
| id | d7f976ec-7fab-4e60-af70-f59bf88198e6 |
+-----------+--------------------------------------+

5. You can change the DSCP value assigned to a rule.

Example

openstack network qos rule set --dscp-mark 22 qos-web-servers d7f976ec-7fab-4e60-af70-f59bf88198e6

6. You can delete a DSCP rule.

Example

openstack network qos rule delete qos-web-servers d7f976ec-7fab-4e60-af70-f59bf88198e6

Verification


Confirm that the DSCP rule (d7f976ec-7fab-4e60-af70-f59bf88198e6) is applied to the QoS


policy (qos-web-servers).

Example

openstack network qos rule list qos-web-servers

Sample output

+-----------+--------------------------------------+
| dscp_mark | id |
+-----------+--------------------------------------+
| 18 | d7f976ec-7fab-4e60-af70-f59bf88198e6 |
+-----------+--------------------------------------+
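
To put the qos-web-servers policy into effect, attach it to a port or a network, as with other QoS policies. The following is a sketch; the port and network names are placeholders:

openstack port set --qos-policy qos-web-servers <port_name|port_ID>
openstack network set --qos-policy qos-web-servers <network_name|network_ID>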

Additional resources

Environment files in the Advanced Overcloud Customization guide

Including environment files in overcloud creation in the Advanced Overcloud Customization


guide

network qos rule create in the Command Line Interface Reference

network qos rule set in the Command Line Interface Reference

network qos rule delete in the Command Line Interface Reference

network qos rule list in the Command Line Interface Reference

9.6. HOW TO VERIFY THE APPLICATION OF YOUR QOS POLICY AND RULES

Use the following commands to verify the creation and application of your quality of service (QoS)
policy and rules.

Action                                     Command

List the available QoS policies            $ openstack network qos policy list

Show details of a specific QoS policy      $ openstack network qos policy show <policy_name>

List the available QoS rule types          $ openstack network qos rule type list

List the rules of a specific QoS policy    $ openstack network qos rule list <policy_name>

Show details of a specific rule type       $ openstack network qos rule type show <rule_type>

List the available ports                   $ openstack port list

Show details of a specific port            $ openstack port show <port_ID/port_name>

9.7. RBAC FOR QOS POLICIES


You can add a role-based access control (RBAC) for quality-of-service (QoS) policies. As a result, you
can now make QoS policies available to certain projects.

For example, you can now create a QoS policy that allows for lower-priority network traffic, and have it
only apply to certain projects. Run the following command to assign the bw-limiter policy to the project,
demo:

# openstack network rbac create --type qos_policy --target-project


80bf5732752a41128e612fe615c886c6 --action access_as_shared bw-limiter
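
To review which projects a QoS policy is shared with, you can filter the RBAC entries by type. This is an optional check; the --type filter shown here is assumed to be available in your client version:

# openstack network rbac list --type qos_policy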


CHAPTER 10. CONFIGURING BRIDGE MAPPINGS


This chapter contains information about configuring bridge mappings in Red Hat OpenStack Platform.

10.1. OVERVIEW OF BRIDGE MAPPINGS


A bridge mapping associates a physical network name (an interface label) to a bridge created with OVS
or OVN. In this example, the physical name (datacentre) is mapped to the external bridge (br-ex):

bridge_mappings = datacentre:br-ex

Bridge mappings allow provider network traffic to reach the physical network. Traffic leaves the
provider network from the qg-xxx interface of the router and arrives at br-int. For OVS, a patch port
between br-int and br-ex then allows the traffic to pass through the bridge of the provider network and
out to the physical network. OVN creates a patch port on a hypervisor only when there is a VM bound to
the hypervisor that requires the port.

You configure the bridge mapping on the network node on which the router is scheduled. Router traffic
can egress using the correct physical network, as represented by the provider network.

NOTE

The Networking service supports only one bridge for each physical network. Do not map
more than one physical network to the same bridge.

10.2. TRAFFIC FLOW


Each external network is represented by an internal VLAN ID, which is tagged to the router qg-xxx port.
When a packet reaches phy-br-ex, the br-ex port strips the VLAN tag and moves the packet to the
physical interface and then to the external network.

The return packet from the external network arrives on br-ex and moves to br-int using phy-br-ex <-> int-br-ex. When the packet passes through br-ex to br-int, the packet's external VLAN ID is replaced by an internal VLAN tag in br-int, and this allows qg-xxx to accept the packet.

In the case of egress packets, the packet's internal VLAN tag is replaced with an external VLAN tag in br-ex (or in the external bridge that is defined in the network_vlan_ranges parameter).

10.3. CONFIGURING BRIDGE MAPPINGS


Red Hat OpenStack Platform (RHOSP) director uses predefined NIC templates to install and configure
your initial networking configuration.

You can customize aspects of your initial networking configuration, such as bridge mappings, by using
the NeutronBridgeMappings parameter in a customized environment file. You call the environment file
in the openstack overcloud deploy command.

Prerequisites

You must configure bridge mappings on the network node on which the router is scheduled.

For both ML2/OVS and ML2/OVN DVR configurations, you must configure bridge mappings
for the compute nodes, too.


Procedure

1. Create a custom environment file and add the NeutronBridgeMappings heat parameter with
values that are appropriate for your site.

parameter_defaults:
NeutronBridgeMappings: "datacentre:br-ex,tenant:br-tenant"

The NeutronBridgeMappings heat parameter associates a physical name ( datacentre) to a


bridge (br-ex).

NOTE

When the NeutronBridgeMappings parameter is not used, the default maps the
external bridge on hosts (br-ex) to a physical name (datacentre).

2. To apply this configuration, deploy the overcloud, adding your custom environment file to the
stack along with your other environment files.

(undercloud) $ openstack overcloud deploy --templates \


-e [your environment files]
-e /home/stack/templates/<custom-environment-file>.yaml

3. You are ready for the next steps, which are the following:

a. Using the network VLAN ranges, create the provider networks that represent the corresponding external networks, as shown in the sketch after this list. (You use the physical name when creating neutron provider networks or floating IP networks.)

b. Connect the external networks to your project networks with router interfaces.
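
The following is a minimal sketch of step 3a. It assumes a VLAN provider network that uses the datacentre physical name from the bridge mapping; the network name, segmentation ID, and subnet values are illustrative only:

$ openstack network create --external --provider-network-type vlan \
  --provider-physical-network datacentre --provider-segment 101 external-net
$ openstack subnet create --network external-net --no-dhcp \
  --subnet-range 192.0.2.0/24 --allocation-pool start=192.0.2.50,end=192.0.2.100 external-subnet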

Additional resources

Network environment parameters in the Advanced Overcloud Customization guide

Including environment files in overcloud creation in the Advanced Overcloud Customization


guide

10.4. MAINTAINING BRIDGE MAPPINGS FOR OVS


After removing any OVS bridge mappings, you must perform a subsequent cleanup to ensure that the
bridge configuration is cleared of any associated patch port entries. You can perform this operation in
the following ways:

Manual port cleanup - requires careful removal of the superfluous patch ports. No outages of
network connectivity are required.

Automated port cleanup - performs an automated cleanup, but requires an outage, and requires
that the necessary bridge mappings be re-added. Choose this option during scheduled
maintenance windows when network connectivity outages can be tolerated.

NOTE

When OVN bridge mappings are removed, the OVN controller automatically cleans up
any associated patch ports.

10.4.1. Cleaning up OVS patch ports manually


After removing any OVS bridge mappings, you must also remove the associated patch ports.

Prerequisites

The patch ports that you are cleaning up must be Open Virtual Switch (OVS) ports.

A system outage is not required to perform a manual patch port cleanup.

You can identify the patch ports to cleanup by their naming convention:

In br-$external_bridge patch ports are named phy-<external bridge name> (for example,
phy-br-ex2).

In br-int patch ports are named int-<external bridge name> (for example, int-br-ex2).

Procedure

1. Use ovs-vsctl to remove the OVS patch ports associated with the removed bridge mapping
entry:

# ovs-vsctl del-port br-ex2 datacentre


# ovs-vsctl del-port br-tenant tenant

2. Restart neutron-openvswitch-agent:

# service neutron-openvswitch-agent restart
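
Optionally, confirm that the superfluous patch ports are no longer attached to the bridges. The bridge and patch port names below follow the naming convention described in the prerequisites and are illustrative:

# ovs-vsctl list-ports br-int | grep int-br-ex2
# ovs-vsctl list-ports br-ex2 | grep phy-br-ex2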

10.4.2. Cleaning up OVS patch ports automatically


After removing any OVS bridge mappings, you must also remove the associated patch ports.

NOTE

When OVN bridge mappings are removed, the OVN controller automatically cleans up
any associated patch ports.

Prerequisites

The patch ports that you are cleaning up must be Open Virtual Switch (OVS) ports.

Cleaning up patch ports automatically with the neutron-ovs-cleanup command causes a


network connectivity outage, and should be performed only during a scheduled maintenance
window.

Use the flag --ovs_all_ports to remove all patch ports from br-int, cleaning up tunnel ends from
br-tun, and patch ports from bridge to bridge.


The neutron-ovs-cleanup command unplugs all patch ports (instances, qdhcp/qrouter, among
others) from all OVS bridges.

Procedure

1. Run the neutron-ovs-cleanup command with the --ovs_all_ports flag.

IMPORTANT

Performing this step results in a total networking outage.

# /usr/bin/neutron-ovs-cleanup
--config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini
--log-file /var/log/neutron/ovs-cleanup.log --ovs_all_ports

2. Restore connectivity by redeploying the overcloud.


When you rerun the openstack overcloud deploy command, your bridge mapping values are
reapplied.

NOTE

After a restart, the OVS agent does not interfere with any connections that are
not present in bridge_mappings. So, if you have br-int connected to br-ex2, and
br-ex2 has some flows on it, removing br-int from the bridge_mappings
configuration does not disconnect the two bridges when you restart the OVS
agent or the node.

Additional resources

Network environment parameters in the Advanced Overcloud Customization guide

Including environment files in overcloud creation in the Advanced Overcloud Customization


guide


CHAPTER 11. VLAN-AWARE INSTANCES

11.1. VLAN TRUNKS AND VLAN TRANSPARENT NETWORKS


Instances can send and receive VLAN-tagged traffic over a single virtual NIC. This is particularly useful
for NFV applications (VNFs) that expect VLAN-tagged traffic, allowing a single virtual NIC to serve
multiple customers or services.

In ML2/OVN deployments you can support VLAN-aware instances using VLAN transparent networks.
As an alternative in ML2/OVN or ML2/OVS deployments, you can support VLAN-aware instances using
trunks.

In a VLAN transparent network, you set up VLAN tagging in the VM instances. The VLAN tags are
transferred over the network and consumed by the VM instances on the same VLAN, and ignored by
other instances and devices. In a VLAN transparent network, the VLANs are managed in the VM
instance. You do not need to set up the VLAN in the OpenStack Networking Service (neutron).

VLAN trunks support VLAN-aware instances by combining VLANs into a single trunked port. For
example, a project data network can use VLANs or tunneling (VXLAN, GRE, or Geneve) segmentation,
while the instances see the traffic tagged with VLAN IDs. Network packets are tagged immediately
before they are injected to the instance and do not need to be tagged throughout the entire network.

The following table compares certain features of VLAN transparent networks and VLAN trunks.

                           Transparent                              Trunk

Mechanism driver support   ML2/OVN                                  ML2/OVN, ML2/OVS

VLAN setup managed by      VM instance                              OpenStack Networking Service (neutron)

IP assignment              Configured in VM instance                Assigned by DHCP

VLAN ID                    Flexible. You can set the VLAN ID in     Fixed. Instances must use the VLAN ID
                           the instance.                            configured in the trunk.

11.2. ENABLING VLAN TRANSPARENCY IN ML2/OVN DEPLOYMENTS


Enable VLAN transparency if you need to send VLAN tagged traffic between virtual machine (VM)
instances. In a VLAN transparent network you can configure the VLANS directly in the VMs without
configuring them in neutron.

Prerequisites

Deployment of Red Hat OpenStack Platform 16.1 or higher, with ML2/OVN as the mechanism
driver.

Provider network of type VLAN or Geneve. Do not use VLAN transparency in deployments with
flat type provider networks.

Ensure that the external switch supports 802.1q VLAN stacking using ethertype 0x8100 on both
VLANs. OVN VLAN transparency does not support 802.1ad QinQ with outer provider VLAN
ethertype set to 0x88A8 or 0x9100.

Procedure

1. In an environment file on the undercloud node, set the EnableVLANTransparency parameter


to True. For example, add the following lines to ovn-extras.yaml.

parameter_defaults:
EnableVLANTransparency: True

2. Include the environment file in the openstack overcloud deploy command with any other
environment files that are relevant to your environment and deploy the overcloud.

$ openstack overcloud deploy \


--templates \

-e <other_overcloud_environment_files> \

-e ovn-extras.yaml \

Replace <other_overcloud_environment_files> with the list of environment files that are part
of your existing deployment.

3. Create networks using the --transparent-vlan argument. Also set the network MTU to 4 bytes
less than the MTU of the underlay network to accommodate the extra tagging required by
VLAN transparency.

Example

openstack network create network-name --project project-uuid --transparent-vlan --mtu 1438

4. Set up a VLAN interface on each participating VM.

Example

ip link add link eth0 name vlan50 type vlan id 50


ip link set vlan50 up
ip addr add 192.128.111.3/24 dev vlan50

Verification

1. Ping between two VMs on the VLAN using the vlan50 IP address.

2. Use tcpdump on eth0 to confirm that the packets arrive with the VLAN tag intact, as shown in the example below.
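
A possible tcpdump invocation, assuming the vlan50 interface example above (the interface name and VLAN ID are illustrative):

$ sudo tcpdump -i eth0 -e -nn vlan 50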

Additional resources

Environment files in the Advanced Overcloud Customization guide

Including environment files in overcloud creation in the Advanced Overcloud Customization


guide


11.3. REVIEWING THE TRUNK PLUG-IN


During a Red Hat OpenStack Platform deployment, the trunk plug-in is enabled by default. You can review the
configuration on the controller nodes:

On the controller node, confirm that the trunk plug-in is enabled in the /var/lib/config-
data/neutron/etc/neutron/neutron.conf file:

service_plugins=router,qos,trunk
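
You can also confirm that the trunk extension is loaded through the Networking service API. This is an optional check using the standard client:

$ openstack extension list --network | grep -i trunk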

11.4. CREATING A TRUNK CONNECTION


To implement trunks for VLAN-tagged traffic, create a parent port and attach the new port to an
existing neutron network. When you attach the new port, OpenStack Networking adds a trunk
connection to the parent port you created. Next, create subports. These subports connect VLANs to
instances, which allow connectivity to the trunk. Within the instance operating system, you must also
create a sub-interface that tags traffic for the VLAN associated with the subport.

1. Identify the network that contains the instances that require access to the trunked VLANs. In
this example, this is the public network:

openstack network list


+--------------------------------------+---------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+---------+--------------------------------------+
| 82845092-4701-4004-add7-838837837621 | private | 434c7982-cd96-4c41-a8c9-
b93adbdcb197 |
| 8d8bc6d6-5b28-4e00-b99e-157516ff0050 | public | 3fd811b4-c104-44b5-8ff8-
7a86af5e332c |
+--------------------------------------+---------+--------------------------------------+

2. Create the parent trunk port, and attach it to the network that the instance connects to. In this
example, create a neutron port named parent-trunk-port on the public network. This trunk is the
parent port, as you can use it to create subports.

openstack port create --network public parent-trunk-port


+-----------------------+-----------------------------------------------------------------------------+
| Field | Value |
+-----------------------+-----------------------------------------------------------------------------+
| admin_state_up | UP |
| allowed_address_pairs | |
| binding_host_id | |
| binding_profile | |
| binding_vif_details | |
| binding_vif_type | unbound |
| binding_vnic_type | normal |
| created_at | 2016-10-20T02:02:33Z |
| description | |
| device_id | |
| device_owner | |
| extra_dhcp_opts | |
| fixed_ips | ip_address='172.24.4.230', subnet_id='dc608964-9af3-4fed-9f06-
6d3844fb9b9b' |
| headers | |
| id | 20b6fdf8-0d43-475a-a0f1-ec8f757a4a39 |
| mac_address | fa:16:3e:33:c4:75 |
| name | parent-trunk-port |
| network_id | 871a6bd8-4193-45d7-a300-dcb2420e7cc3 |
| project_id | 745d33000ac74d30a77539f8920555e7 |
| project_id | 745d33000ac74d30a77539f8920555e7 |
| revision_number |4 |
| security_groups | 59e2af18-93c6-4201-861b-19a8a8b79b23 |
| status | DOWN |
| updated_at | 2016-10-20T02:02:33Z |
+-----------------------+-----------------------------------------------------------------------------+

3. Create a trunk using the port that you created in step 2. In this example the trunk is named
parent-trunk.

openstack network trunk create --parent-port parent-trunk-port parent-trunk


+-----------------+--------------------------------------+
| Field | Value |
+-----------------+--------------------------------------+
| admin_state_up | UP |
| created_at | 2016-10-20T02:05:17Z |
| description | |
| id | 0e4263e2-5761-4cf6-ab6d-b22884a0fa88 |
| name | parent-trunk |
| port_id | 20b6fdf8-0d43-475a-a0f1-ec8f757a4a39 |
| revision_number | 1 |
| status | DOWN |
| sub_ports | |
| tenant_id | 745d33000ac74d30a77539f8920555e7 |
| updated_at | 2016-10-20T02:05:17Z |
+-----------------+--------------------------------------+

4. View the trunk connection:

openstack network trunk list


+--------------------------------------+--------------+--------------------------------------+-------------+
| ID | Name | Parent Port | Description |
+--------------------------------------+--------------+--------------------------------------+-------------+
| 0e4263e2-5761-4cf6-ab6d-b22884a0fa88 | parent-trunk | 20b6fdf8-0d43-475a-a0f1-
ec8f757a4a39 | |
+--------------------------------------+--------------+--------------------------------------+-------------+

5. View the details of the trunk connection:

openstack network trunk show parent-trunk


+-----------------+--------------------------------------+
| Field | Value |
+-----------------+--------------------------------------+
| admin_state_up | UP |
| created_at | 2016-10-20T02:05:17Z |
| description | |
| id | 0e4263e2-5761-4cf6-ab6d-b22884a0fa88 |
| name | parent-trunk |
| port_id | 20b6fdf8-0d43-475a-a0f1-ec8f757a4a39 |
| revision_number | 1 |
| status | DOWN |
| sub_ports | |
| tenant_id | 745d33000ac74d30a77539f8920555e7 |
| updated_at | 2016-10-20T02:05:17Z |
+-----------------+--------------------------------------+

11.5. ADDING SUBPORTS TO THE TRUNK


1. Create a neutron port.
This port is a subport connection to the trunk. You must also specify the MAC address that the Networking service assigned to the parent port:

openstack port create --network private --mac-address fa:16:3e:33:c4:75 subport-trunk-port


+-----------------------+--------------------------------------------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------------------------------------------+
| admin_state_up | UP |
| allowed_address_pairs | |
| binding_host_id | |
| binding_profile | |
| binding_vif_details | |
| binding_vif_type | unbound |
| binding_vnic_type | normal |
| created_at | 2016-10-20T02:08:14Z |
| description | |
| device_id | |
| device_owner | |
| extra_dhcp_opts | |
| fixed_ips | ip_address='10.0.0.11', subnet_id='1a299780-56df-4c0b-a4c0-
c5a612cef2e8' |
| headers | |
| id | 479d742e-dd00-4c24-8dd6-b7297fab3ee9 |
| mac_address | fa:16:3e:33:c4:75 |
| name | subport-trunk-port |
| network_id | 3fe6b758-8613-4b17-901e-9ba30a7c4b51 |
| project_id | 745d33000ac74d30a77539f8920555e7 |
| project_id | 745d33000ac74d30a77539f8920555e7 |
| revision_number |4 |
| security_groups | 59e2af18-93c6-4201-861b-19a8a8b79b23 |
| status | DOWN |
| updated_at | 2016-10-20T02:08:15Z |
+-----------------------+--------------------------------------------------------------------------+

NOTE

If you receive the error HttpException: Conflict, confirm that you are creating
the subport on a different network to the one that has the parent trunk port. This
example uses the public network for the parent trunk port, and private for the
subport.

2. Associate the port with the trunk (parent-trunk), and specify the VLAN ID ( 55):

openstack network trunk set --subport port=subport-trunk-port,segmentation-type=vlan,segmentation-id=55 parent-trunk


11.6. CONFIGURING AN INSTANCE TO USE A TRUNK


You must configure the VM instance operating system to use the MAC address that the Red Hat
OpenStack Platform (RHOSP) Networking service (neutron) assigned to the subport. You can also
configure the subport to use a specific MAC address during the subport creation step.

Prerequisites

If you are performing live migrations of your Compute nodes, ensure that the RHOSP
Networking service RPC response timeout is appropriately set for your RHOSP deployment.
The RPC response timeout value can vary between sites and is dependent on the system speed.
The general recommendation is to set the value to at least 120 seconds per 100 trunk ports.
The best practice is to measure the trunk port bind process time for your RHOSP deployment,
and then set the RHOSP Networking service RPC response timeout appropriately. Try to keep
the RPC response timeout value low, but also provide enough time for the RHOSP Networking
service to receive an RPC response. For more information, see Section 11.7, “Configuring
Networking service RPC timeout”.

Procedure

1. Review the configuration of your network trunk, using the network trunk command.

Example

$ openstack network trunk list

Sample output

+---------------------+--------------+---------------------+-------------+
| ID | Name | Parent Port | Description |
+---------------------+--------------+---------------------+-------------+
| 0e4263e2-5761-4cf6- | parent-trunk | 20b6fdf8-0d43-475a- | |
| ab6d-b22884a0fa88 | | a0f1-ec8f757a4a39 | |
+---------------------+--------------+---------------------+-------------+

Example

$ openstack network trunk show parent-trunk

Sample output

+-----------------+------------------------------------------------------+
| Field | Value |
+-----------------+------------------------------------------------------+
| admin_state_up | UP |
| created_at | 2021-10-20T02:05:17Z |
| description | |
| id | 0e4263e2-5761-4cf6-ab6d-b22884a0fa88 |
| name | parent-trunk |
| port_id | 20b6fdf8-0d43-475a-a0f1-ec8f757a4a39 |
| revision_number | 2 |
| status | DOWN |
| sub_ports | port_id='479d742e-dd00-4c24-8dd6-b7297fab3ee9', segm |


| | entation_id='55', segmentation_type='vlan' |
| tenant_id | 745d33000ac74d30a77539f8920555e7 |
| updated_at | 2021-08-20T02:10:06Z |
+-----------------+------------------------------------------------------+

2. Create an instance that uses the parent port-id as its vNIC.

Example

openstack server create --image cirros --flavor m1.tiny --security-group default --key-name
sshaccess --nic port-id=20b6fdf8-0d43-475a-a0f1-ec8f757a4a39 testInstance

Sample output

+--------------------------------------+---------------------------------+
| Property | Value |
+--------------------------------------+---------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | |
| OS-EXT-SRV-ATTR:host |- |
| OS-EXT-SRV-ATTR:hostname | testinstance |
| OS-EXT-SRV-ATTR:hypervisor_hostname | - |
| OS-EXT-SRV-ATTR:instance_name | |
| OS-EXT-SRV-ATTR:kernel_id | |
| OS-EXT-SRV-ATTR:launch_index |0 |
| OS-EXT-SRV-ATTR:ramdisk_id | |
| OS-EXT-SRV-ATTR:reservation_id | r-juqco0el |
| OS-EXT-SRV-ATTR:root_device_name | - |
| OS-EXT-SRV-ATTR:user_data |- |
| OS-EXT-STS:power_state |0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at |- |
| OS-SRV-USG:terminated_at |- |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | uMyL8PnZRBwQ |
| config_drive | |
| created | 2021-08-20T03:02:51Z |
| description |- |
| flavor | m1.tiny (1) |
| hostId | |
| host_status | |
| id | 88b7aede-1305-4d91-a180-67e7eac |
| | 8b70d |
| image | cirros (568372f7-15df-4e61-a05f |
| | -10954f79a3c4) |
| key_name | sshaccess |
| locked | False |
| metadata | {} |
| name | testInstance |
| os-extended-volumes:volumes_attached | [] |
| progress |0 |
| security_groups | default |
| status | BUILD |
| tags | [] |
| tenant_id | 745d33000ac74d30a77539f8920555e |
| |7 |
| updated | 2021-08-20T03:02:51Z |
| user_id | 8c4aea738d774967b4ef388eb41fef5 |
| |e |
+--------------------------------------+---------------------------------+
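
After the instance is running, create a VLAN sub-interface inside the guest operating system so that traffic is tagged for the subport VLAN. The following is a minimal sketch, assuming the subport VLAN ID 55 and MAC address fa:16:3e:33:c4:75 from the earlier examples, and a static address that matches the subport fixed IP (in a DHCP setup you would run a DHCP client on the sub-interface instead):

$ sudo ip link add link eth0 name eth0.55 type vlan id 55
$ sudo ip link set dev eth0.55 address fa:16:3e:33:c4:75
$ sudo ip link set dev eth0.55 up
$ sudo ip addr add 10.0.0.11/24 dev eth0.55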

Additional resources

Configuring Networking service RPC timeout

11.7. CONFIGURING NETWORKING SERVICE RPC TIMEOUT


There can be situations when you must modify the Red Hat OpenStack Platform (RHOSP) Networking
service (neutron) RPC response timeout. For example, live migrations for Compute nodes that use
trunk ports can fail if the timeout value is too low.

The RPC response timeout value can vary between sites and is dependent on the system speed. The
general recommendation is to set the value to at least 120 seconds per 100 trunk ports.

If your site uses trunk ports, the best practice is to measure the trunk port bind process time for your
RHOSP deployment, and then set the RHOSP Networking service RPC response timeout appropriately.
Try to keep the RPC response timeout value low, but also provide enough time for the RHOSP
Networking service to receive an RPC response.

By using a manual hieradata override, rpc_response_timeout, you can set the RPC response timeout
value for the RHOSP Networking service.

Procedure

1. On the undercloud host, logged in as the stack user, create a custom YAML environment file.

Example

$ vi /home/stack/templates/my-modules-environment.yaml

TIP

The RHOSP Orchestration service (heat) uses a set of plans called templates to install and
configure your environment. You can customize aspects of the overcloud with a custom
environment file, which is a special type of template that provides customization for your heat
templates.

2. In the YAML environment file under ExtraConfig, set the appropriate value (in seconds) for
rpc_response_timeout. (The default value is 60 seconds.)

Example

parameter_defaults:
  ExtraConfig:
    neutron::rpc_response_timeout: 120

NOTE

The RHOSP Orchestration service (heat) updates all RHOSP nodes with the
value you set in the custom environment file, however this value only impacts the
RHOSP Networking components.

3. Run the openstack overcloud deploy command and include the core heat templates,
environment files, and this new custom environment file.

IMPORTANT

The order of the environment files is important as the parameters and resources
defined in subsequent environment files take precedence.

Example

$ openstack overcloud deploy --templates \


-e [your-environment-files] \
-e /home/stack/templates/my-modules-environment.yaml

Additional resources

Environment files in the Advanced Overcloud Customization guide

Including environment files in overcloud creation in the Advanced Overcloud Customization


guide

11.8. UNDERSTANDING TRUNK STATES


ACTIVE: The trunk is working as expected and there are no current requests.

DOWN: The virtual and physical resources for the trunk are not in sync. This can be a temporary
state during negotiation.

BUILD: There has been a request and the resources are being provisioned. After successful
completion the trunk returns to ACTIVE.

DEGRADED: The provisioning request did not complete, so the trunk has only been partially
provisioned. It is recommended to remove the subports and try again.

ERROR: The provisioning request was unsuccessful. Remove the resource that caused the error
to return the trunk to a healthier state. Do not add more subports while in the ERROR state, as
this can cause more issues.
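
To check the current state of a trunk, you can query its status field. The trunk name here follows the earlier examples:

$ openstack network trunk show parent-trunk -c status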


CHAPTER 12. CONFIGURING RBAC POLICIES

12.1. OVERVIEW OF RBAC POLICIES


Role-based access control (RBAC) policies in OpenStack Networking allow granular control over shared
neutron networks. OpenStack Networking uses a RBAC table to control sharing of neutron networks
among projects, allowing an administrator to control which projects are granted permission to attach
instances to a network.

As a result, cloud administrators can remove the ability for some projects to create networks and can
instead allow them to attach to pre-existing networks that correspond to their project.

12.2. CREATING RBAC POLICIES


This example procedure demonstrates how to use a role-based access control (RBAC) policy to grant a
project access to a shared network.

1. View the list of available networks:

# openstack network list


+--------------------------------------+-------------+-------------------------------------------------------+
| id | name | subnets |
+--------------------------------------+-------------+-------------------------------------------------------+
| fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 | web-servers | 20512ffe-ad56-4bb4-b064-
2cb18fecc923 192.168.200.0/24 |
| bcc16b34-e33e-445b-9fde-dd491817a48a | private | 7fe4a05a-4b81-4a59-8c47-
82c965b0e050 10.0.0.0/24 |
| 9b2f4feb-fee8-43da-bb99-032e4aaf3f85 | public | 2318dc3b-cff0-43fc-9489-
7d4cf48aaab9 172.24.4.224/28 |
+--------------------------------------+-------------+-------------------------------------------------------+

2. View the list of projects:

# openstack project list


+----------------------------------+----------+
| ID | Name |
+----------------------------------+----------+
| 4b0b98f8c6c040f38ba4f7146e8680f5 | auditors |
| 519e6344f82e4c079c8e2eabb690023b | services |
| 80bf5732752a41128e612fe615c886c6 | demo |
| 98a2f53c20ce4d50a40dac4a38016c69 | admin |
+----------------------------------+----------+

3. Create a RBAC entry for the web-servers network that grants access to the auditors project
(4b0b98f8c6c040f38ba4f7146e8680f5):

# openstack network rbac create --type network --target-project


4b0b98f8c6c040f38ba4f7146e8680f5 --action access_as_shared web-servers
Created a new rbac_policy:
+----------------+--------------------------------------+
| Field | Value |
+----------------+--------------------------------------+
| action | access_as_shared |
| id | 314004d0-2261-4d5e-bda7-0181fcf40709 |
| object_id | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 |
| object_type | network |
| target_project | 4b0b98f8c6c040f38ba4f7146e8680f5 |
| project_id | 98a2f53c20ce4d50a40dac4a38016c69 |
+----------------+--------------------------------------+

As a result, users in the auditors project can connect instances to the web-servers network.

12.3. REVIEWING RBAC POLICIES


1. Run the openstack network rbac list command to retrieve the ID of your existing role-based
access control (RBAC) policies:

# openstack network rbac list


+--------------------------------------+-------------+--------------------------------------+
| id | object_type | object_id |
+--------------------------------------+-------------+--------------------------------------+
| 314004d0-2261-4d5e-bda7-0181fcf40709 | network | fa9bb72f-b81a-4572-9c7f-
7237e5fcabd3 |
| bbab1cf9-edc5-47f9-aee3-a413bd582c0a | network | 9b2f4feb-fee8-43da-bb99-
032e4aaf3f85 |
+--------------------------------------+-------------+--------------------------------------+

2. Run the openstack network rbac-show command to view the details of a specific RBAC entry:

# openstack network rbac show 314004d0-2261-4d5e-bda7-0181fcf40709


+----------------+--------------------------------------+
| Field | Value |
+----------------+--------------------------------------+
| action | access_as_shared |
| id | 314004d0-2261-4d5e-bda7-0181fcf40709 |
| object_id | fa9bb72f-b81a-4572-9c7f-7237e5fcabd3 |
| object_type | network |
| target_project | 4b0b98f8c6c040f38ba4f7146e8680f5 |
| project_id | 98a2f53c20ce4d50a40dac4a38016c69 |
+----------------+--------------------------------------+

12.4. DELETING RBAC POLICIES


1. Run the openstack network rbac list command to retrieve the ID of your existing role-based
access control (RBAC) policies:

# openstack network rbac list


+--------------------------------------+-------------+--------------------------------------+
| id | object_type | object_id |
+--------------------------------------+-------------+--------------------------------------+
| 314004d0-2261-4d5e-bda7-0181fcf40709 | network | fa9bb72f-b81a-4572-9c7f-
7237e5fcabd3 |
| bbab1cf9-edc5-47f9-aee3-a413bd582c0a | network | 9b2f4feb-fee8-43da-bb99-
032e4aaf3f85 |
+--------------------------------------+-------------+--------------------------------------+

2. Run the openstack network rbac delete command to delete the RBAC, using the ID of the
RBAC that you want to delete:

# openstack network rbac delete 314004d0-2261-4d5e-bda7-0181fcf40709


Deleted rbac_policy: 314004d0-2261-4d5e-bda7-0181fcf40709

12.5. GRANTING RBAC POLICY ACCESS FOR EXTERNAL NETWORKS


You can grant role-based access control (RBAC) policy access to external networks (networks with
gateway interfaces attached) using the --action access_as_external parameter.

Complete the steps in the following example procedure to create a RBAC for the web-servers network
and grant access to the engineering project (c717f263785d4679b16a122516247deb):

Create a new RBAC policy using the --action access_as_external option:

# openstack network rbac create --type network --target-project


c717f263785d4679b16a122516247deb --action access_as_external web-servers
Created a new rbac_policy:
+----------------+--------------------------------------+
| Field | Value |
+----------------+--------------------------------------+
| action | access_as_external |
| id | ddef112a-c092-4ac1-8914-c714a3d3ba08 |
| object_id | 6e437ff0-d20f-4483-b627-c3749399bdca |
| object_type | network |
| target_project | c717f263785d4679b16a122516247deb |
| project_id | c717f263785d4679b16a122516247deb |
+----------------+--------------------------------------+

As a result, users in the engineering project are able to view the network or connect instances to
it:

$ openstack network list


+--------------------------------------+-------------+------------------------------------------------------+
| id | name | subnets |
+--------------------------------------+-------------+------------------------------------------------------+
| 6e437ff0-d20f-4483-b627-c3749399bdca | web-servers | fa273245-1eff-4830-b40c-
57eaeac9b904 192.168.10.0/24 |
+--------------------------------------+-------------+------------------------------------------------------+


CHAPTER 13. CONFIGURING DISTRIBUTED VIRTUAL ROUTING (DVR)

13.1. UNDERSTANDING DISTRIBUTED VIRTUAL ROUTING (DVR)


When you deploy Red Hat OpenStack Platform you can choose between a centralized routing model or
DVR.

Each model has advantages and disadvantages. Use this document to carefully plan whether centralized
routing or DVR better suits your needs.

New default RHOSP deployments use DVR and the Modular Layer 2 plug-in with the Open Virtual
Network mechanism driver (ML2/OVN).

DVR is disabled by default in ML2/OVS deployments.

13.1.1. Overview of Layer 3 routing


The Red Hat OpenStack Platform Networking service (neutron) provides routing services for project
networks. Without a router, VM instances in a project network can communicate with other instances
over a shared L2 broadcast domain. Creating a router and assigning it to a project network allows the
instances in that network to communicate with other project networks or upstream (if an external
gateway is defined for the router).
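
As a brief illustration of this model, the following commands create a router, attach a project subnet to it, and set an external gateway. The subnet and network names are placeholders:

$ openstack router create router1
$ openstack router add subnet router1 <project_subnet>
$ openstack router set --external-gateway <external_network> router1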

13.1.2. Routing flows


Routing services in Red Hat OpenStack Platform (RHOSP) can be categorized into three main flows:

East-West routing - routing of traffic between different networks in the same project. This
traffic does not leave the OpenStack deployment. This definition applies to both IPv4 and IPv6
subnets.

North-South routing with floating IPs - Floating IP addressing is a one-to-one NAT that can
be modified and that floats between instances. While floating IPs are modeled as a one-to-one
association between the floating IP and a neutron port, they are implemented by association
with a neutron router that performs the NAT translation. The floating IPs themselves are taken
from the uplink network that provides the router with external connectivity. As a result,
instances can communicate with external resources (such as endpoints on the internet) or the
other way around. Floating IPs are an IPv4 concept and do not apply to IPv6. It is assumed that
the IPv6 addressing used by projects uses Global Unicast Addresses (GUAs) with no overlap
across the projects, and therefore can be routed without NAT.

North-South routing without floating IPs (also known as SNAT) - The Networking service
offers a default port address translation (PAT) service for instances that do not have allocated
floating IPs. With this service, instances can communicate with external endpoints through the
router, but not the other way around. For example, an instance can browse a website on the
internet, but a web browser outside cannot browse a website hosted within the instance. SNAT
is applied for IPv4 traffic only. In addition, Networking service networks that are assigned GUAs
prefixes do not require NAT on the Networking service router external gateway port to access
the outside world.

13.1.3. Centralized routing


Originally, the Networking service (neutron) was designed with a centralized routing model where a
project’s virtual routers, managed by the neutron L3 agent, are all deployed in a dedicated node or
cluster of nodes (referred to as the Network node, or Controller node). This means that each time a
routing function is required (east/west, floating IPs or SNAT), traffic would traverse through a dedicated
node in the topology. This introduced multiple challenges and resulted in sub-optimal traffic flows. For
example:

Traffic between instances flows through a Controller node - when two instances need to
communicate with each other using L3, traffic has to hit the Controller node. Even if the
instances are scheduled on the same Compute node, traffic still has to leave the Compute
node, flow through the Controller, and route back to the Compute node. This negatively
impacts performance.

Instances with floating IPs receive and send packets through the Controller node - the external
network gateway interface is available only at the Controller node, so whether the traffic is
originating from an instance, or destined to an instance from the external network, it has to flow
through the Controller node. Consequently, in large environments the Controller node is subject
to heavy traffic load. This would affect performance and scalability, and also requires careful
planning to accommodate enough bandwidth in the external network gateway interface. The
same requirement applies for SNAT traffic.

To better scale the L3 agent, the Networking service can use the L3 HA feature, which distributes the
virtual routers across multiple nodes. In the event that a Controller node is lost, the HA router will
failover to a standby on another node and there will be packet loss until the HA router failover
completes.

13.2. DVR OVERVIEW


Distributed Virtual Routing (DVR) offers an alternative routing design. DVR isolates the failure domain
of the Controller node and optimizes network traffic by deploying the L3 agent and schedule routers on
every Compute node. DVR has these characteristics:

East-West traffic is routed directly on the Compute nodes in a distributed fashion.

North-South traffic with floating IP is distributed and routed on the Compute nodes. This
requires the external network to be connected to every Compute node.

North-South traffic without floating IP is not distributed and still requires a dedicated Controller
node.

The L3 agent on the Controller node uses the dvr_snat mode so that the node serves only
SNAT traffic.

The neutron metadata agent is distributed and deployed on all Compute nodes. The metadata
proxy service is hosted on all the distributed routers.

13.3. DVR KNOWN ISSUES AND CAVEATS


Support for DVR is limited to the ML2 core plug-in and the Open vSwitch (OVS) mechanism
driver or ML2/OVN mechanism driver. Other back ends are not supported.

On ML2/OVS DVR deployments, network traffic for the Red Hat OpenStack Platform Load-
balancing service (octavia) goes through the Controller and network nodes, instead of the
compute nodes.

With an ML2/OVS mechanism driver network back end and DVR, it is possible to create VIPs.
However, the IP address assigned to a bound port using allowed_address_pairs, should match
the virtual port IP address (/32).
If you use a CIDR format IP address for the bound port allowed_address_pairs instead, port
forwarding is not configured in the back end, and traffic fails for any IP in the CIDR expecting to
reach the bound IP port.

SNAT (source network address translation) traffic is not distributed, even when DVR is enabled.
SNAT does work, but all ingress/egress traffic must traverse through the centralized Controller
node.

In ML2/OVS deployments, IPv6 traffic is not distributed, even when DVR is enabled. All
ingress/egress traffic goes through the centralized Controller node. If you use IPv6 routing
extensively with ML2/OVS, do not use DVR.
Note that in ML2/OVN deployments, all east/west traffic is always distributed, and north/south
traffic is distributed when DVR is configured.

In ML2/OVS deployments, DVR is not supported in conjunction with L3 HA. If you use DVR with
Red Hat OpenStack Platform 16.1 director, L3 HA is disabled. This means that routers are still
scheduled on the Network nodes (and load-shared between the L3 agents), but if one agent
fails, all routers hosted by this agent fail as well. This affects only SNAT traffic. The
allow_automatic_l3agent_failover feature is recommended in such cases, so that if one
network node fails, the routers are rescheduled to a different node.

DHCP servers, which are managed by the neutron DHCP agent, are not distributed and are still
deployed on the Controller node. The DHCP agent is deployed in a highly available
configuration on the Controller nodes, regardless of the routing design (centralized or DVR).

Compute nodes require an interface on the external network attached to an external bridge.
They use this interface to attach to a VLAN or flat network for an external router gateway, to
host floating IPs, and to perform SNAT for VMs that use floating IPs.

In ML2/OVS deployments, each Compute node requires one additional IP address. This is due
to the implementation of the external gateway port and the floating IP network namespace.

VLAN, GRE, and VXLAN are all supported for project data separation. When you use GRE or
VXLAN, you must enable the L2 Population feature. The Red Hat OpenStack Platform director
enforces L2 Population during installation.

13.4. SUPPORTED ROUTING ARCHITECTURES


Red Hat OpenStack Platform (RHOSP) supports both centralized, high-availability (HA) routing and
distributed virtual routing (DVR) in the RHOSP versions listed:

RHOSP centralized HA routing support began in RHOSP 8.

RHOSP distributed routing support began in RHOSP 12.

13.5. DEPLOYING DVR WITH ML2 OVS


To deploy and manage distributed virtual routing (DVR) in an ML2/OVS deployment, you configure
settings in heat templates and environment files.

You use heat template settings to provision host networking:

Configure the interface connected to the physical network for external network traffic on both
the Compute and Controller nodes.

Create a bridge on Compute and Controller nodes, with an interface for external network traffic.

You also configure the Networking service (neutron) to match the provisioned networking environment
and allow traffic to use the bridge.

The default settings are provided as guidelines only. They are not expected to work in production or test
environments which may require customization for network isolation, dedicated NICs, or any number of
other variable factors. In setting up an environment, you need to correctly configure the bridge mapping
type parameters used by the L2 agents and the external facing bridges for other agents, such as the L3
agent.

The following example procedure shows how to configure a proof-of-concept environment using the
typical defaults.

Procedure

1. Verify that the value for OS::TripleO::Compute::Net::SoftwareConfig matches the value of


OS::TripleO::Controller::Net::SoftwareConfig in the file overcloud-resource-registry.yaml or
in an environment file included in the deployment command.
This value names a file, such as net_config_bridge.yaml. The named file configures Neutron bridge mappings for external networks on the Compute node L2 agents. The bridge routes traffic for
the floating IP addresses hosted on Compute nodes in a DVR deployment. Normally, you can
find this filename value in the network environment file that you use when deploying the
overcloud, such as environments/net-multiple-nics.yaml.

NOTE

If you customize the network configuration of the Compute node, you may need
to add the appropriate configuration to your custom files instead.

2. Verify that the Compute node has an external bridge.

a. Make a local copy of the openstack-tripleo-heat-templates directory.

b. $ cd <local_copy_of_templates_directory>

c. Run the process-templates script to render the templates to a temporary output directory:

$ ./tools/process-templates.py -r <roles_data.yaml> \
-n <network_data.yaml> -o <temporary_output_directory>

d. Check the role files in <temporary_output_directory>/network/config.

3. If needed, customize the Compute template to include an external bridge that matches the
Controller nodes, and name the custom file path in
OS::TripleO::Compute::Net::SoftwareConfig in an environment file.

4. Include the environments/services/neutron-ovs-dvr.yaml file in the deployment command when
deploying the overcloud:

$ openstack overcloud deploy --templates -e /usr/share/openstack-tripleo-heat-templates/environments/services/neutron-ovs-dvr.yaml


5. Verify that L3 HA is disabled. A verification sketch follows the note below.

NOTE

The external bridge configuration for the L3 agent was deprecated in Red Hat
OpenStack Platform 13 and removed in Red Hat OpenStack Platform 15.
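
The following is a minimal verification sketch; router1 is a placeholder for an existing router name, and viewing these fields typically requires administrative credentials. After the DVR deployment, a newly created router is expected to report distributed=True and ha=False:

$ openstack router show router1 -c distributed -c ha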

13.6. MIGRATING CENTRALIZED ROUTERS TO DISTRIBUTED ROUTING


This section contains information about upgrading to distributed routing for Red Hat OpenStack
Platform deployments that use L3 HA centralized routing.

Procedure

1. Upgrade your deployment and validate that it is working correctly.

2. Run the director stack update to configure DVR.

3. Confirm that routing functions correctly through the existing routers.

4. You cannot transition an L3 HA router to distributed directly. Instead, for each router, disable
the L3 HA option, and then enable the distributed option:

a. Disable the router:

Example

$ openstack router set --disable router1

b. Clear high availability:

Example

$ openstack router set --no-ha router1

c. Configure the router to use DVR:

Example

$ openstack router set --distributed router1

d. Enable the router:

Example

$ openstack router set --enable router1

e. Confirm that distributed routing functions correctly.

Additional resources

Deploying DVR with ML2 OVS


13.7. DEPLOYING ML2/OVN OPENSTACK WITH DISTRIBUTED VIRTUAL ROUTING (DVR) DISABLED

New Red Hat OpenStack Platform (RHOSP) deployments default to the neutron Modular Layer 2 plug-
in with the Open Virtual Network mechanism driver (ML2/OVN) and DVR.

In a DVR topology, compute nodes with floating IP addresses route traffic between virtual machine
instances and the network that provides the router with external connectivity (north-south traffic).
Traffic between instances (east-west traffic) is also distributed.

You can optionally deploy with DVR disabled. This disables north-south DVR, requiring north-south
traffic to traverse a controller or networker node. East-west routing is always distributed in an
ML2/OVN deployment, even when DVR is disabled.

Prerequisites

RHOSP 16.1 distribution ready for customization and deployment.

Procedure

1. Create a custom environment file, and add the following configuration:

parameter_defaults:
NeutronEnableDVR: false

2. To apply this configuration, deploy the overcloud, adding your custom environment file to the
stack along with your other environment files. For example:

(undercloud) $ openstack overcloud deploy --templates \
  -e [your environment files] \
  -e /home/stack/templates/<custom-environment-file>.yaml

13.7.1. Additional resources


Understanding distributed virtual routing (DVR) in the Networking Guide.


CHAPTER 14. PROJECT NETWORKING WITH IPV6

14.1. IPV6 SUBNET OPTIONS


When you create IPv6 subnets in a Red Hat OpenStack Platform (RHOSP) project network, you can
specify address mode and Router Advertisement mode to obtain a particular result as described in the
following table.

NOTE

RHOSP does not support IPv6 prefix delegation in ML2/OVN deployments. You must set
the Global Unicast Address prefix manually.

RA Mode and Address Mode combinations and the results they produce:

ipv6_ra_mode=not set, ipv6-address-mode=slaac
The instance receives an IPv6 address from the external router (not managed by OpenStack Networking) using Stateless Address Autoconfiguration (SLAAC).

NOTE

OpenStack Networking supports only EUI-64 IPv6 address assignment for SLAAC. This allows for simplified IPv6 networking, as hosts self-assign addresses based on the base 64 bits plus the MAC address. You cannot create subnets with a different netmask and address_assign_type of SLAAC.

ipv6_ra_mode=not set, ipv6-address-mode=dhcpv6-stateful
The instance receives an IPv6 address and optional information from OpenStack Networking (dnsmasq) using DHCPv6 stateful.

ipv6_ra_mode=not set, ipv6-address-mode=dhcpv6-stateless
The instance receives an IPv6 address from the external router using SLAAC, and optional information from OpenStack Networking (dnsmasq) using DHCPv6 stateless.

ipv6_ra_mode=slaac, ipv6-address-mode=not set
The instance uses SLAAC to receive an IPv6 address from OpenStack Networking (radvd).

ipv6_ra_mode=dhcpv6-stateful, ipv6-address-mode=not set
The instance receives an IPv6 address and optional information from an external DHCPv6 server using DHCPv6 stateful.

ipv6_ra_mode=dhcpv6-stateless, ipv6-address-mode=not set
The instance receives an IPv6 address from OpenStack Networking (radvd) using SLAAC, and optional information from an external DHCPv6 server using DHCPv6 stateless.

ipv6_ra_mode=slaac, ipv6-address-mode=slaac
The instance receives an IPv6 address from OpenStack Networking (radvd) using SLAAC.

ipv6_ra_mode=dhcpv6-stateful, ipv6-address-mode=dhcpv6-stateful
The instance receives an IPv6 address from OpenStack Networking (dnsmasq) using DHCPv6 stateful, and optional information from OpenStack Networking (dnsmasq) using DHCPv6 stateful.

ipv6_ra_mode=dhcpv6-stateless, ipv6-address-mode=dhcpv6-stateless
The instance receives an IPv6 address from OpenStack Networking (radvd) using SLAAC, and optional information from OpenStack Networking (dnsmasq) using DHCPv6 stateless.
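
For example, to request the slaac/slaac combination from the table when you create a subnet, pass both modes to the subnet create command. The network name, prefix, and subnet name in this sketch are placeholders:

$ openstack subnet create --ip-version 6 \
  --ipv6-ra-mode slaac --ipv6-address-mode slaac \
  --network <network> --subnet-range 2001:db8:1::/64 <subnet_name>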

14.2. CREATE AN IPV6 SUBNET USING STATEFUL DHCPV6


You can create an IPv6 subnet in a Red Hat OpenStack Platform (RHOSP) project network.

For example, you can create an IPv6 subnet using Stateful DHCPv6 in a network named database-servers
in a project named QA.


Procedure

1. Retrieve the project ID of the Project where you want to create the IPv6 subnet. These values
are unique between OpenStack deployments, so your values differ from the values in this
example.

# openstack project list


+----------------------------------+----------+
| ID | Name |
+----------------------------------+----------+
| 25837c567ed5458fbb441d39862e1399 | QA |
| f59f631a77264a8eb0defc898cb836af | admin |
| 4e2e1951e70643b5af7ed52f3ff36539 | demo |
| 8561dff8310e4cd8be4b6fd03dc8acf5 | services |
+----------------------------------+----------+

2. Retrieve a list of all networks present in OpenStack Networking (neutron), and note the name of
the network where you want to host the IPv6 subnet:

# openstack network list


+--------------------------------------+------------------+----------------------------------------------------
---------+
| id | name | subnets |
+--------------------------------------+------------------+----------------------------------------------------
---------+
| 8357062a-0dc2-4146-8a7f-d2575165e363 | private | c17f74c4-db41-4538-af40-
48670069af70 10.0.0.0/24 |
| 31d61f7d-287e-4ada-ac29-ed7017a54542 | public | 303ced03-6019-4e79-a21c-
1942a460b920 172.24.4.224/28 |
| 6aff6826-4278-4a35-b74d-b0ca0cbba340 | database-servers |
|
+--------------------------------------+------------------+----------------------------------------------------
---------+

3. Include the project ID, network name, and ipv6 address mode in the openstack subnet create
command:

# openstack subnet create --ip-version 6 --ipv6-address-mode dhcpv6-stateful \
  --project 25837c567ed5458fbb441d39862e1399 --network database-servers \
  --subnet-range fdf8:f53b:82e4::53/125 subnet_name

Created a new subnet:


+-------------------+--------------------------------------------------------------+
| Field | Value |
+-------------------+--------------------------------------------------------------+
| allocation_pools | {"start": "fdf8:f53b:82e4::52", "end": "fdf8:f53b:82e4::56"} |
| cidr | fdf8:f53b:82e4::53/125 |
| dns_nameservers | |
| enable_dhcp | True |
| gateway_ip | fdf8:f53b:82e4::51 |
| host_routes | |
| id | cdfc3398-997b-46eb-9db1-ebbd88f7de05 |
| ip_version |6 |
| ipv6_address_mode | dhcpv6-stateful |
| ipv6_ra_mode | |
| name | |
| network_id | 6aff6826-4278-4a35-b74d-b0ca0cbba340 |
| tenant_id | 25837c567ed5458fbb441d39862e1399 |
+-------------------+--------------------------------------------------------------+

Validation steps

1. Validate this configuration by reviewing the network list. Note that the entry for database-
servers now reflects the newly created IPv6 subnet:

# openstack network list


+--------------------------------------+------------------+----------------------------------------------------
---------+
| id | name | subnets |
+--------------------------------------+------------------+----------------------------------------------------
---------+
| 6aff6826-4278-4a35-b74d-b0ca0cbba340 | database-servers | cdfc3398-997b-46eb-9db1-
ebbd88f7de05 fdf8:f53b:82e4::50/125 |
| 8357062a-0dc2-4146-8a7f-d2575165e363 | private | c17f74c4-db41-4538-af40-
48670069af70 10.0.0.0/24 |
| 31d61f7d-287e-4ada-ac29-ed7017a54542 | public | 303ced03-6019-4e79-a21c-
1942a460b920 172.24.4.224/28 |
+--------------------------------------+------------------+----------------------------------------------------
---------+

Result
As a result of this configuration, instances that the QA project creates can receive a DHCP IPv6
address when added to the database-servers subnet:

# openstack server list


+--------------------------------------+------------+--------+------------+-------------+---------------------
----------------+
| ID | Name | Status | Task State | Power State | Networks
|
+--------------------------------------+------------+--------+------------+-------------+---------------------
----------------+
| fad04b7a-75b5-4f96-aed9-b40654b56e03 | corp-vm-01 | ACTIVE | - | Running |
database-servers=fdf8:f53b:82e4::52 |
+--------------------------------------+------------+--------+------------+-------------+---------------------
----------------+

Additional resources

To find the Router Advertisement mode and address mode combinations to achieve a particular result in
an IPv6 subnet, see IPv6 subnet options in the Networking Guide.


CHAPTER 15. MANAGING PROJECT QUOTAS

15.1. CONFIGURING PROJECT QUOTAS


OpenStack Networking (neutron) supports the use of quotas to constrain the number of resources
created by tenants/projects.

Procedure

You can set project quotas for various network components in the /var/lib/config-
data/neutron/etc/neutron/neutron.conf file.
For example, to limit the number of routers that a project can create, change the quota_router
value:

quota_router = 10

In this example, each project is limited to a maximum of 10 routers.
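
For example, a minimal sketch of a quota configuration in neutron.conf, assuming the quota options are grouped in the [quotas] section and using example values only:

[quotas]
quota_network = 100
quota_subnet = 100
quota_port = 500
quota_router = 10
quota_floatingip = 50

You can also override a quota for an individual project from the command line, for example with $ openstack quota set --routers 20 <project>.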

For a listing of the quota settings, see sections that immediately follow.

15.2. L3 QUOTA OPTIONS


Here are quota options available for layer 3 (L3) networking:

quota_floatingip - The number of floating IPs available to a project.

quota_network - The number of networks available to a project.

quota_port - The number of ports available to a project.

quota_router - The number of routers available to a project.

quota_subnet - The number of subnets available to a project.

quota_vip - The number of virtual IP addresses available to a project.

15.3. FIREWALL QUOTA OPTIONS


Here are quota options available for managing firewalls for projects:

quota_firewall - The number of firewalls available to a project.

quota_firewall_policy - The number of firewall policies available to a project.

quota_firewall_rule - The number of firewall rules available to a project.

15.4. SECURITY GROUP QUOTA OPTIONS


The Networking service quota engine manages security groups and security group rules, and it is not
possible to set all quotas to zero before creating the default security group (and the two default
security group rules that accept all egress traffic for IPv4 and IPv6). When you create a new project,
the Networking service does not create the default security group until a network or a port is created, or
until you list the security group or the security group rules.


Here are quota options available for managing the number of security groups that projects can create:

quota_security_group - The number of security groups available to a project.

quota_security_group_rule - The number of security group rules available to a project.

15.5. MANAGEMENT QUOTA OPTIONS


Here are additional options available to administrators for managing quotas for projects:

default_quota* - The default number of resources available to a project.

quota_health_monitor* - The number of health monitors available to a project.


Health monitors do not consume resources; however, the quota option is available because
OpenStack Networking considers health monitors to be resource consumers.

quota_member - The number of pool members available to a project.


Pool members do not consume resources; however, the quota option is available because
OpenStack Networking considers pool members to be resource consumers.

quota_pool - The number of pools available to a project.


CHAPTER 16. DEPLOYING ROUTED PROVIDER NETWORKS

16.1. ADVANTAGES OF ROUTED PROVIDER NETWORKS


In Red Hat OpenStack Platform, operators can create routed provider networks (RPNs). RPNs are
typically used in edge deployments, and rely on multiple layer 2 network segments instead of traditional
networks that have only one segment.

RPNs simplify the cloud for end users because they see only one network. For cloud operators, RPNs
deliver scalability and fault tolerance. For example, if a major error occurs, only one segment is impacted
instead of the entire network failing.

Before routed provider networks (RPNs), operators typically had to choose from one of the following
architectures:

A single, large layer 2 network

Multiple, smaller layer 2 networks

Single, large layer 2 networks become complex when scaling and reduce fault tolerance (increase failure
domains).

Multiple, smaller layer 2 networks scale better and shrink failure domains, but can introduce complexity
for end users.

In Red Hat OpenStack Platform 16.1.1 and later, you can deploy RPNs using the ML2/OVS or
the SR-IOV mechanism drivers.

Additional resources

Fundamentals of routed provider networks

Limitations of routed provider networks

Preparing for a routed provider network

Creating a routed provider network

Migrating a non-routed to a routed provider network

16.2. FUNDAMENTALS OF ROUTED PROVIDER NETWORKS


The association between a segment and a subnet distinguishes a routed provider network (RPN) from
other types of networks. Each network segment requires at least one subnet that explicitly belongs to
that segment. The OpenStack Networking service (neutron) enforces that either zero or all subnets on a
particular network associate with a segment. For example, attempting to create a subnet without a
segment on a network containing subnets with segments generates an error.

With RPNs, the IP addresses available to virtual machine (VM) instances depend on the segment of the
network available on the particular compute node. The Networking service port can be associated with
only one network segment.

Similar to conventional networking, layer 2 (switching) handles transit of traffic between ports on the
same network segment and layer 3 (routing) handles transit of traffic between segments.

The Networking service does not provide layer 3 services between segments. Instead, it relies on
physical network infrastructure to route subnets. Thus, both the Networking service and physical
network infrastructure must contain configuration for routed provider networks, similar to conventional
provider networks.

Because the Compute service (nova) scheduler is not network segment aware, when you deploy RPNs,
you must map each leaf or rack segment or DCN edge site to a Compute service host-aggregate or
availability zone.

If you require a DHCP-metadata service, you must define an availability zone for each edge site or
network segment, to ensure that the local DHCP agent is deployed.

Additional resources

Advantages of routed provider networks

Limitations of routed provider networks

Preparing for a routed provider network

Creating a routed provider network

Migrating a non-routed to a routed provider network

16.3. LIMITATIONS OF ROUTED PROVIDER NETWORKS


Routed provider networks (RPNs) are not supported by all mechanism drivers and there are restrictions
with the Compute service scheduler and other software as noted in the following list:

Routed provider networks are supported only by the ML2/OVS and SR-IOV mechanism drivers.
Open Virtual Network (OVN) is not supported.

OVS-DPDK (without DHCP) support for remote and edge deployments is in Technology
Preview in Red Hat OpenStack Platform 16.1.4 and later.

North-south routing with central SNAT or a floating IP is not supported.

When using SR-IOV or PCI pass-through, physical network (physnet) names must be the same
in central and remote sites or segments. You cannot reuse segment IDs.

The Compute service (nova) scheduler is not segment-aware. (You must map each segment or
edge site to a Compute host-aggregate or availability zone.) Currently, there are only two VM
instance boot options available:

Boot using port-id and no IP address, specifying Compute availability zone (segment or
edge site).

Boot using network-id, specifying the Compute availability zone (segment or edge site).

Cold or live migration works only when you specify the destination Compute availability zone
(segment or edge site).

16.4. PREPARING FOR A ROUTED PROVIDER NETWORK


There are several tasks that you must perform before you can create a routed provider network in Red
Hat OpenStack Platform (RHOSP).


Procedure

1. Within a network, use a unique physical network name for each segment. This enables reuse of
the same segmentation details between subnets.
For example, use the same VLAN ID across all segments of a particular provider network.

2. Implement routing between segments.


Each subnet on a segment must contain the gateway address of the router interface on that
particular subnet.

Table 16.1. Sample segments for routing

Segment Version Addresses Gateway

segment1 4 203.0.113.0/24 203.0.113.1

segment1 6 fd00:203:0:113::/64 fd00:203:0:113::1

segment2 4 198.51.100.0/24 198.51.100.1

segment2 6 fd00:198:51:100::/64 fd00:198:51:100::1

3. Map segments to compute nodes.


Routed provider networks imply that compute nodes reside on different segments. Ensure that
every compute host in a routed provider network has direct connectivity to one of its segments.
Because the Compute scheduler is not segment-aware, also map each segment to a Compute
availability zone, as shown in the sketch after the sample mappings table.

Table 16.2. Sample segment to Compute node mappings

Host Rack Physical network

compute0001 rack 1 segment 1

compute0002 rack 1 segment 1

… … …

compute0101 rack 2 segment 2

compute0102 rack 2 segment 2

… … …
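
One way to perform this mapping is to create a host aggregate with an availability zone for each rack segment or edge site, and add the Compute hosts on that segment to it. The following is a minimal sketch; the aggregate name, zone name, and host names are hypothetical:

$ openstack aggregate create --zone az-segment1 rack1-aggregate
$ openstack aggregate add host rack1-aggregate compute0001
$ openstack aggregate add host rack1-aggregate compute0002

You can then target the zone with --availability-zone when you boot instances on that segment.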

4. Deploy at least one DHCP agent per segment.


Unlike conventional provider networks, a DHCP agent cannot support more than one segment
within a network. Deploy DHCP agents on the compute nodes containing the segments rather
than on the network nodes to reduce the node count.

Table 16.3. Sample DHCP agent per segment mapping


Host Rack Physical network

network0001 rack 1 segment 1

network0002 rack 1 segment 1

… … …

You deploy a DHCP agent and a Networking service metadata agent on the Compute nodes by
using a custom roles file.

Here is an example:

###############################################################################
# Role: ComputeSriov                                                          #
###############################################################################
- name: ComputeSriov
  description: |
    Compute SR-IOV Role
  CountDefault: 1
  networks:
    External:
      subnet: external_subnet
    InternalApi:
      subnet: internal_api_subnet
    Tenant:
      subnet: tenant_subnet
    Storage:
      subnet: storage_subnet
  RoleParametersDefault:
    TunedProfileName: "cpu-partitioning"
  update_serial: 25
  ServicesDefault:
    - OS::TripleO::Services::Aide
    - OS::TripleO::Services::AuditD
    - OS::TripleO::Services::BootParams
    - OS::TripleO::Services::CACerts
    ...
    - OS::TripleO::Services::NeutronDhcpAgent
    - OS::TripleO::Services::NeutronMetadataAgent
    ...

In a custom environment file, add the following key-value pair:

parameter_defaults:
  ...
  NeutronEnableIsolatedMetadata: 'True'
  ...

5. Ensure that the python3-osc-placement package, which provides the command-line client plug-in
for the RHOSP Placement service, is installed on the undercloud.


This package is available on the undercloud in RHOSP 16.1.6 and later. For earlier versions of
RHOSP, you must install the package manually. To check which version of RHOSP you are
running, enter the following command on the undercloud:

$ cat /etc/rhosp-release
Red Hat OpenStack Platform release 16.1.5 GA (Train)

To install the package, log in to the undercloud as root, and run this command:

# yum install python3-osc-placement

Additional resources

Advantages of routed provider networks

Fundamentals of routed provider networks

Limitations of routed provider networks

Creating a routed provider network

Migrating a non-routed to a routed provider network

16.5. CREATING A ROUTED PROVIDER NETWORK


Routed provider networks (RPNs) simplify the Red Hat OpenStack Platform (RHOSP) cloud for end
users because they see only one network. For cloud operators, RPNs deliver scalabilty and fault
tolerance.

When you perform this procedure, you create an RPN with two network segments. Each segment
contains one IPv4 subnet and one IPv6 subnet.

Prerequisites

Complete the steps in Preparing for a routed provider network .

Procedure

1. Create a VLAN provider network that includes a default segment.


In this example, the VLAN provider network is named multisegment1 and uses a physical
network called provider1 and a VLAN whose ID is 128:

Example

$ openstack network create --share --provider-physical-network provider1 \
  --provider-network-type vlan --provider-segment 128 multisegment1

Sample output

+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | UP |
| id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| l2_adjacency | True |
| mtu | 1500 |
| name | multisegment1 |
| port_security_enabled | True |
| provider:network_type | vlan |
| provider:physical_network | provider1 |
| provider:segmentation_id | 128 |
| revision_number |1 |
| router:external | Internal |
| shared | True |
| status | ACTIVE |
| subnets | |
| tags | [] |
+---------------------------+--------------------------------------+

2. Rename the default network segment to segment1.

a. Obtain the segment ID:

$ openstack network segment list --network multisegment1

Sample output

+--------------------------------------+----------+--------------------------------------+--------------
+---------+
| ID | Name | Network | Network Type |
Segment |
+--------------------------------------+----------+--------------------------------------+--------------
+---------+
| 43e16869-ad31-48e4-87ce-acf756709e18 | None | 6ab19caa-dda9-4b3d-abc4-
5b8f435b98d9 | vlan | 128 |
+--------------------------------------+----------+--------------------------------------+--------------
+---------+

b. Using the segment ID, rename the network segment to segment1:

$ openstack network segment set --name segment1 43e16869-ad31-48e4-87ce-acf756709e18

3. Create a second segment on the provider network.


In this example, the network segment uses a physical network called provider2 and a VLAN
whose ID is 129:

Example

$ openstack network segment create --physical-network provider2 \
  --network-type vlan --segment 129 --network multisegment1 segment2

Sample output

+------------------+--------------------------------------+
| Field | Value |
+------------------+--------------------------------------+
| description | None |
| headers | |
| id | 053b7925-9a89-4489-9992-e164c8cc8763 |
| name | segment2 |
| network_id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 |
| network_type | vlan |
| physical_network | provider2 |
| revision_number | 1 |
| segmentation_id | 129 |
| tags | [] |
+------------------+--------------------------------------+

4. Verify that the network contains the segment1 and segment2 segments:

$ openstack network segment list --network multisegment1

Sample output

+--------------------------------------+----------+--------------------------------------+--------------+-----
----+
| ID | Name | Network | Network Type | Segment |
+--------------------------------------+----------+--------------------------------------+--------------+-----
----+
| 053b7925-9a89-4489-9992-e164c8cc8763 | segment2 | 6ab19caa-dda9-4b3d-abc4-
5b8f435b98d9 | vlan | 129 |
| 43e16869-ad31-48e4-87ce-acf756709e18 | segment1 | 6ab19caa-dda9-4b3d-abc4-
5b8f435b98d9 | vlan | 128 |
+--------------------------------------+----------+--------------------------------------+--------------+-----
----+

5. Create one IPv4 subnet and one IPv6 subnet on the segment1 segment.
In this example, the IPv4 subnet uses 203.0.113.0/24:

Example

$ openstack subnet create \
  --network multisegment1 --network-segment segment1 \
  --ip-version 4 --subnet-range 203.0.113.0/24 \
  multisegment1-segment1-v4

Sample output

+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| allocation_pools | 203.0.113.2-203.0.113.254 |
| cidr | 203.0.113.0/24 |
| enable_dhcp | True |
| gateway_ip | 203.0.113.1 |
| id | c428797a-6f8e-4cb1-b394-c404318a2762 |
| ip_version |4 |
| name | multisegment1-segment1-v4 |
| network_id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 |
| revision_number | 1 |
| segment_id | 43e16869-ad31-48e4-87ce-acf756709e18 |
| tags | [] |
+-------------------+--------------------------------------+

In this example, the IPv6 subnet uses fd00:203:0:113::/64:

Example

$ openstack subnet create \
  --network multisegment1 --network-segment segment1 \
  --ip-version 6 --subnet-range fd00:203:0:113::/64 \
  --ipv6-address-mode slaac multisegment1-segment1-v6

Sample output

+-------------------+------------------------------------------------------+
| Field | Value |
+-------------------+------------------------------------------------------+
| allocation_pools | fd00:203:0:113::2-fd00:203:0:113:ffff:ffff:ffff:ffff |
| cidr | fd00:203:0:113::/64 |
| enable_dhcp | True |
| gateway_ip | fd00:203:0:113::1 |
| id | e41cb069-9902-4c01-9e1c-268c8252256a |
| ip_version |6 |
| ipv6_address_mode | slaac |
| ipv6_ra_mode | None |
| name | multisegment1-segment1-v6 |
| network_id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 |
| revision_number | 1 |
| segment_id | 43e16869-ad31-48e4-87ce-acf756709e18 |
| tags | [] |
+-------------------+------------------------------------------------------+

NOTE

By default, IPv6 subnets on provider networks rely on physical network
infrastructure for stateless address autoconfiguration (SLAAC) and router
advertisement.

6. Create one IPv4 subnet and one IPv6 subnet on the segment2 segment.
In this example, the IPv4 subnet uses 198.51.100.0/24:

Example

$ openstack subnet create \
  --network multisegment1 --network-segment segment2 \
  --ip-version 4 --subnet-range 198.51.100.0/24 \
  multisegment1-segment2-v4

Sample output


+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| allocation_pools | 198.51.100.2-198.51.100.254 |
| cidr | 198.51.100.0/24 |
| enable_dhcp | True |
| gateway_ip | 198.51.100.1 |
| id | 242755c2-f5fd-4e7d-bd7a-342ca95e50b2 |
| ip_version |4 |
| name | multisegment1-segment2-v4 |
| network_id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 |
| revision_number | 1 |
| segment_id | 053b7925-9a89-4489-9992-e164c8cc8763 |
| tags | [] |
+-------------------+--------------------------------------+

In this example, the IPv6 subnet uses fd00:198:51:100::/64:

Example

$ openstack subnet create \
  --network multisegment1 --network-segment segment2 \
  --ip-version 6 --subnet-range fd00:198:51:100::/64 \
  --ipv6-address-mode slaac multisegment1-segment2-v6

Sample output

+-------------------+--------------------------------------------------------+
| Field | Value |
+-------------------+--------------------------------------------------------+
| allocation_pools | fd00:198:51:100::2-fd00:198:51:100:ffff:ffff:ffff:ffff |
| cidr | fd00:198:51:100::/64 |
| enable_dhcp | True |
| gateway_ip | fd00:198:51:100::1 |
| id | b884c40e-9cfe-4d1b-a085-0a15488e9441 |
| ip_version |6 |
| ipv6_address_mode | slaac |
| ipv6_ra_mode | None |
| name | multisegment1-segment2-v6 |
| network_id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 |
| revision_number | 1 |
| segment_id | 053b7925-9a89-4489-9992-e164c8cc8763 |
| tags | [] |
+-------------------+--------------------------------------------------------+

Verification

1. Verify that each IPv4 subnet associates with at least one DHCP agent:

$ openstack network agent list --agent-type dhcp --network multisegment1

Sample output


+--------------------------------------+------------+-------------+-------------------+-------+-------+------
--------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary
|
+--------------------------------------+------------+-------------+-------------------+-------+-------+------
--------------+
| c904ed10-922c-4c1a-84fd-d928abaf8f55 | DHCP agent | compute0001 | nova | :-)
| UP | neutron-dhcp-agent |
| e0b22cc0-d2a6-4f1c-b17c-27558e20b454 | DHCP agent | compute0101 | nova | :-)
| UP | neutron-dhcp-agent |
+--------------------------------------+------------+-------------+-------------------+-------+-------+------
--------------+

2. Verify that inventories were created for each segment IPv4 subnet in the Compute service
placement API.
Run this command for all segment IDs:

$ SEGMENT_ID=053b7925-9a89-4489-9992-e164c8cc8763
$ openstack resource provider inventory list $SEGMENT_ID

Sample output
In this sample output, only one of the segments is shown:

+----------------+------------------+----------+----------+-----------+----------+-------+
| resource_class | allocation_ratio | max_unit | reserved | step_size | min_unit | total |
+----------------+------------------+----------+----------+-----------+----------+-------+
| IPV4_ADDRESS   | 1.0              | 1        | 2        | 1         | 1        | 30    |
+----------------+------------------+----------+----------+-----------+----------+-------+

3. Verify that host aggregates were created for each segment in the Compute service:

$ openstack aggregate list

Sample output
In this example, only one of the segments is shown:

+----+---------------------------------------------------------+-------------------+
| Id | Name | Availability Zone |
+----+---------------------------------------------------------+-------------------+
| 10 | Neutron segment id 053b7925-9a89-4489-9992-e164c8cc8763 | None |
+----+---------------------------------------------------------+-------------------+

4. Launch one or more instances. Each instance obtains IP addresses according to the segment it
uses on the particular compute node. A boot example follows the note below.


NOTE

If a fixed IP is specified by the user in the port create request, that particular IP is
allocated immediately to the port. However, creating a port and passing it to an
instance yields a different behavior than conventional networks. If the fixed IP is
not specified on the port create request, the Networking service defers
assignment of IP addresses to the port until the particular compute node
becomes apparent. For example, when you run this command:

$ openstack port create --network multisegment1 port1

Sample output

+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| admin_state_up | UP |
| binding_vnic_type | normal |
| id | 6181fb47-7a74-4add-9b6b-f9837c1c90c4 |
| ip_allocation | deferred |
| mac_address | fa:16:3e:34:de:9b |
| name | port1 |
| network_id | 6ab19caa-dda9-4b3d-abc4-5b8f435b98d9 |
| port_security_enabled | True |
| revision_number |1 |
| security_groups | e4fcef0d-e2c5-40c3-a385-9c33ac9289c5 |
| status | DOWN |
| tags | [] |
+-----------------------+--------------------------------------+
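
For example, to boot an instance with the deferred port while targeting the Compute availability zone that corresponds to a segment or edge site (one of the supported boot options listed in the limitations), you might run something like the following sketch. The image, flavor, and availability zone names are placeholders:

$ openstack server create --image <image> --flavor <flavor> \
  --availability-zone <segment_availability_zone> \
  --port port1 instance1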

Additional resources

Advantages of routed provider networks

Fundamentals of routed provider networks

Limitations of routed provider networks

Preparing for a routed provider network

Migrating a non-routed to a routed provider network

16.6. MIGRATING A NON-ROUTED NETWORK TO A ROUTED PROVIDER NETWORK

You can migrate a non-routed network to a routed provider network (RPN) by associating the subnet of
the network with the ID of the network segment.

Prerequisites

The non-routed network you are migrating must contain only one segment and only one subnet.


IMPORTANT

In non-routed provider networks that contain multiple subnets or network
segments, it is not possible to safely migrate to an RPN. In non-routed networks,
addresses from the subnet allocation pools are assigned to ports without
consideration of the network segment to which the port is bound.

Procedure

1. For the network that is being migrated, obtain the ID of the current network segment.

Example

$ openstack network segment list --network my_network

Sample output

+--------------------------------------+------+--------------------------------------+--------------+---------
+
| ID | Name | Network | Network Type | Segment |
+--------------------------------------+------+--------------------------------------+--------------+---------
+
| 81e5453d-4c9f-43a5-8ddf-feaf3937e8c7 | None | 45e84575-2918-471c-95c0-
018b961a2984 | flat | None |
+--------------------------------------+------+--------------------------------------+--------------+---------
+

2. For the network that is being migrated, obtain the ID of the current subnet.

Example

$ openstack subnet list --network my_network

Sample output

+--------------------------------------+-----------+--------------------------------------+---------------+
| ID | Name | Network | Subnet |
+--------------------------------------+-----------+--------------------------------------+---------------+
| 71d931d2-0328-46ae-93bc-126caf794307 | my_subnet | 45e84575-2918-471c-95c0-
018b961a2984 | 172.24.4.0/24 |
+--------------------------------------+-----------+--------------------------------------+---------------+

3. Verify that the current segment_id of the subnet has a value of None.

Example

$ openstack subnet show my_subnet -c segment_id

Sample output

+------------+-------+
| Field | Value |
+------------+-------+
| segment_id | None |
+------------+-------+

4. Change the value of the subnet segment_id to the network segment ID.
Here is an example:

$ openstack subnet set --network-segment 81e5453d-4c9f-43a5-8ddf-feaf3937e8c7 my_subnet

Verification

Verify that the subnet is now associated with the desired network segment.

Example

$ openstack subnet show my_subnet -c segment_id

Sample output

+------------+--------------------------------------+
| Field | Value |
+------------+--------------------------------------+
| segment_id | 81e5453d-4c9f-43a5-8ddf-feaf3937e8c7 |
+------------+--------------------------------------+

Additional resources

Advantages of routed provider networks

Fundamentals of routed provider networks

Limitations of routed provider networks

Preparing for a routed provider network

Creating a routed provider network


CHAPTER 17. CONFIGURING ALLOWED ADDRESS PAIRS

17.1. OVERVIEW OF ALLOWED ADDRESS PAIRS


With an allowed address pair, you identify a specific MAC address, IP address, or both, and allow network
traffic that matches the pair to pass through a port regardless of the subnet. When you define allowed
address pairs, you can use protocols like VRRP (Virtual Router Redundancy Protocol) that float an IP
address between two VM instances to enable fast data plane failover.

You define allowed address pairs using the Red Hat OpenStack Platform command-line client
openstack port command.

IMPORTANT

Do not use the default security group with a wider IP address range
in an allowed address pair. Doing so can allow a single port to bypass security groups for
all other ports within the same network.

For example, this command impacts all ports in the network and bypasses all security
groups:

# openstack port set --allowed-address \
  mac_address=3e:37:09:4b,ip_address=0.0.0.0/0 \
  9e67d44eab334f07bf82fa1b17d824b6

NOTE

With an ML2/OVN mechanism driver network back end, it is possible to create VIPs.
However, the IP address assigned to a bound port using allowed_address_pairs should
match the virtual port IP address (/32).

If you use a CIDR format IP address for the bound port allowed_address_pairs instead,
port forwarding is not configured in the back end, and traffic fails for any IP in the CIDR
expecting to reach the bound IP port.

Additional resources

port command in the Command Line Interface Reference

Creating a port and allowing one address pair

Adding allowed address pairs

17.2. CREATING A PORT AND ALLOWING ONE ADDRESS PAIR


Creating a port with an allowed address pair enables network traffic to flow through the port regardless
of the subnet.

IMPORTANT

Do not use the default security group with a wider IP address range in an allowed address
pair. Doing so can allow a single port to bypass security groups for all other ports within
the same network.


Procedure

Use the following command to create a port and allow one address pair:

# openstack port create --network <network> \
  --allowed-address mac_address=<mac-address>,ip_address=<ip-cidr> <port-name>
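
For example, to allow a VRRP virtual IP address to pass through an instance port, you might run something like the following sketch; the network name, IP address, and port name are hypothetical:

# openstack port create --network private1 \
  --allowed-address ip_address=192.0.2.250 vrrp-port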

Additional resources

port command in the Command Line Interface Reference

Overview of allowed address pairs

Adding allowed address pairs

17.3. ADDING ALLOWED ADDRESS PAIRS


You can add an allowed address pair to a port to enable network traffic to flow through the port
regardless of the subnet.

IMPORTANT

Do not use the default security group with a wider IP address range in an allowed address
pair. Doing so can allow a single port to bypass security groups for all other ports within
the same network.

Procedure

Use the following command to add allowed address pairs:

# openstack port set --allowed-address mac_address=<mac_address>,ip_address=<ip_cidr> <port>

NOTE

You cannot set an allowed-address pair that matches the mac_address and
ip_address of a port. This is because such a setting has no effect since traffic
matching the mac_address and ip_address is already allowed to pass through
the port.

Additional resources

port command in the Command Line Interface Reference

Overview of allowed address pairs

Creating a port and allowing one address pair


CHAPTER 18. COMMON ADMINISTRATIVE NETWORKING TASKS

Sometimes you might need to perform administration tasks on the Red Hat OpenStack Platform
Networking service (neutron) such as configuring the Layer 2 Population driver or specifying the name
assigned to ports by the internal DNS.

18.1. CONFIGURING THE L2 POPULATION DRIVER


The L2 Population driver enables broadcast, multicast, and unicast traffic to scale out on large overlay
networks. By default, Open vSwitch GRE and VXLAN replicate broadcasts to every agent, including
those that do not host the destination network. This design requires the acceptance of significant
network and processing overhead. The alternative design introduced by the L2 Population driver
implements a partial mesh for ARP resolution and MAC learning traffic; it also creates tunnels for a
particular network only between the nodes that host the network. This traffic is sent only to the
necessary agent by encapsulating it as a targeted unicast.

To enable the L2 Population driver, complete the following steps:

1. Enable the L2 population driver by adding it to the list of mechanism drivers. You also must enable at
least one tunneling driver: either GRE, VXLAN, or both. Add the appropriate configuration options to the
ml2_conf.ini file:

[ml2]
type_drivers = local,flat,vlan,gre,vxlan,geneve
mechanism_drivers = openvswitch,l2population

NOTE

Neutron’s Linux Bridge ML2 driver and agent were deprecated in Red Hat OpenStack
Platform 11. The Open vSwitch (OVS) plugin is the OpenStack Platform director default, and is
recommended by Red Hat for general usage.

2. Enable L2 population in the openvswitch_agent.ini file. Enable it on each node that contains the L2
agent:

[agent]
l2_population = True

NOTE

To install ARP reply flows, configure the arp_responder flag:

[agent]
l2_population = True
arp_responder = True
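
In director-based deployments, you do not normally edit these files directly on the overcloud nodes; the equivalent settings are usually applied through heat parameters in a custom environment file. The following is a minimal sketch that assumes the NeutronMechanismDrivers and NeutronEnableL2Pop tripleo-heat-templates parameters; confirm the parameter names against your template version before you use them:

parameter_defaults:
  NeutronMechanismDrivers: ['openvswitch', 'l2population']
  NeutronEnableL2Pop: 'True'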

18.2. TUNING KEEPALIVED TO AVOID VRRP PACKET LOSS

If the number of highly available (HA) routers on a single host is high, when an HA router failover occurs,
the Virtual Router Redundancy Protocol (VRRP) messages might overflow the IRQ queues. This
overflow stops Open vSwitch (OVS) from responding and forwarding those VRRP messages.

To avoid VRRP packet overload, you must increase the VRRP advertisement interval using the
ha_vrrp_advert_int parameter in the ExtraConfig section for the Controller role.

Procedure

1. Log in to the undercloud as the stack user, and source the stackrc file to enable the director
command line tools.

Example

$ source ~/stackrc

2. Create a custom YAML environment file.

Example

$ vi /home/stack/templates/my-neutron-environment.yaml

TIP

The Red Hat OpenStack Platform Orchestration service (heat) uses a set of plans called
templates to install and configure your environment. You can customize aspects of the
overcloud with a custom environment file , which is a special type of template that provides
customization for your heat templates.

3. In the YAML environment file, increase the VRRP advertisement interval using the
ha_vrrp_advert_int argument with a value specific for your site. (The default is 2 seconds.)
You can also set values for gratuitous ARP messages:

ha_vrrp_garp_master_repeat
The number of gratuitous ARP messages to send at one time after the transition to the
master state. (The default is 5 messages.)
ha_vrrp_garp_master_delay
The delay for second set of gratuitous ARP messages after the lower priority advert is
received in the master state. (The default is 5 seconds.)

Example

parameter_defaults:
  ControllerExtraConfig:
    neutron::agents::l3::ha_vrrp_advert_int: 7
    neutron::config::l3_agent_config:
      DEFAULT/ha_vrrp_garp_master_repeat:
        value: 5
      DEFAULT/ha_vrrp_garp_master_delay:
        value: 5

4. Run the openstack overcloud deploy command and include the core heat templates,
environment files, and this new custom environment file.

IMPORTANT

The order of the environment files is important because the parameters and
resources defined in subsequent environment files take precedence.

Example

$ openstack overcloud deploy --templates \
  -e [your-environment-files] \
  -e /home/stack/templates/my-neutron-environment.yaml
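
After the overcloud update completes, you can optionally spot-check the rendered keepalived configuration for an HA router on a Controller node. The path in this sketch assumes the default L3 agent state directory and can differ in your deployment; the vrrp_instance entries are expected to show the advert_int value that you set:

$ sudo grep -R advert_int /var/lib/neutron/ha_confs/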

Additional resources

2.1.2 Data Forwarding Rules, Subsection 2 in RFC 4541

Environment files in the Advanced Overcloud Customization guide

Including environment files in overcloud creation in the Advanced Overcloud Customization


guide

18.3. SPECIFYING THE NAME THAT DNS ASSIGNS TO PORTS


You can specify the name assigned to ports by the internal DNS when you enable the Red Hat
OpenStack Platform (RHOSP) Networking service (neutron) dns_domain for ports extension
(dns_domain_ports).

You enable the dns_domain for ports extension by declaring the RHOSP Orchestration (heat)
NeutronPluginExtensions parameter in a YAML-formatted environment file. Using a corresponding
parameter, NeutronDnsDomain, you specify your domain name, which overrides the default value,
openstacklocal. After redeploying your overcloud, you can use the OpenStack Client port commands,
port set or port create, with --dns-name to assign a port name.

Also, when the dns_domain for ports extension is enabled, the Compute service automatically populates
the dns_name attribute with the hostname attribute of the instance during the boot of VM instances.
At the end of the boot process, dnsmasq recognizes the allocated ports by their instance hostname.

Procedure

1. Log in to the undercloud as the stack user, and source the stackrc file to enable the director
command line tools.

Example

$ source ~/stackrc

2. Create a custom YAML environment file (my-neutron-environment.yaml).


NOTE

Values inside parentheses are sample values that are used in the example
commands in this procedure. Substitute these sample values with ones that are
appropriate for your site.

Example

$ vi /home/stack/templates/my-neutron-environment.yaml

TIP

The undercloud includes a set of Orchestration service templates that form the plan for your
overcloud creation. You can customize aspects of the overcloud with environment files, which
are YAML-formatted files that override parameters and resources in the core Orchestration
service template collection. You can include as many environment files as necessary.

3. In the environment file, add a parameter_defaults section. Under this section, add the
dns_domain for ports extension, dns_domain_ports.

Example

parameter_defaults:
  NeutronPluginExtensions: "qos,port_security,dns_domain_ports"

NOTE

If you set dns_domain_ports, ensure that the deployment does not also use
dns_domain, the DNS Integration extension. These extensions are incompatible,
and both extensions cannot be defined simultaneously.

4. Also in the parameter_defaults section, add your domain name (example.com) using the
NeutronDnsDomain parameter.

Example

parameter_defaults:
  NeutronPluginExtensions: "qos,port_security,dns_domain_ports"
  NeutronDnsDomain: "example.com"

5. Run the openstack overcloud deploy command and include the core Orchestration templates,
environment files, and this new environment file.

IMPORTANT

The order of the environment files is important because the parameters and
resources defined in subsequent environment files take precedence.

Example

$ openstack overcloud deploy --templates \
  -e [your-environment-files] \
  -e /home/stack/templates/my-neutron-environment.yaml

Verification

1. Log in to the overcloud, and create a new port (new_port) on a network (public). Assign a DNS
name (my_port) to the port.

Example

$ source ~/overcloudrc
$ openstack port create --network public --dns-name my_port new_port

2. Display the details for your port (new_port).

Example

$ openstack port show -c dns_assignment -c dns_domain -c dns_name -c name new_port

Output

+-------------------------+----------------------------------------------+
| Field | Value |
+-------------------------+----------------------------------------------+
| dns_assignment | fqdn='my_port.example.com', |
| | hostname='my_port', |
| | ip_address='10.65.176.113' |
| dns_domain | example.com |
| dns_name | my_port |
| name | new_port |
+-------------------------+----------------------------------------------+

Under dns_assignment, the fully qualified domain name (fqdn) value for the port contains a
concatenation of the DNS name (my_port) and the domain name (example.com) that you set
earlier with NeutronDnsDomain.

3. Create a new VM instance (my_vm) using the port (new_port) that you just created.

Example

$ openstack server create --image rhel --flavor m1.small --port new_port my_vm

4. Display the details for your port (new_port).

Example

$ openstack port show -c dns_assignment -c dns_domain -c dns_name -c name new_port

Output

+-------------------------+----------------------------------------------+
| Field | Value |
+-------------------------+----------------------------------------------+
| dns_assignment | fqdn='my_vm.example.com', |
| | hostname='my_vm', |
| | ip_address='10.65.176.113' |
| dns_domain | example.com |
| dns_name | my_vm |
| name | new_port |
+-------------------------+----------------------------------------------+

Note that the Compute service changes the dns_name attribute from its original value
(my_port) to the name of the instance with which the port is associated (my_vm).

Additional resources

Environment files in the Advanced Overcloud Customization guide

Including environment files in overcloud creation in the Advanced Overcloud Customization


guide

port in the Command Line Interface Reference

server create in the Command Line Interface Reference

18.4. ASSIGNING DHCP ATTRIBUTES TO PORTS


You can use Red Hat OpenStack Platform (RHOSP) Networking service (neutron) extensions to add
networking functions. You can use the extra DHCP option extension (extra_dhcp_opt) to configure
ports of DHCP clients with DHCP attributes. For example, you can add a PXE boot option such as tftp-
server, server-ip-address, or bootfile-name to a DHCP client port.

The value of the extra_dhcp_opt attribute is an array of DHCP option objects, where each object
contains an opt_name and an opt_value. IPv4 is the default version, but you can change this to IPv6 by
including a third option, ip-version=6.
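
For example, to supply an IPv6 DHCP option, you can append ip-version=6 to the option specification. The following is a minimal sketch; the network, DNS server address, and port name are placeholders:

$ openstack port create --network <network> \
  --extra-dhcp-option name=dns-server,value=2001:db8::53,ip-version=6 \
  <port_name>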

When a VM instance starts, the RHOSP Networking service supplies port information to the instance
using DHCP protocol. If you add DHCP information to a port already connected to a running instance,
the instance only uses the new DHCP port information when the instance is restarted.

Some of the more common DHCP port attributes are: bootfile-name, dns-server, domain-name, mtu,
server-ip-address, and tftp-server. For the complete set of acceptable values for opt_name, refer to
the DHCP specification.

Prerequisites

You must have RHOSP administrator privileges.

Procedure

1. Log in to the undercloud host as the stack user.

2. Source the undercloud credentials file:

$ source ~/stackrc


3. Create a custom YAML environment file.

Example

$ vi /home/stack/templates/my-octavia-environment.yaml

4. Your environment file must contain the keyword parameter_defaults. Under this keyword,
add the extra DHCP option extension, extra_dhcp_opt.

Example

parameter_defaults:
  NeutronPluginExtensions: "qos,port_security,extra_dhcp_opt"

5. Run the deployment command and include the core heat templates, environment files, and this
new custom environment file.

The order of the environment files is important because the parameters and resources
defined in subsequent environment files take precedence.

Example

$ openstack overcloud deploy --templates \
  -e <your_environment_files> \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/octavia.yaml \
  -e /home/stack/templates/my-octavia-environment.yaml

Verification

1. Source your credentials file.

Example

$ source ~/overcloudrc

2. Create a new port (new_port) on a network (public). Assign a valid attribute from the DHCP
specification to the new port.

Example

$ openstack port create --extra-dhcp-option name=domain-name,value=test.domain \
  --extra-dhcp-option name=ntp-server,value=192.0.2.123 --network public new_port

3. Display the details for your port (new_port).

Example

$ openstack port show new_port -c extra_dhcp_opts

Sample output

+-----------------+--------------------------------------------------------------------+
| Field | Value |
+-----------------+--------------------------------------------------------------------+
| extra_dhcp_opts | ip_version='4', opt_name='domain-name', opt_value='test.domain' |
| | ip_version='4', opt_name='ntp-server', opt_value='192.0.2.123' |
+-----------------+--------------------------------------------------------------------+

Additional resources

OVN supported DHCP options

Dynamic Host Configuration Protocol (DHCP) and Bootstrap Protocol (BOOTP) Parameters

Environment files in the Advanced Overcloud Customization guide

Including environment files in overcloud creation in the Advanced Overcloud Customization


guide

port create in the Command Line Interface Reference

port show in the Command Line Interface Reference

18.5. LOADING KERNEL MODULES


Some features in Red Hat OpenStack Platform (RHOSP) require certain kernel modules to be loaded.
For example, the OVS firewall driver requires you to load the nf_conntrack_proto_gre kernel module to
support GRE tunneling between two VM instances.

By using a special Orchestration service (heat) parameter, ExtraKernelModules, you can ensure that
heat stores configuration information about the kernel modules required for features such as GRE
tunneling. Later, during normal module management, these required kernel modules are loaded.

Procedure

1. On the undercloud host, logged in as the stack user, create a custom YAML environment file.

Example

$ vi /home/stack/templates/my-modules-environment.yaml

TIP

Heat uses a set of plans called templates to install and configure your environment. You can
customize aspects of the overcloud with a custom environment file , which is a special type of
template that provides customization for your heat templates.

2. In the YAML environment file under parameter_defaults, set ExtraKernelModules to the
name of the module that you want to load.

Example

parameter_defaults:
  ComputeParameters:
    ExtraKernelModules:
      nf_conntrack_proto_gre: {}
  ControllerParameters:
    ExtraKernelModules:
      nf_conntrack_proto_gre: {}

3. Run the openstack overcloud deploy command and include the core heat templates,
environment files, and this new custom environment file.

IMPORTANT

The order of the environment files is important as the parameters and resources
defined in subsequent environment files take precedence.

Example

$ openstack overcloud deploy --templates \
  -e [your-environment-files] \
  -e /home/stack/templates/my-modules-environment.yaml

Verification

If heat has properly loaded the module, you should see output when you run the lsmod
command on the Compute node:

Example

$ sudo lsmod | grep nf_conntrack_proto_gre

Additional resources

Environment files in the Advanced Overcloud Customization guide

Including environment files in overcloud creation in the Advanced Overcloud Customization


guide

18.6. CONFIGURING SHARED SECURITY GROUPS


When you want one or more Red Hat OpenStack Platform (RHOSP) projects to be able to share data,
you can use the RHOSP Networking service (neutron) RBAC policy feature to share a security group.
You create security groups and Networking service role-based access control (RBAC) policies using the
OpenStack Client.

Prerequisites

You have at least two RHOSP projects that you want to share.

In one of the projects, the current project, you have created a security group that you want to
share with another project, the target project.
In this example, the ping_ssh security group is created:

Example


$ openstack security group create ping_ssh

Procedure

1. Log in to the overcloud for the current project that contains the security group.

2. Obtain the name or ID of the target project.

$ openstack project list

3. Obtain the name or ID of the security group that you want to share between RHOSP projects.

$ openstack security group list

4. Using the identifiers from the previous steps, create an RBAC policy using the openstack
network rbac create command.
In this example, the ID of the target project is 32016615de5d43bb88de99e7f2e26a1e. The ID of
the security group is 5ba835b7-22b0-4be6-bdbe-e0722d1b5f24:

Example

$ openstack network rbac create --target-project \
32016615de5d43bb88de99e7f2e26a1e --action access_as_shared \
--type security_group 5ba835b7-22b0-4be6-bdbe-e0722d1b5f24

--target-project
specifies the project that requires access to the security group.

TIP

You can share data between all projects by using the --target-all-projects argument instead
of --target-project <target-project>. By default, only the admin user has this privilege.

--action access_as_shared
specifies what the project is allowed to do.
--type
indicates that the target object is a security group.
5ba835b7-22b0-4be6-bdbe-e0722d1b5f24
is the ID of the particular security group to which access is being granted.

The target project can access the security group when running the OpenStack Client security
group commands, and can bind the security group to its ports. No users other than
administrators and the owner of the security group can access it.
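
For example, a user in the target project could then reference the shared security group by ID when
launching an instance or creating a port. This is a hypothetical sketch; the network, image, flavor, and
server names are placeholders:

$ openstack server create --image cirros --flavor m1.tiny \
--network private \
--security-group 5ba835b7-22b0-4be6-bdbe-e0722d1b5f24 \
test-server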

TIP

To remove access for the target project, delete the RBAC policy that allows it using the openstack
network rbac delete command.
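
To locate the policy to delete, you can first list the existing RBAC policies for security groups. A
minimal sketch; replace <rbac_policy_id> with the ID returned by the list command:

$ openstack network rbac list --type security_group
$ openstack network rbac delete <rbac_policy_id>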

Additional resources


security group create in the Command Line Interface Reference

network rbac create in the Command Line Interface Reference

CHAPTER 19. CONFIGURING LAYER 3 HIGH AVAILABILITY (HA)

19.1. RHOSP NETWORKING SERVICE WITHOUT HIGH AVAILABILITY (HA)

Red Hat OpenStack Platform (RHOSP) Networking service deployments without any high availability
(HA) features are vulnerable to physical node failures.

In a typical deployment, projects create virtual routers, which are scheduled to run on physical
Networking service Layer 3 (L3) agent nodes. This becomes an issue when you lose an L3 agent node
and the dependent virtual machines subsequently lose connectivity to external networks. Any floating IP
addresses are also unavailable. In addition, connectivity is lost between any networks that the router
hosts.

19.2. OVERVIEW OF LAYER 3 HIGH AVAILABILITY (HA)


This active/passive high availability (HA) configuration uses the industry standard VRRP (as defined in
RFC 3768) to protect project routers and floating IP addresses. A virtual router is randomly scheduled
across multiple Red Hat OpenStack Platform (RHOSP) Networking service nodes, with one designated
as the active router, and the remainder serving in a standby role.

NOTE

To deploy Layer 3 (L3) HA, you must maintain similar configuration on the redundant
Networking service nodes, including floating IP ranges and access to external networks.

In the following diagram, the active Router1 and Router2 routers are running on separate physical L3
Networking service agent nodes. L3 HA has scheduled backup virtual routers on the corresponding
nodes, ready to resume service in the case of a physical node failure. When the L3 agent node fails, L3
HA reschedules the affected virtual router and floating IP addresses to a working node:

During a failover event, instance TCP sessions through floating IPs remain unaffected, and migrate to
the new L3 node without disruption. Only SNAT traffic is affected by failover events.

The L3 agent is further protected when in an active/active HA mode.

Additional resources


Virtual Router Redundancy Protocol (VRRP)

19.3. LAYER 3 HIGH AVAILABILITY (HA) FAILOVER CONDITIONS


Layer 3 (L3) high availability (HA) for the Red Hat OpenStack Platform (RHOSP) Networking service
automatically reschedules protected resources in the following events:

The Networking service L3 agent node shuts down or otherwise loses power because of a
hardware failure.

The L3 agent node becomes isolated from the physical network and loses connectivity.

NOTE

Manually stopping the L3 agent service does not induce a failover event.

19.4. PROJECT CONSIDERATIONS FOR LAYER 3 HIGH AVAILABILITY (HA)

Red Hat OpenStack Platform (RHOSP) Networking service Layer 3 (L3) high availability (HA)
configuration occurs in the back end and is invisible to the project. Projects can continue to create and
manage their virtual routers as usual, however there are some limitations to be aware of when designing
your L3 HA implementation:

L3 HA supports up to 255 virtual routers per project.

Internal VRRP messages are transported within a separate internal network, created
automatically for each project. This process occurs transparently to the user.

When implementing high availability (HA) routers on ML2/OVS, each L3 agent spawns haproxy
and neutron-keepalived-state-change-monitor processes for each router. Each process
consumes approximately 20MB of memory. By default, each HA router resides on three L3
agents and consumes resources on each of the nodes. Therefore, when sizing your RHOSP
networks, ensure that you have allocated enough memory to support the number of HA routers
that you plan to implement.
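
As a rough illustration of this sizing guidance, 50 HA routers with two processes per router on each L3
agent node would add on the order of 2 GB of memory per node. The figures below are estimates only,
based on the approximate 20 MB per process stated above:

$ echo "$(( 50 * 2 * 20 )) MB of additional memory per L3 agent node"
2000 MB of additional memory per L3 agent node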

19.5. HIGH AVAILABILITY (HA) CHANGES TO THE RHOSP NETWORKING SERVICE

The Red Hat OpenStack Platform (RHOSP) Networking service (neutron) API has been updated to
allow administrators to set the --ha=True/False flag when creating a router, which overrides the default
configuration of l3_ha in /var/lib/config-data/neutron/etc/neutron/neutron.conf.

High availability (HA) changes to neutron-server:

Layer 3 (L3) HA assigns the active role randomly, regardless of the scheduler used by the
Networking service (whether random or leastrouter).

The database schema has been modified to handle allocation of virtual IP addresses (VIPs)
to virtual routers.

A transport network is created to direct L3 HA traffic.

HA changes to the Networking service L3 agent:

A new keepalived manager has been added, providing load-balancing and HA capabilities.

IP addresses are converted to VIPs.

19.6. ENABLING LAYER 3 HIGH AVAILABILITY (HA) ON RHOSP NETWORKING SERVICE NODES

During installation, Red Hat OpenStack Platform (RHOSP) director enables high availability (HA) for
virtual routers by default when you have at least two RHOSP Controllers and are not using distributed
virtual routing (DVR). Using an RHOSP Orchestration service (heat) parameter,
max_l3_agents_per_router, you can set the maximum number of RHOSP Networking service Layer 3
(L3) agents on which an HA router is scheduled.

Prerequisites

Your RHOSP deployment does not use DVR.

You have at least two RHOSP Controllers deployed.

Procedure

1. Log in to the undercloud as the stack user, and source the stackrc file to enable the director
command line tools.

Example

$ source ~/stackrc

2. Create a custom YAML environment file.

Example

$ vi /home/stack/templates/my-neutron-environment.yaml

TIP

The Orchestration service (heat) uses a set of plans called templates to install and configure
your environment. You can customize aspects of the overcloud with a custom environment file ,
which is a special type of template that provides customization for your heat templates.

3. Set the NeutronL3HA parameter to true in the YAML environment file. This ensures HA is
enabled even if director did not set it by default.

parameter_defaults:
NeutronL3HA: 'true'

4. Set the maximum number of L3 agents on which an HA router is scheduled.


Set the max_l3_agents_per_router parameter to a value between the minimum and total
number of network nodes in your deployment. (A zero value indicates that the router is
scheduled on every agent.)

Example


parameter_defaults:
NeutronL3HA: 'true'
ControllerExtraConfig:
neutron::server::max_l3_agents_per_router: 2

In this example, if you deploy four Networking service nodes, only two L3 agents protect each
HA virtual router: one active, and one standby.

If you set the value of max_l3_agents_per_router to be greater than the number of available
network nodes, you can scale out the number of standby routers by adding new L3 agents. For
every new L3 agent node that you deploy, the Networking service schedules additional standby
versions of the virtual routers until the max_l3_agents_per_router limit is reached.

5. Run the openstack overcloud deploy command and include the core heat templates,
environment files, and this new custom environment file.

IMPORTANT

The order of the environment files is important because the parameters and
resources defined in subsequent environment files take precedence.

Example

$ openstack overcloud deploy --templates \
-e [your-environment-files] \
-e /home/stack/templates/my-neutron-environment.yaml

NOTE

When NeutronL3HA is set to true, all virtual routers that are created default to
HA routers. When you create a router, you can override the HA option by including
the --no-ha option in the openstack router create command:

# openstack router create <router-name> --no-ha
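
To check whether an existing router was created as an HA router, an administrator can inspect the ha
field in the router details. A brief sketch, assuming a router named router1; the ha field is visible only to
administrative users:

$ openstack router show router1 -c ha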

Additional resources

Environment files in the Advanced Overcloud Customization guide

Including environment files in overcloud creation in the Advanced Overcloud Customization guide

19.7. REVIEWING HIGH AVAILABILITY (HA) RHOSP NETWORKING SERVICE NODE CONFIGURATIONS

Procedure

Run the ip address command within the virtual router namespace to return a high availability
(HA) device in the result, prefixed with ha-.

# ip netns exec qrouter-b30064f9-414e-4c98-ab42-646197c74020 ip address

<snip>
2794: ha-45249562-ec: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 12:34:56:78:2b:5d brd ff:ff:ff:ff:ff:ff
    inet 169.254.0.2/24 brd 169.254.0.255 scope global ha-45249562-ec

With Layer 3 HA enabled, virtual routers and floating IP addresses are protected against individual node
failure.
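
To see which L3 agent currently hosts the active instance of a particular router, you can list the agents
hosting that router. This is a hedged sketch; the --router and --long options assume a reasonably recent
OpenStack Client, and router1 is a placeholder name:

$ openstack network agent list --router router1 --long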


CHAPTER 20. REPLACING NETWORKER NODES


In certain circumstances a Red Hat OpenStack Platform (RHOSP) node with a Networker profile in a
high availability cluster might fail. (For more information, see Tagging nodes into profiles in the Director
Installation and Usage guide.) In these situations, you must remove the node from the cluster and
replace it with a new Networker node that runs the Networking service (neutron) agents.

The topics in this section are:

Section 20.1, “Preparing to replace network nodes”

Section 20.2, “Replacing a Networker node”

Section 20.3, “Rescheduling nodes and cleaning up the Networking service”

20.1. PREPARING TO REPLACE NETWORK NODES


Replacing a Networker node on a Red Hat OpenStack Platform (RHOSP) overcloud requires that you
perform several preparation steps. Completing all of the required preparation steps helps you to avoid
complications during the Networker node replacement process.

Prerequisites

Your RHOSP deployment is highly available with three or more Networker nodes.

Procedure

1. Log in to your undercloud as the stack user.

2. Source the undercloud credentials file:

$ source ~/stackrc

3. Check the current status of the overcloud stack on the undercloud:

$ openstack stack list --nested

The overcloud stack and its subsequent child stacks should have a status of either
CREATE_COMPLETE or UPDATE_COMPLETE.
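
To spot problem stacks quickly, you can filter the listing for a failed status. A minimal sketch, assuming
your client supports the --filter option:

$ openstack stack list --nested --filter status=FAILED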

4. Ensure that you have a recent backup image of the undercloud node by running the Relax-and-
Recover tool.
For more information, see the Backing up and restoring the undercloud and control plane nodes
guide.

5. Log in to a Controller node as root.

6. Open an interactive bash shell on the container and check the status of the Galera cluster:

# pcs status

Ensure that the Controller nodes are in Master mode.

Sample output


* Container bundle set: galera-bundle [cluster.common.tag/rhosp16-openstack-mariadb:pcmklatest]:
  * galera-bundle-0 (ocf::heartbeat:galera): Master controller-0
  * galera-bundle-1 (ocf::heartbeat:galera): Master controller-1
  * galera-bundle-2 (ocf::heartbeat:galera): Master controller-2

7. Log in to the RHOSP director node and check the nova-compute service:

$ sudo systemctl status tripleo_nova_compute


$ openstack baremetal node list

The output should show all non-maintenance mode nodes as up.

8. Make sure all undercloud services are running:

$ sudo systemctl -t service
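
To focus on services that are not running, you can also filter for failed units. A brief, hedged sketch
using standard systemd tooling:

$ sudo systemctl list-units --type=service --state=failed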

20.2. REPLACING A NETWORKER NODE


In certain circumstances a Red Hat OpenStack Platform (RHOSP) node with a Networker profile in a
high availability cluster might fail. Replacing a Networker node requires running the openstack
overcloud deploy command to update the overcloud with the new node.

Prerequisites

Your RHOSP deployment is highly available with three or more Networker nodes.

The node that you add must be able to connect to the other nodes in the cluster over the
network.

You have performed the steps described in Section 20.1, “Preparing to replace network nodes”

Procedure

1. Log in to your undercloud as the stack user.

2. Source the undercloud credentials file:

Example

$ source ~/stackrc

3. Identify the index of the node to remove:

$ openstack baremetal node list -c UUID -c Name -c "Instance UUID"

Sample output

+--------------------------------------+------+--------------------------------------+
| UUID                                 | Name | Instance UUID                        |
+--------------------------------------+------+--------------------------------------+
| 36404147-7c8a-41e6-8c72-6af1e339da2a | None | 7bee57cf-4a58-4eaf-b851-f3203f6e5e05 |
| 91eb9ac5-7d52-453c-a017-0f2fb289c3cd | None | None                                 |
| 75b25e9a-948d-424a-9b3b-0f2fb289c3cd | None | None                                 |
| 038727da-6a5c-425f-bd45-16aa2bc4ba91 | None | 763bfec2-9354-466a-ae65-1fdf45d35c61 |
| dc2292e6-4056-46e0-8848-165d06fcc948 | None | 2017b481-706f-44e1-852a-57fb03ecef11 |
| c7eadcea-e377-4392-9fc3-716f1bd57527 | None | 5f73c7d7-4826-49a5-b6be-0a95c6bdd2f8 |
| da3a8d19-8a59-4e9d-923a-29254d688f6d | None | cfefaf60-8311-4bc3-9416-46852e2cb83f |
| 807cb6ce-6b94-4cd1-9969-d390650854c7 | None | c07c13e6-a845-4791-9628-c8514585fe27 |
| 0c245daa-7817-4ae9-a883-fed2e9c68d6c | None | 844c9a88-713a-4ff1-8737-30858d724593 |
| e6499ef7-3db2-4ab4-bfa7-feb44c6591c6 | None | aef7c27a-f0b4-4814-b0ff-d3f792321212 |
| 7545385c-bc49-4eb9-b13c-201368ce1c62 | None | c2e40164-c659-4849-a28f-a7b270ed2970 |
+--------------------------------------+------+--------------------------------------+

4. Set the node into maintenance mode by using the baremetal node maintenance set
command.

Example

$ openstack baremetal node maintenance set e6499ef7-3db2-4ab4-bfa7-feb44c6591c6

5. Create a JSON file to add the new node to the node pool that contains RHOSP director.

Example

{
"nodes":[
{
"mac":[
"dd:dd:dd:dd:dd:dd"
],
"cpu":"4",
"memory":"6144",
"disk":"40",
"arch":"x86_64",
"pm_type":"ipmi",
"pm_user":"admin",
"pm_password":"p@55w0rd!",
"pm_addr":"192.168.24.207"
}
]
}

For more information, see Adding nodes to the overcloud in the Director Installation and Usage
guide.

6. Run the openstack overcloud node import command to register the new node.

Example

186
CHAPTER 20. REPLACING NETWORKER NODES

$ openstack overcloud node import newnode.json

7. After registering the new node, launch the introspection process by using the following
commands:

$ openstack baremetal node manage <node>


$ openstack overcloud node introspect <node> --provide

8. Tag the new node with the Networker profile by using the openstack baremetal node set
command.

Example

$ openstack baremetal node set --property \
capabilities='profile:networker,boot_option:local' \
91eb9ac5-7d52-453c-a017-c0e3d823efd0

9. Create a ~/templates/remove-networker.yaml environment file that defines the index of the
node that you intend to remove:

Example

parameters:
NetworkerRemovalPolicies:
[{'resource_list': ['1']}]

10. Create a ~/templates/node-count-networker.yaml environment file and set the total count of
Networker nodes in the file.

Example

parameter_defaults:
OvercloudNetworkerFlavor: networker
NetworkerCount: 3

11. Run the openstack overcloud deploy command and include the core heat templates,
environment files, and the environment files that you modified.

IMPORTANT

The order of the environment files is important because the parameters and
resources defined in subsequent environment files take precedence.

$ openstack overcloud deploy --templates \


-e <your_environment_files> \
-e /home/stack/templates/node-count-networker.yaml \
-e /home/stack/templates/remove-networker.yaml

RHOSP director removes the old Networker node, creates a new one, and updates the
overcloud stack.

Verification


1. Check the status of the overcloud stack:

$ openstack stack list --nested

2. Verify that the new Networker node is listed, and the old one is removed.

$ openstack server list -c ID -c Name -c Status

Sample output

+--------------------------------------+------------------------+--------+
| ID | Name | Status |
+--------------------------------------+------------------------+--------+
| 861408be-4027-4f53-87a6-cd3cf206ba7a | overcloud-compute-0 | ACTIVE |
| 0966e9ae-f553-447a-9929-c4232432f718 | overcloud-compute-1 | ACTIVE |
| 9c08fa65-b38c-4b2e-bd47-33870bff06c7 | overcloud-compute-2 | ACTIVE |
| a7f0f5e1-e7ce-4513-ad2b-81146bc8c5af | overcloud-controller-0 | ACTIVE |
| cfefaf60-8311-4bc3-9416-6a824a40a9ae | overcloud-controller-1 | ACTIVE |
| 97a055d4-aefd-481c-82b7-4a5f384036d2 | overcloud-controller-2 | ACTIVE |
| 844c9a88-713a-4ff1-8737-6410bf551d4f | overcloud-networker-0 | ACTIVE |
| c2e40164-c659-4849-a28f-507eb7edb79f | overcloud-networker-2 | ACTIVE |
| 425a0828-b42f-43b0-940c-7fb02522753a | overcloud-networker-3 | ACTIVE |
+--------------------------------------+------------------------+--------+

Additional resources

Adding nodes to the overcloud in the Director Installation and Usage guide

Registering Nodes for the Overcloud in the Director Installation and Usage guide

baremetal node manage in the Command Line Interface Reference

overcloud node introspect in the Command Line Interface Reference

Environment files in the Advanced Overcloud Customization guide

Including Environment Files in Overcloud Creation in the Advanced Overcloud Customization guide

20.3. RESCHEDULING NODES AND CLEANING UP THE NETWORKING


SERVICE
As a part of replacing a Red Hat OpenStack Platform (RHOSP) Networker node, remove all Networking
service agents on the removed node from the database. Doing so ensures that the Networking service
does not identify the agents as out-of-service ("dead"). For ML2/OVS users, removing agents from the
removed node enables the DHCP resources to be automatically rescheduled to other Networker nodes.
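
Before you clean up, you can review which agents are registered and where they run. For ML2/OVS, for
example, the following hedged sketch lists the DHCP agents so that you can confirm rescheduling
afterward:

$ openstack network agent list --agent-type dhcp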

Prerequisites

Your RHOSP deployment is highly available with three or more Networker nodes.

Procedure

188
CHAPTER 20. REPLACING NETWORKER NODES

1. Log in to your undercloud as the stack user.

2. Source the overcloud credentials file:

Example

$ source ~/overcloudrc

3. Verify that the RHOSP Networking service processes exist and are marked out-of-service (xxx)
for overcloud-networker-1.

$ openstack network agent list -c ID -c Binary -c Host -c Alive | grep overcloud-networker-1

Sample output for ML2/OVN

+--------------------------------------+-----------------------+-------+----------------+
| ID                                   | Host                  | Alive | Binary         |
+--------------------------------------+-----------------------+-------+----------------+
| 26316f47-4a30-4baf-ba00-d33c9a9e0844 | overcloud-networker-1 | xxx   | ovn-controller |
+--------------------------------------+-----------------------+-------+----------------+

Sample output for ML2/OVS

+--------------------------------------+-----------------------+-------+------------------------+
| ID                                   | Host                  | Alive | Binary                 |
+--------------------------------------+-----------------------+-------+------------------------+
| 8377-66d75323e466c-b838-1149e10441ee | overcloud-networker-1 | xxx   | neutron-metadata-agent |
| b55d-797668c336707-a2cf-cba875eeda21 | overcloud-networker-1 | xxx   | neutron-l3-agent       |
| 9dcb-00a9e32ecde42-9458-01cfa9742862 | overcloud-networker-1 | xxx   | neutron-ovs-agent      |
| be83-e4d9329846540-9ae6-1540947b2ffd | overcloud-networker-1 | xxx   | neutron-dhcp-agent     |
+--------------------------------------+-----------------------+-------+------------------------+

4. Capture the UUIDs of the agents registered for overcloud-networker-1.

$ AGENT_UUIDS=$(openstack network agent list -c ID -c Host -c Alive -c Binary -f value |
  grep overcloud-networker-1 | cut -d\ -f1)

5. Delete any remaining overcloud-networker-1 agents from the database.

$ for agent in $AGENT_UUIDS; do neutron agent-delete $agent ; done

Sample output

Deleted agent(s): 26316f47-4a30-4baf-ba00-d33c9a9e0844
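
The same cleanup can also be performed with the unified OpenStack Client, if you prefer it over the
legacy neutron client. A hedged sketch, reusing the AGENT_UUIDS variable captured in the previous
step:

$ for agent in $AGENT_UUIDS; do openstack network agent delete $agent; done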

Additional resources


network agent list in the Command Line Interface Reference


CHAPTER 21. IDENTIFYING VIRTUAL DEVICES WITH TAGS

21.1. OVERVIEW OF VIRTUAL DEVICE TAGGING


If you launch an instance with multiple network interfaces or block devices, you can use device tagging
to communicate the intended role of each device to the instance operating system. Tags are assigned to
devices at instance boot time, and are available to the instance operating system through the metadata
API and the configuration drive (if enabled).

The tags are set using the following parameters:

--block-device tag=device metadata

--nic tag=device metadata

21.2. TAGGING VIRTUAL DEVICES

Procedure

To tag virtual devices, use the tag parameters, --block-device and --nic, when creating
instances.
Here is an example:

$ nova boot --flavor m1.tiny --image cirros \
--nic net-id=55411ca3-83dd-4036-9158-bf4a6b8fb5ce,tag=nfv1 \
--block-device id=b8c9bef7-aa1d-4bf4-a14d-17674b370e13,bus=virtio,tag=database-server \
NFVappServer

The resulting tags are added to the existing instance metadata and are available through both
the metadata API, and on the configuration drive.

In this example, the following devices section populates the metadata:

Sample contents of the meta_data.json file:

{
"devices": [
{
"type": "nic",
"bus": "pci",
"address": "0030:00:02.0",
"mac": "aa:00:00:00:01",
"tags": ["nfv1"]
},
{
"type": "disk",
"bus": "pci",
"address": "0030:00:07.0",
"serial": "disk-vol-227",
"tags": ["database-server"]
}
]
}

The device tag metadata is available using GET /openstack/latest/meta_data.json from the
metadata API.

If the configuration drive is enabled, and mounted under /configdrive in the instance operating
system, the metadata is also present in /configdrive/openstack/latest/meta_data.json.
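
From inside the instance, you can retrieve the same document directly from the metadata service. This
is a hedged sketch; it assumes the instance can reach the standard metadata IP address:

$ curl http://169.254.169.254/openstack/latest/meta_data.json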
