OpenStack Networking Guide
Abstract
This guide targets OpenStack administrators seeking to deploy and manage OpenStack Networking
(neutron). This guide documents the OpenStack Kilo release.
Contents
Conventions
Notices
Command prompts
Introduction to networking
Basic networking
Network components
Tunnel technologies
Network namespaces
Network address translation
Introduction to OpenStack Networking (neutron)
Overview and components
Service and component hierarchy
Configuration
Server
ML2 plug-in
Deployment scenarios
Scenario: Legacy with Open vSwitch
Scenario: Legacy with Linux Bridge
Scenario: High Availability using Distributed Virtual Routing (DVR)
Scenario: High Availability using VRRP (L3HA) with Open vSwitch
Scenario: High Availability using VRRP (L3HA) with Linux Bridge
Scenario: Provider networks with Open vSwitch
Scenario: Provider networks with Linux bridge
Migration
Migrate legacy nova-network to OpenStack Networking (neutron)
Legacy to DVR
Legacy to L3 HA
Miscellaneous
Disabling libvirt networking
Adding high availability for DHCP
Advanced configuration
Operational
LBaaS
FWaaS
VPNaaS
Service chaining
Group policy
Debugging
Using OpenStack Networking with IPv6
Using SR-IOV functionality
Community Support
Documentation
ask.openstack.org
OpenStack mailing lists
The OpenStack wiki
The Launchpad Bugs area
The OpenStack IRC channel
Documentation feedback
OpenStack distribution packages
Glossary
Introduction to networking
The OpenStack Networking service provides an API that allows users to set up and define network
connectivity and addressing in the cloud. The project code-name for Networking services is
neutron. OpenStack Networking handles the creation and management of a virtual networking
infrastructure, including networks, switches, subnets, and routers for devices managed by the
OpenStack Compute service (nova). Advanced services such as firewalls or virtual private networks
(VPNs) can also be used.
OpenStack Networking consists of the neutron-server, a database for persistent storage, and any
number of plug-in agents, which provide other services such as interfacing with native Linux
networking mechanisms, external devices, or SDN controllers.
Basic networking
Contents
Ethernet
VLANs
Subnets and ARP
DHCP
IP
TCP/UDP/ICMP
Ethernet
Ethernet is a networking protocol, specified by the IEEE 802.3 standard. Most wired network
interface cards (NICs) communicate using Ethernet.
In the OSI model of networking protocols, Ethernet occupies the second layer, which is known as
the data link layer. When discussing Ethernet, you will often hear terms such as local network, layer
2, L2, link layer and data link layer.
In an Ethernet network, the hosts connected to the network communicate by exchanging frames,
which is the Ethernet terminology for packets. Every host on an Ethernet network is uniquely
identified by an address called the media access control (MAC) address. In particular, in an
OpenStack environment, every virtual machine instance has a unique MAC address, which is
different from the MAC address of the compute host. A MAC address has 48 bits and is typically
represented as a hexadecimal string, such as 08:00:27:b9:88:74. The MAC address is hardcoded into the NIC by the manufacturer, although modern NICs allow you to change the MAC
address programmatically. In Linux, you can retrieve the MAC address of a NIC using the ip
command:
$ ip link show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP
mode DEFAULT group default qlen 1000
link/ether 08:00:27:b9:88:74 brd ff:ff:ff:ff:ff:ff
Conceptually, you can think of an Ethernet network as a single bus that each of the network hosts
connects to. In early implementations, an Ethernet network consisted of a single coaxial cable that
hosts would tap into to connect to the network. Modern Ethernet networks do not use this approach,
and instead each network host connects directly to a network device called a switch. Still, this
conceptual model is useful, and in network diagrams (including those generated by the OpenStack
dashboard) an Ethernet network is often depicted as if it was a single bus. You'll sometimes hear an
Ethernet network referred to as a layer 2 segment.
In an Ethernet network, every host on the network can send a frame directly to every other host. An
Ethernet network also supports broadcasts, so that one host can send a frame to every host on the
network by sending to the special MAC address ff:ff:ff:ff:ff:ff. ARP and DHCP are two
notable protocols that use Ethernet broadcasts. Because Ethernet networks support broadcasts, you
will sometimes hear an Ethernet network referred to as a broadcast domain.
When a NIC receives an Ethernet frame, by default the NIC checks to see if the destination MAC
address matches the address of the NIC (or the broadcast address), and the Ethernet frame is
discarded if the MAC address does not match. For a compute host, this behavior is undesirable
because the frame may be intended for one of the instances. NICs can be configured for
promiscuous mode, where they pass all Ethernet frames to the operating system, even if the MAC
address does not match. Compute hosts should always have the appropriate NICs configured for
promiscuous mode.
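For example, on a Linux host you can put an interface into promiscuous mode with the ip command
(eth1 here is only an illustrative interface name); in an OpenStack deployment the networking agents
normally take care of this when they attach interfaces to bridges:
# ip link set dev eth1 promisc on
The PROMISC flag then appears among the flags that ip link show eth1 reports for the interface.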
As mentioned earlier, modern Ethernet networks use switches to interconnect the network hosts. A
switch is a box of networking hardware with a large number of ports, that forwards Ethernet frames
from one connected host to another. When hosts first send frames over the switch, the switch
doesn't know which MAC address is associated with which port. If an Ethernet frame is destined
for an unknown MAC address, the switch broadcasts the frame to all ports. The switch learns which
MAC addresses are at which ports by observing the traffic. Once it knows which MAC address is
associated with a port, it can send Ethernet frames to the correct port instead of broadcasting. The
switch maintains the mappings of MAC addresses to switch ports in a table called a forwarding
table or forwarding information base (FIB). Switches can be daisy-chained together, and the
resulting connection of switches and hosts behaves like a single network.
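A Linux software bridge maintains a similar forwarding table. As a rough illustration, assuming a
host with a bridge named br0, the learned MAC-address-to-port entries can be inspected with the
bridge utility from iproute2:
$ bridge fdb show br br0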
VLANs
VLAN is a networking technology that enables a single switch to act as if it was multiple
independent switches. Specifically, two hosts that are connected to the same switch but on different
VLANs do not see each other's traffic. OpenStack is able to take advantage of VLANs to isolate the
traffic of different tenants, even if the tenants happen to have instances running on the same
compute host. Each VLAN has an associated numerical ID, between 1 and 4095. We say "VLAN
15" to refer to the VLAN with the numerical ID of 15.
To understand how VLANs work, let's consider VLAN applications in a traditional IT environment,
where physical hosts are attached to a physical switch, and no virtualization is involved. Imagine a
scenario where you want three isolated networks, but you only have a single physical switch. The
network administrator would choose three VLAN IDs, say, 10, 11, and 12, and would configure the
switch to associate switchports with VLAN IDs. For example, switchport 2 might be associated
with VLAN 10, switchport 3 might be associated with VLAN 11, and so forth. When a switchport is
configured for a specific VLAN, it is called an access port. The switch is responsible for ensuring
that the network traffic is isolated across the VLANs.
Now consider the scenario that all of the switchports in the first switch become occupied, and so the
organization buys a second switch and connects it to the first switch to expand the available number
of switchports. The second switch is also configured to support VLAN IDs 10, 11, and 12. Now
5/76
imagine host A connected to switch 1 on a port configured for VLAN ID 10 sends an Ethernet
frame intended for host B connected to switch 2 on a port configured for VLAN ID 10. When
switch 1 forwards the Ethernet frame to switch 2, it must communicate that the frame is associated
with VLAN ID 10.
If two switches are to be connected together, and the switches are configured for VLANs, then the
switchports used for cross-connecting the switches must be configured to allow Ethernet frames
from any VLAN to be forwarded to the other switch. In addition, the sending switch must tag each
Ethernet frame with the VLAN ID so that the receiving switch can ensure that only hosts on the
matching VLAN are eligible to receive the frame.
When a switchport is configured to pass frames from all VLANs and tag them with the VLAN IDs
it is called a trunk port. IEEE 802.1Q is the network standard that describes how VLAN tags are
encoded in Ethernet frames when trunking is being used.
Note that if you are using VLANs on your physical switches to implement tenant isolation in your
OpenStack cloud, you must ensure that all of your switchports are configured as trunk ports.
It is important that you select a VLAN range that your current network infrastructure is not using.
For example, if you estimate that your cloud must support a maximum of 100 projects, pick a
VLAN range outside of that value, such as VLAN 200-299. OpenStack and all physical network
infrastructure that handles tenant networks must then support this VLAN range.
Trunking is used to connect between different switches. Each trunk uses a tag to identify which
VLAN is in use. This ensures that switches on the same VLAN can communicate.
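On a Linux host you can see the same tagging mechanism at work by creating an 802.1Q VLAN
subinterface, which is roughly what the Linux bridge agent does for VLAN project networks. The
parent interface and VLAN ID below are examples only:
# ip link add link eth1 name eth1.10 type vlan id 10
# ip link set dev eth1.10 up
Frames sent through eth1.10 leave eth1 tagged with VLAN ID 10.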
Subnets and ARP
To reduce the number of ARP requests, operating systems maintain an ARP cache that contains the
mappings of IP addresses to MAC addresses. On a Linux machine, you can view the contents of the
ARP cache by using the arp command:
$ arp -n
Address       HWtype  HWaddress          Flags Mask  Iface
10.0.2.3      ether   52:54:00:12:35:03  C           eth0
10.0.2.2      ether   52:54:00:12:35:02  C           eth0
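On distributions where the arp command is not installed, the same cache is available through the
iproute2 suite; for the entries above the output would resemble the following (the state column
varies over time):
$ ip neigh show
10.0.2.3 dev eth0 lladdr 52:54:00:12:35:03 REACHABLE
10.0.2.2 dev eth0 lladdr 52:54:00:12:35:02 REACHABLE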
DHCP
Hosts connected to a network use the Dynamic Host Configuration Protocol (DHCP) to
dynamically obtain IP addresses. A DHCP server hands out the IP addresses to network hosts, which
are the DHCP clients.
DHCP clients locate the DHCP server by sending a UDP packet from port 68 to address
255.255.255.255 on port 67. Address 255.255.255.255 is the local network broadcast
address: all hosts on the local network see the UDP packets sent to this address. However, such
packets are not forwarded to other networks. Consequently, the DHCP server must be on the same
local network as the client, or the server will not receive the broadcast. The DHCP server responds
by sending a UDP packet from port 67 to port 68 on the client. The exchange looks like this:
1. The client sends a discover ("I'm a client at MAC address 08:00:27:b9:88:74, I need
an IP address")
2. The server sends an offer ("OK 08:00:27:b9:88:74, I'm offering IP address
10.10.0.112")
3. The client sends a request ("Server 10.10.0.131, I would like to have IP
10.10.0.112")
4. The server sends an acknowledgement ("OK 08:00:27:b9:88:74, IP 10.10.0.112
is yours")
OpenStack uses a third-party program called dnsmasq to implement the DHCP server. Dnsmasq
writes to the syslog (normally found at /var/log/syslog), where you can observe the DHCP request
and replies:
Apr 23 15:53:46 c100-1 dhcpd: DHCPDISCOVER from 08:00:27:b9:88:74 via eth2
Apr 23 15:53:46 c100-1 dhcpd: DHCPOFFER on 10.10.0.112 to 08:00:27:b9:88:74 via eth2
Apr 23 15:53:48 c100-1 dhcpd: DHCPREQUEST for 10.10.0.112 (10.10.0.131) from
08:00:27:b9:88:74 via eth2
Apr 23 15:53:48 c100-1 dhcpd: DHCPACK on 10.10.0.112 to 08:00:27:b9:88:74 via
eth2
When troubleshooting an instance that is not reachable over the network, it can be helpful to
examine this log to verify that all four steps of the DHCP protocol were carried out for the instance
in question.
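If the log is inconclusive, you can also capture the DHCP exchange directly on the interface that
serves the instance's network; the interface name below is only an example:
# tcpdump -n -i eth2 port 67 or port 68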
IP
The Internet Protocol (IP) specifies how to route packets between hosts that are connected to
different local networks. IP relies on special network hosts called routers or gateways. A router is a
host that is connected to at least two local networks and can forward IP packets from one local
network to another. A router has multiple IP addresses: one for each of the networks it is connected
to.
In the OSI model of networking protocols, IP occupies the third layer, which is known as the
network layer. When discussing IP, you will often hear terms such as layer 3, L3, and network layer.
A host sending a packet to an IP address consults its routing table to determine which machine on
the local network(s) the packet should be sent to. The routing table maintains a list of the subnets
associated with each local network that the host is directly connected to, as well as a list of routers
that are on these local networks.
On a Linux machine, any of the following commands displays the routing table:
$ ip route show
$ route -n
$ netstat -rn
Here is an example of ip route show output on a host attached to several networks:
default via 10.0.2.2 dev eth0
10.0.2.0/24 dev eth0
192.168.27.0/24 dev eth1
192.168.122.0/24 dev virbr0
Line 1 of the output specifies the location of the default route, which is the effective routing rule if
none of the other rules match. The router associated with the default route (10.0.2.2 in the
example above) is sometimes referred to as the default gateway. A DHCP server typically transmits
the IP address of the default gateway to the DHCP client along with the client's IP address and a
netmask.
Line 2 of the output specifies that IPs in the 10.0.2.0/24 subnet are on the local network associated
with the network interface eth0.
Line 3 of the output specifies that IPs in the 192.168.27.0/24 subnet are on the local network
associated with the network interface eth1.
Line 4 of the output specifies that IPs in the 192.168.122.0/24 subnet are on the local network
associated with the network interface virbr0.
The output of the route -n and netstat -rn commands is formatted in a slightly different
way. This example shows how the same routes would be formatted using these commands:
$ route -n
Kernel IP routing table
Destination     Gateway     Genmask         Flags  MSS  Window  irtt  Iface
0.0.0.0         10.0.2.2    0.0.0.0         UG     0    0       0     eth0
10.0.2.0        0.0.0.0     255.255.255.0   U      0    0       0     eth0
192.168.27.0    0.0.0.0     255.255.255.0   U      0    0       0     eth1
192.168.122.0   0.0.0.0     255.255.255.0   U      0    0       0     virbr0
The ip route get command outputs the route for a destination IP address. From the above
example, destination IP address 10.0.2.14 is on the local network of eth0 and would be sent directly:
$ ip route get 10.0.2.14
10.0.2.14 dev eth0  src 10.0.2.15
The destination IP address 93.184.216.34 is not on any of the connected local networks and would
be forwarded to the default gateway at 10.0.2.2:
$ ip route get 93.184.216.34
93.184.216.34 via 10.0.2.2 dev eth0  src 10.0.2.15
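You can also add routes manually. For example, to send traffic for a hypothetical 172.16.0.0/24
subnet through the router at 10.0.2.2:
# ip route add 172.16.0.0/24 via 10.0.2.2 dev eth0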
It is common for a packet to hop across multiple routers to reach its final destination. On a Linux
machine, the traceroute and more recent mtr programs print out the IP address of each router
that an IP packet traverses along its path to its destination.
TCP/UDP/ICMP
For networked software applications to communicate over an IP network, they must use a protocol
layered atop IP. These protocols occupy the fourth layer of the OSI model known as the transport
layer or layer 4. See the Protocol Numbers web page maintained by the Internet Assigned Numbers
Authority (IANA) for a list of protocols that layer atop IP and their associated numbers.
The Transmission Control Protocol (TCP) is the most commonly used layer 4 protocol in
networked applications. TCP is a connection-oriented protocol: it uses a client-server model where
a client connects to a server, where "server" refers to the application that receives connections. The
typical interaction in a TCP-based application proceeds as follows:
1. Client connects to server.
2. Client and server exchange data.
3. Client or server disconnects.
Because a network host may have multiple TCP-based applications running, TCP uses an
addressing scheme called ports to uniquely identify TCP-based applications. A TCP port is
associated with a number in the range 1-65535, and only one application on a host can be associated
with a TCP port at a time, a restriction that is enforced by the operating system.
A TCP server is said to listen on a port. For example, an SSH server typically listens on port 22. For
a client to connect to a server using TCP, the client must know both the IP address of the server's
host and the server's TCP port.
The operating system of the TCP client application automatically assigns a port number to the
client. The client owns this port number until the TCP connection is terminated, after which time the
operating system reclaims the port number. These types of ports are referred to as ephemeral ports.
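On Linux, the range of port numbers that the kernel hands out as ephemeral ports is visible (and
tunable) through sysctl; typical values look like the following, although the exact range depends on
the distribution:
$ sysctl net.ipv4.ip_local_port_range
net.ipv4.ip_local_port_range = 32768   61000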
IANA maintains a registry of port numbers for many TCP-based services, as well as services that
use other layer 4 protocols that employ ports. Registering a TCP port number is not required, but
registering a port number is helpful to avoid collisions with other services. See Appendix B.
Firewalls and default ports of the OpenStack Configuration Reference for the default TCP ports
used by OpenStack services.
The Internet Control Message Protocol (ICMP) is a protocol used for sending control messages
over an IP network. For example, a router that receives an IP packet may send an ICMP packet back
to the source if there is no route in the router's routing table that corresponds to the destination
address (ICMP code 1, destination host unreachable) or if the IP packet is too large for the router to
handle (ICMP code 4, fragmentation required and don't fragment flag is set).
The ping and mtr Linux command-line tools are two examples of network utilities that use ICMP.
Network components
Contents
Switches
Routers
Firewalls
Load balancers
Switches
A switch is a device that is used to connect devices on a network. Switches forward packets on to
other devices, using packet switching to pass data along only to devices that need to receive it.
Switches operate at layer 2 of the OSI model.
Routers
A router is a networking device that connects multiple networks together. Routers are connected to
two or more networks. When they receive data packets, they use a routing table to determine which
networks to pass the information to.
Firewalls
A firewall is a network device that controls the incoming and outgoing network traffic based on an
applied rule set.
Load balancers
A load balancer is a network device that distributes network or application traffic across a number
of servers.
Tunnel technologies
Contents
Network namespaces
Contents
Network address translation
Contents
SNAT
DNAT
One-to-one NAT
Network Address Translation (NAT) is a process for modifying the source or destination addresses
in the headers of an IP packet while the packet is in transit. In general, the sender and receiver
applications are not aware that the IP packets are being manipulated.
NAT is often implemented by routers, and so we will refer to the host performing NAT as a NAT
router. However, in OpenStack deployments it is typically Linux servers that implement the NAT
functionality, not hardware routers. These servers use the iptables software package to implement
the NAT functionality.
There are multiple variations of NAT, and here we describe three kinds commonly found in
OpenStack deployments.
SNAT
In Source Network Address Translation (SNAT), the NAT router modifies the IP address of the
sender in IP packets. SNAT is commonly used to enable hosts with private addresses to
communicate with servers on the public Internet.
RFC 1918 reserves the following three subnets as private addresses:
10.0.0.0/8
172.16.0.0/12
192.168.0.0/16
These IP addresses are not publicly routable, meaning that a host on the public Internet can not send
an IP packet to any of these addresses. Private IP addresses are widely used in both residential and
corporate environments.
Often, an application running on a host with a private IP address will need to connect to a server on
the public Internet. One such example is a user who wants to access a public website such as
www.openstack.org. If the IP packets reach the web server at www.openstack.org with a private IP
address as the source, then the web server cannot send packets back to the sender.
SNAT solves this problem by modifying the source IP address to an IP address that is routable on
the public Internet. There are different variations of SNAT; in the form that OpenStack deployments
use, a NAT router on the path between the sender and receiver replaces the packet's source IP
address with the router's public IP address. The router also modifies the source TCP or UDP port to
another value, and the router maintains a record of the sender's true IP address and port, as well as
the modified IP address and port.
When the router receives a packet with the matching IP address and port, it translates these back to
the private IP address and port, and forwards the packet along.
Because the NAT router modifies ports as well as IP addresses, this form of SNAT is sometimes
referred to as Port Address Translation (PAT). It is also sometimes referred to as NAT overload.
OpenStack uses SNAT to enable applications running inside of instances to connect out to the
public Internet.
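In its simplest form, this kind of SNAT can be expressed as a single iptables rule on the NAT router.
The following sketch assumes eth0 is the interface facing the public network; it illustrates the
mechanism rather than the exact rules that OpenStack installs:
# sysctl -w net.ipv4.ip_forward=1
# iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE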
DNAT
In Destination Network Address Translation (DNAT), the NAT router modifies the IP address of the
destination in IP packet headers.
OpenStack uses DNAT to route packets from instances to the OpenStack metadata service.
Applications running inside of instances access the OpenStack metadata service by making HTTP
GET requests to a web server with IP address 169.254.169.254. In an OpenStack deployment, there
is no host with this IP address. Instead, OpenStack uses DNAT to change the destination IP of these
packets so they reach the network interface that a metadata service is listening on.
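A generic DNAT rule looks like the following sketch. The destination 10.0.0.10:8775 is purely
illustrative and is not the exact rule that the L3 agent installs:
# iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp --dport 80 \
  -j DNAT --to-destination 10.0.0.10:8775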
One-to-one NAT
In one-to-one NAT, the NAT router maintains a one-to-one mapping between private IP addresses
and public IP addresses. OpenStack uses one-to-one NAT to implement floating IP addresses.
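For example, assuming an external network named ext-net and an instance port ID obtained from
neutron port-list, a floating IP address can be allocated and mapped to the instance with:
$ neutron floatingip-create ext-net
$ neutron floatingip-associate FLOATINGIP_ID INSTANCE_PORT_ID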
Introduction to OpenStack Networking (neutron)
Overview and components
OpenStack Networking plug-ins enable interoperability with a variety of commercial and open
source network technologies, including routers, switches, virtual switches and software-defined
networking (SDN) controllers.
OpenStack Networking plug-in and agents
Plugs and unplugs ports, creates networks or subnets, and provides IP addressing. The chosen
plug-in and agents differ depending on the vendor and technologies used in the particular
cloud. It is important to mention that only one plug-in can be used at a time.
Messaging queue
Accepts and routes RPC requests between agents to complete API operations. The message queue
is used by the ML2 plug-in for RPC between the neutron server and the neutron agents that run on
each hypervisor, with the ML2 mechanism drivers for Open vSwitch and Linux bridge.
Tenant networks
Users create tenant networks for connectivity within projects. By default, they are fully isolated and
are not shared with other projects. OpenStack Networking supports the following types of network
isolation and overlay technologies.
Flat
All instances reside on the same network, which can also be shared with the hosts. No VLAN
tagging or other network segregation takes place.
VLAN
Networking allows users to create multiple provider or tenant networks using VLAN IDs
(802.1Q tagged) that correspond to VLANs present in the physical network. This allows
instances to communicate with each other across the environment. They can also
communicate with dedicated servers, firewalls, load balancers, and other networking
infrastructure on the same layer 2 VLAN.
GRE and VXLAN
VXLAN and GRE are encapsulation protocols that create overlay networks to activate and
control communication between compute instances. A Networking router is required to allow
traffic to flow outside of the GRE or VXLAN tenant network. A router is also required to
connect directly-connected tenant networks with external networks, including the Internet.
The router provides the ability to connect to instances directly from an external network using
floating IP addresses.
Provider networks
The OpenStack administrator creates provider networks. These networks map to existing physical
networks in the data center. Useful network types in this category are flat (untagged) and VLAN
(802.1Q tagged).
To configure rich network topologies, you can create and configure networks and subnets, and other
OpenStack services such as Compute will request to be connected to these networks by requesting
virtual ports. In particular, Networking supports each tenant having multiple private networks and
enables tenants to choose their own IP addressing scheme, even if those IP addresses overlap with
those that other tenants use.
Subnets
A block of IP addresses and associated configuration state. This is also known as the native IPAM
(IP Address Management) provided by the networking service for both tenant and provider
networks. Subnets are used to allocate IP addresses when new ports are created on a network.
Ports
A port is a connection point for attaching a single device, such as the NIC of a virtual server, to a
virtual network. Also describes the associated network configuration, such as the MAC and IP
addresses to be used on that port.
Routers
This is a logical component that forwards data packets between networks. It also provides L3 and
NAT forwarding to provide external network access for VMs on tenant networks. Required by
certain plug-ins only.
Security groups
A security group acts as a virtual firewall for your compute instances to control inbound and
outbound traffic. Security groups act at the port level, not the subnet level. Therefore, each port in a
subnet could be assigned to a different set of security groups. If you don't specify a particular group
at launch time, the instance is automatically assigned to the default security group for that network.
Security groups and security group rules give administrators and tenants the ability to specify the
type of traffic and direction (ingress/egress) that is allowed to pass through a port. A security group
is a container for security group rules. When a port is created, it is associated with a security group.
If a security group is not specified, the port is associated with a default security group. By default,
this group drops all ingress traffic and allows all egress. Rules can be added to this group in order to
change the behavior.
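For example, a rule allowing inbound SSH from anywhere can be added to the default security group
with the neutron client; the deployment scenarios later in this guide achieve the same result with
nova secgroup-add-rule:
$ neutron security-group-rule-create --direction ingress --protocol tcp \
  --port-range-min 22 --port-range-max 22 --remote-ip-prefix 0.0.0.0/0 default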
Extensions
The OpenStack networking service is extensible. Extensions serve two purposes: they allow the
introduction of new features in the API without requiring a version change and they allow the
introduction of vendor specific niche functionality. Applications can programmatically list available
extensions by performing a GET on the /extensions URI. Note that this is a versioned request;
that is, an extension available in one API version might not be available in another.
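The same list is available from the command line:
$ neutron ext-list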
Server
Overview and concepts
Plug-ins
Overview and concepts
Agents
Overview and concepts
Layer 2 (Ethernet and Switching)
Layer 3 (IP and Routing)
Miscellaneous
Services
Routing services
VPNaaS
LBaaS
FWaaS
Server
Overview and concepts
Provides API, manages database, etc.
Plug-ins
Overview and concepts
Manages agents
Agents
Overview and concepts
Provides layer 2/3 connectivity to instances
Handles physical-virtual network transition
Handles metadata, etc.
Miscellaneous
Metadata
Overview and concepts
Services
Routing services
VPNaaS
The Virtual Private Network as a Service (VPNaaS) is a neutron extension that introduces the VPN
feature set.
LBaaS
The Load-Balancer-as-a-Service (LBaaS) API provisions and configures load balancers. The
reference implementation is based on the HAProxy software load balancer.
FWaaS
The Firewall-as-a-Service (FWaaS) API is an experimental API that enables early adopters and
vendors to test their networking implementations.
Configuration
This content is currently under development. For general configuration, see the Configuration
Reference.
Server
Architecture
Configuration file organization, relationships, etc.
ML2 plug-in
Overview
Deployment scenarios
Scenario: Legacy with Open vSwitch
Prerequisites
Architecture
Packet flow
Example configuration
Scenario: Legacy with Linux Bridge
Prerequisites
Architecture
Packet flow
Example configuration
Scenario: High Availability using Distributed Virtual Routing (DVR)
Prerequisites
Architecture
Packet flow
Example configuration
Scenario: High Availability using VRRP (L3HA) with Open vSwitch
Prerequisites
Architecture
Packet flow
Example configuration
Create initial networks
Scenario: High Availability using VRRP (L3HA) with Linux Bridge
Prerequisites
Architecture
Packet flow
Example configuration
Scenario: Provider networks with Open vSwitch
Prerequisites
Architecture
Packet flow
Example configuration
Scenario: Provider networks with Linux bridge
Prerequisites
Architecture
Packet flow
Example configuration
Prerequisites
Infrastructure
OpenStack services - controller node
OpenStack services - network node
OpenStack services - compute nodes
Architecture
Packet flow
Case 1: North-south for instances with a fixed IP address
Case 2: North-south for instances with a floating IP address
Case 3: East-west for instances on different networks
Case 4: East-west for instances on the same network
Example configuration
Controller node
Network node
Compute nodes
Verify service operation
Create initial networks
Verify network operation
This scenario describes a legacy (basic) implementation of the OpenStack Networking service using
the ML2 plug-in with Open vSwitch (OVS).
The legacy implementation contributes the networking portion of self-service virtual data center
infrastructure by providing a method for regular (non-privileged) users to manage virtual networks
within a project and includes the following components:
Project (tenant) networks
Project networks provide connectivity to instances for a particular project. Regular (non-privileged)
users can manage project networks within the allocation that an administrator or operator defines
for them. Project networks can use VLAN, GRE, or VXLAN transport
methods depending on the allocation. Project networks generally use private IP address
ranges (RFC1918) and lack connectivity to external networks such as the Internet.
Networking refers to IP addresses on project networks as fixed IP addresses.
External networks
External networks provide connectivity to external networks such as the Internet. Only
administrative (privileged) users can manage external networks because they interface with
the physical network infrastructure. External networks can use flat or VLAN transport
methods depending on the physical network infrastructure and generally use public IP
address ranges.
Note
A flat network essentially uses the untagged or native VLAN. Similar to layer-2 properties
of physical networks, only one flat network can exist per external bridge. In most cases,
production deployments should use VLAN transport for external networks.
Routers
Routers typically connect project and external networks. By default, they implement SNAT
to provide outbound external connectivity for instances on project networks. Each router
uses an IP address in the external network allocation for SNAT. Routers also use DNAT to
provide inbound external connectivity for instances on project networks. Networking refers
to IP addresses on routers that provide inbound external connectivity for instances on project
networks as floating IP addresses. Routers can also connect project networks that belong to
the same project.
Supporting services
Other supporting services include DHCP and metadata. The DHCP service manages IP
addresses for instances on project networks. The metadata service provides an API for
instances on project networks to obtain metadata such as SSH keys.
The example configuration creates one flat external network and one VXLAN project (tenant)
network. However, this configuration also supports VLAN external networks, VLAN project
networks, and GRE project networks.
Prerequisites
These prerequisites define the minimal physical infrastructure and immediate OpenStack service
dependencies necessary to deploy this scenario. For example, the Networking service immediately
depends on the Identity service and the Compute service immediately depends on the Networking
service. These dependencies lack services such as the Image service because the Networking
service does not immediately depend on it. However, the Compute service depends on the Image
service to launch an instance. The example configuration in this scenario assumes basic
configuration knowledge of Networking service components.
Infrastructure
1. One controller node with one network interface: management.
2. One network node with four network interfaces: management, project tunnel networks,
VLAN project networks, and external (typically the Internet). The Open vSwitch bridge
br-vlan must contain a port on the VLAN interface and Open vSwitch bridge br-ex
must contain a port on the external interface.
3. At least one compute node with three network interfaces: management, project tunnel
networks, and VLAN project networks. The Open vSwitch bridge br-vlan must contain a
port on the VLAN interface.
To improve understanding of network traffic flow, the network and compute nodes contain a
separate network interface for VLAN project networks. In production environments, VLAN project
networks can use any Open vSwitch bridge with access to a network interface. For example, the
br-tun bridge.
In the example configuration, the management network uses 10.0.0.0/24, the tunnel network uses
10.0.1.0/24, and the external network uses 203.0.113.0/24. The VLAN network does not require an
IP address range because it only handles layer-2 connectivity.
Note
For VLAN external and project networks, the physical network infrastructure must support VLAN
tagging. For best performance with VXLAN and GRE project networks, the network infrastructure
should support jumbo frames.
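For example, if the physical network supports jumbo frames, you might raise the MTU of the tunnel
interface on each node; the interface name and value below are examples only:
# ip link set dev eth1 mtu 9000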
Warning
Linux distributions often package older releases of Open vSwitch that can introduce issues during
operation with the Networking service. We recommend using at least the latest long-term stable
(LTS) release of Open vSwitch for the best experience and support from Open vSwitch. See
http://www.openvswitch.org for available releases and the installation instructions for building
newer releases from source on various distributions.
Implementing VXLAN networks requires Linux kernel 3.13 or newer.
Architecture
The legacy architecture provides basic virtual networking components in your environment.
Routing among project and external networks resides completely on the network node. Although
simpler to deploy than other architectures, performing all functions on the network node
creates a single point of failure and potential performance issues. Consider deploying DVR or L3
HA architectures in production environments to provide redundancy and increase performance.
Packet flow
Note
North-south network traffic travels between an instance and external network, typically the Internet.
East-west network traffic travels between instances.
External network
Network 203.0.113.0/24
IP address allocation from 203.0.113.101 to 203.0.113.200
Project network router interface 203.0.113.101 TR
Project network
Network 192.168.1.0/24
Gateway 192.168.1.1 with MAC address TG
Compute node 1
Instance 1 192.168.1.11 with MAC address I1
Instance 1 resides on compute node 1 and uses a project network.
The instance sends a packet to a host on the external network.
The following steps involve compute node 1:
1. The instance 1 tap interface (1) forwards the packet to the Linux bridge qbr. The packet
contains destination MAC address TG because the destination resides on another network.
2. Security group rules (2) on the Linux bridge qbr handle state tracking for the packet.
3. The Linux bridge qbr forwards the packet to the Open vSwitch integration bridge br-int.
4. The Open vSwitch integration bridge br-int adds the internal tag for the project network.
5. For VLAN project networks:
1. The Open vSwitch integration bridge br-int forwards the packet to the Open
vSwitch VLAN bridge br-vlan.
2. The Open vSwitch VLAN bridge br-vlan replaces the internal tag with the actual
VLAN tag of the project network.
3. The Open vSwitch VLAN bridge br-vlan forwards the packet to the network node
via the VLAN interface.
6. For VXLAN and GRE project networks:
1. The Open vSwitch integration bridge br-int forwards the packet to the Open
vSwitch tunnel bridge br-tun.
2. The Open vSwitch tunnel bridge br-tun wraps the packet in a VXLAN or GRE
tunnel and adds a tag to identify the project network.
3. The Open vSwitch tunnel bridge br-tun forwards the packet to the network node
via the tunnel interface.
The following steps involve the network node:
1. For VLAN project networks:
   1. The VLAN interface forwards the packet to the Open vSwitch VLAN bridge br-vlan.
   2. The Open vSwitch VLAN bridge br-vlan forwards the packet to the Open vSwitch
      integration bridge br-int.
   3. The Open vSwitch integration bridge br-int replaces the actual VLAN tag of the
      project network with the internal tag.
2. For VXLAN and GRE project networks:
   1. The tunnel interface forwards the packet to the Open vSwitch tunnel bridge br-tun.
   2. The Open vSwitch tunnel bridge br-tun unwraps the packet and adds the internal
      tag for the project network.
   3. The Open vSwitch tunnel bridge br-tun forwards the packet to the Open vSwitch
      integration bridge br-int.
3. The Open vSwitch integration bridge br-int forwards the packet to the qr interface (3) in
   the router namespace qrouter. The qr interface contains the project network gateway IP
   address TG.
4. The iptables service (4) performs SNAT on the packet using the qg interface (5) as the
   source IP address. The qg interface contains the project network router interface IP address
   TR.
5. The router namespace qrouter forwards the packet to the Open vSwitch integration
   bridge br-int via the qg interface.
6. The Open vSwitch integration bridge br-int forwards the packet to the Open vSwitch
   external bridge br-ex.
7. The Open vSwitch external bridge br-ex forwards the packet to the external network via
   the external interface.
Note
Return traffic follows similar steps in reverse.
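To observe this flow on the network node, you can run tcpdump inside the router namespace; the
namespace name is deployment specific and contains the router UUID reported by ip netns:
# ip netns exec qrouter-ROUTER_UUID tcpdump -lni any icmp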
The following steps involve compute node 2:
1. For VLAN project networks:
   1. The VLAN interface forwards the packet to the Open vSwitch VLAN bridge br-vlan.
   2. The Open vSwitch VLAN bridge br-vlan forwards the packet to the Open vSwitch
      integration bridge br-int.
   3. The Open vSwitch integration bridge br-int replaces the actual VLAN tag of
      project network 2 with the internal tag.
2. For VXLAN and GRE project networks:
   1. The tunnel interface forwards the packet to the Open vSwitch tunnel bridge br-tun.
   2. The Open vSwitch tunnel bridge br-tun unwraps the packet and adds the internal
      tag for project network 2.
   3. The Open vSwitch tunnel bridge br-tun forwards the packet to the Open vSwitch
      integration bridge br-int.
3. The Open vSwitch integration bridge br-int forwards the packet to the Linux bridge qbr.
4. Security group rules (5) on the Linux bridge qbr handle firewalling and state tracking for
the packet.
5. The Linux bridge qbr forwards the packet to the tap interface (6) on instance 2.
Note
Return traffic follows similar steps in reverse.
Example configuration
Use the following example configuration as a template to deploy this scenario in your environment.
Controller node
1. Configure common options. Edit the /etc/neutron/neutron.conf file:
[DEFAULT]
verbose = True
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
Network node
1. Configure the kernel to enable packet forwarding and disable reverse path filtering. Edit the
/etc/sysctl.conf file:
net.ipv4.ip_forward=1
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.all.rp_filter=0
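You can load the new settings immediately, without rebooting:
# sysctl -p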
Note
The external_network_bridge option intentionally contains no value.
6. Configure the DHCP agent. Edit the /etc/neutron/dhcp_agent.ini file:
[DEFAULT]
verbose = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True
dhcp_delete_namespaces = True
(Optional) Reduce MTU for VXLAN and GRE project networks. Edit the
/etc/neutron/dhcp_agent.ini file:
[DEFAULT]
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf
Start the following services:
Open vSwitch
Open vSwitch agent
L3 agent
DHCP agent
Metadata agent
Compute nodes
1. Configure the kernel to enable iptables on bridges and disable reverse path filtering. Edit the
/etc/sysctl.conf file:
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.all.rp_filter=0
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
tunnel_types = gre,vxlan
[securitygroup]
firewall_driver =
neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
enable_ipset = True
Create initial networks
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | e5f9be2f-3332-4f2d-9f4d-7f87a5a7692e |
| name                      | ext-net                              |
| provider:network_type     | flat                                 |
| provider:physical_network | external                             |
| provider:segmentation_id  |                                      |
| router:external           | True                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 96393622940e47728b6dcdb2ef405f50     |
+---------------------------+--------------------------------------+
Note
The example configuration contains vlan as the first project network type. Only an administrative
user can create other types of networks such as GRE or VXLAN. The following commands use the
admin project credentials to create a VXLAN project network.
1. Obtain the ID of a regular project. For example, using the demo project:
$ openstack project show demo
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Demo Project                     |
| enabled     | True                             |
| id          | 443cd1596b2e46d49965750771ebbfe1 |
| name        | demo                             |
+-------------+----------------------------------+
3. Source the regular project credentials. The following steps use the demo project.
4. Create a subnet on the project network:
$ neutron subnet-create demo-net --name demo-subnet --gateway 192.168.1.1 \
  192.168.1.0/24
Created a new subnet:
+-------------------+--------------------------------------------------+
| Field             | Value                                            |
+-------------------+--------------------------------------------------+
| allocation_pools  | {"start": "192.168.1.2", "end": "192.168.1.254"} |
| cidr              | 192.168.1.0/24                                   |
| dns_nameservers   |                                                  |
| enable_dhcp       | True                                             |
| gateway_ip        | 192.168.1.1                                      |
| host_routes       |                                                  |
| id                | c7b42e58-a2f4-4d63-b199-d266504c03c9             |
| ip_version        | 4                                                |
| ipv6_address_mode |                                                  |
| ipv6_ra_mode      |                                                  |
| name              | demo-subnet                                      |
| network_id        | 6e9c5324-68d1-47a8-98d5-8268db955475             |
| tenant_id         | 443cd1596b2e46d49965750771ebbfe1                 |
+-------------------+--------------------------------------------------+
| name                  | demo-router                          |
| routes                |                                      |
| status                | ACTIVE                               |
| tenant_id             | 443cd1596b2e46d49965750771ebbfe1     |
+-----------------------+--------------------------------------+
Note
The qdhcp namespace might not exist until launching an instance.
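You can list the namespaces that exist on the network node; qrouter namespaces correspond to
routers and qdhcp namespaces to networks served by the DHCP agent:
$ ip netns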
2. Determine the external network gateway IP address for the project network on the router,
typically the lowest IP address in the external subnet IP allocation range:
$ neutron router-port-list demo-router
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                             |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------------+
| b1a894fd-aee8-475c-9262-4342afdc1b58 |      | fa:16:3e:c1:20:55 | {"subnet_id": "c7b42e58-a2f4-4d63-b199-d266504c03c9", "ip_address": "192.168.1.1"}    |
| ff5f93c6-3760-4902-a401-af78ff61ce99 |      | fa:16:3e:54:d7:8c | {"subnet_id": "cd9c15a1-0a66-4bbe-b1b4-4b7edd936f7a", "ip_address": "203.0.113.101"}  |
+--------------------------------------+------+-------------------+---------------------------------------------------------------------------------------+
3. On the controller node or any host with access to the external network, ping the external
network gateway IP address on the project router:
$ ping -c 4 203.0.113.101
PING 203.0.113.101 (203.0.113.101) 56(84) bytes of data.
64 bytes from 203.0.113.101: icmp_req=1 ttl=64 time=0.619 ms
64 bytes from 203.0.113.101: icmp_req=2 ttl=64 time=0.189 ms
64 bytes from 203.0.113.101: icmp_req=3 ttl=64 time=0.165 ms
64 bytes from 203.0.113.101: icmp_req=4 ttl=64 time=0.216 ms

--- 203.0.113.101 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2999ms
rtt min/avg/max/mdev = 0.165/0.297/0.619/0.187 ms
4. Source the regular project credentials. The following steps use the demo project.
5. Launch an instance with an interface on the project network.
6. Obtain console access to the instance.
1. Test connectivity to the project router:
$ ping -c 4 192.168.1.1
PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
64 bytes from 192.168.1.1: icmp_req=1 ttl=64 time=0.357 ms
64 bytes from 192.168.1.1: icmp_req=2 ttl=64 time=0.473 ms
64 bytes from 192.168.1.1: icmp_req=3 ttl=64 time=0.504 ms
64 bytes from 192.168.1.1: icmp_req=4 ttl=64 time=0.470 ms

--- 192.168.1.1 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2998ms
rtt min/avg/max/mdev = 0.357/0.451/0.504/0.055 ms
2. Test connectivity to the Internet:
$ ping -c 4 openstack.org

--- openstack.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 17.431/17.575/17.734/0.143 ms
7. Create the appropriate security group rules to allow ping and SSH access to the instance. For
example:
$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
$ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
11. On the controller node or any host with access to the external network, ping the floating IP
address associated with the instance:
$ ping -c 4 203.0.113.102
PING 203.0.113.102 (203.0.113.102) 56(84) bytes of data.
64 bytes from 203.0.113.102: icmp_req=1 ttl=63 time=3.18 ms
64 bytes from 203.0.113.102: icmp_req=2 ttl=63 time=0.981 ms
64 bytes from 203.0.113.102: icmp_req=3 ttl=63 time=1.06 ms
64 bytes from 203.0.113.102: icmp_req=4 ttl=63 time=0.929 ms

--- 203.0.113.102 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3002ms
rtt min/avg/max/mdev = 0.929/1.539/3.183/0.951 ms
Prerequisites
Infrastructure
OpenStack services - controller node
OpenStack services - compute nodes
Architecture
Packet flow
Case 1: North-south
Case 2: East-west for instances on different networks
Case 3: East-west for instances on the same network
Example configuration
Controller node
Compute nodes
Verify service operation
Create initial networks
Verify network operation
This scenario describes a provider networks implementation of the OpenStack Networking service
using the ML2 plug-in with Open vSwitch (OVS).
Provider networks generally offer simplicity, performance, and reliability at the cost of flexibility.
Unlike other scenarios, only administrators can manage provider networks because they require
configuration of physical network infrastructure. Also, provider networks lack the concept of fixed
and floating IP addresses because they only handle layer-2 connectivity for instances.
In many cases, operators who are already familiar with network architectures that rely on the
physical network infrastructure can easily deploy OpenStack Networking on it. Over time, operators
can test and implement cloud networking features in their environment.
Before OpenStack Networking introduced Distributed Virtual Routers (DVR), all network traffic
traversed one or more dedicated network nodes, which limited performance and reliability. Physical
network infrastructures typically offer better performance and reliability than general-purpose hosts
that handle various network operations in software.
In general, the OpenStack Networking software components that handle layer-3 operations impact
performance and reliability the most. To improve performance and reliability, provider networks
move layer-3 operations to the physical network infrastructure.
In one particular use case, the OpenStack deployment resides in a mixed environment with
conventional virtualization and bare-metal hosts that use a sizable physical network infrastructure.
Applications that run inside the OpenStack deployment might require direct layer-2 access,
typically using VLANs, to applications outside of the deployment.
The example configuration creates a VLAN provider network. However, it also supports flat
(untagged or native) provider networks.
Prerequisites
These prerequisites define the minimum physical infrastructure and OpenStack service
dependencies that you need to deploy this scenario. For example, the Networking service
immediately depends on the Identity service and the Compute service immediately depends on the
Networking service. These dependencies lack services such as the Image service because the
Networking service does not immediately depend on it. However, the Compute service depends on
the Image service to launch an instance. The example configuration in this scenario assumes basic
configuration knowledge of Networking service components.
For illustration purposes, the management network uses 10.0.0.0/24 and provider networks use
192.0.2.0/24, 198.51.100.0/24, and 203.0.113.0/24.
Infrastructure
1. One controller node with two network interfaces: management and provider. The provider
interface connects to a generic network that physical network infrastructure switches/routes
to external networks (typically the Internet). The Open vSwitch bridge br-provider
must contain a port on the provider network interface.
2. At least two compute nodes with two network interfaces: management and provider. The
provider interface connects to a generic network that the physical network infrastructure
switches/routes to external networks (typically the Internet). The Open vSwitch bridge
br-provider must contain a port on the provider network interface.
Architecture
The general provider network architecture uses physical network infrastructure to handle switching
and routing of network traffic.
Note
For illustration purposes, the diagram contains two different provider networks.
The compute nodes contain the following network components:
1. Open vSwitch agent managing virtual switches, connectivity among them, and interaction
via virtual ports with other network components such as Linux bridges and underlying
interfaces.
2. Linux bridges handling security groups.
Note
Due to limitations with Open vSwitch and iptables, the Networking service uses a Linux
bridge to manage security groups for instances.
Note
For illustration purposes, the diagram contains two different provider networks.
Packet flow
Note
North-south network traffic travels between an instance and external network, typically the Internet.
East-west network traffic travels between instances.
Note
Open vSwitch uses VLANs internally to segregate networks that traverse bridges. The VLAN ID
usually differs from the segmentation ID of the virtual network.
Case 1: North-south
The physical network infrastructure handles routing and potentially other services between the
provider and external network. In this case, provider and external simply differentiate between a
network available to instances and a network only accessible via router, respectively, to illustrate
that the physical network infrastructure handles routing. However, provider networks support direct
connection to external networks such as the Internet.
External network
Network 203.0.113.0/24
Provider network (VLAN)
Network 192.0.2.0/24
Gateway 192.0.2.1 with MAC address TG
Compute node 1
Instance 1 192.0.2.11 with MAC address I1
Instance 1 resides on compute node 1 and uses a provider network.
The instance sends a packet to a host on the external network.
Network: 198.51.100.0/24
Gateway: 198.51.100.1 with MAC address TG2
Compute node 1
Instance 1: 192.0.2.11 with MAC address I1
Compute node 2
Instance 2: 198.51.100.11 with MAC address I2
Instance 1 resides on compute node 1 and uses provider network 1.
Instance 2 resides on compute node 2 and uses provider network 2.
Instance 1 sends a packet to instance 2.
4. The Open vSwitch integration bridge br-int forwards the packet to the Linux bridge qbr.
5. Security group rules (5) on the Linux bridge qbr handle firewalling and state tracking for
the packet.
6. The Linux bridge qbr forwards the packet to the tap interface (6) on instance 2.
Note
Return traffic follows similar steps in reverse.
4. The Open vSwitch integration bridge br-int forwards the packet to the Linux bridge qbr.
5. Security group rules (4) on the Linux bridge qbr handle firewalling and state tracking for
the packet.
6. The Linux bridge qbr forwards the packet to the tap interface (5) on instance 2.
Note
Return traffic follows similar steps in reverse.
Example configuration
Use the following example configuration as a template to deploy this scenario in your environment.
Note
The lack of L3 agents in this scenario prevents operation of the conventional metadata agent. You
must use a configuration drive to provide instance metadata.
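For example, when launching an instance with the nova client you can request a configuration drive
explicitly; the flavor, image, and network ID below are illustrative:
$ nova boot --config-drive true --flavor m1.tiny \
  --image cirros-0.3.3-x86_64-disk --nic net-id=PROVIDER_NET_ID test_server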
Controller node
1. Configure the kernel to disable reverse path filtering. Edit the /etc/sysctl.conf file:
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.all.rp_filter=0
Note
The service_plugins option contains no value because the Networking service does
not provide layer-3 services such as routing.
4. Configure the ML2 plug-in and Open vSwitch agent. Edit the
/etc/neutron/plugins/ml2/ml2_conf.ini file:
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = openvswitch
[ml2_type_flat]
flat_networks = provider
[ml2_type_vlan]
network_vlan_ranges = provider
[ovs]
bridge_mappings = provider:br-provider
[securitygroup]
firewall_driver =
neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
enable_ipset = True
Note
The tenant_network_types option contains no value because the architecture does
not support project (private) networks.
Note
The provider value in the network_vlan_ranges option lacks VLAN ID ranges to
support use of arbitrary VLAN IDs.
5. Configure the DHCP agent. Edit the /etc/neutron/dhcp_agent.ini file:
[DEFAULT]
verbose = True
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
dhcp_delete_namespaces = True
8. Add the provider network interface as a port on the Open vSwitch provider bridge
br-provider:
$ ovs-vsctl add-port br-provider PROVIDER_INTERFACE
Replace PROVIDER_INTERFACE with the name of the underlying interface that handles
provider networks. For example, eth1.
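If the br-provider bridge does not exist yet, create it first:
$ ovs-vsctl add-br br-provider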
9. Start the following services:
Server
Open vSwitch agent
DHCP agent
Compute nodes
1. Configure the kernel to disable reverse path filtering. Edit the /etc/sysctl.conf file:
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.all.rp_filter=0
7. Add the provider network interface as a port on the Open vSwitch provider bridge
br-provider:
$ ovs-vsctl add-port br-provider PROVIDER_INTERFACE
Replace PROVIDER_INTERFACE with the name of the underlying interface that handles
provider networks. For example, eth1.
8. Start the following services:
Open vSwitch agent
Note
The shared option allows any project to use this network.
3. Create a subnet on the provider network:
$ neutron subnet-create provider-101 203.0.113.0/24 \
--name provider-101-subnet --gateway 203.0.113.1
Note
The qdhcp namespace might not exist until launching an instance.
2. Source the regular project credentials. The following steps use the demo project.
3. Create the appropriate security group rules to allow ping and SSH access to the instance. For
example:
$ nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| icmp        | -1        | -1      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
$ nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
+-------------+-----------+---------+-----------+--------------+
| IP Protocol | From Port | To Port | IP Range  | Source Group |
+-------------+-----------+---------+-----------+--------------+
| tcp         | 22        | 22      | 0.0.0.0/0 |              |
+-------------+-----------+---------+-----------+--------------+
+--------------------------------------+------------------------------------------------------------------+
| Property                             | Value                                                            |
+--------------------------------------+------------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                           |
| OS-EXT-AZ:availability_zone          | nova                                                             |
| OS-EXT-SRV-ATTR:host                 | -                                                                |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                                |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000001                                                |
| OS-EXT-STS:power_state               | 0                                                                |
| OS-EXT-STS:task_state                | scheduling                                                       |
| OS-EXT-STS:vm_state                  | building                                                         |
| OS-SRV-USG:launched_at               | -                                                                |
| OS-SRV-USG:terminated_at             | -                                                                |
| accessIPv4                           |                                                                  |
| accessIPv6                           |                                                                  |
| adminPass                            | h7CkMdkRXuuh                                                     |
| config_drive                         |                                                                  |
| created                              | 2015-07-22T20:40:16Z                                             |
| flavor                               | m1.tiny (1)                                                      |
| hostId                               |                                                                  |
| id                                   | dee2a9f4-e24c-444d-8c94-386f11f74af5                             |
| image                                | cirros-0.3.3-x86_64-disk (2b6bb38f-f69f-493c-a1c0-264dfd4188d8)  |
| key_name                             | -                                                                |
| metadata                             | {}                                                               |
| name                                 | test_server                                                      |
| os-extended-volumes:volumes_attached | []                                                               |
| progress                             | 0                                                                |
| security_groups                      | default                                                          |
| status                               | BUILD                                                            |
| tenant_id                            | 5f2db133e98e4bc2999ac2850ce2acd1                                 |
| updated                              | 2015-07-22T20:40:16Z                                             |
| user_id                              | ea417ebfa86741af86f84a5dbcc97cd2                                 |
+--------------------------------------+------------------------------------------------------------------+
5. Determine the IP address of the instance. The following step uses 203.0.113.3.
$ nova list
+--------------------------------------+-------------+--------+------------+-------------+---------------------------+
| ID                                   | Name        | Status | Task State | Power State | Networks                  |
+--------------------------------------+-------------+--------+------------+-------------+---------------------------+
| dee2a9f4-e24c-444d-8c94-386f11f74af5 | test_server | ACTIVE | -          | Running     | provider-101=203.0.113.3  |
+--------------------------------------+-------------+--------+------------+-------------+---------------------------+
6. On the controller node or any host with access to the provider network, ping the IP address
of the instance:
$ ping -c 4 203.0.113.3
PING 203.0.113.3 (203.0.113.3) 56(84) bytes of data.
64 bytes from 203.0.113.3: icmp_req=1 ttl=63 time=3.18 ms
64 bytes from 203.0.113.3: icmp_req=2 ttl=63 time=0.981 ms
64 bytes from 203.0.113.3: icmp_req=3 ttl=63 time=1.06 ms
64 bytes from 203.0.113.3: icmp_req=4 ttl=63 time=0.929 ms

--- 203.0.113.3 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3002ms
rtt min/avg/max/mdev = 0.929/1.539/3.183/0.951 ms
7. Obtain console access to the instance and test connectivity to the Internet:
$ ping -c 4 openstack.org

--- openstack.org ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 17.431/17.575/17.734/0.143 ms