What's New in OpenStack 17 / 17.1 (External)

Red Hat OpenStack Platform 17 introduces significant enhancements including a virtualized control plane on OpenShift, advanced networking features, and improved security measures such as secure RBAC and FIPS compatibility. The roadmap outlines future developments, emphasizing lifecycle support and scalability improvements. Key themes focus on deployment ease, observability, and storage enhancements, aimed at optimizing user experience and operational efficiency.


What's New in Red Hat OpenStack Platform 17
October 2022
Agenda (Technical Product Update)

▸ Introduction
▸ Change Summary
▸ Technical Messaging Changes
▸ Roadmap
▸ Call to Action
▸ Virtualization
▸ Networking

Red Hat OpenStack Platform 17
What's Changed? Feature highlights

Enhanced Deployment

Virtualized control plane on OpenShift
Offers a bridge for customers to explore OSP and OCP together, with dynamic resource allocation

Director Lite Deployment
Simplifies deployments with fewer required services and a smaller footprint

Red Hat Ceph Storage 5
Deployment and management of RHCS 5 via the new cephadm and Ceph Orchestrator, with decoupled day-2 management

Advanced Networking

Hardware Offload
SmartNICs provide acceleration for security groups, crypto tasks, and more

Hardware virtio (vDPA)
Achieves SR-IOV performance, increased portability across vendors, and live migration

BGP Dynamic Routing
IPv6 connectivity, multi-cluster connectivity

Scale beyond 1000 nodes
Massive scale for massive innovation

Improved Security

Secure RBAC - Tech Preview 17.0
Expanded, granular Role-Based Access Control authorisation across OpenStack services, providing increased functionality, improved auditability, and a reduced attack surface

FIPS Compatibility - Tech Preview 17.0
Addresses the security control requirements for the OpenStack undercloud operating in FIPS mode

* Under planning and subject to change

DAY 2 THEMES

LIFECYCLE
● Ease of deployment and upgrades
● Release alignment and consistency for APIs

OBSERVABILITY
● Easy-to-consume, managed observability stack, integrations, and add-ons
● Observatorium for monitoring and logging
● Consistent metrics and analysis tools for visualization
● Consistent logs, event queries, and correlation experiences for OpenStack hybrid clouds

SCALE
● Extensible, easy-to-scale plug-and-play observability services and APIs
● Easy-to-extend add-on community, ecosystem, and partners
● Extended framework monitoring and logging APIs for enhanced OpenStack observability on Observatorium

STORAGE THEMES

Enhanced Operator Experience
● Cephadm integration with RHCS 5
● Multi-path deployment automation
● S3 backend & zstd for backups
● Cinder NVMe-oF support
● Manila manage/unmanage
● Manila multiple backends
● Multi Ceph clusters RBD support
● Cinder backup on a separate Ceph cluster

User Experience Features
● Cinder deferred deletion (clone v2)
● Glance image upload compression
● Create share from CephFS snapshot
● Backup & restore across AZs (CLI)
● Glance distributed image import
● Default volume type per tenant

Security
● Expose volume secret IDs

SECURITY THEMES

Identity and Access Management
● Secure RBAC
● Federation
● MemCached implementation

Automated compliance management
● OpenSCAP Integration
● FIPS Compliance
● FedRAMP
● ANSSI

Protect platform data at rest and in motion
● Secret Management
● HSM Integration
● Certificate Management
● TLS Everywhere
Red Hat OpenStack Platform Lifecycle

Long-life releases: 13, 16, 17

● Red Hat OpenStack Platform 10: based on Newton
● Red Hat OpenStack Platform 13: based on Queens
● Red Hat OpenStack Platform 16.1: based on Train, with Ussuri backports
● Red Hat OpenStack Platform 16.2: based on Train
● Red Hat OpenStack Platform 17.0: based on Wallaby
● Red Hat OpenStack Platform 17.1: based on Wallaby, with Xena backports

Supported in-place upgrade paths:
● RHOSP 10 to 13 (until Dec 2021)
● RHOSP 13 to 16.2 GA
● RHOSP 16.2 to 17.0 (no upgrade)
● RHOSP 16.2 to 17.1 (planned)

Support phases:
● RHOSP 17.0: Full Support, Sept 22, 2022 to Sept 22, 2023
● RHOSP 17.1: Full Support, planned Q2 2023 to planned Q2 2024
● RHOSP 17.1: Maintenance Support, planned Q2 2024 to planned Q2 2026
● RHOSP 17.1: ELS, planned Q2 2026 to April 2027

Roadmap Details

OpenStack 17 Roadmap

Compute

17.0
● Moving to q35 default machine type
● Compute node supporting multiple NVIDIA vGPU types [TP]
● UEFI Secure Boot [TP]
● vTPM encryption [TP]
● Virtio Data Path Acceleration (vDPA) [TP]
● Pinned and non-pinned CPUs in the same instance [TP]
● Support trait reporting via provider.yaml [TP]

17.1
● Compute extended hybrid states for upgrades
● Power efficiency: turning off unused cores [TP]
● `Socket` PCI NUMA affinity policy
● UEFI Secure Boot
● vTPM encryption
● Virtio Data Path Acceleration (vDPA) with cold-migrate, resize, evacuate, shelve
● Pinned and non-pinned CPUs in the same instance
● Support trait reporting via provider.yaml (SST-BF/CP, CAT, ...)
● Windows Server 2022 guests

Storage

17.0
● RHCS 5 with cephadm integration
● Cinder NVMe-oF support
● Multi Ceph cluster support (for non-Edge)
● Swift container sharding
● Manila multiple backends of the same kind
● Manila manage/unmanage

17.1
● Cinder S3 backend & zstd for backups
● Image compression
● Create share from snapshot with CephFS
● Multipath deployment automation
● Distributed image import
● Default volume type per tenant

Networking

17.0
● OVN migration: trunking, OVS firewalls
● Multi-cloud interconnect with BGP for control plane and public IP advertisement [TP]
● Designate full GA

17.1
● OVN migration scale, restore from backup
● Octavia LB: vertical scaling, SCTP, OVN ACLs
● OVN stateless security groups
● Designate support for bring-your-own BIND instances (GA)
● Octavia with HA at Edge sites (GA)
OpenStack 17 Roadmap

Day 1

17.0
● Support for OSP Director operator deployment (17.0.z)

17.1
● OSP Director operator enhancements

Day 2 / OpenShift Alignment

17.0
● STF 1.5 disconnected installation (community)
● STF release alignment with OpenShift releases
● Leverage OpenShift RHCC operators & releases

17.1
● STF Light 1.6 disconnected installation (RHCC) on OpenShift-aligned releases
● Enhanced support for RHCC operators
● STF support for syslog streaming to Kafka
● STF support for Thanos/Prometheus metrics & monitoring APIs
● STF support for Loki logging and APIs

Upgrades

17.0
● OVN migration: trunking, OVS firewalls
● No upgrade from RHOSP 16.2 to RHOSP 17.0; upgrade support comes in RHOSP 17.1

17.1
● Upgrade from RHOSP 16.2 to RHOSP 17.1
● Mixed RHEL version upgrade support
OpenStack 17 Roadmap

Security

17.0
● Project-scoped personas, Secure RBAC [TP]
● FIPS compatibility [TP]
● Support for creation of SSL certificates using private keys stronger than 2048 bits
● RHEL 9.0 security enhancements

17.1
● Support for Federation via OpenIDC
● Enabling keystone caching when using Fernet tokens
● Manage certmonger certificates via Ansible instead of Puppet
● Support vTPM

NFV

17.0
● Q35 machine type
● OVN migration fast datapath
● OVS-DPDK stateless security groups

17.1
● Mix pinned and unpinned vCPUs for a given VM
● Optional NUMA affinity for Neutron ports
● vDPA virtio 1.1, OVS HW offload [TP], live migration
● OVN TC/flower conntrack offload GA: FIP/NAT, QoS metering max & min bandwidth

Edge

17.0
● RHCSv5 integration
● Nova Caching API
● Improved image management with Ceph

17.1
● Octavia at the Edge
● Designate at the Edge
● Image at the edge with third-party backends
● Nova segment-aware scheduler
OpenStack 17 Roadmap

Shift on Stack

17.0
● MetalLB: BGP & router sharding support
● Support for out-of-tree Kubernetes OpenStack Cloud Provider and Cinder CSI
● Support for OVS-DPDK workers
● DPDK support in the host-device plugin
● Support for DCN for enterprise use cases (TP)
● Scale improvements

17.1
● Support for DCN with Telco/NFV functionality
● Further scale improvements
● OVS hardware offload
● Support for OVS-DPDK workers

High Availability

17.0
● Controller HA deployment across multiple L2 networks / multirack HA (TP)
● Optional use of AMQP for RPC instead of RabbitMQ in the control plane (TP)

17.1
● Optional use of AMQP for RPC instead of RabbitMQ in the control plane
● Mariabackup controller recovery
● Authentication plugin SHA-256 support for RHOSP in MariaDB (ed25519)

Scale

17.0
● 750 nodes per cluster
● Raft OVSDB clustering
● NFV conntrack scale
● Shift on Stack 300+ nodes?

17.1
● 1000 nodes per cluster
● Edge scale
● OVN

Extending OpenStack Deployment to OpenShift
Extend OpenStack deployment to OpenShift
With a virtualized control plane, managed by the Metal3/Ironic service

● Red Hat OpenStack Platform can already be deployed today on bare metal, with a virtual control plane on RHV
● With the availability of OpenShift Virtualization, the OpenStack control plane can be virtualized and treated as an OpenShift workload, to offer OpenStack services
  ○ All machines are managed by the Metal3/Ironic service that is self-hosted by the OpenShift cluster
  ○ Allowing for bare-metal provisioning flexibility, as machines can be used for OpenStack or OpenShift workloads
  ○ Leveraging a new OpenStack Director Operator
  ○ GA in Red Hat OpenStack 17 and backported to 16.2

[Diagram: OpenStack VMs run on Red Hat OpenShift Container Platform, on Red Hat Enterprise Linux CoreOS, on physical machines.]
OSP Director Operator
(Custom Resource Definitions)

Hardware provisioning CRDs:
● OpenStackNet: integrated IPv4/IPv6 IPAM
● OpenStackControlPlane: control plane VMs via KubeVirt
● OpenStackBaremetalSet: bare-metal nodes via Metal3

Software configuration CRDs:
● OpenStackPlaybookGenerator: generates Ansible playbooks, stored in Git
● OpenStackClient (pod): executes Ansible and runs openstackclient
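To make the pieces concrete, a deliberately minimal OpenStackControlPlane custom resource is sketched below. The apiVersion and kind match the osp-director-operator CRDs; the spec fields shown are an abbreviated, illustrative subset (names, counts, and sizes are placeholders, not the complete schema):

apiVersion: osp-director.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: overcloud
  namespace: openstack
spec:
  openStackClientNetworks:
    - ctlplane
  virtualMachineRoles:
    controller:
      roleName: Controller
      roleCount: 3            # three controller VMs, scheduled via KubeVirt
      cores: 8
      memory: 64
      diskSize: 100
      baseImageVolumeName: openstack-base-img   # PV holding the RHEL base image

Applying a CR like this causes the operator to create the controller VMs; the OpenStackPlaybookGenerator CR then renders the Ansible playbooks that configure them.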
Deployment Flow

1. Set up the OSP Director Operator
2. Create the git and root secrets
3. Deploy the networks CR
4. Create the RHEL image PV for the ctlplane
5. Create the baremetal CR and the ctlplane CR
6. Create the TripleO and Heat config maps
7. Create the PlaybookGenerator
   ○ Online: tail the generator log until it finishes
   ○ Disconnected: run the Playbook Generator in interactive mode and log in to the generated pod
8. Run the stack creation
   ○ Online: run the stack creation script in the generator pod and follow the log until completion
   ○ Disconnected: log in to the client and execute a script/playbook to update the local disconnected registry credentials and repos for deployment; then log in to the osp client pod and run the deploy script with -a to accept the git playbooks and -p to execute the stack creation
Virtualized control plane on OpenShift Virtualization

[Diagram: an OpenShift cluster with three bare-metal masters and six workers; workers 0-3 run OCP application and infra pods, while workers 4-5 (worker-osp) host the three OSP Controller virtual machines; the OSP Compute nodes remain bare metal.]
OpenStack NextGen Deployment: high-level component deployment flow

The OpenStack Installer (MetaOperator) drives the flow:
1. Step 1: ctlplane deploy
2. Step 2: data plane operator/job deploy/adopt
3. Step 3: data plane components deploy/adopt (compute, storage, networking)
4. Step 4: ctlplane connection ack

It coordinates the per-service operators (Horizon, Nova, Keystone, Neutron, Glance, Designate) and other operators (Galera, SR-IOV, AMQP).
Storage at the Edge with Distributed Compute Nodes

DCN with Ceph Storage Dashboard

● Leverage the Ceph dashboard at every site!
● Future integration with STF for a centralised view
● 16.1.4 backport target

[Diagram: the primary site (control stack 0, AZ0) holds the undercloud + container registry, controller nodes, optional compute nodes, and Ceph cluster 0; L3-routed DCN sites (stacks 1-5, AZ1-AZ5) each run 3+ HCI nodes plus optional compute nodes xN.]

1. HAProxy is not deployed at the edge sites; customers need to set up the LB/VIP themselves or access the dashboard directly.
2. The dashboard itself remains HA.
OSP 16.1 DCN Solution Overview

OSP 16.2 DCN architecture with Ceph Storage (non-HCI)

[Diagram: the primary site (control stack 0, AZ0) holds the undercloud + container registry, controller nodes, optional compute nodes, and Ceph cluster 0; L3-routed DCN sites (stacks 1-5, AZ1-AZ5) each run 3+ dedicated Ceph nodes plus compute nodes xN.]

OpenShift on OpenStack Architectural Update
Shift on Stack: DCN architecture, fully stretched clusters

● Leveraging spine/leaf network topologies and routed provider networks
● Fully distributed OCP clusters, with control plane and compute under different subnets
● Focus on campus HA and workload isolation (OCP master nodes are sensitive to RTT latency, so low-latency interconnects are mandatory)
● Dev Preview in OCP 4.12
● Requires the OSP 16.2 DCN architecture

[Diagram: the primary site (AZ0) hosts the undercloud + container registry, controller nodes, and Cluster0 masters and workers on OSP computes/HCI; L3-routed DCN sites AZ1-AZ3 host Cluster0 workers, Cluster1 masters/workers, and Cluster2 masters (one per AZ) with workers, all on OSP computes/HCI.]

Running the Control Plane of RHOSP in RHV?
Red Hat Virtualization Product Support Life Cycle
Red Hat Virtualization is reaching the end of Maintenance Support in 2024

● August 31, 2022: Red Hat Virtualization enters the Maintenance Support Phase
● August 31, 2024: Red Hat Virtualization exits the Maintenance Support Phase. After this date, existing OpenStack customers running the control plane on RHV no longer receive Maintenance Support: Red Hat-defined Critical and Important Security errata advisories (RHSAs), and Urgent and selected (at Red Hat's discretion) High Priority Bug Fix errata advisories (RHBAs), may still be released as they become available, and other errata advisories may be delivered as appropriate.
● August 31, 2026: Red Hat Virtualization reaches its End of Life

https://access.redhat.com/support/policy/updates/rhev

NETWORKING
▸ OVN migration
▸ Multi-Cloud Interconnect with BGP
▸ OSP Designate
Open Virtual Network (OVN): distributed control and services
Packed with features

OVN is the default Neutron SDN plugin, via the ML2/OVN mechanism driver.

▸ Scalable overlay management and efficient L2 connectivity with Geneve overlay tunneling
▸ Distributed control plane with OVSDB for improved scale and performance
▸ Distributed services (DHCP/metadata, internal DNS, floating IP, DVR routing, multicast) on the compute node reduce the latency of going to the central controller or gateway nodes

[Diagram: HA OSP controllers host Keystone, Nova, Horizon, Glance, the Neutron server with the ML2/OVN mechanism driver, MariaDB, and RabbitMQ, alongside OVN northd and the OVN north/south OVSDB databases; each OVN chassis (compute) runs ovn-controller, OVSDB-server, and OVS-vswitchd, connected over OVSDB and OpenFlow.]
OSP OVN roadmap

17.0
● OVN migration
  ○ Support for deployments with OVS trunks
  ○ Support for deployments with hybrid iptables firewall rules
● Active-active HA for the OVN northbound and southbound DBs

17.1
● Neutron port optional NUMA affinity
● OVN QoS: min bandwidth
● OVN security group logging
● OVN stateless security groups
● Neutron availability zones: limit overlays to AZs
● OVN TC/flower conntrack offload GA
● OVN stateless security groups/ACLs offload
● vDPA OVS HW offload [TP]
Routed Datacenter and Multi-cloud with BGP

Phase 1: BGP Direct Connect, BGP for the control plane, and ECMP

● BGP Direct Connect: external/public IP advertisement
  ○ BGP advertises public IPs: provider networks, tenant FIPs, VIPs
  ○ OSP 17.0 Tech Preview (SE, case-by-case basis); OSP 17.1 GA
  ○ Advantages/limitations: distributed traffic; no overlapping IP addresses; kernel networking only (no fast datapath, kernel routing); global config with Director

● ECMP for the dataplane: BGP at deployment for nodes, with ECMP routing to the ToRs
  ○ OSP 17.0 Tech Preview
  ○ Advantages/limitations: avoids bonding; default route to the ToRs

● BGP for the control plane: controllers on separate racks (BGP advertises the active VIP)
  ○ OSP 17.0 Tech Preview; OSP 17.1 GA

● (Future) OVN BGP integration for NFV: fast datapath (OVS-DPDK, OVS HW offload)
  ○ OSP 18
BGP Routed Datacenter: Controller High Availability

All three controllers are on separate racks with different subnets.
● Resilience against power disruptions
● Controllers can be in separate DC sites, DCN edge sites, or availability zones (AZs), provided Ceph storage is local

Pacemaker uses an L3 IP instead of keepalived/VRRP (which is L2 based) to detect liveliness between controllers.

Controller-1 [Active] advertises the external VIP 192.1.1.1 to Leaf/ToR1 via BGP.

The OpenStack client connects to Controller-1 [Active], and API services are load-balanced by the ha-proxy service on Controller-1.

[Diagram: a TCP client reaches three leaf/ToR switches (192.1.0.1/24, 192.2.0.1/24, 192.3.0.1/24), each fronting a controller running HAProxy, Pacemaker, Keepalived/VRRP, BGP, API services (Nova, Neutron, Keystone), DB services (MariaDB Galera), and messaging (RabbitMQ).]
BGP Routed Datacenter with Provider Networks and ECMP load balancing

Tenants use provider networks for the dataplane:
● The ToR peers via BGP with the compute nodes
● RHEL FRRouting BGP speaker + BFD (liveliness); node and VM IPs are advertised via BGP
● VMs can migrate to a different rack and keep the same external IP
● Computes install a default route with next hops to both ToRs, giving ECMP

[Diagram: Controller-0 runs the Neutron server (ML2/OVN) with the OVN north/south DBs behind ToR0; in Rack1/AZ1 (AS 65000), Compute 1-1 runs ovn-controller, OVS (br-ex), FRR BGP, and FRR BFD, peering with ToR 1-1 (100.64.0.2/30) and ToR 1-2 (100.65.0.2/30); it advertises the node loopback 192.1.1.3/32 and VM1's loopback IP 172.1.1.5/32 on the external provider VLAN 172.1.1.0/24, and installs a default route 0.0.0.0/0 via 100.64.0.1 and 100.65.0.1.]
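As an illustrative sketch (not the exact configuration director renders), the FRR BGP stanza on Compute 1-1 peering with both ToRs over BFD and advertising its loopback could look like:

router bgp 65000
 ! one session per uplink, with BFD for fast liveliness detection
 neighbor 100.64.0.1 remote-as 65000
 neighbor 100.64.0.1 bfd
 neighbor 100.65.0.1 remote-as 65000
 neighbor 100.65.0.1 bfd
 address-family ipv4 unicast
  ! advertise the node loopback that carries the VIP/VM routes
  network 192.1.1.3/32
  ! install routes learned from both ToRs as equal-cost paths
  maximum-paths 8
 exit-address-family

With both sessions up, the node's loopback is reachable via either uplink and the default route learned from the ToRs is installed over both, which is what provides ECMP without bonding.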
BGP Routed Datacenter with Provider Networks & Bonding

Provider networks and dataplane:
● The ToR peers via BGP with the compute nodes
● RHEL FRRouting iBGP or eBGP speaker; 2 BGP peers per node, using the node IP to peer
● OVN advertises the VM loopback IPs via BGP
● OVN DVR via the external provider network GW

[Diagram: Rack0 (AS 65000) holds Controller-0 with the Neutron server (ML2/OVN) and the OVN north/south DBs behind ToR0; Rack1/AZ1 and Rack2/AZ2 (both AS 65000) each have paired ToRs (ToR 1-1/1-2, ToR 2-1/2-2) and computes with bonded VLAN100 interfaces on the external network; Compute 1-1 (node IP 192.1.1.3/32) hosts VM1 (loopback 172.1.1.5/32) and Compute 2-1 (node IP 192.1.2.3/32) hosts VM2 (loopback 172.1.1.6/32); the ToRs advertise each VM loopback with the hosting node's IP as the next hop, e.g. 172.1.1.5/32 via 100.64.0.3.]

https://docs.openstack.org/neutron-dynamic-routing/latest/contributor/testing.html#environment-architecture
Phase 1: Multi-cloud with BGP Direct Connect (OSP 17.0 TP / 17.1 GA)

OSP clouds in the same datacenter or across SD-WAN [Calico, AWS Direct Connect].
BGP serves the control plane and advertises node IPs, VM IPs (provider networks), and floating IPs as public addresses (no VPN), with distributed or centralized traffic.

● BGP for the node control plane & public IPs, provider networks, and provisioning networks; Geneve overlay with a centralized datapath
● BGP for the OVN GW advertises the external network and FIPs; BGP external IPs use a distributed datapath

[Diagram: OSP cloud1 (AS 65001, DataCenter 1) and OSP cloud2 (AS 65002, DataCenter 2) each run FRR BGP/BFD on controllers, computes, and OVN gateways; the leaves advertise node, VM, and public IPs (external IPs and FIPs) across an SD-WAN via DC gateways, while Geneve overlays carry traffic to the OVN GWs within each cloud.]
OSP Designate GA (OSP 17.0)

▸ Support for BIND 9 as the authoritative server and Unbound as the DNS resolver (OSP 17.0)
▸ Support for 3rd-party nameservers (OSP 17.1)
▸ HA support with HAProxy
▸ TripleO support for deployment and configuration of Designate components
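For a feel of the user-facing workflow once Designate is deployed, creating a zone and an A record with the unified CLI looks like the following sketch (zone name and address are illustrative):

$ openstack zone create --email dnsmaster@example.com example.com.
$ openstack recordset create --type A --record 192.0.2.10 example.com. www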
Compute
Red Hat OpenStack Platform 17.0 Compute

● Moving to q35 default machine type
● UEFI Secure Boot [TP]
● vTPM encryption [TP]
● Compute node supporting multiple NVIDIA vGPU types [TP]
● Virtio Data Path Acceleration (vDPA) [TP]
● Pinned and non-pinned CPUs in the same instance [TP]
● Support trait reporting via provider.yaml (SST-BF/CP, CAT, ...) [TP]

[TP]: Tech Preview
Q35 default machine type
Switch to the pc-q35-rhel9.0.0 machine type by default

Only for new Red Hat OpenStack Platform 17.0 deployments.

The Q35 default for OSP 17.0 provides several improvements, including:

● A higher limit on the number of vCPUs (384 instead of 255)
● The ability to live-migrate instances between different RHEL 9.x minor releases
● Support for native PCIe hotplug, which is faster than the ACPI-based hotplug used by the older `i440fx` PC machine type
● UEFI support; legacy boot is slowly vanishing from modern operating systems, with Microsoft Windows 10 requiring UEFI Secure Boot for full functionality
● Secure Boot support
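Deployments that need a specific machine type for an image can still set it explicitly; a minimal sketch (image name illustrative, property value taken from the RHEL 9 default above):

$ openstack image set --property hw_machine_type=pc-q35-rhel9.0.0 rhel9-guest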
UEFI Secure Boot

● Secure Boot aims to ensure no unsigned kernel code runs on the machine
● Protects guests from boot-time malware
● Validates that the code executed by the guest firmware is trusted

$ openstack image create uefi-secure-boot --disk-format qcow2 --container-format bare --file rhel90.qcow2
$ openstack image set --property hw_firmware_type=uefi --property os_secure_boot=required uefi-secure-boot
$ openstack server create --flavor m1.micro --image uefi-secure-boot test-vm
Emulated Trusted Platform Module (vTPM)

● The chain of trust for virtualization starts with vTPM
● New director parameter to configure vTPM:

  NovaEnableVTPM: true

● Director will configure the libvirt swtpm support (swtpm_enabled) in nova.conf
● Needed for Microsoft Windows Server 2022

$ openstack flavor set flavor-with-tpm --property hw:tpm_version=2.0
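The requirement can also be expressed on the image rather than the flavor; as a sketch (image name illustrative), the equivalent image properties are:

$ openstack image set --property hw_tpm_version=2.0 --property hw_tpm_model=tpm-crb win2022-guest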
Compute node supporting multiple NVIDIA vGPU types

● Cold migrate and resize instances having vGPUs
● Compute node supporting multiple NVIDIA vGPU types

$ ls /sys/bus/pci/devices/0000\:04\:00.0/mdev_supported_types/
nvidia-222 nvidia-223 nvidia-224 nvidia-225 nvidia-226 nvidia-227 nvidia-228 nvidia-229
nvidia-230 nvidia-231 nvidia-232 nvidia-233 nvidia-234 nvidia-252 nvidia-319 nvidia-320
nvidia-321

$ grep enabled_vgpu_types /etc/nova/nova.conf
enabled_vgpu_types = nvidia-319,nvidia-320
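Instances then request a vGPU through the flavor's resource extra spec; a minimal sketch (flavor name and sizing illustrative):

$ openstack flavor create --ram 8192 --disk 40 --vcpus 4 vgpu-flavor
$ openstack flavor set vgpu-flavor --property resources:VGPU=1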
vDPA as an enhancement to traditional SR-IOV

● Only the data path part of the Virtio spec is in hardware
● The control path can be vendor specific, "translated" from/to Virtio by the host kernel's vDPA framework and the vendor vDPA driver, while the guest keeps using the standard virtio-net kernel driver under QEMU
● Advantages:
  ○ Offers hardware-accelerated networking without requiring tenants to install vendor-specific drivers in their guests
  ○ Leverages hardware-accelerated networking while maintaining the ability to have transparent live migration
● Known hardware:
  ○ NVIDIA ConnectX-6 Dx and BlueField-2
  ○ Intel devices using the IFC driver
  ○ Full Virtio offload devices
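On the Neutron side, vDPA is requested through the port's VNIC type; a minimal sketch (network, image, and flavor names illustrative):

$ openstack port create --network provider-net --vnic-type vdpa vdpa-port
$ openstack server create --flavor m1.large --image rhel90 --port vdpa-port vdpa-vm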
Add ability to use pinned and non-pinned CPUs in the same instance

Three boot scenarios:
1. Boot a pinned instance: the instance is pinned to any two unused cores on the host from cpu_dedicated_set
2. Boot an unpinned instance: the instance runs on any two unused cores from cpu_shared_set, with its vCPUs floating across them
3. Boot a mixed-pinning instance: some instance cores are pinned to unused cores from cpu_dedicated_set; the remainder float across the cores listed in cpu_shared_set

Director configuration:
NovaComputeCpuDedicatedSet: "0,1,2,4,5,6"
NovaComputeCpuSharedSet: "3,7"

nova.conf:
cpu_dedicated_set = 0,1,2,4,5,6
cpu_shared_set = 3,7
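A mixed instance is requested with the `mixed` CPU policy plus a mask of which vCPUs are dedicated; a minimal sketch (flavor name illustrative; on a 4-vCPU flavor, vCPUs 0-1 are pinned and 2-3 float):

$ openstack flavor set mixed-flavor \
    --property hw:cpu_policy=mixed \
    --property hw:cpu_dedicated_mask=0-1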
Support trait reporting via provider.yaml (SST-BF/CP, CAT, ...)

● Group special hosts with a label
● Reference the label in the flavor
● Can be used for special hardware
● Removes the need to add the traits manually by hand
● Allows building more scheduling features

[compute]
# Directory of yaml files containing resource provider configuration.
# Files in this directory will be processed in lexicographic order.
provider_config_location = /etc/nova/provider_config/
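A provider.yaml sketch that attaches a custom trait to the compute node's root provider could look like the following (the $COMPUTE_NODE placeholder follows the upstream provider config format; the trait name is illustrative):

meta:
  schema_version: '1.0'
providers:
  - identification:
      name: $COMPUTE_NODE        # this compute node's root resource provider
    traits:
      additional:
        - CUSTOM_INTEL_SST_BF    # custom traits must carry the CUSTOM_ prefix

A flavor can then target the labelled hosts with:

$ openstack flavor set sst-flavor --property trait:CUSTOM_INTEL_SST_BF=required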
Storage
OSP 17.0 with RHCS 5

New deployment framework with cephadm

RHCS deployed first
▸ Separate dedicated step
▸ Aligned with traditional storage deployments
▸ RHCS deployment issues don't interrupt the overall deployment
▸ Improved troubleshooting experience
▸ Director still manages bare metal and RHEL deployments

Decoupled day-2 operations
▸ Most day-2 operations are now performed outside director
  ・ Config changes, disk replacement, updates, upgrades, etc.
  ・ No need to trigger an entire stack update
▸ Node addition/removal still managed by director
Deployment workflows

Single-step OSP + RHCS deployment workflow:

1. Provision networks:
   openstack overcloud network provision ...
2. Provision ALL nodes:
   openstack overcloud node provision ...
3. Deploy the base Ceph cluster (RBD ready):
   openstack overcloud ceph deploy ...
4. Deploy OSP & finalise the RHCS deployment (configures OSP):
   openstack overcloud deploy --templates ...

RHCS node scale-out workflow:

1. Add node(s) in baremetal provisioning
2. Re-run node provisioning:
   openstack overcloud node provision ...
3. Re-run the overcloud config:
   openstack overcloud deploy --templates ...
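Stitched together, the single-step flow might look like this sketch (file names are illustrative; each step's --output is fed to the final deploy as an environment file):

$ openstack overcloud network provision --output networks-deployed.yaml network_data.yaml
$ openstack overcloud node provision --stack overcloud --output baremetal-deployed.yaml baremetal_deployment.yaml
$ openstack overcloud ceph deploy baremetal-deployed.yaml --stack overcloud --output ceph-deployed.yaml
$ openstack overcloud deploy --templates -e networks-deployed.yaml -e baremetal-deployed.yaml -e ceph-deployed.yaml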
Ceph RGW by default

Simplifying day-1 configuration with better defaults

● RGW is now automatically enabled when deploying Ceph
  ○ It used to be Swift by default
  ○ RGW had to be explicitly enabled
● Option to opt out of object storage
● The 16.2 to 17.1 upgrade path won't migrate Swift to RGW
Auto copy images at the edge

Improve the DCN user experience

● Booting a VM at the edge required a local image copy
● Nova now automatically imports the image if not present
● The VM can then boot as usual
● It is still recommended to have a local copy to speed up the boot process

[Diagram: the central site runs the OSP control plane (Glance, Nova) with a central Ceph cluster (VMs, images, and volumes pools); when an edge compute at site 2 needs image xyz, it is copied from the central Glance to the site 2 Ceph cluster's images pool.]
Create shares from CephFS snapshots

Improve user experience with advanced snapshot management

● Full support for CephFS snapshots
  ○ CephFS native
  ○ CephFS NFS (Ganesha)
● Support for creating a new share from a snapshot
  ○ Convenient way to duplicate a share at a given point in time (clone)
  ○ Roll back to a previous version
● Admin-controlled feature
  ○ The admin enables it on a per-share-type basis
● Re-uses the existing CLI
  ○ manila snapshot-create (...)
  ○ manila create (...) --snapshot-id <snap-id> (...)
● New shares created from a snapshot copy the data
  ○ No copy-on-write
  ○ No dependency on the snapshot
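End to end, duplicating a share could look like this sketch (names, protocol, and size are illustrative; the snapshot ID comes from the first command's output):

$ manila snapshot-create share-a --name snap-a
$ manila create cephfs 1 --name share-b --snapshot-id <snap-id>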
Default volume type per project

Fine-grained control over a project's storage defaults

● Allows admins to set a default volume type per tenant
● Control over what type/backend tenants should consume by default
  ○ Types also expose extra attributes (encryption, QoS, etc.)
  ○ Falls back to the global default if no tenant default is set
● Avoids users having to specify their "most common" type at volume creation
  ○ Users can still use a specific type
● Can map volume types to AZs & tiers for better isolation control
● New admin CLI options:
  ○ cinder default-type-set <vol-type-id> <project-id>
  ○ cinder default-type-unset <project-id>
  ○ cinder default-type-list [--project-id <project-id>]

[Diagram: Project 1 has no default set and falls back to Volume Type A (the global default); Project 2 defaults to Volume Type B; Project 3 defaults to Volume Type C; all types map to the Cinder backend(s).]
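In practice, once a default is set, volume creation without an explicit type picks it up automatically; a short sketch (IDs are placeholders):

$ cinder default-type-set <vol-type-id> <project-id>
$ openstack volume create --size 10 my-volume

The second command receives the project's default type without the user passing a type.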
Thank you

Red Hat is the world's leading provider of enterprise open source software solutions. Award-winning support, training, and consulting services make Red Hat a trusted adviser to the Fortune 500.

linkedin.com/company/red-hat
youtube.com/user/RedHatVideos
facebook.com/redhatinc
twitter.com/RedHat