Guide - Expose Your Applications Using OVHcloud Public Cloud Load Balancer v1
Objective
This guide aims to explain how to use the OVHcloud Public Cloud Load Balancer to expose your applications hosted on Managed Kubernetes Service (MKS).
If you’re not comfortable with the different ways of exposing your applications in Kubernetes, or if you’re not familiar with the ‘LoadBalancer’ service type, we recommend starting with the guide explaining how to Expose your application deployed on an OVHcloud Managed Kubernetes Service, where you will find details on the different methods to expose your containerized applications hosted in Managed Kubernetes Service.
This guide uses some concepts that are specific to our Public Cloud Load Balancer (listener, pool, health monitor, member, …) and to the OVHcloud Public Cloud Network (Gateway, Floating IP). You can find more information about Public Cloud Network product concepts in our official documentation, for example the networking concepts and load balancer concepts guides.
Prerequisites
Kubernetes version
>= 1.24.13-3
>= 1.25.9-3
>= 1.26.4-3
>= 1.27
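You can quickly check the version of your cluster with kubectl (a minimal sanity check, assuming your kubeconfig already targets your MKS cluster):

# Print client and server versions; the "Server Version" line
# should match one of the supported versions listed above.
kubectl version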
The first step is to make sure that you have an existing vRack on your Public Cloud project. To do so, you can follow the guide explaining how to Configure a vRack for Public Cloud.
Billing
When exposing your load balancer publicly (public-to-public or public-to-private):
- If it does not already exist, a single OVHcloud Gateway will be automatically created and charged for all Load Balancers spawned in the subnet: https://www.ovhcloud.com/en-gb/public-cloud/prices/#10394
- A Public Floating IP will be used: https://www.ovhcloud.com/en-gb/public-cloud/prices/#10346
- Each Public Cloud Load Balancer is billed according to its flavor: https://www.ovhcloud.com/en-gb/public-cloud/prices/#10420
> [!primary]
>
> Note: Each publicly exposed Load Balancer has its own Public Floating IP. Outgoing traffic doesn't consume OVHcloud Gateway bandwidth.
>
> [!warning]
>
> Since the Public Cloud Load Balancer is generally available, during the MKS-Public Cloud Load Balancer (CCM) beta the Public Cloud Load Balancer usage, as well as the other network components (Gateway & Floating IPs), will be billed.
>
Instructions
During the beta phase, if you want a Kubernetes LoadBalancer Service to be deployed using the Public Cloud Load Balancer rather than the historical Loadbalancer for Kubernetes solution, you’ll need to add the annotation loadbalancer.ovhcloud.com/class: "octavia" to your Kubernetes Service manifest.
Here’s a simple example of how to use the Public Cloud Load Balancer:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: test-lb
  name: test-lb-service
  namespace: test-lb-ns
  annotations:
    loadbalancer.ovhcloud.com/class: octavia
    loadbalancer.ovhcloud.com/flavor: small
spec:
  ports:
    - name: 80-80
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: test-lb
  type: LoadBalancer
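To try it out, you can apply this manifest and wait for the Service to obtain an external IP (a minimal sketch; the test-lb.yaml file name is an assumption):

# Create the namespace, deploy the Service, then watch until
# EXTERNAL-IP switches from <pending> to the assigned Floating IP.
kubectl create namespace test-lb-ns
kubectl apply -f test-lb.yaml
kubectl get service test-lb-service -n test-lb-ns --watch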
Use cases
You can find a set a examples on how to use our Public Cloud Load Balancer
with Managed Kubernetes Service (MKS) on our dedicated Github
repository https://github.com/ovh/public-cloud-examples
Public-to-Private (your cluster is attached to a private network/subnet)
Service example:
apiVersion: v1
kind: Service
metadata:
  name: my-lb-service
  namespace: test-lb-ns
  annotations:
    loadbalancer.ovhcloud.com/class: "octavia"
    loadbalancer.ovhcloud.com/flavor: "medium" # optional, default = small
  labels:
    app: test-octavia
spec:
  ports:
    - name: client
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
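Once the Service is provisioned, its EXTERNAL-IP is the public Floating IP attached to the Load Balancer. A quick way to verify it (a sketch, not part of the original guide):

# Retrieve the Floating IP assigned to the Service...
EXTERNAL_IP=$(kubectl get service my-lb-service -n test-lb-ns \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
# ...and send a test request to the exposed application.
curl "http://$EXTERNAL_IP"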
Private-to-Private
Service example:
apiVersion: v1
kind: Service
metadata:
  name: my-lb-service
  namespace: test-lb-ns
  annotations:
    loadbalancer.ovhcloud.com/class: "octavia"
    service.beta.kubernetes.io/openstack-internal-load-balancer: "true"
  labels:
    app: test-octavia
spec:
  ports:
    - name: client
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
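Since service.beta.kubernetes.io/openstack-internal-load-balancer is set to "true", the Load Balancer only gets a private IP and is reachable only from inside the network. One way to verify it is to curl it from a temporary pod (a sketch; the pod name and image are arbitrary choices):

# The LoadBalancer IP is private, so test it from within the cluster.
LB_IP=$(kubectl get service my-lb-service -n test-lb-ns \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
kubectl run curl-test --rm -it --restart=Never \
  --image=curlimages/curl -- curl "http://$LB_IP"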
Public-to-Public
Service example:

apiVersion: v1
kind: Service
metadata:
  name: my-lb-service
  namespace: test-lb-ns
  annotations:
    loadbalancer.ovhcloud.com/class: "octavia"
    loadbalancer.ovhcloud.com/flavor: "medium" # optional, default = small
  labels:
    app: test-octavia
spec:
  ports:
    - name: client
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
Supported Annotations & Features
Service annotations
• loadbalancer.ovhcloud.com/class
During the beta phase, it is mandatory to specify the class of the load balancer you want to create. Authorized values: ‘octavia’ = Public Cloud Load Balancer, ‘iolb’ = Loadbalancer for Managed Kubernetes Service (will be deprecated in future versions). The default value is ‘iolb’.
• loadbalancer.ovhcloud.com/flavor
Defines the flavor used to create the load balancer. As shown in the examples above, the default value is ‘small’.
• service.beta.kubernetes.io/openstack-internal-load-balancer
• loadbalancer.openstack.org/subnet-id
• loadbalancer.openstack.org/member-subnet-id
• loadbalancer.openstack.org/network-id
• loadbalancer.openstack.org/port-id
The port ID for the load balancer's private IP. Can be used if you want to use a specific private IP.
• loadbalancer.openstack.org/connection-limit
• loadbalancer.openstack.org/proxy-protocol
• loadbalancer.openstack.org/x-forwarded-for
• loadbalancer.openstack.org/timeout-client-data
• loadbalancer.openstack.org/timeout-member-connect
• loadbalancer.openstack.org/timeout-member-data
• loadbalancer.openstack.org/timeout-tcp-inspect
• loadbalancer.openstack.org/enable-health-monitor
Defines whether to create a health monitor for the load balancer pool. Default is true. The health monitor can be created or deleted dynamically. A health monitor is required for services with externalTrafficPolicy: Local.
• loadbalancer.openstack.org/health-monitor-delay
Defines the health monitor delay for the loadbalancer pools. Default value: 5000 ms.
• loadbalancer.openstack.org/health-monitor-timeout
Defines the health monitor timeout for the loadbalancer pools. This value should be less than the delay. Default value: 3000 ms.
• loadbalancer.openstack.org/health-monitor-max-retries
Defines the health monitor retry count for the loadbalancer pool
members. Default value = 1
• loadbalancer.openstack.org/flavor-id
The ID of the flavor that is used for creating the loadbalancer. Not needed, as loadbalancer.ovhcloud.com/flavor is provided.
• loadbalancer.openstack.org/load-balancer-id
• loadbalancer.openstack.org/hostname
• loadbalancer.openstack.org/load-balancer-address
• loadbalancer.openstack.org/health-monitor-max-retries-down
Defines the health monitor retry count for the loadbalancer pool
members to be marked down.
• loadbalancer.openstack.org/availability-zone
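To illustrate, here is a sketch (not taken from the original guide) combining several of the annotations above on a single Service; all values are arbitrary examples:

apiVersion: v1
kind: Service
metadata:
  name: tuned-lb-service
  annotations:
    loadbalancer.ovhcloud.com/class: "octavia"
    loadbalancer.ovhcloud.com/flavor: "small"
    # Pass original client connection information to the members.
    loadbalancer.openstack.org/proxy-protocol: "true"
    # Listener timeouts, in milliseconds.
    loadbalancer.openstack.org/timeout-client-data: "60000"
    loadbalancer.openstack.org/timeout-member-data: "60000"
    # Health monitor tuning for the pool.
    loadbalancer.openstack.org/health-monitor-delay: "5000"
    loadbalancer.openstack.org/health-monitor-timeout: "3000"
    loadbalancer.openstack.org/health-monitor-max-retries: "3"
spec:
  ports:
    - name: client
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer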
The following example shows how to reuse the Floating IP of a previous Load Balancer Service by setting spec.loadBalancerIP on the new Service:

apiVersion: v1
kind: Service
metadata:
  name: my-medium-lb
  annotations:
    loadbalancer.ovhcloud.com/class: "octavia"
    loadbalancer.ovhcloud.com/flavor: "medium"
  labels:
    app: demo-upgrade
spec:
  loadBalancerIP: 141.94.215.240 # Use the IP address from the previous service
  ports:
    - name: client
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
• Until the deletion of the previous Service, this new Service will only deploy the Load Balancer, without a Floating IP.
• When the Floating IP becomes available (deleting the initial Load Balancer Service unbinds the IP), it will be attached to this new Load Balancer.
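In practice, the switch could look like this (a sketch; my-old-lb-service is a hypothetical name for the previous Service):

# Delete the previous Service to release its Floating IP...
kubectl delete service my-old-lb-service -n test-lb-ns
# ...then watch the new Service until the released IP gets attached.
kubectl get service my-medium-lb --watch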
> [!warning]
>
> Changing the class of an existing Load Balancer Service, i.e. updating the annotation from
>
> annotations:
>   loadbalancer.ovhcloud.com/class: "iolb"
>
> to
>
> annotations:
>   loadbalancer.ovhcloud.com/class: "octavia"
>
> (or the reverse), is not supported. To switch classes, create a new Service and reuse the Floating IP as shown above.
>
Resources Naming
When deploying a Load Balancer through a Kubernetes Service of type LoadBalancer, the Cloud Controller Manager (CCM) implementation automatically creates Public Cloud resources (Load Balancer, Listener, Pool, Health monitor, Gateway, Network, Subnet, …). To easily identify those resources, here are the naming templates:
| Resource | Naming |
| --- | --- |
| Public Cloud Load Balancer | mks_resource_$mks_cluster_shortname_$namespace_$k8s_service_name_$service-id |
| Listener | listener_mks_resource_$listener_n°_$mks_cluster_shortname_$namespace_$k8s_service_name |
| Pool | pool_mks_resource_$pool_n°_$mks_cluster_shortname_$namespace_$k8s_service_name |
| Health monitor | monitor_mks_resource_$mks_cluster_shortname_$namespace_$k8s_service_name |
| Network (only automatically created in Public-to-Public scenario) | k8s-cluster-$mks_cluster_id |
| Subnet (only automatically created in Public-to-Public scenario) | k8s-cluster-$mks_cluster_id |
| Gateway/Router | k8s-cluster-$mks_cluster_id |
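If you use the OpenStack CLI against your Public Cloud project, you can spot these resources by name (assuming python-openstackclient with the Octavia plugin and your OpenStack credentials loaded):

# List Octavia load balancers; MKS-managed ones follow the
# mks_resource_... naming template described above.
openstack loadbalancer list | grep mks_resource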
Other resources
• Exposing applications using services of LoadBalancer type
• Using Octavia Ingress Controller
• OVHcloud Load Balancer concepts
• How to monitor your Public Cloud Load Balancer with Prometheus
Go further
Visit the GitHub examples repository.