
Expose your applications using OVHcloud Public Cloud Load Balancer
> [!warning]
>
> Usage of the Public Cloud Load Balancer with Managed Kubernetes Service (MKS) is currently in Beta phase. This feature is also available to customers using the services of our US subsidiary.
>

Objective
This guide explains how to use the OVHcloud Public Cloud Load Balancer to expose applications hosted on Managed Kubernetes Service (MKS). If you are not comfortable with the different ways of exposing your applications in Kubernetes, or not familiar with the notion of a Service of type ‘LoadBalancer’, we recommend starting with the guide explaining how to Expose your application deployed on an OVHcloud Managed Kubernetes Service, where you will find details on the different methods to expose your containerized applications hosted in Managed Kubernetes Service.

Our Public Cloud Load Balancer relies on the OpenStack Octavia project, which provides a Cloud Controller Manager (CCM) allowing Kubernetes clusters to interact with load balancers. For Managed Kubernetes Service (MKS), this Cloud Controller is installed and configured by our team, allowing you to easily create, use and configure our Public Cloud Load Balancers. You can find the CCM open-source project documentation here.

This guide uses some concepts that are specific to our Public Cloud Load Balancer (listener, pool, health monitor, member, …) and to the OVHcloud Public Cloud Network (Gateway, Floating IP). You can find more information regarding Public Cloud Network product concepts in our official documentation, for example networking concepts and load balancer concepts.

Prerequisites
Kubernetes version

To be able to deploy a Public Cloud Load Balancer, your Managed Kubernetes Service must run, or have been upgraded to, one of the following patch versions:

Kubernetes versions

• >= 1.24.13-3
• >= 1.25.9-3
• >= 1.26.4-3
• >= 1.27

Network prerequisite to expose your Load Balancers publicly

The first step is to make sure that you have an existing vRack on your Public Cloud project; to do so, you can follow the guide explaining how to Configure a vRack for Public Cloud.

If you plan to expose your Load Balancer publicly, it is mandatory to have an OVHcloud Gateway (an OpenStack router) deployed on the subnet hosting your Load Balancer in order to attach a Floating IP to it. If none exists when you create your first Public Cloud Load Balancer, an S-size Managed Gateway will be created automatically. That is why we recommend deploying your MKS clusters on a network and subnet where an OVHcloud Gateway already exists or can be created (manually or automatically - cf. Creating a private network with Gateway).

If you have an existing/already deployed cluster:

• The subnet’s GatewayIP is already used by an OVHcloud Gateway: nothing needs to be done. The current OVHcloud Gateway (OpenStack router) will be used.
• The subnet does not have an IP reserved for a Gateway: you will have to provide or create a compatible subnet. Three options:
  - Edit the existing subnet to reserve an IP for a Gateway: please refer to the Update a subnet properties documentation.
  - Provide another compatible subnet: a subnet with an existing OVHcloud Gateway or with an IP address reserved for a Gateway (Creating a private network with Gateway).
  - Use a subnet dedicated to your load balancers: this option can be set in the OVHcloud manager under ‘advanced parameters’/‘Loadbalancer Subnet’, or using APIs/Infrastructure as Code with the ‘LoadBalancerSubnetID’ parameter.
• The GatewayIP is already assigned to a non-OVHcloud Gateway (OpenStack router): two options:
  - Provide another compatible subnet: a subnet with an existing OVHcloud Gateway or with an IP address reserved for a Gateway (Creating a private network with Gateway).
  - Use a subnet dedicated to your load balancers: this option can be set in the OVHcloud manager under ‘advanced parameters’/‘Loadbalancer Subnet’, or using APIs/Infrastructure as Code with the ‘LoadBalancerSubnetID’ parameter.
Limitations
• Layer 7 Policies & Rules and TLS termination (TERMINATED_HTTPS listener) are not available yet. For such use cases you can rely on the Octavia Ingress Controller.
• UDP proxy protocol is not supported.

Billing
When exposing your load balancer publicly (public-to-public or public-to-private):
• If it does not already exist, a single OVHcloud Gateway will be automatically created and charged for all Load Balancers spawned in the subnet: https://www.ovhcloud.com/en-gb/public-cloud/prices/#10394
• A public Floating IP will be used: https://www.ovhcloud.com/en-gb/public-cloud/prices/#10346
• Each Public Cloud Load Balancer is billed according to its flavor: https://www.ovhcloud.com/en-gb/public-cloud/prices/#10420

> [!primary]
>
> Note: Each publicly exposed Load Balancer has its own public Floating IP. Outgoing traffic doesn't consume OVHcloud Gateway bandwidth.
>

> [!warning]
>
> Although the MKS integration (CCM) is in Beta, the Public Cloud Load Balancer itself is GA: Load Balancer usage, as well as the other network components (Gateway & Floating IPs), will be billed during the Beta.
>

Instructions
During the beta phase, if you want a Kubernetes LoadBalancer Service to be deployed using a Public Cloud Load Balancer rather than the historical Loadbalancer for Kubernetes solution, you’ll need to add the annotation loadbalancer.ovhcloud.com/class: "octavia" to your Kubernetes Service manifest.

Here’s a simple example of how to use the Public Cloud Load Balancer:

1. Deploy a functional Managed Kubernetes (MKS) cluster using the OVHcloud manager, Terraform, Pulumi or the APIs.
2. Retrieve the kubeconfig file needed to use the kubectl tool (via the OVHcloud manager, Terraform, Pulumi or the API). You can use this guide.
3. Create a Namespace and a Deployment resource using the following commands:

kubectl create namespace test-lb-ns
kubectl create deployment test-lb --image=nginx -n=test-lb-ns

4. Copy/paste the following code into a file named test-lb-service.yaml:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: test-lb
  name: test-lb-service
  namespace: test-lb-ns
  annotations:
    loadbalancer.ovhcloud.com/class: octavia
    loadbalancer.ovhcloud.com/flavor: small
spec:
  ports:
  - name: 80-80
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: test-lb
  type: LoadBalancer

5. Create the Service using the following command:

$ kubectl apply -f test-lb-service.yaml

6. Retrieve the Service IP address using the following command:

$ kubectl get service test-lb-service -n=test-lb-ns

NAME              TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)        AGE
test-lb-service   LoadBalancer   10.3.107.18   141.94.215.240   80:30172/TCP   12m

7. Open a web browser and access http://141.94.215.240.

Use cases
You can find a set of examples on how to use our Public Cloud Load Balancer with Managed Kubernetes Service (MKS) on our dedicated GitHub repository: https://github.com/ovh/public-cloud-examples

Public-to-Private (your cluster is attached to a private network/subnet)

In a public-to-private scenario, you use your Load Balancer to publicly expose applications hosted on your Managed Kubernetes cluster. The main benefit of this scenario is that your Kubernetes nodes are not exposed on the internet.

Service example:

apiVersion: v1
kind: Service
metadata:
  name: my-lb-service
  namespace: test-lb-ns
  annotations:
    loadbalancer.ovhcloud.com/class: "octavia"
    loadbalancer.ovhcloud.com/flavor: "medium" # optional, default = small
  labels:
    app: test-octavia
spec:
  ports:
  - name: client
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer

Private-to-Private

In a private-to-private scenario your Load Balancer is not exposed publicly. This can be useful if you want to expose a containerized service inside your OVHcloud private network.

Service example:

apiVersion: v1
kind: Service
metadata:
  name: my-lb-service
  namespace: test-lb-ns
  annotations:
    loadbalancer.ovhcloud.com/class: "octavia"
    service.beta.kubernetes.io/openstack-internal-load-balancer: "true"
  labels:
    app: test-octavia
spec:
  ports:
  - name: client
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer

Public-to-Public (you are using a public Managed Kubernetes Cluster)

In a public-to-public scenario, all your Kubernetes nodes have a public network interface, and inter-node/pod communication relies on the public network. This is the easiest way to deploy an MKS cluster, as it does not require creating a network and subnet topology. Although all your nodes already carry a public IP address for exposing your applications, you can choose to use a load balancer to expose them behind a single IP address.

Service example:

apiVersion: v1
kind: Service
metadata:
  name: my-lb-service
  namespace: test-lb-ns
  annotations:
    loadbalancer.ovhcloud.com/class: "octavia"
    loadbalancer.ovhcloud.com/flavor: "medium" # optional, default = small
  labels:
    app: test-octavia
spec:
  ports:
  - name: client
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
Supported Annotations & Features
Service annotations

• loadbalancer.ovhcloud.com/class

During the Beta phase it is mandatory to specify the class of the load balancer you want to create. Authorized values: ‘octavia’ = Public Cloud Load Balancer, ‘iolb’ = Loadbalancer for Managed Kubernetes Service (will be deprecated in future versions). Default value is ‘iolb’.

• loadbalancer.ovhcloud.com/flavor

Not a standard OpenStack Octavia annotation (specific to OVHcloud). The flavor (size) used for creating the load balancer. Specifications can be found on the Load Balancer specifications page. Authorized values: small, medium, large. Default is ‘small’.

• service.beta.kubernetes.io/openstack-internal-load-balancer

If ‘true’, the load balancer will only have an IP on the private network (no Floating IP is associated with the Load Balancer). Default is ‘false’.

• loadbalancer.openstack.org/subnet-id

The ID of the subnet where the private IP of the load balancer will be retrieved. By default, the subnet ID of the subnet configured for your OVHcloud Managed Kubernetes Service cluster will be used.

• loadbalancer.openstack.org/member-subnet-id

The member subnet ID of the created load balancer. By default, the subnet ID of the subnet configured for your OVHcloud Managed Kubernetes Service cluster will be used.

• loadbalancer.openstack.org/network-id

The ID of the network from which the load balancer's virtual IP will be allocated. By default, the network ID of the network configured for your OVHcloud Managed Kubernetes Service cluster will be used.

• loadbalancer.openstack.org/port-id

The port ID for the load balancer's private IP. Can be used if you want to use a specific private IP.

• loadbalancer.openstack.org/connection-limit

The maximum number of connections per second allowed for the listener. Positive integer or -1 for unlimited (default). This annotation supports the update operation.
• loadbalancer.openstack.org/keep-floatingip

If ‘true’, the floating IP will NOT be deleted upon load balancer deletion. Default is ‘false’. Useful if you want to keep your floating IP after the Load Balancer is deleted.

• loadbalancer.openstack.org/proxy-protocol

If ‘true’, the load balancer pool protocol will be set as PROXY. Default is ‘false’.

• loadbalancer.openstack.org/x-forwarded-for

Not supported. If you want to perform Layer 7 load balancing, we recommend using the official Octavia ingress controller: https://github.com/kubernetes/cloud-provider-openstack/blob/master/docs/octavia-ingress-controller/using-octavia-ingress-controller.md

• loadbalancer.openstack.org/timeout-client-data

Frontend client inactivity timeout in milliseconds for the load balancer. Default value (ms) = 50000.

• loadbalancer.openstack.org/timeout-member-connect

Backend member connection timeout in milliseconds for the load balancer. Default value (ms) = 5000.

• loadbalancer.openstack.org/timeout-member-data

Backend member inactivity timeout in milliseconds for the load balancer. Default value (ms) = 50000.

• loadbalancer.openstack.org/timeout-tcp-inspect

Time to wait for additional TCP packets for content inspection, in milliseconds, for the load balancer. Default value (ms) = 0.

• loadbalancer.openstack.org/enable-health-monitor

Defines whether to create a health monitor for the load balancer pool. Default is ‘true’. The health monitor can be created or deleted dynamically. A health monitor is required for services with externalTrafficPolicy: Local.

• loadbalancer.openstack.org/health-monitor-delay

Defines the health monitor delay for the load balancer pools. Default value = 5000 ms.

• loadbalancer.openstack.org/health-monitor-timeout

Defines the health monitor timeout for the load balancer pools. This value should be less than the delay. Default value = 3000 ms.

• loadbalancer.openstack.org/health-monitor-max-retries

Defines the health monitor retry count for the load balancer pool members. Default value = 1.

• loadbalancer.openstack.org/flavor-id

The ID of the flavor used for creating the load balancer. Not generally useful, as we provide the loadbalancer.ovhcloud.com/flavor annotation instead.

• loadbalancer.openstack.org/load-balancer-id

This annotation is automatically added to the Service if it is not specified when creating it. After the Service is created successfully, it shouldn’t be changed, otherwise the Service won’t behave as expected.

If this annotation is specified with a valid cloud load balancer ID when creating the Service, the Service reuses this load balancer rather than creating another one. Again, it shouldn’t be changed after the Service is created.

If this annotation is specified, the other annotations which define the load balancer features will be ignored.

• loadbalancer.openstack.org/hostname

This annotation explicitly sets a hostname in the status of the load balancer service.

• loadbalancer.openstack.org/load-balancer-address

This annotation is automatically added and contains the floating IP address of the load balancer service. When the loadbalancer.openstack.org/hostname annotation is used, this is the only place to see the real address of the load balancer.

Not supported yet:

• loadbalancer.openstack.org/health-monitor-max-retries-down

Defines the health monitor retry count for the load balancer pool members to be marked down.

• loadbalancer.openstack.org/availability-zone

The name of the load balancer availability zone to use. It is ignored if the Octavia version doesn’t support availability zones yet.
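Putting several of the supported annotations above together, a Service manifest might look like the following sketch. The service name and the annotation values shown (connection limit, timeout) are illustrative choices for this example, not recommended defaults:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: tuned-lb-service        # hypothetical name for this example
  namespace: test-lb-ns
  annotations:
    loadbalancer.ovhcloud.com/class: "octavia"
    loadbalancer.ovhcloud.com/flavor: "small"
    # Keep the floating IP if this Service is ever deleted
    loadbalancer.openstack.org/keep-floatingip: "true"
    # Illustrative: cap the listener at 10000 connections per second
    loadbalancer.openstack.org/connection-limit: "10000"
    # Illustrative: raise the client inactivity timeout to 60s (value in ms)
    loadbalancer.openstack.org/timeout-client-data: "60000"
spec:
  # externalTrafficPolicy: Local requires the health monitor,
  # which is enabled by default
  externalTrafficPolicy: Local
  ports:
  - name: client
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
```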
Features

Resize your LoadBalancer

There is no proper way to ‘resize’ your load balancer yet (work in progress). The best alternative to change the flavor of your load balancer is to create a new Kubernetes Service that will reuse the same public IP as the existing one. You can find the complete how-to and examples on our public GitHub repository: https://github.com/ovh/public-cloud-examples

• First, make sure that the existing service uses the loadbalancer.openstack.org/keep-floatingip annotation; if not, the public floating IP will be released. (It can be added after the service creation.)
• Get the public IP of your existing service:

$ kubectl get service my-small-lb

NAME          TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)        AGE
my-small-lb   LoadBalancer   10.3.107.18   141.94.215.240   80:30172/TCP   12m

• Create a new service with the new expected flavor:

apiVersion: v1
kind: Service
metadata:
  name: my-medium-lb
  annotations:
    loadbalancer.ovhcloud.com/class: "octavia"
    loadbalancer.ovhcloud.com/flavor: "medium"
  labels:
    app: demo-upgrade
spec:
  loadBalancerIP: 141.94.215.240 # Use the IP address from the previous service
  ports:
  - name: client
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer

• Until the previous service is deleted, this new Service will only deploy the Load Balancer, without a floating IP.
• When the floating IP becomes available (deleting the initial LB service unbinds it), it will be attached to the new Load Balancer.

> [!warning]
>
> Changing the flavor leads to the creation of a new Load Balancer and the deletion of the old one. During this changeover your applications may become inaccessible.
>

Sharing load balancer with multiple Services

By default, each Service of type LoadBalancer has its own corresponding cloud load balancer; however, the Cloud Controller Manager (CCM) allows multiple Services to share a single load balancer. To do so, you can follow the official documentation: Sharing load balancer with multiple Services.
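As a sketch of what sharing looks like in practice, a second Service can point at the load balancer created for a first Service through the loadbalancer.openstack.org/load-balancer-id annotation described above. The service name is hypothetical and the ID below is a placeholder; in a real setup you would copy it from the first Service, where the CCM adds the annotation automatically:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: second-lb-service      # hypothetical name for this example
  namespace: test-lb-ns
  annotations:
    loadbalancer.ovhcloud.com/class: "octavia"
    # Placeholder ID: reuse the load balancer already created
    # for the first Service instead of creating a new one
    loadbalancer.openstack.org/load-balancer-id: "<existing-load-balancer-id>"
spec:
  ports:
  - name: client
    # Must not clash with the ports already exposed on the shared load balancer
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: nginx
  type: LoadBalancer
```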

Use PROXY protocol to preserve client IP

When exposing services like nginx-ingress-controller, it is a common requirement that client connection information pass through proxy servers and load balancers, and therefore be visible to the backend services. Knowing the originating IP address of a client may be useful for setting a particular language for a website, keeping a denylist of IP addresses, or simply for logging and statistics purposes. You can follow the official Cloud Controller Manager documentation on how to Use PROXY protocol to preserve client IP.
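A minimal sketch of enabling this through the loadbalancer.openstack.org/proxy-protocol annotation is shown below. The service name, namespace and selector are illustrative assumptions; note that the backend (here an ingress controller) must itself be configured to parse the PROXY protocol header, otherwise incoming requests will appear malformed:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-lb       # hypothetical name for this example
  namespace: ingress-nginx
  annotations:
    loadbalancer.ovhcloud.com/class: "octavia"
    # Set the pool protocol to PROXY so the original client IP
    # is forwarded to the backend
    loadbalancer.openstack.org/proxy-protocol: "true"
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app.kubernetes.io/name: ingress-nginx   # assumed backend label
  type: LoadBalancer
```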

### Migrate from Loadbalancer for Kubernetes to Public Cloud Load Balancer

In order to migrate from an existing Loadbalancer for Kubernetes to a Public Cloud Load Balancer, you will have to modify the existing Service and change its LoadBalancer class.

Your existing LoadBalancer Service using Loadbalancer for Kubernetes should have the following annotation:

annotations:
  loadbalancer.ovhcloud.com/class: "iolb"

Step 1 - Edit your Service to change the LoadBalancer class to ‘octavia’:

annotations:
  loadbalancer.ovhcloud.com/class: "octavia"

Step 2 - Apply the change

kubectl apply -f your-service-manifest.yaml

> [!warning]
>
> As Loadbalancer for Kubernetes and Public Cloud Load Balancer do not use the same solution for public IP allocation, it is not possible to keep the existing public IP of your Loadbalancer for Kubernetes. Changing the LoadBalancer class of your Service will lead to the creation of a new Load Balancer and the allocation of a new public IP (Floating IP).
>

Resources Naming
When deploying a LoadBalancer through a Kubernetes Service of type LoadBalancer, the Cloud Controller Manager (CCM) implementation will automatically create Public Cloud resources (LoadBalancer, Listener, Pool, Health monitor, Gateway, Network, Subnet, …). In order to easily identify those resources, here are the naming templates:

| Resource | Naming |
|---|---|
| Public Cloud Load Balancer | mks_resource_$mks_cluster_shortname_$namespace_$k8s_service_name |
| Listener | listener_mks_resource_$listener_n°_$mks_cluster_shortname_$namespace_$k8s_service_name |
| Pool | pool_mks_resource_$pool_n°_$mks_cluster_shortname_$namespace_$k8s_service_name |
| Health-monitor | monitor_mks_resource_$mks_cluster_shortname_$namespace_$k8s_service_name |
| Network (only automatically created in Public-to-Public scenario) | k8s-cluster-$mks_cluster_id |
| Subnet (only automatically created in Public-to-Public scenario) | k8s-cluster-$mks_cluster_id |
| Gateway/Router | k8s-cluster-$mks_cluster_id |
| Floating IP | Name = IP. Description = LB Octavia name |

Other resources
• Exposing applications using services of LoadBalancer type
• Using Octavia Ingress Controller
• OVHcloud Load Balancer concepts
• How to monitor your Public Cloud Load Balancer with Prometheus
Go further
Visit the GitHub examples repository.

Visit our dedicated Discord channel: https://discord.gg/ovhcloud. Ask questions, provide feedback and interact directly with the team that builds our Container and Orchestration services.

If you need training or technical assistance to implement our solutions, contact your sales representative or click on this link to get a quote and ask our Professional Services experts for a custom analysis of your project.

Join our community of users on https://community.ovh.com/en/.
