Application of Software Load Balancing or SLB For SDN
Part One
Cloud service providers, or CSPs, and organizations running Software Defined Networking,
or SDN, on Windows Server 2016 can distribute Tenant network traffic among virtual network
resources using Software Load Balancing, or SLB. SLB in Windows Server enables multiple servers
to host the same workload, providing high availability and scalability.
You can use Windows Server SLB to scale out load balancing capabilities using SLB VMs on
the same Hyper-V compute servers that you use for your other virtual machine workloads. For this
reason, SLB supports the rapid creation and removal of Load Balancing endpoints required for CSP
operations. In addition, Windows Server SLB supports tens of gigabytes per cluster, provides a
simple provisioning model, and can be easily scaled out and in.
How SLB works
SLB works by mapping virtual IPs, or VIPs, to dynamic IPs, or DIPs, that are part of a Cloud service's set of resources in the data center.
VIPs are single IP addresses that provide public access to a pool of Load Balanced virtual
machines. For example, VIPs are IP addresses that are exposed on the Internet so that Tenants and
their users can connect to Tenant resources in the Cloud Data Center.
DIPs are the IP addresses of the member VMs of a Load Balanced pool behind the VIP. DIPs are
assigned within the Cloud infrastructure to Tenant resources.
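As a purely conceptual sketch (using the example addresses that appear later in this article, and not any SLB API), the relationship can be pictured as a one-to-many mapping from a public VIP to a pool of private DIPs:

    # Conceptual illustration only: one public VIP fronts a pool of tenant DIPs.
    $vipToDipPool = @{
        '107.105.47.60' = @('10.10.10.5', '10.10.20.5')  # VIP -> DIP pool
    }
    # A new inbound flow is directed to one DIP in the pool.
    $dip = $vipToDipPool['107.105.47.60'] | Get-Random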
VIPs are hosted on the SLB Multiplexer, or MUX. The MUX consists of one or more VMs. The
Network Controller provides each MUX with every VIP, and each MUX uses the Border Gateway
Protocol, or BGP, to advertise each VIP to the physical network routers as a /32 route. BGP makes
the following possible for the physical network:
• Learn that a VIP is available on each MUX, even if those MUXs are on different subnets of a
Layer 3 network.
• Spread each VIP's load across all available MUXs using Equal Cost Multi-Path, or ECMP, routing.
• Automatically detect a failed or removed MUX and stop sending traffic to it.
• Spread the load of failed or removed MUXs across the healthy MUXs.
When public traffic arrives from the Internet, the SLB MUX examines the traffic, which includes
the VIP as its destination, and maps and rewrites the traffic so that it arrives at an individual DIP.
For inbound network traffic, this transaction is performed in a two-step process that is split between
the MUX virtual machines and the Hyper-V host where the destination DIP is located:
• Load balancing: the MUX uses the VIP to select a DIP, encapsulates the packet, and forwards the
traffic to the Hyper-V host where the DIP is located.
• Network Address Translation, or NAT: the Hyper-V host removes the encapsulation from the
packet, translates the VIP to the DIP, remaps the ports, and forwards the packet to the DIP VM.
The MUX knows how to map VIPs to the correct DIPs because Load Balancing policies have
already been defined using the Network Controller. These rules include the protocol, the front-end
port, the back-end port, and the distribution algorithm.
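As a rough sketch of how such a policy might be built with the NetworkController PowerShell module (the REST URI, resource IDs, addresses, and some property names here are assumptions to verify against your deployment's documentation, not a definitive recipe):

    Import-Module NetworkController
    $uri = 'https://nc.contoso.com'   # assumed Network Controller REST endpoint

    # Front-end: the VIP that clients connect to.
    $feProps = New-Object Microsoft.Windows.NetworkController.LoadBalancerFrontendIpConfigurationProperties
    $feProps.PrivateIPAddress = '107.105.47.60'
    $feProps.PrivateIPAllocationMethod = 'Static'
    $frontEnd = New-Object Microsoft.Windows.NetworkController.LoadBalancerFrontendIpConfiguration
    $frontEnd.ResourceId = 'FE1'
    $frontEnd.Properties = $feProps

    # Back-end address pool: the DIP VMs behind the VIP.
    $backEnd = New-Object Microsoft.Windows.NetworkController.LoadBalancerBackendAddressPool
    $backEnd.ResourceId = 'BE1'
    $backEnd.Properties = New-Object Microsoft.Windows.NetworkController.LoadBalancerBackendAddressPoolProperties

    # The rule: protocol plus front-end and back-end ports.
    $ruleProps = New-Object Microsoft.Windows.NetworkController.LoadBalancingRuleProperties
    $ruleProps.Protocol = 'TCP'
    $ruleProps.FrontendPort = 80
    $ruleProps.BackendPort = 80
    $ruleProps.FrontendIPConfigurations += $frontEnd
    $ruleProps.BackendAddressPool = $backEnd
    $rule = New-Object Microsoft.Windows.NetworkController.LoadBalancingRule
    $rule.ResourceId = 'WebRule1'
    $rule.Properties = $ruleProps

    # Assemble the load balancer object and hand it to the Network Controller.
    $lbProps = New-Object Microsoft.Windows.NetworkController.LoadBalancerProperties
    $lbProps.FrontendIPConfigurations += $frontEnd
    $lbProps.BackendAddressPools += $backEnd
    $lbProps.LoadBalancingRules += $rule
    New-NetworkControllerLoadBalancer -ConnectionUri $uri -ResourceId 'LB1' -Properties $lbProps -Force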
When the Tenant VMs respond and send outbound network traffic back to the Internet or to remote
Tenant locations, because the NAT is performed by the Hyper-V host, the traffic bypasses the MUX
and goes directly from the Hyper-V host to the Edge Router. This MUX-bypass process is called
Direct Server Return, or DSR. And after the initial network traffic flow is established, the inbound
network traffic bypasses the SLB MUX completely.
In the following example, a user's computer performs a DNS query for the IP address of a company's
SharePoint site (in this case a hypothetical company called Contoso), and the process unfolds as follows:
• The DNS server returns the VIP 107.105.47.60 to the user.
• The user sends an HTTP request to the VIP.
• The physical network has several routes available to reach the VIP, which is located on every MUX.
Each router along the way uses ECMP to pick the next segment of the route until the request arrives
at a MUX.
• The MUX that receives the request checks the configured policies and sees that there are two DIPs,
10.10.10.5 and 10.10.20.5, on a virtual network available to handle the request to VIP 107.105.47.60.
• The MUX selects DIP 10.10.10.5 and encapsulates the packet using VXLAN so that it can send it
to the host containing the DIP, addressed with the host's physical network address.
• The host receives the encapsulated packet and inspects it. It then removes the encapsulation and
rewrites the packet so that the destination is now the DIP 10.10.10.5 instead of the VIP, and sends
the traffic to the DIP VM.
• The request now reaches the Contoso SharePoint site in Server Farm 2. The server generates a
response and sends it to the user, using its own IP address as the source.
• The host intercepts the outbound packet in the virtual switch, which remembers that the user, now
the destination, made the original request to the VIP. The host rewrites the source of the packet to
be the VIP so that the user does not see the DIP address.
• The host then forwards the packet directly to the physical network's default gateway, which uses
its standard routing table to forward the packet on to the user.
Unlike a traditional load-balancing appliance, where the health probe originates on the appliance and
travels across the wire to the DIP, the SLB probe originates on the host where the DIP is located and
goes directly from the SLB Host Agent to the DIP, distributing the probe work across the hosts.
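A probe can be declared alongside the load balancer definition sketched earlier; the following illustrative fragment uses the NetworkController module, with the health-check path and timing values as assumptions:

    # Illustrative probe definition: the SLB Host Agent probes each DIP
    # locally and reports health back through the Network Controller.
    $probeProps = New-Object Microsoft.Windows.NetworkController.LoadBalancerProbeProperties
    $probeProps.Protocol = 'HTTP'
    $probeProps.Port = 80
    $probeProps.RequestPath = '/health.htm'   # assumed health-check URL on each DIP VM
    $probeProps.IntervalInSeconds = 5
    $probeProps.NumberOfProbes = 11
    $probe = New-Object Microsoft.Windows.NetworkController.LoadBalancerProbe
    $probe.ResourceId = 'Probe1'
    $probe.Properties = $probeProps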
To deploy Windows Server SLB, you must first deploy the Network Controller on Windows Server
2016 along with one or more SLB MUX VMs. In addition, the Hyper-V hosts must be configured
with the SDN-enabled Hyper-V Virtual Switch, and you must ensure that the SLB Host Agent is
running. The routers that serve the hosts must support Equal Cost Multi-Path routing, or ECMP, and
the Border Gateway Protocol, or BGP, and they must be configured to accept BGP peering requests
from the SLB MUXs.
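In a lab, a Windows Server machine running the RemoteAccess BGP feature can stand in for the physical router; a minimal sketch of accepting peering from one MUX might look like this (all ASNs and addresses are illustrative):

    # Illustrative lab-router configuration with the RemoteAccess BGP cmdlets.
    Add-BgpRouter -BgpIdentifier '10.0.0.1' -LocalASN 64628

    # One peer entry per SLB MUX; the MUX initiates the peering session.
    Add-BgpPeer -Name 'MUX1' -LocalIPAddress '10.0.0.1' -PeerIPAddress '10.0.0.11' -LocalASN 64628 -PeerASN 64629 -OperationMode Mixed -PeeringMode Automatic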
With System Center 2016, you can configure the Network Controller on Windows Server 2016,
including the SLB Manager and Health Monitor. You can also use System Center to deploy SLB
MUXs and to install SLB Host Agents on computers running Windows Server 2016 and Hyper-V.
The Network Controller hosts the SLB Manager and does the following for SLB:
• Processes SLB commands that arrive through the Northbound API from System Center, Windows
PowerShell, or another network management application (see the sketch following this list).
• Calculates policy for distribution to the Hyper-V hosts and SLB MUXs.
• Provides the health status of the SLB infrastructure.
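Because the Northbound API is REST-based, the configured policy can also be inspected directly. The following sketch lists the load balancer resources; the endpoint name is an assumption to check against your Network Controller deployment:

    # Query the Network Controller Northbound (REST) API for load balancer resources.
    $uri = 'https://nc.contoso.com'   # assumed Network Controller endpoint
    Invoke-RestMethod -Uri "$uri/networking/v1/loadBalancers" -UseDefaultCredentials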
The SLB MUX processes inbound network traffic and maps VIPs to DIPs, then forwards the traffic
to the correct DIP. Each MUX also uses BGP to publish VIP routes to the Edge routers. When a
MUX fails, BGP Keep Alive notifies the remaining MUXs, causing the other active MUXs to
redistribute the load; this effectively provides load balancing for the load balancers themselves.
When SLB is deployed, you must use System Center, Windows PowerShell, or another management
application to deploy the SLB Host Agent on every Hyper-V host computer. The SLB Host Agent
can be installed on all versions of Windows Server 2016 that support Hyper-V, including Nano
Server.
The SLB Host Agent listens for SLB policy updates from the Network Controller. In addition, the
Host Agent programs rules for SLB into the SDN-enabled Hyper-V Virtual Switches that are
configured on the local computer.
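To confirm that the agent is present and running on a given host, you can check its Windows service; SlbHostAgent is the service name used in Microsoft's SDN documentation:

    # Verify the SLB Host Agent service on a Hyper-V host and start it if needed.
    Get-Service -Name SlbHostAgent
    Start-Service -Name SlbHostAgent   # does nothing if the service is already running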
For a virtual switch to be compatible with SLB, you must use Hyper-V Virtual Switch Manager or
Windows PowerShell commands to create the switch, and you must then enable the Virtual Filtering
Platform, or VFP, extension for the virtual switch.
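A minimal sketch of those two steps in PowerShell, assuming a physical adapter named 'Ethernet' and the VFP extension display name used in the Windows Server 2016 SDN documentation (verify both on your hosts):

    # Create the virtual switch bound to a physical NIC (adapter name assumed).
    New-VMSwitch -Name 'SDNSwitch' -NetAdapterName 'Ethernet' -AllowManagementOS $true

    # Enable the Virtual Filtering Platform (VFP) extension so that the
    # SLB Host Agent can program SLB rules into the switch.
    Enable-VMSwitchExtension -VMSwitchName 'SDNSwitch' -Name 'Microsoft Azure VFP Switch Extension'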
The SDN-enabled Hyper-V Virtual Switch does the following for SLB:
• Processes the data path for SLB.
• Bypasses the MUX for outbound network traffic, sending it from the host directly to the Edge
router using DSR.
Features of SLB
• SLB supports a scalable and low-latency return path through Direct Server Return, or DSR.
• SLB works when Switch Embedded Teaming, or SET, or Single Root I/O Virtualization, or
SR-IOV, is also in use.
• For site-to-site gateway scenarios, SLB provides NAT capability so that all site-to-site
connections can use a single public IP.
• SLB, including the Host Agent and MUX, can be installed on Windows Server 2016 Full, Core,
and Nano installations.
• Cloud-scale provisioning includes scale-out and scale-up capabilities for MUXs and Host Agents.
• A single SLB Manager Network Controller module can support up to eight MUX instances.
High availability
• SLB can be deployed on more than two nodes in an Active/Active configuration.
• MUXs can be added to or removed from the MUX pool without affecting the SLB service, which
keeps SLB available while the MUXs are being patched.
• SLB provides multitenancy and an integrated Edge through integration with Microsoft appliances
such as the RAS Multitenant Gateway, Datacenter Firewall, and Route Reflector.