
Application of Software Load Balancing or SLB for SDN - Part One
Cloud service providers or CSPs and organizations running Software Defined Networking
or SDN on Windows Server 2016 can distribute tenant network traffic among virtual network
resources using Software Load Balancing or SLB. SLB in Windows Server enables multiple servers
to host the same workload, which provides high availability and scalability.

SLB provides the following capabilities:


• Layer 4 load balancing services for north-south and east-west TCP/UDP traffic
• Load balancing of public and internal network traffic
• Support for dynamic IP addresses or DIPs on VLANs and on virtual networks created with Hyper-V Network Virtualization
• Health probe support
• Cloud-scale readiness, including scale-out and scale-up capability for multiplexers and Host Agents

You can use Windows Server SLB to scale out load balancing capability by running SLB VMs on
the same Hyper-V compute servers you use for your other virtual machine workloads. For this reason, SLB
supports the rapid creation and deletion of the load balancing endpoints that CSP operations require. In
addition, Windows Server SLB supports tens of gigabytes per cluster, provides a simple provisioning
model, and is easy to scale out and in.
How SLB works

SLB works by mapping virtual IPs or VIPs to dynamic IPs or DIPs that are part of a cloud service's set of resources in the data center.

VIPs are single IP addresses that provide public access to a pool of load-balanced virtual
machines. For example, VIPs are the IP addresses exposed on the Internet so that tenants and
their customers can connect to tenant resources in the cloud data center.
DIPs are the IP addresses of the load-balanced pool VMs behind the VIP. DIPs are assigned to tenant
resources within the cloud infrastructure.

VIPs are hosted on the SLB Multiplexer or MUX. The MUX consists of one or more VMs; the
Network Controller provisions each MUX with every VIP, and each MUX uses the Border Gateway
Protocol, or BGP, to advertise each VIP to the physical network routers as a /32 route. BGP enables
the physical network to do the following:

• Learn that a VIP is available on each MUX, even when the MUXes are on different subnets of a Layer 3 network.
• Spread the load for each VIP across all available MUXes using Equal Cost Multi-Path or ECMP routing.
• Automatically detect a MUX failure or removal and stop sending traffic to the failed MUX.
• Redistribute the load of the failed or removed MUX across the healthy MUXes.

When public traffic arrives from the Internet, the SLB MUX examines the traffic, which contains the
VIP as its destination, and maps and rewrites the traffic so that it arrives at an individual DIP. For
inbound network traffic, this transaction is performed as a two-step process that is split between the
MUX virtual machines and the Hyper-V host where the destination DIP is located:
• Load balancing: the MUX uses the VIP to select a DIP, encapsulates the packet, and forwards it to the Hyper-V host where the DIP is located.
• Network Address Translation or NAT: the Hyper-V host removes the encapsulation from the packet, translates the VIP to the DIP, remaps the ports, and forwards the packet to the DIP VM.

The MUX knows how to map VIPs to the correct DIPs because load balancing policies have already
been defined through the Network Controller. These rules include the protocol, the front-end port, and
the back-end port.
When tenant VMs respond and send outbound network traffic to the Internet or to remote tenant
locations, the NAT is performed by the Hyper-V host, so the traffic bypasses the MUX and goes directly
from the Hyper-V host to the edge router. This MUX-bypass process is called Direct Server Return or
DSR. And once the initial network traffic flow is established, the inbound network traffic completely
bypasses the SLB MUX.
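
To make the mapping concrete, here is a minimal Python sketch of the MUX's decision. It is purely illustrative: the policy table, the Flow tuple, and the hash-based selection are assumptions invented for this example, not the actual SLB implementation. It shows how a policy keyed by VIP, protocol, and front-end port can pick a DIP consistently for every packet of the same flow.

# Hypothetical sketch of the MUX's inbound mapping step; names and the
# hash-based selection are illustrative, not the real SLB internals.
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    src_ip: str
    src_port: int
    vip: str
    vip_port: int
    protocol: str  # "TCP" or "UDP"

# Load balancing policy as it might be pushed down by the Network
# Controller: VIP, front-end port, and protocol map to a DIP pool.
POLICIES = {
    ("107.105.47.60", 80, "TCP"): {
        "dips": ["10.10.10.5", "10.10.20.5"],
        "backend_port": 80,
    },
}

def select_dip(flow: Flow) -> tuple[str, int]:
    """Pick a DIP for a flow by hashing its tuple, so every packet of
    the same flow always lands on the same DIP."""
    policy = POLICIES[(flow.vip, flow.vip_port, flow.protocol)]
    digest = hashlib.sha256(repr(flow).encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(policy["dips"])
    return policy["dips"][index], policy["backend_port"]

if __name__ == "__main__":
    flow = Flow("203.0.113.7", 51000, "107.105.47.60", 80, "TCP")
    print(select_dip(flow))  # e.g. ('10.10.20.5', 80)

Because the DIP choice is a deterministic function of the flow, any MUX that receives later packets of the same flow maps them identically, which is part of what lets ECMP spread flows across MUXes safely.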
In the following example, a user's computer performs a DNS query for the IP address of a company's
SharePoint site (in this case, a hypothetical company called Contoso), and the process runs as follows:
• The DNS server returns the VIP 107.105.47.60 to the user.
• The user sends an HTTP request to the VIP.
• The physical network has multiple routes to the VIP, which is available on every MUX. Each router
along the way uses ECMP to pick the next segment of the route, and the request eventually arrives at one of the MUXes.
• The MUX that receives the request checks the configured policies and sees that there are two DIPs,
10.10.10.5 and 10.10.20.5, on a virtual network available to handle the request to VIP 107.105.47.60.
• The MUX selects DIP 10.10.10.5 and encapsulates the packets using VXLAN so that it can send them to
the host containing the DIP, using the host's physical network address.
• The host receives the encapsulated packet and inspects it. It removes the encapsulation and rewrites
the packet so that the destination is now DIP 10.10.10.5 instead of the VIP, and then sends the traffic on to the DIP VM.
• The request reaches the Contoso SharePoint site in Server Farm 2. The server generates a response and
sends it toward the user, using its own IP address as the source.
• The host intercepts the outgoing packet on the virtual switch, which remembers that the user, now the
destination, made the original request to the VIP. The host rewrites the source of the packet to be the
VIP so that the user never sees the DIP address (a sketch of this rewrite follows the list below).
• The host then forwards the packet directly to the physical network's default gateway, which uses its
standard routing table to deliver the packet to the user, the final recipient.
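
The host-side source rewrite in the next-to-last step is what makes Direct Server Return work, and a small sketch may help. This is a hypothetical model: the Packet type and the reverse-NAT table are invented for illustration and are not SLB's real data structures.

# Illustrative sketch of the DSR return-path rewrite done by the host's
# virtual switch; the packet model is deliberately simplified.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Packet:
    src_ip: str
    src_port: int
    dst_ip: str
    dst_port: int

# Reverse NAT state recorded when the inbound flow was created:
# (DIP, back-end port) -> (VIP, front-end port)
REVERSE_NAT = {("10.10.10.5", 80): ("107.105.47.60", 80)}

def rewrite_outbound(pkt: Packet) -> Packet:
    """Rewrite the source of a reply so the client sees the VIP, never
    the DIP; the packet then goes straight to the edge router (DSR)."""
    key = (pkt.src_ip, pkt.src_port)
    if key in REVERSE_NAT:
        vip, vport = REVERSE_NAT[key]
        return replace(pkt, src_ip=vip, src_port=vport)
    return pkt

if __name__ == "__main__":
    reply = Packet("10.10.10.5", 80, "203.0.113.7", 51000)
    print(rewrite_outbound(reply))
    # Packet(src_ip='107.105.47.60', src_port=80, ...)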

Application of Software Load Balancing or SLB for SDN - Part II
In the first part, we talked about cloud service providers or CSPs and organizations that run
Software Defined Networking or SDN on Windows Server 2016, as well as how Software Load
Balancing or SLB works. In this part, we continue with the use of SLB and its infrastructure.
For internal data center network traffic, for example between tenant resources that run on different
servers but are members of the same virtual network, NAT is performed by the Hyper-V Virtual Switch
to which the VMs are connected.
In internal traffic load balancing, the first request is sent to and processed by the MUX, which
selects the appropriate DIP and routes the traffic to it. From then on, the established traffic flow
bypasses the MUX and goes directly from one VM to the other.
The Load Balancing software includes health probes to verify the health of the network infrastructure,
including the following:
 TCP probe to a port

 HTTP probe to a port and URL

Unlike a traditional hardware load balancer, where the probe originates on the appliance and travels
across the wire to the DIP, the SLB probe originates on the host where the DIP is located and goes
directly from the SLB Host Agent to the DIP, distributing the probe work across the hosts.
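
Below is a hedged Python sketch of the two probe types the article describes. The DIP address, port, and URL path are placeholders, and the real probes run inside the SLB Host Agent rather than as a standalone script.

# Sketch of the two probe types, run from the host where the DIP
# lives rather than from a hardware appliance. Targets are placeholders.
import socket
import urllib.request

def tcp_probe(dip: str, port: int, timeout: float = 2.0) -> bool:
    """TCP probe: the DIP is healthy if the port accepts a connection."""
    try:
        with socket.create_connection((dip, port), timeout=timeout):
            return True
    except OSError:
        return False

def http_probe(dip: str, port: int, url_path: str,
               timeout: float = 2.0) -> bool:
    """HTTP probe: the DIP is healthy if the URL answers with 200 OK."""
    try:
        with urllib.request.urlopen(
                f"http://{dip}:{port}{url_path}", timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

if __name__ == "__main__":
    print(tcp_probe("10.10.10.5", 80))               # placeholder DIP
    print(http_probe("10.10.10.5", 80, "/health"))   # placeholder URL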

Software Load Balancing or SLB infrastructure

To deploy Windows Server SLB, you must first deploy the Network Controller on Windows Server 2016
and one or more SLB MUX VMs. In addition, the Hyper-V hosts must be configured with the
SDN-enabled Hyper-V Virtual Switch, and you must make sure the SLB Host Agent is running on them.
The routers that serve the hosts must support Equal Cost Multipath routing or ECMP and the Border
Gateway Protocol or BGP, and they must be configured to accept BGP peering requests from the SLB
MUXes.
With System Center 2016, you can configure the Network Controller on Windows Server 2016,
including the SLB Manager and Health Monitor. You can also use System Center to deploy SLB
MUXes and to install the SLB Host Agent on computers running Windows Server 2016 and Hyper-V.

The Network Controller hosts the SLB Manager and does the following for SLB:
 Processes SLB commands that arrive through the Northbound API from System Center,
Windows PowerShell, or another network management application.

 Calculates policy for distribution to the Hyper-V hosts and SLB MUXes.

 Provides the health status of the SLB infrastructure.

The SLB MUX processes inbound network traffic and maps VIPs to DIPs, then forwards the traffic to the
correct DIP; each MUX also uses BGP to publish VIP routes to the edge routers. When a MUX fails,
BGP Keep Alive notifies the remaining MUXes, and the other active MUXes redistribute the load,
effectively providing load balancing for the load balancers themselves.
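
The following Python sketch illustrates the failover idea only; the class and names are invented, and in reality the route withdrawal is done by BGP on the routers, not by application code. It shows how hashing flows over the set of healthy MUXes redistributes traffic automatically once a failed MUX's routes are withdrawn.

# Conceptual sketch of ECMP redistributing VIP traffic when a MUX
# drops out of its BGP session; all names here are illustrative.
import hashlib

class EcmpNextHops:
    def __init__(self, muxes: list[str]):
        self.healthy = list(muxes)

    def on_keepalive_failure(self, mux: str) -> None:
        """BGP Keep Alive timeout: withdraw the failed MUX's routes."""
        if mux in self.healthy:
            self.healthy.remove(mux)

    def pick_mux(self, flow_key: str) -> str:
        """Hash the flow onto one of the remaining healthy MUXes."""
        digest = hashlib.sha256(flow_key.encode()).digest()
        index = int.from_bytes(digest[:4], "big") % len(self.healthy)
        return self.healthy[index]

if __name__ == "__main__":
    routes = EcmpNextHops(["mux-1", "mux-2", "mux-3"])
    print(routes.pick_mux("203.0.113.7:51000->107.105.47.60:80"))
    routes.on_keepalive_failure("mux-2")  # load respreads over mux-1/mux-3
    print(routes.pick_mux("203.0.113.7:51000->107.105.47.60:80"))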
When you deploy SLB, you must use System Center, Windows PowerShell, or another management
application to deploy the SLB Host Agent on every Hyper-V host computer. The SLB Host Agent can
be installed on all versions of Windows Server 2016 that support Hyper-V, including Nano Server.
The SLB Host Agent listens for SLB policy updates from the Network Controller. In addition, the
Host Agent programs the rules for SLB into the SDN-enabled Hyper-V Virtual Switches that are
configured on the local computer.
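
As a rough illustration of that agent loop, here is a Python sketch in which the Network Controller and the virtual switch are stand-in objects; the real Host Agent is a Windows service, and the policy schema shown is invented for the example.

# Loose sketch of the Host Agent's job as described above: receive
# policy from the Network Controller, program the local virtual switch.
# The controller and switch classes are stand-ins, not real APIs.
class FakeNetworkController:
    """Stand-in for the Network Controller's policy channel."""
    def poll_policies(self) -> list[dict]:
        return [{"vip": "107.105.47.60", "dip": "10.10.10.5",
                 "protocol": "TCP", "frontend_port": 80,
                 "backend_port": 80}]

class FakeVirtualSwitch:
    """Stand-in for the SDN-enabled Hyper-V Virtual Switch."""
    def program_rule(self, rule: dict) -> None:
        print(f"programmed NAT rule on local vswitch: {rule}")

def host_agent_loop(controller, vswitch, iterations: int = 1) -> None:
    # A real agent would block on push notifications rather than poll.
    for _ in range(iterations):
        for rule in controller.poll_policies():
            vswitch.program_rule(rule)

if __name__ == "__main__":
    host_agent_loop(FakeNetworkController(), FakeVirtualSwitch())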

SDN-enabled Hyper-V Virtual Switch

For a virtual switch to be compatible with SLB, you must use Hyper-V Virtual Switch Manager or
Windows PowerShell commands to create the switch, and you must then enable the Virtual Filtering
Platform or VFP for that virtual switch.
The SDN-enabled Hyper-V Virtual Switch does the following for SLB:
 Processes the data path for SLB.

 Receives inbound network traffic from the MUX.

 Bypasses the MUX for outbound network traffic, sending it to the router using DSR.

 Runs on Hyper-V Nano Server instances.

BGP-enabled router

The BGP-enabled router does the following for SLB:

 Routes inbound traffic to the MUX using ECMP.

 Routes outbound network traffic along the path provided by the host.

 Listens for route updates for VIPs from the SLB MUXes.

 Removes an SLB MUX from the SLB rotation if its Keep Alive fails.

Features of SLB

The following are some of the features and capabilities of SLB.


 SLB provides Layer 4 load balancing services for north-south and east-west TCP/UDP traffic.

 SLB can be used on a network based on Hyper-V network virtualization.


 SLB can be used with a VLAN for DIP VMs connected to an SDN-enabled Hyper-V Virtual Switch.

 An instance of SLB can handle multiple Tenants.

 SLB and DIPs support a scalable, low-latency return path, as implemented by Direct Server Return or DSR.

 SLB works when Switch Embedded Teaming or SET, or Single Root Input/Output Virtualization or SR-IOV, is also in use.

 SLB includes support for Internet Protocol Version 4 or IPv4.

 For site-to-site gateway scenarios, SLB provides NAT capability to enable all site-to-site
connections to use a single public IP.

 SLB, including the Host Agent and MUX, can be installed on Windows Server 2016 Full, Core, and Nano installations.

Scale and performance

 Cloud-scale readiness includes scale-out and scale-up capabilities for MUXes and Host Agents.

 An SLB Manager Network Controller module can support up to eight MUX Instances.

High availability

 SLB can be deployed on more than two nodes in an active/active configuration.

 MUXes can be added to or removed from the MUX pool without affecting the SLB service, which keeps SLB available while the MUXes are being patched.

 Individual MUX instances have 99 percent uptime.

 Health monitoring data is available to all management entities.


Integration

 SLB can be deployed and configured with System Center Virtual Machine Manager or SCVMM.

 SLB provides seamless edge and multitenant capability through integration with Microsoft technologies such as the RAS Multitenant Gateway, Datacenter Firewall, and Route Reflector.
