Friday, August 27, 2021

Software Load Balancer in Azure Stack HCI

If you have applications running on your HCI cluster that serve hundreds (or thousands) of requests from your users, reliability, availability, and scale become key concerns. You can’t host your application on a single VM and expect it to never fail and to keep up with all the requests. A better approach is to host the application on multiple VMs and balance the traffic among them.

 

Organizations often use a hardware load balancer (HLB) to address this, but HLBs come with significant shortcomings, mainly:

  • They are costly and hard to configure, manage, monitor, and maintain.
  • They cannot scale up or down efficiently; most of the time, there are either too many or too few HLBs.

All these issues (and many more) can be solved using Software Load Balancers!

Software Load Balancer in Azure Stack HCI

Software Load Balancer (SLB) can evenly distribute network traffic among multiple VMs. This allows you to host the same workload on multiple VMs, providing high availability, scalability, and reliability. SLB works by mapping virtual IP addresses (VIPs) to destination IP addresses (DIPs); a VIP is the IP address that is exposed to the clients to provide access to multiple VMs, and a DIP is the IP address of each of these load balanced VMs. The SLB is hosted on multiple Multiplexer (MUX) VMs, which process inbound traffic, map VIPs to DIPs, and then forward the traffic to the correct DIP.

 

With SLB, you can configure and manage load balancing, inbound Network Address Translation (NAT), and outbound access to the Internet for VMs connected to traditional VLAN networks as well as VMs connected to virtual networks. Through the following features, SLB can help you resolve the issues mentioned earlier and many more:

  • Layer 4 load balancing services for north/south and east/west TCP/UDP traffic.
  • Public and internal network traffic load balancing.
  • Supports VMs attached to traditional VLAN networks or virtual networks.
  • Automatic detection of a MUX failure or removal and spreading the load from the failed or removed MUX across the healthy ones.
  • Health probes that validate the health of the backend VMs behind the load balancer.
  • Ready for cloud scale, including scale-out and scale-up capability, since MUX VMs can be created and deleted easily and rapidly.
  • Direct Server Return (DSR) feature that allows VMs to respond to network traffic directly, bypassing the MUX. After the initial network traffic flow is established, the network traffic bypasses the MUX completely, reducing the latency added through load balancing.


 

Configure and manage SLB in Azure Stack HCI

 

Before configuring SLB, check out the Plan SDN Infrastructure page, and make sure to configure BGP peering between your SLB MUXes and your top-of-rack (ToR) switches.
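In the SDN Express configuration file described below, the MUX side of this peering is typically expressed through an SDN ASN and a list of router peers. The excerpt below is only an illustrative sketch: the ASNs and router IP address are placeholders, and the key names should be verified against the configuration template you download.

  # Illustrative BGP-related settings for the SDN Express configuration file (.psd1).
  # All values are placeholders; confirm key names against the downloaded template.
  @{
      # ASN that the SDN components (including the SLB MUXes) use when peering
      SDNASN  = "64628"

      # Top-of-rack (ToR) switches the MUXes peer with; the matching BGP neighbor
      # configuration must also exist on the switch side.
      Routers = @(
          @{ RouterASN = "64623"; RouterIPAddress = "10.10.182.3" }
      )
  }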

 

SLB can be configured through multiple options: the standard REST interface, PowerShell, Windows Admin Center (WAC), and System Center Virtual Machine Manager (SCVMM). There are three high-level steps to configure SLB for HCI: set up the Network Controller (NC), set up the SLB, and configure load balancing or NAT rules. Below we will go through the instructions to complete the first two steps using the SDN Express PowerShell scripts, and the last step using WAC and PowerShell. Follow these links to deploy and manage SLB through SCVMM.

 

Note: SLB deployment is not available through WAC yet, but it will be soon.

 

1. Set up Network Controller and SLB

You can deploy NC and SLB using the SDN Express scripts, which are available in the official Microsoft SDN GitHub repository. The scripts need to be downloaded and executed on a machine that has access to the HCI cluster management network. Detailed instructions for executing the script are provided here.
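As a rough sketch, the download-and-run flow looks like the following; the local paths and the configuration file name are placeholders that you will replace with your own.

  # Clone (or download) the SDN Express scripts; paths and file names below are placeholders.
  git clone https://github.com/microsoft/SDN.git C:\SDN

  # Fill out a configuration data file (see the next section), then run the
  # deployment script from an elevated PowerShell session on the management machine.
  Set-Location C:\SDN\SDNExpress\scripts
  .\SDNExpress.ps1 -ConfigurationDataFile .\MyHciSdnConfig.psd1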

 

Whether or not you have already deployed Network Controller (NC), you can set up SLB and NC through the SDN Express script. If the script finds that you have already deployed NC (using the same parameters), it will skip NC deployment and go straight to SLB deployment. If you have deployed neither, the script will deploy both components.

 

Configuration File

The script takes a configuration file as input. A template file can be found in the GitHub repository here. If you have already deployed NC through the SDN Express script, reuse and extend the configuration file that you already used for NC deployment.

 

Notes:

a. Ensure that the VHD used is of the same OS build as the hosts and the NC VMs.

b. The parameters VMLocation, SDNMacPoolStart, and SDNMacPoolEnd can use default values.

c. Make sure to configure the “Additional settings for SLB” and the “Settings for tenant overlay networks” (even if you’re only managing traditional VLAN networks).

d. Muxes’ MAC and PA addresses must be specified.

- Ensure that the MACAddress and PAMACAddress parameters of each MUX VM are outside the SDN MAC pool range specified in the General settings by SDNMacPoolStart and SDNMacPoolEnd.

e. Muxes’ PAIPAddress must be specified.

- Ensure that you pick the PAIPAddresses from outside the PA pool, but inside the PASubnet.

- If your PA pool includes all the addresses in the PASubnet, you can shrink the pool by setting new values for PAPoolStart and PAPoolEnd.

f. The Gateways section can be left blank (Gateways = @()), as can the following parameters: PoolName, GRESubnet, and Capacity.

 

A sample of the relevant settings is shown below:

 

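The excerpt below is an illustrative sketch of the MUX-related portions of the configuration file; every name, MAC address, and IP value is a placeholder, and the key names should be checked against the template in the GitHub repository.

  # Illustrative excerpt of an SDN Express configuration file (.psd1); all values are placeholders.
  @{
      # Provider Address (PA) network settings ("Settings for tenant overlay networks")
      PASubnet    = "10.10.56.0/23"
      PAVLANID    = "11"
      PAGateway   = "10.10.56.1"
      PAPoolStart = "10.10.56.4"
      PAPoolEnd   = "10.10.57.200"     # shrink the pool if you need PAIPAddresses outside of it

      # SLB MUX VMs ("Additional settings for SLB"); add one entry per MUX VM.
      # MACAddress and PAMACAddress must be outside the SDN MAC pool, and PAIPAddress
      # must be outside the PA pool but inside the PASubnet.
      Muxes = @(
          @{
              ComputerName = "Mux01"
              HostName     = "HciNode01"
              ManagementIP = "10.10.184.30"
              MACAddress   = "00-1D-D8-B7-1C-50"
              PAIPAddress  = "10.10.57.230"
              PAMACAddress = "00-1D-D8-B7-1C-51"
          }
      )

      # Gateways are not needed for an SLB-only deployment
      Gateways = @()
  }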

 

2. Configure Load Balancing or NAT Rules

There are multiple scenarios where SLB can be beneficial. These scenarios and the high-level steps to configure them are listed below. Detailed instructions for completing these high-level steps can be found here for WAC and here for PowerShell.

 

A. Load Balancing for Workloads on a VLAN Network or a Virtual Network

Instead of hosting your application on a single VM, enhance its reliability, availability, and scalability by hosting it on multiple VMs and load balancing traffic among them. To accomplish this, follow these steps (a PowerShell sketch of the same flow follows the list):

i. In the Public IP extension, create a Public IP Address that will be reserved for your load balancer. Select the dynamic IP address allocation method if you want the Public IP to be picked for you automatically.

ii. In the Load Balancer extension, create a Load Balancer of type “Public IP”, and select the Public IP address that you just created.

- Note: you can also use the “IP Address” type, which assigns the IP address directly to the load balancer without creating a public IP resource. In that case, if you delete the Load Balancer, the IP address is returned to the pool.

iii. Create a Back-End Pool of the IP addresses that will act as the back-end DIPs.

iv. Create a health probe to monitor the health state of the DIPs.

v. Create a Load Balancing rule that will evenly distribute the network traffic among the pool of DIPs.
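If you prefer scripting this over clicking through WAC, a minimal PowerShell sketch of the same flow against the Network Controller REST endpoint looks roughly like the following. The connection URI, resource IDs, ports, and paths are all placeholder assumptions, and the property names should be double-checked against the NetworkController module documentation.

  # Minimal sketch of scenario A using the NetworkController PowerShell module.
  # All names, addresses, and ports are placeholders.
  Import-Module NetworkController
  $uri  = "https://nc.contoso.local"     # Network Controller REST endpoint (placeholder)
  $lbId = "WebLB"

  # i. Reserve a Public IP for the load balancer (dynamic allocation picks the address for you)
  $pipProps = New-Object Microsoft.Windows.NetworkController.PublicIpAddressProperties
  $pipProps.PublicIPAllocationMethod = "Dynamic"
  $pipProps.IdleTimeoutInMinutes = 4
  New-NetworkControllerPublicIpAddress -ConnectionUri $uri -ResourceId "WebLB-PIP" -Properties $pipProps -Force
  $pip = Get-NetworkControllerPublicIpAddress -ConnectionUri $uri -ResourceId "WebLB-PIP"

  # ii. Load balancer front end that uses the Public IP
  $lbProps = New-Object Microsoft.Windows.NetworkController.LoadBalancerProperties
  $fe = New-Object Microsoft.Windows.NetworkController.LoadBalancerFrontendIpConfiguration
  $fe.ResourceId  = "FE1"
  $fe.ResourceRef = "/loadBalancers/$lbId/frontendIPConfigurations/FE1"
  $fe.Properties  = New-Object Microsoft.Windows.NetworkController.LoadBalancerFrontendIpConfigurationProperties
  $fe.Properties.PublicIPAddress = $pip
  $lbProps.FrontendIPConfigurations += $fe

  # iii. Back-end pool; the DIPs join it when each VM NIC's ipConfiguration references this pool
  $be = New-Object Microsoft.Windows.NetworkController.LoadBalancerBackendAddressPool
  $be.ResourceId  = "BE1"
  $be.ResourceRef = "/loadBalancers/$lbId/backendAddressPools/BE1"
  $be.Properties  = New-Object Microsoft.Windows.NetworkController.LoadBalancerBackendAddressPoolProperties
  $lbProps.BackendAddressPools += $be

  # iv. Health probe that validates the DIPs
  $probe = New-Object Microsoft.Windows.NetworkController.LoadBalancerProbe
  $probe.ResourceId  = "Probe1"
  $probe.ResourceRef = "/loadBalancers/$lbId/probes/Probe1"
  $probe.Properties  = New-Object Microsoft.Windows.NetworkController.LoadBalancerProbeProperties
  $probe.Properties.Protocol = "HTTP"
  $probe.Properties.Port = 80
  $probe.Properties.RequestPath = "/health.htm"
  $probe.Properties.IntervalInSeconds = 5
  $probe.Properties.NumberOfProbes = 11
  $lbProps.Probes += $probe

  # v. Load balancing rule that spreads TCP port 80 traffic across the pool
  $rule = New-Object Microsoft.Windows.NetworkController.LoadBalancingRule
  $rule.ResourceId = "HttpRule"
  $rule.Properties = New-Object Microsoft.Windows.NetworkController.LoadBalancingRuleProperties
  $rule.Properties.FrontendIPConfigurations += $fe
  $rule.Properties.BackendAddressPool = $be
  $rule.Properties.Probe = $probe
  $rule.Properties.Protocol = "TCP"
  $rule.Properties.FrontendPort = 80
  $rule.Properties.BackendPort = 80
  $rule.Properties.IdleTimeoutInMinutes = 4
  $lbProps.LoadBalancingRules += $rule

  New-NetworkControllerLoadBalancer -ConnectionUri $uri -ResourceId $lbId -Properties $lbProps -Force

When scripting, note that the back-end VMs only become DIPs once their network interface ipConfigurations reference this back-end pool (Get-NetworkControllerNetworkInterface and New-NetworkControllerNetworkInterface can be used to update them); WAC handles this when you populate the pool.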

 

 

B. Internal Load Balancing on a Virtual Network

If the consumer of your back-end application is another of your front-end applications in the same HCI cluster, you can use an internal load balancer to distribute traffic from your front-end application among the back-end VMs. To accomplish this, follow these steps (a PowerShell sketch follows the list):

i. Create a Load Balancer of type “Internal” and specify a Private IP Address from the target Virtual Network. This address will act as the VIP that will be exposed to your front-end application to provide access to multiple back-end VMs.

ii. Create a Back-End Pool of the IP addresses that will act as the back-end DIPs.

iii. Create a health probe to monitor the health state of the DIPs.

iv. Create a Load Balancing rule that will evenly distribute the network traffic among the pool of DIPs.
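Scripted, the main difference from scenario A is the front end: instead of a Public IP resource, it takes a private VIP from a subnet of the virtual network. A minimal sketch follows, again with placeholder names and addresses.

  # Minimal sketch of scenario B: internal load balancer front end on a virtual network.
  # All names and addresses are placeholders.
  Import-Module NetworkController
  $uri  = "https://nc.contoso.local"
  $lbId = "AppTierLB"

  # i. Front end: a private VIP taken from a subnet of an existing virtual network
  $vnet = Get-NetworkControllerVirtualNetwork -ConnectionUri $uri -ResourceId "Tenant1VNet"

  $lbProps = New-Object Microsoft.Windows.NetworkController.LoadBalancerProperties
  $fe = New-Object Microsoft.Windows.NetworkController.LoadBalancerFrontendIpConfiguration
  $fe.ResourceId  = "FE1"
  $fe.ResourceRef = "/loadBalancers/$lbId/frontendIPConfigurations/FE1"
  $fe.Properties  = New-Object Microsoft.Windows.NetworkController.LoadBalancerFrontendIpConfigurationProperties
  $fe.Properties.Subnet = New-Object Microsoft.Windows.NetworkController.Subnet
  $fe.Properties.Subnet.ResourceRef = $vnet.Properties.Subnets[0].ResourceRef
  $fe.Properties.PrivateIPAddress = "192.168.1.50"      # the internal VIP exposed to the front-end app
  $fe.Properties.PrivateIPAllocationMethod = "Static"
  $lbProps.FrontendIPConfigurations += $fe

  # ii.-iv. Build the back-end pool, health probe, and load balancing rule exactly as in
  # the scenario A sketch, append them to $lbProps, and then create the resource:
  New-NetworkControllerLoadBalancer -ConnectionUri $uri -ResourceId $lbId -Properties $lbProps -Force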

 

 

C. Inbound NAT

Aside from load balancing, you can use a load balancer to improve security by hiding your back-end IP address behind a front-end IP address. If your VM(s) only need to receive traffic, you can set up an Inbound NAT rule that simply forwards external network traffic to them. To accomplish this, follow these steps (a PowerShell sketch follows the list):

i. In the Public IP extension, create a Public IP Address that will be reserved for your load balancer. Select the dynamic IP address allocation method if you want the Public IP to be picked for you automatically.

ii. In the Load Balancer extension, create a Load Balancer of type “Public IP”, and select the Public IP address that you just created.

iii. Create an Inbound NAT rule that will forward traffic to your VM through the front-end IP address.
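A minimal PowerShell sketch of this scenario is shown below, with the Public IP and front end built as in the scenario A sketch; the resource names, addresses, and ports are placeholders.

  # Minimal sketch of scenario C: inbound NAT to a single VM. All values are placeholders.
  Import-Module NetworkController
  $uri  = "https://nc.contoso.local"
  $lbId = "RdpNatLB"

  # i. Public IP and ii. front end, built as in the scenario A sketch
  $pipProps = New-Object Microsoft.Windows.NetworkController.PublicIpAddressProperties
  $pipProps.PublicIPAllocationMethod = "Dynamic"
  $pipProps.IdleTimeoutInMinutes = 4
  New-NetworkControllerPublicIpAddress -ConnectionUri $uri -ResourceId "RdpNat-PIP" -Properties $pipProps -Force
  $pip = Get-NetworkControllerPublicIpAddress -ConnectionUri $uri -ResourceId "RdpNat-PIP"

  $lbProps = New-Object Microsoft.Windows.NetworkController.LoadBalancerProperties
  $fe = New-Object Microsoft.Windows.NetworkController.LoadBalancerFrontendIpConfiguration
  $fe.ResourceId  = "FE1"
  $fe.ResourceRef = "/loadBalancers/$lbId/frontendIPConfigurations/FE1"
  $fe.Properties  = New-Object Microsoft.Windows.NetworkController.LoadBalancerFrontendIpConfigurationProperties
  $fe.Properties.PublicIPAddress = $pip
  $lbProps.FrontendIPConfigurations += $fe

  # iii. Inbound NAT rule that forwards TCP 3389 (RDP) from the front end to the VM
  $nat = New-Object Microsoft.Windows.NetworkController.LoadBalancerInboundNatRule
  $nat.ResourceId = "RdpNat"
  $nat.Properties = New-Object Microsoft.Windows.NetworkController.LoadBalancerInboundNatRuleProperties
  $nat.Properties.FrontendIPConfigurations += $fe
  $nat.Properties.Protocol = "TCP"
  $nat.Properties.FrontendPort = 3389
  $nat.Properties.BackendPort = 3389
  $lbProps.InboundNatRules += $nat

  New-NetworkControllerLoadBalancer -ConnectionUri $uri -ResourceId $lbId -Properties $lbProps -Force

As with the back-end pool, the NAT rule only takes effect once the target VM's network interface ipConfiguration references it.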

 

 

D. Outbound NAT on a VLAN Network or Virtual Network

If you want to configure your VM(s) to have Internet access, you can set up an Outbound NAT rule that forwards the VMs’ outbound network traffic to the Internet. This allows you to use one front-end public IP address for multiple VMs with private IP addresses. To accomplish this, follow these steps (a PowerShell sketch follows the list):

i. Create a Load Balancer of type “Public IP”.

ii. Create a Back-End Pool of the IP addresses that will act as the back-end DIPs.

iii. Create an Outbound NAT rule that will forward traffic from your VM through the front-end IP address.
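A minimal PowerShell sketch of this scenario follows; the Public IP, front end, and back-end pool are created exactly as in the earlier sketches, and all names and addresses are placeholders.

  # Minimal sketch of scenario D: outbound NAT for a pool of VMs. All values are placeholders.
  Import-Module NetworkController
  $uri  = "https://nc.contoso.local"
  $lbId = "OutboundNatLB"

  # i. Front end with a Public IP (built as in the scenario A sketch)
  $pipProps = New-Object Microsoft.Windows.NetworkController.PublicIpAddressProperties
  $pipProps.PublicIPAllocationMethod = "Dynamic"
  $pipProps.IdleTimeoutInMinutes = 4
  New-NetworkControllerPublicIpAddress -ConnectionUri $uri -ResourceId "Outbound-PIP" -Properties $pipProps -Force
  $pip = Get-NetworkControllerPublicIpAddress -ConnectionUri $uri -ResourceId "Outbound-PIP"

  $lbProps = New-Object Microsoft.Windows.NetworkController.LoadBalancerProperties
  $fe = New-Object Microsoft.Windows.NetworkController.LoadBalancerFrontendIpConfiguration
  $fe.ResourceId  = "FE1"
  $fe.ResourceRef = "/loadBalancers/$lbId/frontendIPConfigurations/FE1"
  $fe.Properties  = New-Object Microsoft.Windows.NetworkController.LoadBalancerFrontendIpConfigurationProperties
  $fe.Properties.PublicIPAddress = $pip
  $lbProps.FrontendIPConfigurations += $fe

  # ii. Back-end pool of the VMs (DIPs) that need Internet access
  $be = New-Object Microsoft.Windows.NetworkController.LoadBalancerBackendAddressPool
  $be.ResourceId  = "BE1"
  $be.ResourceRef = "/loadBalancers/$lbId/backendAddressPools/BE1"
  $be.Properties  = New-Object Microsoft.Windows.NetworkController.LoadBalancerBackendAddressPoolProperties
  $lbProps.BackendAddressPools += $be

  # iii. Outbound NAT rule: outbound traffic from the pool leaves through the front-end IP
  $onat = New-Object Microsoft.Windows.NetworkController.LoadBalancerOutboundNatRule
  $onat.ResourceId = "OutboundNat"
  $onat.Properties = New-Object Microsoft.Windows.NetworkController.LoadBalancerOutboundNatRuleProperties
  $onat.Properties.FrontendIPConfigurations += $fe
  $onat.Properties.BackendAddressPool = $be
  $onat.Properties.Protocol = "ALL"
  $lbProps.OutboundNatRules += $onat

  New-NetworkControllerLoadBalancer -ConnectionUri $uri -ResourceId $lbId -Properties $lbProps -Force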

 

 

To sum up, SLB allows you to balance network traffic to and from the applications on your HCI cluster and to provide them with Internet access, efficiently increasing their reliability, availability, and scalability. Try it out, and feel free to send your feedback and questions to sdn_feedback@microsoft.com.
