
Azure Internal Load Balancer and the Windows Azure Pack

I’ve been working with a customer who has been developing their own custom portal for the Windows Azure Pack (WAP) and wanted to host WAP in Azure: they had two datacentres and wanted to ensure WAP itself was hosted somewhere else entirely. In this design my Inframon colleagues and I decided to use Azure to host WAP, ADFS, SQL AlwaysOn for WAP’s databases and a few other components (though not SMA and SPF).

The design called for two servers hosting WAP (to start with) to be deployed in Azure, with all of the different WAP components (Tenant API, Tenant Public API, Admin API, etc.) load balanced using the Azure Internal Load Balancer. This also required that we change the FQDNs for the different WAP components and create DNS entries pointing to the Azure Internal Load Balancer’s IP address.
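For illustration, each DNS entry was just an A record for a WAP FQDN pointing at the ILB’s IP address. A sketch of that change, assuming a Windows DNS server with the DnsServer module available (the zone name, record name and IP address here are hypothetical):

#Hypothetical zone, record name and ILB IP address - substitute your own
Add-DnsServerResourceRecordA -ZoneName "contoso.local" -Name "wap" -IPv4Address "192.168.100.25"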

The diagram below shows the desired installation:

[Diagram: WAP-ILB]

Whilst there is a great deal of documentation from Microsoft about how to use the Azure Internal Load Balancer to create a SQL AlwaysOn cluster – which was very useful, as we needed one of those for WAP’s databases – there was very little about using it for anything else.

After many late nights trying to get the Azure Internal Load Balancer to do what we required, we started to understand the issues we were having. We had configured the probe port to use the same port that WAP was listening on (we didn’t need to change the default WAP port numbers, as WAP wasn’t customer facing and the customer was happy for them to remain at their defaults). For example, to load balance the Admin API, which listens on port 30004, we created an Azure endpoint using the following PowerShell:


#Create the Internal Load Balancer in the cloud service with a static VNet IP address
Add-AzureInternalLoadBalancer -InternalLoadBalancerName "WAP" -ServiceName "WAP" -StaticVNetIPAddress 192.168.100.25 -SubnetName "WAP"

#The port the WAP Admin API listens on
$Port = 30004

#Get the VM configuration from Azure
$WAP1 = Get-AzureVM -ServiceName "WAP" -Name "WAP1"
$WAP2 = Get-AzureVM -ServiceName "WAP" -Name "WAP2"

#Add a load-balanced endpoint to each VM, probing the service port itself over HTTP
Add-AzureEndpoint -Name "WAP$Port" -Protocol tcp -LocalPort $Port -PublicPort $Port -DirectServerReturn $false -LBSetName "WAP$Port" -ProbePort $Port -ProbeProtocol http -ProbeIntervalInSeconds 15 -ProbeTimeoutInSeconds 31 -InternalLoadBalancerName WAP -ProbePath / -VM $WAP1

Add-AzureEndpoint -Name "WAP$Port" -Protocol tcp -LocalPort $Port -PublicPort $Port -DirectServerReturn $false -LBSetName "WAP$Port" -ProbePort $Port -ProbeProtocol http -ProbeIntervalInSeconds 15 -ProbeTimeoutInSeconds 31 -InternalLoadBalancerName WAP -ProbePath / -VM $WAP2

#Update the VMs in Azure with their new configuration
$WAP1 | Update-AzureVM
$WAP2 | Update-AzureVM

This refused to work.

Unfortunately there is no way (that I can find) to monitor the Azure Internal Load Balancer to find out what is happening with it and what errors, if any, it is receiving from the load-balanced servers. After much investigation we discovered that, because we were using Server Name Indication (SNI) in IIS, the HTTP probe wasn’t working: the probe doesn’t send a host name for the SNI binding to match, so it was receiving back an HTTP 400 Bad Request response. The Azure Internal Load Balancer correctly interpreted this as a service failure, so it would never bring the endpoints into rotation.
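Since there’s no visibility into the probe itself, one way to approximate what it sees is to make a plain HTTP request against a node’s IP address, with no host name for an SNI binding to match; the IP address and port below are hypothetical:

#Approximate the ILB probe: a plain HTTP GET to a node's IP with no host name
#(hypothetical IP address and port - substitute one of your own WAP servers)
try {
    $Response = Invoke-WebRequest -Uri "http://192.168.100.26:30004/" -UseBasicParsing
    Write-Output "Probe would see HTTP $($Response.StatusCode)"
}
catch {
    Write-Output "Probe would fail: $($_.Exception.Message)"
}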

To resolve this problem we changed the default web site in IIS on each WAP server to listen on port 40091 and ensured there was no SNI configured on that binding, giving the probe a URL it could always reach.
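A minimal sketch of that IIS change, assuming the WebAdministration module and the standard “Default Web Site” name:

Import-Module WebAdministration

#Add a plain HTTP binding on port 40091, on all IP addresses, with no host name (so no SNI)
New-WebBinding -Name "Default Web Site" -Protocol http -Port 40091 -IPAddress "*"

We then altered the PowerShell to this: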

#Environment-specific values
$ServiceName = ""
$InternalLoadBalancerName = ""
$InternalLoadBalancerIPAddr = ""
$SubnetName = ""

#Create the Internal Load Balancer in the cloud service with a static VNet IP address
Add-AzureInternalLoadBalancer -InternalLoadBalancerName $InternalLoadBalancerName -ServiceName $ServiceName -StaticVNetIPAddress $InternalLoadBalancerIPAddr -SubnetName $SubnetName

#Get the VM configuration from Azure
$WAP1 = Get-AzureVM -ServiceName $ServiceName -Name "WAP1"
$WAP2 = Get-AzureVM -ServiceName $ServiceName -Name "WAP2"

#The list of ports to be load balanced
$Ports = @("30004","30005","30006","30020","30022","30071","30072","30081","30091")

#Iterate each port in the list creating a new endpoint within the Azure ILB for each VM and port
ForEach($Port in $Ports){
    Add-AzureEndpoint -Name "WAP$Port" -Protocol tcp -LocalPort $Port -PublicPort $Port -DirectServerReturn $false -LBSetName "WAP$Port" -ProbePort 40091 -ProbeProtocol http -ProbeIntervalInSeconds 15 -ProbeTimeoutInSeconds 31 -InternalLoadBalancerName $InternalLoadBalancerName -ProbePath / -VM $WAP1
    Add-AzureEndpoint -Name "WAP$Port" -Protocol tcp -LocalPort $Port -PublicPort $Port -DirectServerReturn $false -LBSetName "WAP$Port" -ProbePort 40091 -ProbeProtocol http -ProbeIntervalInSeconds 15 -ProbeTimeoutInSeconds 31 -InternalLoadBalancerName $InternalLoadBalancerName -ProbePath / -VM $WAP2
}

#Update the VMs in Azure with their new configuration
$WAP1 | Update-AzureVM
$WAP2 | Update-AzureVM
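To sanity-check the configuration you can list the endpoints now attached to each VM with Get-AzureEndpoint:

#List the load-balanced endpoints on each VM
Get-AzureVM -ServiceName $ServiceName -Name "WAP1" | Get-AzureEndpoint
Get-AzureVM -ServiceName $ServiceName -Name "WAP2" | Get-AzureEndpoint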

This resulted in a happy Azure Internal Load Balancer but not a happy WAP deployment…

As each WAP server hosted all of the required components (Tenant API, Tenant Public API, Admin API, etc.), and the FQDN for those components resolved to the IP address of the Azure Internal Load Balancer, WAP was unable to communicate with itself across components: a server sitting behind the Azure Internal Load Balancer cannot connect back through the load balancer’s own IP address, so every component-to-component call on the same box failed. The diagram below shows the problem:

[Diagram: Azure-ILB-Fail]

After much deliberation on how to solve this, including moving away from the Azure Internal Load Balancer and using a third-party load balancer in Azure, we decided to put an entry in each WAP server’s hosts file so that the WAP FQDN resolved to the server itself. This led to a happy deployment!
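The fix itself is a one-liner run on each WAP server; the FQDN below is hypothetical, and the IP address should be that server’s own:

#Point the WAP FQDN at this server itself rather than at the ILB
#(hypothetical FQDN - substitute your own, and use this server's own IP address)
$FQDN = "wap.contoso.local"
$LocalIP = "192.168.100.26"
Add-Content -Path "$env:windir\System32\drivers\etc\hosts" -Value "$LocalIP`t$FQDN"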

So if you’re going to use the Azure Internal Load Balancer for anything, make sure you understand that servers sitting behind it cannot communicate with its IP address. If we had split WAP into its separate components on different servers, in different subnets behind different Azure Internal Load Balancers, this wouldn’t have been a problem, but for this customer it would have been far too much!
