Blog Archives

Azure Internal Load Balancer and the Windows Azure Pack

I’ve been working with a customer who has been developing their own custom portal for the Windows Azure Pack (WAP) and wanted to host WAP in Azure: they already had two datacentres and wanted to ensure that WAP was hosted somewhere else entirely. So in this design my Inframon colleagues and I decided to use Azure to host WAP, ADFS, SQL AlwaysOn for WAP and a few other components (not SMA and SPF though).

The design called for two servers hosting WAP (to start with) to be deployed in Azure and to have all the different WAP components (Tenant API, Tenant Public API, Admin API, etc.) load balanced using the Azure Internal Load Balancer. This also required that we change the FQDN for the different WAP components and create a DNS entry pointing to the Azure Internal Load Balancer’s IP address.

The diagram below shows the desired installation:

WAP-ILB

Whilst there is a great deal of documentation from Microsoft about how to use the Azure Internal Load Balancer to create a SQL AlwaysOn cluster – which was very useful as we needed one of those for WAP’s databases – there wasn’t very much about using it for anything else.

After many late nights trying to get the Azure Internal Load Balancer to do what we required, we started to understand the issues we were having. We had configured the probe port to use the port that WAP was listening on (we didn’t need to change the default WAP port numbers as the customer was happy for them to remain as default, since WAP wasn’t customer facing). For example, to load balance the Admin API on port 30004 we created an Azure endpoint using the following PowerShell:


#Create the internal load balancer on the cloud service with a static VIP
Add-AzureInternalLoadBalancer -InternalLoadBalancerName "WAP" -ServiceName "WAP" -StaticVNetIPAddress 192.168.100.25 -SubnetName "WAP"

#The WAP Admin API port
$Port = 30004

#Get the VM configuration from Azure
$WAP1 = Get-AzureVM -ServiceName "WAP" -Name "WAP1"
$WAP2 = Get-AzureVM -ServiceName "WAP" -Name "WAP2"

#Create a load balanced endpoint on each VM, probing the WAP port itself
Add-AzureEndpoint -Name "WAP$Port" -Protocol tcp -LocalPort $Port -PublicPort $Port -DirectServerReturn $false -LBSetName "WAP$Port" -ProbePort $Port -ProbeProtocol http -ProbeIntervalInSeconds 15 -ProbeTimeoutInSeconds 31 -InternalLoadBalancerName WAP -ProbePath / -VM $WAP1

Add-AzureEndpoint -Name "WAP$Port" -Protocol tcp -LocalPort $Port -PublicPort $Port -DirectServerReturn $false -LBSetName "WAP$Port" -ProbePort $Port -ProbeProtocol http -ProbeIntervalInSeconds 15 -ProbeTimeoutInSeconds 31 -InternalLoadBalancerName WAP -ProbePath / -VM $WAP2

#Update the VMs in Azure with their new configuration
$WAP1 | Update-AzureVM
$WAP2 | Update-AzureVM

This refused to work.

Unfortunately there is no way (that I can find) to monitor the Azure Internal Load Balancer to find out what is happening with it and what errors, if any, it is receiving from the load balanced servers. After much investigation we discovered that because we were using Server Name Indication (SNI) in IIS the HTTP probe wasn’t working. The probe wasn’t using the SNI name and was receiving back an HTTP 400 BAD REQUEST error. The Azure Internal Load Balancer correctly interpreted this as a service failure, so the load balancer wouldn’t work.
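A rough way to see this for yourself is to make a plain HTTP request straight to a node’s IP and port with no host name, just like the probe does (the IP address here is illustrative):

#Simulate the ILB probe: plain HTTP to the node, no SNI/host header.
#Against a site bound with SNI this comes back as a 400 Bad Request.
Invoke-WebRequest -Uri "http://192.168.100.21:30004/" -UseBasicParsing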

To resolve this problem we changed the default web site in IIS to listen on port 40091, ensured there was no SNI configured, and altered the PowerShell to this:

$ServiceName = ""
$InternalLoadBalancerName = ""
$InternalLoadBalancerIPAddr = ""
$SubnetName = ""

Add-AzureInternalLoadBalancer -InternalLoadBalancerName $InternalLoadBalancerName -ServiceName $ServiceName -StaticVNetIPAddress $InternalLoadBalancerIPAddr -SubnetName $SubnetName

#Get the VM configuration from Azure
$WAP1 = Get-AzureVM -ServiceName $ServiceName -Name "WAP1"
$WAP2 = Get-AzureVM -ServiceName $ServiceName -Name "WAP2"

#The list of ports to be load balanced
$Ports = @("30004","30005","30006","30020","30022","30071","30072","30081","30091")

#Iterate each port in the list creating a new endpoint within the Azure ILB for each VM and port
ForEach($Port in $Ports){

Add-AzureEndpoint -Name "WAP$Port" -Protocol tcp -LocalPort $Port -PublicPort $Port -DirectServerReturn $false -LBSetName "WAP$Port" -ProbePort 40091 -ProbeProtocol http -ProbeIntervalInSeconds 15 -ProbeTimeoutInSeconds 31 -InternalLoadBalancerName $InternalLoadBalancerName -ProbePath / -VM $WAP1
Add-AzureEndpoint -Name "WAP$Port" -Protocol tcp -LocalPort $Port -PublicPort $Port -DirectServerReturn $false -LBSetName "WAP$Port" -ProbePort 40091 -ProbeProtocol http -ProbeIntervalInSeconds 15 -ProbeTimeoutInSeconds 31 -InternalLoadBalancerName $InternalLoadBalancerName -ProbePath / -VM $WAP2

}

#Update the VMs in Azure with their new configuration
$WAP1 | Update-AzureVM
$WAP2 | Update-AzureVM
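For reference, the IIS change can be scripted too; a minimal sketch using the WebAdministration module (assuming the probe site is the Default Web Site):

Import-Module WebAdministration

#Bind the Default Web Site to the probe port on all IPs with no host header (so no SNI)
New-WebBinding -Name "Default Web Site" -Protocol http -Port 40091 -IPAddress "*"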

This resulted in a happy Azure Internal Load Balancer but not a happy WAP deployment…

As each WAP server had all of the required components (Tenant API, Tenant Public API, Admin API, etc.) and the IP address of the FQDN was set to the IP address of the Azure Internal Load Balancer, WAP was unable to communicate with itself across components. The diagram below shows the problem:

Azure-ILB-Fail

After much deliberation on how to solve this, including moving away from the Azure Internal Load Balancer and using a 3rd party tool in Azure, we decided to put an entry in each WAP server’s hosts file pointing the WAP FQDN back at itself. This led to a happy deployment!
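On each WAP server this is a one-liner from an elevated PowerShell prompt (the IP address and FQDN are illustrative):

#Point the WAP FQDN at this server's own IP so component-to-component calls stay local
Add-Content -Path "$env:SystemRoot\System32\drivers\etc\hosts" -Value "192.168.100.21 wap.contoso.local"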

So if you’re going to use the Azure Internal Load Balancer for anything, make sure you understand that servers that sit behind it can’t communicate with its IP address. If we had split WAP into its separate components on different servers, in different subnets behind different Azure Internal Load Balancers, then this would have been OK, but for this customer it would’ve been too much!

Modern Style Visio Stencils for Operations Manager

I have created some more modern style Visio stencils for System Center. This time for Operations Manager!

You can download from here.

You can see what they look like below, but download them, they’re free!

OpsMgr3

OpsMgr1

OpsMgr2

MDT and DaRT – Locking the Port Used for Remote Connections during OSD

The Microsoft Deployment Toolkit (MDT) brings a lot of functionality to operating system deployment (OSD) as I’m sure many of you are aware. One of the best features is the ability to incorporate the DaRT tools into the MDT boot WIM. This allows for deployment administrators to remotely connect to a device during OSD. This can be extremely useful in a situation where the device is not local to the admin.

Johan Arwidmark has a great post on how to integrate the tools into the MDT environment with ConfigMgr.

One of the issues with the default DaRT configuration is that remote connections use a dynamic RPC port instead of a specific port during OSD. It is possible to lock down the port when using DaRT in its fully fledged mode; however, locking it down to a specific port during the OSD phase is not easy.

I’ve recently been working with a customer who has VERY strict firewall policies in place and would not allow dynamic RPC ports to be opened from the ConfigMgr Primary Site Server VLAN to the client device VLAN. This led me to investigate how to lock down the port used by DaRT for remote connections during OSD.

After trying several different options, including adding a customised DartConfig.dat file to the base Toolsx86.cab file, I was almost at the point of giving up. I didn’t.

Using the DaRT Recovery Image Wizard I created a DaRT image for Windows 8.1 Update and, on the Remote Connection tab, enabled the option to Allow Remote Connections and specified a port to use, in this case 3389 as this was what the customer wanted to use:

AllowRemoteConnections

During the process I ticked the option to edit the image before the WIM was created:

EditImage

I then opened the location where the WIM contents were stored and navigated to the Windows\System32 folder to extract the customised DartConfig.dat file:

DartConfig

This file was then copied to a new folder where I’d created a folder structure Windows\System32:

CreateExtrasFolderStructure
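Scripted, that step is just a folder create and a copy (both paths here are illustrative):

#Recreate the Windows\System32 structure in an "extra files" folder
New-Item -ItemType Directory -Path "C:\DaRTExtras\Windows\System32" -Force

#Drop the customised DartConfig.dat into it
Copy-Item "C:\DaRTImageContents\Windows\System32\DartConfig.dat" -Destination "C:\DaRTExtras\Windows\System32"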

I then finished the DaRT Recovery Image Wizard and started to create a new boot image in ConfigMgr using the “Create Boot Image using MDT” option. During the creation wizard I ticked the “Add extra files to the new boot image” option and pointed to the UNC path for the folder I had created above:

ExtrasFolder

This created the boot image and crucially overwrote the default DartConfig.dat file with the one I created earlier. This meant that for all Task Sequences using this boot image the customer would be able to connect to the device using the DaRT Remote Control option in MDT using port 3389 at all times.

OnPort3389

 

TechEd North America 2014

So it’s TechEd North America this week. Hopefully Microsoft will be giving out some nuggets of information about the next versions of Windows and System Center, but looking at the dates I’d think that information will come out at TechEd Europe in October…

Two of my colleagues are speaking this week, Martyn Coupland and Gordon McKenna MVP, so if you’re going I’d advise catching their sessions as they know what they’re talking about!

I’m on a customer site all week but hopefully, thanks to the time difference, I’ll be able to catch the keynote live and possibly some other sessions too…

Scripting Shared Nothing Live Migration

UPDATE: 16th September 2016 – Link to download fixed.

I was working with a customer recently to replace their existing Windows Server 2012 Hyper-V clusters and System Center 2012 SP1 Virtual Machine Manager (VMM) installation with new Windows Server 2012 Hyper-V clusters and System Center 2012 R2 Virtual Machine Manager installation.

The customer was concerned about downtime for moving their Virtual Machines (VMs) from their existing clusters to the new ones.

We looked at the option of using Shared Nothing Live Migration (SNLM) to move VMs between the clusters. Whilst this was an option it wasn’t entirely realistic: there were in excess of 250 VMs, and because the names of the Logical Switches were different each VM took some time to process manually – and a manual, repetitive task is prone to errors. The customer thought they’d have to go through migrating roles, moving CSVs, taking down VMs, etc. Whilst that doesn’t sound too bad I wanted to offer a better option.

So looking at the options in PowerShell it was obvious that Move-VM was the cmdlet I wanted to use. Looking at the parameters I found -CompatibilityReport, which “Specifies a compatibility report which includes any adjustments required for the move.” My first thought was: where do I get one of those from?

After a bit of digging on the internet I discovered Compare-VM which creates a Microsoft.Virtualization.Powershell.CompatibilityReport.

The Compatibility Report fundamentally contains information on what would happen if, in this case, we wanted to move a VM from one host to another.

So running:

Compare-VM <VMName> -DestinationHost <DestinationServer> -DestinationStoragePath <DestinationStoragePath> -IncludeStorage

gave me a Compatibility Report with some incompatibilities listed… Again after some digging I determined what these incompatibilities meant and how to resolve them.

I could then run Compare-VM -CompatibilityReport <VMReport>, which essentially asks, “if I did this to the VM, would it work now?” As long as you get no incompatibilities all is good!

Once that completed we could use the Move-VM -CompatibilityReport <VMReport> cmdlet to move a VM from one host to another…

Now whilst all these Compare-VMs are underway the source VM is quite happy existing and running as normal.
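Putting that together, here’s a minimal sketch of the flow, assuming the only incompatibility reported is the virtual switch name (the VM, host, path and switch names are illustrative):

#Generate the compatibility report for the proposed move
$Report = Compare-VM -Name "VM01" -DestinationHost "NewHost01" -DestinationStoragePath "C:\ClusterStorage\Volume1" -IncludeStorage

#Fix any switch incompatibilities by reconnecting the adapter to the new switch
$Report.Incompatibilities | Where-Object { $_.Message -like "*switch*" } | ForEach-Object {
    $_.Source | Connect-VMNetworkAdapter -SwitchName "LogicalSwitch1"
}

#Re-check; an empty incompatibility list means the move should now work
$Report = Compare-VM -CompatibilityReport $Report
if (-not $Report.Incompatibilities) {
    Move-VM -CompatibilityReport $Report
}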

So where is this going? After discussions with the customer I expanded the PowerShell script to cope with multiple VMs, check for Pass Through Disks, remove VMs from clusters, etc.

The basics of the script are that it requires several parameters (a sample invocation follows the list):

  • SourceCluster – where are the VMs to move?
  • DestinationServer – where do you want to move the VMs to? (optional, if this isn’t specified then a random member of the destination cluster is chosen for each VM to be moved)
  • DestinationCluster – what cluster do you want to move the VMs to?
  • SwitchToConnectTo – what is the name of the Virtual Switch to use on the destination server/cluster? For example, if your VM/VMs are connected to a virtual switch called LogSwitch1 but your new cluster uses a virtual switch named LogicalSwitch1, you would specify LogicalSwitch1 for this parameter.
  • DestinationStoragePath – where do you want to put the VM’s storage on the destination cluster?
  • VMsToMove – this is a list of the VMs to be moved
  • LogPath – the path to a file you want to log the progress of the script to (optional)
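A sample invocation, with illustrative values throughout (the script itself is the Move-VMusingSNLM download below):

.\Move-VMusingSNLM.ps1 -SourceCluster "OldHVCluster" -DestinationCluster "NewHVCluster" -SwitchToConnectTo "LogicalSwitch1" -DestinationStoragePath "C:\ClusterStorage\Volume1" -VMsToMove @("VM01","VM02") -LogPath "C:\Logs\SNLM.log"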

Whilst this script may seem a little limited it managed to save the customer a great deal of time in migrating their VMs from their old Hyper-V clusters to their new ones. It can be extended to use different Storage Paths for different VMs, different Virtual Switches, etc.

Move-VMusingSNLM

What I’ve been up to…

So 2013 was a bit of a crazy year for me…

After winning a place at TechEd Europe 2013 I got a new job working for Inframon as a System Center and Desktop Implementation Consultant; basically I get to work with System Center 2012 every day! Not only do I get to work with the latest and greatest software, I get to work with the best System Center guys in the world.

I’ve gone from running a small, but very capable, installation of System Center to deploying different components of it for a variety of customers all over the UK. It’s been challenging but fantastic!

I’d like to put a special thank you out to the Microsoft UK DPE team and TechNet UK team who have inspired me to go out and learn System Center and Hyper-V. Without the free training offered by Microsoft through TechDays (online and in-person), Microsoft Virtual Academy and other free resources I wouldn’t be where I am now.

 

Stop Bashing Windows RT!

There’s a lot of articles in the IT press at the moment about how Windows RT is doomed to fail:

  1. Windows RT: DOA to almost everybody
  2. Microsoft Doesn’t Want To Admit Windows RT Is Dead
  3. Windows gains no tablet traction as PC OEMs turn to Android

The list goes on and on.

I was fortunate enough to attend TechEd Europe this year and Microsoft did an amazing discount offer on both flavours of its Surface hardware. Like the vast majority of attendees (that I spoke to) I purchased both. They ran the offer at TechEd North America, TechEd Europe and the Worldwide Partner Conference.

Yes Windows RT is not full Windows – MS never claimed it was. Did the marketing team go a bit nuts with the naming of the product? Yes. There are many examples of Microsoft products being “attacked” by the marketing department:

  1. Windows RT
  2. Windows Azure Services for Windows Server (now called the Windows Azure Pack)
  3. Windows Azure Active Directory (nothing to do with on-premise Active Directory – when it was first released many people, including me, thought they could use it as another AD controller!)

Anyway – back to Windows RT…

It’s not Windows; it looks like Windows, it smells like Windows, but it isn’t. It can’t run traditional desktop apps like Photoshop, AutoCAD, etc., but then neither can the iPad or any flavour of Android device (excluding the wacky hybrids, but they’re not running Photoshop on Android). It only runs apps from the Microsoft store – just like a non-jailbroken iOS device – and don’t even go there when it comes to the crazy world of Android where each vendor has their own app store!

So what’s the problem(s)?

1. They called it Windows RT

Windows is a brand name, the same as Coca Cola, Diet Coke, Coke Light, etc. Maybe they should have called it Windows Lite or something similar – immediately people would know it’s different. I’m no marketing expert IN ANY WAY, but when Apple released the iPhone, iPad, etc. (not including the original iPod) it was crystal clear that it wasn’t Mac OS X. It may share certain elements with its big brother but it looks different, smells different, is different. Windows RT looks the same. It obviously shares components with its big brother but it’s different. Easy fix – get rid of the desktop element on Windows RT. Make it Modern apps only; I know it is, however the problem there is Office RT, or whatever the official name is. Surely Microsoft could wrap it in a Modern app so you can’t see it running in desktop mode?

2. The hardware cost of Surface RT (at least) is WAY TOO HIGH!

When Amazon released the Kindle, Kindle Fire and other hardware they sold them at cost or less. Maybe Microsoft are selling Surface RT devices at cost; if so they need to sort their supply chain out! In my opinion they should even have made a loss on the product. Flood the market with devices. Yes, it would’ve made OEMs angry, but surely the whole point of releasing a product is getting people to use it! The more adoption a platform gets, the more incentive there is for developers to put apps in the associated app store. Apple’s products are premium products, always have been and to some extent always will be. They got to the mass market first with the iOS devices and the App Store – now it has the biggest app store out there (quantity does not equal quality). The vast majority of developers put apps on the iOS platform before they head over to Android and eventually Windows Phone/RT/8. Once the traction is there the cost of the device can rise, and OEMs can make more targeted hardware – higher capacity, more cores, better GPUs, whatever.

3. The price of hardware accessories for Surface RT is WAY TOO HIGH!

£99 for a touch cover! £109 for a type cover! £69.99 for a mouse! SERIOUSLY! That is insane. The accessories are very well made and do work well, but £99 for the “entry level” keyboard is just wrong. Take £50 off each keyboard – still expensive but much more realistic. £69.99 for a mouse – it’s a mouse, not a USB port replicator! Take £30-£40 off that and it would be worth it.

My experience of Windows RT

As I stated earlier I have both flavours of Surface. When I leave my house in the morning there are 4 devices I pick up:

  1. My iPhone 4 – waiting for the Nokia 1020 to be released in the UK then it is GONE
  2. My work Nokia Lumia 900 – great workhorse for what I need it for
  3. My iPad mini – I love my iPad mini, it does everything I want it to do for me. Games, Facebook, Twitter, WordPress etc. It is not a big productivity device for me. Yes I can use iTap for RDP access to my environment, yes there are tools available for an IT Pro to manage a few aspects of IT
  4. My Surface RT – I NEVER leave home without it. Whilst in many respects it does the same as my iPad mini, it does so much more for me. I’ve updated mine to Windows 8.1 Preview and it makes it so much better. I don’t use local storage on the device any more. Everything goes through SkyDrive. I’ll sync down some videos (which look so much better on the Surface RT than on the iPad mini), work on some blog posts in Word, update my personal Excel spreadsheets, etc., safe in the knowledge that when I find some Wi-Fi it will sync without me having to tell it to, or plugging it into my PC to take files off (thanks Apple), or having to email documents to myself. I’ve installed a bunch of apps and am very happy with the quality (quality is so much more important than quantity when it comes to app stores)

Admittedly it can’t do everything my Surface Pro can do, but the battery life is much better, it’s lighter, and it doesn’t feel like you could fry an egg on the back of it. I use my Surface Pro too, when I need to.

Windows RT is squarely focused at consumers, and so it should be. Did Microsoft marketing get the name wrong? Yes they did.

Where are Microsoft heading with the 2012 R2 releases?

So last week I attended my first TechEd Europe in Madrid. I won my place through the Microsoft TechNet UK TechEd Challenge (say that after a few pints…) for my System Center 2012 blog post.

Never in 4 days have I learnt so much!

With the new versions of Windows Server 2012 R2 and System Center 2012 R2 announced at TechEd North America it was the turn of Europe to see what Microsoft had to offer with the latest versions. It is fair to say they’ve not disappointed anyone (that much) with the upcoming releases.

First of all – TechEd

Wow. It was my first Microsoft conference, ever, and I enjoyed every minute. It was great to see so many of the Product Managers, Marketing Managers and downright technical geniuses who had made the trip over to share their enthusiasm for the next release. Out of all the sessions I could possibly have attended I only missed one – mainly due to my brain trying to process the sheer quantity of information!

It’s clear the preview releases are very stable – no BSODs during demos, and no pre-recorded demos either (unlike some other vendors).

Le Caveat

Everything below is my opinion and should be treated as such. No Microsoft employee has confirmed any of the information below, it is purely my personal speculation.

So what is Microsoft’s vision?

Everything Windows Server 2012 R2 and System Center 2012 R2 is based on the “Cloud OS” vision (I saw that slide so many times…) where there are 3 clouds:

  1. Private cloud: on-premise cloud powered by Windows Server 2012 (and R2 when released), System Center 2012 (and R2 when released), SQL Server 2012 (and 2014 when released), the Service Provider Framework and the Windows Azure Pack (which is awesome). This is generally seen as the starting point for everything, doesn’t have to be but if you’ve got it on-premises the rest is easy.
  2. Public cloud: this is Microsoft’s Azure cloud. It is powered by Windows Server 2012 (and R2 when released), System Center 2012 (and R2 when released), SQL Server 2012 (and 2014 when released) and the Azure Services (full blown) and some mega storage system using commodity hardware – no specialist SAN.
  3. Service Provider cloud: again running on Windows Server 2012 (and R2 when released), System Center 2012 (and R2 when released), SQL Server 2012 (and 2014 when released), the Service Provider Framework and the Windows Azure Pack (which is awesome – still). The idea here is for value added services from a service provider, customer choice – especially around data locations (think data laws).

This leads to the “one consistent platform” message; if your internal users can provision services using the Windows Azure Pack, they can use full blown Azure and Service Provider implementations too; no more learning of several different portals.

So what is enabling this?

The core new features of Windows Server 2012 R2 are going to change the game when it comes to Cloud.

  • Shared VHDX – enables guest clustering without having to expose directly mapped storage to guest VMs. This gives you all the features needed for upgrading underlying infrastructure whilst maintaining availability of guest VMs. Storage Migration will allow you to move the VM’s storage (i.e. the shared VHDX) whilst the underlying hardware is maintained, upgraded, replaced, etc. As an internal provider this will make my life so much easier (edit: this point needs to be clarified with some testing, I think I may have this wrong…) I can remove all my directly mapped LUNs and just use Shared VHDX files for the storage. Don’t use snapshots!
  • Online VHDX resize – the ability to change the size of the disk attached to a VM (grow AND shrink) without having to take the VM offline! Note: you still need to change the size of the partition within the guest; some clever use of PowerShell/System Center Orchestrator (provided the guest trusts the Orchestrator install) will do this, but that will require some effort to implement, it isn’t in the box (a rough sketch follows this list)
  • Storage QoS – you can now tune the number of IOPs on a virtual disk. No more IOP hoggers! I believe this only extends to additional disks, not disks with OSs in. As such, applications like SQL that love IOPs will have to be configured correctly in guest for the Hyper-V provider to take advantage here (follow MS best practice and you’ll be fine)
  • Live Migration compression – in Windows Server 2012 R2 this will come enabled by default. Most virtualisation hosts are constrained by the amount of RAM they have to offer guests rather than the CPU cycles they can offer. Compression uses spare host CPU cycles to compress the Live Migration of RAM, and you can move a VM at twice the speed (if not more). If you’ve got RDMA NICs (and multiples of them) then the speed of your RAM will matter (that is not a typo). SMB Direct (RDMA) offloads everything from the system to the NIC cards
  • Extended replica – instead of just being able to replicate to one other host you can replicate a replica. Perfect for Service Providers who offer replica as a service; they’re able to replicate the customer’s VMs to another host/data centre without having to have crazy expensive SANs
  • Hyper-V Network Virtualisation Gateway – until the Friday morning of TechEd I referred to this as the “magic gateway”, I just couldn’t figure out how it worked. After attending this session it all became very clear. This appears to be the brains behind the Virtual Networks offering on the Azure public cloud, the load balancer and all the other excellent networking offerings in Azure
  • Windows Server 2012 R2 Tiered Storage Spaces – on the surface this seems to be the StorSimple technology migrated to Windows Server 2012 R2. By tiering the storage available on Storage Spaces, Windows Server will move the most read/written blocks (blocks not files – blocks could contain files, think VDI deduplication here) to the fastest storage available, this could be SSD, 15K disks, etc. This tiering gives amazing IOPs, especially when combined with CSV caching in memory. Best of all – it just uses JBODs on the back end! As I understand it, at the moment you can only have 8 nodes in a Scale-Out file cluster for this
  • Linux backups of Hyper-V guests – no longer will a VM pause when it is being backed up at the host level (provided your Linux version is correct). Microsoft have shied away from saying they’ve implemented VSS inside Linux but it is basically what they’ve done
  • Oracle support on Hyper-V – this is probably the final hurdle for high-end enterprise adoption of Hyper-V
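To illustrate the online VHDX resize point from the list above, here’s a rough sketch of growing a data disk and then extending the partition inside the guest (all names, paths, drive letters and sizes are illustrative):

#On the host: grow the VHDX while the VM is running (the disk must be on a SCSI controller)
Resize-VHD -Path "C:\VMs\VM01\Data.vhdx" -SizeBytes 200GB

#Inside the guest (here via PowerShell remoting): extend the partition to fill the disk
Invoke-Command -ComputerName "VM01" -ScriptBlock {
    $Max = (Get-PartitionSupportedSize -DriveLetter D).SizeMax
    Resize-Partition -DriveLetter D -Size $Max
}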

At TechEd the focus was VERY heavy on the “Cloud OS” vision and how System Center 2012 R2 and the Windows Azure Pack was going to power that throughout:

  • System Center Virtual Machine Manager (VMM) is the king maker. VMM will now deploy VM hosts from bare metal, VMs to hosts (whether that be Hyper-V, VMware or Citrix hosts) and with R2 it will deploy Scale-Out file servers for hosting VM storage from bare metal! Allegedly this list will increase too. Microsoft have stated that there is no reason why you shouldn’t move your workloads to VMs – this is squarely aimed at SQL server workloads. With the ability to SysPrep a SQL server you can now deploy them directly to VMs. Server App-V brings another string to VMM’s bow. As far as I can see Microsoft are targeting VMM at deploying all server workloads – physical and virtual.
  • Windows Azure Pack. This is enabling end-user provisioning of services from pre-defined templates (created in VMM) with an interface that is consistent with the Microsoft Azure Cloud. The Azure Pack sits between the end-user and the Service Provider Framework (this sits in front of System Center) and can be skinned to corporate colours. At a basic level it tells VMM what to do (via service templates) and as such does not necessarily require you to have Hyper-V as your virtualisation host – it will work with VMware and Citrix too. Best of all – it’s extensible. Microsoft will add more services over time and you can add your own in too.

So what about the other System Center components?

Data Protection Manager was relatively quiet, it was confirmed that this component will be able to use a clustered SQL server for its database but there will be no push to cluster DPM. You can make DPM highly available by running it as a VM on a Hyper-V failover cluster. You should be able to use VHDX files to store the DPM backups (this will remove the final pass-through disk in my DPM setup) – these will need to be fixed size though and will probably not support online resize – DPM can get very angry about other applications playing around with its disk(s).

I heard very little mention of System Center Configuration Manager 2012 R2 at TechEd; I may have been in the wrong sessions. With VMM taking over the role of deploying servers and ConfigMgr having tighter integration with Windows Intune, I see it becoming the client OS manager. Combine it with MDT and it is an extremely effective tool for desktop deployment and compliance monitoring. When it comes to Patch Management VMM already has the hosts; how long until it starts looking after guests? Admittedly ConfigMgr gives you all the reports, at the moment…

Operations Manager – there were some further strides forward, especially for monitoring Java applications. System Center Advisor is now baked into the application (this is Microsoft’s cloud based monitoring that uses information gained from customers to ensure your installations are in tip top condition).

I didn’t hear anything about App Controller or End Point Protection.

Summary

The order of products to learn inside and out for effective Microsoft Cloud OS are:

  1. Windows Server 2012 R2: this is the base for everything. Microsoft runs on Microsoft best (or something similar – I’m sure the MS marketing team can correct me here). Once you know how Windows Server works, especially Hyper-V, you’ve got the foundations for your cloud
  2. System Center Virtual Machine Manager: this rules your cloud. VMM provisions and controls your cloud. I cannot stress how important this product will be in the next 12 months and far into the future
  3. System Center Operations Manager: this will monitor your cloud and all the applications running in it. There’s no point in having a bunch of amazing hardware if the services you’re running are performing like a 90 year old in a 100 metre sprint. I’d include System Center Advisor in here too
  4. Windows Azure Pack: this is the front door to your cloud. It makes end-user provisioning of services much easier. You can also customise the pack, not only through colour schemes but you can add your own items in there too
  5. Data Protection Manager: no point in having an amazing cloud if you can’t restore data when you/your customer has a problem
  6. Service Manager: the perfect solution for service desk management, CMDB; it integrates with all the System Center components and offers rich reporting.
  7. App Controller: the key to where services get provisioned. From here you can provision services on premise or in the Cloud
  8. Orchestrator: the key to automation. Orchestrator can talk to all the System Center components, Windows Server 2012, Active Directory, Exchange, SQL Server (the MS list goes on and on) and a vast array of non-Microsoft software including BMC Remedy, VMware, etc.
  9. Configuration Manager: this is important to provide rich compliance information, integrated anti-malware protection, etc. I do believe however that with Desired State Configuration in Windows Server 2012 R2 the compliance monitoring aspects of ConfigMgr for servers will be used less and eventually be deprecated

With the alignment of Windows Server and System Center build/deployments Microsoft are making the life of an IT Pro much easier! Unlike when Windows Server 2012 was released there should be no delay in getting the management components up and running too.

Storage Spaces R2 and OEMs

So Microsoft have announced Windows Server 2012 R2 with some great changes to Storage Spaces.

It got me thinking about what we are going to be seeing in the not too distant future. I think OEMs are going to be creating Storage Spaces in a box – at the moment you can get a cluster in a box solution – and this will morph into Storage Spaces in a box.

A cluster in a box is quite simply at least 2 separate servers with a boat load of disks behind them, more than likely SAS Direct Attached Storage (DAS). That gives you a small cluster that can run multiple VMs (if that’s what you want it for; it could just be a SQL server – probably not supported by the manufacturer though).

So what’s to stop this cluster in a box becoming a Storage Spaces cluster in a box? You’ve got the 2 servers (at least) you’d need for a highly available cluster with the SAS DAS back end.

Take a look at the (rough) diagram below:

SS-1

All the OEMs need to do is change some of the disks to SSDs (this gives you Tiered Storage Spaces in Windows Server 2012 R2). The NIC interfaces on the front end could be optional components – for example 10Gb, InfiniBand, etc., or just straight 1Gb NICs. Put in multiple NICs and you can team them and you’ve got redundancy – especially with the Windows Server 2012 switch independent option.
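Teaming those NICs is a one-liner in Windows Server 2012; a sketch with hypothetical NIC names:

#Create a switch independent team from two physical NICs
New-NetLbfoTeam -Name "StorageTeam" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent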

All of a sudden you’ve got a storage space in a box that you can connect your Hyper-V Failover Cluster(s) to!


System Center Configuration Manager 2012 Compliance Settings

One great feature of System Center Configuration Manager 2012 (ConfigMgr) is the new compliance settings and configuration baselines. In ConfigMgr 2007 this was known as Desired Configuration Management.

In ConfigMgr 2012 Microsoft really raised their game and now allow for automated remediation, which I primarily use for registry settings. How annoying is it when you configure an application not to self update, then install an update (probably via ConfigMgr with System Center Updates Publisher) and it resets the settings and merrily checks for updates anyway – usually leading to calls to the Service Desk along the lines of “My computer is telling me there is an update to application X but it won’t let me install it”?

This is where the awesome compliance setting remediation comes in – it can detect a change, and if instructed to do so in the compliance setting, change the value to what YOU have told it to be, not what the application developer wants it to be.
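As a rough sketch of how I use it, a configuration item can use a pair of PowerShell scripts like these (the registry path and value name are hypothetical):

#Discovery script: return the current value so ConfigMgr can evaluate compliance
(Get-ItemProperty -Path "HKLM:\SOFTWARE\ExampleApp" -Name "AutoUpdate").AutoUpdate

#Remediation script: force the value back to what YOU want it to be
Set-ItemProperty -Path "HKLM:\SOFTWARE\ExampleApp" -Name "AutoUpdate" -Value 0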

Group Policy Objects

Group Policy Objects (GPOs) give you ultimate control over a domain joined client (be that server or desktop). If you’ve got the Microsoft Desktop Optimisation Pack (MDOP) then you’ve got access to Microsoft’s Advanced Group Policy Management (AGPM) tools – which are fantastic. MDOP is well worth it and it’s cheap (yes, that is cheap and Microsoft in the same sentence). It allows you to log changes to GPOs, do offline testing and loads more. But what if the left hand doesn’t know what the right hand is doing?

If someone authors a change to a GPO that could potentially change something fundamental, for example changes the Remote Desktop firewall settings, how can you monitor that in ConfigMgr?

Enter Microsoft’s Security Compliance Manager (SCM). You’re probably thinking “What the <insert expletive here>!” Bear with me…

Microsoft’s Security Compliance Manager

SCM is a free Solution Accelerator (of which there are many) from Microsoft that can guide you in deploying GPOs to help secure your Windows servers and desktops, with best practice guidance, documentation galore and, best of all, the ability to export CAB files for use in ConfigMgr.

In SCM you can import your existing GPOs and from there you can compare them to Microsoft’s guidance. In addition you can export them to a CAB file for use in ConfigMgr. Big deal? In my opinion – YES! You don’t have to use the comparison aspect, you can just use it as a conduit for the next stage.

In the ConfigMgr console you can import the CAB file into the compliance settings workspace – this in turn generates an array of compliance settings for you. When you dig a little deeper into these settings you find it uses scripts to check compliance; no auto remediation is available here, but it does a good job of checking settings.

What about just opening the raw ADMX files to find the registry settings?

Rather you than me!

If your GPOs only contain a few settings you can open the parent ADMX file, find the registry strings and use those for remediation if you want… I don’t know about your environment, but that would be a boat load of work for me!

So where’s the benefit?

If you’ve got these settings imported into ConfigMgr you can see when the deployed baselines move away from their GPO settings; this can immediately alert you to one of two things:

  1. An update, whether that be from Microsoft or another company (remember you can control quite a lot of applications via GPOs, not just Microsoft’s – Google Chrome anyone?), may have changed a value you configured in a GPO
  2. Or more likely, someone has changed something and not let you know. Now if you’re using AGPM you’ll be able to find the individual and have a little chat…

Le caveat

This is not a catch all. If someone deploys a new setting via a GPO (one that isn’t covered by a compliance setting imported via SCM) you won’t know about it. Communication is key here; make sure the left hand knows what the right is doing.

I’d advise you to take a look at the free Solution Accelerators from Microsoft, of which the Microsoft Deployment Toolkit (MDT) is one – I’ve used it for years and it’s amazing for highly configurable desktop deployments. SCM is a great tool to see what Microsoft recommend you do with your infrastructure; Windows is now quite secure out of the box, but if you want to you can harden it much more. Best of all it tells you what you need to do, where you need to do it and, most importantly, why!

Just remember that most registry changes require a reboot to take effect. Just because you remediate a setting it doesn’t necessarily mean the setting is in effect – look at TechNet and do your research.
