TechEd North America 2014

So it’s TechEd North America this week; hopefully Microsoft will give out some nuggets of information about the next versions of Windows and System Center, but looking at the dates I’d think that information will come out at TechEd Europe in October…

Two of my colleagues, Martyn Coupland and Gordon McKenna MVP, are speaking this week, so if you’re going I’d advise catching their sessions as they know what they’re talking about!

I’m on a customer site all week but, thanks to the time difference, I’ll hopefully be able to catch the keynote live and possibly some other sessions too…


Scripting Shared Nothing Live Migration

UPDATE: 16th September 2016 – Link to download fixed.

I was working with a customer recently to replace their existing Windows Server 2012 Hyper-V clusters and System Center 2012 SP1 Virtual Machine Manager (VMM) installation with new Windows Server 2012 Hyper-V clusters and System Center 2012 R2 Virtual Machine Manager installation.

The customer was concerned about downtime for moving their Virtual Machines (VMs) from their existing clusters to the new ones.

We looked at using Shared Nothing Live Migration (SNLM) to move VMs between the clusters. Whilst it was an option it wasn’t entirely realistic: there were in excess of 250 VMs, the names of the Logical Switches were different, and each VM took some time to process manually; a manual, repetitive task like that is prone to errors. The customer thought they’d have to go through migrating roles, moving CSVs, taking down VMs, etc. Whilst that doesn’t sound too bad, I wanted to offer a better option.

So, looking at the options in PowerShell, it was obvious that Move-VM was the cmdlet I wanted to use. Looking at the parameters I found -CompatibilityReport, which “Specifies a compatibility report which includes any adjustments required for the move.” My first thought was: where do I get one of those from?

After a bit of digging on the internet I discovered Compare-VM which creates a Microsoft.Virtualization.Powershell.CompatibilityReport.

The Compatibility Report fundamentally contains information on what would happen if, in this case, we wanted to move a VM from one host to another.

So running:

Compare-VM -Name <VMName> -DestinationHost <DestinationServer> -DestinationStoragePath <DestinationStoragePath> -IncludeStorage

gave me a Compatibility Report with some incompatibilities listed… Again after some digging I determined what these incompatibilities meant and how to resolve them.

I could then run Compare-VM -CompatibilityReport <VMReport>, which essentially asks: “if I did this to the VM, would it work now?” As long as you get no incompatibilities, all is good!

Once that completed we could use the Move-VM -CompatibilityReport <VMReport> cmdlet to move a VM from one host to another…

Now, whilst all these Compare-VM runs are underway, the source VM is quite happy existing and running as normal.
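The flow described above can be sketched roughly as follows. This is a minimal, illustrative sketch, not the customer script: the variable names, the switch name and the incompatibility message ID (33012 is commonly the “could not find Ethernet switch” message, but verify against your own report) are all assumptions.

```powershell
# Generate a compatibility report for moving one VM (placeholder names).
$report = Compare-VM -Name $vmName -DestinationHost $destHost `
    -DestinationStoragePath $destPath -IncludeStorage

# Repoint any network adapters flagged as incompatible at the switch
# that actually exists on the destination.
$report.Incompatibilities |
    Where-Object { $_.MessageId -eq 33012 } |
    ForEach-Object { $_.Source | Connect-VMNetworkAdapter -SwitchName 'LogicalSwitch1' }

# Re-run the comparison against the amended report; if nothing is
# flagged, hand the report to Move-VM to perform the migration.
$report = Compare-VM -CompatibilityReport $report
if (-not $report.Incompatibilities) {
    Move-VM -CompatibilityReport $report
}
```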

So where is this going? After discussions with the customer I expanded the PowerShell script to cope with multiple VMs, check for Pass Through Disks, remove VMs from clusters, etc.

The basics of the script are that it requires several parameters:

  • SourceCluster – where are the VMs to move?
  • DestinationServer – where do you want to move the VMs to? (optional, if this isn’t specified then a random member of the destination cluster is chosen for each VM to be moved)
  • DestinationCluster – what cluster do you want to move the VMs to?
  • SwitchToConnectTo – what is the name of the Virtual Switch to use on the destination server/cluster? For example, if your VMs are connected to a virtual switch called LogSwitch1 but your new cluster uses a virtual switch named LogicalSwitch1, you would specify LogicalSwitch1 for this parameter.
  • DestinationStoragePath – where do you want to put the VM’s storage on the destination cluster?
  • VMsToMove – this is a list of the VMs to be moved
  • LogPath – the path to a file you want to log the progress of the script to (optional)
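An illustrative parameter block for a script of this shape might look like the following. The parameter names mirror the bullet list above, but this is an assumption-laden sketch, not the actual script.

```powershell
param(
    [Parameter(Mandatory)] [string]   $SourceCluster,
    [string]                          $DestinationServer,
    [Parameter(Mandatory)] [string]   $DestinationCluster,
    [Parameter(Mandatory)] [string]   $SwitchToConnectTo,
    [Parameter(Mandatory)] [string]   $DestinationStoragePath,
    [Parameter(Mandatory)] [string[]] $VMsToMove,
    [string]                          $LogPath
)

foreach ($vm in $VMsToMove) {
    # If no destination server was given, pick a random node in the
    # destination cluster, as described above.
    $target = if ($DestinationServer) { $DestinationServer }
              else { (Get-ClusterNode -Cluster $DestinationCluster | Get-Random).Name }

    # ... Compare-VM / Move-VM logic per VM goes here ...
}
```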

Whilst this script may seem a little limited, it saved the customer a great deal of time in migrating their VMs from their old Hyper-V clusters to the new ones. It can be extended to use different Storage Paths for different VMs, different Virtual Switches, etc.


Cisco UCS/FlexPod with Hyper-V 2012 R2

Over the past week or so I’ve had the pleasure of working on a green-field Hyper-V 2012 R2 installation on the Cisco UCS/FlexPod platform. This consists of:

  • Cisco UCS with B200-M3 Blades (24 of those)
  • Cisco Nexus 5500 switches (2 of)
  • NetApp FAS3250 (2 controllers)

Fundamentally it’s blade architecture but with added steroids.

The UCS platform makes applying service profiles to the blades very easy and, most of all, consistent. The biggest problem we had was temporarily disabling the multiple storage paths so that, when we were doing bare metal deployment with Virtual Machine Manager 2012 R2, WinPE would only see a single instance of each LUN.

Still, that aside, it worked very well and, best of all, we could use PowerShell to talk to the UCS and name the NICs in the Windows Server 2012 R2 Hyper-V host to match the names of the NICs in the service profile. No more having to figure out which one was which!
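The NIC-naming trick works by matching MAC addresses between the service profile and Windows. A hedged sketch, assuming the Cisco UCS PowerTool module (module, cmdlet and server names here are placeholders; verify against your PowerTool version):

```powershell
# Connect to the fabric interconnect (placeholder hostname).
Import-Module CiscoUcsPs
Connect-Ucs -Name ucs-fi.contoso.local

# Pull the vNICs defined in the host's service profile.
$vnics = Get-UcsServiceProfile -Name 'HyperV-Host-01' | Get-UcsVnic

foreach ($vnic in $vnics) {
    # UCS reports MACs as AA:BB:CC...; Get-NetAdapter uses AA-BB-CC...
    $mac = $vnic.Addr -replace ':', '-'

    # Rename the matching Windows NIC to the service-profile vNIC name.
    Get-NetAdapter | Where-Object MacAddress -eq $mac |
        Rename-NetAdapter -NewName $vnic.Name
}
```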

The main thing the platform is missing is Consistent Device Naming (CDN); with that, the deployments of the hosts would’ve been super quick. There’s a rumour that the CDN presented would be the name of the NIC in the host’s service profile, so, for example, if your service profile had a NIC called “HyperVMgmt” then that is what would be presented to WinPE during the deployment phase… That would certainly make things a lot easier!

It would be very useful if other manufacturers were to follow suit so you had the option to change the CDN that comes through from the hardware. In large deployments that may not make much sense, but in a smaller environment where hosts are not deployed very often it could be very useful…

What I’ve been up to…

So 2013 was a bit of a crazy year for me…

After winning a place at TechEd Europe 2013 I got a new job working for Inframon as a System Center and Desktop Implementation Consultant, basically I get to work with System Center 2012 every day! Not only do I get to work with the latest and greatest software every day I get to work with the best System Center guys in the world.

I’ve gone from running a small, but very capable, installation of System Center to deploying different components of it for a variety of customers all over the UK. It’s been challenging but fantastic!

I’d like to put a special thank you out to the Microsoft UK DPE team and TechNet UK team who have inspired me to go out and learn System Center and Hyper-V. Without the free training offered by Microsoft through TechDays (online and in-person), Microsoft Virtual Academy and other free resources I wouldn’t be where I am now.


Windows Environment Variables and why devs need to use them!

Windows Environment variables hold all kinds of information for example:

  • The path to the user’s My Documents folder
  • The path to the user’s Application Data folder
  • The path to the local machine’s Windows folder

The list goes on…

In a typical Windows enterprise deployment, user folders are redirected to folders on file servers. This helps to minimise the amount of profile traffic moving around the network (if using roaming profiles), allows IT admins to easily back up/restore user files, and means the same target folder(s) are used regardless of whether a user is logging in to a full Windows desktop or via Remote Desktop Services Session Hosts (I include Citrix in this). The list of benefits is long and the above is by no means a full list.

So what is the point of this blog post?

Very occasionally it may be necessary to move a user’s data from one server to another, i.e. change the path of the redirected folder… Whilst this can be mitigated, to some degree, by the use of Distributed File System (DFS), there are some applications that really don’t like DFS paths. So if you’ve set a user’s Application Data folder to redirect to \\ServerX\UserData$\<username>\AppData you’ll find numerous entries in the registry for this path – usually stored in an application’s settings to help it find files it may need.

What happens when you need to change the server?

If you need to change the server in the above example from ServerX to ServerY, then you would make the appropriate changes in Group Policy (after setting the new share up as it should be) and when the user next logs on their data will move to the appropriate location (there are several other ways of achieving this, but I’ve always found this one to be the most straightforward). Once the user logs on, Windows is aware of the changes: the environment variables are updated, the folder paths in the operating system sections of the registry are updated, etc. What doesn’t get updated is random application Z’s settings paths. Consequently, when application Z starts, it goes to the registry to find its path information, gets the path \\ServerX\UserData$\<username>\AppData\Roaming\ApplicationZ and then goes crazy when it can’t access the path!

Not all applications are the same

I’m an IT Pro, not a developer; however, I have done development in the past so I think I can speak about this (a little bit). Most developers are aware of this issue and code appropriately; it’s not difficult, as environment variables can be accessed via standard API calls, or natively in .NET.

A small bit of defensive programming from developers can really help cut down on service desk calls. For example:

  1. Get path from registry
  2. Does path exist?
  3. Yes – carry on as normal
  4. No – recreate what the normal path would be (for example %APPDATA%\ApplicationZ). Are the files there?
  5. Yes – update registry and carry on as normal
  6. No – spit dummy out and tell the user
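The numbered steps above could be sketched like this (in PowerShell for convenience; the same logic applies in any language with environment-variable APIs). “ApplicationZ”, the registry key and the value name are all hypothetical:

```powershell
# Step 1: get the stored path from the registry (hypothetical key/value).
$regKey = 'HKCU:\Software\ApplicationZ'
$stored = (Get-ItemProperty -Path $regKey -ErrorAction SilentlyContinue).DataPath

# Steps 2-3: if the stored path still exists, carry on as normal.
if ($stored -and (Test-Path $stored)) {
    $dataPath = $stored
} else {
    # Step 4: rebuild the expected path from the environment variable
    # rather than trusting the stale registry value.
    $expected = Join-Path $env:APPDATA 'ApplicationZ'
    if (Test-Path $expected) {
        # Step 5: update the registry and carry on.
        Set-ItemProperty -Path $regKey -Name DataPath -Value $expected
        $dataPath = $expected
    } else {
        # Step 6: give up and tell the user.
        throw "ApplicationZ data not found at '$stored' or '$expected'."
    }
}
```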

Small things can help make big changes.

The End of Microsoft’s Advanced Certification – What Should Happen Now?

So Microsoft have decided to retire the MCSM, MCA and MCM certifications with no immediate replacements on the horizon.

Why do I care?

I started this year with ZERO Microsoft certifications. By the end of August 2013 I had passed 6! I’ve now got:

  • MCSA: Windows Server 2012
  • MCSE: Server Infrastructure
  • MCTS: Administering and Deploying System Center 2012 Configuration Manager

How did I get all of these so quickly? I’ve been working in IT for almost 15 years, so I’d like to think I’ve learnt quite a lot over that time! Combined with an employer that was a not-for-profit organisation (and consequently had cheap MS licensing), I was able to implement much more complex and varied technology than I could previously outside of a time-limited lab! Real-world experience vs lab: no contest.

After passing my first set of exams I was hooked, and I am now determined to get my MCSE: Desktop Infrastructure by the end of 2013. I was aiming for MCSE: Private Cloud, however those 2 exams are so outdated it is shocking.

I am so hooked that I wanted to stretch myself further and see how far up the Microsoft certification ladder I could go! Alas no further it seems…

What was wrong with the Certifications?

As Tim Sneath (Microsoft’s Senior Director of Microsoft Learning) said in his comment on the Please Don’t Get Rid of the MCM and MCA programs post on the Microsoft Connect site: “… many of the certifications currently offered are outdated – for example, SQL Server 2008…” and in my opinion therein lies the main problem with many Microsoft exams, not just the Master levels.

With the top-level certifications being massively behind the technology curve, where is the incentive for employers to pay the $20K it costs to achieve these qualifications? I understand that Microsoft had recently relaxed the requirements for the training aspect at the Master level.

For example: Microsoft Certified Master: Microsoft SQL Server 2008. Since the release of SQL Server 2008 we’ve had service packs that have changed functionality, SQL Server 2008 R2, SQL Server 2012, and SQL Server 2014 is not far from RTM! In a world where Microsoft wants customers to use the latest and greatest, how is this MCM still desirable?

Other problems with Microsoft Learning and Microsoft Press

There are major problems with the speed at which official Microsoft Press material is released. For example, the official Exam Ref book for 70-414 is published by O’Reilly and is not due for release until March 2014 (as of 2nd Sept 2013)! Just in time for the exams to be upgraded to include Windows Server 2012 R2 changes! I don’t know if that is down to Microsoft, the publisher or both!

Microsoft released System Center 2012 SP1 in December 2012, which made some huge changes to the functionality of some of the components in System Center – for example goodbye SCVMM 2012 self-service user portal, hello Hyper-V logical switch, hello client support for virtually every client device through ConfigMgr with Windows Intune support, Windows Server 2012/Windows 8 support and much more! Arguably they could have called that release System Center 2012 R2!

The exams for the MCSE: Private Cloud ARE still based on Windows Server 2008 R2 and System Center 2012 RTM, despite one of the [possible] prerequisites for achieving the MCSE being the MCSA: Windows Server 2012!

So what should Microsoft do?

  1. Get the current crop of exams up-to-date. Sort out the MCSE: Private Cloud so it reflects what is in use
  2. When huge functionality changes are introduced to applications (such as SP1 for System Center 2012) – update the exams within a reasonable amount of time. 6 months maximum
  3. In the case of exams like MCSE: Private Cloud where they are not going to be updated – keep evaluation versions of RTM software available to download, not just the latest versions
  4. Offer upgrade exams, just like 70-417, for MCSA and MCSE certifications. For example, if you’ve done MCSA: Windows Server 2012 you should be able to take an exam that covers just the new R2 functionality, so you’re as up-to-date as people taking newer exams that cover both RTM and R2 functionality
  5. Release training material promptly. Not everyone can afford to attend official Microsoft courses, or wants to. If exams are available then official training material should be too. Whilst this can be difficult, especially as most books are written before the software is RTM’d and are amended to take into account changes at RTM, it shouldn’t take as long as it does – especially in the age of eBooks.
  6. Create Master level exams that are relevant to today’s IT landscape. For example we’re now living in a world of Cloud, make a Master level certificate that covers this. Maybe even abstract the technology out and just go for concepts. Just because you know which button to click doesn’t mean you know why you should use that button! Understanding when and where to use a Cloud (public/private/hybrid) is as important as knowing an application’s specific button combination
  7. Learn the art of communication – MS gave 1 month’s notice that the Master level certifications were being pulled. Not good.

There may be some amazing new things coming from Microsoft on the qualification front – they may not.

There may be a new form of low-cost MSDN subscription to replace the now-dead TechNet subscriptions – there may not.

Communications from Microsoft may improve – I doubt it…

RDS Connection Broker on Azure IaaS (Microsoft patching is a problem)

As part of my upcoming change of employment I’ve been asked by my new employer to get up to speed on Windows Server 2012 VDI solutions and to get MCSE: Desktop Infrastructure ASAP!

After looking through the 70-415 syllabus I realised there were some gaps in my knowledge – mainly on the RDS Gateway and the RDS Connection Broker front. In my current environment UAG 2010 deals with the RDS Gateway for me – hence no real experience.

To Azure!

Not having all the capacity in the world (unlike Azure), I went and merrily built an RDS infrastructure on Azure IaaS… or so I thought.

The Windows Server 2012 images that Microsoft provides on Azure are patched and, unbeknownst to me, there is a problem installing the RDS Connection Broker role if you’ve got KB2821895 installed on your Windows Server 2012 instance… The currently available VM images from Microsoft have this patch installed (unsurprisingly).

What to do?


So I went to my local SCVMM install, dug out the ISO that contains the Windows Server 2012 RTM files, created a VM, installed Windows Server 2012 Datacenter Edition, SysPrep’d it and attempted to upload the VHD to Azure using PowerShell. Azure PowerShell is amazing and wonderful and brilliant… and didn’t work for me. At first I thought it was a problem with my local firewall/HTTP/HTTPS proxy (it usually is) but no! For once it was happy!
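For reference, the upload attempt looked roughly like this with the (then current) Azure Service Management cmdlets; the storage account, container and file paths below are placeholders, not my actual values:

```powershell
# Upload a local sysprepped VHD to a page blob in Azure storage.
Import-Module Azure
Add-AzureVhd -LocalFilePath 'D:\Images\WS2012-RTM.vhd' `
    -Destination 'https://mystorageaccount.blob.core.windows.net/vhds/WS2012-RTM.vhd' `
    -NumberOfUploaderThreads 8
```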

What to do?

To Cerebrata’s Azure Management Studio!

I downloaded the free trial of Cerebrata’s Azure Management Studio and my first thought was “Err… OK…” I ploughed on and found the correct way to upload a page blob (which VHDs need to be for use as VM images; VHDXs are not supported on Azure) to my container. It’s not the most amazing user interface I’ve ever seen but it does EXACTLY what you need it to do. They even give MVPs a free copy! (I wish I was an MVP!)

Back to Azure!

Once the VHD was up there it was back to the Azure portal to create the VM image template and the new VM for my RDS Connection Broker. Instead of installing the Windows Internal Database (which a standalone RDS Connection Broker uses) as part of the RDS setup wizard I decided to install that first and then use the RDS setup wizard. Job done!
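The order of operations described above can be sketched as follows (feature and cmdlet names as in Windows Server 2012; the server names are placeholders):

```powershell
# Install the Windows Internal Database first, before touching RDS.
Install-WindowsFeature -Name Windows-Internal-Database

# Then run the RDS deployment, which puts the Connection Broker on
# the server specified here.
New-RDSessionDeployment -ConnectionBroker 'rdcb01.contoso.local' `
    -WebAccessServer 'rdweb01.contoso.local' `
    -SessionHost 'rdsh01.contoso.local'
```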


There seems to be something going slightly awry at Microsoft with regard to patching at the moment. I can’t remember ever having to revert to an unpatched server OS to install a role before.

There have been recent problems with Hyper-V patches causing BSODs when using VLANs; ADFS has had issues, Exchange 2013 too, and Windows 7 is also having issues.

It makes me wonder if MS’s new rapid development cycle is causing substandard code, and consequently patches, to be released.

Like most people I try to test patches before rolling them into production; however, that can be an issue if you don’t have a test environment that matches your production environment. I think we’re heading towards a period where IT Pros are going to be reluctant to install patches without someone else doing it first, just in case it destroys their environment. This means systems that need patching will remain unpatched until someone bites the bullet and tries – the question is: who?

Update 13/9/2013

It would appear that Microsoft have fixed the issue with KB2821895.

Stop Bashing Windows RT!

There are a lot of articles in the IT press at the moment about how Windows RT is doomed to fail:

  1. Windows RT: DOA to almost everybody
  2. Microsoft Doesn’t Want To Admit Windows RT Is Dead
  3. Windows gains no tablet traction as PC OEMs turn to Android

The list goes on and on.

I was fortunate enough to attend TechEd Europe this year and Microsoft did an amazing discount offer on both flavours of its Surface hardware. Like the vast majority of attendees (that I spoke to) I purchased both. They ran the offer at TechEd North America, TechEd Europe and the Worldwide Partner Conference.

Yes Windows RT is not full Windows – MS never claimed it was. Did the marketing team go a bit nuts with the naming of the product? Yes. There are many examples of Microsoft products being “attacked” by the marketing department:

  1. Windows RT
  2. Windows Azure Services for Windows Server (now called the Windows Azure Pack)
  3. Windows Azure Active Directory (nothing to do with on-premises Active Directory – when it was first released many people, including me, thought they could use it as another AD controller!)

Anyway – back to Windows RT…

It’s not Windows: it looks like Windows, it smells like Windows, but it isn’t. It can’t run traditional desktop apps like Photoshop, AutoCAD, etc., but then neither can the iPad or any flavour of Android device (excluding the wacky hybrids, but they’re not running Photoshop on Android). It only runs apps from the Microsoft store – just like a non-jailbroken iDevice – and don’t even go there when it comes to the crazy world of Android, where each vendor has its own app store!

So what’s the problem(s)?

1. They called it Windows RT

Windows is a brand name, the same as Coca-Cola, Diet Coke, Coke Light, etc. Maybe they should have called it Windows Lite or something similar; immediately, people would know it’s different. I’m no marketing expert IN ANY WAY, but when Apple released the full iPhone, iPad, etc. (not including the original iPod) it was crystal clear that it wasn’t Mac OS X. It may share certain elements with its big brother but it looks different, smells different, is different. Windows RT looks the same. It obviously shares components with its big brother but it’s different. Easy fix – get rid of the desktop element on Windows RT. Make it Modern apps only (I know it mostly is); the problem there is Office RT, or whatever the official name is. Surely Microsoft could wrap it in a Modern app so you can’t see it running in desktop mode?

2. The hardware cost of Surface RT (at least) is WAY TOO HIGH!

When Amazon released the Kindle, Kindle Fire and other hardware, they sold them at cost or less. Maybe Microsoft is selling Surface RT devices at cost; if so, they need to sort their supply chain out! In my opinion they should even have made a loss on the product. Flood the market with devices. Yes, it would’ve made OEMs angry, but surely the whole point of releasing a product is getting people to use it! The more adoption a platform gets, the more incentive there is for developers to put apps in the associated app store. Apple’s products are premium products; they always have been and to some extent always will be. They got to the mass market first with the iOS devices and the App Store, and now it has the biggest app store out there (quantity does not equal quality). The vast majority of developers put apps on the iOS platform before they head over to Android and eventually Windows Phone/RT/8. Once the traction is there, the cost of the device can rise and OEMs can make more targeted hardware – higher capacity, more cores, better GPUs, whatever.

3. The price of hardware accessories for Surface RT is WAY TOO HIGH!

£99 for a Touch Cover! £109 for a Type Cover! £69.99 for a mouse! SERIOUSLY! That is insane. The accessories are very well made and do work well, but £99 for the “entry level” keyboard is just wrong. Take £50 off each keyboard: still expensive, but much more realistic. £69.99 for a mouse – it’s a mouse, not a USB port replicator! Take £30-£40 off that and it would be worth it.

My experience of Windows RT

As I stated earlier I have both flavours of Surface. When I leave my house in the morning there are 4 devices I pick up:

  1. My iPhone 4 – waiting for the Nokia 1020 to be released in the UK then it is GONE
  2. My work Nokia Lumia 900 – great workhorse for what I need it for
  3. My iPad mini – I love my iPad mini, it does everything I want it to do for me. Games, Facebook, Twitter, WordPress etc. It is not a big productivity device for me. Yes I can use iTap for RDP access to my environment, yes there are tools available for an IT Pro to manage a few aspects of IT
  4. My Surface RT – I NEVER leave home without it. Whilst in many respects it does the same as my iPad mini, it does so much more for me. I’ve updated mine to the Windows 8.1 Preview and it makes it so much better. I don’t use local storage on the device any more; everything goes through SkyDrive. I’ll sync down some videos (which look so much better on the Surface RT than on the iPad mini), work on some blog posts in Word, update my personal Excel spreadsheets etc., safe in the knowledge that when I find some Wi-Fi it will sync without me having to tell it to, or plug it in to my PC to take files off (thanks Apple), or email documents to myself. I’ve installed a bunch of apps and am very happy with the quality (quality is so much more important than quantity when it comes to app stores)

Admittedly it can’t do everything my Surface Pro can do, but the battery life is much better, it’s lighter and it doesn’t feel like you could fry an egg on the back of it. I use my Surface Pro too, when I need to.

Windows RT is squarely focused at consumers, and so it should be. Did Microsoft marketing get the name wrong? Yes, they did.

A Gotcha with Remove-SCVMhost

I’m in the process of rebuilding my Hyper-V cluster (4 nodes, nothing major) and I’m using Bare Metal Deployment (BMD) with System Center Virtual Machine Manager 2012 SP1 (SCVMM) to do so – why would I use anything else?

During the rebuild of the second Hyper-V host I did something slightly out of order (I removed the host from the domain before removing it from SCVMM – no idea why I did it like that, must have had a brain fart). By doing this the DNS entry for the host was removed, as it should be, and the host was powered down ready to be BMD’d from SCVMM. Realising my mistake I went into SCVMM PowerShell and ran:


I’ve done this several times before; however, this time there was an error message basically saying: “Err… can’t find it”. Odd.

I looked in SCVMM and sure enough it was still there. Now, the definition of insanity according to Einstein is doing the same thing over and over expecting different results – so according to him I’ve gone insane… Anyway, I recreated the DNS entry for the host and reran the PowerShell command above – success.

Somewhat later in the day I had to move the SCVMM role from one cluster node to another – it wouldn’t start. Looking at the event logs there were many .NET messages, and buried in them was: ‘VMM cannot find the Virtual hard disk object’ error 801. Eh? Going to consequently solved the issue.

Moral of the story – do things in the right order.



Windows Server 2012 R2 Shared VHDX Infrastructure for Private/Service Provider Cloud for Resiliency

Windows Server 2012 R2 introduces Shared VHDX for guest clusters inside Hyper-V. In previous versions of Windows Server, to create guest clusters in Hyper-V you needed to expose raw shared storage to the guest VMs, either through in-guest iSCSI (Windows Server 2008 R2 and above) or through Virtual Fibre Channel HBAs (Windows Server 2012).

There are three fantastic documents that Microsoft have produced for reference IaaS architecture:

  1. Infrastructure-as-a-Service Product Line Architecture Fabric Architecture Guide
  2. Infrastructure-as-a-Service Product Line Architecture Fabric Management Architecture Guide
  3. Infrastructure-as-a-Service Product Line Architecture Deployment Guide

The first deals with the Hyper-V infrastructure and storage, the second with the management software (System Center), and the third tells you how to do it.

They are very informative and are currently aimed at Windows Server 2012. These documents are very detailed and are very much worth reading. For obvious reasons they do not recommend specific hardware vendors.

Microsoft is yet to release any reference architecture material for Windows Server 2012 R2 (as it is in preview), and these documents got me thinking about how you could protect guest clusters that use Shared VHDX files.

Known issues with Shared VHDX

Please bear in mind that this information is based on the preview bits of Windows Server 2012 R2… So what are the issues?

  1. Backing up the guest OSs and data is not possible with Hyper-V host-level backups. You can back up the operating system aspect (probably not supported) but it will not back up the Shared VHDX file. You need to back up the guest cluster by installing agents inside it – not ideal for service providers that want to offer backup without installing agents in guests and exposing their backup infrastructure – albeit a small part of it – to tenants
  2. You can’t replicate a Shared VHDX using Hyper-V replica
  3. You can’t hot-resize a Shared VHDX file; you can add more Shared VHDX files, but you can’t resize one whilst it’s live (unlike non-Shared VHDX files)

Hyper-V Replica and Shared VHDX

Hyper-V replica is an amazing inbox tool for replicating VMs from one Hyper-V server/cluster to another Hyper-V server/cluster. With Windows Server 2012 R2 you can add another point of replication – so you have tertiary replicas. Great – but what about your Shared VHDX?

In Hyper-V Replica you can select the disks you want to replicate to the other server, so in this case you would NOT select the Shared VHDX file. So how could you replicate the Shared VHDX? SAN replication (I’ll come back to this in a moment).

SMB Storage Spaces

Using Storage Spaces to store the VHDX files that contain your guest cluster VM operating systems is a no-brainer. Storage Spaces are very cheap to implement (JBODs are cheap) and with Windows Server 2012 R2 you get inbox data tiering (usually something associated with expensive SANs), essentially moving the blocks of data that are accessed frequently to SSD for super-fast access. Combine it with RDMA NICs (in my opinion iWARP is probably the best, as you can route the traffic, though you’ll take a small hit for it) and you’ve got an extremely rapid storage infrastructure. But what about the Shared VHDX? I’m getting there…
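A tiered Space of the kind described might be built like this in Windows Server 2012 R2; pool names, tier sizes and resiliency settings below are purely illustrative:

```powershell
# Pool all the poolable JBOD disks.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName 'Pool1' `
    -StorageSubSystemFriendlyName '*Spaces*' -PhysicalDisks $disks

# Define the SSD and HDD tiers for inbox data tiering.
$ssd = New-StorageTier -StoragePoolFriendlyName 'Pool1' -FriendlyName 'SSDTier' -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName 'Pool1' -FriendlyName 'HDDTier' -MediaType HDD

# Carve a mirrored, tiered virtual disk with a write-back cache.
New-VirtualDisk -StoragePoolFriendlyName 'Pool1' -FriendlyName 'VMStore' `
    -StorageTiers $ssd, $hdd -StorageTierSizes 200GB, 2TB `
    -ResiliencySettingName Mirror -WriteCacheSize 1GB
```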

The SAN is dead! Long live the SAN!

There has been a lot of chatter about whether or not SANs still have a future in the Microsoft Hyper-V world. On the surface you can see why – Storage Spaces with its tiering, write-back cache, CSV cache, deduplication of CSVs for VDI and all other goodies it brings makes a solid argument.

One key feature missing (at the moment, I can’t believe it’ll be long before this changes) is block level synchronous (or even asynchronous) replication. Sure there is DFS-R but that doesn’t deal with open files which your Shared VHDX would be.

SANs are really the only viable alternative (there is some software out there that can do it but I don’t know enough about them to recommend any) for block level synchronous (or even asynchronous) replication.

SANs are not cheap – that is a fact. If you’ve implemented Storage Spaces for Hyper-V storage then you’ve already saved a lot on your storage budget. So what you could potentially do is buy a “small” SAN that offers the type of replication you require and deploy that for Shared VHDX storage. However, we all know that Fibre Channel (if that’s your SAN of choice, and I’m going to assume it is) HBAs, switches, cables, etc. are not cheap, so how can you make it cheaper? Put a Storage Space in front of the SAN!

You’ll need 2 or more physical nodes (up to 8) so you’ve got redundancy; you can then connect each node to the SAN, either directly or through an FC switch, configure all the MPIO, CSVs, etc. and make the storage available via SMB. That way you’ve not had to deploy FC HBAs to all the hosts, put in a large switching infrastructure, etc. Also, you don’t need storage that supports a large number of SCSI-3 persistent reservations, as the only reservations come from the Storage Space servers.

The key point is that a Storage Space can be accessed by multiple clusters/hosts; it doesn’t care what cluster a host belongs to (or even whether it is a member of a cluster) as long as all the correct Kerberos delegations and ACLs are in place (which System Center Virtual Machine Manager 2012 R2 can set up for you). This allows you to move roles between clusters without having to move the storage – and as you can’t move Shared VHDX files this is quite important.

This implementation allows service providers to not break “the red line” between hosts and the storage fabric i.e. you don’t expose your storage to your tenants.

The diagram below shows (very roughly) what this could look like:

Shared VHDX Hyper-V Cluster

I’ve highlighted the fault domains in this implementation and, as you can see, each cluster is a fault domain, as is each Space. So, to reduce the impact of a failure of a fault domain, you could create N instances of Space 1, ensuring that no two guest-cluster VM operating system disks are kept on the same Space.

For example, guest cluster A has two VMs and a Shared VHDX file. Server 1 for guest cluster A could reside on Space 1; Server 2 for guest cluster A could reside on Space N+1. This just leaves the FC SAN as a fault domain (I’m certain the hardware vendor would be able to provide assistance here to reduce the likelihood of a component failure having a serious impact).

Using the Hyper-V Replicas

So you’re not just going to be able to turn on the replicas without making them aware of where to find their Shared VHDX file; enter Hyper-V Recovery Manager in Windows Azure. This coordinates the recovery of Hyper-V guest VMs, in a planned, unplanned or test manner, and has the ability to execute scripts, especially PowerShell scripts… With a PowerShell script you can manipulate a VM’s settings, including where to find its Shared VHDX file… So you’ll need to know the path to the Storage Space where the Shared VHDX replica will be (this will be the Space in front of your SAN replica).
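The sort of recovery script such an orchestrator could invoke might look like this: re-attach the replicated Shared VHDX, now sitting on the Storage Space in front of the replica SAN, to each guest-cluster replica VM. The VM names, share path and controller locations are placeholders, and this is a sketch, not a tested recovery plan:

```powershell
foreach ($vm in 'GuestClusterA-1', 'GuestClusterA-2') {
    Add-VMHardDiskDrive -VMName $vm `
        -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 1 `
        -Path '\\SOFS-DR\VMStore\GuestClusterA-Shared.vhdx' `
        -SupportPersistentReservations   # marks the VHDX as shared
}
```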

Job done… In theory…

Le Caveat

I’ve absolutely no idea if any of this will be supported in Windows Server 2012 R2 yet, or even if it will work… It’s mainly my internal ramblings on a page. If I had some kit to try it on, I would.
