Blog Archives

Windows Server 2012 R2 Shared VHDX Infrastructure for Private/Service Provider Cloud for Resiliency

Windows Server 2012 R2 introduces Shared VHDX for guest clusters inside Hyper-V. In previous versions of Windows Server, to create guest clusters in Hyper-V you needed to expose raw shared storage to the guest VMs, either through in-guest iSCSI (Windows Server 2008 R2 and above) or through Virtual Fibre Channel HBAs (Windows Server 2012).
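Mechanically, a Shared VHDX is just a VHDX sitting on a CSV (or SMB 3.0 share) that gets attached to each guest cluster node with persistent reservations enabled. A minimal PowerShell sketch (VM names and paths are invented, and the -SupportPersistentReservations parameter is as per the R2 preview bits, so treat it as illustrative):

    # Create the shared data disk on a Cluster Shared Volume
    New-VHD -Path C:\ClusterStorage\Volume1\GuestClusterA\Data.vhdx -SizeBytes 100GB -Dynamic

    # Attach it to both guest cluster nodes with persistent reservations enabled
    Add-VMHardDiskDrive -VMName GCA-Node1 -Path C:\ClusterStorage\Volume1\GuestClusterA\Data.vhdx -SupportPersistentReservations
    Add-VMHardDiskDrive -VMName GCA-Node2 -Path C:\ClusterStorage\Volume1\GuestClusterA\Data.vhdx -SupportPersistentReservations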

There are three fantastic documents that Microsoft have produced for reference IaaS architecture:

  1. Infrastructure-as-a-Service Product Line Architecture Fabric Architecture Guide
  2. Infrastructure-as-a-Service Product Line Architecture Fabric Management Architecture Guide
  3. Infrastructure-as-a-Service Product Line Architecture Deployment Guide

The first deals with your Hyper-V infrastructure and storage, the second deals with the management software (System Center); the third tells you how to do it.

They are currently aimed at Windows Server 2012, are very detailed and are very much worth reading. For obvious reasons they do not recommend specific hardware vendors.

Microsoft is yet to release any reference architecture material for Windows Server 2012 R2 (as it is still in preview), and these documents got me thinking about how you could protect your guest clusters that use Shared VHDX files.

Known issues with Shared VHDX

Please bear in mind that this information is based on the preview bits of Windows Server 2012 R2… So what are the issues?

  1. Backing up the guest OSs and data is not possible with Hyper-V host-level backups. You can back up the operating system aspect (probably not supported) but it will not back up the Shared VHDX file. You need to back up the guest cluster by installing agents inside the cluster – not ideal for service providers that want to offer backup without installing agents in guests and exposing their backup infrastructure – albeit a small part – to tenants
  2. You can’t replicate a Shared VHDX using Hyper-V replica
  3. You can’t hot-resize a Shared VHDX file; you can add more Shared VHDX files but you can’t resize one whilst it’s live (unlike non-Shared VHDX files)

Hyper-V Replica and Shared VHDX

Hyper-V replica is an amazing inbox tool for replicating VMs from one Hyper-V server/cluster to another Hyper-V server/cluster. With Windows Server 2012 R2 you can add another point of replication – so you have tertiary replicas. Great – but what about your Shared VHDX?

In Hyper-V replica you can select the disks you want to replicate to the other server so in this case you would NOT select the Shared VHDX file. So how could you replicate the Shared VHDX? SAN replication (I’ll come back to this in a moment).
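Excluding the disk is just a parameter on the replication cmdlet – a quick sketch with invented server/VM names:

    # Replicate the guest cluster node but leave the Shared VHDX out of it
    Enable-VMReplication -VMName GCA-Node1 -ReplicaServerName DR-HV01 -ReplicaServerPort 80 -AuthenticationType Kerberos -ExcludedVhdPath C:\ClusterStorage\Volume1\GuestClusterA\Data.vhdx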

SMB Storage Spaces

Using Storage Spaces to store your VHDX files that contain your guest cluster VM operating systems is a no-brainer. Storage Spaces are very cheap to implement (JBODs are cheap) and with Windows Server 2012 R2 you get inbox data tiering (usually something associated with expensive SANs) – essentially moving the blocks of data that are accessed frequently to SSD for super-fast access.  Combine it with RDMA NICs (in my opinion iWARP is probably the best as you can route the traffic but you’ll take a small hit for it) and you’ve got an extremely rapid storage infrastructure. But what about the Shared VHDX? I’m getting there…
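To give a flavour of how little work the tiering is, here's a minimal sketch with the Windows Server 2012 R2 cmdlets (pool name, tier names and sizes are all made up; the SSD/HDD media types need to be reported correctly by your JBOD):

    # Pool the JBOD disks, carve out SSD and HDD tiers, then create a tiered mirror
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName Pool1 -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks

    $ssd = New-StorageTier -StoragePoolFriendlyName Pool1 -FriendlyName SSDTier -MediaType SSD
    $hdd = New-StorageTier -StoragePoolFriendlyName Pool1 -FriendlyName HDDTier -MediaType HDD

    # Frequently accessed blocks get promoted to the SSD tier automatically
    New-VirtualDisk -StoragePoolFriendlyName Pool1 -FriendlyName VMStore -ResiliencySettingName Mirror -StorageTiers $ssd,$hdd -StorageTierSizes 100GB,900GB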

The SAN is dead! Long live the SAN!

There has been a lot of chatter about whether or not SANs still have a future in the Microsoft Hyper-V world. On the surface you can see why – Storage Spaces with its tiering, write-back cache, CSV cache, deduplication of CSVs for VDI and all the other goodies it brings makes a solid argument.

One key feature missing (at the moment; I can’t believe it’ll be long before this changes) is block-level synchronous (or even asynchronous) replication. Sure, there is DFS-R but that doesn’t deal with open files, which your Shared VHDX would be.

SANs are really the only viable alternative for block-level synchronous (or even asynchronous) replication (there is some software out there that can do it but I don’t know enough about those products to recommend any).

SANs are not cheap – that is a fact. If you’ve implemented Storage Spaces for Hyper-V storage then you’ve already saved a lot on your storage budget. So what you could potentially do is buy a “small” SAN that offers the type of replication you require and deploy that for Shared VHDX storage. However we all know that Fibre Channel (if that’s your SAN of choice and I’m going to assume it is) HBAs, switches, cables, etc. are not cheap, so how can you make it cheaper? Put a Storage Space in front of the SAN!

You’ll need 2 or more physical nodes (up to 8) so you’ve got redundancy; you can then connect each node to the SAN, either directly or through an FC switch, configure all the MPIO, CSVs, etc. and make the storage available via SMB. That way you’ve not had to deploy FC HBAs to all the hosts, put in all the switching infrastructure, etc. Also you don’t need storage that will support a large number of SCSI-3 persistent reservations as the only reservations come from the Storage Space servers.
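The share itself is trivial once the CSV exists – something along these lines, with placeholder names (the Hyper-V hosts' computer accounts need full control on both the share and NTFS ACLs):

    # Continuously available share on the Storage Space nodes, ACL'd for the Hyper-V hosts
    New-SmbShare -Name VMStore1 -Path C:\ClusterStorage\Volume1\Shares\VMStore1 -ContinuouslyAvailable:$true -FullAccess CONTOSO\HV01$, CONTOSO\HV02$, CONTOSO\HVAdmins

    # Mirror the share permissions down to the NTFS ACLs (new cmdlet in 2012 R2)
    Set-SmbPathAcl -ShareName VMStore1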

The key point is that Storage Spaces can be accessed by multiple clusters/hosts; it doesn’t care what cluster a host belongs to (or even if it is a member of a cluster) as long as all the correct Kerberos delegations and ACLs are in place (which System Center Virtual Machine Manager 2012 R2 can do for you). This allows you to move roles between clusters without having to move the storage – and as you can’t move Shared VHDX files this is quite important.
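If you’re not using VMM, 2012 R2 also gives you a cmdlet for the delegation piece – roughly this, with invented computer names (it needs the Active Directory PowerShell module and a 2012-level domain, as it uses resource-based constrained delegation):

    # Allow each Hyper-V host to delegate to the SMB storage cluster
    Enable-SmbDelegation -SmbServer SOFS1 -SmbClient HV01
    Enable-SmbDelegation -SmbServer SOFS1 -SmbClient HV02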

This implementation also means service providers don’t break “the red line” between hosts and the storage fabric, i.e. you don’t expose your storage to your tenants.

The diagram below shows (very roughly) what this could look like:

Shared VHDX Hyper-V Cluster

I’ve highlighted the fault domains in this implementation and as you can see each cluster is a fault domain, as is each space. So to reduce the impact of a failure of a fault domain you could create N number of Space 1, ensuring that no 2 guest cluster VM operating system disks are kept on the same space.

For example guest cluster A has two VMs and a Shared VHDX file. Server 1 for guest cluster A could reside on Space 1; Server 2 for guest cluster A could reside on Space N+1 – this just leaves the FC SAN as a fault domain (I’m certain the hardware vendor would be able to provide assistance here to reduce the likelihood of a component failure having a serious impact).

Using the Hyper-V Replicas

So you’re not just going to be able to turn on the replicas without making them aware of where to find their Shared VHDX file; enter Hyper-V Recovery Manager in Windows Azure. This coordinates the recovery of Hyper-V guest VMs in a planned, unplanned or test manner and has the ability to execute scripts, especially PowerShell scripts… With a PowerShell script you can manipulate a VM’s settings, including where to find its Shared VHDX file… So you’ll need to know the path to the Storage Space where the Shared VHDX replica will be (this will be the space in front of your SAN replica).
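A recovery-plan script could be as blunt as the sketch below: swap the disk path before the replica powers on. Every name and path here is hypothetical, and I haven't been able to test whether re-attaching a shared disk like this behaves on a replica VM:

    # Point the replica guest cluster nodes at the SAN-replicated copy of the Shared VHDX
    $newPath = '\\DR-SOFS1\VMStore1\GuestClusterA\Data.vhdx'
    foreach ($vm in 'GCA-Node1','GCA-Node2') {
        # Assumes the shared disk lives at SCSI controller 0, location 1 on both nodes
        Remove-VMHardDiskDrive -VMName $vm -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 1
        Add-VMHardDiskDrive -VMName $vm -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 1 -Path $newPath -SupportPersistentReservations
    }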

Job done… In theory…

Le Caveat

I’ve absolutely no idea if any of this will be supported in Windows Server 2012 R2 yet, or even if it will work… It’s mainly my internal ramblings on a page. If I had some kit to try it on – I would.


Storage Spaces R2 and OEMs

So Microsoft have announced Windows Server 2012 R2 with some great changes to Storage Spaces.

It got me thinking about what we are going to be seeing in the not too distant future. I think OEMs are going to be creating Storage Spaces in a box – at the moment you can get a cluster in a box solution – this will morph into Storage Spaces in a box.

A cluster in a box is quite simply at least 2 separate servers with a boatload of disks behind them, more than likely SAS Direct Attached Storage (DAS). That gives you a small cluster that can run multiple VMs (if that’s what you want it for; it could just be SQL Server – probably not supported by the manufacturer though).

So what’s to stop this cluster in a box becoming a Storage Spaces cluster in a box? You’ve got the 2 servers (at least) you’d need for a highly available cluster with the SAS DAS back end.

Take a look at the (rough) diagram below:

SS-1: Storage Spaces in a box (rough diagram)

All the OEMs need to do is change some of the disks to SSDs (this gives you Tiered Storage Spaces in Windows Server 2012 R2); the NIC interfaces on the front end could be optional components – for example 10Gb, InfiniBand, etc. or just straight 1Gb NICs. Put in multiple NICs and you can team them and you’ve got redundancy – especially with the Windows Server 2012 switch-independent teaming option.
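The teaming part is a one-liner (team and NIC names are examples; the Dynamic load-balancing mode is new in R2):

    # Switch-independent team - no LACP/stacking configuration needed on the physical switches
    New-NetLbfoTeam -Name StorageTeam -TeamMembers NIC1,NIC2 -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic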

All of a sudden you’ve got a storage space in a box that you can connect your Hyper-V Failover Cluster(s) to!


System Center Configuration Manager and Cluster Aware Updating

Microsoft have created a new feature in Windows Server 2012 for updating clusters called Cluster Aware Updating.

The premise is simple: it coordinates updating your clusters for you, i.e. moves roles to other nodes, updates the node, moves roles back, updates the other node, puts everything back in its place – job done.

It’s great if you’re not using Microsoft’s flagship configuration management application – System Center Configuration Manager (ConfigMgr)…

If you’re just using WSUS for updates then it’s relatively straightforward to get up and running.
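To give an idea of “straightforward”, a one-off CAU run with the inbox Windows Update plug-in (which pulls from WSUS if that’s where your nodes point) is about this much effort – the cluster name is from my environment, the rest are illustrative defaults:

    # Patch the cluster one node at a time, failing the run if a single node can't be updated
    Invoke-CauRun -ClusterName HQ-File -CauPluginName Microsoft.WindowsUpdatePlugin -MaxFailedNodes 0 -MaxRetriesPerNode 3 -RequireAllNodesOnline -Force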

If you’re using ConfigMgr you need a hell of a lot more software to make this work and there are no straightforward tick boxes. Helpfully Neil Patterson has created some runbooks for System Center Orchestrator to make it all work.

So what have I done?

I’m yet to roll out all of the System Center suite, so have I implemented a WSUS environment separate from my ConfigMgr environment? No.

All my clustered servers are imaginatively titled. For example my 2 node file cluster is called HQ-File, made up of HQ-File1 and HQ-File2… All other clusters are the same, Print (Print1, Print2), HQ-SCSQL (HQ-SCSQL1, HQ-SCSQL2) etc.

In ConfigMgr I’ve created three query-based collections based on my “Windows Server 2012 Servers – Non-Hyper-V Hosts” device collection.

  1. Cluster Servers 1: the criteria is very simple (see the WQL sketch after this list):
    1. System Resource.Name is like “%1” AND
    2. Services.Name is equal to “ClusSvc” AND
    3. Services.Start Mode is equal to “Auto”
    4. Limiting collection: “Windows Server 2012 Servers – Non-Hyper-V Hosts” (prevents Hyper-V hosts being included – they’re special and I patch those manually at the moment)
  2. Cluster Servers 2: the same as above except:
    1. System Resource.Name is like “%2” AND
  3. Windows 2012 Non-Clustered Servers:
    1. Include collection “Windows Server 2012 Servers – Non-Hyper-V Hosts”
    2. Exclude collection “Cluster Servers 1”
    3. Exclude collection “Cluster Servers 2”
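For the curious, the first collection looks roughly like this in WQL, wrapped in the ConfigMgr 2012 SP1 cmdlets – treat it as a sketch (the SMS_G_System_SERVICE class relies on services being in your hardware inventory):

    # WQL: server name ends in 1 and the Cluster service is set to Auto
    $wql = 'select SMS_R_System.ResourceId from SMS_R_System ' +
           'inner join SMS_G_System_SERVICE on SMS_G_System_SERVICE.ResourceID = SMS_R_System.ResourceId ' +
           'where SMS_R_System.Name like "%1" and SMS_G_System_SERVICE.Name = "ClusSvc" and SMS_G_System_SERVICE.StartMode = "Auto"'

    New-CMDeviceCollection -Name "Cluster Servers 1" -LimitingCollectionName "Windows Server 2012 Servers – Non-Hyper-V Hosts"
    Add-CMDeviceCollectionQueryMembershipRule -CollectionName "Cluster Servers 1" -RuleName "Cluster node 1s" -QueryExpression $wql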

Net result is 3 collections to deploy Windows Updates to. I have other collections for application updates like SQL, Exchange, etc. but that is outside this post. I’ll go through those another day.

So when updates are released I generally deploy the updates in this order:

  1. “Windows 2012 Non-Clustered Servers” to be installed on day X by B time (very much out of hours due to no redundancy on the services those installs are providing)
  2. “Cluster Servers 1” to be installed on day X+1 by C time (this is usually sometime in business hours – shock – so I can remediate if necessary before the 2nd cluster group updates)
  3. “Cluster Servers 2” to be installed on day X+2 by C time (this is usually sometime in business hours – shock – so I can remediate if necessary)

Whilst this isn’t necessarily the best way of doing this it works for me (I usually end up watching/running the installations to make sure it all goes well – unless they’re server core).

This results in all the servers getting updated as required. There is a risk that between patching Cluster Servers 1 and Cluster Servers 2 the roles move around and bad things happen… So far so good.

When it comes to server application updates, that is an entirely different issue. Moving databases between a SQL Server 2012 SP1 node and a SQL Server 2012 RTM node, for example, is a bad plan and will probably end up killing your database! The same goes for Exchange. Updating server applications is usually more “dangerous” than Windows updates when it comes to application stability!

Windows Server 2012 – is it any good? In a word YES!

I’ve been using Windows Server 2012 in production since Microsoft released SP1 for System Center 2012 (SC2012); as SC2012 RTM’d prior to Windows Server 2012 it didn’t support it until SP1 was released.

To put things into context, I’ve been using Windows Server since the days of NT4 (pre Active Directory). With the introduction of Active Directory in Windows 2000 Server it suddenly became so much easier to use once my brain figured out the monumental changes that Active Directory brought (I’d never seen Novell at the time).

So fast forward 15 years and Windows Server 2012 has arrived – new interface, new ways of working, new feature set and (vastly) improved features!

I dithered about which feature of Windows Server 2012 to concentrate this review on so I thought I’d do my top ten list and pick the one that means the most to me:

  1. Hyper-V 3
  2. De-duplication
  3. NIC Teaming
  4. Dynamic Access Control
  5. New Server Manager
  6. PowerShell 3.0
  7. SMB 3
  8. GUI to Core and vice versa without having to reinstall the OS
  9. Cluster Aware Updating
  10. Remote Group Policy Update

So I’ve decided to go with Hyper-V. There are a lot of reviews out there but here’s why I love Hyper-V 3.

THANK YOU! THANK YOU! THANK YOU!

No more is it a clear-cut case of virtualisation = VMware. When I was looking at implementing virtualisation in 2011 it was a question of “Can I afford VMware? No, Hyper-V it is”. With Hyper-V in Windows Server 2012, for me it’s now “Sure glad I didn’t go for VMware!”

The functionality improvements over Windows Server 2008 R2 are phenomenal. Like most people using Hyper-V in R2 I sometimes just wanted to crawl into the server room and scream as I realised I needed to give up another weekend to do some maintenance – usually on the Storage Area Network (SAN)!

Under previous incarnations of Hyper-V, if you wanted to move a Virtual Hard Disk (VHD) associated with a running Virtual Machine (VM) the only choice was to shut down the VM, move the VHD, update the VM settings and power it on… Whilst, like most other IT Pros, I compiled a vast quantity of scripts (sadly not PowerShell) to do these things, there was always that nagging feeling (what if it goes wrong…) – and obviously the associated downtime for end users (mustn’t forget them). This would usually lead to me telling my daughter that Daddy had to work at the weekend, and seeing the look of disappointment on a 3-year-old’s face always broke my heart! But thankfully no more – THANK YOU AGAIN!

Storage Migration comes to the rescue!

Storage Migration is the ability to move a VM’s storage (and configuration and snapshots) whilst it is running without any, well, a tiny amount of (if someone noticed I’d be flabbergasted), impact on the end user. This one feature saved me hours of downtime once I’d got my Windows Server 2012 Hyper-V cluster up and running (almost perfectly) and HP released an urgent firmware patch for some hard drives we were using (which explains the not quite perfect implementation)! I moved all the storage from one Cluster Shared Volume (CSV) to another (albeit slower, RAID 5 vs RAID 10) CSV, upgraded the firmware and then moved it all back! Then I repeated the sequence for the other CSV and put everything back where it should’ve been. Best of all, no downtime! NO DOWNTIME = NO COMPLAINTS FROM USERS (or wife/child)!
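The whole exercise boiled down to repeating a one-liner like this per VM (VM name and path invented):

    # Move a running VM's disks, configuration and snapshots to another CSV - no downtime
    Move-VMStorage -VMName FileVM01 -DestinationStoragePath C:\ClusterStorage\Volume2\FileVM01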

Now whilst some of you may say “Pah! You could do that with VMware!” may I remind you of three things:

  1. VMware is not cheap
  2. Hyper-V is part of your Windows Server 2012 licence (or free if you’re using Microsoft Hyper-V Server 2012 – just know your licencing if you’re using that edition)
  3. VMware is not cheap

Just to make it clear you can do Storage Migration on any supported guest VM Operating System (OS) – that includes Linux and Windows client OSs (handy for Virtual Desktop Infrastructure deployments)!

Now if you’ve got an Offloaded Data Transfer (ODX) enabled SAN then frankly you’re on the fast track with the above situation. My SAN doesn’t support ODX at the moment (come on HP – how hard can it be!) but if it did, oh my… The problem with the above situation was that the Hyper-V server had to copy the VHDs from one CSV to another. This meant my VHD had to leave the SAN, go through the physical switch, into the Hyper-V OS, only to be told to go back to the SAN but to a different disk. Whilst that’s not much of an issue for a 10MB file, if your primary user storage VHD is 600GB (x10 for the amount I had to move) that’s a lot of network traffic, processing on the Hyper-V host, etc. What if the SAN could do the move for you?

An ODX enabled SAN will move the files (in this case VHDs) for you at the storage level! No traversing the network and only a tiny amount of processing by the Hyper-V host as the SAN tells it what it’s up to. The SAN moves the file for you which I will guarantee (that’s a no money back guarantee) is faster than having Windows do it! Why do you think Microsoft created this?

There are other great features in Hyper-V 3 too: shared nothing live migration, essentially storage migration + live migration + steroids. This means that if you’ve got multiple Hyper-V hosts/clusters you can move VMs between them without having to do complex export/imports – provided the Hyper-V hosts are in the same domain, on the same hardware architecture (basically standard clustering rules). Another great feature is the introduction of Virtual Fibre Channel – you can now do with Fibre Channel what was previously only possible (and supported) with iSCSI. Essentially it allows you to pass the Fibre Channel adapter through to a VM in the same way as a network card.
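A shared nothing live migration is similarly terse once migration is enabled on both hosts – a sketch with invented host/VM names:

    # On both hosts: allow live migrations over any available network
    Enable-VMMigration
    Set-VMHost -UseAnyNetworkForMigration $true

    # Move the VM and its storage to the other (non-clustered) host in one operation
    Move-VM -Name FileVM01 -DestinationHost HV02 -IncludeStorage -DestinationStoragePath D:\VMs\FileVM01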

Hyper-V Replica

Microsoft licencing is one of those things that you wish was just easier to get your head around (you’re an IT Pro, not a legal expert). I can guarantee you that there are no loopholes – apparently Microsoft employs the same number of lawyers as developers! They’re going to protect their intellectual property come hell or high water! So what has this got to do with Hyper-V Replica?

Well something you may not be aware of in your Software Assurance (SA) benefits (you’ve got SA right? If not get it! Solves so many licencing issues) is Cold Back-ups for Disaster Recovery (DR). This allows you to have the same licenced server software on a “cold” backup server for DR – a Hyper-V VM that is replicated using Hyper-V replica is definitely “cold”!

So let’s take my situation as an example:

  1. 4 Node Hyper-V failover cluster. Each node has Windows Server 2012 Datacenter with SA (unlimited VMs)
  2. 2 Hyper-V servers in the DR data centre running the free Microsoft Hyper-V Server 2012 (no licences for guest OSs, but it can run unlimited VMs as long as each VM is licenced – see comment above about Microsoft licencing…)
  3. Each DR Hyper-V server also has a Windows Server 2012 Standard with SA licence assigned so I can run some VMs in perpetuity (for example an Exchange DAG node, file server with DFS-R, RDS server, System Center Data Protection Manager (SCDPM) secondary server – we’ve got a physical Domain Controller in DR)

Now selected guests in my primary data centre are replicated using Hyper-V replica to the DR Hyper-V nodes. The guest OSs in DR are off (part of Hyper-V replica) and as such are “cold” but have all the necessary server software installed to get the organisation up and running, and they are fully licenced under SA benefits (they’re replicas!). In addition the Hyper-V hosts are fully licenced to run as many guest VMs as I have licences for. Oh yeah, the replicated VMs can be ANY OS SUPPORTED BY HYPER-V! Just be careful with Exchange/SQL/basically any transactional software. You’re better off using a Database Availability Group for Exchange and AlwaysOn/clustering/mirroring for SQL (top-end Microsoft software usually has its own high-availability solution).

So by using replica I’ve got all my VMs in a state that is approximately 5 minutes behind live (that is amazing) – by the way this is all included in your Windows Server licence (no additional licences required, no expensive asynchronous/synchronous SANs to deploy). Prior to replica we were using Hyper-V backups in SCDPM; they were at least an hour behind if not more!

Summary

Hyper-V has come a long way since its first incarnation in Windows Server 2008. Microsoft has been playing catch-up with VMware but now the two are very much on a level playing field; in my opinion Microsoft are ahead. If you’ve already got VMware then look at Windows Server 2012 Hyper-V when it’s time to refresh/renew your VMware infrastructure/licences. Chances are you’ve already got Windows Server Datacenter licences, in which case you’ve already got Hyper-V – if so just think what else you could spend your VMware licence renewal budget on!

Update 23/5/13

Microsoft has just released a Hyper-V Capacity Planner – should help with figuring out where to spend the VMware renewal budget…

Note: Anything I say about licencing is from my perspective and should in no way be treated as 100% accurate. Check with MS licencing specialists.
