Category Archives: Hyper-V

Scripting Shared Nothing Live Migration

UPDATE: 16th September 2016 – Link to download fixed.

I was working with a customer recently to replace their existing Windows Server 2012 Hyper-V clusters and System Center 2012 SP1 Virtual Machine Manager (VMM) installation with new Windows Server 2012 Hyper-V clusters and System Center 2012 R2 Virtual Machine Manager installation.

The customer was concerned about downtime for moving their Virtual Machines (VMs) from their existing clusters to the new ones.

We looked at the option of using Shared Nothing Live Migration (SNLM) to move VMs between the clusters. Whilst an option, it wasn’t entirely realistic: there were in excess of 250 VMs, and because the names of the Logical Switches were different each VM took some time to process manually – and a manual, repetitive task is prone to errors. The customer thought they’d have to go through migrating roles, moving CSVs, taking down VMs, etc. Whilst that doesn’t sound too bad, I wanted to offer a better option.

So, looking at the options in PowerShell, it was obvious that Move-VM was the cmdlet I wanted to use. Looking at its parameters I found -CompatibilityReport, which “Specifies a compatibility report which includes any adjustments required for the move.” My first thought was: where do I get one of those from?

After a bit of digging on the internet I discovered Compare-VM which creates a Microsoft.Virtualization.Powershell.CompatibilityReport.

The Compatibility Report fundamentally contains information on what would happen if, in this case, we wanted to move a VM from one host to another.

So running:

Compare-VM <VMName> -DestinationHost <DestinationServer> -DestinationStoragePath <DestinationStoragePath> -IncludeStorage

gave me a Compatibility Report with some incompatibilities listed… Again, after some digging, I determined what these incompatibilities meant and how to resolve them.

I could then run Compare-VM -CompatibilityReport <VMReport>, which essentially asks, “if I did this to the VM, would it work now?” As long as you get no incompatibilities, all is good!

Once that completed we could use Move-VM -CompatibilityReport <VMReport> to move the VM from one host to another…

Now, whilst all these Compare-VM runs are underway, the source VM quite happily carries on running as normal.
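To make that concrete, here’s a rough sketch of the whole loop. The VM, host, path and switch names are examples, and the MessageID 33012 check (the “virtual switch not found” incompatibility) comes from Microsoft’s documented examples – check the Message text in your own reports:

# Sketch only: VM, host, path and switch names are examples
$Report = Compare-VM -Name 'VM01' -DestinationHost 'NewHost01' `
    -DestinationStoragePath 'C:\ClusterStorage\Volume1\VM01' -IncludeStorage

# See what's blocking the move
$Report.Incompatibilities | Format-Table MessageId, Message -AutoSize

# The switch incompatibility exposes the offending network adapter as its
# Source, so reconnect it to the destination's virtual switch
$Report.Incompatibilities | Where-Object { $_.MessageId -eq 33012 } |
    ForEach-Object { Connect-VMNetworkAdapter $_.Source -SwitchName 'LogicalSwitch1' }

# Ask "would it work now?" by re-running the comparison against the report
$Report = Compare-VM -CompatibilityReport $Report

# No incompatibilities left? Perform the Shared Nothing Live Migration
Move-VM -CompatibilityReport $Report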

So where is this going? After discussions with the customer I expanded the PowerShell script to cope with multiple VMs, check for Pass Through Disks, remove VMs from clusters, etc.

The basics of the script are that it requires several parameters:

  • SourceCluster – where are the VMs to move?
  • DestinationServer – where do you want to move the VMs to? (optional, if this isn’t specified then a random member of the destination cluster is chosen for each VM to be moved)
  • DestinationCluster – what cluster do you want to move the VMs to?
  • SwitchToConnectTo – what is the name of the Virtual Switch to use on the destination server/cluster? For example, if your VM/VMs are connected to a virtual switch called LogSwitch1 but your new cluster uses a virtual switch named LogicalSwitch1, you would specify LogicalSwitch1 for this parameter.
  • DestinationStoragePath – where do you want to put the VM’s storage on the destination cluster?
  • VMsToMove – this is a list of the VMs to be moved
  • LogPath – the path to a file you want to log the progress of the script to (optional)

Whilst this script may seem a little limited, it managed to save the customer a great deal of time in migrating their VMs from their old Hyper-V clusters to their new ones. It can be extended to use different Storage Paths for different VMs, different Virtual Switches, etc.
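For illustration, here’s a hypothetical skeleton of how those parameters hang together – this is just a sketch of the shape, not the actual downloadable script below:

param(
    [Parameter(Mandatory = $true)]  [string]   $SourceCluster,
    [Parameter(Mandatory = $false)] [string]   $DestinationServer,
    [Parameter(Mandatory = $true)]  [string]   $DestinationCluster,
    [Parameter(Mandatory = $true)]  [string]   $SwitchToConnectTo,
    [Parameter(Mandatory = $true)]  [string]   $DestinationStoragePath,
    [Parameter(Mandatory = $true)]  [string[]] $VMsToMove,
    [Parameter(Mandatory = $false)] [string]   $LogPath
)

foreach ($vmName in $VMsToMove) {
    # If no destination server was given, pick a random node in the destination cluster
    $target = if ($DestinationServer) { $DestinationServer }
              else { (Get-ClusterNode -Cluster $DestinationCluster | Get-Random).Name }

    # ...check for pass-through disks, remove the VM from the source cluster,
    # then Compare-VM / fix incompatibilities / Move-VM as shown above...
    if ($LogPath) { Add-Content -Path $LogPath -Value "Moving $vmName to $target" }
}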

Move-VMusingSNLM


Cisco UCS/FlexPod with Hyper-V 2012 R2

Over the past week or so I’ve had the pleasure of working on a green-field Hyper-V 2012 R2 installation on the Cisco UCS/FlexPod platform. This consists of:

  • Cisco UCS with B200-M3 Blades (24 of those)
  • Cisco Nexus 5500 switches (2 of)
  • NetApp FAS3250 (2 controllers)

Fundamentally it’s blade architecture but with added steroids.

The UCS platform makes applying service profiles to the blades very easy and, most of all, consistent. The biggest problem we had was temporarily disabling the multiple storage paths so that, when we were doing bare metal deployment with Virtual Machine Manager 2012 R2, WinPE would only see a single instance of each LUN.

Still, that aside, it worked very well and, best of all, we could use PowerShell to speak to the UCS to name the NICs in the Windows Server 2012 R2 Hyper-V host to match the names of the NICs in the service profile. No more having to figure out which one was which!
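As a sketch of the idea – the cmdlets are from Cisco’s UCS PowerTool module (the module name varies between PowerTool releases), and the UCS Manager address and service profile name are made up:

Import-Module CiscoUcsPS          # Cisco UCS PowerTool
Connect-Ucs -Name 'ucsm.contoso.com'

# Get the vNICs defined in the blade's service profile
$vnics = Get-UcsServiceProfile -Name 'HyperV-Host01' | Get-UcsVnic

foreach ($vnic in $vnics) {
    # UCS reports MACs as 00:25:B5:...; Windows uses 00-25-B5-...
    $mac = $vnic.Addr -replace ':', '-'
    Get-NetAdapter | Where-Object { $_.MacAddress -eq $mac } |
        Rename-NetAdapter -NewName $vnic.Name
}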

The main thing the platform is missing is consistent device naming (CDN), with that the deployments of the hosts would’ve been super quick. There’s a rumour that the CDN that would be presented would be the name of the NIC in the service profile that the host has, so for example if your service profile had a NIC called “HyperVMgmt” then that is what would be presented to WinPE during the deployment phase… That would certainly make things a lot easier!

It would be very useful if other manufacturers were to follow suit so you had the option to change the CDN that comes through from the hardware. Whilst in large deployments that may not make much sense, in a smaller environment where hosts are not deployed very often it could be very useful…

What I’ve been up to…

So 2013 was a bit of a crazy year for me…

After winning a place at TechEd Europe 2013 I got a new job working for Inframon as a System Center and Desktop Implementation Consultant, basically I get to work with System Center 2012 every day! Not only do I get to work with the latest and greatest software every day I get to work with the best System Center guys in the world.

I’ve gone from running a small, but very capable, installation of System Center to deploying different components of it for a variety of customers all over the UK. It’s been challenging but fantastic!

I’d like to put a special thank you out to the Microsoft UK DPE team and TechNet UK team who have inspired me to go out and learn System Center and Hyper-V. Without the free training offered by Microsoft through TechDays (online and in-person), Microsoft Virtual Academy and other free resources I wouldn’t be where I am now.


A Gotcha with Remove-SCVMHost

I’m in the process of rebuilding my Hyper-V cluster (4 nodes, nothing major) and I’m using Bare Metal Deployment (BMD) with System Center Virtual Machine Manager 2012 SP1 (SCVMM) to do so – why would I use anything else?

During the rebuild of the second Hyper-V host I did something slightly out of order (I removed the host from the domain before removing it from SCVMM – no idea why I did it like that, must have had a brain fart). By doing this the DNS entry for the host was removed, as it should be, and the host was powered down ready to be BMD’d from SCVMM. Realising my mistake I went into SCVMM PowerShell and ran:

Remove-SCVMHost <HOSTNAME> -Force

I’ve done this several times before, however this time there was an error message basically saying: “Err… Can’t find it”. Odd.

I looked in SCVMM and sure enough it was still there. Now the definition of insanity according to Einstein is doing the same thing over and over expecting different results – according to him I’ve gone insane… Anyway I recreated the DNS entry for the host and reran the PowerShell command above – success.

Somewhat later in the day I had to move the SCVMM role from one cluster node to another – it wouldn’t start. Looking at the event logs there were many .NET messages, and buried in them was: ‘VMM cannot find the Virtual hard disk object’ error 801. Eh? Following http://support.microsoft.com/kb/2756886 solved the issue.

Moral of the story – do things in the right order.
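In hindsight the right order is only a couple of lines (a sketch – the host name is hypothetical):

# 1. Remove the host from SCVMM first, whilst its DNS/domain records still exist
Get-SCVMHost -ComputerName 'HV02' | Remove-SCVMHost -Force

# 2. Only then drop it from the domain, ready for Bare Metal Deployment
Remove-Computer -ComputerName 'HV02' -UnjoinDomainCredential (Get-Credential) -Restart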


Windows Server 2012 R2 Shared VHDX Infrastructure for Private/Service Provider Cloud for Resiliency

Windows Server 2012 R2 introduces Shared VHDX for guest clusters inside Hyper-V. In previous versions of Windows Server, to create guest clusters in Hyper-V installations you needed to expose raw shared storage to the guest VMs, either through in-guest iSCSI (Windows Server 2008 R2 and above) or through Virtual Fibre Channel HBAs (Windows Server 2012).

There are three fantastic documents that Microsoft have produced for reference IaaS architecture:

  1. Infrastructure-as-a-Service Product Line Architecture Fabric Architecture Guide
  2. Infrastructure-as-a-Service Product Line Architecture Fabric Management Architecture Guide
  3. Infrastructure-as-a-Service Product Line Architecture Deployment Guide

The first deals with your Hyper-V infrastructure and storage, the second deals with the management software (System Center); the third tells you how to do it.

They are very informative, very detailed and very much worth reading; currently they are aimed at Windows Server 2012. For obvious reasons they do not recommend specific hardware vendors.

Microsoft is yet to release any reference architecture material for Windows Server 2012 R2 (as it is in preview) and these documents got me thinking about how you could protect your guest clusters that use Shared VHDX files.

Known issues with Shared VHDX

Please bear in mind that this information is based on the preview bits of Windows Server 2012 R2… So what are the issues?

  1. Backing up the guest OSs and data is not possible with Hyper-V host-level backups. You can back up the operating system aspect (probably not supported) but it will not back up the Shared VHDX file. You need to back up the guest cluster by installing agents inside the cluster – not ideal for service providers that want to offer backup without installing agents in guests and exposing their backup infrastructure – albeit a small part – to tenants
  2. You can’t replicate a Shared VHDX using Hyper-V replica
  3. You can’t hot-resize a Shared VHDX file; you can add more Shared VHDX files, but you can’t resize one whilst it’s live (unlike non-Shared VHDX files)

Hyper-V Replica and Shared VHDX

Hyper-V replica is an amazing inbox tool for replicating VMs from one Hyper-V server/cluster to another Hyper-V server/cluster. With Windows Server 2012 R2 you can add another point of replication – so you have tertiary replicas. Great – but what about your Shared VHDX?

In Hyper-V replica you can select the disks you want to replicate to the other server so in this case you would NOT select the Shared VHDX file. So how could you replicate the Shared VHDX? SAN replication (I’ll come back to this in a moment).
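In PowerShell terms that selection looks something like this (a sketch – names and paths are examples, and I’m assuming the shared VHDX can be excluded by path with -ExcludedVhdPath):

Enable-VMReplication -VMName 'GuestClusterNode1' `
    -ReplicaServerName 'replica01.contoso.com' `
    -ReplicaServerPort 80 `
    -AuthenticationType Kerberos `
    -ExcludedVhdPath 'C:\ClusterStorage\Volume1\GuestClusterA-Shared.vhdx'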

SMB Storage Spaces

Using Storage Spaces to store your VHDX files that contain your guest cluster VM operating systems is a no-brainer. Storage Spaces are very cheap to implement (JBODs are cheap) and with Windows Server 2012 R2 you get inbox data tiering (usually something associated with expensive SANs) – essentially moving the blocks of data that are accessed frequently to SSD for super-fast access.  Combine it with RDMA NICs (in my opinion iWARP is probably the best as you can route the traffic but you’ll take a small hit for it) and you’ve got an extremely rapid storage infrastructure. But what about the Shared VHDX? I’m getting there…
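Setting up a tiered space is refreshingly simple too (a sketch – pool and tier names and sizes are made up):

# Pool all eligible JBOD disks
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName 'Pool1' `
    -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
    -PhysicalDisks $disks

# Define an SSD tier and an HDD tier
$ssd = New-StorageTier -StoragePoolFriendlyName 'Pool1' -FriendlyName 'SSDTier' -MediaType SSD
$hdd = New-StorageTier -StoragePoolFriendlyName 'Pool1' -FriendlyName 'HDDTier' -MediaType HDD

# Carve a mirrored, tiered virtual disk out of the pool
New-VirtualDisk -StoragePoolFriendlyName 'Pool1' -FriendlyName 'TieredSpace1' `
    -StorageTiers $ssd, $hdd -StorageTierSizes 100GB, 900GB `
    -ResiliencySettingName Mirror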

The SAN is dead! Long live the SAN!

There has been a lot of chatter about whether or not SANs still have a future in the Microsoft Hyper-V world. On the surface you can see why – Storage Spaces with its tiering, write-back cache, CSV cache, deduplication of CSVs for VDI and all other goodies it brings makes a solid argument.

One key feature missing (at the moment – I can’t believe it’ll be long before this changes) is block-level synchronous (or even asynchronous) replication. Sure, there is DFS-R, but that doesn’t deal with open files, which your Shared VHDX would be.

SANs are really the only viable alternative for block-level synchronous (or even asynchronous) replication (there are some software products out there that can do it, but I don’t know enough about them to recommend any).

SANs are not cheap – that is a fact. If you’ve implemented Storage Spaces for Hyper-V storage then you’ve already saved a lot on your storage budget. So what you could potentially do is buy a “small” SAN that offers the type of replication you require and deploy that for Shared VHDX storage. However, we all know that Fibre Channel (if that’s your SAN of choice, and I’m going to assume it is) HBAs, switches, cables, etc. are not cheap, so how can you make it cheaper? Put a Storage Space in front of the SAN!

You’ll need 2 or more physical nodes (up to 8) so you’ve got redundancy; you can then connect each node to the SAN, either directly or through an FC switch, configure all the MPIO, CSVs, etc. and make the storage available via SMB. That way you’ve not had to deploy FC HBAs to all the hosts, put in as much switching infrastructure, etc. Also you don’t need storage that will support a large number of SCSI-3 persistent reservations, as the only reservations come from the Storage Space servers.

The key point is that a Storage Space can be accessed by multiple clusters/hosts; it doesn’t care what cluster a host belongs to (or even whether it is a member of a cluster) as long as all the correct Kerberos delegations and ACLs are in place (which System Center Virtual Machine Manager 2012 R2 can do for you). This allows you to move roles between clusters without having to move the storage – and as you can’t move Shared VHDX files this is quite important.
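At the SMB level that boils down to granting the host and cluster computer accounts Full Control on the share (and mirroring it on the NTFS ACLs) – something along these lines, with made-up names:

New-SmbShare -Name 'VMStore1' `
    -Path 'C:\ClusterStorage\Volume1\Shares\VMStore1' `
    -FullAccess 'CONTOSO\HV-Cluster1$', 'CONTOSO\HV-Node1$', 'CONTOSO\Hyper-V Admins'
# Remember to mirror these permissions on the folder's NTFS ACL as well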

This implementation allows service providers to not break “the red line” between hosts and the storage fabric i.e. you don’t expose your storage to your tenants.

The diagram below shows (very roughly) what this could look like:

Shared VHDX Hyper-V Cluster

I’ve highlighted the fault domains in this implementation and, as you can see, each cluster is a fault domain, as is each space. So, to reduce the impact of a failure of a fault domain, you could create N instances of Space 1, ensuring that no two guest cluster VM operating system disks are kept on the same space.

For example guest cluster A has two VMs and a Shared VHDX file. Server 1 for guest cluster A could reside on Space 1; Server 2 for guest cluster A could reside on Space N+1 – this just leaves the FC SAN as a fault domain (I’m certain the hardware vendor would be able to provide assistance here to reduce the likelihood of a component failure having a serious impact).

Using the Hyper-V Replicas

So you’re not just going to be able to turn on the replicas without making them aware of where to find their Shared VHDX file; enter Hyper-V Recovery Manager in Windows Azure. This coordinates the recovery of Hyper-V guest VMs, in a planned, unplanned or test manner, and has the ability to execute scripts, especially PowerShell scripts… With a PowerShell script you can manipulate a VM’s settings, including where to find its Shared VHDX file… So you’ll need to know the path to the Storage Space where the Shared VHDX replica will be (this will be the space in front of your SAN replica).
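The script itself could be as simple as re-attaching the shared VHDX from its replica location before the guest cluster starts – a sketch, with hypothetical VM names and path (-SupportPersistentReservations is the shared VHDX switch in the 2012 R2 preview):

# Runs at the recovery site after failover, before the guest cluster starts
foreach ($node in 'GuestClusterNode1', 'GuestClusterNode2') {
    Add-VMHardDiskDrive -VMName $node `
        -ControllerType SCSI -ControllerNumber 0 `
        -Path '\\sofs-dr.contoso.com\VMStore1\GuestClusterA-Shared.vhdx' `
        -SupportPersistentReservations
}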

Job done… In theory…

Le Caveat

I’ve absolutely no idea if any of this will be supported in Windows Server 2012 R2 yet, or if it will even work… It’s mainly my internal ramblings on a page. If I had some kit to try it on – I would.

Enhanced Session Mode for VM Guest whilst SysPrepping Guest VM in Windows Server 2012 R2

So today I was creating a Windows Server 2012 R2 VM so I could SysPrep it and put it into System Center Virtual Machine Manager 2012 R2 – I want to do a Bare Metal Deployment using the SysPrep’d image.

So whilst the OS in the guest VM was SysPrepping, it suddenly booted me out of the VM window! I attempted to reconnect – computer says no.

The VM was still running…

I went into Hyper-V settings on the host and disabled the Allow Enhanced Session Mode option:

Enhanced Session Mode

I then reconnected and computer says yes!
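For reference, the same toggle is available from PowerShell (run on the host):

# Disable Enhanced Session Mode on the host; set to $true to re-enable
Set-VMHost -EnableEnhancedSessionMode $false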


Where are Microsoft heading with the 2012 R2 releases?

So last week I attended my first TechEd Europe in Madrid. I won my place through the Microsoft TechNet UK TechEd Challenge (say that after a few pints…) for my System Center 2012 blog post.

Never in 4 days have I learnt so much!

With the new versions of Windows Server 2012 R2 and System Center 2012 R2 announced at TechEd North America, it was the turn of Europe to see what Microsoft had to offer with the latest versions. It is fair to say they’ve not disappointed anyone (that much) with the upcoming releases.

First of all – TechEd

Wow. So it was my first Microsoft conference, ever, and I enjoyed every minute. It was great to see so many of the Product Managers, Marketing Managers and downright technical geniuses that had made the trip over to share their enthusiasm for the next release. Out of all the sessions I could’ve possibly attended I only missed 1 – mainly due to my brain trying to process the sheer quantity of information!

It’s clear to see the preview releases are very stable (no BSODs in the demos) and there were no pre-recorded demos (unlike some other vendors).

Le Caveat

Everything below is my opinion and should be treated as such. No Microsoft employee has confirmed any of the information below; it is purely my personal speculation.

So what is Microsoft’s vision?

Everything Windows Server 2012 R2 and System Center 2012 R2 is based on the “Cloud OS” vision (I saw that slide so many times…) where there are 3 clouds:

  1. Private cloud: on-premises cloud powered by Windows Server 2012 (and R2 when released), System Center 2012 (and R2 when released), SQL Server 2012 (and 2014 when released), the Service Provider Framework and the Windows Azure Pack (which is awesome). This is generally seen as the starting point for everything; it doesn’t have to be, but if you’ve got it on-premises the rest is easy.
  2. Public cloud: this is Microsoft’s Azure cloud. It is powered by Windows Server 2012 (and R2 when released), System Center 2012 (and R2 when released), SQL Server 2012 (and 2014 when released) and the Azure Services (full blown) and some mega storage system using commodity hardware – no specialist SAN.
  3. Service Provider cloud: again running on Windows Server 2012 (and R2 when released), System Center 2012 (and R2 when released), SQL Server 2012 (and 2014 when released), the Service Provider Framework and the Windows Azure Pack (which is awesome – still). The idea here is for value added services from a service provider, customer choice – especially around data locations (think data laws).

This leads to the “one consistent platform” message; if your internal users can provision services using the Windows Azure Pack, they can use full blown Azure and Service Provider implementations too; no more learning of several different portals.

So what is enabling this?

The core new features of Windows Server 2012 R2 are going to change the game when it comes to Cloud.

  • Shared VHDX – enables guest clustering without having to expose directly mapped storage to guest VMs. This gives you all the features needed for upgrading underlying infrastructure whilst maintaining availability of guest VMs. Storage Migration will allow you to move the VM’s storage (i.e. the shared VHDX) whilst the underlying hardware is maintained, upgraded, replaced, etc. As an internal provider this will make my life so much easier (edit: this point needs to be clarified with some testing, I think I may have this wrong…) I can remove all my directly mapped LUNs and just use Shared VHDX files for the storage. Don’t use snapshots!
  • Online VHDX resize – the ability to change the size of the disk attached to a VM (grow AND shrink) without having to take the VM offline! Note: you still need to change the size of the partition within the guest; some clever use of PowerShell/System Center Orchestrator (provided the guest trusts the Orchestrator install) can do this, but it will require some effort to implement – it isn’t in the box (see the sketch after this list)
  • Storage QoS – you can now tune the number of IOPs on a virtual disk. No more IOP hoggers! I believe this only extends to additional disks, not disks with OSs in. As such, applications like SQL that love IOPs will have to be configured correctly in the guest for the Hyper-V provider to take advantage here (follow MS best practice and you’ll be fine)
  • Live Migration compression – in Windows Server 2012 R2 this will come enabled by default. Most virtualisation hosts are constrained by the amount of RAM they have to offer guests rather than the CPU cycles they can offer. Compression uses spare host CPU cycles to compress the Live Migration of RAM, so you can move a VM at twice the speed (if not more). If you’ve got RDMA NICs (and multiples of them) then the speed of your RAM will matter (that is not a typo); SMB Direct (RDMA) offloads everything from the system to the NICs
  • Extended replica – instead of just being able to replicate to one other host, you can replicate a replica. Perfect for Service Providers who offer replica as a service; they’re able to replicate the customer’s VMs to another host/data centre without having to have crazy expensive SANs
  • Hyper-V Network Virtualisation Gateway – until the Friday morning of TechEd I referred to this as the “magic gateway”, I just couldn’t figure out how it worked. After attending this session it all became very clear. This appears to be the brains behind the Virtual Networks offering on the Azure public cloud, the load balancer and all the other excellent networking offerings in Azure
  • Windows Server 2012 R2 Tiered Storage Spaces – on the surface this seems to be the StorSimple technology migrated to Windows Server 2012 R2. By tiering the storage available to Storage Spaces, Windows Server will move the most frequently read/written blocks (blocks not files – blocks could contain files, think VDI deduplication here) to the fastest storage available; this could be SSD, 15K disks, etc. This tiering gives amazing IOPs, especially when combined with CSV caching in memory. Best of all – it just uses JBODs on the back end! As I understand it, at the moment you can only have 8 nodes in a Scale-Out File Server cluster for this
  • Linux backups of Hyper-V guests – no longer will a VM pause when it is being backed up at the host level (provided your Linux version is correct). Microsoft have shied away from saying they’ve implemented VSS inside Linux but it is basically what they’ve done
  • Oracle support on Hyper-V – this is probably the final hurdle for high-end enterprise adoption of Hyper-V
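As a quick illustration of two of the points above, online VHDX resize and Storage QoS both surface as simple cmdlet calls (a sketch – VM names, paths, sizes and IOPS values are made up):

# Grow a data VHDX whilst the VM is running (the disk must be on a SCSI controller)
Resize-VHD -Path 'C:\VMs\SQL01\Data.vhdx' -SizeBytes 500GB

# ...then extend the partition inside the guest to use the new space,
# e.g. (run within the guest): Resize-Partition -DriveLetter D -Size <newsize>

# Cap an IOP-hungry data disk at 500 normalised IOPs
Set-VMHardDiskDrive -VMName 'SQL01' -ControllerType SCSI `
    -ControllerNumber 0 -ControllerLocation 1 -MaximumIOPS 500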

At TechEd the focus was VERY heavy on the “Cloud OS” vision and how System Center 2012 R2 and the Windows Azure Pack was going to power that throughout:

  • System Center Virtual Machine Manager (VMM) is the king maker. VMM will now deploy VM hosts from bare metal, VMs to hosts (whether that be Hyper-V, VMware or Citrix hosts) and with R2 it will deploy Scale-Out file servers for hosting VM storage from bare metal! Allegedly this list will increase too. Microsoft have stated that there is no reason why you shouldn’t move your workloads to VMs – this is squarely aimed at SQL server workloads. With the ability to SysPrep a SQL server you can now deploy them directly to VMs. Server App-V brings another string to VMM’s bow. As far as I can see Microsoft are targeting VMM at deploying all server workloads – physical and virtual.
  • Windows Azure Pack. This is enabling end-user provisioning of services from pre-defined templates (created in VMM) with an interface that is consistent with the Microsoft Azure Cloud. The Azure Pack sits between the end-user and the Service Provider Framework (this sits in front of System Center) and can be skinned to corporate colours. At a basic level it tells VMM what to do (via service templates) and as such does not necessarily require you to have Hyper-V as your virtualisation host – it will work with VMware and Citrix too. Best of all – it’s extensible. Microsoft will add more services over time and you can add your own in too.

So what about the other System Center components?

Data Protection Manager was relatively quiet, it was confirmed that this component will be able to use a clustered SQL server for its database but there will be no push to cluster DPM. You can make DPM highly available by running it as a VM on a Hyper-V failover cluster. You should be able to use VHDX files to store the DPM backups (this will remove the final pass-through disk in my DPM setup) – these will need to be fixed size though and will probably not support online resize – DPM can get very angry about other applications playing around with its disk(s).

I heard very little mention of System Center Configuration Manager 2012 R2 at TechEd. I may have been in the wrong sessions. With VMM taking over the role of deploying servers and ConfigMgr having tighter integration with Windows Intune, I see it becoming the client OS manager. Combine it with MDT and it is an extremely effective tool for desktop deployment and compliance monitoring. When it comes to Patch Management VMM already has the hosts; how long until it starts looking after guests? Admittedly ConfigMgr gives you all the reports, at the moment…

Operations Manager – there were some further strides forward, especially for monitoring Java applications. System Center Advisor is now baked in to the application (this is Microsoft’s cloud-based monitoring that uses information gained from customers to ensure your installations are in tip-top condition).

I didn’t hear anything about App Controller or End Point Protection.

Summary

The order of products to learn inside and out for effective Microsoft Cloud OS are:

  1. Windows Server 2012 R2: this is the base for everything. Microsoft runs on Microsoft best (or something similar – I’m sure the MS marketing team can correct me here). Once you know how Windows Server works, especially Hyper-V, you’ve got the foundations for your cloud
  2. System Center Virtual Machine Manager: this rules your cloud. VMM provisions and controls your cloud. I cannot stress how important this product will be in the next 12 months and far into the future
  3. System Center Operations Manager: this will monitor your cloud and all the applications running in it. There’s no point in having a bunch of amazing hardware if the services you’re running are performing like a 90-year-old in a 100 metre sprint. I’d include System Center Advisor in here too
  4. Windows Azure Pack: this is the front door to your cloud. It makes end-user provisioning of services much easier. You can also customise the pack, not only through colour schemes but you can add your own items in there too
  5. Data Protection Manager: no point in having an amazing cloud if you can’t restore data when you/your customer has a problem
  6. Service Manager: the perfect solution for service desk management, CMDB; it integrates with all the System Center components and offers rich reporting.
  7. App Controller: the key to where services get provisioned. From here you can provision services on premise or in the Cloud
  8. Orchestrator: the key to automation. Orchestrator can talk to all the System Center components, Windows Server 2012, Active Directory, Exchange, SQL Server (the MS list goes on and on) and a vast array of non-Microsoft software including BMC Remedy, VMware, etc.
  9. Configuration Manager: this is important to provide rich compliance information, integrated anti-malware protection, etc. I do believe however that with Desired State Configuration in Windows Server 2012 R2 the compliance monitoring aspects of ConfigMgr for servers will be used less and eventually be deprecated

With the alignment of Windows Server and System Center build/deployments Microsoft are making the life of an IT Pro much easier! Unlike when Windows Server 2012 was released there should be no delay in getting the management components up and running too.

Windows Server 2012 R2 and deduplication of CSVs

So just after the keynote this morning I saw Jeff Woolsey and asked him about deduping CSVs on Hyper-V hosts. I wanted to know if it was supported for server workloads. Could I dedupe my CSVs that hold all of my SQL, Exchange, SharePoint, etc. VHD/X files?

Unfortunately not – the dedupe of CSVs is for VDI only.

That’s not to say you can’t do it, but don’t expect any kind of support, and who knows what the performance would be! You can dedupe inside VHDX files with R2 so all is not lost – provided it’s not the OS, pagefile, etc. And don’t even think about deduping a SQL database, especially with all the in-memory advancements in SQL 2014 – I’m pretty sure the performance would go down the toilet.
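Deduping inside a VHDX is just the normal volume deduplication, enabled on a data volume from within the guest (a sketch – requires the Data Deduplication feature):

# Inside the guest: enable dedupe on a data volume only, never the OS volume
Add-WindowsFeature FS-Data-Deduplication
Enable-DedupVolume -Volume 'D:'
Start-DedupJob -Volume 'D:' -Type Optimization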

There are, however, options. Many storage vendors offer deduplication on their appliances/SANs etc., so if it’s important to you, check out what the hardware vendors can offer.

Windows Server 2012 R2 – Shared VHDX, DPM and Hyper-V Replica

So today is the first day (proper) of TechEd 2013 and so far it has been amazing. The conference centre is vast, the halls are huge and so far so good.

I went to the Introduction to Windows Server 2012 R2 session presented by Jeff Woolsey – that man is a fountain of knowledge!

So after they demo’d some of the new functionality around 2012 R2, I got thinking about how Hyper-V Replica handles shared VHDX files – basically guest clustering without mapping raw LUNs through Fibre Channel or iSCSI. I also had a thought about how Hyper-V backups via the host would work with this. And one more… Can you replicate a Storage Space?

Fortunately I managed to put these queries to Jeff.

It’s official – Hyper-V guests with a shared VHDX file cannot be replicated (you can replicate the OSs but not the shared VHDX). Understandable due to the clustering aspects, but disappointing.

Also, backing up the guest OSs and data is not possible with Hyper-V host-level backups. Jeff’s answer here was to use backup tools inside the guest VMs. Not ideal – again understandable but still disappointing.

Replicating a storage space is not possible.

Who knows – it might come in Server 2012 R3, or whatever the next iteration will be called, but it’s not coming in R2.

Storage Spaces R2 and OEMs

So Microsoft have announced Windows Server 2012 R2 with some great changes to Storage Spaces.

It got me thinking about what we are going to be seeing in the not too distant future. I think OEMs are going to be creating Storage Spaces in a box – at the moment you can get a cluster-in-a-box solution – and this will morph into Storage Spaces in a box.

A cluster in a box is quite simply at least 2 separate servers sat in front of a boat load of disks, which are more than likely SAS Direct Attached Storage (DAS). That gives you a small cluster that can run multiple VMs (if that’s what you want it for; it could just be SQL Server – probably not supported by the manufacturer though).

So what’s to stop this cluster in a box becoming a Storage Spaces cluster in a box? You’ve got the 2 servers (at least) you’d need for a highly available cluster with the SAS DAS back end.

Take a look at the (rough) diagram below:

SS-1

All the OEMs need to do is change some of the disks to SSDs (this gives you Tiered Storage Spaces in Windows Server 2012 R2); the NIC interfaces on the front end could be optional components – for example 10Gb, InfiniBand, etc. or just straight 1Gb NICs. Put in multiple NICs and you can team them and you’ve got redundancy – especially with the Windows Server 2012 switch independent option.
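Teaming those front-end NICs is a one-liner (a sketch – team and adapter names are examples; the Dynamic load balancing algorithm is new in Windows Server 2012 R2):

New-NetLbfoTeam -Name 'StorageTeam' -TeamMembers 'NIC1', 'NIC2' `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic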

All of a sudden you’ve got a storage space in a box that you can connect your Hyper-V Failover Cluster(s) to!
