Microsoft have created a new feature in Windows Server 2012 for updating clusters called Cluster Aware Updating.
The premise is simple: it coordinates updating your clusters for you, i.e. moves roles to other nodes, updates the node, moves the roles back, updates the other node, and puts everything back in its place – job done.
It’s great if you’re not using Microsoft’s flagship configuration management application – System Center Configuration Manager (ConfigMgr)…
If you’re just using WSUS for updates then it’s relatively straightforward to get up and running.
If you’re using ConfigMgr you need a hell of a lot more software and there are no straightforward tick boxes to make this work. Helpfully, Neil Patterson has created some runbooks for System Center Orchestrator that do the job.
So what have I done?
I’m yet to roll out the whole System Center suite, so have I implemented a separate WSUS environment alongside my ConfigMgr environment? No.
All my clustered servers are imaginatively titled. For example my 2 node file cluster is called HQ-File, made up of HQ-File1 and HQ-File2… All other clusters are the same, Print (Print1, Print2), HQ-SCSQL (HQ-SCSQL1, HQ-SCSQL2) etc.
In ConfigMgr I’ve created three query-based collections based on my “Windows Server 2012 Servers – Non-Hyper-V Hosts” device collection.
- Cluster Servers 1: the criteria are very simple:
  - System Resource.Name is like “%1” AND
  - Services.Name is equal to “ClusSvc” AND
  - Services.Start Mode is equal to “Auto”
  - Limiting collection: “Windows Server 2012 Servers – Non-Hyper-V Hosts” (prevents Hyper-V hosts being included – they’re special and I patch those manually at the moment)
- Cluster Servers 2: the same as above except:
  - System Resource.Name is like “%2”
- Windows 2012 Non-Clustered Servers:
  - Include collection “Windows Server 2012 Servers – Non-Hyper-V Hosts”
  - Exclude collection “Cluster Servers 1”
  - Exclude collection “Cluster Servers 2”
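The membership logic of the three collections can be sketched in a few lines of Python (purely illustrative – the real criteria are the WQL queries above, and the server records here are made up):

```python
# Illustrative sketch of the collection logic above (the real criteria
# are WQL queries inside ConfigMgr); server names here are made up.

def classify(server):
    """Return which deployment collection a server falls into."""
    clustered = server.get("ClusSvc") == "Auto"
    if clustered and server["name"].endswith("1"):
        return "Cluster Servers 1"
    if clustered and server["name"].endswith("2"):
        return "Cluster Servers 2"
    return "Windows 2012 Non-Clustered Servers"

for s in [{"name": "HQ-File1", "ClusSvc": "Auto"},
          {"name": "HQ-File2", "ClusSvc": "Auto"},
          {"name": "HQ-Web",   "ClusSvc": None}]:
    print(s["name"], "->", classify(s))
```

The naming convention does the heavy lifting: the “%1”/“%2” suffix splits each cluster across the two collections, and anything without a running cluster service falls through to the non-clustered group.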
The net result is three collections to deploy Windows updates to. I have other collections for application updates like SQL, Exchange, etc. but those are outside the scope of this post – I’ll go through them another day.
So when updates are released I generally deploy the updates in this order:
- “Windows 2012 Non-Clustered Servers” to be installed on day X by B time (very much out of hours due to no redundancy in the services those servers provide)
- “Cluster Servers 1” to be installed on day X+1 by C time (this is usually sometime in business hours – shock – so I can remediate if necessary before the second cluster group updates)
- “Cluster Servers 2” to be installed on day X+2 by C time (again in business hours so I can remediate if necessary)
Whilst this isn’t necessarily the best way of doing things, it works for me (I usually end up watching/running the installations to make sure it all goes well – unless they’re Server Core).
This results in all the servers getting updated as required. There is a risk that between patching Cluster Servers 1 and Cluster Servers 2 the roles move around and bad things happen… So far so good.
When it comes to server application updates, that is an entirely different issue. Moving databases between nodes running SQL Server 2012 SP1 and SQL Server 2012 RTM, for example, is a bad plan and will probably end up killing your database! The same goes for Exchange. Updating server applications is usually far more “dangerous” than Windows updates when it comes to application stability!
System Center 2012 is not a single product – it is Microsoft’s collection of management software covering a wide range of functions: monitoring your infrastructure (servers, switches, SAN, etc.), backing up your entire Microsoft server estate, deploying operating systems (both client and server), creating private clouds, running an IT service desk and automating tasks. The list goes on – the software is vast.
So what does System Center 2012 mean to me?
There are 3 applications within System Center 2012 (SC2012) that I use day in day out (one of which saves users a lot of hassle):
Configuration Manager (ConfigMgr): this is all about users and devices and what should be available to them (applications, updates, operating systems). Since implementing ConfigMgr I’ve done a complete desktop operating system refresh (more about that later) and gone from hoping Windows updates installed to knowing what is, isn’t, and is yet to be installed, where and when.
Virtual Machine Manager (VMM): I run a 4-node Windows Server 2012 Hyper-V cluster which was (mostly) built without the aid of VMM – it took the better part of three days to build over Christmas; I’m sure with VMM it would’ve taken about a day – tops. It helps me to manage my Hyper-V cluster, create and manage my private clouds and get a high-level view of my SAN.
Data Protection Manager (DPM): this is probably my favourite SC2012 application – odd to say “I love backup” but it is so simple to use and so effective that it makes everything easier.
I’ve been using the Microsoft Deployment Toolkit (MDT) since 2008 to deploy operating systems and it is a great product – I really can’t believe they give it away free! It sounds crazy but all MDT is is a bunch of scripts and a very small database. In the past I’ve looked at the lines that said “this feature is only available with Zero Touch Installation (ZTI)” with a longing for ConfigMgr. So the day after I installed ConfigMgr I installed MDT 2012 Update 1 on the server and decided a full desktop operating system (OS) refresh was required – we’d just got our Software Assurance (SA) sorted on our desktop licences – to get everyone on Windows 7 Enterprise.
The process is the same as MDT: build, capture, deploy and relax! It’s not quite that easy but not too far off! The biggest difference is that you don’t really need to use the MDT console – you can do most of it through the ConfigMgr console. The one thing I think MDT does better than ConfigMgr is drivers.
Drivers in MDT are easy – create a decent folder structure, import the drivers you need for each model and away you go. If you’ve got duplicates MDT is fine with that; ConfigMgr, however, isn’t… By allowing you to duplicate drivers, MDT lets you create silos for each model you have, safe in the knowledge that it’ll work. ConfigMgr just gives you some errors when you try to import duplicate drivers, which then involves trawling log files to find out why. If I’m doing it wrong then please let me know!!!
Anyway, after days of work to get the drivers imported, categorised, tagged and built into driver packages I was ready to try a different model from my test box.
So I added some WMI queries in my task sequence to determine what hardware the operating system (OS) was about to be deployed on so it could determine which driver package to use. This worked perfectly and Windows 7 deployed successfully.
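The idea behind those task sequence conditions can be sketched as a simple lookup from the model string WMI reports (Win32_ComputerSystem.Model) to a driver package. The model names and package names below are hypothetical, purely for illustration:

```python
# Hypothetical sketch of the task sequence logic: WMI reports the
# hardware model (Win32_ComputerSystem.Model) and each model maps to
# one driver package. Model strings and package names are made up.
DRIVER_PACKAGES = {
    "Latitude E6420": "DriverPkg-E6420",
    "OptiPlex 790":   "DriverPkg-O790",
}

def pick_driver_package(wmi_model):
    """Return the driver package for this model, or None when no
    task sequence condition matches."""
    return DRIVER_PACKAGES.get(wmi_model)

print(pick_driver_package("OptiPlex 790"))
```

In the real task sequence each “Apply Driver Package” step carries a WMI query condition, so only the step matching the detected model runs.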
Working for a small organisation, I thought it would be better to get user-specific applications installed at deployment time rather than just handing over the “gold” image and waiting for ConfigMgr to figure out who the user is and what they require – so I went back to MDT to see if I could use the roles feature like I’d done previously.
I created a database, created a role, linked some ConfigMgr applications up to the role (they could be MDT applications, but then you could be duplicating applications – enter headache), populated the computer record in MDT, attached the role, redeployed the OS and went for a coffee.
The OS deployed as it should, the additional applications were installed and that was that. Cue multiple roles! The overall refresh went reasonably well, a few small issues but they were with poorly written applications rather than any specific ConfigMgr problems.
When ConfigMgr went wrong…
Picture the scene: it’s just before Service Pack 1 for SC2012 is released and I’m eagerly awaiting its Windows Server 2012 support when Microsoft released its now infamous Windows Management Framework (WMF) 3.0 update for Windows 7. At this point I was quite excited: PowerShell 3 on Windows 7 – more cmdlets than you can dream of and all the fun that PowerShell brings! So after diligently deploying it through ConfigMgr to all the client machines I started to notice that ConfigMgr wasn’t happy…
The clients were going inactive. My immediate thought was that something was wrong with the server, as every client was inactive. After extensive diagnosis I determined the server was fine – time to turn my attention to the clients. So I popped on to a well-known search engine and found other people were having the same issues – I thought “phew, not just me then” – but there was no answer. Cue hours of tinkering, uninstalling and reinstalling the client software – same result. After adding a few more worry lines, and looking in the mirror each morning for grey hairs, some news came from Microsoft, and I’m paraphrasing it here: don’t install WMF 3.0 if you’re using ConfigMgr 2012 RTM, it breaks it… Great.
My first thought was not suitable to be written on this blog; safe to say it involved many, many four-letter words…
So what to do? My ConfigMgr install was “officially” broken and I hadn’t got a clue what to do. Back to the well-known search engine… Many days later, after piecing bits together, I created a VB script, and several batch files, to sort the problem out – not the official way of resolving the problem but I wasn’t about to admit defeat… The scripts got executed overnight; I came in the next morning, opened the ConfigMgr console, took a deep breath and clicked on Devices, waited for the console to load (slowly turning blue as it took forever – or so it seemed) and finally breathed – success!
Then SP1 was released… The person at Microsoft who used the wrong digital certificate on the release should be – hang on, this is heading into four-letter-word territory. Moving on…
ConfigMgr has allowed me to ensure that everything is how I want, and need, it to be. Through the use of compliance settings I am able to ensure that Group Policy Object (GPO) settings are applied and maintained (auto-remediation is a big help in this area), that application settings are set and maintained (no more annoying “there is an update available” Adobe Flash notices to end users) and that if something does drift away from what is needed, I know about it and can find out why.
Virtual Machine Manager
VMM for me isn’t directly used to create amazing private clouds that users can log into, create VMs based on templates etc. My organisation isn’t big enough for those features – it’s my sanity checker.
When adding another node to my Hyper-V cluster I used Failover Cluster Manager to validate the cluster, and it all came back green as expected. So instead of finishing the process there, I went to VMM and ran the validation from there.
VMM seems to be much more aware of Windows Server 2012 networks than Windows Server 2012 itself! Whilst I had all the correct network adapters configured with the right subnets, VLANs, etc. there was one that had a minute difference from the other nodes. VMM picked this up and said “NICs don’t match”. Some head scratching later I realised what I’d done, changed the properties of one of the Hyper-V virtual switches, reran the VMM validation and all was good.
Whilst I’m certain that the cluster would’ve been fine without this change it is good to know that the product designed for this purpose really knows its stuff!
VMM allows me to view my SAN storage quickly and to see where I’ve got spare capacity and what may need adjustment. Combine this with VMM’s knowledge of my VMs and any thin provisioning I’ve done and I can see how much storage I have and what would happen if all my thinly provisioned storage suddenly filled up!
Data Protection Manager
I’ve been using DPM since 2011 when our business continuity provider told me about the product. Before then I’d never heard of it and I don’t know why not! DPM is Microsoft’s answer to Backup Exec, NetBackup, etc. but with one big difference – it’s a Microsoft product and as such DPM doesn’t play with non-Microsoft products. Currently there are no agents for operating systems created by other vendors (maybe SP2, or whatever they’re rebranding service packs to, will bring agents for other OSs).
DPM leverages the Volume Shadow Copy Service (VSS) to perform its backups. Back in the old days, every time you made a change to a file the archive attribute was set on the file, so the next differential/incremental backup would find the files with the archive attribute set and back them up (an incremental then resets the attribute). This meant the entire file was backed up. What if the file you changed was 1GB? What if you only changed a lower-case “a” to an upper-case “A”? The whole file was backed up. What a waste. What if there was a way of backing up only the part of the file that changed? Cue VSS.
By leveraging VSS, DPM makes one complete copy of the file, then tracks changes to the file at block level: it knows what changed, and where in the file, and backs up only that. Below is a rough example of what happens.
You create a protection group that has a retention policy of 3 days on disk. You have a 1GB file that you make changes to every day that is stored on an operating system protected by DPM. When you first protect the OS DPM creates an entire copy of the file. Each green block below represents a block within the file (not entirely accurate but enough to show how it works):
So after DPM has done its initial copy of the file and you make a slight change in the file the VSS service tracks this change:
On the next DPM synchronisation it copies only the changed block (no more copying a 1GB file each time you back up). Bearing in mind that DPM can synchronise protection groups as frequently as every 15 minutes (great for file shares when users are making lots of changes), it can keep the network traffic from peaking and troughing.
So at the next synchronisation DPM knows it already has all the green blocks and the red block. If you then make further changes to the same file VSS tracks this; in the example below DPM would only backup the blue blocks.
After 3 days (as the protection group dictates) the oldest recovery point is flushed into the original copy of the file; this means that if you then restore from the oldest recovery point you have, the copy of the file you get back will contain all the green blocks and the red block. As time progresses, each recovery point is flushed into the “original copy”. Combine this with long-term backup (standalone tape/tape libraries/virtual tape libraries) and you get rapid restoration of recent backups with the safety of long-term retention if needed.
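A toy model of that block-level behaviour can be written in a few lines of Python. This is illustrative only – real DPM tracks disk blocks via VSS, not list entries – but it shows why a one-character edit in a 1GB file no longer costs 1GB of backup traffic:

```python
# Toy model of DPM's block-level synchronisation (illustrative only --
# real DPM/VSS tracks disk blocks, not Python list entries).

def initial_replica(file_blocks):
    """First protection pass: copy every block of the file."""
    return list(file_blocks)

def sync(replica_blocks, file_blocks):
    """Return only the blocks that changed since the replica was taken;
    this delta is all that travels over the network."""
    return {i: b for i, b in enumerate(file_blocks)
            if replica_blocks[i] != b}

# a "1GB" file modelled as four blocks; one block gets edited
replica = initial_replica(["g1", "g2", "g3", "g4"])
edited = ["g1", "r2", "g3", "g4"]

delta = sync(replica, edited)
print(delta)  # only the single changed block is transferred

# flushing the oldest recovery point applies its delta to the replica
for i, b in delta.items():
    replica[i] = b
assert replica == edited
```

The final loop mirrors the “flush into the original copy” step: once a recovery point expires, its changed blocks are merged into the base copy so the oldest restorable version moves forward in time.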
Just a quick note on synchronisations and recovery points – you can specify a sync schedule of 15 minutes but you can only have 64 recovery points (this limit is hard coded in VSS). So if you need 10 recovery points created per day you can only have a retention period of 6 days (no part days allowed). If you specify a sync schedule of 15 minutes, DPM will copy the changed blocks every 15 minutes but will not create a recovery point until it is scheduled to, at which point it takes all the synchronisations between the last recovery point and the current time, mashes them together and creates a recovery point.
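The 64-recovery-point ceiling turns retention planning into simple arithmetic – a quick sketch:

```python
# VSS caps DPM at 64 recovery points per data source, so retention
# in whole days is just integer division.
MAX_RECOVERY_POINTS = 64

def max_retention_days(points_per_day):
    """Whole days of retention possible (no part days allowed)."""
    return MAX_RECOVERY_POINTS // points_per_day

print(max_retention_days(10))  # 10 recovery points/day -> 6 days
print(max_retention_days(4))   # 4 recovery points/day -> 16 days
```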
As DPM leverages VSS, the above functionality is available for any VSS-enabled application, for example SQL Server, Exchange Server and SharePoint (DPM is amazing with SharePoint). You can even use this with Hyper-V VHDs! If you’re using a storage appliance that is VSS aware you can leverage this at the hardware level (much faster than Hyper-V VSS).
You can integrate DPM with Active Directory so it can expose itself to end users. For example, you can combine it with file shares for end-user restores using the “Restore Previous Versions” feature of Windows. This can equate to fewer service desk calls (if the users are trained properly).
Other System Center Applications
There are several other SC2012 applications:
- System Center Operations Manager: monitors everything in your IT estate; you can even get it to look inside .NET applications to see what is happening in the code and what is slowing the whole thing down or causing it to crash!
- System Center Orchestrator: takes your repetitive tasks and automates them. Imagine you have a script that you run to fix a problem, but the problem happens at random times – how can this be automated? If you can capture the problem in Operations Manager you can get it to execute a runbook in Orchestrator to fix it for you and then tell you what it has done!
- System Center Service Manager: this is basically a service desk solution that hooks into all the other SC2012 products. It uses the best practices found in the Microsoft Operations Framework and the Information Technology Infrastructure Library (ITIL)
- System Center App Controller: gives you a portal from which you can manage applications on on-premise clouds and the Windows Azure platform
- System Center Endpoint Protection: Microsoft’s enterprise anti-malware product (here malware includes viruses) that uses ConfigMgr. It is lightweight and uses all the existing software update and reporting infrastructure you’ve built with ConfigMgr
- System Center Advisor: not strictly part of SC2012, but available to you if you have the necessary licencing; it allows you to monitor cloud-based applications
In SC2012 Microsoft have unified all their management products that were previously available separately. The changes in SC2012 reflect the overall changes in the way IT is used by businesses and end users. SP1 introduced further changes to the products to incorporate the latest developments from Microsoft, including Intune, Azure, Windows 8 (including the Windows Store), Windows Server 2012, Microsoft Desktop Optimisation Pack 2012 (MED-V 2, App-V 5, DaRT), Apple, Linux, etc. (far too many things to list).
Some of the SC2012 suite is evolutionary, some of it is revolutionary!
Azure is Microsoft’s public cloud offering with a wide variety of services available to consume. I’ve not really looked at Azure in the past as, not being a developer by trade (and not working for an organisation that develops its own software), there was very little that caught my attention – until now.
With the release of Infrastructure Services (IaaS), Microsoft has squared up to Amazon and basically said “bring it on!” I’ve been waiting for these services to become available for some time and they’ve not disappointed me. The two main things on offer are:
- Virtual Networks – create a Virtual Private Network (VPN) tunnel to your office/data centre from Azure
- Virtual Machines – there are several Microsoft, and non-Microsoft, operating systems that you can run (mostly what runs on a local Hyper-V server, but not all)
As I understand it, the Azure Virtual Machines (VMs) run on a modified version of Microsoft’s Hyper-V designed specifically for Azure. This means you can move a Hyper-V based VM to the Azure IaaS platform with very little effort (especially if you’re running System Center 2012 App Controller). There are a few caveats but they are quite straightforward to understand (apart from one – guess which one):
- Fixed-size Virtual Hard Disk (VHD) files only (at the moment VHDX – the new Windows Server 2012 VHD file format – is not supported). If the VHD contains operating system files there is a hard limit of 127GB; otherwise it’s 1TB.
- VMs only have one virtual network card (IP addresses are DHCP-leased to VMs for 150 years – not a typo!). Do not mess with the IP address of your VM or it will become totally inaccessible; the exception is changing DNS settings. Just BE CAREFUL.
- LICENCING! You need licence mobility if you’re moving your own VMs with server software to Azure (FAQ). Check the latest Product Use Rights (PUR) document. Exchange is not covered – Microsoft’s answer: use Office 365.
For me this is great – I’ve created a VPN from an unsupported device (shh! Don’t tell support) to the Europe North (Ireland) data centre. Before anyone criticises the European naming of the Azure data centres – blame the UN’s classification. Apparently Ireland is in Northern Europe and the Netherlands is in Western Europe… OK… Right… Someone should buy the UN an atlas.
So I’ve extended my data centre into Azure now giving me unlimited power – as long as the credit limit on my boss’ credit card is high enough! So what do I want to do with it?
I’m going to move some of what I’ve got in my perimeter network to Azure. There’s nothing stopping me from controlling what goes through the VPN via my firewall, and the Windows host firewall on the VMs, so it’s just as secure as most in house deployments (just need to get the pesky compliance guys to agree).
It is possible to open endpoints into VMs on Azure to publish applications – for example ports 80/443 for HTTP/HTTPS applications, port 21 for FTP, etc. You can open any port; the exception is ICMP traffic (basically pings) – ICMP internally within Azure and across a VPN is fine, but anything external, either incoming or outgoing, is blocked by the Azure firewall.
There are some crucial things to understand about endpoints, especially in load-balanced applications. Load balancing is done by the Azure load balancer, not anything you’ve got internally (unless you do some massively complex setup, which probably won’t be supported). The Azure load balancer is not a hardware product; it’s a software load balancer that does things slightly differently to traditional load balancers. If you have two servers in a load-balanced configuration you cannot guarantee that requests will go Server 1, Server 2, Server 1, Server 2, etc. It may well go Server 2, Server 1, Server 2, Server 1, Server 1, Server 1. There is method in there – somewhere!
There’s lots of information on the internet saying it uses round-robin, but on my VMs it didn’t – and on everyone else’s VMs in Steve Plank’s Windows Azure Camp for the IT Pro it didn’t either!
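One plausible explanation for that behaviour is a hash-based balancer: pick a server by hashing the connection details rather than taking strict turns. The sketch below is purely illustrative – it is not Azure’s actual algorithm – but it shows why successive requests can land on the same server several times in a row:

```python
# Purely illustrative: a connection-tuple hash spread over two servers.
# This is NOT Azure's actual algorithm, just a sketch of why a
# hash-based balancer doesn't alternate 1, 2, 1, 2 like round-robin.
import hashlib

SERVERS = ["Server 1", "Server 2"]

def pick(src_ip, src_port):
    """Deterministically map a connection to a server via a hash."""
    digest = hashlib.md5(f"{src_ip}:{src_port}".encode()).digest()
    return SERVERS[digest[0] % len(SERVERS)]

for port in range(50000, 50006):
    print(pick("203.0.113.7", port))
# successive connections hash independently, so runs of the same
# server are perfectly normal
```

The key property is determinism per connection, not fairness per request: the same source tuple always lands on the same server, but the sequence across different tuples looks random.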
You could, for example, run any software on a Windows Azure VM that you could run on a normal Hyper-V guest. SharePoint farm, anyone (there’s a template for that)? SQL Server AlwaysOn cluster (there’s a template for that)? Some other random LAMP-based application? Just remember to check the licencing! If you run Microsoft operating systems you’ll be able to obtain support from the Azure support team (provided you’ve paid for support – I’m not sure how that works with Software Assurance customers and their “free” incidents); however, if you’re running a Linux VM don’t bother calling support. It’s not Microsoft software, so they won’t support it – why would they?
I think the biggest application of Azure IaaS, for me, will be proof-of-concepts. The ability to extend my network and spin up a proof of concept, test it and demo it – all without impacting my production Hyper-V cluster – will be invaluable to me. With all the template VMs available in Azure it is quite easy to get concepts going in hours rather than days! The best thing is not having to worry about the underlying hardware resource (and someone else has done most of the hard work with the SQL installer).
So what’s next on Azure?
Obviously there is no public road map for Azure but there are several features in preview (this doesn’t guarantee they’ll make it to production):
- Point-to-site VPN – think traditional end-user VPN connections from laptops/desktops, etc.
- Websites – there is a wide variety of templates available, for example WordPress, Drupal, MODX
- Mobile Services – backend database for mobile apps
- HDInsight – big data based on Apache Hadoop
- Backup – native Windows backup to Azure and integration with System Center Data Protection Manager (one feature I am very much looking forward to)
- Hyper-V Recovery Manager – protection for your System Center Virtual Machine Manager private clouds – essentially coordinates Hyper-V replicas for you in a more complex fashion
Azure has been around for quite a while now, starting life as a Platform as a Service offering and slowly, if somewhat reluctantly, moving into Infrastructure as a Service. It’s going to become extremely important for Microsoft in the near future as it strives to take back ground from Amazon on the IaaS side. It will take time for the new services to mature but once they do (and the pricing drops – I hope) it may well, one day, completely replace the on-premise data centre, just as Microsoft’s Software as a Service offerings, Office 365 in particular, are starting to replace traditional on-premise deployments.
There are only two supported hardware vendors for Windows Azure Virtual Networks (VPNs) – Cisco and Juniper:
- Cisco ASA – OS version 8.3
- Cisco ASR – IOS 15.1 (static routing), IOS 15.2 (dynamic routing)
- Cisco ISR – IOS 15.0 (static routing), IOS 15.1 (dynamic routing)
- Juniper SRX – JunOS 10.2 (static), JunOS 11.4 (dynamic)
- Juniper J-Series – JunOS 10.4r9 (static), JunOS 11.4 (dynamic)
- Juniper ISG – ScreenOS 6.3 (static and dynamic)
- Juniper SSG – ScreenOS 6.2 (static and dynamic)
Windows Server 2012 Routing and Remote Access Service is also supported.
Helpfully Microsoft has created some templates for all of the above to help you create the VPN connections.
But what if you’ve got something that isn’t supported?
Well, you’re on your own! No support from Microsoft at all.
We use WatchGuard XTM 510 firewalls. Whilst not the most user-friendly or feature-packed, they are quite reliable.
Not being one to be deterred by this I forged on and got it working!
This worked for me; I hope it works for you! You’ll need:
- A Windows Azure subscription (can be a trial)
- A WatchGuard XTM 510 firewall (this will probably work with all XTM series; I’m not sure about the hypervisor-based products)
- To know the internal IP address range(s) you want to connect to Azure
- To know the external IP address you’ll be creating your VPN tunnel from (remember to check your outbound NAT settings if necessary)
- To know what IP subnet you want in Azure
- Some time and patience (don’t rush it or you’ll have to start all over again)
Log in to Azure (https://manage.windowsazure.com)
Go to the bottom left hand corner and select New. Navigate to Networks, Virtual Network, Custom Create
In the Virtual Network Details put in your details (remember to pick the appropriate region). Create a new affinity group if necessary or select a pre-existing one.
If you want your Azure VMs to use your internal DNS servers you must add them here. Select Site-To-Site VPN and Specify a New Local Network (this is your internal office/data centre network, not Azure’s). It should end up like this:
This step and the next are very important and are all about your local network, not the network you’ll be creating in Azure (that comes later). Make sure you enter the WatchGuard’s external IP address here, as shown below:
Name the network something meaningful to you. It could be OfficeNetwork as shown, or CoreServers – just make sure it makes sense to you. Add the address space(s) of your local network, not the network you’ll be creating in Azure (that comes later).
This is where you create the IP ranges for your Azure VMs to use. I’d suggest something completely different to what you’re using in house. So if you’re using 192.168.x.x ranges, go for 10.x.x.x or 172.16.x.x – that way, if you see that IP address anywhere in logs, firewall rules/notifications, etc. you’ll know instantly that it’s not local to you.
The first 3 or 4 (I can’t remember exactly) IP addresses of your range(s) are taken by Azure for “internal purposes”, so don’t worry if your first VM doesn’t have a 10.x.x.1 IP address. I’d suggest creating a gateway subnet too; this way you can have multiple subnets in Azure and they can communicate (I think)! If you make it the lower end of the range, as I’ve done, then you know where you stand (you could of course make it the top end, middle, anywhere you like, but I like to keep mine tidy). If you do the same as below you may end up with your first VM having a .12 IP address! Don’t worry, it’s just an IP address (see Important Notes at the bottom of this post).
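The carve-up described above can be sanity-checked with Python’s ipaddress module. The /24 address space and /29 subnet sizes below are just this post’s example values, not anything Azure mandates:

```python
# Sketch of carving a gateway subnet out of the bottom of an Azure
# address space using Python's ipaddress module; the /24 range and
# /29 sizes are example values from this post, not Azure requirements.
import ipaddress

address_space = ipaddress.ip_network("10.254.254.0/24")
subnets = list(address_space.subnets(new_prefix=29))

gateway_subnet = subnets[0]  # lowest /29 kept tidy for the gateway
vm_subnet = subnets[1]       # next /29 up for the first VMs

print(gateway_subnet)  # 10.254.254.0/29
print(vm_subnet)       # 10.254.254.8/29
# Azure reserves the first few addresses in a subnet for itself,
# which is why a first VM can come up with a .12-style address.
```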
Azure will then go off and create the virtual network; this will take several minutes so don’t be impatient. Once the network is created you’ll need to create the gateway (this seems odd as you’ve just created the network, but MS won’t just give you an IP address straight away – you need to ask for one). So click on “Create Gateway” and “Static Routing” at the bottom of the Dashboard view. This will again take some time.
Now Azure should’ve given you your external IP address, in the example below I’ve redacted mine but that’s where it’ll be. Write this down. Be aware that if you ever delete and recreate your Virtual Network Gateway it’ll be different.
To create the VPN you’ll need the preshared key – without this there’s no VPN for you! So once everything is ready, as above, click on the “Manage Key” button at the bottom. I’d advise copying and pasting this into Notepad as you’ll need it later on.
Open Policy Manager for your WatchGuard firewall and go to VPN -> Branch Office Gateways. You’ll be presented with the New Gateway screen
Enter the preshared key you pasted into Notepad earlier (you can paste into this field using CTRL+V). Click Add in the Gateway Endpoints section and enter the information as shown below. It may seem wrong to enter the Azure external IP address twice, but it worked for me. Other instructions I read said to enter the internal IP address of the Azure gateway in “Specify the gateway ID for tunnel authentication”, but that didn’t work for me.
Once you’ve entered the information click on the Phase 1 Settings tab at the top and set the mode to Main, NAT Traversal to on, IKE Keep-alive on and everything else as shown below. The next step covers the Transform Settings.
Click Edit to change what is there. The settings need to be SHA1, AES (128-bit), SA Life 8 hours and key group Diffie-Hellman Group 2.
Click OK to close the Branch Office Gateway, click Close on the Gateway list. Go to VPN -> Branch Office Tunnels. Click Add.
Give your tunnel a meaningful name (ToAzure, for example). Make sure you select the correct gateway under Name. Click Add. In the Local subnet box enter the IP range you told Azure about earlier as your local network – in my case it’s 192.168.5.0/24, and the Remote range is 10.254.254.0/24. Leave everything else alone (you can amend it later if you want – one thing at a time).
Click on Phase 2 Settings (make sure PFS is OFF). Click Remove to get rid of the default IPSec proposal, then click Add. Give the proposal a meaningful name. Make sure the type is ESP, Authentication is SHA1 and Encryption is AES (128-bit). Set the Force Key Expiration options to 1 hour and 102400000 kilobytes (seems a lot but that’s what MS wants). Click OK.
Make sure the Multicast settings are off.
To check for any errors you can increase the diagnostic logging level on the firewall. In Policy Manager go to Setup -> Logging, click Diagnostic Log Level… and change the VPN IKE logging level to Information.
Click OK, click Close and save your policy to your Firebox.
Once your policy is applied go to Firebox System Manager and go to the Traffic Monitor tab; in the filter box type “ike” (no quotes) to watch the tunnel diagnostic output. On the Front Panel tab you should see a new entry under Branch Office VPN Tunnels like this:
Now that you’ve created your tunnel you can control what traverses it just like any other tunnel or policy. To further secure your Firebox I’d suggest creating a new alias for the Azure subnet.
If you found this helpful please leave me a comment – it’s always good to know I’ve helped at least one person!
Azure VMs are leased IP addresses by the Azure DHCP platform and not any internal DHCP server you may create (don’t create one – there is no point). Azure DHCP leases are for 150 years (that’s not a typo)! If you change your IP address from dynamically assigned to statically assigned (even if you keep the same IP address, I believe) your VM will become inaccessible: no RDP, no HTTP, NOTHING. Microsoft’s configuration management database keeps track of every IP address it leases; if your machine isn’t using DHCP it’ll know and kill access to it.
How this will work with technologies like Network Access Protection I don’t know. I’d imagine IPsec enforcement would work as that doesn’t rely on changing DHCP leases like NAP DHCP enforcement does. Maybe I’ll do a post on that one day!
I’ve been using Windows Server 2012 in production since Microsoft released SP1 for System Center 2012 (SC2012); as SC2012 RTM’d prior to Windows Server 2012 it didn’t support it until SP1 was released.
To put things into context I’ve been using Windows Server since the days of NT4 (pre Active Directory). With the introduction of Active Directory in Windows Server 2000 it suddenly became so much easier to use once my brain figured out the monumental changes that Active Directory brought (I’d never seen Novell at the time).
So fast forward 15 years and Windows Server 2012 has arrived – new interface, new ways of working, a new feature set and (vastly) improved existing features!
I dithered about which feature of Windows Server 2012 to concentrate this review on so I thought I’d list my favourites and pick the one that means the most to me:
- Hyper-V 3
- NIC Teaming
- Dynamic Access Control
- New Server Manager
- PowerShell 3.0
- SMB 3
- GUI to Core and vice versa without having to reinstall the OS
- Cluster Aware Updating
- Remote Group Policy Update
So I’ve decided to go with Hyper-V. There are a lot of reviews out there but here’s why I love Hyper-V 3.
THANK YOU! THANK YOU! THANK YOU!
No more is it a clear-cut case of virtualisation = VMware. When I was looking at implementing virtualisation in 2011 it was a question of “Can I afford VMware? No, Hyper-V it is”. With Hyper-V in Windows Server 2012, for me it’s now “Sure glad I didn’t go for VMware!”
The functionality improvements over Windows Server 2008 R2 are phenomenal. Like most people using Hyper-V in R2 I sometimes just wanted to crawl into the server room and scream as I realised I needed to give up another weekend to do some maintenance – usually on the Storage Area Network (SAN)!
Under previous incarnations of Hyper-V if you wanted to move a Virtual Hard Disk (VHD) associated with a running Virtual Machine (VM) the only choice was to shut down the VM, move the VHD, update the VM settings and power it on… Whilst, like most other IT Pros, I compiled a vast quantity of scripts (sadly not PowerShell) to do these things there was always that nagging feeling (what if it goes wrong…) – and obviously the associated downtime for end users (mustn’t forget them). This would usually lead to me telling my daughter that Daddy had to work at the weekend and seeing the look of disappointment on a 3-year-old’s face always broke my heart! But thankfully no more – THANK YOU AGAIN!
Storage Migration comes to the rescue!
Storage Migration is the ability to move a VM’s storage (and configuration and snapshots) whilst it is running, with barely any impact on the end user (if someone noticed I’d be flabbergasted). This one feature saved me hours of downtime once I’d got my Windows Server 2012 Hyper-V cluster up and running (almost perfectly) and HP released an urgent firmware patch for some hard drives we were using (which explains the not quite perfect implementation)! I moved all the storage from one Cluster Shared Volume (CSV) to another (albeit slower – RAID 5 vs RAID 10) CSV, upgraded the firmware and then moved it all back! Then I repeated the sequence for the other CSV and put everything back where it should’ve been. Best of all no downtime! NO DOWNTIME = NO COMPLAINTS FROM USERS (or wife/child)!
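If you prefer to script it, a storage migration is a one-liner in PowerShell. This is just a sketch – the VM name and destination path below are illustrative, not anything from my environment:

```powershell
# Move a running VM's VHDs, configuration and snapshots to another CSV.
# The VM keeps running throughout – no downtime for users.
Move-VMStorage -VMName "FileServer1" `
    -DestinationStoragePath "C:\ClusterStorage\Volume2\FileServer1"
```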
Now whilst some of you may say “Pah! You could do that with VMware!” may I remind you of three things:
- VMware is not cheap
- Hyper-V is part of your Windows Server 2012 licence (or free if you’re using Microsoft Hyper-V Server 2012 – just know your licencing if you’re using that edition)
- VMware is not cheap
Just to make it clear you can do Storage Migration on any supported guest VM Operating System (OS) – that includes Linux and Windows client OSs (handy for Virtual Desktop Infrastructure deployments)!
Now if you’ve got an Offloaded Data Transfer (ODX) enabled SAN then frankly you’re on the fast track with the above situation. My SAN doesn’t support ODX at the moment (come on HP – how hard can it be!) but if it did, oh my… The problem with the above situation was that the Hyper-V server had to copy the VHDs from one CSV to another. This meant my VHD had to leave the SAN, go through the physical switch, into the Hyper-V OS, only to be told to go back to the SAN but to a different disk. Whilst that’s not much of an issue for a 10MB file, if your primary user storage VHD is 600GB (x10 for the amount I had to move) that’s a lot of network traffic, processing on the Hyper-V host, etc. What if the SAN could do the move for you?
An ODX enabled SAN will move the files (in this case VHDs) for you at the storage level! No traversing the network and only a tiny amount of processing by the Hyper-V host as the SAN tells it what it’s up to. The SAN moves the file for you which I will guarantee (that’s a no money back guarantee) is faster than having Windows do it! Why do you think Microsoft created this?
There are other great features in Hyper-V 3 too: shared nothing live migration, which is essentially storage migration + live migration + steroids. This means that if you’ve got multiple Hyper-V hosts/clusters you can move VMs between them without having to do complex export/imports – provided the Hyper-V hosts are in the same domain and on the same hardware architecture (basically standard clustering rules). Another great feature is the introduction of Virtual Fibre Channel – you can now do with Fibre Channel what was previously only possible (and supported) with iSCSI. It essentially allows you to pass a Fibre Channel adapter through to a VM in the same way as a network card.
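Both of those can be driven from PowerShell as well. A rough sketch – the host, VM and SAN names are made up, and the virtual HBA line assumes you’ve already defined a virtual SAN on the host with New-VMSan:

```powershell
# Shared nothing live migration: move a running VM, storage and all,
# to another host in the same domain – no export/import required.
Move-VM -Name "VM01" -DestinationHost "HV-Host2" -IncludeStorage `
    -DestinationStoragePath "D:\Hyper-V\VM01"

# Virtual Fibre Channel: attach a virtual HBA to the VM, mapped to a
# virtual SAN previously created with New-VMSan.
Add-VMFibreChannelHba -VMName "VM01" -SanName "ProductionSAN"
```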
Microsoft Licencing is one of those things that you wish was just easier to get your head around (you’re an IT Pro not a legal expert). I can guarantee you that there are no loop holes – apparently Microsoft employs the same number of lawyers as developers! They’re going to protect their intellectual property come hell or high water! So what has this got to do with Hyper-V replica?
Well something you may not be aware of in your Software Assurance (SA) benefits (you’ve got SA right? If not get it! Solves so many licencing issues) is Cold Back-ups for Disaster Recovery (DR). This allows you to have the same licenced server software on a “cold” backup server for DR – a Hyper-V VM that is replicated using Hyper-V replica is definitely “cold”!
So let’s take my situation as an example:
- 4 Node Hyper-V failover cluster. Each node has Windows Server 2012 Datacenter with SA (unlimited VMs)
- 2 Hyper-V servers in DR data centre running the free Microsoft Hyper-V Server 2012 (no licences for guest OSs but unlimited VMs as long as each VM is licenced – see comment above about Microsoft Licencing…)
- Each DR Hyper-V server also has a Windows Server 2012 Standard with SA licence assigned so I can run some VMs in perpetuity (for example an Exchange DAG node, file server with DFS-R, RDS server, System Center Data Protection Manager (SCDPM) secondary server – we’ve got a physical Domain Controller in DR)
Now selected guests in my primary data centre are replicated using Hyper-V replica to the DR Hyper-V nodes. The guest OSs in DR are off (part of Hyper-V replica) and as such are “cold”, but have all the necessary server software installed to get the organisation up and running, and they are fully licenced under SA benefits (they’re replicas!). In addition the Hyper-V hosts are fully licenced to run as many guest VMs as I have licences for. Oh yeah, the replicated VMs can be ANY OS SUPPORTED BY HYPER-V! Just be careful with Exchange/SQL/basically any transactional software. You’re better off using a Database Availability Group for Exchange and AlwaysOn/clustering/mirroring for SQL (top end Microsoft software usually has its own high-availability solution).
So by using replica I’ve got all my VMs in a state that is approximately 5 minutes behind live (that is amazing) – by the way this is all included in your Windows Server licence (no additional licences required, no expensive asynchronous/synchronous SANs to deploy). Prior to replica we were using Hyper-V backups in SCDPM; they were at least an hour behind, if not more!
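Enabling replica for a guest is straightforward from PowerShell too. Again a sketch with made-up names – this assumes Kerberos authentication over HTTP (port 80) and that the DR server has already been configured to accept incoming replication:

```powershell
# Enable replication of a VM to the DR replica server, then start
# the initial copy over the network.
Enable-VMReplication -VMName "FileServer1" `
    -ReplicaServerName "DR-HV1.contoso.local" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos
Start-VMInitialReplication -VMName "FileServer1"
```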
Hyper-V has come a long way since its first incarnation in Windows Server 2008. Microsoft has been playing catch up with VMware but now the two are very much on a level playing field, in my opinion Microsoft are ahead. If you’ve already got VMware then look at Windows Server 2012 Hyper-V when it’s time to refresh/renew your VMware infrastructure/licences. Chances are you’ve already got Windows Server Datacenter licences in which case you’ve already got Hyper-V – if so just think what else you could spend your VMware licence renewal budget on!
Microsoft has just released a Hyper-V Capacity Planner – should help with figuring out where to spend the VMware renewal budget…
Note: Anything I say about licencing is from my perspective and should in no way be treated as 100% accurate. Check with MS licencing specialists.