Category Archives: System Center 2012

Modern Style Visio Stencils for Operations Manager

I have created some more modern style Visio stencils for System Center. This time for Operations Manager!

You can download from here.

You can see what they look like from here – but download them, they’re free!





What I’ve been up to…

So 2013 was a bit of a crazy year for me…

After winning a place at TechEd Europe 2013 I got a new job working for Inframon as a System Center and Desktop Implementation Consultant; basically, I get to work with System Center 2012 every day! Not only do I get to work with the latest and greatest software every day, I also get to work with the best System Center guys in the world.

I’ve gone from running a small, but very capable, installation of System Center to deploying different components of it for a variety of customers all over the UK. It’s been challenging but fantastic!

I’d like to put a special thank you out to the Microsoft UK DPE team and TechNet UK team who have inspired me to go out and learn System Center and Hyper-V. Without the free training offered by Microsoft through TechDays (online and in-person), Microsoft Virtual Academy and other free resources I wouldn’t be where I am now.


A Gotcha with Remove-SCVMhost

I’m in the process of rebuilding my Hyper-V cluster (4 nodes, nothing major) and I’m using Bare Metal Deployment (BMD) with System Center Virtual Machine Manager 2012 SP1 (SCVMM) to do so – why would I use anything else?

During the rebuild of the second Hyper-V host I did something slightly out of order: I removed the host from the domain before removing it from SCVMM (no idea why I did it like that – must have had a brain fart). By doing this the DNS entry for the host was removed, as it should be, and the host was powered down ready to be BMD’d from SCVMM. Realising my mistake I went into SCVMM PowerShell and ran:
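The snippet itself didn’t survive the move to this archive; it was the Remove-SCVMHost cmdlet, roughly like this (the host name is hypothetical):

```powershell
# Remove the host record from SCVMM (host name is hypothetical)
$vmHost = Get-SCVMHost -ComputerName "HV02"
Remove-SCVMHost -VMHost $vmHost
```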


I’ve done this several times before; however, this time there was an error message basically saying: “Err… Can’t find it”. Odd.

I looked in SCVMM and sure enough the host was still there. Now the definition of insanity, according to Einstein, is doing the same thing over and over and expecting different results – so according to him I’d gone insane… Anyway, I recreated the DNS entry for the host and reran the PowerShell command above – success.

Somewhat later in the day I had to move the SCVMM role from one cluster node to another – it wouldn’t start. Looking at the event logs there were many .NET messages, and buried in them was “VMM cannot find the Virtual hard disk object” (error 801). Eh? Chasing that error down consequently solved the issue.

Moral of the story – do things in the right order.



Where are Microsoft heading with the 2012 R2 releases?

So last week I attended my first TechEd Europe in Madrid. I won my place through the Microsoft TechNet UK TechEd Challenge (say that after a few pints…) for my System Center 2012 blog post.

Never have I learnt so much in four days!

With the new versions of Windows Server 2012 R2 and System Center 2012 R2 announced at TechEd North America, it was Europe’s turn to see what Microsoft had to offer with the latest versions. It is fair to say they’ve not disappointed anyone (that much) with the upcoming releases.

First of all – TechEd

Wow. So it was my first Microsoft conference, ever, and I enjoyed every minute. It was great to see so many of the Product Managers, Marketing Managers and downright technical geniuses who had made the trip over to share their enthusiasm for the next release. Out of all the sessions I could possibly have attended I only missed one – mainly due to my brain trying to process the sheer quantity of information!

It’s clear to see the preview releases are very stable – no BSODs during demos, and no pre-recorded demos either (unlike some other vendors).

Le Caveat

Everything below is my opinion and should be treated as such. No Microsoft employee has confirmed any of the information below, it is purely my personal speculation.

So what is Microsoft’s vision?

Everything in Windows Server 2012 R2 and System Center 2012 R2 is based on the “Cloud OS” vision (I saw that slide so many times…), in which there are 3 clouds:

  1. Private cloud: an on-premises cloud powered by Windows Server 2012 (and R2 when released), System Center 2012 (and R2 when released), SQL Server 2012 (and 2014 when released), the Service Provider Framework and the Windows Azure Pack (which is awesome). This is generally seen as the starting point for everything – it doesn’t have to be, but if you’ve got it on-premises the rest is easy.
  2. Public cloud: this is Microsoft’s Azure cloud. It is powered by Windows Server 2012 (and R2 when released), System Center 2012 (and R2 when released), SQL Server 2012 (and 2014 when released) and the Azure Services (full blown) and some mega storage system using commodity hardware – no specialist SAN.
  3. Service Provider cloud: again running on Windows Server 2012 (and R2 when released), System Center 2012 (and R2 when released), SQL Server 2012 (and 2014 when released), the Service Provider Framework and the Windows Azure Pack (which is awesome – still). The idea here is for value added services from a service provider, customer choice – especially around data locations (think data laws).

This leads to the “one consistent platform” message; if your internal users can provision services using the Windows Azure Pack, they can use full blown Azure and Service Provider implementations too; no more learning of several different portals.

So what is enabling this?

The core new features of Windows Server 2012 R2 are going to change the game when it comes to Cloud.

  • Shared VHDX – enables guest clustering without having to expose directly mapped storage to guest VMs. This gives you all the features needed for upgrading the underlying infrastructure whilst maintaining availability of guest VMs. Storage Migration will allow you to move the VM’s storage (i.e. the shared VHDX) whilst the underlying hardware is maintained, upgraded, replaced, etc. As an internal provider this will make my life so much easier (edit: this point needs to be clarified with some testing, I think I may have this wrong…) – I can remove all my directly mapped LUNs and just use Shared VHDX files for the storage. Don’t use snapshots!
  • Online VHDX resize – the ability to change the size of a disk attached to a VM (grow AND shrink) without having to take the VM offline! Note: you still need to change the size of the partition within the guest. Some clever use of PowerShell/System Center Orchestrator (provided the guest trusts the Orchestrator install) could do that part too, however it will require some effort to implement – it isn’t in the box
  • Storage QoS – you can now tune the number of IOPs on a virtual disk. No more IOP hoggers! I believe this only extends to additional disks, not disks with OSs on. As such, applications like SQL that love IOPs will have to be configured correctly in-guest for the Hyper-V provider to take advantage here (follow MS best practice and you’ll be fine)
  • Live Migration compression – in Windows Server 2012 R2 this will come enabled by default. Most virtualisation hosts are constrained by the amount of RAM they have to offer guests rather than the CPU cycles they can offer. Compression uses spare host CPU cycles to compress the Live Migration of RAM, so you can move a VM at twice the speed (if not more). If you’ve got RDMA NICs (and multiples of them) then the speed of your RAM will matter (that is not a typo) – SMB Direct RDMA offloads everything from the system to the NICs
  • Extended replica – instead of just being able to replicate to one other host, you can replicate a replica. Perfect for Service Providers who offer replica as a service; they’re able to replicate the customer’s VMs to another host/data centre without having to have crazy expensive SANs
  • Hyper-V Network Virtualisation Gateway – until the Friday morning of TechEd I referred to this as the “magic gateway”, I just couldn’t figure out how it worked. After attending this session it all became very clear. This appears to be the brains behind the Virtual Networks offering on the Azure public cloud, the load balancer and all the other excellent networking offerings in Azure
  • Windows Server 2012 R2 Tiered Storage Spaces – on the surface this seems to be the StorSimple technology migrated to Windows Server 2012 R2. By tiering the storage available in Storage Spaces, Windows Server will move the most read/written blocks (blocks, not files – blocks could contain files, think VDI deduplication here) to the fastest storage available; this could be SSD, 15K disks, etc. This tiering gives amazing IOPs, especially when combined with CSV caching in memory. Best of all – it just uses JBODs on the back end! As I understand it, at the moment you can only have 8 nodes in a Scale-Out File Server cluster for this
  • Linux backups of Hyper-V guests – no longer will a VM pause when it is being backed up at the host level (provided your Linux version is correct). Microsoft have shied away from saying they’ve implemented VSS inside Linux but it is basically what they’ve done
  • Oracle support on Hyper-V – this is probably the final hurdle for high-end enterprise adoption of Hyper-V
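As an example of the online VHDX resize point above, the host-side and guest-side halves look roughly like this in PowerShell (path, drive letter and sizes are hypothetical; the disk needs to be on a virtual SCSI controller for the online grow to work):

```powershell
# On the Hyper-V host: grow the VHDX while the VM is running (path hypothetical)
Resize-VHD -Path "C:\VMs\FS01\Data.vhdx" -SizeBytes 200GB

# Inside the guest: extend the partition into the new space (drive letter hypothetical)
$max = (Get-PartitionSupportedSize -DriveLetter D).SizeMax
Resize-Partition -DriveLetter D -Size $max
```

The in-guest half is exactly the bit you could hand off to Orchestrator if the guest trusts it.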

At TechEd the focus was VERY heavy on the “Cloud OS” vision and how System Center 2012 R2 and the Windows Azure Pack was going to power that throughout:

  • System Center Virtual Machine Manager (VMM) is the kingmaker. VMM will now deploy VM hosts from bare metal, VMs to hosts (whether they be Hyper-V, VMware or Citrix hosts) and, with R2, it will deploy Scale-Out File Servers for hosting VM storage from bare metal! Allegedly this list will increase too. Microsoft have stated that there is no reason why you shouldn’t move your workloads to VMs – this is squarely aimed at SQL Server workloads. With the ability to SysPrep a SQL Server you can now deploy them directly to VMs. Server App-V brings another string to VMM’s bow. As far as I can see, Microsoft are targeting VMM at deploying all server workloads – physical and virtual.
  • Windows Azure Pack. This is enabling end-user provisioning of services from pre-defined templates (created in VMM) with an interface that is consistent with the Microsoft Azure Cloud. The Azure Pack sits between the end-user and the Service Provider Framework (this sits in front of System Center) and can be skinned to corporate colours. At a basic level it tells VMM what to do (via service templates) and as such does not necessarily require you to have Hyper-V as your virtualisation host – it will work with VMware and Citrix too. Best of all – it’s extensible. Microsoft will add more services over time and you can add your own in too.

So what about the other System Center components?

Data Protection Manager was relatively quiet, it was confirmed that this component will be able to use a clustered SQL server for its database but there will be no push to cluster DPM. You can make DPM highly available by running it as a VM on a Hyper-V failover cluster. You should be able to use VHDX files to store the DPM backups (this will remove the final pass-through disk in my DPM setup) – these will need to be fixed size though and will probably not support online resize – DPM can get very angry about other applications playing around with its disk(s).

I heard very little mention of System Center Configuration Manager 2012 R2 at TechEd – I may have been in the wrong sessions. With VMM taking over the role of deploying servers and ConfigMgr getting tighter integration with Windows Intune, I see it becoming the client OS manager. Combine it with MDT and it is an extremely effective tool for desktop deployment and compliance monitoring. When it comes to Patch Management, VMM already has the hosts – how long until it starts looking after guests? Admittedly ConfigMgr gives you all the reports, at the moment…

Operations Manager – there were some further strides forward, especially for monitoring Java applications. System Center Advisor is now baked into the application (this is Microsoft’s cloud-based monitoring that uses information gained from customers to ensure your installations are in tip-top condition).

I didn’t hear anything about App Controller or End Point Protection.


The order of products to learn inside and out for an effective Microsoft Cloud OS is:

  1. Windows Server 2012 R2: this is the base for everything. Microsoft runs on Microsoft best (or something similar – I’m sure the MS marketing team can correct me here). Once you know how Windows Server works, especially Hyper-V, you’ve got the foundations for your cloud
  2. System Center Virtual Machine Manager: this rules your cloud. VMM provisions and controls your cloud. I cannot stress how important this product will be in the next 12 months and far into the future
  3. System Center Operations Manager: this will monitor your cloud and all the applications running in it. There’s no point in having a bunch of amazing hardware if the services you’re running are performing like a 90-year-old in a 100-metre sprint. I’d include System Center Advisor in here too
  4. Windows Azure Pack: this is the front door to your cloud. It makes end-user provisioning of services much easier. You can also customise the pack, not only through colour schemes but you can add your own items in there too
  5. Data Protection Manager: no point in having an amazing cloud if you can’t restore data when you/your customer has a problem
  6. Service Manager: the perfect solution for service desk management, CMDB; it integrates with all the System Center components and offers rich reporting.
  7. App Controller: the key to where services get provisioned. From here you can provision services on premise or in the Cloud
  8. Orchestrator: the key to automation. Orchestrator can talk to all the System Center components, Windows Server 2012, Active Directory, Exchange, SQL Server (the MS list goes on and on) and a vast array of non-Microsoft software including BMC Remedy, VMware, etc.
  9. Configuration Manager: this is important to provide rich compliance information, integrated anti-malware protection, etc. I do believe however that with Desired State Configuration in Windows Server 2012 R2 the compliance monitoring aspects of ConfigMgr for servers will be used less and eventually be deprecated

With the alignment of Windows Server and System Center builds/deployments, Microsoft are making the life of an IT Pro much easier! Unlike when Windows Server 2012 was released, there should be no delay in getting the management components up and running too.

Windows Server 2012 R2 – Shared VHDX, DPM and Hyper-V Replica

So today is the first day (proper) of TechEd 2013 and so far it has been amazing. The conference centre is vast, the halls are huge and so far so good.

I went to the Introduction to Windows Server 2012 R2 session presented by Jeff Woolsey – that man is a fountain of knowledge!

So after they demoed some of the new functionality in 2012 R2, I got thinking about how Hyper-V Replica handles shared VHDX files – basically guest clustering without mapping raw LUNs through Fibre Channel or iSCSI. I also wondered how Hyper-V backups via the host would work with this. And one more… can you replicate a Storage Space?

Fortunately I managed to put these queries to Jeff.

It’s official – Hyper-V guests with a shared VHDX file cannot be replicated (you can replicate the OSs but not the shared VHDX). Understandable due to the clustering aspects, but disappointing.

Also, backing up the guest OSs and data is not possible with Hyper-V host-level backups. Jeff’s answer here was to use backup tools inside the guest VMs. Not ideal – again understandable, but still disappointing.

Replicating a storage space is not possible.

Who knows – it might come in Server 2012 R3 or whatever the next iteration will be called, but it’s not coming in R2.

System Center Configuration Manager 2012 Compliance Settings

One great feature of System Center Configuration Manager 2012 (ConfigMgr) is the new compliance settings and configuration baselines. In ConfigMgr 2007 this was known as Desired Configuration Management.

In ConfigMgr 2012 Microsoft really raised their game and now allow for automated remediation, which I primarily use for registry settings. How annoying is it when you configure an application not to self-update, then you install an update (probably via ConfigMgr with System Center Updates Publisher) and it resets the settings and merrily checks for updates – usually leading to calls to the Service Desk along the lines of “My computer is telling me there is an update to application X but it won’t let me install it”?

This is where the awesome compliance setting remediation comes in – it can detect a change and, if instructed to do so in the compliance setting, change the value back to what YOU have told it to be, not what the application developer wants it to be.
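One way to express such a setting is a discovery/remediation script pair; the key, value name and data below are hypothetical:

```powershell
# Discovery script: output the current value; ConfigMgr compares it
# against the compliance rule's expected value
$key = "HKLM:\SOFTWARE\ExampleApp"   # hypothetical key
(Get-ItemProperty -Path $key -Name "AutoUpdate" -ErrorAction SilentlyContinue).AutoUpdate

# Remediation script: force the value back to what YOU want it to be
Set-ItemProperty -Path $key -Name "AutoUpdate" -Value 0 -Type DWord
```

For plain registry values ConfigMgr also has a built-in registry setting type with remediation, so scripts are only needed for anything more involved.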

Group Policy Objects

Group Policy Objects (GPOs) give you ultimate control over a domain-joined client (be that server or desktop). If you’ve got the Microsoft Desktop Optimisation Pack (MDOP) then you’ve got access to Microsoft’s Advanced Group Policy Management (AGPM) tools – which are fantastic. MDOP is well worth it, and it’s cheap (yes, that’s “cheap” and “Microsoft” in the same sentence). It allows you to log changes to GPOs, do offline testing and loads more. But what if the left hand doesn’t know what the right hand is doing?

If someone authors a change to a GPO that could potentially change something fundamental – for example the Remote Desktop firewall settings – how can you monitor that in ConfigMgr?

Enter Microsoft’s Security Compliance Manager (SCM). You’re probably thinking “What the <insert expletive here>!” Bear with me…

Microsoft’s Security Compliance Manager

SCM is a free Solution Accelerator (of which there are many) from Microsoft that can guide you in deploying GPOs to help secure your Windows servers and desktops, with best-practice guidance, documentation galore and, best of all, the ability to export CAB files for use in ConfigMgr.

In SCM you can import your existing GPOs and from there compare them to Microsoft’s guidance. In addition you can export them to a CAB file for use in ConfigMgr. Big deal? In my opinion – YES! You don’t have to use the comparison aspect; you can just use it as a conduit for the next stage.

In the ConfigMgr console you can import the CAB file into the compliance settings workspace – this in turn generates an array of compliance settings for you. When you dig a little deeper into these settings you find they use scripts to check compliance; no auto-remediation is available here, but they do a good job of checking settings.

What about just opening the raw ADMX files to find the registry settings?

Rather you than me!

If your GPOs only contain a few settings you can open the parent ADMX file, find the registry strings and use those for remediation if you want… I don’t know about your environment, but that would be a boatload of work for me!

So where’s the benefit?

If you’ve got these settings imported into ConfigMgr you can see when the deployed baselines move away from their GPO settings, which can immediately alert you to one of two things:

  1. An update, whether that be from Microsoft or another company (remember you can control quite a lot of applications via GPOs, not just Microsoft’s – Google Chrome anyone?), may have changed a value you configured in a GPO
  2. Or more likely, someone has changed something and not let you know. Now if you’re using AGPM you’ll be able to find the individual and have a little chat…

Le caveat

This is not a catch-all. If someone deploys a new setting via a GPO (one that isn’t covered by a compliance setting imported via SCM) you won’t know about it. Communication is key here – make sure the left hand knows what the right is doing.

I’d advise you to take a look at the free Solution Accelerators from Microsoft, of which the Microsoft Deployment Toolkit (MDT) is one – I’ve used it for years and it is amazing for highly configurable desktop deployments. SCM is a great tool for seeing what Microsoft recommend you do with your infrastructure; Windows is now quite secure out of the box, but if you want to you can harden it much more. Best of all, it tells you what you need to do, where you need to do it and, most importantly, why!

Just remember that most registry changes require a reboot to take effect. Just because you remediate a setting doesn’t necessarily mean the setting is in effect – look at TechNet and do your research.

System Center Configuration Manager and Cluster Aware Updating

Microsoft have created a new feature in Windows Server 2012 for updating clusters called Cluster Aware Updating.

The premise is simple: it coordinates updating your clusters for you, i.e. moves roles to other nodes, updates the node, moves the roles back, updates the other node and puts everything back in its place – job done.

It’s great – if you’re not using Microsoft’s flagship configuration management application, System Center Configuration Manager (ConfigMgr), that is…

If you’re just using WSUS for updates then it’s relatively straightforward to get up and running.

If you’re using ConfigMgr you need a hell of a lot more software to make this work, and there are no straightforward tick boxes. Helpfully, Neil Patterson has created some runbooks for System Center Orchestrator to make it all work.

So what have I done?

I’m yet to roll out the whole System Center suite, so have I implemented a WSUS environment separate from my ConfigMgr environment? No.

All my clustered servers are imaginatively titled. For example my 2 node file cluster is called HQ-File, made up of HQ-File1 and HQ-File2… All other clusters are the same, Print (Print1, Print2), HQ-SCSQL (HQ-SCSQL1, HQ-SCSQL2) etc.

In ConfigMgr I’ve created three query-based collections based on my “Windows Server 2012 Servers – Non-Hyper-V Hosts” device collection.

  1. Cluster Servers 1: the criteria is very simple:
    1. System Resource.Name is like “%1” AND
    2. Services.Name is equal to “ClusSvc” AND
    3. Services.Start Mode is equal to “Auto”
    4. Limiting collection: “Windows Server 2012 Servers – Non-Hyper-V Hosts” (prevents Hyper-V hosts being included – they’re special and I patch those manually at the moment)
  2. Cluster Servers 2: the same as above except:
    1. System Resource.Name is like “%2” AND
  3. Windows 2012 Non-Clustered Servers:
    1. Include collection “Windows Server 2012 Servers – Non-Hyper-V Hosts”
    2. Exclude collection “Cluster Servers 1”
    3. Exclude collection “Cluster Servers 2”

The net result is three collections to deploy Windows updates to. I have other collections for application updates like SQL, Exchange, etc. but those are outside the scope of this post – I’ll go through them another day.

So when updates are released I generally deploy the updates in this order:

  1. “Windows 2012 Non-Clustered Servers” to be installed on day X by B time (very much out of hours due to there being no redundancy on the services those servers are providing)
  2. “Cluster Servers 1” to be installed on day X+1 by C time (this is usually sometime in business hours – shock – so I can remediate if necessary before the second cluster group updates)
  3. “Cluster Servers 2” to be installed on day X+2 by C time (again usually in business hours so I can remediate if necessary)

Whilst this isn’t necessarily the best way of doing this it works for me (I usually end up watching/running the installations to make sure it all goes well – unless they’re server core).

This results in all the servers getting updated as required. There is a risk that between patching Cluster Servers 1 and Cluster Servers 2 the roles move around and bad things happen… So far so good.

When it comes to server application updates, that is an entirely different issue. Moving databases between SQL Server 2012 SP1 and SQL Server 2012, for example, is a bad plan and will probably end up killing your database! The same goes for Exchange. Updating server applications is usually more “dangerous” than Windows updates when it comes to application stability!

System Center 2012

System Center 2012 is not a single product – it is Microsoft’s collection of management software covering a wide range of functions: monitoring your infrastructure (servers, switches, SAN, etc.), backing up your entire Microsoft server estate, deploying operating systems (both client and server), creating private clouds, running an IT service desk and automating tasks. The list goes on – the software is vast and covers so much!

So what does System Center 2012 mean to me?

There are 3 applications within System Center 2012 (SC2012) that I use day in day out (one of which saves users a lot of hassle):

Configuration Manager (ConfigMgr): this is all about users and devices and what should be available to them (applications, updates, operating systems). Since implementing ConfigMgr I’ve done a complete desktop operating system refresh (more about that later) and gone from hoping Windows updates installed to knowing what is, isn’t, and is yet to be installed, where and when.

Virtual Machine Manager (VMM): I run a 4-node Windows Server 2012 Hyper-V cluster which was (mostly) built without the aid of VMM – it took the better part of 3 days to build over Christmas; I’m sure with VMM it would’ve taken about a day, tops. It helps me manage my Hyper-V cluster, create and manage my private clouds and get a high-level view of my SAN.

Data Protection Manager (DPM): this is probably my favourite SC2012 application – odd to say “I love backup”, but it is so simple to use and so effective that it makes everything so much easier.

Configuration Manager

I’ve been using the Microsoft Deployment Toolkit (MDT) since 2008 to deploy operating systems and it is a great product – I really can’t believe they give it away free! It sounds crazy, but all MDT is, is a bunch of scripts and a very small database. In the past I’d looked at the lines that said “this feature is only available with Zero Touch Installation (ZTI)” with a longing for ConfigMgr. So the day after I installed ConfigMgr I installed MDT 2012 Update 1 on the server and decided a full desktop operating system (OS) refresh was required – we’d just got Software Assurance (SA) sorted on our desktop licences – to get everyone on Windows 7 Enterprise.

The process is the same as MDT: build, capture, deploy and relax! It’s not quite that easy, but not too far off! The biggest difference is that you don’t really need to use the MDT console – you can do most of it through the ConfigMgr console. The one thing I think MDT does better than ConfigMgr is drivers.

Drivers in MDT are easy – create a decent folder structure, import the drivers you need for each model and away you go. If you’ve got duplicates, MDT is fine with that; ConfigMgr, however, isn’t… By allowing you to duplicate drivers, MDT lets you create silos for each model you have, safe in the knowledge that it’ll work. ConfigMgr just gives you some errors when you try to import duplicate drivers, which then involves trawling log files to find out why. If I’m doing it wrong then please let me know!

Anyway, after days of work to get the drivers imported, categorised, tagged and built into driver packages I was ready to try a different model from my test box.

So I added some WMI queries to my task sequence to determine what hardware the operating system (OS) was about to be deployed on, so it could determine which driver package to use. This worked perfectly and Windows 7 deployed successfully.
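The usual pattern for this is a WMI Query condition on each “Apply Driver Package” step; the model string here is hypothetical:

```powershell
# Condition placed on the "Apply Driver Package" step (model string hypothetical):
#   SELECT * FROM Win32_ComputerSystem WHERE Model LIKE "%Latitude E6420%"
# You can check what a given machine actually reports with:
(Get-WmiObject -Class Win32_ComputerSystem).Model
```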

Working for a small organisation, I thought it would be better to get user-specific applications installed at deployment time rather than delivering the “gold” image and then waiting for ConfigMgr to figure out who the user is and what they require – so I went back to MDT to see if I could use the roles feature like I’d done previously.

I created a database, created a role, linked some ConfigMgr applications up to the role (they could be MDT applications, but then you could be duplicating applications – enter headache), populated the computer record in MDT, attached the role, redeployed the OS and went for a coffee.

The OS deployed as it should, the additional applications were installed and that was that. Cue multiple roles! The overall refresh went reasonably well, a few small issues but they were with poorly written applications rather than any specific ConfigMgr problems.

When ConfigMgr went wrong…

Picture the scene: it’s just before Service Pack 1 for SC2012 is released, and I’m eagerly awaiting its Windows Server 2012 support, when Microsoft release their now infamous Windows Management Framework (WMF) 3.0 update for Windows 7. At this point I was quite excited – PowerShell 3 on Windows 7, more cmdlets than you can dream of and all the fun that PowerShell brings! So after diligently deploying it through ConfigMgr to all the client machines I started to notice that ConfigMgr wasn’t happy…

The clients were going inactive. My immediate thought was that something was wrong with the server, as every client was inactive. After extensive diagnosis I determined the server was fine; time to turn my attention to the clients. So I popped onto a well-known search engine and found other people having the same issues – I thought “phew, not just me then” – but there was no answer. Cue hours of tinkering, uninstalling and reinstalling the client software – same result. After adding a few more worry lines, and looking in the mirror each morning for grey hairs, some news came from Microsoft, and I’m paraphrasing here: don’t install WMF 3.0 if you’re using ConfigMgr 2012 RTM, it breaks it… Great.

My first thought was not suitable to be written on this blog; safe to say it involved many, many four-letter words…

So what to do? My ConfigMgr install was “officially” broken and I hadn’t got a clue what to do. Back to the well-known search engine… Many days later, after piecing bits together, I created a VB script, and several batch files, to sort the problem out – not the official way of resolving the problem, but I wasn’t about to admit defeat… The scripts were executed overnight; I came in the next morning, opened the ConfigMgr console, took a deep breath and clicked on Devices, waited for the console to load (slowly turning blue as it took forever – or so it seemed) and finally breathed – success!

Then SP1 was released… The person at Microsoft who used the wrong digital certificate on the release should be – hang on, this is heading into four-letter-word territory. Moving on…

ConfigMgr has allowed me to ensure that everything is how I want, and need, it to be. Through the use of compliance settings I am able to ensure that Group Policy Object (GPO) settings are applied and maintained (auto-remediation is a big help in this area), that application settings are set and maintained (no more “there is an update available” – see annoying Adobe Flash notices to end users) and that if something does move away from what is needed, I know about it and can find out why.

Virtual Machine Manager

VMM for me isn’t used to create amazing private clouds that users can log into and create VMs from templates, etc. – my organisation isn’t big enough for those features. It’s my sanity checker.

When adding another node to my Hyper-V cluster I used Failover Cluster Manager to validate the cluster with the extra node, and it all came back green as expected. So instead of finishing the process in there, I went to VMM and ran the validation from there too.

VMM seems to be much more aware of Windows Server 2012 networks than Windows Server 2012 itself! Whilst I had all the correct network adapters configured with the right subnets, VLANs, etc., there was one that had a minute difference from the other nodes. VMM picked this up and said “NICs don’t match”. Some head scratching later I realised what I’d done, changed the properties of one of the Hyper-V virtual switches, reran the VMM validation and all was good.

Whilst I’m certain that the cluster would’ve been fine without this change it is good to know that the product designed for this purpose really knows its stuff!

VMM allows me to view my SAN storage quickly and to see where I’ve got spare capacity and what may need adjustment. Combine this with VMM’s knowledge of my VMs and any thin provisioning I’ve done and I can see how much storage I have and what would happen if all my thinly provisioned storage suddenly filled up!
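The “what if all my thin storage filled up” question is simple arithmetic once you can see provisioned versus physical capacity in one place, which is what VMM gives you. A rough sketch, with made-up LUN sizes:

```python
# Illustration only: the capacity and LUN sizes are hypothetical numbers.
physical_capacity_gb = 2000                    # raw SAN capacity
thin_luns_provisioned_gb = [500, 800, 1200]    # sizes promised to hosts

promised = sum(thin_luns_provisioned_gb)
overcommit = promised - physical_capacity_gb

print(f"Provisioned: {promised} GB against {physical_capacity_gb} GB physical")
if overcommit > 0:
    print(f"If every thin LUN filled up you'd be {overcommit} GB short!")
```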

Data Protection Manager

I’ve been using DPM since 2011, when our business continuity provider told me about the product. Before then I’d never heard of it, and I don’t know why not! DPM is Microsoft’s answer to Backup Exec, NetBackup, etc., but with one big difference – it’s a Microsoft product, and as such DPM doesn’t play with non-Microsoft products. Currently there are no agents for operating systems from other vendors (maybe SP2, or whatever they’re rebranding service packs to, will bring agents for other OSs).

DPM leverages the Volume Shadow Copy Service (VSS) to perform its backups. Back in the old days, every time you changed a file its archive attribute was set, so the next differential/incremental backup would find the files with the archive attribute set and back them up, resetting the archive attribute in the process. This meant the entire file was backed up. What if the file you changed was 1GB? What if you only changed a lower case “a” to an upper case “A”? The whole file was backed up. What a waste. What if there was a way of backing up only the part of the file that changed? Cue VSS.

By leveraging VSS, DPM makes one complete backup of the file and then tracks changes to it at block level – it knows what changed and where in the file, and backs up just that. Below is a rough example of what happens.


You create a protection group with a retention policy of 3 days on disk. You have a 1GB file, stored on an operating system protected by DPM, that you change every day. When you first protect the OS, DPM creates an entire copy of the file. Each green block below represents a block within the file (not entirely accurate, but enough to show how it works):

VSS - 1

So after DPM has made its initial copy of the file, you make a slight change and VSS tracks it:

VSS - 2

On the next DPM synchronisation it copies only the changed block – no more copying a 1GB file on every backup. Bearing in mind that DPM can synchronise protection groups as frequently as every 15 minutes (great for file shares where users are making lots of changes), it can keep network traffic from peaking and troughing.

So at the next synchronisation DPM knows it already has all the green blocks and the red block. If you then make further changes to the same file, VSS tracks those too; in the example below DPM would only back up the blue blocks.

VSS - 3

After 3 days (as the protection group dictates) the first recovery point is flushed into the original copy of the file. This means that if you restore from the oldest recovery point you have at that time, the copy of the file you get back will contain all the green blocks plus the red block. As time progresses, each expiring recovery point is flushed into the “original copy”. Combine this with long term backup (standalone tape/tape libraries/virtual tape libraries) and you get rapid restoration of recent backups with the safety of long term retention if needed.
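The whole cycle above – an initial full copy, per-sync deltas of changed blocks, and the oldest recovery point being flushed into the base copy once it ages out – can be modelled in a few lines. This is a toy illustration of the concept, not DPM code; the block contents mirror the coloured diagrams:

```python
# Toy model of block-level protection (illustration only, not DPM internals).
RETENTION = 3  # days of recovery points, as in the protection-group example

base = {0: "green", 1: "green", 2: "green", 3: "green"}  # initial full copy
recovery_points = []  # each entry: {block_index: new_content}

def synchronise(changed_blocks):
    """Store only the blocks reported as changed since the last sync."""
    recovery_points.append(dict(changed_blocks))
    # Once a recovery point ages past retention, flush it into the base copy
    while len(recovery_points) > RETENTION:
        base.update(recovery_points.pop(0))

synchronise({1: "red"})              # day 1: one block changed
synchronise({0: "blue", 3: "blue"})  # day 2: two more blocks changed
synchronise({2: "red"})              # day 3
synchronise({1: "green"})            # day 4: day 1's delta flushes into base

print(base)  # oldest restorable copy now includes the day-1 red block
```

Each `synchronise` call stores only a small delta, which is why a one-character edit to a 1GB file no longer costs a 1GB backup.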

Just a quick note on synchronisations and recovery points – you can specify a sync schedule of 15 minutes, but you can only have 64 recovery points (this limit is hard coded in VSS). So if you need 10 recovery points created per day, you can only have a retention period of 6 days (no part days allowed). If you specify a sync schedule of 15 minutes, DPM will copy the changed blocks every 15 minutes but will not create a recovery point until it is scheduled to, at which point it takes all the synchronisations between the last recovery point and the current time, mashes them together and creates a recovery point.
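The 64-point ceiling turns retention planning into simple integer division (whole days only, as noted above):

```python
# Maximum on-disk retention given the 64-recovery-point VSS limit.
MAX_RECOVERY_POINTS = 64

def max_retention_days(recovery_points_per_day):
    """Whole days of retention possible at a given recovery-point frequency."""
    return MAX_RECOVERY_POINTS // recovery_points_per_day

print(max_retention_days(10))  # → 6 days, matching the example above
print(max_retention_days(4))   # → 16 days
```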

As DPM leverages VSS, the above functionality is available for any VSS-enabled application, for example SQL Server, Exchange Server and SharePoint (DPM is amazing with SharePoint). You can even use this with Hyper-V VHDs! And if you’re using a storage appliance that is VSS aware, you can leverage this at the hardware level (much faster than host-level Hyper-V VSS).

You can integrate DPM with Active Directory so that restores can be exposed to end users. For example, combine it with file shares for end-user restores using the “Restore Previous Versions” feature of Windows. This can mean fewer service desk calls (if the users are trained properly).

Other System Center Applications

There are several other SC2012 applications:

  1. System Center Operations Manager: monitors everything in your IT estate; you can even get it to look inside .NET applications to see what is happening in the code and what is slowing the whole thing down or causing it to crash!
  2. System Center Orchestrator: takes your repetitive tasks and automates them. Imagine you have a script that you run to fix a problem, but the problem happens at random times – how can this be automated? If you can capture the problem in Operations Manager, you can get it to execute a runbook in Orchestrator to fix it for you and then tell you what it has done!
  3. System Center Service Manager: basically a service desk solution that hooks into all the other SC2012 products. It uses the best practices found in the Microsoft Operations Framework and the Information Technology Infrastructure Library (ITIL).
  4. System Center App Controller: gives you a portal from which you can manage applications on on-premises clouds and the Windows Azure platform.
  5. System Center Endpoint Protection: Microsoft’s enterprise anti-malware product (here malware includes viruses) that uses ConfigMgr. It is lightweight and uses all the existing software update and reporting infrastructure you’ve built with ConfigMgr.
  6. System Center Advisor: not strictly part of SC2012, but available to you if you have the necessary licensing; it allows you to monitor cloud-based applications.


In SC2012 Microsoft has unified all the management products that were previously available separately. The changes in SC2012 reflect the overall changes in the way IT is used by businesses and end users. SP1 introduced further changes to incorporate the latest developments from Microsoft, including Intune, Azure, Windows 8 (including the Windows Store), Windows Server 2012, Microsoft Desktop Optimisation Pack 2012 (MED-V 2, App-V 5, DaRT), Apple, Linux, etc. (far too many things to list).

Some of the SC2012 suite is evolutionary, some of it is revolutionary!
