Blog Archives

Modern Style Visio Stencils for Operations Manager

I have created some more modern style Visio stencils for System Center. This time for Operations Manager!

You can download them from here.

You can see what they look like here, but download them, they’re free!

[Screenshots: OpsMgr1, OpsMgr2 and OpsMgr3 stencil previews]

MDT and DaRT – Locking the Port Used for Remote Connections during OSD

The Microsoft Deployment Toolkit (MDT) brings a lot of functionality to operating system deployment (OSD), as I’m sure many of you are aware. One of the best features is the ability to incorporate the DaRT tools into the MDT boot WIM. This allows deployment administrators to remotely connect to a device during OSD. This can be extremely useful in a situation where the device is not local to the admin.

Johan Arwidmark has a great post on how to integrate the tools into the MDT environment with ConfigMgr.

One of the issues with the default DaRT configuration is that remote connections use a dynamic RPC port instead of a specific port during OSD. It is possible to lock down the port when using DaRT in its fully fledged mode; however, locking it down to a specific port during the OSD phase is not easy.

I’ve recently been working with a customer who has VERY strict firewall policies in place and would not allow dynamic RPC ports to be opened from the ConfigMgr Primary Site Server VLAN to the client device VLAN. This led me to investigate how to lock the port used by DaRT for remote connections during OSD.

After trying several different options, including adding a customised DartConfig.dat file to the base Toolsx86.cab file, I was almost at the point of giving up. But I didn’t.

Using the DaRT Recovery Image Wizard I created a DaRT image for Windows 8.1 Update and, on the Remote Connection tab, enabled the option to Allow Remote Connections and specified a port to use, in this case 3389, as this was what the customer wanted:

[Screenshot: Allow Remote Connections option]

During the process I ticked the option to edit the image before the WIM was created:

[Screenshot: Edit Image option]

I then opened the location where the WIM contents were stored and navigated to the Windows\System32 folder to extract the customised DartConfig.dat file:

[Screenshot: DartConfig.dat file in Windows\System32]

This file was then copied to a new folder in which I’d created the folder structure Windows\System32:

[Screenshot: Extras folder structure]
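As a rough PowerShell sketch of that step (the C:\DaRTImage\Mount and C:\DaRTExtras paths below are just examples; use wherever the wizard mounted the image contents and wherever you want to build your extras folder):

# Example paths only - adjust to your own mount point and extras location
$dartConfig = 'C:\DaRTImage\Mount\Windows\System32\DartConfig.dat'
$extrasRoot = 'C:\DaRTExtras'

# Recreate the Windows\System32 structure inside the extras folder
New-Item -Path (Join-Path $extrasRoot 'Windows\System32') -ItemType Directory -Force | Out-Null

# Copy the customised DartConfig.dat into the matching location
Copy-Item -Path $dartConfig -Destination (Join-Path $extrasRoot 'Windows\System32\DartConfig.dat') -Force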

I then finished the DaRT Recovery Image Wizard and started to create a new boot image in ConfigMgr using the “Create Boot Image using MDT” option. During the creation wizard I ticked the “Add extra files to the new boot image” option and pointed it at the UNC path of the folder I had created above:

[Screenshot: Add extra files to the new boot image option]

This created the boot image and, crucially, overwrote the default DartConfig.dat file with the one I created earlier. This meant that, for all Task Sequences using this boot image, the customer would be able to connect to the device using the DaRT Remote Control option in MDT on port 3389 at all times.

[Screenshot: DaRT remote connection listening on port 3389]

Scripting Shared Nothing Live Migration

UPDATE: 16th September 2016 – Link to download fixed.

I was working with a customer recently to replace their existing Windows Server 2012 Hyper-V clusters and System Center 2012 SP1 Virtual Machine Manager (VMM) installation with new Windows Server 2012 Hyper-V clusters and System Center 2012 R2 Virtual Machine Manager installation.

The customer was concerned about downtime for moving their Virtual Machines (VMs) from their existing clusters to the new ones.

We looked at the option of using Shared Nothing Live Migration (SNLM) to move VMs between the clusters. Whilst that was an option, it wasn’t entirely realistic: there were in excess of 250 VMs, and because the names of the Logical Switches were different each VM took some time to process manually, and a manual, repetitive task like that is prone to errors. The customer thought they’d have to go through migrating roles, moving CSVs, taking down VMs and so on. Whilst that doesn’t sound too bad, I wanted to offer a better option.

So looking at the options in PowerShell it was obvious that Move-VM was the cmdlet I wanted to use. Looking at the parameters I found -CompatibilityReport, which “Specifies a compatibility report which includes any adjustments required for the move.” My first thought was: where do I get one of those from?

After a bit of digging on the internet I discovered Compare-VM which creates a Microsoft.Virtualization.Powershell.CompatibilityReport.

The Compatibility Report fundamentally contains information on what would happen if, in this case, I wanted to move a VM from one host to another.

So running:

Compare-VM <VMName> -DestinationHost <DestinationServer> -DestinationStoragePath <DestinationStoragePath> -IncludeStorage

gave me a Compatibility Report with some incompatibilities listed… Again, after some digging, I determined what these incompatibilities meant and how to resolve them.

I could then run Compare-VM -CompatibilityReport <VMReport>, which essentially asks, “if I did this to the VM, would it work now?” As long as you get no incompatibilities, all is good!

Once that completed we could use the Move-VM -CompatibilityReport <VMReport> cmdlet to move a VM from one host to another…

Now whilst all these Compare-VMs are underway the source VM is quite happy existing and running as normal.
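To pull those pieces together, here is a minimal sketch of the flow. The VM, host, path and switch names are placeholders, and the fix-up shown only handles reconnecting network adapters (the type check against Microsoft.HyperV.PowerShell.VMNetworkAdapter is my assumption about how that particular incompatibility surfaces, so treat it as illustrative rather than definitive):

# Ask Hyper-V what would happen if we moved the VM (the source VM keeps running throughout)
$report = Compare-VM -Name 'VM01' -DestinationHost 'NEWHOST01' -DestinationStoragePath 'C:\ClusterStorage\Volume1\VM01' -IncludeStorage

# Example fix-up: reconnect any network adapters flagged as incompatible to the new Logical Switch
foreach ($item in $report.Incompatibilities) {
    if ($item.Source -is [Microsoft.HyperV.PowerShell.VMNetworkAdapter]) {
        Connect-VMNetworkAdapter -VMNetworkAdapter $item.Source -SwitchName 'LogicalSwitch1'
    }
}

# "If I did this to the VM, would it work now?" - re-run the comparison against the report
$report = Compare-VM -CompatibilityReport $report

# No incompatibilities left, so perform the Shared Nothing Live Migration
if (-not $report.Incompatibilities) {
    Move-VM -CompatibilityReport $report
}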

So where is this going? After discussions with the customer I expanded the PowerShell script to cope with multiple VMs, check for Pass Through Disks, remove VMs from clusters, etc.

The basics of the script are that it requires several parameters:

  • SourceCluster – where are the VMs to move?
  • DestinationServer – where do you want to move the VMs to? (optional, if this isn’t specified then a random member of the destination cluster is chosen for each VM to be moved)
  • DestinationCluster – what cluster do you want to move the VMs to?
  • SwitchToConnectTo – what is the name of the Virtual Switch to use on the destination server/cluster? For example, if your VM/VMs are connected to a virtual switch called LogSwitch1 but your new cluster uses a virtual switch named LogicalSwitch1, you would specify LogicalSwitch1 for this parameter.
  • DestinationStoragePath – where do you want to put the VM’s storage on the destination cluster?
  • VMsToMove – this is a list of the VMs to be moved
  • LogPath – the path to a file you want to log the progress of the script to (optional)

Whilst this script may seem a little limited, it managed to save the customer a great deal of time in migrating their VMs from their old Hyper-V clusters to their new ones. It can be extended to use different Storage Paths for different VMs, different Virtual Switches, etc.
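The actual script is linked below, but as a rough illustration of its shape (parameter names as listed above, the body heavily abridged, and the Get-ClusterNode/Get-Random line shown purely as one way of picking a random destination node):

param (
    [Parameter(Mandatory)][string]   $SourceCluster,
    [string]                         $DestinationServer,
    [Parameter(Mandatory)][string]   $DestinationCluster,
    [Parameter(Mandatory)][string]   $SwitchToConnectTo,
    [Parameter(Mandatory)][string]   $DestinationStoragePath,
    [Parameter(Mandatory)][string[]] $VMsToMove,
    [string]                         $LogPath
)

foreach ($vmName in $VMsToMove) {
    # If no destination server was specified, pick a random node in the destination cluster
    if ($DestinationServer) { $target = $DestinationServer }
    else { $target = (Get-ClusterNode -Cluster $DestinationCluster | Get-Random).Name }

    # Compare-VM against $target, reconnect adapters to $SwitchToConnectTo, then Move-VM
    # (as shown earlier), logging progress to $LogPath if it was supplied
}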

[Download: Move-VMusingSNLM script]

Where are Microsoft heading with the 2012 R2 releases?

So last week I attended my first TechEd Europe in Madrid. I won my place through the Microsoft TechNet UK TechEd Challenge (say that after a few pints…) for my System Center 2012 blog post.

Never in 4 days have I learnt so much!

With the new versions of Windows Server 2012 R2 and System Center 2012 R2 announced at TechEd North America it was the turn of Europe to see what Microsoft had to offer with the latest versions. It is fair to say they’ve not disappointed anyone (that much) with the upcoming releases.

First of all – TechEd

Wow. So it was my first Microsoft conference, ever, and I enjoyed every minute. It was great to see so many of the Product Managers, Marketing Managers and downright technical geniuses that had made the trip over to share their enthusiasm for the next release. So out of all the sessions I could’ve possibly attended I only missed one – mainly due to my brain trying to process the sheer quantity of information!

It’s clear to see the preview releases are very stable (no BSODs during demos) and there were no pre-recorded demos (unlike some other vendors).

Le Caveat

Everything below is my opinion and should be treated as such. No Microsoft employee has confirmed any of the information below, it is purely my personal speculation.

So what is Microsoft’s vision?

Everything in Windows Server 2012 R2 and System Center 2012 R2 is based on the “Cloud OS” vision (I saw that slide so many times…) where there are 3 clouds:

  1. Private cloud: on-premises cloud powered by Windows Server 2012 (and R2 when released), System Center 2012 (and R2 when released), SQL Server 2012 (and 2014 when released), the Service Provider Framework and the Windows Azure Pack (which is awesome). This is generally seen as the starting point for everything; it doesn’t have to be, but if you’ve got it on-premises the rest is easy.
  2. Public cloud: this is Microsoft’s Azure cloud. It is powered by Windows Server 2012 (and R2 when released), System Center 2012 (and R2 when released), SQL Server 2012 (and 2014 when released) and the Azure Services (full blown) and some mega storage system using commodity hardware – no specialist SAN.
  3. Service Provider cloud: again running on Windows Server 2012 (and R2 when released), System Center 2012 (and R2 when released), SQL Server 2012 (and 2014 when released), the Service Provider Framework and the Windows Azure Pack (which is awesome – still). The idea here is for value added services from a service provider, customer choice – especially around data locations (think data laws).

This leads to the “one consistent platform” message; if your internal users can provision services using the Windows Azure Pack, they can use full blown Azure and Service Provider implementations too; no more learning of several different portals.

So what is enabling this?

The core new features of Windows Server 2012 R2 are going to change the game when it comes to Cloud.

  • Shared VHDX – enables guest clustering without having to expose directly mapped storage to guest VMs. This gives you all the features needed for upgrading underlying infrastructure whilst maintaining availability of guest VMs. Storage Migration will allow you to move the VM’s storage (i.e. the shared VHDX) whilst the underlying hardware is maintained, upgraded, replaced, etc. As an internal provider this will make my life so much easier (edit: this point needs to be clarified with some testing, I think I may have this wrong…): I can remove all my directly mapped LUNs and just use Shared VHDX files for the storage. Don’t use snapshots!
  • Online VHDX resize – the ability to change the size of the disk attached to a VM (grow AND shrink) without having to take the VM offline! Note: you still need to change the size of the partition within the guest; some clever use of PowerShell/System Center Orchestrator (provided the guest trusts the Orchestrator install) can do that, however it will require some effort to implement, it isn’t in the box (there’s a quick host-side example after this list)
  • Storage QoS – you can now tune the number of IOPs on a virtual disk. No more IOP hoggers! I believe this only extends to additional disks, not disks with OSs in. As such, applications like SQL that love IOPs will have to be configured correctly in guest for the Hyper-V provider to take advantage here (follow MS best practice and you’ll be fine; again, see the example after this list)
  • Live Migration compression – in Windows Server 2012 R2 this will come enabled by default. Most virtualisation hosts are constrained by the amount of RAM they have to offer guests rather than the CPU cycles they can offer. Compression uses spare host CPU cycles to compress the Live Migration of RAM and you can move a VM at twice the speed (if not more). If you’ve got RDMA NICs (and multiples of) then the speed of your RAM will matter (that is not a typo). SMB direct RDMA offloads everything from the system to the NIC cards
  • Extended replica – instead of just being able to replicate to one other host you can replicate a replica. Perfect for Service Providers who offer replica as a service; they’re able to replicate the customer’s VMs to another host/data centre without having to have crazy expensive SANs
  • Hyper-V Network Virtualisation Gateway – until the Friday morning of TechEd I referred to this as the “magic gateway”, I just couldn’t figure out how it worked. After attending this session it all became very clear. This appears to be the brains behind the Virtual Networks offering on the Azure public cloud, the load balancer and all the other excellent networking offerings in Azure
  • Windows Server 2012 R2 Tiered Storage Spaces – on the surface this seems to be the StorSimple technology migrated to Windows Server 2012 R2. By tiering the storage available on Storage Spaces, Windows Server will move the most read/written blocks (blocks not files – blocks could contain files, think VDI deduplication here) to the fastest storage available; this could be SSD, 15K disks, etc. This tiering gives amazing IOPs, especially when combined with CSV caching in memory. Best of all – it just uses JBODs on the back end! As I understand it, at the moment you can only have 8 nodes in a Scale-out file cluster for this
  • Linux backups of Hyper-V guests – no longer will a VM pause when it is being backed up at the host level (provided your Linux version is correct). Microsoft have shied away from saying they’ve implemented VSS inside Linux but it is basically what they’ve done
  • Oracle support on Hyper-V – this is probably the final hurdle for high-end enterprise adoption of Hyper-V
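As a quick, hedged sketch of two of those features on a 2012 R2 host with the Hyper-V PowerShell module (the VM name, VHDX path, controller location and sizes below are made up, and as noted above the in-guest partition still has to be extended separately):

# Grow a running VM's data VHDX online (the disk needs to be attached to a SCSI controller)
Resize-VHD -Path 'C:\ClusterStorage\Volume1\VM01\Data.vhdx' -SizeBytes 200GB

# Storage QoS: cap a noisy data disk at 500 IOPS and reserve it a minimum of 100 IOPS
Set-VMHardDiskDrive -VMName 'VM01' -ControllerType SCSI -ControllerNumber 0 -ControllerLocation 1 -MaximumIOPS 500 -MinimumIOPS 100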

At TechEd the focus was VERY heavy on the “Cloud OS” vision and how System Center 2012 R2 and the Windows Azure Pack was going to power that throughout:

  • System Center Virtual Machine Manager (VMM) is the king maker. VMM will now deploy VM hosts from bare metal, VMs to hosts (whether that be Hyper-V, VMware or Citrix hosts) and with R2 it will deploy Scale-Out file servers for hosting VM storage from bare metal! Allegedly this list will increase too. Microsoft have stated that there is no reason why you shouldn’t move your workloads to VMs – this is squarely aimed at SQL server workloads. With the ability to SysPrep a SQL server you can now deploy them directly to VMs. Server App-V brings another string to VMM’s bow. As far as I can see Microsoft are targeting VMM at deploying all server workloads – physical and virtual.
  • Windows Azure Pack. This is enabling end-user provisioning of services from pre-defined templates (created in VMM) with an interface that is consistent with the Microsoft Azure Cloud. The Azure Pack sits between the end-user and the Service Provider Framework (this sits in front of System Center) and can be skinned to corporate colours. At a basic level it tells VMM what to do (via service templates) and as such does not necessarily require you to have Hyper-V as your virtualisation host – it will work with VMware and Citrix too. Best of all – it’s extensible. Microsoft will add more services over time and you can add your own in too.

So what about the other System Center components?

Data Protection Manager was relatively quiet; it was confirmed that this component will be able to use a clustered SQL server for its database, but there will be no push to cluster DPM. You can make DPM highly available by running it as a VM on a Hyper-V failover cluster. You should be able to use VHDX files to store the DPM backups (this will remove the final pass-through disk in my DPM setup) – these will need to be fixed size though and will probably not support online resize – DPM can get very angry about other applications playing around with its disk(s).

I heard very little mention of System Center Configuration Manager 2012 R2 at TechEd. I may have been in the wrong sessions. With VMM taking over the role of deploying servers and ConfigMgr having tighter integration with Windows Intune I see it becoming the client OS manager. Combine it with MDT and it is an extremely effective tool for desktop deployment and compliance monitoring. When it comes to Patch Management VMM already has the hosts, so how long until it starts looking after guests? Admittedly ConfigMgr gives you all the reports, at the moment…

Operations Manager – there were some further strides forward, especially for monitoring Java applications. System Center Advisor is now baked in to the application (this is Microsoft’s cloud-based monitoring that uses information gained from customers to ensure your installations are in tip top condition).

I didn’t hear anything about App Controller or End Point Protection.

Summary

The order of products to learn inside and out for effective Microsoft Cloud OS are:

  1. Windows Server 2012 R2: this is the base for everything. Microsoft runs on Microsoft best (or something similar – I’m sure the MS marketing team can correct me here). Once you know how Windows Server works, especially Hyper-V, you’ve got the foundations for your cloud
  2. System Center Virtual Machine Manager: this rules your cloud. VMM provisions and controls your cloud. I cannot stress how important this product will be in the next 12 months and far into the future
  3. System Center Operations Manager: this will monitor your cloud and all the applications running in it. There’s no point in having a bunch of amazing hardware if the services you’re running are performing like a 90 year old in a 100 metre sprint. I’d include System Center Advisor in here too
  4. Windows Azure Pack: this is the front door to your cloud. It makes end-user provisioning of services much easier. You can also customise the pack, not only through colour schemes but you can add your own items in there too
  5. Data Protection Manager: no point in having an amazing cloud if you can’t restore data when you/your customer has a problem
  6. Service Manager: the perfect solution for service desk management, CMDB; it integrates with all the System Center components and offers rich reporting.
  7. App Controller: the key to where services get provisioned. From here you can provision services on premise or in the Cloud
  8. Orchestrator: the key to automation. Orchestrator can talk to all the System Center components, Windows Server 2012, Active Directory, Exchange, SQL Server (the MS list goes on and on) and a vast array of non-Microsoft software including BMC Remedy, VMware, etc.
  9. Configuration Manager: this is important to provide rich compliance information, integrated anti-malware protection, etc. I do believe however that with Desired State Configuration in Windows Server 2012 R2 the compliance monitoring aspects of ConfigMgr for servers will be used less and eventually be deprecated

With the alignment of Windows Server and System Center build/deployments Microsoft are making the life of an IT Pro much easier! Unlike when Windows Server 2012 was released there should be no delay in getting the management components up and running too.
