Migrating a VMware SharePoint Dev VM to Azure – Part 2

Update: I’ve added disk performance figures for an Azure ‘D’ series virtual machine.

In a previous post I described a process for migrating an on-premises, VMware-based SharePoint development environment up to Azure. Whilst functional in a pinch, the performance of the resulting virtual machine left a little to be desired.

The bottleneck seemed to be disk performance. That's unsurprising given that the original VM was designed and optimised to run on a single SSD capable of a large number of IOPS, rather than the single, relatively slow disk offered by an A series VM in Azure.

First of all, I thought I’d gather some metrics on the performance of the respective disks I was using. Microsoft offers a great tool called DiskSpd for doing exactly this.
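
If you fancy running the same tests, an invocation along these lines exercises a disk with small random IO and reports latency statistics (the parameters below are illustrative rather than the exact profile behind the figures that follow):

    # 8KB blocks, 60 second run, 4 threads, 32 outstanding IOs,
    # random access, 25% writes, capture latency, 2GB test file
    .\diskspd.exe -b8K -d60 -t4 -o32 -r -w25 -L -c2G D:\disk-test.dat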

Here are the salient results after running DiskSpd on some representative disks:

Storage                      I/O per sec    MB/s        Avg. latency (ms)
Local SSD                    28,813.07      225.10      2.22
Local HDD (7,200 RPM)        514.08         4.02        124.87
Azure A series OS disk       515.71         4.03        123.36
Azure A series temp disk     67,390.80      526.49      0.95
Azure A series data disk     492.16         3.84        82.76
Azure D series OS disk       972.78         7.60        65.56
Azure D series temp disk     128,866.03     1,006.77    0.495

Little wonder the VM wasn't performing brilliantly: we've got an incredibly heavy workload (remember, this dev environment is running Active Directory Domain Services, DNS, SQL Server, and SharePoint features including the Distributed Cache, Search and workflow, as well as client applications like Office, a browser with dozens of tabs open and Visual Studio – to name just a few), all running off a single disk capable of approximately 500 IOPS. The SSD it runs from when hosted locally can handle this load because it offers a blistering 28,000+ IOPS – but what can we do in Azure to improve the situation?

Use the temp disk

Looking at the DiskSpd results above, it'd be tempting to use the temp disk for as much of the load as possible – it offers an incredible 67,000+ IOPS after all! (As an aside, I can only imagine this is either a RAM disk or something with crazy fast caching enabled; either way, it's cool, huh?)

Unfortunately, Microsoft go to great lengths to point out that the temp disk assigned to Azure VMs isn’t persistent and shouldn’t be relied upon for storing data long term (more here):

Don’t store data on the temporary disk. This disk provides temporary storage for applications and processes and is used to store data that you don’t need to keep, such as page or swap files.

This rules the temp disk out for much of what we're doing; however, there's a clever technique which enables us to configure SQL Server to use the Azure temp disk for its TempDB data and log files.

This would certainly have a positive impact on performance – both the TechNet documentation on Best practices for SQL Server in a SharePoint Server farm and Storage and SQL Server capacity planning and configuration (SharePoint Server 2013) state that TempDB should be given priority for the fastest available storage – and the nature of TempDB (it's re-created each time SQL Server starts) lends itself well to this solution.
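
As a rough sketch of the technique (the instance, folder and file names here are illustrative choices rather than anything prescriptive):

    # Move TempDB's files to the Azure temp disk (D:). Note the temp disk
    # is wiped when the VM is redeployed, so the folder needs recreating
    # at boot - e.g. via a scheduled startup task.
    New-Item -Path 'D:\TempDB' -ItemType Directory -Force | Out-Null
    Import-Module SQLPS -DisableNameChecking
    $sql = "ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'D:\TempDB\tempdb.mdf'); " +
           "ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'D:\TempDB\templog.ldf');"
    Invoke-Sqlcmd -ServerInstance "localhost" -Query $sql
    Restart-Service -Name MSSQLSERVER -Force   # takes effect after a restart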

Other than storing TempDB, I couldn't think of anything else transient enough to suit the Azure temporary drive.

Add more disks

As the testing with DiskSpd shows, each data disk we attach to our Azure virtual machine provides ~500 IOPS. Different sized Azure VMs allow you to attach different numbers of data disks. We’re using a relatively large virtual machine (an A6) so can attach up to 8 additional disks. How we use these disks is where things get interesting:

  • We could attach disks, allocate each a drive letter, then move SQL databases and log files onto their own drives – for example: one drive for content database log files, one for search databases, one for content databases and one for all the other databases. This would give 500 IOPS to each.
  • We could attach multiple disks, then use Storage Spaces to pool them all, creating a single volume made up of multiple disks. This one volume would offer a higher number of IOPS, although it isn't a linear 500 × number-of-disks relationship: I tested 1, 2, 3 and 4 disks pooled and found IOPS went from 492 to 963 to 1,434 to 1,919. (There's a sketch of the pooling commands after this list.)
  • Finally, we could use a hybrid of the two approaches: pool multiple disks for performance-dependent files; use just a couple of pooled disks for less IOPS-sensitive data; and use single disks for latency-agnostic requirements.
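
Here's a rough sketch of that pooling with the Storage Spaces PowerShell cmdlets (the pool, disk and volume names are illustrative):

    # Pool every attached data disk that's eligible, then create a single
    # striped ('Simple' resiliency) volume across all of them
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "SQLPool" `
                    -StorageSubSystemFriendlyName "Storage Spaces*" `
                    -PhysicalDisks $disks |
        New-VirtualDisk -FriendlyName "SQLDisk" `
                        -ResiliencySettingName Simple `
                        -NumberOfColumns $disks.Count `
                        -UseMaximumSize |
        Initialize-Disk -PartitionStyle GPT -PassThru |
        New-Partition -AssignDriveLetter -UseMaximumSize |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel "SQLData" -Confirm:$false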

Note: it's important to remember that while Storage Spaces is sometimes used to provide resiliency through RAID-like features such as mirroring and parity, we're only interested in the performance improvements it gives us – resiliency is afforded by the underlying Azure infrastructure (writes to VHDs are replicated synchronously to three locations in the current datacenter and, potentially, asynchronously to a further three locations in a paired datacenter).

Use an Azure virtual machine with faster storage

Instead of an 'A' series virtual machine, we could look at using the already available 'D' series. The primary difference these offer is an SSD-based temp disk. From the performance figures the 'D' series disks return, we see a slight increase in the performance of the OS disk and frankly staggering performance from the temp disk. Although they're considerably more expensive, the faster storage included with a 'D' series VM would clearly help.

Incidentally, Microsoft recently announced a new tier of virtual machines called the 'G' series. These are primarily billed as providing larger local SSD storage options. For our purposes (where we can only really take advantage of the temp disk for our SQL Server's TempDB), I don't think this will help us much.

When they announced the 'G' series VMs, Microsoft also announced a new, SSD-based 'Premium Storage' offering. Details are trickling out, but it sounds like we'll be able to employ the techniques already described (attaching additional data disks – this time SSDs – and striping across them using Storage Spaces) to provide very fast storage for our SQL requirements.

Consider another cloud provider

The final approach to consider, if we want to provision a virtual machine with potentially faster storage, is an entirely different public cloud provider. Among the options is Amazon's AWS offering. I haven't used AWS a huge amount, but their i2.xlarge instance offers 4 CPUs, 30GB of RAM and a single 800GB SSD, which sounds like a great fit for what we need.

SQL legend Brent Ozar recently tested AWS VM performance and compared it with Azure. His findings, for the specific scenario he considers, show AWS to be cheaper and better performing.


In summary, we can either increase the IOPS available to our virtual machine for little additional cost – adding disks, striping across them and reconfiguring SQL Server to spread the load across them – or we can take advantage of higher-performance public cloud storage, though this may incur an increased cost.

For my own scenario, I can see that having a SharePoint development environment which works reasonably well and can be spun up quickly will enable developers to start work right away whilst appropriate hardware is ordered and provisioned for them to use on-premises in the longer term.

Migrating a VMware SharePoint Dev VM to Azure

My normal SharePoint 2013 development and testing environment is a single VMware virtual machine. It runs Windows Server 2012 R2 and has installations of SQL Server 2012, SharePoint 2013 and Visual Studio 2013. To accommodate all of this, I have a big, beefy desktop machine with 32GB of RAM and a couple of solid-state drives: one for the host operating system and applications, and another to host the virtual machine's disk.

A pretty typical setup, I think?

If a developer starts work on a SharePoint project they may not have a machine capable of hosting that environment right away. Often one needs to be ordered in for them, which may take days or weeks. Likewise, they may only need access to a development environment temporarily.

In order to provide swifter access to a working environment, I thought it might be interesting to explore converting my existing local SharePoint development VM to run in Azure.

Here are the steps I took to achieve this and what I learnt from doing it.

Stage 1: Create the VHD

VMware virtual machines use a disk format (VMDK) which isn't compatible with Azure. The first step is therefore to convert the disk to the compatible VHD format.

  1. Inside my running VM I download the Microsoft Disk2VHD tool.
  2. Next, I run Disk2VHD and un-tick the “Use VHDX” option in order to make sure we create a VHD disk rather than the newer VHDX format.
  3. In my scenario I want to create the VHD on a disk on the machine hosting the VM. I can do this by specifying the host PC’s IP in the “VHD File name” dialog box then specifying the admin share of the drive in question (f$ below).
  4. The creation of the VHD will take some time so this is an ideal opportunity to grab a nice cup of tea and a biscuit.

(Optional Stage 1.5: Test VHD in Hyper-V)

Before uploading your newly created 100GB+ VHD to Azure and hoping it’ll work, you can try testing it locally. You’ll need a Hyper-V server with enough resources to host it but it should just be a case of creating a new VM with your VHD as the disk.
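
If you do have a host available, something along these lines should do it (the VM name, memory allocation and path are illustrative):

    # Create a Hyper-V VM around the converted VHD to check it boots
    New-VM -Name "SP2013DevTest" -MemoryStartupBytes 16GB -VHDPath "F:\VHDs\SP2013Dev.vhd"
    Start-VM -Name "SP2013DevTest"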

Note: I haven’t got a Hyper-V host to hand with enough memory so I’m skipping straight to popping my disk into Azure.

Stage 2: Upload the VHD to Azure

Once stage 1 is complete, we should have our VHD ready to upload to Azure.

To proceed you’ll need to ensure you have access to an Azure subscription, a storage account within that subscription and the Azure PowerShell module installed and configured on your local machine.

Once that's all ready, it's really just a case of running the "Add-AzureVhd" PowerShell command, specifying the "LocalFilePath" parameter pointing at our newly created VHD file, as well as the "Destination" parameter.

The destination is the location we want to copy the VHD to. This is made up of the URL for our storage account, the container within the storage account that the VHD will be stored in and the name we're going to give the VHD file in Azure. Mine will look a bit like this (the storage account name and paths here are illustrative):
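
    Add-AzureVhd -LocalFilePath "F:\VHDs\SP2013Dev.vhd" `
                 -Destination "https://mystorageaccount.blob.core.windows.net/vhds/SP2013Dev.vhd"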

What's nice about this command is that it does sensible things like creating an MD5 hash of your local file and checking that the uploaded file has the same hash upon completion. It also supports resuming your upload should it be interrupted.

When the upload completes you should be able to browse to your Azure storage account, drill down to the vhds container and see your uploaded VHD.

Stage 3: Create a new VM image from the uploaded VHD

Back in the Azure management portal we need to tell Azure this isn't just any old VHD – it's something it can use as a template to create new VMs from. We do this by creating an "Image" from the VHD.

  1. Navigate to the “Virtual machines” section, click the “Images” heading then choose the “Create an image” option.
  2. On the "Create an image" screen, give your image a sensible name and description, then select the VHD you uploaded earlier. You also need to specify that the VM is running Windows and tick the "I have run sysprep" box (even though we haven't!)
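
The same can be achieved from PowerShell – a minimal sketch, assuming the storage account and VHD name used earlier:

    Add-AzureVMImage -ImageName "SP2013DevImage" `
                     -MediaLocation "https://mystorageaccount.blob.core.windows.net/vhds/SP2013Dev.vhd" `
                     -OS Windows -Label "SharePoint 2013 dev image"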

Stage 4: Create a VM from the image

Having created an image from the uploaded VHD, when we now go to create a VM we can drill down into the "My Images" section of the gallery and find our newly created image.

When we create a VM from the image it will take a long time to provision and start up. If you think about it, we've effectively pulled the rug out from under our VM and moved it from VMware's virtual hardware to Hyper-V's. It has to rediscover that hardware and set up drivers to be able to talk to it.

This is another great opportunity for a cup of tea and a biscuit.

Once the VM has completed provisioning you should be able to connect to it over RDP and start using it.

Stage 5: Troubleshooting

VM is unresponsive to Remote Desktop

I've had mixed results with this approach. On more than one occasion the VM provisioned and started up fine but I couldn't connect to it over RDP. I was able to ping it and even connect via PowerShell, so I knew it was running; it just wouldn't allow me to remote into it.

I followed some instructions on re-enabling Remote Desktop remotely, restarted the Remote Desktop Services (TermService) service and eventually it started working, but I'm not entirely comfortable not knowing why it stopped in the first place.
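
For reference, the gist of what I ran (via remote PowerShell – treat this as a sketch rather than a recipe):

    # Re-enable Remote Desktop connections in the registry, make sure the
    # firewall allows them, then restart the service
    Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server' `
                     -Name fDenyTSConnections -Value 0
    Enable-NetFirewallRule -DisplayGroup 'Remote Desktop'
    Restart-Service -Name TermService -Force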

Errors in SharePoint

Once connected to my VM, the first thing I did was fire up SharePoint Central Admin. It took a while, but eventually I was presented with nasty-looking IIS / .NET error pages. A little digging revealed that the network card installed when the VM was created in Azure is configured to use DHCP for both its IP address and DNS.

As my standalone VM is configured to be its own DNS server, I simply had to change this to use the loopback address (127.0.0.1) for DNS and things were happy again.
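
That's a one-liner in PowerShell (the interface alias may differ on your VM):

    # Point the NIC's DNS at the VM itself; the IP address stays on DHCP
    Set-DnsClientServerAddress -InterfaceAlias 'Ethernet' -ServerAddresses 127.0.0.1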

Stage 6: Using the VM

Whilst it just about works, using the VM on an ongoing basis would be pretty painful. It's hampered by the low disk performance Azure provides. I grabbed a couple of screenshots from the Resource Monitor tool to illustrate this. Of particular interest is the "Response Time" column I've highlighted, which shows how long each process is having to wait for a response from the underlying storage infrastructure.

On the local copy of the VM, hosted on a dedicated SSD, response times are generally less than 5 ms:

Local VM hosted on a dedicated SSD.

By way of contrast, the same VM hosted in Azure shows response times in the thousands of milliseconds:

VM hosted in Azure on a single disk.


Clearly, this particular VM was originally architected to take advantage of a single, fast SSD. Doing a 'lift and shift' and just dumping it in Azure was never going to provide an optimal solution. Having said that, the fact that it works at all, and is useful in a pinch, is pretty impressive.

The key takeaway should be that demanding workloads such as SharePoint can be implemented successfully on the Azure platform; however, there are a number of very specific considerations which should be taken into account when designing and implementing the underlying infrastructure. Expecting existing VMs to migrate across and work just as well as they did on-premises is naïve.

I’ll be sure to create a follow up to this post with some lessons I’ve learnt from implementing SharePoint and other demanding workloads in Azure.

Sorting the SharePoint Quick Launch Menu in PowerShell

I recently had to work with a SharePoint site which had accumulated an enormous number of links in its left-hand (or 'quick launch', in SharePoint parlance) menu. Unfortunately, I wasn't able to activate the 'SharePoint Publishing Infrastructure' feature for the site in question, meaning I couldn't use the automatic sorting functionality. Instead, all the interface offered was a clunky list of items with a pull-down menu to specify the position each should appear in.

Unsorted quick launch and rudimentary sorting interface.

A quick search turned up lots of examples of how to add and remove individual links or how to export/import via an XML file, but sorting seems not to come up. (Maybe everyone else is better at managing what ends up in their navigation elements?) Regardless, armed with my trusty PowerShell ISE, I reckoned I could whip something up to fulfil my requirement.

My plan was simple:

  1. Get the specified site’s quick launch.
  2. Create a sorted copy of the links in the quick launch.
  3. Clear the current quick launch.
  4. Copy back in my sorted list of links.
  5. Treat myself to a nice cup of tea and a biscuit.

Here's the quick and dirty solution I came up with (reconstructed below as a sketch – your site URL and sort key will differ):
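
    # Sketch of the approach - the site URL is illustrative and links are
    # sorted alphabetically by title
    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    # 1. Get the specified site's quick launch
    $web = Get-SPWeb "http://sharepoint/sites/teamsite"
    $quickLaunch = $web.Navigation.QuickLaunch

    foreach ($heading in $quickLaunch) {
        # 2. Create a sorted copy of the links under this heading
        $sorted = $heading.Children |
            Sort-Object Title |
            ForEach-Object { @{ Title = $_.Title; Url = $_.Url } }

        # 3. Clear the current quick launch links
        while ($heading.Children.Count -gt 0) {
            $heading.Children.Delete($heading.Children[0])
        }

        # 4. Copy back in the sorted list of links
        foreach ($link in $sorted) {
            $node = New-Object Microsoft.SharePoint.Navigation.SPNavigationNode($link.Title, $link.Url)
            $heading.Children.AddAsLast($node) | Out-Null
        }
    }

    $web.Dispose()
    # 5. Tea and biscuit.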

That fixed my immediate problem but when I get five minutes I’ll come back and make a function out of it. I’d like to be able to specify the site and the specific section I want sorted, perhaps with the ability to sort the top-level sections too.

SharePoint Newsfeed app for iOS Updated

Microsoft have made it abundantly clear that the public cloud-based Yammer application is their recommended social tool for use with SharePoint and their other cloud-based services. They've also made it clear that the social features present in SharePoint 2013 won't be developed further.

Imagine my surprise, then, when I saw that the SharePoint Newsfeed app I have installed on my iPad received an update earlier this month. For anyone who hasn't used it, this app connects to a user's SharePoint Newsfeed and presents its content in a fairly compelling format.

The update note states, “This release is focused on improving stability and fixing bugs.”

No new features it seems but it’s nice to see it’s being given some love.

SharePoint Newsfeed App for iOS

SharePoint Newsfeed App for iOS update note.