Migrating Azure Virtual Machines To Premium Storage

In this post I’m going to cover migrating Azure Virtual Machines to Premium Storage.

Converting your current Azure VMs from Standard Storage to Premium Storage cannot be done from the portal. The process is a mix of scripting and manual steps, although it can be fully scripted if need be. In short, you have to create a new Premium Storage account, copy the VHD blobs you need and then recreate the virtual machines using the copied VHDs. In this example I’m migrating a VM disk to create a single new VM, not an image that will be used to create multiple VMs.

Preparation
Make sure Premium Storage is available in the Azure region you want to use and check which VM types you can use with Premium Storage; you need *S type VMs like the DS and GS series. Take note of the extra cost involved in using Premium Storage and the performance characteristics of the different drives and VMs here.
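If you prefer to check this from PowerShell, the classic (Service Management) Azure module can show what a region offers. A minimal sketch, assuming the classic module is installed; the exact properties returned (for example the list of storage account types and VM role sizes) vary between module versions, so inspect the output:

# Inspect a region to confirm Premium Storage and DS/GS sizes are offered there
Get-AzureLocation | Where-Object { $_.Name -eq "West Europe" } | Format-List *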

Step 1
You have to shut down your VMs to migrate them, and it takes a while, so be prepared for downtime or use availability sets. You will need the URI of each drive attached to your VM in order to copy it. In the new portal you can see the URI when you click on the disks of the VM; in the old portal there is a separate Disks tab under the Virtual Machines section.
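This step can also be scripted. A sketch using the classic cmdlets, with placeholder service and VM names:

# Stop and deallocate the VM so the VHD leases are released
Stop-AzureVM -ServiceName "your_service" -Name "your_vm" -Force

# Read the URI (MediaLink) of the OS disk and any data disks
$vm = Get-AzureVM -ServiceName "your_service" -Name "your_vm"
($vm | Get-AzureOSDisk).MediaLink
$vm | Get-AzureDataDisk | Select-Object DiskName, Lun, MediaLink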

Step 2
In the portal create your new Premium Storage account in the desired region and create a blob container to store the hard disks. Make sure you choose Premium Storage as the account type.
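If you would rather script it, a sketch with placeholder names; note the Premium_LRS account type:

# Premium Storage accounts use the Premium_LRS account type
New-AzureStorageAccount -StorageAccountName "your_dest_account" -Location "West Europe" -Type "Premium_LRS"

# Create a container in the new account to hold the copied VHDs
$key = (Get-AzureStorageKey -StorageAccountName "your_dest_account").Primary
$ctx = New-AzureStorageContext -StorageAccountName "your_dest_account" -StorageAccountKey $key
New-AzureStorageContainer -Name "vhds" -Context $ctx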

Step 3
Next you have to copy the VHD blobs to the new storage account; you can use PowerShell or third-party tools with a GUI like CloudBerry.

$sourcevhd = "https://source_account/vhds/MyDisk.vhd"
$srcContext = New-AzureStorageContext -StorageAccountName your_source_account -StorageAccountKey your_source_account_key
$dstContext = New-AzureStorageContext -StorageAccountName your_dest_account -StorageAccountKey your_dest_account_key
Start-AzureStorageBlobCopy -SrcUri $sourcevhd -SrcContext $srcContext -DestContainer "vhds" -DestBlob "NewDisk.vhd" -DestContext $dstContext

This step can take a while depending on the size of the VHD. Repeat for every data disk attached to your VM.
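Rather than guessing when the asynchronous copy finishes, you can poll it from PowerShell. A sketch reusing the destination context from the snippet above:

# Block until the blob copy completes, reporting progress as it goes
Get-AzureStorageBlobCopyState -Container "vhds" -Blob "NewDisk.vhd" -Context $dstContext -WaitForComplete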

Step 4
Create a new disk from the VHD blob you just copied to the Premium Storage account. You can use the old portal if you prefer a GUI, or PowerShell.

Add-AzureDisk -DiskName "NewDiskName" -MediaLocation "https://your_storage_account/vhds/NewDisk.vhd" -Label "New OS Disk" -OS "Windows"

Step 5
Create a new VM using the disk you prepared in the previous step. You can use PowerShell.

$OSDisk = Get-AzureDisk -DiskName "NewDiskName"

$vm = New-AzureVMConfig -Name $vmName -InstanceSize $vmSize -DiskName $OSDisk.DiskName

New-AzureVM -ServiceName $serviceName -VM $vm

Note that I used DiskName, not ImageName, in creating the VM. You can also use the old portal, but choose My Disks as the source of the VM.

The steps above copied the OS disk of a VM; if you have data disks attached to the VM you have to repeat the process to copy those as well. Use the Add-AzureDataDisk cmdlet to attach the data disks to your new VM.
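For example, after registering a copied data disk VHD with Add-AzureDisk (without the -OS parameter, so it is registered as a data disk), attaching it could look like this sketch; the disk name and LUN are placeholders:

# Register the copied VHD as a data disk (note: no -OS parameter)
Add-AzureDisk -DiskName "NewDataDisk" -MediaLocation "https://your_storage_account/vhds/NewDataDisk.vhd" -Label "Data Disk"

# Attach the registered disk to the new VM at LUN 0
Get-AzureVM -ServiceName $serviceName -Name $vmName |
    Add-AzureDataDisk -Import -DiskName "NewDataDisk" -LUN 0 |
    Update-AzureVM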

If you want to create multiple VMs, create an image instead of a disk using Add-AzureVMImage and point it to a generalised VHD.
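A sketch of what that might look like, with placeholder names and assuming the VHD was generalised with sysprep first:

# Register a generalised (sysprepped) VHD as a reusable image
Add-AzureVMImage -ImageName "MyImage" -MediaLocation "https://your_storage_account/vhds/GeneralisedDisk.vhd" -OS "Windows" -Label "My golden image"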

Francois Delport

Azure Blob Cache And Disk Performance

In this post I’m going to take a deep dive into Azure Blob cache and disk performance. In my previous post I touched on Azure disk performance and cache, but after that post I had so many unanswered questions that I started researching and running my own tests to see what kind of performance you get from Azure disks and how caching affects their performance. Everything you need to know about Azure disks is explained in these two posts around premium storage and standard storage, but there is a lot to digest and I wanted to make a summary for my own education.

First, what is Azure Blob cache?
Premium Storage: On the DS-series machines, and I assume other VMs that also support Premium Storage, you have Blob Cache. When you enable read-only cache your disk reads are serviced from the host server’s RAM or local SSD; if the requested read is not in the cache it is retrieved from Azure Storage. If you enable read/write cache, writes are cached locally in the host RAM or local SSD until they are flushed to Azure Storage, unless write-through is explicitly requested by the application.

Standard Storage: On a VM that uses Standard Storage, the local host HDD and RAM can be used for read caching and the host RAM for write caching. From my understanding these local disks are mechanical HDDs, and Azure Blob Storage can provide better IOPS for heavy random reads than these local disks. I’ll show you some tests I performed later on, since I wanted to know at which point it is better not to have caching enabled.
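For reference, the cache setting is configured per disk and can be None, ReadOnly or ReadWrite. A sketch of changing the host caching on an attached data disk using the classic cmdlets; the service, VM name and LUN are placeholders:

# Switch the host caching mode of the data disk at LUN 0
Get-AzureVM -ServiceName "your_service" -Name "your_vm" |
    Set-AzureDataDisk -LUN 0 -HostCaching "ReadOnly" |
    Update-AzureVM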

Temporary disks
Apart from your OS disk you will also see a temporary disk in your Azure VM. I always wondered why these disks carry a warning that you shouldn’t store any data you will need later on since it can be lost. It turns out these disks are actually local disks on the VM host, not Azure Blob Storage. For Premium Storage VMs these are SSDs and for Standard Storage VMs mechanical HDDs. If your VM is moved to a different VM host you lose this disk.

Performance throttling
On Premium Storage machines you will see a limit for the IOPS and the throughput in MB/s for the VM as well as for the disks (P10, P20, P30). You will be throttled whenever you reach either of them, so make sure your VM can deliver all the IOPS for the number of disks you add to it. With Standard Storage the disk limit is 500 IOPS and 60 MB/s throughput.
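As a worked example, using the disk limits published at the time (a P30 was rated at 5,000 IOPS): a DS4 is capped at 25,600 IOPS, so it takes 25,600 / 5,000 = 5.12, i.e. six P30 disks, before the VM rather than the disks becomes the bottleneck; add more disks than that and the extra disk IOPS are wasted.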

Cache limits
Premium Storage: The cache limits were clearly explained in the reference posts mentioned earlier, 4,000 IOPS and 33 MB/s per core, but I wanted to see for myself. In my tests I used a DS4 machine with 8 disks striped into one virtual disk using Storage Spaces, with a VM limit of 25,600 IOPS and a throughput limit of 256 MB per second. I ran a 4KB random write test at a queue depth (QD) of 100 with write cache on and reached 32,000 IOPS, which is bang on the money: 8 cores * 4,000 IOPS, and 264 MB/s = 8 cores * 33 MB/s. With the disk set to read-only cache I could only reach 25,600 IOPS when I ran the same write test, which is the VM limit, and a throughput of 256 MB per second.
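The post doesn’t name the benchmarking tool; with Microsoft’s DiskSpd (an assumption on my part, any comparable tool will do) an equivalent test against the striped volume might look like this, where the E: drive and file size are placeholders:

# 4KB random writes at queue depth 100 (4 threads x 25 outstanding I/Os)
# for 60 seconds against a 10GB test file on the striped volume
.\diskspd.exe -b4K -d60 -t4 -o25 -r -w100 -c10G E:\test.dat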

Standard Storage: At the end of the post about Standard Storage they mention that Azure Blob Storage can handle heavy random I/O better than the local disk, so I ran a few tests to see at which point it is better to switch off cache. I tried random and sequential reads and writes with different queue lengths and data sizes, but cache was faster. The cache performance did degrade over time, but it wasn’t much slower than the local disk, and I ran the test for 90 minutes. Maybe if the VM host is under memory pressure or heavy I/O load on its local disks you will reach this point sooner, but for short bursts of I/O, caching is faster.

Queue Depth and latency
Since there is quite a bit of latency involved when your I/O request is not serviced from the local cache and has to go all the way to Azure Blob Storage, I had to increase my queue depth to reach the performance limits of the VM and caching in Premium Storage. A large number of small random-access disk operations will suffer more than a small number of large sequential-access disk operations. Keep this in mind if you are designing applications that are disk I/O intensive; you will only reach maximum performance if you can utilise the local cache or have multiple I/O requests queued up.
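The underlying relationship is simple queueing arithmetic: achievable IOPS is roughly queue depth divided by per-request latency. A quick illustration; the 5ms latency figure is an assumption for the sake of the example:

# IOPS ~= queue depth / latency
# At 5ms per round trip to Azure Storage, one outstanding request
# caps you at ~200 IOPS; 100 outstanding requests allow ~20,000
$latencySeconds = 0.005
1 / $latencySeconds     # QD 1   -> 200 IOPS
100 / $latencySeconds   # QD 100 -> 20000 IOPS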

Francois Delport