Migrating Azure Virtual Machines To Premium Storage

In this post I’m going to cover migrating Azure Virtual Machines to Premium Storage.

Converting your current Azure VMs from Standard Storage to Premium Storage cannot be done from the portal. The process is a mix of scripting and manual steps, although it can be fully scripted if need be. In short, you have to create a new Premium Storage account, copy the blobs of the VHDs you need and then recreate the virtual machines using the copied VHDs. In this example I'm migrating a VM disk to create a single new VM, not an image that will be used to create multiple VMs.

Preparation
Make sure Premium Storage is available in the Azure region you want to use and check which types of VMs you can use with Premium Storage; you need *S type VMs like DS and GS. Take note of the extra cost involved in using Premium Storage and the performance characteristics of the different drives and VMs here.

Step 1
You have to shut down your VMs to migrate them and it takes a while, so be prepared for downtime or use availability sets. You will need the URI of each drive attached to your VM in order to copy them. In the new portal you can see the URI when you click on the disks of the VM. In the old portal there is a separate tab for disks under the virtual machines section.

Step 2
In the portal create your new Premium Storage account in the desired region and create a blob container to store the hard disks. Make sure you choose Premium Storage.

Step 3
Next you have to copy the VHD blobs to the new storage account. You can use PowerShell or third-party tools with a GUI like CloudBerry.

$sourcevhd = "https://source_account/vhds/MyDisk.vhd"
$srcContext = New-AzureStorageContext -StorageAccountName your_source_account -StorageAccountKey your_source_account_key
$dstContext = New-AzureStorageContext -StorageAccountName your_dest_account -StorageAccountKey your_dest_account_key
Start-AzureStorageBlobCopy -srcUri $sourcevhd -SrcContext $srcContext -DestContainer "vhds" -DestBlob "NewDisk.vhd" -DestContext $dstContext

This step can take a while depending on the size of the VHD. Repeat for every data disk attached to your VM.
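If the VM has several data disks you could loop over their URIs instead of repeating the command by hand. A minimal sketch, reusing the contexts created above; the URIs are hypothetical placeholders:

#Hypothetical list of data disk URIs collected from the portal
$dataDiskUris = @(
    "https://source_account/vhds/DataDisk1.vhd",
    "https://source_account/vhds/DataDisk2.vhd"
)

foreach ($uri in $dataDiskUris) {
    #Keep the original blob name on the destination
    $blobName = $uri.Split('/')[-1]
    Start-AzureStorageBlobCopy -srcUri $uri -SrcContext $srcContext -DestContainer "vhds" -DestBlob $blobName -DestContext $dstContext
}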

Step 4
Register a new disk from the VHD blob you just copied to the Premium Storage account. You can use the old portal if you prefer a GUI, or PowerShell.

Add-AzureDisk -DiskName "NewDiskName" -MediaLocation "https://your_storage_account/vhds/NewDisk.vhd" -Label "New OS Disk" -OS "Windows"

Step 5
Create a new VM using the disk you prepared in the previous step. You can use PowerShell.

$OSDisk = Get-AzureDisk -DiskName "NewDiskName"

$vm = New-AzureVMConfig -Name $vmName -InstanceSize $vmSize -DiskName $OSDisk.DiskName

New-AzureVM -ServiceName $serviceName -VMs $vm

Note that I used DiskName, not ImageName, when creating the VM. You can also use the old portal, but choose My Disks as the source of the VM.

The steps above copied the OS disk of a VM. If you have data disks attached to the VM you have to repeat the process to copy those as well, then use the Add-AzureDataDisk cmdlet to attach them to your new VM, as sketched below.
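A minimal sketch of attaching a copied data disk to the new VM; the URL and label are placeholders:

$vm = Get-AzureVM -ServiceName $serviceName -Name $vmName
Add-AzureDataDisk -VM $vm -ImportFrom -MediaLocation "https://your_storage_account/vhds/NewDataDisk.vhd" -DiskLabel "Data" -LUN 0
$vm | Update-AzureVM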

If you want to create multiple VMs, create an image instead of a disk using Add-AzureVMImage and point it to a generalised VHD.

Francois Delport

Azure Blob Cache And Disk Performance

In this post I'm going to take a deep dive into Azure Blob cache and disk performance. In my previous post I touched on Azure disk performance and cache, but after that post I had so many unanswered questions that I started researching and running my own tests to see what kind of performance you get from Azure disks and how caching affects their performance. Everything you need to know about Azure disks is explained in these two posts around premium storage and standard storage, but there is a lot to digest and I wanted to make a summary for my own education.

First, what is Azure Blob cache?
Premium Storage: On the DS series machines, and I assume other VMs that also support Premium Storage, you have Blob Cache. When you enable read-only cache your disk reads are serviced from the host server RAM or local SSD; if the requested read is not in the cache it will be retrieved from Azure Storage. If you enable read/write cache, writes are cached locally in the host RAM or local SSD until they are flushed to Azure Storage, unless write-through is explicitly requested by the application.

Standard Storage: On a VM that uses standard storage the local host HDD and RAM can be used for read cache and the host RAM for write caching. From my understanding these local disks are mechanical HDDs, and Azure Blob Storage can provide better IOPS for heavy random reads than these local disks. I'll show you some tests I performed later on, since I wanted to know at which point it is better not to have caching enabled.

Temporary disks
Apart from your OS disk you will also see a temporary disk in your Azure VM. I always wondered why these disks have a warning that you shouldn't store any data you will need later on since it can be lost. It turns out these disks are actually local disks on the VM host, not Azure Blob Storage. For premium storage VMs these are SSDs and for standard storage VMs mechanical HDDs. If your VM is moved to a different VM host you lose this disk.

Performance throttling
On premium storage machines you will see a limit for the IOPS and throughput in MB/s for the VM as well as for the disks (P10, P20, P30). You will be throttled whenever you reach either of them, so make sure your VM can deliver all the IOPS for the number of disks you add to it. With standard storage the disk limit is 500 IOPS and 60 MB/s throughput.

Cache limits
Premium Storage: The cache limits were clearly explained in the reference posts mentioned earlier, 4000 IOPS and 33 MB/s per core, but I wanted to see for myself. In my tests I used a DS4 machine with 8 disks striped into one virtual disk using Storage Spaces, with an IOPS limit of 25600 IOPS and a throughput limit of 256 MB per second. I ran a 4KB random write test at a queue depth (QD) of 100 with write cache on and reached 32000 IOPS, which is bang on the money: 8 cores * 4000 IOPS, and 264 MB/s = 8 cores * 33 MB/s. With the disk set to read-only cache I could only reach 25600 IOPS when I ran the same write test, which is the VM limit, with a throughput of 256 MB per second.
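The post doesn't name the benchmark tool used; as a rough sketch, a comparable 4KB random write test at a total queue depth of about 100 could be run with Microsoft's diskspd utility (the file path, size and duration below are assumptions):

#100% 4KB random writes, 8 threads x 13 outstanding I/Os each (~104 QD), with latency stats
diskspd.exe -c64G -d60 -r -w100 -b4K -t8 -o13 -L E:\stripe\testfile.dat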

Standard Storage: At the end of the post about standard storage they mention that Azure Blob Storage can handle heavy random I/O better than the local disk, so I ran a few tests to see at which point it is better to switch off cache. I tried random and sequential reads and writes with different queue lengths and data sizes, but cache was faster. The cache performance did degrade over time, but it wasn't much slower than the local disk and I ran it for 90 minutes. Maybe if the VM host is under memory pressure or heavy I/O load on its local disks you will reach this point sooner, but for short bursts of I/O, caching is faster.

Queue Depth and latency
Since there is quite a bit of latency involved when your I/O request is not serviced from local cache and has to go all the way to Azure Blob Storage, I had to up my queue depth to reach the performance limits of the VM and caching in Premium Storage. Large numbers of small random access disk operations will suffer more than a small number of large sequential access disk operations. Keep this in mind if you are designing applications that are disk I/O intensive; you will only reach maximum performance if you can utilise the local cache or have multiple I/O requests queued up.

Francois Delport

Azure Disks And Images

In this post I'm going to explore a few more scenarios around VHDs and images in Azure. In a previous post I showed how to copy a VM between subscriptions in a semi-scripted way. I'll be extending that script to create an image or hard disk from the copied VHD, but first some more info around blobs.

Blobs

VHDs are stored in Azure Blob Storage and there are 3 types of blobs.

  • Page Blobs
    Used to store VHDs and is optimised for random reading and writing.
  • Block Blobs
    Used to store files that are suitable for streaming and written to once.
  • Append Blobs
    Used for appending data for example log files.

The different types of blobs come into play when you copy VHDs into Azure Blob Storage, and the type you end up with depends on the tool you use. If you, for instance, use Azure PowerShell and the Add-AzureVhd cmdlet, the VHD is copied as a page blob. If you use CloudBerry Explorer you have to explicitly choose Copy As Page Blob; the default copy will create a block blob. For other tools it could be different, so confirm how blobs are copied before you copy 120 GB to Azure just to find out it was the wrong format. Later in the post we will register the VHD as a disk.
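For reference, a minimal upload sketch using Add-AzureVhd; the account, container and paths are placeholders:

Add-AzureVhd -Destination "https://your_storage_account.blob.core.windows.net/vhds/MyDisk.vhd" -LocalFilePath "C:\VHDs\MyDisk.vhd"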

Images
Images can be specialised or generalised (SysPrepped) and can contain multiple disks. If you want to create an image that can be used to create more instances of a VM, use a generalised image, for example to create instances for a scale-out scenario. If you want an exact copy of the VM, use a specialised image, for example to copy it to another subscription or to restore it from a snapshot. When you create an image using PowerShell remember to indicate the OSState; this equates to the check box in the portal asking if you ran Sysprep.

If you want to capture an image from a VM you already have in the same subscription, shut down the VM and save the image.

Save-AzureVMImage -ServiceName VmService -Name VMName -ImageName NewImage -OSState Specialized/Generalized

In the next script I create an image from a VHD I copied into Azure Blob Storage.

$DiskConf = New-AzureVMImageDiskConfigSet

Set-AzureVMImageOSDiskConfig -DiskConfig $DiskConf -HostCaching ReadWrite -OSState Specialized/Generalized -OS "Windows" -MediaLink $vdhurl

#Work around for a bug in the cmdlets: initialise the data disk configuration list explicitly
$DiskConf.DataDiskConfigurations = New-Object Microsoft.WindowsAzure.Commands.ServiceManagement.Model.DataDiskConfigurationList

Add-AzureVMImage -ImageName "NewImage" -Label "Easier To Find" -OS Windows -DiskConfig $DiskConf -ShowInGui $true

You can use the old portal to create an image from a VHD: under the virtual machines menu, click on the Images tab and choose Create, then browse to the VHD from there.

From my investigation I could not find a way to change the OSState after the image was created; you might be able to do it by altering the metadata on the blob but I didn't try it yet. From experience, using a specialised image where generalised images are expected doesn't work. For example, creating a new VM from a specialised image that is tagged as generalised just hangs on start up and the boot sequence never completes. It could have been something specific about the VMs I used, but it happened twice.

In the old portal you can create a new VM using your existing images by choosing My Images.

Or pass in the image name in PowerShell.

$VM = New-AzureVMConfig -Name $VmName -InstanceSize $InstanceSize -ImageName $SourceName

When you create a VM from an image, Azure will create new disks based on the disks referenced by the image, like templates in VMWare.

OS Disks
OS disks contain the disk used for booting; the disk is assumed to be specialised. I have not tried to create a VM from one that is generalised yet. If you copied a VHD into blob storage you can create an OS disk using PowerShell.

Add-AzureDisk -DiskName "NewVMBootDisk" -MediaLocation $vdhurl -Label "BootDisk" -OS "Windows"

Or you can use the old portal, under the virtual machines menu, click on the disks tab and choose create new, you can browse to the VHD from there.

Now you can reference this disk to create a new VM.

$VM = New-AzureVMConfig -Name $VmName -InstanceSize $InstanceSize -DiskName "NewVMBootDisk"
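To actually provision the VM you would pass this configuration to New-AzureVM, roughly like this (the service name is a placeholder):

New-AzureVM -ServiceName "VmService" -VMs $VM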

You can create a new VM from your OS disk in the old portal by choosing My Disks.

Once you create a new VM from this disk it is not available to other VMs, since it is now attached to the VM you created. This is similar to creating a new VM in VMWare but attaching an existing disk to it.

Data Disks
Data disks can be attached to a VM but you cannot boot from them. If you copied a VHD into blob storage you can register it as a data disk when attaching it to a VM.

Add-AzureDataDisk -VM $VM -ImportFrom -MediaLocation "VHDUrl" -LUN 1
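To attach a data disk to a VM that already exists, rather than to a new configuration, a pipeline like this should work; the service and VM names are placeholders:

Get-AzureVM -ServiceName "VmService" -Name "VmName" |
    Add-AzureDataDisk -ImportFrom -MediaLocation "VHDUrl" -DiskLabel "Data" -LUN 1 |
    Update-AzureVM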

Francois Delport

Copy Azure Virtual Machines Between Subscriptions

In this post I'm going to show you how to copy Azure Virtual Machines between subscriptions. Copying VMs between subscriptions basically involves copying the VM's VHD to a storage account in the other subscription. The same method can be used to copy your VM to another storage account, for instance to move it to a different region.

Since this is not a recurring task for me this method is not completely automated; it does involve some manual steps. If you need a 100% automated solution please take a look at this post, which is completely scripted but much longer.

Firstly, import the .publishsettings files for your subscriptions using Get-AzurePublishSettingsFile and Import-AzurePublishSettingsFile if this is the first time you are using them in PowerShell.

Then execute the following script to copy the VHD, replacing the placeholder values with your own.

#Source VHD
$srcUri = "https://yourstorageaccount.blob.core.windows.net/vhds/src_image.vhd"

#Source storage account
$srcStorageAccount = "src_storageaccount_name"
$srcStorageKey = "src_storage_account_key"

#Destination storage account
$destStorageAccount = "dest_storageaccount"
$destStorageKey = "dest_storage_account_key"

#Create the source storage account context
$srcContext = New-AzureStorageContext -StorageAccountName $srcStorageAccount -StorageAccountKey $srcStorageKey

#Create the destination storage account context
$destContext = New-AzureStorageContext -StorageAccountName $destStorageAccount -StorageAccountKey $destStorageKey

#Destination container name
$containerName = "destinationcontainer"

#Create the container on the destination
New-AzureStorageContainer -Name $containerName -Context $destContext

#Start the asynchronous copy, specifying the source authentication
$copyblob = Start-AzureStorageBlobCopy -srcUri $srcUri -SrcContext $srcContext -DestContainer $containerName -DestBlob "NewVHD.vhd" -DestContext $destContext

In my case I was copying inside the same datacentre so it was very quick, but if you are copying between regions or copying very large files you can use the snippet below to report the progress of the copy operation.

#Get the status of the copy operation
$copystatus = $copyblob | Get-AzureStorageBlobCopyState

#Output the status every 5 seconds until it is finished
While ($copystatus.Status -eq "Pending") {
    $copystatus = $copyblob | Get-AzureStorageBlobCopyState
    Start-Sleep 5
    $copystatus
}

Next, on the destination you have to create a disk from the VHD that was copied. In the portal click on Virtual Machines –> Disks –> Create Disk and follow the wizard to create a new disk from the VHD you copied.
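If you prefer to script this step as well, registering the disk would look roughly like this; the disk name is a placeholder and the URL assumes the container and blob names from the copy script above:

Add-AzureDisk -DiskName "CopiedVMDisk" -MediaLocation "https://dest_storageaccount.blob.core.windows.net/destinationcontainer/NewVHD.vhd" -Label "Copied OS disk" -OS "Windows"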

Now you can choose it under My Disks when you create a new VM.

Tip: In my case both subscriptions had the same name; to differentiate between them, edit the .publishsettings file and change the name of one of the subscriptions before importing it.

Francois Delport

Copying Files To And From Azure Blob Storage

I had a situation today where I had to copy a file from an on-premise build server to a location where a VM in Azure could access it as part of an automated process.

You have a few options that I know about, some of which are:

  • Ideally you would connect to Azure using a VPN, treat the VMs like any other machines on the network and copy to a share, but that was not possible at the time.
  • The next best option seemed like the Azure File Service, but the subscription I was using had to join the preview program first and I couldn't know for sure how long it would take to activate.
  • The next option I looked at was Azure Blob Storage. It turned out to work pretty well.

If you are going to copy files on an ad hoc basis, or you want to avoid rolling your own PowerShell script and don't mind installing extra tools on your build server, AzCopy is pretty straightforward to use. You can download the latest version of it here. AzCopy can be used with Azure File Storage, Blob Storage and Table Storage.

In Blob Storage your blobs are stored in containers; you can think of a container almost like a folder/directory. Creating one is pretty easy using PowerShell.

New-AzureStorageContainer -Name testfiles -Permission Off

Note the container name must be lower case. In this example I set the permissions so only the owner of the storage account has access, but you can make it public if you have the requirement. In this case I'm copying all the files from a folder on my local machine, "C:\TestUpload", to the "testfiles" container.

AzCopy /Source:C:\TestUpload /Dest:https://YourStorageAccount.blob.core.windows.net/testfiles /DestKey:YourStorageKey

You can get your blob URL and storage key from the Azure Portal under the Storage tab.

In the end I went with PowerShell to avoid installing tools on the build servers, plus the build process already used PowerShell. In my case the build process created the artifact we needed as a single zip file, thus the script was very simple.

Set-AzureStorageBlobContent -Container testfiles -File C:\TestUpload\TestCases.zip -Blob TestCases.zip

If you wanted to copy all the files in a folder you would have to iterate over them and call Set-AzureStorageBlobContent on each one, as sketched below; a quick Google search showed multiple examples of how to do that. Using AzCopy makes this scenario easier since you can tell it to copy a folder, and do it recursively as well.
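A minimal sketch of that iteration, assuming the same local folder and container used above:

Get-ChildItem "C:\TestUpload" -File | ForEach-Object {
    Set-AzureStorageBlobContent -Container testfiles -File $_.FullName -Blob $_.Name
}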

On the Azure VM I had to download the file again as part of the script running the tests.

Get-AzureStorageBlobContent -Blob TestCases.zip -Container testfiles -Destination TestCases.zip

Very easy to do and it worked very well in this situation.

Note: if you have a large number of files to upload it is better to do them in parallel since it can be a bit slow.

Francois Delport

Part 3: How To Snapshot And Restore Azure Virtual Machines

UPDATE: For Azure Resource Manager virtual machine snapshots using managed disks read this post and for unmanaged disks read this post.

This post covers Azure Service Manager (ASM) virtual machines.

If you are familiar with VMWare and Hyper-V you’ll know how easy and handy it is to snapshot a VM and return to that snapshot later on. I use it especially when testing installers and automated deployments where you have to run them multiple times on a clean machine.

In this post I'm going to show you how to "snapshot" a VM in Azure and then revert back to that snapshot. I refer to it as a snapshot, but Azure doesn't support snapshots; this is more a workaround that involves capturing the VM's virtual hard disk image and storing it in blob storage.

In other virtualisation platforms a snapshot captures the disk at that point in time; when you make changes to the file system in your VM, your changes are written to a delta file that contains only the modifications from the previous snapshot. This way you can quickly revert back or roll forward between snapshots. This is a very simplified explanation, there is lots more to it if you want to read up. In Azure it is not so quick, since the whole VM image is captured and then restored when you revert back to it.

Something else I want to point out before we get started is the difference between generalised and specialised VM images. You can read the whole explanation here. In short, you should not clone new VMs from a specialised image since it can lead to problems if your VM is in a domain. It also causes problems with other software like Octopus Deploy, since every machine gets a unique certificate when the Tentacle is configured. If you are capturing a VM image to clone it, run SysPrep first to generalise it.

Now on to the fun part, the script. If this is your first time running PowerShell against Azure refer to this post to setup your environment. In PowerShell run the following to capture your image.

Save-AzureVMImage -ServiceName "YourServiceName" -Name "YourVmName" -ImageName "DescriptiveImageName" -OSState Specialized

If you open up the Cloud Explorer in Visual Studio you will see your new VHD image under:
Storage Accounts -> YourStorageAccountName -> Blob Containers -> vhds

Or you can see it in the portal under your storage account.

To revert back to this image you have to delete your existing VM and its disk and create a new one from the saved VHD image. First make sure your image was created successfully before deleting your VM.

Remove-AzureVM -Name "YourVMName" -ServiceName "YourServiceName" -DeleteVHD

Then create a new VM from the image into your existing service.

New-AzureQuickVM -Name "YourVMName" -ImageName "DescriptiveImageName" -Windows -ServiceName "YourServiceName"  -InstanceSize "Basic_A2" -WaitForBoot

Since this is a new VM your previous Azure configuration will be gone; you have to create the endpoints for this VM again, and so on. If you use a virtual network, be sure to specify your VNet when you create the VM, because as far as I am aware you can't change it afterwards.
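Recreating an endpoint would look something like this; the endpoint name and public port are assumptions for an RDP example:

Get-AzureVM -ServiceName "YourServiceName" -Name "YourVMName" |
    Add-AzureEndpoint -Name "RDP" -Protocol tcp -LocalPort 3389 -PublicPort 50001 |
    Update-AzureVM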

Tip: If you get a “CurrentStorageAccountName is not accessible” error you have to set your default storage account for the subscription by running:

Get-AzureStorageAccount

Take note of the storage account name then run:

Set-AzureSubscription -SubscriptionName "YourSubscriptionName" -CurrentStorageAccountName "YourStorageAccountName" -PassThru

Francois Delport

Part 4: How To Enable Clients To Access A File Share From An URL

While delving into the Azure file service I came across a requirement to allow clients to access files from file storage using a URL. At the time of writing you cannot use Shared Access Signatures with file storage to grant access. I am going to show one possible solution using ASP.NET MVC that will allow you to use a URL like this:

http://sitename.azurewebsites.net/Storage/RootFolder/Inroot.png
http://sitename.azurewebsites.net/Storage/RootFolder/SubFolder/File.png

Where the path after the Storage section of the URL matches the exact path in your Azure file share and maps to a file, in this case:

https://accountname.file.core.windows.net/ShareName/RootFolder/Inroot.png

In the example above Storage is just a friendlier name I chose as my route to the controller.

Steps

  1. In your MVC project create a new controller and add an action to it that will be used to retrieve the file:

    public class StorageController : Controller
    {
        public ActionResult StreamFile(string filepath)
        {
            string storagebase = "https://StorageAccount.file.core.windows.net/ShareName/";
            //StorageConnectionString holds your storage account connection string, e.g. read from config
            var storageAccount = CloudStorageAccount.Parse(StorageConnectionString);
            var fileclient = storageAccount.CreateCloudFileClient();
            var fileUri = new Uri(storagebase + filepath, UriKind.Absolute);
            var azfile = new CloudFile(fileUri, fileclient.Credentials);
            return new FileStreamResult(azfile.OpenRead(), "image/png");
        }
    }

Note: I am streaming the file back; for scalability and performance reasons you do not want to read large files into memory.
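The action above hard codes image/png as the content type; one possible refinement, assuming .NET 4.5's System.Web.MimeMapping class, is to derive it from the requested file name:

//Hypothetical refinement: derive the content type from the requested file name
string contentType = MimeMapping.GetMimeMapping(filepath);
return new FileStreamResult(azfile.OpenRead(), contentType);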

  2. Add a new route. In this case I am redirecting all the requests for Storage to my StorageController and the StreamFile action; everything after the Storage section of the URL will be passed along in the filepath parameter.

routes.MapRoute(
    name: "Files",
    url: "Storage/{*filepath}",
    defaults: new { controller = "Storage", action = "StreamFile", filepath = "" });

  3. Enable runAllManagedModulesForAllRequests in your web.config

<system.webServer>
<modules runAllManagedModulesForAllRequests="true">
...
</system.webServer>

The result

(Screenshot: the test image served from the Azure file share via the URL above.)

We need runAllManagedModulesForAllRequests to enable ASP.NET to service the request instead of IIS. Keep in mind this is a demo and not meant for high-performance sites; enabling runAllManagedModulesForAllRequests is bad for performance since all managed modules will run for all requests.

Update: If you can't or don't want to change the code for your file access mechanism, or you absolutely have to access the files using an SMB share, you could move your application to an Azure Web Role, which supports mounting a file share.

Francois Delport

Part 3: Using Azure Storage API With A File Share

In my previous post I discussed how to create your file share and add some files to it using PowerShell. In this post I will show you how to use the Azure Storage API to manipulate your file share.

Scenarios where this would be useful:

  • On-premise applications.
  • Accessing the share from Azure, but from a different region than the one where your share resides.
  • Using the share in scenarios where you cannot mount file shares, like Azure websites.

I am going to explore the first scenario and change one of my existing on-premise applications to use the Azure file service instead of the local file system. This is actually a quick change, since the application performs file operations using a file repository class, not the System.IO classes directly. I initially did this to make the code unit testable and now it is also making it easy to swap out my current implementation for the Azure file service one.

The Code
The easiest way to install the Windows Azure Storage client library is using the NuGet package manager console:

Install-Package WindowsAzure.Storage

It can also be obtained when you install the Azure SDK for .NET, which I can recommend if you are going to do any Azure development. After installing, add the following imports:

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.File;

Connecting to your storage account
var conString = "DefaultEndpointsProtocol=https;AccountName=YourAccountName;AccountKey=YourStorageAccountAccessKey";
var storageAccount = CloudStorageAccount.Parse(conString);
var fileClient = storageAccount.CreateCloudFileClient();

Creating a new share
//Share names must be lower case
var newShare = fileClient.GetShareReference("newsharename");
newShare.CreateIfNotExistsAsync().Wait();

Listing the contents of existing shares
var existingShare = fileClient.GetShareReference("existingsharename");
var shareRoot = existingShare.GetRootDirectoryReference();
var contents = shareRoot.ListFilesAndDirectories();
var existingfile = shareRoot.GetFileReference("FileInRootDir");
var existingDirectory = shareRoot.GetDirectoryReference("DirectoryInRootDir");

Note that contents is of type IEnumerable<IListFileItem> and contains files of type CloudFile as well as directories of type CloudFileDirectory, both of which implement the IListFileItem interface.
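A small sketch of telling the two apart while iterating over the listing from the previous snippet:

foreach (var item in contents)
{
    var file = item as CloudFile;
    if (file != null)
        Console.WriteLine("File: " + file.Name);
    else
        Console.WriteLine("Directory: " + ((CloudFileDirectory)item).Name);
}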

Creating files and directories
shareRoot = existingShare.GetRootDirectoryReference();
shareRoot.GetFileReference("NewFileName").Create(NewFileSize);
shareRoot.GetDirectoryReference("NewDirectoryName").Create();

Downloading a file
//To a local file
existingfile.DownloadToFile("LocalFilePath", FileMode.Create);

Read a file
//Read the contents of a text file into a string
string fileContents = existingfile.DownloadText();
//Read bytes from a file into a byte array; returns the number of bytes read
int bytesRead = existingfile.DownloadToByteArray(byteArray, offset);

Retrieve a directory or file directly
As you can see, it would be very cumbersome to walk down a share's directory hierarchy to retrieve files. You can gain access to a file or directory directly by using its URI property. It will look something like this:
https://YourAccount.file.core.windows.net/ShareRootDir/1stDir/SubDir/FileName

And you can use it in the following way:
var fileUri = new Uri(FullURI, UriKind.Absolute);
var azfile = new CloudFileDirectory(fileUri, fileClient.Credentials);

It might take a bit of time to wrap your head around the fact that the API uses the CloudFileDirectory type to address both files and directories. If you want to work with files only you can list them using the following snippet:
SomeAzurefolder.ListFilesAndDirectories().OfType<CloudFile>()

The examples above are for demonstration only; in production you would be better off using the async versions of these methods and streaming files up or down instead of reading whole files into memory.
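For example, a download can be streamed asynchronously straight to disk, roughly like this (a sketch that assumes it runs inside an async method and that System.IO is imported):

//Stream the file straight to disk instead of buffering it all in memory
using (var local = File.OpenWrite(@"C:\Temp\File.txt"))
{
    await existingfile.DownloadToStreamAsync(local);
}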

Francois Delport

Part 2: Creating An Azure File Service File Share

To get started with the Azure file service it is necessary to join the preview first. It is very easy to sign up, and it did not take very long to receive confirmation that the file service feature was enabled on my account.

Create storage account

The next step is to create a new storage account; you need a new one because storage accounts created before the Azure file service was activated will not be able to create file shares. The easiest way to create a new storage account is using the Azure portal.

You can use the current portal or the preview portal to create a storage account, but the preview portal does not support the file service yet and it won't show under your storage endpoints.

Create A Share

At the time of writing it was not possible to create a share from the portal, but it can be done using PowerShell or the REST API. I will cover using the API later on, so for now I will use PowerShell. You can download Azure PowerShell using the Microsoft Web Platform Installer.

PS: On my VM Azure PowerShell did not work initially; it was missing some DLLs and could not load the Azure modules. In my case installing the Azure SDK fixed the problem. Once installed you can launch it by typing "azure powershell" in the start menu.

Enter the following two commands to create a new share:
$ctx = New-AzureStorageContext -StorageAccountName storage_account_name -StorageAccountKey storage_account_key
New-AzureStorageShare YourNewShareName -Context $ctx

In the code above storage_account_name is the name of the new storage account created earlier, just the name, not the whole URL. Your storage_account_key can be retrieved from the portal by clicking Manage Access Keys under your storage account.

Once the share is created you have a few options to create directories and files in it.

  • Mounting the share in a VM

You can only mount the share on VMs that are in the same region as the file share. In a PowerShell window or command prompt execute the following:

net use z: \\storage_account_name.file.core.windows.net\YourShareName /u:storage_account_name storage_account_key

Alternatively you can store your account key on the VM that will be mounting the share; this way you can mount shares without specifying your key every time. In a PowerShell window or command prompt execute the following two commands:

cmdkey /add:storage_account_name.file.core.windows.net /user:storage_account_name /pass:storage_account_key

net use z: \\storage_account_name.file.core.windows.net\YourShareName

Now you can use this share like you would any other Windows file share and perform the usual file operations on it.

  • Using Azure Powershell

#Create a context
$ctx = New-AzureStorageContext -StorageAccountName storage_account_name -StorageAccountKey storage_account_key


#Retrieve the share
$share = Get-AzureStorageShare -Context $ctx -Name YourShareName

#Create a new directory in the share
New-AzureStorageDirectory -Share $share -Path DirectoryName

# Upload a local file to the directory you just created
Set-AzureStorageFileContent -Share $share -Source C:\Temp\File.txt -Path DirectoryName

# List the contents of the directory you created
Get-AzureStorageFile -Share $share -Path DirectoryName

There are many more cmdlets for Azure Storage; I suggest having a look at the documentation to see what is available.

In my next post I will explore the file service API.

Francois Delport

Part 1: What is Azure file service?

In the next few posts I am going to take the Azure file service for a test drive to see what you can and cannot do with it. My aim is to have a proper exploration of the capabilities and shortcomings in one place, instead of scouring the Internet to try and determine whether you can or should use it in your projects once it goes generally available.

What is Azure file service?
The Azure file service is a new capability of Azure storage that exposes a file share over SMB 2.1. It is platform as a service, saving you the trouble of maintaining VMs. It is currently in preview; to enable it for your account head over to http://azure.microsoft.com/en-us/services/preview/. You can only create shares on storage accounts created after the file service was activated on your account; storage accounts that existed before you activated the preview will not be able to create shares.

What works

  • Sharing files between VMs over SMB 2.1; your file share is accessible from Windows and Linux machines or any other OS implementing SMB 2.1. You can use Windows file commands and API calls to access the share just like a normal Windows file share.
  • You get the availability, durability, scalability and redundancy of Azure storage.
  • The REST API supports the usual file operations, and the storage libraries in the Azure SDK are a friendlier way to interact with the file share programmatically. Using the API is one way to access your file share on premise, but there are others which I will look at and test later.

I think the most common use case will be sharing files between VMs, and that is working at the moment. The file service is cheaper than running a VM to share files, since you pay per GB used, but remember this is in preview and the price will change.

What does not work (yet)

  • The SMB share cannot be mounted in Azure websites, but it can be mounted in web roles and worker roles. I will post more on this in the future and take it for a test drive myself.
  • The SMB share cannot be mounted on premise, but the API can be used to access it. There is a way using WebDAV and mounting it as a local share that I will look into later on.
  • The Azure storage emulator (v3.2 at the time of writing) does not support the file service; you have to test against your storage account.
  • Active Directory authentication is not supported; instead, access is controlled by storage keys.
  • The SMB share is only accessible by VMs in the same region.

Keep in mind file service is in preview and most of these limitations will hopefully be addressed in the future.

In my next post I will show you how to get started by creating a file share.

Francois Delport