Change The MAC Address On Your Azure Virtual Machine

Today I’m going to show you how to add a second NIC to an Azure Virtual Machine and how to change its MAC address.

I had this requirement while automating the deployment of a component that is license locked to the MAC address of a NIC. In itself this is not a problem but combined with Azure and our deployment workflow it presented some challenges. As part of the deployment workflow the Azure VM is deleted and restored from a VHD image. Since this process creates a new VM and new NICs you also get a new MAC address every time which caused the license check to fail.

To avoid interfering with the normal operation of the networking in Azure I thought it best to add a second NIC on its own subnet and use it for the license while leaving the default NIC intact.

So the first step was to create a new subnet and to give it a different IP address range from the default NIC.
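In the classic (ASM) model there is no dedicated cmdlet to add a subnet; one way is to export the virtual network configuration, edit the XML and import it again. A hedged sketch, where the file path and subnet values are examples only:

```powershell
# Export the current virtual network configuration to a file.
Get-AzureVNetConfig -ExportToFile "C:\Temp\NetworkConfig.xml"

# Edit the XML by hand (or in script) and add the new subnet, e.g.:
#   <Subnet name="License">
#     <AddressPrefix>10.32.2.0/24</AddressPrefix>
#   </Subnet>

# Import the updated configuration back into the subscription.
Set-AzureVNetConfig -ConfigurationPath "C:\Temp\NetworkConfig.xml"
```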


The second step is to add another NIC to the VM when you create it:

Add-AzureNetworkInterfaceConfig -Name "LicenseNIC" -SubnetName "License" -VM $NewVM

The third step is the PowerShell script that changes the MAC address once the new VM is created. Credit to Jason Fossen for the original script. This script is executed on the VM itself, not against Azure; you can use Invoke-Command, for instance, to execute it remotely on the VM as part of your deployment script.
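Something along these lines could run the script remotely over WinRM; the service name, port and script file name are placeholders, and the VM must have PowerShell remoting enabled:

```powershell
# Credentials for an admin account on the target VM.
$cred = Get-Credential

# Run the MAC-change script on the VM itself over HTTPS remoting.
# ASM VMs expose PowerShell remoting on the cloud service DNS name,
# typically on port 5986.
Invoke-Command -ComputerName "yourservice.cloudapp.net" -Port 5986 -UseSSL `
    -Credential $cred -FilePath ".\Set-LicenseNicMac.ps1"
```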

In the script I identify the NIC used for licensing based on its IP address (10.32.2.*) and then retrieve the index number for this NIC. This index is the same one used to find the NIC in the registry.

$indexobj = Get-WmiObject win32_networkadapterconfiguration -Filter "ipenabled = 'true'" | Where-Object {$_.IPAddress -like "10.32.2.*" } | Select-Object -Property Index
$index = $indexobj.index

The registry key name for the NIC always has four digits, so pad the index with leading zeros before retrieving the key.

$index = $index.tostring().padleft(4,"0")
$regkey = get-item "hklm:\system\CurrentControlSet\control\class\{4D36E972-E325-11CE-BFC1-08002BE10318}\$index"

Set a new value for the MAC address, in this case 30-D5-1E-DD-F2-A5.
$regpath = "hklm:\system\CurrentControlSet\control\class\{4D36E972-E325-11CE-BFC1-08002BE10318}\$index"
set-itemproperty -path $regpath -name "NetworkAddress" -value $("30-D5-1E-DD-F2-A5")

If the NIC is not refreshed the new MAC address is not picked up by the licensing component we used. This may not be necessary depending on your use case.

# $thenic is the adapter object for the licensing NIC; retrieve it using
# the (unpadded) index we found earlier.
$thenic = Get-WmiObject win32_networkadapter -Filter "Index = $($indexobj.Index)"
ipconfig.exe /release """$($thenic.netconnectionid)""" | out-null
$thenic.disable() | out-null
$thenic.enable() | out-null
ipconfig.exe /renew """$($thenic.netconnectionid)""" | out-null

If you now look at your NIC properties you will see the new MAC address.
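You can also confirm the change from PowerShell; this assumes the licensing NIC still has a 10.32.2.* address:

```powershell
# List the description and MAC address of the licensing NIC.
Get-WmiObject win32_networkadapterconfiguration -Filter "ipenabled = 'true'" |
    Where-Object { $_.IPAddress -like "10.32.2.*" } |
    Select-Object -Property Description, MACAddress
```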


P.S. On my Windows 10 machine the new address didn’t display properly in the NIC properties dialog, but you can use ipconfig /all to see the MAC address.

Francois Delport

Part 4: Restore Virtual Machine Endpoints And Virtual Network Settings

Last week I briefly mentioned you have to re-configure all the settings on your VM when you restore it from a VHD image, you basically only get the hard disk back. Today I’m going to show you a few more details around restoring your VM endpoints and virtual network settings.

If you are completely new to Azure Virtual Networks please read the intro here. Among other things you use them to assign DHCP pools, create VPN connections and set up security groups. The feature I found the most useful was connecting VMs from different cloud services to the same internal network by putting them all on the same VNet. It is a lot simpler than having multiple VMs in the same cloud service and mapping all those external endpoints with different port numbers to the VMs inside the cloud service. By using different cloud services you can also give each VM its own public DNS name.

You can add and remove endpoints after creating your VM, but I could not find a way to change the VNet of a VM after it was created, apart from recreating the VM, so it is important to specify the VNet when you create the VM. For example, in this script I pipe the VM config, subnet name and static IP address to the New-AzureVM command and pass the -VNetName parameter with the name of my VNet.

New-AzureVMConfig -Name "TestVM" -InstanceSize "Standard_D2" -ImageName "VMImage" |
Set-AzureSubnet -SubnetNames "Subnet-1" |
Set-AzureStaticVNetIP -IPAddress "10.0.0.10" |
New-AzureVM -ServiceName "NewService" -VNetName "Test VNet"

If you don’t need a static IP you can omit the Set-AzureStaticVNetIP line.

Next up I’m going to map some endpoints to my VM.

Get-AzureVM -ServiceName "NewService" -Name "TestVM" |
Add-AzureEndpoint -Name "Remote Desktop" -Protocol "tcp" -PublicPort 3389 -LocalPort 3389 |
Add-AzureEndpoint -Name "PowerShell" -Protocol "tcp" -PublicPort 5986 -LocalPort 5986 |
Update-AzureVM

I used well-known port numbers, but for a very small amount of extra security you can use different ones. This is a very simple example; I didn’t use load balancing, for instance, but if you read the documentation you will see how to do that.

Tip: In my environment while I was deleting the existing VM and creating a new one from the imaged VHD I sometimes received this error:

New-AzureVM : Networking.DeploymentVNetAddressAllocationFailure : Unable to allocate the required address spaces for the deployment in a new or predefined subnet that is contained within the specified virtual network.

It happened intermittently; it turned out that Azure sometimes takes a while to release the static IP address. To try to avoid the problem I stop and deprovision the VM before deleting it, and I also added a retry loop to my PowerShell script. I didn’t have time to test it exhaustively, so I can’t confirm that it works, but it looks that way.
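The retry loop looked roughly like this; a sketch only, with placeholder names, assuming $NewVM was built with New-AzureVMConfig as in the earlier posts:

```powershell
# Retry VM creation a few times to ride out the intermittent
# address-allocation failure while Azure releases the static IP.
$maxRetries = 5
for ($attempt = 1; $attempt -le $maxRetries; $attempt++) {
    try {
        New-AzureVM -VMs $NewVM -ServiceName "NewService" -VNetName "Test VNet" `
            -WaitForBoot -ErrorAction Stop
        break   # success, stop retrying
    }
    catch {
        if ($attempt -eq $maxRetries) { throw }   # give up after the last attempt
        Write-Warning "Attempt $attempt failed: $($_.Exception.Message)"
        Start-Sleep -Seconds 60                   # give Azure time to release the IP
    }
}
```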

Keep in mind this can also happen if another VM is using the same IP address as the one you are trying to assign to your VM. If you look at the list of IP addresses assigned to your running VMs in the portal you can see if it is already in use.

I didn’t experience this one myself, but when I was investigating this error message I came across posts where users received this error although the IP was not in use when they looked at the portal. It turned out that the static IP was assigned at the OS level. Never do that; always assign the static IP in Azure. Just out of interest, you will notice that when you assign a static IP to the VM in Azure it doesn’t show in your NIC settings in the Windows control panel, since everything happens in Azure, not the OS.

Francois Delport

Copying Files To And From Azure Blob Storage

I had a situation today where I had to copy a file from a build server on premise to a location where a VM in Azure could access it as part of an automated process.

You have a few options that I know about, some of which are:

  • Ideally you would connect to Azure using VPN and then treat the VMs like any other machine on the network and copy to a share, but that was not possible at the time.
  • Next best option seemed like Azure File Service but the subscription I was using had to join the preview program first and I couldn’t know for sure how long it would take to activate.
  • Next option I looked at was using Azure Blob Storage. It turned out to work pretty well.

If you are going to copy files on an ad hoc basis, or you want to avoid rolling your own PowerShell script and you don’t mind installing extra tools on your build server, AzCopy is pretty straightforward to use. You can download the latest version of it here. AzCopy can be used with Azure File Storage, Blob Storage and Table Storage.

In Blob Storage your blobs are stored in containers, which you can think of almost like folders/directories. Creating one is pretty easy using PowerShell.

New-AzureStorageContainer -Name testfiles -Permission Off

Note the container name must be lower case. In this example I set the permissions to the owner of the storage account only, but you can make it public if you have the requirement. In this case I’m copying all the files from a folder on my local machine “C:\TestUpload” to the “testfiles” container.

AzCopy /Source:C:\TestUpload /Dest:https://YourStorageAccount.blob.core.windows.net/testfiles /DestKey:YourStorageKey

You can get your Blob URL and storage key from the Azure Portal under the Storage tab.

In the end I went with PowerShell to avoid installing tools on the build servers plus the build process already used PowerShell. In my case the build process created the artifact we needed as a single zip file, thus the script was very simple.

Set-AzureStorageBlobContent -Container testfiles -File C:\TestUpload\TestCases.zip -Blob TestCases.zip

If you wanted to copy all the files in a folder you would have to iterate over them and call Set-AzureStorageBlobContent on each one, a quick Google search showed multiple examples on how to do that. Using AzCopy makes this scenario easier since you can tell it to copy a folder and do it recursively as well.
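A minimal sketch of such a loop, assuming the Azure module is loaded and a default storage account is set:

```powershell
# Upload every file in the folder to the "testfiles" container,
# using the file name as the blob name.
Get-ChildItem "C:\TestUpload" -File | ForEach-Object {
    Set-AzureStorageBlobContent -Container "testfiles" -File $_.FullName -Blob $_.Name
}
```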

On the Azure VM I had to download the file again as part of the script running the tests.

Get-AzureStorageBlobContent -Blob TestCases.zip -Container testfiles -Destination TestCases.zip

Very easy to do and it worked very well in this situation.

Note: if you have a large number of files to upload it is better to do them in parallel since it can be a bit slow.
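One way to parallelise the upload is with background jobs; a sketch, assuming the Azure module is available inside the jobs and the default storage account is set:

```powershell
# Start one background job per file, then wait for them all to finish.
$jobs = Get-ChildItem "C:\TestUpload" -File | ForEach-Object {
    Start-Job -ScriptBlock {
        param($path, $name)
        Set-AzureStorageBlobContent -Container "testfiles" -File $path -Blob $name -Force
    } -ArgumentList $_.FullName, $_.Name
}
$jobs | Wait-Job | Receive-Job
$jobs | Remove-Job
```

For a very large number of files you would want to throttle the number of concurrent jobs instead of starting one per file.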

Francois Delport

Part 3: How To Snapshot And Restore Azure Virtual Machines

UPDATE: For Azure Resource Manager virtual machine snapshots using managed disks read this post and for unmanaged disks read this post.

This post covers Azure Service Manager (ASM) virtual machines.

If you are familiar with VMWare and Hyper-V you’ll know how easy and handy it is to snapshot a VM and return to that snapshot later on. I use it especially when testing installers and automated deployments where you have to run them multiple times on a clean machine.

In this post I’m going to show you how to “snapshot” a VM in Azure and then revert back to that snapshot. I refer to it as a snapshot, but Azure doesn’t support snapshots; this is more a workaround that involves capturing the VM virtual hard disk image and storing it in blob storage.

In other virtualisation platforms a snapshot captures the disk at that point in time; when you make changes to the file system in your VM, your changes are written to a delta file that contains only the modifications since the previous snapshot. This way you can quickly revert back or roll forward between snapshots. This is a very simplified explanation; there is a lot more to read if you are interested. In Azure it is not so quick, since the whole VM image is captured and then restored when you revert back to it.

Something else I want to point out before we get started is the difference between generalised and specialised VM images. You can read the whole explanation here. In short you should not clone new VMs from a specialised image since it can lead to problems if your VM is in a domain. It also causes problems with other software like Octopus Deploy since every machine gets a unique certificate when the Tentacle is configured. If you are capturing a VM image to clone it, run SysPrep first to generalise it.
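If you do need a generalised image, the Sysprep step is a single command, run on the VM itself rather than against Azure:

```powershell
# Generalise the Windows installation and shut the VM down when done.
& "$env:windir\System32\Sysprep\sysprep.exe" /generalize /oobe /shutdown
```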

Now on to the fun part, the script. If this is your first time running PowerShell against Azure refer to this post to setup your environment. In PowerShell run the following to capture your image.

Save-AzureVMImage -ServiceName "YourServiceName" -Name "YourVmName" -ImageName "DescriptiveImageName" -OSState Specialized

If you open up the Cloud Explorer in Visual Studio you will see your new VHD image under:
Storage Accounts -> YourStorageAccountName -> Blob Containers -> vhds

Or you can see it in the portal under your storage account.

To revert back to this image you have to delete your existing VM and its disk and create a new one from the saved VHD image. First make sure your image was created successfully before deleting your VM.

Remove-AzureVM -Name "YourVMName" -ServiceName "YourServiceName" -DeleteVHD

Then create a new VM from the image into your existing service.

New-AzureQuickVM -Name "YourVMName" -ImageName "DescriptiveImageName" -Windows -ServiceName "YourServiceName"  -InstanceSize "Basic_A2" -WaitForBoot

Since this is a new VM your previous Azure configuration will be gone; you have to create the endpoints for this VM again, and so on. If you use a virtual network, be sure to specify your VNet when you create the VM, because as far as I am aware you can’t change it afterwards.

Tip: If you get a “CurrentStorageAccountName is not accessible” error you have to set your default storage account for the subscription by running:

Get-AzureStorageAccount

Take note of the storage account name then run:

Set-AzureSubscription -SubscriptionName "YourSubscriptionName" -CurrentStorageAccountName "YourStorageAccountName" -PassThru

Francois Delport

Part 2: Creating Azure Virtual Machines Using Powershell – The Actual Script

In my previous post I explained what a cloud service is in relation to virtual machines in Azure. I want to mention virtual networks quickly as well since it ties in with your cloud service and virtual machine network configuration before we create a VM.

If you want VMs from multiple cloud services to connect to each other via the internal network instead of public IP addresses you can add them to a virtual network. You also need a virtual network to connect your on-premise infrastructure to Azure via VPN. I will cover the details of virtual networks and VPN in a future post.

When you create a new VM you can choose to add it to an existing virtual network, create a new virtual network, or create the VM without one. In this screenshot I am creating a new VM and using an existing virtual network called “demo”. You will see why this matters when you run the script to create a VM.

VNET

Finally I can get to the actual demo to create a VM in Azure.

You can download Azure PowerShell modules using the Microsoft Web Platform installer. Before you can run PowerShell scripts against your Azure VMs you have to import your publish settings file. Open up PowerShell and run the following command.

Get-AzurePublishSettingsFile

This will launch the Azure Portal in your default browser; sign in and follow the instructions to download your publish settings. Remember to save the file with a .publishsettings extension and put it in a directory where you can reach it easily. Back in PowerShell, execute the following command to import the file that was downloaded in the previous step.

Import-AzurePublishSettingsFile PathToPublishSettingsFile.publishsettings

And here is the PowerShell script to create a new VM:

$ImageName = "a699494373c04fc0bc8f2bb1389d6106__Windows-Server-2012-R2-20150726-en.us-127GB.vhd"
$VmName = "NewVMName"
$InstanceSize = "Standard_D1"
$AdminUsername = "MyNewAdmin"
$Password = "MyNewPassword"
$Location = "West US"
$ServiceName = $VmName

#Set the image
$Img = Get-AzureVMImage -ImageName $ImageName

#Create the config
$NewVM = New-AzureVMConfig -Name $VmName -InstanceSize $InstanceSize -ImageName $Img.ImageName | Add-AzureProvisioningConfig -Windows -AdminUsername $AdminUsername -Password $Password

#Create the vm
New-AzureVM -VMs $NewVM -WaitForBoot -Verbose -Location $Location -ServiceName $ServiceName

This is a bare bones example, if you look at the documentation there are lots more you can do: Link to MSDN

Parameters Explained

$ImageName: is the name of the image you are using to create this VM. To see which VM images are available, including your own images you created, you can run: Get-AzureVMImage
$VmName: The name for your new VM.
$InstanceSize: The size of the hardware for your VM. To see which instance sizes are available use this link.
$AdminUsername and $Password are the username and password for the administrator user that will be created on this new VM.
$Location: Is the location for this new VM, to get the list of locations run: Get-AzureLocation

NOTE: If you specify the -Location or -AffinityGroup parameters Azure will create a new cloud service for this VM. In this script it will create a new cloud service with the same name as the VM, it is stored in $ServiceName. To add this VM to an existing cloud service specify the -ServiceName parameter but leave out -Location.

I didn’t specify a virtual network for this VM but you can use the -VNetName parameter to do that.

Tip: Make sure your password is valid according to Azure standards. The first one I used wasn’t and the PowerShell script didn’t fail, it happily created the VM but the admin user wasn’t valid and I had to reset the password before I was able to RDP into the machine.
Secondly when you create the storage account you can’t use zone redundant storage for VM disks.

Francois Delport

Part 1: Creating Azure Virtual Machines Using PowerShell

Before I delve into the PowerShell script to create Azure Virtual Machines I want to talk about the relationship between virtual machines and cloud services. This can be a source of confusion and I think it is not so obvious when you look at the Azure portal.

When you create a web role, worker role or VM it is created inside a cloud service. A cloud service is a container that contains roles, and a role contains instances. For example you can have a web role that contains 3 instances; in reality this means you have 3 Windows servers running IIS inside this web role. Being a web role you have some access to the underlying OS, but you don’t see the 3 VMs running as you do with compute instances. They are still there, you just see them as web role instances.

When you create a VM it will also reside inside a cloud service as a role, but you don’t see a role called virtual machines inside your cloud service, you just see the instances. When you create a new VM you have the option to create a new cloud service or add the VM to an existing cloud service. The default behaviour is to create a new cloud service, so you end up with a 1 to 1 mapping between VMs and cloud services. This works great if you don’t need your VMs to communicate with each other on the private network, and it makes mapping endpoints easier since there is only one instance in the cloud service using the ports.

If you look at a cloud service you will see it has a DNS name, and the instances inside the cloud service all use this DNS name. The instances have private IP addresses so they can communicate with each other over the private network in the cloud service. If you have, for instance, 3 web roles, the load balancer will redirect port 80 traffic to these 3 web role instances and all 3 will be handling the HTTP requests, which is great for scaling out your web app. If you have multiple VMs in the cloud service you will see the load balancer sets up endpoints mapping external ports to the internal ports on your VM instances.

If you want to remote desktop into these VMs, for instance, you have to use the port that the load balancer assigned for that VM; by default it will automatically choose a port for RDP, but you can specify one if you want. To change the ports you click on the endpoints tab for that VM under the Virtual Machines section in the portal. Remember you can’t just choose any port; you have to make sure the port is not already in use by another instance in that cloud service.

When you create a cloud service you choose the region to host this cloud service, if you then add more VMs to this cloud service you can’t change the location for the VM, it is inherited from the cloud service.

Next time I’ll show the PowerShell script to create a VM.

Francois Delport

Managing Git In Visual Studio 2015

In this last post about Git I’m going to touch on a few odds and ends I came across that are worth mentioning or confused me at first.

Working Directories
Git does not create folders in your local repository for every branch you check out like TFSVC or SVN does. There is only one working directory per repository, and in that directory is a hidden .git folder where the repository data is stored.
GitFolder

When you change your active branch by selecting a different branch in Team Explorer, Git retrieves the files for that branch into your working folder. If you have changes that were not committed to the local branch, Visual Studio will warn you so you don’t lose your work when you switch branches.

Multiple Repositories
When you clone a repository you get everything in the history of that repository, and I mean everything: every version of all the file changes that happened since the repository was initialised. This can be handy since any client repository can be used to recover the server repository should anything happen, but it also means the repository can get really big over time.

To keep it manageable you should create multiple repositories in your team project and keep solutions that change together or have a logical reason to be together in a repository.

Keep in mind that if you have projects that change together spread across multiple repositories it can add to your headaches: merging, branching and tracking versions across multiple repositories can be difficult. There are also some practical implications, like searching across multiple repositories not being supported by all Git tools.

To create a new repository, open the TFS portal, browse to your team project and click on the Code tab. Click on the drop-down arrow next to your current repository name, select Manage Repositories… and click on New Repository…. Give the new repository a meaningful name and click OK.

ManageRepos

AddRepo

Back in Visual Studio, open Team Explorer and click on Manage Connections to see the new repository, right click on it and connect to it.

ManageConnection

Visual Studio will prompt you to clone the repository to start using it.

Merge conflicts in pull requests
In my previous post I discussed pull requests; now I’ll briefly show you how to handle merge conflicts. When you submit a pull request to merge your code back into the master branch and there are other changes that cannot be automatically merged by Git, you will see this error message.

MergeConflict

TFS actually makes it very easy to fix: when you click on the Manual merge help… link under the merge failed message it shows exactly what to do.

MergeHelp

After you do a pull to get the latest changes to your local branches you have to merge master into BranchOfMaster. First make sure your branch is the active branch in Git, then right click on it and select Merge From…, we want to merge from master into this branch.

StartMerge

When Team Explorer shows the Resolve Conflicts section click on the Merge button and you will see the merge editor.
DoMerge

In this case I am going to select both changes and click on the Accept Merge button. Still under the Resolve Conflicts section of Team Explorer, click on Commit Merge. If you now look at the pull request screen in the TFS portal you will see there are no more merge conflicts and you can complete the pull request.

FinishMerge

Remember to pull down the changes to your local master, since the pull request only updated the remote/origin master. I assumed that right-clicking my remote/origin branches and viewing the history would be real time, but it seems you have to do a pull to see the latest history as well.

Francois Delport

Git Branching In Visual Studio 2015

There are many different ways to use Git, or Git workflows; you can even use it like a central repository, similar to the way you would use SVN or Team Foundation Version Control, but most of the popular and widely used workflows involve branching.

In this post I am going to give a quick demo on how to create a new branch from a remote master in Visual Studio 2015 and submit a pull request to merge it back to the remote master branch. This post is for absolute beginners to Git and follows on the previous post that showed how to create your repository.

In your solution open up Team Explorer and the Branches section. Right click on the master branch in the remote/origin server and click on Create new local branch. Make sure that Track remote branch is NOT selected and give the branch a meaningful name, click Create branch.

CreateBranch

If you look at the Branches section you will see the new FixABug branch created in your local repository but it is not on the server yet. To push the branch to the server, right click on the FixABug branch and click on Publish Branch. If you now look at the remote/origin repository you will see the new branch on the server.

Make some changes to your solution and commit them by opening the Changes section in Team Explorer; give the commit a proper commit message and click on Commit. These changes are now committed to your local repository; send them to your bug fix branch on the server by doing a sync.

Sync

I did the commit and sync in two separate steps to drive the point home of local repositories but you can use Commit and Sync to do both at once. Next we are going to merge the changes from the FixABug branch that is now on the server with the master branch on the server.

Now we are going to create the pull request. In the Branches section make sure the remote FixABug branch is selected, right click on it and select Create Pull Request and watch the seamless integration between GIT and Visual Studio fall apart as you are taken to the TFS web portal to create the pull request 🙂

CreatePull

In this case I am the same person making the pull request and allowing it but usually the person allowing it will be a senior member of the team that will review the change before allowing it. You can add reviewers to this pull request using the reviewers drop down on the right. I’m going to add myself to the reviewers list.

Review

If I now look under pull requests assigned to me I will see this one waiting. As the only reviewer I am going to approve this pull request.

Approve

Now I will complete the pull request and merge the changes into master by clicking on the Complete pull request button. You will have the option to delete the FixABug branch, which can be handy if this bug is fixed and the branch will not be needed anymore. If you now look at the master branch on the server you will see your changes.

This was obviously the happy path, in reality there can be merge conflicts and reviewers can ask for changes and pull requests can be abandoned but I will look at these scenarios in a later post.

Francois Delport

Introduction To Git In Team Foundation Server 2015 And Visual Studio 2015

This is a short intro to using Git in Visual Studio 2015 and Team Foundation Server 2015; it is aimed at someone completely new to Git, like me. :)

The most important concept to understand about Git is the fact that it is distributed and works on branches. You will be downloading the code from the remote/origin repository to a local repository on your dev environment. Changes you make to the code will first be committed to this local repository. When you are ready to send your changes up to the remote/origin repository, you will first pull down changes made by other users on the remote/origin repository and merge them into your local branch to fix merge conflicts, and then you will push the changes to the remote/origin repository.

Git support is now baked into Visual Studio. When you install it, remember to select Custom installation and select Git tools as part of your installation.

Installl GIT Tools

Once installed open Team Explorer, in my case I didn’t have any other TFS connections setup. Click on Manage Connections -> Connect to Team Project and add a new TFS server connection.

CreateConnection

Once created, select the server from the list of connections to connect to it. You can ignore the message to configure a workspace, GIT does not use one. Click on the drop down arrow and select Projects and My Teams -> New Team Project… to add a new team project.

AddNewTeamProject

Give your project a meaningful name, in this case ‘Trying Out GIT’, and select the process template of your choice.

SelectGIT

Remember to choose Git as your version control system. After a few seconds you will get a success confirmation. You will see the new team project in your list of projects in Team Explorer, double click on it to connect to the project.

Even though the remote repository is empty you still have to clone it to your local repository to start working. You will see a message to clone the repository; click on Clone and choose a folder for your local repository.

CloneRepo

GIT will download a copy of the empty remote repository to your dev environment. When you make changes and commit them it will first be into this local repository. You will see later how the committed changes are sent to the remote repository on the server.

Since this is an empty repository I’m going to add some code to it: click on the New… button under Solutions to add a new solution to this repository. For this demo I am adding a new ASP.NET MVC application, but the type of project is irrelevant. In Team Explorer click on Changes and you will see the changes from the solution you created; note the branch is master since you just created the repository from scratch.

Click on Commit and you will see a successful commit message; each commit gets a SHA-1 hash to uniquely identify it and its changes. This commit was in your local repository; you still have to push it to the server. If you open the TFS web portal and go to the code window you will see no code in the repository yet. Click on Sync and you will see the synchronization screen.

Sync

There is quite a bit to explain here:
Sync: Pull down changes from the server, merge them into your local repository, then send your local commits to the server.
Fetch: Pull down changes from the server but do not merge them into your local repository; you have to merge them yourself.
Pull: Pull down changes from the server and merge them into your local repository; your local changes are not sent to the server.
Push: Send your local commits to the server; the push will be rejected if the server has changes you have not merged into your local repository yet.
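For reference, the same four operations map onto plain git commands. Here is a throwaway demo against a local bare repository standing in for the TFS server; all paths and names are made up:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init --bare -q server.git                  # stand-in for the TFS remote
git clone -q server.git work && cd work
git config user.email demo@example.com
git config user.name demo
echo change > file.txt && git add file.txt
git commit -qm "local commit"                  # commit lands in the LOCAL repo only
git push -q origin HEAD                        # Push: send local commits to the server
git fetch -q origin                            # Fetch: download changes, no merge
git pull -q origin "$(git rev-parse --abbrev-ref HEAD)"   # Pull: fetch + merge
# Sync in Visual Studio is simply a pull followed by a push.
```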

You can see the local outgoing commit waiting to be pushed to the server; press Push to send the changes to the server. In this case we can safely push since this is the initial commit into master. Normally your changes would be made in a branch and you would submit a pull request to pull your changes into the master branch. The pull request will be reviewed by the person in charge of the master branch and then accepted or declined. I’ll show pull requests and code reviews in my next post.

When push is done select Branches from the team menu to see your master branch on the server.

Origin

Another term you will come across is origin: it is the default name for the remote repository you cloned from. In Visual Studio it will show as remote/origin; in other Git tools it will usually just show as origin.

Francois Delport

Part 4: How To Enable Clients To Access A File Share From An URL

While delving into the Azure file service I came across a requirement to allow clients to access files from file storage using a URL. At the time of writing you cannot use Shared Access Signatures with file storage to grant access. I am going to show one possible solution using ASP.NET MVC that will allow you to use a URL like this:

http://sitename.azurewebsites.net/Storage/RootFolder/Inroot.png
http://sitename.azurewebsites.net/Storage/RootFolder/SubFolder/File.png

Here the path portion of the URL after Storage (RootFolder/Inroot.png) matches the exact path to a file in your Azure file share, in this case:

https://accountname.file.core.windows.net/ShareName/RootFolder/Inroot.png

In the example above Storage is just a friendlier name I chose as my route to the controller.

Steps

  1. In your MVC project create a new controller and add an action to it that will be used to retrieve the file:

    public class StorageController : Controller
    {
        public ActionResult StreamFile(string filepath)
        {
            // Base URL of the Azure file share; StorageConnectionString is a
            // placeholder for your storage account connection string.
            string storagebase = "https://StorageAccount.file.core.windows.net/ShareName/";
            var storageAccount = CloudStorageAccount.Parse(StorageConnectionString);
            var fileclient = storageAccount.CreateCloudFileClient();
            var fileUri = new Uri(storagebase + filepath, UriKind.Absolute);
            var azfile = new CloudFile(fileUri, fileclient.Credentials);
            // The content type is hardcoded for this demo; derive it from the
            // file extension in real code.
            return new FileStreamResult(azfile.OpenRead(), "image/png");
        }
    }

Note: I am streaming the file back, for scalability and performance reasons you do not want to read large files into memory.

  2. Add a new route. In this case I am redirecting all requests for Storage to my StorageController and the StreamFile action; everything after the Storage section of the URL is passed along in the filepath parameter.

routes.MapRoute(
    name: "Files",
    url: "Storage/{*filepath}",
    defaults: new { controller = "Storage", action = "StreamFile", filepath = "" });

  3. Enable runAllManagedModulesForAllRequests in your web.config

<system.webServer>
<modules runAllManagedModulesForAllRequests="true">
...
</system.webServer>

The result

TestImage

We need runAllManagedModulesForAllRequests so that ASP.NET services the request instead of IIS. Keep in mind this is a demo and not meant for high-performance sites: enabling runAllManagedModulesForAllRequests is bad for performance, since all managed modules will run for all requests.

Update: If you can’t or don’t want to change the code for your file access mechanism or you absolutely have to access the files using an SMB share you could move your application to an Azure Web Role which supports mounting a file share.

Francois Delport