Adding Performance Counters To Your Automated Tests

In this post I’m going to cover adding performance counters to your automated tests.

Performance counters can be invaluable in analysing your application’s behaviour while running tests. If you are lucky enough to use Visual Studio Enterprise you can add counters to your load test with just a few clicks.


Since I’m using a different test runner I have to add them manually as part of my test run; I’m going to use PowerShell and Logman to do this. You can create the data collectors entirely in PowerShell, but there are loads of properties you can set on a data collector and it quickly becomes unmanageable. Instead I’m going to create the data collectors manually, use Logman to export them and then import them on the test machine as part of the test sequence.

Step 1
Create your data collectors using Perfmon just like you normally do. Take note of the names and optionally specify the path to store the counter file.
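
You can verify the result from the command line; a quick sketch using Logman, assuming a data collector named CPU:

# List all data collector sets on the machine
logman.exe query
# Show the details (counters, schedule, output file) of one set
logman.exe query CPU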


When you are done, right-click on each data collector, select Export Template and save it to a file. Keep the naming convention of CounterName.xml; you will see later how this makes the script simpler.
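
If you prefer to script the export as well, Logman can write the same template from the command line. A minimal sketch, assuming a data collector named CPU:

# Export the data collector set named CPU to CPU.xml in the current folder
logman.exe export -name CPU -xml CPU.xml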

Step 2
In my environment the VMs are reset every day, so I scripted the import of the data collector templates as part of my VM rebuild. I gave each exported XML file the same name as its data collector; if you follow this convention, adding a new data collector is as simple as adding its name to the $counters array.

# Each data collector was exported to an XML file with the same name
$counters = @("CPU","ASP.NET","Disk","Memory","SQL")

foreach ($counter in $counters)
{
    Write-Host "Creating Counter $counter"
    # Import the exported template to recreate the data collector
    logman.exe import -name $counter -xml "$counter.xml"
}

Step 3
After the previous step the data collectors are created but not started yet. I want to start, stop and retrieve the data collectors for each set of tests I run. At the start of a test run I clear out the data collectors folder and start the data collectors.

# Clear out the counter files from the previous run
Remove-Item c:\logs\perfmon\* -Recurse -Force

$counters = @("CPU","ASP.NET","Disk","Memory","SQL")

foreach ($counter in $counters)
{
    Write-Host "Starting Counter $counter"
    logman.exe start $counter
}

Step 4
At the end of a test run I stop the data collectors and collect the files to include them as artefacts in my build system. You have to stop a data collector before you attempt to copy its files or even view its reports.

$counters = @("CPU","ASP.NET","Disk","Memory","SQL")

foreach ($counter in $counters)
{
    Write-Host "Stopping Counter $counter"
    # Stopping flushes and closes the counter files so they can be copied
    logman.exe stop $counter
}

I had a problem copying the files due to the permissions Windows assigned to the data collector folders when it created them, so I used the following script to assign permissions to the appropriate user group.

$Path = "C:\Logs\Perfmon\"
$ACL  = (Get-Item $Path).GetAccessControl("Access")
# Grant your user group full control, inherited by subfolders and files
$ACE  = New-Object System.Security.AccessControl.FileSystemAccessRule("YourUserGroup", "FullControl", "ContainerInherit,ObjectInherit", "None", "Allow")
$ACL.AddAccessRule($ACE)

foreach ($item in Get-ChildItem $Path -Recurse)
{
    Set-Acl -AclObject $ACL $item.FullName
}

Now you can view the counter files using PerfMon. In my case I added them to the artefacts of my build step in TeamCity.
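
A sketch of the collection step itself, assuming a hypothetical artifact staging folder and using TeamCity’s service message syntax for publishing artifacts:

# Copy the counter files to the build's artifact staging folder (path is an example)
Copy-Item C:\Logs\Perfmon\* -Destination C:\Build\Artifacts\Perfmon -Recurse -Force

# Or let TeamCity publish them directly via a service message
Write-Host "##teamcity[publishArtifacts 'C:\Logs\Perfmon => Perfmon']"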

Tip: There is actually a lot more functionality in data collectors than most people know. You can, for instance, start and stop them using various triggers, including a schedule, manage the growth and archiving of the data collector files and even create HTML reports. You could, for example, reset the counters daily, archive the log files, generate a report and email it. These settings are controlled from the Data Manager.


Analysing Counters
There are loads of counters available and it is sometimes difficult to determine if the values are indicative of a problem. Luckily there are tools to help you; I use the PAL (Performance Analysis of Logs) tool. It is fully scriptable and generates HTML reports, so you can include the analysis of the counters in your build step.
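
A rough sketch of running PAL from a build step; the parameter names are from PAL 2.x and may differ in your version, and the threshold file and paths are examples:

# Analyse a counter file against PAL's System Overview thresholds
.\PAL.ps1 -Log "C:\Logs\Perfmon\CPU.blg" -ThresholdFile "SystemOverview.xml" `
    -Interval "AUTO" -IsOutputHtml $true -HtmlOutputFileName "CpuReport.htm" `
    -OutputDir "C:\Build\Artifacts\PAL"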

Francois Delport

Migrating Azure Virtual Machines To Premium Storage

In this post I’m going to cover migrating Azure Virtual Machines to Premium Storage.

Converting your current Azure VMs from Standard Storage to Premium Storage cannot be done from the portal. The process is a mix of scripting and manual steps, although it can be fully scripted if need be. In short you have to make a new Premium Storage account, copy the blobs of the VHDs you need and then recreate the virtual machines using the copied VHDs. In this example I’m migrating a VM disk to create a single new VM, not an image that will be used to create multiple VMs.

Preparation
Make sure Premium Storage is available in the Azure region you want to use, and check which types of VMs you can use with Premium Storage; you need *S type VMs like DS and GS. Take note of the extra cost involved in using Premium Storage and the performance characteristics of the different drives and VMs.

Step 1
You have to shut down your VMs to migrate them and it takes a while, so be prepared for downtime or use availability sets. You will need the URI of each drive attached to your VM in order to copy them. In the new portal you can see the URI when you click on the disks of the VM; in the old portal there is a separate tab for disks under the virtual machines section.
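
You can also shut the VM down and retrieve the URIs with the classic Azure PowerShell module; a sketch, assuming hypothetical service and VM names:

# Stop (and deallocate) the VM before copying its disks
Stop-AzureVM -ServiceName "your_service" -Name "your_vm"

# The MediaLink property holds the URI of each disk
$vm = Get-AzureVM -ServiceName "your_service" -Name "your_vm"
($vm | Get-AzureOSDisk).MediaLink
$vm | Get-AzureDataDisk | ForEach-Object { $_.MediaLink }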

Step 2
In the portal create your new Premium Storage account in the desired region and create a blob container to store the hard disks. Make sure you choose Premium Storage.
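
If you prefer to script this step too, a sketch using the classic cmdlets; the account name, location and container name are examples:

# Create a Premium (locally redundant) storage account
New-AzureStorageAccount -StorageAccountName "your_dest_account" -Location "West US" -Type "Premium_LRS"

# Create the container that will hold the copied VHDs
$dstContext = New-AzureStorageContext -StorageAccountName "your_dest_account" -StorageAccountKey "your_dest_account_key"
New-AzureStorageContainer -Name "vhds" -Context $dstContext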

Step 3
Next you have to copy the VHD blobs to the new storage account. You can use PowerShell or third-party tools with a GUI, like CloudBerry.

$sourcevhd = "https://source_account/vhds/MyDisk.vhd"
$srcContext = New-AzureStorageContext -StorageAccountName your_source_account -StorageAccountKey your_source_account_key
$dstContext = New-AzureStorageContext -StorageAccountName your_dest_account -StorageAccountKey your_dest_account_key
Start-AzureStorageBlobCopy -SrcUri $sourcevhd -SrcContext $srcContext -DestContainer "vhds" -DestBlob "NewDisk.vhd" -DestContext $dstContext

This step can take a while depending on the size of the VHD. Repeat for every data disk attached to your VM.
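
The copy runs asynchronously on the storage service, so you can poll its progress and block until it finishes; a sketch using the destination context from the previous step:

# Wait for the asynchronous blob copy to complete, reporting progress as it goes
Get-AzureStorageBlobCopyState -Container "vhds" -Blob "NewDisk.vhd" -Context $dstContext -WaitForComplete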

Step 4
Create a new VHD from the blob you just copied to the Premium Storage account. You can use the old portal if you prefer a GUI, or PowerShell.

Add-AzureDisk -DiskName "NewDiskName" -MediaLocation "https://your_storage_account/vhds/NewDisk.vhd" -Label "New OS Disk" -OS "Windows"

Step 5
Create a new VM using the disk you prepared in the previous step. You can use PowerShell.

$OSDisk = Get-AzureDisk -DiskName "NewDiskName"

$vm = New-AzureVMConfig -Name $vmName -InstanceSize $vmSize -DiskName $OSDisk.DiskName

New-AzureVM -ServiceName $serviceName -VM $vm

Note that I used -DiskName, not -ImageName, when creating the VM. You can also use the old portal, but choose My Disks as the source of the VM.


The steps above copied the OS disk of a VM; if you have data disks attached to the VM you have to repeat the process to copy those as well. Use the Add-AzureDataDisk cmdlet to attach the data disks to your new VM.
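
A sketch of attaching a copied data disk to the VM configuration before calling New-AzureVM; the disk name and LUN are examples:

# Attach an existing (imported) data disk to the VM configuration at LUN 0
$vm = $vm | Add-AzureDataDisk -Import -DiskName "NewDataDiskName" -LUN 0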

If you want to create multiple VMs, create an image instead of a disk using Add-AzureVMImage and point it to a generalised VHD.

Francois Delport

Windows Container Services Preview

In this post I’m going to take a quick look at Windows Container Services Preview.

Containers have been all the rage in the Linux world for the last few years and now they are finally coming to Windows with Server 2016, which is currently in preview. There is quite a lot to grasp when it comes to containers, especially if it is your first time using them, so in this post I’m going to bring together the bits and pieces as far as I understand them. Since Windows containers are in preview and still evolving, the information in this post will probably be out of date the moment I finish writing it 🙂

What is a container?
Containers are a virtualisation technique where you have multiple user-mode instances running on top of one kernel. Basically you are virtualising the OS instead of a machine like VMs do. Each container has its own file system, registry and network settings that are isolated from the host OS, but it still uses the kernel of that OS. Since you have this dependency on the host OS you can’t run Linux containers directly on a Windows machine; Linux containers need a Linux kernel and vice versa. You can however have a Linux VM running on a Windows host and that Linux VM can host Linux containers, which is what Docker Machine does. Basically, containers are used to host applications and services.

When you create a container image it starts from a base image; in Windows your choices are Windows Server Core and Windows Nano Server. These are your only choices, all Windows containers have to use one of these as the base and you can’t create your own base images. You can chain container images together: for instance, you can create a container and install IIS on it, make this the base image for your web applications, and from it create three separate containers for MVC 3.0, MVC 4.0 and MVC 5.0 applications respectively. Any changes you make to the base IIS container image, like enabling Windows Authentication in IIS, will also show up in the child images when you create them. Containers do not have a GUI; you connect to the container instance using the command line or PowerShell.

Host Resources
In the initial version of Windows Container Services (WCS) you won’t be able to access host devices directly. You get access to the host’s network via a virtual NIC that is connected to a virtual switch which supports NAT or transparent (bridge) mode. You can configure shared folders between the host and the container if you need access to files on the host. The host CPU, RAM, disk and network I/O allocated to a container can also be constrained.

Container Management
There are a few options you can use to manage your containers:

  • PowerShell
    The easiest way to explain it is to look at the quick start and the cmdlet reference. There are cmdlets to create containers, start them, set network options and so on; see the sketch after this list.
  • Visual Studio
    The Visual Studio Tools for Docker are currently in preview; they enable application deployment directly to Linux containers and will eventually support Windows containers.
  • Docker
    Firstly, Docker is a management tool for containers; Docker itself is not the container implementation, the container functionality comes from the underlying OS. Docker, with help from Microsoft, is working on a Docker Engine for Windows that will allow you to use the same commands to manage containers independent of the host OS. For instance, you will use the same syntax to create a container or set the network configuration for Linux and Windows containers.
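
To give you a feel for the PowerShell option, here is a minimal sketch based on the preview quick start; the cmdlets and the image and switch names are from the Server 2016 Technical Preview and are likely to change:

# List the available base container images
Get-ContainerImage

# Create and start a container from the Windows Server Core base image
$container = New-Container -Name "TestContainer" -ContainerImageName "WindowsServerCore" -SwitchName "Virtual Switch"
Start-Container $container

# Open an interactive PowerShell session inside the running container
Enter-PSSession -ContainerId $container.ContainerId -RunAsAdministrator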

Windows Hyper-V Containers
To add to the confusion, Windows will also feature Hyper-V Containers, which run each container in its own VM to add further isolation. This VM uses Nano Server as the VM OS, with just enough functionality to run a container.

Switching to containers
Containers are not a replacement for VMs; they provide slightly different functionality, and it is likely that your container hosts will be VMs. It is also likely you’ll have to change your applications and the way you develop them to truly take advantage of containers. Containers are quicker to create and start than VMs, which makes them ideal for microservices and scale-out applications. Designing your applications to take advantage of this architecture is beyond the scope of this post. Containers are also useful for development scenarios: since they are so lightweight, you can test your application in a standardised environment in just a few minutes, as part of your build process or on the developer’s local machine.

Francois Delport