Application Insights Customisation And OMS Integration

In this post I’ll be taking a look at Application Insights customisation and OMS Integration.

Custom Dashboards

You can pin the out-of-the-box charts to your Azure Portal dashboard by clicking on the pin in the top right corner. Analytics query charts can also be pinned using the same method, but they are pinned to shared dashboards. Shared dashboards exist as an Azure resource in a resource group and you can use RBAC to control access to them.

[Screenshot]

Alerts

You can create alerts based on the telemetry in Application Insights; you will find the Alert Rule button in the Metrics Explorer blade.

[Screenshot]

Currently you can only create alert rules based on out-of-the-box telemetry, not on custom events or analytics queries. The good news is that the feature is in preview, so it should be available soon (link to UserVoice).

Correlating Entries And Custom Properties

By default Application Insights populates the operation_id for every web request and propagates that operation_id to dependencies it can trace out-of-the-box, for example SQL Server queries and WCF calls to HTTP endpoints. The example below is for SQL Server queries joined to web requests.

[Screenshot]

If you have a dependency that Application Insights can't trace automatically, or you have multiple levels of dependencies, you have to provide your own solution to propagate the operation_id, or your own contextual identifier like a customer id. You can do this by creating a TelemetryInitializer to add your custom id, or to grab the web request id and pass it along to the other services, example here.

OMS Integration

You can view and query your Application Insights telemetry in OMS by installing the OMS Application Insights solution from the OMS portal and configuring which applications you want to connect from Application Insights.

[Screenshot]

You can connect multiple applications from different subscriptions, which makes it easy to create a consolidated view. You can correlate Application Insights telemetry with other data sources in OMS, like infrastructure telemetry, making it easier to pinpoint the cause of slow application response.

VSTS Integration

You can install Application Insights widgets on your VSTS dashboards to surface Application Insights data in VSTS, link to marketplace.

PowerBI Integration

There is a content pack to create PowerBI dashboards with telemetry from Application Insights, link here. The content pack comes with a set of built-in dashboards. If you need a custom dashboard you can export Analytics Queries to PowerBI as explained here.

Custom Telemetry

Contextual logging and telemetry that makes sense in a business context, for instance orders completed per second or abandoned shopping carts, can be a very powerful tool to get useful information out of logs and to track problems related to a specific customer interaction. To achieve this you can add your own telemetry to Application Insights by writing custom events and logging exceptions, link here. You can also have your favourite logging library write to Application Insights, examples here.

Francois Delport

VSTS Sql Server Database Deployment Task

In this post I’ll be having a quick look at the VSTS Sql Server Database Deployment Task. You can find it in the marketplace as part of the IIS Web App Deployment Using WinRM package. It is a bit confusing that you have to look for it under the IIS section, not the SQL section, in the marketplace.

Take note this task uses WinRM; the instructions to enable it are on the marketplace page and include a link to a PowerShell script you can run to configure WinRM.
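The linked script does the full setup; purely as an illustration, the most basic WinRM configuration on a test machine is a single command (production setups typically need more, such as an HTTPS listener):

winrm quickconfig -q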

Let’s take it for a spin. Follow the installation process from the marketplace to install the extension on your account, or download it for on-prem TFS servers. You will see the new WinRM – SQL Server DB Deployment tasks in the Deploy tasks menu.

Add the task to your deployment and set a few properties; this was a quick test I did. You will need a task before this one to copy the dacpac file to a location where the agent can reach it.

[Screenshot: SampleConfig]

I didn’t have ports open on my firewall for the VSTS hosted agent to reach my machine, so I installed the VSTS agent locally and set the task to use localhost. You can specify multiple machines or even use Machine Groups, which you will find in the Test hub in VSTS.

The output from the task doesn’t show much about the actual database changes taking place but you will at least know if the deployment failed.

[Screenshot: dacpacoutput]

Side Note: Don’t put plain text passwords in your build tasks, use variables and mark them as secret by clicking the padlock icon next to the variable value.

[Screenshot: Secret]

You can find some more information in the readme file for the task; a link to it is included in the error messages when a step fails, which is really helpful.

If you are looking for more information around continuous deployment of databases, have a look at these posts in my continuous deployment of databases series.

Part 1 covers the logic behind the deployment options.
Part 2 covers SqlPackage.exe.
Part 3 covers ReadyRoll.
Part 4 covers FlyWay.

Francois Delport

Continuous Deployment Of Databases : Part 4 – Flyway

This is part 4 in my series on continuous deployment of databases. In this post I’ll be having a quick look at FlyWay.

Part 1 covers the logic behind the deployment options.
Part 2 covers SqlPackage.exe.
Part 3 covers ReadyRoll.
VSTS Sql Server Database Deployment Task can be found here.

In my two previous posts I briefly looked at two tools for SQL Server continuous deployment, SqlPackage.exe and ReadyRoll. I wanted to see what is out there for Oracle, MySQL and other databases, especially if you have a situation where your application supports multiple databases and you want to do continuous deployment against all of them. There are plenty of commercial applications for SQL Server and Oracle that integrate directly with build systems and IDEs, but not that much for other databases, based on my quick Google search.

Flyway
I’m going to take a quick look at Flyway, a migration-based tool, bearing in mind that I work in .NET and some of the features are Java-centric. It is an open source project and, looking at the commit history on GitHub, it has been around for a long time and is supported by an active development community. At the time of writing it supported 18 databases.

You create migrations using SQL script files or Java code-based migrations, which come in handy for very complex data manipulation and handling blobs. It is based on convention over configuration, so naming folders and files correctly is very important since the version number, which controls the deployment sequence, is part of the name. There are various ways to execute the migrations, for example using the command line tool or calling the Java API directly; this opens up scenarios like including Flyway as part of your application to keep your database and application in sync. There are also plugins for Maven, Gradle and Ant but not much in terms of .NET.

I’ll quickly touch on the main operations used by the application:

  • Migrate: The meat of the application, it migrates a schema to the required version by determining the migrations that should be run.
  • Clean: Wipes the destination database clean, used during development and testing.
  • Info: Shows the migrations applied, with execution time and state, and which ones are pending.
  • Validate: Verifies the applied migrations match the ones you have locally.
  • Baseline: Integrates Flyway with an existing database so only newer migrations will be applied.
  • Repair: Flyway uses a metadata table to keep track of applied migrations; this operation repairs the metadata table.

Tooling
It doesn’t have direct integration with Visual Studio or VSTS, but it doesn’t look too difficult to roll your own automated deployments with some scripting to call the command line tool. Similar to SqlPackage.exe, you can call the Flyway command line tool from your release management tool to update the target database directly, or to generate a report of the changes that will be applied in case a DBA has to scrutinise the changes first.
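As a rough illustration, a migrate call from a release step could look like the line below (the connection details are placeholders and the exact parameters should be verified against the Flyway documentation):

flyway -url="jdbc:sqlserver://localhost;databaseName=MyDatabase" -user=deploy -password=*** -locations=filesystem:sql migrate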

Oracle Side Note
If you want to use Flyway against Oracle, or just want to use Visual Studio in general for Oracle database development, have a look at Oracle Developer Tools for Visual Studio. It is very similar to SQL Server Data Tools and makes it easy to add your Oracle database to source control. You can also use it to compare Oracle schemas and generate change scripts, which can be the base for your Flyway migration scripts.

Francois Delport

Continuous Deployment Of Databases : Part 3 – ReadyRoll

This is part 3 in my series on continuous deployment of databases. In this post I’ll be having a quick look at ReadyRoll and using it in continuous database deployment.

Part 1 covers the logic behind the deployment options.
Part 2 covers SqlPackage.exe.
Part 4 covers FlyWay.
VSTS Sql Server Database Deployment Task can be found here.

ReadyRoll is a tool from Redgate that is migration based, although it also allows creating individual migration scripts based on state comparison. It is pretty easy to use and I was able to get my demo working end to end, running a release from VSTS, without ceremony.

Quick Intro To Creating Migration Scripts
Just like SQL Server Data Tools, ReadyRoll is integrated into Visual Studio. You can create the initial project by importing an existing database and adding it to source control. To create your own migration scripts you can add them manually to the Migrations folder in your project, or you can make changes to a database using the Visual Studio Server Explorer and then click the Update button followed by the Generate Script button to generate the migration script based on the changes made, without updating the actual database.

You can also use the state method by making changes to the database directly, using Visual Studio Server Explorer or SQL Server Management Studio, and then importing the changes back into your project. Use the ReadyRoll DB Sync option from the ReadyRoll menu and the Import and Generate Script button to generate a migration script that will be added to your Migrations folder. Obviously if the changes to the database involved data manipulation, that part will not show in the migration. The changes you made can also be misinterpreted, for example renaming a column can show up as adding a new column and dropping the old one via a table recreate.

You can edit the migration scripts and add any steps needed to manipulate data or database objects, which makes it better suited for changes that require data manipulation.

Database Baseline
ReadyRoll uses a table called __MigrationLog to keep track of the migrations applied to a database; when you import an existing database this table is created and the current state is used as the baseline for that database. Before you can deploy to a database you have to baseline it first, for example when you import your Dev database into ReadyRoll but will be deploying to QA and Production for the first time as part of your release. Read the documentation here for instructions before you try to deploy, or the deployment will fail.

Continuous Deployment In Visual Studio Team Services
To use ReadyRoll in VSTS you have to install the extension from the marketplace; the official getting started guide for ReadyRoll in VSTS is over here. I’m just going to highlight a few things, this will not be a deep dive into getting it working. Continuous deployment with ReadyRoll is a two step process: first building and then releasing. The build step will build your ReadyRoll project while comparing it to the target database specified in your build step, to generate a deployment package, a PowerShell file that runs the migrations required on the target database. This PowerShell script will be part of the artifacts of the build. You can view the SQL script and a database diff report on your build result page in VSTS.

[Screenshot: ReadyRollNewTabs]

The next step is to release it: in your release definition add the Deploy ReadyRoll Database Package task and point it to the script created in your build and the target database.

[Screenshot: AddReadyRollStep]

Take note, in the Package To Deploy setting of the task, point it to the {YourProjectName}_DeployPackage.ps1 file from your artifacts, not the {YourProjectName}_Package.sql file. I hit a snag at this point since the DeployPackage.ps1 file did not exist for me; to fix it I had to go to the properties of the project in Visual Studio and select SQLCMD package file as the output type.

[Screenshot: EnablePackage]

When To Use State Or Migration
I talked a bit about the differences in my first post, but after all the research and demos I created I started to see how migrations seem better suited to a smaller number of complex changes that require data manipulation, whereas state based changes work great during development and for a large number of changes, especially to the database structure rather than the data itself. Obviously this depends on your application and the churn in your database and data structure.

Francois Delport

Continuous Deployment Of Databases : Part 2 – SqlPackage.exe

This is part 2 in my series on continuous deployment of databases. In this post I’m going to have a quick look at SqlPackage.exe and using it in continuous database deployment.

Part 1 covers the logic behind the deployment options.
Part 3 covers ReadyRoll.
Part 4 covers FlyWay.

Update: The VSTS Sql Server Database Deployment Task using SqlPackage.exe has since been released, you can read more about it in this post. If you are using SqlPackage.exe outside VSTS the information in this post will still be relevant.

SqlPackage.exe is a free command line tool from Microsoft that you can use against SQL Server and Azure SQL Database to compare, update, import and export databases, to name but a few scenarios. This post only scratches the surface; the official documentation is here, please read it to get a better understanding of the tool.

SqlPackage.exe is a state based database tool, if you are not familiar with the term please read part 1 of the series here for an explanation. For continuous deployments I will be using a dacpac file as the source state for my database, read more on dacpac files here. The easiest way to generate dacpac files is using Sql Server Data Tools (SSDT). SSDT is integrated with Visual Studio; when you build an SSDT project the output is a dacpac file. This also fits in nicely with adding your database to source control and building the proper version of your database along with your other application components. You can read more about SSDT here.

Making Backups
As everyone knows, you always make a backup of your database before deploying changes. You can instruct SqlPackage.exe to backup your database by passing the /p:BackupDatabaseBeforeChanges parameter. I assume it will create a backup using the default settings and location of your server, but I prefer to rather do it myself. Here is a sample PowerShell script that will backup a database using the SQLPS module and SQL Server Management Objects (SMO). Side note: keep your PowerShell scripts in source control instead of lumping large blocks of script into text boxes in your build or deployment tool.
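The linked script isn’t reproduced here, but a minimal sketch along the same lines, using the SQLPS module (the instance, database and backup path below are placeholders), could look like this:

Import-Module SQLPS -DisableNameChecking

$serverInstance = "localhost"      # placeholder SQL Server instance
$database       = "MyDatabase"     # placeholder database name
$backupFile     = "C:\Backups\MyDatabase_{0:yyyyMMddHHmmss}.bak" -f (Get-Date)

# Backup-SqlDatabase ships with the SQLPS module and uses SMO under the covers
Backup-SqlDatabase -ServerInstance $serverInstance -Database $database -BackupFile $backupFile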

Now for the fun part. I’m going to explore three scenarios that I have come across so far and that I suspect most other developers will as well.

Scenario 1: Automatically update your target database
If you are deploying to a development database or have an extremely informal (but not recommended) production environment, you can use SqlPackage.exe to update your target database by comparing it to your dacpac file and applying the changes on the fly. The syntax for this is:

/Action:Publish /SourceFile:$dacpac_path /TargetConnectionString:$constr

Scenario 2: Generate a script with the changes that will be applied
The more likely choice for production environments would be generating a script that your DBA can scrutinise before applying it to the production database. The syntax for this would be:

/Action:Script /SourceFile:$dacpac_path /TargetConnectionString:$constr /OutputPath:$path_to_scriptfile

For the actual deployment you have a few options: you can pull in the generated script as an artifact of the build, email it to the DBA or drop it in a file location, and they can apply it manually using their preferred procedure. Alternatively you can keep the process semi-automated and inside your release management tool by adding a deployment step that requires approval from the DBA; this step uses the generated script artifact to deploy the changes to the database.

Scenario 3: Generate a difference report between databases
This is usually a post deployment task that will form part of your post deployment testing or sanity check to confirm the database is in the correct state. The SqlPackage.exe tool should always do its job properly, but databases are precious pets and everyone wants to make 100% sure everything is correct. The syntax is:

/Action:DeployReport /SourceFile:$dacpac_path /TargetConnectionString:$constr /OutputPath:$reportoutputfilepath

I combined all the samples into one PowerShell script here.
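That script isn’t shown here, but a rough equivalent covering the three scenarios (the SqlPackage.exe path, dacpac path and connection string are placeholders) would be:

$sqlPackage = "C:\Program Files (x86)\Microsoft SQL Server\130\DAC\bin\SqlPackage.exe"   # placeholder install path
$dacpac     = "C:\Build\MyDatabase.dacpac"
$constr     = "Server=localhost;Database=MyDatabase;Integrated Security=True"

# Scenario 1: publish the dacpac straight to the target database
& $sqlPackage /Action:Publish "/SourceFile:$dacpac" "/TargetConnectionString:$constr"

# Scenario 2: generate a change script for the DBA to review
& $sqlPackage /Action:Script "/SourceFile:$dacpac" "/TargetConnectionString:$constr" "/OutputPath:C:\Build\changes.sql"

# Scenario 3: generate a report of the differences between the dacpac and the database
& $sqlPackage /Action:DeployReport "/SourceFile:$dacpac" "/TargetConnectionString:$constr" "/OutputPath:C:\Build\report.xml"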

Populating lookup tables
In your SSDT project you can add pre and post deployment scripts. I use the post deployment scripts with the SQL MERGE statement to populate lookup tables, full example from MSDN here. There is a way to add the lookup table(s) as dependent objects, but last I read the tables couldn’t contain foreign key relations to tables outside the dependent dacpac.

VSTS SQL Deployment Task
At the time of writing this blog post the official SQL Deployment task for VSTS was in preview and should be released very soon, but in the meantime I used these example scripts in VSTS and they worked, so if you are in a hurry they will do the job.

Francois Delport

Continuous Deployment Of Databases : Part 1

This is part 1 in my series on continuous deployment of databases. In this post I’ll be covering the logic behind the different deployment options.

Part 2 covers SqlPackage.exe.
Part 3 covers ReadyRoll.
Part 4 covers FlyWay.
VSTS Sql Server Database Deployment Task can be found here.

Continuous deployment of databases is a lot more complicated than it is for applications, so before we can look at continuous deployment of databases we have to look at managing database changes first. I’m going to give a high level overview of the problem space and the methods to deal with database changes before looking at practical examples of continuous deployment of databases in the next post.

The Problem Space
A few of the challenges we face with database changes are:

  • Rolling back: Rolling back files is easy, just restore the previous version from source control. With databases you have to explicitly back them up beforehand and restore them if needed, which can take a while for large databases, and no one can work while you are restoring.
  • Down Time: It is easy to swap servers in and out of your load balancer to update them, but it is not so simple with a database, even in a cluster. It is not impossible, just more work; there are solutions like sharding, for instance, but that is a post on its own.
  • State: Your application files do not contain state and you can recreate them easily, while your database contains precious business data. You have to make changes to the structure and existing data without losing anything. This leads to some teams hand-crafting the change scripts for each release, which is error prone.
  • Control: In large organisations DBAs are often the only people allowed to make changes to a database and they scrutinise the change scripts provided by developers before implementing them manually. You can’t have automatic migrations or automatically generated scripts executing against the database.
  • Drift: It is not unheard of that databases in production do not match the version developers are using to create the change scripts, especially with applications that are installed at the client premises.

Keeping Track Of Database Changes
The most important step to keep track of your database is adding it to source control, whether you are using specialised database tools or not. This will enable tracking the changes made to your database at specific points in time. Even just having the database create scripts committed each time a change is made will be better than nothing. If you are using the state/declarative method it will also help to solve conflicting changes, for instance two developers renaming the same column in a table at the same time will lead to a merge conflict in your source control system. You can also branch the database just like you do with application code to fix a serious production bug and then later merge the changes back into trunk.

Approaches To Implement Database Changes
Currently there are two ways to handle database changes, state/declarative and migrations, each one with its own strengths and weaknesses:

State/Declarative Method:
The state method works by comparing the target database to a source database and then generating a script to bring the target database in sync with the source database. This is best accomplished using a database schema comparison tool.
Pros: Any number of changes made in any sequence by team members will be included, since the current state of the source database in source control is the source of truth, not the steps taken to reach that state. It works great during the development phase, when you can tolerate dropping and recreating tables or columns and large teams making multiple changes at the same time. It is also less error prone, since the scripts are generated by a tool instead of hand-crafted.
Cons: It cannot handle data migrations since the tool works from the current state; it has no way of knowing how that state was achieved, for example if a new column was populated by manipulating data from two existing columns.

Migrations Method:
This method applies a sequence of changes to a database to migrate it to the new state. With this method the migration scripts are the source of truth; the scripts are created by hand (there are exceptions, see the comparison of tools later in the post) and keeping the correct sequence is very important.
Pros: Handles data migrations very well since developers can include any data manipulation required in the scripts.
Cons: Most of the time the scripts will be created by hand, which is cumbersome and error prone. You have to take all the change scripts created during development and scrutinise them to weed out conflicting changes, and you have to keep the correct sequence. For example, developer A renames a column in table X and checks in the alter script; later developer B also renames the same column to something different, without getting the changes from developer A, and checks in his alter script. When you execute all the migrations the change from developer B will fail since the column now exists under a new name. It is also cumbersome for all developers to keep their development databases in sync with the changes.

As you can see there is no clear-cut solution to the problem and most tools will fix some of the problems but not all; for example, Sql Server Data Tools handles the state based method very well and Entity Framework Code First handles migrations very well. Some tools like ReadyRoll try to combine both by generating a sequentially numbered migration script based on the current state of your database when you check in a change.

Francois Delport

Octopus Tentacle Automated Deployment And Registration

In this post I’m going to cover Octopus Tentacle automated deployment and registration.

Recently I had a situation where I had to install and then register Octopus Tentacles in Azure using ARM templates. The Octopus server was only reachable over the public internet using its DNS name; by default Octopus will register using the local hostname, which wouldn’t work in this case. I couldn’t find a complete example that did exactly what I needed, so I’m posting the solution I came up with in case it is needed again.

There is some guidance here around ARM but it doesn’t cover ARM templates, only PowerShell scripts. I took this example anyway and modified it to be a DSC extension in my ARM template and it kind of worked. You can only use one instance of the DSC extension in a template, since ARM will try to install it multiple times if you have multiple instances; I needed multiple instances and it failed. I uploaded the ARM example to GitHub anyway if someone needs it for Octopus servers installed in Azure.

In the end I used the Azure Custom Script extension to execute a PowerShell script for the installation. To create the PowerShell script I tried to follow the examples here using Tentacle.exe, but still had the problem with the wrong tentacle URL. Using the Octopus.Client dll, which was also part of the example, didn’t work either: it didn’t configure the Windows service and it assumed you knew the tentacle client thumbprint, which you don’t since this is a new installation and the tentacle will generate a new thumbprint. Eventually I got it working using a combination of the two and some extra code to retrieve the tentacle client thumbprint. The full sample is here but I will highlight the important parts.

To retrieve the tentacle client thumbprint run:

$raw = .\Tentacle.exe show-thumbprint --nologo --console
$client_thumbprint = $raw -Replace 'The thumbprint of this Tentacle is: ', ''

It will print the thumbprint to screen with some extra text you don’t need so I remove the first part.

Secondly, don’t call the register-with command shown in the Tentacle.exe example, since the registration will be done using the Octopus.Client dll.

To add multiple roles you have to add them one at a time; to achieve this I pass the roles string to PowerShell as a comma-separated value.

"Web Server, Database Server"

And then I split it into an array and loop through it:

# Split the comma-separated roles string and add each role individually,
# trimming spaces so "Web Server, Database Server" doesn't produce " Database Server"
foreach ($role in $roles.Split(','))
{
    $tentacle.Roles.Add($role.Trim())
}
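The full sample has the exact code; roughly, the Octopus.Client part fits together like this (the dll path, server URL, API key, DNS name and environment name below are placeholders):

# Rough sketch only - see the full sample for the working script
Add-Type -Path "C:\Octopus\Octopus.Client.dll"                       # placeholder path to the dll

$apiKey        = "API-XXXXXXXXXXXXXXXX"                              # placeholder API key
$publicDnsName = "myoctopusvm.westeurope.cloudapp.azure.com"         # placeholder public DNS name

$endpoint   = New-Object Octopus.Client.OctopusServerEndpoint "https://octopus.example.com", $apiKey
$repository = New-Object Octopus.Client.OctopusRepository $endpoint

$tentacle = New-Object Octopus.Client.Model.MachineResource
$tentacle.Name = $env:COMPUTERNAME
$tentacle.Endpoint = New-Object Octopus.Client.Model.Endpoints.ListeningTentacleEndpointResource
$tentacle.Endpoint.Uri = "https://${publicDnsName}:10933"            # public DNS name instead of the local hostname
$tentacle.Endpoint.Thumbprint = $client_thumbprint                   # retrieved with show-thumbprint earlier

$environment = $repository.Environments.FindByName("Dev")            # placeholder environment name
$tentacle.EnvironmentIds.Add($environment.Id)

# add the roles using the loop shown above, then register the machine
$repository.Machines.Create($tentacle)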

After it is all done you will end up with a new machine in the desired environment with the correct URL.

[Screenshot]

Francois Delport

Adding Performance Counters To Your Automated Tests

In this post I’m going to cover adding performance counters to your automated tests.

Performance counters can be invaluable in analysing your application’s behaviour while running tests. If you are lucky enough to use Visual Studio Enterprise you can add counters to your load test with just a few clicks.

[Screenshot]

Since I’m using a different test runner I have to add them manually as part of my test run; I’m going to use PowerShell and LogMan to do this. You can create the data collectors entirely in PowerShell, but there are loads of properties you can set on a data collector and it quickly becomes unmanageable. Instead I’m going to create the data collectors manually, use LogMan to export them and then import them on the test machine as part of the test sequence.

Step 1
Create your data collectors using Perfmon just like you normally do, take note of the names and optionally specify the path to store the counter file.

[Screenshot]

When you are done, right click on each data collector, select Export Template and save it to a file. Keep the convention of naming the file CounterName.xml; you will see later how this makes the script simpler.

Step 2
In my environment the VMs are reset every day, so I scripted the import of the data collector templates as part of my VM rebuild. I gave each exported XML file the same name as its data collector; if you follow this convention it is very easy to add new ones by adding them to the $counters array.

$counters = @("CPU","ASP.NET","Disk","Memory","SQL")

foreach ($counter in $counters)
{
    Write-Host "Creating Counter $counter"
    $create = "logman.exe import -name $counter -xml $counter.xml"
    Invoke-Expression $create
}

Step 3
After the previous step the data collectors are created but not started yet. I want to start, stop and retrieve the data collectors for each set of tests I run. At the start of a test run I clear out the data collectors folder and start the data collectors.

Remove-Item c:\logs\perfmon\* -recurse -force

$counters = @("CPU","ASP.NET","Disk","Memory", "SQL")

foreach ($counter in $counters)
{
    Write-Host "Starting Counter $counter"
    $start = "logman.exe start $counter"
    Invoke-Expression $start
}

Step 4
At the end of a test run I stop the data collectors and collect the files to include them as artefacts in my build system. You have to stop the data collectors before you attempt to copy the files or even view the reports of a data collector.

$counters = @("CPU","ASP.NET","Disk","Memory", "SQL")

foreach ($counter in $counters)
{
    Write-Host "Stopping Counter $counter"
    $stop = "logman.exe stop $counter"
    Invoke-Expression $stop
}

I had a problem copying the files due to the permissions Windows assigned to the data collector folders when it created them; I used the following script to assign permissions to the appropriate user group.

$Path = "C:\Logs\Perfmon\"
$ACL  = (Get-Item $Path).GetAccessControl("Access")
$ACE  = New-Object System.Security.AccessControl.FileSystemAccessRule("YourUserGroup", "FullControl", "ContainerInherit,ObjectInherit", "None", "Allow")
$ACL.AddAccessRule($ACE)

# Apply the updated ACL to every file and folder under the path
foreach ($item in Get-ChildItem $Path -Recurse)
{
    Set-Acl -ACLObject $ACL $item.FullName
}

Now you can view the counter files using PerfMon. In my case I added them to the artefacts of my build step in TeamCity.

Tip: There is actually a lot more functionality to data collectors than most people know; you can for instance start and stop them using various triggers, including a schedule. You can manage the growth and archiving of the data collector files and even create HTML reports, for instance resetting the counters daily, archiving the log files, generating a report and emailing it. These settings are controlled from the Data Manager.

[Screenshot]

Analysing Counters
There are loads of counters available and it is sometimes difficult to determine if the values are indicative of a problem. Luckily there are tools to help you; I use the PAL tool. It is fully scriptable and generates HTML reports, so you can include the analysis of the counters in your build step.

Francois Delport

Windows Container Services Preview

In this post I’m going to take a quick look at Windows Container Services Preview.

Containers have been all the rage in the Linux world for the last few years and now they are finally coming to Windows with Server 2016, which is currently in preview. There is quite a lot to grasp when it comes to containers, especially if it is your first time using them, so in this post I’m going to bring together the bits and pieces as far as I understand them. Since Windows containers are in preview and still evolving, the information in this post will probably be out of date the moment I finish writing it 🙂

What is a container?
Containers are a virtualisation technique where you have multiple user-mode instances running on top of one kernel. Basically you are virtualising the OS instead of a machine like VMs do. Each container has its own file system, registry and network settings that are isolated from the host OS, but it still uses the kernel of that OS. Since you have this dependency on the host OS you can’t run Linux containers directly on a Windows machine; Linux containers need a Linux kernel and vice versa. You can however have a Linux VM running on a Windows host and that Linux VM can host Linux containers, which is what Docker Machine does. Basically containers are used to host applications and services.

When you create a container image it starts from a base image; in Windows your choices are Windows Server Core and Windows Nano Server. These are your only choices, all Windows containers have to use one of these as the base and you can’t create your own base images. You can chain containers together: for instance you can create a container and install IIS on it, this will be the base container for your web applications, and from this base container you can create three separate containers for MVC 3.0, MVC 4.0 and MVC 5.0 applications respectively. Any changes you make to the base IIS container image, like enabling Windows Authentication in IIS, will also show in the child images when you create them. Containers do not have a GUI; you can connect to the container instance using the command line or PowerShell.

Host Resources
In the initial version of Windows Container Services (WCS) you won’t be able to access host devices directly. You get access to the host’s network via a virtual NIC that is connected to a virtual switch which supports NAT or transparent (bridge) mode. You can configure shared folders between the host and the container if you need access to files on the host. The host CPU, RAM, disk and network I/O allocated to a container can also be constrained.

Container Management
There are a few options you can use to manage your containers:

  • PowerShell
    The easiest way to explain it is to look at the quick start over here and the cmdlets reference. There are commands to create containers, start them, set network options and so on (see the sketch after this list).
  • Visual Studio
    The Visual Studio Tools for Docker is currently in preview and it enables application deployment directly to Linux containers and will eventually support Windows containers.
  • Docker
    Firstly, Docker is a management tool for containers; Docker itself is not the container implementation, the container functionality comes from the underlying OS. Docker, with help from Microsoft, is busy working on a Docker Engine for Windows that will allow you to use the same commands to manage containers independent of the host OS. For instance you will use the same syntax to create a container or set the network configuration for Linux and Windows containers.
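For a rough idea of the PowerShell experience, the preview quick start uses cmdlets along these lines (the cmdlet, image and switch names are as I recall them from the Technical Preview and may well change):

# Preview-era Containers module - names may change between Technical Previews
Get-ContainerImage                                              # list the available base images
New-Container -Name "DemoContainer" -ContainerImageName "WindowsServerCore" -SwitchName "Virtual Switch"
Start-Container -Name "DemoContainer"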

Windows Hyper-V Containers
To add to the confusion Windows will also feature Hyper-V Containers, which will run each container in its own VM to add further isolation. This VM will use Nano Server as the VM OS, with just enough functionality to run a container.

Switching to containers
Containers are not a replacement for VMs, they provide slightly different functionality, and it is likely that your container hosts will be VMs. It is also likely you’ll have to change your applications and the way you develop them to truly take advantage of containers. Containers are quicker to create and start than VMs, which makes them ideal for microservices and scale-out applications. Designing your applications to take advantage of this architecture is beyond this post but you can look here for some guidelines. Containers are also useful for development scenarios: you can test your application in a standardised environment in just a few minutes, as part of your build process or on the developer’s local machine, since they are so lightweight.

Francois Delport

Release From Visual Studio Team Services To On-Premise Servers Using Web Deploy

In my previous post I showed how to host a private agent to build your solutions on-premise. Today I’m going to show how to release from Visual Studio Team Services to on-premise servers using Web Deploy and a private agent.

Just like the build experience, the release management features got an overhaul in VSTS; the previous Release Management product is now rolled into VSTS and TFS Update 2. Most of the current functionality is focused on releasing to Azure, but you can use PowerShell or the command line to execute your own install scripts.

For this demo I’m going to release a simple web application using a web deploy package. You could deploy to IIS using remote agent deployment straight from the VSTS servers to on-premise, but that is not always possible or allowed in secure environments. When you build a web deploy package the build process will retrieve some settings from your local IIS instance to set some defaults for the package. This won’t work if you use a hosted agent, so I’ll be using my private agent for building as well as releasing the web deploy package. In a future post I’ll dive into web deploy in more detail, but for now I’ll be using the defaults for deployment since this post is about VSTS release management, not web deploy.

Firstly you have to modify your build and pass in the arguments to instruct MSBuild to create the deployment package, as shown here:
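The MSBuild arguments to create a Web Deploy package typically look something like the line below (the values, especially the package location, are illustrative and must line up with your publish settings and the artifact folder used in the next step):

/p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true /p:PackageLocation="WebDeployPackage"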

[Screenshot]

I’m only interested in the deployment package output from the build process so I’m going to publish the deployment package folder to my artifact output.

[Screenshot]

In Visual Studio set your publish settings for the web deployment package. Take note of the package location: it is the folder you specified in the artifact publish step, and I’m creating it in the root of the solution folder to keep things easy. Check the change in and kick off a build.

[Screenshot]

Back in VSTS open the Release menu. Click on the + sign to create a new release definition and start with a blank definition.

[Screenshot]

This demo deployment will have only one step, which executes the web deploy package with its default parameters. Click on Add tasks and choose Command Line from the Utility subsection. To get the correct path to the web deploy .cmd file you can browse the artifacts published in the build step. Remember to save the new release definition you created.
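For reference, the command the task ends up executing is the generated .cmd file from the package folder; assuming a project named MyWebApp (a placeholder name), the call is along the lines of the following, where /Y performs the deployment and /T does a trial run:

MyWebApp.deploy.cmd /Y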

[Screenshot]

I’ll cover the details of environments and different release types in a future post but for now this will be a manual release to the default environment. Access the context menu for the definition you created and select Release.

[Screenshot]

On the next screen accept all the defaults, except under artifacts select the latest build of your code, and click Create. This will not launch the deployment yet since we are doing it manually. If you look at your list of releases you will see a new release. Double-click this release, click on the Deploy menu item and deploy to the default environment.

[Screenshot]

Once the deployment is completed, you will see the new web application in IIS using defaults picked up from your solution.

This was just one of many ways to accomplish on-premise deployments; if you look at the example from MSDN, they use WinRM and DSC to install IIS and Web Deploy and to deploy the web app.

Note: On the destination IIS server you also have to take some steps to set up IIS for deployment. You have to install Web Deploy and the easiest way seems to be Web Platform Installer. Also make sure the IIS Management Service feature is installed.

[Screenshot]

Francois Delport