Building An MVC3 Project In Visual Studio Team Services/TFS 2015

In this post I’m going to cover building an MVC3 project in Visual Studio Team Services/TFS 2015.

With the release of Team Foundation Server 2015, Microsoft overhauled the build experience with a completely new build server. The new build experience is way better than the old XAML builds, so I decided to take it for a spin and share some of the problems and gotchas I came across.

There are tons of demos on building MVC projects, most of which start with a new MVC project, but I wanted to try it out with something more realistic and decided to use a very old MVC3 project I have that had never been successfully built on a build server before. This meant quite a few errors along the way that I had to fix first, but it didn’t take very long and everything worked fine in the end.

If you want a step-by-step introduction, read this; I will mostly be focusing on the deviations from the happy path in this post and documenting the steps to fix them in case I need them again one day.

First Lesson
The retention policy for your builds is 10 days by default, so I lost all the build logs from my old builds containing the error messages before I got around to writing this post. Most of what I’ll be writing is from memory, without screenshots.

Second Lesson
I got this error while restoring packages during the build:

The process cannot access the file NuGet.exe because it is being used by another process.

The fix was very simple: just update the NuGet.exe file in your solution to the newest version.

Third Lesson
I’m using a NuGet repository that requires authentication, and if you try to run a build you will get an error message saying it can’t find your packages. To resolve this issue you have to store the credentials in nuget.config for the build server to be able to authenticate against your NuGet repository. To do that use the following command:

nuget sources add -Name "YourFeed" -Source "https://urltonuget/nuget" -UserName username -Password password -ConfigFile "Path\To\nuget.config" -StorePasswordInClearText

Sadly this stores the password in clear text, but encrypted credentials only work on the same machine where they were encrypted, so you won’t be able to use them on the build server.
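After running the command, nuget.config should end up with a credentials section roughly like the sketch below (the feed name matches the -Name value; the actual values are placeholders):

```xml
<packageSourceCredentials>
  <YourFeed>
    <add key="Username" value="username" />
    <add key="ClearTextPassword" value="password" />
  </YourFeed>
</packageSourceCredentials>
```

Without -StorePasswordInClearText the key would be Password and the value would be the encrypted form, which is what breaks on the build server.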

Fourth Lesson
In this project all the packages were checked into source control. This is not good practice: it takes up a lot of space, pollutes your repository, and makes downloading the project from VSO take a lot longer. To prevent this from happening, use .tfignore files. In this case I placed the file in the root of my project.


The contents of the file are very simple.

\packages
!\packages\repositories.config

This will ignore the contents of the packages folder but not repositories.config, since we still need it.

Fifth Lesson
It is also not good practice to include your bin folder in source control; rather, use NuGet for all your dependencies. This project had always been built locally and all the developers had MVC3 installed on their machines, so it failed to build on the build server, which didn’t (and shouldn’t) have MVC3 installed. The fix is very simple: install MVC3 as a NuGet package in your web project.

Install-Package Microsoft.AspNet.Mvc -Version 3.0.50813.1

Sixth Lesson
Build definitions can become very complex and it is easy to mess them up. Luckily, VSO keeps track of the changes made to your build definitions, and you can even do a diff to see the exact changes. It is like a mini source control system for your build definitions.


Francois Delport

Setup SSL In JIRA With An Existing SSL Certificate

In this post I’m going to show you how to set up SSL in JIRA with an existing SSL certificate.

If you set up SSL in JIRA from scratch by requesting a new certificate, the official instructions work well, but when you have an existing certificate the instructions are not very clear, especially to someone who is not familiar with Java and Tomcat. If you read further down in the comments and google a bit you can piece it together, but I want to bring it all together in a single post to make it easier the next time I have to do it. These instructions are for Windows but should work for any OS, since the tools used are ports from Linux anyway.

Tools

Before we start you’ll need a few things:

  1. If you want to know what .pem, pkcs12 and .key files are please read this first.
  2. Your SSL certificate, private key pair and the password that was used to create the private key. If you received them as text in an email instead of file attachments you can copy and paste them into separate files, but remember to include the -----BEGIN ***----- and -----END ***----- lines for both the certificate and the private key. The extensions do not really matter when you run the tools, but I named mine .key and .pem to make it easier.
  3. OpenSSL: You can download it from sourceforge.
  4. Java JRE: You will have this one already since it is part of the JIRA installation; in my case it was in C:\Program Files\Atlassian\JIRA\jre\bin\ and the tool you need is keytool.exe.

Steps

Export your certificate to pkcs12, the format the Java keytool understands. You will find OpenSSL in C:\Program Files (x86)\GnuWin32\bin; run openssl.exe to get the OpenSSL command prompt, then run:

pkcs12 -export -in c:\cert\your_ssl.pem -inkey c:\cert\your_keyfile.key -out newfile.p12 -name alias

The alias is optional; if you don’t provide one the tool will assign a number as the alias, starting from 1. If you want to see the alias for existing files, have a look at the command line parameters for OpenSSL. You will be prompted for the password used to generate the private key pair. If successful, you will see newfile.p12 created in the output folder.
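For example, to inspect the exported file (including the alias, shown as the friendlyName) you can dump it with OpenSSL; -info prints the file details and -nokeys skips the private key output:

```
openssl pkcs12 -info -nokeys -in newfile.p12
```

You will be prompted for the import password before anything is displayed.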

The next step is to create the Java keystore; I called this one jira.jks.

"%java_home%\bin\keytool.exe" -importkeystore -srckeystore newfile.p12 -destkeystore jira.jks -srcstoretype pkcs12 -alias alias

You will be prompted to create a new password for this keystore, and then you will be prompted for the private key password used to create the exported certificate. It is important to use the private key password as the new password for this keystore, or else JIRA will complain.


Now you can configure JIRA to use this Java keystore for SSL by running config.bat, which is located in the bin folder of your JIRA installation.


If you want to have a look at what is inside existing Java keystore files you can use keytool.exe to view them, or you can use Portecle if you prefer a GUI.
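For example, to dump the entries in the keystore created earlier (keytool will prompt for the keystore password):

```
"%java_home%\bin\keytool.exe" -list -v -keystore jira.jks
```

The -v flag includes the certificate details for each entry, not just the aliases.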

Side Note: To manually configure a Tomcat connector for SSL, edit <tomcat_dir>/conf/server.xml and add the following:

<Connector port="8443" maxThreads="150" scheme="https" secure="true" SSLEnabled="true" keystoreFile="path/to/your/keystore" keystorePass="YourKeystorePassword" clientAuth="false" keyAlias="alias" sslProtocol="TLS"/>

Tip: I had endless trouble creating application links between JIRA and BitBucket with SSL enabled. BitBucket was able to use the JIRA user directory, but the application links were throwing certificate errors and HTTP 500 errors on the application links screen. In the end I had to change JIRA to use port 443 instead of 8443, and that solved the problem.
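If you hit the same issue and are configuring the connector by hand, the change is just the port attribute on the connector shown earlier:

```xml
<Connector port="443" maxThreads="150" scheme="https" secure="true" SSLEnabled="true" keystoreFile="path/to/your/keystore" keystorePass="YourKeystorePassword" clientAuth="false" keyAlias="alias" sslProtocol="TLS"/>
```

On Windows this just works; on Linux, binding to a port below 1024 needs elevated privileges, which is worth keeping in mind.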

Francois Delport

Automating Installers, .NET Config Files, Scheduled Tasks And Services

This post is a continuation of the brain dump I started here.

Adding new settings to a .NET config file
Previously I showed how to manipulate the existing settings in a .NET/XML configuration file, but adding new entries turned out to be a bit more typing since you have to create the new XML element(s) and append them to the document.

For example, to add the second add element (Setting1Key) to the existing file:

<Configuration>
  <CustomSettings>
    <add key="ExistingSetting1Key" value="ExistingSetting1Value" />
    <add key="Setting1Key" value="Setting1Value" />
  </CustomSettings>
</Configuration>

You use the code below:

$file = 'C:\App\App.exe.config'
$xml = [xml](Get-Content $file)
$CustomSettings = $xml.Configuration.CustomSettings
$Setting1 = $xml.CreateElement("add")
$Setting1.SetAttribute("key", "Setting1Key")
$Setting1.SetAttribute("value", "Setting1Value")
$CustomSettings.AppendChild($Setting1)
$xml.Save($file)

As you can see, nothing really new if you are used to manipulating XML documents, just more typing.

Depending on the complexity of the XML and how much must be changed, you could also take an existing element, make a copy of it, manipulate the copy and then append it to the document.
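A minimal sketch of that copy-and-modify approach, reusing the same file and element names as above (the Setting2 values are just examples):

```powershell
$file = 'C:\App\App.exe.config'
$xml = [xml](Get-Content $file)
$settings = $xml.Configuration.CustomSettings
$copy = $settings.FirstChild.CloneNode($true)   # deep copy of an existing add element
$copy.SetAttribute("key", "Setting2Key")
$copy.SetAttribute("value", "Setting2Value")
$settings.AppendChild($copy)
$xml.Save($file)
```

This saves you re-creating every attribute by hand when the elements are large.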

Creating a scheduled task using PowerShell
In case you did not know, Azure VMs recreate their NICs when you shut them down from the portal, and I had a software component that was linked to a specific NIC. After starting up again it would lose the NIC you selected earlier. So, as part of the automated deployment, every time the machine boots I have to update the configuration of my 3rd party component to select the correct NIC. One way to do this is to run a scheduled task at startup. Here is the PowerShell script to create a scheduled task; you need PowerShell 3.0 or higher and Windows 2012.

$action = New-ScheduledTaskAction -Execute 'Powershell.exe' -Argument '-NoProfile -WindowStyle Hidden -File C:\Tools\SelectNIC.ps1'


$trigger = New-ScheduledTaskTrigger -AtStartup

Register-ScheduledTask -Action $action -Trigger $trigger -TaskName "SelectNIC" -Description "Change the NIC used to match licence"

Sadly my server was Windows 2008, so here is the command to do it using schtasks.exe. Note that -File must come last in the PowerShell arguments, since everything after it is treated as the script path and its parameters.

schtasks /Create /TR "Powershell.exe -NoProfile -WindowStyle Hidden -File C:\Tools\SelectNIC.ps1" /SC ONSTART /RU account /RP password /TN NameForTheTask /RL HIGHEST

This specific task needed admin permissions, so I had to use the /RU and /RP parameters, and also /RL HIGHEST to run with the highest privilege. You can also use schtasks.exe to import scheduled tasks from XML files, which can be handy if the definition becomes too cumbersome to fit on the command line.
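Importing from XML looks like this; the file path is a hypothetical example, and the XML itself is easiest to get by exporting an existing task from Task Scheduler:

```
schtasks /Create /XML "C:\Tools\SelectNIC.xml" /TN NameForTheTask /RU account /RP password
```

The trigger, action and settings all live in the XML file, so only the task name and credentials stay on the command line.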

Changing the user for a service using PowerShell
Here is a simple PowerShell script to change the account used to run a service.

$filter = "Name='$ServiceName'"
$service = Get-WmiObject -ComputerName $CompName -Namespace "root\cimv2" -Class Win32_Service -Filter $filter
$service.Change($null, $null, $null, $null, $null, $null, $newAcct, $newPass, $null, $null, $null)

$service.StopService()
Start-Sleep -Seconds 10
$service.StartService()

This one seems pretty straightforward, except that the parameters of the Change method on the $service object differ depending on your version of Windows, and maybe the version of WMI as well. To see the correct parameters for your environment you can use the snippet below. It will display the methods on the $service object in a grid.

$service | Get-Member -MemberType Method | Out-GridView

One more thing to look out for is the format of the username: it must be in the domain\user format for domain accounts, or the .\username format for local accounts.

Changing the startup type of a service
Using PowerShell to change the startup type of a service is very easy:

Set-Service -Name YourServiceName -StartupType Manual

But PowerShell cannot change the startup type to Automatic (Delayed Start); you have to use the sc.exe command to do that.

sc.exe \\machinename config "ServiceName" start= delayed-auto

Note that you need the space between start= and delayed-auto, and the account running the service must have “log on as a service” rights.

Francois Delport

Automating Installers

I’m still on my quest to automate the installation of a pretty complex environment, and I’m writing down what I learnt in case I need it again one day, or even better, someone else finds it useful and it saves them some time.

Updating environment variables
There doesn’t seem to be a built-in way to set environment variables in PowerShell, but you can use the Environment class from .NET to retrieve the current value:

[Environment]::GetEnvironmentVariable("Path","Machine")

and to set the new value:

[Environment]::SetEnvironmentVariable("Path", "New Path Value.", "Machine")

I also saw another method that edits the registry key containing the Path variable, but the problem is it doesn’t update the variable for the current PowerShell session; you have to restart PowerShell first. From my testing, using the Environment class immediately updates it for the current session and broadcasts the change, so new sessions will pick it up without restarting.
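Combining the two calls, appending a folder to the machine-level Path looks like this (C:\Tools is just an example folder):

```powershell
# Read the current machine Path, append a folder, and write it back
$path = [Environment]::GetEnvironmentVariable("Path", "Machine")
[Environment]::SetEnvironmentVariable("Path", "$path;C:\Tools", "Machine")
```

Reading first and appending avoids the classic mistake of overwriting the whole Path with a single folder.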

The InstallShield interactive user problem
One of the 3rd party components I was installing via PowerShell worked perfectly well when the script was run from the command prompt, but failed with permission denied errors when running the script from Octopus Deploy or any other remote method. All the methods used the same account, and it had admin rights on the box. It took a while and some googling to finally get to the bottom of it. It turned out to be the InstallShield Script Engine (ISScript), in this environment specifically version 10.50. If you look at the DCOM configuration for that version you will see it is set to run as the interactive user, which won’t work when you are running it from a non-interactive session; it should actually be the launching user.


The other versions I encountered didn’t have this problem, they are all set to the launching user by default.

The installer needed this specific ISScript version; if it is not present, the Setup.exe will install it. To get past this problem you have to explicitly install ISScript instead of leaving it up to the Setup.exe, then change the DCOM config to use the launching user, and then install the software using the MSI, not the Setup.exe. If you use Setup.exe it will change your DCOM config back to the default before running the MSI installer. If you want to change the DCOM config from a script you can delete the RunAs registry key; make sure you have the correct AppID for the version of ISScript, in this case {223D861E-5C27-4344-A263-0C046C2BE233} for 10.50.
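A sketch of deleting that value from PowerShell. RunAs is a registry value on the AppID key, and the Registry:: prefix is needed because HKEY_CLASSES_ROOT is not a default PowerShell drive:

```powershell
$appId = '{223D861E-5C27-4344-A263-0C046C2BE233}'  # ISScript 10.50
Remove-ItemProperty -Path "Registry::HKEY_CLASSES_ROOT\AppID\$appId" -Name 'RunAs'
```

With the RunAs value gone, DCOM falls back to the launching user.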


Config file encoding
I had a component failing with invalid configuration error messages even though the file looked fine, and was even identical to the config file from a working system when doing a text compare. In retrospect I should have done a hex compare as well. It turns out the encoding of the file was UCS-2 LE BOM instead of UTF-8, and this happened because I was reading the file contents, making changes and then writing the file out from a collection of strings in PowerShell.


To fix it you have to specify the encoding when writing out the file in PowerShell.

Add-Content -Encoding UTF8 -Path C:\File.txt -Value ValuesToWrite

Capture PowerShell output to file
When you are creating automated deployments it is not uncommon for scripts to work when you run them manually but fail during the deployment from a build server or other deployment automation tool. If you want to debug what is happening you can add your own Write-Output statements all over to trace where you are, but this gets tedious very quickly. A better way, I find, is to use the PowerShell transcript cmdlets.

Start-Transcript -path YourLogFile

and

Stop-Transcript

The filename is optional, and if you omit it PowerShell has some smarts built in around naming and overwriting the file; you can read more here.

Francois Delport

Automating Installers And Editing Config Files With PowerShell

In this post I’m going to take a deep dive into automating installers and editing config files with PowerShell.

I spent the last few weeks automating the installation and configuration of various 3rd party software components. It involved a fair bit of trial and error to get all of it working. I decided to compile all the scripts into a blog post in case someone else, or even future me, needs a refresher or quick start.

Running MSI and Exe installers silently
MSI installers are easy to run silently, and you can prevent them from rebooting the machine after installation. I also specified verbose logging; it comes in handy if the installer fails for some reason. Note the backtick character (`) used for escaping the double quotes (") around the MSI file name. I enclosed the file name in double quotes in case it contained spaces. The second-to-last line prints the command line parameters that are about to run to the console; in my case I was using Octopus and it was handy for debugging, since the output is shown in the Octopus deployment log.

$msi = "/i `"AppToInstall.msi`""
$cmd = "MSIEXEC"
$q = "/qn"
$nrst = "/norestart"
$log = "/lv verbose.log"
$fullparam = $msi + " " + $q + " " + $log + " " + $nrst
$fullparam  # optional, output the parameters to the console for debugging
Start-Process -FilePath $cmd -ArgumentList $fullparam -PassThru | Wait-Process

You could write everything on a single line, but I split the command and parameters into different variables to make the script more readable and easier to change.

You also get Exe installers that wrap an MSI installer, and in this case you have to pass some parameters to the Exe to make it silent (/S), plus a second set of parameters that it will pass on to the MSI (/V), in this case to hide the installer GUI (/qn) and prevent rebooting. Note the single quotes (') on line two; this is another way to deal with double quotes in a string, since PowerShell treats single-quoted strings as literals.

$cmd = ".\Setup.exe"
$param = '/s /v"REBOOT=ReallySuppress /qn"'  # note single quotes
$cmd + " " + $param  # optional, output to console for debugging
Start-Process -FilePath $cmd -ArgumentList $param -PassThru | Wait-Process

Note that everything after /V is passed as a single parameter to the MSI and should be enclosed in quotes.

Editing config files using PowerShell
The easiest ones are XML files, you find the node to change and you give it a value. In this case I am changing the InnerText of the element.

$file = 'PathToXmlFile.xml'
$xml = [xml](Get-Content $file)
$node = $xml.customconfig.usersettings.servers.FirstChild
$node.InnerText = 'localhost'
$xml.Save($file)

If you are dealing with .NET config files you can use the same technique as above. In this example I am changing the value of the UserEmailAddress setting in the appSettings section. Note that I am changing an attribute on the XML element this time.

$xml.configuration.appSettings.add | ForEach-Object {
    if ($_.key -eq 'UserEmailAddress') {
        $_.value = $internalEmailAddress
    }
}

You can also modify connection strings.

$xml.configuration.connectionStrings.add | ForEach-Object {
    if ($_.name -eq 'DefaultContext') {
        $_.connectionString = $connectionString
        $_.providerName = $dbProvider
    }
}

Editing text files turned out to be a bit more work, but luckily this was for a clean install so the file always had the same values to start with. I build up a new config file by reading all the existing lines into a collection, replacing the current setting with the new value, and then overwriting the existing file with the new lines.

$fileName = "Custom.cfg"
$lines = Get-Content $fileName
$newlines = @()   # create the collection

foreach ($line in $lines) {
    if ('[email protected]'.CompareTo($line) -eq 0) {
        $line = $line.Replace('[email protected]', '[email protected]')
    }
    $newlines += $line
}

Out-File $fileName -InputObject $newlines

Setting registry values was also pretty easy.

Set-ItemProperty -Path "HKLM:\Software\AppName\LicenseSettings" -Name "LicenseFile" -Value "PathToLicenceFile.lic"

I also had to create some environment variables; in this case it was a machine-wide setting, but you can also create one at the user or process level.

[Environment]::SetEnvironmentVariable('USERCFG', 'PathToUserConfig.xml', 'Machine')

Hopefully this post covered most of the common tasks when it comes to automating installers, editing config files and registry settings to get you going quickly.

Francois Delport