PowerShell Background Jobs

If you want to run PowerShell commands in parallel you can use PowerShell background jobs. In this post I’m going to have a look at waiting for jobs to finish, handling failures, timeouts and some other bits I came across when I started using background jobs.

Start-Job
To start a background job, use Start-Job and pass in the script block you want to execute; PowerShell will return control to the console immediately.
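A minimal example (the script block here is just an illustration):

$job = Start-Job -ScriptBlock { Get-Date; Start-Sleep -Seconds 30 }
$job   # the prompt returns immediately; the job object shows its Id, Name and State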

The job executes in its own scope and won’t be able to use variables and functions declared in the script starting the job. You have to pass in all the functions the script block needs using the InitializationScript parameter when you call Start-Job. You can store the contents of that script block in a variable to enable reuse:

$functions = {
    # ... your shared function definitions go here ...
}
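Putting it together, a minimal sketch (Get-Greeting is a hypothetical function, purely for illustration):

$functions = {
    function Get-Greeting([string]$Name) { "Hello, $Name" }
}

$job = Start-Job -InitializationScript $functions -ScriptBlock {
    # Get-Greeting is available here because it was declared in the initialization script
    Get-Greeting -Name "background job"
}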

If you have multiple jobs running you will need a way to know when all of them have completed execution. Complete in this case means the job ran through the script block you passed it: it could have failed, thrown exceptions, got disconnected from the remote machine, or executed without errors. You can use Wait-Job and/or Receive-Job for this.

Wait-Job
Wait-Job suppresses the console output until the background jobs complete and then prints only the job state. Note the HasMoreData property: when it is true, there is something in the job output that can still be retrieved. By default it will wait for all the jobs in the current session, but you can pass in filters to wait for specific jobs. The only problem with Wait-Job is that it doesn’t show the output from jobs; you can always log to a file, but seeing what is happening as it happens is handy.
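A minimal sketch, starting a few jobs and blocking until they all complete:

$jobs = 1..3 | ForEach-Object {
    Start-Job -ScriptBlock { param($n) Start-Sleep -Seconds $n; "job $n done" } -ArgumentList $_
}
Wait-Job -Job $jobs   # prints the job state table once everything has finished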

Receive-Job
Receive-Job’s actual purpose is to get the output from jobs at that point in time, but by passing in the Wait parameter it will wait for the jobs to complete first. By default it will receive from all the jobs in the current session, but you can pass in filters to target specific jobs. To me it looks like Receive-Job was not specifically designed as a mechanism to wait for job completion; it is more a side effect of using the Wait parameter, and it doesn’t have a timeout parameter.

You can also pass in the WriteEvents parameter, which will print to the console as your jobs change state.
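For example, assuming $jobs from the earlier sketch:

Receive-Job -Job $jobs -Wait -WriteEvents   # streams output as it arrives and reports state changes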

Handling Timeouts
Wait-Job takes a timeout parameter.

Wait-Job -Job $jobs -Timeout 60

If your jobs have not completed when the timeout is reached, the Wait-Job call will return but it won’t show an error message. You’ll have to retrieve the jobs at this point and check which ones didn’t complete; you can use Get-Job for this.
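A sketch of that check, assuming jobs that timed out are still in the Running state:

$unfinished = Get-Job | Where-Object { $_.State -eq 'Running' }   # jobs that didn't finish in time
$unfinished | Stop-Job   # optionally stop the stragglers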

Handling Errors
Errors that occur inside the script blocks running as a job won’t bubble up to your script. If you use Receive-Job the errors will print to the console, but if you use Wait-Job you won’t see anything; you’ll have to retrieve the jobs using Get-Job to see the failed status.
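A short sketch of that check:

$failed = Get-Job | Where-Object { $_.State -eq 'Failed' }
$failed | Receive-Job   # prints the errors raised inside the failed script blocks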

You can also store the errors that occur in each script block by using the ErrorVariable parameter. This is not a job-specific feature, it is one of the PowerShell common parameters. Errors are normally stored in the $Error automatic variable, but with ErrorVariable you can append multiple errors to the same variable, and you can specify a different variable for each job to get the errors just for that job.
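A sketch of capturing a single job’s errors, assuming $job holds the job of interest:

Receive-Job -Job $job -Wait -ErrorVariable jobErrors
$jobErrors   # contains only the errors surfaced while receiving this job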

You can use Wait-Job and Receive-Job together by piping the result of Wait-Job to Receive-Job, but that means you will only see the output from your jobs once they are all done.
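For example:

Wait-Job -Job $jobs | Receive-Job   # blocks, then drains all the output in one go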

[Updated] Pass Parameters To Start-Job
As I mentioned earlier, the job won’t have access to variables in your script; you can pass parameters to Start-Job using the ArgumentList parameter. It takes an array of arguments, for example:

$param1 = @{ "name" = "value"; "name2" = "value2" }
$param2 = "value2"

$job = Start-Job -Name "job1" -ScriptBlock {
    param($param1, $param2)
    # ... use $param1 and $param2 here ...
} -ArgumentList @($param1, $param2)

Francois Delport

Building A MVC3 Project In Visual Studio Team Services/TFS 2015

In this post I’m going to cover building an MVC3 project in Visual Studio Team Services/TFS 2015.

With the release of Team Foundation Server 2015, Microsoft overhauled the build experience with a completely new build server. The new build experience is way better than the old XAML builds, and I decided to take it for a spin and share some of the problems and gotchas I came across.

There are tons of demos on building MVC projects, most of which start with a new MVC project, but I wanted to try it out with something more realistic and decided to use a very old MVC3 project I have that had never been built successfully on a build server before. This meant quite a few errors along the way that I had to fix first, but it didn’t take very long and everything worked fine in the end.

If you want a step-by-step introduction read this; I will mostly be focusing on the deviations from the happy path in this post and documenting the steps to fix them in case I need them again one day.

First Lesson
The retention policy for your builds is 10 days by default, so I lost all the build logs from my old builds containing the error messages before I got around to writing this post. Most of what I’ll be writing is therefore from memory, without screenshots.

Second Lesson
I got this error while restoring packages during the build:

The process cannot access the file NuGet.exe because it is being used by another process.

The fix was very simple: update the nuget.exe file in your solution to the newest version.
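One way to do that, assuming you run it from the folder holding the nuget.exe checked into your solution, is to let NuGet update itself:

nuget.exe update -self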

Third Lesson
I’m using a NuGet repository that requires authentication, and if you try to run a build you will get an error message saying it can’t find your packages. To resolve this you have to store the credentials in nuget.config so the build server can authenticate against your NuGet repository. To do that use the following command:

nuget sources add -Name "YourFeed" -Source "https://urltonuget/nuget" -UserName username -Password password -ConfigFile "Path\To\nuget.config" -StorePasswordInClearText

Sadly this will store the password in clear text, but encrypted credentials only work on the machine where they were encrypted, so you won’t be able to use them on the build server.

Fourth Lesson
In this project all the packages were checked into source control. This is not good practice: it takes up lots of space, pollutes your repository, and makes downloading the project from VSO take a lot longer. To prevent this from happening use .tfignore files. In this case I placed the file in the root of my project.

The contents of the file are very simple:

\packages
!\packages\repositories.config

This will ignore the contents of the packages folder but not the repositories.config since we still need it.

Fifth Lesson
It is also not good practice to include your bin folder in source control; rather use NuGet for all your dependencies. This project was always built locally and all the developers had MVC3 installed on their machines, so it failed to build on the build server, which didn’t (and shouldn’t) have MVC3 installed. The fix is very simple: install MVC3 as a NuGet package in your web project.

Install-Package Microsoft.AspNet.Mvc -Version 3.0.50813.1

Sixth Lesson
Build definitions can become very complex and it is easy to mess them up. Luckily VSO keeps track of the changes made to your build definitions, and you can even do a diff to see the exact changes. It is like a mini source control system for your build definitions.

Francois Delport

Azure Disks And Images

In this post I’m going to explore a few more scenarios around VHDs and images in Azure. In a previous post I showed how to copy a VM between subscriptions in a semi-scripted way. I’ll be extending that script to create an image or hard disk from the copied VHD, but first some more info around blobs.

Blobs

VHDs are stored in Azure Blob Storage and there are three types of blobs:

  • Page Blobs
    Used to store VHDs and is optimised for random reading and writing.
  • Block Blobs
    Used to store files that are suitable for streaming and are written once.
  • Append Blobs
    Used for appending data for example log files.

The different types of blobs come into play when you copy VHDs into Azure Blob Storage, and the result depends on the tool you use. If you use Azure PowerShell and the Add-AzureVhd cmdlet, the VHD is copied as a page blob. If you use CloudBerry Explorer you have to explicitly choose Copy As Page Blob; the default copy will create a block blob. For other tools it could be different, so confirm how blobs are copied before you copy 120GB to Azure just to find out it was the wrong format. Later in the post we will register the VHD as a disk.
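For example, a minimal upload sketch (the local path and storage URL are placeholders):

Add-AzureVhd -LocalFilePath "C:\vhds\source.vhd" -Destination "https://mystorage.blob.core.windows.net/vhds/source.vhd"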

Images
Images can be specialised or generalised (SysPrepped) and can contain multiple disks. If you want to create an image that can be used to create more instances of a VM, use a generalised image, for example to create instances for a scale-out scenario. If you want an exact copy of the VM, use a specialised image, for example to copy it to another subscription or to restore it from a snapshot. When you create an image using PowerShell, remember to indicate the OSState; this equates to the check box in the portal asking whether you ran Sysprep.

If you want to capture an image from a VM you already have in the same subscription, shutdown the VM and save the image.

Save-AzureVMImage -ServiceName VmService -Name VMName -ImageName NewImage -OSState Specialized   # or Generalized

In the next script I create an image from a VHD I copied into Azure Blob Storage.

$DiskConf = New-AzureVMImageDiskConfigSet

Set-AzureVMImageOSDiskConfig -DiskConfig $DiskConf -HostCaching ReadWrite -OSState Specialized -OS "Windows" -MediaLink $vdhurl   # OSState: Specialized or Generalized

$DiskConf.DataDiskConfigurations = New-Object Microsoft.WindowsAzure.Commands.ServiceManagement.Model.DataDiskConfigurationList   # workaround for a bug in the cmdlet

Add-AzureVMImage -ImageName "NewImage" -Label "Easier To Find" -OS Windows -DiskConfig $DiskConf -ShowInGui $true

You can use the old portal to create an image from a VHD: under the Virtual Machines menu, click on the Images tab and choose Create; you can browse to the VHD from there.

From my investigation I could not find a way to change the OSState after the image was created; you might be able to do it by altering the metadata on the blob, but I haven’t tried it yet. From experience, using a specialised image where a generalised image is expected doesn’t work; for example, creating a new VM from a specialised image that is tagged as generalised just hangs on start-up and the boot sequence never completes. It could have been something specific to the VMs I used, but it happened twice.

In the old portal you can create a new VM using your existing images by choosing My Images.

Or pass in the image name in PowerShell.

$VM = New-AzureVMConfig -Name $VmName -InstanceSize $InstanceSize -ImageName $SourceName

When you create a VM from an image, Azure will create new disks based on the disks referenced by the image, like templates in VMware.

OS Disks
OS disks contain the disk used for booting, and the disk is assumed to be specialised. I have not yet tried to create a VM from one that is generalised. If you copied a VHD into blob storage you can create an OS disk from it using PowerShell.

Add-AzureDisk -DiskName "NewVMBootDisk" -MediaLocation $vdhurl -Label "BootDisk" -OS "Windows"

Or you can use the old portal: under the Virtual Machines menu, click on the Disks tab and choose Create; you can browse to the VHD from there.

Now you can reference this disk to create a new VM.

$VM = New-AzureVMConfig -Name $VmName -InstanceSize $InstanceSize -DiskName "NewVMBootDisk"

You can create a new VM from your OS disk in the old portal by choosing My Disks.

Once you create a new VM from this disk, the disk is no longer available to other VMs since it is attached to the VM you just created. This is similar to creating a new VM in VMware and attaching an existing disk to it.

Data Disks
Data disks can be attached to a VM but you cannot boot from them. If you copied a VHD into blob storage you can register it as a data disk when attaching it to a VM.

Add-AzureDataDisk -VM $VM -ImportFrom -MediaLocation "VHDUrl" -LUN 1

Francois Delport

Setup SSL In JIRA With An Existing SSL Certificate

In this post I’m going to show you how to setup SSL in JIRA with an existing SSL certificate.

If you set up SSL in JIRA from scratch by requesting a new certificate the official instructions work well, but when you have an existing certificate the instructions are not very clear, especially to someone who is not familiar with Java and Tomcat. If you read further down in the comments and google a bit you can piece it together, but I want to bring it all together in a single post to make it easier the next time I have to do it. These instructions are for Windows but should work on any OS, since the tools used are ports from Linux anyway.

Tools

Before we start you’ll need a few things:

  1. If you want to know what .pem, pkcs12 and .key files are, please read this first.
  2. Your SSL certificate and private key pair, and the password that was used to create the private key. If you received them as text in an email instead of as file attachments, you can copy and paste them into separate files, but remember to include the -----BEGIN***----- and -----END***----- lines for the certificate and the private key. The file extensions do not really matter when you run the tools, but I named mine .key and .pem to make it easier.
  3. OpenSSL: you can download it from SourceForge.
  4. Java JRE: you will have this one already since it is part of the JIRA installation. In my case it was in C:\Program Files\Atlassian\JIRA\jre\bin\ and the tool you need is keytool.exe.

Steps

Export your certificate to pkcs12, the format the Java keytool understands. You will find OpenSSL in C:\Program Files (x86)\GnuWin32\bin; run openssl.exe to get the OpenSSL command prompt, then run:

pkcs12 -export -in c:\cert\your_ssl.pem -inkey c:\cert\your_keyfile.key -out newfile.p12 -name alias

The alias is optional; if you don’t provide one the tool will assign a number as the alias, starting from 1. If you want to see the alias of an existing file, have a look at the command line parameters for openssl. You will be prompted for the password used to generate the private key pair. If successful you will see newfile.p12 created in the output folder.

The next step is to create the Java key store; I called this one jira.jks.

"%java_home%\bin\keytool.exe" -importkeystore -srckeystore newfile.p12 -destkeystore jira.jks -srcstoretype pkcs12 -alias alias

You will be prompted to create a new password for this key store, and then you will be prompted for the private key password used to create the exported certificate. It is important to use the private key password as the new password for this key store, or else JIRA will complain.

Now you can configure JIRA to use this Java keystore for SSL by running config.bat, located in the bin folder of your JIRA installation.

If you want to have a look at what is inside an existing Java keystore, you can use keytool.exe to list its entries, or you can use Portecle if you prefer a GUI.
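For example, to list the entries in the jira.jks store created earlier:

"%java_home%\bin\keytool.exe" -list -v -keystore jira.jks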

Side Note: to manually configure a Tomcat connector for SSL, edit <tomcat_dir>/conf/server.xml and add the following:

<Connector port="8443" maxThreads="150" scheme="https" secure="true" SSLEnabled="true" keystoreFile="path/to/your/keystore" keystorePass="YourKeystorePassword" clientAuth="false" keyAlias="alias" sslProtocol="TLS"/>

Tip: I had endless trouble creating application links between JIRA and BitBucket with SSL enabled. BitBucket was able to use the JIRA user directory, but the application links were throwing certificate errors and HTTP 500 errors on the application links screen. In the end I changed JIRA to use port 443 instead of 8443 and that solved the problem.

Francois Delport