PowerShell Workflows And Visual Studio

In this post I’m going to cover PowerShell Workflows and Visual Studio. When I saw that PowerShell Workflow uses Windows Workflow behind the scenes, I wondered how it fits in with designing workflows in the Visual Studio designer. In C#, creating large workflows is easier in the Visual Studio designer than coding them by hand, so I wanted to see if the same holds for PowerShell and how it works in the designer.

When you create a workflow script in PowerShell it generates a XAML workflow that is executed by .NET. The commands you use in your script are transformed into PowerShell activities in the XAML workflow. If your code doesn’t have a corresponding PowerShell activity it is wrapped in an InlineScript activity. This is transparent to the user but it can have ramifications. The code in an InlineScript activity is executed in a separate PowerShell instance that is loaded for each activity and stays active for the whole activity, which can be bad for performance when running workflows at scale. Activities in a workflow have lots of plumbing around them and the code inside an InlineScript activity will not have that plumbing available. For instance, you can’t checkpoint inside an InlineScript activity, so you won’t be able to recover to a specific point inside it. To illustrate the transformation, look at the script sample below:

[Screenshot: the example workflow script]
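
The original screenshot isn’t reproduced here, but a minimal sketch along the lines it describes could look like this (the workflow name matches the export command further down; the URL and paths are placeholders):

workflow Invoke-MyFirstWorkFlow
{
    # Write-Output has a matching WriteOutput activity
    Write-Output "Starting transfer"

    # Start-BitsTransfer has no corresponding activity and ends up wrapped in InlineScript
    Start-BitsTransfer -Source "http://example.com/file.zip" -Destination "C:\Temp\file.zip"
}

Invoke-MyFirstWorkFlow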

Here is the equivalent workflow that is generated: Write-Output results in a WriteOutput activity and Start-BitsTransfer results in an InlineScript activity.

[Screenshot: the generated XAML workflow in the Visual Studio designer]

I’m not aware of a command in PowerShell that will tell you whether your code has a corresponding workflow activity, but you can do a visual inspection by exporting your workflow to XAML, opening it in Visual Studio and looking for InlineScript activities, or by searching the XAML text for InlineScript. Using the same script as above, I wrote the XAML to a file which you can open in Visual Studio.

(Get-Command Invoke-MyFirstWorkFlow).XamlDefinition | Out-File C:\Temp\MyFirstWorkFlow.xaml

You can view the activities available by adding them to the toolbox in the workflow designer. As far as I could see most of them are in these assemblies.

  • Microsoft.PowerShell.Activities
  • Microsoft.PowerShell.Core.Activities
  • Microsoft.PowerShell.Diagnostics.Activities
  • Microsoft.PowerShell.Management.Activities
  • Microsoft.PowerShell.Security.Activities
  • Microsoft.PowerShell.Utility.Activities
  • Microsoft.WSMan.Management.Activities

I uploaded the list of PowerShell workflow activities I found in the above assemblies along with their corresponding PowerShell commands over here.

In theory you can design your workflow in Visual Studio, import the XAML file into PowerShell and execute it, like the example below.

[Screenshot: importing and executing a designer-authored XAML workflow in PowerShell]

I assume this is not the intended use case since it didn’t work that well for large workflows, especially if you are not using the Pipeline activity, and that limits you to activities that are pipeline enabled. I think part of the problem is all the extra plumbing activities that PowerShell injects to handle expressions, variable values and so on, which you now have to add yourself.

PowerShell can also execute workflows from XAML files by importing the file as a module.

Import-Module "C:\Temp\PowerWorkFlow.xaml"

PowerWorkFlow #name of the workflow

If you are going to distribute and re-use the workflow you should package it like a proper PowerShell module instead of passing the XAML file around.

While going through all of this I started to wonder why you would use PowerShell workflows in the first place since they have a bit of a learning curve and many restrictions. Hopefully I’ll be doing a post on that soon.

Francois Delport

Configuring Desired State Configuration Pull Server

In this post I’m going to highlight a few issues I came across configuring a Desired State Configuration pull server.

Certificates
For security reasons it is recommended to use HTTPS for your pull server. Since I was doing it in a lab I used a self-signed certificate for testing, but for production you should be using proper certificates. It turns out Active Directory Certificate Services is a convenient way to manage them inside your organisation; I’ll do a post about it later on. The self-signed certificate I created in IIS won’t be trusted, which is not usually a problem when testing a web app since you can tell the browser to load the page anyway, but for service calls you can’t do that. You have to import the certificate into the Trusted Root Certification Authorities store of the Local Computer certificate store.

[Screenshot: importing the certificate into the Trusted Root store]
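
A minimal sketch of doing the same import from PowerShell, assuming the certificate was exported to a .cer file first (the path is a placeholder):

# Import the self-signed certificate into the local machine's Trusted Root store
Import-Certificate -FilePath 'C:\Temp\PullServer.cer' -CertStoreLocation 'Cert:\LocalMachine\Root'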

After I created my pull server I wanted to test the URL by browsing to it. This was on a 2012R2 server with IE11 but it kept on showing “Page cannot be displayed” even after I added the URL to trusted sites and clicked on continue anyway when prompted about the invalid certificate. In the end I switched to Chrome and I was able to confirm the service URL was responding.

Creating The Pull Server
You can configure a pull server manually by creating the website and app in IIS but I found the xPSDesiredStateConfiguration module way easier to use. The documentation does a good job of explaining the procedure but I want to emphasise you have to store the certificate you are using in the ‘CERT:\LocalMachine\My’ location even if you have it imported into Trusted Root Authority already.
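
For reference, a trimmed-down sketch of such a configuration using the xDscWebService resource (the port, paths and certificate thumbprint are placeholders, and newer versions of the module require additional settings, so check its documentation):

Configuration PullServer
{
    Import-DscResource -ModuleName xPSDesiredStateConfiguration

    Node 'localhost'
    {
        WindowsFeature DSCServiceFeature
        {
            Ensure = 'Present'
            Name   = 'DSC-Service'
        }

        xDscWebService PSDSCPullServer
        {
            Ensure                = 'Present'
            EndpointName          = 'PSDSCPullServer'
            Port                  = 8080
            PhysicalPath          = "$env:SystemDrive\inetpub\PSDSCPullServer"
            CertificateThumbPrint = '<thumbprint of the certificate in Cert:\LocalMachine\My>'
            ModulePath            = "$env:ProgramFiles\WindowsPowerShell\DscService\Modules"
            ConfigurationPath     = "$env:ProgramFiles\WindowsPowerShell\DscService\Configuration"
            State                 = 'Started'
            DependsOn             = '[WindowsFeature]DSCServiceFeature'
        }
    }
}

PullServer
Start-DscConfiguration -Path .\PullServer -Wait -Verbose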

Configure LCM For Pull
There are quite a few settings you can configure on your LCM; I found this reference very handy. After you create the LCM configuration and generate a meta MOF file you can configure the LCM by calling Set-DscLocalConfigurationManager. To confirm the current LCM status on a machine you can use Get-DscLocalConfigurationManager.
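
A minimal sketch of a WMF 5 style meta-configuration for pull mode (the node name, server URL, registration key and configuration name are placeholders):

[DSCLocalConfigurationManager()]
Configuration LCMPullSettings
{
    Node 'TargetNode'
    {
        Settings
        {
            RefreshMode          = 'Pull'
            ConfigurationMode    = 'ApplyAndAutoCorrect'
            RefreshFrequencyMins = 30
        }

        ConfigurationRepositoryWeb PullServer
        {
            ServerURL          = 'https://pullserver:8080/PSDSCPullServer.svc'
            RegistrationKey    = '<registration key GUID>'
            ConfigurationNames = @('NamedConfig')
        }
    }
}

LCMPullSettings
Set-DscLocalConfigurationManager -Path .\LCMPullSettings -Verbose
Get-DscLocalConfigurationManager    # confirm the new settings took effect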

Pull DSC Configurations
In your pull server configuration you will see a ConfigurationPath setting specifying where you store your NamedConfig.MOF configuration files and a ModulePath setting specifying where you store any extra modules required by your configurations. One thing that caught me out: the node name in a named configuration file must be the same as the configuration name, which is also the named configuration specified in the LCM config.

After you create or update a configuration you have to create a new checksum for it using the New-DscChecksum command. By default it doesn’t overwrite existing checksum files even if the configuration MOF has changed; you have to specify -Force to overwrite the existing checksum files.
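
For example, assuming the MOF files live in the default configuration path:

# Regenerate checksums for the published MOF files, overwriting stale ones
New-DscChecksum -Path "$env:ProgramFiles\WindowsPowerShell\DscService\Configuration" -Force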

You can force a machine to update its configuration from the pull server by calling Update-DscConfiguration, instead of waiting the default 30 minutes.
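
For example (the computer name is a placeholder):

Update-DscConfiguration -ComputerName 'TargetNode' -Wait -Verbose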

Troubleshoot Configurations
To see what is happening with the applied configurations, use Get-DscConfigurationStatus to see the latest one, or add the -All flag to see all of them.

You can find the DSC event logs at Applications And Services Logs -> Microsoft -> Windows -> Desired State Configuration, but the Debug and Analytic logs will be disabled. To switch them on run:

wevtutil.exe set-log "Microsoft-Windows-Dsc/Debug" /q:true /e:true
wevtutil.exe set-log "Microsoft-Windows-Dsc/Analytic" /q:true /e:true

To show them in the event viewer, select “Show Analytic and Debug Logs” from the View menu in the event viewer. Remember to switch them back off again if you are in production.

There is also a Diagnostics Helper module to make troubleshooting and tracing easier. The analytic and debug logs will have multiple entries per action; the helper will find the related entries and consolidate them.
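
Assuming the helper in question is the xDscDiagnostics module, a quick sketch of its two main commands:

Import-Module xDscDiagnostics
Get-xDscOperation -Newest 5          # list the most recent DSC operations and whether they succeeded
Trace-xDscOperation -SequenceId 3    # consolidate all the events belonging to one operation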

Don’t forget about the IIS access logs for the Pull server to troubleshoot access problems.

Francois Delport

PowerShell Bits And Pieces

I recently came across a few interesting things in PowerShell and I’m writing them down for future me or anybody else that might find them useful.

Importing CSV files
If you have a CSV file with proper headings in the first row you can import it using the Import-CSV command and you will end up with an array of objects that contain properties matching the headings. For example:

Name,IP,Role
Srvr1,10.0.0.1,Data
Srvr2,10.0.0.2,Web

Will give you an array with the following:

[Screenshot: the resulting array of objects]
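
A quick sketch of using it, assuming the rows above are saved as C:\Temp\servers.csv:

$servers = Import-Csv -Path C:\Temp\servers.csv

$servers[0].Name                                  # Srvr1
$servers[0].IP                                    # 10.0.0.1
$servers | Where-Object { $_.Role -eq 'Web' }     # filter on the generated properties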

Using Visual Studio Code as your PowerShell IDE
It turns out VS Code is actually very handy for editing PowerShell: you get IntelliSense, debugging, validation, watches and key bindings that closely match Visual Studio, to name but a few features. I used Visual Studio before VS Code as my IDE and although it worked very well, VS Code is lightweight and less cluttered and only takes a few seconds to load even on a low-end Windows tablet. You have to download the PowerShell extension and enable it, but luckily that can now be done from the GUI instead of the command line.

[Screenshot: installing the PowerShell extension from the VS Code GUI]

You have to do a bit of work to get debugging going: click the debug icon, then the gear icon to edit the launch.json file. For every script you want to debug you have to create a new entry and specify the name of the PowerShell file and a meaningful name to display in the drop-down list.

[Screenshot: editing launch.json for debugging in VS Code]

Break and ForEach loops
Just like C#, you can use Continue to go to the next iteration of a loop and Break to exit the loop, but the behaviour differs depending on where you use it. Break will exit the current loop if used in a Foreach, For, While or Do loop, and it will exit the current code block if used in a switch statement. Using Break outside a loop or switch statement will exit the script, even if it is used inside a code block rather than the main script. When you use Break inside a foreach statement it behaves as expected and exits the immediate loop, but if you are using the ForEach-Object cmdlet, Break will not exit the loop, it will exit the script.
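
A small sketch to illustrate the difference:

# foreach statement: break only exits the loop
foreach ($i in 1..5) {
    if ($i -eq 3) { break }
    $i
}
'this line still runs'

# ForEach-Object cmdlet: break exits the whole script, not just the pipeline
1..5 | ForEach-Object {
    if ($_ -eq 3) { break }
    $_
}
'this line is never reached'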

PowerShell without remoting
There are a few PowerShell commands that you can run against a remote machine without using PowerShell remoting; PowerShell must be installed on the remote computer but PowerShell Remoting doesn’t have to be enabled or configured. You can find the complete list here. I found this feature very useful for executing WMI queries for inventory purposes.
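
A couple of examples of cmdlets that take a -ComputerName parameter without relying on remoting (Server01 is a placeholder name):

Get-WmiObject -Class Win32_OperatingSystem -ComputerName 'Server01'
Get-Service -ComputerName 'Server01' | Where-Object { $_.Status -eq 'Running' }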

Working With Regex
Regex behaves like it does in .NET: you use the Select-String cmdlet and it returns a MatchInfo object for each match. For example, to find lines in a file that start with a UTC date:

$match = Get-Content C:\Temp\logfile.txt | Select-String -Pattern '^(\d\d\d\d)-(\d\d)-(\d\d) (\d\d):(\d\d):(\d\d).(\d\d\d)Z' | Select-Object Line

The Line property will show the whole line in the file that matched, while the Matches property shows more information about the pattern that was matched. You can also use Select-String to process multiple files in a directory by specifying a filter, and the MatchInfo object for each match will contain information about the file, line number etc. where it was matched.

Direct Output In Two Directions
You can use the Tee-Object command to redirect output to a file or variable and still pass it down the pipeline as well. For example, to keep the output of a process in a log file but still see what is happening in the console and even process the output further down the pipeline:

Start-Process ... | Tee-Object C:\Output.log | ... more processing ...
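
A concrete, if hypothetical, example along the same lines:

# Keep the raw ping output in a log file while still filtering it in the pipeline
ping.exe localhost | Tee-Object -FilePath C:\Temp\ping.log | Select-String 'Reply'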

Francois Delport

Unit Testing PowerShell With Pester

In this post I’m going to cover unit testing PowerShell with Pester.

As your PowerShell scripts become more complicated it can be difficult and time-consuming to test them manually. This is especially true when your script relies on external resources like Active Directory or Azure infrastructure. Pester is a framework specifically designed to unit test PowerShell scripts. It enables mocking, assertions and running tests using NUnit or the Visual Studio test explorer, among others.

How to install it
In PowerShell 5.0 and greater you can install the Pester module by running:

Install-Module Pester

If for some reason you are unable to do that, you can manually download it from GitHub and copy it to a folder that is in your $ENV:PSModulePath environment variable, or use Chocolatey.

Invoke-Expression ((New-Object Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
choco install pester -y

To confirm that the module is available, run:

Get-Module -ListAvailable

Authoring Tests
I highly recommend using Visual Studio with PowerShell Tools installed for your PowerShell projects; this demo will be using it. In VS, create a PowerShell Script Project and add a PowerShell test to it using the PowerShell Test template.

[Screenshot: adding a PowerShell Test item in Visual Studio]

The VS test explorer will pick up your PowerShell tests, the same as it does for other unit tests. Pester has a naming convention to follow for the tests and scripts: the test file should have the same name as the script to test but with .Tests before the extension (for example MyScript.Tests.ps1). If you don’t follow the convention the tests will show as greyed out in the VS test explorer.

[Screenshot: PowerShell tests in the Visual Studio test explorer]

The structure of a test
At the top of each test file you have to dot source the script file that will be tested.

. .\PathTo\ScriptFiletoTest.ps1

In the demo project this is done using logic based on the naming convention. The structure of the actual test looks like this:

[Screenshot: the structure of a Pester test]
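
A hedged sketch of the overall shape (the function and test names are placeholders):

Describe "Get-Something" {
    Context "When called with a valid name" {
        It "Returns the expected value" {
            Get-Something -Name 'Test' | Should Be 'Expected'
        }
    }
}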

The Describe section will group your tests together and this is also the name you will see in the VS test explorer. Context creates a scope for variables and mocked objects. The It block contains the code for your test and at least one assertion, or else the test will not run.

Mocking And Assertions
You can mock calls to your own functions as well as calls to PowerShell modules. For example if you want Get-Date to return a specific date in your test you can do this:

Mock Get-Date { New-Object DateTime (2000, 1, 1) }

$date = GetMyOwnDate
It "It Should Be The Year 2000"{
    ($date).year | Should be "2000"
}

If you want to verify that a method is called with specific parameters you can do this:

Mock Myfunction {} -Verifiable -ParameterFilter {$myparam -eq $folder}
...
Assert-VerifiableMocks

I have a sample project on GitHub to show some of the functionality of Pester, but read the wiki on their GitHub repo to get the full picture.

Some Errors I Encountered And Tips

  • The NuGet package didn’t work for me since NuGet doesn’t understand PowerShell projects.
  • If your tests don’t run and you get a yellow exclamation mark next to the test in the test explorer, one cause could be that there are no assertions in your test “It” block.
  • I also tried mocking some Active Directory calls on a Windows 10 machine. To get the PowerShell modules you have to install Remote Server Administration Tools for your client OS.
  • If you want to mock the calls inside another module instead of just mocking your calls to those functions follow this guide.

Francois Delport

Creating Reports And Formatting Output In PowerShell

In PowerShell there are a few ways to format your output; in this post I’m going to tie together a few ideas about creating reports and formatting output in PowerShell.

Formatting output for the console
The Format-* cmdlets are used to format output and work best for the console window or any display using a fixed-width font. You can store the formatted output in a variable and, for instance, write it to a text file and keep the format. If you want to iterate over the lines of the formatted text object you can use the Out-String cmdlet to convert it to an array of strings. For example, Format-Table will produce output like this:

[Screenshot: Format-Table output in the console]
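
A quick sketch of both points:

# Fixed-width table output, best suited to the console or a plain-text file
Get-Process | Select-Object -First 3 | Format-Table Name, Id, WS -AutoSize

# Out-String -Stream turns the formatted output into an array of strings you can loop over
$lines = Get-Process | Format-Table Name, Id | Out-String -Stream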

There is also Format-List to list item properties per line and Format-Wide, among others; play around with them to see how they look.

Converting output to HTML
If you need output fit for reporting or email purposes you can convert it to HTML. Luckily this functionality is already in PowerShell with the ConvertTo-* cmdlets, specifically the ConvertTo-Html cmdlet for HTML. Obviously the default HTML has no styling and looks pretty dull, but you have some control over the HTML generated. You can add CSS to the HTML by specifying a custom head section. The script below uses an inline style sheet to keep it with the script, but you can point to external CSS files as well.

$css = "<title>Files</title>
<style>
table { margin: auto; font-family: Arial; box-shadow: 10px 10px 5px #777; border: black; }
th { background: #0048e3; color: #eee; max-width: 400px; padding: 5px 10px; }
td { font-size: 10px; padding: 5px 20px; color: #111; }
tr { background: #e8d2f3; }
tr:nth-child(even) { background: #fae6f5; }
tr:nth-child(odd) { background: #e9e1f4; }
</style>"

dir | Select-Object -Property Name,LastWriteTime | ConvertTo-Html -Head $css | Out-File C:\Temp\report.html

It generated the following HTML:

[Screenshot: the generated HTML report]

Convert output for data transfer purposes
If you need the output to be structured for data transfer or machine input purposes you can convert it using the ConvertTo-JSON/XML/CSV cmdlets. JSON and CSV are pretty straightforward but XML needs some explaining. By default the output will be an XML document object and to get to the actual XML you have to use the OuterXml property of the XML document. You can change the output mechanism by using the -As {format} parameter that takes the following values:

  • String: Returns the XML as a single string.
  • Stream: Returns the XML as an array of strings.
  • Document: The default, which returns an XML document object.

You can control the inclusion of type information in the XML using the -NoTypeInformation parameter. If you have the requirement to keep the generated output as small as possible you can use JSON and pass in the -Compress parameter to compress the output.
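
A small sketch of the three converters side by side:

$procs = Get-Process | Select-Object -First 2 Name, Id

$xml  = $procs | ConvertTo-Xml -As String -NoTypeInformation   # one string instead of an XML document object
$json = $procs | ConvertTo-Json -Compress                      # whitespace stripped to keep the payload small
$csv  = $procs | ConvertTo-Csv -NoTypeInformation              # an array of CSV lines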

Francois Delport

PowerShell Background Jobs

If you want to run PowerShell commands in parallel you can use PowerShell background jobs. In this post I’m going to have a look at waiting for jobs to finish, handling failures, timeouts and some other bits I came across when I started using background jobs.

Start-Job
To start a background job use Start-Job and pass in the script block you want to execute; PowerShell will immediately return to the console.

[Screenshot: Start-Job returning to the console immediately]
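
A minimal sketch (the script block is a placeholder):

$job = Start-Job -Name 'SleepJob' -ScriptBlock { Start-Sleep -Seconds 30; 'finished' }
$job    # the console returns immediately and shows the job Id, Name, State and HasMoreData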

The job will execute in its own scope and won’t be able to use variables and functions declared in the script starting the job. You have to pass in any functions required by the script block using the InitializationScript parameter when you call Start-Job. You can store the contents of the script block in a variable to enable reuse.

$functions =
 {
   ... 
   functions
   ...
}
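
For example, a hypothetical helper function made available to the job through InitializationScript:

$functions = {
    function Get-Greeting { param($Name) "Hello $Name" }
}

$job = Start-Job -InitializationScript $functions -ScriptBlock { Get-Greeting -Name 'World' }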

If you have multiple jobs running you will need a way to know when all of them have completed execution. Completed in this case means the job ran through the script block you passed it; it could have failed, thrown exceptions, been disconnected from the remote machine or executed without errors. You can use Wait-Job and/or Receive-Job for this.

Wait-Job
Wait-Job suppresses the console output until the background jobs are completed and then it prints only the job state. Note the HasMoreData property that is true; this means there is something in the job output that can be retrieved. By default it will wait for all the jobs in the current session but you can pass in filters to wait for specific jobs. The only problem with Wait-Job is that it doesn’t show the output from jobs; you can always log to a file, but seeing what is happening is handy.

[Screenshot: Wait-Job output showing the job state]
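
A quick sketch:

$jobs = 1..3 | ForEach-Object {
    Start-Job -ScriptBlock { Start-Sleep -Seconds 5; 'done' }
}
Wait-Job -Job $jobs    # blocks until every job in $jobs reaches a finished state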

Receive-Job
Receive-Job’s actual purpose is to get the output from jobs at that point in time, but by passing in the Wait parameter it will wait for the jobs to complete first. By default it will receive all the jobs in the current session but you can pass in filters to target specific jobs. To me it looks like Receive-Job was not specifically designed as a mechanism to wait for job completion; it is more a side effect of using the Wait parameter, and it doesn’t have a timeout parameter.

[Screenshot: Receive-Job output]

You can also pass in the WriteEvents parameter that will print to the console as your jobs change state.
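
For example:

Receive-Job -Job $jobs -Wait -WriteEvents    # stream job output and print state changes until all jobs finish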

Handling Timeouts
Wait-Job takes a timeout parameter.

Wait-Job -Job $jobs -Timeout 60

If your jobs are not completed and the timeout is reached, Wait-Job will return but it won’t show an error message. You’ll have to retrieve the jobs at this point and check which ones didn’t complete; you can use Get-Job for this.

[Screenshot: Get-Job showing jobs that did not complete]
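
A hedged sketch of that check:

Wait-Job -Job $jobs -Timeout 60 | Out-Null
$unfinished = Get-Job | Where-Object { $_.State -eq 'Running' }    # jobs that were still busy at the timeout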

Handling Errors
Errors that occur inside the script blocks running as a job won’t bubble up to your script. If you use Receive-Job they will be printed to the console, and if you use Wait-Job you won’t see anything; you’ll have to retrieve the jobs using Get-Job to see the failed status.

[Screenshots: failed job state shown by Get-Job]

You can also store the errors that occur in each script block by using the ErrorVariable parameter. It is not a job-specific feature, it is one of the PowerShell common parameters. Errors are normally stored in the $Error automatic variable, but by using ErrorVariable you can append multiple errors to the same variable, and you can specify a different variable for each job to get the errors just for that job.
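
One possible approach (an assumption on my part) is to capture the errors as they are surfaced by Receive-Job; the + prefix appends to the variable instead of overwriting it:

Receive-Job -Job $job -ErrorVariable +jobErrors -ErrorAction SilentlyContinue
$jobErrors | ForEach-Object { $_.Exception.Message }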

You can use Wait-Job and Receive-Job together by piping the result of Wait-Job to Receive-Job but that means you will only see the output from your jobs when they are done.

[Updated] Pass Parameters To Start-Job
As I mentioned earlier, the job won’t have access to variables in your script, but you can pass parameters to Start-Job using the ArgumentList parameter. It takes an array of arguments, for example:

$param1 = @{"name" = "value" ; "name2" = "value2";}
$param2 = "value2"

$job = Start-Job -Name "job1" -ScriptBlock {
 param($param1, $param2)
 ...
} -ArgumentList @($param1, $param2)

Francois Delport