Read SMART Attributes Using PowerShell And Smartmontools

In this post I’ll show you how to read SMART attributes using PowerShell and Smartmontools. I decided to use PowerShell and Smartmontools because they work from the command line, which is great for Windows Server Core machines, and they can be scheduled to run in the background. I also needed a way to receive notifications if the SMART attributes predicted disk failure, so I used Azure Log Analytics and Azure Monitor Alerts since I already use them for other tasks.

DISCLAIMER: I used this for my lab machines; it is not a practical or scalable solution for production.

Checking Drives For Failure Based On SMART Attributes

Windows already exposes the predicted failure status of drives based on SMART attributes so you don’t have to interpret the attributes yourself.

Get-WmiObject -namespace root\wmi -class MSStorageDriver_FailurePredictStatus

But if you want to you can retrieve more detailed SMART attribute data using PowerShell. There are various commands returning different levels of detail, for example:

Get-Disk | Get-StorageReliabilityCounter | fl
Get-WmiObject -namespace root\wmi -class MSStorageDriver_FailurePredictThresholds
Get-WmiObject -namespace root\wmi -class MSStorageDriver_FailurePredictData

If you feel brave you can read the raw SMART attributes, but you’ll have to manipulate them to get something in a human-readable form.

Get-WmiObject -namespace root\wmi -class MSStorageDriver_ATAPISmartData | Select-Object -ExpandProperty VendorSpecific

For my purposes I retrieve the disks that have the PredictFailure property set to true.

$failures = Get-WmiObject -namespace root\wmi -class MSStorageDriver_FailurePredictStatus -ErrorAction SilentlyContinue | Select InstanceName, PredictFailure, Reason | Where-Object -Property PredictFailure -EQ $true

To receive notifications for the predicted failures I write an error event to the Windows System Event Log.

Write-EventLog -LogName 'System' -Source 'YourSourceName' -EntryType Error -EventId 0 -Category 0 -Message "SMART Failure Prediction"

Before you write to the Event Log you have to register as a source.

New-EventLog -LogName 'System' -Source 'YourSourceName' 

If you are not registered as an event source you will get “The description for Event ID 0 from source YourSourceName cannot be found.” as part of your event message.
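Putting the pieces together, here is a minimal sketch of a scheduled check; the source name SmartCheck is my own placeholder:

# One-time registration of the event source (requires elevation)
if (-not [System.Diagnostics.EventLog]::SourceExists('SmartCheck')) {
    New-EventLog -LogName 'System' -Source 'SmartCheck'
}

# Find disks predicting failure and raise an error event for each one
$failures = Get-WmiObject -namespace root\wmi -class MSStorageDriver_FailurePredictStatus -ErrorAction SilentlyContinue |
    Where-Object { $_.PredictFailure -eq $true }

foreach ($disk in $failures) {
    Write-EventLog -LogName 'System' -Source 'SmartCheck' -EntryType Error -EventId 0 -Category 0 `
        -Message "SMART Failure Prediction on $($disk.InstanceName), reason $($disk.Reason)"
}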

The full script can be found here.

The error event will be picked up by Azure Log Analytics if you are collecting the System Event Log error entries.

[Screenshot: Azure Log Analytics data sources configuration]

If you need more information on creating alerts read this post.

Retrieving SMART Attributes Using Smartmontools

Apart from retrieving the failure status I also want to retrieve some extra SMART attributes from the disks. This data is for the next stage of my pet project to create some reports and track the degradation of the disks over time.

I’m using smartctl.exe from the Smartmontools package (link). It works from the command line and can return output in JSON format. You can unzip the Windows installer to get the bin folder containing the exe files.

In short, I scan for drives, retrieve the SMART attributes for each one in JSON format and dump the attributes I need to a CSV file. The full script is too long to post but you can find it here.
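A minimal sketch of the idea, assuming smartctl.exe is in the current folder; the attribute columns I pick out follow the smartctl JSON layout and are only illustrative:

# Scan for drives, then pull the SMART attribute table for each one and append it to a CSV
$drives = & .\smartctl.exe --scan --json | Out-String | ConvertFrom-Json
foreach ($d in $drives.devices) {
    $smart = & .\smartctl.exe -a --json $d.name | Out-String | ConvertFrom-Json
    $smart.ata_smart_attributes.table |
        Select-Object @{n='Drive';e={$d.name}}, id, name, value, worst, thresh, @{n='Raw';e={$_.raw.value}} |
        Export-Csv -Path 'SmartAttributes.csv' -NoTypeInformation -Append
}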

Later on I will import this into a database for reporting. You could potentially leave it as JSON if you are going to use a document database but I’m leaning towards SQL at the moment.

Francois Delport

How To Install OMS Agent On Server Core

In this short post I’ll cover how to install the OMS agent on a Server Core instance.

The first step is to download and extract the installer. This step must be performed on a machine with the Windows GUI installed; it won’t work on Server Core instances. To extract the installer run:

MMASetup-AMD64.exe /c [/t:ExtractPath]

The /t parameter is optional; if it isn’t specified you will be prompted for the extraction path in the GUI.

To install OMS agent on a Server Core instance you have to run the installer silently by passing the /qn switch along with some other bits of information required by OMS.

See the example PowerShell script below:

$WorkSpaceID = "xxxxxx"
$WorkSpaceKey="xxxxx=="

$ArgumentList = '/qn ADD_OPINSIGHTS_WORKSPACE=1 ' + "OPINSIGHTS_WORKSPACE_ID=$WorkSpaceID " + "OPINSIGHTS_WORKSPACE_KEY=$WorkSpaceKey " + 'AcceptEndUserLicenseAgreement=1'

Start-Process '.\setup.exe' -ArgumentList $ArgumentList -ErrorAction Stop -Wait -Verbose | Out-Null

To confirm the OMS agent is installed you can run the following script:

Get-WmiObject -Query 'select * from win32_product where Name = "Microsoft Monitoring Agent"'

If it was successfully installed you will see the connected agent in the OMS portal after a while.

On a side note, if you want to remove the OMS agent from a Server Core instance you can run the following script:

$agent = Get-WmiObject -Query 'select * from win32_product where Name = "Microsoft Monitoring Agent"'
$agent.Uninstall()

Francois Delport

Using Microsoft Report Viewer In PowerShell

In this post I’m going to give a quick example of using Microsoft Report Viewer in PowerShell. PowerShell makes it easy enough to slap together a simple report by converting data to HTML and adding some CSS, but every now and then you need something a bit more polished. For instance, generating a chart or report in Word format that is ready for distribution or printing, laid out according to page size and orientation with headers, footers, logos etc. HTML doesn’t work that great for this, so I did a POC using Microsoft ReportViewer 2012; this should work with the 2015 version as well but I didn’t try it.

I’m not going to dig into creating reports with the report viewer in this post, I’ll be focusing on using it with PowerShell. If you are not familiar with the report viewer you can catch up over here, and there are some very good resources and samples on the GotReportViewer website as well. Short version: you design reports using the report designer in Visual Studio, then at run time you pass in data and render the report to a file or display it in the report viewer control. If you don’t have the report viewer installed you can download it here. The whole solution is in my GitHub repo here, but I will be highlighting and explaining some aspects of it as we go along.

Code Highlights
When you design the report you have to assign data sources to it. These provide the fields you use to author the report at design time, and the report expects the same objects, populated with data, at run time. You’ll have to create these data source classes in a .NET project, compile it and load the assembly along with the ReportViewer assembly into PowerShell.

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.ReportViewer.WinForms")
[System.Reflection.Assembly]::LoadWithPartialName("ReportPOC")

In this solution the data source is the ReportData class in the ReportPOC assembly. When you create the data source collection you have to create a strongly typed collection, in this case a generic List of the data source type.

$data = New-Object "System.Collections.Generic.List[ReportPOC.ReportData]"

Creating the records and adding them to the data source is pretty straightforward.

$item1 = New-Object ReportPOC.ReportData
$item1.ServerName = "Server1"
$item1.CPUAvail = "128"
$item1.CPUUsed = "103"
$data.Add($item1)

Then you create the report viewer instance, specify the path to your report file and add the data source.

$rv = New-Object Microsoft.Reporting.WinForms.ReportViewer   # the report viewer instance used below
$rep = New-Object Microsoft.Reporting.WinForms.ReportDataSource
$rep.Name = "ReportDS"
$rep.Value = $data
$rv.LocalReport.ReportPath = "C:\MySource\ReportPOC\POC.rdlc";
$rv.LocalReport.DataSources.Add($rep);

Next you render the report and write the output byte array to a file stream; remember to cast the render result to type [byte[]] or else it won’t work.

[byte[]]$bytes = $rv.LocalReport.Render("WORDOPENXML");
$fs = [System.IO.File]::Create("C:\MySource\ReportPOC\POC.docx")
$fs.Write( [byte[]]$bytes, 0, $bytes.Length);
$fs.Flush()
$fs.Close()

The result, a chart and report using the same data source with a page break between them as it displays in Word 2013:

[Screenshot: the rendered chart and report in Word 2013]

Next Steps
If you want a re-usable solution I would create a more generic data source class to avoid coding a new data source class for every chart. Also add some parameters to the report/chart to control the headings, legend, labels, page headers/footers etc. You can export to other formats by passing different parameters to the Render method, like “EXCEL”, “PDF”, “HTML” and “XML”. You can also create different series to group categories and apply logic in the report designer to calculate values or control colors, for example coloring the bar red if CPU usage is above 90%.
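For example, rendering the same report to PDF instead of Word only changes the format argument passed to Render (a sketch reusing the $rv instance from earlier):

[byte[]]$pdfBytes = $rv.LocalReport.Render("PDF")
[System.IO.File]::WriteAllBytes("C:\MySource\ReportPOC\POC.pdf", $pdfBytes)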

Francois Delport

Out Parameters, Nested Classes, Specifying A Proxy, Modules And Local Variables And Debugging Jobs In PowerShell

In this post I’m going to cover out parameters, nested classes, specifying a proxy, modules and local variables, and debugging jobs in PowerShell.

Calling .NET Functions With Out Parameters
When you call a .NET function that requires an out parameter you have to cast the parameter as a System.Management.Automation.PSReference object by using a [ref] cast. Apart from the cast you also have to initialize the variable; this didn’t seem obvious at first since you don’t have to do it in .NET, but in PowerShell you do.

$parsedip = New-Object System.Net.IPAddress -ArgumentList 1
[System.Net.IPAddress]::TryParse("192.168.10.88", [ref] $parsedip)

Using Nested .NET Classes In PowerShell
Usually you don’t come across nested classes very often but when you do they have different syntax from other classes. To access a nested class use the + sign before the name of the nested class instead of a full stop.

Microsoft.Exchange.WebServices.Data.SearchFilter+IsLessThanOrEqualTo
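For example, instantiating the nested class works like any other type once you quote the full name; this sketch assumes the EWS Managed API assembly is already loaded:

$filter = New-Object -TypeName 'Microsoft.Exchange.WebServices.Data.SearchFilter+IsLessThanOrEqualTo' `
    -ArgumentList ([Microsoft.Exchange.WebServices.Data.ItemSchema]::DateTimeReceived), (Get-Date).AddDays(-30)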

Specifying A Proxy At Script Level
If you want to specify a proxy that will be used for web requests, but you can’t or don’t want to change the proxy settings machine-wide, you can specify it in your script.

$proxyString = "http://someserver:8080"
$proxyUri = new-object System.Uri($proxyString)
[System.Net.WebRequest]::DefaultWebProxy = new-object System.Net.WebProxy ($proxyUri, $true)

Modules And Local Variables
Variables declared local to a module will not hold their value between calls to the module’s functions. For example, if you create a counter that is incremented every time you call a function in the module, that variable will be created as a new local variable every time the function begins execution. To reference the same variable between function calls you have to declare it at script scope. This will not clash with variables declared in scripts that use the module, even if they also have a variable with the same name declared at script scope.

$script:localvar = 0

function DoOne
{
 write-output "DoOne : $script:localvar"
 $script:localvar += 1
}

Debugging Jobs
Since jobs start their own PowerShell.exe process when you run them, you can’t step into the code from your IDE even if you have a breakpoint set in the script. You have to use the Debug-Job cmdlet from the session running the job to start a debug session against the remote process. It will start debugging at the currently executing line in the job. If you want the job to wait for you at a specific line so you can start debugging at that point, use the Wait-Debugger cmdlet in the script being executed by the job.
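A minimal sketch, assuming PowerShell 5.0 or later:

$job = Start-Job -ScriptBlock {
    $data = 1..5
    Wait-Debugger                       # the job suspends here until a debugger attaches
    $data | ForEach-Object { $_ * 2 }
}

Debug-Job -Job $job                     # attaches the debugger at the Wait-Debugger line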

Francois Delport

PowerShell Active Directory Interaction Without Using Active Directory Modules

I recently had the requirement to interact with Active Directory using PowerShell but without using the *AD* cmdlets from the ActiveDirectory PowerShell module. The reason being the script would be running from a server that doesn’t meet the requirements to install the module and was running PowerShell V2. In this post I’ll be looking at two alternatives to achieve PowerShell Active Directory interaction without using the Active Directory module. Just to reiterate, the easiest way to interact with Active Directory from PowerShell is the Active Directory module, link here, but if you can’t use it these alternatives will come in handy.

 

The first alternative is to use the System.DirectoryServices .NET classes directly to search for and manipulate objects.

Find an object example:

$domain = New-Object DirectoryServices.DirectoryEntry
 $search = [System.DirectoryServices.DirectorySearcher]$domain
 $search.Filter = "(&(objectClass=user)(sAMAccountname=UserToLookFor))"
 $user = $search.FindOne().GetDirectoryEntry()

When it comes to creating objects you can create generic directory entries and populate their Active Directory attributes directly, but that requires knowledge of the attributes beforehand.

$objDomain = New-Object System.DirectoryServices.DirectoryEntry
$obj = $objDomain.Create("$name", "CN=ContainerName")
$obj.Put("AttributeName", "AttributeValue")
$obj.CommitChanges()

Or if you are on .NET 3.5 or higher you can use classes from the System.DirectoryServices.AccountManagement namespace to create typed objects.

Add-Type -AssemblyName System.DirectoryServices.AccountManagement
$ctx = [System.DirectoryServices.AccountManagement.ContextType]::Domain
$obj = New-Object System.DirectoryServices.AccountManagement.UserPrincipal -ArgumentList $ctx
$obj.Name = "test"
$obj.Save()
The second alternative is using the Active Directory Service Interface (ADSI). I mostly used the Exists static method on the ADSI class to check if objects exist in Active Directory.

[adsi]::Exists("LDAP://OU=test,DC=domain,DC=com")

You can also use the ADSI PowerShell adapter to manipulate objects, there are a few examples here on TechNet. I found the classes from the System.DirectoryServices namespaces easier to use when it comes to manipulating objects, but the Exists method works well if you already have the distinguished name of an object.

Francois Delport

Pester Testing And PowerShell Development Workflow

In this post I’m going to have a look at Pester testing and PowerShell development workflow. I’ll be using Visual Studio and Team Foundation Server/Visual Studio Team Services to keep it simple. What I’m going to show won’t be news for developers, but it can help to put testing in perspective for non-developers.

Developer WorkFlow
Firstly, let’s have a look at the typical points during development when you want to run your tests, and then we will dive into the details of getting it to work.

  • Running tests locally while developing.
    This depends on the development methodology you follow.

    • If you practice TDD you will be writing unit tests as you develop, a few lines at a time.
    • Depending on the size of the script most other developers will be writing a function and then a unit test for it, or the whole script and then multiple unit tests. Remember it is better to write the tests while the code is still fresh in your mind.

    These scenarios work seamlessly if you use Visual Studio with PowerShell tools; Visual Studio will discover your Pester tests and show them in the test explorer.

Side Note: You can also use the tests to make development easier by mocking out infrastructure that is tricky to test like deleting user accounts in AD or deleting VMs.

  • Running tests with a gated check-in. In this case you want VSTS/TFS to run the build and test suite before the code is committed to the main code repository. VSTS/TFS supports check-in policies and one of them requires the build to pass before the code is committed.
  • Running the tests after check-in as part of the build process. This is how most developers will run it and although VSTS/TFS doesn’t have native support for Pester tests you can run them using a PowerShell script. Usually the whole test suite is executed at this point and it can include integration tests.

For the last two scenarios to work you have to be able to install the Pester and PSScriptAnalyzer modules, as well as any other modules required by your script, on the build server. If you are using VSTS hosted build agents this is a bit of a problem, since you need administrator access to install modules in the default locations for PowerShell. You can however manually import a module from a local folder during the build process. You can include the modules in your project, or store them in a central location or repo and share them between your scripts. Some modules can even be installed from the Marketplace as a build step. If you are hosting your own build agents or using TFS this is not a problem; you can install them as part of your build server setup or on demand.

Running A Pester Test
To execute Pester tests you have to run a PowerShell script that calls Invoke-Pester. By default Pester will look for all scripts called *.Tests.ps1 recursively, but you can specify the script to test and even the specific functions in a script. Read the full documentation here.
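A minimal sketch, assuming Pester 3 or 4 syntax; the script and test names are placeholders:

Invoke-Pester                                                          # run every *.Tests.ps1 under the current folder
Invoke-Pester -Script .\MyModule.Tests.ps1 -TestName 'MyFunction*'     # run one script and a subset of its tests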

Retrieving Test Results
Pester can output the test results in NUnit XML format; consult the documentation for the syntax and output options. VSTS/TFS can display this format on the test results tab. It doesn’t seem to do the same for the code coverage results, but you can still attach the code coverage output to your build artifacts by using VSTS/TFS commands.

Alternatively you can roll your own test results by passing the -PassThru parameter to Invoke-Pester; this will return the results as a typed object. You can use this to transform the results into any format you want.
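A sketch combining both options, again assuming Pester 3 or 4 syntax:

$results = Invoke-Pester -Script .\Tests -OutputFormat NUnitXml -OutputFile .\TestResults.xml -PassThru
"{0} of {1} tests passed" -f $results.PassedCount, $results.TotalCount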

Francois Delport

PowerShell Function Output, Memory Management And ConvertTo-HTML

In this post I’m going to cover PowerShell function output, memory management and ConvertTo-HTML.

Function Output And Return Keyword
In PowerShell the return keyword is just an exit point for your function; it doesn’t explicitly set the return value of the function. Everything written to the output stream will be returned by your function, and calling return with a value simply adds that value to the output.

Sometimes you end up with extra output returned from your function which you may not want. To determine where the extra output came from, don’t capture the result in a variable; rather let it go to the console so you can see at which point it is written. Then pipe the offending cmdlets to Out-Null to suppress the output.
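A minimal sketch of the problem and the fix; the cmdlet and path are only illustrative:

function New-LogFile {
    New-Item -Path 'C:\Temp\app.log' -ItemType File    # emits a FileInfo object into the output stream
    return 'created'                                   # the function now returns BOTH objects
}

function New-LogFile {
    New-Item -Path 'C:\Temp\app.log' -ItemType File | Out-Null   # suppress the unwanted output
    return 'created'                                             # only the string is returned
}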

Memory Management
Since PowerShell uses .NET behind the scenes the same garbage collection rules apply. Memory management is not usually a problem if you execute the script and close PowerShell afterwards, since the memory will be reclaimed. If you need to clean up memory during the execution of the script, use Remove-Variable or Clear-Variable, depending on whether you want to use the variable again later in the script. This will make the memory from that variable available for garbage collection. The .NET CLR will do a garbage collection when there is memory pressure; you don’t have to call GC.Collect() explicitly unless your script can’t handle the delay when garbage collection happens. In that case you should call GC.Collect() at a point in time where it is acceptable for your script, but this usually applies to real-time systems, not the tasks you script with PowerShell.
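A minimal sketch; the variable is a placeholder for something large:

$bigData = Get-Content -Path 'C:\Logs\huge.log'   # large array of strings
# ...process $bigData...
Remove-Variable -Name bigData                     # done with it, make the memory eligible for collection
# or, if you want to reuse the variable later in the script:
Clear-Variable -Name bigData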

ConvertTo-HTML And Strings
If you try to create an HTML table from the contents of a string array you will see something like this:

[Screenshot: the HTML table contains only the string lengths]

This is because ConvertTo-HTML builds up the table using the properties of the input object, provided they can be displayed. A string object only has two properties Length and Chars[Int32] so it is adding the Length property to the table. To fix this you can make your own object and add a property to it that will be used in the table, in this case Text.

$strings = @('one', 'two', 'three')
$newstrings = ($strings | ForEach-Object { New-Object -TypeName PSObject -Property @{ "Text" = $_.ToString() } })
$html = $newstrings | ConvertTo-Html

[Screenshot: the HTML table now shows a Text column with the string values]

You can also instruct ConvertTo-HTML to include or exclude properties if you don’t want all the properties to show in the table.
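For example, limiting the table to the Text property (a sketch reusing $newstrings from above):

$html = $newstrings | ConvertTo-Html -Property Text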

Note: Add-Member can only add properties to PSObjects, so we couldn’t add a member to the .NET string objects directly. But if you, for instance, used Get-Content to read the strings from a file, they will be wrapped as PSObjects and you can add your own properties to the existing objects without creating new ones.

$PSObjectStrings | ForEach-Object { Add-Member -InputObject $_ -Type NoteProperty -Name "Text" -Value $_.ToString() }

Francois Delport

Creating Your Own PowerShell Host

In this post I’m going to have a quick look at creating your own PowerShell host. I’ve been looking into hosting PowerShell in a Windows service using C# for a new project, and this post will serve as a quick intro. Most of the information in this post comes from this MSDN link but I’m hoping to create a one-page intro for busy people.

PowerShell Runspaces
When you execute PowerShell commands it is actually the runspace that is doing the execution work and handling the execution pipeline. The host is mostly responsible for handling input and output streams to interact with the runspace. Examples of hosts are PowerShell.exe, PowerShell ISE or a custom console, WPF, WinForms or Windows service application. The different hosts interact with the PowerShell runspace behind the scenes. When you create your own host you are creating a runspace instance and passing it commands to execute.

Creating A Runspace
Creating the default runspace is pretty easy, just call:

PowerShell ps = PowerShell.Create();

Note: You will need a reference to System.Management.Automation for the code to work.

You can do a lot more to customise the runspace instance. You can set initial variables, configure PowerShell options, import modules and set startup scripts. You can even constrain the commands that each runspace is allowed to execute. Some of the modifications can be applied to the default runspace while others require creating an empty initial state and runspace explicitly. Example here.

Running commands
Once you have a runspace you can give it commands to execute. You do this using the AddCommand and AddParameter methods of the PowerShell object.

powershell.AddCommand("sort-object").AddParameter("descending").AddParameter("property", "handlecount");

You can add multiple commands to the pipeline before executing them, the results of the commands will be piped together. To start a new statement and a new pipeline call the AddStatement method. You can add script snippets or script files to the pipeline using the AddScript method. You can execute commands synchronously by using the Invoke method or asynchronously using the BeginInvoke method. Example here.

Handling Output
If you invoked the command synchronously you can get the output from the result of the Invoke method.

Collection<PSObject> results = powershell.Invoke();

If you called the command asynchronously you can also wait for it to complete execution or even better you can use events to handle the output as it occurs, example here.

Error Handling
PowerShell has separate streams for different kinds of output; if you look at the PowerShell object’s Streams property you will see there are streams for Debug, Error, Information, Progress, Verbose and Warning. When non-terminating errors occur they are written to the Error stream but the pipeline won’t raise exceptions. When terminating errors occur the pipeline throws a RuntimeException and you can handle it in your code with a try..catch block.

Remote Connections
For commands that don’t have a ComputerName parameter to execute remotely you have to create a remote runspace; it is a bit like creating a remote session using Enter-PSSession. You create a WSManConnectionInfo object pointing to an endpoint on the remote machine and then pass the WSManConnectionInfo object to the RunspaceFactory.CreateRunspace method. Example here.

Interacting With The Host
You can interact with the host from your Powershell scripts, although it is not really applicable to server automation it might be handy for user utilities. You can for instance manipulate variables and controls on the host from your PowerShell script. Example here.

Francois Delport

Error Handling In PowerShell

In this post I’m going to bring together a few odds and ends relating to error handling in PowerShell. I sometimes forget some of the features and it will be handy for future me and hopefully someone else to have it all in one place.

Terminating vs Non-Terminating Errors
In PowerShell errors can be terminating or non-terminating. Terminating errors will terminate the pipeline and your script will end if you didn’t catch the error in a try..catch block. Non-terminating errors will by default write an error message but continue with the next line of the script. You can control the behaviour of non-terminating errors using the $ErrorActionPreference variable at script level or the -ErrorAction parameter at command level. The possible values are Stop, Inquire, Continue (the default), Suspend and SilentlyContinue. Stop will raise a terminating error from non-terminating errors, Inquire will prompt the user to continue, and SilentlyContinue will not display the error but will continue executing. Under the covers non-terminating errors call the WriteError method while terminating errors call the ThrowTerminatingError method or raise an exception; there are some guidelines here to implement proper error reporting in your own scripts.
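A minimal sketch of the difference, using a path that doesn’t exist:

Get-ChildItem -Path 'C:\DoesNotExist'                      # non-terminating: writes an error and the script carries on
Get-ChildItem -Path 'C:\DoesNotExist' -ErrorAction Stop    # promoted to a terminating error, the pipeline stops here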

Try..Catch blocks
You can use try..catch blocks to catch terminating errors and handle them gracefully without your script terminating. By default non-terminating errors will not be caught by try..catch blocks; you have to use -ErrorAction or $ErrorActionPreference to change their error behaviour to Stop, which gives the same behaviour as terminating errors. Terminating errors caught by try..catch blocks will not display the usual red error message, which makes your script output look much better and lets you control how you want to display the errors.
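A minimal sketch:

try {
    Get-Content -Path 'C:\DoesNotExist.txt' -ErrorAction Stop
}
catch {
    Write-Warning "Could not read the file: $($_.Exception.Message)"
}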

Storing And Retrieving Errors
PowerShell will keep errors that occurred in your script in the $Error variable. It appends errors to this array, so you have to clear it yourself if need be by calling $Error.Clear(). You can for instance set $ErrorActionPreference to SilentlyContinue to prevent error messages on screen and get the errors from the $Error array when you need them. You can also pass a variable to cmdlets that they can use to store errors that occurred during execution by specifying the -ErrorVariable parameter and the name of your error variable. By default this variable will only show errors that occurred in the last command executed; it is cleared before every execution unless you specify that it should append errors to the variable by adding a plus sign to the variable name when you pass it to the command. Note: you pass in the variable name without the $ sign:

Get-ChildItem -path 'C:\DoesNotExist' -ErrorVariable +MyErrorVar

Viewing Error Details
By default you will not see much detail when you view an error object, but you can pipe it to Format-List and pass the -Force parameter to see all the properties.

$Error[0] | fl -Force

Or in this case I used the Show-Object command from the PowerShell Cookbook module. There is quite a lot going on here: the objects in the array are ErrorRecords, they have an Exception property that is the underlying .NET exception, and it can contain an inner exception which is also a .NET exception.

[Screenshot: an ErrorRecord expanded in Show-Object]

These exceptions can be nested quite deep and can be a valuable source of information when tracking down errors.
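A small sketch that walks the chain of inner exceptions for the most recent error:

$ex = $Error[0].Exception
while ($ex) {
    "{0}: {1}" -f $ex.GetType().FullName, $ex.Message
    $ex = $ex.InnerException
}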

Francois Delport

PowerShell Workflows And When To Use Them

In this post I’m going to cover PowerShell Workflows and when to use them. PowerShell workflows give you capabilities scripts can’t but they also bring constraints and nuances. You have to think about your requirements to determine if you really need workflows or if you can use a different approach to achieve the same end result with less fuss.

When do you need workflows?

  • Long running tasks and I mean really long running, think closer to hours not minutes.
  • Surviving local reboots (expected and unexpected). You can checkpoint at specific locations in the workflow to resume from; more detail on this later in the post.
  • Persistence. You can serialise the workflow state to disk and resume it later. For instance if there is a manual step in the workflow you can suspend it, perform the manual step and resume it again.
  • Fine grained flow control combining parallel and sequential execution. This one is not that obvious when you look at a workflow script but you can mix and match and nest parallel and sequential activities.
  • Workflows are more robust. The engine creates separate runspaces for activities, so if an activity crashes it won’t bring down the whole workflow.

There are probably more but these are the ones I ran into.

When don’t you necessarily need them?

There are situations that look like they need workflows but may not necessarily. Using workflows in these cases won’t be wrong, it will work but it might be overkill.

  • Scaling out. Running a simple task on multiple remote servers in parallel can be achieved using remoting. There might be a point where workflows will scale better, but only if you avoid InlineScript activities.
  • Handling reboots. To a certain level DSC can also deal with reboots by configuring ActionAfterReboot and RebootNodeIfNeeded in the LCM config (see the MSDN article).
  • Running tasks in parallel on the same machine. For simple scenarios you can use PowerShell jobs to achieve the same result.

How to survive reboots

Surviving reboots seems to creep into workflow discussions a lot, so I’ll have a quick look at it.

The simplest scenario, which is not restricted to workflows by the way, is running a workflow or script that is acting on remote machines. In this case you restart the remote machine using the “Restart-Computer -Wait” command or activity for a workflow. You don’t have to do anything extra since the machine running the workflow or script is not restarting and the script or workflow will continue running after the reboot.

If you want to restart the computer running the workflow you have to do a bit more work, and this only works for workflows, not scripts. You still use the Restart-Computer activity, which will checkpoint the workflow and suspend it before it restarts the machine. When a workflow is suspended it creates a job of type PSWorkflowJob that will survive the reboot. After the machine comes back from the reboot you have to resume the workflow. If you call Get-Job after the reboot you will see your workflow as one of the jobs, and you can then call Resume-Job to resume it.

[Screenshot: Get-Job showing the suspended workflow job]

Then you use the normal job-related commands to handle the job and see the output, using Receive-Job etc.
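A minimal sketch of resuming after the reboot; the job name is a placeholder:

Get-Job -State Suspended                    # the workflow shows up as a PSWorkflowJob
Resume-Job -Name 'MyWorkflowJob'
Receive-Job -Name 'MyWorkflowJob' -Wait     # collect the workflow output once it finishes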

If you look at this link, reboot scenarios are explained in detail and there is also a section that shows how to automate the resumption of workflows.

To survive unexpected reboots you have to checkpoint your workflow using the Checkpoint-Workflow command at logical points in your workflow. Put some thought into where you place the checkpoints to achieve the correct outcome after a reboot.

Note: You cannot checkpoint inside an InlineScript activity since it cannot be serialised.
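A minimal sketch of placing a checkpoint between activities; the workflow name and steps are placeholders:

workflow Update-LabServers {
    InlineScript { 'install updates' }    # placeholder for real work
    Checkpoint-Workflow                   # state is persisted here, the workflow can resume from this point after a reboot
    InlineScript { 'validate services' }
}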

Francois Delport