How To Convert A Linux Disk Or Image File To VMDK File

In this post I will show you how to convert a Linux disk or raw image file to a VMDK file so you can create a VMware Workstation virtual machine from it. The basic steps are creating a raw image file of the physical disk using dd and then converting that raw image file to a VMware disk (VMDK) file. I was using Windows to do this, but I'll mention some Linux instructions as well.

Why this scenario

For online Linux machines you can use vCenter Converter to create a virtual machine, but the catch is that it only supports ESXi as a destination. In my case I had to create a virtual machine from the disk of an offline machine for VMware Workstation on a Windows machine. At the time I couldn't find a free native Windows tool that would create a raw image of the Linux disk. Potentially you could use disk imaging software to clone the disk into a virtual machine, but the free Windows tools I tried didn't include the necessary drivers for VMware disk controllers in their boot environments.

Creating the raw image file

Running dd on Windows requires Cygwin; there are no additional settings or packages required and I just accepted all the defaults during installation. It is important to run Cygwin as administrator in Windows or you will get permission denied errors when you run dd. Once inside Cygwin run the following to identify your attached disks:

cd /dev
ls -la

The device name of your disks will depend on the type and number of attached devices; in my case it was /dev/sdc (the third disk). You can read about Linux device naming here. I wanted to write the image file to my Windows D: drive, which is mounted as /cygdrive/d. Once you have identified the source and destination you can run dd to create the image file.

dd if=/dev/sdc of=/cygdrive/d/diskimage.img
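
Imaging a whole disk with the default settings can be slow. If your dd build supports it (the GNU coreutils dd shipped with Cygwin does), you can speed things up with a larger block size and show progress; the values below are just a reasonable starting point:

dd if=/dev/sdc of=/cygdrive/d/diskimage.img bs=4M status=progress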

Note: I looked into using Windows Subsystem for Linux instead of Cygwin, but from what I found WSL can't address block devices, so dd won't work there.

Converting the raw image file to VMDK

I used Starwind V2V Converter to convert the raw img file to a VMDK file. It is free, but you have to supply some personal details to get the download link. The app is easy enough to use so I won't show all the steps here; just choose Local file as the source and Local file VMDK as the destination. Starwind V2V has the option to convert a physical disk to a VMDK file, but it didn't show my Linux disks as a source, only the Windows ones.

On Linux you can use QEMU to convert img files to VMDK and other virtual disk formats. You can also use QEMU to convert a physical disk directly to a VMDK file without creating the raw image file first.
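
For reference, a couple of qemu-img examples (the file and device names below are placeholders):

# Raw image file to VMDK
qemu-img convert -f raw -O vmdk diskimage.img diskimage.vmdk
# Physical disk straight to VMDK, run with root privileges
sudo qemu-img convert -f raw -O vmdk /dev/sdc diskimage.vmdk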

I also looked into using QEMU in Cygwin instead of Starwind V2V Converter, but at the time QEMU packages weren't available for Cygwin. I guess you could compile it from source and install the dependencies if you really want to go that way.

Creating the VM

Once you have the VMDK file you can create a new Linux virtual machine in VMware Workstation. Be sure to match the Linux distribution of your source machine. I was converting an Ubuntu machine and it booted successfully without any additional work.

LinuxVM

The final step is to install open-vm-tools for desktop or server:

#Desktop
sudo apt install open-vm-tools-desktop
#Server
sudo apt install open-vm-tools

Francois Delport

Read SMART Attributes Using PowerShell And Smartmontools

In this post I'll show you how to read SMART attributes using PowerShell and Smartmontools. I decided on PowerShell and Smartmontools because they work from the command line, which is great for Windows Server Core machines, and they can be scheduled to run in the background. I also needed a way to receive notifications if the SMART attributes predicted disk failure; I decided to use Azure Log Analytics and Azure Monitor Alerts since I already use them for other tasks.

DISCLAIMER: I used this for my lab machines, it is not a practical or scalable solution for production.

Checking Drives For Failure Based On SMART Attributes

Windows already exposes the predicted failure status of drives based on SMART attributes so you don’t have to interpret the attributes yourself.

Get-WmiObject -Namespace root\wmi -Class MSStorageDriver_FailurePredictStatus

But if you want to you can retrieve more detailed SMART attribute data using PowerShell. There are various commands returning different levels of detail, for example:

Get-Disk | Get-StorageReliabilityCounter | fl
Get-WmiObject -Namespace root\wmi -Class MSStorageDriver_FailurePredictThresholds
Get-WmiObject -Namespace root\wmi -Class MSStorageDriver_FailurePredictData

If you feel brave you can read the raw SMART attributes but you’ll have to manipulate them to get something in a human readable form.

Get-WmiObject -Namespace root\wmi -Class MSStorageDriver_ATAPISmartData | Select-Object -ExpandProperty VendorSpecific

For my purposes I retrieve the disks that have the PredictFailure property set to true.

$failures = Get-WmiObject -Namespace root\wmi -Class MSStorageDriver_FailurePredictStatus -ErrorAction SilentlyContinue | Select InstanceName, PredictFailure, Reason | Where-Object -Property PredictFailure -EQ $true

To receive notifications for the predicted failures I write an error event to the Windows System Event Log.

Write-EventLog -LogName 'System' -Source 'YourSourceName' -EntryType Error -EventId 0 -Category 0 -Message "SMART Failure Prediction"

Before you write to the Event Log you have to register as a source.

New-EventLog -LogName 'System' -Source 'YourSourceName' 

If you are not registered as an event source you will get “The description for Event ID 0 from source source cannot be found.” as part of your event message.
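
Putting the pieces together, a minimal sketch could look like the following (the 'SmartCheck' source name is just an example and must have been registered with New-EventLog as shown above):

$failures = Get-WmiObject -Namespace root\wmi -Class MSStorageDriver_FailurePredictStatus -ErrorAction SilentlyContinue |
  Where-Object -Property PredictFailure -EQ $true

if ($failures) {
  # Log one error event listing the drives that predict failure
  Write-EventLog -LogName 'System' -Source 'SmartCheck' -EntryType Error -EventId 0 -Category 0 `
    -Message "SMART Failure Prediction: $($failures.InstanceName -join ', ')"
}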

The full script can be found here.

The error event will be picked up by Azure Log Analytics if you are collecting the System Event Log error entries.

Azure Log Analytics Data Sources

If you need more information on creating alerts read this post.

Retrieving SMART Attributes Using SmartMonTools

Apart from retrieving the failure status I also want to retrieve some extra SMART attributes from the disks. This data is for the next stage of my pet project to create some reports and track the degradation of the disks over time.

I'm using smartctl.exe from the Smartmontools package (link). It works from the command line and can return output in JSON format. You can unzip the Windows installer to get the bin folder containing the exe files.

In short, I scan for drives, retrieve the SMART attributes for each one in JSON format and dump the attributes I need to a CSV file. The full script is too long to post but you can find it here.
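
As a rough sketch of the approach (assuming smartctl.exe is on the path and your smartctl build supports the -j JSON switch; the ata_smart_attributes structure applies to ATA drives and may differ for NVMe or SCSI):

# Enumerate drives, then dump selected SMART attributes per drive to CSV
$drives = (smartctl --scan -j | Out-String | ConvertFrom-Json).devices
foreach ($drive in $drives) {
    $data = smartctl -A -j $drive.name | Out-String | ConvertFrom-Json
    $data.ata_smart_attributes.table |
        Select-Object @{n='Drive';e={$drive.name}}, id, name, value, worst, thresh |
        Export-Csv -Path 'smart.csv' -Append -NoTypeInformation
}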

Later on I will import this into a database for reporting. You could potentially leave it as JSON if you are going to use a document database but I’m leaning towards SQL at the moment.

Francois Delport

How To Install OMS Agent On Server Core

In this short post I’ll cover how to install OMS agent on a Server Core instance.

The first step is to download and extract the installer. This step must be performed on a machine with the Windows GUI installed, Server Core instances won’t work. To extract the installer run:

MMASetup-AMD64.exe /c [/t:ExtractPath]

The /t parameter is optional, you will be prompted to specify the extraction path in the GUI if it wasn’t specified.
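
For example, to extract the installer to a specific folder (the path below is just an example):

MMASetup-AMD64.exe /c /t:C:\Temp\OMSAgentSetup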

To install OMS agent on a Server Core instance you have to run the installer silently by passing the /qn switch along with some other bits of information required by OMS.

See the example PowerShell script below:

$WorkSpaceID = "xxxxxx"
$WorkSpaceKey="xxxxx=="

$ArgumentList = '/qn ADD_OPINSIGHTS_WORKSPACE=1 ' + "OPINSIGHTS_WORKSPACE_ID=$WorkSpaceID " + "OPINSIGHTS_WORKSPACE_KEY=$WorkSpaceKey " + 'AcceptEndUserLicenseAgreement=1'

Start-Process '.\setup.exe' -ArgumentList $ArgumentList -ErrorAction Stop -Wait -Verbose | Out-Null

To confirm the OMS agent is installed you can run the following script:

Get-WmiObject -Query 'select * from win32_product where Name = "Microsoft Monitoring Agent"'

If it was successfully installed you will see the connected agent in the OMS portal after a while.

On a side note, if you want to remove the OMS agent from a Server Core instance you can run the following script:

$agent = Get-WmiObject -Query 'select * from win32_product where Name = "Microsoft Monitoring Agent"'
$agent.Uninstall()

Francois Delport

Integrating Google Home And IFTTT Webhooks

In this post I'll be integrating Google Home and IFTTT Webhooks. Before we begin it is important to understand that you are not specifically targeting Google Home devices but rather Google Assistant. Google Assistant can be running on a multitude of different devices like phones or tablets that are not necessarily on the same local network as your Google Home device. With that in mind it makes sense that any HTTP calls made by Google Assistant and IFTTT will execute in the cloud, not on the device you are using.

To reach local endpoints from Google Assistant you need a publicly accessible endpoint, which means you have to open ports on your firewall, have a static IP or DNS name and run a web server to respond to the HTTP calls. Alternatively you can use a relay service like Azure Relay Hybrid Connections or one of the self-hosted open source options like Thinktecture RelayServer to enable public connections without opening firewall ports. Both approaches bring up a host of security considerations, which I decided to bypass for now by using Azure Functions as the target for the HTTP calls while testing.

The Solution

Connect Your Google Account And IFTTT
This is pretty straightforward so I won't explain the process; just follow this example.

Create The Azure Function
You can start with a basic HTTP triggered function using this example. For added security I’ll be making a few changes to the function.

  • Under Function App settings -> Networking -> SSL, switch on HTTPS only.
  • Under Integrate  -> Selected HTTP Methods uncheck everything except POST.
  • Under Integrate -> Authorization level select anonymous.

At the time of writing IFTTT didn't support custom headers in their webhooks, but I needed a way to send an auth key to the Azure Function. I decided to add the auth key to the body of the request and verify the key myself. Usually you would rely on the built-in Azure Functions authentication, which requires an "x-functions-key" header.

To test the WebHook call I wanted to extract information like the auth key, ingredient and calling IP address to display it in the log file. You can find the C# code I used in my function here.

Create The IFTTT Applet
In IFTTT create a new applet and choose Google Assistant as the service and “Say a phrase with a text ingredient” as the trigger. Select Webhooks as the action. For this fictional scenario I wanted to shut down devices remotely so I configured my trigger and action like this.

Integrating Google Home And IFTTT Webhooks

Take note of the request JSON body: it contains the auth key and the text ingredient you specified in the trigger using the $ sign. You also have the option to use a number ingredient, in which case you use a # sign to represent the number. Use the Add Ingredient button to add your ingredient to the request body or URL; don't type $ or # yourself.
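
As a purely illustrative example (the field names and key are my own convention, not something IFTTT or Azure Functions requires), the request body could look something like this, where the device value is the text ingredient added with the Add Ingredient button:

{
  "authkey": "your-secret-key-here",
  "device": "{{TextField}}"
}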

And here is the result.

Integrating Google Home And IFTTT Webhooks

I tested from my phone but it worked the same from Google Home, just no screenshots 🙂

Francois Delport

Adding Custom Log Files To OMS Log Analytics

In this post I will be adding custom log files to OMS Log Analytics. Custom log files give you the ability to add plain text logs into Log Analytics. Depending on your situation it might be easier to first explore structured logging options like Windows Event Log, Syslog or Application Insights since custom logs have a few limitations.

Configure Custom Logs

At the time of writing custom logs were still in preview; to use them you have to enable the feature in the OMS portal under Settings -> Preview Features. If you are using the Azure portal and the feature is not enabled you won't see the + button to add a custom log. Once you have custom logs enabled you can use the OMS portal or the Azure portal to add a custom log. In the OMS portal open the settings menu by clicking the gear icon in the top right; under the Data -> Custom Logs menu you will see an Add button to add a custom log.

Adding Custom Log Files To OMS Log Analytics

It is a pretty simple process: follow the wizard to select a sample file, choose the record delimiter (which can be a timestamp or a newline), specify the paths to monitor and provide a name for the custom log. Make sure you give the custom log a reasonable name since you will be using it as the identifier in queries.

Take note of the restrictions for custom logs which can be found here. If your custom logs violate any of the criteria they won’t show up in Log Analytics. My custom logs took 30 minutes to show up in Log Analytics but your mileage can vary.
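
Once the data shows up you can query the custom log table, which gets a _CL suffix appended to the name you chose. For example, assuming you named the custom log MyAppLog:

MyAppLog_CL
| sort by TimeGenerated desc
| take 10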

Custom Fields

Log Analytics stores the data from custom log text files in a single field called RawData. To get anything useful out of the custom logs you have to create custom fields over the data. Custom fields are not unique to custom logs; you can extract custom fields from any existing field.

To create a custom field, execute a search query that displays the field you want to extract from. In the case of your custom log the table name will be the custom log name. Once you have the results, click on the ellipsis to the left of the field name and choose ‘Extract Fields From …’.

Adding Custom Log Files To OMS Log Analytics

On the next screen you can highlight the data you want to extract and match it against current records to refine the extraction process. You can click on records in the search results to further modify the extraction process.

Adding Custom Log Files To OMS Log Analytics

Once you are satisfied with the result, save the extraction; detailed instructions can be found here.

Take note: if you create a new custom field your existing data won't be updated. The custom field will only appear on records ingested by Log Analytics after the custom field was created.

Francois Delport

How To Run Console Applications On A Schedule On Azure Functions

In this post I will show you how to run console applications on a schedule on Azure Functions. This feature is experimental and Azure Functions doesn’t have a project type for scheduled console applications but you can launch a console application from a PowerShell or Batch project using a Timer trigger.

Creating The Function

If this is your first function follow these instructions to create the Function App. Once you have the Function App create a new function inside it by clicking on the plus button next to the Function App. You have to enable experimental language support to see PowerShell and Batch projects.

How To Run Console Applications On A Schedule On Azure Functions

Choose PowerShell or Batch from the language dropdown and select Timer as your trigger.

Once the function is created you have to upload your console application. The easiest way I found is to use Kudu, especially when uploading multiple files. You can access Kudu from the Developer Tools menu in the Platform Features tab in the Functions blade, or you can browse to it directly using https://{YourFunctionApp}.scm.azurewebsites.net/.

Alternatively you can upload files from the Azure Portal by clicking View files in the Function app blade.

In Kudu click on the Debug console menu and select PowerShell or CMD. Browse to the /site/wwwroot/{YourFunction} folder, where you will see your function.json and run.ps1 files already there. Drag and drop any files required onto the browser window to upload them. You can create folders using the plus icon or the shell window. Make sure all the DLLs required to run the application are also copied.

Edit the run.ps1 file using Kudu or the Azure Portal to set the working directory to the folder where your application files are located and add a line to run your application.
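
As a minimal sketch, run.ps1 could look something like this (the folder and executable names are hypothetical; adjust them to your own function and application):

# run.ps1 - executed by the Timer trigger
Set-Location 'D:\home\site\wwwroot\MyTimerFunction\bin'
& .\MyConsoleApp.exe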

How To Run Console Applications On A Schedule On Azure Functions

Specify the schedule

You will find the schedule on the Integrate menu of the function. It uses a CRON-like syntax with an extra field for the seconds; you can read more here. The default is every 5 minutes.
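
The schedule ends up in the function's function.json timer binding; for example, the default of every 5 minutes (seconds field first) looks like this (the binding name is just the template default):

{
  "bindings": [
    {
      "name": "myTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 */5 * * * *"
    }
  ]
}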

Logging and exit codes

To output text to the Azure Functions log you can use the standard Console.Write/WriteLine methods in your code. If you need something more advanced you can also connect Application Insights: if you connect Application Insights at design time you can emit custom logging and telemetry, and if you add it at runtime you get the standard .NET telemetry. You should return specific exit codes from your application to make troubleshooting easier, more here.

Advanced Scenarios

If you have a console app that requires input parameters you can use an HTTP trigger with a C# function to call the console application after you have parsed the input parameters.

Francois Delport

Getting Started With C#, .NET Core And VSCode

Getting started with C#, .NET Core and VSCode is quite different from working with Visual Studio and the full .NET Framework. In this post I'm going to share what I learnt to get you up and running quickly with your first .NET Core C# console application on Windows.

What to install

Apart from VSCode you also need the .NET Core SDK, which installs the .NET Core CLI tools required by VSCode; you can find the SDK here. In VSCode you have to install the C# extension.

Create A New C# Project

You can't import solutions from Visual Studio into VSCode; the best way seems to be creating a new project and adding the .cs files you need. To keep things organised, create a new empty folder to store the project. In VSCode open the empty folder and run

dotnet new 

in the terminal to create a new console application in the folder. You will see a program.cs and a {projectname}.csproj file. VSCode will warn you that additional assets are required to build and debug the application; select Yes to install them or else the build and debug tasks won't work later on. You can see a list of the available templates by running

dotnet new -all

Build And Run A C# Project

To build your project select Run Build Task from the Tasks menu and select Build in the next dropdown.

You can also build your project from the command line by running

dotnet restore
dotnet build

Debugging is pretty much self-explanatory: select it from the Debug menu or press F5. To run your application without debugging run

dotnet run

in the terminal.

Adding .NET Core References And Packages

VSCode is a handy little editor but not really meant for large projects. That said, if you really want to use it for a non-trivial application you will probably split the solution into multiple projects and organise your code in folders.

To add source files contained in sub folders to your project select Add Folder to Workspace.

Adding references to your project basically entails editing the .csproj file, but there are dotnet commands and extensions to make it easier.

To add a project reference run

dotnet add {yourproject.csproj} reference otherproject.csproj

The path to your project is optional if the project is in the current directory.

To add nuget packages run

dotnet add package PackageName
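
Behind the scenes these commands edit the .csproj file; a NuGet package ends up as a PackageReference entry like the following (package name and version are only illustrative):

<ItemGroup>
  <PackageReference Include="Newtonsoft.Json" Version="11.0.2" />
</ItemGroup>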

Alternatively install the Nuget Package Manager extension and use the Command Palette.

To reference a DLL install the Add Local .NET Reference extension and use the Command Palette.

.NET Core itself is broken up into packages that you add as you need them. For example, if you need LINQ run

dotnet add package System.Linq

There are also Metapackages that contain groups of related packages, more here.

VSCode and .NET Core are still evolving and some tasks can only be accomplished using the CLI tools; the CLI reference can be found here.

Francois Delport

Creating And Restoring Azure Virtual Machine Snapshots For UnManaged Disks

In this post I'm going to take a look at creating and restoring Azure virtual machine snapshots for unmanaged disks. If you are looking for instructions to create and restore snapshots for managed disks read this post. These instructions are for Azure Resource Manager (ARM) virtual machines; for Azure Service Manager (ASM) virtual machines read this post.

In the case of unmanaged disks the underlying VHD blob can be manipulated like any other blob. This means the blob backing unmanaged OS disks can be overwritten, saving you the trouble of recreating VMs just to revert to a snapshot.

I'm not aware of a way to create snapshots for unmanaged disks in the Azure Portal, but there are tools, for example Azure Storage Explorer, that can do it.

Update: You can now create and restore snapshots for unmanaged disks in the Azure Portal in the Storage Account blade.

Creating And Restoring Azure Virtual Machine Snapshots For UnManaged Disks

Creating Unmanaged Disk Snapshots

To create a snapshot using PowerShell you retrieve the storage account context, reference the blob and create a snapshot. To ensure consistency, shut down your virtual machine beforehand.

$ctx = New-AzureStorageContext -StorageAccountName $StorageAccountName `
  -StorageAccountKey $StorageAccountKey
$blob = Get-AzureStorageBlob -Context $ctx -Container $ContainerName `
  -Blob $BlobName
$snapshot = $blob.ICloudBlob.CreateSnapshot()

You can also create incremental snapshots, but that is not available in the PowerShell cmdlets, only through the Azure REST API; more info here.

Restore Unmanaged Disk Snapshots

As I pointed out earlier, in the case of unmanaged disks the underlying VHD file is manipulated like any other blob. You can copy a snapshot over its base blob to restore the blob to the snapshot state. This means you can replace the OS disk of a VM without rebuilding it. You can also copy it to another blob to clone your disk.

Since snapshots have the same name as their base blob you can't retrieve a specific snapshot by its name. The easiest way seems to be retrieving all the snapshots for a blob using the Get-AzureStorageBlob cmdlet and filtering the results to get the specific snapshot you need. For example, to retrieve a snapshot by SnapshotTime:

$snap = Get-AzureStorageBlob -Context $ctx -Prefix $blobName `
  -Container $containerName | Where-Object {$_.ICloudBlob.IsSnapshot `
  -and $_.SnapshotTime -eq $snapshottime }

Or to retrieve the latest snapshot.

$lastsnap = (Get-AzureStorageBlob -Context $ctx -Prefix $BlobName `
  -Container $ContainerName | Where-Object {$_.ICloudBlob.IsSnapshot `
  -and $_.SnapshotTime -ne $null} | `
  Sort-Object -Property SnapshotTime -Descending)[0]

To restore a snapshot you copy it to a new blob object or you can overwrite an existing blob. This makes it very easy to rollback a VHD to a snapshot without recreating the VM.

$lastsnap = (Get-AzureStorageBlob -Context $ctx -Prefix $BlobName `
  -Container $ContainerName | Where-Object `
  {$_.ICloudBlob.IsSnapshot -and $_.SnapshotTime -ne $null} |
  Sort-Object -Property SnapshotTime -Descending)[0]
$snapshotblob = [Microsoft.WindowsAzure.Storage.Blob.CloudBlob] `
  $lastsnap.ICloudBlob
$blob.ICloudBlob.BreakLease()
Start-AzureStorageBlobCopy -Context $ctx -ICloudBlob $snapshotblob `
  -DestBlob $blobName -DestContainer $ContainerName
Get-AzureStorageBlobCopyState -Blob $blobName -Container $ContainerName `
  -Context $ctx -WaitForComplete

Take note: if you try to overwrite a VHD that is attached to a VM you will receive an error message indicating there is an existing lease on the VHD. The BreakLease call removes the lease, and the lease will be created again when you start the VM.

Francois Delport

Creating And Restoring Azure Virtual Machine Snapshots For Managed Disks

In this post I'm going to take a look at creating and restoring Azure virtual machine snapshots for managed disks. These instructions are for Azure Resource Manager (ARM) virtual machines; for Azure Service Manager (ASM) virtual machines read this post. If you are looking for instructions to create and restore snapshots for unmanaged disks read this post.

Why use snapshots?

Snapshots can be used to duplicate a VM relatively quickly since it is faster than copying a blob. You still have to shutdown the source VM to ensure consistency but creating the snapshot only takes a few seconds. Once you have the snapshot you can use it to create new VMs or copies of the VHD blob while the source VM can be powered on again.

Snapshots can also be used as a backup strategy for VMs, although Azure Backup provides better functionality with a lot less effort, albeit at a slightly higher cost since you pay for the Recovery Services vault.

The one advantage of snapshots is the ability to overwrite the OS disk without recreating the VM, but at the time of writing this only works for unmanaged disks.

Managed Disk Snapshots

You can create snapshots in the Azure portal by selecting your managed disk from the disks main menu or the disks section on the VM blade.

Azure Virtual Machine Snapshots For Managed Disks

Take note of the export button, you can use it to export the managed disk to a VHD blob which can be used to create unmanaged disks.

To create a snapshot using PowerShell call the New-AzureRmSnapshot command along with New-AzureRmSnapshotConfig to configure the snapshot options.

$mandisk = Get-AzureRmDisk -ResourceGroupName $rsgName -DiskName $diskname
$snapshot = New-AzureRmSnapshotConfig -SourceUri $mandisk.Id `
  -CreateOption Copy -Location $azureregion 
New-AzureRmSnapshot -Snapshot $snapshot -SnapshotName $Name `
  -ResourceGroupName $rsgName

Restoring Managed Disk Snapshots

At the time of writing you couldn’t overwrite or change the managed OS disk on an existing VM but you can create a new managed disk from a snapshot and then create a new VM from the managed disk.

Update: You can now swap a Managed Disk on a VM by replacing the disk Id with another one. This still involves creating another disk from your snapshot and swapping it but at least you don’t have to recreate the VM anymore. Thanks to Ralph Herold for pointing it out to me.

$vm = Get-AzureRmVM -ResourceGroupName osrg -Name vm2 
$disk = Get-AzureRmDisk -ResourceGroupName osrg -Name osbackup 
Set-AzureRmVMOSDisk -VM $vm -ManagedDiskId $disk.Id -Name $disk.Name 
Update-AzureRmVM -ResourceGroupName osrg -VM $vm

I assume you can do the same for data disks using Set-AzureRmVMDataDisk but I didn’t try it yet. Full documentation here.

You can create a new Managed Disk from a snapshot in the Azure Portal or PowerShell. In the Azure Portal select Create New Resource from the main portal menu and search for Managed Disks to create a new Managed Disk. There will be a Source Type dropdown where you can select Snapshot and you will see a list of your snapshots to use as the source for the new Managed Disk.

Azure Virtual Machine Snapshots For Managed Disks

When the create disk operation is completed select the newly created managed disk and create a new VM from the disk.

Azure Virtual Machine Snapshots For Managed Disks

If you want to script it in PowerShell the steps are basically:

  1. Retrieve the snapshot.
  2. Create a new disk configuration specifying the snapshot as the source.
  3. Create a new managed disk from the disk configuration and attach it to a new virtual machine.

If this is a data disk you can attach it to an existing VM instead of creating a new VM but OS disks can only be attached to new VMs.

...

$snapshot = Get-AzureRmSnapshot -ResourceGroupName $rsg `
  -SnapshotName $yoursnapshotname 
 
$diskconf = New-AzureRmDiskConfig -AccountType $storagetype `
  -Location   $snapshot.Location -SourceResourceId $snapshot.Id `
  -CreateOption Copy

$disk = New-AzureRmDisk -Disk $diskconf -ResourceGroupName $rsg `
  -DiskName $osDiskName
$vm = Get-AzureRmVM ...
$vm = Set-AzureRmVMOSDisk -VM $vm -ManagedDiskId $disk.Id `
  -CreateOption Attach -Windows

...

Full script can be found here.

Francois Delport

Azure Log Analytics Query Quick Start

This post is an Azure Log Analytics query quick start to get you up and running with queries in a few minutes. It follows on from my previous post showing some of the common tasks performed in Azure Log Analytics. The official documentation can be found here.

Query Syntax

Queries in Azure Log Analytics start with a data table name followed by query operators and, optionally, rendering instructions. Data is piped from one operator to the next using the pipe character.

Event
 | where Computer contains "webserver" and EventID == 14
 | summarize count() by Computer
 | render barchart

Common Query Operators

Summarize: Similar to the SQL GROUP BY statement, it applies one or more aggregations, optionally with one or more grouping expressions. For example, to see how many events were logged per computer and when the last one was logged:

Event
| summarize count(Computer), max(TimeGenerated) by Computer, EventLevelName

Distinct: Returns distinct values from one or more column(s)

Event | distinct Computer

Bin: Groups records into bins and works on numbers and dates. For example if you wanted to see how many events were logged per hour.

Event | summarize count(EventID) by bin(TimeGenerated, 1h)

Or group events by their eventid in bins of 1000.

Event | summarize count(EventID) by bin(EventID, 1000)

Or aggregate events by eventid and group on EventLevelName per day. For example to see the failed versus successful entries per day

Event | summarize count(EventID) by EventLevelName , bin(TimeGenerated, 1d)

Join: Just like your typical SQL join.

Event | join kind=inner (SecurityEvent) on Computer

Let: Stores a value in a variable; the value can be a tabular query result or any user-supplied value. The variable can then be used in queries, for example when you have to join the results of two very long queries or store constants used in your queries.

let errors = Event | ... very long query ...| summarize Count = count(EventID) by Computer, bin(TimeGenerated, 1h);

let memory = Perf | ... very long query ... | summarize avg(CounterValue) by Computer, bin(TimeGenerated,1h);

errors | join kind= inner (memory) on TimeGenerated

Project: Selects the columns to include in the query result, just like the SQL SELECT statement.
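
For example, to return only a few columns from the Event table:

Event | project Computer, TimeGenerated, EventID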

Parse: Parses text into columns; the full explanation is too long to cover here (link), but it is really handy for extracting data into columns from free-form text like custom log files.

Event| where EventLog == "DPM Backup Events"| parse RenderedDescription with * "Backup job for datasource: " ServerName " on " *

In this example RenderedDescription looked like this:
"Backup job for datasource: SomeServerName on production server: SomeDPMServerName completed successfully…".

The sections between the quotes in the parse statement are the "guide" strings to look for, including the spaces, and ServerName is the new column name to extract.

Render: Renders a chart; there are too many options to mention here, look at the documentation.

Event
| where Computer contains "local" and EventID == 14
| summarize count() by Computer
| render piechart

Extend: Create calculated columns.

Event | extend RequiresAttention = iif(EventLevel == 4, "Yes", "No" )

Scope: You can create queries that span multiple applications and workspaces. Use the app() and workspace() expressions to include other Application Insights applications and OMS workspaces in your query.

Datetime: You can use basic arithmetic on datetime values. You can subtract two datetime values to get a timespan value representing the difference between them, and you can add or subtract a timespan value from a datetime value. Use todatetime() to convert literals to datetime values; the list of supported literals can be found here. As per the documentation, try to stick to ISO 8601 date formats.
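
A small illustration of datetime arithmetic, adding a calculated column with the age of each event in hours:

Event
| extend AgeInHours = (now() - TimeGenerated) / 1h
| project Computer, TimeGenerated, AgeInHours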

Case Sensitivity: String comparisons are case insensitive except for ==. There are case-sensitive equivalents for most comparison operators, ending in "_cs". Entity names and query operator names are case sensitive.


Francois Delport