Obscure KQL Operators Every Expert Should Know
Unlocking KQL’s Secret Menu—No Decoder Ring Required
Kusto Query Language (KQL) is a powerful tool for querying and analyzing data in Azure Data Explorer, Log Analytics, and other Azure services. While most users are familiar with common operators like where, summarize, and join, KQL hides some lesser-known features that can supercharge your queries. These can help you tackle complex scenarios, optimize performance, and uncover insights that might otherwise remain hidden. In this post, we'll dive into five underutilized KQL operators and functions that every KQL expert should have in their toolkit, complete with practical examples.
1. mv-expand: Unraveling Multi-Value Columns
The mv-expand operator is a gem for dealing with dynamic arrays or property bags in your data. It expands multi-valued columns into multiple rows, making it easier to analyze nested or array-based data.
When to Use It
Use mv-expand when you have JSON arrays or dynamic columns that need to be flattened for analysis, such as extracting individual elements from a list of tags or properties.
Example
Suppose you have a table Logs with a column Tags containing arrays of strings, like ["error", "critical", "alert"]. You want to count occurrences of each tag.
Logs
| where Timestamp > ago(1d)
| mv-expand Tag = Tags
| summarize Count = count() by tostring(Tag)
| order by Count desc
Explanation:
- mv-expand Tag = Tags creates a new row for each element in the Tags array, assigning the element to a new column Tag.
- tostring(Tag) converts the dynamic type to a string for grouping.
- The query then counts occurrences of each tag and sorts by count.
Output:
Tag       Count
error     150
critical  100
alert     50
This is invaluable for analyzing JSON-like data without complex parsing.
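If you also need each element's position within its array, mv-expand can emit an index column via with_itemindex. Here's a minimal sketch against the same Logs table (the TagIndex column name is just illustrative):
Logs
| where Timestamp > ago(1d)
// with_itemindex adds a zero-based position for each expanded element
| mv-expand with_itemindex = TagIndex Tag = Tags
| project Timestamp, TagIndex, Tag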
2. make-series: Time-Series Magic
The make-series operator creates a time-series aggregation, ideal for trend analysis over time. It generates arrays of values aggregated over time bins, which is perfect for visualizing trends or detecting anomalies.
When to Use It
Use make-series when you need to analyze trends over time, such as tracking metrics like CPU usage or request counts in regular time intervals.
Example
Imagine a Metrics table with CPU usage data. You want to analyze average CPU usage per hour over the last day.
Metrics
| make-series AvgCPU = avg(CPUUsage) default=0 on Timestamp from ago(1d) to now() step 1h by ServerName
| project ServerName, AvgCPU, Timestamp
Explanation:
- make-series aggregates CPUUsage into hourly bins, averaging the values.
- default=0 fills bins that have no data points with 0, keeping the series continuous.
- by ServerName groups the series by server, creating one series per server.
- The result includes arrays of AvgCPU values and a corresponding Timestamp array.
Output:
ServerName  AvgCPU                   Timestamp
Server1     [45.2, 50.1, 48.9, ...]  [2025-07-20T08:00, ...]
Server2     [60.3, 55.7, 58.2, ...]  [2025-07-20T08:00, ...]
Pair this with the render operator (e.g., render timechart) for visualizations.
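For instance, a quick sketch that charts the same hypothetical Metrics data, one line per server:
Metrics
| make-series AvgCPU = avg(CPUUsage) default=0 on Timestamp from ago(1d) to now() step 1h by ServerName
// render timechart draws one line per series in clients that support charting, such as the Azure Data Explorer web UI
| render timechart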
3. bag_unpack: Decoding Property Bags
The bag_unpack plugin, invoked through the evaluate operator, extracts key-value pairs from a dynamic column (property bag) into separate columns. It's a lifesaver when dealing with semi-structured JSON data.
When to Use It
Use bag_unpack when you need to transform dynamic property bags into structured columns for easier querying.
Example
Consider an Events table with a Properties column containing JSON like {"Region": "US", "Status": "Failed"}. You want to analyze events by region and status.
Events
| where Timestamp > ago(7d)
| evaluate bag_unpack(Properties)
| summarize Count = count() by Region, Status
Explanation:
- evaluate bag_unpack(Properties) creates a new column for each key in the Properties bag (e.g., Region, Status).
- The query then groups by these columns to count occurrences.
Output:
Region  Status   Count
US      Failed   200
EU      Success  150
This operator simplifies working with dynamic data, avoiding manual JSON parsing.
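If unpacked keys might collide with existing column names, bag_unpack accepts an optional output-column prefix as its second argument. A small sketch under the same assumed Events schema (the Prop_ prefix is arbitrary):
Events
| where Timestamp > ago(7d)
// the prefix turns each key into Prop_Region, Prop_Status, and so on
| evaluate bag_unpack(Properties, 'Prop_')
| summarize Count = count() by Prop_Region, Prop_Status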
4. pivot: Reshaping Data Dynamically
The pivot plugin, also invoked through evaluate, transforms rows into columns based on a specified column's values, creating a wide-format table. It's like a spreadsheet pivot table, but in KQL.
When to Use It
Use pivot when you need to reshape data for reporting, such as turning categorical values into columns for easier comparison.
Example
Suppose a Sales table tracks product sales by region. You want to see sales amounts with regions as columns.
Sales
| where SaleDate > ago(30d)
| evaluate pivot(Region, sum(Amount), Product)
Explanation:
- pivot(Region, sum(Amount), Product) creates a column for each Region value, with sum(Amount) as the values, grouped by Product.
- The result is a table with products as rows and regions as columns.
Output:
Product  US     EU     Asia
Laptop   50000  30000  20000
Phone    40000  25000  15000
This is perfect for cross-tabular reports or dashboards.
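If you only need counts rather than sums, you can omit the aggregation, since pivot defaults to count(). A minimal sketch, still assuming the Sales schema above (the project keeps unrelated columns from becoming extra row groups):
Sales
| where SaleDate > ago(30d)
// keep only the grouping and pivot columns; pivot counts rows per Product and Region
| project Product, Region
| evaluate pivot(Region)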
5. arg_max and arg_min: Pinpointing Extremes
The arg_max and arg_min aggregation functions return the row with the maximum or minimum value of a specified column, along with other columns from that row. Used inside summarize, they're great for finding the most recent or extreme records.
When to Use Them
Use arg_max to find the latest record or arg_min for the earliest, based on a metric or timestamp.
Example
In a Logs table, you want the most recent log entry per user, including their status.
Logs
| summarize arg_max(Timestamp, Status) by UserId
Explanation:
- arg_max(Timestamp, Status) keeps the row with the maximum Timestamp for each UserId, including the Status column.
- The result shows the latest status for each user.
Output:
UserId  Timestamp            Status
U001    2025-07-21T08:00:00  Active
U002    2025-07-21T07:45:00  Offline
This is useful for tracking the latest state in streaming or time-series data.
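To carry every column of the winning row instead of listing them individually, pass the * wildcard. A short sketch against the same Logs table:
Logs
// * returns all columns from the row with the latest Timestamp per user
| summarize arg_max(Timestamp, *) by UserId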
TLDR
These obscure KQL operators and functions—mv-expand, make-series, bag_unpack, pivot, and arg_max/arg_min—unlock powerful ways to manipulate and analyze data. By adding them to your toolkit, you can handle complex scenarios like nested JSON, time-series trends, and dynamic pivoting with ease. Experiment with these examples in your KQL environment to see their potential. For more KQL tips, check the official KQL documentation.