โ† Back to Blog

The 9 Anomaly Detection Methods ThresholdIQ Uses, Explained in Plain English

Published March 14, 2026 · 11 min read · Deep Dive

When ThresholdIQ detects an anomaly in your data, it doesn't just compare your value to a fixed number. It runs nine different analyses simultaneously, each designed to catch a different class of problem. This post explains each one without the maths: just what it does, what it's good at, and what it misses on its own.

Key principle: No single method is reliable on its own. EWMA misses seasonal patterns. SARIMA misses global outliers. Isolation Forest doesn't understand time. That's why all nine run together: each catches what the others miss.
⚡
1. Multi-Window Z-score
Primary severity driver

This is the main engine that decides Warning, Critical or Emergency. It looks at your data through four different "lenses" (windows of 50, 100, 200 and 500 data points) and asks: "is this value unusual compared to its recent history?"

Analogy: A thermometer that compares your current temperature to your readings from the last hour, last day, last week, and last month, separately. A fever that looks bad on all four timescales is more serious than one that only looks bad on one.
Good at: Sudden spikes, abrupt drops, and values that have been drifting outside normal range across multiple time horizons.
Misses: Anomalies that are normal for that time of day (e.g. a Sunday low), and sensors that are stuck but within normal range.
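The four-lens idea fits in a few lines of code. This is a minimal sketch of the technique, not ThresholdIQ's actual implementation; the window sizes come from the post, but the exact scoring is an assumption.

```python
from statistics import mean, stdev

def multi_window_zscores(series, windows=(50, 100, 200, 500)):
    """Z-score of the newest point against several trailing windows.

    A value that is extreme across all four lenses is a stronger
    signal than one that is extreme in only one.
    """
    latest = series[-1]
    scores = {}
    for w in windows:
        if len(series) <= w:
            continue  # not enough history for this lens
        window = series[-(w + 1):-1]  # trailing history, newest point excluded
        sd = stdev(window)
        if sd == 0:
            continue  # flat history: the z-score is undefined
        scores[w] = (latest - mean(window)) / sd
    return scores
```

A reading might score roughly normal on the 50-point lens but extreme on the 500-point lens, which is exactly the "drifting outside normal range across multiple time horizons" case described above.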
📈
2. EWMA (Exponentially Weighted Moving Average)
Spike sensitivity boost

EWMA smooths your data by giving more weight to recent values than older ones. It then compares each actual value to the smoothed estimate. A big difference means something changed suddenly.

Analogy: Imagine your expected daily sales based on a running average, but one where yesterday counts more than last month. If today is radically different from that expectation, EWMA flags it.
Good at: Fast, sharp spikes: things that happen and resolve quickly, which a slow rolling average would smooth over and miss.
Misses: Gradual trends. A slow decline looks normal to EWMA because the average adjusts as the decline progresses.
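Here is a small sketch of the EWMA idea. The smoothing weight, warm-up guard and adaptive residual scale are illustrative choices, not ThresholdIQ's tuned parameters.

```python
def ewma_flags(series, alpha=0.3, threshold=3.0):
    """Flag points that deviate sharply from an exponentially
    weighted moving average of the history so far.

    alpha controls memory: higher alpha weights recent values more.
    The residual is compared against an EWMA of past absolute
    residuals, a simple adaptive scale estimate.
    """
    est = series[0]   # smoothed estimate
    scale = 1e-9      # running estimate of a typical deviation
    flags = []
    for i, x in enumerate(series[1:], start=1):
        residual = abs(x - est)
        if residual > threshold * scale and i > 10:
            flags.append(i)  # sudden departure from the smoothed estimate
        # update estimates *after* scoring, so a spike cannot mask itself
        scale = (1 - alpha) * scale + alpha * residual
        est = (1 - alpha) * est + alpha * x
    return flags
```

Note how a one-off spike is flagged at the moment it happens, then the estimate absorbs it and moves on, which is exactly why EWMA catches fast events but not slow drifts.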
🌀
3. SARIMA (Seasonal ARIMA)
Seasonal pattern expert

SARIMA is a forecasting model that explicitly learns seasonal cycles: daily, weekly, or other repeating patterns. It predicts what each point should be based on the pattern, then flags points that fall far from the prediction.

Analogy: A senior analyst who has worked Monday-through-Friday for three years. They know Thursday afternoon is always busy, Monday morning always starts slow. They'd immediately notice if Thursday was unusually quiet, even if the raw number wasn't technically "low."
Good at: Anomalies that are wrong for their specific time slot: Tuesday readings that are fine on Tuesday's average but anomalous for a Tuesday at 3pm specifically.
Limitations: Requires at least 40–100 data points to train, so it won't run on very small datasets. Not useful for data without regular time patterns.
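A real SARIMA fit needs a library such as statsmodels (e.g. its SARIMAX class), but the seasonal intuition can be shown with a far simpler "seasonal naive" stand-in: forecast each point as the value one full season earlier and flag large residuals. This sketch illustrates the principle only; it is not ThresholdIQ's model, and the season length and threshold are assumptions.

```python
from statistics import mean, stdev

def seasonal_naive_flags(series, season=24, threshold=3.0):
    """Forecast each point as the value one season earlier, then
    flag points whose residual is extreme relative to the spread
    of all residuals."""
    residuals = [x - series[i - season]
                 for i, x in enumerate(series) if i >= season]
    mu, sd = mean(residuals), stdev(residuals)
    flags = []
    for i in range(season, len(series)):
        r = series[i] - series[i - season]
        if sd > 0 and abs(r - mu) / sd > threshold:
            flags.append(i)
    return flags
```

One quirk worth knowing: a naive seasonal forecast also flags the point one season *after* an anomaly (the anomaly becomes the forecast), which is one of the gaps a proper SARIMA model closes.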
🌲
4. Isolation Forest
Global outlier detector

Isolation Forest looks at all your metrics at once, not just one column. It asks: "if I randomly split the data space with a series of cuts, which points get isolated quickly?" Unusual points, far from the main cluster, get isolated in fewer cuts.

Analogy: Imagine sorting a box of mixed fruit into groups. Normal apples and oranges cluster together easily. The single starfruit gets isolated immediately; it doesn't group naturally with anything else. That starfruit is the anomaly.
Good at: Global multivariate outliers: combinations of values across multiple columns that are globally unusual, even if each individual column looks fine. Sensor failures that read zero while everything else is normal.
Misses: Locally contextual anomalies, i.e. a value that's normal globally but wrong for its specific time of day. That's SARIMA's job.
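The "fewer cuts" intuition is easy to demonstrate. The sketch below repeatedly isolates one point with random axis-aligned cuts and averages the number of cuts needed. The production algorithm (as in scikit-learn's IsolationForest) builds full trees over subsamples; this stripped-down version and its parameters are purely illustrative.

```python
import random

def cuts_to_isolate(point, data, max_cuts=15):
    """Randomly cut the data space until `point` is alone.
    Outliers far from the main mass need fewer cuts on average."""
    depth = 0
    while len(data) > 1 and depth < max_cuts:
        dim = random.randrange(len(point))
        lo = min(row[dim] for row in data)
        hi = max(row[dim] for row in data)
        if lo == hi:
            break  # cannot split a degenerate dimension further
        cut = random.uniform(lo, hi)
        # keep only the side of the cut that `point` falls on
        data = [row for row in data
                if (row[dim] <= cut) == (point[dim] <= cut)]
        depth += 1
    return depth

def avg_cuts(point, data, trials=300):
    """Average isolation depth over many random trials."""
    return sum(cuts_to_isolate(point, data) for _ in range(trials)) / trials
```

Run it on a tight cluster plus one far-away point and the far-away point's average depth comes out much lower: it is "easy to isolate", which is the whole scoring signal.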
🔗
5. Correlation Deviation
Multi-metric failure detector

This method watches whether related metrics deviate in the same direction at the same time. Revenue down + margin down + order count down simultaneously is a much stronger signal than any one of them alone.

Analogy: A doctor who sees a patient with a fever, low blood pressure, AND an elevated white cell count at the same time is more alarmed than if any one of those readings were slightly off. The combination tells a different story than each reading alone.
Good at: Coordinated failures: payment processor outages (multiple transaction metrics all degrade at once), supply chain disruptions (inventory, fulfilment rate and delivery time all worsen together), fraud patterns.
Misses: Isolated single-metric anomalies. This method requires two or more metrics to deviate together; a single column spiking is covered by the multi-window z-score.
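A minimal way to sketch this: z-score each metric against its own history, then ask how many deviate past a cut-off in the same direction. The cut-offs and the two-metric minimum are assumptions for illustration, not ThresholdIQ's real rules.

```python
from statistics import mean, stdev

def coordinated_deviation(history, latest, z_cut=2.0, min_metrics=2):
    """Check whether several related metrics deviate in the same
    direction at once.

    history: dict of metric name -> list of past values
    latest:  dict of metric name -> current value
    Returns the metrics moving together in the majority direction,
    or [] if fewer than `min_metrics` move together.
    """
    down, up = [], []
    for name, past in history.items():
        sd = stdev(past)
        if sd == 0:
            continue  # flat metric: no meaningful z-score
        z = (latest[name] - mean(past)) / sd
        if z <= -z_cut:
            down.append(name)
        elif z >= z_cut:
            up.append(name)
    side = down if len(down) >= len(up) else up
    return side if len(side) >= min_metrics else []
```

Revenue, margin and order count all dropping together returns all three names; revenue dropping alone returns nothing, because a single-metric spike is another method's job.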
📍
6. DBSCAN (Density-Based Clustering)
Behavioural outlier detector

DBSCAN groups your data points into clusters based on how densely they're packed together. Points that don't belong to any cluster are labelled "noise", and those are your anomalies.

Analogy: A security guard watching a car park. Most cars arrive and leave in normal patterns: clusters of activity at 9am, lunchtime, 5pm. A car that arrives at 3am and stays until 4am doesn't fit any cluster. Flag it.
Good at: Behavioural outliers: data points that don't match any recognised operating pattern in your business. Reverse-wired sensors, unusual combinations of metric values that have never appeared before in the history of the data.
Misses: Known seasonal patterns. If 3am arrivals are normal (say, overnight delivery operations), DBSCAN will cluster those too once it sees enough of them.
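DBSCAN's noise label comes down to a density test: a point is noise if it has too few neighbours within distance eps to be a "core" point, and no core point within eps to attach to. The sketch below implements just that labelling rule (a full DBSCAN, such as scikit-learn's, also expands clusters); the eps and min_pts values are illustrative.

```python
import math

def dbscan_noise(points, eps=1.0, min_pts=4):
    """Return the points DBSCAN would label noise: not dense enough
    to be core points, and not within eps of any core point."""
    def near(a, b):
        return math.dist(a, b) <= eps

    # core points have at least min_pts neighbours (self included)
    core = [p for p in points
            if sum(near(p, q) for q in points) >= min_pts]
    return [p for p in points
            if p not in core and not any(near(p, c) for c in core)]
```

Feed it a dense blob plus one distant point and only the distant point comes back as noise; feed it enough "3am arrivals" and they become dense enough to form their own cluster, which is exactly the limitation noted above.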
🌙
7. Seasonal Baseline
False positive reducer

This method builds a separate "normal range" for every hour-of-day and day-of-week combination. Sunday readings are compared against Sunday history. 3am readings are compared against 3am history. Not against the all-time average.

Analogy: A cinema that knows Saturday evenings are always packed and Tuesday afternoons are always quiet. They'd only be alarmed by a quiet Saturday or a packed Tuesday, not by a quiet Tuesday, which is completely expected.
Good at: Anomalies within their own seasonal context: a drop that's genuinely bad for a Thursday morning, not just bad against the overall average. This is what prevents Sunday-low false alarms.
Limitations: Works best with 4+ weeks of data, so each time slot has enough history to build a meaningful baseline.
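The mechanics are simple enough to sketch directly: bucket history by (day-of-week, hour-of-day), keep a mean and spread per bucket, and score new readings only against their own bucket. The minimum-samples cut-off here is an illustrative stand-in for the "4+ weeks" guidance above.

```python
from collections import defaultdict
from datetime import datetime
from statistics import mean, stdev

def build_baselines(readings, min_samples=4):
    """Group history by (weekday, hour) and store mean/stdev per slot.
    readings: list of (datetime, value) pairs."""
    slots = defaultdict(list)
    for ts, value in readings:
        slots[(ts.weekday(), ts.hour)].append(value)
    return {slot: (mean(vs), stdev(vs))
            for slot, vs in slots.items() if len(vs) >= min_samples}

def seasonal_zscore(baselines, ts, value):
    """Score a reading against its own (weekday, hour) slot only."""
    slot = (ts.weekday(), ts.hour)
    if slot not in baselines:
        return None  # not enough history for this slot yet
    mu, sd = baselines[slot]
    return (value - mu) / sd if sd > 0 else None
```

A Sunday 3am reading of 6 scores as normal against Sunday-3am history even if the all-time average is far higher, while 40 at the same slot scores as wildly anomalous. That is the false-positive reduction in action.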
📉
8. Trend Detection
Early warning for gradual drift

Trend detection compares the average of three consecutive 50-point windows. If each successive window's average is higher (or lower) than the previous one, a monotonic drift is confirmed and flagged.

Analogy: A manager who reviews weekly numbers and notices the team's output was slightly lower in week 1, a bit lower in week 2, and lower again in week 3. Each week looks "almost fine." The three-week pattern is unmistakably a problem forming.
Good at: Gradual deterioration: budget creep, slow stock depletion, rising latency, declining satisfaction scores. Problems that are invisible on any single day but obvious across three consecutive windows.
Misses: Sudden spikes or one-off events; those are multi-window and EWMA's territory. Trend detection specifically looks for sustained direction, not speed.
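The three-window comparison is almost a one-liner. A sketch, using the 50-point windows described above (the strict monotonic check is an assumption about how ties are handled):

```python
from statistics import mean

def detect_trend(series, window=50):
    """Compare the means of the three most recent consecutive windows.
    Returns 'up', 'down', or None when the drift is not monotonic."""
    if len(series) < 3 * window:
        return None  # not enough history for three full windows
    w1 = mean(series[-3 * window:-2 * window])
    w2 = mean(series[-2 * window:-window])
    w3 = mean(series[-window:])
    if w1 < w2 < w3:
        return "up"
    if w1 > w2 > w3:
        return "down"
    return None
```

Averaging each window first is what makes this blind to one-off spikes: a single outlier barely moves a 50-point mean, so only a sustained direction across all three windows fires.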
🚫
9. Stuck / Zero Detection
Hardware & pipeline failure

This method checks for two failure modes that produce false-looking "normal" data. First: values that become suspiciously constant (zero variance over a rolling window when history shows variance). Second: a sudden drop to zero from a consistently non-zero series.

Analogy: An electricity meter that reads exactly 2,847 kWh every hour for a day. Technically within range, but no real consumption pattern is perfectly constant: the meter has frozen. Similarly, a data feed that suddenly reports £0 sales when it was averaging £85,000 has almost certainly broken.
Good at: Sensor failures, frozen data feeds, broken integrations, and ETL pipeline outages that produce placeholder zeros instead of real values. These are invisible to standard threshold rules because the value stays "within range."
Misses: Genuine zeros. If your business legitimately has zero sales on public holidays, this method will learn that pattern over time and stop flagging it.
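Both failure modes reduce to simple checks on the tail of the series. A sketch (the 24-point window is an assumed default, and it assumes a positive-valued metric with at least two windows of history):

```python
from statistics import stdev

def stuck_or_zero(series, window=24):
    """Detect two quietly-broken failure modes that stay 'in range':
    - 'stuck': the last `window` values are identical although earlier
      history varied (frozen sensor or feed);
    - 'zero_drop': the newest value is zero in a series that has
      always been positive (placeholder zeros from a broken pipeline).
    """
    recent, earlier = series[-window:], series[:-window]
    if len(earlier) >= window and stdev(recent) == 0 and stdev(earlier) > 0:
        return "stuck"
    if series[-1] == 0 and min(series[:-1]) > 0:
        return "zero_drop"
    return None
```

The key detail is the comparison against earlier history: zero variance is only suspicious when the series used to vary, which is what separates a frozen feed from a metric that is legitimately constant.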

Why Nine Methods Instead of One?

Each method covers a different failure mode or data pattern:

- Multi-window z-score: primary severity driver
- EWMA: spike sensitivity boost
- SARIMA: seasonal pattern expert
- Isolation Forest: global outlier detector
- Correlation deviation: multi-metric failure detector
- DBSCAN: behavioural outlier detector
- Seasonal baseline: false positive reducer
- Trend detection: early warning for gradual drift
- Stuck/zero detection: hardware and pipeline failure detector

The combination is what makes automatic detection practical. Each method fills the gaps of the others. When multiple methods agree (say, EWMA, correlation and Isolation Forest all flag the same point) that's an Emergency. When only one method fires, it's a Warning. The fusion of scores is what generates severity that actually means something.
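The post only describes the fusion loosely (one method firing is a Warning; several agreeing escalate to Emergency). A minimal sketch of that idea; the middle "Critical" band is an invented placeholder, not ThresholdIQ's real rule:

```python
def severity(methods_fired):
    """Fuse per-method detections into a single severity level.
    One method firing is a Warning; three or more agreeing is an
    Emergency, per the post. The two-method band is an assumption.
    """
    n = len(set(methods_fired))  # count distinct methods that flagged the point
    if n == 0:
        return None
    if n == 1:
        return "Warning"
    if n == 2:
        return "Critical"  # assumed middle band, not from the source
    return "Emergency"
```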

Seeing Which Methods Fired

After ThresholdIQ runs detection on your file, the Detection Signals tab shows you exactly which of the nine methods contributed to each alert. You'll see bars for each method, how many times each fired, and at what severity level. This makes the engine transparent: you're never just told "something is wrong" without an explanation.

Try the Full Detection Engine Free: Upload Your File →

All 9 methods. Zero configuration. Your data never leaves your browser.