Dear Colleagues,

In recent years there has been an explosion of papers on anomaly detection for time series.

Most of these papers test on one or more of the main benchmarks in this domain: Yahoo, SDM, NAB, NASA, etc.

However, it has come to our attention that these datasets have problems that may make them unsuitable for comparing algorithms, and may cause any findings based on them to suffer from the illusion of progress.

In brief, the problems include:

1)	Mislabeled ground truth (both false positives and false negatives).

2)	Run-to-failure bias (an algorithm that simply guesses the last point can significantly beat the default rate).

3)	Triviality. A large fraction of the problems can be solved with a single line of code, with no parameters and no need to see training data.

4)	Unrealistic density. For some examples, more than half the data consist of anomalies. This is a subjective point, but is this really anomaly detection, and not classification? In any case, would we see this in the real world? Surely after spotting the first anomaly or two, we would have intervened.
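To make the triviality point (3) concrete, here is a hypothetical sketch. The data below is synthetic (a noisy sine wave with one injected spike), not taken from any of the named benchmarks, but it illustrates the kind of problem where a single line of code, with no parameters and no training data, finds the anomaly:

```python
import numpy as np

# Synthetic series: a smooth sine wave with mild noise and one injected spike.
rng = np.random.default_rng(0)
ts = np.sin(np.linspace(0, 20 * np.pi, 2000)) + rng.normal(0, 0.05, 2000)
ts[1234] += 3.0  # inject an obvious point anomaly

# The "one line": report the point with the largest jump from its neighbor.
predicted = int(np.argmax(np.abs(np.diff(ts))))

print(predicted)  # lands at, or immediately adjacent to, index 1234
```

Of course, real deployments are harder than this, but if a benchmark problem yields to such a one-liner, beating it with a deep model tells us little about algorithmic progress.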


With this in mind, Keogh’s Lab (the lab that brought you the UCR Time Series Classification Archive [a]) will host an anomaly detection contest in the coming months (sponsorship TBA). After the contest, the datasets will be placed in the public domain forever.
We would like you to contribute a real or synthetic dataset. This document [b] explains how to do so. If you are already working with time series, we suspect you could create an example in as little as ten minutes, using our simple template code.

We hope that you will be willing to offer this service to the community. Check back for the official contest announcement in the next few months.

Many thanks,

Eamonn Keogh