As the use of intrusion detection systems (IDSs) continues to climb and researchers find more ways to detect attacks amid a vast ocean of data, the problem of testing IDS solutions has reared its ugly head. Showing that one technique is better than another, or training an IDS on normal usage, requires test data. As it turns out, collecting or creating such a data set is something of a catch-22. If the data already contains attacks, researchers will train the IDS to see those attacks as normal; the IDS could then fail to register them as malicious events in the future. The most efficient way to determine whether a large data set contains malicious events, however, is to scan it with an existing IDS. Thus, any attacks that the existing IDS fails to find are presented to the new IDS as normal data, leading to potential false negatives. Clearly, breaking this cycle requires an independent source of verifiably attack-free training data with which to train IDSs.
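The circular dependency described above can be made concrete with a small sketch. The detector, event fields, and signature set below are all hypothetical, invented purely for illustration: an imperfect "existing IDS" filters a raw event log, and any attack it misses survives the filter and is silently handed to the new IDS as "normal" training data.

```python
def existing_ids(event):
    """Toy signature-based detector: flags only attacks it already knows.
    The signature set here is a made-up example."""
    return event["type"] in {"port_scan", "sql_injection"}

def build_training_set(raw_events):
    """Keep only events the existing IDS labels benign -- the flawed
    'cleaning' step that creates the catch-22."""
    return [e for e in raw_events if not existing_ids(e)]

# Hypothetical raw data; the 'malicious' field is ground truth we would
# not actually have in practice.
raw_events = [
    {"type": "http_get",     "malicious": False},
    {"type": "port_scan",    "malicious": True},   # known attack: removed
    {"type": "zero_day_rce", "malicious": True},   # unknown attack: kept!
]

training = build_training_set(raw_events)
leaked = [e for e in training if e["malicious"]]
# The zero-day survives the filter, so a new IDS trained on this set
# would learn that attack's traffic as normal behavior.
print(len(training), len(leaked))
```

Running this prints `2 1`: two events reach the training set, and one of them is an undetected attack, which is exactly the false-negative leakage the paragraph warns about.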