Empirical evidence shows that massive data sets rarely (if ever) have a stationary underlying distribution. To obtain meaningful classification models, partitioning the data into different concepts is required as an inherent part of learning. However, existing state-of-the-art approaches to concept drift detection work only sequentially (i.e., in a non-parallel fashion), which is a serious scalability limitation. To address this issue, we extend one of the sequential approaches to work in parallel and propose an Online Map-Reduce Drift Detection Method (OMR-DDM). It uses the combined online error rate of the parallel classification algorithms to identify changes in the underlying concept. For reasons of algorithmic efficiency, it is built on a modified version of the popular Map-Reduce paradigm that allows mappers to use preliminary results. An experimental evaluation shows that the proposed method can accurately detect concept drift while exploiting parallel processing. This paves the way to obtaining classification models that account for concept drift on massive data.
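To make the error-rate monitoring concrete, the sketch below shows the sequential Drift Detection Method (DDM) that OMR-DDM parallelises: the detector tracks the classifier's online error rate, remembers the point where that rate (plus its standard deviation) was lowest, and signals a warning or a drift when the current rate rises a fixed number of standard deviations above that minimum. This is a minimal illustration of the underlying idea, not the paper's parallel algorithm; the class name, thresholds, and burn-in length are illustrative choices.

```python
import math

class DDM:
    """Minimal sketch of sequential error-rate drift detection.

    Monitors a Bernoulli error rate p with standard deviation
    s = sqrt(p * (1 - p) / n) and compares p + s against the
    recorded minimum p_min + s_min.
    """

    def __init__(self, warn_sigma=2.0, drift_sigma=3.0):
        self.warn_sigma = warn_sigma    # warning at p_min + 2 * s_min
        self.drift_sigma = drift_sigma  # drift at p_min + 3 * s_min
        self.reset()

    def reset(self):
        self.n = 0                      # examples seen since last drift
        self.p = 1.0                    # running error-rate estimate
        self.s = 0.0                    # its standard deviation
        self.p_min = float("inf")
        self.s_min = float("inf")

    def update(self, error):
        """error: 1 if the classifier misclassified the example, else 0.

        Returns 'drift', 'warning', or 'stable'.
        """
        self.n += 1
        # incremental mean of the 0/1 error stream
        self.p += (error - self.p) / self.n
        self.s = math.sqrt(self.p * (1.0 - self.p) / self.n)
        if self.n < 30:                 # burn-in before testing
            return "stable"
        # remember the best (lowest) point seen so far
        if self.p + self.s < self.p_min + self.s_min:
            self.p_min, self.s_min = self.p, self.s
        if self.p + self.s >= self.p_min + self.drift_sigma * self.s_min:
            self.reset()                # concept changed: start over
            return "drift"
        if self.p + self.s >= self.p_min + self.warn_sigma * self.s_min:
            return "warning"
        return "stable"
```

Feeding the detector a stream whose error rate jumps (say, from about 10% to about 50%) makes it emit a drift signal shortly after the jump; OMR-DDM applies the same test to the combined error rate reported by the parallel classifiers.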