I. Introduction
Although the power of state-of-the-art computer systems has increased steadily over the years, the volume of produced data has grown at a dramatically sharper rate, yielding vast datasets and, at the same time, raising issues of efficient data manipulation. Considering also the complex cases of "concept drift" [1], where the inputs to general Machine Learning (ML) recognition systems change dynamically, and of fast-evolving systems [2], which arise mainly from the nature of the tackled application and the requirements of online learning, it becomes clear that more and more applications must be served under both time and capacity restrictions, not to mention the demands of the big data era. Common ways of coping with such tasks are: i) increasing the computational power of the employed systems [3], ii) designing more efficient algorithms that can operate in such environments [4], and iii) converting effective algorithms into parallel versions [5].