Learning-from-abstraction (LFA) is a recently proposed model-based distributed data mining approach which aims to make the mining process both scalable and privacy-preserving. However, setting the right trade-off between the abstraction levels of the local data sources and the accuracy of the global model is crucial for obtaining the optimal abstraction, especially when the local data are inter-correlated to different extents. In this paper, we formulate the optimal abstraction task as a game and compute its Nash equilibrium as the solution. We also propose an iterative version of the game so that the Nash equilibrium can be computed by actively exploring details from the local sources in a need-to-know manner. We tested the proposed game-theoretic approach on a number of data sets for model-based clustering and obtained promising results.
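To make the game formulation concrete, the following minimal sketch enumerates the pure-strategy Nash equilibria of a toy two-source abstraction game. The payoff function, abstraction-level range, and the quadratic accuracy penalty are illustrative assumptions for exposition only, not the paper's actual formulation: each source gains privacy from choosing a coarser abstraction level but shares an accuracy penalty that grows with the combined coarseness.

```python
# Illustrative toy game (assumed payoffs, not the paper's model):
# two local sources each pick an abstraction level in {0..4}.
from itertools import product

LEVELS = range(5)  # 0 = full detail, 4 = coarsest abstraction

def payoff(i, a):
    # Source i gains privacy from its own coarseness (a[i]) but pays a
    # shared accuracy penalty that grows with total coarseness (assumed form).
    return a[i] - 0.2 * (a[0] + a[1]) ** 2

def is_best_response(i, a):
    # Source i cannot improve its payoff by unilaterally changing its level.
    current = payoff(i, a)
    return all(payoff(i, a[:i] + (x,) + a[i + 1:]) <= current for x in LEVELS)

def pure_nash_equilibria():
    # A profile is a Nash equilibrium when every source plays a best response.
    return [a for a in product(LEVELS, repeat=2)
            if all(is_best_response(i, a) for i in range(2))]
```

In this toy instance the penalty makes extreme coarseness self-defeating, so the equilibria lie at intermediate abstraction levels, mirroring the trade-off between privacy and global model accuracy described above.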