Abstract:
Big data are often stored close to the locations where they are generated, owing to the cost of data transfer. These stored data are either moved to a single location for processing or processed in place. The literature offers several methods for processing data in distributed data centers. In this study, we present a new data-processing method called GSelf-MapReduce. In the proposed method, shuffling is performed among the heterogeneous data centers (DCs) that have completed their data-processing work. To estimate the cost of each DC's reduce function, a polynomial regression model was fitted to data obtained in the test environment, and the coefficients of this model were used in the decision process. The key/value pairs to be shuffled are distributed according to the cost of the DCs, taking their locations into account. In addition, shuffling does not wait for all DCs to finish their jobs: DCs that complete their jobs shuffle among themselves, so duplicate keys are eliminated between these DCs. This reduces both the shuffle volume in the final phase and the total job-completion time. The performance of the proposed method was compared with that of four distributed data-processing methods from the literature; the proposed method generates 15% less shuffled data than the closest competing method.
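The abstract's cost-based distribution can be illustrated with a minimal sketch: fit a polynomial regression mapping input size to reduce time for each DC, then split the key/value pairs inversely proportionally to each DC's predicted cost. All DC names, sizes, and timings below are illustrative assumptions, not the paper's actual testbed measurements.

```python
# Hypothetical sketch of cost-based key distribution, assuming per-DC
# (input size, reduce time) samples measured on a testbed.

def polyfit(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations."""
    n = degree + 1
    # Build A^T A and A^T y for the Vandermonde matrix A (columns x**j).
    ata = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    aty = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(ata[r][col]))
        ata[col], ata[pivot] = ata[pivot], ata[col]
        aty[col], aty[pivot] = aty[pivot], aty[col]
        for r in range(col + 1, n):
            f = ata[r][col] / ata[col][col]
            for c in range(col, n):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    coeffs = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(ata[r][c] * coeffs[c] for c in range(r + 1, n))
        coeffs[r] = (aty[r] - s) / ata[r][r]
    return coeffs  # coeffs[i] multiplies size**i

def predicted_cost(coeffs, size):
    return sum(c * size ** i for i, c in enumerate(coeffs))

# Three heterogeneous DCs with illustrative (size, reduce-time) samples.
measurements = {
    "DC1": ([1, 2, 4, 8], [1.0, 2.1, 4.5, 9.8]),
    "DC2": ([1, 2, 4, 8], [0.5, 1.1, 2.4, 5.2]),
    "DC3": ([1, 2, 4, 8], [2.0, 4.2, 9.0, 20.5]),
}
models = {dc: polyfit(s, t, 2) for dc, (s, t) in measurements.items()}

def distribute_keys(total_keys, probe_size=4):
    """Assign key shares inversely proportional to each DC's predicted cost."""
    inv = {dc: 1.0 / predicted_cost(c, probe_size) for dc, c in models.items()}
    norm = sum(inv.values())
    return {dc: round(total_keys * w / norm) for dc, w in inv.items()}

shares = distribute_keys(1000)
print(shares)  # the cheapest DC (DC2) receives the largest share
```

The inverse-cost weighting is only one plausible reading of "distributed according to the cost of the DCs"; the paper additionally factors in DC location, which this sketch omits.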
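The early-shuffle idea can likewise be sketched: DCs that finish mapping merge their intermediate key/value pairs immediately, so duplicate keys collapse before the final phase and less data is shuffled later. The function and data below are hypothetical illustrations, not the paper's implementation.

```python
# Illustrative early shuffle between finished DCs, assuming word-count-style
# intermediate results represented as key -> count maps.
from collections import Counter

def early_shuffle(finished_partitions):
    """Merge intermediate (key -> count) maps from DCs that already finished."""
    merged = Counter()
    for part in finished_partitions:
        merged.update(part)  # duplicate keys collapse into a single entry
    return dict(merged)

dc1 = {"apple": 3, "pear": 1}
dc2 = {"apple": 2, "plum": 4}
print(early_shuffle([dc1, dc2]))  # {'apple': 5, 'pear': 1, 'plum': 4}
```

After this merge, the key "apple" is carried once instead of twice, which is the deduplication effect the abstract credits for the reduced shuffle volume.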
In the physical environment used as a reference for the testbed of the proposed GSelf-MapReduce method, a structure consisting of a Data Center Gateway (DCG), Data Center...
Published in: IEEE Access ( Volume: 12)