
Enhancing the MapReduce training of BP neural networks based on local weight matrix evolution


Abstract:

Training Back-Propagation Neural Networks (BPNNs) on big datasets faces two challenges: high time cost and the risk of getting trapped in a local optimum. In recent years, MapReduce has been introduced to improve the efficiency of BPNN training on big datasets. After each round of concurrent training on the individual splits of the dataset, many local BPNNs are produced that converge only on their specific split, and a global BPNN candidate that converges on the whole dataset must be generated from them. This step is challenging because it strongly affects both training efficiency and training accuracy. This paper introduces the evolution of local BPNNs into the MapReduce training of BPNNs and proposes a novel approach. Benefiting from the strength of Evolutionary Algorithms (EAs) in searching for the global optimum, the approach reduces the number of iterations needed to obtain a globally convergent BPNN candidate and prevents the training process from getting trapped in a local optimum. Experiments show that the approach improves training efficiency and accuracy remarkably. The approach has also been applied to a real-world big data application and verified to work well on large, high-dimensional datasets.
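The abstract does not give implementation details of the reduce-side evolution, so the following is only a minimal illustrative sketch, not the authors' algorithm. It assumes each mapper emits the weight matrices of a one-hidden-layer BPNN trained on its split, and that the reducer evolves these local weight sets with a generic EA (uniform crossover, Gaussian mutation, validation-loss fitness). All names such as evolve_global_candidate are hypothetical; the paper's actual crossover operator, fitness function, and selection scheme may differ.

import numpy as np

rng = np.random.default_rng(0)

def forward(W1, W2, X):
    # One-hidden-layer BPNN with sigmoid activations (illustrative architecture).
    h = 1.0 / (1.0 + np.exp(-X @ W1))
    return 1.0 / (1.0 + np.exp(-h @ W2))

def fitness(individual, X_val, y_val):
    # Higher is better: negative validation MSE of the candidate network.
    W1, W2 = individual
    pred = forward(W1, W2, X_val)
    return -np.mean((pred - y_val) ** 2)

def crossover(a, b):
    # Uniform crossover applied element-wise to each local weight matrix.
    return tuple(np.where(rng.random(wa.shape) < 0.5, wa, wb)
                 for wa, wb in zip(a, b))

def mutate(individual, sigma=0.01):
    # Small Gaussian perturbation, intended to help escape local optima.
    return tuple(w + rng.normal(0.0, sigma, w.shape) for w in individual)

def evolve_global_candidate(local_bpnns, X_val, y_val, generations=20):
    """Reduce step: combine mapper-local weight sets into one global candidate."""
    population = list(local_bpnns)
    for _ in range(generations):
        ranked = sorted(population, key=lambda ind: fitness(ind, X_val, y_val),
                        reverse=True)
        parents = ranked[: max(2, len(ranked) // 2)]
        children = [mutate(crossover(parents[i % len(parents)],
                                     parents[(i + 1) % len(parents)]))
                    for i in range(len(population) - len(parents))]
        population = parents + children
    return max(population, key=lambda ind: fitness(ind, X_val, y_val))

# Toy usage: three "mapper" BPNNs with random weights and a small validation split.
X_val = rng.random((32, 4))
y_val = (X_val.sum(axis=1, keepdims=True) > 2.0).astype(float)
local_bpnns = [(rng.normal(size=(4, 8)), rng.normal(size=(8, 1))) for _ in range(3)]
best_W1, best_W2 = evolve_global_candidate(local_bpnns, X_val, y_val)

In this sketch the EA replaces simple weight averaging in the reduce phase, which matches the abstract's motivation: the evolved candidate is selected for global fitness rather than being biased toward any single split.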
Date of Conference: 11-14 December 2017
Date Added to IEEE Xplore: 15 January 2018
Conference Location: Boston, MA, USA

