This work presents an improvement of gradient-based learning algorithms for adjusting neural network weights. The proposed improvement yields an alternative method that converges in fewer iterations and is inherently parallel, making it well suited for implementation on a computing grid. Experimental results show time savings from multi-threaded execution over a wide range of MLP neural network parameters, such as the size of the input/output data matrix and the number of neurons and layers.
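For context on the kind of algorithm the abstract refers to, the following is a minimal sketch of plain gradient-descent weight adjustment for a one-hidden-layer MLP. It is not the paper's improved method; all names, shapes, and hyperparameters are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the paper's method): plain gradient descent on a
# one-hidden-layer MLP, illustrating the baseline weight-update rule
# that the paper's improved, parallelizable algorithm builds on.
rng = np.random.default_rng(0)

# Toy input/output data matrices: 32 samples, 4 inputs, 2 outputs
X = rng.standard_normal((32, 4))
T = rng.standard_normal((32, 2))

# Layer weight matrices (small random initialization)
W1 = rng.standard_normal((4, 8)) * 0.1
W2 = rng.standard_normal((8, 2)) * 0.1
lr = 0.05  # learning rate (illustrative value)

losses = []
for _ in range(200):
    H = np.tanh(X @ W1)              # hidden-layer activations
    Y = H @ W2                       # linear output layer
    E = Y - T                        # output error
    losses.append(0.5 * np.mean(E ** 2))
    # Backpropagated gradients of the mean squared error
    gW2 = H.T @ E / len(X)
    gH = E @ W2.T * (1 - H ** 2)     # tanh derivative
    gW1 = X.T @ gH / len(X)
    # Gradient-descent weight updates
    W2 -= lr * gW2
    W1 -= lr * gW1
```

The matrix products above (`X @ W1`, `H.T @ E`, and so on) are the operations that dominate the cost and that lend themselves to the multi-threaded, grid-style execution the abstract describes.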
Date of Conference: 25-27 June 2008