
Neural network training with constrained integer weights

2 Author(s)
Plagianakos, V.P.; Vrahatis, M.N. (Dept. of Math., Patras Univ., Greece)

Presents neural network training algorithms based on the differential evolution (DE) strategies introduced by Storn and Price (J. of Global Optimization, vol. 11, pp. 341-359, 1997). These strategies are applied to train neural networks with small integer weights; such networks are better suited for hardware implementation than real-weight ones. Furthermore, we constrain the weights and biases to the range [-2^k+1, 2^k-1], for k=3,4,5. Thus, they can be represented by just k bits. These algorithms have been designed keeping in mind that the resulting integer weights require fewer bits to store and that the digital arithmetic operations between them are more easily implemented in hardware. Obviously, if the network is trained in a constrained weight space, smaller weights are found and less memory is required. On the other hand, the network training procedure can be more effective and efficient when large weights are allowed; thus, for a given application, a trade-off between effectiveness and memory consumption has to be considered. We present the results of evolutionary algorithms on this difficult task. Based on the application of the proposed class of methods to classical neural network benchmarks, our experience is that these methods are effective and reliable.
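The approach the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: it runs classic DE/rand/1/bin mutation and binomial crossover over a population of flat weight vectors, and rounds and clips every trial vector into the integer range [-2^k+1, 2^k-1]. The 2-2-1 network layout for XOR, the population size, and the F and CR settings are assumptions made for this sketch.

```python
import numpy as np

def forward(w, X):
    # Unpack the flat 9-element weight vector into a 2-2-1 net
    # (hypothetical layout chosen for this sketch).
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8].reshape(2, 1), w[8:9]
    h = np.tanh(X @ W1 + b1)                       # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # sigmoid output

def mse(w, X, y):
    return float(np.mean((forward(w, X).ravel() - y) ** 2))

def de_integer_train(X, y, k=3, dim=9, pop_size=20,
                     F=0.5, CR=0.9, gens=300, seed=0):
    """DE/rand/1/bin with every trial vector rounded and clipped
    into the integer range [-2**k + 1, 2**k - 1]."""
    lo, hi = -2**k + 1, 2**k - 1
    rng = np.random.default_rng(seed)
    pop = rng.integers(lo, hi + 1, size=(pop_size, dim)).astype(float)
    fit = np.array([mse(ind, X, y) for ind in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # Pick three distinct individuals other than i.
            r1, r2, r3 = rng.choice(
                [j for j in range(pop_size) if j != i], 3, replace=False)
            mutant = pop[r1] + F * (pop[r2] - pop[r3])
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True        # at least one gene
            trial = np.where(cross, mutant, pop[i])
            trial = np.clip(np.rint(trial), lo, hi)  # integer constraint
            f = mse(trial, X, y)
            if f <= fit[i]:                        # greedy selection
                pop[i], fit[i] = trial, f
    best = pop[np.argmin(fit)]
    return best, float(fit.min())

# Usage: train on XOR; all learned weights stay k-bit integers.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)
best, err = de_integer_train(X, y, k=3)
```

With k=3 the weights are confined to [-7, 7], so each one fits comfortably in a few bits, which is the memory/effectiveness trade-off the abstract discusses.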

Published in:

Proceedings of the 1999 Congress on Evolutionary Computation (CEC 99), Volume 3

Date of Conference: