Abstract:
A key question in the design of specialized hardware for simulation of neural networks is whether fixed-point arithmetic of limited numerical precision can be used with existing learning algorithms. An empirical study of the effects of limited precision in cascade-correlation networks on three different learning problems is presented. It is shown that learning can fail abruptly as the precision of network weights or weight-update calculations is reduced below a certain level, typically about 13 bits including the sign. Techniques for dynamic rescaling and probabilistic rounding that allow reliable convergence down to 7 bits of precision or less, with only a small and gradual reduction in the quality of the solutions, are introduced.
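The abstract does not spell out the probabilistic rounding scheme, but the standard form of the idea is stochastic rounding: a quantity is rounded up or down at random with probability proportional to its fractional remainder, so that small weight updates survive on average rather than being truncated to zero. The following Python sketch illustrates that idea for signed fixed-point weights; the function name, the total_bits/frac_bits parameters, and the specific clipping behavior are illustrative assumptions, not the paper's implementation.

import numpy as np

def stochastic_round_fixed_point(x, total_bits=7, frac_bits=4, rng=None):
    # Quantize x to signed fixed-point with probabilistic (stochastic) rounding.
    # Values are scaled by 2**frac_bits, rounded up with probability equal to
    # the fractional remainder (so the rounding error is zero in expectation),
    # then clipped to the signed range of total_bits bits including the sign.
    rng = np.random.default_rng() if rng is None else rng
    scaled = np.asarray(x, dtype=float) * (2 ** frac_bits)
    floor = np.floor(scaled)
    rounded = floor + (rng.random(scaled.shape) < (scaled - floor))
    limit = 2 ** (total_bits - 1)
    rounded = np.clip(rounded, -limit, limit - 1)
    return rounded / (2 ** frac_bits)

# Example: a weight update smaller than the quantization step (1/16 here)
# would always be lost under truncation; stochastically it is preserved
# on average across many updates.
w_update = np.full(1000, 0.01)
q = stochastic_round_fixed_point(w_update, total_bits=7, frac_bits=4)
print(q.mean())  # close to 0.01 on average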
Published in: IEEE Transactions on Neural Networks (Volume 3, Issue 4, July 1992)
DOI: 10.1109/72.143374