
A theoretical comparison of batch-mode, on-line, cyclic, and almost-cyclic learning

T. Heskes and W. Wiegerinck, Department of Medical Physics & Biophysics, Nijmegen University, Netherlands

We study and compare different neural network learning strategies: batch-mode learning, on-line learning, cyclic learning, and almost-cyclic learning. Incremental learning strategies require less storage capacity than batch-mode learning. However, due to the arbitrariness in the presentation order of the training patterns, incremental learning is a stochastic process, whereas batch-mode learning is deterministic. In zeroth order, i.e., as the learning parameter η tends to zero, all learning strategies approximate the same ordinary differential equation, referred to for convenience as the "ideal behavior". Using stochastic methods valid for small learning parameters η, we derive differential equations describing the evolution of the lowest-order deviations from this ideal behavior. We compute how the asymptotic misadjustment, which measures the average asymptotic distance from a stable fixed point of the ideal behavior, scales as a function of the learning parameter and the number of training patterns. Knowing the asymptotic misadjustment, we calculate the typical number of learning steps needed to bring a weight within order ε of this fixed point, both with fixed and with time-dependent learning parameters. We conclude that almost-cyclic learning (learning with random cycles) is a better alternative to batch-mode learning than cyclic learning (learning with a fixed cycle).
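To make the four presentation orders concrete, the sketch below applies them to a toy least-squares problem. This is an illustrative assumption, not the paper's setup: the quadratic per-pattern loss, the pattern count, and all names (grad, train, order_fn) are invented for the example, and the fixed eta stands in for the learning parameter η of the abstract.

    # Minimal sketch (not from the paper) contrasting the four presentation
    # orders on a toy least-squares problem.
    import numpy as np

    rng = np.random.default_rng(0)
    P, D = 20, 3                      # number of training patterns, weight dimension
    X = rng.normal(size=(P, D))       # input patterns
    y = X @ rng.normal(size=D)        # targets from a random "teacher" weight vector
    eta = 0.01                        # small fixed learning parameter

    def grad(w, i):
        """Gradient of the squared error on a single pattern i."""
        return (X[i] @ w - y[i]) * X[i]

    def train(w, order_fn, epochs=500):
        """Incremental learning: one weight update per presented pattern."""
        for epoch in range(epochs):
            for i in order_fn(epoch):
                w = w - eta * grad(w, i)
        return w

    w0 = np.zeros(D)

    # Batch mode: deterministic, one step on the summed gradient per epoch.
    w_batch = w0.copy()
    for _ in range(500):
        w_batch -= eta * sum(grad(w_batch, i) for i in range(P))

    # On-line: each presented pattern drawn independently at random.
    w_online = train(w0, lambda e: rng.integers(0, P, size=P))

    # Cyclic: the same fixed cycle 0, 1, ..., P-1 in every epoch.
    w_cyclic = train(w0, lambda e: range(P))

    # Almost-cyclic: a fresh random permutation of all P patterns each epoch.
    w_almost = train(w0, lambda e: rng.permutation(P))

    for name, w in [("batch", w_batch), ("on-line", w_online),
                    ("cyclic", w_cyclic), ("almost-cyclic", w_almost)]:
        print(name, np.linalg.norm(X @ w - y))

Only the starting point of each epoch differs between cyclic and almost-cyclic learning: both visit every pattern exactly once per cycle, but the random reshuffling in the almost-cyclic case is what, per the paper's analysis, brings its asymptotic behavior closer to batch mode.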

Published in:

IEEE Transactions on Neural Networks (Volume 7, Issue 4)