Neural networks have previously been used to model the nonlinear characteristics of memoryless nonlinear channels via backpropagation (BP) learning with experimental training data (Ibnkahla et al. 1997). The mean transient and convergence behavior of a simplified two-layer neural network has also been studied (Bershad et al. 1997); the network was trained with zero-mean Gaussian data. This paper extends those results to include the effect of weight fluctuations on the mean-square error (MSE). A new methodology is presented which can be extended to other nonlinear learning problems. The new mathematical model predicts the MSE learning behavior as a function of the algorithm step size μ. Linear recursions are derived for the variance and covariance of the weights; these recursions depend nonlinearly upon the mean weights. As in linear gradient-search problems (LMS, etc.), there exists an optimum μ (minimizing the MSE) that trades off fast learning against small weight fluctuations. Monte Carlo simulations show excellent agreement with the theoretical predictions for various μ.
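The trade-off governed by μ can be illustrated with a minimal Monte Carlo sketch. This is not the paper's exact two-layer model or analysis; it is an assumed stand-in: a single tanh unit identifying a noisy memoryless nonlinear channel with zero-mean Gaussian inputs, adapted by a stochastic-gradient (BP-style) update. The weights `w_true`, the noise level, and both step sizes are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 4                               # input dimension (illustrative)
w_true = rng.standard_normal(N)     # unknown channel weights (assumed)
noise_std = 0.1                     # measurement-noise std (assumed)

def mc_mse(mu, n_steps=2000, n_runs=50):
    """Monte Carlo average of the squared error vs. iteration for step size mu."""
    mse = np.zeros(n_steps)
    for _ in range(n_runs):
        w = np.zeros(N)
        for k in range(n_steps):
            x = rng.standard_normal(N)                      # zero-mean Gaussian input
            d = np.tanh(w_true @ x) + noise_std * rng.standard_normal()
            y = np.tanh(w @ x)                              # model output
            e = d - y
            # stochastic gradient of e^2/2 through the tanh nonlinearity
            w += mu * e * (1.0 - y**2) * x
            mse[k] += e**2
    return mse / n_runs

slow = mc_mse(mu=0.01)   # small mu: slow learning, small weight fluctuations
fast = mc_mse(mu=0.2)    # large mu: fast learning, larger excess MSE
```

Averaging `fast` and `slow` over early iterations shows the large-μ run converging sooner, while their steady-state tails show the large-μ run settling at a higher MSE, the qualitative behavior the paper's recursions predict analytically.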