A simple analog-signal synapse model is developed and implemented in a standard 0.35 μm CMOS process to enable large-scale integration, high processing speed, and manufacturability of a multi-layer artificial neural network. The nonlinearity of the synapse with respect to its weight is studied, and the circuit is shown to operate in both feed-forward and learning (training) modes. The effect of the synapse's inherent quadratic nonlinearity on learning convergence and on the optimization of the weight-vector update direction is analyzed and found to be beneficial. The suitability of the proposed implementation for very-large-scale artificial neural networks is confirmed.
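To illustrate how a quadratic weight nonlinearity interacts with gradient-based training, the following minimal sketch models a single synapse whose effective weight depends quadratically on the stored weight value. The transfer function `x * (w + ALPHA * w**2)` and the coefficient `ALPHA` are assumptions for illustration only; the paper's actual device equation is not given in the abstract and may differ. The point shown is that training still converges when the chain rule accounts for the nonlinear term.

```python
# Hypothetical synapse transfer: output proportional to x*(w + ALPHA*w^2),
# modeling a quadratic dependence of the effective weight on the stored
# weight. ALPHA is an assumed nonlinearity coefficient, not from the paper.
ALPHA = 0.2

def synapse(x, w):
    """Synapse output for input x and stored weight w (assumed model)."""
    return x * (w + ALPHA * w ** 2)

def train(x, target, w=0.0, lr=0.05, steps=200):
    """Gradient descent on squared error through the nonlinear synapse."""
    for _ in range(steps):
        y = synapse(x, w)
        # dE/dw = (y - target) * x * (1 + 2*ALPHA*w) by the chain rule;
        # the extra (1 + 2*ALPHA*w) factor scales the update direction.
        grad = (y - target) * x * (1.0 + 2.0 * ALPHA * w)
        w -= lr * grad
    return w

w_final = train(x=1.0, target=0.5)
print(round(synapse(1.0, w_final), 3))  # output approaches the target 0.5
```

Because the gradient factor `(1 + 2*ALPHA*w)` grows with the weight, larger weights receive proportionally larger corrections, which is one plausible mechanism behind the beneficial effect on update direction noted in the abstract.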