I. Introduction
Neural network algorithms are useful for tasks such as audio-visual classification [1]–[3] and learning dynamic control [4]. Hardware implementation of such learning algorithms can improve performance in robotics and edge devices. Sub-threshold analog design of such systems offers better power and area characteristics than digital design, making it suitable for large architectures and power-constrained applications such as edge devices.

Neural population coding is inspired by various cortical regions [5]–[10]. By considering the response of an ensemble of neurons [11], classification and regression tasks can be performed. In echo-state networks (ESNs), a reservoir of neurons is used to process temporal data [12]. Moreover, architectures such as population coding and ESNs use random, fixed weights in the initial layers, which reduces the memory required to store these weights [13]–[15], making them more hardware-friendly.

Several deep learning architectures have evolved over time [16], [17], and variations of these have been proposed to make them more reliable and efficient [18]–[22]. To cater to these evolving architectures, we propose a hardware neuron model that can be generalized and adapted to architectural variations. Various neuron models exist [23]–[27], and there are existing works that exploit random device mismatches to realize the random, fixed weights in population coding [13], [14]. Our design is a four-quadrant, current-mode neuron that can be cascaded for deep learning architectures. Its activation function approximates the 'tanh' curve and can be controlled. This controllability imparts flexibility to the proposed model and helps the architecture learn better, especially in population coding and ESNs, where the randomness arising from device mismatches alone may not be enough.
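To motivate why sub-threshold circuits naturally approximate 'tanh', recall the standard transfer characteristic of a sub-threshold differential pair, with $I_b$ the bias current, $\kappa$ the sub-threshold slope factor, and $U_T$ the thermal voltage:

\[
I_{\text{out}} = I_b \tanh\!\left(\frac{\kappa\,(V_1 - V_2)}{2\,U_T}\right)
\]

The proposed neuron operates in current mode, so its exact transfer function differs from this textbook voltage-mode form; the expression is shown only to illustrate that the saturating 'tanh' shape arises intrinsically in sub-threshold operation, with the bias current $I_b$ providing one natural control knob.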
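To make the memory argument concrete, the sketch below is a minimal software analogue of an ESN (illustrative only, not the proposed hardware): the input and reservoir weights are drawn once at random and held fixed, so only the readout weights would need trained storage. The `gain` parameter stands in for the controllable slope of the tanh-like activation; all names and values are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 4, 100

# Random, fixed weights: never trained, so they need no weight memory
# beyond what generates them (in hardware, e.g., device mismatch).
W_in = rng.uniform(-1.0, 1.0, (n_res, n_in))
W_res = rng.uniform(-1.0, 1.0, (n_res, n_res))

# Rescale the reservoir to spectral radius 0.9 for the echo-state property.
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))

gain = 1.5  # tunable tanh slope, standing in for the controllable activation

def step(x, u):
    """One reservoir update with a tanh nonlinearity."""
    return np.tanh(gain * (W_res @ x + W_in @ u))

x = np.zeros(n_res)
for t in range(10):
    u = rng.uniform(-1.0, 1.0, n_in)  # placeholder input sample
    x = step(x, u)
```

In a hardware realization, `W_in` and `W_res` would come from device mismatch or deliberately introduced on-chip randomness rather than a software RNG, and only the linear readout trained on the reservoir states would consume weight memory.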