Abstract:
Implementation of artificial neural networks (ANNs) in hardware is needed to fully utilize their inherent parallelism. The presented work focuses on configuring a field-programmable gate array (FPGA) to realize the activation function used in an ANN. The computation of a nonlinear activation function (AF) is one of the factors that constrains the area or the computation time. The most popular AF is the log-sigmoid function, which can be realized in digital hardware in several ways: equation approximation, a lookup-table (LUT) based approach, and piecewise linear (PWL) approximation, to mention a few. A two-fold approach to optimizing the resource requirement is presented here. First, fixed-point (FXP) computation, which needs minimal hardware compared with floating-point (FLP) computation, is followed. Second, the PWL approximation of the AF, with greater precision, is shown to consume less silicon area than a LUT-based AF. Experimental results are presented for the computation.
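To illustrate the kind of scheme the abstract describes, the sketch below combines both ideas: a piecewise linear log-sigmoid evaluated entirely in fixed-point arithmetic. It uses the well-known PLAN approximation (Amin, Curtis and Hayes-Gill, 1997) as a stand-in, since the paper's exact segment boundaries are not given in the abstract; the Q4.12 fixed-point format is likewise an assumption. The segment slopes are powers of two, so each multiply reduces to a shift, which is the hardware-saving point.

```python
import math

# Illustrative PWL log-sigmoid in fixed-point (FXP) arithmetic.
# PLAN approximation segments; Q4.12 format (12 fractional bits) assumed.

FRAC_BITS = 12            # Q4.12: 12 fractional bits
ONE = 1 << FRAC_BITS      # fixed-point representation of 1.0

def to_fxp(x: float) -> int:
    """Convert a real number to Q4.12 fixed point."""
    return int(round(x * ONE))

def pwl_sigmoid_fxp(x_fxp: int) -> int:
    """Approximate log-sigmoid of a Q4.12 input, returning a Q4.12 result."""
    neg = x_fxp < 0
    a = -x_fxp if neg else x_fxp          # work with |x|
    if a >= to_fxp(5.0):
        y = ONE                           # saturate to 1.0
    elif a >= to_fxp(2.375):
        y = (a >> 5) + to_fxp(0.84375)    # slope 1/32: a right shift by 5
    elif a >= to_fxp(1.0):
        y = (a >> 3) + to_fxp(0.625)      # slope 1/8
    else:
        y = (a >> 2) + to_fxp(0.5)        # slope 1/4
    return ONE - y if neg else y          # sigmoid(-x) = 1 - sigmoid(x)

# Compare against the exact log-sigmoid (PLAN max error is about 0.019)
for x in (-3.0, -0.5, 0.0, 1.5, 4.0):
    approx = pwl_sigmoid_fxp(to_fxp(x)) / ONE
    exact = 1.0 / (1.0 + math.exp(-x))
    print(f"x={x:+.1f}  pwl={approx:.4f}  exact={exact:.4f}")
```

Because every segment needs only a shift, an add, and a comparator, the PWL datapath avoids both the multiplier of an equation-based AF and the memory of a large LUT, which is consistent with the area saving the abstract reports.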
Date of Conference: 26-28 November 2008
Date Added to IEEE Xplore: 08 December 2008
Print ISBN: 978-0-7695-3382-7