Abstract:
The failure of Dennard scaling, the end of Moore's law, and recent developments in Deep Neural Networks (DNNs) are leading computer scientists, practitioners and vendors to fundamentally revise computer architecture in search of performance improvements. One such candidate is to re-evaluate how floating-point operations are performed, which has remained (nearly) unchanged for the past three decades. The POSIT numerical format was introduced in mid-2017 and is claimed to be more accurate and to have a wider dynamic range and fewer unused states than IEEE 754. However, the hardware implications of migrating to the POSIT format are unknown. In this paper, we present our results on POSITs and their implementation on Field-Programmable Gate Arrays (FPGAs). We designed a tool that automatically generates and pipelines POSIT operators that can be used as drop-in replacements in processing units or in High-Level Synthesis tools (e.g., the Intel FPGA SDK for OpenCL). We empirically quantified the performance and area overhead of our POSIT implementation compared to two well-known IEEE-754 implementations, and show that our units can operate at comparable frequencies. Finally, we show how our POSIT hardware can be integrated into existing Intel FPGA SDK for OpenCL data paths, enabling software programmers to continue evaluating POSITs with little effort.
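As context for the format the abstract refers to, the sketch below is a minimal Python decoder following the posit definition published in 2017 (sign bit, variable-length regime, up to es exponent bits, and a fraction with a hidden leading one); it is only an illustration of the number format, not the paper's hardware generator, and the function name decode_posit and its parameters are ours.

def decode_posit(bits: int, n: int, es: int) -> float:
    """Decode an n-bit posit with es exponent bits into a Python float."""
    mask = (1 << n) - 1
    bits &= mask
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):            # 100...0 is the single non-real state
        return float("nan")
    sign = bits >> (n - 1)
    if sign:                            # negative posits decode from the two's complement
        bits = (-bits) & mask
    # Regime: run of identical bits after the sign, terminated by the opposite bit
    i = n - 2
    first = (bits >> i) & 1
    run = 0
    while i >= 0 and ((bits >> i) & 1) == first:
        run += 1
        i -= 1
    regime = run - 1 if first else -run
    i -= 1                              # skip the regime terminator, if present
    # Exponent: next es bits, zero-padded when the encoding runs out of bits
    exp = 0
    for _ in range(es):
        exp <<= 1
        if i >= 0:
            exp |= (bits >> i) & 1
            i -= 1
    # Fraction: remaining bits with a hidden leading 1
    frac_bits = max(i + 1, 0)
    frac = bits & ((1 << frac_bits) - 1)
    mantissa = 1.0 + frac / (1 << frac_bits)
    useed = 1 << (1 << es)              # useed = 2^(2^es)
    value = useed ** regime * 2 ** exp * mantissa
    return -value if sign else value

# Example: the 16-bit posit 0x0DDD with es = 3 decodes to roughly 3.55e-06
print(decode_posit(0x0DDD, 16, 3))

The run-length-encoded regime is what gives posits their wide dynamic range with few bits, and the single non-real state contrasts with the many NaN encodings of IEEE 754.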
Published in: 2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)
Date of Conference: 21-25 May 2018
Date Added to IEEE Xplore: 06 August 2018