Abstract:
The implementation of the Fused Multiply and Add (FMA) operation has been extensively studied in the literature for standard and large precisions. We suggest revisiting those studies for 16-bit precision. We introduce a variation of the mixed-precision FMA targeted at applications processing low-precision inputs (such as machine learning). We also introduce several versions of a fixed-point-based floating-point FMA which performs an exact accumulation of binary16 numbers. We study the implementation and area footprint of these operators in comparison with standard FMAs.
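To illustrate the exact-accumulation idea mentioned in the abstract, here is a minimal software sketch (not the paper's hardware design): every binary16 value can be represented exactly as an integer multiple of 2^-24, the weight of the smallest subnormal, so sums of binary16 numbers can be accumulated without rounding in a sufficiently wide fixed-point register. The function name `half_to_fixed` and the 64-bit accumulator width are our own assumptions for this sketch.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical software model of exact binary16 accumulation:
   decode the IEEE 754 binary16 fields and express the value as an
   integer multiple of 2^-24, then sum in a 64-bit accumulator. */

/* Convert raw binary16 bits to the scaled integer value * 2^24. */
static int64_t half_to_fixed(uint16_t h) {
    int sign    = (h >> 15) & 1;
    int exp     = (h >> 10) & 0x1F;  /* biased exponent, bias = 15 */
    int64_t sig = h & 0x3FF;         /* 10 fraction bits */

    if (exp == 0x1F) return 0;       /* Inf/NaN: out of scope here */
    if (exp != 0) {                  /* normal: (0x400|sig) * 2^(exp-25) */
        sig |= 0x400;                /* restore the hidden bit */
        sig <<= (exp - 1);           /* scale by 2^24: 2^(exp-25+24) */
    }                                /* subnormal: sig * 2^-24, no shift */
    return sign ? -sig : sig;
}

int main(void) {
    /* 0x3C00 encodes 1.0; 0x0001 encodes 2^-24 (smallest subnormal).
       Each tiny addend is kept exactly here, whereas plain binary16
       arithmetic would absorb every one of them into rounding error. */
    int64_t acc = half_to_fixed(0x3C00);
    for (int i = 0; i < 1 << 20; i++)
        acc += half_to_fixed(0x0001);
    printf("accumulator = %lld * 2^-24 = %g\n",
           (long long)acc, (double)acc / (1 << 24));  /* prints 1.0625 */
    return 0;
}
```

The paper's operators perform this kind of accumulation in hardware, where the accumulator width is sized from the binary16 exponent range rather than fixed at 64 bits as in this sketch.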
Published in: 2017 IEEE 24th Symposium on Computer Arithmetic (ARITH)
Date of Conference: 24-26 July 2017
Date Added to IEEE Xplore: 31 August 2017
Print ISSN: 1063-6889