A unique word-serial inner-product processor architecture is proposed to capitalize on the high-speed serial-link bus. To eliminate the input buffers and deserializers, partial products are generated immediately from the serial input data and accumulated by an array of small binary counters operating in parallel to form a reduced partial product matrix directly. The height of the resultant partial product matrix is reduced logarithmically, and hence the carry-save-adder tree needed to complete the inner-product computation is smaller and faster. The small binary counters act as active on-chip buffers to mitigate the workload of the partial product accumulator. Their ability to accumulate partial product bits faster than combinatorial full adders leads to a simple two-stage architecture with high throughput and low latency. The architecture consumes 46% less silicon area, 24% less energy per inner-product computation, and 70% less total interconnect length than its merged arithmetic counterpart in a 65 nm CMOS process. In addition, the architecture requires only 4 of the 7 available metal layers for signal and power routing. By emulating the on-chip serial-link bus architecture on both designs, it is demonstrated that the proposed design is best suited for a high-speed on-chip serial-link bus architecture.
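To make the dataflow concrete, the accumulation scheme described above can be modeled behaviorally. The Python sketch below is a hypothetical software model, not the paper's hardware design: as each word arrives over the serial link, partial-product bits are tallied per bit-weight by counters (emulating the array of small binary counters), and the weighted counter totals are then combined in a final stage (emulating the carry-save-adder tree and final adder). The function name, bit width, and data layout are illustrative assumptions.

```python
def serial_inner_product(a, b, width=8):
    """Behavioral sketch of a word-serial inner product.

    Counters tally partial-product bits per bit-weight as serial words
    arrive; the weighted sums are then merged, standing in for the
    reduced carry-save-adder stage in hardware.
    """
    counters = {}  # bit-weight -> count (models the small binary counters)
    for x, y in zip(a, b):               # words arrive serially over the link
        for i in range(width):           # scan bits of x, LSB first
            if (x >> i) & 1:
                for j in range(width):   # AND with bits of y forms partial products
                    if (y >> j) & 1:
                        counters[i + j] = counters.get(i + j, 0) + 1
    # Final stage: combine counter outputs by weight (CSA tree + adder in hardware)
    return sum(count << w for w, count in counters.items())
```

For example, `serial_inner_product([3, 5], [7, 2])` yields 31, matching the direct computation 3*7 + 5*2; the counter dictionary never grows beyond 2*width - 1 entries, reflecting the logarithmic height reduction of the partial product matrix relative to a full parallel-multiplier array.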
IEEE Transactions on Circuits and Systems I: Regular Papers (Volume: 59, Issue: 12)
Date of Publication: Dec. 2012