Artificial Neural Networks in Hardware | part of Learning in Energy-Efficient Neuromorphic Computing: Algorithm and Architecture Co-Design | Wiley-IEEE Press books | IEEE Xplore

Artificial Neural Networks in Hardware



Chapter Abstract:

This chapter discusses neural networks implemented on different hardware platforms and considers the advantages and disadvantages of each. There are three types of hardware on which neural network algorithms can be deployed: general-purpose processors, field-programmable gate arrays (FPGAs), and application-specific integrated circuits. The chapter presents a few representative digital accelerators and explains the strategies for building low-power, high-performance FPGA-based artificial neural network accelerators. One good example is using a memristor in analog/mixed-signal accelerators; since memristors were first demonstrated in 2008, they have drawn the attention of many researchers. The chapter describes the deep in-memory architecture, in which multiple rows of static random-access memory cells in the array are logically grouped together to form a word-row. It also explains that one important strategy in building energy-efficient accelerators is to optimize the dataflow and memory access, and presents a customized accelerator for the adaptive dynamic programming algorithm.
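The word-row idea in the deep in-memory architecture can be sketched in a few lines. The snippet below is a hypothetical digital model, not code from the chapter: it treats each group of physical SRAM rows as one multi-bit word per column, with the per-row binary weighting (done in the real circuit by analog word-line techniques such as pulse-width modulation) modeled as a simple bit-weighted sum. All names (`functional_read`, `BITS_PER_WORD`, etc.) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

BITS_PER_WORD = 4        # physical rows grouped into one logical word-row
N_WORDS, N_COLS = 8, 16  # array dimensions (arbitrary for illustration)

# SRAM array: each group of BITS_PER_WORD physical rows stores one
# BITS_PER_WORD-bit word per column, MSB in the first row of the group.
sram = rng.integers(0, 2, size=(N_WORDS * BITS_PER_WORD, N_COLS))

def functional_read(word_idx, col_idx):
    """Read one multi-bit word by weighting each physical row's bit.

    In the analog circuit this weighting is applied on the word lines
    (e.g. pulse-width modulation) and summed on the bit line; here it
    is modeled as an exact bit-weighted sum.
    """
    start = word_idx * BITS_PER_WORD
    bits = sram[start:start + BITS_PER_WORD, col_idx]
    weights = 2 ** np.arange(BITS_PER_WORD - 1, -1, -1)  # MSB first
    return int(bits @ weights)

value = functional_read(word_idx=2, col_idx=5)
print(value)  # an integer in [0, 2**BITS_PER_WORD - 1]
```

The point of the grouping is that one access returns a multi-bit operand from a single-bit cell array, which is what lets the in-memory architecture evaluate weighted sums without shuttling individual bits to a separate processor.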
Page(s): 61 - 118
Copyright Year: 2020
Edition: 1
ISBN Information:
