Abstract:
Traditional IO links are insufficient to transport the high volume of image sensor data under stringent power and latency constraints. To address this, we demonstrate a low-latency, low-power in-sensor computing architecture to compress the data from a 3D-stacked dynamic vision sensor (DVS). In this design, we adopt a 4-bit autoencoder algorithm and implement it on an AI computing layer with in-memory computing (IMC) to enable real-time compression of DVS data. To support 3-D integration, this architecture is optimized to handle unique constraints, including a footprint that matches the sensor array, low latency to manage the continuous data stream, and low power consumption to avoid thermal issues. Our prototype chip in 65-nm CMOS demonstrates the new concept of 3-D in-sensor computing, achieving <6 mW power consumption at 1–10 MHz operating frequency and a 10× compression ratio on 256×256 DVS pixels.
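To make the compression idea concrete, the following is a minimal, untrained NumPy sketch of how a 4-bit autoencoder could reduce sparse DVS event patches to low-precision latent codes. The patch size, latent width, quantization scheme, and weight values are illustrative assumptions for exposition, not the architecture or IMC mapping reported in the letter.

import numpy as np

def quantize_4bit(x, scale):
    """Symmetric 4-bit quantization: round to integer levels in [-8, 7]."""
    q = np.clip(np.round(x / scale), -8, 7)
    return q * scale

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions): 16x16 binary event patches compressed
# into a 6-value latent code stored at 4 bits per value (~10x fewer bits).
PATCH = 16 * 16          # 256 binary DVS pixels per patch
LATENT = 6               # 6 * 4 bits = 24 bits vs. 256 input bits

# Randomly initialized encoder/decoder weights, quantized to 4 bits to mimic
# an IMC array holding low-precision weights (untrained, for illustration).
W_enc = quantize_4bit(rng.normal(0, 0.1, (LATENT, PATCH)), scale=0.02)
W_dec = quantize_4bit(rng.normal(0, 0.1, (PATCH, LATENT)), scale=0.02)

def encode(events):
    """Flattened binary event patch -> 4-bit latent code."""
    z = W_enc @ events
    return quantize_4bit(z, scale=z.std() / 7 + 1e-8)

def decode(z):
    """Latent code -> reconstructed event-probability map (sigmoid output)."""
    return 1.0 / (1.0 + np.exp(-(W_dec @ z)))

# One synthetic sparse event patch (DVS output is mostly zeros).
patch = (rng.random(PATCH) < 0.05).astype(np.float64)
z = encode(patch)
recon = decode(z)

bits_in = PATCH * 1      # 1 bit per binary event pixel
bits_out = LATENT * 4    # 4-bit latent values
print(f"compression ratio ~ {bits_in / bits_out:.1f}x")

In a trained version of this sketch, the encoder/decoder weights would be learned offline and then quantized before being loaded into the compute layer; only the encoder output would need to cross the IO link.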
Published in: IEEE Solid-State Circuits Letters (Volume: 7)