Example-based learning, as performed by neural networks and other approximation and classification techniques, is both computationally intensive and I/O intensive, typically involving the optimization of hundreds or thousands of parameters during repeated network evaluations over a database of example vectors. Although there is currently no dominant approach or technique among the various neural networks and learning algorithms, the basic functionality of most neural networks can be conceptually realized as a multidimensional look-up table. While multidimensional look-up tables are clearly impractical due to their exponential memory requirements, we are pursuing an approach using interpolation based only on the sparse data provided by an initial example database. In particular, we have designed prototype VLSI components that search multidimensional example databases for the X closest examples to an input query, as determined by a programmable metric, using a massively parallel search. This nearest-neighbor approach can be used directly for classification, or in conjunction with any number of neural network algorithms that exploit local fitting. The hardware removes the I/O bottleneck from the learning task by supplying a reduced set of examples for localized training or classification. Though nearest-neighbor retrieval algorithms have efficient software implementations for low-dimensional databases, exhaustive searching is the only effective approach for handling high-dimensional data. The parallel VLSI hardware we have designed can accelerate this exhaustive search by three orders of magnitude. We believe this special-purpose VLSI will have direct application in systems requiring learning functionality and in accelerating learning applications on large, high-dimensional databases.
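The operation the hardware parallelizes can be sketched in software as a brute-force scan: score every example against the query with a programmable metric and keep the closest matches. The following is a minimal illustrative sketch, not the paper's actual interface; the function and metric names are assumptions.

```python
# Exhaustive nearest-neighbor search with a programmable metric.
# Illustrative sketch only; names and signatures are hypothetical.
import heapq
import math

def euclidean(a, b):
    # One possible "programmable" metric; any distance function may be
    # substituted here.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_examples(query, database, k, metric=euclidean):
    """Return the k examples closest to `query` under `metric`."""
    # Exhaustive scan over every example: for high-dimensional data, no
    # index structure reliably beats this, which motivates parallel
    # hardware acceleration of exactly this loop.
    scored = ((metric(query, ex), i, ex) for i, ex in enumerate(database))
    return [ex for _, _, ex in heapq.nsmallest(k, scored)]

db = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0), (0.5, 0.0)]
closest = nearest_examples((0.0, 0.0), db, k=2)
```

The returned subset can then feed a classifier directly (e.g. a majority vote over labels) or serve as the reduced training set for a locally fitted model, as described above. The software cost is linear in database size and dimension per query, which is the factor the parallel search hardware is designed to absorb.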