Artificial neural networks (ANNs) are an established area of artificial intelligence (AI) and computer science, and they have been applied to a wide range of research and industrial projects. However, despite many years of ANN research, the typical implementation remains a single-threaded programming model. This paper presents a fully parallel implementation of a Hopfield neural network on a supercomputer. The goal of this project is to develop a core learning unit that scales across a large number of nodes in a supercomputer. Furthermore, we integrate techniques that minimize dependence on any particular topology, making the implementation easier to port to other supercomputing environments. Ideally, other SHARCNET users will extend these ideas and conduct research using the tools developed in this project. This paper outlines the issues associated with developing this artificial neural network on SHARCNET, the benefits of such work, the difficulties encountered, and future directions.
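To fix ideas, a Hopfield network stores patterns in a symmetric weight matrix via Hebbian learning and recalls them by iterating a sign-threshold update until the state converges. The sketch below is an illustrative serial version only (it is not the paper's parallel implementation); the function names and the NumPy formulation are our own.

```python
import numpy as np

def train_hopfield(patterns):
    """Build a Hebbian weight matrix from bipolar (+1/-1) patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)   # Hebbian outer-product rule
    np.fill_diagonal(W, 0)    # no self-connections
    return W / n

def recall(W, state, steps=10):
    """Synchronously update neurons until a fixed point or step limit."""
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1     # break ties toward +1
        if np.array_equal(new, state):
            break
        state = new
    return state

# Store one pattern and recover it from a corrupted copy.
pattern = np.array([1, -1, 1, -1, 1, -1])
W = train_hopfield(pattern[np.newaxis, :])
noisy = pattern.copy()
noisy[0] = -1                 # flip one bit
restored = recall(W, noisy)   # converges back to the stored pattern
```

In a parallel setting of the kind described here, the natural decomposition is to assign blocks of neurons (rows of `W`) to different nodes and exchange updated states each iteration, which is why minimizing topology dependence matters for portability.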