Artificial neural networks are widely regarded as a key direction for future computing technology. They offer solutions to numerous complex problems such as robotic arm control, speech recognition, signal processing, and pattern recognition. These networks exhibit inherent parallelism: because they emulate the structure of the human brain, which responds to many inputs simultaneously, parallel computation is a fundamental property of theirs, and exploiting it requires running the networks on parallel processors. This increases computation speed substantially. Despite the gain in speed, the complex and costly structure of such systems remains a major drawback. This paper studies the various techniques through which the learning process can be implemented in parallel. Simulation techniques are of two types, software and hardware; this paper focuses on the software techniques of simulation. In the last section we also study various topologies and models and infer results based on complexity and cost, thereby identifying the best available option for a parallel computing arrangement that provides maximum output in minimum time.