Neural-network techniques, particularly back-propagation algorithms, are widely used to discover a mapping function between a known set of input and output examples. A neural network learns from the example set by adjusting its internal parameters, referred to as weights, using an optimisation procedure based on the least-squares principle. This procedure normally requires thousands of iterations to converge to an acceptable solution, so improving the computational efficiency of neural-network training is an active area of research. The existing literature shows that varying the gain parameter improves the learning efficiency of the gradient-descent method, and previous researchers attributed this improvement to an effective change in the learning rate. This research demonstrates instead that gain variation has no influence on the learning rate; rather, it alters the search direction. A novel technique that couples an adaptive-learning-rate method with this improved search direction is presented. Experiments show that the modification significantly enhances the computational efficiency of the training process.
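The claim that gain variation changes the search direction rather than the learning rate can be illustrated with a minimal sketch. The example below is an assumption-laden toy, not the paper's method: a hypothetical 1-2-1 network with gain-parameterised logistic activations, random weights, and a single training pair. Because the gain factor enters the backpropagated gradient a different number of times at each layer, doubling the gain rotates the gradient vector, whereas doubling a learning rate would only rescale it.

```python
import numpy as np

def sigmoid(x, gain):
    # Logistic activation with a gain (slope) parameter c: f(x) = 1 / (1 + exp(-c*x))
    return 1.0 / (1.0 + np.exp(-gain * x))

def gradients(w1, w2, x, t, gain):
    """Gradient of the squared error w.r.t. all weights of a toy 1-2-1
    network whose sigmoids share the same gain parameter."""
    a1 = w1 * x                       # hidden pre-activations, shape (2,)
    h = sigmoid(a1, gain)             # hidden activations
    a2 = np.dot(w2, h)                # output pre-activation
    y = sigmoid(a2, gain)             # network output
    e = y - t                         # output error
    # Each sigmoid contributes a factor gain * f * (1 - f) on the way back,
    # so hidden-layer gradients accumulate more powers of the gain than
    # output-layer gradients do.
    d2 = e * gain * y * (1.0 - y)
    g_w2 = d2 * h
    d1 = d2 * w2 * gain * h * (1.0 - h)
    g_w1 = d1 * x
    return np.concatenate([g_w1, g_w2])

rng = np.random.default_rng(0)
w1 = rng.normal(size=2)               # illustrative random weights
w2 = rng.normal(size=2)
x, t = 0.7, 0.3                       # one arbitrary input/target pair

g_lo = gradients(w1, w2, x, t, gain=1.0)
g_hi = gradients(w1, w2, x, t, gain=2.0)

cos = np.dot(g_lo, g_hi) / (np.linalg.norm(g_lo) * np.linalg.norm(g_hi))
print(cos)  # strictly below 1: the gain change rotates the gradient;
            # a pure learning-rate change would leave the cosine at exactly 1
```

A learning-rate change multiplies the whole gradient vector by a scalar, so the cosine between the old and new update directions stays at 1; the cosine printed here is below 1, which is the sense in which gain variation influences the search direction.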