C5/C6 tetraplegic patients and transhumeral amputees may be able to use voluntary shoulder motion as the command signal for a functional electrical stimulation (FES) system or a transhumeral prosthesis. At the most basic level, such prostheses require control of endpoint position in three dimensions, hand orientation, and grasp. In goal-oriented reaching movements performed by able-bodied subjects, spatiotemporal synergies exist between the proximal and distal arm joints. We fit these synergies with three-layer artificial neural networks, which could serve as a means of extracting user intent during reaching movements. In our reaching experiments, subjects reached to and grasped a handle mounted in a three-dimensional gantry. In previous work, the three rotational angles at the shoulder were used to predict the elbow flexion/extension angle during reaches in a two-dimensional plane. In this paper, we extend that model to include the two translational movements of the shoulder as inputs and forearm pronation/supination as an additional output. Counterintuitively, performance improved as both the task and the neural network architecture grew more complex.
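The proximal-to-distal mapping described above can be sketched as a three-layer feedforward network with five shoulder inputs (three rotations plus two translations) and two distal outputs (elbow flexion/extension and forearm pronation/supination). The layer sizes, tanh activation, and random weights below are illustrative assumptions for a forward pass only, not the trained model from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions: 5 shoulder inputs (3 rotations + 2 translations),
# one hidden layer, 2 distal outputs (elbow flex/ext, forearm pro/sup).
# Weights are random placeholders, not values fit to reaching data.
n_in, n_hidden, n_out = 5, 10, 2
W1 = rng.normal(size=(n_hidden, n_in))
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_out, n_hidden))
b2 = np.zeros(n_out)

def predict(shoulder):
    """Forward pass of a three-layer (input-hidden-output) network."""
    h = np.tanh(W1 @ shoulder + b1)   # hidden layer, tanh activation
    return W2 @ h + b2                # linear output layer

# One sample: 3 shoulder rotation angles (rad) + 2 translations (m)
sample = np.array([0.4, -0.2, 0.1, 0.02, 0.01])
distal = predict(sample)
print(distal.shape)  # (2,)
```

In a real system the weights would be trained on synchronized shoulder and distal joint recordings from able-bodied reaching, so that measured shoulder motion alone drives the distal joint commands.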