An algorithm is proposed for the design of ``on-line'' learning controllers for a discrete stochastic plant. After any plant-environment situation (called an ``event'') is observed, a control action is selected by a random strategy from a finite set of allowable actions; the subjective probabilities governing this selection are modified by the algorithm. The subjective probability of the optimal action is proved to approach one with probability one for every observed event. The performance index being optimized is the conditional expectation of the instantaneous performance evaluations, conditioned on the observed events and the allowable actions. The algorithm is described through two transformations, T1 and T2: the ``ordering transformation'' T1 is applied to the estimates of the performance indexes of the allowable actions, after which the ``learning transformation'' T2 modifies the subjective probabilities. Both discrete and continuous features are considered; in the latter case, the Potential Function Method is employed. The algorithm is compared with a linear reinforcement scheme, and computer simulation results are presented.
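To fix ideas, a linear reinforcement scheme of the kind used for comparison can be sketched as follows. This is a minimal illustrative reward-inaction update in Python, not the paper's T1/T2 algorithm; the function name, parameter names, and the learning-rate value are assumptions introduced here for illustration.

```python
def linear_reward_inaction(probs, chosen, reward, a=0.1):
    """One step of a linear reward-inaction update.

    probs  : list of action probabilities (sums to 1)
    chosen : index of the action just applied
    reward : 1 if the environment rewarded the action, else 0
    a      : learning-rate parameter, 0 < a < 1
    """
    if reward:
        # On reward, move probability mass toward the chosen action;
        # all other probabilities shrink by the factor (1 - a).
        return [p + a * (1.0 - p) if i == chosen else p * (1.0 - a)
                for i, p in enumerate(probs)]
    # On penalty, leave the probabilities unchanged (the "inaction" part).
    return list(probs)

# Example: action 0 is rewarded, so its probability increases
# while the vector remains a valid probability distribution.
p = [0.5, 0.3, 0.2]
p = linear_reward_inaction(p, chosen=0, reward=1, a=0.1)
```

Under repeated rewards for one action, such a scheme drives that action's probability toward one; the paper's T1/T2 construction instead orders estimated performance indexes before updating the probabilities.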