Learning to trade via direct reinforcement


Abstract:

We present methods for optimizing portfolios, asset allocations, and trading systems based on direct reinforcement (DR). In this approach, investment decision-making is viewed as a stochastic control problem, and strategies are discovered directly. We present an adaptive algorithm called recurrent reinforcement learning (RRL) for discovering investment policies. The need to build forecasting models is eliminated, and better trading performance is obtained. The direct reinforcement approach differs from dynamic programming and reinforcement algorithms such as TD-learning and Q-learning, which attempt to estimate a value function for the control problem. We find that the RRL direct reinforcement framework enables a simpler problem representation, avoids Bellman's curse of dimensionality and offers compelling advantages in efficiency. We demonstrate how direct reinforcement can be used to optimize risk-adjusted investment returns (including the differential Sharpe ratio), while accounting for the effects of transaction costs. In extensive simulation work using real financial data, we find that our approach based on RRL produces better trading strategies than systems utilizing Q-learning (a value function method). Real-world applications include an intra-daily currency trader and a monthly asset allocation system for the S&P 500 Stock Index and T-Bills.
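The core idea of the abstract is that a recurrent trading policy can be trained by ascending a risk-adjusted performance criterion directly, with no value function or forecasting model. The sketch below illustrates that idea on synthetic data: it is not the authors' RRL implementation (the paper derives exact recurrent gradients and an incremental differential Sharpe ratio), and the policy form, data process, cost level, and finite-difference optimizer are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic return series with mild autocorrelation (assumption:
# a stand-in for the real financial data used in the paper).
T = 400
noise = rng.normal(0, 0.01, T)
r = np.empty(T)
r[0] = noise[0]
for t in range(1, T):
    r[t] = 0.3 * r[t - 1] + noise[t]

M = 5          # number of lagged returns fed to the trader (assumed)
delta = 0.001  # transaction cost per unit of position change (assumed)

def positions(theta):
    """Recurrent policy: F_t = tanh(w . x_t + u * F_{t-1} + b),
    a position in [-1, 1] that depends on the previous position."""
    w, u, b = theta[:M], theta[M], theta[M + 1]
    F = np.zeros(T)
    for t in range(M, T):
        x = r[t - M:t]
        F[t] = np.tanh(w @ x + u * F[t - 1] + b)
    return F

def sharpe(theta):
    """Sharpe ratio of trading returns net of transaction costs."""
    F = positions(theta)
    Rt = F[:-1] * r[1:] - delta * np.abs(np.diff(F))  # trading returns
    Rt = Rt[M:]
    return Rt.mean() / (Rt.std() + 1e-9)

# Optimize the performance criterion directly: hill-climb the Sharpe
# ratio with finite-difference gradients, accepting only improving steps.
theta = rng.normal(0, 0.1, M + 2)
eps, lr = 1e-4, 0.5
base = sharpe(theta)
for step in range(200):
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        tp = theta.copy()
        tp[i] += eps
        grad[i] = (sharpe(tp) - sharpe(theta)) / eps
    cand = theta + lr * grad
    if sharpe(cand) > sharpe(theta):
        theta = cand
    else:
        lr *= 0.5  # backtrack when the step overshoots

print(f"Sharpe before: {base:.3f}, after: {sharpe(theta):.3f}")
```

Note how the transaction-cost term `delta * |F_t - F_{t-1}|` enters the objective itself, so the optimizer learns to trade less when costs are high; this coupling of costs to the recurrent position is the reason the policy must be recurrent rather than a pure forecast.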
Published in: IEEE Transactions on Neural Networks ( Volume: 12, Issue: 4, July 2001)
Page(s): 875 - 889
Date of Publication: 31 July 2001

PubMed ID: 18249919
