Improving Generalization Performance in Co-Evolutionary Learning

4 Author(s)
Siang Yew Chong (Sch. of Comput. Sci., Univ. of Nottingham, Semenyih, Malaysia); P. Tino; Day Chyi Ku; Xin Yao

Recently, the generalization framework in co-evolutionary learning has been theoretically formulated and demonstrated in the context of game-playing. Generalization performance of a strategy (solution) is estimated using a collection of random test strategies (test cases) by taking the average game outcomes, with confidence bounds provided by Chebyshev's theorem. Chebyshev's bounds have the advantage that they hold for any distribution of game outcomes. However, such a distribution-free framework leads to unnecessarily loose confidence bounds. In this paper, we have taken advantage of the near-Gaussian nature of average game outcomes and provided tighter bounds based on parametric testing. This enables us to use small samples of test strategies to guide and improve the co-evolutionary search. We demonstrate our approach in a series of empirical studies involving the iterated prisoner's dilemma (IPD) and the more complex Othello game in a competitive co-evolutionary learning setting. The new approach is shown to improve on the classical co-evolutionary learning in that we obtain increasingly higher generalization performance using relatively small samples of test strategies. This is achieved without large performance fluctuations typical of the classical approach. The new approach also leads to faster co-evolutionary search where we can strictly control the condition (sample sizes) under which the speedup is achieved (not at the cost of weakening precision in the estimates).
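The core quantitative point of the abstract — that distribution-free Chebyshev bounds on an average game outcome are much looser than Gaussian (parametric) bounds at the same confidence level — can be illustrated with a short numerical sketch. This is not the paper's implementation; the sample size, outcome standard deviation, and confidence level below are illustrative values chosen for the example.

```python
import math
from statistics import NormalDist

def chebyshev_halfwidth(sigma: float, n: int, delta: float) -> float:
    """Confidence half-width from Chebyshev's inequality.

    Chebyshev: P(|mean - mu| >= eps) <= sigma^2 / (n * eps^2).
    Setting the right-hand side to delta and solving for eps gives
    a (1 - delta) confidence half-width that holds for ANY outcome
    distribution with standard deviation sigma.
    """
    return sigma / math.sqrt(n * delta)

def gaussian_halfwidth(sigma: float, n: int, delta: float) -> float:
    """Confidence half-width assuming near-Gaussian sample means.

    By the central limit theorem the average of n game outcomes is
    approximately normal, so eps = z_{1-delta/2} * sigma / sqrt(n).
    """
    z = NormalDist().inv_cdf(1 - delta / 2)
    return z * sigma / math.sqrt(n)

if __name__ == "__main__":
    sigma, n, delta = 0.5, 50, 0.05  # illustrative values only
    cheb = chebyshev_halfwidth(sigma, n, delta)
    gauss = gaussian_halfwidth(sigma, n, delta)
    print(f"Chebyshev 95% half-width: {cheb:.4f}")
    print(f"Gaussian  95% half-width: {gauss:.4f}")
    print(f"Chebyshev is {cheb / gauss:.2f}x wider")
```

With these numbers the Gaussian interval is more than twice as tight as the Chebyshev one, which is why the parametric bounds let the co-evolutionary search reach a given estimation precision with far fewer test strategies.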

Published in:

IEEE Transactions on Evolutionary Computation (Volume 16, Issue 1)