Separating distribution-free and mistake-bound learning models over the Boolean domain

Author: Blum, A.; MIT Lab. for Comput. Sci., Cambridge, MA, USA

Two of the most commonly used models in computational learning theory are the distribution-free model, in which examples are chosen from a fixed but arbitrary distribution, and the absolute mistake-bound model, in which examples are presented in order by an adversary. Over the Boolean domain {0,1}^n, it is known that if the learner is allowed unlimited computational resources, then any concept class learnable in one model is also learnable in the other. In addition, any polynomial-time learning algorithm for a concept class in the mistake-bound model can be transformed into one that learns the class in the distribution-free model. It is shown that if one-way functions exist, then the converse does not hold. The author presents a concept class over {0,1}^n that is learnable in the distribution-free model but is not learnable in the absolute mistake-bound model if one-way functions exist. In addition, the concept class remains hard to learn in the mistake-bound model, even if the learner is allowed a polynomial number of membership queries.
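The mistake-bound-to-distribution-free transformation mentioned above can be sketched with the standard "conservative learner" argument: run the online learner on i.i.d. examples and output the current hypothesis once it survives a long run without a mistake. The sketch below is illustrative only and uses a toy elimination learner for monotone disjunctions, not the paper's construction; the helper names (`DisjunctionLearner`, `mb_to_pac`) and the run-length parameter are assumptions for the example.

```python
class DisjunctionLearner:
    """Toy online mistake-bound learner for monotone disjunctions over {0,1}^n.

    The hypothesis is the disjunction of all still-candidate variables; on a
    false-positive prediction, every variable set to 1 in the example is
    eliminated. This makes at most n mistakes against any target disjunction.
    """
    def __init__(self, n):
        self.relevant = set(range(n))  # candidate variables

    def predict(self, x):
        return int(any(x[i] for i in self.relevant))

    def update(self, x, y):
        # Conservative: only change the hypothesis on a mistake,
        # and mistakes can only be false positives here.
        if self.predict(x) != y and y == 0:
            self.relevant -= {i for i in range(len(x)) if x[i]}


def mb_to_pac(learner, sample, run_length):
    """Feed examples to an online learner; output the hypothesis once it
    survives `run_length` consecutive examples without a mistake."""
    streak = 0
    for x, y in sample:
        if learner.predict(x) == y:
            streak += 1
            if streak >= run_length:
                break
        else:
            learner.update(x, y)
            streak = 0
    return learner.predict
```

With examples drawn i.i.d. from the target distribution, a hypothesis that survives a sufficiently long streak is, with high probability, approximately correct under that distribution; since the online learner changes hypothesis at most polynomially many times, polynomially many examples suffice. This direction of the simulation is what the paper's separation shows cannot be reversed (assuming one-way functions exist).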

Published in:

Proceedings of the 31st Annual Symposium on Foundations of Computer Science (FOCS 1990)

Date of Conference:

22-24 Oct 1990