
An Introduction to Computational Learning Theory

Copyright Year: 1994
Author(s): Michael J. Kearns; Umesh Vazirani
Publisher: MIT Press
Content Type: Books & eBooks
Topics: Computing & Processing

Abstract

Emphasizing issues of computational efficiency, Michael Kearns and Umesh Vazirani introduce a number of central topics in computational learning theory for researchers and students in artificial intelligence, neural networks, theoretical computer science, and statistics.

Computational learning theory is a new and rapidly expanding area of research that examines formal models of induction with the goals of discovering the common methods underlying efficient learning algorithms and identifying the computational impediments to learning.

Each topic in the book has been chosen to elucidate a general principle, which is explored in a precise formal setting. Intuition has been emphasized in the presentation to make the material accessible to the nontheoretician while still providing precise arguments for the specialist. This balance is the result of new proofs of established theorems and new presentations of the standard proofs.

The topics covered include the motivation, definitions, and fundamental results, both positive and negative, for the widely studied L. G. Valiant model of Probably Approximately Correct Learning; Occam's Razor, which formalizes a relationship between learning and data compression; the Vapnik-Chervonenkis dimension; the equivalence of weak and strong learning; efficient learning in the presence of noise by the method of statistical queries; relationships between learning and cryptography, and the resulting computational limitations on efficient learning; reducibility between learning problems; and algorithms for learning finite automata from active experimentation.

  Table of Contents

    • Front Matter

      Page(s): i - xiii

      This chapter contains sections titled: Half Title, Title, Copyright, Contents, Preface, Half Title

    • The Probably Approximately Correct Learning Model

      Page(s): 1 - 29

      This chapter contains sections titled: A Rectangle Learning Game, A General Model, Learning Boolean Conjunctions, Intractability of Learning 3-Term DNF Formulae, Using 3-CNF Formulae to Avoid Intractability, Exercises, Bibliographic Notes
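
      The chapter's opening example, the rectangle learning game, has a one-screen illustration: the tightest-fit strategy returns the smallest axis-aligned rectangle enclosing the positive examples. The sketch below is a rough rendering of that strategy in Python, not the book's own code; the target rectangle, distribution, and sample sizes are illustrative assumptions.

      ```python
      import random

      def tightest_fit(sample):
          """sample: list of ((x, y), label) pairs. Returns the smallest
          axis-aligned rectangle containing every positive point."""
          pos = [p for p, label in sample if label]
          if not pos:
              return None  # no positives seen: predict negative everywhere
          xs = [x for x, _ in pos]
          ys = [y for _, y in pos]
          return (min(xs), max(xs), min(ys), max(ys))

      def in_rect(rect, p):
          if rect is None:
              return False
          x0, x1, y0, y1 = rect
          return x0 <= p[0] <= x1 and y0 <= p[1] <= y1

      # Toy run: hidden target rectangle, uniform examples over [0,1]^2.
      target = (0.2, 0.7, 0.3, 0.9)
      draw = lambda: (random.random(), random.random())
      sample = [(p, in_rect(target, p)) for p in (draw() for _ in range(500))]
      h = tightest_fit(sample)
      test = [draw() for _ in range(10000)]
      error = sum(in_rect(h, p) != in_rect(target, p) for p in test) / len(test)
      print(f"hypothesis: {h}, estimated error: {error:.4f}")
      ```

      Because the tightest fit always sits inside the target, its error is the probability mass of the four strips of the target it misses, which is how the chapter's sample-size analysis proceeds.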

    • Occam's Razor

      Page(s): 31 - 48

      This chapter contains sections titled: Occam Learning and Succinctness, Improving the Sample Size for Learning Conjunctions, Learning Conjunctions with Few Relevant Variables, Learning Decision Lists, Exercises, Bibliographic Notes
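
      The sample-size improvements in this chapter rest on the standard consistency bound for a finite hypothesis class, stated here from standard references in the usual (ε, δ) notation rather than quoted from the text:

      ```latex
      \Pr\bigl[\exists\, h \in H:\ \mathrm{err}(h) > \epsilon
               \text{ and } h \text{ consistent with } S\bigr]
        \;\le\; |H|\,(1-\epsilon)^m \;\le\; |H|\,e^{-\epsilon m} \;\le\; \delta
      \quad\text{once}\quad
      m \;\ge\; \frac{1}{\epsilon}\Bigl(\ln|H| + \ln\frac{1}{\delta}\Bigr).
      ```

      An Occam algorithm wins by outputting a short consistent hypothesis: shrinking the effective |H| (the number of hypotheses the learner can possibly emit) directly shrinks the required sample size m.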

    • The Vapnik-Chervonenkis Dimension

      Page(s): 49 - 71

      This chapter contains sections titled: When Can Infinite Classes Be Learned with a Finite Sample?, The Vapnik-Chervonenkis Dimension, Examples of the VC Dimension, A Polynomial Bound on |Π_C(S)|, A Polynomial Bound on the Sample Size for PAC Learning, Sample Size Lower Bounds, An Application to Neural Networks, Exercises, Bibliographic Notes
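
      Two formulas anchor the chapter; both are standard results, stated here for orientation (constants in the sample bound vary by presentation):

      ```latex
      % Sauer's lemma: the number of behaviors of C on a sample S of
      % size m is polynomial in m once the VC dimension d is finite
      |\Pi_C(S)| \;\le\; \sum_{i=0}^{d} \binom{m}{i} \;=\; O(m^d),
      \qquad d = \mathrm{VCD}(C),\quad |S| = m.

      % Resulting sample size sufficient for PAC learning C
      m \;=\; O\!\left(\frac{1}{\epsilon}
              \left(d \log\frac{1}{\epsilon} + \log\frac{1}{\delta}\right)\right).
      ```

      For instance, axis-aligned rectangles in the plane have VC dimension 4, so the rectangle game of the first chapter needs only O((1/ε) log(1/ε)) examples up to the confidence term.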

    • Weak and Strong Learning

      Page(s): 73 - 102

      This chapter contains sections titled: A Relaxed Definition of Learning?, Boosting the Confidence, Boosting the Accuracy, Exercises, Bibliographic Notes
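
      Of the two boosting steps, boosting the confidence has a particularly simple generic form: run a learner that fails with constant probability several times on fresh data, then keep the hypothesis that looks best on a held-out sample. The sketch below is that generic idea under assumed interfaces (learn, draw_labeled are hypothetical), not the book's construction for boosting the accuracy.

      ```python
      import math

      def boost_confidence(learn, draw_labeled, epsilon, delta):
          """learn(examples) -> hypothesis, a callable x -> {0, 1};
          draw_labeled(m) -> list of m (x, label) pairs from the target.
          Assumes each call to learn succeeds with probability >= 1/2."""
          k = max(1, math.ceil(math.log2(2.0 / delta)))   # independent runs
          candidates = [learn(draw_labeled(200)) for _ in range(k)]
          # Held-out sample; size chosen so empirical errors concentrate
          # (the constants here are illustrative, not tuned).
          m = int(32 * (math.log(2 * k / delta) + 1) / epsilon)
          held_out = draw_labeled(m)
          def empirical_error(h):
              return sum(h(x) != y for x, y in held_out) / len(held_out)
          return min(candidates, key=empirical_error)
      ```

      With probability at least 1 − δ/2 some run succeeds, and by a Chernoff bound the held-out comparison then selects a nearly-as-good hypothesis; this is exactly the confidence amplification the chapter formalizes.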

    • Learning in the Presence of Noise

      Page(s): 103 - 122

      This chapter contains sections titled: The Classification Noise Model, An Algorithm for Learning Conjunctions from Statistics, The Statistical Query Learning Model, Simulating Statistical Queries in the Presence of Noise, Exercises, Bibliographic Notes
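
      The heart of the simulation is that, when labels are flipped independently with known rate η < 1/2, observed frequencies are a fixed linear blend of clean frequencies and can be inverted. A minimal sketch of that inversion for a single label statistic (the chapter's general statistical-query simulation also handles queries that depend on x):

      ```python
      def denoise_frequency(noisy_freq, eta):
          """Labels flipped independently with probability eta < 1/2:
              noisy = (1 - eta) * clean + eta * (1 - clean),
          so solving for the clean frequency gives:"""
          return (noisy_freq - eta) / (1.0 - 2.0 * eta)

      # Check: clean = 0.8 at eta = 0.2 gives noisy = 0.8*0.8 + 0.2*0.2 = 0.68
      assert abs(denoise_frequency(0.68, 0.2) - 0.8) < 1e-12
      ```

      Any algorithm phrased entirely in terms of such statistics, i.e., any statistical-query learner, therefore tolerates classification noise, which is the chapter's main theorem.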

    • Inherent Unpredictability

      Page(s): 123 - 142

      This chapter contains sections titled: Representation Dependent and Independent Hardness, The Discrete Cube Root Problem, Small Boolean Circuits Are Inherently Unpredictable, Reducing the Depth of Inherently Unpredictable Circuits, A General Method and Its Application to Neural Networks, Exercises, Bibliographic Notes
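
      The chapter's cryptographic anchor, the discrete cube root problem, is easy to see in miniature: cubing modulo N = pq is cheap, while extracting cube roots is believed to require the secret factorization. A toy Python sketch with deliberately tiny primes (real instances use moduli hundreds of digits long; requires Python 3.8+ for modular inverses via pow):

      ```python
      p, q = 11, 17               # secret factorization
      N = p * q                   # public modulus (187)
      phi = (p - 1) * (q - 1)     # 160; note gcd(3, phi) = 1
      d = pow(3, -1, phi)         # trapdoor exponent: 3*d = 1 (mod phi)

      x = 42
      y = pow(x, 3, N)            # forward direction: easy for anyone
      assert pow(y, d, N) == x    # inversion: easy only with the trapdoor d
      ```

      The chapter turns this asymmetry into learning hardness: under the corresponding cryptographic assumption, a class rich enough to compute such functions cannot be weakly predicted efficiently, whatever representation the learner uses.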

    • Reducibility in PAC Learning

      Page(s): 143 - 154

      This chapter contains sections titled: Reducing DNF to Monotone DNF, A General Method for Reducibility, Reducing Boolean Formulae to Finite Automata, Exercises, Bibliographic Notes
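
      The first reduction has a two-line core: pair every variable with a fresh complement variable, so every literal becomes a positive literal and any DNF formula becomes monotone over the expanded inputs. A sketch of the example transformation (the function name is illustrative):

      ```python
      def expand_example(x):
          """Map x in {0,1}^n to (x_1, ..., x_n, 1-x_1, ..., 1-x_n)."""
          return tuple(x) + tuple(1 - b for b in x)

      # Over n = 3 variables, the term (x1 AND NOT x3) becomes the
      # monotone term (z1 AND z6) on z = expand_example(x).
      print(expand_example((1, 0, 1)))   # (1, 0, 1, 0, 1, 0)
      ```

      A monotone-DNF learner run on transformed examples thus yields a DNF learner, with the distribution on expanded examples being the push-forward of the original one.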

    • Learning Finite Automata by Experimentation

      Page(s): 155 - 187

      This chapter contains sections titled: Active and Passive Learning, Exact Learning Using Queries, Exact Learning of Finite Automata, Learning without a Reset, Exercises, Bibliographic Notes
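
      The chapter works in the model of exact learning from membership and equivalence queries. As a toy illustration of the query model only, not of the chapter's automaton algorithm, here is exact learning of a boolean function from a small explicit class using equivalence queries alone; each counterexample eliminates every hypothesis it refutes:

      ```python
      from itertools import product

      def exact_learn(hypotheses, equivalence_query):
          """equivalence_query(h) returns None if h matches the target,
          otherwise some input x on which h is wrong."""
          live = list(hypotheses)
          while live:
              h = live[0]
              x = equivalence_query(h)
              if x is None:
                  return h
              # h(x) is wrong, so the target must disagree with h at x:
              live = [g for g in live if g(x) != h(x)]
          raise ValueError("target is not in the hypothesis class")

      # Tiny demo: all 16 boolean functions on 2 bits; target is parity.
      domain = list(product([0, 1], repeat=2))
      hyps = [lambda x, t=t: t[domain.index(x)]
              for t in product([0, 1], repeat=4)]
      target = lambda x: x[0] ^ x[1]
      def eq(h):
          return next((x for x in domain if h(x) != target(x)), None)
      h = exact_learn(hyps, eq)
      print([h(x) for x in domain])   # [0, 1, 1, 0]
      ```

      Angluin's algorithm for finite automata, covered in this chapter, achieves the same guarantee for DFAs in polynomial time by also asking membership queries, where brute enumeration of hypotheses would be exponential.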

    • Appendix: Some Tools for Probabilistic Analysis

      Page(s): 189 - 192

      This chapter contains sections titled: The Union Bound, Markov's Inequality, Chernoff Bounds
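
      For orientation, the three tools in their common forms (stated from standard references; the book's exact statements and constants may differ):

      ```latex
      % Union bound
      \Pr\Bigl[\,\bigcup_i A_i\Bigr] \;\le\; \sum_i \Pr[A_i].

      % Markov's inequality, for X >= 0 and t > 0
      \Pr[X \ge t] \;\le\; \frac{\mathbb{E}[X]}{t}.

      % Chernoff (additive/Hoeffding) bounds: \hat{p} is the empirical
      % mean of m independent Bernoulli(p) trials, \gamma > 0
      \Pr[\hat{p} \ge p + \gamma] \;\le\; e^{-2m\gamma^2},
      \qquad
      \Pr[\hat{p} \le p - \gamma] \;\le\; e^{-2m\gamma^2}.
      ```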

    • Bibliography

      Page(s): 193 - 203

    • Index

      Page(s): 205 - 207