Weak Derandomization of Weak Algorithms: Explicit Versions of Yao's Lemma

Author: R. Shaltiel, Department of Computer Science, University of Haifa, Haifa, Israel

A simple averaging argument shows that given a randomized algorithm A and a function f such that for every input x, Pr[A(x) = f(x)] ≥ 1-p (where the probability is over the coin tosses of A), there exists a nonuniform deterministic algorithm B "of roughly the same complexity" such that Pr[B(x) = f(x)] ≥ 1-p (where the probability is over a uniformly chosen input x). This implication is often referred to as "the easy direction of Yao's lemma" and can be thought of as "weak derandomization" in the sense that B is deterministic but only succeeds on most inputs. The implication follows because there exists a fixed value r' for the random coins of A such that "hardwiring r' into A" produces a deterministic algorithm B. However, this argument does not give a way to explicitly construct B. In this paper we consider the task of proving uniform versions of the implication above, that is, how to explicitly construct a deterministic algorithm B when given a randomized algorithm A. We prove such derandomization results for several classes of randomized algorithms. These include: randomized communication protocols, randomized decision trees (here we improve a previous result by Zimand), randomized streaming algorithms, and randomized algorithms computed by polynomial-size constant-depth circuits. Our proof uses an approach suggested by Goldreich and Wigderson and "extracts randomness from the input". We show that specialized (seedless) extractors can produce randomness that is, in some sense, not correlated with the input. Our analysis can be applied to any class of randomized algorithms as long as one can explicitly construct the appropriate extractor. Some of our derandomization results follow by constructing a new notion of seedless extractors that we call "extractors for recognizable distributions", which may be of independent interest.
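
For reference, the non-constructive step discussed above can be spelled out in a few lines. The following LaTeX sketch is an illustration rather than text from the paper; it assumes the notation A(x;r) for running A on input x with coin tosses r, and a finite input domain sampled uniformly.

\begin{align*}
  &\text{Hypothesis: for every input } x, \qquad \Pr_{r}\bigl[A(x;r) = f(x)\bigr] \ge 1-p.\\
  &\text{Averaging over uniform } x \text{ and } r: \qquad \Pr_{x,r}\bigl[A(x;r) = f(x)\bigr] = \mathbb{E}_{r}\Bigl[\Pr_{x}\bigl[A(x;r) = f(x)\bigr]\Bigr] \ge 1-p.\\
  &\text{Hence some fixed coin sequence } r' \text{ attains the average:} \qquad \Pr_{x}\bigl[A(x;r') = f(x)\bigr] \ge 1-p.\\
  &\text{Setting } B(x) := A(x;r') \text{ gives a deterministic } B \text{ that agrees with } f \text{ on a } (1-p)\text{-fraction of inputs.}
\end{align*}

The argument only proves that such an r' exists; it gives no explicit way to find it, which is exactly the gap the paper's uniform versions address.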

Published in:

2009 24th Annual IEEE Conference on Computational Complexity (CCC '09)

Date of Conference:

15-18 July 2009