Date: 9–12 June 2010

[Front cover]
Page(s): C1 
[Title page i]
Page(s): i 
[Title page iii]
Page(s): iii 
[Copyright notice]
Page(s): iv 
Table of contents
Page(s): v–vii 
Preface
Page(s): viii 
Committees
Page(s): ix 
List of Reviewers
Page(s): x 
Awards
Page(s): xi 
Parallel Repetition of Two Prover Games (Invited Survey)
Page(s): 3–6
The parallel repetition theorem states that for any two-prover game with value smaller than 1, parallel repetition reduces the value of the game at an exponential rate. We give a short introduction to the problem of parallel repetition of two-prover games and some of its applications in theoretical computer science, mathematics and physics. We will concentrate mainly on recent results.

No Strong Parallel Repetition with Entangled and Non-signaling Provers
Page(s): 7–15
We consider one-round games between a classical verifier and two provers. One of the main questions in this area is the parallel repetition question: if the game is played ℓ times in parallel, does the maximum winning probability decay exponentially in ℓ? In the classical setting, this question was answered in the affirmative by Raz. More recently the question arose whether the decay is of the form (1 − Θ(ε))^{ℓ}, where 1 − ε is the value of the game and ℓ is the number of repetitions. This question is known as the strong parallel repetition question and was motivated by its connections to the unique games conjecture. It was resolved by Raz, who showed that strong parallel repetition does not hold, even in the very special case of games known as XOR games. This opens the question whether strong parallel repetition holds when the provers share entanglement. Evidence for this is provided by the behavior of XOR games, which have strong (in fact perfect) parallel repetition, and by the recently proved strong parallel repetition of linear unique games. A similar question was open for games with so-called non-signaling provers. Here the best known parallel repetition theorem is due to Holenstein, and is of the form (1 − Θ(ε^{2}))^{ℓ}. We show that strong parallel repetition holds neither with entangled provers nor with non-signaling provers. In particular we obtain that Holenstein's bound is tight. Along the way we also provide a tight characterization of the asymptotic behavior of the entangled value under parallel repetition of unique games in terms of a semidefinite program.
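The strong parallel repetition question can be made concrete on a toy example. The sketch below is our own illustration, not from the paper: it brute-forces the exact classical value of the CHSH XOR game and of its two-fold parallel repetition over all deterministic strategies, so the exact value w(G²) can be compared with w(G)².

```python
from itertools import product

# CHSH predicate: players receive bits x, y and answer bits a, b;
# they win iff a XOR b == x AND y.
def chsh_win(x, y, a, b):
    return (a ^ b) == (x & y)

def game_value(n):
    """Exact classical value of n parallel CHSH repetitions, by brute force
    over all deterministic strategies (optimal strategies can be taken
    deterministic, since shared randomness is an average over them)."""
    inputs = list(product([0, 1], repeat=n))    # 2^n possible questions
    answers = list(product([0, 1], repeat=n))   # 2^n possible answers
    # A deterministic strategy assigns an answer tuple to every question tuple.
    strategies = list(product(answers, repeat=len(inputs)))
    best = 0
    for alice in strategies:
        for bob in strategies:
            wins = 0
            for i, x in enumerate(inputs):
                for j, y in enumerate(inputs):
                    a, b = alice[i], bob[j]
                    if all(chsh_win(x[t], y[t], a[t], b[t]) for t in range(n)):
                        wins += 1
            best = max(best, wins)
    return best / (len(inputs) ** 2)

v1 = game_value(1)   # classical CHSH value: 3/4
v2 = game_value(2)   # two-fold parallel value
print(v1, v2, v1 ** 2)   # comparing v2 with v1^2 probes the strong-repetition question
```

The general bounds guarantee v1² ≤ v2 ≤ v1; whether the repeated value sits at the lower end is exactly what strong parallel repetition asks.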

Derandomized Parallel Repetition of Structured PCPs
Page(s): 16–27
A PCP is a proof system for NP in which the proof can be checked by a probabilistic verifier. The verifier is only allowed to read a very small portion of the proof, and in return is allowed to err with some bounded probability. The probability that the verifier accepts a false proof is called the soundness error, and is an important parameter of a PCP system that one seeks to minimize. Constructing PCPs with sub-constant soundness error and, at the same time, a minimal number of queries into the proof (namely two) is especially important due to applications to inapproximability. In this work we construct such PCP verifiers, i.e., PCPs that make only two queries and have sub-constant soundness error. Our construction can be viewed as a combinatorial alternative to the "manifold vs. point" construction, which is the only construction in the literature for this parameter range. The "manifold vs. point" PCP is based on a low degree test, while our construction is based on a direct product test. Our construction of a PCP is based on extending the derandomized direct product test of Impagliazzo, Kabanets and Wigderson (STOC '09) to a derandomized parallel repetition theorem. More accurately, our PCP construction is obtained in two steps. We first prove a derandomized parallel repetition theorem for specially structured PCPs. Then, we show that any PCP can be transformed into one that has the required structure, by embedding it on a de Bruijn graph.

Derandomized Parallel Repetition Theorems for Free Games
Page(s): 28–37
Raz's parallel repetition theorem, together with improvements of Holenstein, shows that for any two-prover one-round game with value at most 1 − ε (for ε ≤ 1/2), the value of the game repeated n times in parallel on independent inputs is at most (1 − ε)^{Ω(ε^{2}n/ℓ)}, where ℓ is the answer length of the game. For free games (which are games in which the inputs to the two players are uniform and independent) the constant 2 in the exponent can be replaced with 1 by a result of Barak, Rao, Raz, Rosen and Shaltiel. Consequently, n = O(tℓ/ε) repetitions suffice to reduce the value of a free game from 1 − ε to (1 − ε)^{t}, and denoting the input length of the game by m, it follows that nm = O(tℓm/ε) random bits can be used to prepare n independent inputs for the parallel repetition game. In this paper we prove a derandomized version of the parallel repetition theorem for free games and show that O(t(m + ℓ)) random bits can be used to generate correlated inputs such that the value of the parallel repetition game on these inputs has the same behavior. Thus, in terms of randomness complexity, correlated parallel repetition can reduce the value of free games at the "correct rate" when ℓ = O(m). Our technique uses strong extractors to "derandomize" a lemma of Raz, and can also be used to derandomize a parallel repetition theorem of Parnafes, Raz and Wigderson for communication games in the special case that the game is free.

Derandomizing Arthur-Merlin Games and Approximate Counting Implies Exponential-Size Lower Bounds
Page(s): 38–49
We show that if Arthur-Merlin protocols can be derandomized, then there is a Boolean function computable in deterministic exponential time with access to an NP oracle that cannot be computed by Boolean circuits of exponential size. More formally, if prAM ⊆ P^{NP} then there is a Boolean function in E^{NP} that requires circuits of size 2^{Ω(n)}. Here prAM is the class of promise problems that have Arthur-Merlin protocols, P^{NP} is the class of functions that can be computed in deterministic polynomial time with an NP oracle, and E^{NP} is its exponential analogue. The lower bound in the conclusion of our theorem suffices to construct very strong pseudorandom generators. We also show that the same conclusion holds if approximately counting the number of accepting paths of a nondeterministic Turing machine up to multiplicative factors can be done in nondeterministic polynomial time. In other words, showing nondeterministic fully polynomial-time approximation schemes for #P-complete problems requires proving exponential-size circuit lower bounds. A few works have already shown that if we can find efficient deterministic solutions to some specific tasks (or classes) that are known to be solvable efficiently by randomized algorithms (or proofs), then we obtain lower bounds against certain circuit models. These lower bounds were only with respect to polynomial-size circuits even if full derandomization is assumed; thus they only implied fairly weak pseudorandom generators (if at all). A key ingredient in our proof is a connection between computational learning theory and exponential-size lower bounds. We show that the existence of deterministic learning algorithms with certain properties implies exponential-size lower bounds, where the complexity of the hard function is related to the complexity of the learning algorithm.

Simple Affine Extractors Using Dimension Expansion
Page(s): 50–57
Let F_{q} be the field of q elements. An (n, k)-affine extractor is a mapping D : F_{q}^{n} → {0,1} such that for any k-dimensional affine subspace X ⊆ F_{q}^{n}, D(x) is an almost unbiased bit when x is chosen uniformly from X. Loosely speaking, the problem of explicitly constructing affine extractors gets harder as q gets smaller and easier as k gets larger. This is reflected in previous results: when q is 'large enough', specifically q = Ω(n^{2}), Gabizon and Raz construct affine extractors for any k ≥ 1. In the 'hardest case', i.e. when q = 2, Bourgain constructs affine extractors for k ≥ δn for any constant (and even slightly sub-constant) δ > 0. Our main result is the following: fix any k ≥ 2 and let d = 5n/k. Then whenever q > 2 · d^{2} and p = char(F_{q}) > d, we give an explicit (n, k)-affine extractor. For example, when k = δn for constant δ > 0, we get an extractor for a field of constant size Ω((1/δ)^{2}). We also get weaker results for fields of arbitrary characteristic (but can still work with a constant field size when k = δn for constant δ > 0). Thus our result may be viewed as a 'field-size/dimension' trade-off for affine extractors. For a wide range of k this gives a new result, but even for large k where we do not improve (or even match) previous results, we believe that our construction and proof have the advantage of being very simple: assume n is prime and d is odd, and fix any nontrivial linear map T : F_{q}^{n} → F_{q}. Define QR : F_{q} → {0,1} by QR(x) = 1 if and only if x is a quadratic residue. Then, the function D : F_{q}^{n} → {0,1} defined by D(x) ≜ QR(T(x^{d})) is an (n, k)-affine extractor. Our proof uses a result of Hou, Leung and Xiang giving a lower bound on the dimension of products of subspaces.
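The construction above is simple enough to run. In the sketch below, the parameter choices (n = 2, k = 2, hence d = 5, over F_{53}) and the concrete representation of F_{q^n} as F_q[α] with α² equal to a non-residue are our own, for illustration only; the sketch instantiates D(x) = QR(T(x^{d})) with T the projection onto the first coordinate and measures the bias of D over the full space (the k = n case of a k-dimensional subspace).

```python
# Illustrative parameters: n = 2, k = 2, d = 5n/k = 5, q = 53.
# Constraints from the abstract: q > 2*d^2 = 50, char(F_q) = 53 > d = 5,
# n = 2 prime, d = 5 odd.
q = 53
d = 5

def qr_bit(a):
    """QR(a) = 1 iff a is a nonzero quadratic residue mod q (Euler's criterion)."""
    return 1 if a != 0 and pow(a, (q - 1) // 2, q) == 1 else 0

def find_nonresidue():
    for r in range(2, q):
        if pow(r, (q - 1) // 2, q) == q - 1:
            return r

R = find_nonresidue()   # alpha^2 = R makes x^2 - R irreducible over F_q

def mul(x, y):
    """Multiply x = (a, b) and y = (c, e) in F_{q^2}, i.e. (a + b*alpha)(c + e*alpha)."""
    a, b = x
    c, e = y
    return ((a * c + b * e * R) % q, (a * e + b * c) % q)

def power(x, m):
    """x^m in F_{q^2} by repeated squaring; (1, 0) is the multiplicative identity."""
    result, base = (1, 0), x
    while m:
        if m & 1:
            result = mul(result, base)
        base = mul(base, base)
        m >>= 1
    return result

def D(x):
    """D(x) = QR(T(x^d)), with T the projection (a, b) -> a, a nontrivial linear map."""
    a, _ = power(x, d)
    return qr_bit(a)

# Bias of D over all of F_q^2.
ones = sum(D((a, b)) for a in range(q) for b in range(q))
bias = abs(ones / q ** 2 - 0.5)
print(f"bias = {bias:.4f}")   # small, as an extractor bit should be
```

Here the bias is tiny because x ↦ x^5 permutes F_{53²} (gcd(5, 53² − 1) = 1) and the projection then spreads the residue character evenly; for a proper subspace with k < n the extractor property is what the paper's theorem establishes.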

Derandomizing from Random Strings
Page(s): 58–63
In this paper we show that BPP is truth-table reducible to the set of Kolmogorov random strings R_{K}. It was previously known that PSPACE, and hence BPP, is Turing-reducible to R_{K}. The earlier proof relied on the adaptivity of the Turing reduction to find a Kolmogorov-random string of polynomial length using the set R_{K} as oracle. Our new nonadaptive result relies on a new fundamental fact about the set R_{K}, namely that each initial segment of the characteristic sequence of R_{K} has high Kolmogorov complexity. As a partial converse to our claim, we show that strings of very high Kolmogorov complexity, when used as advice, are not much more useful than randomly chosen strings.

On the Power of Randomized Reductions and the Checkability of SAT
Page(s): 64–75
We prove new results regarding the complexity of various complexity classes under randomized oracle reductions. We first prove that BPP^{PSZK} ⊆ AM ∩ coAM, where PSZK is the class of promise problems having statistical zero knowledge proofs. This strengthens the previously known facts that PSZK is closed under NC^{1} truth-table reductions (Sahai and Vadhan, J. ACM '03) and that P^{PSZK} ⊆ AM ∩ coAM (Vadhan, personal communication). Our proof relies on showing that a certain class of real-valued functions that we call ℝTUAM can be approximated using an AM protocol. Then we investigate the power of randomized oracle reductions in relation to the notion of instance checking (Blum and Kannan, J. ACM '95). We observe that a theorem of Beigel implies that if any problem in TFNP, such as Nash equilibrium, is NP-hard under randomized oracle reductions, then SAT is checkable. We also observe that Beigel's theorem can be extended to an average-case setting by relating checking to the notion of program testing (Blum et al., JCSS '93). From this, we derive that if one-way functions can be based on NP-hardness via a randomized oracle reduction, then SAT is checkable. By showing that NP has a non-uniform tester, we also show that a worst-case to average-case randomized oracle reduction for any relation (or language) R ∈ NP implies that R has a non-uniform instance checker. These results hold even for adaptive randomized oracle reductions.

A New Sampling Protocol and Applications to Basing Cryptographic Primitives on the Hardness of NP
Page(s): 76–87
We investigate the question of what languages can be decided efficiently with the help of a recursive collision-finding oracle. Such an oracle can be used to break collision-resistant hash functions or, more generally, statistically hiding commitments. The oracle we consider, Sam_{d} where d is the recursion depth, is based on the identically named oracle defined in the work of Haitner et al. (FOCS '07). Our main result is a constant-round public-coin protocol "AMSam" that allows an efficient verifier to emulate a Sam_{d} oracle for any constant depth d = O(1) with the help of a BPP^{NP} prover. AMSam allows us to conclude that if L is decidable by a k-adaptive randomized oracle algorithm with access to a Sam_{O(1)} oracle, then L ∈ AM[k] ∩ coAM[k]. The above yields the following corollary: assume there exists an O(1)-adaptive reduction that bases constant-round statistically hiding commitment on NP-hardness; then NP ⊆ coAM and the polynomial hierarchy collapses. The same result holds for any primitive that can be broken by Sam_{O(1)}, including collision-resistant hash functions and O(1)-round oblivious transfer where security holds statistically for one of the parties. We also obtain nontrivial (though weaker) consequences for k-adaptive reductions for any k = poly(n). Prior to our work, most results in this research direction either applied only to nonadaptive reductions (Bogdanov and Trevisan, SIAM J. of Comp. '06 and Akavia et al., FOCS '06) or to one-way permutations (Brassard, FOCS '79).
The main technical tool we use to prove the above is a new constant-round public-coin protocol (SampleWithSize), which we believe to be of interest in its own right, that guarantees the following: given an efficient function f on n bits, let D be the output distribution D = f(U_{n}); then SampleWithSize allows an efficient verifier Arthur to use an all-powerful prover Merlin's help to sample a random y ← D along with a good multiplicative approximation of the probability p_{y} = Pr_{y' ← D}[y' = y]. The crucial feature of SampleWithSize is that it extends even to distributions of the form D = f(U_{S}), where U_{S} is the uniform distribution on an efficiently decidable subset S ⊆ {0,1}^{n} (such D are called efficiently samplable with post-selection), as long as the verifier is also given a good approximation of the size |S|.

The Program-Enumeration Bottleneck in Average-Case Complexity Theory
Page(s): 88–95
Three fundamental results of Levin involve algorithms or reductions whose running time is exponential in the length of certain programs. We study the question of whether such dependency can be made polynomial. 1) Levin's "optimal search algorithm" performs at most a constant factor more slowly than any other fixed algorithm. The constant, however, is exponential in the length of the competing algorithm. We note that the running time of a universal search cannot be made "fully polynomial" (that is, the relation between slowdown and program length cannot be made polynomial), unless P = NP. 2) Levin's "universal one-way function" result has the following structure: there is a polynomial-time computable function f_{Levin} such that if there is a polynomial-time computable adversary A that inverts f_{Levin} on an inverse-polynomial fraction of inputs, then for every polynomial-time computable function g there also is a polynomial-time adversary A_{g} that inverts g on an inverse-polynomial fraction of inputs. Unfortunately, again the running time of A_{g} depends exponentially on the bit length of the program that computes g in polynomial time. We show that a fully polynomial uniform reduction from an arbitrary one-way function to a specific one-way function is not possible relative to an oracle that we construct, and so no "universal one-way function" can have a fully polynomial security analysis via relativizing techniques. 3) Levin's completeness result for distributional NP problems implies that if a specific problem in NP is easy on average under the uniform distribution, then every language L in NP is also easy on average under any polynomial-time computable distribution. The running time of the implied algorithm for L, however, depends exponentially on the bit length of the nondeterministic polynomial-time Turing machine that decides L.
We show that if a completeness result for distributional NP can be proved via a "fully uniform" and "fully polynomial" time reduction, then there is a worst-case to average-case reduction for NP-complete problems. In particular, this means that a fully polynomial completeness result for distributional NP is impossible, even via randomized truth-table reductions, unless the polynomial hierarchy collapses.
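The schedule behind Levin's optimal search in item 1 can be illustrated concretely. In the toy sketch below, the "programs", their assigned lengths, and the inversion task are an invented example of ours: each candidate program receives a share of each doubling time budget proportional to 2^{−length}, which is exactly what makes the slowdown a constant factor, but a constant that is exponential in the length of the best program.

```python
def universal_search(programs, verify, max_phase=30):
    """Levin-style dovetailing. programs: list of (length, generator_factory);
    each generator yields one candidate answer per simulation step. In phase t,
    a program of description length L gets ~2^t / 2^L total steps, so the best
    program is slowed down by a factor 2^{O(L_best)}."""
    gens = [(length, factory()) for length, factory in programs]
    done = [0] * len(gens)                    # steps already simulated per program
    for phase in range(1, max_phase + 1):
        for i, (length, gen) in enumerate(gens):
            budget = (2 ** phase) >> length   # ~ 2^phase / 2^length total steps
            while done[i] < budget:
                done[i] += 1
                try:
                    candidate = next(gen)
                except StopIteration:
                    done[i] = float('inf')    # program halted without success
                    break
                if candidate is not None and verify(candidate):
                    return candidate
    return None

# Toy task: invert f(y) = y*y, i.e. find y with y*y == 289.
N = 289

def brute():            # a "long program": enumerate all integers
    y = 0
    while True:
        yield y
        y += 1

def clever():           # a "short program": compute the integer square root
    import math
    yield math.isqrt(N)

programs = [(6, brute), (2, clever)]
print(universal_search(programs, lambda y: y * y == N))   # -> 17
```

The short program wins quickly here; had only the long program existed, the search would still succeed, just with a budget overhead exponential in that program's length.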

On the Unique Games Conjecture (Invited Survey)
Page(s): 99–121
This article surveys recently discovered connections between the Unique Games Conjecture and computational complexity, algorithms, discrete Fourier analysis, and geometry.

Spectral Algorithms for Unique Games
Page(s): 122–130
We present a new algorithm for Unique Games which is based on purely spectral techniques, in contrast to previous work in the area, which relies heavily on semidefinite programming (SDP). Given a highly satisfiable instance of Unique Games, our algorithm is able to recover a good assignment. The approximation guarantee depends only on the completeness of the game, and not on the alphabet size, while the running time depends on spectral properties of the label-extended graph associated with the instance of Unique Games. In particular, we show how our techniques imply a quasi-polynomial time algorithm that decides satisfiability of a game on the Khot-Vishnoi [14] integrality gap instance, an instance on which the standard SDP relaxation of Unique Games is known to fail. As a special case, we also show how to re-derive a polynomial time algorithm for Unique Games on expander constraint graphs (similar to [2]) and a subexponential time algorithm for Unique Games on the hypercube.

A LogSpace Algorithm for Reachability in Planar Acyclic Digraphs with Few Sources
Page(s): 131–138
Designing algorithms that use logarithmic space for graph reachability problems is fundamental to complexity theory. It is well known that for general directed graphs this problem is equivalent to the NL vs. L problem. This paper focuses on the reachability problem over planar graphs, where the complexity is unknown. Showing that the planar reachability problem is NL-complete would show that nondeterministic logspace computations can be made unambiguous. On the other hand, very little is known about classes of planar graphs that admit logspace algorithms. We present a new 'source-based' structural decomposition method for planar DAGs. Based on this decomposition, we show that reachability for planar DAGs with m sources can be decided deterministically in O(m + log n) space. This leads to a logspace algorithm for reachability in planar DAGs with O(log n) sources. Our result drastically improves the class of planar graphs for which we know how to decide reachability in deterministic logspace. Specifically, the class extends from planar DAGs with at most two sources to those with O(log n) sources.

On the Matching Problem for Special Graph Classes
Page(s): 139–150
An even cycle in a graph is called nice by Lovász and Plummer in [LP86] if the graph obtained by deleting all vertices of the cycle has some perfect matching. In the present paper we prove some new complexity bounds for various versions of problems related to perfect matchings in graphs with a polynomially bounded number of nice cycles. We show that for graphs with a polynomially bounded number of nice cycles the perfect matching decision problem is in SPL, it is hard for FewL, and the perfect matching construction problem is in L^{C=L} ∩ ⊕L. Furthermore, we significantly improve the best known upper bounds, proved by Agrawal, Hoang, and Thierauf in the STACS '07 paper [AHT07], for the polynomially bounded perfect matching problem by showing that the construction and the counting versions are in C=L ∩ ⊕L and in C=L, respectively. Note that SPL, ⊕L, C=L and L^{C=L} are contained in NC^{2}. Moreover, we show that the problem of computing a maximum matching for bipartite planar graphs is in L^{C=L}. This solves Open Question 4.7 stated in the STACS '08 paper by Datta, Kulkarni, and Roy [DKR08], where it is asked whether computing a maximum matching even for bipartite planar graphs can be done in NC. We also show that the problem of computing a maximum matching for graphs with a polynomially bounded number of even cycles is in L^{C=L}.
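Matching problems land in determinant-based classes such as C=L because perfect matching itself reduces to a determinant test. As background (this is Lovász's classical randomized reduction, not the paper's algorithm), here is a hedged sketch: a graph has a perfect matching iff its Tutte matrix is non-singular as a matrix of indeterminates, which we can check with overwhelming probability by evaluating at random points modulo a large prime.

```python
import random

P = (1 << 61) - 1   # a large prime modulus (2^61 - 1)

def det_mod(M, p=P):
    """Determinant of a square integer matrix over F_p, by Gaussian elimination."""
    n = len(M)
    M = [row[:] for row in M]
    det = 1
    for c in range(n):
        pivot = next((r for r in range(c, n) if M[r][c] % p), None)
        if pivot is None:
            return 0
        if pivot != c:
            M[c], M[pivot] = M[pivot], M[c]
            det = -det                      # row swap flips the sign
        det = det * M[c][c] % p
        inv = pow(M[c][c], p - 2, p)        # modular inverse via Fermat
        for r in range(c + 1, n):
            f = M[r][c] * inv % p
            for k in range(c, n):
                M[r][k] = (M[r][k] - f * M[c][k]) % p
    return det % p

def has_perfect_matching(n, edges, trials=2):
    """Lovász's test: fill the skew-symmetric Tutte matrix with random values;
    a nonzero determinant certifies a perfect matching (error prob <= n/P per trial)."""
    for _ in range(trials):
        T = [[0] * n for _ in range(n)]
        for (i, j) in edges:
            x = random.randrange(1, P)
            T[i][j], T[j][i] = x, -x
        if det_mod(T) != 0:
            return True
    return False

# K4 has a perfect matching; the star K_{1,3} does not.
k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
star = [(0, 1), (0, 2), (0, 3)]
print(has_perfect_matching(4, k4), has_perfect_matching(4, star))
```

Determinants are computable in the logspace counting classes mentioned in the abstract, which is one reason matching questions sit naturally there.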

On the Relative Strength of Pebbling and Resolution
Page(s): 151–162
The last decade has seen a revival of interest in pebble games in the context of proof complexity. Pebbling has proven to be a useful tool for studying resolution-based proof systems when comparing the strength of different subsystems, showing bounds on proof space, and establishing size-space trade-offs. The typical approach has been to encode the pebble game played on a graph as a CNF formula and then argue that proofs of this formula must inherit (various aspects of) the pebbling properties of the underlying graph. Unfortunately, the reductions used here are not tight. To simulate resolution proofs by pebblings, the full strength of nondeterministic black-white pebbling is needed, whereas resolution is only known to be able to simulate deterministic black pebbling. To obtain strong results, one therefore needs to find specific graph families which either have essentially the same properties for black and black-white pebbling (not at all true in general) or which admit simulations of black-white pebblings in resolution. This paper contributes to both these approaches. First, we design a restricted form of black-white pebbling that can be simulated in resolution and show that there are graph families for which such restricted pebblings can be asymptotically better than black pebblings. This proves that, perhaps somewhat unexpectedly, resolution can strictly beat black-only pebbling, and in particular that the space lower bounds on pebbling formulas in [Ben-Sasson and Nordström 2008] are tight. Second, we present a versatile parametrized graph family with essentially the same properties for black and black-white pebbling, which gives sharp simultaneous trade-offs for black and black-white pebbling for various parameter settings. Both of our contributions have been instrumental in obtaining the time-space trade-off results for resolution-based proof systems in [Ben-Sasson and Nordström 2009].

Trade-Off Lower Bounds for Stack Machines
Page(s): 163–171
A space-bounded Stack Machine is a regular Turing Machine with a read-only input tape, several space-bounded read-write work tapes, and an unbounded stack. Stack Machines with a logarithmic space bound have been connected to other classical models of computation, such as polynomial-time Turing Machines (P) (Cook, 1971) and polynomial-size, polylogarithmic-depth, bounded fan-in circuits (NC), e.g., (Borodin et al., 1989). In this paper, we give the first known lower bound for Stack Machines. This comes in the form of a trade-off lower bound between space and number of passes over the input tape. Specifically, we give an explicit permuted inner product function such that any Stack Machine computing this function requires either sublinear polynomial space or a sublinear polynomial number of passes. In the case of logarithmic-space Stack Machines, this yields an unconditional sublinear polynomial lower bound on the number of passes. To put this result in perspective, we note that Stack Machines with logarithmic space and a single pass over the input can compute Parity, Majority, as well as certain languages outside NC. The latter follows from (Allender, 1989), conditional on the widely believed complexity assumption that EXP is different from PSPACE. Our technique is a novel communication complexity reduction, thereby extending the already wide range of models of computation for which communication complexity can be used to obtain lower bounds. Informally, we show that a k-player number-in-hand communication protocol for a base function f can efficiently simulate a space- and pass-bounded Stack Machine for a related function F, which consists of several permuted instances of f, bundled together by a combining function h. Trade-off lower bounds for Stack Machines then follow from known communication complexity lower bounds.
The framework for this reduction was given by (Beame and Huynh-Ngoc, 2008), who used it to obtain similar trade-off lower bounds for Turing Machines with a constant number of pass-bounded external tapes. We also prove that the latter cannot efficiently simulate Stack Machines, conditional on the complexity assumption that E is not a subset of PSPACE. It is the treatment of an unbounded stack which constitutes the main technical novelty in our communication complexity reduction.