Date: 5-7 June 2013

[Front cover]
Publication Year: 2013, Page(s): C4
[Title page i]
Publication Year: 2013, Page(s): i
[Title page iii]
Publication Year: 2013, Page(s): iii
[Copyright notice]
Publication Year: 2013, Page(s): iv
Table of contents
Publication Year: 2013, Page(s): v-vii

Organizing Committee
Publication Year: 2013, Page(s): ix
Program Committee
Publication Year: 2013, Page(s): x
Reviewers
Publication Year: 2013, Page(s): xi

Random Arithmetic Formulas Can Be Reconstructed Efficiently
Publication Year: 2013, Page(s): 1-9
Informally stated, we present a randomized algorithm that, given black-box access to the polynomial f computed by an unknown/hidden arithmetic formula φ, reconstructs, on average, an equivalent or smaller formula φ̂ in time polynomial in the size of its output φ̂. Specifically, we consider arithmetic formulas wherein the underlying tree is a complete binary tree, the leaf nodes are labelled by affine forms (i.e. degree-one polynomials) over the input variables, and the internal nodes consist of alternating layers of addition and multiplication gates. We call these alternating normal form (ANF) formulas. If a polynomial f can be computed by an arithmetic formula μ of size s, it can also be computed by an ANF formula φ, possibly of slightly larger size s^{O(1)}. Our algorithm gets as input black-box access to the output polynomial f of a random ANF formula φ of size s (i.e. for any point x in the domain, it can query the black box and obtain f(x) in one step), wherein the coefficients of the affine forms in the leaf nodes of φ are chosen independently and uniformly at random from a large enough subset of the underlying field. With high probability (over the choice of coefficients in the leaf nodes), the algorithm efficiently (i.e. in time s^{O(1)}) computes an ANF formula φ̂ of size s computing f. This is then the strongest model of arithmetic computation for which a reconstruction algorithm is presently known, albeit one that is efficient in a distributional sense rather than in the worst case.
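As a toy illustration of the model (not of the reconstruction algorithm itself), the sketch below samples a random ANF formula and exposes it only as a black-box evaluator; the depth, variable count, and coefficient range are arbitrary choices made for the example.

```python
import random

def make_random_anf(depth, n_vars, field_subset, rng, top="+"):
    """Build a random ANF formula: a complete binary tree with alternating
    +/x layers, whose leaves are random affine forms a0 + a1*x1 + ... + an*xn.
    Returns only a black-box evaluator f(point) -> value."""
    if depth == 0:
        coeffs = [rng.choice(field_subset) for _ in range(n_vars + 1)]
        return lambda x: coeffs[0] + sum(c * xi for c, xi in zip(coeffs[1:], x))
    child_op = "*" if top == "+" else "+"
    left = make_random_anf(depth - 1, n_vars, field_subset, rng, child_op)
    right = make_random_anf(depth - 1, n_vars, field_subset, rng, child_op)
    if top == "+":
        return lambda x: left(x) + right(x)
    return lambda x: left(x) * right(x)

rng = random.Random(0)
f = make_random_anf(depth=2, n_vars=2, field_subset=range(-50, 51), rng=rng)
value = f((1, 2))  # one black-box query, as in the reconstruction setting
```

The reconstruction algorithm of the paper sees only such an evaluator, never the tree itself.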

Formulas are Exponentially Stronger than Monotone Circuits in Noncommutative Setting
Publication Year: 2013, Page(s): 10-14
We give an example of a noncommutative monotone polynomial f which can be computed by a polynomial-size noncommutative formula, but every monotone noncommutative circuit computing f must have exponential size. In the noncommutative setting this gives, a fortiori, an exponential separation between monotone and general formulas, monotone and general branching programs, and monotone and general circuits. This answers some questions raised by Nisan.

On Medium-Uniformity and Circuit Lower Bounds
Publication Year: 2013, Page(s): 15-23
We explore relationships between circuit complexity, the complexity of generating circuits, and algorithms for analyzing circuits. Our results can be divided into two parts:
1. Lower Bounds Against Medium-Uniform Circuits. Informally, a circuit class is “medium uniform” if it can be generated by an algorithmic process that is somewhat complex (stronger than LOGTIME) but not infeasible. Using a new kind of indirect diagonalization argument, we prove several new unconditional lower bounds against medium-uniform circuit classes, including:
· For all k, P is not contained in P-uniform SIZE(n^{k}). That is, for all k there is a language L_{k} ∈ P that does not have O(n^{k})-size circuits constructible in polynomial time. This improves Kannan's lower bound from 1982 that NP is not in P-uniform SIZE(n^{k}) for any fixed k.
· For all k, NP is not in P^{NP}-uniform SIZE(n^{k}). This also improves Kannan's theorem, but in a different way: the uniformity condition on the circuits is stronger than that on the language itself.
· For all k, LOGSPACE does not have LOGSPACE-uniform branching programs of size n^{k}.
2. Eliminating Non-Uniformity and (Non-Uniform) Circuit Lower Bounds. We complement these results by showing how to convert any potential simulation of LOGTIME-uniform NC^{1} in ACC^{0}/poly or TC^{0}/poly into a medium-uniform simulation using small advice. This lemma can be used to simplify the proof that faster SAT algorithms imply NEXP circuit lower bounds, and leads to the following new connection:
· Consider the following task: given a TC^{0} circuit C of n^{O(1)} size, output yes when C is unsatisfiable, and output no when C has at least 2^{n-2} satisfying assignments. (Behavior on other inputs can be arbitrary.) Clearly, this problem can be solved efficiently using randomness. If this problem can be solved deterministically in 2^{n-ω(log n)} time, then NEXP ⊄ TC^{0}/poly.
The lemma can also be used to derandomize randomized TC^{0} simulations of NC^{1} on almost all inputs:
· Suppose NC^{1} ⊆ BPTC^{0}. Then for every ε > 0 and every language L in NC^{1}, there is a (uniform) TC^{0} circuit family of polynomial size recognizing a language L' such that L and L' differ on at most 2^{n^ε} inputs of length n, for all n.

Towards a Reverse Newman's Theorem in Interactive Information Complexity
Publication Year: 2013, Page(s): 24-33
Newman's theorem states that we can take any public-coin communication protocol and convert it into one that uses only private randomness with only a little increase in communication complexity. We consider a reversed scenario in the context of information complexity: can we take a protocol that uses private randomness and convert it into one that uses only public randomness while preserving the information revealed to each player? We prove that the answer is yes, at least for protocols that use a bounded number of rounds. As an application, we prove new direct sum theorems through the compression of interactive communication in the bounded-round setting. Furthermore, we show that if a Reverse Newman's Theorem can be proven in full generality, then full compression of interactive communication and fully general direct-sum theorems will result.

Shared Randomness and Quantum Communication in the Multiparty Model
Publication Year: 2013, Page(s): 34-43
We study shared randomness in the context of multiparty number-in-hand communication protocols in the simultaneous message passing model. We show that with three or more players, shared randomness exhibits new interesting properties that have no direct analogues in the two-party case. First, we demonstrate a hierarchy of modes of shared randomness, with the usual shared randomness, where all parties access the same random string, as the strongest form in the hierarchy. We show exponential separations between its levels, and some of our bounds may be of independent interest. For example, we show that the equality function can be solved by a protocol of constant length using the weakest form of shared randomness, which we call XOR-shared randomness. Second, we show that quantum communication cannot replace shared randomness in the k-party case, where k ≥ 3 is any constant. We demonstrate a promise function GP_{k} that can be computed by a classical protocol of constant length when (the strongest form of) shared randomness is available, but any quantum protocol without shared randomness must send n^{Ω(1)} qubits to compute it. Moreover, the quantum complexity of GP_{k} remains n^{Ω(1)} even if the “second strongest” mode of shared randomness is available. While a somewhat similar separation was already known in the two-party case, in the multiparty case our statement is qualitatively stronger:
· In the two-party case, only a relational communication problem with similar properties is known.
· In the two-party case, the gap between the two complexities of a problem can be at most exponential, as it is known that 2^{O(c)} log n qubits can always replace shared randomness in any c-bit protocol.
Our bounds imply that with quantum communication alone, in general, it is not possible to simulate efficiently even a three-bit three-party classical protocol that uses shared randomness.
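A minimal sketch of the mechanism behind constant-communication equality with shared randomness, in the simplest two-party setting with full shared randomness (the paper's actual result concerns XOR-shared randomness and three or more players, so this is only illustrative): for each shared random mask, both players send one parity bit of their masked input.

```python
import random

def smp_equality(x, y, n_bits, trials, rng):
    """SMP-style equality test with shared randomness: for each shared
    random mask r, both players send the parity of (input AND r), and the
    referee accepts iff every pair of parities matches.  One-sided error:
    equal inputs are always accepted; unequal inputs survive each trial
    with probability 1/2, so they are rejected except with prob 2**-trials."""
    for _ in range(trials):
        r = rng.getrandbits(n_bits)  # the shared random string
        if bin(x & r).count("1") % 2 != bin(y & r).count("1") % 2:
            return False
    return True
```

Total communication is 2 * trials bits, independent of n_bits, which is the sense in which shared randomness makes equality "constant length".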

On the Power of Nonadaptive Learning Graphs
Publication Year: 2013, Page(s): 44-55
We introduce a notion of the quantum query complexity of a certificate structure. This is a formalisation of a well-known observation that many quantum query algorithms only require knowledge of the disposition of possible certificates in the input string, not the precise values therein. Next, we derive a dual formulation of the complexity of a nonadaptive learning graph, and use it to show that nonadaptive learning graphs are tight for all certificate structures. By this, we mean that there exists a function possessing the certificate structure for which a learning graph gives an optimal quantum query algorithm. For the special case of certificate structures generated by certificates of bounded size, we construct a relatively general class of functions having this property. The construction is based on orthogonal arrays, and generalizes the quantum query lower bound for the k-sum problem derived recently. Finally, we use these results to show that the best known learning graph for the triangle problem is almost optimal in these settings. This also gives a quantum query lower bound for the triangle-sum problem.

The Correct Exponent for the Gotsman-Linial Conjecture
Publication Year: 2013, Page(s): 56-64
We prove new bounds on the average sensitivity of polynomial threshold functions. In particular, we show that the average sensitivity of a polynomial threshold function of constant degree is not much more than the square root of the dimension of its space of definition. This bound is a significant improvement over previous bounds and, for fixed degree, provides the correct asymptotic exponent in the dimension.
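The quantity bounded above can be computed by brute force for small n; a minimal sketch, using parity (the extreme case, with average sensitivity exactly n) and 3-bit majority (a degree-1 polynomial threshold function) as examples:

```python
from itertools import product

def avg_sensitivity(f, n):
    """Average over all 2^n inputs x of the number of coordinates i
    such that flipping x_i changes the value of f."""
    total = 0
    for x in product((0, 1), repeat=n):
        for i in range(n):
            y = x[:i] + (1 - x[i],) + x[i + 1:]  # x with bit i flipped
            total += f(x) != f(y)
    return total / 2 ** n

parity3 = lambda x: sum(x) % 2      # every flip changes parity: AS = n
maj3 = lambda x: int(sum(x) >= 2)   # 3-bit majority, a degree-1 PTF
```

For constant-degree PTFs the paper shows this quantity grows only like roughly the square root of n, in contrast to parity's linear growth.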

Approaching the Chasm at Depth Four
Publication Year: 2013, Page(s): 65-73
Agrawal-Vinay [AV08] and Koiran [Koi12] have recently shown that an exp(ω(√n log^{2} n)) lower bound for depth four homogeneous circuits computing the permanent, with bottom layer of × gates having fan-in bounded by √n, translates to a super-polynomial lower bound for general arithmetic circuits computing the permanent. Motivated by this, we examine the complexity of computing the permanent and determinant via such homogeneous depth four circuits with bounded bottom fan-in. We show here that any homogeneous depth four arithmetic circuit with bottom fan-in bounded by √n computing the permanent (or the determinant) must be of size exp(Ω(√n)).

Approximating Boolean Functions with Depth-2 Circuits
Publication Year: 2013, Page(s): 74-85
We study the complexity of approximating Boolean functions with DNFs and other depth-2 circuits, exploring two main directions: universal bounds on the approximability of all Boolean functions, and the approximability of the parity function. In the first direction, our main positive results are the first nontrivial universal upper bounds on approximability by DNFs:
· Every Boolean function can be ε-approximated by a DNF of size O_{ε}(2^{n}/log n).
· Every Boolean function can be ε-approximated by a DNF of width c_{ε}n, where c_{ε} < 1.
Our techniques extend broadly to give strong universal upper bounds on approximability by various depth-2 circuits that generalize DNFs, including intersections of halfspaces, low-degree PTFs, and unate functions. We show that the parameters of our constructions come close to matching the information-theoretic inapproximability of a random function. In the second direction, our main positive result is the construction of an explicit DNF that approximates the parity function:
· PAR_{n} can be ε-approximated by a DNF of size 2^{(1-2ε)n} and width (1-2ε)n.
Using Fourier-analytic tools we show that our construction is essentially optimal not just within the class of DNFs, but also within the far more expressive classes of intersections of halfspaces and intersections of unate functions.
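A sketch of what ε-approximation by a DNF means, checked by brute force (this is not the paper's construction): starting from the exact DNF for parity (all 2^{n-1} odd-weight minterms) and dropping terms gives an approximator whose error is exactly the fraction of dropped inputs, but its size (1-2ε)2^{n-1} is far worse than the 2^{(1-2ε)n} bound above.

```python
from itertools import product

def dnf_eval(terms, x):
    """A term maps variable index -> required bit; the DNF accepts x iff
    some term is fully satisfied."""
    return any(all(x[i] == b for i, b in t.items()) for t in terms)

def approx_error(terms, n, target):
    """Fraction of the 2^n inputs on which the DNF disagrees with target."""
    errs = sum(dnf_eval(terms, x) != target(x)
               for x in product((0, 1), repeat=n))
    return errs / 2 ** n

def parity(x):
    return sum(x) % 2 == 1

n = 4
# the exact (size 2^(n-1), width n) DNF for PAR_n: one minterm per odd input
minterms = [dict(enumerate(x)) for x in product((0, 1), repeat=n) if parity(x)]
```

Dropping k minterms produces errors on exactly those k inputs, so the error is k/2^n.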

Constructing Hard Functions Using Learning Algorithms
Publication Year: 2013, Page(s): 86-97
Fortnow and Klivans proved the following relationship between efficient learning algorithms and circuit lower bounds: if a class C ⊆ P/poly of Boolean circuits is exactly learnable with membership and equivalence queries in polynomial time, then EXP^{NP} is not contained in C (the class EXP^{NP} was subsequently improved to EXP by Hitchcock and Harkins). In this paper, we improve on these results and show:
* If C is exactly learnable with membership and equivalence queries in polynomial time, then DTIME(n^{ω(1)}) is not contained in C. We obtain even stronger consequences if C is learnable in the mistake-bounded model, in which case we prove an average-case hardness result against C.
* If C is learnable in polynomial time in the PAC model, then PSPACE is not contained in C, unless PSPACE is contained in BPP. Removing this extra assumption from the statement of the theorem would provide an unconditional separation of PSPACE and BPP.
* If C is efficiently learnable in the Correlational Statistical Query (CSQ) model, we show that there exists an explicit function f that is average-case hard for circuits in C. This result provides stronger average-case hardness guarantees than those obtained by SQ-dimension arguments (Blum et al. 1993). We also obtain a nonconstructive extension of this result to the stronger Statistical Query (SQ) model.
Similar results hold in the case where the learning algorithm runs in subexponential time. Our proofs regarding exact and mistake-bounded learning are simple and self-contained, yield explicit hard functions, and show how to use mistake-bounded learners to “diagonalize” over families of polynomial-size circuits. Our consequences for PAC learning lead to new proofs of Karp-Lipton-style collapse results, and the lower bounds from SQ learning make use of recent work relating combinatorial discrepancy to the existence of hard-on-average functions.

Short Lists with Short Programs in Short Time
Publication Year: 2013, Page(s): 98-108
Given a machine U, a c-short program for x is a string p such that U(p) = x and the length of p is bounded by c + (the length of a shortest program for x). We show that for any universal machine, it is possible to compute in polynomial time, on input x, a list of polynomial size guaranteed to contain a O(log |x|)-short program for x. We also show that there exist computable functions that map every x to a list of size O(|x|^{2}) containing a O(1)-short program for x; this is essentially optimal, because we prove that such a list must have size Ω(|x|^{2}). Finally, we show that for some machines, computable lists containing a shortest program must have length Ω(2^{|x|}).
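The c-short notion is easy to make concrete with a toy machine; everything below, including the two-opcode "machine" itself, is invented purely for illustration and is of course not universal.

```python
def U(p):
    """Toy machine (illustration only, not universal): 'R<k>,<c>' outputs
    character c repeated k times; 'L<s>' outputs s literally."""
    if p.startswith("R"):
        k, c = p[1:].split(",", 1)
        return c * int(k)
    if p.startswith("L"):
        return p[1:]
    return None

def shortest_program_len(x):
    """C_U(x) for the toy machine: the literal program 'L'+x always works,
    and for constant strings the run-length program may be shorter."""
    best = 1 + len(x)
    if len(set(x)) == 1:
        best = min(best, len("R%d,%s" % (len(x), x[0])))
    return best

def is_c_short(p, x, c):
    """p is a c-short program for x if U(p) = x and |p| <= c + C_U(x)."""
    return U(p) == x and len(p) <= c + shortest_program_len(x)
```

For x = "aaaaaaaa", the run-length program "R8,a" (4 characters) is shortest, so the 9-character literal program is 5-short but not 4-short.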

Lower Bounds for DNFrefutations of a Relativized Weak Pigeonhole Principle
Publication Year: 2013, Page(s): 109-120
The relativized weak pigeonhole principle states that if at least 2n out of n^{2} pigeons fly into n holes, then some hole must be doubly occupied. We prove that every DNF-refutation of the CNF encoding of this principle requires size 2^{(log n)^{3/2-ε}} for every ε > 0 and every sufficiently large n. For the proof we need to discuss the existence of unbalanced low-degree bipartite expanders satisfying a certain robustness condition.
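To make the object being refuted concrete, here is a brute-force sketch of the standard (non-relativized, tiny-parameter) pigeonhole CNF; the weak relativized version in the paper differs in its parameters, but the encoding style is the same.

```python
from itertools import product

def php_cnf(m, n):
    """CNF asserting m pigeons map injectively into n holes (unsat if m > n).
    Variable (p, h) means pigeon p occupies hole h; a literal is (var, polarity)."""
    # each pigeon sits in some hole
    clauses = [[((p, h), True) for h in range(n)] for p in range(m)]
    # no hole is doubly occupied
    for h in range(n):
        for p in range(m):
            for q in range(p + 1, m):
                clauses.append([((p, h), False), ((q, h), False)])
    return clauses

def satisfiable(clauses, variables):
    """Brute-force SAT check over all assignments to `variables`."""
    for bits in product((False, True), repeat=len(variables)):
        assign = dict(zip(variables, bits))
        if all(any(assign[v] == pol for v, pol in cl) for cl in clauses):
            return True
    return False
```

Proof systems such as the DNF-refutations studied above certify the unsatisfiability of such formulas; the paper lower-bounds the size any such certificate must have.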

LS+ Lower Bounds from Pairwise Independence
Publication Year: 2013, Page(s): 121-132
We consider the complexity of LS_{+} refutations of unsatisfiable instances of Constraint Satisfaction Problems (k-CSPs) when the underlying predicate supports a pairwise independent distribution on its satisfying assignments. This is the most general condition on the predicates under which the corresponding MAX k-CSP problem is known to be approximation resistant. We show that for random instances of such k-CSPs on n variables, even after Ω(n) rounds of the LS_{+} hierarchy, the integrality gap remains equal to the approximation ratio achieved by a random assignment. In particular, this also shows that LS_{+} refutations for such instances require rank Ω(n). We also show the stronger result that refutations for such instances in the static LS_{+} proof system require size exp(Ω(n)).

Just a Pebble Game
Publication Year: 2013, Page(s): 133-143
The two-player pebble game of Dymond-Tompa is identified as a barrier for existing techniques to save space or to speed up parallel algorithms for evaluation problems. Many combinatorial lower bounds used to study L versus NL and NC versus P under different restricted settings scale in the same way as the pebbling algorithm of Dymond-Tompa. These lower bounds include: (1) the monotone separation of mL from mNL by studying the size of monotone switching networks in Potechin '10; (2) a new semantic separation of NC from P and of NC^{i} from NC^{i+1} by studying circuit depth, based on the techniques developed for the semantic separation of NC^{1} from NC^{2} by the universal composition relation in Edmonds-Impagliazzo-Rudich-Sgall '01 and in Håstad-Wigderson '97; and (3) the monotone separation of mNC from mP and of mNC^{i} from mNC^{i+1} by studying (a) the depth of monotone circuits in Raz-McKenzie '99; and (b) the size of monotone switching networks in Chan-Potechin '12. This supports the attempt to separate NC from P by focusing on depth complexity, and suggests the study of combinatorial invariants shaped by pebbling for proving lower bounds. An application to proof complexity gives tight bounds for the size and the depth of some refinements of resolution refutations.
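For intuition, a sketch of the classical single-player black pebble game, a close relative of the two-player Dymond-Tompa game discussed above: a pebble may be placed on a node once all its predecessors carry pebbles, pebbles may be removed at any time, and the cost is the maximum number of pebbles ever on the DAG. The brute-force state search below is feasible only for toy DAGs.

```python
def can_pebble(dag, sink, k):
    """Can `sink` be pebbled using at most k pebbles at any time?
    dag maps each node to the tuple of its predecessors."""
    start = frozenset()
    seen = {start}
    stack = [start]
    while stack:
        s = stack.pop()
        if sink in s:
            return True
        for v in dag:  # place a pebble (all predecessors must be pebbled)
            if v not in s and set(dag[v]) <= s and len(s) < k:
                t = s | {v}
                if t not in seen:
                    seen.add(t)
                    stack.append(t)
        for v in s:    # remove a pebble
            t = s - {v}
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return False

def pebbling_number(dag, sink):
    """Smallest k for which the sink can be pebbled."""
    k = 1
    while not can_pebble(dag, sink, k):
        k += 1
    return k
```

For example, a path a -> b -> c can be pebbled with 2 pebbles (pebble a, pebble b, remove a, pebble c), while a two-leaf tree already needs 3.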

Quantum XOR Games
Publication Year: 2013, Page(s): 144-155
We introduce quantum XOR games, a model of two-player one-round games that extends the model of XOR games by allowing the referee's questions to the players to be quantum states. We give examples showing that quantum XOR games exhibit a wide range of behaviors that are known not to exist for standard XOR games, such as cases in which the use of entanglement leads to an arbitrarily large advantage over the use of no entanglement. By invoking two deep extensions of Grothendieck's inequality, we present an efficient algorithm that gives a constant-factor approximation to the best performance players can obtain in a given game, both in the case that they have no shared entanglement and in the case that they share unlimited entanglement. As a byproduct of the algorithm we prove some additional interesting properties of quantum XOR games, such as the fact that sharing a maximally entangled state of arbitrary dimension gives only a small advantage over having no entanglement at all.