2013 IEEE Conference on Computational Complexity (CCC)

Date: 5-7 June 2013

Displaying Results 1 - 25 of 41
  • [Front cover]

    Publication Year: 2013, Page(s): C4
    PDF (952 KB) | Freely Available from IEEE
  • [Title page i]

    Publication Year: 2013, Page(s): i
    PDF (116 KB) | Freely Available from IEEE
  • [Title page iii]

    Publication Year: 2013, Page(s): iii
    PDF (313 KB) | Freely Available from IEEE
  • [Copyright notice]

    Publication Year: 2013, Page(s): iv
    PDF (120 KB) | Freely Available from IEEE
  • Table of contents

    Publication Year: 2013, Page(s): v - vii
    PDF (137 KB) | Freely Available from IEEE
  • Preface

    Publication Year: 2013, Page(s): viii
    PDF (112 KB) | HTML | Freely Available from IEEE
  • Organizing Committee

    Publication Year: 2013, Page(s): ix
    PDF (212 KB) | Freely Available from IEEE
  • Program Committee

    Publication Year: 2013, Page(s): x
    PDF (208 KB) | Freely Available from IEEE
  • Reviewers

    Publication Year: 2013, Page(s): xi
    PDF (83 KB) | Freely Available from IEEE
  • CCC 2013 Awards

    Publication Year: 2013, Page(s): xii
    PDF (86 KB) | HTML | Freely Available from IEEE
  • Random Arithmetic Formulas Can Be Reconstructed Efficiently

    Publication Year: 2013, Page(s): 1 - 9
    PDF (247 KB) | HTML

    Informally stated, we present here a randomized algorithm that, given blackbox access to the polynomial f computed by an unknown/hidden arithmetic formula φ, reconstructs, on average, an equivalent or smaller formula φ̂ in time polynomial in the size of its output φ̂. Specifically, we consider arithmetic formulas wherein the underlying tree is a complete binary tree, the leaf nodes are labelled by affine forms (i.e. degree-one polynomials) over the input variables, and the internal nodes consist of alternating layers of addition and multiplication gates. We call these alternating normal form (ANF) formulas. If a polynomial f can be computed by an arithmetic formula μ of size s, it can also be computed by an ANF formula φ, possibly of slightly larger size s^{O(1)}. Our algorithm gets as input blackbox access to the output polynomial f (i.e. for any point x in the domain, it can query the blackbox and obtain f(x) in one step) of a random ANF formula φ of size s (wherein the coefficients of the affine forms in the leaf nodes of φ are chosen independently and uniformly at random from a large enough subset of the underlying field). With high probability (over the choice of coefficients in the leaf nodes), the algorithm efficiently (i.e. in time s^{O(1)}) computes an ANF formula φ̂ of size s computing f. This is then the strongest model of arithmetic computation for which a reconstruction algorithm is presently known, albeit one efficient in a distributional sense rather than in the worst case.

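To make the ANF model concrete — a complete binary tree whose leaves are affine forms and whose internal layers alternate between addition and multiplication gates — here is a minimal sketch of an evaluator providing the kind of blackbox access the algorithm assumes. The (coefficients, constant) leaf encoding and all names are illustrative choices, not taken from the paper:

```python
import random

def eval_anf(affine_leaves, x, bottom_gate="*"):
    """Evaluate an alternating-normal-form formula at the point x.

    affine_leaves: one (coeffs, const) pair per leaf of a complete binary
    tree (so the number of leaves is a power of two); each leaf computes
    const + sum_i coeffs[i] * x[i].  The bottom internal layer applies
    bottom_gate to adjacent pairs, and the layers above alternate.
    """
    vals = [const + sum(c * xi for c, xi in zip(coeffs, x))
            for coeffs, const in affine_leaves]
    gate = bottom_gate
    while len(vals) > 1:
        if gate == "*":
            vals = [vals[i] * vals[i + 1] for i in range(0, len(vals), 2)]
        else:
            vals = [vals[i] + vals[i + 1] for i in range(0, len(vals), 2)]
        gate = "+" if gate == "*" else "*"  # layers alternate
    return vals[0]

# A random ANF formula with 8 leaves over 3 variables, coefficients drawn
# uniformly from a largish range, as in the paper's random model.
leaves = [([random.randrange(1, 1000) for _ in range(3)], random.randrange(1, 1000))
          for _ in range(8)]
print(eval_anf(leaves, [1, 2, 3]))  # one blackbox query f(1, 2, 3)
```

Repeated calls at chosen points are exactly the blackbox queries the reconstruction algorithm is allowed; the reconstruction procedure itself is substantially more involved and is not sketched here.
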
  • Formulas are Exponentially Stronger than Monotone Circuits in Non-commutative Setting

    Publication Year: 2013, Page(s): 10 - 14
    PDF (167 KB) | HTML

    We give an example of a non-commutative monotone polynomial f which can be computed by a polynomial-size non-commutative formula, but every monotone non-commutative circuit computing f must have exponential size. In the non-commutative setting this gives, a fortiori, an exponential separation between monotone and general formulas, monotone and general branching programs, and monotone and general circuits. This answers some questions raised by Nisan.

  • On Medium-Uniformity and Circuit Lower Bounds

    Publication Year: 2013, Page(s): 15 - 23
    Cited by: Papers (1)
    PDF (270 KB) | HTML

    We explore relationships between circuit complexity, the complexity of generating circuits, and algorithms for analyzing circuits. Our results can be divided into two parts:
    1. Lower bounds against medium-uniform circuits. Informally, a circuit class is “medium uniform” if it can be generated by an algorithmic process that is somewhat complex (stronger than LOGTIME) but not infeasible. Using a new kind of indirect diagonalization argument, we prove several new unconditional lower bounds against medium-uniform circuit classes, including: (a) For all k, P is not contained in P-uniform SIZE(n^k). That is, for all k there is a language L_k ∈ P that does not have O(n^k)-size circuits constructible in polynomial time. This improves Kannan's lower bound from 1982 that NP is not in P-uniform SIZE(n^k) for any fixed k. (b) For all k, NP is not in P^{NP}_{||}-uniform SIZE(n^k). This also improves Kannan's theorem, but in a different way: the uniformity condition on the circuits is stronger than that on the language itself. (c) For all k, LOGSPACE does not have LOGSPACE-uniform branching programs of size n^k.
    2. Eliminating non-uniformity and (non-uniform) circuit lower bounds. We complement these results by showing how to convert any potential simulation of LOGTIME-uniform NC^1 in ACC^0/poly or TC^0/poly into a medium-uniform simulation using small advice. This lemma can be used to simplify the proof that faster SAT algorithms imply NEXP circuit lower bounds, and leads to the following new connection. Consider the following task: given a TC^0 circuit C of n^{O(1)} size, output yes when C is unsatisfiable, and output no when C has at least 2^{n-2} satisfying assignments. (Behavior on other inputs can be arbitrary.) Clearly, this problem can be solved efficiently using randomness. If this problem can be solved deterministically in 2^{n-ω(log n)} time, then NEXP ⊄ TC^0/poly. The lemma can also be used to derandomize randomized TC^0 simulations of NC^1 on almost all inputs: suppose NC^1 ⊆ BPTC^0; then for every ε > 0 and every language L in NC^1, there is a (uniform) TC^0 circuit family of polynomial size recognizing a language L' such that L and L' differ on at most 2^{n^ε} inputs of length n, for all n.

  • Towards a Reverse Newman's Theorem in Interactive Information Complexity

    Publication Year: 2013, Page(s): 24 - 33
    PDF (254 KB) | HTML

    Newman's theorem states that we can take any public-coin communication protocol and convert it into one that uses only private randomness with only a little increase in communication complexity. We consider a reversed scenario in the context of information complexity: can we take a protocol that uses private randomness and convert it into one that only uses public randomness while preserving the information revealed to each player? We prove that the answer is yes, at least for protocols that use a bounded number of rounds. As an application, we prove new direct sum theorems through the compression of interactive communication in the bounded-round setting. Furthermore, we show that if a Reverse Newman's Theorem can be proven in full generality, then full compression of interactive communication and fully-general direct-sum theorems will result.

  • Shared Randomness and Quantum Communication in the Multi-party Model

    Publication Year: 2013, Page(s): 34 - 43
    PDF (246 KB) | HTML

    We study shared randomness in the context of multi-party number-in-hand communication protocols in the simultaneous message passing model. We show that with three or more players, shared randomness exhibits new interesting properties that have no direct analogues in the two-party case. First, we demonstrate a hierarchy of modes of shared randomness, with the usual shared randomness where all parties access the same random string as the strongest form in the hierarchy. We show exponential separations between its levels, and some of our bounds may be of independent interest. For example, we show that the equality function can be solved by a protocol of constant length using the weakest form of shared randomness, which we call XOR-shared randomness. Second, we show that quantum communication cannot replace shared randomness in the k-party case, where k ≥ 3 is any constant. We demonstrate a promise function GP_k that can be computed by a classical protocol of constant length when (the strongest form of) shared randomness is available, but any quantum protocol without shared randomness must send n^{Ω(1)} qubits to compute it. Moreover, the quantum complexity of GP_k remains n^{Ω(1)} even if the “second strongest” mode of shared randomness is available. While a somewhat similar separation was already known in the two-party case, in the multi-party case our statement is qualitatively stronger: (i) in the two-party case, only a relational communication problem with similar properties is known; (ii) in the two-party case, the gap between the two complexities of a problem can be at most exponential, as it is known that 2^{O(c)} log n qubits can always replace shared randomness in any c-bit protocol. Our bounds imply that with quantum communication alone, in general, it is not possible to simulate efficiently even a three-bit three-party classical protocol that uses shared randomness.

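As background for the strongest mode in this hierarchy (every party sees the same random string), the classic simultaneous-message equality test via inner-product hashing is easy to sketch for three or more players. This is a textbook illustration with made-up names, not the paper's XOR-shared-randomness protocol:

```python
import random

def message(x, shared_strings):
    # A player's whole message: one inner-product bit per shared random string.
    return tuple(sum(a & b for a, b in zip(x, r)) % 2 for r in shared_strings)

def referee_accepts(inputs, shared_strings):
    # The referee accepts iff all players sent identical messages.
    msgs = [message(x, shared_strings) for x in inputs]
    return all(m == msgs[0] for m in msgs)

random.seed(0)                    # reproducible demo
n, t = 64, 20                     # input length; message length per player
shared = [[random.randrange(2) for _ in range(n)] for _ in range(t)]
x = [random.randrange(2) for _ in range(n)]
y = x[:]
y[0] ^= 1                         # y differs from x in one position

print(referee_accepts([x, x, x], shared))  # True: equal inputs always accepted
print(referee_accepts([x, x, y], shared))  # False except with probability ~2^-t
```

Each player sends only t bits regardless of n: a differing pair of inputs hashes to equal bits under each shared string with probability 1/2, so unequal inputs slip through with probability at most about k²·2^-t.
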
  • On the Power of Non-adaptive Learning Graphs

    Publication Year: 2013, Page(s): 44 - 55
    Cited by: Papers (2)
    PDF (290 KB) | HTML

    We introduce a notion of the quantum query complexity of a certificate structure. This is a formalisation of a well-known observation that many quantum query algorithms only require the knowledge of the disposition of possible certificates in the input string, not the precise values therein. Next, we derive a dual formulation of the complexity of a non-adaptive learning graph, and use it to show that non-adaptive learning graphs are tight for all certificate structures. By this, we mean that there exists a function possessing the certificate structure and such that a learning graph gives an optimal quantum query algorithm for it. For a special case of certificate structures generated by certificates of bounded size, we construct a relatively general class of functions having this property. The construction is based on orthogonal arrays, and generalizes the quantum query lower bound for the k-sum problem derived recently. Finally, we use these results to show that the best known learning graph for the triangle problem is almost optimal in these settings. This also gives a quantum query lower bound for the triangle-sum problem.

  • The Correct Exponent for the Gotsman-Linial Conjecture

    Publication Year: 2013, Page(s): 56 - 64
    PDF (245 KB) | HTML

    We prove new bounds on the average sensitivity of polynomial threshold functions. In particular, we show the average sensitivity of a polynomial threshold function of constant degree is not much more than the square root of the dimension of its space of definition. This bound amounts to a significant improvement over previous bounds, and in particular, for fixed degree provides the correct asymptotic exponent in the dimension.

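Average sensitivity — the expected number of coordinates of a uniformly random input whose flip changes the function's value — can be computed exactly by brute force in small dimension. A minimal sketch using majority, a degree-1 polynomial threshold function whose average sensitivity grows like √n; the helper names are illustrative:

```python
from itertools import product
from math import sqrt

def avg_sensitivity(f, n):
    """E_x[ #{i : f(x with bit i flipped) != f(x)} ] over uniform x in {0,1}^n."""
    total = 0
    for x in product((0, 1), repeat=n):
        fx = f(x)
        for i in range(n):
            y = list(x)
            y[i] ^= 1  # flip coordinate i
            if f(tuple(y)) != fx:
                total += 1
    return total / 2 ** n

def maj(x):
    # Majority of an odd number of bits: a degree-1 polynomial threshold function.
    return int(2 * sum(x) > len(x))

for n in (3, 5, 7, 9):
    print(n, avg_sensitivity(maj, n), sqrt(n))  # roughly sqrt(n) growth
```

For majority the exact value is n·C(n-1, (n-1)/2)/2^{n-1}, e.g. 1.5 at n = 3 and 1.875 at n = 5, matching the √n-type scaling the theorem establishes for all constant-degree PTFs.
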
  • Approaching the Chasm at Depth Four

    Publication Year: 2013, Page(s): 65 - 73
    Cited by: Papers (5)
    PDF (260 KB) | HTML

    Agrawal-Vinay [AV08] and Koiran [Koi12] have recently shown that an exp(ω(√n log^2 n)) lower bound for depth-four homogeneous circuits computing the permanent, with bottom layer of × gates having fanin bounded by √n, translates to a super-polynomial lower bound for general arithmetic circuits computing the permanent. Motivated by this, we examine the complexity of computing the permanent and determinant via such homogeneous depth-four circuits with bounded bottom fanin. We show here that any homogeneous depth-four arithmetic circuit with bottom fanin bounded by √n computing the permanent (or the determinant) must be of size exp(Ω(√n)).

  • Approximating Boolean Functions with Depth-2 Circuits

    Publication Year: 2013, Page(s): 74 - 85
    PDF (310 KB) | HTML

    We study the complexity of approximating Boolean functions with DNFs and other depth-2 circuits, exploring two main directions: universal bounds on the approximability of all Boolean functions, and the approximability of the parity function. In the first direction, our main positive results are the first non-trivial universal upper bounds on approximability by DNFs: (i) every Boolean function can be ε-approximated by a DNF of size O_ε(2^n / log n); (ii) every Boolean function can be ε-approximated by a DNF of width c_ε·n, where c_ε < 1. Our techniques extend broadly to give strong universal upper bounds on approximability by various depth-2 circuits that generalize DNFs, including intersections of halfspaces, low-degree PTFs, and unate functions. We show that the parameters of our constructions come close to matching the information-theoretic inapproximability of a random function. In the second direction, our main positive result is the construction of an explicit DNF that approximates the parity function: PAR_n can be ε-approximated by a DNF of size 2^{(1-2ε)n} and width (1-2ε)n. Using Fourier-analytic tools, we show that our construction is essentially optimal not just within the class of DNFs, but also within the far more expressive classes of intersections of halfspaces and intersections of unate functions.

  • Constructing Hard Functions Using Learning Algorithms

    Publication Year: 2013, Page(s): 86 - 97
    Cited by: Papers (1)
    PDF (264 KB) | HTML

    Fortnow and Klivans proved the following relationship between efficient learning algorithms and circuit lower bounds: if a class C ⊆ P/poly of Boolean circuits is exactly learnable with membership and equivalence queries in polynomial time, then EXP^NP is not contained in C (the class EXP^NP was subsequently improved to EXP by Hitchcock and Harkins). In this paper, we improve on these results and show: * If C is exactly learnable with membership and equivalence queries in polynomial time, then DTIME(n^{ω(1)}) is not contained in C. We obtain even stronger consequences if C is learnable in the mistake-bounded model, in which case we prove an average-case hardness result against C. * If C is learnable in polynomial time in the PAC model, then PSPACE is not contained in C, unless PSPACE is contained in BPP. Removing this extra assumption from the statement of the theorem would provide an unconditional separation of PSPACE and BPP. * If C is efficiently learnable in the Correlational Statistical Query (CSQ) model, we show that there exists an explicit function f that is average-case hard for circuits in C. This result provides stronger average-case hardness guarantees than those obtained by SQ-dimension arguments (Blum et al. 1993). We also obtain a non-constructive extension of this result to the stronger Statistical Query (SQ) model. Similar results hold in the case where the learning algorithm runs in subexponential time. Our proofs regarding exact and mistake-bounded learning are simple and self-contained, yield explicit hard functions, and show how to use mistake-bounded learners to “diagonalize” over families of polynomial-size circuits. Our consequences for PAC learning lead to new proofs of Karp-Lipton-style collapse results, and the lower bounds from SQ learning make use of recent work relating combinatorial discrepancy to the existence of hard-on-average functions.

  • Short Lists with Short Programs in Short Time

    Publication Year: 2013, Page(s): 98 - 108
    Cited by: Papers (1)
    PDF (239 KB) | HTML

    Given a machine U, a c-short program for x is a string p such that U(p) = x and the length of p is bounded by c + (the length of a shortest program for x). We show that for any universal machine, it is possible to compute in polynomial time, on input x, a list of polynomial size guaranteed to contain an O(log |x|)-short program for x. We also show that there exist computable functions that map every x to a list of size O(|x|^2) containing an O(1)-short program for x, and this is essentially optimal because we prove that such a list must have size Ω(|x|^2). Finally, we show that for some machines, computable lists containing a shortest program must have length Ω(2^{|x|}).

  • Lower Bounds for DNF-refutations of a Relativized Weak Pigeonhole Principle

    Publication Year: 2013, Page(s): 109 - 120
    Cited by: Papers (1)
    PDF (304 KB) | HTML

    The relativized weak pigeonhole principle states that if at least 2n out of n^2 pigeons fly into n holes, then some hole must be doubly occupied. We prove that every DNF-refutation of the CNF encoding of this principle requires size 2^{(log n)^{3/2-ε}} for every ε > 0 and every sufficiently large n. For the proof we need to establish the existence of unbalanced low-degree bipartite expanders satisfying a certain robustness condition.

  • LS+ Lower Bounds from Pairwise Independence

    Publication Year: 2013, Page(s): 121 - 132
    Cited by: Papers (1)
    PDF (307 KB)

    We consider the complexity of LS+ refutations of unsatisfiable instances of Constraint Satisfaction Problems (k-CSPs) when the underlying predicate supports a pairwise independent distribution on its satisfying assignments. This is the most general condition on the predicate under which the corresponding MAX k-CSP problem is known to be approximation resistant. We show that for random instances of such k-CSPs on n variables, even after Ω(n) rounds of the LS+ hierarchy, the integrality gap remains equal to the approximation ratio achieved by a random assignment. In particular, this also shows that LS+ refutations for such instances require rank Ω(n). We also show the stronger result that refutations for such instances in the static LS+ proof system require size exp(Ω(n)).

  • Just a Pebble Game

    Publication Year: 2013, Page(s): 133 - 143
    Cited by: Papers (1)
    PDF (267 KB) | HTML

    The two-player pebble game of Dymond-Tompa is identified as a barrier for existing techniques to save space or to speed up parallel algorithms for evaluation problems. Many combinatorial lower bounds used to study L versus NL and NC versus P under different restricted settings scale in the same way as the pebbling algorithm of Dymond-Tompa. These lower bounds include: (1) the monotone separation of m-L from m-NL by studying the size of monotone switching networks in Potechin '10; (2) a new semantic separation of NC from P and of NC^i from NC^{i+1} by studying circuit depth, based on the techniques developed for the semantic separation of NC^1 from NC^2 by the universal composition relation in Edmonds-Impagliazzo-Rudich-Sgall '01 and in Håstad-Wigderson '97; and (3) the monotone separation of m-NC from m-P and of m-NC^i from m-NC^{i+1} by studying (a) the depth of monotone circuits in Raz-McKenzie '99 and (b) the size of monotone switching networks in Chan-Potechin '12. This supports the attempt to separate NC from P by focusing on depth complexity, and suggests the study of combinatorial invariants shaped by pebbling for proving lower bounds. An application to proof complexity gives tight bounds for the size and the depth of some refinements of resolution refutations.

  • Quantum XOR Games

    Publication Year: 2013, Page(s): 144 - 155
    PDF (282 KB) | HTML

    We introduce quantum XOR games, a model of two-player one-round games that extends the model of XOR games by allowing the referee's questions to the players to be quantum states. We give examples showing that quantum XOR games exhibit a wide range of behaviors that are known not to exist for standard XOR games, such as cases in which the use of entanglement leads to an arbitrarily large advantage over the use of no entanglement. By invoking two deep extensions of Grothendieck's inequality, we present an efficient algorithm that gives a constant-factor approximation to the best performance players can obtain in a given game, both in case they have no shared entanglement and in case they share unlimited entanglement. As a byproduct of the algorithm we prove some additional interesting properties of quantum XOR games, such as the fact that sharing a maximally entangled state of arbitrary dimension gives only a small advantage over having no entanglement at all.

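For contrast with the quantum generalization introduced here: in a standard XOR game the referee's questions are classical, and the best unentangled strategy can be found by enumerating deterministic ones. A sketch on the CHSH game, whose classical value is 3/4 (entanglement raises it to cos²(π/8) ≈ 0.854); the function names are illustrative:

```python
from itertools import product

def classical_value(win, q=2):
    """Max winning probability over deterministic strategies, with the two
    questions x, y drawn independently and uniformly from range(q)."""
    best = 0.0
    for a in product((0, 1), repeat=q):        # Alice's answer to each question
        for b in product((0, 1), repeat=q):    # Bob's answer to each question
            p = sum(win(x, y, a[x], b[y])
                    for x in range(q) for y in range(q)) / q ** 2
            best = max(best, p)
    return best

def chsh(x, y, a, b):
    # CHSH winning predicate: answers must XOR to the AND of the questions.
    return (a ^ b) == (x & y)

print(classical_value(chsh))  # 0.75
```

Constant answering (a = b = 0) already wins 3 of the 4 question pairs, and the enumeration confirms no deterministic strategy does better; by convexity, neither does any shared-randomness strategy.
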