49th Annual IEEE Symposium on Foundations of Computer Science (FOCS '08)

Date: 25-28 Oct. 2008

Displaying Results 1 - 25 of 92
  • [Front cover]

    Page(s): C1
  • [Title page i]

    Page(s): i
  • [Title page iii]

    Page(s): iii
  • [Copyright notice]

    Page(s): iv
  • Table of contents

    Page(s): v - xi
  • Foreword

    Page(s): xii
  • Committees

    Page(s): xiii
  • List of reviewers

    Page(s): xiv - xvi
  • The Polynomial Method in Quantum and Classical Computing

    Page(s): 3

    In 1889, A. A. Markov proved a powerful result about low-degree real polynomials: roughly speaking, that such polynomials cannot have a sharp jump followed by a long, relatively flat part. A century later, this result - as well as other results from the field of approximation theory - came to play a surprising role in classical and quantum complexity theory. In this article, the author tries to tell this story in an elementary way, beginning with classic results in approximation theory and ending with some recent applications.

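    The classical statement alluded to here is the Markov brothers' inequality (A. A. Markov, 1889): if p is a real polynomial of degree n with |p(x)| <= 1 for all x in [-1, 1], then |p'(x)| <= n^2 on the same interval. In particular, a bounded degree-n polynomial needs an interval of length about c/n^2 to rise by a constant c, which is the precise sense in which it "cannot have a sharp jump followed by a long, relatively flat part."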
  • Theory of Sponsored Search Auctions

    Page(s): 7

    Web search engines are becoming an increasingly important advertising medium. When a user poses a query, the search engine returns a few advertisements in addition to the search results. On most major search engines, the choice and assignment of ads to positions is determined by an auction among all advertisers who placed a bid on some keyword that matches the query. The user might click on one or more of the ads, in which case (in the pay-per-click model) the advertiser receiving the click pays the search engine a price determined by the auction.

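    The auction format used on major search engines in this setting is typically a generalized second-price (GSP) auction: ads are ranked by bid, and each winner's pay-per-click price is set by the bid ranked just below it. A minimal illustrative sketch (our own, not taken from the article; quality/click-through weighting is omitted):

        def gsp_auction(bids, num_slots):
            """Unweighted generalized second-price (GSP) auction.

            bids: dict mapping advertiser -> bid (offered price per click).
            Returns [(advertiser, price_per_click)] per ad slot, top slot first.
            Each winner pays the bid of the advertiser ranked directly below;
            the lowest-ranked winner pays the highest losing bid (0 if none).
            """
            ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
            winners = ranked[:num_slots]
            prices = []
            for pos, (adv, _bid) in enumerate(winners):
                next_bid = ranked[pos + 1][1] if pos + 1 < len(ranked) else 0.0
                prices.append((adv, next_bid))
            return prices

        # Four advertisers bid on the same keyword; three ad slots are shown.
        print(gsp_auction({"a": 2.0, "b": 1.5, "c": 0.8, "d": 0.5}, num_slots=3))
        # -> [('a', 1.5), ('b', 0.8), ('c', 0.5)]

    Unlike VCG, GSP is not truthful in general, which is one reason its equilibrium behavior is studied.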
  • Average-case Complexity

    Page(s): 11

    We review the many open questions and the few things that are known about the average-case complexity of computational problems. We shall follow the presentations of Impagliazzo, of Goldreich, and of Bogdanov and the author, and focus on the following subjects. (i) Average-case tractability. What does it mean for a problem to have an "efficient on average" algorithm with respect to a distribution of instances? There is more than one "correct" answer to this question, and a number of subtleties arise, which are interesting to discuss. (ii) Worst case versus average case. Is the existence of hard-on-average problems in a complexity class equivalent to the existence of worst-case-hard problems? This is the case for complexity classes like PSPACE and EXP, but it is open for NP, with partial evidence pointing to a negative answer. (To be sure, we believe that hard-on-average, and also worst-case hard, problems exist in NP, and if so their existence is "equivalent" in the way two true statements are logically equivalent. There is, however, partial evidence that such an equivalence cannot be established via reductions. It is also known that such an equivalence cannot be established via any relativizing technique.) (iii) Amplification of average-case hardness. A weak sense in which a problem may be hard on average is that every efficient algorithm fails on a noticeable (at least inverse polynomial) fraction of inputs; a strong sense is that no algorithm can do much better than guess the answer at random. In many settings, the existence of problems of weak average-case complexity implies the existence of problems, in the same complexity class, of strong average-case complexity. It remains open to prove such an equivalence in the setting of uniform algorithms for problems in NP. (Some partial results are known even in this setting.) (iv) Reductions and completeness. Levin initiated a theory of completeness for distributional problems under reductions that preserve average-case tractability. Even establishing the existence of an NP-complete problem in this theory is a non-trivial (and interesting) result.

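    A standard formulation of point (i), due to Levin and used in the surveys mentioned above, is the following: an algorithm A with running time t_A is polynomial on average with respect to an ensemble of input distributions {D_n} if there is an ε > 0 such that E_{x ~ D_n}[t_A(x)^ε] = O(n). The exponent ε makes the notion robust to polynomial slow-downs and to the choice of machine model, which is one source of the subtleties the abstract mentions.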
  • Truthful Approximation Schemes for Single-Parameter Agents

    Page(s): 15 - 24

    We present the first monotone randomized polynomial-time approximation scheme (PTAS) for minimizing the makespan of parallel related machines (Q||Cmax), the paradigmatic problem in single-parameter algorithmic mechanism design. This result immediately gives a polynomial-time, truthful (in expectation) mechanism whose approximation guarantee attains the best-possible one for all polynomial-time algorithms (assuming P ≠ NP). Our algorithmic techniques are flexible and also yield, among other results, a monotone deterministic quasi-PTAS for Q||Cmax and a monotone randomized PTAS for max-min scheduling on related machines.

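    "Single-parameter" refers to settings where each agent's private data is one number; for Q||Cmax this is a machine's cost per unit of work (its inverse speed). By the characterization of Archer and Tardos for such settings (stated here from that general result, not from this paper), an allocation rule can be made truthful exactly when it is monotone - fixing the other bids, reporting a higher cost never increases the work w_i assigned to machine i - with payments of the form p_i(b) = b_i·w_i(b) + ∫_{b_i}^{∞} w_i(u, b_{-i}) du, assuming the integral is finite. This is why a monotone PTAS, as constructed here, immediately yields a truthful (in expectation) mechanism.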
  • Discretized Multinomial Distributions and Nash Equilibria in Anonymous Games

    Page(s): 25 - 34

    We show that there is a polynomial-time approximation scheme for computing Nash equilibria in anonymous games with any fixed number of strategies (a very broad and important class of games), extending the two-strategy result of Daskalakis and Papadimitriou (2007). The approximation guarantee follows from a probabilistic result of more general interest: the distribution of the sum of n independent unit vectors with values ranging over {e_1, ..., e_k}, where e_i is the unit vector along dimension i of the k-dimensional Euclidean space, can be approximated by the distribution of the sum of another set of independent unit vectors whose probabilities of obtaining each value are multiples of 1/z for some integer z, so that the variational distance of the two distributions is at most ε, where ε is bounded by an inverse polynomial in z and a function of k, but with no dependence on n. Our probabilistic result specifies the construction of a surprisingly sparse ε-cover - under the total variation distance - of the set of distributions of sums of independent unit vectors, which is of interest in its own right.

  • Approximation Algorithms for Single-minded Envy-free Profit-maximization Problems with Limited Supply

    Page(s): 35 - 44

    We present the first polynomial-time approximation algorithms for single-minded envy-free profit-maximization problems (Guruswami et al., 2005) with limited supply. Our algorithms return a pricing scheme and a subset of customers that are designated the winners, which satisfy the envy-freeness constraint, whereas in our analyses, we compare the profit of our solution against the optimal value of the corresponding social-welfare-maximization (SWM) problem of finding a winner-set with maximum total value. Our algorithms take any LP-based α-approximation algorithm for the corresponding SWM problem as input and return a solution that achieves profit at least OPT/O(α · log u_max), where OPT is the optimal value of the SWM problem, and u_max is the maximum supply of an item. This immediately yields approximation guarantees of O(√m · log u_max) for the general single-minded envy-free problem, and O(log u_max) for the tollbooth and highway problems (Guruswami et al., 2005) and the graph-vertex pricing problem (Balcan and Blum, 2006) (α = O(1) for all the corresponding SWM problems). Since OPT is an upper bound on the maximum profit achievable by any solution (i.e., irrespective of whether the solution satisfies the envy-freeness constraint), our results directly carry over to the non-envy-free versions of these problems too. Our result thus also (constructively) establishes an upper bound of O(α · log u_max) on the ratio of (i) the optimum value of the profit-maximization problem and OPT; and (ii) the optimum profit achievable with and without the constraint of envy-freeness.

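    In the single-minded setting of Guruswami et al. (2005), each customer j wants one fixed bundle S_j of items and values it at v_j. Roughly (a sketch of the standard condition, not quoted from this paper), item prices p together with a winner set W are envy-free if every winner can afford their bundle and no loser would prefer to buy: sum_{i in S_j} p_i <= v_j for every j in W, and sum_{i in S_j} p_i >= v_j for every j not in W; with limited supply, the winners' bundles must additionally respect each item's supply.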
  • Market Equilibria in Polynomial Time for Fixed Number of Goods or Agents

    Page(s): 45 - 53

    We consider markets in the classical Arrow-Debreu model. There are n agents and m goods. Each buyer has a concave utility function (of the bundle of goods he/she buys) and an initial bundle. At an "equilibrium" set of prices for goods, if each individual buyer separately exchanges the initial bundle for an optimal bundle at the set prices, the market clears, i.e., all goods are exactly consumed. Classical theorems guarantee the existence of equilibria, but computing them has been the subject of much recent research. In the related area of multi-agent games, much attention has been paid to the complexity as well as algorithms. While most general problems are hard, polynomial time algorithms have been developed for restricted classes of games, when one assumes the number of strategies is constant. For the market equilibrium problem, several important special cases of utility functions have been tackled. Here we begin a program for this problem similar to that for multi-agent games, where general utilities are considered. We begin by showing that if the utilities are separable piecewise-linear concave (PLC) functions, and the number of goods (or alternatively the number of buyers) is constant, then we can compute an exact equilibrium in polynomial time. Our technique for the constant number of goods is to decompose the space of price vectors into cells using certain hyperplanes, so that in each cell, each buyer's threshold marginal utility is known. Still, one needs to solve a linear optimization problem in each cell. We then show the main result - that for general (non-separable) PLC utilities, an exact equilibrium can be found in polynomial time provided the number of goods is constant. The starting point of the algorithm is a "cell-decomposition" of the space of price vectors using polynomial surfaces (instead of hyperplanes). We use results from computational algebraic geometry to bound the number of such cells. For solving the problem inside each cell, we introduce and use a novel LP-duality based method. We note that if the number of goods and agents both can vary, the problem is PPAD-hard even for the very special case of PLC utilities - namely Leontief utilities.

  • The Sign-Rank of AC^0

    Page(s): 57 - 66

    The sign-rank of a matrix A = [A_ij] with ±1 entries is the least rank of a real matrix B = [B_ij] with A_ij B_ij > 0 for all i, j. We obtain the first exponential lower bound on the sign-rank of a function in AC^0. Namely, let f(x, y) = ∧_{i=1}^{m} ∨_{j=1}^{m^2} (x_ij ∧ y_ij). We show that the matrix [f(x, y)]_{x,y} has sign-rank 2^{Ω(m)}. This in particular implies that Σ_2^cc ⊄ UPP^cc, which solves a long-standing open problem posed by Babai, Frankl, and Simon (1986). Our result additionally implies a lower bound in learning theory. Specifically, let Φ_1, ..., Φ_r : {0,1}^n → ℝ be functions such that every DNF formula f : {0,1}^n → {-1, +1} of polynomial size has the representation f ≡ sign(a_1 Φ_1 + ... + a_r Φ_r) for some reals a_1, ..., a_r. We prove that then r ≥ 2^{Ω(n^{1/3})}, which essentially matches an upper bound of 2^{Õ(n^{1/3})} due to Klivans and Servedio (2001). Finally, our work yields the first exponential lower bound on the size of threshold-of-majority circuits computing a function in AC^0. This substantially generalizes and strengthens the results of Krause and Pudlak (1997).

  • Arithmetic Circuits: A Chasm at Depth Four

    Page(s): 67 - 75

    We show that proving exponential lower bounds for depth-four arithmetic circuits implies exponential lower bounds for arithmetic circuits of unrestricted depth. In other words, for exponential-size circuits, additional depth beyond four does not help. We then show that a complete black-box derandomization of the identity testing problem for depth-four circuits with multiplication gates of small fanin implies a nearly complete derandomization of general identity testing.

  • Dense Subsets of Pseudorandom Sets

    Page(s): 76 - 85

    A theorem of Green, Tao, and Ziegler can be stated (roughly) as follows: if R is a pseudorandom set, and D is a dense subset of R, then D may be modeled by a set M that is dense in the entire domain such that D and M are indistinguishable. (The precise statement refers to "measures" or distributions rather than sets.) The proof of this theorem is very general, and it applies to notions of pseudorandomness and indistinguishability defined in terms of any family of distinguishers with some mild closure properties. The proof proceeds via iterative partitioning and an energy increment argument, in the spirit of the proof of the weak Szemeredi regularity lemma. The "reduction" involved in the proof has exponential complexity in the distinguishing probability. We present a new proof inspired by Nisan's proof of Impagliazzo's hardcore set theorem. The reduction in our proof has polynomial complexity in the distinguishing probability and provides a new characterization of the notion of "pseudoentropy" of a distribution. A proof similar to ours has also been independently discovered by Gowers [2]. We also follow the connection between the two theorems and obtain a new proof of Impagliazzo's hardcore set theorem via iterative partitioning and energy increment. While our reduction has exponential complexity in some parameters, it has the advantage that the hardcore set is efficiently recognizable.

  • Almost-Natural Proofs

    Page(s): 86 - 91

    Razborov and Rudich have shown that so-called "natural proofs" are not useful for separating P from NP unless hard pseudorandom number generators do not exist. This famous result is widely regarded as a serious barrier to proving strong lower bounds in circuit complexity theory. By definition, a natural combinatorial property satisfies two conditions, constructivity and largeness. Our main result is that if the largeness condition is weakened slightly, then not only does the Razborov-Rudich proof break down, but such "almost-natural" (and useful) properties provably exist. Specifically, under the same pseudorandomness assumption that Razborov and Rudich make, a simple, explicit property that we call "discrimination" suffices to separate P/poly from NP; discrimination is nearly linear-time computable and almost large, having density 2^{-q(n)} where q grows slightly faster than a quasi-polynomial function. For those who hope to separate P from NP using random function properties in some sense, discrimination is interesting, because it is constructive, yet may be thought of as a minor alteration of a property of a random function. The proof relies heavily on the self-defeating character of natural proofs. Our proof technique also yields an unconditional result, namely that there exist almost-large and useful properties that are constructive, if we are allowed to call non-uniform low-complexity classes "constructive." We note, though, that this unconditional result can also be proved by a more conventional counting argument.

  • Dynamic Connectivity: Connecting to Networks and Geometry

    Page(s): 95 - 104

    Dynamic connectivity is a well-studied problem, but so far the most compelling progress has been confined to the edge-update model: maintain an understanding of connectivity in an undirected graph, subject to edge insertions and deletions. In this paper, we study two more challenging, yet equally fundamental problems. Subgraph connectivity asks to maintain an understanding of connectivity under vertex updates: updates can turn vertices on and off, and queries refer to the subgraph induced by "on" vertices. (For instance, this is closer to applications in networks of routers, where node faults may occur.) We describe a data structure supporting vertex updates in O~(m^{2/3}) amortized time, where m denotes the number of edges in the graph. This greatly improves over the previous result [Chan, STOC'02], which required fast matrix multiplication and had an update time of O(m^{0.94}). The new data structure is also simpler. Geometric connectivity asks to maintain a dynamic set of n geometric objects, and query connectivity in their intersection graph. (For instance, the intersection graph of balls describes connectivity in a network of sensors with bounded transmission radius.) Previously, nontrivial fully dynamic results were known only for special cases like axis-parallel line segments and rectangles. We provide similarly improved update times, O~(n^{2/3}), for these special cases. Moreover, we show how to obtain sublinear update bounds for virtually all families of geometric objects which allow sublinear-time range queries. In particular, we obtain the first sublinear update time for arbitrary 2D line segments: O*(n^{9/10}); for d-dimensional simplices: O*(n^{1-1/d(2d+1)}); and for d-dimensional balls: O*(n^{1-1/(d+1)(2d+3)}).

  • Algorithms for Single-Source Vertex Connectivity

    Page(s): 105 - 114

    In the survivable network design problem (SNDP) the goal is to find a minimum cost subset of edges that satisfies a given set of pairwise connectivity requirements among the vertices. This general network design framework has been studied extensively and is tied to the development of major algorithmic techniques. For the edge-connectivity version of the problem, a 2-approximation algorithm is known for arbitrary pairwise connectivity requirements. However, no non-trivial algorithms are known for its vertex connectivity counterpart. In fact, even highly restricted special cases of the vertex connectivity version remain poorly understood. We study the single-source k-vertex connectivity version of SNDP. We are given a graph G(V,E) with a subset T of terminals and a source vertex s, and the goal is to find a minimum cost subset of edges ensuring that every terminal is k-vertex connected to s. Our main result is an O(k log n)-approximation algorithm for this problem; this improves upon the recent 2^{O(k^2)} log^4 n-approximation. Our algorithm is based on an intuitive rerouting scheme. The analysis relies on a structural result that may be of independent interest: we show that any solution can be decomposed into a disjoint collection of multiple-legged spiders, which are then used to re-route flow from terminals to the source via other terminals. We also obtain the first non-trivial approximation algorithm for the vertex-cost version of the same problem, achieving an O(k^7 log^2 n)-approximation.

  • A Polynomial-Time Approximation Scheme for Euclidean Steiner Forest

    Page(s): 115 - 124

    We give a randomized O(n^2 log n)-time approximation scheme for the Steiner forest problem in the Euclidean plane. For every fixed ε > 0 and given any n pairs of terminals in the plane, our scheme finds a (1 + ε)-approximation to the minimum-length forest that connects every pair of terminals.

  • Degree Bounded Network Design with Metric Costs

    Page(s): 125 - 134

    Given a complete undirected graph, a cost function on edges and a degree bound B, the degree bounded network design problem is to find a minimum cost simple subgraph with maximum degree B satisfying given connectivity requirements. Even for a simple connectivity requirement such as finding a spanning tree, computing a feasible solution for the degree bounded network design problem is already NP-hard, and thus there is no polynomial factor approximation algorithm for this problem. In this paper, we show that when the cost function satisfies triangle inequalities, there are constant factor approximation algorithms for various degree bounded network design problems. Global edge-connectivity: there is a (2+1/k)-approximation algorithm for the minimum bounded degree k-edge-connected subgraph problem. Local edge-connectivity: there is a 6-approximation algorithm for the minimum bounded degree Steiner network problem. Global vertex-connectivity: there is a (2+(k-1)/n+1/k)-approximation algorithm for the minimum bounded degree k-vertex-connected subgraph problem. Spanning tree: there is a (1+1/(d-1))-approximation algorithm for the minimum bounded degree spanning tree problem. These approximation algorithms return solutions with the smallest possible maximum degree, and the cost guarantee is obtained by comparing to the optimal cost when there are no degree constraints. This demonstrates that degree constraints can be incorporated into network design problems with metric costs. Our algorithms can be seen as a generalization of Christofides' algorithm for metric TSP. The main technical tool is a simplicity-preserving edge splitting-off operation, which is used to "short-cut" vertices with high degree while maintaining connectivity requirements and preserving simplicity of the solutions.

  • Matrix Sparsification for Rank and Determinant Computations via Nested Dissection

    Page(s): 137 - 145

    The nested dissection method developed by Lipton, Rose, and Tarjan is a seminal method for quickly performing Gaussian elimination of symmetric real positive definite matrices whose support structure satisfies good separation properties (e.g. planar). One can use the resulting LU factorization to deduce various parameters of the matrix. The main results of this paper show that we can remove the three restrictions of being "symmetric", being "real", and being "positive definite" and still be able to compute the rank and, when relevant, also the absolute determinant, while keeping the running time of nested dissection. Our results are based, in part, on an algorithm that, given an arbitrary square matrix A of order n having m non-zero entries, creates another square matrix B of order n + 2t = O(m) with the property that each row and each column of B contains at most three nonzero entries, and, furthermore, rank(B) = rank(A) + 2t and det(B) = det(A). The running time of this algorithm is only O(m), which is optimal.

  • Fast Modular Composition in any Characteristic

    Page(s): 146 - 155

    We give an algorithm for modular composition of degree n univariate polynomials over a finite field F_q requiring n^{1+o(1)} log^{1+o(1)} q bit operations; this had earlier been achieved in characteristic n^{o(1)} by Umans (2008). As an application, we obtain a randomized algorithm for factoring degree n polynomials over F_q requiring (n^{1.5+o(1)} + n^{1+o(1)} log q) log^{1+o(1)} q bit operations, improving upon the methods of von zur Gathen & Shoup (1992) and Kaltofen & Shoup (1998). Our results also imply algorithms for irreducibility testing and computing minimal polynomials whose running times are best-possible, up to lower order terms. As in Umans (2008), we reduce modular composition to certain instances of multipoint evaluation of multivariate polynomials. We then give an algorithm that solves this problem optimally (up to lower order terms), in arbitrary characteristic. The main idea is to lift to characteristic 0, apply a small number of rounds of multimodular reduction, and finish with a small number of multidimensional FFTs. The final evaluations are then reconstructed using the Chinese Remainder Theorem. As a bonus, we obtain a very efficient data structure supporting polynomial evaluation queries, which is of independent interest. Our algorithm uses techniques which are commonly employed in practice, so it may be competitive for real problem sizes. This contrasts with previous asymptotically fast methods relying on fast matrix multiplication.

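    For contrast with the n^{1+o(1)} log^{1+o(1)} q bound above: the straightforward way to compute the modular composition f(g) mod h over F_q is Horner's rule, which uses about deg f multiplications of polynomials modulo h (cubic in n with schoolbook arithmetic). A minimal illustrative baseline (our own sketch, not the paper's algorithm):

        def poly_mul_mod(a, b, h, q):
            """Multiply polynomials a and b (coefficient lists, lowest degree first)
            over F_q, then reduce modulo the monic polynomial h."""
            prod = [0] * (len(a) + len(b) - 1)
            for i, ai in enumerate(a):
                for j, bj in enumerate(b):
                    prod[i + j] = (prod[i + j] + ai * bj) % q
            n = len(h) - 1  # degree of h
            for d in range(len(prod) - 1, n - 1, -1):  # cancel leading terms using h
                c = prod[d]
                if c:
                    for k in range(n + 1):
                        prod[d - n + k] = (prod[d - n + k] - c * h[k]) % q
            return prod[:n]

        def modular_composition(f, g, h, q):
            """Compute f(g(x)) mod h(x) over F_q by Horner's rule:
            about deg f modular polynomial multiplications."""
            result = [0] * (len(h) - 1)
            for coeff in reversed(f):
                if any(result):
                    result = poly_mul_mod(result, g, h, q)
                result[0] = (result[0] + coeff) % q
            return result

        # Example over F_5: f = x^2 + 1, g = x + 2, h = x^3 + 4.
        # f(g(x)) = (x + 2)^2 + 1 = x^2 + 4x (mod 5), i.e. coefficients [0, 4, 1].
        print(modular_composition([1, 0, 1], [2, 1], [4, 0, 0, 1], 5))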