Date: 25-28 Oct. 2008

[Front cover]
Page(s): C1 
[Title page i]
Page(s): i 
[Title page iii]
Page(s): iii 
[Copyright notice]
Page(s): iv 
Table of contents
Page(s): v - xi 
Foreword
Page(s): xii 
Committees
Page(s): xiii 
List of Reviewers
Page(s): xiv - xvi 
The Polynomial Method in Quantum and Classical Computing
Page(s): 3
In 1889, A. A. Markov proved a powerful result about low-degree real polynomials: roughly speaking, that such polynomials cannot have a sharp jump followed by a long, relatively flat part. A century later, this result, along with other results from the field of approximation theory, came to play a surprising role in classical and quantum complexity theory. In this article, the author tries to tell this story in an elementary way, beginning with classic results in approximation theory and ending with some recent applications.
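The result referenced here is the Markov brothers' inequality: a real polynomial of degree d bounded by 1 in absolute value on [-1, 1] has derivative at most d² there, with the Chebyshev polynomial T_d as the extremal case. A quick numerical check (ours, not the article's):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Markov's inequality: a degree-d polynomial bounded by 1 on [-1, 1]
# has |p'| <= d^2 on [-1, 1]; Chebyshev T_d attains the bound at +-1.
d = 7
T_d = C.Chebyshev.basis(d)          # T_d, bounded by 1 on [-1, 1]
dT = T_d.deriv()

xs = np.linspace(-1.0, 1.0, 20001)  # grid including both endpoints
max_p = np.max(np.abs(T_d(xs)))     # max |T_d| on the grid, ~1
max_dp = np.max(np.abs(dT(xs)))     # max |T_d'| on the grid, ~d^2

print(max_p, max_dp, d**2)
```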

Theory of Sponsored Search Auctions
Page(s): 7
Web search engines are becoming an increasingly important advertising medium. When a user poses a query, in addition to search results the search engine also returns a few advertisements. On most major search engines, the choice and assignment of ads to positions is determined by an auction among all advertisers who placed a bid on some keyword that matches the query. The user might click on one or more of the ads, in which case (in the pay-per-click model) the advertiser receiving the click pays the search engine a price determined by the auction.
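As a rough illustration of the kind of auction this survey studies, here is a minimal generalized-second-price (GSP) sketch. The allocation and pricing rule below is a common textbook simplification, not the exact mechanism of any particular engine, and the bids and advertiser names are hypothetical:

```python
# Minimal generalized-second-price (GSP) sketch: advertisers bid per
# click, slots are assigned in decreasing bid order, and each winner
# pays the next-highest bid when clicked.
def gsp(bids, num_slots):
    """bids: dict advertiser -> bid per click. Returns slot assignments
    as a list of (advertiser, price paid per click), best slot first."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    outcome = []
    for i in range(min(num_slots, len(ranked))):
        advertiser, _ = ranked[i]
        # Pay the bid of the advertiser ranked one position below
        # (0 if there is none) -- the "second price" for that slot.
        price = ranked[i + 1][1] if i + 1 < len(ranked) else 0.0
        outcome.append((advertiser, price))
    return outcome

print(gsp({"a": 3.0, "b": 5.0, "c": 1.0}, 2))
# b gets the top slot paying a's bid; a gets slot 2 paying c's bid
```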

Average-case Complexity
Page(s): 11
We review the many open questions and the few things that are known about the average-case complexity of computational problems. We shall follow the presentations of Impagliazzo, of Goldreich, and of Bogdanov and the author, and focus on the following subjects. (i) Average-case tractability. What does it mean for a problem to have an "efficient on average" algorithm with respect to a distribution of instances? There is more than one "correct" answer to this question, and a number of subtleties arise, which are interesting to discuss. (ii) Worst case versus average case. Is the existence of hard-on-average problems in a complexity class equivalent to the existence of worst-case-hard problems? This is the case for complexity classes like PSPACE and EXP, but it is open for NP, with partial evidence pointing to a negative answer. (To be sure, we believe that hard-on-average, and also worst-case-hard, problems exist in NP, and if so their existence is "equivalent" in the way two true statements are logically equivalent. There is, however, partial evidence that such an equivalence cannot be established via reductions. It is also known that such an equivalence cannot be established via any relativizing technique.) (iii) Amplification of average-case hardness. A weak sense in which a problem may be hard on average is that every efficient algorithm fails on a noticeable (at least inverse polynomial) fraction of inputs; a strong sense is that no algorithm can do much better than guess the answer at random. In many settings, the existence of problems of weak average-case complexity implies the existence of problems, in the same complexity class, of strong average-case complexity. It remains open to prove such an equivalence in the setting of uniform algorithms for problems in NP. (Some partial results are known even in this setting.) (iv) Reductions and completeness. Levin initiated a theory of completeness for distributional problems under reductions that preserve average-case tractability. Even establishing the existence of an NP-complete problem in this theory is a nontrivial (and interesting) result.

Truthful Approximation Schemes for Single-Parameter Agents
Page(s): 15 - 24
We present the first monotone randomized polynomial-time approximation scheme (PTAS) for minimizing the makespan on parallel related machines (Q||C_max), the paradigmatic problem in single-parameter algorithmic mechanism design. This result immediately gives a polynomial-time, truthful (in expectation) mechanism whose approximation guarantee attains the best possible one for all polynomial-time algorithms (assuming P ≠ NP). Our algorithmic techniques are flexible and also yield, among other results, a monotone deterministic quasi-PTAS for Q||C_max and a monotone randomized PTAS for max-min scheduling on related machines.

Discretized Multinomial Distributions and Nash Equilibria in Anonymous Games
Page(s): 25 - 34
We show that there is a polynomial-time approximation scheme for computing Nash equilibria in anonymous games with any fixed number of strategies (a very broad and important class of games), extending the two-strategy result of Daskalakis and Papadimitriou (2007). The approximation guarantee follows from a probabilistic result of more general interest: the distribution of the sum of n independent unit vectors with values ranging over {e_1, ..., e_k}, where e_i is the unit vector along dimension i of k-dimensional Euclidean space, can be approximated by the distribution of the sum of another set of independent unit vectors whose probabilities of attaining each value are multiples of 1/z for some integer z, so that the total variation distance between the two distributions is at most ε, where ε is bounded by an inverse polynomial in z and a function of k, but with no dependence on n. Our probabilistic result specifies the construction of a surprisingly sparse ε-cover, under the total variation distance, of the set of distributions of sums of independent unit vectors, which is of interest in its own right.
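For intuition, in the special case k = 2 the sum of n independent unit vectors is determined by a Poisson binomial variable, and one can compute directly how much rounding each probability to a multiple of 1/z moves the distribution of the sum. (A naive bound of this kind degrades with n; the theorem's contribution is a discretization whose error is independent of n.) A small sketch with hypothetical probabilities:

```python
def poisson_binomial(ps):
    """Distribution of the sum of independent Bernoulli(p_i) variables,
    computed by dynamic programming (repeated convolution)."""
    dist = [1.0]
    for p in ps:
        new = [0.0] * (len(dist) + 1)
        for s, w in enumerate(dist):
            new[s] += w * (1 - p)      # this variable contributes 0
            new[s + 1] += w * p        # this variable contributes 1
        dist = new
    return dist

def tv_distance(d1, d2):
    """Total variation distance between two distributions on {0,...,n}."""
    return 0.5 * sum(abs(a - b) for a, b in zip(d1, d2))

z = 100
ps = [0.137, 0.562, 0.903, 0.049, 0.771]     # hypothetical probabilities
rounded = [round(p * z) / z for p in ps]     # multiples of 1/z
tv = tv_distance(poisson_binomial(ps), poisson_binomial(rounded))
print(tv)   # small: each p_i moved by at most 1/(2z)
```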

Approximation Algorithms for Single-minded Envy-free Profit-maximization Problems with Limited Supply
Page(s): 35 - 44
We present the first polynomial-time approximation algorithms for single-minded envy-free profit-maximization problems (Guruswami et al., 2005) with limited supply. Our algorithms return a pricing scheme and a subset of customers that are designated the winners, which together satisfy the envy-freeness constraint; in our analyses, we compare the profit of our solution against the optimal value of the corresponding social-welfare-maximization (SWM) problem of finding a winner set with maximum total value. Our algorithms take any LP-based α-approximation algorithm for the corresponding SWM problem as input and return a solution that achieves profit at least OPT/O(α · log u_max), where OPT is the optimal value of the SWM problem and u_max is the maximum supply of an item. This immediately yields approximation guarantees of O(√m · log u_max) for the general single-minded envy-free problem, and O(log u_max) for the tollbooth and highway problems (Guruswami et al., 2005) and the graph-vertex-pricing problem (Balcan and Blum, 2006), since α = O(1) for all the corresponding SWM problems. Since OPT is an upper bound on the maximum profit achievable by any solution (i.e., irrespective of whether the solution satisfies the envy-freeness constraint), our results carry over directly to the non-envy-free versions of these problems as well. Our result thus also (constructively) establishes an upper bound of O(α · log u_max) on (i) the ratio between the optimal value of the profit-maximization problem and OPT, and (ii) the ratio between the optimal profits achievable with and without the constraint of envy-freeness.

Market Equilibria in Polynomial Time for Fixed Number of Goods or Agents
Page(s): 45 - 53
We consider markets in the classical Arrow-Debreu model. There are n agents and m goods. Each buyer has a concave utility function (of the bundle of goods he/she buys) and an initial bundle. At an "equilibrium" set of prices for goods, if each individual buyer separately exchanges the initial bundle for an optimal bundle at the set prices, the market clears, i.e., all goods are exactly consumed. Classical theorems guarantee the existence of equilibria, but computing them has been the subject of much recent research. In the related area of multi-agent games, much attention has been paid to complexity as well as algorithms. While the most general problems are hard, polynomial-time algorithms have been developed for restricted classes of games when one assumes the number of strategies is constant. For the market equilibrium problem, several important special cases of utility functions have been tackled. Here we begin a program for this problem similar to that for multi-agent games, where general utilities are considered. We begin by showing that if the utilities are separable piecewise-linear concave (PLC) functions, and the number of goods (or alternatively the number of buyers) is constant, then we can compute an exact equilibrium in polynomial time. Our technique for the case of a constant number of goods is to decompose the space of price vectors into cells using certain hyperplanes, so that in each cell each buyer's threshold marginal utility is known. Still, one needs to solve a linear optimization problem in each cell. We then show the main result: for general (non-separable) PLC utilities, an exact equilibrium can be found in polynomial time provided the number of goods is constant. The starting point of the algorithm is a "cell decomposition" of the space of price vectors using polynomial surfaces (instead of hyperplanes). We use results from computational algebraic geometry to bound the number of such cells. For solving the problem inside each cell, we introduce and use a novel LP-duality-based method. We note that if the number of buyers and the number of goods can both vary, the problem is PPAD-hard even for a very special case of PLC utilities, namely Leontief utilities.

The Sign-Rank of AC^0
Page(s): 57 - 66
The sign-rank of a matrix A = [A_ij] with ±1 entries is the least rank of a real matrix B = [B_ij] with A_ij B_ij > 0 for all i, j. We obtain the first exponential lower bound on the sign-rank of a function in AC^0. Namely, let f(x, y) = ∧_{i=1}^{m} ∨_{j=1}^{m²} (x_ij ∧ y_ij). We show that the matrix [f(x, y)]_{x,y} has sign-rank 2^{Ω(m)}. This in particular implies that Σ_2^cc ⊄ UPP^cc, which solves a longstanding open problem posed by Babai, Frankl, and Simon (1986). Our result additionally implies a lower bound in learning theory. Specifically, let φ_1, ..., φ_r : {0, 1}^n → ℝ be functions such that every DNF formula f : {0, 1}^n → {-1, +1} of polynomial size has the representation f ≡ sign(a_1 φ_1 + ... + a_r φ_r) for some reals a_1, ..., a_r. We prove that then r ≥ 2^{Ω(n^{1/3})}, which essentially matches an upper bound of 2^{Õ(n^{1/3})} due to Klivans and Servedio (2001). Finally, our work yields the first exponential lower bound on the size of threshold-of-majority circuits computing a function in AC^0. This substantially generalizes and strengthens the results of Krause and Pudlák (1997).
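To make the definition concrete: sign-rank at most r is witnessed by any real matrix of rank at most r whose entries agree in sign with A. A small checker for such a witness (the matrices below are our own toy example, not from the paper):

```python
import numpy as np

def is_sign_rank_witness(A, B, r):
    """B witnesses sign-rank(A) <= r if rank(B) <= r and
    A_ij * B_ij > 0 for all i, j (entrywise sign agreement)."""
    return bool(np.linalg.matrix_rank(B) <= r and np.all(A * B > 0))

# A rank-2 witness of the form B_ij = u_i + v_j (hypothetical numbers,
# chosen so no entry is zero); the +-1 matrix A = sign(B) then has
# sign-rank at most 2.
u = np.array([1.1, 2.0, -1.3, 0.5])
v = np.array([1.0, -1.05, 3.0, 2.0])
B = u[:, None] + v[None, :]          # rank 2: u*1^T + 1*v^T
A = np.sign(B)

print(is_sign_rank_witness(A, B, 2))   # True
```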

Arithmetic Circuits: A Chasm at Depth Four
Page(s): 67 - 75
We show that proving exponential lower bounds on depth-four arithmetic circuits implies exponential lower bounds for arithmetic circuits of unrestricted depth. In other words, for exponential-size circuits, additional depth beyond four does not help. We then show that a complete black-box derandomization of the identity testing problem for depth-four circuits with multiplication gates of small fan-in implies a nearly complete derandomization of general identity testing.

Dense Subsets of Pseudorandom Sets
Page(s): 76 - 85
A theorem of Green, Tao, and Ziegler can be stated (roughly) as follows: if R is a pseudorandom set, and D is a dense subset of R, then D may be modeled by a set M that is dense in the entire domain such that D and M are indistinguishable. (The precise statement refers to "measures" or distributions rather than sets.) The proof of this theorem is very general, and it applies to notions of pseudorandomness and indistinguishability defined in terms of any family of distinguishers with some mild closure properties. The proof proceeds via iterative partitioning and an energy-increment argument, in the spirit of the proof of the weak Szemerédi regularity lemma. The "reduction" involved in the proof has exponential complexity in the distinguishing probability. We present a new proof inspired by Nisan's proof of Impagliazzo's hardcore set theorem. The reduction in our proof has polynomial complexity in the distinguishing probability and provides a new characterization of the notion of "pseudoentropy" of a distribution. A proof similar to ours has also been independently discovered by Gowers [2]. We also follow the connection between the two theorems and obtain a new proof of Impagliazzo's hardcore set theorem via iterative partitioning and energy increment. While our reduction has exponential complexity in some parameters, it has the advantage that the hardcore set is efficiently recognizable.

Almost-Natural Proofs
Page(s): 86 - 91
Razborov and Rudich have shown that so-called "natural proofs" are not useful for separating P from NP unless hard pseudorandom number generators do not exist. This famous result is widely regarded as a serious barrier to proving strong lower bounds in circuit complexity theory. By definition, a natural combinatorial property satisfies two conditions, constructivity and largeness. Our main result is that if the largeness condition is weakened slightly, then not only does the Razborov-Rudich proof break down, but such "almost-natural" (and useful) properties provably exist. Specifically, under the same pseudorandomness assumption that Razborov and Rudich make, a simple, explicit property that we call "discrimination" suffices to separate P/poly from NP; discrimination is computable in nearly linear time and almost large, having density 2^{-q(n)} where q grows slightly faster than a quasi-polynomial function. For those who hope to separate P from NP using random function properties in some sense, discrimination is interesting, because it is constructive, yet may be thought of as a minor alteration of a property of a random function. The proof relies heavily on the self-defeating character of natural proofs. Our proof technique also yields an unconditional result, namely that there exist almost-large and useful properties that are constructive, if we are allowed to call nonuniform low-complexity classes "constructive." We note, though, that this unconditional result can also be proved by a more conventional counting argument.

Dynamic Connectivity: Connecting to Networks and Geometry
Page(s): 95 - 104
Dynamic connectivity is a well-studied problem, but so far the most compelling progress has been confined to the edge-update model: maintain an understanding of connectivity in an undirected graph, subject to edge insertions and deletions. In this paper, we study two more challenging, yet equally fundamental, problems. Subgraph connectivity asks to maintain an understanding of connectivity under vertex updates: updates can turn vertices on and off, and queries refer to the subgraph induced by "on" vertices. (For instance, this is closer to applications in networks of routers, where node faults may occur.) We describe a data structure supporting vertex updates in Õ(m^{2/3}) amortized time, where m denotes the number of edges in the graph. This greatly improves over the previous result [Chan, STOC'02], which required fast matrix multiplication and had an update time of O(m^{0.94}). The new data structure is also simpler. Geometric connectivity asks to maintain a dynamic set of n geometric objects, and query connectivity in their intersection graph. (For instance, the intersection graph of balls describes connectivity in a network of sensors with bounded transmission radius.) Previously, nontrivial fully dynamic results were known only for special cases like axis-parallel line segments and rectangles. We provide similarly improved update times, Õ(n^{2/3}), for these special cases. Moreover, we show how to obtain sublinear update bounds for virtually all families of geometric objects which allow sublinear-time range queries. In particular, we obtain the first sublinear update time for arbitrary 2D line segments, O*(n^{9/10}); for d-dimensional simplices, O*(n^{1-1/(d(2d+1))}); and for d-dimensional balls, O*(n^{1-1/((d+1)(2d+3))}).
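For reference, the subgraph-connectivity interface can be met naively with constant-time updates and an O(m)-time BFS per query over the "on" vertices; the paper's contribution is to beat such trade-offs, supporting vertex updates in Õ(m^{2/3}) amortized time. A baseline sketch of the interface (our own, not the paper's data structure):

```python
from collections import defaultdict, deque

class SubgraphConnectivity:
    """Naive baseline for the vertex-update model: O(1) updates that
    switch vertices on/off, O(m) BFS per connectivity query."""
    def __init__(self, edges):
        self.adj = defaultdict(list)
        for u, v in edges:
            self.adj[u].append(v)
            self.adj[v].append(u)
        self.on = set()

    def switch_on(self, v):
        self.on.add(v)

    def switch_off(self, v):
        self.on.discard(v)

    def connected(self, s, t):
        """Is t reachable from s in the subgraph induced by 'on' vertices?"""
        if s not in self.on or t not in self.on:
            return False
        seen, queue = {s}, deque([s])
        while queue:
            u = queue.popleft()
            if u == t:
                return True
            for w in self.adj[u]:
                if w in self.on and w not in seen:
                    seen.add(w)
                    queue.append(w)
        return False
```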

Algorithms for SingleSource Vertex Connectivity
Page(s): 105 - 114
In the survivable network design problem (SNDP), the goal is to find a minimum-cost subset of edges that satisfies a given set of pairwise connectivity requirements among the vertices. This general network design framework has been studied extensively and is tied to the development of major algorithmic techniques. For the edge-connectivity version of the problem, a 2-approximation algorithm is known for arbitrary pairwise connectivity requirements. However, no nontrivial algorithms are known for its vertex-connectivity counterpart. In fact, even highly restricted special cases of the vertex-connectivity version remain poorly understood. We study the single-source k-vertex-connectivity version of SNDP. We are given a graph G(V, E) with a subset T of terminals and a source vertex s, and the goal is to find a minimum-cost subset of edges ensuring that every terminal is k-vertex connected to s. Our main result is an O(k log n)-approximation algorithm for this problem; this improves upon the recent 2^{O(k²)} log^4 n approximation. Our algorithm is based on an intuitive rerouting scheme. The analysis relies on a structural result that may be of independent interest: we show that any solution can be decomposed into a disjoint collection of multiple-legged spiders, which are then used to reroute flow from terminals to the source via other terminals. We also obtain the first nontrivial approximation algorithm for the vertex-cost version of the same problem, achieving an O(k^7 log² n)-approximation.

A PolynomialTime Approximation Scheme for Euclidean Steiner Forest
Page(s): 115 - 124
We give a randomized O(n² log n)-time approximation scheme for the Steiner forest problem in the Euclidean plane. For every fixed ε > 0 and given any n pairs of terminals in the plane, our scheme finds a (1 + ε)-approximation to the minimum-length forest that connects every pair of terminals.

Degree Bounded Network Design with Metric Costs
Page(s): 125 - 134
Given a complete undirected graph, a cost function on edges, and a degree bound B, the degree-bounded network design problem is to find a minimum-cost simple subgraph with maximum degree B satisfying given connectivity requirements. Even for a simple connectivity requirement such as finding a spanning tree, computing a feasible solution for the degree-bounded network design problem is already NP-hard, and thus there is no polynomial-factor approximation algorithm for this problem. In this paper, we show that when the cost function satisfies the triangle inequality, there are constant-factor approximation algorithms for various degree-bounded network design problems. Global edge-connectivity: there is a (2 + 1/k)-approximation algorithm for the minimum bounded-degree k-edge-connected subgraph problem. Local edge-connectivity: there is a 6-approximation algorithm for the minimum bounded-degree Steiner network problem. Global vertex-connectivity: there is a (2 + (k-1)/n + 1/k)-approximation algorithm for the minimum bounded-degree k-vertex-connected subgraph problem. Spanning tree: there is a (1 + 1/(d-1))-approximation algorithm for the minimum bounded-degree spanning tree problem. These approximation algorithms return solutions with the smallest possible maximum degree, and the cost guarantee is obtained by comparing to the optimal cost when there are no degree constraints. This demonstrates that degree constraints can be incorporated into network design problems with metric costs. Our algorithms can be seen as a generalization of Christofides' algorithm for metric TSP. The main technical tool is a simplicity-preserving edge splitting-off operation, which is used to "shortcut" vertices of high degree while maintaining connectivity requirements and preserving simplicity of the solutions.
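The shortcutting step rests on the triangle inequality: splitting off two edges (u, v), (v, w) at a high-degree vertex v into the single edge (u, w) reduces deg(v) by two without increasing total cost. A one-line numeric check with hypothetical Euclidean points (Euclidean distance is a metric):

```python
from math import dist  # Euclidean distance (Python 3.8+)

# Splitting off (u,v) and (v,w) into (u,w): by the triangle inequality,
# cost(u,w) <= cost(u,v) + cost(v,w), so the shortcut never costs more.
u, v, w = (0.0, 0.0), (1.0, 2.0), (3.0, 1.0)
before = dist(u, v) + dist(v, w)   # cost of the two edges through v
after = dist(u, w)                 # cost of the shortcut edge
print(after <= before)             # True
```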

Matrix Sparsification for Rank and Determinant Computations via Nested Dissection
Page(s): 137 - 145
The nested dissection method developed by Lipton, Rose, and Tarjan is a seminal method for quickly performing Gaussian elimination on symmetric real positive definite matrices whose support structure satisfies good separation properties (e.g., planar). One can use the resulting LU factorization to deduce various parameters of the matrix. The main results of this paper show that we can remove the three restrictions of being "symmetric", "real", and "positive definite" and still compute the rank and, when relevant, also the absolute determinant, while keeping the running time of nested dissection. Our results are based, in part, on an algorithm that, given an arbitrary square matrix A of order n having m nonzero entries, creates another square matrix B of order n + 2t = O(m) with the property that each row and each column of B contains at most three nonzero entries and, furthermore, rank(B) = rank(A) + 2t and det(B) = det(A). The running time of this algorithm is only O(m), which is optimal.

Fast Modular Composition in any Characteristic
Page(s): 146 - 155
We give an algorithm for modular composition of degree-n univariate polynomials over a finite field F_q requiring n^{1+o(1)} log^{1+o(1)} q bit operations; this had earlier been achieved only in characteristic n^{o(1)} by Umans (2008). As an application, we obtain a randomized algorithm for factoring degree-n polynomials over F_q requiring (n^{1.5+o(1)} + n^{1+o(1)} log q) log^{1+o(1)} q bit operations, improving upon the methods of von zur Gathen & Shoup (1992) and Kaltofen & Shoup (1998). Our results also imply algorithms for irreducibility testing and computing minimal polynomials whose running times are best possible, up to lower-order terms. As in Umans (2008), we reduce modular composition to certain instances of multipoint evaluation of multivariate polynomials. We then give an algorithm that solves this problem optimally (up to lower-order terms), in arbitrary characteristic. The main idea is to lift to characteristic 0, apply a small number of rounds of multimodular reduction, and finish with a small number of multidimensional FFTs. The final evaluations are then reconstructed using the Chinese Remainder Theorem. As a bonus, we obtain a very efficient data structure supporting polynomial evaluation queries, which is of independent interest. Our algorithm uses techniques which are commonly employed in practice, so it may be competitive for real problem sizes. This contrasts with previous asymptotically fast methods relying on fast matrix multiplication.
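For context, modular composition asks for f(g(x)) mod (h(x), p). The straightforward Horner-style baseline below uses Θ(n) polynomial multiplications (roughly n³ field operations for degree-n inputs), which is the cost the paper drives down to n^{1+o(1)}. A sketch over a prime field (coefficient lists, lowest degree first; h assumed monic):

```python
def poly_mulmod(a, b, h, p):
    """Multiply polynomials a, b (coefficient lists, low degree first)
    modulo the monic polynomial h and the prime p."""
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    # Reduce modulo h by long division, eliminating top coefficients.
    n = len(h) - 1
    for i in range(len(prod) - 1, n - 1, -1):
        c = prod[i]
        if c:
            for j in range(len(h)):
                prod[i - n + j] = (prod[i - n + j] - c * h[j]) % p
    return prod[:n]

def modular_composition(f, g, h, p):
    """Naive f(g(x)) mod (h(x), p) by Horner's rule."""
    result = [f[-1] % p]
    for coeff in reversed(f[:-1]):
        result = poly_mulmod(result, g, h, p)   # multiply by g mod h
        result[0] = (result[0] + coeff) % p     # add the next coefficient
    return result
```

For example, with f = x² + 1, g = x + 2, h = x² + 1 over F_5, we get f(g) = x² + 4x + 5 ≡ 4x + 4 (mod x² + 1, 5).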