Date: 21-23 Oct. 1985

[Front cover]
Page(s): C1 
Table of contents
Page(s): ix-xii
Foreword
Page(s): iii 
Machtey Award
Page(s): v 
Separating the polynomial-time hierarchy by oracles
Page(s): 1-10
We present exponential lower bounds on the size of depth-k Boolean circuits for computing certain functions. These results imply that there exists an oracle set A such that, relative to A, all the levels in the polynomial-time hierarchy are distinct, i.e., Σ_k^{P,A} is properly contained in Σ_{k+1}^{P,A} for all k.

Deterministic simulation of probabilistic constant depth circuits
Page(s): 11-19
We explicitly construct, for every integer n and ε > 0, a family of functions (pseudorandom bit generators) f_{n,ε}: {0,1}^{n^ε} → {0,1}^n with the following property: for a random seed, the pseudorandom output "looks random" to any polynomial-size, constant-depth, unbounded fan-in circuit. Moreover, the functions f_{n,ε} themselves can be computed by uniform polynomial-size, constant-depth circuits. Some (interrelated) consequences of this result are given below. 1) Deterministic simulation of probabilistic algorithms. The constant-depth analogues of the probabilistic complexity classes RP and BPP are contained in the deterministic complexity classes DSPACE(n^ε) and DTIME(2^{n^ε}) for any ε > 0. 2) Making probabilistic constructions deterministic. Some probabilistic constructions of structures that elude explicit constructions can be simulated in the above complexity classes. 3) Approximate counting. The number of satisfying assignments to a (CNF or DNF) formula, if not too small, can be arbitrarily approximated in DSPACE(n^ε) and DTIME(2^{n^ε}), for any ε > 0. We also present two results for the special case of depth-2 circuits. They deal, respectively, with finding a satisfying assignment and approximately counting the number of assignments. For example, for 3CNF formulas with a fixed fraction of satisfying assignments, both tasks can be performed in polynomial time!
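The polynomial-time claim for 3CNF formulas with a fixed fraction of satisfying assignments can be illustrated by the naive randomized baseline that such results derandomize: sample uniform assignments and count the fraction satisfied. This is only background intuition, not the paper's deterministic construction; the function names and the DIMACS-style clause encoding below are our own illustrative choices.

```python
import random

def eval_cnf(clauses, assignment):
    # clauses: list of clauses; each clause is a list of nonzero ints.
    # Literal v means variable |v| is true, -v means it is false (DIMACS style).
    return all(any((lit > 0) == assignment[abs(lit)] for lit in clause)
               for clause in clauses)

def estimate_sat_fraction(clauses, n_vars, samples=20000, rng=random.Random(0)):
    # Naive Monte Carlo: sample uniform assignments and count satisfied ones.
    # If the true fraction is bounded away from 0, polynomially many samples
    # give an arbitrarily good estimate with high probability.
    hits = 0
    for _ in range(samples):
        assignment = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
        hits += eval_cnf(clauses, assignment)
    return hits / samples
```

For instance, the one-clause formula (x1 ∨ x2) is satisfied by 3 of the 4 assignments, and the sampler's estimate concentrates near 0.75.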

Amplification of probabilistic boolean formulas
Page(s): 20-29
The amplification of probabilistic Boolean formulas refers to combining independent copies of such formulas to reduce the error probability. Les Valiant used the amplification method to produce monotone Boolean formulas of size O(n^5.3) for the majority function of n variables. In this paper we show that the amount of amplification that Valiant obtained is optimal. In addition, using the amplification method we give an O(k^4.3 n log n) upper bound for the size of monotone formulas computing the k-th threshold function of n variables.
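Valiant's amplification step combines four independent copies of a formula as (f1 ∧ f2) ∨ (f3 ∧ f4), which sends an acceptance probability p to A(p) = 1 − (1 − p²)². Iterating this map drives probabilities away from its fixed point (√5 − 1)/2 ≈ 0.618: values above it converge to 1, values below it to 0. The sketch below just iterates A numerically; it illustrates the standard construction, not the paper's optimality proof.

```python
def amplify(p, rounds):
    # One round combines four independent copies as (f1 AND f2) OR (f3 AND f4),
    # sending acceptance probability p to 1 - (1 - p^2)^2.
    for _ in range(rounds):
        p = 1 - (1 - p * p) ** 2
    return p

# Fixed point of the amplification map: phi^2 = 1 - phi, so A(phi) = phi.
PHI = (5 ** 0.5 - 1) / 2  # ~0.618
```

A dozen rounds already separate 0.65 and 0.55 to nearly 1 and nearly 0, which is the sense in which the construction "amplifies" a small bias across the threshold.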

On networks of noisy gates
Page(s): 30-38
We show that many Boolean functions (including, in a certain sense, "almost all" Boolean functions) have the property that the number of noisy gates needed to compute them differs from the number of noiseless gates by at most a constant factor. This may be contrasted with results of von Neumann, Dobrushin and Ortyukov to the effect that (1) for every Boolean function, the number of noisy gates needed is larger by at most a logarithmic factor, and (2) for some Boolean functions, it is larger by at least a logarithmic factor.

How easy is local search?
Page(s): 39-42
Identification is easier than decoding
Page(s): 43-50
Several questions related to the complexity of communication over channels with noise are addressed. We compare some of our results to well-known results in information theory. In particular we compare the following two problems. Assuming that the communication channel between two processors P1 and P2 makes an error with probability ε > 0, the identification problem is to determine whether P1 and P2 have the same n-bit integer. The decoding problem is for P2 to determine the n-bit integer of P1. For the latter problem we show that given any arbitrarily large constant λ > 0, there exists an ε, 0 < ε < 1/2, for which no scheme requiring less than λn bits of communication can guarantee (for large n) any bound q < 1 on the error probability. On the other hand, given any arbitrarily small constant γ > 0 and any ε, 0 < ε < 1/2, the identification problem can be solved with (1+γ)n bits of (one-way) communication with an error probability bounded by c·2^{-αn}, where c and α are positive constants. These techniques are extended to other problems, and a one-bit-output Boolean function is shown to exhibit a similar behavior to that of the decoding problem regardless of how the input bits are partitioned among the two processors.
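To see why identification can be so much cheaper than decoding, here is the standard fingerprinting idea on a noiseless channel: instead of exchanging the n-bit integers, the processors compare residues modulo a random prime, which a false match survives only if the prime divides the difference. This is background intuition only, not the paper's noisy-channel scheme; the prime range and trial count below are illustrative choices.

```python
import random

def is_prime(m):
    # Trial division; fine for the small moduli used here.
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def identical(x, y, n_bits, trials=1, rng=random.Random(0)):
    # Fingerprint equality test: declare x == y iff x ≡ y (mod p) for random
    # primes p.  A false "equal" requires p | (x - y), and an n-bit difference
    # has at most ~n_bits prime divisors, so a prime drawn from a much larger
    # pool (here, below n_bits**3) errs with small probability.
    limit = max(100, n_bits ** 3)
    for _ in range(trials):
        while True:
            p = rng.randrange(2, limit)
            if is_prime(p):
                break
        if x % p != y % p:
            return False
    return True
```

Each trial costs O(log n) communicated bits rather than n, which is the gap between identification and decoding that the paper establishes in the noisy setting.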

Three theorems on polynomial degrees of NP-sets
Page(s): 51-55
We show that recursive ascending sequences of polynomial-time (p-) degrees do not possess minimal upper bounds; that, for every nonzero p-degree a, there is a lesser nonzero p-degree b which does not help a; and that every nonzero p-degree is half of a minimal pair.

Simulating two pushdown stores by one tape in O(n^1.5 √log n) time
Page(s): 56-64
Based on two graph separator theorems, we present two unexpected upper bounds and resolve several open problems for on-line computations. (1) 1-tape nondeterministic machines can simulate 2 pushdown stores in time O(n^1.5 √log n) (true for both on-line and off-line machines). Together with the Ω(n^1.5/√log n) lower bound, this solves open problem 1 in [DGPR] for the 1-tape vs. 2-pushdown case. It also disproves the commonly conjectured Ω(n^2) lower bound. (2) The languages defined by Maass and Freivalds, aimed at obtaining optimal lower bounds for 1-tape nondeterministic machines, can be accepted in O(n^2 log log n √log n) and O(n^1.5 √log n) time by a 1-tape TM, respectively. (3) 3 pushdown stores are better than 2 pushdown stores. This answers a rather old open problem by Book and Greibach, and Duris and Galil. An Ω(n^{4/3}/log^e n) lower bound is also obtained. (4) 1 tape can nondeterministically simulate 1 queue in O(n^1.5/√log n) time. This disproves the conjectured Ω(n^2) lower bound. Also 1 queue can simulate 2 pushdowns in time O(n^1.5 √log n).

Nondeterministic versus probabilistic linear search algorithms
Page(s): 65-73
The "component counting lower bound" known for deterministic linear search algorithms (LSA's) also holds for their probabilistic versions (PLSA's) for many problems, even if two-sided error is allowed, and if one does not charge for probabilistic choice. This implies lower bounds on PLSA's for, e.g., the element distinctness problem (Ω(n log n)) and the knapsack problem (Ω(n^2)). These results yield the first separations between probabilistic and nondeterministic LSA's, because the above problems are nondeterministically much easier. Previous lower bounds for PLSA's either only worked for one-sided error "on the nice side", i.e. on the side where the problems are even nondeterministically hard, or only for probabilistic comparison trees. The proof of the lower bound differs fundamentally from all known lower bounds for LSA's or PLSA's, because it does not reduce the problem to a combinatorial one but argues extensively about, e.g., a nondiscrete measure for similarity of sets in R^n. This lower bound result solves an open problem posed by Manber and Tompa as well as by Snir. Furthermore, a PLSA for n input variables with two-sided error and expected runtime T can be simulated by a (deterministic) LSA in T^2 n steps. This proves that the gaps between probabilistic and deterministic LSA's shown by Snir cannot be too large. As this simulation even holds for algebraic computation trees, we show that probabilistic and deterministic versions of this model are polynomially related. This is a weaker version of a result due to the author which shows that in the case of LSA's, even the nondeterministic and deterministic versions are polynomially related.

The complexity of facets resolved
Page(s): 74-78
Using dual approximation algorithms for scheduling problems: Theoretical and practical results
Page(s): 79-89
The problem of scheduling a set of n jobs on m identical machines so as to minimize the makespan is perhaps the most well-studied problem in the theory of approximation algorithms for NP-hard optimization problems. In this paper we present the strongest possible type of result for this problem, a polynomial approximation scheme. More precisely, for each ε, we give an algorithm that runs in time O((n/ε)^{1/ε^2}) and has relative error at most ε. For algorithms that are polynomial in n and m, the strongest previously known result was that the MULTIFIT algorithm delivers a solution with no worse than 20% relative error. In addition, we present a refinement of our scheme, in the case where the performance guarantee is equal to that of MULTIFIT, that yields an algorithm that is both more efficient and easier to analyze than MULTIFIT. In this case, in order to guarantee a maximum relative error of 1/5 + 2^{-k}, the algorithm runs in O(n(k + log n)) time. The scheme is based on a new approach to constructing approximation algorithms, which we call dual approximation algorithms, where the aim is to find superoptimal, but infeasible, solutions, and the performance is measured by the degree of infeasibility allowed. This notion should find wide applicability in its own right, and should be considered for any optimization problem where traditional approximation algorithms have been particularly elusive.
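The scheme itself is intricate; as a point of comparison, the classical LPT heuristic (longest processing time first, not the paper's algorithm) already guarantees relative error at most 1/3 − 1/(3m) by assigning jobs in decreasing size order to the currently least-loaded machine:

```python
import heapq

def lpt_makespan(jobs, m):
    # Longest-Processing-Time-first: sort jobs in decreasing order, then
    # repeatedly assign the next job to the least-loaded machine, tracked
    # with a min-heap of machine loads.  Returns the resulting makespan.
    loads = [0] * m
    heapq.heapify(loads)
    for t in sorted(jobs, reverse=True):
        least = heapq.heappop(loads)
        heapq.heappush(loads, least + t)
    return max(loads)
```

On jobs (5, 4, 3, 3, 3) with two machines, LPT produces makespan 10 against the optimum 9 (split 5+4 vs. 3+3+3), inside the 4/3 bound; the paper's scheme drives such gaps below any fixed ε at the cost of a much larger (but still polynomial) running time.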

A scaling algorithm for weighted matching on general graphs
Page(s): 90-100
This paper presents an algorithm for maximum matching on general graphs with integral edge weights, running in time O(n^{3/4} m lg N), where n, m and N are the number of vertices, the number of edges, and the largest edge weight magnitude, respectively. The best previous bound is O(n(m lg lg lg_d n + n lg n)), where d is the density of the graph. The algorithm finds augmenting paths in batches by scaling the weights. The algorithm extends to degree-constrained subgraphs and hence to shortest paths on undirected graphs, the Chinese postman problem, and finding a maximum cut of a planar graph. It speeds up Christofides' travelling salesman approximation algorithm from O(n^3) to O(n^{2.75} lg n). A list splitting problem that arises in Edmonds' matching algorithm is solved in O(mα(m,n)) time, where m is the number of operations on a universe of n elements; the list splitting algorithm does not use set merging. Applications are given to update problems for red-green matching, the cardinality Chinese postman problem and the maximum cardinality plane cut problem; also to the all-pairs shortest paths problem on undirected graphs with lengths plus or minus one.

An all-pairs shortest path algorithm with expected running time O(n^2 log n)
Page(s): 101-105
An algorithm is described that solves the all-pairs shortest path problem for a nonnegatively weighted graph. On quite general classes of random graphs, the algorithm has an average running time of O(n^2 log n), where n is the number of vertices in the graph.
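For contrast with the average-case bound, the generic worst-case baseline runs Dijkstra's algorithm from every source, which costs O(nm log n) on a graph with m edges; the paper's algorithm beats this on random graphs. A minimal sketch of that baseline (not the paper's algorithm):

```python
import heapq

def all_pairs_shortest_paths(adj):
    # adj: {u: [(v, w), ...]} adjacency lists with nonnegative weights w.
    # Runs Dijkstra from every source; returns {u: {v: dist(u, v)}}.
    def dijkstra(src):
        dist = {src: 0}
        pq = [(0, src)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float('inf')):
                continue  # stale heap entry
            for v, w in adj.get(u, []):
                nd = d + w
                if nd < dist.get(v, float('inf')):
                    dist[v] = nd
                    heapq.heappush(pq, (nd, v))
        return dist
    return {u: dijkstra(u) for u in adj}
```

The average-case improvement comes from bounding how much of each single-source computation actually needs to run on a random graph, not from changing this overall structure.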

Recognizing circle graphs in polynomial time
Page(s): 106-116
Our main result is a polynomial-time algorithm for deciding whether a given graph is a circle graph, that is, the intersection graph of a set of chords on a circle. Our algorithm utilizes two new graph-theoretic results, regarding necessary induced subgraphs of graphs having neither articulation points nor similar pairs of vertices.

Why certain subgraph computations require only linear time
Page(s): 117-125
A general problem in computational graph theory is that of finding an optimal subgraph H of a given weighted graph G. The matching problem (which is easy) and the traveling salesman problem (which is not) are well-known examples of this general problem. In the literature one can also find a variety of ad hoc algorithms for solving certain special cases in linear time. We present a general methodology for constructing linear-time algorithms in the case that the graph G is defined by certain rules of composition (as are trees, series-parallel graphs, and outerplanar graphs) and the desired subgraph H satisfies a "regular" property (such as independence or matching). This methodology is applied to obtain a linear-time algorithm for computing the irredundance number of a tree, a problem for which no polynomial-time algorithm was previously known.
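A standard instance of this methodology, for a "regular" property on the simplest composed graphs, is computing a maximum independent set on a tree in linear time: one bottom-up pass maintaining two states per vertex (best size with the vertex excluded or included). This is an illustrative example in the spirit of the paper, not its irredundance algorithm:

```python
def max_independent_set_size(tree, root):
    # tree: {u: [child, ...]} rooted adjacency lists.
    # Returns the size of a maximum independent set, via one bottom-up pass.
    def solve(u):
        excl, incl = 0, 1  # best sizes with u excluded / included
        for c in tree.get(u, []):
            c_excl, c_incl = solve(c)
            excl += max(c_excl, c_incl)  # child unconstrained if u excluded
            incl += c_excl               # child must be excluded if u included
        return excl, incl
    return max(solve(root))
```

Each vertex contributes constant work beyond visiting its children, giving linear total time; the paper's framework generates such dynamic programs systematically for any regular property.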

Efficient string matching in the presence of errors
Page(s): 126-136
Consider the string matching problem where differences between characters of the pattern and characters of the text are allowed. Each difference is due to either a mismatch between a character of the text and a character of the pattern, or a superfluous character in the text, or a superfluous character in the pattern. Given a text of length n, a pattern of length m and an integer k, we present an algorithm for finding all occurrences of the pattern in the text, each with at most k differences. The algorithm runs in O(m^2 + k^2 n) time. Given the same input we also present an algorithm for finding all occurrences of the pattern in the text, each with at most k mismatches (superfluous characters in either the text or the pattern are not allowed). This algorithm runs in O(k(m log m + n)) time.
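The textbook O(mn) dynamic program that such algorithms improve upon computes, for each text position, the fewest differences over all substrings of the text ending there; the first row is all zeros, so a match may start anywhere. A sketch of that baseline (not the paper's O(m^2 + k^2 n) algorithm):

```python
def approx_occurrences(pattern, text, k):
    # D[i][j] = fewest differences matching pattern[:i] against some substring
    # of text ending at position j.  Row 0 is all zeros (the match may start
    # anywhere in the text), so D[m][j] <= k flags an occurrence ending at j.
    m, n = len(pattern), len(text)
    prev = [0] * (n + 1)          # row 0
    for i in range(1, m + 1):
        cur = [i] * (n + 1)       # column 0: delete i pattern characters
        for j in range(1, n + 1):
            sub = prev[j - 1] + (pattern[i - 1] != text[j - 1])  # match/mismatch
            cur[j] = min(sub,
                         prev[j] + 1,      # superfluous pattern character
                         cur[j - 1] + 1)   # superfluous text character
        prev = cur
    return [j for j in range(1, n + 1) if prev[j] <= k]  # 1-based end positions
```

With k = 0 this degenerates to exact matching; larger k admits the mismatch/insertion/deletion differences described in the abstract.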

The least weight subsequence problem
Page(s): 137-143
The least weight subsequence (LWS) problem is introduced, and is shown to be equivalent to the classic minimum path problem for directed graphs. A special case of the LWS problem is shown to be solvable in O(n log n) time generally and, for certain weight functions, in linear time. A number of applications are given, including an optimum paragraph formation problem and the problem of finding a minimum height B-tree, whose solutions realize an improvement in asymptotic time complexity.
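Concretely, the LWS recurrence is f(0) = 0 and f(j) = min over 0 ≤ i < j of f(i) + w(i, j), where w(i, j) is the cost of making positions i+1..j one "piece" (one line of a paragraph, say). The naive evaluation below takes O(n^2) time; the paper's contribution is an O(n log n) method for suitable (e.g. concave) weight functions. The toy weight function in the usage note is an illustrative choice:

```python
def least_weight_subsequence(n, w):
    # Naive O(n^2) evaluation of the LWS recurrence:
    #   f[0] = 0,  f[j] = min over i < j of f[i] + w(i, j).
    # Returns (f[n], choice) where choice[j] is the minimizing i for each j.
    INF = float('inf')
    f = [0] + [INF] * n
    choice = [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(j):
            c = f[i] + w(i, j)
            if c < f[j]:
                f[j], choice[j] = c, i
    return f[n], choice
```

For example, with w(i, j) = (j − i)^2, splitting n = 4 positions into four pieces of length 1 is optimal, giving total weight 4.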

Motion planning in the presence of moving obstacles
Page(s): 144-154
This paper investigates the computational complexity of planning the motion of a body B in 2D or 3D space, so as to avoid collision with moving obstacles of known, easily computed, trajectories. Dynamic movement problems are of fundamental importance to robotics, but their computational complexity has not previously been investigated. We provide evidence that the 3D dynamic movement problem is intractable even if B has only a constant number of degrees of freedom of movement. In particular, we prove the problem is PSPACE-hard if B is given a velocity modulus bound on its movements, and is NP-hard even if B has no velocity modulus bound, where in both cases B has 6 degrees of freedom. To prove these results we use a unique method of simulation of a Turing machine which uses time to encode configurations (whereas previous lower bound proofs in robotics used the system position to encode configurations and so required an unbounded number of degrees of freedom). We also investigate a natural class of dynamic problems which we call asteroid avoidance problems: B, the object we wish to move, is a convex polyhedron which is free to move by translation with bounded velocity modulus, and the polyhedral obstacles have known translational trajectories but cannot rotate. This problem has many applications to robot, automobile, and aircraft collision avoidance. Our main positive results are polynomial-time algorithms for the 2D asteroid avoidance problem with a bounded number of obstacles, as well as single-exponential-time and n^{O(log n)}-space algorithms for the 3D asteroid avoidance problem with an unbounded number of obstacles. Our techniques for solving these asteroid avoidance problems are novel in the sense that they are completely unrelated to previous algorithms for planning movement in the case of static obstacles.
We also give some additional positive results for various other dynamic movers' problems, and in particular give polynomial-time algorithms for the case in which B has no velocity bounds and the movements of the obstacles are algebraic in space-time.

Visibility-polygon search and Euclidean shortest paths
Page(s): 155-164
Consider a collection of disjoint polygons in the plane containing a total of n edges. We show how to build, in O(n^2) time and space, a data structure from which in O(n) time we can compute the visibility polygon of a given point with respect to the polygon collection. As an application of this structure, the visibility graph of the given polygons can be constructed in O(n^2) time and space. This implies that the shortest path that connects two points in the plane and avoids the polygons in our collection can be computed in O(n^2) time, improving earlier O(n^2 log n) results.

Slimming down search structures: A functional approach to algorithm design
Page(s): 165-174
We establish new upper bounds on the complexity of several "rectangle" problems. Our results include, for instance, optimal algorithms for range counting and rectangle searching in two dimensions. These involve linear space implementations of range trees and segment trees. The algorithms we give are simple and practical; they can be dynamized and taken into higher dimensions. Also of interest is the nonstandard approach which we follow to obtain these results: it involves transforming data structures on the basis of functional specifications.

The complexity of recognizing polyhedral scenes
Page(s): 175-185
Given a drawing of straight lines in the plane, we wish to decide whether it is the projection of the visible part of a set of opaque polyhedra. Although there is an extensive literature and reports on empirically successful algorithms for this problem, there has been no definite result concerning its complexity. In this paper we show that, rather surprisingly, this problem is NP-complete. This is true even in the relatively simple case of trihedral scenes (no four planes share a point) without shadows or cracks. Despite this negative result, we present a fast algorithm for the important special case of orthohedral scenes (all planes are perpendicular to one of the three axes) with a fixed number of "possible" objects.