30 June – 3 July 1991

Counting classes are at least as hard as the polynomial-time hierarchy
Publication Year: 1991, Page(s): 2–12
Cited by: Papers (9)
It is shown that many natural counting classes are at least as computationally hard as PH (the polynomial-time hierarchy) in the following sense: for each K of the counting classes, every set in K(PH) is polynomial-time randomized many-one reducible to a set in K with two-sided exponentially small error probability. As a consequence, these counting classes are computationally harder than PH unless …

PP is closed under truth-table reductions
Publication Year: 1991, Page(s): 13–15
Cited by: Papers (9)
R. Beigel et al. (1991) showed that PP is closed under intersection and a variety of special cases of truth-table closure. In the present work, the authors extend the techniques of Beigel et al. to show that PP is closed under general polynomial-time truth-table reductions.

A complexity theory for feasible closure properties
Publication Year: 1991, Page(s): 16–29
Cited by: Papers (4)
The authors propose and develop a complexity theory of feasible closure properties. For each of the classes #P, SpanP, OptP, and MidP, they establish complete characterizations, in terms of complexity-class collapses, of the conditions under which the class has all feasible closure properties. In particular, #P is P-closed if and only if PP = UP; SpanP is P-closed if and only if R…; MidP is P-cl…

Gap-definable counting classes
Publication Year: 1991, Page(s): 30–42
Cited by: Papers (18)
The function class #P lacks a crucial closure property: it is not closed under subtraction. To remedy this problem, the authors introduce the function class GapP as a natural alternative to #P. GapP is the closure of #P under subtraction, and has all the other useful closure properties of #P as well. It is shown that most previously studied counting classes are gap-definable, i.e., …
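The closure under subtraction that this abstract refers to has a standard closed form; as a sketch of the definition behind the paper (the path-count notation acc_M and rej_M is ours, not the abstract's):

```latex
\mathrm{GapP}
  \;=\; \{\, f - g \;:\; f, g \in \#\mathrm{P} \,\}
  \;=\; \{\, \mathrm{acc}_M - \mathrm{rej}_M \;:\; M \text{ a polynomial-time NTM} \,\}
```

where acc_M(x) and rej_M(x) count the accepting and rejecting computation paths of M on input x.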

The power of witness reduction
Publication Year: 1991, Page(s): 43–59
Cited by: Papers (2)
Recent results by M. Ogiwara and L. A. Hemachandra (1990) established a connection between functions being hard for a class of functions and functions interacting with the class to effect witness reduction. The author believes that the ability to achieve some form of witness reduction is what makes a function hard for a class of functions. To support this thesis he defines new function classes and …

Proceedings of the Sixth Annual Structure in Complexity Theory Conference (Cat. No.91CH30221)
Publication Year: 1991

Bounded queries in recursion theory: a survey
Publication Year: 1991, Page(s): 62–78
Cited by: Papers (4)
The author surveys much of the work that has been done on the following two questions: (1) What functions can one compute with m queries to A? and (2) Are there functions that can be computed with m queries to A that cannot be computed with m-1 queries to A, for any set X? The framework is recursion-theoretic; the computations have no time or space bound.

On reductions of NP sets to sparse sets
Publication Year: 1991, Page(s): 79–88
Cited by: Papers (6)
M. Ogiwara and O. Watanabe (1990) showed that if SAT is bounded truth-table reducible to a sparse set, then P = NP. In the present work, the authors simplify their proof, strengthen the result, and use it to obtain several new results. Among the new results are the following: applications of the main theorem to log-truth-table and log-Turing reductions of NP sets to sparse sets; generalizations of t…

On the computational complexity of small descriptions
Publication Year: 1991, Page(s): 89–101
Cited by: Papers (6)
For a set L that is polynomial-time reducible to some sparse set, the authors investigate the computational complexity of such sparse sets relative to L. They construct sets A and B such that both of them are polynomial-time reducible to some sparse set, but A (resp., B) is polynomial-time reducible to no sparse set in P^A (resp., NP^B ∩ coNP^B); that is, the complexity …

Complexity classes and sparse oracles
Publication Year: 1991, Page(s): 102–108
Cited by: Papers (10)
The authors obtain positive relativization results. In particular, the goal is to prove statements of the kind: 'Given two complexity classes C_1 and C_2, C_1 = C_2 if and only if for every sparse set S, C_1^S = C_2^S.' The authors derive a sufficient condition to obtain such results and, as an application, they prove a general theorem from which, as far as they…

PSPACE is provable by two provers in one round
Publication Year: 1991, Page(s): 110–115
Cited by: Papers (6)
It is shown that every language in PSPACE, or equivalently every language accepted by an unbounded-round interactive proof system, has a one-round, two-prover interactive proof with exponentially small error probability. To obtain this result, the correctness of a simple but powerful method for parallelizing two-prover interactive proofs to reduce their error is proved.

On the success probability of the two provers in one-round proof systems
Publication Year: 1991, Page(s): 116–123
Cited by: Papers (24)
The author addresses the problem of reducing the error probability of two-prover one-round proof systems without increasing the number of provers or the number of rounds. An example, the non-interactive agreement protocol, in which executing such a protocol twice in parallel does not decrease the error probability at all, is constructed. Upper bounds on the error probability of specific classes of pro…

On the random-self-reducibility of complete sets
Publication Year: 1991, Page(s): 124–132
Cited by: Papers (5)
Informally, a function f is random-self-reducible if the evaluation of f at any given instance x can be reduced in polynomial time to the evaluation of f at one or more random instances y_i. A set is random-self-reducible if its characteristic function is. The authors generalize the previous formal definitions of random-self-reducibility. They show that, even under this very general definitio…

One-way functions, hard-on-average problems, and statistical zero-knowledge proofs
Publication Year: 1991, Page(s): 133–138
Cited by: Papers (15)
The author studies connections among one-way functions, hard-on-average problems, and statistical zero-knowledge proofs. In particular, he shows how these three notions are related and how the third notion can be better characterized, assuming the first one.

On one-query, self-reducible sets
Publication Year: 1991, Page(s): 139–151
Cited by: Papers (10)
The authors study one-word-decreasing self-reducible sets, which are the usual self-reducible sets with the peculiarity that the self-reducibility machine makes at most one query to a word lexicographically smaller than the input. It is first shown that for all counting classes defined by a predicate on the number of accepting paths there exist complete sets which are one-word-decreasing self-redu…

Combinatorics and Kolmogorov complexity
Publication Year: 1991, Page(s): 154–163
Cited by: Papers (1)
The authors investigate combinatorial properties of finite sequences with high Kolmogorov complexity. They also demonstrate the utility of a Kolmogorov-complexity method in combinatorial theory by several examples (such as the coin-weighing problem).
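The Kolmogorov-complexity method mentioned here typically rests on a one-line counting fact, sketched below for context (the derivation is standard background, not taken from this abstract): there are too few short programs to compress every string.

```latex
\#\{\, p \in \{0,1\}^{*} : |p| < n \,\}
  \;=\; \sum_{i=0}^{n-1} 2^{i}
  \;=\; 2^{n} - 1 \;<\; 2^{n}
\quad\Longrightarrow\quad
\exists\, x \in \{0,1\}^{n} \text{ with } K(x) \ge n .
```

Such incompressible strings are then shown to have whatever combinatorial property is being proved, since any counterexample would admit a short description.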

The complexity of malign ensembles
Publication Year: 1991, Page(s): 164–171
Cited by: Papers (1)
The author analyzes the concept of malignness, which is the property of probability ensembles making the average-case running time equal to the worst-case running time for a class of algorithms. He derives lower and upper bounds on the complexity of malign ensembles, which are tight for exponential-time algorithms and which show that no polynomial-time computable malign ensemble exists for the cla…

Randomized vs. deterministic decision-tree complexity for read-once Boolean functions
Publication Year: 1991, Page(s): 172–179
Cited by: Papers (2)
The authors consider the deterministic and the randomized decision-tree complexities for Boolean functions, denoted DC(f) and RC(f), respectively. It is well known that RC(f) ≥ DC(f)^0.5 for every Boolean function f (called a 0.5 exponent), but no better lower bound is known for all Boolean functions, whereas the best known upper bound is RC(f) = Θ(DC(f)^{0.753…}) (or …
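The 0.753… exponent quoted here arises from Saks and Wigderson's analysis of uniform binary NAND trees, where it has the closed form log2((1+√33)/4); a minimal sketch evaluating that constant (the closed form is background knowledge, not stated in this abstract):

```python
import math

# Randomized decision-tree complexity of the complete binary NAND tree
# on n leaves is Theta(n^alpha), where (per Saks-Wigderson) the exponent
# alpha has the closed form log2((1 + sqrt(33)) / 4).
alpha = math.log2((1 + math.sqrt(33)) / 4)
print(f"alpha = {alpha:.4f}")  # approximately 0.7537
```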

On the Monte Carlo Boolean decision tree complexity of read-once formulae
Publication Year: 1991, Page(s): 180–187
Cited by: Papers (7)  Patents (1)
In the Boolean decision tree model there is at least a linear gap between the Monte Carlo and the Las Vegas complexity of a function, depending on the error probability. The author proves for a large class of read-once formulae that this trivial speedup is the best that a Monte Carlo algorithm can achieve. For every formula F belonging to that class it is shown that the Monte Carlo complexity of F…

A pseudorandom oracle characterization of BPP
Publication Year: 1991, Page(s): 190–195
Cited by: Papers (3)
It is known from work of C. H. Bennett and J. Gill (1981) and K. Ambos-Spies (1986) that the following conditions are equivalent: (i) L in BPP; (ii) for almost all oracles A, L in P^A. It is shown here that the following conditions are also equivalent to (i) and (ii): (iii) the set of oracles A for which L in P^A has pspace-measure 1; (iv) for every pspace-random oracle A, L in P^A.

Notions of resource-bounded category and genericity
Publication Year: 1991, Page(s): 196–212
Cited by: Papers (7)
The author investigates the strength of resource-bounded generic sets for deciding results in relativized complexity. He makes technical improvements to J. H. Lutz's notion of resource-bounded Baire category (1987, 1989) to show that almost every exponential-time set (in the author's sense of category) separates P from NP. It is shown that the author's improved notion of category, while strictly mor…

BPP has subexponential time simulations unless EXPTIME has publishable proofs
Publication Year: 1991, Page(s): 213–219
Cited by: Papers (2)
It is shown that BPP can be simulated in subexponential time for infinitely many input lengths unless exponential time collapses to the second level of the polynomial-time hierarchy, has polynomial-size circuits, and has publishable proofs (EXPTIME = MA). It is also shown that BPP is contained in subexponential time unless exponential time has publishable proofs for infinitely many input lengths. In…

Relating equivalence and reducibility to sparse sets
Publication Year: 1991, Page(s): 220–229
Cited by: Papers (3)
For various polynomial-time reducibilities r, the authors ask whether being r-reducible to a sparse set is a broader notion than being r-equivalent to a sparse set. Although distinguishing equivalence and reducibility to sparse sets, for many-one or 1-truth-table reductions, would imply that P ≠ NP, the authors show that for k-truth-table reductions, k ≥ 2, equivalence and reducibility to sp…

Exponential time and subexponential time sets
Publication Year: 1991, Page(s): 230–237
Cited by: Papers (2)
The authors prove that the symmetric difference of a ≤^P_{k-parity}-hard set for E and a subexponential-time computable set is still ≤^P_{k-parity}-hard for E. This remains true for a ≤^P_m-hard set for E, since a 1-parity reduction is a many-one reduction. In addition, it is shown that this is not the case with respect to some other types of reductions. The authors …

Adaptive logspace and depth-bounded reducibilities
Publication Year: 1991, Page(s): 240–254
Cited by: Papers (2)
The author discusses a number of results regarding the study of the computational power of depth-bounded reducibilities, their use to classify the complexity of computational problems, and their characterizations in terms of other computational models. In particular, problems arising in the design of concurrent systems are studied, and two kinds of logarithmic-space reductions are defined. The fir…