2010 IEEE 25th Annual Conference on Computational Complexity (CCC)

9-12 June 2010

  • [Front cover]

    Page(s): C1
  • [Title page i]

    Page(s): i
  • [Title page iii]

    Page(s): iii
  • [Copyright notice]

    Page(s): iv
  • Table of contents

    Page(s): v - vii
  • Preface

    Page(s): viii
  • Committees

    Page(s): ix
  • List of Reviewers

    Page(s): x
  • Awards

    Page(s): xi
  • Parallel Repetition of Two Prover Games (Invited Survey)

    Page(s): 3 - 6

    The parallel repetition theorem states that for any two-prover game with value smaller than 1, parallel repetition reduces the value of the game at an exponential rate. We give a short introduction to the problem of parallel repetition of two-prover games and some of its applications in theoretical computer science, mathematics and physics. We will concentrate mainly on recent results.
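
    For reference, the quantitative statement behind "exponential rate" (Raz's theorem, with the exponent as improved by Holenstein; this precise form is not spelled out in the abstract) is roughly:

        val(G^⊗n) ≤ (1 - ε³)^Ω(n/s)   for a game G of value 1 - ε and answer length s.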

  • No Strong Parallel Repetition with Entangled and Non-signaling Provers

    Page(s): 7 - 15

    We consider one-round games between a classical verifier and two provers. One of the main questions in this area is the parallel repetition question: if the game is played ℓ times in parallel, does the maximum winning probability decay exponentially in ℓ? In the classical setting, this question was answered in the affirmative by Raz. More recently the question arose whether the decay is of the form (1 - Θ(ε))^ℓ, where 1 - ε is the value of the game and ℓ is the number of repetitions. This question is known as the strong parallel repetition question and was motivated by its connections to the unique games conjecture. It was resolved by Raz, who showed that strong parallel repetition does not hold, even in the very special case of games known as XOR games. This leaves open the question of whether strong parallel repetition holds when the provers share entanglement. Evidence for this is provided by the behavior of XOR games, which have strong (in fact perfect) parallel repetition, and by the recently proved strong parallel repetition of linear unique games. A similar question was open for games with so-called non-signaling provers. Here the best known parallel repetition theorem is due to Holenstein, and is of the form (1 - Θ(ε²))^ℓ. We show that strong parallel repetition holds neither with entangled provers nor with non-signaling provers. In particular we obtain that Holenstein's bound is tight. Along the way we also provide a tight characterization of the asymptotic behavior of the entangled value under parallel repetition of unique games in terms of a semidefinite program.
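
    As a reading aid, the two decay rates discussed in the abstract, side by side:

        strong parallel repetition (ruled out here):                   (1 - Θ(ε))^ℓ
        non-signaling provers (Holenstein's bound, shown tight here):  (1 - Θ(ε²))^ℓ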

  • Derandomized Parallel Repetition of Structured PCPs

    Page(s): 16 - 27

    A PCP is a proof system for NP in which the proof can be checked by a probabilistic verifier. The verifier is only allowed to read a very small portion of the proof, and in return is allowed to err with some bounded probability. The probability that the verifier accepts a false proof is called the soundness error, and is an important parameter of a PCP system that one seeks to minimize. Constructing PCPs with sub-constant soundness error and, at the same time, a minimal number of queries into the proof (namely two) is especially important due to applications to inapproximability. In this work we construct such PCP verifiers, i.e., PCPs that make only two queries and have sub-constant soundness error. Our construction can be viewed as a combinatorial alternative to the "manifold vs. point" construction, which is the only construction in the literature for this parameter range. The "manifold vs. point" PCP is based on a low-degree test, while our construction is based on a direct product test. Our construction of a PCP is based on extending the derandomized direct product test of Impagliazzo, Kabanets and Wigderson (STOC '09) to a derandomized parallel repetition theorem. More accurately, our PCP construction is obtained in two steps: we first prove a derandomized parallel repetition theorem for specially structured PCPs, and then show that any PCP can be transformed into one that has the required structure, by embedding it on a de Bruijn graph.

  • Derandomized Parallel Repetition Theorems for Free Games

    Page(s): 28 - 37

    Raz's parallel repetition theorem, together with improvements of Holenstein, shows that for any two-prover one-round game with value at most 1 - ε (for ε ≤ 1/2), the value of the game repeated n times in parallel on independent inputs is at most (1 - ε)^Ω(ε²n/ℓ), where ℓ is the answer length of the game. For free games (games in which the inputs to the two players are uniform and independent) the exponent ε² can be replaced with ε by a result of Barak, Rao, Raz, Rosen and Shaltiel. Consequently, n = O(tℓ/ε) repetitions suffice to reduce the value of a free game from 1 - ε to (1 - ε)^t, and denoting the input length of the game by m, it follows that nm = O(tℓm/ε) random bits can be used to prepare n independent inputs for the parallel repetition game. In this paper we prove a derandomized version of the parallel repetition theorem for free games and show that O(t(m + ℓ)) random bits can be used to generate correlated inputs such that the value of the parallel repetition game on these inputs has the same behavior. Thus, in terms of randomness complexity, correlated parallel repetition can reduce the value of free games at the "correct rate" when ℓ = O(m). Our technique uses strong extractors to "derandomize" a key lemma from Raz's proof, and can also be used to derandomize a parallel repetition theorem of Parnafes, Raz and Wigderson for communication games in the special case that the game is free.
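
    Restating the abstract's randomness accounting side by side (t is the target exponent, m the input length, ℓ the answer length):

        independent repetition:          nm = O(tℓm/ε) random bits
        this paper (correlated inputs):  O(t(m + ℓ)) random bits

    so for ℓ = O(m) the correlated version uses O(tm) bits, which is the "correct rate".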

  • Derandomizing Arthur-Merlin Games and Approximate Counting Implies Exponential-Size Lower Bounds

    Page(s): 38 - 49

    We show that if Arthur-Merlin protocols can be derandomized, then there is a Boolean function computable in deterministic exponential time with access to an NP oracle that cannot be computed by Boolean circuits of exponential size. More formally, if prAM ⊆ P^NP then there is a Boolean function in E^NP that requires circuits of size 2^Ω(n). Here prAM is the class of promise problems that have Arthur-Merlin protocols, P^NP is the class of functions that can be computed in deterministic polynomial time with an NP oracle, and E^NP is its exponential analogue. The lower bound in the conclusion of our theorem suffices to construct very strong pseudorandom generators. We also show that the same conclusion holds if approximately counting the number of accepting paths of a nondeterministic Turing machine up to multiplicative factors can be done in nondeterministic polynomial time. In other words, giving nondeterministic fully polynomial-time approximation schemes for #P-complete problems requires proving exponential-size circuit lower bounds. A few works have already shown that if we can find efficient deterministic solutions to some specific tasks (or classes) that are known to be solvable efficiently by randomized algorithms (or proofs), then we obtain lower bounds against certain circuit models. These lower bounds were only with respect to polynomial-size circuits even if full derandomization is assumed, and thus they only implied fairly weak pseudorandom generators (if at all). A key ingredient in our proof is a connection between computational learning theory and exponential-size lower bounds. We show that the existence of deterministic learning algorithms with certain properties implies exponential-size lower bounds, where the complexity of the hard function is related to the complexity of the learning algorithm.
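
    The main implication, in the abstract's notation:

        prAM ⊆ P^NP  ⟹  there is f ∈ E^NP with circuit complexity 2^Ω(n),

    and the same conclusion follows with the hypothesis replaced by nondeterministic polynomial-time multiplicative approximate counting for #P.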

  • Simple Affine Extractors Using Dimension Expansion

    Page(s): 50 - 57

    Let F_q be the field of q elements. An (n, k)-affine extractor is a mapping D : F_q^n → {0,1} such that for any k-dimensional affine subspace X ⊆ F_q^n, D(x) is an almost unbiased bit when x is chosen uniformly from X. Loosely speaking, the problem of explicitly constructing affine extractors gets harder as q gets smaller and easier as k gets larger. This is reflected in previous results: when q is 'large enough', specifically q = Ω(n²), Gabizon and Raz construct affine extractors for any k ≥ 1. In the 'hardest case', i.e. when q = 2, Bourgain constructs affine extractors for k ≥ δn for any constant (and even slightly subconstant) δ > 0. Our main result is the following: fix any k ≥ 2 and let d = 5n/k. Then whenever q > 2d² and p = char(F_q) > d, we give an explicit (n, k)-affine extractor. For example, when k = δn for constant δ > 0, we get an extractor for fields of constant size Ω((1/δ)²). We also get weaker results for fields of arbitrary characteristic (but can still work with a constant field size when k = δn for constant δ > 0). Thus our result may be viewed as a 'field-size/dimension' tradeoff for affine extractors. For a wide range of k this gives a new result, but even for large k, where we do not improve (or even match) Bourgain's result, we believe that our construction and proof have the advantage of being very simple: assume n is prime and d is odd, and fix any non-trivial linear map T : F_q^n → F_q. Define QR : F_q → {0,1} by QR(x) = 1 if and only if x is a quadratic residue. Then the function D : F_q^n → {0,1} defined by D(x) = QR(T(x^d)) is an (n, k)-affine extractor. Our proof uses a result of Hou, Leung and Xiang giving a lower bound on the dimension of products of subspaces.
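
    The construction, displayed (here x^d is presumably computed in F_q^n viewed as the extension field F_{q^n}):

        D(x) = QR(T(x^d)),   T : F_q^n → F_q any non-trivial linear map,  QR(y) = 1 ⟺ y is a quadratic residue in F_q.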

  • Derandomizing from Random Strings

    Page(s): 58 - 63

    In this paper we show that BPP is truth-table reducible to the set of Kolmogorov-random strings R_K. It was previously known that PSPACE, and hence BPP, is Turing-reducible to R_K. The earlier proof relied on the adaptivity of the Turing reduction to find a Kolmogorov-random string of polynomial length using the set R_K as oracle. Our new non-adaptive result relies on a new fundamental fact about the set R_K, namely that each initial segment of the characteristic sequence of R_K has high Kolmogorov complexity. As a partial converse to our claim, we show that strings of very high Kolmogorov complexity, when used as advice, are not much more useful than randomly chosen strings.
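
    For orientation, the standard definition of the set of Kolmogorov-random strings (the paper may use a slightly different complexity measure or threshold):

        R_K = { x : K(x) ≥ |x| },

    and the results can be written as BPP ≤_tt R_K (this paper) versus the earlier PSPACE ≤_T R_K.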

  • On the Power of Randomized Reductions and the Checkability of SAT

    Page(s): 64 - 75

    We prove new results regarding the complexity of various complexity classes under randomized oracle reductions. We first prove that BPP^PSZK ⊆ AM ∩ coAM, where PSZK is the class of promise problems having statistical zero-knowledge proofs. This strengthens the previously known facts that PSZK is closed under NC¹ truth-table reductions (Sahai and Vadhan, J. ACM '03) and that P^PSZK ⊆ AM ∩ coAM (Vadhan, personal communication). Our proof relies on showing that a certain class of real-valued functions that we call ℝ-TUAM can be approximated using an AM protocol. We then investigate the power of randomized oracle reductions in relation to the notion of instance checking (Blum and Kannan, J. ACM '95). We observe that a theorem of Beigel implies that if any problem in TFNP, such as Nash equilibrium, is NP-hard under randomized oracle reductions, then SAT is checkable. We also observe that Beigel's theorem can be extended to an average-case setting by relating checking to the notion of program testing (Blum et al., JCSS '93). From this, we derive that if one-way functions can be based on NP-hardness via a randomized oracle reduction, then SAT is checkable. By showing that NP has a non-uniform tester, we also show that a worst-case to average-case randomized oracle reduction for any relation (or language) R ∈ NP implies that R has a non-uniform instance checker. These results hold even for adaptive randomized oracle reductions.

  • A New Sampling Protocol and Applications to Basing Cryptographic Primitives on the Hardness of NP

    Page(s): 76 - 87

    We investigate the question of what languages can be decided efficiently with the help of a recursive collision-finding oracle. Such an oracle can be used to break collision-resistant hash functions or, more generally, statistically hiding commitments. The oracle we consider, Sam_d, where d is the recursion depth, is based on the identically-named oracle defined in the work of Haitner et al. (FOCS '07). Our main result is a constant-round public-coin protocol "AM-Sam" that allows an efficient verifier to emulate a Sam_d oracle for any constant depth d = O(1) with the help of a BPP^NP prover. AM-Sam allows us to conclude that if L is decidable by a k-adaptive randomized oracle algorithm with access to a Sam_O(1) oracle, then L ∈ AM[k] ∩ coAM[k]. The above yields the following corollary: if there exists an O(1)-adaptive reduction that bases constant-round statistically hiding commitment on NP-hardness, then NP ⊆ coAM and the polynomial hierarchy collapses. The same result holds for any primitive that can be broken by Sam_O(1), including collision-resistant hash functions and O(1)-round oblivious transfer where security holds statistically for one of the parties. We also obtain non-trivial (though weaker) consequences for k-adaptive reductions for any k = poly(n). Prior to our work, most results in this research direction applied either only to non-adaptive reductions (Bogdanov and Trevisan, SIAM J. of Comp. '06 and Akavia et al., FOCS '06) or to one-way permutations (Brassard, FOCS '79). The main technical tool we use to prove the above is a new constant-round public-coin protocol (SampleWithSize), which we believe to be of interest in its own right, that guarantees the following: given an efficient function f on n bits, let D be the output distribution D = f(U_n); then SampleWithSize allows an efficient verifier Arthur to use an all-powerful prover Merlin's help to sample a random y ← D along with a good multiplicative approximation of the probability p_y = Pr_{y' ← D}[y' = y]. The crucial feature of SampleWithSize is that it extends even to distributions of the form D = f(U_S), where U_S is the uniform distribution on an efficiently decidable subset S ⊆ {0,1}^n (such D are called efficiently samplable with post-selection), as long as the verifier is also given a good approximation of the value |S|.
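
    The SampleWithSize guarantee, written out (the slackness parameter δ and the estimate p̃ are notation introduced here, not taken from the abstract): Arthur ends up with a pair (y, p̃) such that

        y ← D = f(U_n)   and   (1 - δ)·p_y ≤ p̃ ≤ (1 + δ)·p_y,   where p_y = Pr_{y' ← D}[y' = y].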

  • The Program-Enumeration Bottleneck in Average-Case Complexity Theory

    Page(s): 88 - 95

    Three fundamental results of Levin involve algorithms or reductions whose running time is exponential in the length of certain programs. We study the question of whether such dependency can be made polynomial. 1) Levin's "optimal search algorithm" performs at most a constant factor more slowly than any other fixed algorithm. The constant, however, is exponential in the length of the competing algorithm. We note that the running time of a universal search cannot be made "fully polynomial" (that is, the relation between slowdown and program length cannot be made polynomial) unless P = NP. 2) Levin's "universal one-way function" result has the following structure: there is a polynomial-time computable function f_Levin such that if there is a polynomial-time computable adversary A that inverts f_Levin on an inverse-polynomial fraction of inputs, then for every polynomial-time computable function g there also is a polynomial-time adversary A_g that inverts g on an inverse-polynomial fraction of inputs. Unfortunately, the running time of A_g again depends exponentially on the bit length of the program that computes g in polynomial time. We show that a fully polynomial uniform reduction from an arbitrary one-way function to a specific one-way function is not possible relative to an oracle that we construct, and so no "universal one-way function" can have a fully polynomial security analysis via relativizing techniques. 3) Levin's completeness result for distributional NP problems implies that if a specific problem in NP is easy on average under the uniform distribution, then every language L in NP is also easy on average under any polynomial-time computable distribution. The running time of the implied algorithm for L, however, depends exponentially on the bit length of the non-deterministic polynomial-time Turing machine that decides L. We show that if a completeness result for distributional NP can be proved via a "fully uniform" and "fully polynomial" time reduction, then there is a worst-case to average-case reduction for NP-complete problems. In particular, this means that a fully polynomial completeness result for distributional NP is impossible, even via randomized truth-table reductions, unless the polynomial hierarchy collapses.
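
    For item 1, the standard quantitative form of Levin's optimal search (a textbook statement, added here for context; not verbatim from the paper) is that the universal algorithm U satisfies, for every algorithm A solving the problem,

        time_U(x) ≤ c_A · time_A(x) + poly(|x|)   with   c_A = 2^O(|A|);

    "fully polynomial" would mean c_A = poly(|A|), which the paper rules out unless P = NP.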

  • On the Unique Games Conjecture (Invited Survey)

    Page(s): 99 - 121

    This article surveys recently discovered connections between the Unique Games Conjecture and computational complexity, algorithms, discrete Fourier analysis, and geometry.

  • Spectral Algorithms for Unique Games

    Page(s): 122 - 130

    We present a new algorithm for Unique Games which is based on purely spectral techniques, in contrast to previous work in the area, which relies heavily on semidefinite programming (SDP). Given a highly satisfiable instance of Unique Games, our algorithm is able to recover a good assignment. The approximation guarantee depends only on the completeness of the game, and not on the alphabet size, while the running time depends on spectral properties of the label-extended graph associated with the instance of Unique Games. In particular, we show how our techniques imply a quasi-polynomial time algorithm that decides satisfiability of a game on the Khot-Vishnoi [14] integrality gap instance, an instance on which the standard SDP relaxation of Unique Games fails. As a special case, we also show how to re-derive a polynomial time algorithm for Unique Games on expander constraint graphs (similar to [2]) and a sub-exponential time algorithm for Unique Games on the hypercube.

  • A Log-Space Algorithm for Reachability in Planar Acyclic Digraphs with Few Sources

    Page(s): 131 - 138

    Designing algorithms that use logarithmic space for graph reachability problems is fundamental to complexity theory. It is well known that for general directed graphs this problem is equivalent to the NL vs. L problem. This paper focuses on the reachability problem over planar graphs, where the complexity is unknown. Showing that the planar reachability problem is NL-complete would show that nondeterministic log-space computations can be made unambiguous. On the other hand, very little is known about classes of planar graphs that admit log-space algorithms. We present a new 'source-based' structural decomposition method for planar DAGs. Based on this decomposition, we show that reachability for planar DAGs with m sources can be decided deterministically in O(m + log n) space. This leads to a log-space algorithm for reachability in planar DAGs with O(log n) sources. Our result drastically improves the class of planar graphs for which we know how to decide reachability in deterministic log-space: specifically, the class extends from planar DAGs with at most two sources to those with O(log n) sources.
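
    Stated as a formula (a direct restatement of the bound above; S is shorthand introduced here): for a planar DAG with n vertices and m sources, reachability is decidable in deterministic space

        S(n, m) = O(m + log n),

    so m = O(log n) immediately gives S = O(log n), i.e. a log-space algorithm.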

  • On the Matching Problem for Special Graph Classes

    Page(s): 139 - 150

    An even cycle in a graph is called nice by Lovász and Plummer in [LP86] if the graph obtained by deleting all vertices of the cycle has some perfect matching. In the present paper we prove new complexity bounds for various versions of problems related to perfect matchings in graphs with a polynomially bounded number of nice cycles. We show that for graphs with a polynomially bounded number of nice cycles the perfect matching decision problem is in SPL and hard for FewL, and the perfect matching construction problem is in L^{C=L ∩ ⊕L}. Furthermore, we significantly improve the best known upper bounds, proved by Agrawal, Hoang, and Thierauf in the STACS'07 paper [AHT07], for the polynomially bounded perfect matching problem by showing that the construction and counting versions are in C=L ∩ ⊕L and in C=L, respectively. Note that SPL, ⊕L, C=L and L^{C=L} are contained in NC². Moreover, we show that the problem of computing a maximum matching for bipartite planar graphs is in L^{C=L}. This solves Open Question 4.7 stated in the STACS'08 paper by Datta, Kulkarni, and Roy [DKR08], where it is asked whether computing a maximum matching even for bipartite planar graphs can be done in NC. We also show that the problem of computing a maximum matching for graphs with a polynomially bounded number of even cycles is in L^{C=L}.

  • On the Relative Strength of Pebbling and Resolution

    Page(s): 151 - 162

    The last decade has seen a revival of interest in pebble games in the context of proof complexity. Pebbling has proven to be a useful tool for studying resolution-based proof systems when comparing the strength of different subsystems, showing bounds on proof space, and establishing size-space trade-offs. The typical approach has been to encode the pebble game played on a graph as a CNF formula and then argue that proofs of this formula must inherit (various aspects of) the pebbling properties of the underlying graph. Unfortunately, the reductions used here are not tight. To simulate resolution proofs by pebblings, the full strength of nondeterministic black-white pebbling is needed, whereas resolution is only known to be able to simulate deterministic black pebbling. To obtain strong results, one therefore needs to find specific graph families which either have essentially the same properties for black and black-white pebbling (not at all true in general) or which admit simulations of black-white pebblings in resolution. This paper contributes to both these approaches. First, we design a restricted form of black-white pebbling that can be simulated in resolution and show that there are graph families for which such restricted pebblings can be asymptotically better than black pebblings. This proves that, perhaps somewhat unexpectedly, resolution can strictly beat black-only pebbling, and in particular that the space lower bounds on pebbling formulas in [Ben-Sasson and Nordstrom 2008] are tight. Second, we present a versatile parametrized graph family with essentially the same properties for black and black-white pebbling, which gives sharp simultaneous trade-offs for black and black-white pebbling for various parameter settings. Both of our contributions have been instrumental in obtaining the time-space trade-off results for resolution-based proof systems in [Ben-Sasson and Nordstrom 2009].

  • Trade-Off Lower Bounds for Stack Machines

    Page(s): 163 - 171

    A space-bounded Stack Machine is a regular Turing Machine with a read-only input tape, several space-bounded read-write work tapes, and an unbounded stack. Stack Machines with a logarithmic space bound have been connected to other classical models of computation, such as polynomial-time Turing Machines (P) (Cook; 1971) and polynomial-size, polylogarithmic-depth, bounded fan-in circuits (NC), e.g., (Borodin et al.; 1989). In this paper, we give the first known lower bound for Stack Machines. This comes in the form of a trade-off lower bound between space and number of passes over the input tape. Specifically, we give an explicit permuted inner product function such that any Stack Machine computing this function requires either sublinear polynomial space or a sublinear polynomial number of passes. In the case of logarithmic-space Stack Machines, this yields an unconditional sublinear polynomial lower bound on the number of passes. To put this result in perspective, we note that Stack Machines with logarithmic space and a single pass over the input can compute Parity, Majority, as well as certain languages outside NC. The latter follows from (Allender; 1989), conditional on the widely believed complexity assumption that EXP is different from PSPACE. Our technique is a novel communication complexity reduction, thereby extending the already wide range of models of computation for which communication complexity can be used to obtain lower bounds. Informally, we show that a k-player number-in-hand communication protocol for a base function f can efficiently simulate a space- and pass-bounded Stack Machine for a related function F, which consists of several permuted instances of f bundled together by a combining function h. Trade-off lower bounds for Stack Machines then follow from known communication complexity lower bounds. The framework for this reduction was given by (Beame and Huynh-Ngoc; 2008), who used it to obtain similar trade-off lower bounds for Turing Machines with a constant number of pass-bounded external tapes. We also prove that the latter cannot efficiently simulate Stack Machines, conditional on the complexity assumption that E is not a subset of PSPACE. It is the treatment of an unbounded stack which constitutes the main technical novelty in our communication complexity reduction.
