IEEE Transactions on Information Theory: new table of contents
http://ieeexplore.ieee.org
TOC alert for publication #18, 25 April 2019

Table of contents

IEEE Transactions on Information Theory publication information

On Spectral Design Methods for Quasi-Cyclic Codes

Locality and Availability of Array Codes Constructed From Subspaces
…$q$-Steiner systems, and subspace transversal designs. We present several constructions of such codes which are $q$-analogs of some known block codes, such as the Hamming and simplex codes. We examine the locality and availability of the constructed codes. In particular, we distinguish between two types of locality and availability: node versus symbol. The resulting codes have distinct symbol/node locality/availability, allowing a more efficient repair process for a single symbol stored in a storage node of a distributed storage system, compared with the repair process for the whole node.

Repairing Multiple Failures for Scalar MDS Codes
…repair bandwidth. In this paper, motivated by Reed–Solomon codes, we study the problem of repairing multiple failed nodes in a scalar MDS code. We extend the framework of Guruswami and Wootters (2017) to construct repair schemes for multiple failures in general scalar MDS codes in the centralized repair model. We then specialize our framework to Reed–Solomon codes, and also extend and improve upon recent results of Dau et al. (2017).

The Repair Problem for Reed–Solomon Codes: Optimal Repair of Single and Multiple Erasures With Almost Optimal Node Size
…$h \geqslant 1$ failed nodes for an $(n, k=n-r)$ maximum distance separable (MDS) code using $d$ helper nodes is at least $dhl/(d+h-k)$, where $l$ is the size of the node. Guruswami and Wootters (2016) initiated the study of efficient repair of RS codes, showing that they can be repaired using a smaller bandwidth than under the trivial approach.
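As a side note, the cut-set bound quoted above is simple to evaluate numerically. The sketch below (our illustration with made-up parameter values, not code from the paper) just tabulates $dhl/(d+h-k)$:

```python
from fractions import Fraction

def cut_set_bound(n, k, d, h, l=1):
    """Cut-set lower bound d*h*l/(d+h-k) on the bandwidth for repairing
    h failed nodes of an (n, k) MDS code from d helper nodes, where each
    node stores l symbols."""
    assert k <= d <= n - h, "helpers must satisfy k <= d <= n - h"
    return Fraction(d * h * l, d + h - k)

# Illustrative (14, 10) MDS code: repairing h = 2 nodes from d = 12 helpers
# needs at least 12*2/(12+2-10) = 6 symbols per unit of node size.
print(cut_set_bound(n=14, k=10, d=12, h=2))  # -> 6
```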
At the same time, their work as well as follow-up papers stopped short of constructing RS codes (or any scalar MDS codes) that meet the cut-set bound with equality. In this paper, we construct families of RS codes that achieve the cut-set bound for the repair of one or several nodes. In the single-node case, we present RS codes of length $n$ over the field $\mathbb{F}_{q^l}$, $l = \exp((1+o(1)) n \log n)$, that meet the cut-set bound. We also prove an almost matching lower bound on $l$, showing that super-exponential scaling is both necessary and sufficient for scalar MDS codes to achieve the cut-set bound using linear repair schemes. For the case of multiple nodes, we construct a family of RS codes that achieve the cut-set bound universally for the repair of any $h = 1, 2, \dots, r$ failed nodes from any subset of $d$ helper nodes, $k \leqslant d \leqslant n-h$. For a fixed number of parities $r$, the node size of the constructed codes is close to the smallest possible node size for codes with such properties.

Two or Few-Weight Trace Codes over $\mathbb{F}_q + u\mathbb{F}_q$
…Let $p$ be a prime number and $q = p^s$ for a positive integer $s$. For any positive divisor $e$ of $q-1$, we construct infinite families of codes $\mathcal{C}$ of size $q^{2m}$ with few Lee weights. These codes are defined as trace codes over the ring $R = \mathbb{F}_q + u\mathbb{F}_q$, $u^2 = 0$. Using Gaussian sums, their Lee weight distributions are provided. In particular, when $\gcd(e, m) = 1$, under the Gray map, the images of all codes in $\mathcal{C}$ are two-weight codes over the finite field $\mathbb{F}_q$ which meet the Griesmer bound. Moreover, when $\gcd(e, m) = 2, 3$, or 4, all codes in $\mathcal{C}$ have at most five weights.

An Innovations Approach to Viterbi Decoding of Convolutional Codes

The Optimal Sub-Packetization of Linear Capacity-Achieving PIR Schemes With Colluding Servers
…$M$ records are replicated in $N$ servers (each storing all $M$ records); a user wants to privately retrieve one record by accessing the servers such that the identity of the retrieved record is kept secret from any up to $T$ servers. A scheme designed for this purpose is called a $T$-private information retrieval ($T$-PIR) scheme. In practice, capacity-achieving and small sub-packetization are both desired for PIR schemes, because the former implies the highest download rate and the latter means simple realization. Meanwhile, sub-packetization is the key technique for achieving capacity.
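For the trace-codes entry above, whose two-weight images meet the Griesmer bound with equality, the bound itself is elementary to check: a $q$-ary linear $[n, k, d]$ code must satisfy $n \geq \sum_{i=0}^{k-1} \lceil d/q^i \rceil$. A quick sketch of this standard fact (not code from the paper):

```python
from math import ceil

def griesmer_length(k, d, q):
    """Minimum length n permitted by the Griesmer bound for a q-ary
    linear [n, k, d] code: n >= sum_{i=0}^{k-1} ceil(d / q^i)."""
    return sum(ceil(d / q**i) for i in range(k))

# The binary [7, 4, 3] Hamming code and the [7, 3, 4] simplex code
# both meet the bound with equality:
print(griesmer_length(4, 3, 2))  # -> 7
print(griesmer_length(3, 4, 2))  # -> 7
```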
In this paper, we characterize the optimal sub-packetization for linear capacity-achieving $T$-PIR schemes. First, a lower bound on the sub-packetization $L$ for linear capacity-achieving $T$-PIR schemes is proved, namely $L \geq d n^{M-1}$, where $d = \gcd(N, T)$ and $n = N/d$. Then, for general values of $M$ and $N > T \geq 1$, a linear capacity-achieving $T$-PIR scheme with sub-packetization $d n^{M-1}$ is designed. Compared with the first capacity-achieving $T$-PIR scheme given by Sun and Jafar in 2016, our scheme reduces the sub-packetization from $N^M$ to the optimal value and further reduces the field size by a factor of $N d^{M-2}$.

Bandwidth Adaptive & Error Resilient MBR Exact Repair Regenerating Codes
…et al.: 1) both data reconstruction and repair are resilient to the presence of a certain number of erroneous nodes in the network, and 2) the number of helper nodes in every repair is not fixed but is a flexible parameter that can be selected at run-time. We study the fundamental limits of the required total repair bandwidth and provide an upper bound on the storage capacity of these codes under these assumptions. We then focus on the minimum repair bandwidth (MBR) case and derive the exact storage capacity by presenting explicit coding schemes with exact repair, which achieve the upper bound on the storage capacity in the considered setup. To this end, we first provide a natural extension of the well-known product matrix (PM) MBR codes, modified to provide flexibility in the number of helpers in each repair while simultaneously being robust to erroneous nodes in the network. This is achieved by proving non-singularity for a family of matrices over large enough finite fields. We then provide another extension of the PM codes, based on a novel repair scheme that enables flexibility in the number of helpers and robustness against erroneous nodes without any extra cost in field size compared with the original PM codes.

Nearly Optimal Sparse Group Testing
…$n$ items so as to identify, with a minimal number of tests, a “small” subset of $d$ defective items.
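Returning to the $T$-PIR entry above: the optimal sub-packetization $L = d\,n^{M-1}$, with $d = \gcd(N, T)$ and $n = N/d$, is easy to compare against the $N^M$ of the Sun–Jafar scheme. A minimal sketch with illustrative parameter values:

```python
from math import gcd

def optimal_subpacketization(M, N, T):
    """Optimal sub-packetization d * n^(M-1) for a linear
    capacity-achieving T-PIR scheme, where d = gcd(N, T), n = N/d."""
    d = gcd(N, T)
    n = N // d
    return d * n ** (M - 1)

# M = 3 records, N = 4 servers, T = 2 colluding servers:
# optimal L = 2 * 2^2 = 8, versus N^M = 64 for the original scheme.
M, N, T = 3, 4, 2
print(optimal_subpacketization(M, N, T), N ** M)  # -> 8 64
```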
In “classical” non-adaptive group testing, it is known that when $d$ is substantially smaller than $n$, $\Theta(d \log(n))$ tests are both information-theoretically necessary and sufficient to guarantee recovery with high probability. Group testing schemes in the literature that meet this bound require most items to be tested $\Omega(\log(n))$ times, and most tests to incorporate $\Omega(n/d)$ items. Motivated by physical considerations, we study group testing models in which the testing procedure is constrained to be “sparse.” Specifically, we consider (separately) scenarios in which 1) items are finitely divisible and hence may participate in at most $\gamma \in o(\log(n))$ tests, or 2) tests are size-constrained to pool no more than $\rho \in o(n/d)$ items per test. For both scenarios, we provide information-theoretic lower bounds on the number of tests required to guarantee high-probability recovery. In particular, one of our main results shows that $\gamma$-finite divisibility of items forces any non-adaptive group testing algorithm with probability of recovery error at most $\epsilon$ to perform at least $\gamma d (n/d)^{(1-5\epsilon)/\gamma}$ tests. Analogously, for $\rho$-size-constrained tests, we show an information-theoretic lower bound of $\Omega(n/\rho)$ tests for high-probability recovery; hence, in both settings the number of tests required grows dramatically (relative to the classical setting) as a function of $n$. In both scenarios, we provide both randomized constructions and explicit constructions of designs with computationally efficient reconstruction algorithms that require a number of tests that is optimal up to constant or small polynomial factors in some regimes of $n$, $d$, $\gamma$, and $\rho$.
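The finite-divisibility lower bound just stated, $\gamma d (n/d)^{(1-5\epsilon)/\gamma}$, can be tabulated to see how quickly the number of tests grows as the per-item test budget $\gamma$ shrinks. A sketch with illustrative numbers (not an implementation from the paper):

```python
def divisibility_lower_bound(n, d, gamma, eps):
    """Information-theoretic lower bound gamma * d * (n/d)^((1-5*eps)/gamma)
    on the number of tests when each item may join at most gamma tests."""
    return gamma * d * (n / d) ** ((1 - 5 * eps) / gamma)

# With gamma = 1 and eps = 0 the bound degenerates to n individual tests,
# whereas classical unconstrained testing needs only Theta(d * log n).
print(divisibility_lower_bound(n=10_000, d=10, gamma=1, eps=0.0))  # -> 10000.0
```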
The randomized design/reconstruction algorithm in the $\rho$-sized test scenario is universal: it is independent of the value of $d$, as long as $\rho \in o(n/d)$. We also investigate the effect of unreliability/noise in test outcomes…

On Capacities of the Two-User Union Channel With Complete Feedback
…et al. when the size is at least 6. We complete this line of research when the size of the input alphabet is 3, 4, or 5. The proof hinges on a technical lemma concerning the maximal joint entropy of two independent random variables in terms of their probability of equality. For the zero-error capacity region, using superposition coding, we provide a practical near-optimal communication scheme which improves on all previous explicit constructions.

Construction of Polar Codes With Sublinear Complexity
…$N$ for a given transmission channel $W$. Previous approaches require one to compute the reliability of the $N$ synthetic channels and then use only those that are sufficiently reliable. However, we know from two independent works by Schürch and by Bardet et al. that the synthetic channels are partially ordered with respect to degradation. Hence, it is natural to ask whether the partial order can be exploited to reduce the computational burden of the construction problem. We show that, if we take advantage of the partial order, we can construct a polar code by computing the reliability of roughly a fraction $1/\log^{3/2} N$ of the synthetic channels. In particular, we prove that $N/\log^{3/2} N$ is a lower bound on the number of synthetic channels to be considered, and that this bound is tight up to a multiplicative factor $\log \log N$.
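To get a feel for the $1/\log^{3/2} N$ fraction above: at practical block lengths, only a few percent of the synthetic channels need their reliability computed. A rough tabulation (our sketch; base-2 logarithms assumed, and constants are ignored since the asymptotics do not fix them):

```python
from math import log2

def fraction_to_evaluate(N):
    """Rough fraction 1 / log2(N)^(3/2) of the N synthetic channels whose
    reliability must be computed (constants ignored, base-2 log assumed)."""
    return 1.0 / log2(N) ** 1.5

for n in (2**10, 2**15, 2**20):
    print(n, round(fraction_to_evaluate(n), 4))
```

The fraction shrinks slowly with $N$, consistent with the sublinear-complexity claim.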
This set of roughly $N/\log^{3/2} N$ synthetic channels is universal, in the sense that it allows one to construct polar codes for any $W$, and it can be identified by solving a maximum matching problem on a bipartite graph. Our proof technique consists of reducing the construction problem to the problem of computing the maximum cardinality of an antichain for a suitable partially ordered set. As such, the method is general, and it can be used to further improve the complexity of the construction problem in case a refined partial order on the synthetic channels of polar codes is discovered.

Physical-Layer Schemes for Wireless Coded Caching

Error Exponents for Dimension-Matched Vector Multiple Access Channels With Additive Noise

Multiplexing Zero-Error and Rare-Error Communications Over a Noisy Channel

Second-Order Asymptotics for Communication Under Strong Asynchronism
…$n$, the second-order term in the maximum rate expansion is of order $\Theta(1/\rho)$ for any sampling rate $\rho = O(1/\sqrt{n})$ (and $\rho = \omega(1/n)$, for otherwise reliable communication is impossible). Instead, if $\rho = \omega(1/\sqrt{n})$, then the second-order term is the same as under full sampling and is given by a standard $\Theta(\sqrt{n})$ term. However, if the delay constraint is only slightly relaxed to $n(1+o(1))$, then the above order transition (between $\rho = O(1/\sqrt{n})$ and $\rho = \omega(1/\sqrt{n})$) vanishes and the second-order term remains the same as under full sampling for any $\rho = \omega(1/n)$.

A Characterization of Guesswork on Swiftly Tilting Curves

Quantum Sphere-Packing Bounds With Polynomial Prefactors
…$o(\log n / n)$, indicating that our sphere-packing bound is almost exact in the high-rate regime. Finally, for a special class of symmetric classical-quantum channels, we can completely characterize the optimal error probability without the constant-composition code assumption. The main technical contributions are two converse Hoeffding bounds for quantum hypothesis testing and the saddle-point properties of error exponent functions.

Quantum Query Complexity of Entropy Estimation
…$\alpha$-Rényi entropies (Shannon entropy being the 1-Rényi entropy).
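For reference, the classical quantities being estimated here are, for a distribution $p$, $H_\alpha(p) = \frac{1}{1-\alpha}\log \sum_i p_i^\alpha$, with the Shannon entropy as the $\alpha \to 1$ limit, the Hartley entropy at $\alpha = 0$, and the collision entropy at $\alpha = 2$. A small classical (not quantum) sketch of these definitions:

```python
from math import log2

def renyi_entropy(p, alpha):
    """alpha-Renyi entropy (in bits) of a probability vector p.
    alpha = 1 is the Shannon-entropy limit, alpha = 0 the Hartley
    entropy, alpha = 2 the collision entropy."""
    support = [x for x in p if x > 0]
    if alpha == 1:
        return -sum(x * log2(x) for x in support)
    return log2(sum(x ** alpha for x in support)) / (1 - alpha)

p = [0.5, 0.25, 0.25]
print(renyi_entropy(p, 0))  # Hartley: log2(3)
print(renyi_entropy(p, 1))  # Shannon: 1.5 bits
print(renyi_entropy(p, 2))  # collision: -log2(0.375)
```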
In particular, we demonstrate a quadratic quantum speedup for Shannon entropy estimation and a generic quantum speedup for $\alpha$-Rényi entropy estimation for all $\alpha \geq 0$, including tight bounds for the Shannon entropy, the Hartley entropy ($\alpha = 0$), and the collision entropy ($\alpha = 2$). We also provide quantum upper bounds for estimating the min-entropy ($\alpha = +\infty$) as well as the Kullback–Leibler divergence. We complement our results with quantum lower bounds on $\alpha$-Rényi entropy estimation for all $\alpha \geq 0$. Our approach is inspired by the pioneering work of Bravyi, Harrow, and Hassidim (BHH), but with many new technical ingredients: 1) we improve the error dependence of the BHH framework by a fine-tuned error analysis together with Montanaro's approach to estimating the expected output of quantum subroutines for $\alpha = 0, 1$; 2) we develop a procedure, similar to cooling schedules in simulated annealing, for general $\alpha \geq 0$; and 3) in the cases of integer $\alpha \geq 2$ and $\alpha = +\infty$, we reduce the entropy estimation problem to the $\alpha$-distinctness and $\lceil \log n \rceil$-distinctness problems, respectively.

Message Transmission Over Classical Quantum Channels With a Jammer With Side Information: Message Transmission Capacity and Resources

MDS Codes With Hulls of Arbitrary Dimensions and Their Quantum Error Correction

Network Estimation From Point Process Data
…saturation in a point process model, which both ensures stability and models nonlinear thresholding effects; 2) impose general low-dimensional structural assumptions, including sparsity, group sparsity, and low-rankness, that allow bounds to be developed in the high-dimensional setting; and 3) incorporate long-range memory effects through moving-average and higher-order autoregressive components. Using our general framework, we provide a number of novel theoretical guarantees for high-dimensional self-exciting point processes that reflect the role played by the underlying network structure and long-term memory. We also provide simulations and real-data examples to support our methodology and main results.

Stable Recovery of Structured Signals From Corrupted Sub-Gaussian Measurements

Data-Dependent Generalization Bounds for Multi-Class Classification
…data-dependent generalization error bounds that exhibit a mild dependency on the number of classes, making them suitable for multi-class learning with a large number of label classes. The bounds generally hold for empirical multi-class risk minimization algorithms using an arbitrary norm as the regularizer.
Key to our analysis are new structural results for multi-class Gaussian complexities and empirical $\ell_\infty$-norm covering numbers, which exploit the Lipschitz continuity of the loss function with respect to the $\ell_2$- and $\ell_\infty$-norms, respectively. We establish data-dependent error bounds in terms of the complexities of a linear function class defined on a finite set induced by the training examples, for which we show tight lower and upper bounds. We apply the results to several prominent multi-class learning machines and show a tighter dependency on the number of classes than the state of the art. For instance, for the multi-class support vector machine of Crammer and Singer (2002), we obtain a data-dependent bound with a logarithmic dependency, a significant improvement over the previous square-root dependency. Experimental results are reported to verify the effectiveness of our theoretical findings.

Optimal Stopping for Interval Estimation in Bernoulli Trials
…$\theta$. Assuming that an independent and identically distributed sequence of Bernoulli($\theta$) trials is observed sequentially, we are interested in designing: 1) a stopping time $T$ that will decide the best time to stop sampling the process, and 2) an optimum estimator $\hat{\theta}_T$ that will provide the optimum center of the interval estimate of $\theta$. We follow a semi-Bayesian approach, where we assume that there exists a prior distribution for $\theta$, and our goal is to minimize the average number of samples while guaranteeing a minimal specified coverage probability level. The solution is obtained by applying standard optimal stopping theory and computing the optimum pair $(T, \hat{\theta}_T)$ numerically. Regarding the optimum stopping time $T$, we demonstrate that it enjoys certain very interesting characteristics not commonly encountered in solutions of other classical optimal stopping problems.
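The semi-Bayesian objective in the optimal-stopping entry above (minimize the average sample size subject to a coverage constraint) is easy to probe empirically. The toy Monte Carlo below is our illustration, not the paper's numerical method: under a uniform (Beta(1,1)) prior, it estimates the coverage of a fixed-width interval centered at the posterior mean after a fixed, non-adaptive sample size:

```python
import random

def coverage_fixed_n(n_trials, half_width, reps=20_000, seed=0):
    """Monte Carlo coverage of the interval [theta_hat - h, theta_hat + h],
    where theta_hat = (s + 1)/(n + 2) is the posterior mean under a
    uniform prior after a fixed sample of n Bernoulli(theta) trials."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        theta = rng.random()                  # theta ~ Beta(1, 1)
        s = sum(rng.random() < theta for _ in range(n_trials))
        theta_hat = (s + 1) / (n_trials + 2)  # posterior mean
        hits += abs(theta_hat - theta) <= half_width
    return hits / reps

print(coverage_fixed_n(n_trials=100, half_width=0.1))
```

An optimal stopping rule would adapt the sample size per trajectory; this fixed-sample baseline is the comparison point the abstract mentions.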
In particular, we prove that, for a particular prior (beta density), the optimum stopping time is always bounded from above and below: it needs to first accumulate a sufficient amount of information before deciding whether or not to stop, and it will always terminate before some finite deterministic time. We also conjecture that these properties are present with any prior. Finally, we compare our method with the optimum fixed-sample-size procedure as well as with existing alternative sequential schemes.

Asymptotically Optimal Prediction for Time-Varying Data Generating Processes
…$\varepsilon$-entropy, we propose a concept called $\varepsilon$-predictability that quantifies the size of a model class (which can be parametric or nonparametric) and the maximal number of abrupt structural changes that guarantee the achievability of asymptotically optimal prediction. Moreover, for parametric distribution families, we extend the aforementioned kinetic prediction with discretized function spaces to its counterpart with continuous function spaces, and propose a sequential Monte Carlo-based implementation. We also extend our methodology to predicting smoothly varying data generating distributions. Under reasonable assumptions, we prove that the average predictive performance converges almost surely to the oracle bound, which corresponds to the case where the data generating distributions are known in advance. The results also shed some light on the so-called “prediction-inference dilemma.” Various examples and numerical results are provided to demonstrate the wide applicability of our methodology.

High-Dimensional Classification by Sparse Logistic Regression

Robust Estimators and Test Statistics for One-Shot Device Testing Under the Exponential Distribution

Blind Gain and Phase Calibration via Sparse Spectral Methods

Exact Reconstruction of Euclidean Distance Geometry Problem Using Low-Rank Matrix Completion
…rank-$r$ Gram matrix with respect to a suitable basis. The well-known restricted isometry property cannot be satisfied in this scenario. Instead, a dual basis approach is introduced to theoretically analyze the reconstruction problem.
If the Gram matrix satisfies certain coherence conditions with parameter $\nu$, the main result shows that the underlying configuration of $n$ points can be recovered with very high probability from $O(n r \nu \log^2(n))$ uniformly random samples. Computationally, simple and fast algorithms are designed to solve the Euclidean distance geometry problem. Numerical tests on different 3-D data and protein molecules validate the effectiveness and efficiency of the proposed algorithms.

Refined Asymptotics for Rate-Distortion Using Gaussian Codebooks for Arbitrary Sources
…$n$-dimensional sphere; to be more precise, we term this a spherical codebook. We also consider i.i.d. Gaussian codebooks, in which each random codeword is drawn independently from a product Gaussian distribution. We derive the second-order, moderate, and large deviation asymptotics when i.i.d. Gaussian codebooks are employed. In contrast to the recent work on the channel coding counterpart by Scarlett, Tan, and Durisi (2017), the dispersions for spherical and i.i.d. Gaussian codebooks are identical. The ensemble excess-distortion exponents for both spherical and i.i.d. Gaussian codebooks are established for all rates. Furthermore, we show that the i.i.d. Gaussian codebook has a strictly larger excess-distortion exponent than its spherical counterpart for any rate greater than the ensemble rate-distortion function derived by Lapidoth.

RePair and All Irreducible Grammars are Upper Bounded by High-Order Empirical Entropy
…$S$ over an alphabet of size $\sigma$, we prove that if the underlying grammar is irreducible, then the length of the binary code output by this grammar-based compression method is bounded by $|S| H_k(S) + o(|S| \log \sigma)$ for any $k \in o(\log_\sigma |S|)$, where $H_k(S)$ is the $k$-order empirical entropy of $S$.
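The quantity $H_k(S)$ in the bound above is the empirical entropy of each symbol given its preceding $k$-symbol context. A minimal sketch of one common variant of the definition (normalization conventions differ slightly across papers):

```python
from collections import Counter, defaultdict
from math import log2

def empirical_entropy_k(s, k):
    """k-order empirical entropy H_k(S) in bits per symbol: the average,
    over positions i >= k, of -log2 Pr[s[i] | preceding k symbols],
    with the probabilities taken as empirical frequencies."""
    context_counts = defaultdict(Counter)
    for i in range(k, len(s)):
        context_counts[s[i - k:i]][s[i]] += 1
    total = len(s) - k
    bits = 0.0
    for counts in context_counts.values():
        n = sum(counts.values())
        bits += sum(c * log2(n / c) for c in counts.values())
    return bits / total

print(empirical_entropy_k("abababab", 1))  # context determines next symbol -> 0.0
print(empirical_entropy_k("aabb", 0))      # balanced binary string, no context -> 1.0
```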
This is the first bound encompassing the whole class of irreducible grammars in terms of the high-order empirical entropy with coefficient 1.

Optimal Accuracy-Privacy Trade-Off for Secure Computations
…$g$-entropy so as to quantify this information leakage. In order to control and restrain such information flows, we introduce the notion of function substitution, which replaces the computation of a function that reveals sensitive information with that of an approximate function. We exhibit theoretical bounds for the privacy gains that this approach provides, and experimentally show that it enhances the confidentiality of the inputs while controlling the distortion of the computed output values. Finally, we investigate the inherent compromise between accuracy of computation and privacy of inputs, and we demonstrate how to realize such optimal trade-offs.

On PIR and Symmetric PIR From Colluding Databases With Adversaries and Eavesdroppers
…private information retrieval (PIR) and symmetric private information retrieval (SPIR) from replicated databases with colluding servers, in the presence of Byzantine adversaries and eavesdroppers. Specifically, there are $K$ messages replicatively stored at $N$ databases. A user wants to retrieve one message by communicating with the databases without revealing the identity of the message retrieved. For $T$-colluding databases, any $T$ out of the $N$ databases may share their interactions with the user to guess the identity of the requested message. We consider the situation where the communication system is vulnerable to attackers; namely, there is an adversary in the system that can tap in on or even try to corrupt the communication. The capacity is defined as the maximum number of information bits of the desired message retrieved per downloaded bit. For SPIR, it is further required that the user learns nothing about the other $K-1$ messages in the database.
Three types of adversaries are considered: a Byzantine adversary who can overwrite the transmission of any $B$ servers to the user; a passive eavesdropper who can tap in on the incoming and outgoing transmissions of any $E$ servers; and a combination of both, an adversary who can tap in on a set of any $B$ nodes. The problems of SPIR with colluding servers and the three types of adversaries are named T-BSPIR, T-ESPIR, and T-BESPIR, respectively. We derive the capacities of the three secure SPIR problems. The results resemble those of secure network coding problems with adversaries and eavesdroppers. The capacity of $T$-colluding PIR with Byzantine adversaries is characterized in [1]. In this work, we consider $T$-colluding PIR with an eavesdropper (named T-EPIR). We derive the T-EPIR capacity when $E \geq T$; for the case where $E \leq T$, we find an outer bound (converse) and an inner bound (achievability) on the optimal achievable rate.

The Capacity of Private Information Retrieval With Eavesdroppers
…$K$ messages and $N$ servers where each server stores all $K$ messages, a user who wants to retrieve one of the $K$ messages without revealing the desired message index to any set of $T$ colluding servers, and an eavesdropper who can listen to the queries and answers of any $E$ servers but must be prevented from learning any information about the messages. The information-theoretic capacity of ETPIR is defined as the maximum number of desired message symbols retrieved privately per information symbol downloaded. We show that the capacity of ETPIR is $C = (1 - E/N)(1 + (T-E)/(N-E) + \cdots + ((T-E)/(N-E))^{K-1})^{-1}$ when $E < T$, and $C = 1 - E/N$ when $E \geq T$.
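The two-regime ETPIR capacity expression just stated is straightforward to transcribe and inspect (a direct transcription of the formula, with illustrative parameter values):

```python
def etpir_capacity(K, N, T, E):
    """Capacity of T-colluding PIR with an E-server eavesdropper (ETPIR):
    C = 1 - E/N                                        if E >= T,
    C = (1 - E/N) / sum_{i=0}^{K-1} ((T-E)/(N-E))^i    if E < T."""
    if E >= T:
        return 1 - E / N
    ratio = (T - E) / (N - E)
    return (1 - E / N) / sum(ratio ** i for i in range(K))

# E >= T: the capacity is independent of K (the SPIR-like regime).
print(etpir_capacity(K=5, N=4, T=1, E=2))            # -> 0.5
# E < T: a geometric-series form decreasing in K (the PIR-like regime).
print(round(etpir_capacity(K=2, N=4, T=2, E=1), 10)) # -> 0.5625
```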
To achieve the capacity, the servers need to share a common random variable (independent of the messages), and its size must be at least $(E/N) \cdot (1/C)$ symbols per message symbol. Otherwise, with a smaller amount of shared common randomness, ETPIR is not feasible and the capacity reduces to zero. An interesting observation is that the ETPIR capacity expression takes different forms in two regimes. When $E < T$, the capacity equals the inverse of a sum of a geometric series with $K$ terms and decreases with $K$; this form is typical for capacity expressions of PIR. When $E \geq T$, the capacity does not depend on $K$, a typical form for capacity expressions of SPIR (symmetric PIR, which further requires data privacy, i.e., the user learns no information about the other, undesired messages); the capacity does not depend on $T$ either. In addition, the ETPIR capacity result includes multiple previous PIR and SPIR capacity results as special cases.

Fundamental Limits of Cache-Aided Private Information Retrieval With Unknown and Uncoded Prefetching
…$N$ non-colluding and replicated databases when the user is equipped with a cache that holds an uncoded fraction $r$ of each of the $K$ messages stored in the databases. We assume that the databases are unaware of the cache content. We investigate $D^*(r)$, the optimal download cost normalized by the message size, as a function of $K$, $N$, and $r$. For fixed $K$ and $N$, we develop an inner bound (converse bound) for the $D^*(r)$ curve. The inner bound is a piecewise linear function in $r$ that consists of $K$ line segments. For the achievability, we develop explicit schemes that exploit the cached bits as side information to achieve $K-1$ non-degenerate corner points. These corner points differ in the number of cached bits that are used to generate one side-information equation. We obtain an outer bound (achievability) for any caching ratio by memory sharing between these corner points. Thus, the outer bound is also a piecewise linear function in $r$ that consists of $K$ line segments.
The inner and the outer bounds match in general for the cases of very low and very high caching ratios. As a corollary, we fully characterize the optimal download cost-caching ratio tradeoff for $K = 3$. For general $K$, $N$, and $r$, we show that the largest gap between the achievability and the converse bounds is 1/6. Our results show that the download cost can be reduced beyond memory sharing if the databases are unaware of the cached content.

Verifiably Multiplicative Secret Sharing
…A $d$-multiplicative secret sharing ($d$-MSS) scheme allows the players to multiply $d$ shared secrets without recovering the secrets, by converting their shares locally into an additive sharing of the product. It has been proved that $d$-MSS among $n$ players is possible if and only if no $d$ unauthorized sets of players cover the whole set of players (type $Q_d$). Although this result implies some limitations on SS in the context of MPC, the $d$-multiplicative property is still useful for simplifying complex tasks of MPC by computing the product of $d$ field elements directly and non-interactively without any setup. This paper aims to improve the usefulness of $d$-MSS by enhancing the security against malicious adversaries. First, we introduce the notion of verifiably multiplicative SS (verifiably MSS for short), which is mainly formalized for detecting malicious behaviors. Informally, an SS scheme is verifiably $d$-multiplicative if the scheme is $d$-multiplicative and further enables the players to locally generate a share of a proof that the summed value is correct (i.e., the product of the $d$ shared secrets). Secondly, we prove that there is no error-free verifiably MSS scheme whose decoder of the proof is additive, and that, by accepting an error probability that can be chosen arbitrarily, there exists a verifiably $d$-MSS scheme realizing a given access structure if and only if the access structure is of type $Q_d$. In the proposed construction, each share of a proof consists of only two field elements. This result means that we can efficiently achieve the optimal resiliency of the standard $d$-MSS even against malicious adversaries. We note that, by allowing a general class of decoders that includes a linear one, there is an error-free verifiably $d$-MSS scheme if the access structure is of type $Q_{d+1}$. Finally, we generalize the $d$-multiplicative property to a $d$-or-less version in which the number $d'$ of multiplied secrets, with $d' \leq d$, is not known in advance. We show that a $d$-or-less MSS scheme can be constructed from any $d$-MSS scheme of the same…

Multilevel LDPC Lattices With Efficient Encoding and Decoding and a Generalization of Construction D'
…Construction D' whose complexity is linear in the total number of coded bits. Moreover, a generalization of Construction D' is proposed, which relaxes some of the nesting constraints on the parity-check matrices of the component codes, leading to a simpler and improved design.
Based on this construction, low-complexity multilevel LDPC lattices are designed whose performance under multistage decoding is comparable to that of polar lattices and close to that of low-density lattice codes on the power-unconstrained AWGN channel.

On the Generalized Degrees of Freedom of the MIMO Interference Channel With Delayed CSIT
…$M$ antennas at each transmitter and $N$ antennas at each receiver; in the non-trivial case when $M > N$ (the case $M \leq N$ does not need any CSIT), new lower and upper bounds on the symmetric GDoF are obtained that are parameterized by $\alpha$, which links the interference-to-noise ratio (INR) and the signal-to-noise ratio (SNR) at each receiver via $\mathrm{INR} = \mathrm{SNR}^\alpha$. A new upper bound for the symmetric GDoF is obtained by maximizing a bound on the weighted sum rate, which in turn is obtained from a combination of genie-aided side information and an extremal inequality. The maximum weighted sum rate in the high-SNR regime is shown to occur when the transmit covariance matrix at each transmitter is full rank. An achievability scheme is developed that is based on block-Markov encoding and backward decoding, and which incorporates channel statistics through interference quantization and digital multicasting. This symmetric GDoF lower bound is maximized separately for different ranges of $\alpha$, by optimizing the transmit power levels in the achievability scheme separately in the very weak ($0 \leq \alpha \leq 1/2$), weak ($1/2 < \alpha \leq 1$), and strong ($\alpha > 1$) interference regimes. The lower and upper bounds coincide when $\alpha \geq (r+1)/(r+2)$, where $r = \min(2, M/N)$, thus characterizing the symmetric GDoF completely for strong interference and a range of values of weak interference. It is also shown that treating interference as noise is strictly sub-optimal from a GDoF perspective even when the interference is very weak.

The Error Probability of Sparse Superposition Codes With Approximate Message Passing Decoding
…$n/(\log n)^{2T}$, where $T$, the number of AMP iterations required for successful decoding, is bounded in terms of the gap from capacity.

Three Families of Monomial Functions With Three-Valued Walsh Spectrum
…Let $\mathbb{F}_p$ be a finite field with $p$ elements, where $p$ is a prime. Let $N \geq 2$ be an integer and let $d$ be the least positive integer satisfying $p^d \equiv -1 \pmod{N}$. Let $q = p^{2sd}$ for some integer $s$. In some special cases, we obtain the explicit evaluation of the exponential sums $S(a,b) = \sum_{x \in \mathbb{F}_q^*} \zeta_p^{\mathrm{Tr}_{q/p}(a x^{(q-1)/N} + b x)}$. As applications, the Walsh spectra of the monomial functions $\mathrm{Tr}_{q/p}(x^{(q-1)/N})$ are investigated in three cases. Our results show that the Walsh spectra of these monomial functions have at most four, five, or seven distinct values. Furthermore, three families of monomial functions with three-valued Walsh spectra are presented; see Corollaries 12, 21, 31, and 32. Consequently, certain previously known results by Li and Yue and by Moisio are extended.

IEEE Transactions on Information Theory information for authors