<![CDATA[ IEEE Transactions on Information Theory - new TOC ]]>
http://ieeexplore.ieee.org
TOC Alert for Publication #18, 2018 April 19<![CDATA[Table of contents]]>645C1C4148<![CDATA[IEEE Transactions on Information Theory publication information]]>645C2C265<![CDATA[Unlabeled Sensing With Random Linear Measurements]]>$\mathbf{y} = \mathbf{A}\mathbf{x}$ when the order of the observations in the vector $\mathbf{y}$ is unknown. Focusing on the setting in which $\mathbf{A}$ is a random matrix with i.i.d. entries, we show that if the sensing matrix $\mathbf{A}$ admits an oversampling ratio of 2 or higher, then, with probability 1, it is possible to recover $\mathbf{x}$ exactly without knowledge of the order of the observations in $\mathbf{y}$. Furthermore, if $\mathbf{x}$ is of dimension $K$, then any $2K$ entries of $\mathbf{y}$ are sufficient to recover $\mathbf{x}$. This result implies the existence of deterministic unlabeled sensing matrices with an oversampling factor of 2 that admit perfect reconstruction. The result is universal in that, conditioned on the realization of matrix $\mathbf{A}$, recovery is guaranteed for all possible choices of $\mathbf{x}$. While the proof is constructive, it uses a combinatorial algorithm which is not practical, leaving the question of complexity open. We also analyze a noisy version of the problem and show that the solution guarantees local stability. In particular, for every $\mathbf{x}$, the recovery error tends to zero as the signal-to-noise ratio tends to infinity. The question of universal stability remains unclear. In addition, we obtain a converse of the result in the noiseless case: if the number of observations in $\mathbf{y}$ is less than $2K$, then with probability 1, universal recovery fails, i.e., with probability 1, there exist distinct choices of $\mathbf{x}$ which lead to the same unordered list of observations in $\mathbf{y}$. We also present extensions of the result in the noiseless case to special cases with non-i.i.d. entries in $\mathbf{A}$, and to a different setting in which the labels of a portion of the observations $\mathbf{y}$ are known. In terms of applications, the unlabeled sensing problem is related to data association problems encountered in different domains, including robotics, where it appears in a method called “simultaneous localization and mapping”, multi-target tracking applications, and sampling signals in the presence of jitter.]]>64532373253663<![CDATA[Information Recovery in Shuffled Graphs via Graph Matching]]>64532543273956<![CDATA[Minimax Lower Bounds for Noisy Matrix Completion Under Sparse Factor Models]]>a priori unknown matrices, one of which is sparse, and the observations are noisy. Our main contributions come in the form of minimax lower bounds on the expected per-element squared error for this problem under several common noise models. Specifically, we analyze scenarios where the corruptions are characterized by additive Gaussian noise or additive heavier-tailed (Laplace) noise, Poisson-distributed observations, and highly quantized (e.g., one-bit) observations, as instances of our general result.
Our results establish that the error bounds derived in (Soni et al., 2016) for complexity-regularized maximum likelihood estimators achieve, up to multiplicative constants and logarithmic factors, the minimax error rates in each of these noise scenarios, provided that the nominal number of observations is large enough and the sparse factor has (on average) at least one non-zero per column.]]>64532743285304<![CDATA[Linear Regression With Shuffled Data: Statistical and Computational Limits of Permutation Recovery]]>$y = \Pi^* A x^* + w$, where $x^* \in \mathbb{R}^d$ is an unknown vector, $\Pi^*$ is an unknown $n \times n$ permutation matrix, and $w \in \mathbb{R}^n$ is additive Gaussian noise. We analyze the problem of permutation recovery in a random design setting in which the entries of the matrix $A$ are drawn independently from a standard Gaussian distribution, and establish sharp conditions on the signal-to-noise ratio, sample size $n$, and dimension $d$ under which $\Pi^*$ is exactly and approximately recoverable. On the computational front, we show that the maximum likelihood estimate of $\Pi^*$ is NP-hard to compute for general $d$, while also providing a polynomial-time algorithm when $d = 1$.]]>64532863300497<![CDATA[Phase Retrieval With Random Gaussian Sensing Vectors by Alternating Projections]]>$n$-dimensional vector from its phaseless scalar products with $m$ sensing vectors, independently sampled from complex normal distributions. We show that, with a suitable initialization procedure, the classical algorithm of alternating projections (Gerchberg–Saxton) succeeds with high probability when $m \geq Cn$, for some $C > 0$.
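The alternating-projections iteration named in the phase-retrieval abstract above can be sketched in a real-valued toy form: given magnitudes $b_i = |\langle a_i, x\rangle|$, alternately re-impose the measured magnitudes and project back onto the range of $A$. This is only an illustrative analogue (real instead of complex data, a warm start standing in for the paper's initialization procedure, and a hand-picked tiny instance), not the paper's algorithm:

```python
# Toy real-valued analogue of phase retrieval by alternating projections
# (Gerchberg-Saxton style): recover x from b_i = |<a_i, x>|.
# The fixed instance and warm start below are illustrative choices.

def matvec(A, x):
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

def lstsq_2d(A, z):
    # Solve the 2x2 normal equations A^T A x = A^T z (n = 2 only).
    g11 = sum(r[0] * r[0] for r in A)
    g12 = sum(r[0] * r[1] for r in A)
    g22 = sum(r[1] * r[1] for r in A)
    c1 = sum(r[0] * zi for r, zi in zip(A, z))
    c2 = sum(r[1] * zi for r, zi in zip(A, z))
    det = g11 * g22 - g12 * g12
    return [(g22 * c1 - g12 * c2) / det, (g11 * c2 - g12 * c1) / det]

def alternating_projections(A, b, x0, iters=20):
    x = x0
    for _ in range(iters):
        # Project onto the measurement set: keep current signs, measured magnitudes.
        signs = [1.0 if v >= 0 else -1.0 for v in matvec(A, x)]
        # Project onto the range of A via least squares.
        x = lstsq_2d(A, [s * bi for s, bi in zip(signs, b)])
    return x

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0], [2.0, 1.0], [1.0, 2.0]]
x_true = [1.0, 0.5]
b = [abs(v) for v in matvec(A, x_true)]
x_hat = alternating_projections(A, b, [0.9, 0.6])  # warm start near the truth
```

Note that $x$ is only identifiable up to a global sign from magnitude measurements, which is why any check must compare against both $x$ and $-x$.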
We conjecture that this result is still true when no special initialization procedure is used, and present numerical experiments that support this conjecture.]]>64533013312569<![CDATA[Breaking the Bandwidth Barrier: Geometrical Adaptive Entropy Estimation]]>$k$-NN distances with a finite $k$, independent of the sample size. Such a local and data-dependent choice ameliorates boundary bias and improves performance in practice, but the bandwidth vanishes at a fast rate, leading to a non-vanishing bias. We show that the asymptotic bias of the proposed estimator is universal; it is independent of the underlying distribution. Hence, it can be precomputed and subtracted from the estimate. As a byproduct, we obtain a unified way of obtaining both the kernel and NN estimators. The corresponding theoretical contribution relating the asymptotic geometry of nearest neighbors to order statistics is of independent mathematical interest.]]>645331333301159<![CDATA[Bayesian Model Averaging With Exponentiated Least Squares Loss]]>$n$ be the sample size; then the worst-case regret of the former decays at a rate of $O(1/n)$, whereas the worst-case regret of the latter decays at a rate of $O(1/\sqrt{n})$. The recently proposed $Q$-aggregation algorithm solves the model averaging problem with the optimal regret of $O(1/n)$ both in expectation and in deviation; however, it suffers from two limitations: 1) for a continuous dictionary, the proposed greedy algorithm for solving $Q$-aggregation is not applicable, and 2) the formulation of $Q$-aggregation appears ad hoc, without clear intuition. This paper examines a different approach to model averaging by considering a Bayes estimator for deviation-optimal model averaging using exponentiated least squares loss.
We establish a primal-dual relationship between this estimator and that of $Q$-aggregation, and propose new algorithms that satisfactorily resolve the above-mentioned limitations of $Q$-aggregation.]]>64533313345916<![CDATA[Efficient Byzantine Sequential Change Detection]]>64533463360447<![CDATA[A Sequential Non-Parametric Multivariate Two-Sample Test]]>64533613370483<![CDATA[How to Achieve the Capacity of Asymmetric Channels]]>asymmetric case. We consider, in more detail, three basic coding paradigms. The first one is Gallager’s scheme, which consists of concatenating a linear code with a non-linear mapping so that the input distribution can be appropriately shaped. We explicitly show that both polar codes and spatially coupled codes can be employed in this scenario. Furthermore, we derive a scaling law between the gap to capacity, the cardinality of the input and output alphabets, and the required size of the mapper. The second one is an integrated scheme in which the code is used both for source coding, in order to create codewords distributed according to the capacity-achieving input distribution, and for channel coding, in order to provide error protection. Such a technique has been recently introduced by Honda and Yamamoto in the context of polar codes, and we show how to apply it also to the design of sparse graph codes. The third paradigm is based on an idea of Böcherer and Mathar, and separates the two tasks of source coding and channel coding by a chaining construction that binds together several codewords. We present conditions for the source code and the channel code, and we describe how to combine any source code with any channel code that fulfills those conditions, in order to provide capacity-achieving schemes for asymmetric channels. In particular, we show that polar codes, spatially coupled codes, and homophonic codes are suitable as basic building blocks of the proposed coding strategy. Rather than focusing on the exact details of the schemes, the purpose of this tutorial is to present different coding techniques that can then be implemented with many variants. There is no absolute winner and, in order to understand the most suitable technique for a specific application scenario, we provide a detailed comparison that takes into account several performance metrics.]]>64533713393677<![CDATA[A New Class of Rank-Metric Codes and Their List Decoding Beyond the Unique Decoding Radius]]>$\mathcal{C}$ gives decoding radius beyond $(1-R)/2$ with positive rate $R$ when the ratio of the number of rows over the number of columns is extremely small, 2) the Johnson bound for rank-metric codes does not exist, as opposed to classical codes, and 3) the Gabidulin codes of square matrices cannot be list decoded beyond half of the minimum distance. Although the list decodability of random rank-metric codes and the limits to list decodability have been determined completely, little work on efficient list decoding of rank-metric codes has been done. The only known efficient list decoding of rank-metric codes $\mathcal{C}$ gives decoding radius up to the Singleton bound $1 - R - \varepsilon$ with positive rate $R$ when $\rho(\mathcal{C})$ is extremely small, i.e., $O(\varepsilon^2)$, where $\rho(\mathcal{C})$ denotes the ratio of the number of rows over the number of columns of $\mathcal{C}$. It is commonly believed that it is difficult to list decode rank-metric codes $\mathcal{C}$ with the ratio $\rho(\mathcal{C})$ close to 1. The main purpose of this paper is to explicitly construct a class of rank-metric codes $\mathcal{C}$ with the ratio $\rho(\mathcal{C})$ up to 1/2 and to efficiently list decode these codes beyond the unique decoding radius $(1-R)/2$. Furthermore, the encoding and list decoding algorithms run in polynomial time $\mathrm{poly}(n, \exp(1/\varepsilon))$. The list size can be reduced to $O(1/\varepsilon)$ by randomizing the algorithm. Our key idea is to employ bivariate polynomials $f(x,y)$, where $f$ is linearized in the variable $y$ and the variable $x$ is used to “fold” the code. In other words, the rows are used to correct rank errors and the columns are used to “fold” the code to enlarge the decoding radius. Apart from the above algebraic technique, we have to prune down the list. The algebraic idea enables us to pin down the messages into a structured subspace whose dimension is linear in the number $n$ of columns. This “periodic” structure allows us to pre-encode the messages to prune down the list.]]>64533943402403<![CDATA[Efficient Low-Redundancy Codes for Correcting Multiple Deletions]]>$k$-bit deletions with efficient encoding/decoding, for a fixed $k$. The single-deletion case is well understood, with the Varshamov–Tenengolts–Levenshtein code from 1965 giving an asymptotically optimal construction with $\approx 2^n/n$ codewords of length $n$, i.e., at most $\log n$ bits of redundancy. However, even for the case of two deletions, there was no known explicit construction with redundancy less than $n^{\Omega(1)}$. For any fixed $k$, we construct a binary code with $c_k \log n$ redundancy that can be decoded from $k$ deletions in $O_k(n \log^4 n)$ time. The coefficient $c_k$ can be taken to be $O(k^2 \log k)$, which is only quadratically worse than the optimal, non-constructive bound of $O(k)$. We also indicate how to modify this code to allow for a combination of up to $k$ insertions and deletions. We also note that among linear codes capable of correcting $k$ deletions, the $(k+1)$-fold repetition code is essentially the best possible.]]>64534033410219<![CDATA[Characterization of Elementary Trapping Sets in Irregular LDPC Codes and the Corresponding Efficient Exhaustive Search Algorithms]]>$dot$), $path$, and $lollipop$; thus the terminology $dpl$ characterization. A similar $dpl$ characterization was proposed in an earlier work by the authors for the leafless ETSs of variable-regular LDPC codes. The present paper generalizes the prior work to codes with a variety of variable node degrees and to ETSs that are not leafless. The proposed $dpl$ characterization corresponds to an efficient search algorithm that, for a given irregular LDPC code, can exhaustively find all instances of $(a, b)$ ETSs with size $a$ and number of unsatisfied check nodes $b$ within any range of interest $a \leq a_{\max}$ and $b \leq b_{\max}$. Although branch-and-bound exhaustive search algorithms for finding ETSs of irregular LDPC codes exist, to the best of our knowledge, the proposed search algorithm is the first of its kind, in that it is devised based on a characterization of ETSs that makes the search process efficient. For a constant degree distribution and range of search, the worst-case complexity of the proposed $dpl$ algorithm increases linearly with the block length $n$. The average complexity, excluding the search for the input simple cycles, is constant in $n$. Extensive simulation results are presented to show the versatility of the search algorithm and to demonstrate that, compared to the literature, significant improvement in search speed can be obtained.]]>645341134302443<![CDATA[Finite-Length Analysis of Spatially-Coupled Regular LDPC Ensembles on Burst-Erasure Channels]]>$P_{\mathrm{\scriptscriptstyle B}}$) at finite block length and bounds on the coupling parameter for being asymptotically able to recover the burst.
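The 1965 Varshamov–Tenengolts construction cited in the multiple-deletions abstract above admits a short single-deletion decoder: the checksum deficiency modulo $n+1$ determines both the value of the deleted bit and where to reinsert it. A minimal sketch (bits as 0/1 lists, 1-indexed checksum; the helper names are ours, not from the paper):

```python
# Single-deletion decoding for the Varshamov-Tenengolts code
# VT_0(n) = { x in {0,1}^n : sum_i i*x_i = 0 (mod n+1) }.
# If a 0 was deleted, the deficiency d equals the number of ones to its
# right; if a 1 was deleted, d = (position) + (ones to its right).

def vt_decode(y, n):
    """Reinsert the single deleted bit into y (length n-1)."""
    m = n + 1
    s = sum(i * b for i, b in enumerate(y, start=1)) % m
    d = (-s) % m                 # checksum deficiency
    w = sum(y)                   # weight of the received word
    if d <= w:
        # A 0 was deleted with exactly d ones to its right.
        count = 0
        for i in range(len(y), 0, -1):
            if count == d:
                return y[:i] + [0] + y[i:]
            if y[i - 1] == 1:
                count += 1
        return [0] + y
    # A 1 was deleted with exactly d - w - 1 zeros to its left.
    z = d - w - 1
    count = 0
    for i in range(len(y)):
        if count == z:
            return y[:i] + [1] + y[i:]
        if y[i] == 0:
            count += 1
    return y + [1]

codeword = [1, 0, 0, 0, 1]       # checksum 1 + 5 = 6 = 0 mod 6, so in VT_0(5)
```

Deleting any single bit of `codeword` and running `vt_decode` on the shortened word returns the original codeword; this is the sense in which VT codes use only $\log n$ bits of redundancy for one deletion.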
We further show that expurgating the ensemble can improve the block erasure probability by several orders of magnitude. We then extend our methodology to more general channel models. In a first extension, we consider bursts that can start at a random location in the codeword and span multiple spatial positions. Besides the finite-length analysis, we determine, by means of density evolution, the maximum correctable burst length. In a second extension, we consider the case where, in addition to a single burst, random bit erasures may occur. Finally, we consider a block-erasure channel model which erases each spatial position independently with some probability $p$, potentially introducing multiple bursts simultaneously. All results are verified using Monte-Carlo simulations.]]>645343134491583<![CDATA[Bounds on Traceability Schemes]]>$t$ to $t^2$, i.e., a $t$-traceability scheme is a $t^2$-cover-free family. Based on this interesting discovery, we derive new upper bounds for traceability schemes. Using combinatorial structures, we construct several infinite families of optimal traceability schemes, which attain our new upper bounds. We also provide a constructive lower bound for traceability schemes, the size of which has the same order of magnitude as our general upper bound. Meanwhile, we consider parent-identifying set systems, an anti-collusion key-distribution scheme requiring weaker conditions than traceability schemes but stronger conditions than cover-free families. A new upper bound is also given for parent-identifying set systems.]]>64534503460275<![CDATA[Sum-Networks From Incidence Structures: Construction and Capacity Analysis]]>64534613480864<![CDATA[Combinatorial Alphabet-Dependent Bounds for Locally Recoverable Codes]]>64534813492537<![CDATA[Locally Repairable Regenerating Codes: Node Unavailability and the Insufficiency of Stationary Local Repair]]>et al.
and by Hollmann studied the concept of “locally repairable regenerating codes (LRRCs)”, which successfully combines the functional repair and partial information exchange of regenerating codes (RCs) with the much-desired local repairability feature of locally repairable codes (LRCs). One important issue that needs to be addressed by any local repair scheme (including both LRCs and LRRCs) is that designated helper nodes may sometimes be temporarily unavailable, as a result of various causes that include multiple failures, degraded reads, or power-saving strategies, to name a few. Under the setting of LRRCs with temporary node unavailability, this paper studies the impact of different helper selection methods. It proves that with node unavailability, all existing methods of helper selection, including those used in RCs and LRCs, can be insufficient in terms of achieving the optimal repair bandwidth. For some scenarios, it is necessary to combine LRRCs with a new class of helper selection methods, termed dynamic helper selection, to achieve optimal repair bandwidth. This paper also compares the performance of different classes of helper selection methods and, for various scenarios, answers the following fundamental question: is one method of helper selection intrinsically better than another?]]>645349335121126<![CDATA[On Sequential Locally Repairable Codes]]>$(n, k, r, t)$-sequential LRCs (SLRCs) as $[n,k]$ linear codes, where any $t'\ (\leq t)$ erasures can be sequentially recovered, each from $r\ (2 \leq r < k)$ other code symbols. Here, sequential recovering means that the erased symbols are recovered one by one, and an already recovered symbol can be used to recover the remaining erased symbols.
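Sequential recovering as defined above can be illustrated on a toy binary code (the symbols, parities, and repair sets below are our own illustration, not a construction from the paper): with parities $p_1 = x_1 \oplus x_2$ and $p_2 = x_2 \oplus x_3$, the two erasures $\{x_2, x_3\}$ cannot both be repaired from intact symbols alone, but repairing $x_2$ first makes $x_3$ repairable:

```python
# Toy illustration of sequential local repair: each erased symbol is
# repaired from r = 2 other symbols, and a freshly repaired symbol may
# serve as a helper for the next one. The code and repair sets below
# are illustrative, not a construction from the paper.

# Codeword: (x1, x2, x3, p1, p2) with p1 = x1 ^ x2, p2 = x2 ^ x3.
REPAIR_SETS = {
    "x1": [("x2", "p1")],
    "x2": [("x1", "p1"), ("x3", "p2")],
    "x3": [("x2", "p2")],
    "p1": [("x1", "x2")],
    "p2": [("x2", "x3")],
}

def sequential_repair(symbols):
    """Repair erased (None) symbols one at a time; each repaired symbol
    immediately becomes available as a helper for later repairs."""
    symbols = dict(symbols)
    progress = True
    while progress and None in symbols.values():
        progress = False
        for name, value in symbols.items():
            if value is not None:
                continue
            for helpers in REPAIR_SETS[name]:
                if all(symbols[h] is not None for h in helpers):
                    a, b = helpers
                    symbols[name] = symbols[a] ^ symbols[b]  # XOR repair
                    progress = True
                    break
    return symbols

full = {"x1": 1, "x2": 0, "x3": 1, "p1": 1, "p2": 1}
erased = dict(full, x2=None, x3=None)   # two simultaneous erasures
recovered = sequential_repair(erased)
```

Here $x_3$'s only repair set $(x_2, p_2)$ is unavailable until $x_2$ has been repaired from $(x_1, p_1)$, which is exactly the parallel-versus-sequential distinction the abstract draws.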
This important recovering method, in contrast with the extensively studied parallel recovering, is currently far from being thoroughly understood; more specifically, to date there are no codes constructed for arbitrary $t \geq 3$ erasures and no bounds to evaluate the performance of such codes. We first derive a tight upper bound on the code rate of the $(n, k, r, t)$-SLRC for $t = 3$ and $r \geq 2$. We then propose two constructions of binary $(n, k, r, t)$-SLRCs for general $r, t \geq 2$ (existing constructions only deal with $t \leq 7$ erasures). The first construction generalizes the method of direct product construction. The second construction is based on resolvable configurations and yields SLRCs for any $r \geq 2$ and odd $t \geq 3$. For both constructions, the rates are optimal for $t \in \{2,3\}$ and are higher than most of the existing LRC families for arbitrary $t \geq 4$.]]>64535133527882<![CDATA[On Equivalence of Binary Asymmetric Channels Regarding the Maximum Likelihood Decoding]]>$(p,q)$ and $(p^\prime, q^\prime)$, are equivalent from the point of view of maximum likelihood decoding when restricted to $n$-block binary codes. This equivalence of channels induces a partition (depending on $n$) of the space of parameters $(p,q)$ into regions associated with the equivalence classes. Explicit expressions describing these regions, their number, and their areas are derived. Some perspectives on applications of our results to decoding problems are also presented.]]>64535283537836<![CDATA[The Zero-Error Feedback Capacity of State-Dependent Channels]]>64535383578956<![CDATA[Equivalence of Additive-Combinatorial Linear Inequalities for Shannon Entropy and Differential Entropy]]>64535793589282<![CDATA[From Rate Distortion Theory to Metric Mean Dimension: Variational Principle]]>64535903609438<![CDATA[A Conditional Information Inequality and Its Combinatorial Applications]]>$H(A|B,X) + H(A|B,Y) \leqslant H(A|B)$ for jointly distributed random variables $A, B, X, Y$, which does not hold in the general case, holds under some natural condition on the support of the probability distribution of $A, B, X, Y$. This result generalizes a version of the conditional Ingleton inequality: if for some distribution $I(X{:}Y \,|\, A) = H(A|X,Y) = 0$, then $I(A{:}B) \leqslant I(A{:}B|X) + I(A{:}B|Y) + I(X{:}Y)$. We present two applications of our result.
The first one is the following easy-to-formulate theorem on edge colorings of bipartite graphs: assume that the edges of a bipartite graph are colored in $K$ colors so that any two edges sharing a vertex have different colors, and for each pair (left vertex $x$, right vertex $y$) there is at most one color $a$ such that both $x$ and $y$ are incident to edges with color $a$; assume further that the degree of each left vertex is at least $L$ and the degree of each right vertex is at least $R$. Then $K \geqslant LR$. The second application is a new method to prove lower bounds for biclique covers of bipartite graphs.]]>64536103615424<![CDATA[Wyner’s Common Information Under Rényi Divergence Measures]]>$\alpha = 1 + s \in [0,2]$. We show that the minimum rate needed to ensure that the Rényi divergences between the distribution induced by a code and the target distribution vanish remains the same as in Wyner’s setting, except when the order $\alpha = 1 + s = 0$. This implies that Wyner’s common information is rather robust to the choice of distance measure employed. As a byproduct of the proofs used to establish the above results, the exponential strong converse for the common information problem under the total variation distance measure is established.]]>64536163632414<![CDATA[Coordination in Distributed Networks via Coded Actions With Application to Power Control]]>645363336541591<![CDATA[The Unbounded Benefit of Encoder Cooperation for the <inline-formula> <tex-math notation="LaTeX">$k$ </tex-math></inline-formula>-User MAC]]>$k$ encoders, a multiple access channel (MAC), a decoder, and a node, referred to as a “cooperation facilitator” (CF), that is connected to each encoder via a pair of rate-limited links, with one link going from the encoder to the CF and the other link going back. Let the “cooperation rate” be the total outgoing rate of the CF. This paper demonstrates the existence of a class of MACs where the ratio of the sum-capacity gain to the cooperation rate tends to infinity as the cooperation rate tends to zero.
For any $k \geq 2$, examples of channels in this class include the $k$-user binary adder MAC and the $k$-user Gaussian MAC.]]>64536553678572<![CDATA[Information-Theoretic Privacy for Smart Metering Systems with a Rechargeable Battery]]>64536793695571<![CDATA[Keyless Authentication and Authenticated Capacity]]>64536963714605<![CDATA[Minimax Rényi Redundancy]]>$\alpha$-mutual information via a generalized redundancy-capacity theorem. Special attention is placed on the analysis of the asymptotics of minimax Rényi divergence, which is determined up to a term vanishing in blocklength.]]>64537153733387<![CDATA[Analysis of Remaining Uncertainties and Exponents Under Various Conditional Rényi Entropies]]>a posteriori decoding.]]>64537343755958<![CDATA[Achievable Moderate Deviations Asymptotics for Streaming Compression of Correlated Sources]]>$n$, while the error probability decays subexponentially fast in $n$. Our main result focuses on the directions of approach to corner points of the Slepian–Wolf region. It states that for each correlated source and all corner points, there exists a non-empty subset of directions of approach such that the moderate deviations constant (the constant of proportionality for the subexponential decay of the error probability) is enhanced (over the non-streaming case) by at least a factor of $T$, the block delay of decoding source block pairs. We specialize our main result to the setting of streaming lossless source coding and generalize it to the setting where we have different delay requirements for each of the two source blocks. The proof of our main result involves the use of various analytical tools and amalgamates several ideas from the recent information-theoretic streaming literature. We adapt the so-called truncated memory encoding idea from Draper and Khisti (2011) and Lee, Tan, and Khisti (2016) to ensure that the effect of error accumulation is nullified in the limit of large block lengths.
We also adapt the use of the so-called minimum weighted empirical suffix entropy decoder, which was used by Draper, Chang, and Sahai (2014) to derive achievable error exponents for symbolwise streaming Slepian–Wolf coding.]]>64537563780861<![CDATA[An Achievable Rate-Distortion Region for Multiple Descriptions Source Coding Based on Coset Codes]]>$l$-descriptions problem was the combinatorial message sharing with binning (CMSB) region. The CMSB scheme utilizes unstructured quantizers and unstructured binning. In the first part of the paper, we show that this strategy can be improved upon using more general unstructured quantizers and a more general unstructured binning method. In the second part, structured coding strategies are considered. First, structured coding strategies are developed by considering specific MD examples involving three or more descriptions. We show that the application of structured quantizers results in strict RD improvements when there are more than two descriptions. Furthermore, we show that structured binning also yields improvements. These improvements are in addition to the ones derived in the first part of the paper. This suggests that structured coding is essential when coding over more than two descriptions. Using the ideas developed through these examples, we provide a new unified coding strategy by considering several structured coding layers. Finally, we characterize its performance in the form of an inner bound to the optimal RD region, using computable single-letter information quantities.
The new RD region strictly contains all of the previously known achievable regions.]]>645378138091541<![CDATA[The Distortion Rate Function of Cyclostationary Gaussian Processes]]>64538103824771<![CDATA[Capacity Scaling in MIMO Systems With General Unitarily Invariant Random Matrices]]>$R$ receive and $T$ transmit antennas with $R > T$, we find the following: by removing as many receive antennas as needed to obtain a square system (provided the channel matrices before and after the removal have full rank), the maximum resulting loss of mutual information over all signal-to-noise ratios (SNRs) depends only on $R$, $T$, and the matrix of left-singular vectors of the initial channel matrix, but not on its singular values. In particular, if the latter matrix is Haar distributed, the ergodic rate loss is given by $\sum_{t=1}^{T}\sum_{r=T+1}^{R}\frac{1}{r-t}$ nats. Under the same assumption, if $T, R \to \infty$ with the ratio $\phi \triangleq T/R$ fixed, the rate loss normalized by $R$ converges almost surely to $H(\phi)$ bits, with $H(\cdot)$ denoting the binary entropy function. We also quantify and study how the mutual information, as a function of the system dimensions, deviates from the traditionally assumed linear growth in the minimum of the system dimensions at high SNR.]]>64538253841556<![CDATA[Topological Interference Management With Decoded Message Passing]]>partially connected interference networks with no channel state information except for the network topology (i.e., connectivity graph) at the transmitters. In this paper, we consider a similar problem in uplink cellular networks, while message passing is enabled at the receivers (e.g., base stations), so that the decoded messages can be routed to other receivers via backhaul links to help further improve network performance. For this TIM problem with decoded message passing (TIM-MP), we model the interference pattern by conflict digraphs, connect orthogonal access to the acyclic set coloring on conflict digraphs, and show that one-to-one interference alignment boils down to orthogonal access because of message passing. With the aid of polyhedral combinatorics, we identify the structural properties of certain classes of network topologies where orthogonal access achieves the optimal degrees-of-freedom (DoF) region in the information-theoretic sense. The relation to conventional index coding with simultaneous decoding is also investigated by formulating a generalized index coding problem with successive decoding as a result of decoded message passing. The properties of reducibility and criticality are also studied, by which we are able to prove the linear optimality of orthogonal access in terms of symmetric DoF for networks of up to four users with all possible network topologies (218 instances).
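The ergodic rate-loss expression quoted in the MIMO capacity-scaling abstract above, $\sum_{t=1}^{T}\sum_{r=T+1}^{R}\frac{1}{r-t}$ nats, is straightforward to evaluate, and its normalization by $R$ can be checked numerically against the binary-entropy limit stated there (a quick sketch; the particular antenna counts are our choice):

```python
import math

def ergodic_rate_loss_nats(T, R):
    """Ergodic rate loss (in nats) from the abstract's double sum,
    for R receive and T transmit antennas with R > T."""
    return sum(1.0 / (r - t)
               for t in range(1, T + 1)
               for r in range(T + 1, R + 1))

def binary_entropy_bits(p):
    """Binary entropy H(p) in bits."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Small case: T = 2, R = 4 gives 1/2 + 1/3 + 1 + 1/2 = 7/3 nats.
loss_small = ergodic_rate_loss_nats(2, 4)

# Larger case with phi = T/R = 1/2: the loss normalized by R should be
# near H(1/2) bits = log(2) nats, per the asymptotic claim.
normalized = ergodic_rate_loss_nats(200, 400) / 400.0
```

The units matter here: the double sum is in nats, so the limiting value $H(\phi)$ bits corresponds to $H(\phi)\ln 2$ nats when comparing directly.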
Practical issues of the tradeoff between the overhead of message passing and the achievable symmetric DoF are also discussed, in the hope of facilitating efficient backhaul utilization.]]>645384238641497<![CDATA[The Two-Unicast Problem]]>645386538822707<![CDATA[Statistical Properties of Loss Rate Estimators in Tree Topology]]>64538833893614<![CDATA[Adversarial Source Identification Game With Corrupted Training]]>$X \sim P_X$, whose statistics are known to him through the observation of a training sequence generated by $X$. In order to undermine the correct decision under the alternative hypothesis that the test sequence has not been drawn from $X$, the attacker can modify a sequence produced by a source $Y \sim P_Y$ up to a certain distortion, and corrupt the training sequence either by adding some fake samples or by replacing some samples with fake ones. We derive the unique rationalizable equilibrium of the two versions of the game in the asymptotic regime, assuming that the defender makes his decision by relying only on the first-order statistics of the test and training sequences. By mimicking Stein’s lemma, we derive the best achievable performance for the defender when the type I error probability is required to tend to zero exponentially fast with an arbitrarily small, yet positive, error exponent. We then use this result to analyze the ultimate distinguishability of any two sources as a function of the allowed distortion and the fraction of corrupted samples injected into the training sequence.]]>64538943915909<![CDATA[Maximal Correlation Secrecy]]>$\rho$ can be achieved via a randomly generated cipher with key length $\approx 2\log(1/\rho)$, independent of the message length, and by a stream cipher with key length $2\log(1/\rho) + \log n + 2$ for a message of length $n$. We establish a converse showing that these ciphers are close to optimal.
This is in contrast with entropic security, for which there is a gap between the lower and upper bounds. Finally, we show that a small maximal correlation implies secrecy with respect to several mutual-information-based criteria, but is not necessarily implied by them. Hence, maximal correlation is a stronger and more practically relevant measure of secrecy than mutual information.]]>64539163926439<![CDATA[Efficient Encryption From Random Quasi-Cyclic Codes]]>$\mathsf{QCSD}$ and $\mathsf{RQCSD}$ problems). We also provide an analysis of the decryption failure probability of our scheme in the Hamming metric case; for the rank metric there is no decryption failure. Our schemes benefit from a very fast decryption algorithm together with small key sizes of only a few thousand bits. The cryptosystems are very efficient for low encryption rates and are very well suited to key exchange and authentication. Asymptotically, for $\lambda$ the security parameter, the public key sizes are respectively in $\mathcal{O}(\lambda^2)$ for HQC and in $\mathcal{O}(\lambda^{4/3})$ for RQC. Practical parameters compare well to systems based on ring learning parity with noise or the recent moderate-density parity-check (MDPC) code system.]]>645392739431030<![CDATA[Lower and Upper Bounds on the Density of Irreducible NFSRs]]>$n$-stage is called irreducible if the family of output sequences of any NFSR of stage less than $n$ is not included in that of the NFSR. Tian and Qi [IEEE Trans. Inf. Theory, 2013(6), 4006–4012] gave a lower bound on the density of irreducible NFSRs. In this paper, we improve their lower bound and also give an upper bound on the density of irreducible NFSRs. Moreover, the gap between our upper and lower bounds is less than 0.04.]]>64539443952256<![CDATA[Corrections to “Abelian Group Codes for Channel Coding and Source Coding”]]>$(\mathcal{X}, \mathcal{Y}, W_{Y|X})$ is characterized with maximal probability of error in [1, Sec. II]. There is a mistake in the proof of achievability as given in Section VII.A. It is correctly shown on pages 2408–2409 that \begin{equation*} \lim_{n \rightarrow \infty} \max_{a} \mathbb{E}\left[ P(E(a)) \right] = 0 \end{equation*} if for all $\hat{\theta} \neq \boldsymbol{s}$, \begin{equation*} R\, \frac{\sum_{(p,s)\in \mathcal{S}(G)} (s - \hat{\theta}_{p,s})\, w_{p,s} \log p}{\sum_{(p,s) \in \mathcal{S}(G)} s\, w_{p,s} \log q} < \log |H_{\eta^* + \hat{\theta}}| - H(X_{\eta^*,b} \,|\, Y, [X_{\eta^*,b}]_{\hat{\theta}}) - O(\epsilon). \end{equation*} However, it is incorrectly claimed that the achievability conditions are: for all $\hat{\theta} \neq \boldsymbol{s}$, \begin{equation*} R \le \frac{1}{1 - \omega_{\hat{\theta}}}\, I(X_{\eta^*,b}; Y \,|\, [X_{\eta^*,b}]_{\hat{\theta}}). \end{equation*} Our original objective was to characterize the average-error group capacity of a discrete memoryless channel. The average error is more widely used than the maximal error. Although we had the proof of achievability for the average-error case, we could not prove the converse, so we settled for characterizing the maximal-error group capacity.
In light of the above error, we have the following resolution.]]>64539533953153<![CDATA[Blank page]]>645B3956B39562<![CDATA[IEEE Transactions on Information Theory information for authors]]>645C3C349