
2010 International Symposium on Information Theory and its Applications (ISITA 2010)

Date: 17-20 Oct. 2010

Papers 1-25 of 196
  • [Front cover]

    Publication Year: 2010 , Page(s): c1
  • [Copyright notice]

    Publication Year: 2010 , Page(s): 1
  • Welcome

    Publication Year: 2010 , Page(s): 1 - 3
  • Organizing Committee

    Publication Year: 2010 , Page(s): 1 - 3
  • Sponsorship

    Publication Year: 2010 , Page(s): 1
  • Session index

    Publication Year: 2010 , Page(s): 1 - 16
  • Good high-rate π-rotation LDPC codes based on novel puncturing techniques

    Publication Year: 2010 , Page(s): 1 - 6
    Cited by:  Papers (1)

    In this paper we introduce puncturing techniques to produce high-rate π-rotation low-density parity-check (LDPC) codes. The techniques rely on symmetry considerations to keep the description simple and to achieve excellent performance. We also extend the bounds on the minimum distance for the high-rate codes.

  • Low-density parity-check accumulate codes

    Publication Year: 2010 , Page(s): 7 - 12

    This paper presents a class of high-rate codes called low-density parity-check accumulate (LDPCA) codes. The design is a serial concatenation of an LDPC outer code and an accumulator, with an interleaver between them. Iterative decoding of LDPCA codes has complexity linear in the code length. When regular LDPC codes with column weight 2 are used, the proposed codes have low encoding complexity and are well suited to hardware implementation. Simulation results show that regular LDPCA codes match the error performance of regular LDPC codes in the waterfall region and outperform product accumulate codes in the error-floor region. An investigation of weight distributions proves that regular LDPCA codes have asymptotic minimum distance proportional to the code length. In addition, iterative decoding thresholds under density evolution are obtained with a Gaussian approximation.

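The serial concatenation described in the abstract above (LDPC outer code, interleaver, rate-1 accumulator) can be sketched in a few lines. This is an illustrative toy, not the paper's construction: the stand-in outer code (a single parity check) and the fixed interleaver are hypothetical placeholders.

```python
def accumulate(bits):
    """Rate-1 accumulator: y[i] = x[i] XOR y[i-1] (running parity)."""
    out, prev = [], 0
    for b in bits:
        prev ^= b
        out.append(prev)
    return out

def ldpca_encode(info_bits, outer_encode, interleaver):
    """Serial concatenation: outer code -> interleaver -> accumulator."""
    coded = outer_encode(info_bits)              # outer codeword
    permuted = [coded[i] for i in interleaver]   # interleave
    return accumulate(permuted)                  # inner accumulator

toy_outer = lambda u: u + [sum(u) % 2]   # stand-in single-parity outer code
pi = [4, 0, 3, 1, 2]                     # hypothetical interleaver, n = 5
print(ldpca_encode([1, 0, 1, 1], toy_outer, pi))  # -> [1, 0, 1, 1, 0]
```

The accumulator is what gives the inner code its trivial, linear-time encoding: each output bit depends only on the previous output and the current input.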
  • Adaptive quantization for low-density-parity-check decoders

    Publication Year: 2010 , Page(s): 13 - 18
    Cited by:  Papers (2)

    For implementations of low-density parity-check (LDPC) decoders, both the error performance and the complexity are significantly affected by the number of quantization bits used. In this paper, we propose an adaptive quantization scheme that applies different quantization schemes at different iterations while keeping the number of quantization bits fixed. Simulation results show that the proposed adaptive quantization can reduce the number of quantization bits without degrading error performance.

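The idea of iteration-dependent quantization with a fixed bit budget can be sketched as follows. The step-size schedule here is a hypothetical illustration, not the scheme proposed in the paper.

```python
def quantize_llr(llr, n_bits, step):
    """Uniform quantizer with saturation: n_bits fixed, step size variable."""
    max_level = 2 ** (n_bits - 1) - 1          # symmetric signed range
    q = round(llr / step)
    q = max(-max_level, min(max_level, q))     # clip to representable range
    return q * step

def step_for_iteration(it, base_step=0.5, growth=1.25):
    """Hypothetical schedule: widen the step in later iterations, where
    LLR magnitudes typically grow, instead of adding quantization bits."""
    return base_step * growth ** it

# Same 4-bit budget, different effective dynamic range per iteration:
print(quantize_llr(3.7, 4, step_for_iteration(0)))   # early: fine resolution
print(quantize_llr(14.2, 4, step_for_iteration(8)))  # late: wide range
```

The point of the sketch is that the decoder's word width never changes; only the interpretation of the fixed-point levels does.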
  • Error-trellis state complexity of LDPC convolutional codes based on circulant matrices

    Publication Year: 2010 , Page(s): 19 - 24
    Cited by:  Papers (1)

    Let H(D) be the parity-check matrix of an LDPC convolutional code corresponding to the parity-check matrix H of a QC code obtained using the method of Tanner et al. The entries in H(D) are all monomials, and several rows (columns) have monomial factors. Cyclically shifting the rows of H yields a modified matrix H' whose parity-check matrix H'(D) defines another convolutional code; its free distance is still lower-bounded by the minimum distance of the original QC code. Moreover, each row (column) of H'(D) has a factor different from the one in H(D). We show that the state-space complexity of the error-trellis associated with H'(D) can be significantly reduced by controlling the row shifts applied to H, while preserving the error-correction capability.

  • Approximately universal MIMO diversity embedded codes

    Publication Year: 2010 , Page(s): 25 - 30

    In diversity embedded coding, an information stream is divided into two sub-streams with different priorities. If the optimal DMT performance of each coded stream can be achieved, the code is said to be successively refinable. For SISO, SIMO, and MISO Rayleigh slow-fading channels, Diggavi and Tse showed that superposition coding with a successive-cancellation receiver achieves successive refinability. However, this optimality might not extend to MIMO channels because of the strict sub-optimality of the successive-cancellation receiver. In this paper, we first provide an explicit construction of MIMO diversity embedded codes that are sphere decodable. We then show that the proposed code is approximately universal if joint ML decoding is used, and hence extend the notion of successive refinability to general MIMO channels.

  • MIMO MFSK receivers using FDE and MLD on quasi-static frequency selective fading channels

    Publication Year: 2010 , Page(s): 31 - 36

    In this paper, we propose signal separation and equalization schemes for MFSK signals using frequency-domain equalization (FDE) with a cyclic prefix (CP) and maximum-likelihood detection (MLD) with zero padding (ZP) on quasi-static MIMO frequency-selective channels. Through computer simulations, we verify that MLD with ZP exhibits better BER performance than conventional FDE with CP. Moreover, we reduce the computational complexity of MLD using the M-algorithm.

  • Lattice-reduction aided HNN for vector precoding

    Publication Year: 2010 , Page(s): 37 - 41

    In this paper we propose a modification of Hopfield neural networks (HNN) for vector precoding, based on Lenstra-Lenstra-Lovász lattice basis reduction. This precoding algorithm controls the energy penalty for system loads α = K/N close to 1, with N and K denoting the number of transmit and receive antennas, respectively. Simulation results for the average transmit energy as a function of α show that our algorithm improves performance within the range 0.9 ≤ α ≤ 1 by between 0.4 dB and 2.6 dB compared with standard HNN precoding. The proposed algorithm performs close to the sphere encoder (SE) while requiring much lower complexity, and thus can be applied as an efficient suboptimal precoding method.

  • A family of cyclic division algebra based fast-decodable 4×2 space-time block codes

    Publication Year: 2010 , Page(s): 42 - 47
    Cited by:  Papers (4)

    Multiple-input double-output (MIDO) codes are important in future wireless communications, where the portable end-user device is physically small and will typically contain at most two receive antennas. Especially tempting is the 4×2 channel, where the four transmit antennas can either all be at one station or be split between two different stations. Such channels optimally employ rate-two space-time (ST) codes consisting of 4×4 matrices. Unfortunately, such codes are in general very complex to decode, with worst-case complexity as high as N^8, where N is the size of the complex signaling alphabet. Hence, constructions with reduced complexity are called for. One option, of course, is to use rate-one codes such as quasi-orthogonal codes; however, if full multiplexing, i.e., transmission of two symbols per channel use, is to be maintained, this option must be set aside. Recently, some reduced-complexity constructions have been proposed, but they have mainly been based on ad hoc methods and have resulted in specific codes rather than a more general class of codes. In this paper, it is shown that cyclic division algebra (CDA) based codes satisfying certain criteria always achieve at least a 25% worst-case complexity reduction, while maintaining full diversity and even the non-vanishing determinant (NVD) property. The reduction follows from the fact that the codes consist of four Alamouti blocks, allowing simplified decoding. At present, this reduction is the best known for rate-two MIDO codes. A previously proposed code was the first to provably fulfill the related algebraic properties and is revisited here as an example. Further, a new low-complexity design resulting from the proposed criteria is presented and shown through simulations to have excellent performance.

  • Direct biometric verification schemes with Gaussian data

    Publication Year: 2010 , Page(s): 48 - 53

    Verification of a person's identity against a database containing outcomes of his or her biometric measurements is considered. A verification algorithm is proposed and its performance evaluated under the assumption that the input data represent a vector of values of a random variable generated according to a Gaussian probability distribution and observed under additive white Gaussian noise. The evaluation of the false acceptance rate includes an analysis of the possibilities of attackers, called wolves, who generate fixed vectors that have the best chance of winning the verifier's acceptance decision when the biometrics of the person are unknown.

  • Fundamental limits for biometric identification with a database containing protected templates

    Publication Year: 2010 , Page(s): 54 - 59
    Cited by:  Papers (2)

    In this paper we analyze secret generation in biometric identification systems with protected templates. This problem is closely related to the study of biometric identification capacity by Willems et al. (2003) and O'Sullivan and Schmid (2002) and to common randomness generation as studied by Ahlswede and Csiszár (1993). In our system, two terminals observe biometric enrollment and identification sequences of a number of individuals. The goal of these terminals is to form a common secret for the sequences that belong to the same individual by interchanging public (helper) messages for all individuals, in such a way that the information leakage about the secrets from these helper messages is negligible. It is important to realize that biometric data are unique to individuals and cannot be replaced if compromised; therefore the helper messages should contain as little information as possible about the biometric data. On the other hand, the second terminal has to establish the identity of the individual who presented his biometric sequence, based on the helper data produced by the first terminal. In this paper we determine the fundamental tradeoff between secret-key rate, identification rate, and privacy-leakage rate in biometric identification systems.

  • A geometric view of mutual information: Application to anonymity protocols

    Publication Year: 2010 , Page(s): 60 - 65

    Anonymity protocols are a special class of security protocols that focus on protecting the identities of communicating entities in a network. In this research we explore the notion of anonymity from an information-theoretic point of view. We view a protocol as a noisy channel that links a set of anonymous events (inputs) to a set of observables (outputs). The degree of anonymity of the protocol can then be expressed in terms of how much information is leaked by the channel. In information theory, the information leaked by a noisy channel is given by the mutual information. We propose an alternative measure of information leakage based on the vector configuration of the noisy channel's matrix, and show that a variant of this new measure coincides with mutual information, which gives an interesting geometric interpretation to mutual information.

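The channel view of leakage in the abstract above is easy to make concrete: given an input distribution and the protocol's channel matrix, the mutual information I(X;Y) quantifies the leaked bits. A minimal sketch; the binary symmetric channel example is illustrative, not from the paper.

```python
from math import log2

def mutual_information(p_x, channel):
    """I(X;Y) in bits for input distribution p_x and a row-stochastic
    channel matrix channel[i][j] = P(Y=j | X=i)."""
    p_y = [sum(p_x[i] * channel[i][j] for i in range(len(p_x)))
           for j in range(len(channel[0]))]
    mi = 0.0
    for i, px in enumerate(p_x):
        for j, pyx in enumerate(channel[i]):
            if px > 0 and pyx > 0:
                mi += px * pyx * log2(pyx / p_y[j])
    return mi

# Binary symmetric channel with crossover 0.1, uniform input:
bsc = [[0.9, 0.1], [0.1, 0.9]]
print(mutual_information([0.5, 0.5], bsc))  # ≈ 0.531 bits, i.e. 1 - h(0.1)
```

A perfectly anonymizing protocol corresponds to a channel whose rows are identical, for which this quantity is zero.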
  • Realizing and evaluating mutual anonymity in P2P networks

    Publication Year: 2010 , Page(s): 66 - 71

    In this paper we propose a mutually anonymous protocol for decentralized peer-to-peer (P2P) networks. The protocol combines the Secret-Sharing-Based Mutual Anonymity Protocol (SSMP) with the information slicing technique. It realizes the initiator's and responder's anonymity by using SSMP, in which the complete reply-confirm interaction between responders and initiators is realized with the information slicing algorithm. Employing secret sharing schemes plays an essential role in protecting the information transmitted between initiator and responder, and by using information slicing the proposed protocol is churn resilient and can be realized at lower cryptographic cost. Moreover, we evaluate the anonymity of the P2P system from a probabilistic point of view. The results show that the proposed protocol provides higher anonymity than conventional methods.

  • On the adaptive antidictionary code using minimal forbidden words with constant lengths

    Publication Year: 2010 , Page(s): 72 - 77
    Cited by:  Papers (5)

    This paper proposes a new linear-time on-line antidictionary code. The proposed algorithm uses the subset of the antidictionary whose elements have length at most a given fixed value. It is proved that the time complexity of the algorithm is linear in the string length, and its effectiveness is demonstrated by simulation results.

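A minimal forbidden word is a word that never occurs in the string although both its longest proper prefix and its longest proper suffix do; the abstract above bounds their length by a fixed constant. A brute-force sketch of this bounded enumeration follows; it is illustrative only, not the paper's linear-time on-line algorithm.

```python
def substrings_upto(s, k):
    """All substrings of s with length between 1 and k."""
    found = set()
    for i in range(len(s)):
        for l in range(1, min(k, len(s) - i) + 1):
            found.add(s[i:i + l])
    return found

def minimal_forbidden_words(s, k, alphabet="01"):
    """Words w = a·u·b with |w| <= k that are absent from s while both
    a·u (longest proper prefix) and u·b (longest proper suffix) occur."""
    occ = substrings_upto(s, k)
    mfw = set()
    for u in list(occ) + [""]:
        if len(u) > k - 2:
            continue
        for a in alphabet:
            for b in alphabet:
                w = a + u + b
                if w not in occ and a + u in occ and u + b in occ:
                    mfw.add(w)
    return mfw

print(sorted(minimal_forbidden_words("11010", 3)))  # -> ['00', '011', '111']
```

Limiting the enumeration to a fixed maximum length k mirrors the paper's use of a constant-length antidictionary subset, trading a smaller dictionary for simpler bookkeeping.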
  • On coding for source with infinitesimal time slots

    Publication Year: 2010 , Page(s): 78 - 81

    We introduce a source that emits symbols in infinitesimal time slots; the source may emit no symbol in a slot. The outputs of the source are encoded in real time. We derive the minimum loss probability and reveal a connection to the continuous model. Moreover, we define the utilization factor of the channel and show that it coincides with the loss probability for any code. We consider the situation in which the number of divisions of the time unit goes to infinity while the entropy rate is kept constant. In this artificial case, the process does not approach a Poisson process, and we show that ordinary entropy coding is not optimal but sub-optimal.

  • Using synchronization bits to boost compression by substring enumeration

    Publication Year: 2010 , Page(s): 82 - 87
    Cited by:  Papers (2)

    A new lossless data compression technique called compression via substring enumeration (CSE) has recently been introduced. It has been observed that CSE achieves lower performance on binary data. A hypothesis has been formulated suggesting that CSE loses track of the position of bits relative to byte boundaries more easily in binary data, and that this confusion incurs a penalty for CSE. This paper examines the validity of the hypothesis and proposes a simple technique to reduce the penalty in case the hypothesis is correct. The technique consists of a preprocessing step that inserts synchronization bits into the data in order to boost the performance of CSE. Experiments provide strong evidence that the hypothesis is true and demonstrate the effectiveness of synchronization bits.

  • On coding for nonbinary sources with side information at the decoder

    Publication Year: 2010 , Page(s): 88 - 93
    Cited by:  Papers (1)

    We study coding for nonbinary sources with side information at the decoder. The use of binary linear codes and an associated decoding scheme is proposed, with special emphasis on low-density parity-check codes. With iterative decoding based on the constraints imposed by the encoding structure, reasonably good performance is achieved compared with the Slepian-Wolf limit.

  • Stationary sequences and stable sampling

    Publication Year: 2010 , Page(s): 94 - 99

    In this paper we study the representation of random variables by means of frames or Riesz bases generated by stationary sequences. This concerns the representation of continuous-time processes by means of discrete samples.

  • Approximating discrete probability distributions with causal dependence trees

    Publication Year: 2010 , Page(s): 100 - 105
    Cited by:  Papers (1)

    Chow and Liu considered the problem of approximating discrete joint distributions with dependence-tree distributions, where the goodness of the approximation was measured in terms of KL distance. They (i) demonstrated that the minimum-divergence approximation is the tree with the maximum sum of mutual informations, and (ii) specified a low-complexity minimum-weight spanning tree algorithm to find the optimal tree. In this paper, we consider the analogous problem of approximating the joint distribution of discrete random processes with causal, directed dependence trees, where the approximation is again measured in terms of KL distance. We (i) demonstrate that the minimum-divergence approximation is the directed tree with the maximum sum of directed informations, and (ii) specify a low-complexity minimum-weight directed spanning tree, or arborescence, algorithm to find the optimal tree. We also present an example to demonstrate the algorithm.

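The original undirected Chow-Liu procedure recalled in the first half of the abstract above can be sketched directly: estimate pairwise mutual informations from data and take a maximum-weight spanning tree. This is the classical algorithm, not the paper's directed-information arborescence variant, and the toy data are hypothetical.

```python
from collections import Counter
from itertools import combinations
from math import log2

def pairwise_mi(samples, i, j):
    """Empirical mutual information between columns i and j of the data."""
    n = len(samples)
    pij = Counter((r[i], r[j]) for r in samples)
    pi = Counter(r[i] for r in samples)
    pj = Counter(r[j] for r in samples)
    return sum(c / n * log2((c / n) / ((pi[a] / n) * (pj[b] / n)))
               for (a, b), c in pij.items())

def chow_liu_tree(samples, n_vars):
    """Maximum-weight spanning tree over pairwise MI (Kruskal + union-find)."""
    edges = sorted(((pairwise_mi(samples, i, j), i, j)
                    for i, j in combinations(range(n_vars), 2)), reverse=True)
    parent = list(range(n_vars))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    tree = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                        # add edge unless it forms a cycle
            parent[ri] = rj
            tree.append((i, j))
    return tree

# Toy data: X1 copies X0, X2 is independent; the tree must keep edge (0, 1).
samples = [(0, 0, 0), (0, 0, 1), (1, 1, 0), (1, 1, 1)]
print(chow_liu_tree(samples, 3))
```

The paper's causal variant keeps the same skeleton but replaces mutual information with directed information and the spanning tree with a maximum-weight arborescence.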
  • English and Taiwanese text categorization using N-gram based on Vector Space Model

    Publication Year: 2010 , Page(s): 106 - 111

    In this paper, we present a new mathematical model based on the vector space model and consider its implications. The proposed method is evaluated through several experiments in which we classify newspaper articles from the English Reuters-21578 data set and the Taiwanese China Times 2005 data set. The Reuters-21578 data set is a benchmark for automatic text categorization. It is shown that the proposed method, FRAM, achieves good classification accuracy: the micro-averaged F-measure is 94.5% for English but 78.0% for Taiwanese. Although the method is language-independent and provides a new perspective, improving the classification accuracy for Taiwanese remains future work.

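The n-gram vector-space classification described in the abstract above can be made concrete with character n-grams and cosine similarity. The categories, documents, and bigram choice below are illustrative stand-ins, not the paper's FRAM method.

```python
from collections import Counter
from math import sqrt

def char_ngrams(text, n=2):
    """Counter of overlapping character n-grams (one feature vector)."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm = sqrt(sum(c * c for c in u.values())) * sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

def classify(doc, category_profiles, n=2):
    """Assign doc to the category whose aggregate n-gram vector is closest."""
    v = char_ngrams(doc, n)
    return max(category_profiles, key=lambda c: cosine(v, category_profiles[c]))

# Hypothetical category profiles built from tiny training texts:
profiles = {
    "finance": char_ngrams("stock market shares trading"),
    "sports": char_ngrams("football match goal team"),
}
print(classify("share trading update", profiles))  # -> finance
```

Character n-grams are one reason such models can be language-independent: they require no word segmentation, which matters for languages written without spaces.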