
IEEE Transactions on Communications

Issue 3 • March 2013


Displaying Results 1 - 25 of 40
  • Table of contents

    Publication Year: 2013 , Page(s): c1 - c5
    Freely Available from IEEE
  • Staff list

    Publication Year: 2013 , Page(s): c2
    Freely Available from IEEE
  • Fast Pruned Interleaving

    Publication Year: 2013 , Page(s): 817 - 831

    In this paper, computationally efficient schemes for enumerating the so-called inliers of a wide range of permutations employed in pruned variable-size (turbo) interleavers are proposed. The objective is to accelerate pruned interleaving in turbo codes by computing a statistic known as the pruning gap, which enables determining a permuted address under pruning without serially permuting all its predecessors. It is shown that for any linear or quadratic permutation, including variations such as dithered relative prime or almost regular permutations, the pruning gap can be computed in logarithmic time. Moreover, it is shown that Dedekind sums form efficient building blocks for enumerating inliers of the widely adopted polynomial-based permutations. An efficient algorithm for computing such sums in vector form using integer operations is presented. The results are extended to 2D and higher-dimensional interleavers that combine multiple permutations along all dimensions, and closed-form expressions for their inliers are derived; the inliers statistic is shown to be a linear combination of the constituent permutation inliers. A lower bound on the minimum spread of serially pruned interleavers is also derived using the inliers statistic. Moreover, it is shown that serially pruned interleavers inherit the contention-free property of the mother interleaver, and hence are parallelizable. Simulation results for practical pruned turbo interleavers demonstrate a speedup of several orders of magnitude over serial interleaving.

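    For contrast with the logarithmic-time schemes proposed above, here is a minimal sketch of the O(N) serial pruning baseline, assuming a quadratic permutation polynomial (QPP) mother interleaver and taking "pruning" to mean discarding permuted addresses outside the pruned length K; the pair f1 = 3, f2 = 10 for length 40 and the choice K = 37 are our own illustrative inputs:

        def qpp(x, f1, f2, N):
            # quadratic permutation polynomial: pi(x) = (f1*x + f2*x^2) mod N
            return (f1 * x + f2 * x * x) % N

        def serial_pruned_interleaver(K, f1, f2, N):
            # O(N) baseline: scan the whole mother permutation and keep only
            # the addresses that fall inside the pruned length K
            return [qpp(x, f1, f2, N) for x in range(N) if qpp(x, f1, f2, N) < K]

        # length-40 QPP mother interleaver (f1 = 3, f2 = 10), pruned to K = 37
        print(serial_pruned_interleaver(37, 3, 10, 40))
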
  • On Decoding of the (89, 45, 17) Quadratic Residue Code

    Publication Year: 2013 , Page(s): 832 - 841
    Cited by:  Papers (1)

    In this paper, three decoding methods for the (89, 45, 17) binary quadratic residue (QR) code are presented: hard-decision, soft-decision, and linear programming decoding algorithms. Firstly, a new hybrid algebraic decoding algorithm for the (89, 45, 17) QR code is proposed. It uses the Laplace formula to obtain the primary unknown syndromes, as done in Lin et al.'s algorithm, when the number of errors v is less than or equal to 5, whereas Gaussian elimination is adopted to compute the unknown syndromes when v ≥ 6. Secondly, an appropriate modification of the algorithm developed by Chase is given. Combining the proposed algebraic decoding algorithm with the modified Chase-II algorithm then yields a new soft-decision decoding algorithm, a complete soft decoding of QR codes. Thirdly, to further improve the error-correcting performance of the code, linear programming (LP) is utilized to decode the (89, 45, 17) QR code. Simulation results show that the proposed algebraic decoding algorithm reduces the decoding time compared with Lin et al.'s hard decoding algorithm, and thus significantly reduces the complexity of soft decoding while maintaining the same bit error rate (BER) performance. Moreover, compared with the new soft-decision decoding algorithm, LP-based decoding improves the error-rate performance almost without increasing the decoding complexity, providing a coding gain of 0.2 dB at BER = 2 × 10^-6.

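    As background to the decoders above (not part of them), the code's parameters follow from the quadratic residues modulo 89; a quick Python check:

        # quadratic residues mod 89 underlie the (89, 45, 17) QR code
        p = 89
        Q = sorted({(x * x) % p for x in range(1, p)})
        print(len(Q))       # 44 residues -> degree-44 generator polynomial
        print(p - len(Q))   # code dimension k = 89 - 44 = 45
        print(2 in Q)       # True: 89 = 1 (mod 8), so a binary QR code exists
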
  • New Techniques for Upper-Bounding the ML Decoding Performance of Binary Linear Codes

    Publication Year: 2013 , Page(s): 842 - 851
    Cited by:  Papers (2)

    In this paper, new techniques are presented to either simplify or improve most existing upper bounds on the maximum-likelihood (ML) decoding performance of binary linear codes over additive white Gaussian noise (AWGN) channels. Firstly, the recently proposed union bound using truncated weight spectrum by Ma et al. is re-derived in detail based on Gallager's first bounding technique (GFBT), where the "good region" is specified by a sub-optimal list decoding algorithm. The error probability caused by the bad region can be upper-bounded by the tail probability of a binomial distribution, while the error probability caused by the good region can be upper-bounded by most existing techniques. Secondly, we propose two techniques to tighten the union bound on the error probability caused by the good region. The first technique is based on pair-wise error probabilities. The second technique is based on triplet-wise error probabilities, which can be upper-bounded using the fact that any three bipolar vectors form a non-obtuse triangle. The proposed bounds improve the conventional union bounds but have similar complexity, since they involve only the Q-function. The proposed bounds can also be adapted to bit-error probabilities.

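    For reference, the conventional union bound that these techniques tighten takes the form P_e <= sum_d A_d Q(sqrt(2*d*R*Eb/N0)) for BPSK over AWGN. A minimal sketch, using the (7,4) Hamming code's weight spectrum as a toy input:

        import math

        def q_func(x):
            # Gaussian tail function Q(x)
            return 0.5 * math.erfc(x / math.sqrt(2.0))

        def union_bound(spectrum, rate, ebno_db):
            # spectrum: {Hamming weight d: multiplicity A_d}; BPSK over AWGN
            ebno = 10.0 ** (ebno_db / 10.0)
            return sum(a * q_func(math.sqrt(2.0 * d * rate * ebno))
                       for d, a in spectrum.items())

        # toy example: (7,4) Hamming code, A_3 = 7, A_4 = 7, A_7 = 1
        print(union_bound({3: 7, 4: 7, 7: 1}, 4 / 7, 6.0))
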
  • Capacity Bounds and Concatenated Codes over Segmented Deletion Channels

    Publication Year: 2013 , Page(s): 852 - 864

    We develop an information-theoretic characterization and a practical coding approach for segmented deletion channels. Compared to channels with independent and identically distributed (i.i.d.) deletions, where each bit is independently deleted with equal probability, the segmentation assumption imposes certain constraints: in a block of bits of a certain length, only a limited number of deletions is allowed to occur. This channel model has recently been proposed, motivated by the fact that in practical systems, when a deletion error occurs, it is unlikely that another will follow very soon. We first argue that such channels are information stable, and hence their channel capacity exists. Then, we introduce several upper and lower bounds via two different methods in an attempt to understand the behavior of the channel capacity. The first approach utilizes certain information provided to the transmitter and/or receiver, while the second explores the asymptotic behavior of the bounds when the average bit deletion rate is small. In the second part of the paper, we consider a practical channel coding approach over a segmented deletion channel. Specifically, we utilize outer LDPC codes concatenated with inner marker codes, and develop suitable channel detection algorithms for this scenario. Different maximum-a-posteriori (MAP) based channel synchronization algorithms operating at the bit and symbol levels are introduced, and specific LDPC code designs are explored. Simulation results clearly indicate the advantages of the proposed approach. In particular, for the entire range of deletion probabilities less than unity, our scheme offers a significantly larger transmission rate than the other existing solutions in the literature.

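    A toy simulator reflecting our reading of the segmented deletion model (illustrative, not the authors' code): within each non-overlapping segment of b bits, at most one deletion occurs, with probability pd at a uniformly chosen position:

        import random

        def segmented_deletion_channel(bits, b, pd):
            # at most one deletion per non-overlapping segment of b bits
            out = []
            for i in range(0, len(bits), b):
                segment = bits[i:i + b]
                if random.random() < pd and segment:
                    del segment[random.randrange(len(segment))]
                out.extend(segment)
            return out

        random.seed(1)
        x = [random.randint(0, 1) for _ in range(24)]
        print(len(x), len(segmented_deletion_channel(x, b=8, pd=0.5)))
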
  • Efficient Parallel Search Algorithm for Determining Optimal R=1/2 Systematic Convolutional Self-Doubly Orthogonal Codes

    Publication Year: 2013 , Page(s): 865 - 876
    Cited by:  Papers (1)

    A novel parallel and implicitly-exhaustive search algorithm for finding, in systematic form, rate R=1/2 optimal-span Convolutional Self-Doubly Orthogonal (CDO) codes and Simplified Convolutional Self-Doubly Orthogonal (S-CDO) codes is presented. In order to obtain high-performance, low-latency codecs with these codes, it is important to minimize their constraint length (or "span") for a given number of generator connections J. The proposed exhaustive algorithm uses algorithmic enhancements over the best previously published search techniques, yielding new and improved codes: we were able to obtain new optimal-span CDO/S-CDO codes (of order J = 9 and J ∈ {10, 11}, respectively), as well as new codes having the shortest spans published to date for higher values of J (J ∈ {10, 12, ..., 17} and J ∈ {12, ..., 20} for CDO and S-CDO codes, respectively). The new codes and their error performance are provided. An analysis of the evolution of the CDO/S-CDO code error performance as J increases is presented, and the shortest CDO/S-CDO code span values for each given J are compared.

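    Double orthogonality builds on simple self-orthogonality, which for a rate-1/2 systematic code with generator connection positions {d1, ..., dJ} requires all pairwise differences di - dj to be distinct. The sketch below checks only this first-order condition; the full CDO definition adds constraints on differences of differences, omitted here:

        from itertools import permutations

        def is_self_orthogonal(conns):
            # necessary condition: all ordered pairwise differences are distinct
            diffs = [a - b for a, b in permutations(conns, 2)]
            return len(diffs) == len(set(diffs))

        print(is_self_orthogonal([0, 1, 3, 7]))   # True: a perfect difference set
        print(is_self_orthogonal([0, 1, 2, 3]))   # False: repeated differences
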
  • Check Node Reliability-Based Scheduling for BP Decoding of Non-Binary LDPC Codes

    Publication Year: 2013 , Page(s): 877 - 885

    The scheduling strategy is an important aspect of belief-propagation (BP) decoding of low-density parity-check (LDPC) codes because it affects the decoder's convergence rate, decoding complexity, and error-correction performance. In this paper, we propose two new scheduling strategies for the BP decoding of non-binary LDPC (NB-LDPC) codes. Both strategies are devised based on the concept of check node reliability and employ a heuristically defined threshold that can adapt to variations of the communication channel. As the scheduling strategies only update a subset of the check nodes in each iteration, they reduce the per-iteration cost. Furthermore, since BP performs suboptimally on finite-length LDPC codes, especially short ones, the new scheduling strategies can even improve the error-correction performance of BP decoding by enhancing message propagation over the Tanner graphs of short-length NB-LDPC codes. Simulation results demonstrate that the new scheduling strategies provide good performance/complexity tradeoffs.

  • Two Informed Dynamic Scheduling Strategies for Iterative LDPC Decoders

    Publication Year: 2013 , Page(s): 886 - 896

    When residual belief-propagation (RBP), a kind of informed dynamic scheduling (IDS), is applied to low-density parity-check (LDPC) codes, the convergence speed in error-rate performance can be significantly improved. However, the RBP decoders presented in the previous literature suffer from poor convergence error-rate performance due to the two phenomena explored in this paper. The first is the greedy-group phenomenon, in which a small part of the decoding graph occupies most of the decoding resources. By limiting the number of updates for each edge message in the decoding graph, the proposed Quota-based RBP (Q-RBP) schedule can reduce the probability of greedy groups forming. The other phenomenon is the silent-variable-nodes issue, a condition in which some variable nodes have no chance of contributing their intrinsic messages to the decoding process. As a result, we propose the Silent-Variable-Node-Free RBP (SVNF-RBP) schedule, which forces all variable nodes to contribute their intrinsic messages to the decoding process equally. Both Q-RBP and SVNF-RBP provide appealing convergence speed and convergence error-rate performance compared to previous IDS decoders, for both dedicated and punctured LDPC codes.

  • Design of Irregular Repeat-Accumulate Coded Physical-Layer Network Coding for Gaussian Two-Way Relay Channels

    Publication Year: 2013 , Page(s): 897 - 909
    Cited by:  Papers (4)

    This paper addresses the design of irregular repeat accumulate (IRA) codes for coded physical-layer network coding (PNC) over the binary-input Gaussian two-way relay channel, assuming perfect synchronization and equal received power at the relay. The design is based on a nontrivial extension of EXIT-chart based design techniques. Specifically, we analyze the components of the IRA-PNC scheme and propose an approach to model the soft information exchanged between these components. Then, we develop upper and lower bounds on the extrinsic information transfer functions to characterize the iterative process of computing the network-coded information. Based on that, we construct optimized IRA codes to minimize the computation error at the relay. The optimized IRA-PNC scheme offers a considerable performance improvement over the existing regular RA coded PNC. For a rate-3/4 code, as an example, we observe an improvement of 2.6 dB, and the optimized IRA-PNC scheme is only about 1.7 dB away from the capacity upper bound of the Gaussian two-way relay channel.

  • Algebraic Design and Implementation of Protograph Codes using Non-Commuting Permutation Matrices

    Publication Year: 2013 , Page(s): 910 - 918

    Random lifts of graphs, or equivalently, random permutation matrices, have been used to construct good families of codes known as protograph codes. An algebraic analog of this approach was recently presented using voltage graphs, and it was shown that many existing algebraic constructions of graph-based codes that use commuting permutation matrices may be seen as special cases of voltage graph codes. Voltage graphs are graphs that have an element of a finite group assigned to each edge, and the assignment determines a specific lift of the graph. In this paper, we discuss how assignments of permutation group elements to the edges of a base graph affect the properties of the lifted graph and the corresponding codes, and we present a construction method for LDPC code ensembles based on non-commuting permutation matrices. We also show encoder and decoder implementations for these codes.

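    An illustrative sketch (not the authors' construction) of the lifting step: each nonzero entry of a base (protograph) matrix becomes a Q x Q permutation matrix, and drawing general, typically non-commuting permutations instead of circulants is exactly the extra freedom discussed above:

        import numpy as np

        rng = np.random.default_rng(0)

        def lift_protograph(base, Q):
            # replace each 1 in the base matrix by a random QxQ permutation matrix
            m, n = base.shape
            H = np.zeros((m * Q, n * Q), dtype=int)
            for i in range(m):
                for j in range(n):
                    if base[i, j]:
                        P = np.eye(Q, dtype=int)[rng.permutation(Q)]
                        H[i*Q:(i+1)*Q, j*Q:(j+1)*Q] = P
            return H

        base = np.array([[1, 1, 1, 0],
                         [0, 1, 1, 1]])
        print(lift_protograph(base, Q=4).shape)   # (8, 16)
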
  • On Finite-Length Performance of Polar Codes: Stopping Sets, Error Floor, and Concatenated Design

    Publication Year: 2013 , Page(s): 919 - 929
    Cited by:  Papers (3)

    This paper investigates properties of polar codes that can be potentially useful in real-world applications. We start by analyzing the performance of finite-length polar codes over the binary erasure channel (BEC), assuming belief propagation as the decoding method. We provide a stopping set analysis for the factor graph of polar codes, in which we find the size of the minimum stopping set. We also find the girth of the graph for polar codes. Our analysis, along with bit error rate (BER) simulations, demonstrates that finite-length polar codes show superior error floor performance compared to conventional capacity-approaching coding techniques. In order to take advantage of this property while avoiding the shortcomings of polar codes, we consider the idea of combining polar codes with other coding schemes. We propose a polar code-based concatenated scheme to be used in Optical Transport Networks (OTNs) as a potential real-world application. Compared against conventional concatenation techniques for OTNs, the proposed scheme outperforms the existing methods by closing the gap to capacity while avoiding an error floor and maintaining low complexity at the same time.

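    For concreteness, a tiny sketch of the polar transform whose factor graph is analyzed above: the generator matrix is the n-fold Kronecker power of F = [[1, 0], [1, 1]] (the bit-reversal permutation is omitted for simplicity):

        import numpy as np

        def polar_transform_matrix(n):
            # G_N = F^{kron n}, N = 2^n, over GF(2); bit-reversal ignored
            F = np.array([[1, 0], [1, 1]], dtype=int)
            G = np.array([[1]], dtype=int)
            for _ in range(n):
                G = np.kron(G, F)
            return G

        def polar_encode(u, n):
            return u @ polar_transform_matrix(n) % 2

        u = np.array([0, 1, 0, 1, 1, 0, 0, 1])   # N = 8 input (frozen bits included)
        print(polar_encode(u, 3))
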
  • Efficient Majority-Logic Decoding of Short-Length Reed-Muller Codes at Information Positions

    Publication Year: 2013 , Page(s): 930 - 938

    Short-length Reed-Muller codes under majority-logic decoding are of particular importance for efficient hardware implementations in real-time and embedded systems. This paper significantly improves Chen's two-step majority-logic decoding method for binary Reed-Muller codes RM(r,m), r ≤ m/2, if, assuming systematic encoding, only errors at information positions are to be corrected. Some general results on the minimal number of majority gates are presented, which are particularly good for short codes. Specifically, owing to its importance in applications as a 3-error-correcting, self-dual code, the smallest non-trivial example, RM(2,5) of dimension 16 and length 32, is investigated in detail. Further, the decoding complexity of our procedure is compared with that of Chen's decoding algorithm for various Reed-Muller codes up to length 2^10.

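    A short sketch of the standard RM(r,m) generator-matrix construction (textbook material, not the paper's decoder), confirming that RM(2,5) has dimension 16 and length 32 as stated above:

        import numpy as np
        from itertools import combinations, product

        def rm_generator(r, m):
            # rows = evaluations of all monomials of degree <= r in m variables
            points = np.array(list(product([0, 1], repeat=m)))  # 2^m points
            rows = []
            for deg in range(r + 1):
                for subset in combinations(range(m), deg):
                    col = np.ones(len(points), dtype=int)
                    for v in subset:
                        col &= points[:, v]
                    rows.append(col)
            return np.array(rows)

        G = rm_generator(2, 5)
        print(G.shape)   # (16, 32): dimension 16, length 32
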
  • Stochastic Decoding of LDPC Codes over GF(q)

    Publication Year: 2013 , Page(s): 939 - 950
    Cited by:  Papers (1)

    Despite the outstanding performance of non-binary low-density parity-check (LDPC) codes over many communication channels, they are not yet in widespread use. This is due to the high implementation complexity of their decoding algorithms, even those that compromise performance for the sake of simplicity. In this paper, we present three algorithms based on stochastic computation that reduce the decoding complexity. The first is a purely stochastic algorithm with error-correcting performance matching that of the sum-product algorithm (SPA) for LDPC codes over Galois fields with low order and a small variable node degree. We also present a modified version which reduces the number of decoding iterations required while remaining purely stochastic and having a low per-iteration complexity. The second algorithm, relaxed half-stochastic (RHS) decoding, combines elements of the SPA and the stochastic decoder and uses successive relaxation to match the error-correcting performance of the SPA. Furthermore, it uses fewer iterations than the purely stochastic algorithm and places no limitations on the field order and variable node degree of the codes it can decode. The third algorithm, NoX, is a fully stochastic specialization of RHS for codes with variable node degree 2 that offers similar performance at a significantly lower computational complexity. We study the performance and complexity of the algorithms, noting that all three have lower per-iteration complexity than the SPA, that RHS can have a comparable average per-codeword computational complexity, and that NoX has a lower one.

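    The stochastic-computation primitive underlying all three decoders, sketched generically (this is not the paper's decoder): a probability is represented by a Bernoulli bit stream, so a product of independent probabilities reduces to a bitwise AND of streams:

        import random

        def stream(p, length, rng):
            # Bernoulli(p) bit stream representing probability p
            return [1 if rng.random() < p else 0 for _ in range(length)]

        rng = random.Random(7)
        L = 100000
        a, b = stream(0.8, L, rng), stream(0.6, L, rng)
        prod = [x & y for x, y in zip(a, b)]
        print(sum(prod) / L)   # about 0.48 = 0.8 * 0.6
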
  • SHARP: Spectrum Harvesting with ARQ Retransmission and Probing in Cognitive Radio

    Publication Year: 2013 , Page(s): 951 - 960
    Cited by:  Papers (3)

    In underlay cognitive radio, a secondary user transmits in the transmission band of a primary user without seriously degrading the primary user's performance. This paper proposes a method of underlay cognitive radio in which the secondary pair listens to the primary ARQ feedback to glean information about the primary channel. The secondary transmitter may also probe the channel by transmitting a packet and listening to the primary ARQ, thus obtaining additional information about the relative strength of the cross channel and the primary channel. The method is called Spectrum Harvesting with ARQ Retransmission and Probing (SHARP). Probing is done only infrequently to minimize its impact on the primary throughput. Two varieties of spectrum sharing, named conservative and aggressive SHARP, are introduced. Both methods avoid introducing any outage in the primary; their difference is that conservative SHARP leaves the primary operations altogether unaffected, while aggressive SHARP may occasionally force the primary to use two transmission cycles instead of one for a packet, in order to harvest a better throughput for the secondary. The performance of the proposed system is analyzed, and it is shown that the secondary throughput can be significantly improved via the proposed approach, possibly with a small loss of primary throughput during the transmission and probing periods.

  • On Beamforming and Orthogonal Space-Time Coding in Cognitive Networks with Partial CSI

    Publication Year: 2013 , Page(s): 961 - 972

    We consider a pair of secondary users that coexist, in a cognitive network, with multiple primary user pairs. The secondary link is supplied with partial network side information (NSI), which comprises message side information and partial channel side information (CSI), available in different levels at both the transmitter and the receiver of the cognitive link. The cognitive transceiver design has to obey predefined quality-of-service (QoS) criteria that need to be maintained at the primary receivers, and at the same time properly handle the incoming interference from each primary transmitter in order to establish reliable communication. In this framework, we investigate the design and performance of the combined beamforming and orthogonal space-time block coding (BOSTBC) strategy, whose merits are well documented, as a candidate transmission scheme for the secondary link. We study both aspects of the composite problem, QoS and interference, and characterize how they affect the beamformer design and the cognitive link performance in the presence of partial NSI. Further, we propose a CSI quality-dependent model for the QoS criteria which yields an interesting trade-off between the cognitive link design and the primary QoS. Numerical results illustrate the system performance in this framework.

  • A New Physical-Layer Network Coding Scheme with Eigen-Direction Alignment Precoding for MIMO Two-Way Relaying

    Publication Year: 2013 , Page(s): 973 - 986
    Cited by:  Papers (9)

    We investigate efficient communication over multiple-input multiple-output (MIMO) two-way relay channels (TWRCs), where two multi-antenna users exchange information via a multi-antenna relay. We propose a new MIMO physical-layer network coding (PNC) scheme that includes novel eigen-direction alignment (EDA) precoding. The proposed EDA precoding efficiently aligns the two users' eigen-modes into the same set of orthogonal directions, and multiple independent PNC streams are implemented over the aligned eigen-modes. We derive an achievable rate-pair of the proposed scheme, for given EDA precoding parameters, over a MIMO TWRC. To maximize the achievable rate-region, we formulate a design criterion for the EDA precoding parameters and present solutions to the formulation. Closed-form bounds on the sum-rates of the designed EDA-PNC schemes are derived. Numerical results show that there is only a small gap between the achievable rate of the proposed scheme and the capacity upper bound of the MIMO TWRC, and that the proposed scheme can significantly outperform existing schemes in the literature.

  • Relay Selection Schemes and Performance Analysis Approximations for Two-Way Networks

    Publication Year: 2013 , Page(s): 987 - 998
    Cited by:  Papers (1)

    This paper studies relay selection schemes for two-way amplify-and-forward (AF) relay networks. For a network with two users that exchange information via multiple AF relays, we first consider a single-relay selection (SRS) scheme based on maximizing the worse signal-to-noise ratio (SNR) of the two end users. The cumulative distribution function (CDF) of the worse SNR of the two users and its approximations are obtained, based on which the block error rate (BLER), the diversity order, the outage probability, and the sum-rate of the two-way network are derived. Then, with the help of relay ordering, a multiple-relay selection (MRS) scheme is developed. The training overhead and feedback requirement for the implementation of the relay selection schemes are discussed. Numerical and simulation results are provided to corroborate the analytical results.

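    A minimal sketch of the max-min SRS rule described above, applied to hypothetical instantaneous SNRs (the values below are made up for illustration):

        def select_relay(snr_user1, snr_user2):
            # max-min single-relay selection over K two-way AF relays
            worse = [min(a, b) for a, b in zip(snr_user1, snr_user2)]
            return max(range(len(worse)), key=worse.__getitem__)

        # hypothetical per-relay SNRs (linear scale) seen by each end user
        g1 = [4.2, 9.1, 3.3, 7.8]
        g2 = [6.5, 2.0, 5.1, 7.0]
        print(select_relay(g1, g2))   # relay 3: min(7.8, 7.0) = 7.0 is largest
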
  • Exploiting Network Cooperation in Green Wireless Communication

    Publication Year: 2013 , Page(s): 999 - 1010
    Cited by:  Papers (5)

    There is a growing interest in energy-efficient, or so-called "green", wireless communication to reduce the energy consumption in cellular networks. Since today's wireless terminals are typically equipped with multiple network access interfaces, such as Bluetooth, Wi-Fi, and cellular, this paper investigates user terminals that cooperate with each other in transmitting their data packets to the base station (BS) by exploiting these multiple interfaces, an approach called inter-network cooperation. We also examine the conventional schemes without user cooperation and with intra-network cooperation for comparison. Given target outage probability and data rate requirements, we analyze the energy consumption of the conventional schemes and of the proposed inter-network cooperation by taking into account both physical-layer channel impairments and upper-layer protocol overheads. It is shown that the distances between different network entities (i.e., user terminals and the BS) have a significant influence on the energy efficiency of the proposed inter-network cooperation scheme. Specifically, when the cooperating users are close to the BS or far away from each other, inter-network cooperation may consume more energy than the conventional schemes without user cooperation or with intra-network cooperation. However, when the cooperating users move away from the BS and the inter-user distance is not too large, inter-network cooperation significantly reduces the energy consumption relative to the conventional schemes.

  • A Compute-and-Forward Scheme for Gaussian Bi-Directional Relaying with Inter-Symbol Interference

    Publication Year: 2013 , Page(s): 1011 - 1019

    We provide inner and outer bounds on the capacity region of Gaussian bi-directional relaying over inter-symbol interference channels. The outer bound is obtained by the conventional cut-set argument. For the inner bound, we propose a compute-and-forward coding scheme based on lattice partition chains and study its achievable rate. It is a time-domain coding scheme that uses a novel precoding scheme at the transmitter, in combination with lattice precoding and a minimum mean squared error receiver, to recover linear combinations of lattice codewords. The proposed compute-and-forward coding scheme substantially outperforms decode-and-forward schemes. While it is well known that for point-to-point communication both independent coding along sub-channels and time-domain coding can approach the capacity limit, as a byproduct of the proposed scheme we show that for the bi-directional relay case, independent coding along sub-channels is not optimal in general, and joint coding across sub-channels can improve the capacity for some channel realizations.

  • On the Capacity of Duplication Channels

    Publication Year: 2013 , Page(s): 1020 - 1027

    The i.i.d. duplication channel, which duplicates each symbol independently with a certain probability, is studied. The contribution is twofold: first, a tight lower bound on the capacity of such channels is introduced. Second, the capacity is computed for small values of the duplication probability using a series expansion representation.

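    A small simulator of the i.i.d. duplication channel as defined above (our sketch of the model): each symbol is independently emitted twice with probability p:

        import random

        def duplication_channel(symbols, p, rng):
            # each symbol is duplicated (emitted twice) with probability p
            out = []
            for s in symbols:
                out.append(s)
                if rng.random() < p:
                    out.append(s)
            return out

        rng = random.Random(3)
        print(duplication_channel([0, 1, 1, 0, 1], p=0.3, rng=rng))
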
  • Gallager's Exponent Analysis of STBC MIMO Systems over η-μ and κ-μ Fading Channels

    Publication Year: 2013 , Page(s): 1028 - 1039
    Cited by:  Papers (2)

    In this paper, we analytically investigate Gallager's exponent for space-time block codes over multiple-input multiple-output block-fading channels with a Gaussian input distribution. As a suitable metric of the fundamental tradeoff between communication reliability and information rate, Gallager's exponent can be used to determine the required codeword length to achieve a prescribed error probability at a given rate below the channel capacity. We assume that the receiver has full channel state information (CSI), while the transmitter has no CSI and performs equal power allocation across all transmit antennas. Novel exact expressions for Gallager's exponent are derived for two well-known fading models, namely the η-μ and κ-μ models. More importantly, the implications of the fading parameters and the channel coherence time on Gallager's exponent are investigated. In addition, we present new expressions for the Shannon capacity, cutoff rate, and expurgated exponent for the above-mentioned fading models, while in the high signal-to-noise ratio regime, simplified closed-form expressions are also derived. Finally, we highlight the fact that the presented analysis encompasses all previously known results on Nakagami-m, Rician, Rayleigh, and Hoyt fading channels as special cases.

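    For readers new to error exponents, a numerical sketch of Gallager's random-coding exponent E_r(R) = max over 0 <= rho <= 1 of [E_0(rho) - rho*R], evaluated for the simple BSC with uniform inputs (the paper itself derives exact exponents for the η-μ and κ-μ MIMO fading models):

        import numpy as np

        def E0(rho, p):
            # Gallager's E0 for the BSC with crossover p, uniform input (in bits)
            s = p ** (1 / (1 + rho)) + (1 - p) ** (1 / (1 + rho))
            return rho - (1 + rho) * np.log2(s)

        def random_coding_exponent(R, p, grid=1001):
            rho = np.linspace(0.0, 1.0, grid)
            return np.max(E0(rho, p) - rho * R)

        # exponent is positive below capacity C = 1 - H2(0.05) ~ 0.714 bits
        print(random_coding_exponent(R=0.4, p=0.05))
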
  • Reduced and Fixed-Complexity Variants of the LLL Algorithm for Communications

    Publication Year: 2013 , Page(s): 1040 - 1050
    Cited by:  Papers (2)

    The Lenstra-Lenstra-Lovász (LLL) algorithm is a popular lattice reduction algorithm in communications. In this paper, variants of the LLL algorithm with either reduced or fixed complexity are proposed and analyzed. Specifically, the use of effective LLL reduction for lattice decoding is presented, where size reduction is only performed for pairs of consecutive basis vectors. Its average complexity (measured by the number of floating-point operations and averaged over i.i.d. standard normal lattice bases) is shown to be O(n^3 log n), where n is the lattice dimension. This average complexity is an order lower than previously thought. To address the issue of the variable complexity of the LLL algorithm, two fixed-complexity approximations are proposed. One is fixed-complexity effective LLL, for which the first vector of the basis is proven to be bounded in length; the other is fixed-complexity LLL with deep insertion, which is shown to be closely related to the well-known V-BLAST algorithm. Such fixed-complexity structures are highly desirable in hardware implementation since they allow straightforward constant-throughput implementation.

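    A compact sketch of effective LLL reduction under our reading of the idea: size reduction touches only consecutive basis pairs, and for clarity the Gram-Schmidt data is naively recomputed after every change (a practical implementation would update it incrementally):

        import numpy as np

        def gso(B):
            # Gram-Schmidt orthogonalization of the rows of B
            n = len(B)
            Bs = B.astype(float)
            mu = np.zeros((n, n))
            for i in range(n):
                for j in range(i):
                    mu[i, j] = (B[i] @ Bs[j]) / (Bs[j] @ Bs[j])
                    Bs[i] = Bs[i] - mu[i, j] * Bs[j]
            return Bs, mu

        def effective_lll(B, delta=0.75):
            B = np.array(B, dtype=float)
            n, k = len(B), 1
            while k < n:
                Bs, mu = gso(B)
                # effective LLL: size-reduce only against the previous vector
                if abs(mu[k, k - 1]) > 0.5:
                    B[k] -= np.rint(mu[k, k - 1]) * B[k - 1]
                    Bs, mu = gso(B)
                # Lovasz condition with parameter delta
                if Bs[k] @ Bs[k] >= (delta - mu[k, k - 1] ** 2) * (Bs[k - 1] @ Bs[k - 1]):
                    k += 1
                else:
                    B[[k - 1, k]] = B[[k, k - 1]]   # swap and step back
                    k = max(k - 1, 1)
            return B

        print(effective_lll(np.array([[1, 1, 1], [-1, 0, 2], [3, 5, 6]])))
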
  • Full-Diversity Achieving Precoding for Asymmetric MIMO Using Partial CSIT

    Publication Year: 2013 , Page(s): 1051 - 1058

    For a block-fading nt × nr (nt > nr) MIMO channel, we propose a precoding scheme that achieves both ntnr-th order diversity and a rate of nt symbols per channel use, feeding back B(nt - 1) bits as partial channel state information at the transmitter (CSIT), where 2^B is the number of quantization states available. We establish the optimality of the uniform quantizer, which achieves the minimum loss in coding gain due to quantization of the feedback values of our precoding scheme, in comparison with non-uniform quantization. We also lay down guidelines for constellation sets with which our precoding scheme can achieve full diversity, and we derive the order of complexity involved in computing the precoder matrix, showing that it is independent of the size of the constellation sets. We compare our BER results with those of precoding schemes in the literature utilizing full CSIT as well as partial CSIT. We also investigate the loss in error-rate performance due to imperfect channel knowledge at the receiver and present simulation results to verify our claims.

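    A generic sketch of the B-bit uniform phase quantization on which such limited-feedback schemes rely (illustrative only; the paper's quantizer is applied to its own specific feedback values): the receiver maps each phase to one of 2^B uniformly spaced levels and feeds back the index:

        import numpy as np

        def quantize_phase(theta, B):
            # uniform quantizer over [-pi, pi) with 2^B reconstruction levels
            levels = 2 ** B
            step = 2 * np.pi / levels
            idx = np.floor((theta + np.pi) / step).astype(int) % levels
            return idx, -np.pi + (idx + 0.5) * step   # index fed back, level used

        theta = np.angle(np.array([1 + 1j, -2 + 0.5j, 0.3 - 1j]))
        idx, theta_hat = quantize_phase(theta, B=3)
        print(idx, np.max(np.abs(theta - theta_hat)))  # error <= pi / 2^B
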
  • Per-Antenna Constant Envelope Precoding for Large Multi-User MIMO Systems

    Publication Year: 2013 , Page(s): 1059 - 1071
    Cited by:  Papers (12)

    We consider the multi-user MIMO broadcast channel with M single-antenna users and N transmit antennas under the constraint that each antenna emits signals having constant envelope (CE). The motivation is that CE signals facilitate the use of power-efficient RF power amplifiers. Analytical and numerical results show that, under certain mild conditions on the channel gains, for a fixed M, an array gain is achievable even under the stringent per-antenna CE constraint. Essentially, for a fixed M, at sufficiently large N the total transmitted power can be reduced with increasing N while maintaining a fixed information rate to each user. Simulations for the i.i.d. Rayleigh fading channel show that the total transmit power can be reduced linearly with increasing N (i.e., an O(N) array gain). We also propose a precoding scheme which finds near-optimal CE signals to transmit and has O(MN) complexity. Moreover, despite the stringent per-antenna CE constraint, in terms of the total transmit power required to achieve a fixed desired information sum-rate, the proposed CE precoding scheme performs close to the sum-capacity-achieving scheme for a channel constrained only by an average total transmit power.

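    One simple way to search for per-antenna constant-envelope signals, sketched as cyclic coordinate descent on the transmit phases (an illustrative stand-in, not the paper's algorithm): each step updates one antenna's phase in closed form with the others held fixed:

        import numpy as np

        def ce_precode(H, u, sweeps=50):
            # find unit-modulus x (one phase per antenna) with H x close to u
            M, N = H.shape
            theta = np.zeros(N)
            for _ in range(sweeps):
                for i in range(N):
                    x = np.exp(1j * theta)
                    # residual with antenna i's contribution removed
                    r = u - H @ x + H[:, i] * x[i]
                    theta[i] = np.angle(np.vdot(H[:, i], r))  # closed-form best phase
            return np.exp(1j * theta)

        rng = np.random.default_rng(0)
        M, N = 4, 32
        H = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)
        u = rng.standard_normal(M) + 1j * rng.standard_normal(M)
        x = ce_precode(H, u)
        print(np.linalg.norm(u - H @ x))   # small residual when N >> M
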

Aims & Scope

IEEE Transactions on Communications focuses on all telecommunications, including telephone, telegraphy, facsimile, and point-to-point television, by electromagnetic propagation.

 

 


Meet Our Editors

Editor-in-Chief
Robert Schober
University of British Columbia