
IEEE Transactions on Information Theory

Issue 11 • Nov. 2006


Displaying Results 1 - 25 of 41
  • Table of contents

    Publication Year: 2006 , Page(s): c1 - c4
  • IEEE Transactions on Information Theory publication information

    Publication Year: 2006 , Page(s): c2
  • Consensus Propagation

    Publication Year: 2006 , Page(s): 4753 - 4766
    Cited by:  Papers (83)  |  Patents (1)

    We propose consensus propagation, an asynchronous distributed protocol for averaging numbers across a network. We establish convergence, characterize the convergence rate for regular graphs, and demonstrate that the protocol exhibits better scaling properties than pairwise averaging, an alternative that has received much recent attention. Consensus propagation can be viewed as a special case of belief propagation, and our results contribute to the belief propagation literature. In particular, beyond singly connected graphs, there are very few classes of relevant problems for which belief propagation is known to converge.

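    As background for the comparison drawn in the abstract above, the following is a minimal sketch of the pairwise-averaging (gossip) baseline, in which the two endpoints of a randomly chosen edge repeatedly replace their values with their mutual average. It is not the consensus propagation protocol itself; the graph, tolerance, and stopping check (which uses the global mean purely for simulation convenience) are illustrative assumptions.

```python
import random

def pairwise_averaging(values, edges, tol=1e-9, max_iters=100_000):
    """Gossip-style averaging: repeatedly pick a random edge and replace
    both endpoint values with their average.  On a connected graph every
    node's value converges to the global mean."""
    values = dict(values)
    mean = sum(values.values()) / len(values)   # used only to decide when to stop
    for _ in range(max_iters):
        i, j = random.choice(edges)
        avg = (values[i] + values[j]) / 2.0
        values[i] = values[j] = avg
        if max(abs(v - mean) for v in values.values()) < tol:
            break
    return values

# Example: averaging over a 4-node ring.
print(pairwise_averaging({0: 1.0, 1: 5.0, 2: 3.0, 3: 7.0},
                         edges=[(0, 1), (1, 2), (2, 3), (3, 0)]))
```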
  • On Strong Consistency of Model Selection in Classification

    Publication Year: 2006 , Page(s): 4767 - 4774
    Cited by:  Papers (5)

    This paper considers model selection in classification. In many applications, such as pattern recognition, probabilistic inference using a Bayesian network, or prediction of the next symbol in a sequence based on a Markov chain, the conditional probability P(Y=y|X=x) of class y ∈ Y given attribute value x ∈ X is utilized. By a model we mean the equivalence relation on X defined by x ~ x' ⟺ P(Y=y|X=x) = P(Y=y|X=x') for all y ∈ Y; by classification we mean that the number of such equivalence classes is finite. We estimate the model from n samples z^n = (x_i, y_i)_{i=1}^n ∈ (X × Y)^n using information criteria of the form empirical entropy H plus a penalty term (k/2)d_n (the model minimizing H + (k/2)d_n is the estimated model), where k is the number of independent parameters in the model and {d_n} is a nonnegative real sequence with lim sup_n d_n/n = 0. For autoregressive processes, although the definitions of H and k are different, it is known (Hannan and Quinn) that the estimated model almost surely coincides with the true model as n → ∞ if d_n > 2 log log n, and that it does not if d_n < 2 log log n. Whether the same property holds for classification was an open problem. This paper solves the problem in the affirmative.

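    The penalized criterion described in the abstract above can be written in display form as follows; this is only a transcription of the abstract's notation, not additional material from the paper.

```latex
% Choose the model M minimizing the penalized empirical entropy
\hat{M} \;=\; \arg\min_{M}\ \Bigl[\, H_M(z^n) + \tfrac{k_M}{2}\, d_n \,\Bigr],
\qquad \limsup_{n\to\infty} \frac{d_n}{n} = 0,
% where k_M is the number of independent parameters of model M.
% The Hannan--Quinn-type threshold separating consistency from
% inconsistency is d_n = 2 \log\log n.
```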
  • Online Regularized Classification Algorithms

    Publication Year: 2006 , Page(s): 4775 - 4788
    Cited by:  Papers (4)

    This paper considers online classification learning algorithms based on regularization schemes in reproducing kernel Hilbert spaces associated with general convex loss functions. A novel capacity-independent approach is presented. It verifies the strong convergence of the algorithm under a very weak assumption on the step sizes and yields satisfactory convergence rates for polynomially decaying step sizes. Explicit learning rates with respect to the misclassification error are given in terms of the choice of step sizes and the regularization parameter (depending on the sample size). Error bounds associated with the hinge loss, the least squares loss, and the support vector machine q-norm loss are presented to illustrate our method.

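    As a rough illustration of the class of algorithms studied above (not the paper's exact scheme or constants), here is a sketch of online regularized classification in an RKHS with the hinge loss, a fixed regularization parameter, and polynomially decaying step sizes. The Gaussian kernel and all parameter values are illustrative assumptions.

```python
import math

def gaussian_kernel(x, z, sigma=1.0):
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, z)) / (2 * sigma ** 2))

class OnlineRegularizedClassifier:
    """Online regularized classification in an RKHS (illustrative sketch).

    Update at step t, with step size eta_t = eta0 * t**(-theta):
        f_{t+1} = (1 - eta_t*lam) f_t + eta_t * y_t * K(x_t, .)  if y_t f_t(x_t) < 1
        f_{t+1} = (1 - eta_t*lam) f_t                            otherwise
    """

    def __init__(self, kernel=gaussian_kernel, lam=0.1, eta0=0.5, theta=0.5):
        self.kernel, self.lam, self.eta0, self.theta = kernel, lam, eta0, theta
        self.support = []          # (coefficient, point) pairs representing f
        self.t = 0

    def decision(self, x):
        return sum(c * self.kernel(z, x) for c, z in self.support)

    def partial_fit(self, x, y):   # y is -1 or +1
        self.t += 1
        eta = self.eta0 * self.t ** (-self.theta)
        violated = y * self.decision(x) < 1.0      # hinge-loss margin check on f_t
        shrink = 1.0 - eta * self.lam              # effect of the regularizer
        self.support = [(shrink * c, z) for c, z in self.support]
        if violated:
            self.support.append((eta * y, x))

    def predict(self, x):
        return 1 if self.decision(x) >= 0 else -1

# Tiny usage example on a toy data stream.
clf = OnlineRegularizedClassifier()
for x, y in [((0.0, 0.0), -1), ((1.0, 1.0), 1), ((0.1, 0.2), -1), ((0.9, 1.1), 1)]:
    clf.partial_fit(x, y)
print(clf.predict((1.0, 0.9)), clf.predict((0.0, 0.1)))
```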
  • Schemes for Bidirectional Modeling of Discrete Stationary Sources

    Publication Year: 2006 , Page(s): 4789 - 4807
    Cited by:  Papers (11)

    We develop adaptive schemes for bidirectional modeling of unknown discrete stationary sources. These schemes can be applied to statistical inference problems, such as noncausal universal discrete denoising, that exploit bidirectional dependencies. Efficient algorithms for constructing the models are developed, and we compare their performance with that of the DUDE algorithm for universal discrete denoising.

  • Measurement of Time-Variant Linear Channels

    Publication Year: 2006 , Page(s): 4808 - 4820
    Cited by:  Papers (8)

    The goal of channel measurement or operator identification is to obtain complete knowledge of a channel operator by observing the image of a finite number of input signals. In this paper, it is shown that if the spreading support of the operator (that is, the support of the symplectic Fourier transform of the Kohn-Nirenberg symbol of the operator) has area less than one, then the operator is identifiable. If the area of the spreading support is greater than one, then the operator is not identifiable. The shape of the support region is essentially arbitrary, thereby proving a conjecture of Bello. The input signal considered is a weighted delta train, where the weights are the window function of a finite Gabor system whose elements satisfy a certain robust completeness property.

  • Single-User Broadcasting Protocols Over a Two-Hop Relay Fading Channel

    Publication Year: 2006 , Page(s): 4821 - 4838
    Cited by:  Papers (19)  |  Patents (1)

    A two-hop relay fading channel is considered, where only the decoders possess perfect channel state information (CSI). Various relaying protocols and broadcasting strategies are studied. The main focus of this work is on simple relay transmission scheduling schemes. For decode-and-forward (DF) relaying, the simple relay can neither buffer multiple packets nor reschedule retransmissions. This gives rise to consideration of other relaying techniques, such as amplify-and-forward (AF), for which a maximal achievable broadcasting rate is analytically derived. A quantize-and-forward (QF) relay, coupled with a single-level code at the source, uses codebooks matched to the received signal power and performs optimal quantization. This is simplified by a hybrid amplify-QF (AQF) relay, which performs scaling and single-codebook quantization of the input. It is shown that the latter is optimal in terms of throughput on the relay-destination link, while maintaining lower coding complexity than the QF setting. A further extension of the AQF allows the relay to perform successive refinement, coupled with a matched multilevel code. Numerical results show that, for high signal-to-noise ratios (SNRs), the broadcast approach over the AF relay may achieve higher throughput gains than the other relaying protocols that are numerically tractable.

  • Weight Distribution of Low-Density Parity-Check Codes

    Publication Year: 2006 , Page(s): 4839 - 4855
    Cited by:  Papers (60)

    We derive the average weight distribution function and its asymptotic growth rate for low-density parity-check (LDPC) code ensembles. We show that the growth rate of the minimum distance of LDPC codes depends only on the degree distribution pair. It turns out that capacity-achieving sequences of standard (unstructured) LDPC codes under iterative decoding over the binary erasure channel (BEC) known to date have minimum distance growing sublinearly in the block length.

  • Average Coset Weight Distribution of Combined LDPC Matrix Ensembles

    Publication Year: 2006 , Page(s): 4856 - 4866
    Cited by:  Papers (6)

    In this paper, the average coset weight distribution (ACWD) of structured ensembles of low-density parity-check (LDPC) matrices, called combined ensembles, is discussed. A combined ensemble is composed of a set of simpler ensembles, such as regular bipartite ensembles. Two classes of combined ensembles are of prime importance: stacked ensembles and concatenated ensembles. ACWD formulas for these ensembles are derived; such formulas play a key role in evaluating the average weight distribution of some classes of combined ensembles.

  • On the Stopping Redundancy of Reed–Muller Codes

    Publication Year: 2006 , Page(s): 4867 - 4879
    Cited by:  Papers (14)

    The stopping redundancy of a code is an important parameter arising from the analysis of the performance of a linear code under iterative decoding on a binary erasure channel. In this paper, we consider the stopping redundancy of Reed-Muller codes and related codes. Let R(ℓ, m) be the Reed-Muller code of length 2^m and order ℓ. Schwartz and Vardy gave a recursive construction of parity-check matrices for the Reed-Muller codes, and asked whether the number of rows in those parity-check matrices equals the stopping redundancy of the codes. We prove that the stopping redundancy of R(m-2, m), which is also the extended Hamming code of length 2^m, is 2^m - 1, and thus show that the recursive bound is tight in this case. We prove that the stopping redundancy of the simplex code equals its redundancy. Several constructions of codes for which the stopping redundancy equals the redundancy are discussed. We prove an upper bound on the stopping redundancy of R(1, m). This bound is better than the known recursive bound and thus gives a negative answer to the question of Schwartz and Vardy.

  • Error Exponents for Recursive Decoding of Reed–Muller Codes on a Binary-Symmetric Channel

    Publication Year: 2006 , Page(s): 4880 - 4891
    Cited by:  Papers (3)

    Error exponents are studied for recursive decoding of Reed-Muller (RM) codes and their subcodes used on a binary-symmetric channel. The decoding process is first decomposed into similar steps, with one new information bit derived in each step. Multiple recursive additions and multiplications of the randomly corrupted channel outputs ±1 are performed using a specific order of these two operations in each step. The recalculated random outputs are compared in terms of their exponential moments. As a result, tight analytical bounds are obtained for the decoding error probability of the two recursive algorithms considered in the paper. For both algorithms, the derived error exponents almost coincide with simulation results. Comparison of these bounds with similar bounds for bounded-distance decoding and majority decoding shows that recursive decoding can reduce the output error probability of the latter two algorithms by five or more orders of magnitude, even at the short block length of 256. It is also proven that the error probability of recursive decoding can be exponentially reduced by eliminating one or a few information bits from the original RM code.

  • Nonbinary Stabilizer Codes Over Finite Fields

    Publication Year: 2006 , Page(s): 4892 - 4914
    Cited by:  Papers (33)

    One formidable difficulty in quantum communication and computation is to protect information-carrying quantum states against undesired interactions with the environment. To address this difficulty, many good quantum error-correcting codes have been derived as binary stabilizer codes. Fault-tolerant quantum computation prompted the study of nonbinary quantum codes, but the theory of such codes is not as advanced as that of binary quantum codes. This paper describes the basic theory of stabilizer codes over finite fields. The relation between stabilizer codes and general quantum codes is clarified by introducing a Galois theory for these objects. A characterization of nonbinary stabilizer codes over F_q in terms of classical codes over F_{q^2} is provided that generalizes the well-known notion of additive codes over F_4 from the binary case. This paper also derives lower and upper bounds on the minimum distance of stabilizer codes, gives several code constructions, and derives numerous families of stabilizer codes, including quantum Hamming codes, quadratic residue codes, quantum Melas codes, quantum Bose-Chaudhuri-Hocquenghem (BCH) codes, and quantum character codes. The puncturing theory by Rains is generalized to additive codes that are not necessarily pure. Bounds on the maximal length of maximum distance separable stabilizer codes are given. A discussion of open problems concludes this paper.

  • Universal Lossless Compression With Unknown Alphabets—The Average Case

    Publication Year: 2006 , Page(s): 4915 - 4944
    Cited by:  Papers (9)

    Universal compression of patterns of sequences generated by independent and identically distributed (i.i.d.) sources with unknown, possibly large, alphabets is investigated. A pattern is a sequence of indices that contains all consecutive indices in increasing order of first occurrence. If the alphabet of a source that generated a sequence is unknown, the inevitable cost of coding the unknown alphabet symbols can be exploited to create the pattern of the sequence. This pattern can in turn be compressed by itself. It is shown that if the alphabet size k is essentially small, then the average minimax and maximin redundancies, as well as the redundancy of every code for almost every source, when compressing a pattern, consist of at least 0.5 log(n/k^3) bits per unknown probability parameter, and if all alphabet letters are likely to occur, there exist codes whose redundancy is at most 0.5 log(n/k^2) bits per unknown probability parameter, where n is the length of the data sequences. Otherwise, if the alphabet is large, these redundancies are essentially at least Θ(n^{-2/3}) bits per symbol, and there exist codes that achieve redundancy of O(n^{-1/2}) bits per symbol. Two suboptimal low-complexity sequential algorithms for compression of patterns are presented and their description lengths analyzed, also pointing out that the average universal description length of a pattern can decrease below the underlying i.i.d. entropy for large enough alphabets.

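    Collecting the regimes stated in the abstract above in display form (n is the sequence length, k the alphabet size); this is a transcription of the abstract, not additional material.

```latex
% Small alphabets: redundancy per unknown probability parameter satisfies
% (the upper bound holds when all alphabet letters are likely to occur)
\tfrac{1}{2}\log\frac{n}{k^{3}}
  \;\le\; \text{redundancy per parameter}
  \;\le\; \tfrac{1}{2}\log\frac{n}{k^{2}},
% Large alphabets: redundancy per symbol satisfies
\Theta\!\left(n^{-2/3}\right)
  \;\le\; \text{redundancy per symbol}
  \;\le\; O\!\left(n^{-1/2}\right).
```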
  • Quantization of Multiple Sources Using Nonnegative Integer Bit Allocation

    Publication Year: 2006 , Page(s): 4945 - 4964
    Cited by:  Papers (7)

    Asymptotically optimal real-valued bit allocation among a set of quantizers for a finite collection of sources was derived in 1963 by Huang and Schultheiss, and an algorithm for obtaining an optimal nonnegative integer-valued bit allocation was given by Fox in 1966. We prove that, for a given bit budget, the set of optimal nonnegative integer-valued bit allocations is equal to the set of nonnegative integer-valued bit-allocation vectors which minimize the Euclidean distance to the optimal real-valued bit-allocation vector of Huang and Schultheiss. We also give an algorithm for finding optimal nonnegative integer-valued bit allocations. The algorithm has lower computational complexity than Fox's algorithm as the bit budget grows. Finally, we compare the performance of the Huang-Schultheiss solution to that of an optimal integer-valued bit allocation. Specifically, we derive upper and lower bounds on the deviation of the mean-squared error (MSE) using optimal integer-valued bit allocation from the MSE using optimal real-valued bit allocation. It is shown that, for asymptotically large transmission rates, optimal integer-valued bit allocations do not necessarily achieve the same performance as that predicted by Huang-Schultheiss for optimal real-valued bit allocations.

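    The two objects the abstract relates (the real-valued Huang-Schultheiss allocation and a nearby nonnegative integer allocation with the same budget) can be illustrated with the following sketch. The rounding step shown is a simple heuristic (floor, then hand out leftover bits by largest fractional part), not the paper's algorithm, and it assumes the real-valued allocation is already nonnegative; all numbers are illustrative.

```python
import math

def huang_schultheiss(variances, total_bits):
    """Real-valued allocation b_i = B/n + 0.5*log2(var_i / geometric_mean)."""
    n = len(variances)
    gm = math.exp(sum(math.log(v) for v in variances) / n)   # geometric mean
    return [total_bits / n + 0.5 * math.log2(v / gm) for v in variances]

def integer_allocation(real_alloc, total_bits):
    """Heuristic nonnegative integer allocation with the same bit budget:
    floor each component, then give the remaining bits to the components
    with the largest fractional parts.  Illustrative only; not guaranteed
    to be the Euclidean-nearest integer vector in every case."""
    floors = [max(0, math.floor(b)) for b in real_alloc]
    remaining = max(0, total_bits - sum(floors))
    by_fraction = sorted(range(len(real_alloc)),
                         key=lambda i: real_alloc[i] - floors[i], reverse=True)
    alloc = list(floors)
    for i in by_fraction[:remaining]:
        alloc[i] += 1
    return alloc

variances = [4.0, 2.0, 1.0, 0.5]      # illustrative source variances
B = 8                                 # illustrative total bit budget
b_real = huang_schultheiss(variances, B)
print(b_real)                         # [2.75, 2.25, 1.75, 1.25]
print(integer_allocation(b_real, B))  # [3, 2, 2, 1]
```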
  • Randomizing Functions: Simulation of a Discrete Probability Distribution Using a Source of Unknown Distribution

    Publication Year: 2006 , Page(s): 4965 - 4976
    Cited by:  Papers (4)

    In this paper, we characterize functions that simulate independent unbiased coin flips from independent coin flips of unknown bias. We call such functions randomizing. Our characterization of randomizing functions enables us to identify the functions that generate the largest average number of fair coin flips from a fixed number of biased coin flips. We show that these optimal functions are efficiently computable. Then we generalize the characterization, and we present a method to simulate an arbitrary rational probability distribution optimally (in terms of the average number of output digits) and efficiently (in terms of computational complexity) from outputs of many-faced dice of unknown distribution. We also study randomizing functions on exhaustive prefix-free sets.

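    The classical special case of such a randomizing function is von Neumann's procedure for extracting fair bits from a coin of unknown bias; the entry above characterizes and optimizes the general class, but this minimal sketch of the classical procedure illustrates the problem setting.

```python
import random

def von_neumann_extractor(biased_bits):
    """Classical von Neumann procedure: read the biased bits in
    non-overlapping pairs, output 0 for the pair (0, 1) and 1 for (1, 0),
    and discard (0, 0) and (1, 1).  The outputs are independent fair bits
    for any unknown bias p with 0 < p < 1."""
    it = iter(biased_bits)
    return [0 if pair == (0, 1) else 1
            for pair in zip(it, it) if pair[0] != pair[1]]

# Example with a heavily biased source (p = 0.9).
biased = [1 if random.random() < 0.9 else 0 for _ in range(1000)]
print(von_neumann_extractor(biased)[:20])
```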
  • Spatial Capacity of Multiple-Access Wireless Networks

    Publication Year: 2006 , Page(s): 4977 - 4988
    Cited by:  Papers (1)

    We study the capacity of multiple-access networks on both the uplink and the downlink. In our model, each user requires a given signal-to-interference-plus-noise ratio (SINR), and the capacity region is obtained as the solution of a power allocation problem. In this paper, we emphasize the differences between the uplink and the downlink. The mathematical analysis of the capacity region is carried out in the framework of ergodic point processes, and we show the links between the geometry of the network and its capacity region. On the downlink, we pay attention to various network architectures and levels of cooperation between base stations: macrodiversity, load balancing, and traditional cellular networks.

  • Achievable Rates for the Discrete Memoryless Relay Channel With Partial Feedback Configurations

    Publication Year: 2006 , Page(s): 4989 - 5007
    Cited by:  Papers (13)

    Achievable rates over the discrete memoryless relay channel with partial feedback configurations are proposed. Specifically, we consider partial feedback from the receiver to the sender as well as partial feedback from the relay to the sender. These achievable rates are calculated for the general Gaussian and the Z relay channels and are shown to improve on the known one-way achievable rates.

  • On the Multiuser Capacity of WDM in a Nonlinear Optical Fiber: Coherent Communication

    Publication Year: 2006 , Page(s): 5008 - 5022
    Cited by:  Papers (11)

    Previous results suggest that the crosstalk produced by the fiber nonlinearity in a wavelength-division multiplexing (WDM) system imposes a severe limit on the capacity of optical fiber channels, since the interference power increases faster than the signal power, thereby limiting the maximum achievable signal-to-interference-plus-noise ratio (SINR). We study this system in the weakly nonlinear regime as a multiple-access channel and show that, by optimally using the information from all the channels for detection, the change in the capacity region due to the nonlinear effect is minimal. On the other hand, if the receiver uses the output of only one wavelength channel, the capacity is significantly reduced by the nonlinearity and saturates as the interference power becomes comparable to the noise, which is consistent with earlier results. The results hold in channels with or without memory. Every point in the capacity region can be achieved without knowledge of the nonlinearity parameters at the transmitters. The structures of optimal and suboptimal receivers are briefly discussed.

  • On the Deterministic-Code Capacity of the Two-User Discrete Memoryless Arbitrarily Varying General Broadcast Channel With Degraded Message Sets

    Publication Year: 2006 , Page(s): 5023 - 5044
    Cited by:  Papers (2)

    An inner bound on the deterministic-code capacity region of the two-user discrete memoryless arbitrarily varying general broadcast channel (AVGBC) was characterized by Jahn, assuming that the common message capacity is nonzero; however, he did not indicate how one could decide whether the latter capacity is positive. Csiszár and Narayan's result for the single-user arbitrarily varying channel (AVC) establishes the missing part of Jahn's characterization. Nevertheless, being based on Ahlswede's elimination technique, Jahn's characterization is not applicable to symmetrizable channels under a state constraint. Here, the various notions of symmetrizability for the two-user broadcast AVC are defined. A sufficient non-symmetrizability condition that renders the common message capacity of the AVGBC positive is identified using an approach different from Jahn's. The decoding rules we use establish an achievable region under state and input constraints for the family of degraded-message-set codes over the AVGBC.

  • MIMO Broadcast Channels With Finite-Rate Feedback

    Publication Year: 2006 , Page(s): 5045 - 5060
    Cited by:  Papers (550)  |  Patents (18)

    Multiple transmit antennas in a downlink channel can provide tremendous capacity (i.e., multiplexing) gains, even when receivers have only single antennas. However, receiver and transmitter channel state information is generally required. In this correspondence, a system where each receiver has perfect channel knowledge, but the transmitter only receives quantized information regarding the channel instantiation, is analyzed. The well-known zero-forcing transmission technique is considered, and simple expressions for the throughput degradation due to finite-rate feedback are derived. A key finding is that the feedback rate per mobile must be increased linearly with the signal-to-noise ratio (SNR) (in decibels) in order to achieve the full multiplexing gain. This is in sharp contrast to point-to-point multiple-input multiple-output (MIMO) systems, in which it is not necessary to increase the feedback rate as a function of the SNR.

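    The linear-in-dB scaling described in the abstract above is often written as follows for M transmit antennas and SNR P; the constant terms are omitted, and this rendering is an editorial gloss based on the commonly cited form of the result, not a quotation of the paper.

```latex
% Feedback (bits per mobile) must grow linearly with the SNR in dB to
% preserve the full multiplexing gain:
B \;\approx\; (M-1)\,\log_2 P \;\approx\; \frac{M-1}{3}\,P_{\mathrm{dB}},
% using P_{\mathrm{dB}} = 10\log_{10} P \approx 3\,\log_2 P.
```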
  • On Space–Time Trellis Codes Achieving Optimal Diversity Multiplexing Tradeoff

    Publication Year: 2006 , Page(s): 5060 - 5067
    Cited by:  Papers (4)

    Multiple antennas can be used for increasing the amount of diversity (diversity gain) or increasing the data rate (the number of degrees of freedom, or spatial multiplexing gain) in wireless communication. As quantified by Zheng and Tse, given a multiple-input multiple-output (MIMO) channel, both gains can in fact be obtained simultaneously, but there is a fundamental tradeoff, called the diversity-multiplexing gain (DM-G) tradeoff, between how much of each type of gain any coding scheme can extract. Space-time codes (STCs) can be employed to make use of these advantages offered by multiple antennas. Space-time trellis codes (STTCs) are known to have better bit error rate performance than space-time block codes (STBCs), but with a penalty in decoding complexity. Also, for STTCs, the frame length is assumed to be finite, and hence zeros are forced toward the end of the frame (the trailing zeros), inducing a rate loss. In this correspondence, we derive an upper bound on the DM-G tradeoff of full-rate STTCs with nonvanishing determinant (NVD). We also show that full-rate STTCs with NVD are optimal under the DM-G tradeoff for any number of transmit and receive antennas, neglecting the rate loss due to trailing zeros. Finally, we give an explicit generalized full-rate STTC construction for any number of trellis states, which achieves the optimal DM-G tradeoff for any number of transmit and receive antennas, neglecting the rate loss due to trailing zeros.

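    For reference, the optimal tradeoff of Zheng and Tse invoked above, for an n_t × n_r i.i.d. Rayleigh-fading MIMO channel with sufficiently long block length, is the piecewise-linear curve connecting the points given below.

```latex
% Zheng--Tse optimal diversity-multiplexing tradeoff: piecewise-linear
% interpolation of the points
d^{*}(r) \;=\; (n_t - r)(n_r - r), \qquad r = 0, 1, \ldots, \min(n_t, n_r),
% where r is the multiplexing gain and d^{*}(r) the corresponding
% diversity gain.
```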
  • Nonreversibility and Equivalent Constructions of Multiple-Unicast Networks

    Publication Year: 2006 , Page(s): 5067 - 5077
    Cited by:  Papers (27)

    We prove that for any finite directed acyclic network, there exists a corresponding multiple-unicast network such that, for every alphabet, each network is solvable if and only if the other is solvable, and, for every finite-field alphabet, each network is linearly solvable if and only if the other is linearly solvable. The proof is constructive and creates an extension of the original network by adding exactly s + 5m(r - 1) new nodes, where, in the original network, m is the number of messages, r is the average number of receiver nodes demanding each source message, and s is the number of messages emitted by more than one source. The construction is then used to create a solvable multiple-unicast network which becomes unsolvable over every alphabet size if all of its edge directions are reversed and the roles of the source-receiver pairs are reversed.

  • Min-Cost Selfish Multicast With Network Coding

    Publication Year: 2006 , Page(s): 5077 - 5087
    Cited by:  Papers (17)

    The single-source min-cost multicast problem, which can be framed as a convex optimization problem through the use of network codes and convex, increasing edge costs, is considered. A decentralized approach to this problem was presented by Lun, Ratnakar, et al. for the case where all users cooperate to reach the global minimum. Further, the cost for the scenario where each of the multicast receivers greedily routes its flows is analyzed, and the existence of a Nash equilibrium is proved. An allocation rule by which the edge cost at each edge is allocated to the flows through that edge is presented. We prove that, under our pricing rule, the flow cost at the user equilibrium is the same as the minimum cost. This leads to the construction of a selfish flow-steering algorithm for each receiver that is also globally optimal. Further, the algorithm is extended to allow completely distributed flow adaptation at nodes in the network, achieving globally minimal cost in steady state. Analogous results are also presented for the case of multiple multicast sessions.

  • A Large Deviations Analysis of Scheduling in Wireless Networks

    Publication Year: 2006 , Page(s): 5088 - 5098
    Cited by:  Papers (27)

    In this correspondence, we consider a cellular network consisting of a base station and N receivers. The channel states of the receivers are assumed to be identically distributed and independent of each other. The goal is to compare the throughput of two different scheduling policies (a queue-length-based (QLB) policy and a greedy policy) given an upper bound on the queue overflow probability or the delay violation probability. We consider a multistate channel model, where each channel is assumed to be in one of L states. Given an upper bound on the queue overflow probability or an upper bound on the delay violation probability, we show that the total network throughput of the QLB policy is no less than the throughput of the greedy policy for all N. We also obtain a lower bound on the throughput of the QLB policy. For sufficiently large N, the lower bound is shown to be tight, strictly increasing with N, and strictly larger than the throughput of the greedy policy. Further, for a simple multistate channel model, the ON-OFF channel, we prove that the lower bound is tight for all N.


Aims & Scope

IEEE Transactions on Information Theory publishes papers concerned with the transmission, processing, and utilization of information.


Meet Our Editors

Editor-in-Chief
Frank R. Kschischang

Department of Electrical and Computer Engineering