41st Annual Conference on Information Sciences and Systems (CISS '07)

Date: 14-16 March 2007


Displaying Results 1 - 25 of 185
  • [Front cover]

    Publication Year: 2007
  • Forty-first Annual Conference on Information Sciences and Systems

    Publication Year: 2007 , Page(s): i
  • Copyright page

    Publication Year: 2007 , Page(s): ii
  • Greetings

    Publication Year: 2007 , Page(s): iii
  • Table of contents

    Publication Year: 2007 , Page(s): v - xviii
  • Algorithms for Relaying via Channel Quantization in Finite Rate Feedback Limited Sensor Networks

    Publication Year: 2007 , Page(s): 1 - 6

    Relaying is often advocated for improving system performance by enhancing spatial diversity in wireless networks. In this paper, we address the energy tradeoff made by relay nodes between transmitting their own data and forwarding other nodes' information in fading channels. First, assuming perfect channel state information (CSI), we propose a power control policy in a two-node relay network under which the total energy consumption across both nodes is minimized while meeting both nodes' outage probability requirements. However, perfect CSI at each node is not possible in general, due to bandwidth limitations that prevent the full exchange of precise channel information. We therefore develop power control algorithms for relaying under bandwidth constraints via quantization. Specifically, we develop a quantization protocol that accounts for the asymmetric nature of uplink/downlink communication bandwidths, and an optimal polynomial-time algorithm for channel quantization using this protocol. The quantization algorithm minimizes the sum of expected transmission powers at the source and relay required for collaborative relaying to satisfy the given outage probability.

  • Cross-layer Resource Allocation Strategies for Quality-of-Service Driven Opportunistic Routing Over Wireless Relay Networks

    Publication Year: 2007 , Page(s): 7 - 12
    Cited by:  Papers (2)

    We develop a cross-layer optimization framework incorporating opportunistic routing and quality-of-service (QoS) driven power allocation. By applying opportunistic routing, we optimally select one relay for each source-destination pair to assist their transmissions. By integrating information theory with the concept of effective capacity, our strategies aim at maximizing the relay network throughput subject to a given delay QoS constraint. We first propose two resource allocation strategies for QoS-driven opportunistic routing under two different instantaneous power constraints: in the first strategy, we set the instantaneous constraint on the total transmission power of the wireless relay network, while in the second, we set the power constraint on each source-destination pair. In addition, we introduce a third resource allocation strategy with an average power constraint on each source-destination pair. The simulation results show that the first two strategies have nearly the same effective capacity performance, and that the third strategy outperforms the first two only when the QoS requirement is stringent.

  • The Relay Channel with a Wire-tapper

    Publication Year: 2007 , Page(s): 13 - 18
    Cited by:  Papers (21)

    In this work, a relay channel with a wire-tapper is studied for both discrete memoryless and Gaussian channels. The wire-tapper receives a physically degraded version of the destination's signal. We find inner and outer bounds for the capacity-equivocation rate region. We also argue that when the destination receives a physically degraded version of the relay's signal, the inner and outer bounds meet for some special cases.

  • Dynamic Resource Allocation for Multi Source-Destination Relay Networks

    Publication Year: 2007 , Page(s): 19 - 24
    Cited by:  Papers (2)  |  Patents (1)

    We consider a wireless network consisting of multiple sources communicating with their corresponding destinations via a single half-duplex relay. The goal is to minimize the outage probability of the total rate in the network by allocating transmission powers, durations, and rates to the nodes based on instantaneous channel gains, while satisfying a total average network power constraint. The sources are allowed to use the relay opportunistically, that is, only when relaying reduces overall power compared to direct transmission. We investigate the effect of interference when all sources transmit simultaneously, and the impact of time division among the sources. We show that dynamic allocation and opportunistic transmission based on instantaneous channel states provide a significant reduction in outage compared to any constant resource allocation scheme.

  • Toward Maximizing Throughput in Wireless Relay: A General User Cooperation Model

    Publication Year: 2007 , Page(s): 25 - 30
    Cited by:  Papers (2)

    We consider a generalization of the classic two-slot user cooperation model with the goal of maximizing throughput. By allowing the relay to transmit, in the second time slot, an arbitrary combination of the partner's data and its own new data, we show that the system throughput, which accounts for both the information rate and the outage probability, is in general improved. We further generalize the model to a multiple-user cooperative chain, where a user is free, in its designated time slot, to relay for any one, some, or all of the previous users while simultaneously transmitting new data. Two specific time allocation methods are discussed with a sliding-window implementation, and their performance is analyzed with respect to the availability of feedback information. Compared to the conventional model, the generalized model considerably increases the system throughput, in addition to increasing operational flexibility. Computer simulations using practical channel codes confirm the analysis.

  • A Mean Convergence Analysis for Partial Update NLMS Algorithms

    Publication Year: 2007 , Page(s): 31 - 34
    Cited by:  Papers (2)

    This paper discusses the convergence rates of partial-update normalized least mean square (NLMS) algorithms for long, finite impulse response (FIR) adaptive filters. We specify the general form of convergence of the tap weight vector's mean deviation for white Gaussian input, and analyze the performance of several well-known partial-update algorithms. These results are compared with the conventional NLMS algorithm. We further discuss the similarity in update effects between some partial-update algorithms and proportionate-type NLMS algorithms. This theoretically demonstrates that for sparse impulse response system identification with white Gaussian input, properly designed partial-update NLMS algorithms, although they need only a fraction of the fully updated NLMS algorithm's computational power, have the potential to achieve better performance than conventional NLMS.

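The abstract above compares partial-update NLMS against the fully updated algorithm for sparse system identification. As an illustration only (not the authors' analysis), here is a minimal sketch of one well-known variant, M-max partial-update NLMS, which updates only the M taps with the largest regressor magnitudes each iteration; the filter length, step size, and test system below are hypothetical:

```python
import numpy as np

def mmax_nlms(x, d, L=8, M=4, mu=0.5, eps=1e-6):
    """M-max partial-update NLMS: per iteration, update only the M taps
    whose regressor entries have the largest magnitude (illustrative)."""
    w = np.zeros(L)
    xbuf = np.zeros(L)
    errs = []
    for n in range(len(x)):
        xbuf = np.r_[x[n], xbuf[:-1]]          # shift the input regressor
        e = d[n] - w @ xbuf                    # a priori error
        idx = np.argsort(np.abs(xbuf))[-M:]    # M largest-magnitude positions
        w[idx] += mu * e * xbuf[idx] / (xbuf @ xbuf + eps)
        errs.append(e)
    return w, np.array(errs)

# identify a hypothetical sparse FIR system from white Gaussian input
rng = np.random.default_rng(0)
h = np.zeros(8); h[1] = 1.0; h[5] = -0.5       # sparse impulse response
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)]                 # noiseless desired signal
w, err = mmax_nlms(x, d)
```

Only half of the taps are touched per iteration, yet in the noiseless sparse case the weights still converge to the true response.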
  • Feedback and Weighting Mechanisms for Improving Jacobian Estimates in the Adaptive Simultaneous Perturbation Algorithm

    Publication Year: 2007 , Page(s): 35 - 40

    It is known that a stochastic approximation (SA) analogue of the deterministic Newton-Raphson algorithm provides an asymptotically optimal or near-optimal form of stochastic search. In a recent paper, Spall (2006) introduces two enhancements that generally improve the quality of the estimates for underlying Jacobian (Hessian) matrices, thereby improving the quality of the estimates for the primary parameters of interest. The first enhancement rests on a feedback process that uses previous Jacobian estimates to reduce the error in the current estimate. The second enhancement is based on the formation of an optimal weighting of "per-iteration" Jacobian estimates. This paper provides a formal convergence analysis for the algorithm introduced in Spall (2006). In particular, we present conditions for the almost sure convergence of the Jacobian estimates with the feedback and weighting. We also develop results for the rate of convergence in both the noisy and noise-free settings.

  • User Allocation in Multi-System, Multi-Service Scenarios: Upper and Lower Performance Bound of Polynomial Time Assignment Algorithms

    Publication Year: 2007 , Page(s): 41 - 46
    Cited by:  Papers (3)

    In this paper we address the problem of how users of different service classes should be assigned to a set of radio access technologies (RATs). All RATs have overlapping coverage, and the aim is to maximize a weighted sum of assignable users. Under the constraint that users cannot be split between multiple air interfaces, the problem is NP-complete. In the first part of the paper we derive upper and lower bounds for polynomial-time assignment algorithms. Using Lagrangian theory and continuous relaxation, we show that in scenarios with M air interfaces, polynomial assignments leave at most M fewer users assigned than the optimal solution. In the second part we present an algorithm and compare its performance to standard load-balancing strategies.

  • Proportionate-Type Steepest Descent and NLMS Algorithms

    Publication Year: 2007 , Page(s): 47 - 50
    Cited by:  Papers (1)

    In this paper, a unified framework for representing proportionate-type algorithms is presented. This novel representation enables a systematic approach to the design and analysis of proportionate-type algorithms. Within this unified framework, the feasibility of predicting the performance of a stochastic proportionate algorithm by analyzing the performance of its associated deterministic steepest descent algorithm is investigated, and found to have merit. Using this insight, various steepest descent algorithms are studied and used to predict and explain the behavior of their stochastic counterparts. In particular, it is shown that the μ-PNLMS algorithm possesses robust behavior. In addition, the ε-PNLMS algorithm is proposed and its performance is evaluated.

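To make the proportionate-type family discussed above concrete, the sketch below implements the standard PNLMS gain rule (not the paper's μ- or ε-variants): each tap's step size is weighted by its current coefficient magnitude, with a floor so inactive taps keep adapting. All parameters and the test system are illustrative assumptions:

```python
import numpy as np

def pnlms(x, d, L=8, mu=0.3, delta=0.01, rho=0.01, eps=1e-6):
    """Proportionate NLMS sketch: larger taps get larger step sizes,
    which speeds convergence on sparse echo-path-like systems."""
    w = np.zeros(L)
    xbuf = np.zeros(L)
    for n in range(len(x)):
        xbuf = np.r_[x[n], xbuf[:-1]]                    # shift regressor
        e = d[n] - w @ xbuf                              # a priori error
        g = np.maximum(np.abs(w), rho * max(delta, np.abs(w).max()))
        g /= g.sum()                                     # proportionate gains
        w += mu * e * g * xbuf / ((g * xbuf) @ xbuf + eps)
    return w

# identify a hypothetical sparse system from white Gaussian input
rng = np.random.default_rng(1)
h = np.zeros(8); h[2] = 0.9; h[6] = -0.4
x = rng.standard_normal(4000)
d = np.convolve(x, h)[:len(x)]                           # noiseless desired
w = pnlms(x, d)
```

With all weights at zero the gains are uniform and the update reduces to ordinary NLMS; as the large taps emerge, they receive most of the adaptation energy.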
  • Detecting Information Flows: Improving Chaff Tolerance by Joint Detection

    Publication Year: 2007 , Page(s): 51 - 56
    Cited by:  Papers (2)

    The problem of detecting encrypted information flows using timing information is considered. An information flow consists of both information-carrying packets and irrelevant packets called chaff. A relay node can perturb the timing of information-carrying packets as well as add or remove chaff packets. The goal is to detect whether there is an information flow through certain nodes of interest by analyzing the transmission times of these nodes. Under the assumption that the relay of information-carrying packets is subject to a bounded delay constraint, fundamental limits on detection are characterized as the minimum amount of chaff needed for an information flow to mimic independent traffic. A detector based on the optimal chaff-inserting algorithms is proposed. The detector guarantees detection in the presence of an amount of chaff proportional to the total traffic size; furthermore, the proportion increases to 100% exponentially fast as the number of hops on the flow path increases.

  • Subspace Sequence Estimation

    Publication Year: 2007 , Page(s): 57 - 62

    Recent work in the theory of subspace filtering and estimation, for example the Multistage Wiener Filter (MWF) of Goldstein et al., has focused primarily on the case of finite-dimensional filtering. While this is the more practical case, as it can be implemented on a general-purpose computer, it leaves open the theoretical question of the behavior of subspace filters in the infinite-dimensional case. In this work we begin to develop a causal infinite-dimensional filter that operates in a subspace similar to that of the MWF and whose performance approaches that of the causal Wiener Filter using this subspace.

  • Maximum A-Posteriori Estimation in Linear Models With a Gaussian Model Matrix

    Publication Year: 2007 , Page(s): 63 - 67
    Cited by:  Papers (1)

    We consider the Bayesian inference of a random Gaussian vector in a linear model with a Gaussian model matrix. We derive the maximum a posteriori (MAP) estimator for this model and show that it can be found using a simple line search over a unimodal function that can be efficiently evaluated. Next, we discuss the application of this estimator to near-optimal detection of near-Gaussian digitally modulated signals, and demonstrate through simulations that the MAP estimator outperforms the standard linear MMSE estimator in terms of mean square error (MSE) and bit error rate (BER).

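The abstract reduces the MAP estimate to a line search over a unimodal function. A standard tool for exactly that step is golden-section search; the sketch below applies it to a hypothetical unimodal objective standing in for the paper's (unspecified) criterion:

```python
import numpy as np

def golden_section_min(f, a, b, tol=1e-8):
    """Golden-section search for the minimizer of a unimodal f on [a, b]."""
    invphi = (np.sqrt(5) - 1) / 2
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):              # minimizer lies in [a, d]
            b, d = d, c
            c = b - invphi * (b - a)
        else:                        # minimizer lies in [c, b]
            a, c = c, d
            d = a + invphi * (b - a)
    return (a + b) / 2

# hypothetical unimodal objective; each step shrinks the bracket by ~0.618
t_star = golden_section_min(lambda t: (t - 1.7) ** 2 + np.exp(-t), 0.0, 5.0)
```

Each iteration evaluates the function at two interior points and keeps the sub-interval guaranteed to contain the minimizer, so only cheap evaluations of the unimodal function are needed, as the abstract suggests.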
  • Joint Detection and Identification of an Unobservable Change in the Distribution of a Random Sequence

    Publication Year: 2007 , Page(s): 68 - 73

    This paper examines the joint problem of detection and identification of a sudden and unobservable change in the probability distribution function (pdf) of a sequence of independent and identically distributed (i.i.d.) random variables to one of finitely many alternative pdfs. The objective is quick detection of the change and accurate inference of the ensuing pdf. Following a Bayesian approach, a new sequential decision strategy for this problem is derived and proven optimal. Geometrical properties of this strategy are demonstrated via numerical examples.

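The paper's exact strategy is not reproduced here, but the flavor of Bayesian joint change detection and identification can be sketched with a Shiryaev-style posterior recursion over finitely many post-change pdfs. Everything below (Gaussian pre/post-change pdfs, geometric change prior `rho`, stopping threshold) is an illustrative assumption, not the paper's model:

```python
import math

def normal_pdf(x, mu, sigma=1.0):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def shiryaev_joint(xs, mus=(1.0, -1.0), rho=0.05, thresh=0.95):
    """p[j] tracks the posterior probability that the change has occurred
    and the post-change pdf is N(mus[j], 1); pre-change pdf is N(0, 1).
    Stops and identifies once the total change posterior exceeds thresh."""
    m = len(mus)
    p = [0.0] * m
    for n, x in enumerate(xs, 1):
        pre = 1.0 - sum(p)                   # mass still on "no change yet"
        # geometric prior: fraction rho of the remaining pre-change mass
        # jumps now, split uniformly over the m alternatives
        num = [(p[j] + pre * rho / m) * normal_pdf(x, mus[j]) for j in range(m)]
        den = sum(num) + pre * (1 - rho) * normal_pdf(x, 0.0)
        p = [v / den for v in num]
        if sum(p) > thresh:
            return n, max(range(m), key=lambda j: p[j])
    return None, None

# five pre-change samples at the pre-change mean, then samples near mus[0]
n_stop, ident = shiryaev_joint([0.0] * 5 + [1.2] * 15)
```

The stopping time answers "has the change happened?" while the index with the largest posterior mass answers "which pdf did it change to?", mirroring the joint detection/identification objective in the abstract.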
  • Belief Propagation Based Multihead Multitrack Detection over Conventional and Bit-Patterned Media Storage Systems

    Publication Year: 2007 , Page(s): 74 - 79

    We propose a low complexity detection technique for multihead multitrack recording systems. By exploiting the sparseness of the two dimensional (2-D) partial response channel, we start with the development of an algorithm which performs belief propagation (BP) over the corresponding factor graph. We consider the BP-based detector not only for the partial response channel, but also for the more practical conventional media and bit-patterned media storage systems, with and without media noise. Compared to the maximum likelihood detector which has a prohibitively high complexity that is exponential with both the number of tracks and the number of ISI taps, the proposed detector has a much lower complexity and a fast parallel structure which come at a small performance penalty.

  • Erasure Codes with Small Overhead Factor and Their Distributed Storage Applications

    Publication Year: 2007 , Page(s): 80 - 85
    Cited by:  Papers (1)

    In this paper, we consider a family of XOR-based erasure codes with finite-sized randomly-generated parity check matrices, and report the results of thorough computational search for suitable erasure codes for distributed storage applications. Although the discovered matrices are not "low density" and the resulting codes are only approximately maximum distance separable (MDS) codes, they have performance advantages over other codes, such as LDPC and IRA (irregular repeat-accumulate) codes, in terms of the overhead factor, that is, the average ratio of the total amount of encoded file blocks for restoring lost blocks to the amount of original file blocks. We designed our codes so that the overhead factor becomes small. While typical LDPC codes use matrices that have several thousand rows, our codes use matrices that have only one thousand rows in consideration of practicable operation time and overhead. Because a method for discovering the most suitable matrix from a large number of matrices has not been found, we executed Monte Carlo simulation for a long time in order to discover a suitable matrix with the lowest overhead factor. We have discovered a family of erasure codes with an overhead factor of 1.002 on average, compared to 1.07 for typical LDPC codes when the number of rows is 1000.

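The overhead factor defined in the abstract (encoded blocks collected until the data can be restored, divided by the number of original blocks) is easy to estimate by Monte Carlo. The sketch below uses dense random XOR combinations and a GF(2) rank test; the parameters (k = 10 source blocks) are illustrative toys, not the paper's thousand-row matrices:

```python
import random

def gf2_rank(rows):
    """Rank over GF(2); each row is an int bitmask over the k source blocks."""
    basis = []
    for row in rows:
        for p in basis:
            row = min(row, row ^ p)   # keep the reduction with the lower leading bit
        if row:
            basis.append(row)
    return len(basis)

def overhead_factor(k=10, trials=200, seed=1):
    """Monte Carlo estimate of (blocks collected until decodable) / k
    for dense random XOR combinations of k source blocks."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        received = []
        while gf2_rank(received) < k:
            received.append(rng.randrange(1, 1 << k))   # random nonzero combo
        total += len(received)
    return total / (trials * k)

f = overhead_factor()
```

Decoding succeeds exactly when the received XOR combinations span all k source blocks, so the rank test stands in for Gaussian elimination over GF(2); dense random rows typically need only a couple of extra blocks beyond k.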
  • On Groups with a Shift Structure: the Schreier Matrix and an Algorithm

    Publication Year: 2007 , Page(s): 86 - 91

    A time-invariant group trellis is essentially a group shift or time-invariant group code. A group with a shift structure is the branch group of a strongly controllable time-invariant group trellis. We show that the Loeliger and Mittelholzer definition of a group with a shift structure is equivalent to a second definition by using a controllable Schreier matrix, a certain normal series written as a matrix having a special property of the diagonal terms. This suggests a third definition of a group with a shift structure. We use the third definition to derive properties that the group must have, and then give an algorithm to find the shift structure of the group.

  • Near-ML Decoding of CRC Codes

    Publication Year: 2007 , Page(s): 92 - 94

    We study a new bit-flipping algorithm, coupled with an algebraic decoder, for near-ML decoding of cyclic redundancy check (CRC) codes on the binary AWGN channel. The asymptotic coding gain of such codes approaches 6 dB, and the real gain at a block error probability of 10^-4 is about 4.5 dB. We show that a generalization of Chase's algorithm, called the {a,b} algorithm, is able to achieve nearly all of this gain at modest complexity. Here a denotes the number of bit positions having lowest confidence, and b denotes the maximum number of bits to be flipped among this low-confidence set. For a 16-bit CRC code of length n=1024, we show that a=8, b=3 represents a good design choice.

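A hedged sketch of the {a,b} idea described above: take the a least-reliable positions, try flipping up to b of them, and accept the first candidate that satisfies the CRC. The CRC-16-CCITT polynomial and the toy noiseless channel below are illustrative assumptions; the paper's scheme also couples the search with an algebraic decoder at n=1024, which this sketch omits:

```python
from itertools import combinations
import numpy as np

POLY, DEG = 0x11021, 16   # CRC-16-CCITT: x^16 + x^12 + x^5 + 1

def crc_remainder(bits):
    """Remainder of the bit sequence modulo the CRC generator polynomial."""
    reg = 0
    for b in bits:
        reg = (reg << 1) | b
        if reg >> DEG:
            reg ^= POLY
    return reg

def crc_encode(msg_bits):
    """Append DEG parity bits so the full word has zero CRC remainder."""
    r = crc_remainder(list(msg_bits) + [0] * DEG)
    return list(msg_bits) + [(r >> (DEG - 1 - i)) & 1 for i in range(DEG)]

def ab_decode(llr, a=8, b=3):
    """{a,b} bit flipping: flip up to b bits among the a least-reliable
    positions; accept the first candidate that passes the CRC check."""
    llr = np.asarray(llr)
    hard = (llr < 0).astype(int)                 # hard decisions from LLRs
    weak = np.argsort(np.abs(llr))[:a]           # a lowest-confidence positions
    for nflip in range(b + 1):
        for pos in combinations(weak, nflip):
            cand = hard.copy()
            cand[list(pos)] ^= 1
            if crc_remainder(cand.tolist()) == 0:
                return cand
    return hard                                  # no candidate passed

# toy channel: noiseless BPSK LLRs, then two low-confidence sign flips
rng = np.random.default_rng(2)
msg = rng.integers(0, 2, 48).tolist()
word = np.array(crc_encode(msg))
llr = (1 - 2 * word) * 4.0
for i in (5, 20):
    llr[i] = -llr[i] * 0.1                       # wrong and unreliable bits
decoded = ab_decode(llr)
```

The two corrupted positions have the smallest |LLR|, so they land in the size-a candidate set and the weight-2 flip that restores the codeword is found at modest cost (at most sum of C(a, i) for i up to b CRC checks).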
  • Guidelines for Channel Code Design in Quasi-Static Fading Channels

    Publication Year: 2007 , Page(s): 95 - 100
    Cited by:  Papers (1)

    In this paper, we provide guidelines for the design of good binary error-correcting codes in quasi-static fading channels, using established design rules for AWGN channels as a starting point. The proposed analysis is based on the Gaussian assumption of demodulator log-likelihood ratios. This assumption allows us to decouple the influence of the convergence threshold, the slope in the BER waterfall region, and the error floor of the channel code, so that these parameters can be analyzed separately. Our analysis shows that, contrary to what happens in AWGN channels, the design of good low-rate codes in quasi-static fading channels is much simpler than that of codes with standard rates (0.3 to 0.8).

  • Analog Coding for Delay-Limited Applications

    Publication Year: 2007 , Page(s): 101 - 106
    Cited by:  Papers (1)

    In this paper, we consider the problem of sending an analog source over an additive white Gaussian noise channel. Traditional analog coding schemes suffer from the threshold effect. We introduce two robust schemes for analog coding. Unlike previous methods, the new methods asymptotically achieve the optimal scaling of the signal-to-distortion ratio (SDR) without being affected by the threshold effect. We also show that approximated versions of these techniques perform well in practical applications, with low encoding/decoding complexity.

  • Performance Analysis of MIMO Receivers under Imperfect CSIT

    Publication Year: 2007 , Page(s): 107 - 112
    Cited by:  Papers (2)

    This paper considers spatial multiplexing (SM) systems with precoding. The precoder is derived from the singular value decomposition (SVD) of the available channel state information at the transmitter (CSIT), and the receiver is a function of the precoder and the current channel. With perfect CSIT, the MR × MT flat-fading MIMO channel can be decomposed into min(MT, MR) parallel spatial subchannels. In practice, however, the available CSIT suffers from delay-induced error due to channel temporal variations. Using this outdated CSIT for precoding in SM systems causes interference among the subchannels. The performance of the decorrelator, minimum mean squared error (MMSE), and successive interference cancellation (SIC) receivers is analyzed as the reliability of the available CSIT varies. Explicit expressions for the signal-to-interference-plus-noise ratio (SINR) and the mean squared error (MSE) are derived. Simulation results illustrate the significant performance gain achieved by precoding even with a moderate amount of correlation between the outdated channel estimate and the current channel.

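The SVD-precoding decomposition stated above (perfect CSIT turning the MR × MT channel into min(MT, MR) parallel subchannels) can be checked numerically. The antenna counts and the noiseless setting below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
MR, MT = 4, 2   # hypothetical receive/transmit antenna counts
H = (rng.standard_normal((MR, MT)) + 1j * rng.standard_normal((MR, MT))) / np.sqrt(2)

# SVD precoding with perfect CSIT: transmit V @ x, filter the receive
# vector with U^H, where H = U diag(s) V^H
U, s, Vh = np.linalg.svd(H, full_matrices=False)
V = Vh.conj().T
x = rng.standard_normal(MT) + 1j * rng.standard_normal(MT)   # data symbols
y = U.conj().T @ (H @ (V @ x))   # noiseless, to expose the decomposition

# y[k] == s[k] * x[k]: min(MT, MR) interference-free spatial subchannels
```

With outdated CSIT, V and U no longer match the current H, so the off-diagonal terms of U^H H V reappear as the inter-subchannel interference analyzed in the paper.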