
IEEE Transactions on Information Theory

Issue 4 • July 1995


Displaying results 1-25 of 38
  • A coset correlation for sequences with two-valued periodic autocorrelation

    Page(s): 1150 - 1153

    Defines a new coset correlation that generalizes previous coset correlation results for m-sequences. This new correlation can be computed in terms of the coset sizes for any sequence that has a two-valued periodic autocorrelation function and that is constant on cosets. Thus the results apply to a larger family of periodic sequences than just m-sequences.

  • Proof of a conjecture of Sarwate and Pursley regarding pairs of binary m-sequences

    Page(s): 1153 - 1155

    Binary m-sequences are maximal-length sequences generated by shift registers of length m; they are employed in navigation, radar, and spread-spectrum communication systems because of their crosscorrelation properties. It is well known that the crosscorrelation function of a pair of distinct m-sequences must take on at least three values. The article considers crosscorrelation functions that take on exactly three values, where these values are preferred in that they are small. The main result is a proof of a conjecture made by Sarwate and Pursley in 1980 that if m≡0 (mod 4) then there are no preferred pairs of binary m-sequences. The proof makes essential use of a deep theorem of McEliece (1971) that restricts the possible weights that can occur in a binary cyclic code.

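As background for the two m-sequence items above, a minimal sketch (our own illustration, not code from either paper) of an m-sequence produced by a short linear-feedback shift register, together with a check of the classic two-valued periodic autocorrelation:

```python
# Generate one period of a binary m-sequence with a length-4 LFSR and verify
# that its periodic autocorrelation takes exactly two values.

def m_sequence(m, recurrence, init):
    """Generate one period (2^m - 1 bits) of an LFSR sequence."""
    a = list(init)
    while len(a) < 2**m - 1:
        a.append(recurrence(a))
    return a

# Recurrence a[n] = a[n-1] XOR a[n-4], i.e. the primitive polynomial x^4 + x^3 + 1.
seq = m_sequence(4, lambda a: a[-1] ^ a[-4], [1, 0, 0, 0])
N = len(seq)                       # period 15

# Map bits {0,1} -> symbols {+1,-1}; the periodic autocorrelation then shows
# the peak N at shift 0 and the constant value -1 at every other shift.
s = [1 - 2*b for b in seq]
corr = [sum(s[i] * s[(i + k) % N] for i in range(N)) for k in range(N)]
print(corr[0], set(corr[1:]))      # 15 {-1}
```

The crosscorrelation of two distinct m-sequences studied in the second paper is computed the same way, with the two sequences in place of `s` twice.
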
  • Blind adaptive multiuser detection

    Page(s): 944 - 960

    The decorrelating detector and the linear minimum mean-square error (MMSE) detector are known to be effective strategies to counter the presence of multiuser interference in code-division multiple-access channels; in particular, those multiuser detectors provide optimum near-far resistance. When training data sequences are available, the MMSE multiuser detector can be implemented adaptively without knowledge of signature waveforms or received amplitudes. This paper introduces an adaptive multiuser detector which converges (for any initialization) to the MMSE detector without requiring training sequences. This blind multiuser detector requires no more knowledge than does the conventional single-user receiver: the desired user's signature waveform and its timing. The proposed blind multiuser detector is made robust with respect to imprecise knowledge of the received signature waveform of the user of interest.

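The linear MMSE detector that the blind algorithm converges to can be computed in closed form when the statistics are known. A toy two-user synchronous sketch (our own illustration with made-up signatures and amplitudes, not the paper's adaptive rule):

```python
# Linear MMSE multiuser detector for user 1 in a synchronous two-user CDMA
# channel: w = R^{-1} s1, where R = E[r r^T] is the received covariance.
import numpy as np

rng = np.random.default_rng(0)

# Unit-energy signature waveforms (8 chips); make them correlated so that
# multiuser detection actually matters. All numbers are illustrative.
s1 = np.ones(8) / np.sqrt(8)
s2 = np.array([1, -1, 1, -1, 1, -1, 1, -1]) / np.sqrt(8)
s2 = 0.6 * s1 + 0.8 * s2
A1, A2, sigma = 1.0, 10.0, 0.1     # 20 dB stronger interferer: near-far scenario

# Covariance of r = A1*b1*s1 + A2*b2*s2 + n, with independent +/-1 bits.
R = A1**2 * np.outer(s1, s1) + A2**2 * np.outer(s2, s2) + sigma**2 * np.eye(8)
w = np.linalg.solve(R, s1)         # MMSE detector direction for user 1

# Demodulate sign(w . r): the detector nulls the strong interferer.
errors = 0
for _ in range(2000):
    b1, b2 = rng.choice([-1, 1]), rng.choice([-1, 1])
    r = A1*b1*s1 + A2*b2*s2 + sigma*rng.standard_normal(8)
    errors += (np.sign(w @ r) != b1)
print("bit errors:", int(errors))  # expect 0 despite the much stronger user
```

The paper's contribution is reaching this same `w` adaptively, without training data and knowing only `s1` and its timing.
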
  • Asymptotically optimum detection of a weak signal sequence with random time delays

    Page(s): 1169 - 1174

    The problem of designing asymptotically optimum detectors for a weak signal sequence with random time delays in the presence of white Gaussian noise is considered. The multidimensional probability distribution of the time delays is assumed to be known. As a result of asymptotic analysis of the log-likelihood ratio, the asymptotically optimum linear or quadratic detectors and their probability distributions and efficiencies are found.

  • Properties of the x² mod N pseudorandom number generator

    Page(s): 1155 - 1159

    In 1986, L. Blum, M. Blum, and M. Shub introduced the x² mod N generator of pseudorandom bit strings and showed, given certain plausible but unproved hypotheses, that it has the desirable cryptographic property of unpredictability. They also studied the period length of the sequences produced by this generator and proposed a way to guarantee that these sequences will have the maximum possible period. In this correspondence we prove that it is very likely that, for many values of N, the sequences produced by the x² mod N generator are usually not balanced (that is, do not have equal frequencies of 0's and 1's). We further prove that the proposed method for guaranteeing long periods is also very likely to guarantee relatively large imbalances between the frequencies of 0's and 1's. However, we also prove that the average imbalance for these sequences is no worse than what would be expected in a truly random bit string of the same length. Thus our results provide further support for the use of the x² mod N generator in cryptographic applications.

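A minimal sketch of the x² mod N generator itself (toy parameters of our own choosing; cryptographic use requires primes of hundreds of digits):

```python
# Blum-Blum-Shub x^2 mod N generator: N = p*q with p, q prime and
# congruent to 3 (mod 4); each step squares the state mod N and emits
# the low-order bit of the state.
from math import gcd

def bbs_bits(p, q, seed, nbits):
    """Return nbits output bits of the x^2 mod N generator."""
    assert p % 4 == 3 and q % 4 == 3
    n = p * q
    assert gcd(seed, n) == 1       # seed must be coprime to N
    x = seed * seed % n            # start from a quadratic residue
    bits = []
    for _ in range(nbits):
        x = x * x % n
        bits.append(x & 1)
    return bits

# Toy primes 499 and 547 (both = 3 mod 4) -- illustration only.
bits = bbs_bits(499, 547, 12345, 1000)
ones = sum(bits)
print(ones, len(bits) - ones)      # frequencies of 1's and 0's; the paper
                                   # studies exactly this kind of imbalance
```
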
  • MMSE decision-feedback equalizers: finite-length results

    Page(s): 961 - 975

    This paper extends a number of results on the infinite-length minimum mean-square error decision-feedback equalizer (MMSE-DFE) reported by Cioffi, Dudevoir, Eyuboglu, and Forney (see IEEE Trans. Commun., 1995) to the finite-length case. Cholesky factorization and displacement structure theory are demonstrated to be two powerful analytical tools for analyzing the finite-length MMSE-DFE. Our objective throughout the paper is to establish finite-length analogs of the well-known infinite-length MMSE-DFE results. Similarities and differences between the two cases are examined and delineated. Finally, convergence of our derived finite-length results to their well-established infinite-length counterparts is shown.

  • Conditional entropy-constrained vector quantization: high-rate theory and design algorithms

    Page(s): 901 - 916

    The performance of optimum vector quantizers subject to a conditional entropy constraint is studied. This new class of vector quantizers was originally suggested by Chou and Lookabaugh (1990). A locally optimal design of this kind of vector quantizer can be accomplished through a generalization of the well-known entropy-constrained vector quantizer (ECVQ) algorithm; this generalization of the ECVQ algorithm to a conditional entropy constraint is called CECVQ, i.e., conditional ECVQ. Furthermore, we have extended high-rate quantization theory to this new class of quantizers to obtain a new high-rate performance bound. The new performance bound is compared and shown to be consistent with bounds derived through conditional rate-distortion theory. A new algorithm for designing entropy-constrained vector quantizers was introduced by Garrido, Pearlman, and Finamore (see IEEE Trans. Circuits Syst. Video Technol., vol.5, no.2, p.83-95, 1995), and is named entropy-constrained pairwise nearest neighbor (ECPNN). The algorithm is basically an entropy-constrained version of the pairwise-nearest-neighbor (PNN) clustering algorithm of Equitz (1989). By a natural extension of the ECPNN algorithm we develop another algorithm, called CECPNN, that designs conditional entropy-constrained vector quantizers. Through simulation results on synthetic sources, we show that CECPNN and CECVQ have very close distortion-rate performance.

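The design step shared by ECVQ and its conditional variants assigns each input to the codeword minimizing a Lagrangian of distortion plus λ times rate. A toy scalar sketch (our own illustration; codebook, code lengths, and the function name are made up):

```python
# Entropy-constrained assignment rule: pick the codeword index minimizing
# squared distortion + lam * codeword length (rate in bits).

def ecvq_encode(x, codebook, lengths, lam):
    """Return the index minimizing (x - c_i)^2 + lam * len_i."""
    return min(range(len(codebook)),
               key=lambda i: (x - codebook[i])**2 + lam * lengths[i])

codebook = [0.0, 4.0]      # two reproduction levels (made-up)
lengths  = [1.0, 3.0]      # their codeword lengths in bits (made-up)

# With lam = 0 the nearest level wins; a large lam lets the rate term pull
# borderline inputs toward the cheaper codeword.
print(ecvq_encode(2.5, codebook, lengths, lam=0.0))   # 1: nearest level
print(ecvq_encode(2.5, codebook, lengths, lam=2.5))   # 0: rate-biased choice
```

CECVQ conditions the codeword lengths on context, but the per-input assignment has this same Lagrangian form.
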
  • Trellis-oriented decomposition and trellis complexity of composite-length cyclic codes

    Page(s): 1185 - 1191

    The trellis complexity of composite-length cyclic codes (CLCC's) is addressed. We first investigate the trellis properties of concatenated and product codes in general. Known factorings of CLCC's into concatenated subcodes are thereby employed to derive upper bounds on the minimal trellis size and state-space profile. A new decomposition of CLCC's into product subcodes is established and utilized to derive further upper bounds on the trellis parameters. The coordinate permutations that correspond to these bounds are exhibited. Additionally, new results on the generalized Hamming weights of CLCC's are obtained. The reduction in trellis complexity of many CLCC's leads to soft-decision decoders with relatively low complexity.

  • Error-correcting two-dimensional modulation codes

    Page(s): 1116 - 1126

    Modulation coding, to limit the number of consecutive zeros in a data stream, is essential in digital magnetic recording/playback systems. Additionally, such systems require error-correction coding to ensure that the decoded output matches the recorder input, even if noise is present. Typically, these two coding steps have been performed independently, although various methods of combining them into one step have recently appeared. Another recent development is two-dimensional modulation codes, which meet runlength constraints using several parallel recording tracks, significantly increasing channel capacity. The article combines these two ideas. Previous techniques (both block and trellis structures) for combining error correction and modulation coding are surveyed, with discussion of their applicability in the two-dimensional case. One approach, based on trellis-coded modulation, is explored in detail, and a class of codes is developed which exploits the increased capacity to achieve good error-correcting ability at the same rate as common non-error-correcting one-dimensional codes.

  • On the capacity of the discrete-time Gaussian channel with delayed feedback

    Page(s): 1051 - 1059

    Gives an upper bound on the finite-block-length capacity of the discrete-time nonstationary Gaussian channel with delayed feedback. With the aid of minimization of a quadratic form, it is proved that the L-time-delayed feedback capacity C_n^{FB,L}(P) and the nonfeedback capacity C_n(P) satisfy C_n(P) ⩽ C_n^{FB,L}(P) ⩽ C_n(P*), where P* is given concretely. The authors give a sufficient condition for C_n^{FB,L}(P) to be increased by L-time-delayed feedback. Finally, the authors also give a necessary and sufficient condition for C_n^{FB,2}(P) of a class of Gaussian channels to be increased by 2-time-delayed feedback.

  • On calculating Sakrison's rate distortion function for classes of parameterized sources

    Page(s): 1160 - 1163

    Sakrison extended Shannon's notion of the rate distortion function to parameterized classes of sources by taking a minimax approach and defining a measure of the minimum rate required for information reconstruction subject to a prescribed fidelity level D. Unfortunately, calculation of Sakrison's rate distortion function may be very difficult, because analytic solutions do not generally exist and there has been no constructive method for finding the rate. However, the approach presented in this correspondence may be used to calculate an approximation to Sakrison's rate distortion function for classes of sources with a finite, discrete input space and a continuous parameter space. The approach gives rise to an algorithm which is shown to be convergent, and numerical examples are studied.

  • The capacity of average and peak-power-limited quadrature Gaussian channels

    Page(s): 1060 - 1071

    The capacity C(ρ_a, ρ_p) of the discrete-time quadrature additive Gaussian channel (QAGC) with inputs subjected to (normalized) average and peak power constraints, ρ_a and ρ_p respectively, is considered. By generalizing Smith's results for the scalar average- and peak-power-constrained Gaussian channel, it is shown that the capacity-achieving distribution is discrete in amplitude (envelope), having a finite number of mass points, with a uniformly distributed independent phase, and is geometrically described by concentric circles. It is shown that when peak power is the sole effective constraint, a constant-envelope input with uniformly distributed phase is capacity-achieving for ρ_p ⩽ 7.8 dB (4.8 dB per dimension). The capacity under a peak-power constraint is evaluated for a wide range of ρ_p by incorporating the theoretical observations into a nonlinear dynamic programming procedure. Closed-form expressions for the asymptotic (small and large ρ_a and ρ_p) capacity, the corresponding capacity-achieving distribution, and lower and upper bounds on the capacity C(ρ_a, ρ_p) are developed. The capacity C(ρ_a, ρ_p) provides an improved ultimate upper bound on the reliable information rates transmitted over the QAGC by any communication system subjected to both average and peak-power limitations, as compared to the classical Shannon formula for the capacity of the QAGC, which does not account for the peak-power constraint. This is particularly important for systems that operate with a restrictive (close to 1) average-to-peak power ratio ρ_a/ρ_p and at moderate power values.

  • Symmetric capacity and signal design for L-out-of-K symbol-synchronous CDMA Gaussian channels

    Page(s): 1072 - 1082

    We consider the symbol-synchronous Gaussian L-out-of-K code-division multiaccess channel, and obtain the capacity region and the upper and lower bounds to the symmetric capacity. The capacity region is found to be the same with or without frame synchronism. The lower bound depends on the signature waveforms through the eigenvalues of the SNR-weighted crosscorrelation matrix. We find a sufficient condition for the signature waveform set to maximize this lower bound and give an algorithm to construct a set of signature waveforms satisfying the sufficient condition.

  • Variable-rate tree-structured vector quantizers

    Page(s): 917 - 930

    In general, growth algorithms for optimal tree-structured vector quantizers do not exist. In this paper we show that if the source satisfies certain conditions, namely that of diminishing marginal returns, optimal growth algorithms do exist. We present such an algorithm and compare its performance with that of other tree-growth algorithms. Even for sources that do not meet the necessary conditions for the growth algorithm to be optimal, such as speech with unknown statistics, simulation shows that the algorithm outperforms other known growth algorithms. For sources that do not satisfy the required conditions, the algorithm presented here can also be used to grow the initial tree for the pruning process. The performance of such pruned trees is superior to that of trees pruned from full trees of the same rate.

  • Threshold detection in correlated non-Gaussian noise fields

    Page(s): 976 - 1000

    Classical threshold detection theory for arbitrary noise and signals, based on independent noise samples, i.e., using only the first-order probability density of the noise, is generalized to include the critical additional statistical information contained in the (first-order) covariances of the noise. This is accomplished by replacing the actual, generalized noise by a “quasi-equivalent” (QE) model employing both the first-order PDF and the covariance. The result is a “near-optimum” approach, the best available to date incorporating these fundamental statistical data. Space-time noise and signal fields are specifically considered throughout. Even with additive white Gaussian noise (AWGN), worthwhile per-sample processing gains Γ(c) are attainable, often O(10-20 dB), over the usual independent-sampling procedures, with corresponding reductions in the minimum detectable signal. The earlier moving-average (MA) noise model, while not realistic, is included because it reduces in the Gaussian noise cases to the threshold-optimum results of previous analyses, while the QE model remains suboptimum here because of the constraints necessarily imposed in combining the PDF and covariance information into the detector structure. A full space-time formulation is provided in general, with the important special cases of adaptive and preformed beams in reception. The needed (first-order) PDF here is given by the canonical Class A and Class B noise models. The general analysis, including the canonical threshold algorithms, correlation gain factors Γ(c), and detection parameters for the QE model, is presented, along with some representative numerical results for both coherent and incoherent detection based on four representative Toeplitz covariance models.

  • Constructing a better cyclic code than cyclic Reed-Solomon code

    Page(s): 1191 - 1194

    Problems of computing the generator polynomial for a (q+1, q-d+2) reversible cyclic BCH code over GF(q), q=p^m, having minimum Hamming distance d, are presented. The considered code is almost as short as a Reed-Solomon (RS) code, but its codewords carry two more information symbols than the codewords of an RS code with the same minimum Hamming distance.

  • Decoding algorithms with input quantization and maximum error correction capability

    Page(s): 1126 - 1133

    Decoding that uses soft-decision information but relies on multiple low-complexity decoders is investigated. These decoders correct only errors and erasures. The receiver consists of a bank of z demodulators followed by errors-and-erasures-correcting decoders operating in parallel. Each demodulator has a threshold for determining when to erase a given symbol. We assign a cost f(θ) to the noise for causing an erasure when the receiver uses a particular threshold θ, and a (larger) cost f(θ̄) for causing an error. The goal in designing the receiver is to choose the thresholds to maximize the noise cost necessary to cause a decoding error. We demonstrate that this formulation is solvable for many channels, including the M-ary input-output channel, the additive channel with coherent demodulation, and an additive channel with orthogonal modulation and noncoherent demodulation. We then show that the maximum worst-case error-correcting capability of the parallel decoding algorithms is the same as that of a correlation decoder with the same number of quantization regions.

  • Cosets of convolutional codes with short maximum zero-run lengths

    Page(s): 1145 - 1150

    Communication systems and storage systems derive symbol synchronization from the received symbol stream. To facilitate symbol synchronization, the channel sequences must have a short maximum zero-run length. One way to achieve this is to use a coset of an (n, k) convolutional code to generate the channel inputs. For k⩽n-2, it is shown that there exist cosets with short maximum zero-run length for any constraint length. Any coset of an (n, n-1) code with high rate and/or large constraint length is shown to have a large maximum zero-run length. A systematic procedure for obtaining cosets with short maximum zero-run length from (n, k) codes is presented, and new cosets with short maximum zero-run length and large minimum Hamming distance are tabulated.

  • Multiuser signaling in the symbol-synchronous AWGN channel

    Page(s): 1174 - 1178

    Multiuser signal set design for the energy-constrained linear real-additive symbol-synchronous additive white Gaussian noise channel is investigated in this correspondence. An argument is presented showing the suboptimality of binary alphabets which, in turn, uncovers a rule for the formation of multiuser signal sets. This simple rule leads to a theorem stating the dimensionality at which additional users can be added to a multiuser system without decreasing the minimum Euclidean distance. Specific vector sets with the desired property are then constructed as examples and some generalizations are discussed. The theorem may also be used as a platform for the design of more efficient multiuser codes incorporating redundant signal sets. Two such codes are presented. In particular, a code combining four-dimensional unit-energy signals with a rate-2/3 convolutional code obtains a summed rate of 1.75 bits/dimension in the two-user adder channel with received minimum Euclidean distance of 2.

  • New directions in the theory of identification via channels

    Page(s): 1040 - 1050

    Studies two problems in the theory of identification via channels. The first problem concerns identification via channels with noisy feedback. Whereas for Shannon's transmission problem the capacity of a discrete memoryless channel does not change with feedback, it is known that the identification capacity is affected by feedback. The authors study its dependence on the feedback channel. They prove both a direct and a converse coding theorem. Although a gap exists between the upper and lower bounds provided by these two theorems, the known results for channels without feedback and for channels with complete feedback are both special cases of the two new theorems, because in these cases the bounds coincide. The second problem is identification via wiretap channels. A secrecy identification capacity is defined for the wiretap channel. A “dichotomy theorem” is proved which says that the second-order secrecy identification capacity is the same as Shannon's capacity for the main channel as long as the secrecy transmission capacity of the wiretap channel is not zero, and is zero otherwise. Equivalently, one can say that the identification capacity is not lowered by the presence of a wiretapper as long as 1 bit can be transmitted (or identified) correctly with arbitrarily small error probability. This is in strong contrast to the case of transmission.

  • Robustness of decentralized tests with ϵ-contamination prior

    Page(s): 1164 - 1169

    We consider a decentralized detection problem where the prior density is not completely known but is assumed to belong to an ϵ-contamination class. Expressions for the infimum and the supremum of the posterior probability that the parameter in question lies in a given region, as the prior varies over the ϵ-contamination class, are derived. Numerical results are obtained for the specific case of an exponentially distributed observation and an exponentially distributed nominal prior. Asymptotic results (as the number of sensors grows large) are also obtained. The results illustrate the degree of robustness achieved with quantized observations as compared to unquantized observations.

  • Bennett's integral for vector quantizers

    Page(s): 886 - 900

    This paper extends Bennett's (1948) integral from scalar to vector quantizers, giving a simple formula that expresses the rth-power distortion of a many-point vector quantizer in terms of the number of points, the point density function, the inertial profile, and the distribution of the source. The inertial profile specifies the normalized moment of inertia of quantization cells as a function of location. The extension is formulated in terms of a sequence of quantizers whose point densities and inertial profiles approach known functions as the number of points increases. Precise conditions are given for the convergence of distortion (suitably normalized) to Bennett's integral. Previous extensions did not include the inertial profile and, consequently, provided only bounds or applied only to quantizers with congruent cells, such as lattice and optimal quantizers. The new version of Bennett's integral provides a framework for the analysis of suboptimal structured vector quantizers. It is shown how the loss in performance of such quantizers, relative to optimal unstructured ones, can be decomposed into point density and cell shape losses. As examples, these losses are computed for product quantizers and used to gain further understanding of the performance of scalar quantizers applied to stationary, memoryless sources and of transform codes applied to Gaussian sources with memory. It is shown that the shortcoming of such quantizers is that they must compromise between point density and cell shapes.

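For reference, the vector form of Bennett's integral as it appears in standard high-rate quantization theory (a sketch in generic notation of our own choosing, not quoted from the paper): for a k-dimensional quantizer with N points, point density λ(x), inertial profile m(x), and source density f(x), the rth-power distortion behaves as

```latex
D_r(N) \;\approx\; \frac{1}{N^{r/k}} \int m(x)\, \lambda(x)^{-r/k} f(x)\, dx .
```

The decomposition into point density and cell shape losses described above corresponds to comparing a structured quantizer's λ and m against their jointly optimal choices in this integral.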
  • New optimal ternary linear codes

    Page(s): 1182 - 1185

    The class of quasi-twisted (QT) codes is a generalization of the class of quasi-cyclic codes, similar to the way constacyclic codes are a generalization of cyclic codes. In this paper, rate 1/p QT codes over GF(3) are presented which have been constructed using integer linear programming and heuristic combinatorial optimization. Many of these attain the maximum possible minimum distance for any linear code with the given parameters, and several improve the maximum known minimum distances. Two of these new codes, namely (90, 6, 57) and (120, 6, 78), are optimal and so prove that d3(90, 6)=57 and d3(120, 6)=78.

  • A new construction for n-track (d, k) codes with redundancy

    Page(s): 1107 - 1115

    A new construction for n-track (d, k) codes with redundancy r, referred to as (d, k; n, r) codes, is presented. This construction applies single-track (d, k+Δk) codes (with certain extra constraints and appropriate amounts of delay) on each of the n tracks. This construction achieves a large part of the capacity increase possible when using (d, k; n, r) codes, has simple encoders and decoders, and exhibits considerable robustness to faulty tracks. It is shown that under this construction, (d, k; n, r) codes can achieve at least (n-r-1)/n × 100% of the gap in capacity between conventional (d, k) and (d, ∞) codes. Several practical examples of (d, k; n, r) codes under this construction are presented.

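As background on the runlength constraints these codes enforce, a small checker (our own helper with a hypothetical name, not from the paper) that verifies the (d, k) constraint on a single track, i.e., that every run of 0's between consecutive 1's has length between d and k:

```python
# Check the (d, k) runlength constraint on a binary track: each pair of
# consecutive 1's must be separated by at least d and at most k zeros.

def satisfies_dk(bits, d, k):
    """True if every run of 0's between consecutive 1's has length in [d, k]."""
    ones = [i for i, b in enumerate(bits) if b == 1]
    return all(d <= j - i - 1 <= k for i, j in zip(ones, ones[1:]))

print(satisfies_dk([1, 0, 0, 1, 0, 0, 0, 1], 2, 3))   # True: gaps of 2 and 3 zeros
print(satisfies_dk([1, 1, 0, 1], 1, 3))               # False: adjacent 1's violate d=1
```

An n-track (d, k; n, r) code applies a constraint of this kind on each of the n parallel tracks, which is what permits the capacity gain over a single-track (d, k) code.
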
  • Fast parallel algorithms for decoding Reed-Solomon codes based on remainder polynomials

    Page(s): 873 - 885

    The problem of decoding cyclic error-correcting codes is one of solving a constrained polynomial congruence, often achieved using the Berlekamp-Massey or the extended Euclidean algorithm on a key equation involving the syndrome polynomial. A module-theoretic approach to the solution of polynomial congruences is developed here using the notion of exact sequences. This technique is applied to the Welch-Berlekamp (1986) key equation for decoding Reed-Solomon codes, for which the computation of syndromes is not required. It leads directly to new and efficient parallel decoding algorithms that can be realized with a systolic array. The architectural issues for one of these parallel decoding algorithms are examined in some detail.

Aims & Scope

IEEE Transactions on Information Theory publishes papers concerned with the transmission, processing, and utilization of information.



Editor-in-Chief
Frank R. Kschischang

Department of Electrical and Computer Engineering