
IEEE Journal on Selected Areas in Communications

Issue 4 • April 2001


  • Editorial: signal processing for high density storage channels

    Publication Year: 2001, Page(s): 577-581
    Cited by:  Papers (1)  |  Patents (1)
    Freely Available from IEEE
  • A combinatorial technique for constructing high-rate MTR-RLL codes

    Publication Year: 2001, Page(s): 582-588
    Cited by:  Papers (5)  |  Patents (2)

    We present advanced combinatorial techniques for constructing maximum runlength-limited (RLL) block codes and maximum transition run (MTR) codes. These codes find widespread application in recording systems. The proposed techniques are used to construct a high-rate multipurpose modulation code for recording systems. The code, a rate-16/17 (0,3,2,2) MTR code that also fulfills (0,15,9,9) RLL constraints, is a high-rate distance-enhancing code with additional constraints for improving timing and gain control. The encoder and decoder have a particularly efficient architecture and allow an instantaneous translation of 16-bit source words into 17-bit codewords and vice versa. The code has been implemented in Lucent read-channel chips and has excellent performance.

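The constraints named in the abstract above are mechanical properties of bit sequences and are easy to check. A minimal sketch (illustrative only, not the paper's construction): in NRZI notation a 1 marks a transition, so MTR(j) bounds the runs of 1s and a runlength constraint k bounds the runs of 0s.

```python
def max_run(bits, symbol):
    """Length of the longest run of `symbol` in `bits`."""
    best = run = 0
    for b in bits:
        run = run + 1 if b == symbol else 0
        best = max(best, run)
    return best

def satisfies_mtr_rll(nrzi_bits, j, k):
    """True if the NRZI sequence obeys MTR(j) (runs of 1s <= j)
    and the runlength constraint (runs of 0s <= k)."""
    return max_run(nrzi_bits, 1) <= j and max_run(nrzi_bits, 0) <= k
```

A codeword search would enumerate candidate words and keep only those passing this test.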
  • Symbol rate timing recovery for higher order partial response channels

    Publication Year: 2001, Page(s): 635-648
    Cited by:  Papers (11)  |  Patents (3)

    This paper provides a framework for analyzing and comparing timing recovery schemes for higher order partial response (PR) channels. Several classes of timing recovery schemes are analyzed. Timing recovery loops employing timing gradients or phase detectors derived from the minimum mean-square error (MMSE) criterion, the maximum likelihood (ML) criterion, and the timing function approach of Mueller and Muller (1976) (MM) are analyzed and compared. The paper formulates and analyzes MMSE timing recovery in the context of a slope look-up table (SLT), which is amenable to an efficient implementation. The properties and performance of the SLT-based timing loop are compared with the ML and MM loops. Analysis and time step simulations for a practical 16-state PR magnetic recording channel show that the output noise jitter of the ML phase detector is worse than that of the SLT-based phase detector. This is primarily due to the presence of self-noise in the ML detector. Consequently, the SLT-based phase detector is to be preferred. In comparing the SLT and MM based timing loops, it is found that both schemes have similar jitter performance.

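One common form of the Mueller and Muller timing gradient uses only symbol-rate samples and decisions, e = y[k-1]*a[k] - y[k]*a[k-1]. A first-order loop built on it might look like the sketch below; the loop gain and sign convention here are illustrative assumptions, not the paper's design.

```python
def mm_timing_loop(samples, decisions, gain=0.01):
    """First-order timing loop driven by the Mueller-Muller gradient.
    Returns the sequence of accumulated phase estimates."""
    phase, phases = 0.0, []
    for k in range(1, len(samples)):
        e = samples[k - 1] * decisions[k] - samples[k] * decisions[k - 1]
        phase += gain * e
        phases.append(phase)
    return phases
```

At the correct sampling phase the gradient averages to zero, so the loop holds its estimate.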
  • Art of constructing low-complexity encoders/decoders for constrained block codes

    Publication Year: 2001, Page(s): 589-601
    Cited by:  Papers (3)

    A rate p:q block encoder is a dataword-to-codeword assignment from 2^p p-bit datawords to 2^p q-bit codewords, and the corresponding block decoder is the inverse of the encoder. When designing block encoders/decoders for constrained systems, often more than 2^p codewords are available. In this paper, as our main contribution, we propose efficient heuristic computer algorithms to eliminate the excess codewords and to construct low hardware complexity block encoders/decoders. For (0,4/4) and (0,3/6) PRML constraints, block encoders/decoders generated using the proposed algorithms are comparable in complexity to human-generated encoders/decoders, but are significantly simpler than lexicographical encoders/decoders.

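A toy version of the excess-codeword problem: enumerate the q-bit words meeting a constraint (here a maximum zero-runlength, chosen for concreteness), then greedily discard the surplus. The cost function below (prefer heavier words) is a hypothetical stand-in; the paper's heuristics score candidates by encoder/decoder hardware complexity.

```python
from itertools import product

def candidate_codewords(q, k):
    """All q-bit words whose runs of 0s never exceed k."""
    words = []
    for w in product((0, 1), repeat=q):
        run = best = 0
        for b in w:
            run = run + 1 if b == 0 else 0
            best = max(best, run)
        if best <= k:
            words.append(w)
    return words

def select_codewords(q, k, p):
    """Greedy elimination: keep the 2**p 'cheapest' codewords,
    using Hamming weight as a toy cost proxy."""
    words = sorted(candidate_codewords(q, k), key=sum, reverse=True)
    return words[:2 ** p]
```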
  • Maximum runlength-limited codes with error control capabilities

    Publication Year: 2001, Page(s): 602-611
    Cited by:  Papers (32)  |  Patents (9)

    New methods are presented to protect maximum runlength-limited sequences against random and burst errors and to avoid error propagation. The methods employ parallel conversion techniques and enumerative coding algorithms that transform binary user information into constrained codewords. The new schemes have low complexity and are very efficient. The approach can be used for modulation coding in recording systems and for synchronization and line coding in communication systems. The schemes enable the use of high-rate constrained codes, since error control can be provided with capabilities similar to those for unconstrained sequences.

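The enumerative coding algorithms mentioned here work by counting constrained completions. A sketch for a single maximum zero-runlength constraint k, mapping an integer to the lexicographically index-th valid length-n word (a generic textbook construction, not the paper's exact scheme):

```python
from functools import lru_cache

def enumerative_rll(index, n, k):
    """Return the index-th (lexicographic) length-n binary word
    whose runs of 0s never exceed k."""
    @lru_cache(maxsize=None)
    def count(m, r):
        # Valid completions of length m with r zeros already trailing.
        if m == 0:
            return 1
        zeros = count(m - 1, r + 1) if r < k else 0
        return zeros + count(m - 1, 0)
    assert 0 <= index < count(n, 0), "index out of range for this constraint"
    bits, r = [], 0
    for pos in range(n):
        zeros = count(n - pos - 1, r + 1) if r < k else 0
        if index < zeros:
            bits.append(0)
            r += 1
        else:
            bits.append(1)
            index -= zeros
            r = 0
    return bits
```

Decoding reverses the walk, accumulating the counts of skipped subtrees; that reversibility is what makes enumerative codes attractive for limiting error propagation.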
  • A method for reducing the effects of thermal asperities

    Publication Year: 2001, Page(s): 662-667
    Cited by:  Papers (7)  |  Patents (1)

    A new method of combating the effects of thermal asperities (TAs) in disk drives is proposed. In the classic method, changing a parameter in the equalizer modifies the duration of a TA, but no attempt is made to modify the detector that follows the equalizer or to improve its performance. The proposed method involves modification of both the equalizer and the detector. Specifically, two channels are run in parallel. One channel is matched to the partial response polynomial P(D), whereas the other is matched to the partial response polynomial (1-D)P(D). P(D) is the target polynomial of the existing magnetic recording channel. The Viterbi detector in the (1-D)P(D) channel has a better decoded bit error rate (BER) during a TA, and the Viterbi detector in the P(D) channel has a better BER in the absence of a TA. The overall decoded bit stream is selected from these two Viterbi detectors according to whether a TA has been detected. The performance of the proposed method was studied via simulation for the magnetic recording channel with the target polynomial P(D)=(1-D)(1+D)^2 (EPR4). The proposed system was found to be superior to existing methods in the presence of TAs, while having the same performance in the presence of additive white Gaussian noise (AWGN).

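The advantage of the (1-D)P(D) branch comes from the extra (1-D) factor: it has a spectral null at DC, so the slowly varying baseline shift that a TA adds is suppressed. A minimal illustration of that property (not the paper's detector):

```python
def first_difference(samples):
    """(1 - D) filtering: the output is unchanged by any constant
    baseline offset, which is what a long TA transient approximates."""
    return [samples[k] - samples[k - 1] for k in range(1, len(samples))]
```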
  • Concatenated code system design for storage channels

    Publication Year: 2001, Page(s): 709-718
    Cited by:  Papers (11)

    A number of papers have been published on the concatenation of an outer code with a partial response (PR) channel, where the outer code is a turbo code, a convolutional code, or a low-density parity-check code. This paper deals with the second case, assuming EPR4 and modified extended EPR4 (MEEPR4) partial response (PR) targets. The goals in this work include (1) the joint optimization of interleaver and precoder for a fixed outer convolutional code and PR target, (2) the choice of optimal code rate for both PR targets assuming a Lorentzian model, and (3) an assessment of the performance of these codes in the presence of thermal asperities. We introduce mathematical and algorithmic tools for accomplishing these goals and present simulation results that support our approach. Among the positive results is the ability to lower the well-known error rate floor of these concatenated schemes for arbitrary PR channels.

  • Iterative decoding for partial response (PR) equalized magneto-optical (MO) data storage channels

    Publication Year: 2001, Page(s): 774-782
    Cited by:  Papers (14)  |  Patents (4)

    Turbo codes and low-density parity check (LDPC) codes with iterative decoding have received significant research attention because of their remarkable near-capacity performance for additive white Gaussian noise (AWGN) channels. Turbo code and LDPC code variants have been investigated as potential candidates for high-density magnetic recording channels suffering from low signal-to-noise ratios (SNR). We address the application of turbo codes and LDPC codes to magneto-optical (MO) recording channels. Our results focus on a variety of practical MO storage channel aspects, including storage density, partial response targets, the type of precoder used, and mark edge jitter. Instead of focusing just on bit error rates (BER), we also study the block error statistics. Our results for MO storage channels indicate that turbo codes of rate 16/17 can achieve coding gains of 3-5 dB over partial response maximum likelihood (PRML) methods for a 10^-4 target BER. Simulations also show that the performance of LDPC codes for MO channels is comparable to that of turbo codes, while requiring less computational complexity. Both LDPC codes and turbo codes with iterative decoding are seen to be robust to mark edge jitter.

  • The minimum description length principle for modeling recording channels

    Publication Year: 2001, Page(s): 719-729
    Cited by:  Papers (5)

    Modeling the magnetic recording channel has long been a challenging research problem. Typically, model simplicity has been traded off against accuracy: for a given family of channel models, the accuracy grows with the model size, at the price of a more complex model. We develop a formalism that strikes a balance between these opposing criteria. The formalism is based on Rissanen's (1978) notion of minimum required complexity, the minimum description length (MDL). The family of channel models in this study is the family of signal-dependent autoregressive channel models, chosen for its simplicity of description and experimentally verified modeling accuracy. For this family of models, the minimum description complexity is directly linked to the minimum required complexity of a detector. Furthermore, the minimum description principle for autoregressive models lends itself to an intuitively pleasing interpretation. The description complexity is the sum of two terms: (1) the entropy of the sequence of uncorrelated Gaussian random variables driving the autoregressive filters, which decreases with the model order (i.e., model size), and (2) a penalty term proportional to the model size. We exploit this interpretation to formulate the minimum description length criterion for the magnetic recording channel corrupted by nonlinearities and signal-dependent noise. Results on synthetically generated data are presented to validate the method. We then apply the method to data collected from a spin stand to establish the model size and parameters that strike a balance between complexity and accuracy.

  • On reverse concatenation and soft decoding algorithms for PRML magnetic recording channels

    Publication Year: 2001, Page(s): 612-618
    Cited by:  Papers (4)  |  Patents (6)

    High-density magnetic recording systems require increasingly sophisticated signal processing techniques. In magnetic recording channels, the information bits are encoded by the concatenation of an outer nonbinary error correcting code (ECC) and an inner line code. Furthermore, the high intersymbol interference that characterizes the channel is controlled by “partial response” equalization. This paper presents a study of a better-integrated decoding procedure between the inner and outer codes. Inverting the order of the concatenated codes (reverse concatenation) allows the direct mapping of soft information from the partial response channel to the outer ECC. A simplified soft decoding technique, based on the use of erasures, is applied to two typical magnetic recording systems. Performance curves are obtained by an analysis of the distributions of the reliability measures associated with incorrect and correct symbols.

  • Maximum transition run codes for generalized partial response channels

    Publication Year: 2001, Page(s): 619-634
    Cited by:  Papers (14)  |  Patents (12)

    A new twins constraint for maximum transition run (MTR) codes is introduced to eliminate quasi-catastrophic error propagation in sequence detectors for generalized partial response channels with spectral nulls both at dc and at the Nyquist frequency. Two variants of the twins constraint, which depend on whether the generalized partial response detector trellis is unconstrained or j-constrained, are studied. Deterministic finite-state transition diagrams that represent the twins constraint are specified, and the capacity of the new class of MTR constraints is computed. The connection between (G,I) constraints and MTR(j) constraints is clarified. Code design methodologies that are based on look-ahead coding in combination with violation detection/substitution as well as on state splitting are used to obtain several specific constructions of high-rate MTR codes.

  • Reduced complexity sequence detection for high-order partial response channels

    Publication Year: 2001, Page(s): 649-661
    Cited by:  Papers (2)  |  Patents (3)

    Detector hardware complexity of high-order partial response magnetic read channels is a major obstacle to high data rate operation and to reduced area and power consumption. The method presented here reduces the complexity of single-step and two-step implementations of the Viterbi detector by applying a distance-enhancing code that eliminates some states from the code trellis. The complexity of the detector is further reduced by eliminating less-probable branches from the trellis. This is accomplished by a simple control mechanism that uses the signs of the consecutive input samples. The reduced set of add-compare-select (ACS) units is dynamically assigned to the detector states, decreasing the complexity of the Viterbi detector by roughly 50%. The method is demonstrated on high-order partial response systems with the E2PR4 target and an 11-level/32-state target. The simulation results show negligible bit error rate (BER) degradation for signal-to-noise ratios in the range of operation of contemporary disk drive read channels.

  • A novel fast approach for estimating error propagation in decision feedback detectors

    Publication Year: 2001, Page(s): 668-676
    Cited by:  Papers (1)  |  Patents (1)

    The study of error-burst statistics is important for all detection systems, and more so for the decision feedback class. In data storage applications, many detection systems use decision feedback in one form or another. Fixed-delay tree search with decision feedback (FDTS/DF) and decision feedback equalization (DFE) are the direct forms, whereas partial response detectors such as the reduced state sequence estimator (RSSE) and noise predictive maximum likelihood (NPML) detectors are the other forms. Although DF reduces the system complexity, it is inevitably linked with error propagation (EP), which can be quantified using error-burst statistics. Analytical evaluation of these statistics is difficult, if not impossible, because of the complexity of the problem. Hence, the usual practice is to use computer simulations. However, the computational time in traditional bit-by-bit simulations can be prohibitive at meaningful signal-to-noise ratios. In this paper, we propose a novel approach for fast estimation of error-burst statistics in FDTS/DF detectors, which is also applicable to other detection systems. In this approach, error events are initiated more frequently than they would occur naturally by artificially injecting noise samples. These noise samples are generated using a transformation that results in a significant reduction in computational complexity. Simulation studies show that the EP performance obtained by the proposed method closely matches that obtained by bit-by-bit simulations, while saving as much as 99% of the simulation time.

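Injecting noise so that error events happen far more often than they naturally would, then reweighting each event by its true likelihood, is importance sampling. A generic sketch on a Gaussian tail probability (the mean-shift transformation is a textbook choice and only stands in for the paper's transformation):

```python
import math
import random

def tail_prob_importance(threshold, trials=20000, seed=1):
    """Estimate P(N(0,1) > threshold) by sampling noise centred on the
    threshold, so the rare event fires on about half the trials, and
    weighting each hit by f(z)/g(z) = exp(-threshold*z + threshold**2/2)."""
    random.seed(seed)
    total = 0.0
    for _ in range(trials):
        z = random.gauss(threshold, 1.0)  # biased (injected) noise sample
        if z > threshold:
            total += math.exp(-threshold * z + threshold ** 2 / 2)
    return total / trials
```

A naive simulation would need on the order of 1/P trials per event; here P(Z > 4), about 3.2e-5, is pinned down with 2e4 draws.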
  • Low-complexity iterative decoding with decision-aided equalization for magnetic recording channels

    Publication Year: 2001, Page(s): 699-708
    Cited by:  Papers (40)

    Turbo codes are applied to magnetic recording channels by treating the channel as a rate-one convolutional code that requires a soft a posteriori probability (APP) detector for channel inputs. The complexity of conventional APP detectors, such as the BCJR algorithm or the soft-output Viterbi algorithm (SOVA), grows exponentially with the channel memory length. This paper derives a new APP module for binary intersymbol interference (ISI) channels based on minimum mean squared error (MMSE) decision-aided equalization (DAE), whose complexity grows linearly with the channel memory length, and it shows that the MMSE DAE is also optimal by the maximum a posteriori probability (MAP) criterion. The performance of the DAE is analyzed, and an implementable turbo-DAE structure is proposed. The reduction of channel APP detection complexity reaches 95% for a five-tap ISI channel when the DAE is applied. Simulations performed on partial response channels show close to optimum performance for this turbo-DAE structure. Error propagation of the DAE is also studied, and two fixed-delay solutions are proposed based on combining the DAE with the BCJR algorithm.

  • Low-complexity maximum-likelihood decoding of shortened enumerative permutation codes for holographic storage

    Publication Year: 2001, Page(s): 783-790
    Cited by:  Papers (2)  |  Patents (1)

    Volume holographic memories (VHM) are page-oriented optical storage systems whose pages commonly contain on the order of one million pixels. Typically, each stored data page is composed of an equal number of binary pixels in either a low-contrast (“off”) state or a high-contrast (“on”) state. By increasing the number of “off” pixels and decreasing the number of “on” pixels per page, there is an associated gain in VHM system storage capacity. When grayscale pixels are used, a further gain is possible by similarly controlling the fraction of pixels at each gray level. This paper introduces a constant-weight, nonbinary, shortened enumerative permutation modulation block code to produce pages that exploit the proposed capacity advantage. In addition to the code description, we present an encoder and a low-complexity maximum-likelihood (ML) decoder for the shortened permutation code. A proof verifies our claim of ML decoding. Applying this class of codes to VHMs yields a predicted 49% increase in storage capacity when recording modulation-coded 3-bit (eight-gray-level) pixels compared with a VHM using a binary signaling alphabet and equiprobable (unbiased) data.

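Enumerative indexing of constant-weight words (the combinatorial number system) is the standard machinery behind permutation modulation codes like the one above. A generic sketch of that mechanism, not the paper's shortened code:

```python
from math import comb

def index_to_constant_weight(index, n, w):
    """Lexicographically index-th n-bit word of Hamming weight w."""
    bits = []
    for pos in range(n):
        c = comb(n - pos - 1, w)  # words that place a 0 at this position
        if index < c:
            bits.append(0)
        else:
            bits.append(1)
            index -= c
            w -= 1
    return bits

def constant_weight_to_index(bits):
    """Inverse mapping: recover the index from the word."""
    n, w, index = len(bits), sum(bits), 0
    for pos, b in enumerate(bits):
        if b:
            index += comb(n - pos - 1, w)
            w -= 1
    return index
```

Fixing the weight w per page is exactly what lets the code hold the fraction of “on” pixels at a chosen, capacity-favourable value.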
  • Turbo codes cascaded with high-rate block codes for (0,k)-constrained channels

    Publication Year: 2001, Page(s): 677-685
    Cited by:  Papers (5)  |  Patents (6)

    We consider several issues in the analysis and design of turbo coded systems for (0,k) input-constrained channels. These constraints commonly arise in magnetic recording channels. This system is characterized by a high-rate turbo code driving a high-rate (n-1)/n, small-length (0,k) block code. We discuss the properties of the (0,k) code that affect its performance on both an additive white Gaussian noise (AWGN) and a precoded dicode channel. We address soft-in soft-out (SISO) decoding of linear and nonlinear (0,k) codes and show that good (0,k) codes exist even when d_min=1. For the (0,k)-constrained AWGN channel, we present several rate (n-1)/n block codes that optimally trade off bit-error-rate performance with k. For the precoded dicode channel, we show that the systematic (0, n-1) modulation codes are superior to most other rate (n-1)/n block codes in terms of error-rate performance, and their attractiveness is increased by the fact that they do not contribute any significant complexity to the overall system.

  • Loose composite constraint codes and their application in DVD

    Publication Year: 2001, Page(s): 765-773

    Constrained coding is used in recording systems to translate an arbitrary sequence of input data to a channel sequence with special properties required by the physics of the medium. Very often, more than one constraint is imposed on a recorded sequence; typically, a run-length constraint is combined with a spectral-null constraint. We introduce a low-complexity encoder structure for composite constraints, based on loose multimode codes. The first channel constraint is imposed strictly, and the second constraint is imposed in a probabilistic fashion. Relaxing the second constraint is beneficial because it enables higher code rates and simplifies the encoder. To control the second constraint, a multimode encoder is used. We represent a set of multimode coded sequences by a weighted trellis and propose using a limited trellis search to select the optimal output. Using this method, we modify the EFM+ code used in the digital versatile disk (DVD). We combine EFM+'s run-length constraint with first- and second-order spectral-null constraints. The resulting EFM++ code gives more than 10-dB improvement in suppression of low-frequency spectral content in the servo bandwidth over the original EFM+ code, with the same complexity.

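The probabilistic control of the second (spectral-null) constraint can be pictured with the running digital sum (RDS): a multimode encoder holds several candidate representations of each input block and emits the one keeping the RDS near zero. A minimal one-step sketch (the EFM++ search in the paper runs over a weighted trellis rather than a single step):

```python
def pick_dc_free(candidates, rds):
    """Choose the candidate codeword (bits mapped to +/-1) that leaves
    the running digital sum closest to zero; return it and the new RDS."""
    def final_rds(word):
        s = rds
        for b in word:
            s += 1 if b else -1
        return s
    best = min(candidates, key=lambda w: abs(final_rds(w)))
    return best, final_rds(best)
```

Bounding the RDS is what suppresses low-frequency spectral content, since the DC energy of a sequence grows with its accumulated digital sum.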
  • Pattern-dependent noise prediction in signal-dependent noise

    Publication Year: 2001, Page(s): 730-743
    Cited by:  Papers (45)  |  Patents (70)

    Maximum and near-maximum likelihood sequence detectors in signal-dependent noise are discussed. It is shown that the linear prediction viewpoint allows a very simple derivation of the branch metric expression that has previously been shown to be optimum for signal-dependent Markov noise. The resulting detector architecture is viewed as a noise predictive maximum likelihood detector that operates on an expanded trellis and relies on computation of branch-specific, pattern-dependent noise predictor taps and predictor error variances. The performance of various low-complexity structures is compared using the positional-jitter/width-variation model for transition noise. It is shown that when medium noise dominates, a reasonably low complexity detector that incorporates pattern-dependent noise prediction achieves a significant signal-to-noise ratio gain relative to the extended class 4 partial response maximum likelihood detector. Soft-output detectors as well as the use of soft decision feedback are discussed in the context of signal-dependent noise.

  • Effect of precoding on the convergence of turbo equalization for partial response channels

    Publication Year: 2001, Page(s): 686-698
    Cited by:  Papers (36)

    The effect of the precoder on the convergence of turbo equalization for precoded partial response channels is studied. The idea is to consider the turbo decoding algorithm as a one-parameter dynamical system and to study the effect of the precoder on the fixed points of the system. It is shown that precoding results in a loss in fidelity during the first iteration and that this loss depends on the precoder. Further, the rate at which the extrinsic information increases with iterations is also dependent on the precoder. The net result of these two effects is used to explain several existing results in the literature about the performance of different precoders. A design criterion based on convergence is then proposed, and the impact of precoding on the design of the outer code is studied. Finally, the design of precoders in the presence of an error correcting code, such as a Reed-Solomon code, is studied.

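The precoders under study are typically of the form 1/(1 XOR D): the channel input is the running XOR of the data bits, so a data 1 becomes a transition. A sketch of that mapping and its inverse (generic, not one of the specific precoders analyzed in the paper):

```python
def precode(bits):
    """1/(1 + D) precoder (mod 2): y[k] = x[k] XOR y[k-1], with y[-1] = 0."""
    y, prev = [], 0
    for x in bits:
        prev ^= x
        y.append(prev)
    return y

def deprecode(y):
    """Inverse (1 + D) operation: x[k] = y[k] XOR y[k-1]."""
    return [b ^ a for a, b in zip([0] + y, y)]
```

Because the precoder is recursive, a single data bit can affect the entire rest of the channel sequence, which is exactly why it changes the fixed-point behaviour of iterative decoding.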
  • A survey of codes for optical disk recording

    Publication Year: 2001, Page(s): 756-764
    Cited by:  Papers (6)  |  Patents (4)

    We report on 20 years of development of codes for optical disk recording systems. We describe the state of the art and outline feasible options for future extensions and improvements.

  • Dropout-tolerant read channels

    Publication Year: 2001, Page(s): 744-755
    Cited by:  Papers (4)  |  Patents (2)

    Dropouts are intermittent losses of signal commonly seen in magnetic tape recording readout. The main reason for such losses is increased spacing between the head and the medium due to media defects or debris particles. The resulting signal is not only degraded by the apparent amplitude loss, but the characteristics of the pulses due to transitions are also changed. Moreover, in many cases, the locations of the pulses are altered, causing excessive amounts of peakshift. In this paper, a model linking the liftoff to these effects is presented. Experimental verification of the model using actual signals from a test tape drive is also given. Artificial dropout waveforms generated using this model are used to test two different read channel strategies. The first approach is an equalization-based correction scheme that attempts to undo the dropout. Results indicate that the dropout effects can be almost completely eliminated if an appropriate equalization procedure is applied. As an alternative approach, it is shown that the use of turbo coding in the presence of dropouts appears to be promising.


Aims & Scope

IEEE Journal on Selected Areas in Communications focuses on all telecommunications, including telephone, telegraphy, facsimile, and point-to-point television, by electromagnetic propagation.


Meet Our Editors

Editor-in-Chief
Muriel Médard
MIT