
Search Results

You searched for: ((achieving low approach noise without sacrificing capacity)metadata)
314 Results returned

    Achieving low approach noise without sacrificing capacity

    Ren, Liling ; Clarke, J.-P. ; Nhut Tan Ho
    Digital Avionics Systems Conference, 2003. DASC '03. The 22nd

    Volume: 1
    Digital Object Identifier: 10.1109/DASC.2003.1245810
    Publication Year: 2003 , Page(s): 1.E.3 - 1.1-9 vol.1
    Cited by:  Papers (1)

    IEEE Conference Publications

    Advanced noise abatement procedures such as the three degree decelerating approach (TDDA) can significantly reduce the noise impact of aircraft during approach. With existing aircraft performance and flight operation uncertainties, however, implementation of the TDDA would require an increase in the initial separation between aircraft that would result in a significant reduction in runway capacity. Simulation results indicate that this reduction in runway capacity is on the order of 50%, which is not acceptable for any procedure that must be used in high traffic scenarios. In this paper, we introduce a modified three degree decelerating approach (MTDDA) that provides the same noise benefits as the TDDA with little or no loss in capacity relative to conventional approach procedures. Simulation results indicate that for a representative aircraft mix, the capacity of the MTDDA is within 2% of the maximum possible capacity using conventional approach procedures.
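    As a rough back-of-the-envelope illustration of the separation/capacity trade-off described in this abstract (a toy sketch with invented numbers, not the authors' Monte Carlo simulation), doubling the in-trail separation roughly halves the arrival rate:

    # Hypothetical illustration: runway arrival capacity vs. final-approach separation.
    # All numbers are invented; the paper's figures come from detailed simulation.
    def arrivals_per_hour(separation_nmi: float, approach_speed_kts: float) -> float:
        """Arrival rate if successive aircraft stay separation_nmi apart at approach_speed_kts."""
        return approach_speed_kts / separation_nmi   # (nmi/h) / nmi = arrivals per hour

    conventional = arrivals_per_hour(3.0, 140.0)     # ~46.7 arrivals/h
    with_buffer = arrivals_per_hour(6.0, 140.0)      # ~23.3 arrivals/h with doubled spacing
    loss = 1.0 - with_buffer / conventional
    print(f"capacity loss from doubled separation: {loss:.0%}")   # ~50%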


    Reduced complexity Sphere Decoding

    Boyu Li ; Ayanoglu, E.
    Wireless Communications and Mobile Computing Conference (IWCMC), 2011 7th International

    Digital Object Identifier: 10.1109/IWCMC.2011.5982522
    Publication Year: 2011 , Page(s): 147 - 151
    Cited by:  Papers (1)

    IEEE Conference Publications

    In Multiple-Input Multiple-Output (MIMO) systems, Sphere Decoding (SD) can achieve performance equivalent to full search Maximum Likelihood (ML) decoding with reduced complexity. Several researchers reported techniques that reduce the complexity of SD further. In this paper, a new technique is introduced which decreases the computational complexity of SD substantially, without sacrificing performance. The reduction is accomplished by deconstructing the decoding metric to decrease the number of computations and exploiting the structure of a lattice representation. Simulation results show that this approach achieves substantial gains for the average number of real multiplications and real additions needed to decode one transmitted vector symbol. As an example, for a 4 × 4 MIMO system, the gains in the number of multiplications are 85% with 4-QAM and 90% with 64-QAM, at low SNR. View full abstract»
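    For context, the baseline algorithm that this paper accelerates can be viewed as a depth-first tree search with radius pruning. The sketch below is a minimal, unoptimized sphere decoder for a real-valued model y = Hx + n over a finite symbol alphabet; it is my own sketch and does not include the metric-deconstruction or lattice-structure reductions proposed in the paper.

    import numpy as np

    def sphere_decode(y, H, alphabet):
        """Minimal depth-first sphere decoder: argmin over x of ||y - H x||^2, x[i] in alphabet.
        A branch is pruned as soon as its partial distance exceeds the best leaf found so far."""
        n = H.shape[1]
        Q, R = np.linalg.qr(H)                       # ||y - Hx||^2 = ||Q^T y - Rx||^2 for square H
        z = Q.T @ y
        best = {"x": None, "d2": np.inf}

        def search(level, x, d2):
            if d2 >= best["d2"]:
                return                               # outside the current sphere: prune
            if level < 0:
                best["x"], best["d2"] = x.copy(), d2
                return
            for s in alphabet:                       # enumerate candidate symbols at this layer
                x[level] = s
                resid = z[level] - R[level, level:] @ x[level:]
                search(level - 1, x, d2 + resid ** 2)

        search(n - 1, np.zeros(n), 0.0)
        return best["x"]

    # Toy usage: 4x4 real MIMO channel, BPSK-like symbols.
    rng = np.random.default_rng(0)
    H = rng.normal(size=(4, 4))
    x_true = rng.choice([-1.0, 1.0], size=4)
    y = H @ x_true + 0.05 * rng.normal(size=4)
    print(sphere_decode(y, H, alphabet=(-1.0, 1.0)), x_true)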


    A Fast Algorithm for Robust Mixtures in the Presence of Measurement Errors

    Jianyong Sun ; Kaban, A.
    Neural Networks, IEEE Transactions on

    Volume: 21 , Issue: 8
    Digital Object Identifier: 10.1109/TNN.2010.2048219
    Publication Year: 2010 , Page(s): 1206 - 1220
    Cited by:  Papers (3)

    IEEE Journals & Magazines

    In experimental and observational sciences, detecting atypical, peculiar data from large sets of measurements has the potential of highlighting candidates of interesting new types of objects that deserve more detailed domain-specific follow-up study. However, measurement data is nearly never free of measurement errors. These errors can generate false outliers that are not truly interesting. Although many approaches exist for finding outliers, they have no means to tell to what extent the peculiarity is not simply due to measurement errors. To address this issue, we have developed a model-based approach to infer genuine outliers from multivariate data sets when measurement error information is available. This is based on a probabilistic mixture of hierarchical density models, in which parameter estimation is made feasible by a tree-structured variational expectation-maximization algorithm. Here, we further develop an algorithmic enhancement to address the scalability of this approach, in order to make it applicable to large data sets, via a K-dimensional-tree based partitioning of the variational posterior assignments. This creates a non-trivial tradeoff between a more detailed noise model to enhance the detection accuracy, and the coarsened posterior representation to obtain computational speedup. Hence, we conduct extensive experimental validation to study the accuracy/speed tradeoffs achievable in a variety of data conditions. We find that, at low-to-moderate error levels, a speedup factor that is at least linear in the number of data points can be achieved without significantly sacrificing the detection accuracy. The benefits of including measurement error information in the modeling are evident in all situations, and the gain roughly recovers the loss incurred by the speedup procedure in large error conditions. We analyze and discuss in detail the characteristics of our algorithm based on results obtained on appropriately designed synthetic data experiments, and we also demonstrate its operation in a real application example.
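    The K-dimensional-tree partitioning mentioned above is, at heart, a recursive median split of the data; points that land in the same leaf can then share one coarsened posterior assignment, which is where the speedup comes from. The following is my own simplified sketch of that partitioning step only, not the authors' variational-EM code, and the names are hypothetical.

    import numpy as np

    def kd_partition(X, min_leaf=32):
        """Recursively split the rows of X along the dimension of widest spread at the
        median, returning a list of index arrays (one per leaf)."""
        def split(indices):
            if len(indices) <= min_leaf:
                return [indices]
            sub = X[indices]
            d = np.argmax(sub.max(axis=0) - sub.min(axis=0))     # widest dimension
            median = np.median(sub[:, d])
            left = indices[sub[:, d] <= median]
            right = indices[sub[:, d] > median]
            if len(left) == 0 or len(right) == 0:                # degenerate split: stop here
                return [indices]
            return split(left) + split(right)
        return split(np.arange(X.shape[0]))

    # Toy usage: group 10,000 noisy 3-D measurements into coarse cells.
    X = np.random.default_rng(1).normal(size=(10_000, 3))
    leaves = kd_partition(X, min_leaf=200)
    print(len(leaves), "leaves, largest:", max(len(l) for l in leaves))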


    A comparison study on KL domain penalized weighted least-squares approach to noise reduction for low-dose cone-beam CT

    Hao Zhang ; Yan Liu ; Hao Han ; Yi Fan ; Jing Wang ; Zhengrong Liang
    Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC), 2012 IEEE

    Digital Object Identifier: 10.1109/NSSMIC.2012.6551758
    Publication Year: 2012 , Page(s): 3328 - 3332

    IEEE Conference Publications

    Dose reduction is a major task for cone-beam computed tomography (CBCT) applications because of the potential side effect of X-ray exposure to the patients. One of the strategies to achieve low dose is to lower the X-ray tube current and/or shorten the exposure time in CT scanners. However, the image quality from the low mAs acquisition is severely degraded due to excessive quantum noise. In this work, we investigated three implementations of the Karhunen-Loeve domain penalized weighted least-squares (KL-PWLS) scheme to adaptively treat the noise in the low-mAs CBCT sinograms. The motivation is based on the observations that strong data correlation exists between neighboring views and neighboring slices in CBCT and that the KL transform can decompose the correlated signals for adaptive noise treatment. The three implementations were: (1) performing the KL transform among neighboring views, which reduces the three-dimensional (3D) noise-treatment procedure into a series of 2D operations, (2) performing the KL transform among neighboring slices in the 3D space, and (3) performing the KL transform in a view-by-view manner along the slice direction for sparse data sampling, which reduces the procedure into a series of 1D operations. The noise-treated sinogram data were then reconstructed by the analytical Feldkamp-Davis-Kress (FDK) algorithm. The effectiveness of the presented KL-PWLS noise reduction strategy was evaluated using two physical phantoms (CatPhan600® and anthropomorphic head). Noise in the reconstructed CBCT-FDK images from a low 10 mA protocol was greatly suppressed without noticeable sacrifice of the spatial resolution compared with those from a high 80 mA protocol, which implies a potential dose reduction by as high as a factor of 8 for the two phantoms. The noise-resolution tradeoff curves indicate that the KL-PWLS implementations considering the neighboring slices (implementations (2) and (3)) outperform the one considering the neighboring views (implementation (1)) in better resolution preservation at the same noise level, which is probably due to the structural continuity among neighboring slices. For further dose reduction by sparse data sampling, implementation (3) can be a potential choice attributed to its computational advantages over the other two implementations.
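    A highly reduced sketch of the generic KL-domain PWLS idea behind implementation (1): transform correlated neighboring views with the KL (eigen) transform, smooth each decorrelated component with a penalized weighted least-squares step, then transform back. This is my illustration of the principle only; the weights and the quadratic penalty below are simplistic placeholders, not the authors' noise model.

    import numpy as np

    def kl_pwls_denoise(views, beta=5.0):
        """views: (K, N) array of K neighboring sinogram views with N detector bins.
        1) KL transform across the K views (eigenvectors of their covariance),
        2) quadratic-penalty weighted least-squares smoothing of each KL component,
        3) inverse KL transform back to the view domain."""
        K, N = views.shape
        mean = views.mean(axis=1, keepdims=True)
        cov = np.cov(views)                              # K x K covariance across views
        _, eigvecs = np.linalg.eigh(cov)
        kl = eigvecs.T @ (views - mean)                  # KL components, shape (K, N)

        # First-difference matrix for the roughness penalty s^T D^T D s.
        D = np.eye(N, k=1)[:-1] - np.eye(N)[:-1]
        smoothed = np.empty_like(kl)
        for k in range(K):
            w = 1.0 / (kl[k].var() + 1e-9)               # crude placeholder noise weight
            A = w * np.eye(N) + beta * D.T @ D
            smoothed[k] = np.linalg.solve(A, w * kl[k])  # solve (W + beta*D'D) s = W y

        return eigvecs @ smoothed + mean                 # inverse KL transform

    # Toy usage: 3 correlated noisy views of the same smooth profile.
    rng = np.random.default_rng(2)
    profile = np.sin(np.linspace(0, np.pi, 256))
    views = profile + 0.1 * rng.normal(size=(3, 256))
    clean = kl_pwls_denoise(views)
    print(np.abs(clean - profile).mean(), "<", np.abs(views - profile).mean())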


    Visually improved image compression by combining a conventional wavelet-codec with texture modeling

    Nadenau, M.J. ; Reichel, J. ; Kunt, M.
    Image Processing, IEEE Transactions on

    Volume: 11 , Issue: 11
    Digital Object Identifier: 10.1109/TIP.2002.804280
    Publication Year: 2002 , Page(s): 1284 - 1294
    Cited by:  Patents (1)

    IEEE Journals & Magazines

    Human observers are very sensitive to a loss of image texture in photo-realistic images. For example, a portrait image without the fine skin texture appears unnatural. Once the image is decomposed by a wavelet transformation, this texture is represented by many wavelet coefficients of low and medium amplitude. The conventional encoding of all these coefficients is very expensive in bitrate. Instead, such an unstructured or stochastic texture can be modeled by a noise process and characterized with very few parameters. Thus, a hybrid scheme can be designed that encodes the structural image information with a conventional wavelet codec and the stochastic texture in a model-based manner. Such a scheme, called WITCH (Wavelet-based Image/Texture Coding Hybrid), is proposed; it implements this hybrid coding approach while preserving the features of progressive and lossless coding. Its low computational complexity and the parameter coding cost of only 0.01 bpp make it a valuable extension of conventional codecs. A comparison with the JPEG2000 image compression standard showed that the WITCH scheme achieves the same subjective quality while increasing the compression ratio by more than a factor of two.
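    The core of the hybrid idea, keeping the structured (high-amplitude) wavelet coefficients and replacing the discarded low-amplitude "texture" coefficients with parametric noise at reconstruction, can be sketched with a single-level Haar transform as follows. This is my simplification of the WITCH principle only; a real codec would also entropy-code the kept coefficients and work over many subbands.

    import numpy as np

    def haar2(img):
        """Single-level 2-D Haar transform (image dimensions must be even)."""
        a = (img[0::2, :] + img[1::2, :]) / 2.0
        d = (img[0::2, :] - img[1::2, :]) / 2.0
        ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
        lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
        hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
        hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
        return ll, lh, hl, hh

    def ihaar2(ll, lh, hl, hh):
        """Inverse of haar2."""
        a = np.empty((ll.shape[0], 2 * ll.shape[1]))
        d = np.empty_like(a)
        a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
        d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
        img = np.empty((2 * a.shape[0], a.shape[1]))
        img[0::2, :], img[1::2, :] = a + d, a - d
        return img

    def witch_like_codec(img, threshold, rng):
        """Keep only high-amplitude detail coefficients; record the standard deviation of the
        discarded ones (one 'texture' parameter per band) and re-synthesize them as noise."""
        ll, *details = haar2(img)
        rebuilt = []
        for band in details:
            keep = np.abs(band) >= threshold
            sigma = band[~keep].std() if (~keep).any() else 0.0
            rebuilt.append(np.where(keep, band, rng.normal(0.0, sigma, band.shape)))
        return ihaar2(ll, *rebuilt)

    rng = np.random.default_rng(4)
    img = rng.normal(size=(64, 64)) * 2 + np.outer(np.linspace(0, 50, 64), np.ones(64))
    out = witch_like_codec(img, threshold=1.5, rng=rng)
    print(img.shape == out.shape)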


    Gain-bandwidth adjusting technique of a 36.1 GHz single stage low noise amplifier using 0.13μm CMOS process

    Rashid, S.M.S. ; Ali, S.N. ; Roy, A. ; Rashid, A.B.M.H.
    Advanced Communication Technology, 2009. ICACT 2009. 11th International Conference on

    Volume: 01
    Publication Year: 2009 , Page(s): 184 - 188
    Cited by:  Papers (1)

    IEEE Conference Publications

    This paper demonstrates that inserting a small resistance at the drain of a cascode LNA can be the simplest way of achieving higher bandwidth with only a slight degradation in noise figure. A 36.1 GHz single stage low noise amplifier is designed in a 0.13 μm CMOS process with a simple passive output matching circuit. The circuit is simulated using Cadence Spectre, and simulation results show a forward gain (S21) of 11.4 dB at 36.1 GHz with 4.9 GHz bandwidth. Reverse isolation is less than -24 dB, and input and output matching are -30.36 dB and -27.65 dB, respectively. The NF of the circuit is around 2.9 dB. The circuit is then simulated again, this time employing a small resistance at the drain of the cascode LNA, to show that adding such a resistance is the easiest way to increase the bandwidth of the circuit without significant sacrifice of noise figure. This single stage LNA consumes only 3.37 mW of power when driven from a 1.2 V power supply. To the best of the authors' knowledge, a gain-bandwidth adjusting technique for a single stage low noise amplifier operating at such a high frequency is yet to be reported.


    A Wideband Inductorless LNA With Local Feedback and Noise Cancelling for Low-Power Low-Voltage Applications

    Hongrui Wang ; Li Zhang ; Zhiping Yu
    Circuits and Systems I: Regular Papers, IEEE Transactions on

    Volume: 57 , Issue: 8
    Digital Object Identifier: 10.1109/TCSI.2010.2042997
    Publication Year: 2010 , Page(s): 1993 - 2005
    Cited by:  Papers (19)

    IEEE Journals & Magazines

    A wideband noise-cancelling low-noise amplifier (LNA) without the use of inductors is designed for low-voltage and low-power applications. Based on the common-gate-common-source (CG-CS) topology, a new approach employing local negative feedback is introduced between the parallel CG and CS stages. The moderate gain at the source of the cascode transistor in the CS stage is utilized to boost the transconductance of the CG transistor. This leads to an LNA with higher gain and lower noise figure (NF) compared with the conventional CG-CS LNA, particularly under low power and voltage constraints. By adjusting the local open-loop gain, the NF can be optimized by distributing the power consumption among transistors and resistors based on their contribution to the NF. The optimal value of the local open-loop gain can be obtained by taking into account the effect of phase shift at high frequency. The linearity is improved by employing two types of distortion-cancelling techniques. Fabricated in a 0.13-μm RF CMOS process, the LNA achieves a voltage gain of 19 dB and an NF of 2.8-3.4 dB over a 3-dB bandwidth of 0.2-3.8 GHz. It consumes 5.7 mA from a 1-V supply and occupies an active area of only 0.025 mm2. View full abstract»


    Geometric Calibration of Third-Generation Computed Tomography Scanners from Scans of Unknown Objects using Complementary Rays

    Holt, K.M.
    Image Processing, 2007. ICIP 2007. IEEE International Conference on

    Volume: 4
    Digital Object Identifier: 10.1109/ICIP.2007.4379971
    Publication Year: 2007 , Page(s): IV - 129 - IV - 132
    Cited by:  Papers (1)

    IEEE Conference Publications

    To achieve good image quality for computed tomography scans, it is important to accurately know the geometrical relationship between the x-ray source, detector, and axis of rotation. Conventional geometric calibration algorithms generally require particular calibration phantoms, such as a small pin or wire, which may not be practical for all scanner types, particularly for large industrial scanners. This paper presents an alternative framework to calibrate system geometry from scans of arbitrary objects, without prior knowledge of the object's form. This lessens physical construction requirements, and permits post-scan geometric calibration from any arbitrary scan data. Experimental results show that central ray calibration using this approach can give results accurate to 0.01 channels for low-noise conditions, or 0.1-0.45 channels under higher noise levels. View full abstract»


    Time domain synchronous OFDM based on simultaneous multi-channel reconstruction

    Linglong Dai ; Jintao Wang ; Zhaocheng Wang ; Tsiaflakis, P. ; Moonen, M.
    Communications (ICC), 2013 IEEE International Conference on

    Digital Object Identifier: 10.1109/ICC.2013.6654997
    Publication Year: 2013 , Page(s): 2984 - 2989

    IEEE Conference Publications

    Time domain synchronous OFDM (TDS-OFDM) can achieve a higher spectrum efficiency than standard cyclic prefix OFDM (CP-OFDM). Currently, it can support constellations up to 64QAM, but cannot support higher-order constellations like 256QAM due to the residual mutual interference between the pseudorandom noise (PN) guard interval and the OFDM data block. To solve this problem, we break with the traditional approach of iterative interference cancellation and propose the idea of using multiple inter-block-interference (IBI)-free regions of very small size to realize simultaneous multi-channel reconstruction under the framework of structured compressive sensing, whereby the sparse nature of wireless channels and the fact that path delays vary much more slowly than path gains are jointly exploited. In this way, the mutually conditional time-domain channel estimation and frequency-domain data demodulation in TDS-OFDM can be decoupled without the use of IBI removal. We then propose the adaptive simultaneous orthogonal matching pursuit (A-SOMP) algorithm with low complexity to realize accurate multi-channel reconstruction, whose performance is close to the Cramér-Rao lower bound (CRLB). Simulation results confirm that the proposed scheme can support 256QAM without changing the current signal structure, so the spectrum efficiency can be increased by about 30%.
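    The recovery step named above builds on simultaneous orthogonal matching pursuit. A generic SOMP routine, which the proposed A-SOMP adapts with adaptive stopping and knowledge of slowly varying path delays, can be sketched as follows; this is my simplification with a fixed sparsity level and hypothetical variable names.

    import numpy as np

    def somp(Phi, Y, sparsity):
        """Simultaneous OMP: recover a jointly sparse X (common support across the columns
        of Y) from Y = Phi @ X.  Phi: (M, N) measurement matrix, Y: (M, C) channels,
        sparsity: number of nonzero rows (e.g. shared path delays)."""
        residual = Y.copy()
        support = []
        for _ in range(sparsity):
            # Pick the atom most correlated with the residual, summed over channels.
            corr = np.abs(Phi.conj().T @ residual).sum(axis=1)
            corr[support] = 0.0                          # never pick the same atom twice
            support.append(int(np.argmax(corr)))
            # Jointly re-fit all channels on the current support (least squares).
            X_s, *_ = np.linalg.lstsq(Phi[:, support], Y, rcond=None)
            residual = Y - Phi[:, support] @ X_s
        X = np.zeros((Phi.shape[1], Y.shape[1]), dtype=Y.dtype)
        X[support] = X_s
        return X

    # Toy usage: two "channels" sharing 3 common delay positions.
    rng = np.random.default_rng(3)
    Phi = rng.normal(size=(32, 128)) + 1j * rng.normal(size=(32, 128))
    X_true = np.zeros((128, 2), dtype=complex)
    X_true[[5, 40, 90]] = rng.normal(size=(3, 2))
    X_hat = somp(Phi, Phi @ X_true, sparsity=3)
    print(sorted(np.flatnonzero(np.abs(X_hat).sum(axis=1) > 1e-6)))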


    Feature analysis for the reversible watermarked electrooculography signal using Low distortion Prediction-error Expansion

    Dey, N. ; Biswas, S. ; Das, P. ; Das, A. ; Chaudhuri, S.S.
    Communications, Devices and Intelligent Systems (CODIS), 2012 International Conference on

    Digital Object Identifier: 10.1109/CODIS.2012.6422280
    Publication Year: 2012 , Page(s): 624 - 627

    IEEE Conference Publications

    At present, most hospitals and diagnostic centers globally have started using wireless media for the exchange of biomedical information (Electronic Patient Report or hospital logo) for mutual availability of therapeutic case studies. Exchange of information among hospitals and medical centers requires a high level of reliability and security. Signal integrity can be verified, and authenticity and control over the copy process can be established, by embedding a watermark in the original information as multimedia content. Electrooculography (EOG) is a medical test that records the movements and position of the eyes. In the present work, the Low distortion Prediction-error Expansion technique is used for watermark insertion and extraction in an EOG signal without degrading its diagnostic parameters. In this approach the correlation between the original watermark and the extracted watermark is quite high. The Signal-to-Noise Ratio (SNR) between the original EOG signal and the recovered EOG signal improves markedly, which demonstrates the robustness of the method. In the second part of the present work, different features of the original EOG signal, the watermarked EOG signal and the recovered EOG signal are analysed.
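    The basic prediction-error expansion (PEE) mechanism underlying such a scheme can be shown on a 1-D integer signal: each sample is predicted from its neighbour and the prediction error is expanded to carry one payload bit, which the decoder can undo exactly. The sketch below is plain textbook PEE with a trivial predictor and no overflow handling, not the authors' low-distortion variant.

    import numpy as np

    def pee_embed(signal, bits):
        """Reversible prediction-error expansion on an integer 1-D signal.
        Each sample (except the first) is predicted by its left neighbour; the
        prediction error e is expanded to 2*e + bit."""
        s = signal.astype(np.int64).copy()
        for i, b in zip(range(1, len(s)), bits):
            e = s[i] - s[i - 1]            # prediction error
            s[i] = s[i - 1] + 2 * e + b    # expand the error and hide one bit
        return s

    def pee_extract(watermarked):
        """Recover the hidden bits and restore the original signal exactly."""
        s = watermarked.astype(np.int64).copy()
        bits = []
        for i in range(len(s) - 1, 0, -1):
            e_expanded = s[i] - s[i - 1]
            bits.append(int(e_expanded & 1))         # payload bit is the parity
            s[i] = s[i - 1] + (e_expanded >> 1)      # floor-halve to undo the expansion
        return s, bits[::-1]

    # Toy usage on an EOG-like integer trace.
    eog = np.array([100, 102, 101, 105, 104, 104], dtype=np.int64)
    wm = pee_embed(eog, bits=[1, 0, 1, 1, 0])
    restored, recovered = pee_extract(wm)
    assert np.array_equal(restored, eog) and recovered == [1, 0, 1, 1, 0]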


    Two-step quantization in multibit ΔΣ modulators

    Lindfors, S. ; Halonen, K.A.I.
    Circuits and Systems II: Analog and Digital Signal Processing, IEEE Transactions on

    Volume: 48 , Issue: 2
    Digital Object Identifier: 10.1109/82.917785
    Publication Year: 2001 , Page(s): 171 - 176
    Cited by:  Papers (8)

    IEEE Journals & Magazines

    An architecture to simplify the circuit implementation of the internal analog-to-digital (A/D) converter in a ΔΣ modulator is proposed. The architecture is based on dividing the A/D conversion into two time steps, which makes the internal quantization feasible with much higher resolution than with conventional solutions. Furthermore, the time steps are interleaved so that the resolution improvement is achieved without sacrificing the speed. It is shown, with a linearized model, that the order of the noise shaping is increased by one with respect to the coarse quantization error made during the first step. For a high oversampling ratio, the coarse quantization error made in the first step is easily suppressed to an insignificant level due to the one order higher noise shaping. Depending on the partitioning of the bits between the conversion steps, the coarse error will dominate below a certain oversampling ratio. However, it is shown that the technique can be extended to more than one order higher noise shaping, making it useful for low oversampling ratios as well.


    A low power consumption driver with low acoustics for piezoelectric synthetic jets

    Ramabhadran, R. ; Glaser, J.S. ; de Bock, H.P.
    Energy Conversion Congress and Exposition (ECCE), 2013 IEEE

    Digital Object Identifier: 10.1109/ECCE.2013.6647049
    Publication Year: 2013 , Page(s): 2692 - 2697

    IEEE Conference Publications

    The low profile piezoelectric synthetic jet is a promising approach for forced air convection cooling of electronics for high density and portable applications. The synthetic jet considered in this paper, known as a DCJ [2], consists of a pair of piezoelectric discs mounted on a frame and driven with a voltage, which causes the discs to mimic the action of a pair of bellows. This DCJ requires a sinusoidal waveform of 125-175 Hz at an amplitude up to bQV, and must operate from a 5V DC source. The mostly capacitive load of the DCJ presents a challenge for typical amplifiers. Furthermore, many applications require quiet operation; hence the jet driving waveform must have very low distortion to prevent audible acoustic noise. This paper describes a bidirectional power driver topology for driving capacitive loads based on a dual flyback topology, along with a low-cost, pure sine reference generator with predistortion to allow a clean output waveform without feedback. The driver achieves low power consumption (~250 mW) with low harmonic content. The use of predistortion and a delta-sigma DAC compensates for the inherent flyback converter nonlinearity and the low resolution DAC typical of low-cost microcontrollers. The paper describes and presents experimental results for a design that accomplishes these objectives in a 30 mm × 30 mm × 4 mm volume.


    Region-Based Anomaly Localisation in Crowded Scenes via Trajectory Analysis and Path Prediction

    Teng Zhang ; Wiliem, A. ; Lovell, B.C.
    Digital Image Computing: Techniques and Applications (DICTA), 2013 International Conference on

    Digital Object Identifier: 10.1109/DICTA.2013.6691519
    Publication Year: 2013 , Page(s): 1 - 7

    IEEE Conference Publications

    In this paper, we propose an approach for locating anomalies in crowded scenes in surveillance videos. In contrast to previous approaches, the proposed approach does not rely on traditional tracking techniques, which tend to fail in crowded scenes. Instead, the anomalies are tracked based on the information taken from a set of anomaly classifiers. To this end, each video frame is divided into non-overlapping regions wherein a set of low-level features are extracted. After that, we apply the anomaly classifiers, which determine whether there is an anomaly in each region. We then derive the anomaly trajectory by connecting the anomalous regions temporally across the video frames. Finally, we propose path prediction using a linear Support Vector Machine (SVM) to smooth the trajectory. By doing this, we are able to better locate anomalies in the crowded scene. We tested our approach on the UCSD Anomaly Detection dataset, which contains crowded scenes, and achieved notable improvement over the state-of-the-art results without sacrificing computational simplicity.
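    As a toy illustration of the region-linking step (connecting anomalous regions temporally into a trajectory), the fragment below simply thresholds per-region anomaly scores and takes their centroid in each frame; the actual system uses learned anomaly classifiers and an SVM-based path predictor, which this sketch omits.

    import numpy as np

    def link_anomalies(score_maps, threshold=0.8):
        """score_maps: (T, H, W) per-region anomaly scores for T frames.
        Returns, per frame, the centroid (row, col) of the thresholded anomalous
        regions, or None when no region is anomalous."""
        trajectory = []
        for frame in score_maps:
            rows, cols = np.nonzero(frame >= threshold)
            trajectory.append(None if rows.size == 0 else (rows.mean(), cols.mean()))
        return trajectory

    # Toy usage: an anomaly drifting to the right across 5 frames of a 4x6 grid.
    maps = np.zeros((5, 4, 6))
    for t in range(5):
        maps[t, 2, t] = 1.0
    print(link_anomalies(maps))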


    A soft self-commutating method using minimum control circuitry for multiple-string LED drivers

    Junsik Kim ; Jiyong Lee ; Shihong Park
    Solid-State Circuits Conference Digest of Technical Papers (ISSCC), 2013 IEEE International

    Digital Object Identifier: 10.1109/ISSCC.2013.6487777
    Publication Year: 2013 , Page(s): 376 - 377

    IEEE Conference Publications

    Light-emitting diodes (LEDs) are widely used in general lighting due to several advantages, including high efficiency, high reliability, long life, and environmental friendliness. Recently, various converter-free methods for non-isolated LED drivers with multiple LED strings connected in series have been introduced, enabling both higher efficiency and power factor (PF) as well as lower total harmonic distortion (THD) [1-3]. In multiple-string LED drivers, the efficiency and PF are enhanced as the number of LED strings increases because of a low overhead voltage. However, as the operational voltage range decreases, it is difficult to find a proper commutation time using input voltage sensing approaches due to input voltage noise and LED voltage variation [4]. Other concerns are EMI and EMC noise caused by high di/dt and dv/dt in hard commutations. When the LED current is high, the negative effects of hard commutation become worse and the required di/dt control circuits are more complicated [5]. To meet EMI and EMC regulations for lighting without adding on-board EMI filters, soft commutation is essential. In order to overcome these problems, we propose a soft self-commutating method using a Source-Coupled Pair (SCP) and reference voltages. The conventional control circuits required for an appropriate commutation time and soft commutation are no longer necessary. The fabricated 6-string LED driver IC is capable of achieving high efficiency (92.2%), high PF (0.996) and low THD (8.6%) under the 22W/110V AC condition.


    A 0.6V 2.9µW mixed-signal front-end for ECG monitoring

    Yip, M. ; Bohorquez, J.L. ; Chandrakasan, A.P.
    VLSI Circuits (VLSIC), 2012 Symposium on

    Digital Object Identifier: 10.1109/VLSIC.2012.6243792
    Publication Year: 2012 , Page(s): 66 - 67

    IEEE Conference Publications

    This paper presents a mixed-signal ECG front-end that uses aggressive voltage scaling to maximize power-efficiency and facilitate integration with low-voltage DSPs. 50/60Hz interference is canceled using mixed-signal feedback, enabling ultra-low-voltage operation by reducing dynamic range requirements. Analog circuits are optimized for ultra-low-voltage, and a SAR ADC with a dual-DAC architecture eliminates the need for a power-hungry ADC buffer. Oversampling and ΔΣ-modulation leveraging near-VT digital processing are used to achieve ultra-low-power operation without sacrificing noise performance and dynamic range. The fully-integrated front-end is implemented in a 0.18μm CMOS process and consumes 2.9μW from 0.6V. View full abstract»


    Two-step quantization architectures for multibit ΔΣ-modulators

    Lindfors, S. ; Halonen, K.
    Circuits and Systems, 2000. Proceedings. ISCAS 2000 Geneva. The 2000 IEEE International Symposium on

    Volume: 2
    Digital Object Identifier: 10.1109/ISCAS.2000.856249
    Publication Year: 2000 , Page(s): 25 - 28 vol.2

    IEEE Conference Publications

    The implementation of the internal A/D converter in a ΔΣ modulator can be simplified by utilizing two-step quantization, which makes the internal quantization feasible with much higher resolution than with conventional solutions. The A/D conversion is divided into two time steps, which significantly reduces the circuit complexity when compared to flash-type conversion. The loop stability is sustained by feeding back a coarse result obtained during the first time step. The second step, yielding a finer resolution, is interleaved with the next sample so that the resolution improvement is achieved without sacrificing the speed. It is shown, with a linearized model, that the order of the noise shaping is increased by one with respect to the coarse quantization error made during the first step. For a high oversampling ratio, the coarse quantization error made in the first step is easily suppressed to an insignificant level due to the one order higher noise shaping. Depending on the partitioning of the bits between the conversion steps, the coarse error will dominate below a certain oversampling ratio. However, it is shown that the technique can be extended to more than one order higher noise shaping, making it useful for low oversampling ratios as well.


    Robust Multiresolution Coding

    Jun Chen ; Dumitrescu, S. ; Ying Zhang ; Jia Wang
    Communications, IEEE Transactions on

    Volume: 58 , Issue: 11
    Digital Object Identifier: 10.1109/TCOMM.2010.093010.100024
    Publication Year: 2010 , Page(s): 3186 - 3195

    IEEE Journals & Magazines

    In multiresolution coding a source sequence is encoded into a base layer and a refinement layer. The refinement layer, constructed using a conditional codebook, is in general not decodable without the correct reception of the base layer. By relating multiresolution coding with multiple description coding, we show that it is in fact possible to construct multiresolution codes in certain ways so that the refinement layer alone can be used to reconstruct the source to achieve a nontrivial distortion. As a consequence, one can improve the robustness of the existing multiresolution coding schemes without sacrificing the efficiency. Specifically, we obtain an explicit expression of the minimum distortion achievable by the refinement layer for arbitrary finite alphabet sources with Hamming distortion measure. Experimental results show that the information-theoretic limits can be approached using a practical robust multiresolution coding scheme based on low-density generator matrix codes. View full abstract»


    Improving the performance of label-free optical biosensors

    Armani, A.M. ; Hunt, H.K.
    Winter Topicals (WTM), 2011 IEEE

    Digital Object Identifier: 10.1109/PHOTWTM.2011.5730048
    Publication Year: 2011 , Page(s): 65 - 66

    IEEE Conference Publications

    Summary form only given. Rapid and real-time detection of antigens, bacteria, viruses, etc., for medical diagnostics and environmental monitoring requires the development of highly sensitive biosensors. Traditionally, labeled sensors, such as ELISA assays and fluorescent immunoassays, have been used as highly sensitive detection platforms in complex environments, such as serum or whole blood. These sensors are particularly successful in such environments due to their high specificity to the target molecule. However, labeled sensors detect the presence of the fluorescent probe rather than the molecule of interest, requiring a priori knowledge of the target. In direct contrast, label-free sensors, such as cantilever sensors, optical waveguide sensors, and surface plasmon resonators, detect the molecule itself, but often with less sensitivity or with difficulty in minimizing false positives in complex environments. To overcome these limitations, improvements in both the sensitivity and specificity of label-free sensors must be simultaneously addressed. Fortunately, a new class of high-performance optical sensors, developed from whispering gallery mode microcavities, has shown improved capabilities for sensing. Although these devices were initially designed for telecommunications applications, their very low optical loss makes them prime candidates for biosensing platforms. This is due to the correspondence between the low material loss of the microcavities and the long circulating lifetimes of photons within the cavity, which enables "optical amplification" of otherwise undetectable signals and therefore improves the signal to noise ratio. The primary metric used to quantitatively describe the optical loss is the quality factor (Q) of the optical resonator. The two most commonly used optical resonators, the microsphere and the microtoroid, have Q-factors in excess of 100 million, corresponding to photon storage times greater than 100 ns. The performance of these highly sensitive optical platforms may be further improved by the addition of a component that adds specificity to the device. This would improve the ability of these platforms to compete with labeled assays in complex environments. Typically, this is done via physical adsorption to the surface, but this technique may not be stable to environmental changes, such as temperature or pH fluctuations. An improved method is the covalent attachment of probe molecules that specifically detect the target molecule of interest. However, to date, a general strategy for the bioconjugation of label-free, high sensitivity whispering gallery mode resonators via covalent attachment has not been developed, primarily due to the difficulties of surface conjugation without adversely impacting the sensitivity of the device. Therefore, it is crucial to develop a uniform, covalent surface functionalization process that is capable of maintaining the optical device's performance metrics, such as the quality factor. In this approach, we use the silica ultra-high-Q microtoroid microcavity as the test platform, as it is the only microcavity fabricated on a silicon substrate which has achieved Q factors in excess of 100 million. We selectively functionalize the surface of these silica microtoroids using a three step process: 1) hydroxylation, 2) amination, and 3) biotinylation. Optical and scanning electron microscopy are used to qualitatively characterize the as-fabricated and surface-modified devices. Microcavity analysis techniques are used to quantitatively probe the effects of the surface modifications on the quality factor of the devices. Together, these techniques enabled the identification of the conditions best suited to ensuring the devices' high performance. Additionally, the surface chemistry and properties of these devices are explored via X-ray photoelectron spectroscopy, contact angle measurements, and fluorescent imaging at each reaction step, and show uniform surface coverage.


    Transmit nonlinear zero forcing: energy efficient receiver oriented transmission in MIMO CDMA mobile radio downlinks

    Meurer, M. ; Weber, T. ; Qiu, W.
    Spread Spectrum Techniques and Applications, 2004 IEEE Eighth International Symposium on

    Digital Object Identifier: 10.1109/ISSSTA.2004.1371703
    Publication Year: 2004 , Page(s): 260 - 269
    Cited by:  Papers (6)

    IEEE Conference Publications

    For the downlink of MIMO TDD/CDMA mobile radio systems, transmission schemes were recently proposed which can be classified as receiver (Rx) oriented. A main asset of these schemes is their low receiver complexity. In addition, they operate without sacrificing downlink transmission resources to training signals for channel estimation, which is beneficial for capacity. First investigations concerned linear transmitter (Tx) algorithms and nonlinear Tx algorithms based on the principle of Tomlinson-Harashima precoding; both of these, especially in scenarios with high system loads, show unsatisfactory performance. Optimum nonlinear Tx algorithms studied later by Peel et al. circumvent this performance drawback, but have been shown to be far from feasible in today's mobile radio systems. We try to fill the gap between the feasible Tx algorithms, with their performance drawbacks, and the optimum Tx algorithm with a scalable nonlinear transmitter algorithm termed transmit nonlinear zero forcing (TxNZF). The crux of this algorithm consists in designing the transmitted signal at the access point (AP) groupwise by a nonlinear approach, and in using multiply connected decision regions in the detectors of the mobile terminals (MT). TxNZF achieves almost optimum performance.


    Minimum Spanning Circle Method for Using Spare Subcarriers in PAPR Reduction of OFDM Systems

    Shunjia Liu ; Yi Zeng ; Bo Hu
    Signal Processing Letters, IEEE

    Volume: 15
    Digital Object Identifier: 10.1109/LSP.2008.925744
    Publication Year: 2008 , Page(s): 513 - 516
    Cited by:  Papers (3)

    IEEE Journals & Magazines

    In orthogonal frequency division multiplexing (OFDM) systems, spare subcarriers are available due to low signal-to-noise ratio (SNR) in some subcarriers or fragments after resource scheduling. Utilizing these spare subcarriers to carry scrambling codes, the peak-to-average-power ratio (PAPR) of the system can be largely reduced without sacrificing bandwidth, with no in-band distortion or out-of-band radiation, but only with a little increase of signal transmission power. In this letter, we derive that the PAPR reduction problem using one spare subcarrier is equivalent to the well-known minimum spanning circle problem. Based on this concept, an approach called minimum spanning circle method (MSCM) is proposed to find the optimal scrambling codes. As validated in simulations, our method is effective, efficient, and simple to realize. View full abstract»
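    The geometric subproblem named in this letter, the minimum spanning (enclosing) circle of a set of points in the complex plane, can be solved by brute force for small point sets, as sketched below. This is my illustration only; mapping the circle's centre back to the optimal scrambling symbol on the spare subcarrier is not shown.

    from itertools import combinations
    import numpy as np

    def _circumcenter(a, b, c):
        """Circumcentre of three complex points, or None if they are (nearly) collinear."""
        ax, ay, bx, by, cx, cy = a.real, a.imag, b.real, b.imag, c.real, c.imag
        d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
        if abs(d) < 1e-12:
            return None
        ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
              + (cx*cx + cy*cy) * (ay - by)) / d
        uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
              + (cx*cx + cy*cy) * (bx - ax)) / d
        return complex(ux, uy)

    def min_spanning_circle(points):
        """Smallest circle enclosing all complex points: brute force over every circle
        defined by a pair (as a diameter) or a triple (circumcircle)."""
        pts = list(points)
        candidates = [((a + b) / 2.0, abs(a - b) / 2.0) for a, b in combinations(pts, 2)]
        for a, b, c in combinations(pts, 3):
            ctr = _circumcenter(a, b, c)
            if ctr is not None:
                candidates.append((ctr, abs(a - ctr)))
        best = (0j, float("inf"))
        for ctr, r in candidates:
            if r < best[1] and all(abs(p - ctr) <= r + 1e-9 for p in pts):
                best = (ctr, r)
        return best

    # Toy usage: the largest time-domain OFDM samples, treated as complex points.
    rng = np.random.default_rng(5)
    peaks = rng.normal(size=8) + 1j * rng.normal(size=8)
    centre, radius = min_spanning_circle(peaks)
    print(centre, radius)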


    OFDM Transmission Without Guard Interval in Fast-Varying Underwater Acoustic Channels

    Zakharov, Y.V. ; Morozov, A.K.
    Oceanic Engineering, IEEE Journal of

    Volume: PP , Issue: 99
    Digital Object Identifier: 10.1109/JOE.2013.2296842
    Publication Year: 2014 , Page(s): 1 - 15

    IEEE Early Access Articles

    In this paper, we consider orthogonal frequency-division multiplexing (OFDM) transmission in fast-varying underwater acoustic channels. We demonstrate on experimental data that reliable communications can be achieved without any guard interval (such as cyclic prefix or zero padding) and with a superimposed pilot. Such OFDM transmission possesses a high spectral efficiency, but incurs severe intersymbol and intercarrier interference, and interference from the superimposed pilot. We propose a receiver that can efficiently deal with the interference and has a relatively low complexity as most of its operations are based on fast Fourier transform and local spline interpolation. The receiver is verified in an experiment with a transducer towed by a surface vessel moving at a high speed; a complicated trajectory of the transducer resulted in a severe Doppler distortion of the signal received on a single hydrophone. The performance of the proposed receiver is investigated for different parameter settings and compared with an ideal receiver with perfect channel knowledge, operating in interference-free scenarios, and mimicking the signal-to-noise ratio (SNR) of the experiment. The proposed receiver has provided error-free detection of encoded data at data rates of 0.5 b/s/Hz at a distance of 40 km and 0.33 b/s/Hz at a distance of 80 km, approaching the performance of the ideal receiver with a less than 3-dB loss in SNR. View full abstract»


    Low-noise-figure photonic links without pre-amplification

    Ackerman, E.I. ; Betts, G. ; Burns, W.K. ; Cox, C.H. ; Phillips, M.R. ; Roussell, H.
    Sarnoff Symposium, 2009. SARNOFF '09. IEEE

    Digital Object Identifier: 10.1109/SARNOF.2009.4850375
    Publication Year: 2009 , Page(s): 1 - 4
    Cited by:  Papers (1)

    IEEE Conference Publications

    Using optical fiber to retrieve signals from remote sensors has several advantages compared to remoting by means of metallic waveguides such as coaxial cable. Fiber-optic retrieval of an RF signal can be achieved by down-converting and digitizing the signal for conveyance by a digital fiber-optic link, or it can be achieved by conveying the RF signal over an analog fiber-optic link before digitization. The latter approach can be realized with a minimum of hardware and dc power required at the sensing site, provided that the analog fiber-optic link has a sufficiently low noise figure without a pre-amplifier. Early demonstrations of "amplifierless" analog fiber-optic links typically reported very high noise figures, in excess of 30 dB. In the last decade or so, several techniques have been developed to improve this situation. We describe five such techniques and show that they have resulted in much lower measured noise figures for amplifierless links. One technique, for example, has yielded noise figures < 5 dB for amplifierless links at frequencies of up to 10 GHz. The existence of amplifierless links with such low noise figures may enable remote sensing of signals in situations where the size, weight, and power (SWAP) of the remote hardware is of primary concern.


    The Investigation of an Electron Resonance Spectrometer Utilizing a Generalized Feedback Microwave Oscillator

    Payne, J.B.
    Microwave Theory and Techniques, IEEE Transactions on

    Volume: 12 , Issue: 1
    Digital Object Identifier: 10.1109/TMTT.1964.1125750
    Publication Year: 1964 , Page(s): 48 - 54
    Cited by:  Papers (1)

    IEEE Journals & Magazines

    In this investigation, an entirely different approach is taken toward the development of a "self-stabilized" paramagnetic resonance (EPR) spectrometer system which eliminates the usual low-power klystron oscillator, electronic frequency stabilizing equipment, and the complex superheterodyne detector without sacrificing detection sensitivity. This system, known as an oscillator spectrometer, consists of a microwave amplifier containing a sample-carrying network element in the positive feedback loop. The microwave device oscillates at the network's central resonant frequency with essentially instantaneous frequency stability. Expressions relating the change in power level and frequency of oscillation as a function of the change in the network attenuation and phase at magnetic resonance are derived. The system's ultimate sensitivity is determined by analyzing the noise within the oscillator loop. In general, the noise that limits the detection of the resonance signal is principally that generated by the amplifier, and thus a simple video detector can be used. The sensitivity of this spectrometer was found to be comparable with that of the conventional bridge-type spectrometer.


    Packet acquisition for time-frequency hopped random multiple access

    Hoang Nguyen ; Block, F.J.
    Military Communications Conference, 2008. MILCOM 2008. IEEE

    Digital Object Identifier: 10.1109/MILCOM.2008.4753239
    Publication Year: 2008 , Page(s): 1 - 7

    IEEE Conference Publications

    This paper investigates packet acquisition in a random multiple access (RMA) environment. An analytical approach to performance evaluation is provided, which enables the waveform designer to set design parameters quickly such that the required performance is achieved without over-design. RMA is a medium access methodology used for time-frequency hopped signaling and has been widely considered in the literature for mobile ad-hoc networking. It has a number of desirable characteristics, including low latency, frequency diversity and robustness against undesired interference. However, a major difficulty exists: in order to demodulate a packet, the receiver must first determine its arrival time. The difficulty is exacerbated by multiple access interference, a situation for which the receiver performance has not been well quantified. To illustrate the accuracy of the analytical results, simulations are performed over a wide range of design parameters, e.g., the pulse length, pulse duty factor and the number of synchronization pulses, and various operation-time variables such as the number of users and the relative power levels of the signal, interference, and noise.
