
IEEE Transactions on Instrumentation and Measurement

Issue 2 • Date Feb. 2014


Displaying Results 1 - 25 of 32
  • Table of contents

    Page(s): C1 - 249
    Freely Available from IEEE
  • IEEE Transactions on Instrumentation and Measurement publication information

    Page(s): C2
    Freely Available from IEEE
  • Weighing Fusion Method for Truck Scales Based on Prior Knowledge and Neural Network Ensembles

    Page(s): 250 - 259

    This paper presents a new approach to compensating a truck scale's weighing errors based on prior knowledge and neural network ensembles (PKNNEs). A truck scale is a typical nonlinear system, and compensating its weighing errors with conventional methods is tedious and labor-intensive, leading to low weighing accuracy. The general idea of the proposed approach is to build individual neural networks (NNs) and to design constraint conditions for optimizing neural network ensembles (NNEs) using prior knowledge of the truck scale. First, three uncorrelated individual NNs are created by using the step-distribution characteristics of the truck scale's permissible maximum weighing error. Second, the constraint conditions for training the individual NNs are constructed by using the ideal weighing model and its derivatives, which can significantly improve the generalization ability of the NNs, especially when training samples are few or lacking. The paper discusses the weighing principle of the truck scale, derives its weighing-error models, and gives the detailed design procedure of the proposed method. Experimental results demonstrate the effectiveness of this method, and field tests of a truck scale with PKNNEs show that it meets the weighing-accuracy requirement for medium-class scales defined by OIML R76, "Nonautomatic Weighing Instruments."

  • Outdoor Insulators Testing Using Artificial Neural Network-Based Near-Field Microwave Technique

    Page(s): 260 - 266

    This paper presents a novel artificial neural network (ANN)-based near-field microwave nondestructive testing technique for defect detection and classification in nonceramic insulators (NCIs). Distribution-class 33-kV NCI samples with no defects, air voids in the silicone rubber and fiberglass core, cracks in the fiberglass core, and small metallic inclusions between the fiberglass core and the shank were inspected. The microwave inspection system uses an open-ended rectangular waveguide sensor operating in the near field at a frequency of 24 GHz. A data acquisition system was used to record the measured data, and an ANN was trained to classify the different types of defects. The results show that all defects were detected and classified correctly with high recognition rates.

  • A Practical Fundamental Frequency Extraction Algorithm for Motion Parameters Estimation of Moving Targets

    Page(s): 267 - 276

    In this paper, a practical method is proposed for extracting a moving target's fundamental frequency (MTFF) from its acoustic signal, developed for the application of motion parameter estimation. Starting from the analysis of the target frequency model and the acoustic Doppler model, the characteristics of the moving target's signal are discussed. Based on the signatures of the target's acoustic signal, a new approximate greatest common divisor (AGCD) method is developed to obtain an initial fundamental frequency (IFF). Then, the harmonic number associated with the IFF is determined by maximizing an objective function formulated as an impulse-train-weighted symmetric average magnitude sum function (SAMSF) of the observed signal. The frequency of the SAMSF is determined by the target's acoustic signal, the period of the impulse train is controlled by the estimated IFF harmonic, and the maximization of the objective function is carried out through time-domain matching of the periodicity of the impulse train with that of the SAMSF. Finally, a precise fundamental frequency is obtained from the IFF and its harmonic number. To demonstrate the effectiveness of the proposed method, experiments are conducted on wheeled vehicles, tracked vehicles, and propeller-driven aircraft. Evaluation of the algorithm's performance in comparison with traditional methods indicates that the proposed method is practical for fundamental frequency extraction of moving targets.

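    The AGCD step described in the abstract can be sketched as a simple grid search over candidate fundamentals; this is a hypothetical illustration (the function name, grid bounds, and the subharmonic tie-break are assumptions, not the authors' algorithm):

    ```python
    import numpy as np

    def approx_gcd_fundamental(freqs, f_min=5.0, f_max=60.0, step=0.01, rel_tol=1.1):
        """Estimate an initial fundamental frequency as the approximate GCD of
        detected harmonic frequencies, by grid-searching the candidate that
        minimizes total deviation from its nearest integer multiples."""
        freqs = np.asarray(freqs, dtype=float)
        candidates = np.arange(f_min, f_max, step)
        # nearest harmonic number of each line for each candidate (at least 1)
        k = np.maximum(np.round(freqs[None, :] / candidates[:, None]), 1.0)
        costs = np.abs(freqs[None, :] - k * candidates[:, None]).sum(axis=1)
        # multiples of a true fundamental also fit every subharmonic, so among
        # near-minimal costs keep the largest candidate fundamental
        ok = costs <= costs.min() * rel_tol + 1e-9
        return float(candidates[ok].max())

    # Noisy spectral lines from a harmonic signal with ~12.5 Hz fundamental
    lines = [12.4, 25.1, 37.6, 50.0]
    f0 = approx_gcd_fundamental(lines)
    ```
    
    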
  • Adaptive Calibration of Channel Mismatches in Time-Interleaved ADCs Based on Equivalent Signal Recombination

    Page(s): 277 - 286

    In this paper, we present an adaptive calibration method for correcting channel mismatches in time-interleaved analog-to-digital converters (TIADCs). An equivalent signal recombination structure is proposed to eliminate aliasing components when the input signal bandwidth is greater than the Nyquist bandwidth of the sub-ADCs in a TIADC. A band-limited pseudorandom noise sequence is employed as the desired output of the TIADC and is simultaneously converted into an analog signal, which is injected into the TIADC as the training signal during the calibration process. Adaptive calibration filters with parallel structures are used to drive the TIADC output toward the desired output. The main advantage of the proposed method is that it avoids complicated error estimation or measurement and largely reduces the computational complexity. A four-channel 12-bit 400-MHz TIADC and its calibration algorithm are implemented in hardware, and the measured spurious-free dynamic range is greater than 76 dB over 90% of the entire Nyquist band. The hardware implementation cost can be dramatically reduced, especially in instrumentation or measurement equipment applications, where dedicated calibration phases and system stoppages are common.

  • Design and Implementation of a Preprocessing Circuit for Bandpass Signals Acquisition

    Page(s): 287 - 294

    The processing capabilities included in the acquisition block of real-time digital oscilloscopes largely determine the overall performance of the instrument. Their remarkable improvement has made it possible to enhance performance in terms of increased measurement rate, automation, and reduced measurement uncertainty related to quantization and noise. This paper presents the implementation of a preprocessing circuit for a novel acquisition mode for bandpass signals, which is characterized by an increased vertical resolution. Although the theoretical foundations were recently presented with simulation results, here the circuit implementation of the acquisition mode is presented. The focus is on mid- or low-cost digital oscilloscopes that can improve their vertical resolution at negligible additional cost. First, a preliminary field-programmable gate array implementation is considered to evaluate the achievable performance, both theoretically and through experimental tests. Then, a custom application-specific integrated circuit implementation in 28-nm complementary metal-oxide-semiconductor technology is analyzed. Along with the parameter optimization, the work experimentally tests the acquisition mode and evaluates the effects of nonideal characteristics such as finite word length and nonideal filtering. The increase in the effective number of bits (ENoB) is up to 2.5 bit, whereas the ENoB degradation due to finite word length and nonideal filtering is quantified as ~1.1 and 0.5 bit, respectively. The design highlights substantial margin for parallel implementation, which makes the proposed solution a strong candidate for next-generation oscilloscopes.

  • A 7.65-mW 5-bit 90-nm 1-GS/s Folded Interpolated ADC Without Calibration

    Page(s): 295 - 303

    Power consumption of high-speed low-resolution analog-to-digital converters (ADCs) can be reduced by means of calibration. However, this solution has drawbacks such as time-slot allocation for calibration and increased die area. This paper presents a 5-bit 1-GS/s ADC without calibration, fabricated in 90-nm CMOS. Low power consumption has been ensured by acting at both the architecture and comparator levels. A folded interpolated architecture has been adopted; however, compared with standard solutions that use static preamplifiers, the interpolation technique has been implemented with dynamic comparators, enabling significant power savings. Moreover, despite the high operating frequency, intrinsic matching has been ensured while keeping power consumption low. The ADC uses double-tail dynamic comparators operating with a fixed bias current and with reduced kickback noise. Large input transistors are used to guarantee the targeted matching, thereby avoiding calibration. The ADC achieves a 4.3-bit effective number of bits (ENOB) and a 260-MHz effective resolution bandwidth while consuming 7.65 mW from a 1.2-V supply. The ADC figure of merit is 0.39 pJ/conversion-step, which is state-of-the-art performance for an uncalibrated ADC at this sampling frequency and resolution.

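    The quoted figure of merit follows from the standard Walden formula, FoM = P / (2^ENOB · f_s); a minimal sketch reproducing the abstract's numbers (function name assumed):

    ```python
    def walden_fom(power_w, enob, f_s):
        """Walden figure of merit: energy per conversion step,
        FoM = P / (2**ENOB * f_s), in joules per conversion-step."""
        return power_w / (2 ** enob * f_s)

    # The reported ADC: 7.65 mW, 4.3-bit ENOB, 1 GS/s
    fom = walden_fom(7.65e-3, 4.3, 1e9)
    fom_pj = fom * 1e12   # express in pJ/conversion-step (~0.39)
    ```
    
    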
  • One-Class Classification of Mammograms Using Trace Transform Functionals

    Page(s): 304 - 311

    Mammography is one of the first diagnostic tests used to prescreen breast cancer, and early detection of breast cancer is known to greatly improve recovery rates. In most medical centers, experienced radiologists are given the responsibility of analyzing mammograms, but there is always a possibility of human error. Errors can frequently occur as a result of observer fatigue, resulting in interobserver and intraobserver variations, and the sensitivity of mammographic screening also varies with image quality. To offset these kinds of variability and to standardize diagnostic procedures, efforts are being made to develop automated techniques for the diagnosis and grading of breast cancer images. This paper presents a one-class classification pipeline for classifying breast cancer images into benign and malignant classes. Because of the sparse distribution of abnormal mammograms, the two-class classification problem is reduced to a one-class outlier identification problem. The trace transform, a generalization of the Radon transform, is used to extract the features, and several new functionals specific to mammographic image analysis have been developed and implemented to yield clinically significant features. Classifiers such as the linear discriminant classifier, quadratic discriminant classifier, nearest mean classifier, support vector machine, and Gaussian mixture model (GMM) were used. For automated diagnosis, the classification pipeline was tested on a set of 313 mammograms provided by the Singapore Anti-Tuberculosis Association CommHealth. A maximum accuracy of 92.48% was obtained using GMMs.

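    The one-class outlier idea can be sketched with a single Gaussian and a Mahalanobis-distance test; this is a deliberate simplification of the paper's GMM approach, and all names, features, and the threshold are hypothetical:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def fit_normal_class(X):
        """Fit one Gaussian to feature vectors of the 'normal' class only."""
        mu = X.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
        return mu, cov_inv

    def is_outlier(x, mu, cov_inv, threshold):
        """Flag a sample whose Mahalanobis distance exceeds the threshold."""
        d = x - mu
        return float(d @ cov_inv @ d) > threshold ** 2

    # Hypothetical 2-D features extracted from normal mammograms only
    X_normal = rng.multivariate_normal([0, 0], [[1, 0.3], [0.3, 1]], size=500)
    mu, cov_inv = fit_normal_class(X_normal)
    # threshold would in practice be picked from training-set distances
    outlier = is_outlier(np.array([5.0, -5.0]), mu, cov_inv, threshold=3.5)
    inlier = is_outlier(np.array([0.1, 0.2]), mu, cov_inv, threshold=3.5)
    ```
    
    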
  • A Novel Iterative Structure for Online Calibration of M-channel Time-Interleaved ADCs

    Page(s): 312 - 325

    This paper proposes a computationally efficient calibration structure for online estimation and compensation of offset, gain, and frequency-response mismatches in M-channel time-interleaved (TI) analog-to-digital converters (ADCs). The basic idea is to reserve some sampling instants for estimating and tracking the mismatch parameters of the sub-ADCs with reference to a known input. Since the estimation problem is analogous to a standard system identification problem, we propose two simple variable-digital-filter (VDF) based adaptive filter structures derived from the least mean squares (LMS) and normalized LMS algorithms. On the other hand, reserving some sampling instants in the normal operation of the TI ADC implies that part of the samples has to be sacrificed. Based on a general time-varying linear system model for the mismatch and the spectral property of a slightly oversampled input signal, we also propose a novel iterative framework to solve the resulting underdetermined problem. It not only embraces a number of iterative algorithms for trading off convergence rate against arithmetic complexity but also admits an efficient update structure based again on VDFs. Thanks to the well-known efficient implementation of VDFs, the adaptability of both the estimation and compensation algorithms allows us to combine them seamlessly into an online calibration structure, which is able to track and compensate for the channel mismatches with low complexity and high reconstruction accuracy. Finally, we demonstrate the usefulness of the proposed approach by means of computer simulations.

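    A toy version of LMS-based mismatch estimation against a known reference, reduced to gain and offset only (names, step size, and the simulated channel are assumptions, not the paper's VDF structures):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def lms_gain_offset(ref, observed, mu=0.01, n_pass=5):
        """Estimate a sub-ADC's gain and offset against a known reference
        sequence with the LMS rule (two-parameter illustrative sketch)."""
        g, o = 1.0, 0.0                      # initial guesses
        for _ in range(n_pass):
            for r, y in zip(ref, observed):
                e = y - (g * r + o)          # prediction error
                g += mu * e * r              # LMS updates
                o += mu * e
        return g, o

    # Known training sequence; channel with 3% gain error and 0.02 offset
    ref = rng.uniform(-1, 1, 2000)
    observed = 1.03 * ref + 0.02 + 0.001 * rng.standard_normal(2000)
    g, o = lms_gain_offset(ref, observed)
    corrected = (observed - o) / g           # compensate the mismatch
    ```
    
    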
  • Application of Cross Wavelet Transform for ECG Pattern Analysis and Classification

    Page(s): 326 - 333

    In this paper, we use the cross wavelet transform (XWT) for the analysis and classification of electrocardiogram (ECG) signals. The cross-correlation between two time-domain signals gives a measure of similarity between the two waveforms. Applying the continuous wavelet transform to two time series and cross-examining the two decompositions reveal localized similarities in time and frequency. Application of the XWT to a pair of data yields the wavelet cross spectrum (WCS) and wavelet coherence (WCOH). The proposed algorithm analyzes ECG data using the XWT and explores the resulting spectral differences. A pathological pattern deviating from the normal pattern in the QT zone of the inferior leads indicates inferior myocardial infarction. A normal beat ensemble is selected as the absolute normal ECG pattern template, and the coherence between various other normal and abnormal subjects is computed. The WCS and WCOH of various ECG patterns show distinguishing characteristics over two specific regions R1 and R2, where R1 is the QRS complex area and R2 is the T-wave region. The Physikalisch-Technische Bundesanstalt diagnostic ECG database is used for evaluation of the methods. A heuristically determined mathematical formula extracts the parameters from the WCS and WCOH, and empirical tests establish that these parameters are relevant for classification of normal and abnormal cardiac patterns. The overall accuracy, sensitivity, and specificity after combining the three leads are 97.6%, 97.3%, and 98.8%, respectively.

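    A minimal numpy sketch of a wavelet cross spectrum via a complex Morlet CWT; this is not the paper's implementation, and the wavelet parameters, scales, and test signals are assumptions:

    ```python
    import numpy as np

    def morlet_cwt(x, scales, w0=6.0):
        """Continuous wavelet transform with a complex Morlet wavelet."""
        n = len(x)
        t = np.arange(-n // 2, n // 2)
        out = np.empty((len(scales), n), dtype=complex)
        for i, s in enumerate(scales):
            wavelet = np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2)
            wavelet /= np.sqrt(s)
            # correlation = convolution with the reversed conjugate wavelet
            out[i] = np.convolve(x, np.conj(wavelet)[::-1], mode="same")
        return out

    def cross_wavelet_spectrum(x, y, scales, w0=6.0):
        """WCS of two signals: CWT(x) times the conjugate of CWT(y)."""
        return morlet_cwt(x, scales, w0) * np.conj(morlet_cwt(y, scales, w0))

    fs = 250.0
    t = np.arange(0, 2, 1 / fs)
    x = np.sin(2 * np.pi * 10 * t)
    y = np.sin(2 * np.pi * 10 * t + 0.5)      # same rhythm, phase-shifted
    scales = np.arange(2, 40)
    wcs = cross_wavelet_spectrum(x, y, scales)
    power = np.abs(wcs).mean(axis=1)
    best = scales[np.argmax(power)]           # scale of strongest common power
    ```

    For a Morlet wavelet the scale of peak cross power relates to the shared frequency roughly as s ≈ w0·fs/(2π·f), i.e., about 24 for the 10-Hz components above.
    
    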
  • Temperature Distribution Reconstruction by Eigenfunction Interpolation of Boundary Measurement Data

    Page(s): 334 - 342

    This paper deals with the inverse problem of evaluating the temperature distribution over time in a 3-D composite solid material having an arbitrary geometry. The approach evaluates the temperature distribution within the domain of the nonhomogeneous object under observation at each time instant. We propose to use the eigenfunctions of the heat equation model representing the problem under observation as a basis for reconstructing the 3-D temperature distribution. This choice of basis functions has the advantage of incorporating the physics of the problem, making the temperature reconstruction inverse problem more robust. Because of the geometric complexity, the eigenfunctions are computed numerically using a finite-element method. In principle, the method uses temperature measurements at just a few points of the object domain; for practicality, we focus on a noninvasive approach that takes the observation points only on the available boundary surfaces. The proper weighting of the eigenfunction basis used as temperature interpolators is achieved by inverting the collected measurement data. The two critical problems of selecting the best subset of eigenfunctions from the infinitely many available ones and of optimizing the number and positions of the boundary measurement spots are studied as well.

  • Broadband 60 GHz Sounder for Propagation Channel Measurements Over Short/Medium Distances

    Page(s): 343 - 351

    This paper presents a millimeter-wave propagation-channel sounder that uses an optical-fiber cable to achieve phase coherence between transmitting and receiving antennas using two modulated light waves in the vicinity of 1550 nm. The optical-fiber approach permits a separation of up to almost 1000 m between the transmitter and receiver because of the low attenuation per unit length of a single-mode optical-fiber cable. The measurement system can sweep from 500 MHz below 60.1 GHz (59.6 GHz) to 500 MHz above 60.1 GHz (60.6 GHz) and perform 10 001 in-phase and quadrature propagation-channel measurements in ~5 s. Measurements of the frequency response of non-line-of-sight and line-of-sight indoor/outdoor propagation channels over medium (>100 m) and short ranges can be performed for two or more different lateral receiving-antenna positions for antenna diversity studies.

  • Power System Dynamic State Estimation With Synchronized Phasor Measurements

    Page(s): 352 - 363

    Dynamic state estimation (DSE) applied to power systems with synchrophasor measurements estimates the system's true state based on measurements and predictions. Because phasor measurement units (PMUs) are not deployed at all power system buses, state predictions enhance the redundancy of the DSE input data. The significance of predicted and measured data in DSE is affected by their confidence levels, which are inversely proportional to the corresponding variances. In practice, power system states may undergo drastic changes during hourly load fluctuations, component outages, or network switching operations; in such conditions, the inclusion of predicted values could degrade the state estimation. This paper presents a mixed-integer programming formulation of DSE that can discard predicted values whenever sudden changes in the system state are detected. This feature enhances the DSE computation and does not require iterative executions. The proposed model accommodates system-wide synchronized PMU measurements, which could be of interest to smart grid applications in energy management systems. The voltage phasors at buses without PMUs are calculated from voltage and current measurements at adjacent buses, referred to as indirect measurements. The Guide to the Expression of Uncertainty in Measurement is used to compute the confidence level of indirect measurements based on the uncertainties associated with PMU measurements as well as with transmission line parameters. Simulation studies are conducted on an illustrative three-bus example and the IEEE 57-bus power system, and the performance of the proposed model is thoroughly discussed.

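    The GUM combined-standard-uncertainty rule the abstract refers to, for uncorrelated inputs, is u_c = sqrt(Σ (∂f/∂x_i)² u_i²); a numeric sketch on a hypothetical indirect measurement V2 = V1 − R·I (all values and the model are assumptions):

    ```python
    import numpy as np

    def combined_uncertainty(grads, u):
        """GUM-style combined standard uncertainty for uncorrelated inputs:
        u_c = sqrt(sum (df/dx_i)^2 * u_i^2)."""
        grads, u = np.asarray(grads), np.asarray(u)
        return float(np.sqrt(np.sum((grads * u) ** 2)))

    V1, R, I = 1.02, 0.05, 0.8             # per-unit values (hypothetical)
    u_V1, u_R, u_I = 0.001, 0.002, 0.005   # standard uncertainties (hypothetical)
    grads = [1.0, -I, -R]                  # dV2/dV1, dV2/dR, dV2/dI
    u_V2 = combined_uncertainty(grads, [u_V1, u_R, u_I])
    ```
    
    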
  • Electrical Signal Source Separation Via Nonnegative Tensor Factorization Using On Site Measurements in a Smart Home

    Page(s): 364 - 373

    Measuring the electrical consumption of individual appliances in a household has recently received renewed interest in energy efficiency research and sustainable development. Extracting per-appliance information from the whole house's electrical signal acquired at a single monitoring point is known as energy disaggregation or nonintrusive load monitoring. A novel way to look at energy disaggregation is to interpret it as a single-channel source separation problem. To this end, we analyze the performance of source modeling based on multiway arrays and the corresponding decomposition, or tensor factorization. First, given a tensor composed of the data for the several devices in the house, nonnegative tensor factorization is performed to extract the most relevant components. Second, the outcome is embedded in the test step, where only the consumption measured over the whole home is available. Finally, the per-device disaggregated data are obtained by factorizing the associated matrix using the learned models. We compare this method with a recent approach based on sparse coding. The results are obtained using real-world data from household electrical consumption measurements, and the comparison illustrates the relevance of the multiway-array-based approach in terms of accurate disaggregation, as further endorsed by the statistical analysis performed.

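    A toy sketch of the disaggregation idea using nonnegative matrix factorization with fixed, pre-learned bases; the paper uses tensor factorization, and the appliance "signatures", iteration count, and names here are assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def nmf_activations(V, W, n_iter=200, eps=1e-9):
        """Given nonnegative data V (features x frames) and a fixed basis W
        (features x components), solve V ~ W @ H for nonnegative activations
        H with Lee-Seung multiplicative updates."""
        H = rng.random((W.shape[1], V.shape[1]))
        for _ in range(n_iter):
            H *= (W.T @ V) / (W.T @ W @ H + eps)
        return H

    # Hypothetical per-appliance signatures (features x one component each)
    W_fridge = np.array([[1.0], [0.2], [0.0]])
    W_heater = np.array([[0.0], [0.1], [1.0]])
    W = np.hstack([W_fridge, W_heater])       # learned bases, concatenated

    # Aggregate measurement = fridge at level 2 + heater at level 3
    V = 2.0 * W_fridge + 3.0 * W_heater       # shape (3, 1)
    H = nmf_activations(V, W)
    fridge_share = (W_fridge @ H[:1]).sum()   # energy attributed to the fridge
    heater_share = (W_heater @ H[1:]).sum()
    ```
    
    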
  • Survey on Voltage Dip Measurements in Standard Framework

    Page(s): 374 - 387

    This paper reports the results of analyses on the accuracy of algorithms commonly adopted in instruments devoted to the detection and characterization of voltage dips (also called sags). This analysis is particularly interesting because the results of dip measurements are used to calculate severity levels and site indexes, which are important parameters in assessing the quality of the power supply, and also to select equipment with proper intrinsic immunity. However, instruments for dip measurement still have unresolved technical and theoretical issues related to the characterization of their metrological performance, so different instruments can disagree significantly in some actual measurements. This paper takes a step toward deepening the knowledge about voltage dip measurement, pointing out the limits resulting from the detection algorithms adopted in agreement with the standard. It starts with a discussion of the parameters that characterize voltage dips. Then, analytical calculations of the systematic deviations in event characterization introduced by the most widespread dip detection algorithms in simplified measurement situations are presented, underlining their remarkable impact, with particular attention to short-dip events. The obtained relations are experimentally verified on a commercial power quality instrument. Brief remarks on the main alternative algorithms to the standard one for voltage dip detection are also reported.

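    Standard dip detection compares a sliding one-cycle RMS against a threshold (typically 90% of nominal). A minimal sketch, updated every sample rather than every half cycle, so it only approximates the standard's Urms(1/2); all names and constants are assumptions:

    ```python
    import numpy as np

    def one_cycle_rms(v, n_cycle):
        """Sliding one-cycle RMS of a sampled voltage waveform."""
        v2 = np.convolve(v ** 2, np.ones(n_cycle) / n_cycle, mode="valid")
        return np.sqrt(v2)

    def detect_dip(rms, nominal, threshold=0.9):
        """A dip spans the samples where RMS is below threshold * nominal;
        returns (start, end) indices, or None if no dip occurs."""
        below = rms < threshold * nominal
        if not below.any():
            return None
        idx = np.flatnonzero(below)
        return idx[0], idx[-1]

    fs, f0 = 10000, 50
    n_cycle = fs // f0
    t = np.arange(fs) / fs                      # one second of waveform
    v = np.sqrt(2) * 230 * np.sin(2 * np.pi * f0 * t)
    v[4000:6000] *= 0.6                         # 200-ms dip to 60% amplitude
    rms = one_cycle_rms(v, n_cycle)
    dip = detect_dip(rms, nominal=230.0)
    ```
    
    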
  • Fast Synchrophasor Estimation by Means of Frequency-Domain and Time-Domain Algorithms

    Page(s): 388 - 401

    This paper presents a performance comparison of two classes of synchrophasor estimators recently proposed in the literature, i.e., the frequency-domain algorithms known as interpolated discrete Fourier transform (IpDFT) and the time-domain algorithms based on the weighted least squares (WLS) approach. The analysis focuses on fast phasor estimation only and is performed under steady-state, dynamic, and transient conditions, when the length of the observation interval is equal to one or two nominal cycles of the acquired electric waveform. The considered testing conditions include not only the worst-case scenarios specified in IEEE Standard C37.118.1-2011 but also combinations of static and dynamic disturbances. As accuracy performance parameters, the total vector error as well as the amplitude and phase estimation errors are evaluated; estimator responsiveness is analyzed instead in terms of response times when amplitude or phase steps occur. The preferable one- and two-cycle IpDFT and WLS algorithms are first determined by choosing the windows and the number of Taylor series terms (in the WLS case only) assuring the best accuracy. It is shown that, for the time-domain approach over one- or two-cycle intervals, the best accuracy is achieved by the WLS algorithm based on the rectangular window or the two-term one-cycle maximum image tone interference rejection (MIR) window, respectively. Conversely, for the frequency-domain approach, the best accuracy is achieved by the IpDFT based on the MIR window matched to the observed interval length. The estimates produced by the selected WLS and IpDFT algorithms are then compared against the boundaries specified in IEEE Standard C37.118.1-2011 for P- and M-class compliance. During the discussion, advantages and disadvantages of both time-domain and frequency-domain approaches are highlighted.

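    A generic two-point Hann-window IpDFT estimator, representative of the frequency-domain family compared in the paper (the window length, test signal, and function name are assumptions, not the paper's specific estimators):

    ```python
    import numpy as np

    def ipdft_hann(x, fs):
        """Interpolated DFT frequency estimate with a Hann window: correct
        the peak-bin frequency using the classic two-point interpolation
        delta = (2r - 1) / (r + 1), with r the neighbor-to-peak magnitude
        ratio (Grandke's method)."""
        n = len(x)
        X = np.fft.rfft(x * np.hanning(n))
        mag = np.abs(X)
        k = int(np.argmax(mag[1:-1])) + 1        # peak bin (skip DC)
        if mag[k + 1] >= mag[k - 1]:             # interpolate toward the
            r = mag[k + 1] / mag[k]              # larger neighbor
            delta = (2 * r - 1) / (r + 1)
        else:
            r = mag[k - 1] / mag[k]
            delta = -(2 * r - 1) / (r + 1)
        return (k + delta) * fs / n

    fs = 5000.0
    t = np.arange(1000) / fs                     # 0.2 s = 10 nominal cycles
    x = np.cos(2 * np.pi * 50.7 * t + 0.3)       # off-nominal 50.7 Hz tone
    f_hat = ipdft_hann(x, fs)
    ```

    With only one- or two-cycle windows (the paper's regime), the negative-frequency image leaks into the peak bins and this simple estimator degrades, which is precisely why image-rejection (MIR) windows are of interest.
    
    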
  • Power System Frequency Estimation by Reduction of Noise Using Three Digital Filters

    Page(s): 402 - 409

    Accurate estimation of power system frequency is essential for monitoring and operation of the smart grid. Traditionally, this has been done using discrete Fourier transform (DFT) coefficients of the positive fundamental frequency, and such DFT-based frequency estimation has been used successfully in phasor measurement units and frequency disturbance recorders in North America. Frequency errors in DFT-based algorithms for single-phase signals arise mainly from noise and the leakage effect of the negative fundamental frequency. In this paper, a DFT-based frequency estimation algorithm is proposed that introduces three digital filters to reduce the estimation error due to noise and the leakage effect. The algorithm calculates the frequency estimate from the magnitude ratios of DFT coefficients to avoid the leakage effect and compensates for the estimation error induced from the DFT magnitude ratios of the three filtered outputs. The enhancement of signal-to-noise ratios is verified through simulations.

  • Measuring System for Microelectric Power

    Page(s): 410 - 421

    This paper is concerned with the measurement of electric micropower and energy. This parameter is essential for evaluating the energy efficiency of low-power devices, such as wirelessly operated monitoring and control systems. It is also important for measuring the standby power consumed by appliances and equipment while switched off, and for measuring the power generated by energy-harvesting systems. The common commercially available wattmeters are inaccurate for these low-power measurements for two main reasons: the limited dynamic range of the instrument and the intermittent operating mode of some devices. The current measurement is particularly critical because of the high gains required in most applications. In this paper, a measuring system for electric micropower and energy is proposed. It operates for voltages up to 3 Vpp and currents from 1 pA to 5 mA, corresponding to electric power down to a fraction of a microwatt. The system architecture is described, along with some experimental results obtained during the characterization tests.

  • Load Characterization and Revenue Metering Under Non-Sinusoidal and Asymmetrical Operation

    Page(s): 422 - 431

    This paper proposes an approach to load characterization and revenue metering that accounts for the influence of supply deterioration and line impedance. It makes use of the Conservative Power Theory and aims at characterizing the load from measurements taken at the point of common coupling. Despite the inherent limitations of single-point measurement, the proposed methodology enables evaluation of power terms that clarify the effects of reactivity, asymmetry, and distortion, and attempts to separate the power consumption attributable to the load from the terms deriving from supply nonidealities.

  • Four-Terminal Imaging Using a Two-Terminal Electrical Impedance Tomography System

    Page(s): 432 - 440

    Electrical impedance tomography (EIT) is a promising visualization measurement technique that reconstructs the media distribution in a region of interest (ROI) from impedance measurements on its boundary. In this paper, a two-to-four-terminal mode is proposed for a two-terminal EIT system to take advantage of the four-terminal imaging mechanism: data are acquired with the two-terminal EIT system, whereas the imaging mechanism is based on the four-terminal mode. To realize four-terminal imaging with a two-terminal EIT system, the mapping formulas between the two- and four-terminal data are derived. A novel imaging method based on the two-to-four-terminal mode is proposed to reduce the ill-conditioning of the relevant inverse problem, and it is implemented with the conjugate-gradient iteration algorithm and the Tikhonov regularization algorithm, respectively. Simulation and experimental results validate the feasibility and effectiveness of the proposed method in reconstructing inclusions within the ROI clearly and exactly in both shape and location. Compared with the method based on the two-terminal mode, the proposed method improves the reconstructed images in terms of contrast, resolution, and artifact suppression, especially when distinguishing complicated distributions, and it also increases the imaging speed. Moreover, the availability of the four-terminal imaging mechanism in a two-terminal EIT system makes it more flexible to choose an appropriate mode according to the application requirements and the available instruments.
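    The paper's reconstruction details are not in the abstract; as a generic sketch of the two named ingredients (Tikhonov regularization and conjugate-gradient iteration) applied to a linearized sensitivity model, one can solve the regularized normal equations of a Jacobian J mapping conductivity changes to boundary measurements. All names here are assumptions, not the paper's notation:

```python
import numpy as np

def tikhonov_cg(J, b, lam=1e-2, n_iter=100, tol=1e-12):
    """Solve the Tikhonov-regularized normal equations
    (J^T J + lam * I) x = J^T b with a plain conjugate-gradient loop.
    For lam > 0 the system matrix is symmetric positive definite,
    so CG converges; lam trades resolution against noise sensitivity."""
    A = J.T @ J + lam * np.eye(J.shape[1])
    rhs = J.T @ b
    x = np.zeros(J.shape[1])
    r = rhs - A @ x          # residual of the normal equations
    p = r.copy()             # search direction
    rs = r @ r
    for _ in range(n_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

    In a real EIT solver J is the sensitivity matrix of the chosen electrode (terminal) mode, which is exactly where the paper's two-to-four-terminal mapping enters: a better-conditioned J makes this inversion less sensitive to measurement noise.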

  • Design and Development of a Low-Cost Multisensor Inertial Data Acquisition System for Sailing

    Page(s): 441 - 449

    This paper presents the development of an inertial measurement data acquisition system intended for use in sailboats. The variables of interest are three-axis acceleration, three-axis rotation, Global Positioning System (GPS) position/velocity, magnetic compass bearing, and wind speed/direction. The design focus is on low-cost microelectromechanical systems-based technology and demonstrating the validity of these technologies in a scientific application. A prototype is constructed and submitted to a series of tests to demonstrate functionality and soundness of the design. Contributions of this paper include the novel application of inertial measurement unit technology to a sailboat racing application, the integration of all instrumentation, creative ruggedized packaging, and emphasizing the use of low-cost commercial off-the-shelf technology.

  • Maximum Sample Volume for Permittivity Measurements by Cavity Perturbation Technique

    Page(s): 450 - 455

    The maximum sample volume for accurate permittivity measurements of dielectric materials having various geometries (rod/bar, strip/disk, and sphere) by the cavity perturbation technique has been investigated by determining the maximum volume ratio of sample to cavity (Vs/Vc)max based on analysis of the measurement theory. It is demonstrated that (Vs/Vc)max of a dielectric rod/bar with height equal to that of the resonant cavity relies exclusively on the relative dielectric constant, whereas (Vs/Vc)max of a dielectric strip/disk or sphere depends on both the relative dielectric constant and the dielectric loss factor. The permittivity dependence of (Vs/Vc)max is relatively weak for dielectric strips/disks compared with rods/bars or spheres. The maximum sample volume used in the measurements follows a geometry-dependent order. Comparison between (Vs/Vc)max of low-loss Al2O3 and high-loss SiC reveals that low-loss materials can have a larger sample volume than high-loss materials for measurement; high-loss materials may require a strip/disk geometry to meet the measurement requirements. The variation in (Vs/Vc)max of Al2O3 having different geometries over a broad temperature range up to ~1400 °C shows that (Vs/Vc)max decreases with increasing temperature, and the change in (Vs/Vc)max should be considered during high-temperature permittivity measurements.
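    The perturbation relations behind the (Vs/Vc) analysis are not reproduced in the abstract. The textbook small-perturbation estimate for a small sample placed at the electric-field maximum of a resonant cavity has the following form; the shape factors A and B (here given the common rod-in-TM010-cavity values 2 and 4) are assumptions that depend on sample geometry and cavity mode, which is exactly why the geometry matters in the paper:

```python
def permittivity_from_perturbation(f0, f_s, q0, q_s, vc, vs, A=2.0, B=4.0):
    """Textbook cavity-perturbation estimate of the complex relative
    permittivity from the resonance shift (f0 -> f_s) and quality-factor
    change (q0 -> q_s) caused by inserting a small sample of volume vs
    into a cavity of volume vc. A and B are geometry/mode-dependent
    shape factors (the values used here are assumed, not universal)."""
    eps_real = 1.0 + (f0 - f_s) / f_s * vc / (A * vs)
    eps_imag = vc / (B * vs) * (1.0 / q_s - 1.0 / q0)
    return eps_real, eps_imag
```

    The formulas are only valid while the sample perturbs the fields weakly, which is the origin of a maximum permissible Vs/Vc: as the sample volume, permittivity, or loss grows, the linearized estimate drifts away from the true permittivity.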

  • Two-Phase Air–Water Slug Flow Measurement in Horizontal Pipe Using Conductance Probes and Neural Network

    Page(s): 456 - 466

    This paper presents a method to obtain the gas and liquid flow rates of two-phase air-water slug flow in a horizontal pipe using conductance probes and a neural network. In contrast to the statistical features commonly used in other works, five characteristic parameters of the mechanistic slug flow model are extracted from the conductance signals, i.e., translational velocity, slug holdup, film holdup, slug length, and film length, and used as the neural network inputs. The translational velocity is obtained through cross correlation of the signals from two ring-type conductance probes placed a fixed distance apart. A feedforward neural network is adopted to correlate the characteristic parameters of slug flow with the gas and liquid flow rates and is further used as a prediction tool. The experimental results show that the neural network is able to learn the implicit correlations between the characteristic parameters of slug flow and the corresponding gas and liquid flow rates, measuring the gas and liquid flow rates in the slug flow regime within ±10% of full scale.
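    The cross-correlation step for the translational velocity is standard and can be sketched directly; this is a generic illustration with assumed names, not the paper's code:

```python
import numpy as np

def translational_velocity(sig_up, sig_down, fs, spacing):
    """Estimate the slug translational velocity by cross-correlating the
    upstream and downstream conductance-probe signals. fs is the sampling
    rate (Hz) and spacing the probe separation (m); the correlation peak
    gives the transit delay between the two probes."""
    a = np.asarray(sig_up, dtype=float) - np.mean(sig_up)
    b = np.asarray(sig_down, dtype=float) - np.mean(sig_down)
    corr = np.correlate(b, a, mode="full")
    lag = int(np.argmax(corr)) - (len(a) - 1)   # transit delay in samples
    return spacing * fs / lag                   # velocity in m/s
```

    A slug passing the upstream probe and, some samples later, the downstream probe produces a correlation peak at a positive lag; spacing divided by the corresponding delay is the velocity fed to the neural network as one of the five inputs.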

  • Evaluation of a Calorimetric Thermal Voltage Converter for RF–DC Difference up to 1 GHz

    Page(s): 467 - 472

    In this paper, we present a performance evaluation of the RF-DC transfer difference for a calorimetric thermal voltage converter (CTVC) designed by the National Research Council of Canada (NRC) at frequencies up to 1 GHz. In the first part of this paper, we describe a bilateral comparison of the RF-DC difference standards between the NRC and the National Metrology Centre of the Agency for Science, Technology and Research of Singapore, in the frequency band from 1 kHz to 100 MHz. A good agreement is observed between the two laboratories using the CTVC as a traveling standard. In the second part of this paper, we evaluate the performance of the CTVC at higher frequencies up to 1 GHz. In this part, the RF-DC difference of the CTVC is mathematically modeled and experimentally evaluated in terms of the calibration factor of a thermistor mount and the reflection coefficients at its type-N input connector.


Aims & Scope

Papers are sought that address innovative solutions to the development and use of electrical and electronic instruments and equipment to measure, monitor and/or record physical phenomena for the purpose of advancing measurement science, methods, functionality and applications.

Full Aims & Scope

Meet Our Editors

Editor-in-Chief
Prof. Alessandro Ferrero
Dipartimento di Elettrotecnica
Piazza Leonardo da Vinci 32
Politecnico di Milano
Milano 20133 Italy
alessandro.ferrero@polimi.it
Phone: 39-02-2399-3751
Fax: 39-02-2399-3703