
Tsinghua Science and Technology

Issue 6 • Dec. 2009

  • [Front cover]

    Publication Year: 2009 , Page(s): c1
    PDF (161 KB)
    Freely Available from IEEE
  • Contents

    Publication Year: 2009 , Page(s): 1
    PDF (88 KB)
    Freely Available from IEEE
  • Combined analysis of cost and traffic grooming policies for hybrid networks under dynamic traffic requests

    Publication Year: 2009 , Page(s): 677 - 684
    Cited by:  Papers (1)
    PDF (1745 KB)

    The benefit of a two-layer hybrid IP/MPLS (multi-protocol label switching) over a wavelength division multiplexing network has been analyzed considering both the cost and different grooming policies. A detailed cost and performance analysis of hybrid networks is done for three different grooming policies. The hybrid network cost is compared with that of an opaque network for equal traffic demand and equal blocking probability of dynamic requests of label switched paths. An algorithm is given to design optimum hybrid nodes for different grooming policies to provide the desired blocking probability for a given number of dynamic connection requests. The results show that all three applied grooming policies (IP layer first, optical layer first, and one hop first) result in lower costs for the hybrid network architecture than for the opaque network. In addition, an adaptive one hop first method is given to improve the best of the applied grooming policies, which limits grooming in heavily loaded hybrid nodes to achieve load balancing. The simulation results show that the new policy significantly reduces the overall blocking probability.
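
    For illustration only, a minimal Python sketch of the kind of load-aware grooming decision an adaptive "one hop first" policy implies; the load threshold, node model, and function names are assumptions, not taken from the paper.

```python
# Hypothetical sketch: load-aware grooming decision in the spirit of an
# adaptive "one hop first" policy. All names and thresholds are illustrative.

def choose_layer(node_load: float, existing_lightpath: bool,
                 load_threshold: float = 0.8) -> str:
    """Decide whether a new request is groomed in the IP/MPLS layer or
    routed over a new lightpath in the optical layer."""
    if existing_lightpath and node_load < load_threshold:
        # Groom onto an existing one-hop lightpath while the node is lightly loaded.
        return "groom_ip_layer"
    # Heavily loaded node: avoid further grooming to balance the load and
    # set up a new lightpath in the optical layer instead.
    return "new_lightpath_optical_layer"

# Example: a 60%-loaded node with a usable one-hop lightpath grooms the request.
print(choose_layer(0.6, True))   # -> groom_ip_layer
print(choose_layer(0.9, True))   # -> new_lightpath_optical_layer
```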

    Open Access
  • Face live detection method based on physiological motion analysis

    Publication Year: 2009 , Page(s): 685 - 690
    Cited by:  Papers (3)
    PDF (5079 KB)

    In recent years, face recognition has often been proposed for personal identification. However, there are many difficulties with face recognition systems. For example, an impostor could log in to a face recognition system using a stolen photograph of a registered person's face. The security of the face recognition system therefore requires a live detection method to prevent logins using photographs of a human face. This paper describes an effective, efficient face live detection method which uses physiological motion, detected by estimating eye blinks from a captured video sequence with an eye contour extraction algorithm. This technique uses the conventional active shape model with a random forest classifier trained to recognize the local appearance around each landmark. This local matching makes the fitting procedure more robust. Tests show that this face live detection approach successfully discriminates a live human face from a photograph of the registered person's face, increasing the reliability of the face recognition system.
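
    As a rough illustration of blink-based liveness checking (not the paper's active shape model and random forest pipeline), the sketch below assumes per-frame eye-contour landmarks are already available and thresholds an eye-openness ratio to count blinks; the ratio and thresholds are assumptions.

```python
# Illustrative sketch: count eye blinks from per-frame eye-contour landmarks.
# The landmark layout, openness ratio, and thresholds are assumptions; the
# paper fits the contours with an active shape model and a random forest.
from typing import List, Tuple

def eye_openness(eye: List[Tuple[float, float]]) -> float:
    """Ratio of vertical eye opening to horizontal eye width."""
    xs = [p[0] for p in eye]
    ys = [p[1] for p in eye]
    width = max(xs) - min(xs)
    height = max(ys) - min(ys)
    return height / width if width > 0 else 0.0

def count_blinks(eye_frames: List[List[Tuple[float, float]]],
                 closed_thresh: float = 0.15) -> int:
    """A blink is an open -> closed -> open transition across frames."""
    blinks, was_closed = 0, False
    for eye in eye_frames:
        closed = eye_openness(eye) < closed_thresh
        if was_closed and not closed:
            blinks += 1
        was_closed = closed
    return blinks

def is_live(eye_frames, min_blinks: int = 1) -> bool:
    """A printed photograph produces no blinks over the captured sequence."""
    return count_blinks(eye_frames) >= min_blinks
```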

    Open Access
  • Whole frame error concealment with refined motion extrapolation and prescription at encoder for H.264/AVC

    Publication Year: 2009 , Page(s): 691 - 697
    Cited by:  Papers (1)
    PDF (997 KB)

    Packet losses usually result in frame losses in low bit rate video streaming over error-prone networks. A whole frame error concealment (EC) algorithm is then needed at the decoder to improve the video quality. A refined bidirectional motion vector extrapolation was developed to improve the motion vector estimation precision by alleviating the effect of overlapped and hole regions, with an adaptive overlapped block motion compensation to reduce block artifacts. A prescription-based framework was then developed to improve the error concealment at the encoder side. Simulations show that the EC algorithm at the decoder side outperforms existing methods by 2–8 dB. Moreover, the prescription-based framework at the encoder side provides further peak signal-to-noise ratio improvement.
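
    A simplified numpy sketch of forward motion-vector extrapolation for a lost frame is shown below; the block size, the simple overlap handling, and all names are assumptions, and the paper's refined overlap/hole treatment and adaptive OBMC are omitted.

```python
# Illustrative sketch: extrapolate the previous frame's motion vectors onto a
# lost frame and copy motion-compensated blocks from the last decoded frame.
# Block size and the "last vector wins" overlap rule are assumptions; the
# paper refines overlapped/hole regions and applies adaptive OBMC.
import numpy as np

def conceal_lost_frame(prev_frame: np.ndarray,
                       prev_mvs: np.ndarray,   # shape (H//B, W//B, 2), (dy, dx)
                       block: int = 16) -> np.ndarray:
    h, w = prev_frame.shape[:2]
    out = prev_frame.copy()                    # hole regions fall back to frame copy
    for by in range(prev_mvs.shape[0]):
        for bx in range(prev_mvs.shape[1]):
            dy, dx = prev_mvs[by, bx]
            y, x = by * block, bx * block
            # Assume constant motion: the block continues along its vector.
            ty, tx = int(y + dy), int(x + dx)
            if 0 <= ty <= h - block and 0 <= tx <= w - block:
                out[ty:ty + block, tx:tx + block] = \
                    prev_frame[y:y + block, x:x + block]
    return out
```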

    Open Access
  • A highly linear low-voltage source-degeneration transconductor based on unity-gain buffer

    Publication Year: 2009 , Page(s): 698 - 702
    PDF (419 KB)

    A complementary metal oxide semiconductor (CMOS) transconductor based on a high-performance unity-gain buffer driving the degeneration resistor was used to obtain a highly linear voltage-to-current conversion with a considerably reduced supply voltage. Simulations show that the transconductor, implemented in a 0.18-μm standard CMOS process with a 1.2-V supply voltage, has less than −80 dB total harmonic distortion (THD) for a 1-MHz, 0.4-Vp-p differential input signal. The third-order intermodulation is less than −63 dB for 0.25-Vp-p differential inputs at 1 MHz. The DC power consumption in the transconductor core is 240 μW. This topology is a feasible solution for low-voltage and low-power applications.
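
    The small-signal relation below is the standard textbook picture of buffer-assisted source degeneration, not a formula quoted from the paper: when a near-unity-gain buffer forces the input voltage across the degeneration resistor, the transconductance approaches the inverse of that resistor, which is what linearizes the conversion.

```latex
% Illustrative textbook relation (assumed, not quoted from the paper): with a
% buffer of gain A_v ~ 1 forcing the input swing across the degeneration
% resistor R, the effective transconductance approaches 1/R.
G_m \;=\; \frac{i_{\mathrm{out}}}{v_{\mathrm{in}}}
\;\approx\; \frac{A_v}{R}
\;\xrightarrow{\;A_v \to 1\;}\; \frac{1}{R}
```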

    Open Access
  • Objective image fusion quality evaluation using structural similarity

    Publication Year: 2009 , Page(s): 703 - 709
    Cited by:  Papers (1)
    PDF (1183 KB)

    Objective evaluations of fused images are important in comparing the performance of different image fusion algorithms. This paper describes a structural similarity metric that does not use a reference image for image fusion evaluations. The metric is based on the universal image quality index and addresses not only the similarities between the input images and the fused image, but also the similarities among the input images. The evaluation process distinguishes between complementary information and redundant information using similarities among the input images. The metric uses the information classification to estimate how much structural similarity is preserved in the fused image. Tests demonstrate that the metric correlates well with subjective evaluations of the fused images.
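
    A rough Python sketch of a no-reference, structural-similarity-based fusion score in the spirit of this metric: where the inputs agree (redundant regions) the fused image is scored against both, and where they disagree (complementary regions) it is scored against the better-matching input. The agreement threshold, weighting, and the use of scikit-image are assumptions, not the paper's exact definition.

```python
# Illustrative sketch of a no-reference fusion-quality score built from local
# structural similarity maps. Thresholds and weighting are assumptions.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def fusion_quality(img_a, img_b, fused, agree_thresh=0.75):
    # Local SSIM maps between each input and the fused image, and between the
    # inputs themselves (assuming 8-bit grayscale images).
    _, s_af = ssim(img_a, fused, full=True, data_range=255)
    _, s_bf = ssim(img_b, fused, full=True, data_range=255)
    _, s_ab = ssim(img_a, img_b, full=True, data_range=255)

    redundant = s_ab >= agree_thresh
    # Redundant regions: average similarity to both inputs.
    # Complementary regions: similarity to the better-preserved input.
    score_map = np.where(redundant, 0.5 * (s_af + s_bf), np.maximum(s_af, s_bf))
    return float(score_map.mean())
```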

    Open Access
  • Wireless communication and broadcasting convergence network throughput

    Publication Year: 2009 , Page(s): 710 - 717
    PDF (1589 KB)

    Wireless communication and broadcasting convergence networks are a potential way to provide greater traffic throughput in the future. In this paper, the throughput of a convergence network is analyzed based on the Chinese standardization project for specific requirements for local and metropolitan area networks, referred to as broadband wireless multimedia systems. The convergence network is modeled as a combination of a broadcasting channel and a multi-access channel with interference. The throughput is then given as a function of the time resource allocation by calculating the channel capacity with interference. The maximum throughput and the optimal time resource allocation are then determined for a given delay constraint and traffic requirements. The results give guidelines for time resource allocation and system design for convergence networks.
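
    As a rough picture of the optimization (a generic time-sharing relation assumed for illustration, not the paper's capacity expressions), with τ the fraction of each frame allocated to broadcasting:

```latex
% Generic time-sharing picture (illustrative only, not the paper's derivation):
T(\tau) \;=\; \tau\, C_{\mathrm{bc}} \;+\; (1-\tau)\, C_{\mathrm{ma}},
\qquad 0 \le \tau \le 1,
% where C_{\mathrm{bc}} and C_{\mathrm{ma}} denote the interference-limited
% capacities of the broadcasting and multi-access channels, and the optimal
% \tau is chosen subject to the delay constraint and traffic requirements.
```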

    Open Access
  • Dynamic niching genetic algorithm with data attraction for automatic clustering

    Publication Year: 2009 , Page(s): 718 - 724
    Cited by:  Papers (1)
    PDF (418 KB)

    A genetic clustering algorithm was developed based on dynamic niching with data attraction. The algorithm uses the concept of Coulomb attraction to model the attraction between data points. The niches with data attraction are then dynamically identified in each generation to automatically evolve the optimal number of clusters as well as the cluster centers of the data set, without using cluster validity functions or a variance-covariance matrix. Therefore, this clustering scheme does not need to pre-specify the number of clusters, as existing methods do. Several data sets with widely varying characteristics are used to demonstrate the superiority of this algorithm. Experimental results show that this clustering algorithm is effective and flexible.
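
    The Coulomb-style attraction mentioned above can be pictured with the toy computation below; the unit charges, the inverse-square form, and how the niches would use the attraction values are assumptions for illustration only.

```python
# Illustrative sketch: Coulomb-like attraction between data points, as one
# possible ingredient of niche formation. Unit "charges" and the epsilon
# guard are assumptions.
import numpy as np

def coulomb_attraction(points: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    """Pairwise attraction ~ 1 / distance^2 between all data points."""
    diff = points[:, None, :] - points[None, :, :]
    dist2 = (diff ** 2).sum(-1)
    np.fill_diagonal(dist2, np.inf)        # no self-attraction
    return 1.0 / (dist2 + eps)

# A candidate cluster centre attracts the data most strongly where points are dense.
data = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
print(coulomb_attraction(data).sum(axis=1))   # the two nearby points dominate
```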

    Open Access
  • Robust precoding schemes for multi-user MISO-OFDM downlink with limited time-domain feedback

    Publication Year: 2009 , Page(s): 725 - 731
    PDF (495 KB)

    In multi-user multiple-input single-output orthogonal frequency-division multiplexing (MISO-OFDM) downlinks with limited feedback, both linear precoders (LP) and Tomlinson-Harashima precoders (THP) suffer performance degradation due to inaccurate channel state information at the transmitter (CSIT). This analysis treats the downlink channels as random quantities and exploits their second-order statistics in robust precoding schemes to correct the errors introduced in the feedback procedure. Feeding back the time-domain channel vectors is found to reduce the feedback overhead more than feeding back the frequency-domain vectors. A compression and restoration method and a codebook design are also given to obtain compact feedback quantities. Simulations show that the robust LP and THP are superior to previous methods, with tradeoffs possible between performance and feedback overhead.

    Open Access
  • Sum-of-squares design method for four-parameter lag-lead compensator

    Publication Year: 2009 , Page(s): 732 - 734
    PDF (222 KB)

    The four-parameter lag-lead compensator design problem has received much attention in the last two decades. However, most approaches have been either trial-and-error or limited to special cases. This paper presents a non-trial-and-error design method for four-parameter lag-lead compensators. The compensator design problem is formulated as a polynomial optimization problem and solved using the recently developed sum-of-squares (SOS) techniques. This result not only provides a useful design method but also shows the power of the SOS techniques.
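
    For readers unfamiliar with the four parameters, one common textbook form of the lag-lead compensator is shown below; this parameterization is assumed for illustration and may differ from the paper's.

```latex
% A common textbook form of the four-parameter lag-lead compensator:
C(s) \;=\; K_c\,
\frac{\left(s + \dfrac{1}{T_1}\right)\left(s + \dfrac{1}{T_2}\right)}
     {\left(s + \dfrac{\beta}{T_1}\right)\left(s + \dfrac{1}{\beta T_2}\right)},
\qquad \beta > 1,
% with the four design parameters K_c, T_1, T_2, and \beta.
```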

    Open Access
  • Improved scheme for probabilistic transformation and teleportation of multi-particle quantum states

    Publication Year: 2009 , Page(s): 735 - 738
    PDF (273 KB)

    In most probabilistic teleportation schemes, if the teleportation fails, the unknown quantum state is completely destroyed. In addition, the high-dimensional unitary operations frequently proposed are very difficult to realize experimentally. To maintain the integrity of the unknown quantum state to be teleported, this analysis does not operate on the original multi-particle state directly but instead constructs a faithful channel with an ancillary particle and a unified high-dimensional unitary operation. The result shows that if the construction of the multi-group Einstein-Podolsky-Rosen pairs succeeds, the original multi-particle state can be used to deterministically teleport the unknown quantum state of the entangled multiple particles, which avoids the damage to the unknown state that a failure would otherwise cause. This unified high-dimensional operation is appealing because of its obvious experimental convenience.

    Open Access
  • Shape recognition and retrieval based on edit distance and dynamic programming

    Publication Year: 2009 , Page(s): 739 - 745
    PDF (1786 KB)

    An important aim in pattern recognition is to cluster the given shapes. This paper presents a shape recognition and retrieval algorithm. The algorithm first extracts the skeletal features using the medial axis transform. Then, the features are transformed into a string of symbols with the similarity among those symbols computed based on the edit distance. Finally, the shapes are identified using dynamic programming. Two public datasets are analyzed to demonstrate that the present approach is better than previous approaches.
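
    The edit-distance computation at the core of the matching step is standard dynamic programming; a minimal version over symbol strings is sketched below. Unit insert/delete/substitute costs are an assumption, since the paper presumably weights substitutions by the similarity between skeletal symbols.

```python
# Standard edit (Levenshtein) distance between two symbol strings via
# dynamic programming. Unit edit costs are an assumption for illustration.
def edit_distance(a: str, b: str) -> int:
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i                     # delete all of a[:i]
    for j in range(n + 1):
        dp[0][j] = j                     # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[m][n]

print(edit_distance("abcab", "abdab"))   # -> 1
```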

    Open Access
  • Load distribution assessment of reinforced concrete buildings during construction with structural characteristic parameter approach

    Publication Year: 2009 , Page(s): 746 - 755
    PDF (1148 KB)

    High-rise reinforced concrete buildings are in great demand in developing countries undergoing rapid urbanization, and construction engineers face more and more safety control challenges. One major issue is understanding the load distributions, especially the maximum slab load, of structures under construction, which are time dependent. Previous methods mainly targeted specific examples, providing specific solutions without addressing the fundamental issue of finding general solutions for load distributions in reinforced concrete buildings with different geometrical and material characteristics during construction. The concept of a structural characteristic parameter is used here to parameterize the main geometrical and material characteristics of concrete structures for generalized assessments of load distributions during construction. The maximum slab load for 20 different construction shoring/reshoring schemes is presented. The results indicate that the traditional simplified method may underestimate or overestimate the maximum slab load, depending mainly on the shoring/reshoring scheme. The structural characteristic parameter approach was specifically developed to help construction engineers estimate load distributions and assure safe construction procedures.

    Open Access
  • Influences of shrinkage, creep, and temperature on the load distributions in reinforced concrete buildings during construction

    Publication Year: 2009 , Page(s): 756 - 764
    PDF (1499 KB)

    Site measurements have shown that slab loads redistribute among the slabs during concrete curing, even while the external loadings and structural geometry remain the same. Some have assumed that this is caused by concrete shrinkage and creep, but there have been no studies of exactly how these factors influence the load distributions and to what degree. This paper analyzes the influences of concrete shrinkage, creep, and temperature on the load redistribution among slabs. Although all of these factors may lead to load redistribution, the results show that the influence of concrete shrinkage can be neglected: simulations indicate that shrinkage reduces slab loads by at most 1.1%. Creep, however, may reduce the maximum slab load by 3% to 16% for common construction schemes. More importantly, temperature variations between day and night can cause load fluctuations as large as 31.6%. This analysis can therefore help site engineers estimate slab loads more accurately for construction planning.

    Open Access
  • Constrained Newton methods for transport network equilibrium analysis

    Publication Year: 2009 , Page(s): 765 - 775
    PDF (620 KB)

    A set of constrained Newton methods was developed for static traffic assignment problems. The Newton formula uses the gradient of the objective function to determine an improved feasible direction, scaled by the second-order derivatives of the objective function. Column generation produces the active paths needed for each origin-destination pair. These methods then select an optimal step size or make an orthogonal projection to achieve fast, accurate convergence. The Newton methods based on the constrained Newton formula utilize path information to explicitly implement Wardrop's principle in the transport network modelling and complement existing traffic assignment algorithms. Numerical examples are presented to compare the performance of the various Newton methods. The computational results show that the optimal-step Newton methods converge much better than the fixed-step ones, while the Newton method with a unit step size is not always efficient for traffic assignment problems. Furthermore, the optimal-step Newton methods are relatively robust on all three of the tested benchmark traffic assignment networks.
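
    As a toy illustration of a Newton step kept feasible for user equilibrium (two parallel links with BPR-style costs, not the paper's path-based algorithm), one could iterate the flow split until the two travel times equalize, i.e. Wardrop's principle holds; the network, cost parameters, and step rule are assumptions.

```python
# Toy illustration: Newton iteration on the flow split between two parallel
# links so that their travel times equalize (Wardrop equilibrium). The BPR
# parameters and the two-link network are assumptions, not from the paper.
def bpr(t0, cap):
    time = lambda x: t0 * (1.0 + 0.15 * (x / cap) ** 4)
    dtime = lambda x: t0 * 0.15 * 4 * x ** 3 / cap ** 4
    return time, dtime

def equilibrate(demand=10.0, iters=20):
    t1, dt1 = bpr(t0=1.0, cap=5.0)
    t2, dt2 = bpr(t0=1.5, cap=8.0)
    x = demand / 2.0                           # feasible starting split
    for _ in range(iters):
        gap = t1(x) - t2(demand - x)           # cost difference between paths
        slope = dt1(x) + dt2(demand - x)       # second-order (derivative) scaling
        x = min(max(x - gap / slope, 0.0), demand)   # Newton step, kept feasible
    return x, t1(x), t2(demand - x)

print(equilibrate())   # the two path travel times converge to each other
```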

    Open Access
  • Traffic control strategy for a surface street on an expressway-arterial corridor

    Publication Year: 2009 , Page(s): 776 - 781
    PDF (544 KB)

    Off-ramp area congestion and off-ramp queue spillover are widespread causes of expressway traffic jams in China. A control strategy was developed to reduce the number of collisions between expressway and surface street vehicles and to reduce spillover by controlling the surface street vehicles to improve the traffic conditions near expressway off-ramps. This control algorithm monitors the traffic conditions at the off-ramp using occupancy rates and a performance index for the surface street vehicles, and then optimizes the control signal split and cycle length for the surface street vehicles, assuming that the off-ramp queue is shorter than the minimum allowable length. Simulations indicate improvements in the traffic flow, with the average vehicle travel time nearly 10% lower and the off-ramp queue significantly reduced.
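
    A skeleton of the monitoring-and-adjustment loop described above might look as follows; the occupancy threshold, split increments, and cycle bounds are assumptions for illustration, not the paper's calibrated strategy.

```python
# Illustrative sketch: adjust the surface-street signal when off-ramp detector
# occupancy indicates a queue is building up. Thresholds and increments are
# assumptions.
def update_signal(occupancy: float, green_split: float, cycle: float,
                  occ_thresh: float = 0.35,
                  split_step: float = 0.05,
                  max_split: float = 0.7,
                  max_cycle: float = 120.0) -> tuple:
    """Return an updated (green_split, cycle) for the surface-street approach
    that clears the off-ramp, given the measured off-ramp occupancy rate."""
    if occupancy > occ_thresh:
        # Queue building up: give the off-ramp-clearing movement more green
        # and lengthen the cycle so the extra green is not taken from others.
        green_split = min(green_split + split_step, max_split)
        cycle = min(cycle * 1.1, max_cycle)
    else:
        # Conditions normal: relax back toward the nominal timing plan.
        green_split = max(green_split - split_step, 0.3)
        cycle = max(cycle * 0.95, 60.0)
    return green_split, cycle
```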

    Open Access
  • Beijing urban development model: Urban growth analysis and simulation

    Publication Year: 2009 , Page(s): 782 - 794
    PDF (3061 KB)

    Urban growth analysis and simulation have recently been conducted with cellular automata (CA) models, which are based on self-organization theory and differ from system dynamics models. This paper describes the Beijing urban development model (BUDEM), which adopts the CA approach to support urban planning and policy evaluation. BUDEM, a spatio-temporal dynamic model for simulating urban growth in the Beijing metropolitan area, is based on urban growth theory and integrates logistic regression and MonoLoop to obtain the weights of the transition rule in a multi-criteria evaluation configuration. A local sensitivity analysis of all the BUDEM parameters is also carried out to assess the model's performance. The model is used to identify urban growth mechanisms in the various historical phases since 1986, to derive the urban growth policies needed to implement the desired (planned) urban form in 2020, and to simulate urban growth scenarios until 2049 based on the 2020 urban form and parameter set. The model proves capable of analyzing historical urban growth mechanisms and predicting future urban growth for metropolitan areas in China.
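
    A logistic-regression-weighted CA transition rule of the general kind described above can be sketched as follows; the location factors, weights, and 3x3 neighbourhood are assumptions for illustration, not the model's calibrated values.

```python
# Illustrative CA transition rule: logistic regression on location factors
# combined with a neighbourhood development effect. Factor names, weights,
# and the 3x3 neighbourhood are assumptions, not the calibrated model.
import numpy as np

def transition_probability(factors: np.ndarray,   # shape (H, W, K) location factors
                           weights: np.ndarray,   # shape (K,) logistic coefficients
                           intercept: float,
                           urban: np.ndarray) -> np.ndarray:   # shape (H, W), 0/1
    # Global suitability from logistic regression on the location factors.
    z = factors @ weights + intercept
    suitability = 1.0 / (1.0 + np.exp(-z))

    # Local effect: share of already-urban cells in the 3x3 neighbourhood.
    padded = np.pad(urban.astype(float), 1)
    neigh = sum(padded[i:i + urban.shape[0], j:j + urban.shape[1]]
                for i in range(3) for j in range(3)) - urban
    neighbourhood = neigh / 8.0

    # Cells are most likely to develop where both effects are high.
    return suitability * neighbourhood
```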

    Open Access

Aims & Scope

Tsinghua Science and Technology (Tsinghua Sci Technol) aims to highlight scientific achievements in computer science, electronic engineering, and other IT fields. Contributions from all over the world are welcome.
