
2012 IEEE 11th International Conference on Signal Processing (ICSP)

Date: 21-25 Oct. 2012


Displaying Results 1 - 25 of 174
  • A power and bit allocation algorithm based on Virtual Bit in power line communication

    Page(s): 1571 - 1574

    To minimize the electromagnetic interference of the power-line channel while satisfying the required power-line communication rate, this paper proposes a power and bit allocation algorithm based on the Virtual Bit for power-line communication systems. The algorithm first introduces the concept of the Virtual Bit and a rule serving as the optimal allocation criterion. It then loads bits onto sub-carriers according to this rule, achieving the allocation target of minimum total power. Simulation results show that the algorithm achieves optimal performance and is well suited to power-spectrum-limited, high-speed power-line communication systems.
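
    The abstract does not spell out the Virtual Bit rule, but bit loading of this kind builds on the classic greedy margin-adaptive scheme, which assigns each successive bit to the sub-carrier where it costs the least extra power. A minimal sketch of that baseline scheme, assuming uncoded QAM over sub-channels with power gains g and an SNR gap gamma (all names and parameters are illustrative, not the paper's):

        import numpy as np

        def greedy_bit_loading(g, total_bits, gamma=1.0):
            """Margin-adaptive loading: place `total_bits` bits on sub-carriers
            with channel power gains `g`, minimizing total transmit power.
            Going from b to b+1 bits on carrier k costs
            gamma * (2**(b+1) - 2**b) / g[k] = gamma * 2**b / g[k]."""
            bits = np.zeros(len(g), dtype=int)
            power = np.zeros(len(g))
            for _ in range(total_bits):
                delta = gamma * 2.0 ** bits / g   # cost of one more bit per carrier
                k = np.argmin(delta)              # cheapest carrier wins the bit
                bits[k] += 1
                power[k] += delta[k]
            return bits, power

        # toy example: 4 sub-carriers with unequal gains, 8 bits to place
        bits, power = greedy_bit_loading(np.array([1.0, 0.5, 0.25, 0.1]), 8)
        print(bits, power.sum())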

  • Cross-layer cooperative transmission for improving throughput in wireless relay networks

    Page(s): 1575 - 1578

    With the rapid growth in the number of mobile devices, wireless networks face spectrum exhaustion and insufficient bandwidth utilization. Cooperative communication is a promising approach to improving throughput in wireless relay networks. Recent research has mainly focused on cooperative communication methods in the PHY or MAC layer separately to improve data transmission performance. Considering a typical video surveillance scenario in a WLAN, this paper proposes a cross-layer cooperative transmission scheme from a terminal to the wireless access point with the help of another terminal acting as the relay node. Before transmission, the relay and direct channel states are estimated based on a finite state Markov channel (FSMC) model, and the link with the larger effective bandwidth between the relay and direct transmission channels is selected as the actual transmission channel. Simulation experiments implemented in an 802.11b framework indicate that the proposed scheme achieves a throughput gain of about 11% at an average channel signal-to-noise ratio (SNR) of 5 dB, and about 19% at an average SNR of 10 dB. The proposed scheme thus improves transmission performance effectively, and the gain becomes more pronounced as channel conditions improve.

  • Face recognition using the Wavelet tree and two-dimensional PCA

    Page(s): 1579 - 1582

    Two-dimensional principal component analysis (2D-PCA) is a fast method for face recognition. The proposed method applies 2D-PCA to two-dimensional wavelet tree matrices composed of the wavelet approximation coefficients (WTMPCA), as opposed to traditional 2D-PCA, which operates on 2D matrices in the image domain. A three-level wavelet decomposition produces a new 2D matrix made up of the approximation coefficients. The matrices in the wavelet domain not only retain the global information of the images but also capture local features. Finally, 2D-PCA is applied to the new image matrix for face recognition. Experimental results on the ORL database and a subset of the CAS-PEAL face database show that the WTMPCA method achieves 96% face recognition accuracy using only one principal component vector.
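
    As a rough illustration of the WTMPCA pipeline, the sketch below combines PyWavelets' multilevel decomposition with 2D-PCA implemented from its definition: the image covariance of the level-3 approximation matrices is formed, and each matrix is projected onto its leading eigenvectors. The 'haar' wavelet and the variable names are illustrative choices, not the paper's.

        import numpy as np
        import pywt

        def wtm_2dpca_features(images, n_components=1, wavelet="haar", level=3):
            """images: list of equal-size 2D arrays. Returns projections
            Y_i = A_i @ X, where A_i is the level-3 wavelet approximation of
            image i and X holds the leading eigenvectors of the image
            covariance matrix used by 2D-PCA."""
            # level-3 approximation coefficients (coeffs[0] of wavedec2)
            A = np.array([pywt.wavedec2(img, wavelet, level=level)[0] for img in images])
            mean = A.mean(axis=0)
            # 2D-PCA image covariance: G = (1/N) * sum (A_i - mean)^T (A_i - mean)
            G = sum((a - mean).T @ (a - mean) for a in A) / len(A)
            vals, vecs = np.linalg.eigh(G)           # eigenvalues ascending
            X = vecs[:, ::-1][:, :n_components]      # keep the top eigenvectors
            return np.array([a @ X for a in A]), X

        # toy example: 10 random 64x64 "faces", one principal component vector
        feats, X = wtm_2dpca_features([np.random.rand(64, 64) for _ in range(10)])
        print(feats.shape)   # (10, 8, 1): 64 / 2**3 = 8 rows after 3 levels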

  • Recognizing human emotional state via SRC in Fractional Fourier Domain

    Page(s): 1583 - 1586

    Recognizing human emotional state is one of the most important components of efficient human-computer interaction (HCI). In this paper, a novel emotional state recognition method using sparse representation-based classification (SRC) in the Fractional Fourier Domain (FRFD) is proposed. For a robust representation, features are extracted using the Fractional Fourier Transform (FRFT); Principal Component Analysis (PCA) and down-sampling [1] are then used to reduce the feature dimensionality. In particular, the human emotional state recognition task is cast into the SRC framework. Owing to the FRFT and the strong theory behind SRC, the proposed algorithm outperforms the state-of-the-art SRC-based emotional state recognition method. Experiments conducted on a public human emotional state database verify the accuracy and efficiency of our algorithm.

  • Fast sparse representation for Finger-Knuckle-Print recognition based on smooth L0 norm

    Page(s): 1587 - 1591

    As a novel biometric, the Finger-Knuckle-Print (FKP) has received great interest in recent years and has become a hot topic in biometric recognition. Owing to its uniqueness, easy accessibility, freedom from abrasion, and abundant texture, it has been widely applied to personal identification, but no sparse representation based FKP method has been reported so far. In this paper, an FKP recognition algorithm based on a smooth l0-norm sparse representation model is proposed. First, an over-complete dictionary is constructed from the training samples, and the Local Binary Pattern (LBP) operator is then used for feature extraction and dimensionality reduction. Finally, the smooth l0 norm is used to solve the model, accelerating the recognition process and improving its efficiency. Experimental results on the FKP database established by The Hong Kong Polytechnic University show that the proposed method achieves results competitive with the state of the art and has great potential in practical applications.
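
    The smoothed-l0 (SL0) solver the paper relies on is a published algorithm: replace the l0 norm with a Gaussian surrogate, then alternate gradient steps on the surrogate with projections back onto the constraint set Ax = y while the smoothing width shrinks. A minimal sketch (step sizes and schedules are illustrative):

        import numpy as np

        def sl0(A, y, sigma_min=1e-3, sigma_decay=0.5, mu=2.0, inner_iters=3):
            """Smoothed-l0 sparse recovery: min ||x||_0 s.t. A x = y, with
            ||x||_0 approximated by sum(1 - exp(-x**2 / (2 sigma**2)))."""
            A_pinv = np.linalg.pinv(A)
            x = A_pinv @ y                       # minimum-l2 feasible start
            sigma = 2.0 * np.abs(x).max()
            while sigma > sigma_min:
                for _ in range(inner_iters):
                    # gradient step on the smoothed sparsity measure
                    x = x - mu * x * np.exp(-x**2 / (2 * sigma**2))
                    # project back onto the feasible set {x : A x = y}
                    x = x - A_pinv @ (A @ x - y)
                sigma *= sigma_decay             # sharpen the l0 surrogate
            return x

        # toy example: recover a 3-sparse vector from 30 random measurements
        rng = np.random.default_rng(0)
        A = rng.standard_normal((30, 100))
        x_true = np.zeros(100); x_true[[5, 40, 77]] = [1.0, -2.0, 0.5]
        print(np.round(sl0(A, A @ x_true)[[5, 40, 77]], 2))  # should be close to [1.0, -2.0, 0.5]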

  • A novel zero-watermark algorithm based on LU decomposition in NSST domain

    Page(s): 1592 - 1596

    A novel zero-watermark algorithm based on LU decomposition and the Non-Subsampled Shearlet Transform (NSST) is proposed for copyright protection of digital images. In this method, the host image is first transformed with the NSST, a sub-image is extracted at random from the low-frequency approximation image using a Logistic chaotic system, and the sub-image is divided into non-overlapping sub-blocks. Next, LU decomposition is applied to each sub-block, and the zero-watermark is derived by comparing the sum of the first-row elements of each sub-block's U matrix with the mean of those sums over all sub-blocks. Experimental results show that the method is robust to additive noise, filtering, and JPEG compression, and can resist cropping and RST attacks to some extent.
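
    The bit-derivation step is concrete enough to sketch. Assuming the low-frequency sub-image has already been obtained (the NSST is not available in common Python libraries, so any low-frequency approximation image can stand in), each block's watermark bit compares the first-row sum of its U factor with the mean of those sums:

        import numpy as np
        from scipy.linalg import lu

        def zero_watermark_bits(sub_image, block=8):
            """Derive zero-watermark bits from non-overlapping blocks of a
            low-frequency sub-image: bit_k = 1 if the first-row sum of the
            U factor of block k exceeds the mean of those sums, else 0."""
            h, w = (np.array(sub_image.shape) // block) * block
            sums = []
            for i in range(0, h, block):
                for j in range(0, w, block):
                    _, _, U = lu(sub_image[i:i + block, j:j + block])
                    sums.append(U[0].sum())      # first-row elements of U
            sums = np.array(sums)
            return (sums > sums.mean()).astype(np.uint8)

        # toy example: a 64x64 "approximation image" yields 64 watermark bits
        print(zero_watermark_bits(np.random.rand(64, 64)))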

  • Cross-correlation based binary image registration for 3D palmprint recognition

    Page(s): 1597 - 1600

    A new binary image registration algorithm is proposed for 3D palmprint recognition. When two 3D palmprint images are matched, three binary images encoding the orientation information are extracted from each of them. Based on the cross-correlations of the binary images, a new algorithm is developed to calculate the translation parameters; the binary images are then aligned and matched. Experimental results on the HK-Poly 2D+3D palmprint database show the good performance of our method.
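
    The translation step can be illustrated with FFT-based cross-correlation of two binary maps, reading the shift off the correlation peak. A minimal sketch (the paper may well use a normalized variant; this shows the plain circular form):

        import numpy as np

        def translation_by_xcorr(a, b):
            """Estimate the (dy, dx) shift that best aligns binary image `b`
            to binary image `a` via FFT-based circular cross-correlation."""
            corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            # map shifts larger than half the size to negative displacements
            if dy > a.shape[0] // 2: dy -= a.shape[0]
            if dx > a.shape[1] // 2: dx -= a.shape[1]
            return dy, dx

        # toy example: shift a binary blob and recover the shift
        a = np.zeros((64, 64)); a[20:30, 20:30] = 1
        b = np.roll(a, (-3, 5), axis=(0, 1))
        print(translation_by_xcorr(a, b))   # -> (3, -5)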

  • Face tracking system based on Gentle AdaBoost and Weighted Particle Filter

    Page(s): 1601 - 1604

    A face tracking system based on Gentle AdaBoost (GAB) and a Weighted Particle Filter (WPF) is presented in this paper. The proposed algorithm exploits the fact that all the particles can be used, which improves face tracking performance. At the same time, GAB is used to correct the tracking system so that only the best face image of a subject is detected. In short, this paper shows how GAB can be combined with the WPF to achieve robust face tracking. The experimental results clearly show the effectiveness of the GAB-WPF algorithm in tracking.

  • Image restoration and enhancement for finger-vein recognition

    Page(s): 1605 - 1608

    Finger-vein recognition has become a hot topic in the biometrics community. However, finger-vein images are often of poor quality, which hampers finger-vein feature analysis in practice. Considering the light propagation behavior in biological tissue and the variations of veins in orientation and diameter, we propose in this paper an effective finger-vein image enhancement method based on scattering removal and a bank of even Gabor filters. The experimental results show that the proposed method performs well in venous region enhancement and can improve finger-vein recognition accuracy significantly.
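
    The filtering stage can be sketched with a bank of even (cosine-phase) Gabor filters: filter at several orientations and keep the maximum real response per pixel, so veins of any local direction are emphasized. A sketch using scikit-image, with the frequency and orientation count as illustrative parameters and the scattering-removal step omitted:

        import numpy as np
        from skimage.filters import gabor

        def even_gabor_enhance(image, frequency=0.1, n_orientations=8):
            """Enhance line-like (vein) structures: apply even Gabor filters
            over `n_orientations` directions and take, per pixel, the maximum
            real (cosine-phase) response across the bank."""
            responses = []
            for k in range(n_orientations):
                theta = k * np.pi / n_orientations
                real, _ = gabor(image, frequency=frequency, theta=theta)
                responses.append(real)
            return np.max(responses, axis=0)

        # toy example: a synthetic dark line on a bright background
        img = np.ones((64, 64)); img[30:33, :] = 0.2
        print(even_gabor_enhance(img).shape)   # (64, 64)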

  • Fusion method of palmprint recognition based on texture features and line features

    Page(s): 1609 - 1613

    Palmprint recognition is studied in detail, and problems concerning feature extraction, feature fusion, and matching are discussed. Line features are extracted based on mathematical morphology, and texture features are extracted using Gabor filters. Two classifiers are designed using the line and texture features, respectively. Basic probability assignment functions for the two classifiers are then constructed via fuzzy rules. Finally, the outputs of the two classifiers are fused using the Dempster-Shafer (D-S) theory of evidence. The experimental results show that this method is valid and reasonable.
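
    The fusion step is Dempster's rule of combination. A minimal sketch for two classifiers over the same frame of discernment, with basic probability assignments given as dicts from hypothesis sets to mass (the two-class palmprint example at the bottom is hypothetical):

        from itertools import product

        def dempster_combine(m1, m2):
            """Dempster's rule: combine two basic probability assignments
            given as {frozenset_of_hypotheses: mass}. Conflicting mass (empty
            intersection) is discarded and the remainder renormalized."""
            combined, conflict = {}, 0.0
            for (a, wa), (b, wb) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + wa * wb
                else:
                    conflict += wa * wb
            k = 1.0 - conflict                    # normalization constant
            return {h: w / k for h, w in combined.items()}

        # toy example: texture and line classifiers on classes {A, B}
        A, B = frozenset("A"), frozenset("B")
        m_texture = {A: 0.7, B: 0.2, A | B: 0.1}   # A | B carries ignorance mass
        m_line = {A: 0.6, B: 0.3, A | B: 0.1}
        print(dempster_combine(m_texture, m_line))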

  • Rotation correction of DHV images using entropy minimization of boundary descriptor

    Page(s): 1614 - 1619

    A rotation correction method for dorsal-hand vein (DHV) images is proposed. The idea is to normalize all samples to the same virtual template defined by a criterion, so that the rotation angle between a sample and every template need not be estimated one by one during feature matching. The proposed method contains two main procedures: boundary detection and entropy minimization. Boundary detection is used to create a boundary map, and entropy minimization is adopted to find the best rotation angle from the boundary map. Simulations and real experiments show that the proposed method completes within 65 milliseconds and effectively reduces the equal error rate (EER) when the rotation angles in a database are large.
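
    The entropy-minimization procedure can be sketched directly: rotate the boundary map over a grid of candidate angles and keep the angle whose descriptor has the lowest Shannon entropy. Since the paper's exact boundary descriptor is not given here, a row-projection histogram stands in:

        import numpy as np
        from scipy.ndimage import rotate

        def shannon_entropy(p):
            p = p[p > 0] / p.sum()
            return -(p * np.log2(p)).sum()

        def best_rotation(boundary_map, angles=np.arange(-30, 31, 1.0)):
            """Rotation correction by entropy minimization: pick the angle
            whose rotated boundary map yields the lowest-entropy descriptor.
            The descriptor here is the histogram of row sums (a stand-in)."""
            scores = []
            for ang in angles:
                r = rotate(boundary_map, ang, reshape=False, order=1)
                descriptor = r.sum(axis=1)            # row-projection profile
                hist, _ = np.histogram(descriptor, bins=32)
                scores.append(shannon_entropy(hist.astype(float)))
            return angles[int(np.argmin(scores))]

        # toy example: the boundary map of a tilted bar
        img = np.zeros((64, 64)); img[28:36, 8:56] = 1
        print(best_rotation(rotate(img, 12, reshape=False, order=1)))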

  • Automatic brain state classification system using double channel of EEG signal from rat brain

    Page(s): 1620 - 1623

    In this paper, we aim to develop a simple technique to classify the brain states of rats: Active, Inactive, REM, and NREM. Two EEG signals (from the frontal and parietal cortices) were recorded, and EEG spectra were created with the Fast Fourier Transform (FFT) and separated into two sets: a training set for brain state model creation and a testing set for the experiment. Each brain state model, built from training epochs manually classified into one of the four states by medical experts, was characterized by its spectral mean and standard deviation. Similarity between a test spectrum and each brain state model was then measured using a normal distribution model. The results showed that the proposed technique classified the NREM state best, at 95.86%; the classification rate for the Inactive state was 73.33%, and the overall average accuracy across all brain states was 87.76%.
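
    The similarity measure is concrete enough to sketch: each state model is the per-bin spectral mean and standard deviation, and a test spectrum is assigned to the state under which its Gaussian log-likelihood is highest. A minimal sketch with hypothetical data:

        import numpy as np

        def train_state_models(spectra_by_state):
            """spectra_by_state: {state: array of shape (n_epochs, n_bins)}.
            Each brain state model is the per-bin spectral mean and std."""
            return {s: (x.mean(axis=0), x.std(axis=0) + 1e-12)
                    for s, x in spectra_by_state.items()}

        def classify(spectrum, models):
            """Return the state whose Gaussian model best explains the
            test spectrum (highest log-likelihood)."""
            def loglik(mu, sd):
                return -0.5 * np.sum(((spectrum - mu) / sd) ** 2
                                     + np.log(2 * np.pi * sd ** 2))
            return max(models, key=lambda s: loglik(*models[s]))

        # toy example: two fake states with different spectral shapes
        rng = np.random.default_rng(1)
        bins = np.arange(64)
        models = train_state_models({
            "NREM": 5 * np.exp(-bins / 8) + rng.normal(0, 0.1, (20, 64)),
            "Active": 1 + 0.5 * np.sin(bins / 4) + rng.normal(0, 0.1, (20, 64)),
        })
        print(classify(5 * np.exp(-bins / 8), models))   # -> "NREM"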

  • Feature extraction of electroencephalogram signals applied to epilepsy

    Page(s): 1624 - 1628

    In this work, we propose a framework for the analysis and classification of Electroencephalogram (EEG) signals. The EEGs considered in this study belong to both normal and epileptic subjects. After wavelet packet decomposition of the EEG signals, three statistical features (standard deviation, energy, and entropy) were computed for the different decomposition sub-bands, and the most suitable wavelets were selected for processing the EEG signals. Linear discriminant analysis and principal component analysis are used to reduce the dimensionality of the data, and the resulting feature vectors were used to train an efficient Support Vector Machine (SVM) classifier. By selecting the statistical features and the dimensionality reduction method, the framework improves computational efficiency and can provide valuable assistance to neuro-physicians in making decisions about their patients.
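
    The feature-extraction step can be sketched with PyWavelets' wavelet packets: decompose each EEG segment and compute standard deviation, energy, and Shannon entropy per terminal sub-band. The wavelet, depth, and entropy definition below are illustrative choices:

        import numpy as np
        import pywt

        def wp_features(signal, wavelet="db4", level=4):
            """Per-sub-band statistical features after wavelet packet
            decomposition: standard deviation, energy, and Shannon entropy
            of the normalized squared coefficients."""
            wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
            feats = []
            for node in wp.get_level(level, order="freq"):
                c = node.data
                energy = np.sum(c ** 2)
                p = c ** 2 / (energy + 1e-12)          # coefficient "probabilities"
                entropy = -np.sum(p * np.log2(p + 1e-12))
                feats.extend([np.std(c), energy, entropy])
            return np.array(feats)

        # toy example: a 1-second synthetic EEG segment at 256 Hz
        t = np.arange(256) / 256.0
        x = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(256)
        print(wp_features(x).shape)   # 2**4 sub-bands x 3 features = (48,)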

  • A fast heart sounds detection and heart murmur classification algorithm

    Page(s): 1629 - 1632

    This paper extends our previous studies and presents a fast, automatic cardiac auscultation scoring system that effectively identifies the first and second heart sounds (S1 and S2) and extracts clinical features of heart murmurs to assist clinical diagnosis. Using indices derived from AR modeling, the scoring system is capable of detecting and identifying S1 and S2, dissecting the systole and the diastole for further analysis, and extracting the heart murmur features found within, such as timing, duration, loudness (intensity), pitch, and shape. To achieve a broader spectrum of application, only the relative duration difference between systole and diastole was used as the a priori information to identify S1 and S2. Its ease of calculation makes the algorithm particularly suited to embedded system implementation while maintaining accuracy and effectiveness. The suggested approach has been successfully evaluated over multiple cardiac cycles, with each systole and diastole accurately identified and isolated, and has met with good success on clinical data from patients with a variety of systolic murmur episodes.

  • A human-computer interface design using automatic gaze tracking

    Page(s): 1633 - 1636

    This paper describes the design and implementation of a human-computer interface using an in-house developed automatic gaze tracking system. The focus has been on developing an inexpensive, non-invasive gaze-tracking computer interface by which users can control computer input in a hands-free manner. In the underlying project, infrared (IR) light-emitting diodes (LEDs) are placed around a computer monitor to produce reference corneal glints from the user's eye and to illuminate the user's pupil. An IR-sensitive video camera is then used to capture images of these glints. A graphical user interface gathers calibration glint information while the user gazes at six strategically placed calibration points. A linear model is derived from these data and used to map the vertical and horizontal displacements of the glints, relative to certain physical landmarks of the user's pupil, onto the corresponding point of gaze on the monitor. The design is capable of real-time performance and was evaluated with many volunteers with a good success rate.
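
    The calibration step amounts to an ordinary least-squares fit: stack the glint displacement vectors gathered at the six calibration points into a design matrix and solve for the map to screen coordinates. A minimal sketch, assuming an affine (linear plus intercept) model; all names are illustrative:

        import numpy as np

        def fit_gaze_map(displacements, screen_points):
            """displacements: (n, 2) glint-vs-pupil offsets (dx, dy) collected
            at the calibration points; screen_points: (n, 2) known targets.
            Returns a (3, 2) matrix W for the affine map [dx, dy, 1] @ W."""
            X = np.hstack([displacements, np.ones((len(displacements), 1))])
            W, *_ = np.linalg.lstsq(X, screen_points, rcond=None)
            return W

        def predict_gaze(W, dx, dy):
            return np.array([dx, dy, 1.0]) @ W

        # toy example: six calibration points and a known affine ground truth
        rng = np.random.default_rng(2)
        d = rng.uniform(-1, 1, (6, 2))
        targets = d @ np.array([[800, 20], [-15, 600]]) + [960, 540]
        W = fit_gaze_map(d, targets)
        print(np.round(predict_gaze(W, 0.1, -0.2)))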

  • Using sequential floating forward selection algorithm to detect epileptic seizure in EEG signals

    Page(s): 1637 - 1640

    Epilepsy is a common neurological disorder involving spontaneous seizures. While electroencephalography (EEG), which records the electrical activity of the brain, is a useful diagnostic tool for epilepsy, detecting epileptic seizures remains clinically difficult. This study proposes a segmental classification approach for the detection of epileptic seizures in EEG signals. Regularized least squares and smoothness priors methods are applied to minimize the nonstationary components in the signals. The optimal frequency band energy features are selected using the sequential floating forward selection (SFFS) algorithm, with linear, quadratic, and cubic discriminant functions as classifiers. The results show that with the quadratic discriminant function, the sensitivity and specificity of seizure detection reach a maximum of 98.1% and 95.6%, respectively, for discriminating healthy subjects from epileptic subjects during the seizure period, and the overall classification rate is 97.2%.
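
    For reference, a compact SFFS loop is sketched below: each forward inclusion is followed by conditional exclusions that are kept only while they beat the best score previously recorded for the smaller subset size. A quadratic discriminant classifier scored by cross-validation stands in for the paper's setup, and the band-energy features are assumed precomputed:

        import numpy as np
        from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        def sffs(X, y, n_features, cv=5):
            """Sequential floating forward selection over the columns of X."""
            def score(idx):
                clf = QuadraticDiscriminantAnalysis()
                return cross_val_score(clf, X[:, idx], y, cv=cv).mean()

            selected, remaining, best = [], list(range(X.shape[1])), {}
            while len(selected) < n_features:
                # forward: include the feature giving the best resulting score
                s, f = max((score(selected + [f]), f) for f in remaining)
                selected.append(f); remaining.remove(f)
                best[len(selected)] = max(s, best.get(len(selected), -np.inf))
                # floating: exclude features while that improves on the best
                # score ever recorded for the smaller subset size
                while len(selected) > 2:
                    s, f = max((score([g for g in selected if g != f]), f)
                               for f in selected)
                    if s <= best.get(len(selected) - 1, -np.inf):
                        break
                    selected.remove(f); remaining.append(f)
                    best[len(selected)] = s
            return selected

        # toy example: 3 informative "band-energy" features out of 10
        rng = np.random.default_rng(3)
        X = rng.standard_normal((200, 10))
        y = (X[:, [1, 4, 7]].sum(axis=1) > 0).astype(int)
        print(sffs(X, y, n_features=3))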

  • Red blood cell segmentation using Active Appearance Model

    Page(s): 1641 - 1644

    Red blood cell segmentation is an important technique for automatic cell counting, classification, and analysis in clinical examination. In this paper, we propose a red blood cell segmentation method based on Active Appearance Models (AAM). The AAM effectively describes the shape and texture information of red blood cells and separates cells from the background precisely. Experimental results show that the AAM can extract cells from the background effectively. Moreover, compared with many traditional image segmentation methods, the AAM provides a closed form of the cell boundary, so there is no need to perform boundary tracking for cell counting, measurement, and analysis.

  • Nonlinear baseline estimation of FHR signal using empirical mode decomposition

    Page(s): 1645 - 1649

    Automated analysis of the fetal heart rate (FHR) curve plays a significant role in computer-aided fetal monitoring, and its first and critical step is the estimation of the FHR baseline. A number of FHR baseline estimation algorithms have been developed, but recent studies have pointed out their deficiency when dealing with non-stationary and non-linear FHR signals with continuous decelerations, especially in intrapartum tracings. Our study proposes a novel non-linear FHR baseline estimation method using empirical mode decomposition (EMD) and a statistical post-processing method. To assess baseline quality, we carried out a comparative study against a cited linear baseline estimation algorithm, using auto-regressive moving average (ARMA) model-based simulated FHR signals with continuous acceleration and deceleration patterns. The results were evaluated in terms of basal FHR values and detected acceleration and deceleration numbers versus the preset basal FHR values and acceleration/deceleration numbers. They show that for non-stationary and non-linear FHR tracings, such as intrapartum tracings with continuous accelerations or decelerations, the EMD-based baseline estimation method is more stable and interference-resistant than traditional linear methods.
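
    The decomposition step can be sketched with an off-the-shelf EMD implementation: split the FHR trace into intrinsic mode functions (IMFs) and take the residue plus the slowest IMFs as the baseline estimate. The sketch below assumes the third-party PyEMD package, and how many IMFs to fold into the baseline is an illustrative choice, not the paper's statistical post-processing:

        import numpy as np
        from PyEMD import EMD   # third-party package: pip install EMD-signal

        def fhr_baseline(fhr, keep_slowest=2):
            """Estimate the FHR baseline as the EMD residue plus the
            `keep_slowest` lowest-frequency IMFs; faster IMFs carry the
            accelerations, decelerations, and noise."""
            emd = EMD()
            emd.emd(fhr)
            imfs, residue = emd.get_imfs_and_residue()
            return residue + imfs[-keep_slowest:].sum(axis=0)

        # toy example: 140 bpm basal rate, slow drift, one deceleration, noise
        t = np.linspace(0, 600, 2400)            # 10 min sampled at 4 Hz
        fhr = (140 + 5 * np.sin(2 * np.pi * t / 600)
               - 20 * np.exp(-((t - 300) / 15) ** 2)
               + 2 * np.random.randn(t.size))
        print(fhr_baseline(fhr).mean())          # roughly the 140 bpm basal rate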

  • Adaptive combined denoising based low-dose X-ray CT reconstruction

    Page(s): 1650 - 1653

    X-ray Computed Tomography (CT) has been widely applied in the clinical domain, especially for diagnosis and treatment. Because X-rays are harmful to human health, minimizing the radiation dose has been a significant concern in the CT imaging field. One way to reduce the radiation dose during data acquisition is to reduce the current of the X-ray source; however, this degrades the quality of the reconstructed image with strong noise. In this paper, we propose an adaptive combined denoising method for the restoration of low-dose CT projection data (i.e., the sinogram). The method unites LAWML and WienerChop in the wavelet domain. Simulated experimental results demonstrate that the proposed method outperforms conventional filters, such as the Hanning filter, in lowering the noise while preserving image resolution.

  • Detect adverse drug reactions for drug Pioglitazone

    Page(s): 1654 - 1658

    Adverse drug reactions (ADRs) are a public health issue of wide concern. In this study we propose an original approach to detecting ADRs using a feature matrix and feature selection. The experiments are carried out on the drug Pioglitazone. Major side effects of the drug are detected, and better performance is achieved compared with other computerized methods. Since the detected ADRs are derived from a computerized method, further investigation is needed.

  • Facial expression analysis using a sparse representation based space model

    Page(s): 1659 - 1662

    With the development of information technologies, facial expression analysis is becoming more and more essential to human-computer interaction (HCI). A natural way to analyze facial expressions derives from the study of human emotion, which is regarded as the intrinsic origin of facial expression. Another issue for facial expression analysis is to extract substantial facial features that correspond with the human visual perception system. Based on these observations, we present a sparse representation based space model for facial expression analysis that applies Gabor filters to extract facial features. The sparse representation based facial expression space model is induced from the human emotion space and can therefore describe mixed facial expressions, which are common in daily life. Experiments on the JAFFE database demonstrate the validity of the proposed facial expression space model.

  • Cheek region extraction method for face diagnosis of Traditional Chinese Medicine

    Page(s): 1663 - 1667

    Face diagnosis is one of the four diagnostic methods of Traditional Chinese Medicine (TCM): the morbidity of the organs can be revealed by facial color. The cheeks contain abundant capillaries and suffer less interference from noise such as hair and beard, so they are considered the most important region for reflecting facial color. A cheek region extraction method for TCM face diagnosis is proposed in this paper. First, the face region is extracted from the original image, which is captured by the tongue image analysis instrument for TCM. The face region image is then rotated according to the eye positions to straighten the image. The lip chroma region is extracted and binarized using a Fisher classifier to locate the mouth corners. Finally, based on the geometric structure of the face, the skin block center is determined to extract the cheek region automatically. The experimental results show that the proposed method achieves high segmentation accuracy.

  • 3D reconstruction of tongue surface based on photometric stereo

    Page(s): 1668 - 1671

    In the objective study of tongue diagnosis in traditional Chinese medicine (TCM), most current methods take the 2D tongue image as the research object for feature extraction and analysis. However, a 2D tongue image often cannot visually convey detailed information about the 3D tongue surface, such as lingual papillae, pricks, and fissures. This limits the perception of the real tongue surface and the doctor's ability to give an accurate diagnosis. To solve this problem, a method to reconstruct the 3D tongue surface based on photometric stereo is proposed. First, the surface normals and texture reflectance are obtained using photometric stereo; the depth values of the tongue surface are then calculated to generate a 3D depth map. Extensive experiments show that the method is viable: doctors can observe the 3D information of the tongue surface from multiple perspectives, and detailed surface information can be expressed quantitatively and visually.
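
    The normal-recovery step is the textbook photometric-stereo least squares: with k images taken under known distant light directions L, solve L g = I per pixel, where the length of g gives the albedo and its direction the surface normal. A minimal sketch (the integration of normals into the final depth map is omitted):

        import numpy as np

        def photometric_stereo(images, lights):
            """images: (k, h, w) intensities under k known lightings;
            lights: (k, 3) unit light directions. Solves L g = I per pixel
            in the least-squares sense; returns albedo and unit normals."""
            k, h, w = images.shape
            I = images.reshape(k, -1)                       # (k, h*w)
            G, *_ = np.linalg.lstsq(lights, I, rcond=None)  # (3, h*w)
            albedo = np.linalg.norm(G, axis=0)
            normals = (G / (albedo + 1e-12)).reshape(3, h, w)
            return albedo.reshape(h, w), normals

        # toy example: a Lambertian sphere patch rendered under 4 lights
        h = w = 32
        yy, xx = np.mgrid[-1:1:h*1j, -1:1:w*1j]
        zz = np.sqrt(np.clip(1 - xx**2 - yy**2, 0, None))
        N = np.stack([xx, yy, zz])                          # true normals
        L = np.array([[0, 0, 1], [0.5, 0, 0.87], [0, 0.5, 0.87], [-0.5, 0, 0.87]])
        imgs = np.clip(np.einsum("kc,chw->khw", L, N), 0, None)
        albedo, normals = photometric_stereo(imgs, L)
        print(albedo.mean())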

  • Optimization of Tensor Reconstruction by Excluding Outliers from DWIs

    Page(s): 1672 - 1677

    Outliers appear frequently in Diffusion Weighted Imaging (DWI) data due to subject motion and system noise, and they are deleterious to the accuracy of diffusion tensor (DT) reconstruction. By detecting artifacts in the resulting DT data and minimizing a criterion score over the consequent FA map and positive-definite map, we propose an optimization algorithm for Tensor Reconstruction by Excluding Outliers from DWIs (TREOD) that effectively improves the quality of the tensor data by reconstructing from a selected subset of the raw DWI data from which outliers are excluded. Extensive experiments with both simulated and real datasets demonstrate the correctness and effectiveness of the proposed method.
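
    The per-voxel principle can be sketched as a log-linear tensor fit with the most discrepant DWIs excluded: fit the six tensor components by least squares on the log signals, drop the measurements with the largest residuals, and refit. A simple residual rule stands in for the paper's FA and positive-definiteness criterion score:

        import numpy as np

        def design_row(g):
            gx, gy, gz = g
            return [gx*gx, gy*gy, gz*gz, 2*gx*gy, 2*gx*gz, 2*gy*gz]

        def fit_tensor_excluding_outliers(signals, s0, bvecs, bval, n_exclude=2):
            """Log-linear DTI fit: log(S_i/S0) = -b g_i^T D g_i. Fit all DWIs,
            drop the `n_exclude` largest-residual measurements, and refit."""
            B = bval * np.array([design_row(g) for g in bvecs])   # (n, 6)
            y = -np.log(np.clip(signals / s0, 1e-6, None))        # (n,)
            d, *_ = np.linalg.lstsq(B, y, rcond=None)
            keep = np.argsort(np.abs(B @ d - y))[:len(y) - n_exclude]
            d, *_ = np.linalg.lstsq(B[keep], y[keep], rcond=None)
            dxx, dyy, dzz, dxy, dxz, dyz = d
            return np.array([[dxx, dxy, dxz], [dxy, dyy, dyz], [dxz, dyz, dzz]])

        # toy example: 12 gradient directions, one motion-corrupted measurement
        rng = np.random.default_rng(4)
        bvecs = rng.standard_normal((12, 3))
        bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)
        D_true = np.diag([1.7e-3, 0.4e-3, 0.4e-3])
        signals = np.exp(-1000 * np.einsum("ni,ij,nj->n", bvecs, D_true, bvecs))
        signals[3] *= 0.5                                          # the outlier
        print(np.round(fit_tensor_excluding_outliers(signals, 1.0, bvecs, 1000), 5))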

  • A sparse representation based approach for steganography

    Page(s): 1678 - 1681

    In recent years, there has been growing interest in exploiting the sparsity of natural signals in a large number of applications, including encryption and steganography. Steganography refers to the practice of hiding a secret message, known as the stegotext, within a simple-looking message called the plaintext so that it can be retrieved only by the intended user; it is also desired that the hidden data in no way affect the perceived quality of the cover message. In this paper, we propose an approach to steganography based on sparse representation of signals. The proposed method hides data within an audio clip or an image without compromising their perceived quality, and our experiments demonstrate that the hidden data can be successfully separated from the cover message. However, the proposed method is fragile in the sense that it is not robust to any sort of lossy processing, such as compression or cropping.
