
International Symposium on Intelligent Signal Processing and Communications Systems (ISPACS 2008)

Date: 8-11 Feb. 2009


Displaying Results 1 - 25 of 119
  • A step size control method steadily reducing acoustic echo even during double-talk

    Publication Year: 2009 , Page(s): 1 - 4
    Cited by:  Papers (1)

    This paper proposes a step size control method capable of steadily canceling acoustic echo even during double talk. The method is characterized by applying a sub-adaptive filter to the control. The sub-adaptive filter has a larger step size and fewer taps than the main adaptive filter used for canceling the acoustic echo, so it can reduce the residual echo more rapidly than the main adaptive filter. The proposed method applies the step size calculated from the residual echo to the main adaptive filter and thereby quickly and steadily reduces the acoustic echo. The paper finally verifies that the proposed method provides almost the same convergence speed as applying an optimum step size to the main adaptive filter.

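    Step-size control methods of this kind build on the normalized LMS (NLMS) update. As background, here is a minimal single-filter NLMS echo-canceller sketch in NumPy (not the authors' dual-filter method; the signals and parameters are hypothetical):

    ```python
    import numpy as np

    def nlms_echo_canceller(x, d, n_taps=32, mu=0.5, eps=1e-8):
        """Cancel the echo d (generated from far-end signal x) with a single
        NLMS adaptive filter; returns the residual error signal."""
        w = np.zeros(n_taps)                      # adaptive filter weights
        e = np.zeros(len(x))
        for n in range(n_taps - 1, len(x)):
            u = x[n - n_taps + 1:n + 1][::-1]     # newest sample first
            y = w @ u                             # echo estimate
            e[n] = d[n] - y                       # residual echo
            w += mu * e[n] * u / (u @ u + eps)    # normalized step
        return e

    rng = np.random.default_rng(0)
    x = rng.standard_normal(4000)                 # far-end (loudspeaker) signal
    h = 0.5 * rng.standard_normal(16)             # unknown echo path
    d = np.convolve(x, h)[:len(x)]                # echo picked up by the microphone
    e = nlms_echo_canceller(x, d)                 # residual power decays as w converges
    ```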
  • Design and performance of space-time block coded hybrid ARQ schemes for multiple antenna transmission

    Publication Year: 2009 , Page(s): 1 - 4

    Multiple-input multiple-output (MIMO) communication systems are known to provide diversity and coding gain over single-input single-output (SISO) systems. Furthermore, automatic repeat request (ARQ) provides time diversity and improves performance and reliability. In this paper, we propose and investigate a hybrid ARQ (HARQ) scheme based on non-orthogonal space-time block codes (NOSTBC) that uses a decoding candidate list to reduce the detection/decoding complexity. Our proposed schemes are compared to orthogonal space-time block codes (OSTBC), and the results show a significant performance gain for the NOSTBC-based ARQ scheme.

  • Improved K-best sphere decoder with a look-ahead technique for multiple-input multiple-output systems

    Publication Year: 2009 , Page(s): 1 - 4

    In this paper, a novel look-ahead technique that takes advantage of signal diversity is proposed to improve the performance of the conventional K-best sphere decoding algorithm, which solves the MIMO detection problem of the spatial multiplexing scheme. We also examine the complexity overhead and the performance gain. Simulation results demonstrate that the proposed algorithm achieves better performance with lower complexity by using a smaller K value than the conventional K-best algorithm.

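    The conventional K-best search that this paper improves on can be sketched as a breadth-first tree search over a QR-decomposed channel. Below is a minimal real-valued BPSK example (without the proposed look-ahead step; the channel and parameters are illustrative):

    ```python
    import numpy as np

    def k_best_detect(H, y, symbols=(-1.0, 1.0), K=4):
        """Conventional K-best detection for y = H x + n: QR-decompose H,
        then search the layered tree breadth-first, keeping the K partial
        candidates with the smallest accumulated metric at each layer."""
        Q, R = np.linalg.qr(H)
        z = Q.T @ y
        n = H.shape[1]
        cands = [(0.0, [])]                       # (metric, symbols of visited layers)
        for layer in range(n - 1, -1, -1):
            expanded = []
            for metric, path in cands:
                # interference from the already-detected lower layers
                interf = R[layer, layer + 1:] @ np.array(path) if path else 0.0
                for s in symbols:
                    m = metric + (z[layer] - R[layer, layer] * s - interf) ** 2
                    expanded.append((m, [s] + path))
            cands = sorted(expanded, key=lambda c: c[0])[:K]  # K survivors
        return np.array(cands[0][1])

    rng = np.random.default_rng(1)
    H = rng.standard_normal((4, 4))               # 4x4 real MIMO channel
    x = rng.choice([-1.0, 1.0], size=4)           # transmitted BPSK vector
    x_hat = k_best_detect(H, H @ x)               # noise-free detection
    ```

    In the noise-free case the true path keeps a zero metric at every layer, so it always survives the pruning; the paper's contribution is choosing survivors more cleverly so a smaller K suffices under noise.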
  • Multiple constraint bit allocation for foveated visual coding

    Publication Year: 2009 , Page(s): 1 - 4

    This paper presents a new multiple-constraint bit allocation method for foveated visual coding. First, the foveation effect is applied to the DCT coefficients in macroblocks (MBs) of images and videos. The priorities of MBs are computed from the retinal eccentricity of the human eye, measured as the distance from foveation points. Prioritized image slices are then formed by comparing the priority of each MB with a preset threshold and grouping MBs with similar priorities. The proposed multiple-constraint bit allocation, based on a modified Lagrangian optimization, is then applied to all image slices under two constraints: the target quality requirements of the image slices and the bit rate budget. The scheme seeks the best overall reconstructed image quality under these two constraints. Experiments with the JPEG and H.263+ compression standards show that the proposed scheme outperforms previous work based on traditional Lagrangian optimization and attains the required image-slice quality in both subjective and objective terms.

  • Pulse shaping based PAPR reduction for OFDM signals with minimum error probability

    Publication Year: 2009 , Page(s): 1 - 4

    Pulse shaping is an effective and flexible technique for reducing the peak-to-average power ratio (PAPR) of orthogonal frequency division multiplexing (OFDM) signals. However, different pulse shaping waveforms yield different system error probabilities. In this paper we derive a condition for deciding whether a pulse shaping waveform selected to reduce the PAPR of OFDM signals also attains the minimum error probability. A qualifying waveform can be implemented as a discrete pulse shaping matrix, bringing the PAPR of OFDM signals very close to that of single-carrier signals at the minimum error probability. Our simulation results show that the pulse shaping matrix generated by the square root of a raised cosine pulse is one of the optimal solutions.

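    The PAPR that pulse shaping is meant to reduce can be computed directly from time-domain samples. A small sketch contrasting an OFDM symbol with a constant-modulus single-carrier reference (no pulse shaping applied; the signal sizes are arbitrary):

    ```python
    import numpy as np

    def papr_db(x):
        """Peak-to-average power ratio of a baseband signal, in dB."""
        p = np.abs(x) ** 2
        return 10 * np.log10(p.max() / p.mean())

    rng = np.random.default_rng(0)
    N = 64                                            # number of subcarriers
    qpsk = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N)
    ofdm = np.fft.ifft(qpsk) * np.sqrt(N)             # time-domain OFDM symbol
    # constant-modulus QPSK has 0 dB PAPR; the OFDM sum of carriers does not
    print(papr_db(ofdm), papr_db(qpsk))
    ```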
  • The subjective minimum number of gray levels for natural images derived by OK-quantization theory

    Publication Year: 2009 , Page(s): 1 - 4

    An analog image is digitized by sampling and quantization. It is well known that the maximum sampling interval is given by the sampling theorem. Koshimizu's OK-quantization theory plays the corresponding role for quantization: it gives the maximum quantization interval for image signals, from which the minimum number of gray levels follows directly. We apply the OK-quantization theory to many natural images and derive their minimum numbers of gray levels. However, this minimum differs from the minimum number of gray levels needed to avoid false contouring. In this paper, we propose pre-processing the probability density function of natural images so that the OK-quantization theory yields the minimum number of gray levels that avoids false contouring.

  • Bit-length expansion for digital images

    Publication Year: 2009 , Page(s): 1 - 4

    The bit length (i.e., the amplitude resolution) of digital images is determined by quantization. Bit-length expansion is necessary for high-quality displays such as liquid crystal displays (LCDs) and plasma displays, since these displays offer more than 10-bit resolution per color component, whereas color image signals are generally defined with 8-bit resolution per component. Bit-length expansion is also needed for low bit-length images, in which pseudo-contours are observed. In this paper we propose a bit-length expansion method using adaptive window-length filtering and show that it produces excellent bit-length-expanded images.

  • A detection and tracking method based on POC for oncoming cars

    Publication Year: 2009 , Page(s): 1 - 4

    This paper proposes a new tracking method based on phase-only correlation (POC) for oncoming cars. The POC function can precisely measure the displacement between corresponding points in images. The proposed method applies POC to detecting and tracking oncoming cars in moving pictures taken by an in-vehicle camera. To improve detection and tracking accuracy, we propose a motion-vector-based initial point search technique using the POC function and a tracking method using a coarse-to-fine search based on POC. Applying the proposed method to actual moving pictures taken by the in-vehicle camera yields good detection and tracking of oncoming cars.

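    The displacement measurement at the heart of POC can be illustrated in one dimension: normalize the cross-power spectrum to unit magnitude, inverse-transform, and read off the correlation peak. A sketch (circular shifts only; the paper's coarse-to-fine 2-D search is not shown):

    ```python
    import numpy as np

    def poc_shift(f, g):
        """Estimate the circular displacement between two 1-D signals by
        phase-only correlation: whiten the cross-power spectrum to unit
        magnitude and locate the peak of its inverse transform."""
        F, G = np.fft.fft(f), np.fft.fft(g)
        cross = F * np.conj(G)
        r = np.fft.ifft(cross / (np.abs(cross) + 1e-12)).real
        return int(np.argmax(r))                  # peak index = displacement

    rng = np.random.default_rng(0)
    f = rng.standard_normal(256)
    g = np.roll(f, -5)                            # g[n] = f[n + 5]
    shift = poc_shift(f, g)                       # recovers the 5-sample shift
    ```

    Discarding the magnitude keeps only phase information, which is why the correlation peak is sharp and the displacement estimate precise.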
  • A method for data embedding to printed images based on use of original images

    Publication Year: 2009 , Page(s): 1 - 4

    Data embedding in printed images has become important for several applications. In this paper, we assume that the original image is known and that data retrieval is server-based, and we propose a new method for embedding data in printed images. A method using the spread spectrum technique has already been proposed for this problem, but it has two drawbacks: the small number of data bits, and misdetection caused by the distortion and noise introduced by printing and image capture. We propose using the "Walsh code" as the diffusion code to increase the number of data bits; this also improves the detection process and gives tolerance to some distortions and noise. In the detection stage, a captured image is forwarded to the server holding the original image, and the embedded data is detected there, so we also propose improving detection by using the original image. As a result, more data can be embedded and detected more accurately than with the conventional method.

  • An unsupervised design method for weighted median filters based on simulated annealing

    Publication Year: 2009 , Page(s): 1 - 4

    Estimating a suitable window shape and appropriate weights for weighted median filters is an important problem. In this paper, we propose a new unsupervised design method for the window shapes and weights of weighted median filters, applied to texture images corrupted by impulse noise. The rank-ordered absolute differences (ROAD) statistic is used as the performance measure for optimization by simulated annealing when estimating the window shapes and weights. Simulation results show that the proposed method designs weighted median filters with almost the same performance as those obtained by a supervised technique.

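    The weighted median filter being designed is itself simple to state: replicate each window sample by its integer weight, then take the median of the expanded list. A 1-D sketch with a hand-picked (not SA-optimized) weight vector:

    ```python
    import numpy as np

    def weighted_median_filter(x, weights):
        """Sliding weighted median: replicate each window sample by its
        integer weight, then take the median of the expanded list."""
        weights = np.asarray(weights, dtype=int)
        half = len(weights) // 2
        padded = np.pad(x, half, mode='edge')
        out = np.empty(len(x))
        for i in range(len(x)):
            window = padded[i:i + len(weights)]
            out[i] = np.median(np.repeat(window, weights))
        return out

    x = np.array([1.0, 1.0, 9.0, 1.0, 1.0, 1.0])   # impulse at index 2
    y = weighted_median_filter(x, [2, 1, 2])       # hand-picked weights
    ```

    With these weights the impulse is removed entirely; the paper's contribution is finding such weights (and the window shape) without a training reference.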
  • A robust audio watermarking based-on multiwavelet transform

    Publication Year: 2009 , Page(s): 1 - 4

    In this paper, a robust watermarking scheme for digital audio signals is proposed. The watermarks are embedded into the low-frequency coefficients in the discrete multiwavelet transform domain to achieve robustness against common signal processing procedures and compression. The embedding technique is based on a quantization process that does not require the original audio signal for watermark extraction. Experimental results demonstrate that the watermark is imperceptible and that the algorithm is robust to many common signal processing operations, such as re-sampling, cropping, low-pass filtering, and MPEG-1 Layer III compression.

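    Quantization-based embedding of the kind described, often called quantization index modulation (QIM), can be sketched as follows. The step size, coefficients, and noise level are hypothetical stand-ins, not the paper's parameters:

    ```python
    import numpy as np

    def qim_embed(coeffs, bits, step=0.5):
        """Bit 0 -> quantize to the nearest multiple of step;
        bit 1 -> quantize to the nearest halfway point between multiples."""
        bits = np.asarray(bits)
        base = np.round(coeffs / step - bits / 2)
        return step * (base + bits / 2)

    def qim_extract(coeffs, step=0.5):
        """Decide each bit from the distance to the nearest step multiple."""
        frac = coeffs / step - np.round(coeffs / step)
        return (np.abs(frac) > 0.25).astype(int)

    rng = np.random.default_rng(0)
    c = rng.standard_normal(64)                   # stand-in transform coefficients
    bits = rng.integers(0, 2, size=64)            # watermark payload
    marked = qim_embed(c, bits)
    noisy = marked + rng.normal(0, 0.02, size=64) # mild processing noise
    recovered = qim_extract(noisy)
    ```

    Extraction needs only the step size, not the original coefficients, which is what makes the scheme blind.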
  • A BCI using MEGvision and multilayer neural network - channel optimization and main lobe contribution analysis -

    Publication Year: 2009 , Page(s): 1 - 4
    Cited by:  Papers (1)

    Multilayer neural networks (MLNN) and the FFT amplitude of brain waves have been applied to the 'Brain Computer Interface' (BCI). In this paper, a magnetoencephalograph (MEG) system, 'MEGvision', developed by Yokogawa Corporation, is used to measure brain activity. MEGvision is a 160-channel whole-head MEG system. Channels are selected from 8 main regions: the frontal, temporal, parietal, and occipital lobes in the left and right sides of the brain. The 8 channels located at the central point of each lobe are selected initially; optimum channels are then searched for within the same lobe as the initial channels in order to achieve high classification accuracy. Two subjects and four mental tasks (relaxing, multiplication, playing sport, and rotating an object) are used. The brain waves are measured 10 times per subject and mental task; 8 data sets are used for training the MLNN and the remaining 2 for testing, with 5 combinations of the 2 test sets selected. Correct classification rates using the initial channels are 82.5-90%. By optimizing the channels, the accuracy is improved to 85.0-97.5%, which is very high. Furthermore, the contributions of the brain waves in the 8 lobes are analyzed.

  • On the use of multiple subspace multigrid linear equation solvers for improved convergence of NLMS-type adaptive filters

    Publication Year: 2009 , Page(s): 1 - 4

    Largely overlooked insights into the convergence behavior of the (N)LMS algorithm, focusing on its worst- and best-case performance, are presented. These insights motivate the use of the multigrid paradigm, well known from the numerical solution of partial differential equations, as an important tool for improving the convergence speed of (N)LMS-type adaptive filters. We present such a multigrid adaptive filter algorithm, employing two subspaces, which gives superior convergence performance relative to the (N)LMS algorithm at a very modest cost in computational complexity.

  • Execution time measurement of a vector operation on LSC-Based DSP

    Publication Year: 2009 , Page(s): 1 - 4
    Cited by:  Papers (1)

    In this paper, we describe the performance measurement of a new DSP processor based on the dataflow execution model, in contrast to most recent DSP processors, which follow the von Neumann paradigm. We call it the LSC-based DSP because it is a specialized Loop Structured Computer (LSC), a parallel dataflow computer developed in our laboratory. As the benchmark we choose a vector operation, because vector operations appear in most DSP algorithms and map naturally onto a program for a dataflow computer. The LSC-based DSP is designed in a hardware description language and implemented on a field-programmable gate array (FPGA). We measure the execution time of a vector operation on the LSC-based DSP while varying some design parameters. The measurements show the LSC-based DSP's ability to execute in parallel across several dataflow processing elements: in practice, the execution time with seven processing elements is approximately five times shorter than with one.

  • A noise spectral estimation method based on VAD and recursive averaging using new adaptive parameters for non-stationary noise environments

    Publication Year: 2009 , Page(s): 1 - 4
    Cited by:  Patents (2)

    A noise spectral estimation method for spectral-suppression noise cancellers is proposed for highly non-stationary noise environments. Speech and non-speech frames are detected using an entropy-based voice activity detector (VAD). An adaptive normalization parameter and a variable threshold are newly introduced for the VAD; they are very useful under rapid changes in the noise spectrum and power. Furthermore, a recursive averaging method, with an adaptive smoothing parameter based on speech presence probability, is applied to estimating the noise spectrum in non-speech frames. Simulations are carried out using many kinds of noise (white, babble, car, pink, factory, and tank), switched from one to another. For white noise and babble noise switched from one to the other, the segmental SNR is improved by 2.0-3.8 dB and the noise spectral estimation error by 3.2-4.7 dB.

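    Recursive averaging of the noise spectrum over detected non-speech frames can be sketched with a crude energy-based VAD standing in for the paper's entropy-based detector (all parameters here are hypothetical):

    ```python
    import numpy as np

    def estimate_noise_psd(frames, alpha=0.9, vad_ratio=2.0):
        """Recursive-averaging noise PSD estimate. A crude energy VAD marks a
        frame as noise-only when its energy stays below vad_ratio times the
        running noise-energy estimate; only those frames update the PSD."""
        noise_psd = np.abs(np.fft.rfft(frames[0])) ** 2   # initialize from frame 0
        noise_energy = frames[0] @ frames[0]
        for frame in frames[1:]:
            energy = frame @ frame
            if energy < vad_ratio * noise_energy:          # treated as non-speech
                psd = np.abs(np.fft.rfft(frame)) ** 2
                noise_psd = alpha * noise_psd + (1 - alpha) * psd
                noise_energy = alpha * noise_energy + (1 - alpha) * energy
        return noise_psd

    rng = np.random.default_rng(0)
    frames = rng.standard_normal((50, 128))   # noise-only input, unit variance
    est = estimate_noise_psd(frames)          # per-bin estimate, mean near 128
    ```

    The fixed smoothing parameter `alpha` is where the paper substitutes an adaptive value based on speech presence probability.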
  • Grouping method based on feature matching for tracking and recognition of complex objects

    Publication Year: 2009 , Page(s): 1 - 4
    Cited by:  Papers (1)

    We propose a grouping algorithm for tracking and recognition of complex objects in video images. The algorithm is based on region-growing image segmentation, which divides each image into its constituent elements or segments, and on feature matching using the characteristic features of these elements. All segments in video images, which can be viewed as simple objects, can be detected and tracked with this algorithm whether or not they are moving. For complex-object tracking and recognition, however, it is additionally necessary to group all elements belonging to a complex object by their common characteristic features. With this grouping, the proposed algorithm is able to detect and track moving complex objects, such as cars, in video images. This paper describes the algorithm in detail and verifies its capabilities with MATLAB simulation results.

  • Practical implementation of CCTA based on commercial CCII and OTA

    Publication Year: 2009 , Page(s): 1 - 4
    Cited by:  Papers (4)

    This article presents a basic current-mode building block for analog signal processing, namely the current conveyor transconductance amplifier (CCTA), realized using commercially available ICs. Its performance is examined through PSPICE simulations and experiments, demonstrating the usability of the new active element. The description includes several application examples: a voltage-mode universal biquad filter, a grounded inductance, a current-mode multiplier, and an oscillator, each occupying only a single CCTA.

  • Layered LDGM codes for scalable video streaming over packet erasure channels

    Publication Year: 2009 , Page(s): 1 - 4
    Cited by:  Papers (1)

    This paper introduces layered low-density generator matrix (layered-LDGM) codes for scalable video streaming data. The layered-LDGM codes maintain the relationship between layers from the encoder side to the decoder side; the resulting structure supports partial decoding. Furthermore, the proposed codes create forward error correction (FEC) packets that take the relationships between the scalable components into account, enabling lost packet data to be recovered effectively. Simulation results show that the proposed codes offer better error resiliency, without sacrificing scalability, than the existing method, which creates FEC packets for each scalable component independently.

  • Proposal of amplitude only logarithmic Radon transform for pattern matching - Relation with Fourier-Mellin transform -

    Publication Year: 2009 , Page(s): 1 - 4

    The amplitude-only logarithmic Radon (ALR) transform for pattern matching is proposed. An ALR image is invariant when objects are translated in a picture; under object scaling and rotation, the ALR image is merely translated. Objects are identified by applying a phase-only matched filter to the ALR image, and the differences in size, rotation angle, and position between two objects are detected. Our pattern matching procedure is described and simulated. We show that the Fourier-Mellin transform and our proposed method have a close relationship as well as some differences.

  • Road traffic monitoring using a wireless vehicle sensor network

    Publication Year: 2009 , Page(s): 1 - 4
    Cited by:  Papers (2)  |  Patents (1)

    With the advancement of micro-electro-mechanical systems (MEMS) technologies, wireless sensor networks have opened new vistas for a wide range of application domains. Sensor nodes are usually small, low-power devices that integrate sensors and actuators with limited on-board processing and wireless communication capabilities. One of the most important applications is target tracking and monitoring. Here, a novel wireless vehicle monitoring system that can detect, classify, and determine the direction of travel of vehicles on a two-lane road is proposed. Each vehicle detection node features multiple sensors, including a magnetometer, an accelerometer, an infrared sensor, and an acoustic microphone, with a two-node structure for cooperative monitoring. The results show that the system classifies vehicle type (by vehicle weight) and direction of travel with high accuracy.

  • Dynamic distortion measurement for linearization of loudspeaker systems

    Publication Year: 2009 , Page(s): 1 - 4

    In this paper, we demonstrate the compensation of nonlinear distortion in loudspeaker systems using dynamic distortion measurement. A swept sinusoid is usually used to verify the compensation of nonlinear distortion, but the result does not always correspond to actual drive conditions, because real inputs to loudspeaker systems, such as music and voice, have wideband frequency content. We therefore use dynamic distortion measurement with white noise, a wideband signal. We design both a Volterra-filter linearization system and a mirror filter using the linear and nonlinear parameters of a loudspeaker system estimated by simulated annealing (SA), and examine their effectiveness in compensating the loudspeaker's nonlinear distortion. Experimental results show that the dynamic distortion measurement is effective.

  • A low memory degree-k zerotree coder

    Publication Year: 2009 , Page(s): 1 - 4

    Image compression based on zerotree coding, such as set partitioning in hierarchical trees (SPIHT), yields very good performance. SPIHT uses bit-plane coding in which the wavelet coefficients are scanned at every encoding pass; this requires a lot of memory, since the whole image must be stored for the set-partitioning process. In this paper, a low-memory degree-k zerotree wavelet coding scheme is presented. The proposed algorithm uses a new tree structure with a lower scale of wavelet decomposition, and the degree of the zerotree test is tuned at each encoding pass. Simulation results show that the proposed scheme performs almost as well as the SPIHT algorithm while reducing memory by 93.75% compared to SPIHT.

  • General observation model for an iterative multiframe regularized Super-Resolution Reconstruction for video enhancement

    Publication Year: 2009 , Page(s): 1 - 4

    Classical SRR algorithms are usually based on a translational observation model and hence can be applied only to sequences with simple translational motion. To cope with real video sequences and complex motion, this paper proposes a general observation model for the SRR algorithm, a fast affine block-based transform, devoted to the case of non-isometric inter-frame motion. The proposed SRR algorithm is based on maximum a posteriori estimation, minimizing a cost function. The classical L1 and L2 norms are used to measure the difference between the projected estimate of the high-resolution image and each low-resolution image, removing outliers in the data and errors due to possibly inaccurate motion estimation. Tikhonov regularization is used as prior knowledge for removing outliers, yielding sharp edges and forcing interpolation along edges rather than across them. The efficacy of the proposed algorithm is demonstrated in a number of experiments using standard video sequences such as Susie and Foreman at several noise powers and under several noise models.

  • A study of adaptive guard interval with estimation of channel impulse response for OFDM system

    Publication Year: 2009 , Page(s): 1 - 4
    Cited by:  Papers (2)

    In an orthogonal frequency division multiplexing (OFDM) system, a guard interval (GI) is used to remove the inter-symbol interference (ISI) caused by a multipath channel. When the GI is shorter than the maximum delay of the multipath channel, ISI occurs; on the other hand, a long GI decreases transmission efficiency. In this paper, we propose an adaptive control method for the guard interval length. The channel impulse response is obtained by applying the inverse fast Fourier transform (IFFT) to the estimated frequency-domain channel response; the maximum delay is then estimated from the channel impulse response, and the optimal GI length is decided from the maximum delay.

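    The IFFT-based delay estimation is straightforward to sketch: transform the estimated frequency response back to an impulse response, find the last significant tap, and choose the shortest GI that covers it. The channel, threshold, and candidate GI lengths below are illustrative, not the paper's values:

    ```python
    import numpy as np

    def pick_guard_interval(H, thresh=0.01, candidates=(8, 16, 32, 64)):
        """IFFT the estimated frequency response to get the channel impulse
        response, find the last tap whose power is significant relative to
        the strongest tap, and pick the shortest GI longer than that delay."""
        h = np.fft.ifft(H)
        power = np.abs(h) ** 2
        max_delay = np.nonzero(power > thresh * power.max())[0].max()
        for gi in candidates:
            if gi > max_delay:
                return gi
        return candidates[-1]

    # hypothetical 3-tap multipath channel, maximum delay 12 samples
    h_true = np.zeros(64, dtype=complex)
    h_true[[0, 5, 12]] = [1.0, 0.5, 0.25]
    H = np.fft.fft(h_true)            # frequency response the receiver estimates
    gi = pick_guard_interval(H)       # -> 16, the shortest GI covering delay 12
    ```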
  • New hybrid technique for traffic sign recognition

    Publication Year: 2009 , Page(s): 1 - 4

    A hybrid traffic sign recognition scheme combining knowledge-based analysis and a radial basis function neural network classifier (RBFNN) is proposed in this paper. Traffic signs are first detected in road scenes using a color segmentation method; the extracted signs are then passed to the recognition system for classification. The proposed recognition technique comprises three stages: (i) color histogram classification, (ii) shape classification, and (iii) RBF neural classification. Based on their distinctive colors and shapes, traffic signs can be divided into smaller subclasses and easily recognized by the RBFNN. Before a traffic sign is fed into the RBFNN, its features are extracted by principal component analysis (PCA) to reduce the dimensionality of the original image, followed by Fisher's linear discriminant (FLD) to obtain the most discriminant features. The performance of the proposed hybrid system is evaluated against a purely neural classifier; the experimental results demonstrate that the proposed method achieves a better recognition rate.

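    The PCA stage used for dimensionality reduction can be sketched with a covariance eigendecomposition; the data below are synthetic, and the FLD stage is not shown:

    ```python
    import numpy as np

    def pca_features(X, n_components):
        """Project samples (rows of X) onto the top principal components of
        their covariance matrix; returns projections, components, and mean."""
        mean = X.mean(axis=0)
        Xc = X - mean
        cov = Xc.T @ Xc / (len(X) - 1)
        eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
        top = eigvecs[:, ::-1][:, :n_components]     # leading components first
        return Xc @ top, top, mean

    rng = np.random.default_rng(0)
    # 100 samples in 10-D that really live in a 2-D subspace plus small noise
    basis = rng.standard_normal((2, 10))
    X = rng.standard_normal((100, 2)) @ basis + 0.01 * rng.standard_normal((100, 10))
    Z, top, mean = pca_features(X, 2)
    X_rec = Z @ top.T + mean          # reconstruction from just 2 components
    ```

    When the data genuinely concentrate in a low-dimensional subspace, as sign images largely do after alignment, almost nothing is lost by keeping only the leading components.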