
2009 International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR 2009)

Date 12-15 July 2009


Displaying Results 1 - 25 of 94
  • A fast labeled graph matching algorithm based on edge matching and guided by search route

    Publication Year: 2009 , Page(s): 1 - 7
    PDF (309 KB) | HTML

    This paper presents a fast labeled graph matching algorithm called the graph explorer (GE) algorithm, which belongs to the family of tree-search-based graph matching (TSGM) algorithms for exact graph/subgraph matching. Unlike other node-centric TSGM algorithms, GE focuses on matching edges: it constructs the search state of the partially matched subgraph edge by edge, converting the graph matching problem into a path search problem in the space of search states. Guided by the search path, it avoids repeated label checking by inheriting the state tree structure, which caches and quickly revisits matched nodes and edges. Through a carefully optimized search route and intelligent backtracking, GE prunes a large number of invalid search states, making its performance almost linear in the number of edges of pattern graphs with low ambiguity. While traditional TSGM algorithms suffer call-stack overflow caused by recursive function calls, GE overcomes this with a dynamic state queue and can handle very large patterns (up to 10,000 nodes). Experiments show that GE outperforms similar algorithms and is more resistant to ambiguity.
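The dynamic-state-queue idea, replacing recursive calls with an explicitly managed container so that very large patterns cannot overflow the call stack, can be illustrated with a generic sketch. The traversal below is ordinary depth-first search on a toy adjacency dict, not the GE algorithm itself:

```python
def iterative_dfs(adjacency, start):
    """Depth-first traversal with an explicit stack instead of recursion.

    Managing search states in our own container (here a list used as a
    stack) is the same trick GE's dynamic state queue uses to sidestep
    call-stack overflow on very large pattern graphs."""
    seen, stack, order = set(), [start], []
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        order.append(node)
        # push children in reverse so they pop in their listed order
        stack.extend(reversed(adjacency.get(node, [])))
    return order

g = {0: [1, 2], 1: [3], 2: [3]}
print(iterative_dfs(g, 0))  # -> [0, 1, 3, 2]
```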
  • A novel salient region extraction based on color and texture features

    Publication Year: 2009 , Page(s): 8 - 15
    Cited by:  Papers (1)
    PDF (460 KB) | HTML

    In current research, salient regions are usually defined as the regions that present the main meaningful or semantic content; however, there is no uniform saliency metric for describing the saliency of implicit image regions. Most common metrics treat regions with many abrupt changes or unpredictable characteristics as salient, but such metrics fail to detect salient regions with flat textures. In fact, according to human semantic perception, color and texture distinctions are the main characteristics that distinguish regions. We therefore present a novel saliency metric that couples color and texture features, along with corresponding salient region extraction methods. To evaluate the saliency of implicit regions in an image, three main colors and multi-resolution Gabor features are used as the color and texture features, respectively. The saliency value of each region is the sum of its Euclidean distances to the other regions in the color and texture spaces. A synthesized image and several natural images with salient regions are used to compare the proposed metric with several common metrics, i.e., scale saliency, wavelet transform modulus maxima point density, and importance-index-based metrics. Experimental results verify that the proposed metric is more robust than these common saliency metrics.
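The per-region saliency score described above, the sum of a region's Euclidean distances to all other regions in the joint color/texture feature space, reduces to a few lines once per-region feature vectors are available. The 2-D vectors below are stand-ins for illustration, not the paper's three-main-color/Gabor descriptors:

```python
import numpy as np

def region_saliency(features):
    """Saliency of each region as the sum of its Euclidean distances to
    every other region in the combined color/texture feature space.

    features: (n_regions, n_dims) array of per-region descriptors."""
    diff = features[:, None, :] - features[None, :, :]  # pairwise differences
    dist = np.sqrt((diff ** 2).sum(axis=2))             # pairwise distances
    return dist.sum(axis=1)                             # one score per region

# The region far from the others in feature space scores highest.
feats = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
print(region_saliency(feats).argmax())  # -> 2
```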
  • A novel shot segmentation algorithm based on grid-mapping dynamic windows in compressed videos

    Publication Year: 2009 , Page(s): 16 - 21
    Cited by:  Papers (1)
    PDF (528 KB) | HTML

    Shot segmentation is normally the groundwork for video retrieval and one of the most important steps in content-based video retrieval. Although research on shot segmentation is active, many challenges remain. In this paper, based on the principles of video compression, a novel shot segmentation algorithm called grid-mapping dynamic window (GMDW) is proposed, which uses changing windows over the DC coefficients of I-frames. GMDW and a DC-image histogram approach are integrated into a united difference degree operator that measures the difference between two adjacent I-frames more accurately. Finally, candidate shot boundaries are confirmed or rejected by analyzing the macro-blocks of the P- and B-frames between two adjacent I-frames at each detected segment position. Experiments show that the algorithm improves shot detection performance and, to a certain extent, reduces the complexity of detecting gradual shot cuts.
  • A robust and real-time face detection algorithm in complex background

    Publication Year: 2009 , Page(s): 22 - 25
    PDF (211 KB) | HTML

    Because the AdaBoost cascade face detection algorithm has outstanding performance, it is currently the mainstream face detection algorithm. However, it can produce misjudgments in regions that resemble facial features, and the problem is more serious against complicated image backgrounds. In view of this, a new algorithm named A-SCS is proposed, which adds skin color segmentation after face regions are detected with AdaBoost. The algorithm makes full use of the useful information in the image and greatly reduces the possibility of misjudgment. Compared with the AdaBoost algorithm and the skin color segmentation algorithm alone, it reduces the false detection rate in complex-background images while remaining reasonably robust. Matlab simulation results indicate that the algorithm is fast and accurate, so it can be applied in real-time face detection systems.
  • Automatic detection and recognition of casts in urine sediment images

    Publication Year: 2009 , Page(s): 26 - 31
    Cited by:  Papers (2)
    PDF (340 KB) | HTML

    The appearance of casts in urine sediment is an essential sign of serious renal or urinary tract disease. However, due to uneven illumination, low contrast against the background, and the complicated components of microscopic urine sediment images, previous work on detecting and recognizing casts cannot be considered sufficient. In this paper, an efficient approach for detecting and recognizing casts in urine sediment images is proposed. It consists of three stages. First, a 4-direction variance mapping image is computed from the gray-scale image. Second, a binary image is obtained by applying an improved adaptive bi-threshold segmentation algorithm to the mapping image. Finally, five texture and shape characteristics of casts are extracted from both the gray-scale and binary images, and a decision-tree classifier based on these characteristics distinguishes casts from other particles. Experimental results show that the method produces satisfactory segmentation, yields an easily implemented, time-saving classifier, and improves recognition performance.
  • Decision fusion of global and local image features for Markov localization

    Publication Year: 2009 , Page(s): 32 - 37
    PDF (269 KB) | HTML

    This paper addresses a major problem in visual robot localization: vision-based localization easily leads to ambiguities in large-scale environments. A probabilistic method is proposed for mobile robots to recognize scenes for topological localization. Appearance-based scene classes are learned automatically from composite features that combine global and local image features extracted from sets of training images. A modified scale-invariant feature transform (SIFT) descriptor, which integrates color with local structure, serves as the local feature and disambiguates features that are easily confused. The environment is modeled as a topological graph in which each node corresponds to a place and edges are paths connecting nodes. While the robot travels, each detected interest point votes for the most likely location, and the location receiving the largest number of votes is taken as correct. In cases of perceptual aliasing, a hidden Markov model (HMM) increases the robustness of location recognition. Experimental results show that the proposed feature and decision fusion largely reduces wrong matches and that the method is effective.
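The voting step can be sketched generically: each matched interest point names the place whose training images it matched, and the place with the most votes wins. The place labels below are hypothetical:

```python
from collections import Counter

def vote_location(matched_places):
    """Majority vote: each matched interest point names the place whose
    training images it matched; the most-voted place wins."""
    place, count = Counter(matched_places).most_common(1)[0]
    return place, count

# Hypothetical per-feature matches for one query image.
print(vote_location(["kitchen", "hall", "kitchen", "kitchen", "lab"]))
# -> ('kitchen', 3)
```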
  • Face recognition under varying illumination using adaptive filtering

    Publication Year: 2009 , Page(s): 38 - 42
    Cited by:  Papers (3)
    PDF (418 KB) | HTML

    A novel method that extracts illumination-invariant features with an adaptive filter is proposed for face recognition under varying lighting conditions. The method estimates illumination by minimizing the difference between the normalized illumination and the estimated original illumination in the logarithm domain. To evaluate its effectiveness, three illumination-normalization methods (MSR, SQI, and LTV) were implemented on the Yale B database. The results show that the proposed method outperforms the other methods.
  • Feature extraction and classification for audio information in news video

    Publication Year: 2009 , Page(s): 43 - 46
    Cited by:  Papers (1)
    PDF (201 KB) | HTML

    Feature extraction and analysis are the foundation of audio classification. First, audio features are analyzed in depth, including short-time energy, zero-crossing rate, bandwidth, low short-time energy ratio, high zero-crossing rate ratio, and noise rate. Then a new audio classification method for news video is proposed based on decision trees, dividing audio information into four classes: silence, pure speech, music, and non-pure speech. Experimental results show that the selected features are effective for audio classification in news video and that the classification accuracy is reasonable.
  • Fusion and recognition of face and iris feature based on wavelet feature and KFDA

    Publication Year: 2009 , Page(s): 47 - 50
    PDF (365 KB) | HTML

    In this paper, a novel approach to the fusion and recognition of face and iris images based on wavelet features and kernel Fisher discriminant analysis (KFDA) is developed. First, applying the discrete wavelet transform (DWT) to the face and iris images reduces dimensionality, removes noise, saves storage space, and improves efficiency. Second, face and iris features are extracted and fused by KFDA. Finally, a nearest-neighbor classifier performs recognition. Experimental results on the ORL face database and the CASIA iris database show that KFDA overcomes the 'small sample problem' and that the correct recognition rate is higher than that of face recognition or iris recognition alone.
  • HSICT: A method for removing highlight and shading in color image

    Publication Year: 2009 , Page(s): 51 - 56
    PDF (400 KB) | HTML

    In this paper, the problem of removing highlight and shading in color images is addressed, and a novel method called highlight and shading invariant color transform (HSICT) is proposed. HSICT proceeds in three steps: (1) the illumination color is estimated using two different color distributions; (2) a linear transform eliminates the influence of highlight; (3) the effect of shading is removed by normalization. Experiments illustrate that HSICT not only effectively removes highlight and shading in color images, but can also be easily combined with other algorithms in fields such as segmentation and edge detection.
  • Image segmentation of bone in X-ray pictures of feet

    Publication Year: 2009 , Page(s): 57 - 61
    PDF (382 KB) | HTML

    Because the dynamic range of the targets in X-ray images of feet is wide, and the gray values of the background and the targets overlap over a large interval, this article proposes a bone segmentation method for X-ray images of feet. First, composite enhancement is applied; then the distribution features of the targets are exploited to carry out a dynamic partition; finally, a density function is incorporated to perform the segmentation. Experiments show that the method can effectively separate the bone regions of feet.
  • Lossless compression of laser speckle images by the fuzzy logic

    Publication Year: 2009 , Page(s): 62 - 65
    PDF (257 KB) | HTML

    In this paper, we consider the lossless compression of laser speckle images produced in displacement measurement. We propose a lossless compression technique for laser speckle images based on speckle displacement estimation, temporal prediction, and Golomb entropy coding. In the proposed coder, fuzzy-logic-based correlation is used to estimate speckle displacements. Experimental results show that the proposed coder significantly improves coding efficiency over the JPEG-LS coder on laser speckle images.
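The Golomb entropy-coding stage can be illustrated with the Rice special case (parameter m = 2^k), a common choice for peaked prediction-residual distributions; the abstract does not state which Golomb variant or parameters the coder uses, so this is an assumption:

```python
def golomb_rice_encode(n, k):
    """Golomb-Rice code of a non-negative integer n with parameter m = 2**k:
    a unary-coded quotient (q ones, then a terminating zero) followed by
    the k low-order remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

# Small residuals (the common case after temporal prediction between
# speckle frames) map to short codewords.
print(golomb_rice_encode(0, 2))  # -> 000
print(golomb_rice_encode(5, 2))  # -> 1001
```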
  • Multi-spectral image fusion based on urban texture characteristics

    Publication Year: 2009 , Page(s): 66 - 73
    PDF (460 KB) | HTML

    An approach is proposed for the fusion of urban multi-spectral images. Three extracted texture features are merged by a synthesizing fusion rule. Experimental results indicate that the proposed method performs well in preserving both spectral and spatial information.
  • Reconstruction of 3D microstructure of the rock sample based on CT images

    Publication Year: 2009 , Page(s): 74 - 78
    PDF (372 KB) | HTML

    A new image segmentation algorithm based on Kriging interpolation is proposed to segment CT images of rock into pore systems and mineral grain systems. With this method, the CT image is segmented without isolated islands by analyzing the correlation between pixels. The 3D microstructures of the pore and mineral grain systems are then reconstructed from the segmented images with the marching cubes algorithm, in which volume elements are reconstructed by three-dimensional interpolation and the equipotential surface is analyzed by the triangular facet method. The reconstructed microstructures are verified against slice images in two other orthogonal directions, and the results show that both the distribution and the shape characteristics of the pores and mineral grains in the reconstruction agree with the actual CT images to a statistically significant degree.
  • Region mutual information based objective evaluation measure for image fusion considering robustness

    Publication Year: 2009 , Page(s): 79 - 83
    PDF (488 KB) | HTML

    An objective performance measure for image fusion that considers region information is proposed. The measure reflects not only how much pixel-level information the fused image takes from the source images, but also the region features shared between the source and fused images. Several simulations show that the measure accords with subjective evaluations. To examine the robustness of image fusion, we test some existing fusion methods by adding noise of various levels as a disturbance. The experiments show that robustness is a neglected aspect of fusion performance that deserves more attention.
  • Texture analysis based on Bidimensional Empirical Mode Decomposition and quaternions

    Publication Year: 2009 , Page(s): 84 - 90
    PDF (2683 KB) | HTML

    In this paper, a new texture analysis method is proposed. Bidimensional empirical mode decomposition (BEMD) is a locally adaptive method suitable for analyzing nonlinear or nonstationary signals. The texture image is decomposed into several two-dimensional intrinsic mode functions (2D-IMFs) by BEMD. Quaternions are then used to obtain quaternionic analytic signals, which are compatible with the associated harmonic transform. Finally, the local properties of each 2D-IMF are analyzed with a new quaternionic representation. As an advanced method for describing the local properties of a 2D signal, the algorithm yields seven characteristics for each 2D-IMF, including instantaneous frequency. The performance of the method is demonstrated on both synthetic and natural images.
  • Research on palm shape feature parameter extraction

    Publication Year: 2009 , Page(s): 91 - 94
    PDF (339 KB) | HTML

    This paper briefly introduces the recognition process for palm shape and focuses on the extraction of palm shape feature parameters. Through statistical experiments, eight feature parameters are selected from the many parameters of palm shape: the lengths of the pinkie, ring finger, middle finger, forefinger, and thumb; the widths of the ring finger and middle finger; and the width of the palm. Experiments indicate that identifying palms with these eight feature parameters achieves high accuracy and speed.
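Identification on the eight extracted parameters amounts to nearest-neighbour comparison of eight-dimensional vectors. The enrolled templates and the probe measurement below are made up for illustration:

```python
import numpy as np

# Hypothetical enrolled templates: the eight parameters (five finger
# lengths, two finger widths, palm width), in millimetres.
templates = {
    "alice": np.array([55, 70, 78, 72, 58, 14, 15, 82], float),
    "bob":   np.array([60, 76, 84, 77, 63, 16, 17, 90], float),
}

def identify(probe):
    """Nearest-neighbour match on the eight-dimensional palm-shape vector."""
    return min(templates, key=lambda name: np.linalg.norm(templates[name] - probe))

print(identify(np.array([56, 71, 79, 71, 58, 14, 15, 83], float)))  # -> alice
```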
  • Triangle detection based on windowed Hough Transform

    Publication Year: 2009 , Page(s): 95 - 100
    Cited by:  Papers (1)
    PDF (307 KB) | HTML

    A new algorithm based on the windowed Hough transform is proposed for triangle detection. A sliding window scans the image pixel by pixel, and the Hough transform is computed within each small region. Peaks of the Hough image, which correspond to line segments, are then extracted, and a triangle is detected when three lines satisfy certain conditions. Experimental results show that arbitrary triangles can be detected and retrieved efficiently.
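The final check, deciding whether three Hough peaks bound a triangle, can be sketched as a geometric test on three (theta, rho) lines: all pairwise intersections must exist (no parallel pair) and must not coincide (no concurrent lines). The exact conditions the paper applies inside each window are not given, so this is a generic version:

```python
import numpy as np

def line_intersection(t1, r1, t2, r2):
    """Intersection of two Hough lines x*cos(t) + y*sin(t) = r, or None
    if the lines are parallel."""
    A = np.array([[np.cos(t1), np.sin(t1)], [np.cos(t2), np.sin(t2)]])
    if abs(np.linalg.det(A)) < 1e-9:
        return None
    return np.linalg.solve(A, np.array([r1, r2]))

def is_triangle(lines, tol=1e-6):
    """Three (theta, rho) peaks form a triangle iff every pair intersects
    and the three intersection points are not all the same point."""
    pts = [line_intersection(*lines[i], *lines[j])
           for i, j in [(0, 1), (0, 2), (1, 2)]]
    if any(p is None for p in pts):
        return False
    return (np.linalg.norm(pts[0] - pts[1]) > tol
            and np.linalg.norm(pts[0] - pts[2]) > tol)

# The x-axis, the y-axis, and the line x + y = 1 bound a right triangle.
tri = [(np.pi / 2, 0.0), (0.0, 0.0), (np.pi / 4, 1 / np.sqrt(2))]
print(is_triangle(tri))  # -> True
```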
  • A linear subspace learning algorithm for incremental data

    Publication Year: 2009 , Page(s): 101 - 106
    PDF (258 KB) | HTML

    Incremental learning has attracted increasing attention in the past decade. Since many real tasks are high-dimensional, dimensionality reduction is an important step. Principal component analysis (PCA) and linear discriminant analysis (LDA) are two of the most widely used dimensionality reduction algorithms. However, PCA is unsupervised and is known to be unsuitable for classification tasks; LDA generally outperforms PCA on classification problems, but its major shortcoming is that its performance degrades when it encounters the singularity problem. Recently, a modified LDA, the maximum margin criterion (MMC), was proposed to overcome the shortcomings of PCA and LDA; nevertheless, MMC is not suited to incremental data. This paper proposes an incremental extension of MMC, called incremental maximum margin criterion (IMMC), which updates the projection matrix when a new observation arrives, without repeated learning. Since an approximate intermediate eigenvalue decomposition is introduced, its computational complexity is low.
  • A new approach for intrusion detection based on Local Linear Embedding algorithm

    Publication Year: 2009 , Page(s): 107 - 111
    PDF (207 KB) | HTML

    Intrusion detection is an important direction in network security research. The support vector machine (SVM) is considered a good substitute for traditional learning-based classification approaches and generalizes well, especially with small samples in nonlinear cases. Local linear embedding (LLE) is a good nonlinear dimensionality reduction method for data lying on a nonlinear manifold. This paper proposes an intrusion detection approach that combines SVM and LLE. In Matlab simulation experiments, the method achieves a higher classification accuracy and lower false positive and false negative rates than PCA (principal component analysis) and ICA (independent component analysis) approaches.
  • A novel random projection model for Linear Discriminant Analysis based face recognition

    Publication Year: 2009 , Page(s): 112 - 117
    PDF (284 KB) | HTML

    Linear discriminant analysis (LDA) is one of the most commonly used statistical methods for feature extraction in face recognition. However, LDA often suffers from the small sample size (3S) problem, which occurs when the total number of training samples is smaller than the dimension of the input feature space. To deal with the 3S problem, this paper proposes a novel approach to LDA-based face recognition using the random projection (RP) technique. The main advantages of random projection are data independence, dimensionality reduction, and approximate distance preservation. Based on the Johnson-Lindenstrauss theory, a new RP model is proposed that reduces dimensionality while learning the structure of the manifold with high accuracy. If the within-class scatter matrix is nonsingular in the randomly mapped feature space, LDA can be performed directly; otherwise, RP is followed by our previous regularized discriminant analysis (RDA) approach for face recognition. Two publicly available databases, FERET and CMU PIE, are used for evaluation. Compared with the PCA, DLDA, and Fisherface approaches, the proposed method gives the best performance.
  • Aero-engine fault diagnosis based on multi-scale Independent Component Analysis

    Publication Year: 2009 , Page(s): 118 - 122
    PDF (721 KB) | HTML

    An independent signal is a mathematically stricter notion than an uncorrelated signal. Independent component analysis (ICA) can extract independent signals, so it is better suited than principal component analysis (PCA) for fault diagnosis. However, ICA is not suited to subtle faults caused by small changes in the inputs. To solve this problem, multi-scale ICA (MSICA) is investigated in this paper and applied to aero-engine fault diagnosis. MSICA extracts independent components, which are then used to train a support vector machine (SVM) for classification. Experiments demonstrate the benefits of this representation.
  • An operator method for semi-supervised learning

    Publication Year: 2009 , Page(s): 123 - 127
    PDF (566 KB) | HTML

    We focus on a semi-supervised framework that incorporates labeled and unlabeled data in a general-purpose learner. We propose a semi-supervised learning algorithm based on a novel form of regularization that emphasizes the complexity of the learner's representation. Using an operator method, the optimal learner produced by the algorithm is explicitly represented by the sampling operator when the hypothesis space is a reproducing kernel Hilbert space. Based on this explicit representation, a simple and convenient algorithm is designed. Preliminary experiments validate the effectiveness of the algorithm.
  • Analysis of Laplacian Support Vector Machines

    Publication Year: 2009 , Page(s): 128 - 132
    PDF (274 KB) | HTML

    The goal of a semi-supervised learning algorithm is to effectively incorporate labeled and unlabeled data in a general-purpose learner with small misclassification error. Although various algorithms implement semi-supervised learning, the crucial issue of how the generalization error depends on the number of labeled and unlabeled samples is still poorly understood. In this paper, we consider Laplacian support vector machines (LapSVMs) and establish an error analysis for them.
  • Automated classification of particles in urinary sediment

    Publication Year: 2009 , Page(s): 133 - 137
    PDF (354 KB) | HTML

    The particles in microscopic urinary images are hard to classify because of the noisy background and the strong variability of object shape and texture. To overcome these difficulties, a new method of texture feature extraction is first introduced that uses distance mapping based on a set of local gray-value invariants; the resulting feature is robust to shift and rotation. Second, the high-dimensional feature is reduced to a lower-dimensional space using PCA. Third, a multiclass SVM, after suitable training, classifies five categories of particles. The experiments achieve an average accuracy of 90.02% and an F1 value of 90.44%.