IEEE Transactions on Pattern Analysis and Machine Intelligence

Issue 6 • June 2009


Displaying Results 1 - 20 of 20
  • [Front cover]

    Publication Year: 2009, Page(s): c1
    PDF (178 KB)
    Freely Available from IEEE
  • [Inside front cover]

    Publication Year: 2009, Page(s): c2
    PDF (108 KB)
    Freely Available from IEEE
  • Introduction of New Associate Editors

    Publication Year: 2009, Page(s): 961 - 963
    PDF (144 KB)
    Freely Available from IEEE
  • The Best Bits in an Iris Code

    Publication Year: 2009, Page(s): 964 - 973
    Cited by: Papers (43) | Patents (2)
    PDF (2423 KB) | HTML

    Iris biometric systems apply filters to iris images to extract information about iris texture. Daugman's approach maps the filter output to a binary iris code. The fractional Hamming distance between two iris codes is computed, and decisions about the identity of a person are based on the computed distance. The fractional Hamming distance weights all bits in an iris code equally. However, not all bits in an iris code are equally useful. Our research is the first to present experiments documenting that some bits are more consistent than others. Different regions of the iris are compared to evaluate their relative consistency, and contrary to some previous research, we find that the middle bands of the iris are more consistent than the inner bands. The inconsistent-bit phenomenon is evident across genders and different filter types. Possible causes of inconsistencies, such as segmentation, alignment issues, and different filters, are investigated. The inconsistencies are largely due to the coarse quantization of the phase response. Masking iris code bits corresponding to complex filter responses near the axes of the complex plane improves the separation between the match and nonmatch Hamming distance distributions.

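    A minimal NumPy sketch of the comparison step the abstract describes - fractional Hamming distance with bit masking - is given below. This is illustrative only, not the authors' code; the array layout and mask convention are assumptions.

        import numpy as np

        def fractional_hamming(code_a, code_b, mask_a, mask_b):
            """Fractional Hamming distance between two binary iris codes.

            code_*: boolean arrays of iris-code bits.
            mask_*: boolean arrays, True where a bit is valid; the paper's
            fragile-bit masking would clear mask bits whose filter responses
            lie near the axes of the complex plane.
            """
            valid = mask_a & mask_b
            if valid.sum() == 0:
                raise ValueError("no valid bits to compare")
            return ((code_a ^ code_b) & valid).sum() / valid.sum()

        # Toy usage: two unrelated random codes score near 0.5.
        rng = np.random.default_rng(0)
        a, b = rng.random(2048) < 0.5, rng.random(2048) < 0.5
        m = rng.random(2048) < 0.9
        print(fractional_hamming(a, b, m, m))
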
  • Consistent Depth Maps Recovery from a Video Sequence

    Publication Year: 2009, Page(s): 974 - 988
    Cited by: Papers (59) | Patents (2)
    PDF (4130 KB) | HTML

    This paper presents a novel method for recovering consistent depth maps from a video sequence. We propose a bundle optimization framework to address the major difficulties in stereo reconstruction, such as image noise, occlusions, and outliers. Unlike typical multi-view stereo methods, our approach not only imposes the photo-consistency constraint but also explicitly associates geometric coherence with multiple frames in a statistical way. It can thus naturally maintain the temporal coherence of the recovered dense depth maps without over-smoothing. To make the inference tractable, we introduce an iterative optimization scheme that first initializes the disparity maps using a segmentation prior and then refines the disparities by means of bundle optimization. Instead of defining visibility parameters explicitly, our method implicitly models the reconstruction noise as well as the probabilistic visibility. After bundle optimization, we introduce an efficient space-time fusion algorithm to further reduce the reconstruction noise. Our automatic depth recovery is evaluated on a variety of challenging video examples.

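    The photo-consistency data term underlying such methods is easy to sketch. The toy cost volume below, for a rectified image pair, is a stand-in under stated assumptions; the paper's bundle optimization additionally couples this with a geometric-coherence term across many frames, which is omitted here.

        import numpy as np

        def photo_consistency_cost(ref, src, max_disp, sigma=10.0):
            """Per-pixel matching cost over integer disparity hypotheses.

            ref, src: rectified grayscale frames as 2D float arrays.
            Returns a cost volume of shape (max_disp + 1, H, W).
            """
            h, w = ref.shape
            cost = np.ones((max_disp + 1, h, w))
            for d in range(max_disp + 1):
                diff = np.abs(ref[:, d:] - src[:, : w - d])
                # Robust, bounded cost in [0, 1).
                cost[d, :, d:] = 1.0 - np.exp(-diff / sigma)
            return cost

        # A naive winner-take-all depth estimate (no coherence term):
        # disparity = np.argmin(photo_consistency_cost(ref, src, 64), axis=0)
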
  • Discriminant Saliency, the Detection of Suspicious Coincidences, and Applications to Visual Recognition

    Publication Year: 2009, Page(s): 989 - 1005
    Cited by: Papers (36)
    PDF (4874 KB) | HTML

    A discriminant formulation of top-down visual saliency, intrinsically connected to the recognition problem, is proposed. The new formulation is shown to be closely related to a number of classical principles for the organization of perceptual systems, including infomax, inference by detection of suspicious coincidences, classification with minimal uncertainty, and classification with minimum probability of error. The implementation of these principles with computational parsimony, by exploiting the statistics of natural images, is investigated. It is shown that Barlow's principle of inference by the detection of suspicious coincidences enables computationally efficient saliency measures that are nearly optimal for classification. This principle is adopted for the solution of the two fundamental problems in discriminant saliency: feature selection and saliency detection. The resulting saliency detector is shown to have a number of interesting properties and to act effectively as a focus-of-attention mechanism for the selection of interest points according to their relevance for visual recognition. Experimental evidence shows that the selected points perform well with respect to 1) the ability to localize objects embedded in significant amounts of clutter, 2) the ability to capture information relevant for image classification, and 3) the richness of the set of visual attributes that can be considered salient.

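    As a crude stand-in for the infomax-style criterion mentioned above, features can be ranked by the estimated mutual information between their responses and the class label. The histogram estimator below is an assumption for illustration, not the paper's natural-image-statistics machinery.

        import numpy as np

        def mutual_information(responses, labels, bins=16):
            """Estimate I(X; Y) between feature responses X and binary
            labels Y from samples, via a joint histogram."""
            joint, _, _ = np.histogram2d(responses, labels, bins=(bins, 2))
            pxy = joint / joint.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0
            return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

        # Salient features are those with the largest mutual information:
        # ranking = np.argsort([mutual_information(r, y) for r in features])[::-1]
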
  • Exact Geodesics and Shortest Paths on Polyhedral Surfaces

    Publication Year: 2009, Page(s): 1006 - 1016
    Cited by: Papers (7)
    Multimedia
    PDF (1004 KB) | HTML

    We present two algorithms for computing distances along convex and non-convex polyhedral surfaces. The first algorithm computes exact minimal-geodesic distances and the second algorithm combines these distances to compute exact shortest-path distances along the surface. Both algorithms have been extended to compute the exact minimal-geodesic paths and shortest paths. These algorithms have been implemented and validated on surfaces for which the correct solutions are known, in order to verify the accuracy and to measure the run-time performance, which is cubic or less for each algorithm. The exact-distance computations carried out by these algorithms are feasible for large-scale surfaces containing tens of thousands of vertices, and are a necessary component of near-isometric surface flattening methods that accurately transform curved manifolds into flat representations.

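    For orientation, the sketch below computes shortest paths restricted to mesh edges with Dijkstra's algorithm. This only upper-bounds the exact surface geodesics the paper computes (exact algorithms propagate distances across faces, not just along edges), but it shows the graph setup.

        import numpy as np
        from scipy.sparse import coo_matrix
        from scipy.sparse.csgraph import dijkstra

        def edge_path_distances(vertices, faces, source):
            """Edge-restricted shortest-path distances from one vertex.

            vertices: (n, 3) float array; faces: (m, 3) int array.
            """
            i = faces[:, [0, 1, 2]].ravel()
            j = faces[:, [1, 2, 0]].ravel()
            # Deduplicate edges shared by adjacent triangles.
            edges = np.unique(np.sort(np.c_[i, j], axis=1), axis=0)
            w = np.linalg.norm(vertices[edges[:, 0]] - vertices[edges[:, 1]], axis=1)
            n = len(vertices)
            g = coo_matrix((w, (edges[:, 0], edges[:, 1])), shape=(n, n))
            return dijkstra(g, directed=False, indices=source)
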
  • Kernel Discriminant Analysis for Positive Definite and Indefinite Kernels

    Publication Year: 2009, Page(s): 1017 - 1032
    Cited by: Papers (17)
    Multimedia
    PDF (3272 KB) | HTML

    Kernel methods are a class of well-established and successful algorithms for pattern analysis, thanks to their mathematical elegance and good performance. Numerous nonlinear extensions of pattern recognition techniques have been proposed so far based on the so-called kernel trick. The objective of this paper is twofold. First, we derive an additional kernel tool that is still missing, namely the kernel quadratic discriminant (KQD). We discuss different formulations of KQD based on the regularized kernel Mahalanobis distance in both complete and class-related subspaces. Second, we propose suitable extensions of kernel linear and quadratic discriminants to indefinite kernels. We provide classifiers that are applicable to kernels defined by any symmetric similarity measure. This is important in practice because problem-suited proximity measures often violate the requirement of positive definiteness. As in the traditional case, KQD can be advantageous for data with unequal class spreads in the kernel-induced spaces, which cannot be well separated by a linear discriminant. We illustrate this on artificial and real data for both positive definite and indefinite kernels.

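    For the positive definite case, the kernel linear discriminant admits a compact closed form. The sketch below is a plain kernel Fisher discriminant from a precomputed kernel matrix, given as a hedged illustration; the paper's KQD and its indefinite-kernel extensions are not reproduced here.

        import numpy as np

        def kernel_fisher_train(K, y, reg=1e-3):
            """Train from a symmetric kernel matrix K (n x n), labels y in {0, 1}."""
            n = len(y)
            m0 = K[:, y == 0].mean(axis=1)
            m1 = K[:, y == 1].mean(axis=1)
            N = np.zeros((n, n))
            for c in (0, 1):
                Kc = K[:, y == c]
                l = Kc.shape[1]
                # Within-class scatter in the kernel-induced space.
                N += Kc @ (np.eye(l) - 1.0 / l) @ Kc.T
            alpha = np.linalg.solve(N + reg * np.eye(n), m1 - m0)
            bias = -0.5 * alpha @ (m0 + m1)
            return alpha, bias

        # Classify a test point from its kernel row k = [k(x, x_i)]:
        # label = int(k @ alpha + bias > 0)
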
  • Latent Palmprint Matching

    Publication Year: 2009, Page(s): 1032 - 1047
    Cited by: Papers (48)
    PDF (7358 KB) | HTML

    The evidential value of palmprints in forensics is clear, as about 30% of the latents recovered from crime scenes are from palms. While palmprint-based personal authentication systems have been developed, they mostly deal with low-resolution (about 100 ppi) palmprints and only perform full-to-full matching. We propose a latent-to-full palmprint matching system that is needed in forensics. Our system deals with palmprints captured at 500 ppi and uses minutiae as features. Latent palmprint matching is a challenging problem because latents lifted at crime scenes are of poor quality, cover a small area of the palm, and have a complex background. Other difficulties include the presence of many creases and a large number of minutiae in palmprints. A robust algorithm to estimate ridge direction and frequency in palmprints is developed, which facilitates minutiae extraction even in poor-quality palmprints. A fixed-length minutia descriptor, MinutiaCode, is utilized to capture distinctive information around each minutia, and an alignment-based matching algorithm is used to match palmprints. Two sets of partial palmprints (150 live-scan partial palmprints and 100 latents) are matched to a background database of 10,200 full palmprints to test the proposed system. Rank-1 recognition rates of 78.7% for live-scan palmprints and 69% for latents were achieved.

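    A toy version of the matching step, assuming the two minutiae sets are already aligned: greedily pair minutiae that agree in position and direction, and score by the fraction paired. The paper's MinutiaCode descriptor and alignment search are far richer; this is only a sketch.

        import numpy as np

        def match_score(ma, mb, d_tol=15.0, a_tol=0.3):
            """ma, mb: (k, 3) arrays of minutiae (x, y, theta), pre-aligned."""
            used = np.zeros(len(mb), dtype=bool)
            paired = 0
            for x, y, t in ma:
                d = np.hypot(mb[:, 0] - x, mb[:, 1] - y)
                # Angle difference wrapped to [-pi, pi].
                da = np.abs(np.angle(np.exp(1j * (mb[:, 2] - t))))
                ok = (d < d_tol) & (da < a_tol) & ~used
                if ok.any():
                    used[np.flatnonzero(ok)[np.argmin(d[ok])]] = True
                    paired += 1
            return paired / max(len(ma), len(mb))
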
  • Learning Graph Matching

    Publication Year: 2009, Page(s): 1048 - 1058
    Cited by: Papers (52) | Patents (1)
    PDF (2547 KB) | HTML

    As a fundamental problem in pattern recognition, graph matching has applications in a variety of fields, from computer vision to computational biology. In graph matching, patterns are modeled as graphs and pattern recognition amounts to finding a correspondence between the nodes of different graphs. Many formulations of this problem can be cast in general as a quadratic assignment problem, where a linear term in the objective function encodes node compatibility and a quadratic term encodes edge compatibility. The main research focus in this area has been the design of efficient algorithms for approximately solving the quadratic assignment problem, since it is NP-hard. In this paper, we turn our attention to a different question: how to estimate compatibility functions such that the solution of the resulting graph matching problem best matches the expected solution that a human would manually provide. We present a method for learning graph matching: the training examples are pairs of graphs, and the 'labels' are matches between them. Our experimental results reveal that learning can substantially improve the performance of standard graph matching algorithms. In particular, we find that simple linear assignment with such a learning scheme outperforms Graduated Assignment with bistochastic normalisation, a state-of-the-art quadratic assignment relaxation algorithm.

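    The linear assignment baseline mentioned in the abstract is a few lines with SciPy; the cost matrix below stands in for the learned node-compatibility functions (how they are learned is the paper's contribution and is not shown).

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def linear_graph_match(node_cost):
            """node_cost[i, j]: dissimilarity of node i in graph A and node j in B."""
            rows, cols = linear_sum_assignment(node_cost)
            return list(zip(rows, cols)), node_cost[rows, cols].sum()

        # Toy usage with a random cost matrix:
        rng = np.random.default_rng(1)
        matches, total_cost = linear_graph_match(rng.random((5, 5)))
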
  • Online Signature Verification and Recognition: An Approach Based on Symbolic Representation

    Publication Year: 2009, Page(s): 1059 - 1073
    Cited by: Papers (13)
    PDF (5055 KB) | HTML

    In this paper, we propose a new method of representing online signatures by interval-valued symbolic features. Global features of online signatures are used to form interval-valued feature vectors. Methods for signature verification and recognition based on this symbolic representation are also proposed. We exploit the notion of a writer-dependent threshold and introduce the concept of a feature-dependent threshold to achieve a significant reduction in the equal error rate. Several experiments are conducted to demonstrate the ability of the proposed scheme to discriminate genuine signatures from forgeries. We investigate the feasibility of the proposed representation scheme for signature verification and signature recognition using all 16,500 signatures from 330 individuals in the MCYT bimodal biometric database. Further, extensive experiments are conducted to evaluate the performance of the proposed methods by projecting features onto Eigenspace and Fisherspace. Unlike other existing signature verification methods, the proposed method is simple and efficient. The experimental results reveal that the proposed scheme outperforms several existing verification methods, including the state-of-the-art method for signature verification.

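    The interval-valued representation is straightforward to sketch: each writer is modeled by one interval per global feature, and a test signature is accepted when enough of its features fall inside the writer's intervals. The names and the interval-width rule below are assumptions for illustration.

        import numpy as np

        def interval_model(refs, k=2.0):
            """Per-feature intervals [mean - k*std, mean + k*std] from a
            writer's reference signatures (rows = signatures)."""
            mu, sd = refs.mean(axis=0), refs.std(axis=0)
            return mu - k * sd, mu + k * sd

        def accept(test, lo, hi, threshold):
            """threshold plays the role of the writer-dependent threshold."""
            return np.sum((test >= lo) & (test <= hi)) >= threshold
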
  • Semisupervised Multitask Learning

    Publication Year: 2009, Page(s): 1074 - 1086
    Cited by: Papers (9)
    PDF (1866 KB) | HTML

    Context plays an important role when performing classification, and in this paper we examine context from two perspectives. First, the classification of items within a single task is placed within the context of distinct concurrent or previous classification tasks (multiple distinct data collections). This is referred to as multitask learning (MTL), and is implemented here in a statistical manner, using a simplified form of the Dirichlet process. In addition, when performing many classification tasks one has simultaneous access to all unlabeled data that must be classified, so there is an opportunity to place the classification of any one feature vector within the context of all unlabeled feature vectors; this is referred to as semisupervised learning. In this paper, we integrate MTL and semisupervised learning into a single framework, thereby exploiting two forms of contextual information. Results are presented on a "toy" example to demonstrate the concept, and the algorithm is also applied to three real data sets.

  • A Statistical Framework for Image Category Search from a Mental Picture

    Publication Year: 2009, Page(s): 1087 - 1101
    Cited by: Papers (5)
    PDF (1811 KB) | HTML

    Starting from a member of an image database designated the "query image," traditional image retrieval techniques, for example, search by visual similarity, allow one to locate additional instances of a target category residing in the database. However, in many cases, the query image or, more generally, the target category, resides only in the mind of the user as a set of subjective visual patterns, psychological impressions, or "mental pictures." Consequently, since image databases available today are often unstructured and lack reliable semantic annotations, it is often not obvious how to initiate a search session; this is the "page zero problem." We propose a new statistical framework based on relevance feedback to locate an instance of a semantic category in an unstructured image database with no semantic annotation. A search session is initiated from a random sample of images. At each retrieval round, the user is asked to select the one displayed image that, in his or her opinion, is closest to the target class. The matching is then "mental." Performance is measured by the number of iterations necessary to display an image that satisfies the user, at which point standard techniques can be employed to display other instances. Our core contribution is a Bayesian formulation that scales to large databases. The two key components are a response model, which accounts for the user's subjective perception of similarity, and a display algorithm, which seeks to maximize the flow of information. Experiments with real users and two databases of 20,000 and 60,000 images demonstrate the efficiency of the search process.

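    One round of such Bayesian relevance feedback can be sketched as follows. The softmax response model here is an assumption standing in for the paper's fitted model of subjective similarity, and the display-selection step is omitted.

        import numpy as np

        def update_posterior(post, shown, picked, dist, beta=5.0):
            """post: posterior over all images being the target.
            shown: list of displayed image indices; picked: the user's choice.
            dist[i, j]: feature distance between images i and j."""
            logits = -beta * dist[shown]               # (len(shown), n_images)
            soft = np.exp(logits - logits.max(axis=0))
            soft /= soft.sum(axis=0)                   # P(pick s | target = t)
            post = post * soft[shown.index(picked)]    # Bayes rule
            return post / post.sum()
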
  • Tensor-Based AAM with Continuous Variation Estimation: Application to Variation-Robust Face Recognition

    Publication Year: 2009, Page(s): 1102 - 1116
    Cited by: Papers (7)
    PDF (2747 KB) | HTML

    The active appearance model (AAM) is a well-known model that can represent a non-rigid object effectively. However, because its shape and appearance model are fixed, the fitting result is often unsatisfactory when an input image deviates from the training images. To obtain more robust AAM fitting, we propose a tensor-based AAM that can handle a variety of subjects, poses, expressions, and illuminations in the tensor algebra framework, which consists of an image tensor and a model tensor. The image tensor estimates image variations such as pose, expression, and illumination of the input image using two different variation estimation techniques: discrete and continuous variation estimation. The model tensor generates variation-specific AAM basis vectors from the estimated image variations, which leads to more accurate fitting results. To validate the usefulness of the tensor-based AAM, we performed variation-robust face recognition using the tensor-based AAM fitting results. To do so, we propose an indirect AAM feature transformation. Experimental results show that the tensor-based AAM with continuous variation estimation outperforms both the tensor-based AAM with discrete variation estimation and the conventional AAM in terms of average fitting error and face recognition rate.

  • 3D Model Retrieval Using Probability Density-Based Shape Descriptors

    Publication Year: 2009, Page(s): 1117 - 1133
    Cited by: Papers (24)
    PDF (4265 KB) | HTML

    We address content-based retrieval of complete 3D object models using a probabilistic generative description of local shape properties. The proposed shape description framework characterizes a 3D object with sampled multivariate probability density functions of its local surface features. This density-based descriptor can be efficiently computed via kernel density estimation (KDE) coupled with the fast Gauss transform. The nonparametric KDE technique allows reliable characterization of a diverse set of shapes and yields descriptors that remain relatively insensitive to small shape perturbations and mesh resolution. Density-based characterization also induces a permutation property that can be used to guarantee invariance at the shape matching stage. As shown by extensive retrieval experiments on several 3D databases, our framework provides state-of-the-art discrimination over a broad and heterogeneous set of shape categories.

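    Stripped of the fast Gauss transform acceleration, the density-based descriptor reduces to evaluating a KDE of local features at fixed targets, so every model yields a vector of the same length. The direct O(n * m) version below is a sketch under that simplification.

        import numpy as np

        def density_descriptor(features, grid, bandwidth=0.1):
            """features: (n, d) local surface features of one model.
            grid: (m, d) evaluation points shared across all models."""
            d2 = ((grid[:, None, :] - features[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2.0 * bandwidth ** 2)).mean(axis=1)

        # Models are then compared by a distance between descriptors, e.g.
        # np.abs(density_descriptor(fa, g) - density_descriptor(fb, g)).sum()
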
  • Staff Detection with Stable Paths

    Publication Year: 2009, Page(s): 1134 - 1139
    Cited by: Papers (18)
    PDF (1342 KB) | HTML

    The preservation of musical works produced in the past requires their digitization and transformation into a machine-readable format. The processing of handwritten musical scores by computers, however, remains far from ideal. One of the fundamental stages of this task is staff line detection. We investigate a general-purpose, knowledge-free method for the automatic detection of music staff lines based on a stable-path approach. Lines affected by curvature, discontinuities, and inclination are robustly detected. Experimental results show that the proposed technique consistently outperforms well-established algorithms.

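    The core of a stable-path detector is a dynamic-programming shortest path across image columns. The sketch below computes one left-to-right path through a weight image (dark staff-line pixels given low weight); per the paper, stable paths are those that remain optimal when traced from both margins, a check this sketch omits.

        import numpy as np

        def best_horizontal_path(weights):
            """Returns the row chosen in each column of the minimum-cost
            8-connected left-to-right path."""
            h, w = weights.shape
            cost = weights[:, 0].astype(float)
            back = np.zeros((h, w), dtype=int)
            for x in range(1, w):
                up = np.r_[np.inf, cost[:-1]]          # from the row above
                down = np.r_[cost[1:], np.inf]         # from the row below
                stacked = np.vstack([up, cost, down])
                step = np.argmin(stacked, axis=0)      # 0=up, 1=straight, 2=down
                back[:, x] = np.arange(h) + step - 1
                cost = stacked[step, np.arange(h)] + weights[:, x]
            rows = np.empty(w, dtype=int)
            rows[-1] = int(np.argmin(cost))
            for x in range(w - 1, 0, -1):
                rows[x - 1] = back[rows[x], x]
            return rows
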
  • Geometry-Based Ensembles: Toward a Structural Characterization of the Classification Boundary

    Publication Year: 2009, Page(s): 1140 - 1146
    Cited by: Papers (5)
    PDF (2220 KB) | HTML

    This article introduces a novel binary discriminative learning technique based on the approximation of the nonlinear decision boundary by a piecewise linear, smooth, additive model. The decision border is geometrically defined by means of the characterizing boundary points - points that belong to the optimal boundary under a certain notion of robustness. Based on these points, a set of locally robust linear classifiers is defined and assembled, by means of a Tikhonov-regularized optimization procedure, into an additive model that yields a final lambda-smooth decision rule. As a result, a very simple and robust classifier with a strong geometrical meaning and nonlinear behavior is obtained. The simplicity of the method allows its extension to cope with some current machine learning challenges, such as online learning, large-scale learning, and parallelization, with linear computational complexity. We validate our approach on the UCI database. Finally, we apply our technique in online and large-scale scenarios and in six real-life computer vision and pattern recognition problems: gender recognition, intravascular ultrasound tissue classification, speed traffic sign detection, Chagas disease severity detection, clef classification, and action recognition using 3D accelerometer data. The results are promising, and this paper opens a line of research that deserves further attention.

  • Very Fast Best-Fit Circular and Elliptical Boundaries by Chord Data

    Publication Year: 2009, Page(s): 1147 - 1152
    Cited by: Papers (7)
    PDF (1333 KB) | HTML

    Many machine vision tasks require image segmentation to delineate objects whose shapes are well approximated by circles or ellipses. Due to their computational efficiency, least-squares algebraic methods are a popular choice for fitting an elliptic primitive to noisy image data when real-time processing is required. These methods, however, suffer from biased estimates and sensitivity to outlier data. In this paper, a real-time least-squares method is proposed that provides an indirect geometric fit based on the quadratic polynomial form of parallel chord lengths. The algorithm is shown to be more computationally efficient and more easily made robust to outlier data than algebraic methods. Experimental results also suggest that it provides estimates that suffer less from bias error.

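    For the circular case, the chord idea is compact enough to sketch: the squared length of a horizontal chord is a quadratic polynomial in the row coordinate, so one linear least-squares polyfit recovers the center and radius. The function below is an illustrative simplification; the paper also treats ellipses and robustness to outliers.

        import numpy as np

        def circle_from_chords(y, x_left, x_right):
            """Fit a circle from horizontal chord endpoints.
            For a circle: L(y)^2 = -4*(y - y0)^2 + 4*r^2, quadratic in y."""
            L2 = (x_right - x_left) ** 2
            p2, p1, p0 = np.polyfit(y, L2, 2)
            y0 = -p1 / (2.0 * p2)
            r = 0.5 * np.sqrt(p0 + p1 * y0 + p2 * y0 ** 2)
            x0 = np.mean((x_left + x_right) / 2.0)
            return x0, y0, r

        # Toy check with noiseless chords of the circle (40, 30, r=20):
        y = np.arange(12, 49)
        half = np.sqrt(20.0 ** 2 - (y - 30.0) ** 2)
        print(circle_from_chords(y, 40 - half, 40 + half))  # ~ (40, 30, 20)
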
  • TPAMI Information for authors

    Publication Year: 2009, Page(s): c3
    PDF (108 KB)
    Freely Available from IEEE
  • [Back cover]

    Publication Year: 2009, Page(s): c4
    PDF (178 KB)
    Freely Available from IEEE

Aims & Scope

The IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) is published monthly. Its editorial board strives to present the most important research results in areas within TPAMI's scope.


Meet Our Editors

Editor-in-Chief
David A. Forsyth
University of Illinois