
Pattern Analysis and Machine Intelligence, IEEE Transactions on

Issue 4 • Apr 1997

  • On-line fingerprint verification

    Publication Year: 1997 , Page(s): 302 - 314
    Cited by:  Papers (362)  |  Patents (9)
    PDF (7084 KB)

    Fingerprint verification is one of the most reliable personal identification methods. However, manual fingerprint verification is incapable of meeting today's increasing performance requirements, so an automatic fingerprint identification system (AFIS) is needed. This paper describes the design and implementation of an online fingerprint verification system which operates in two stages: minutia extraction and minutia matching. An improved version of the minutia extraction algorithm proposed by Ratha et al. (1995), which is much faster and more reliable, is implemented for extracting features from an input fingerprint image captured with an online inkless scanner. For minutia matching, an alignment-based elastic matching algorithm has been developed. This algorithm finds the correspondences between minutiae in the input image and the stored template without resorting to exhaustive search, and adaptively compensates for the nonlinear deformations and inexact pose transformations between fingerprints. The system has been tested on two sets of fingerprint images captured with inkless scanners, and the verification accuracy was found to be acceptable. A complete fingerprint verification procedure takes, on average, about eight seconds on a SPARC 20 workstation. These experimental results show that the system meets the response time requirements of online verification with high accuracy.
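    The alignment-based matching idea can be sketched as follows (a simplified illustration, not the authors' algorithm: minutiae are hypothetical (x, y, angle) triples, the reference pair and tolerances are assumed, and the elastic compensation for nonlinear deformation described in the paper is omitted):

```python
import math

def align(minutiae, ref):
    """Translate and rotate a minutia set so the reference minutia sits at
    the origin with orientation zero."""
    x0, y0, t0 = ref
    c, s = math.cos(-t0), math.sin(-t0)
    return [(c * (x - x0) - s * (y - y0),
             s * (x - x0) + c * (y - y0),
             t - t0) for x, y, t in minutiae]

def match_score(template, query, ref_t, ref_q, dist_tol=8.0, ang_tol=0.3):
    """Fraction of template minutiae that find an unused query minutia
    within the distance and angle tolerances after alignment."""
    aligned_t = align(template, ref_t)
    aligned_q = align(query, ref_q)
    used, matched = set(), 0
    for x, y, t in aligned_t:
        for j, (xq, yq, tq) in enumerate(aligned_q):
            if j not in used and math.hypot(x - xq, y - yq) <= dist_tol \
                    and abs(t - tq) <= ang_tol:
                used.add(j)
                matched += 1
                break
    return matched / len(aligned_t)
```

    Aligning against a reference pair avoids exhaustive search over all pose transformations, which is the point of the alignment-based approach.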

  • The effect of Gaussian error in object recognition

    Publication Year: 1997 , Page(s): 289 - 301
    Cited by:  Papers (8)
    PDF (1164 KB)

    In model-based recognition, the goal is to locate an instance of one or more known objects in an image. The problem is compounded in real images by the presence of clutter, occlusion, and sensor error, which can lead to “false negatives”, failures to recognize the presence of the object, and “false positives”, in which the algorithm incorrectly identifies an occurrence of the object. The probability of either event is affected by parameters within the recognition algorithm, which are almost always chosen in an ad hoc fashion. The effect of the parameter values on the likelihood that the recognition algorithm will make a mistake is usually not understood explicitly. To address the problem, we explicitly model the noise that occurs in the image. In a typical recognition algorithm, hypotheses about the position of the object are tested against the evidence in the image, and an overall score is assigned to each hypothesis. We use a statistical model to determine what score a correct or incorrect hypothesis is likely to have, and use standard binary hypothesis testing techniques to distinguish correct from incorrect hypotheses. Using this approach, we can compare algorithms and noise models, and automatically choose values for internal system thresholds to minimize the probability of making a mistake.
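    The threshold-selection step can be illustrated for the simplest case of two equal-variance Gaussian score models, one for incorrect and one for correct hypotheses (an assumed noise model for illustration, not the paper's; under it, the closed-form threshold below minimizes the probability of a mistake):

```python
import math

def bayes_threshold(mu0, mu1, sigma, p0=0.5, p1=0.5):
    """Score threshold minimizing the probability of a mistake when incorrect
    hypotheses score N(mu0, sigma^2) and correct ones N(mu1, sigma^2)."""
    return 0.5 * (mu0 + mu1) + sigma**2 * math.log(p0 / p1) / (mu1 - mu0)

def error_prob(t, mu0, mu1, sigma, p0=0.5, p1=0.5):
    """P(false positive) + P(false negative) when hypotheses scoring above t
    are accepted."""
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return p0 * (1.0 - phi((t - mu0) / sigma)) + p1 * phi((t - mu1) / sigma)
```

    With equal priors the optimal threshold is the midpoint of the two score means; skewing the priors shifts it toward the less likely class.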

  • Learning an integral equation approximation to nonlinear anisotropic diffusion in image processing

    Publication Year: 1997 , Page(s): 342 - 352
    Cited by:  Papers (15)
    PDF (1164 KB)

    Multiscale image enhancement and representation is an important part of biological and machine early vision systems. The process of constructing this representation must be both rapid and insensitive to noise, while retaining image structure at all scales. This is a complex task, as small-scale structure is difficult to distinguish from noise, while larger-scale structure requires more computational effort. In both cases, good localization can be problematic. Errors can also arise when conflicting results at different scales require cross-scale arbitration. Structure-sensitive multiscale techniques attempt to analyze an image at a variety of scales simultaneously; we compare several such techniques. In this paper, we present a technique which obtains an approximate solution to the partial differential equation (PDE) for a specific time, via the solution of an integral equation which is the nonlinear analog of convolution. The kernel function of the integral equation plays the same role that a Green's function does for a linear PDE, allowing the direct solution of the nonlinear PDE for a specific time without requiring integration through intermediate times. We then use a learning technique to approximate the kernel function for arbitrary input images. The result is an improvement in speed and noise sensitivity, as well as a means to parallelize an otherwise serial algorithm.
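    The PDE being approximated is classical nonlinear anisotropic diffusion; a single explicit time step in one dimension looks like this (a textbook Perona-Malik sketch for orientation only, not the paper's integral-equation or learned-kernel method; k and dt are assumed parameters):

```python
import math

def perona_malik_step(u, k=10.0, dt=0.2):
    """One explicit time step of 1D nonlinear anisotropic diffusion.
    Conductance g decays with gradient magnitude, so small fluctuations
    are smoothed while strong edges (gradient >> k) barely diffuse."""
    g = lambda grad: math.exp(-(grad / k) ** 2)
    out = list(u)
    for i in range(1, len(u) - 1):
        east = u[i + 1] - u[i]
        west = u[i - 1] - u[i]
        out[i] = u[i] + dt * (g(abs(east)) * east + g(abs(west)) * west)
    return out
```

    Reaching a given diffusion time requires many such small steps, which is exactly the cost the paper's direct integral-equation solution avoids.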

  • Recognition of digits in hydrographic maps: binary versus topographic analysis

    Publication Year: 1997 , Page(s): 399 - 404
    Cited by:  Papers (10)
    PDF (2552 KB)

    This paper compares the performance of topographic analysis and binary analysis for the recognition of digits in hydrographic maps. The performance of each method was measured by the correct classification rate of the final symbol recognition step when processing a complete hydrographic map of size 0.45×0.6 m² containing about 35,000 digits. The experimental results indicate that binary analysis outperforms topographic analysis, and that its overall performance is acceptable.

  • An edge detection technique using the facet model and parameterized relaxation labeling

    Publication Year: 1997 , Page(s): 328 - 341
    Cited by:  Papers (10)  |  Patents (4)
    PDF (860 KB)

    We present a method for detecting and labeling the edge structures in digital gray-scale images in two distinct stages: First, a variant of the cubic facet model is applied to detect the location, orientation and curvature of the putative edge points. Next, a relaxation labeling network is used to reinforce meaningful edge structures and suppress noisy edges. Each node label of this network is a 3D vector parameterizing the orientation and curvature information of the corresponding edge point. A hysteresis step in the relaxation process maximizes connected contours. For certain types of images, prefiltering by adaptive smoothing improves robustness against noise and spatial blurring.
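    The hysteresis idea, keeping weak responses only when they connect to strong ones, can be sketched independently of the relaxation network (a generic 8-connected implementation with assumed thresholds, not the authors' parameterized labeling scheme):

```python
from collections import deque

def hysteresis(strength, low, high):
    """Keep cells at or above `high`, plus any cell at or above `low` that is
    8-connected to a kept cell; `strength` is a 2D list of edge responses."""
    rows, cols = len(strength), len(strength[0])
    keep = [[False] * cols for _ in range(rows)]
    q = deque((r, c) for r in range(rows) for c in range(cols)
              if strength[r][c] >= high)
    for r, c in q:
        keep[r][c] = True
    while q:
        r, c = q.popleft()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols \
                        and not keep[rr][cc] and strength[rr][cc] >= low:
                    keep[rr][cc] = True
                    q.append((rr, cc))
    return keep
```

    Weak responses isolated from any strong seed are suppressed, which favors connected contours over scattered noise.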

  • Model-based image enhancement of far infrared images

    Publication Year: 1997 , Page(s): 410 - 415
    Cited by:  Papers (12)
    PDF (2308 KB)

    We devise enhancement algorithms for far infrared images based upon a model in which an idealized far infrared image is piecewise constant. We apply two known enhancement algorithms, median filtering and spatial homomorphic filtering, and then extend the model to develop spatio-temporal homomorphic filtering. The algorithms have been applied to several image sequences and work well, showing significant image enhancement.
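    Spatial homomorphic filtering can be sketched in one dimension: take logs so the multiplicative component becomes additive, attenuate the slowly varying local mean, boost the detail, and exponentiate back (the window size and gains below are illustrative assumptions, not values from the paper):

```python
import math

def homomorphic_1d(signal, window=3, low_gain=0.5, high_gain=1.5):
    """Spatial homomorphic filtering of a positive 1D signal: logs make the
    multiplicative component additive; the slowly varying local mean is
    attenuated (low_gain < 1) and local detail boosted (high_gain > 1)."""
    logs = [math.log(v) for v in signal]
    half = window // 2
    out = []
    for i in range(len(logs)):
        lo, hi = max(0, i - half), min(len(logs), i + half + 1)
        mean = sum(logs[lo:hi]) / (hi - lo)
        out.append(math.exp(low_gain * mean + high_gain * (logs[i] - mean)))
    return out
```

    The spatio-temporal extension applies the same idea with a window that spans neighboring frames as well as neighboring pixels.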

  • Combination of multiple classifiers using local accuracy estimates

    Publication Year: 1997 , Page(s): 405 - 410
    Cited by:  Papers (251)  |  Patents (1)
    PDF (144 KB)

    This paper presents a method for combining classifiers that uses estimates of each individual classifier's local accuracy in small regions of feature space surrounding an unknown test sample. An empirical evaluation using five real data sets confirms the validity of our approach compared to other algorithms for combining multiple classifiers. We also suggest a methodology for determining the best mix of individual classifiers.
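    A minimal sketch of the idea, assuming a nearest-neighbor estimate of local accuracy and classifiers given as callables (the paper's actual estimators and selection rule may differ):

```python
import math

def combine_by_local_accuracy(classifiers, train_x, train_y, x, k=3):
    """Predict with the classifier that is most accurate on the k training
    samples nearest to the test point x; classifiers are callables mapping
    a feature tuple to a label."""
    nearest = sorted(range(len(train_x)),
                     key=lambda i: math.dist(train_x[i], x))[:k]
    best_clf, best_acc = None, -1.0
    for clf in classifiers:
        acc = sum(clf(train_x[i]) == train_y[i] for i in nearest) / k
        if acc > best_acc:
            best_clf, best_acc = clf, acc
    return best_clf(x)
```

    A classifier that is mediocre globally can still win in the region of feature space where the test sample falls, which is what this selection rule exploits.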

  • Finite-resolution aspect graphs of polyhedral objects

    Publication Year: 1997 , Page(s): 315 - 327
    Cited by:  Papers (12)
    PDF (664 KB)

    We address the problem of computing the aspect graph of a polyhedral object observed by an orthographic camera with limited spatial resolution, such that two image points separated by a distance smaller than a preset threshold cannot be resolved. Under this model, views that would differ under normal orthographic projection may become equivalent, while “accidental” views may occur over finite areas of the view space. We present a catalogue of visual events for polyhedral objects and give an algorithm for computing the aspect graph and enumerating all qualitatively different aspects. The algorithm has been fully implemented and results are presented.

  • A lexicon driven approach to handwritten word recognition for real-time applications

    Publication Year: 1997 , Page(s): 366 - 379
    Cited by:  Papers (90)  |  Patents (5)
    PDF (512 KB)

    A fast method of handwritten word recognition suitable for real-time applications is presented in this paper. Preprocessing, segmentation and feature extraction are implemented using a chain code representation of the word contour. Dynamic matching between characters of a lexicon entry and segment(s) of the input word image is used to rank the lexicon entries in order of best match. A variable duration for each character is defined and used during the matching. Experimental results show that our approach using variable duration outperforms the method using fixed duration in terms of both accuracy and speed. The entire recognition process takes about 200 msec on a single SPARC-10 platform, and a recognition accuracy of 96.8 percent is achieved for a lexicon size of 10, on a database of postal words captured at 212 dpi.
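    The variable-duration dynamic matching can be sketched as a dynamic program that assigns each character of a lexicon entry a run of one to max_dur consecutive segments, minimizing a caller-supplied cost (the cost function and max_dur are hypothetical stand-ins for the paper's feature-based character-segment scores):

```python
def match_word(chars, n_segments, cost, max_dur=3):
    """Minimum-cost alignment of a lexicon entry to a segmented word image:
    each character consumes a run of 1..max_dur consecutive segments, and
    cost(c, i, j) scores character c against segments i..j-1."""
    INF = float("inf")
    n = len(chars)
    # best[k][j]: cheapest way to explain the first j segments
    # with the first k characters
    best = [[INF] * (n_segments + 1) for _ in range(n + 1)]
    best[0][0] = 0.0
    for k in range(1, n + 1):
        for j in range(k, n_segments + 1):
            for d in range(1, min(max_dur, j) + 1):
                prev = best[k - 1][j - d]
                if prev < INF:
                    best[k][j] = min(best[k][j],
                                     prev + cost(chars[k - 1], j - d, j))
    return best[n][n_segments]
```

    Ranking a lexicon then amounts to running this alignment for each entry and sorting by the resulting cost.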

  • Parametric shape-from-shading by radial basis functions

    Publication Year: 1997 , Page(s): 353 - 365
    Cited by:  Papers (19)
    PDF (2296 KB)

    We present a new method of shape from shading that uses radial basis functions to parameterize the object depth. The radial basis functions are deformed by adjusting their centers, widths, and weights such that the intensity errors are minimized. The initial centers and widths are arranged hierarchically to speed up convergence and to stabilize the solution. Although a smoothness constraint is used, it can eventually be dropped without causing instability in the solution. An important feature of our parametric shape-from-shading method is that it offers a unified framework for integrating multiple sources of sensory information. We show that knowledge about surface depth and/or surface normals anywhere in the image can be easily incorporated into the shape-from-shading process. It is further demonstrated that even qualitative knowledge can be used in shape from shading to improve 3D reconstruction. Experimental comparisons of our method with several existing ones are made using both synthetic and real images. Results show that our solution is more accurate than the others.

  • Inducing features of random fields

    Publication Year: 1997 , Page(s): 380 - 393
    Cited by:  Papers (170)  |  Patents (17)
    PDF (1176 KB)

    We present a technique for constructing random fields from a set of training samples. The learning paradigm builds increasingly complex fields by allowing potential functions, or features, that are supported by increasingly large subgraphs. Each feature has a weight that is trained by minimizing the Kullback-Leibler divergence between the model and the empirical distribution of the training data. A greedy algorithm determines how features are incrementally added to the field and an iterative scaling algorithm is used to estimate the optimal values of the weights. The random field models and techniques introduced in this paper differ from those common to much of the computer vision literature in that the underlying random fields are non-Markovian and have a large number of parameters that must be estimated. Relations to other learning approaches, including decision trees, are given. As a demonstration of the method, we describe its application to the problem of automatic word classification in natural language processing.
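    The iterative scaling step can be illustrated on the smallest possible model: one binary feature over two outcomes, fit by generalized iterative scaling so the model's feature expectation matches the empirical one (a toy sketch of the classical update, not the authors' full feature-induction algorithm):

```python
import math

def gis_fit(emp_mean, iters=200):
    """Fit the weight w of a single binary feature f(x) = x over outcomes
    {0, 1}, so that p_w(x) ∝ exp(w * x) reproduces the empirical feature
    expectation emp_mean, via generalized iterative scaling (the feature
    sum is bounded by C = 1 here, so the update is a plain log-ratio)."""
    w = 0.0
    for _ in range(iters):
        model_mean = math.exp(w) / (1.0 + math.exp(w))  # E_model[f]
        w += math.log(emp_mean / model_mean)            # GIS update
    return w
```

    Each update moves the model expectation toward the empirical one and reduces the Kullback-Leibler divergence between the empirical distribution and the model.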

  • Minimal surfaces based object segmentation

    Publication Year: 1997 , Page(s): 394 - 398
    Cited by:  Papers (80)
    PDF (484 KB)

    A geometric approach for 3D object segmentation and representation is presented. The segmentation is obtained by deformable surfaces moving towards the objects to be detected in the 3D image. The model is based on curvature motion and the computation of surfaces with minimal area, better known as minimal surfaces. The space where the surfaces are computed is induced from the 3D image (volumetric data) in which the objects are to be detected. The model links classical deformable surfaces obtained via energy minimization with intrinsic ones derived from curvature-based flows. The new approach is stable, robust, and automatically handles changes in the surface topology during the deformation.


Aims & Scope

The IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) is published monthly. Its editorial board strives to present the most important research results in areas within TPAMI's scope.

Full Aims & Scope

Meet Our Editors

Editor-in-Chief
David A. Forsyth
University of Illinois