Seventh International Conference on Advances in Pattern Recognition (ICAPR '09)

Date: 4-6 Feb. 2009

Displaying Results 1 - 25 of 114
  • [Front cover]

    Page(s): C1
  • [Title page i]

    Page(s): i
  • [Title page iii]

    Page(s): iii
  • [Copyright notice]

    Page(s): iv
  • Table of contents

    Page(s): v - xii
  • Preface

    Page(s): xiii - xiv
  • Conference Committee

    Page(s): xv
  • Organizing Committee

    Page(s): xvi
  • Technical Program Committee

    Page(s): xvii - xviii
  • International Advisory Committee

    Page(s): xix
  • List of reviewers

    Page(s): xx - xxi
  • Still Image and Video Fingerprinting

    Page(s): 3 - 8

    Multimedia fingerprinting, also known as robust/perceptual hashing or replica detection, is an emerging technology that can be used as an alternative to watermarking for the efficient Digital Rights Management (DRM) of multimedia data. Two fingerprinting approaches are reviewed in this paper. The first is an image fingerprinting technique that makes use of color and texture descriptors, R-trees and Linear Discriminant Analysis (LDA). The second is a two-step, coarse-to-fine video fingerprinting method that involves color-based descriptors, R-trees and a frame-based voting procedure. An experimental performance evaluation is provided for both methods.
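    A minimal Python sketch of such a descriptor-plus-LDA fingerprinting pipeline, with scikit-learn's NearestNeighbors standing in for the R-tree index; the descriptor, the toy data and all names are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical pipeline: colour-histogram descriptors, an LDA projection, and a
# nearest-neighbour index standing in for the R-tree used in the paper.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import NearestNeighbors

def colour_descriptor(image, bins=8):
    """Concatenated per-channel intensity histogram (a simple stand-in for the
    paper's colour/texture descriptors)."""
    hists = [np.histogram(image[..., c], bins=bins, range=(0, 256),
                          density=True)[0] for c in range(image.shape[-1])]
    return np.concatenate(hists)

rng = np.random.default_rng(0)
# Toy database: 20 random "images" in 2 content classes (labels are needed to fit LDA).
images = rng.integers(0, 256, size=(20, 32, 32, 3))
labels = np.repeat([0, 1], 10)

X = np.stack([colour_descriptor(img) for img in images])
lda = LinearDiscriminantAnalysis(solver='eigen', shrinkage='auto',
                                 n_components=1).fit(X, labels)
fingerprints = lda.transform(X)                              # compact fingerprints

index = NearestNeighbors(n_neighbors=1).fit(fingerprints)    # R-tree stand-in
query = lda.transform(colour_descriptor(images[3])[None, :])
dist, idx = index.kneighbors(query)
print("best match:", int(idx[0][0]), "distance:", float(dist[0][0]))
```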

  • Colour and Multispectral Morphological Processing

    Page(s): 9 - 13

    The classical polar colour representations (HLS, HSV, etc.) lead to brightness and saturation components with inconsistent properties. The requirements for a correct quantitative polar colour representation are recalled; they lead to the use of norms, in particular the L1 norm. Colour images are multivariate functions, and segmenting them requires a reduction step, classically obtained by computing a gradient modulus, which is then segmented as a grey-tone image. An alternative solution is proposed in the paper, based on separate segmentations followed by a final merging into a unique partition. The generalization of the top-hat transformation for extracting colour details is also considered. These new marginal colour operators take advantage of an adaptive combination of the chromatic and achromatic (or the spectral and spatio-geometric) colour components. Examples of feature extraction from geographical maps are given.
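    A rough sketch of a marginal colour top-hat in the spirit of the abstract: the classical white top-hat is applied to each channel separately and the responses are merged; the adaptive chromatic/achromatic weighting described in the paper is not reproduced, and all parameters and data are illustrative.

```python
# Hypothetical marginal colour top-hat: the grey-level white top-hat is applied
# to each channel independently and the responses are merged by taking the maximum.
import numpy as np
from scipy.ndimage import white_tophat

def marginal_colour_tophat(image, size=5):
    """Per-channel white top-hat (bright detail extraction), merged by maximum."""
    responses = [white_tophat(image[..., c].astype(float), size=size)
                 for c in range(image.shape[-1])]
    return np.max(responses, axis=0)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64, 3)).astype(float)
details = marginal_colour_tophat(img, size=7)
print(details.shape, float(details.max()))
```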

  • Large Lump Detection Using a Particle Filter of Hybrid State Variable

    Page(s): 14 - 17

    This paper presents a particle filter based solution to the problem of detecting large frozen lumps in an image sequence taken of the feed to a crusher used for size reduction of oilsand ore. In this application, the objects of interest, i.e., large frozen lumps, are characterized by a high level of image noise, irregular shapes, and uneven and variable surface texture. In addition, more than one large lump can be present in the scene. Our proposed solution integrates evidence of the presence of large lumps over time by adapting an existing Bayesian framework for joint object detection and tracking. To implement the particle filter, we formulate an application-specific observation model required by the Bayesian tracker. Our experimental results show that the proposed solution is capable of detecting multiple large lumps reliably, and that it has the potential to prevent the oilsand crusher from jamming, leading to improved productivity.
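    A minimal bootstrap particle filter skeleton illustrating the general mechanism (predict, weight, resample); the observation model below is a brightness-based placeholder, not the paper's application-specific model, and all data and parameters are illustrative.

```python
# Generic bootstrap particle filter skeleton: particles carry a hypothesised lump
# position, a random-walk motion model is assumed, and observation_likelihood is
# a placeholder for the paper's application-specific observation model.
import numpy as np

rng = np.random.default_rng(2)

def observation_likelihood(frame, particles):
    """Placeholder likelihood of each particle given the image frame:
    simply the brightness at the particle location (purely illustrative)."""
    rows = np.clip(particles[:, 0].astype(int), 0, frame.shape[0] - 1)
    cols = np.clip(particles[:, 1].astype(int), 0, frame.shape[1] - 1)
    return frame[rows, cols] + 1e-9

def particle_filter(frames, n_particles=500, motion_std=2.0):
    h, w = frames[0].shape
    particles = rng.uniform([0, 0], [h, w], size=(n_particles, 2))
    for frame in frames:
        particles += rng.normal(0, motion_std, particles.shape)     # predict
        weights = observation_likelihood(frame, particles)          # weight
        weights /= weights.sum()
        particles = particles[rng.choice(n_particles, n_particles, p=weights)]  # resample
        yield particles.mean(axis=0)                                 # crude state estimate

frames = [rng.random((120, 160)) for _ in range(5)]
for t, estimate in enumerate(particle_filter(frames)):
    print(f"frame {t}: estimated position {estimate.round(1)}")
```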

  • Robust Watermarking Using Distributed MR-DCT and SVD

    Page(s): 21 - 24

    In this paper, a robust watermarking scheme is proposed for copyright protection using the distributed multi-resolution discrete cosine transform (D-MR-DCT) and singular value decomposition (SVD). The core idea of the proposed scheme is to decompose an image into four frequency sub-bands using the D-MR-DCT and then modify the singular values of every sub-band with the singular values of the watermark. The experimental results show better visual imperceptibility and resiliency of the proposed scheme against a variety of intentional and unintentional attacks.
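    A hedged sketch of the singular-value embedding step: a plain 2-D DCT split into four quadrants stands in for the paper's D-MR-DCT, and each sub-band's singular values are shifted by the watermark's singular values; the transform, the strength parameter alpha and the data are illustrative assumptions.

```python
# Illustrative embedding step: a plain 2-D DCT split into four quadrants stands in
# for the paper's D-MR-DCT; the singular values of each sub-band are shifted by
# the singular values of the watermark.
import numpy as np
from scipy.fft import dctn, idctn

def split_subbands(coeffs):
    h, w = coeffs.shape[0] // 2, coeffs.shape[1] // 2
    return [coeffs[:h, :w], coeffs[:h, w:], coeffs[h:, :w], coeffs[h:, w:]]

def embed(cover, watermark, alpha=0.05):
    coeffs = dctn(cover.astype(float), norm='ortho')
    sw = np.linalg.svd(watermark.astype(float), compute_uv=False)
    for band in split_subbands(coeffs):
        u, s, vt = np.linalg.svd(band, full_matrices=False)
        s = s + alpha * sw[:s.size]          # modify the singular values
        band[:] = (u * s) @ vt               # write the sub-band back in place
    return idctn(coeffs, norm='ortho')

rng = np.random.default_rng(3)
cover = rng.integers(0, 256, size=(128, 128)).astype(float)
watermark = rng.integers(0, 2, size=(64, 64)).astype(float)
marked = embed(cover, watermark)
print("mean squared embedding distortion:", float(np.mean((marked - cover) ** 2)))
```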

  • A Secure Steganographic Technique for Blind Steganalysis Resistance

    Page(s): 25 - 28

    A simple yet effective tactic for secure steganography that can resist blind steganalysis is proposed in this paper. In this method, a matrix is derived from the image content, thus providing security. This matrix is used by a quantization index modulation (QIM) based encoder and decoder. The embedding locations of the data are also randomized so as to defeat the self-calibration process. It is shown that the detection rate of steganalysis schemes against the proposed method is close to random guessing.
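    A minimal sketch of plain QIM embedding and decoding (one bit per coefficient via two interleaved quantizers); the content-derived matrix and the randomized embedding locations from the abstract are not modelled, and the step size and data are illustrative.

```python
# Plain QIM: each coefficient is quantised onto one of two interleaved lattices
# depending on the bit to embed; decoding picks the lattice closest to the
# received coefficient.
import numpy as np

def qim_embed(x, bits, delta=8.0):
    offset = bits * (delta / 2.0)                   # lattice 0 or lattice 1
    return np.round((x - offset) / delta) * delta + offset

def qim_decode(y, delta=8.0):
    d0 = np.abs(y - qim_embed(y, np.zeros_like(y), delta))
    d1 = np.abs(y - qim_embed(y, np.ones_like(y), delta))
    return (d1 < d0).astype(int)

rng = np.random.default_rng(4)
host = rng.normal(0, 50, size=1000)                 # toy host coefficients
bits = rng.integers(0, 2, size=1000)
stego = qim_embed(host, bits, delta=8.0)
noisy = stego + rng.normal(0, 1.0, size=1000)       # mild channel noise
print("bit error rate:", float(np.mean(qim_decode(noisy) != bits)))
```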

  • Spread Spectrum Watermark Embedder Optimization Using Genetic Algorithms

    Page(s): 29 - 32

    This paper looks at spread spectrum (SS) watermarking from a different angle, in which the number of cover signal points, the payload capacity and the watermark signal-to-interference ratio (WSIR) are optimized. The objective is to meet an acceptable bit error rate (BER) and peak-to-average distortion (PAD) on a single point of the cover signal under the constraints of a given embedding distortion and cover size. First, a new SS watermarking model is proposed in which each watermark bit is spread, using a distinct code pattern, over N mutually orthogonal signal points. The decision variable for each decoded watermark bit is formed from the weighted average of N decision statistics. Each watermarked signal point is then modified (the attack channel) by Rayleigh-distributed fading followed by additive white Gaussian noise (AWGN). A genetic algorithm (GA) is used to reduce the search time in this multidimensional nonlinear problem with conflicting objectives. Simulation results show that, by optimizing the number of cover signal points, the payload capacity and the WSIR, better values of both BER and PAD can be achieved simultaneously.
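    A hedged sketch of the spreading/decoding idea only: each bit is spread over N mutually orthogonal Walsh-Hadamard rows and decoded from the (here unweighted) average of the N per-code correlations; the Rayleigh/AWGN attack channel and the GA optimization are not reproduced, and all sizes and gains are illustrative.

```python
# Spreading/decoding sketch: each bit is spread over N mutually orthogonal
# Walsh-Hadamard rows; decoding averages the N per-code correlations.
import numpy as np
from scipy.linalg import hadamard

def embed(cover, bits, codes, gain=2.0):
    marked = cover.copy()
    for b, code_block in zip(bits, codes):
        marked += gain * (2 * b - 1) * code_block.sum(axis=0)
    return marked

def decode(received, codes):
    decoded = []
    for code_block in codes:
        stats = code_block @ received / code_block.shape[1]   # N decision statistics
        decoded.append(int(stats.mean() > 0))                 # (unweighted) average
    return np.array(decoded)

rng = np.random.default_rng(5)
L, n_bits, N = 64, 4, 8                       # cover length, payload, codes per bit
H = hadamard(L).astype(float)
codes = [H[1 + i * N: 1 + (i + 1) * N] for i in range(n_bits)]   # N rows per bit
cover = rng.normal(0, 5, size=L)
bits = rng.integers(0, 2, size=n_bits)
received = embed(cover, bits, codes) + rng.normal(0, 0.5, size=L)  # AWGN only
print("decoded correctly:", bool(np.all(decode(received, codes) == bits)))
```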

  • Bayesian Mixture of AR Models for Time Series Clustering

    Page(s): 35 - 38

    In this paper we propose a Bayesian framework for estimating the parameters of a mixture of autoregressive (AR) models for time series clustering. The proposed approach is based on variational principles and provides a tractable approximation to the true posterior density that minimizes the Kullback-Leibler (KL) divergence with respect to the prior distribution. The approach is applied to both simulated and real time-series data sets and is found to be useful for exploring and finding the true number of underlying clusters, starting from an arbitrarily large number of clusters.
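    A rough, hedged stand-in for the clustering idea (not the paper's method): per-series AR coefficients are estimated by least squares and then clustered with scikit-learn's variational BayesianGaussianMixture, which can likewise be started with many components and prunes the superfluous ones; the AR order, the simulated data and all settings are illustrative.

```python
# Feature-based approximation: least-squares AR(2) coefficients per series,
# clustered with a variational Bayesian Gaussian mixture started with many components.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def ar_coefficients(x, p=2):
    """Least-squares AR(p) fit: x[t] ~ a1*x[t-1] + ... + ap*x[t-p]."""
    X = np.column_stack([x[p - k - 1: len(x) - k - 1] for k in range(p)])
    coef, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return coef

rng = np.random.default_rng(6)

def simulate(a, n=300):
    x = np.zeros(n)
    for t in range(2, n):
        x[t] = a[0] * x[t - 1] + a[1] * x[t - 2] + rng.normal()
    return x

series = [simulate([0.8, -0.2]) for _ in range(15)] + \
         [simulate([-0.5, 0.3]) for _ in range(15)]
features = np.array([ar_coefficients(s) for s in series])

bgm = BayesianGaussianMixture(n_components=10, max_iter=500,
                              weight_concentration_prior_type='dirichlet_process',
                              random_state=0).fit(features)
print("effective number of clusters:", np.unique(bgm.predict(features)).size)
```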

  • Detection of Neural Activities in FMRI Using Jensen-Shannon Divergence

    Page(s): 39 - 42

    In this paper, we present a statistical technique based on the Jensen-Shannon divergence for detecting regions of activity in fMRI images. The method is model-free, and we exploit the metric property of the square root of the Jensen-Shannon divergence to accumulate the variations between successive time frames of fMRI images. We demonstrate the effectiveness of our algorithm experimentally.
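    A minimal block-wise sketch of the accumulation idea: the Jensen-Shannon distance (the square root of the divergence, which is a metric) between intensity histograms of corresponding blocks in successive frames is summed over time, and blocks with large accumulated change are flagged; the block size, histogram binning and synthetic data are illustrative choices, not the paper's.

```python
# Block-wise accumulation of the Jensen-Shannon distance between the intensity
# histograms of corresponding blocks in successive frames.
import numpy as np
from scipy.spatial.distance import jensenshannon

def block_histogram(frame, r0, c0, size, bins=16):
    block = frame[r0:r0 + size, c0:c0 + size]
    hist, _ = np.histogram(block, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

def activity_map(frames, size=8):
    h, w = frames[0].shape
    acc = np.zeros((h // size, w // size))
    for prev, curr in zip(frames[:-1], frames[1:]):
        for i in range(h // size):
            for j in range(w // size):
                p = block_histogram(prev, i * size, j * size, size)
                q = block_histogram(curr, i * size, j * size, size)
                acc[i, j] += jensenshannon(p, q)   # sqrt of the JS divergence
    return acc

rng = np.random.default_rng(7)
base = rng.normal(0.5, 0.05, (64, 64))
frames = [np.clip(base + rng.normal(0, 0.02, base.shape), 0, 1) for _ in range(10)]
for f in frames[5:]:
    f[8:16, 8:16] += 0.3            # synthetic "activation" appearing at frame 5
print("most active block:", np.unravel_index(activity_map(frames).argmax(), (8, 8)))
```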

  • On Small Sample Prediction of Financial Crisis

    Page(s): 43 - 46

    Prediction of financial crises is a challenging problem in financial research. On the basis of the information provided by financial statements, companies are usually classified into two groups, e.g., solvent and insolvent companies. Linear discriminant analysis (LDA), logistic regression and artificial neural networks (ANNs) are the most common statistical tools used for this classification. LDA and logistic regression separate the two groups using a hyperplane, and they provide a good lower-dimensional view of class separability. However, these methods are not robust against outliers, and they are also affected by deviations from the underlying model assumptions. Moreover, if the number of observations is small compared to the dimension of the measurement vector, these classical methods may lead to poor classification. By contrast, an ANN is more flexible and does not make any assumption about the population structure, but it separates the competing populations using a complex surface, so we sacrifice the lower-dimensional view and the interpretability of the result, which are often major concerns in financial analysis. In this article, we propose a semiparametric method that preserves the interpretability and the lower-dimensional view of class separability while being robust against outliers and capable of working well in a high-dimension, low-sample-size setting. We use two real-life financial data sets to show the utility of this semiparametric method.

  • Model Based Clustering of Audio Clips Using Gaussian Mixture Models

    Page(s): 47 - 50

    The task of clustering multivariate trajectory data of varying length arises in various domains. Model-based methods are capable of handling varying-length trajectories without changing their length or structure. Hidden Markov models (HMMs) are widely used for modeling trajectory data; however, HMMs are not suitable for trajectories of long duration. In this paper, we propose a similarity-based representation for multivariate, varying-length trajectories of long duration using Gaussian mixture models (GMMs). Each trajectory is modeled by a GMM, and the log-likelihood of a trajectory under a given GMM is used as a similarity score. The scores corresponding to all the trajectories in the data set and all the GMMs form a score matrix that is used in a clustering algorithm. The proposed model-based clustering method is applied to audio clips, which are multivariate trajectories of varying length and long duration. The performance of the proposed method is much better than that of a method using a fixed-length representation of an audio clip based on perceptual features.
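    A hedged sketch of the score-matrix construction: each variable-length clip (a toy 2-D feature trajectory here) gets its own GMM, the average log-likelihood of every clip under every clip-level GMM forms a score matrix, and the matrix rows are clustered; the features, the number of mixture components and the final k-means step are illustrative assumptions.

```python
# Score-matrix clustering: one GMM per clip, log-likelihood of every clip under
# every clip-level GMM, and k-means on the rows of the resulting score matrix.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.cluster import KMeans

rng = np.random.default_rng(8)

def make_clip(mean, n):
    """A toy varying-length trajectory of 2-D feature vectors."""
    return rng.normal(mean, 1.0, size=(n, 2))

clips = [make_clip([0, 0], rng.integers(80, 150)) for _ in range(10)] + \
        [make_clip([4, 4], rng.integers(80, 150)) for _ in range(10)]

models = [GaussianMixture(n_components=2, random_state=0).fit(c) for c in clips]

# score[i, j] = average log-likelihood of clip i under the GMM fitted to clip j
score = np.array([[models[j].score(clips[i]) for j in range(len(clips))]
                  for i in range(len(clips))])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(score)
print("cluster labels:", labels)
```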

  • Sparse Intensity Histogram: Distinctive and Robust to the Space-distortion

    Page(s): 53 - 56

    In this article, we propose an image descriptor that allows the distance between images to be computed efficiently, can be stored compactly on disk, and retrieves images similar to a query robustly. The advantage of the proposed descriptor, the sparse intensity histogram (SIH), is that it is robust to spatial distortion, like a local descriptor, while its comparison speed is similar to that of a global descriptor, because the SIH does not use spatial information and therefore avoids the correspondence problem of finding similar pairs of extracted descriptors between two images. The experimental results show that the proposed SIH is considerably more accurate than the edge histogram descriptor.
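    A hedged sketch of the general idea only: a global intensity histogram stored sparsely (non-empty bins only) and compared with an L1 distance; the exact binning, storage format and distance used in the paper may differ, and the data are synthetic.

```python
# Sparse global intensity histogram: only non-empty bins are stored, and two
# descriptors are compared with an L1 distance over the union of their bins.
import numpy as np

def sparse_intensity_histogram(image, bins=64):
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    hist = hist / hist.sum()
    nonzero = np.flatnonzero(hist)
    return dict(zip(nonzero.tolist(), hist[nonzero].tolist()))   # {bin index: mass}

def l1_distance(h1, h2):
    keys = set(h1) | set(h2)
    return sum(abs(h1.get(k, 0.0) - h2.get(k, 0.0)) for k in keys)

rng = np.random.default_rng(9)
img = rng.integers(0, 256, size=(100, 100))
warped = np.roll(img, shift=(7, -5), axis=(0, 1))     # a purely spatial distortion
other = rng.integers(100, 200, size=(100, 100))       # different intensity content

h_img, h_warp, h_other = map(sparse_intensity_histogram, (img, warped, other))
print("distance to spatially distorted copy:", round(l1_distance(h_img, h_warp), 4))
print("distance to unrelated image:", round(l1_distance(h_img, h_other), 4))
```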

  • A Novel Approach to Corner Matching Using Fuzzy Similarity Measure

    Page(s): 57 - 60

    Corner matching in image sequences serves as a building block for several important applications of stereo vision. In this paper, we establish corner correspondences between two images in the presence of intensity variations and motion blur by using a fuzzy-theory-based similarity measure. The proposed matching approach requires extracting a set of candidate corner points from both frames. Experiments conducted on various image sequences demonstrate the superiority of our algorithm over standard cross-correlation and sum of absolute differences under non-ideal conditions.
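    A rough illustrative sketch (not the paper's measure): corner neighbourhoods are compared with a fuzzy similarity in which each pixel difference contributes a Gaussian membership value, which is less sensitive to global intensity changes than a raw sum of absolute differences; the membership function, window size and data are assumptions.

```python
# Illustrative fuzzy similarity: each pixel difference between two corner
# neighbourhoods contributes a Gaussian membership value; the mean membership
# is the similarity score (1 = identical patches).
import numpy as np

def patch(image, corner, r=3):
    y, x = corner
    return image[y - r: y + r + 1, x - r: x + r + 1].astype(float)

def fuzzy_similarity(p1, p2, sigma=20.0):
    return float(np.mean(np.exp(-((p1 - p2) ** 2) / (2 * sigma ** 2))))

rng = np.random.default_rng(10)
frame1 = rng.integers(0, 256, size=(100, 100))
frame2 = np.clip(frame1 * 1.1 + 5, 0, 255)            # global intensity change

corner = (40, 60)                                      # a detected corner in frame1
candidates = [(40, 60), (42, 61), (70, 20)]            # candidate corners in frame2
scores = {c: fuzzy_similarity(patch(frame1, corner), patch(frame2, c))
          for c in candidates}
best = max(scores, key=scores.get)
print("best match:", best, "score:", round(scores[best], 3))
```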

  • A Quality Based Motion Estimation Criterion for Temporal Coding of Video

    Page(s): 61 - 64

    In video compression, motion compensation techniques are used to remove temporal redundancy, and block-based matching is the most popular among them. In such matching techniques, the mean absolute difference (MAD) is widely accepted as the matching criterion because of its simplicity and low computational cost. Since MAD considers only the average error value in a block for matching purposes, ignoring the individual differences between pixels, the matching may not be accurate. In this paper, a new block matching criterion is suggested and experimentally compared, using four parameters, with two other matching criteria including MAD; the results are better for the proposed criterion.
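    A short sketch of the baseline criterion discussed above: the mean absolute difference (MAD) between a current block and candidate blocks in the reference frame, minimised over a search window by exhaustive search; the paper's proposed criterion, which also accounts for individual pixel differences, is not reproduced, and the block size, search range and data are illustrative.

```python
# Exhaustive block matching with the MAD criterion: the candidate block in the
# reference frame with the smallest mean absolute difference wins.
import numpy as np

def mad(block_a, block_b):
    return float(np.mean(np.abs(block_a.astype(float) - block_b.astype(float))))

def best_motion_vector(curr, ref, top, left, bsize=8, search=4):
    block = curr[top:top + bsize, left:left + bsize]
    best, best_cost = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= ref.shape[0] - bsize and 0 <= x <= ref.shape[1] - bsize:
                cost = mad(block, ref[y:y + bsize, x:x + bsize])
                if cost < best_cost:
                    best_cost, best = cost, (dy, dx)
    return best, best_cost

rng = np.random.default_rng(11)
ref = rng.integers(0, 256, size=(64, 64))
curr = np.roll(ref, shift=(2, -3), axis=(0, 1))        # simulated global motion
print(best_motion_vector(curr, ref, top=24, left=24))  # recovers the shift with zero cost
```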

  • Best Basis Selection Using Singular Value Decomposition

    Page(s): 65 - 68

    This paper presents a new idea for best basis selection through singular value decomposition (SVD). The wavelet and wavelet packet transforms are efficient tools for representing an image. The wavelet packet transform is a generalization of the wavelet transform and is more adaptive, because it offers a rich library of bases from which the best one can be chosen for a certain class of images with a specified cost function. Wavelet packet decomposition yields a redundant representation of the image. The problem of wavelet packet image coding consists of considering all possible wavelet packet bases in the library and choosing the one that gives the best coding performance. In this work, singular value decomposition is used as the tool for selecting the best basis. Experimental results demonstrate the validity of the approach.
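    A hedged sketch of the selection idea: a 2-D wavelet packet decomposition (PyWavelets) is computed and each node at the deepest level is scored with a cost derived from its singular values (here, the entropy of the normalised singular-value spectrum), so that low-cost nodes indicate compact, low-rank sub-bands; the cost function, wavelet and pruning rule are illustrative assumptions, not necessarily the paper's.

```python
# Full 2-D wavelet packet decomposition (PyWavelets); each deepest-level node is
# scored with the entropy of its normalised singular-value spectrum, so lower
# cost means a more compact (lower-rank) sub-band.
import numpy as np
import pywt

def svd_entropy(coeffs):
    s = np.linalg.svd(coeffs, compute_uv=False)
    p = s / s.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(12)
image = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64)) + 0.05 * rng.random((64, 64))

wp = pywt.WaveletPacket2D(data=image, wavelet='db2', maxlevel=2)
costs = {node.path: svd_entropy(node.data) for node in wp.get_level(2)}
for path, cost in sorted(costs.items(), key=lambda kv: kv[1])[:4]:
    print(f"node {path}: SVD-entropy cost {cost:.3f}")
```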
