IEEE Transactions on Information Forensics and Security

Issue 4 • Dec. 2007

  • Table of contents

    Page(s): C1
    Freely Available from IEEE
  • IEEE Transactions on Information Forensics and Security publication information

    Page(s): C2
    Freely Available from IEEE
  • Optimum Detection for Spread-Spectrum Watermarking That Employs Self-Masking

    Page(s): 645 - 654

    Digital watermarking is an efficient and promising approach to protect intellectual property rights of digital media. Spread spectrum (SS) is one of the most widely used image watermarking schemes because of its robustness against attacks and its support for the exploitation of the properties of the human visual system (HVS). To maximize the watermark strength without introducing visual artifacts, in SS watermarking, the watermark signal is usually modulated by the just-noticeable difference (JND) of the host image. In advanced perceptual models, the JND is characterized as a nonlinear function of local image features. The optimum detection scheme for such nonlinearly embedded watermarks, however, has rarely been studied. In this paper, we address this problem and propose a novel approach that transforms the test signal to a perceptually uniform domain and then performs Bayesian hypothesis testing in that domain. Locally optimum detectors for arbitrary host signal distributions and arbitrary JND models that exploit the self-masking property of the HVS are derived in closed form, in which the test signal is first nonlinearly preprocessed before a linear correlator is applied. The optimality of the proposed detector is justified mathematically according to the Neyman-Pearson criterion. Simulation results demonstrate the superior performance of the proposed detector over the conventional linear correlation detector.
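    The baseline this paper improves on can be illustrated numerically. The sketch below is not the paper's detector: the JND model, signal sizes, and statistic are invented for illustration of JND-modulated spread-spectrum embedding followed by plain linear correlation.

```python
import numpy as np

def embed_ss(host, watermark, jnd):
    """Additive spread-spectrum embedding with the watermark scaled
    per-sample by a just-noticeable-difference (JND) value."""
    return host + jnd * watermark

def linear_correlator(received, watermark):
    """Baseline linear correlation statistic (the detector the paper improves on)."""
    return float(np.mean(received * watermark))

rng = np.random.default_rng(0)
host = rng.normal(0.0, 10.0, 4096)      # stand-in for transform coefficients
wm = rng.choice([-1.0, 1.0], 4096)      # bipolar pseudorandom watermark
jnd = 1.0 + 0.1 * np.abs(host)          # toy JND: more strength where masking is high

marked = embed_ss(host, wm, jnd)
stat_true = linear_correlator(marked, wm)                              # large
stat_false = linear_correlator(marked, rng.choice([-1.0, 1.0], 4096))  # near zero
```

    The gap between the two statistics is what any detector thresholds; the paper's contribution is a nonlinear preprocessing step before the correlation that is optimal under the Neyman-Pearson criterion.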

  • A New Digital Image Watermarking Algorithm Resilient to Desynchronization Attacks

    Page(s): 655 - 663

    Synchronization is crucial to designing a robust image watermarking scheme. In this paper, a novel feature-based image watermarking scheme robust to desynchronization attacks is proposed. The robust feature points, which can survive various signal-processing operations and affine transformations, are extracted by using the Harris-Laplace detector. A local characteristic region (LCR) construction method based on the scale-space representation of an image is considered for watermarking. At each LCR, the digital watermark is repeatedly embedded by modulating the magnitudes of discrete Fourier transform coefficients. In watermark detection, the digital watermark can be recovered by the maximum membership criterion. Simulation results show that the proposed scheme is invisible and robust against common signal processing, such as median filtering, sharpening, noise adding, and JPEG compression, as well as desynchronization attacks, such as rotation, scaling, translation, row or column removal, cropping, and the random bend attack.
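    Magnitude modulation of DFT coefficients can be sketched with a quantization-index-modulation (QIM) style rule; this is a simplified stand-in for the paper's LCR embedding, operating on one coefficient of a 1-D signal with an invented step size `DELTA` and index `k`.

```python
import numpy as np

DELTA = 8.0  # quantization step: larger means more robust but more distortion

def embed_bit(signal, bit, k=12):
    """Embed one bit by snapping the magnitude of DFT coefficient k to an
    even or odd multiple of DELTA (QIM-style), keeping the phase unchanged."""
    F = np.fft.rfft(signal)
    q = np.round(np.abs(F[k]) / DELTA)
    if int(q) % 2 != bit:
        q += 1.0
    F[k] = (q * DELTA) * np.exp(1j * np.angle(F[k]))
    return np.fft.irfft(F, n=len(signal))

def detect_bit(signal, k=12):
    """Recover the bit from the parity of the quantized magnitude."""
    return int(np.round(np.abs(np.fft.rfft(signal)[k]) / DELTA)) % 2

row = np.random.default_rng(1).normal(128.0, 20.0, 256)  # toy pixel row
```

    Because only magnitudes are touched, the mark survives operations that preserve spectral magnitudes, which is why the paper combines it with geometrically stable local regions.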

  • Combatting Ambiguity Attacks via Selective Detection of Embedded Watermarks

    Page(s): 664 - 682

    This paper focuses on a problem that is common to most watermarking-based ownership dispute resolution and ownership assertion systems. Such systems are vulnerable to a simple but effective class of attacks that exploit the high false-positive rate of the watermarking techniques to cast doubt on the reliability of a resulting decision. To mitigate this vulnerability, we propose embedding multiple watermarks, as opposed to embedding a single watermark, and detecting a randomly selected subset of them while constraining the embedding distortion. The crux of the scheme lies in both watermark generation, which deploys a family of one-way functions, and selective detection, which injects uncertainty into the detection process. The potential of this approach in reducing the false-positive probability is analyzed under various operating conditions and compared to single watermark embedding. The multiple watermark embedding and selective detection technique is incorporated analytically into the additive watermarking technique, and results obtained through numerical solutions are presented to illustrate its effectiveness.
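    The two ingredients, a one-way watermark family and selective detection, can be sketched as follows. The derivation function, sizes, and threshold are assumptions for illustration only, not the paper's construction.

```python
import hashlib
import numpy as np

def watermark_family(seed: bytes, count: int, length: int = 4096):
    """Derive a family of bipolar watermarks from one secret seed through a
    one-way function (SHA-256 here); disclosing some members reveals neither
    the seed nor the remaining members."""
    marks = []
    for i in range(count):
        digest = hashlib.sha256(seed + i.to_bytes(4, "big")).digest()
        sub = np.random.default_rng(int.from_bytes(digest, "big"))
        marks.append(sub.choice([-1.0, 1.0], length))
    return marks

def detect_subset(signal, marks, subset, threshold=0.3):
    """Selective detection: correlate against only a randomly chosen subset."""
    return all(float(np.mean(signal * marks[i])) > threshold for i in subset)

rng = np.random.default_rng(2)
host = rng.normal(0.0, 5.0, 4096)
marks = watermark_family(b"owner-secret", 8)
marked = host + np.sum(marks, axis=0)   # embed all watermarks additively
```

    An attacker who reverse-engineers one watermark gains nothing, since the verifier's subset is drawn at random at detection time.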

  • Decode-Time Forensic Watermarking of AAC Bitstreams

    Page(s): 683 - 696

    In digital rights-management systems, forensic watermarking complements encryption and deters the capture and unauthorized redistribution of the rendered content. In this paper, we propose a novel watermarking method which is integrated into the advanced audio coding (AAC) standard's decoding process. For predefined frequency bands, the method intercepts and modifies the scale factors, which are utilized for dequantization of spectral coefficients. It thereby modulates the short-time envelope of the bandlimited audio and embeds a watermark which is robust to various attacks, such as capture with a microphone and recompression at lower bit rates. Inclusion of watermark embedding in the AAC decoder has practically no effect on the decoding complexity. As a result, the proposed method can be integrated even into resource-constrained devices, such as portable players, without any additional hardware.
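    The mechanism can be sketched around the AAC-style dequantization rule, where each scale-factor step rescales a whole band by 2^(1/4). The band values and step size below are toy assumptions; the actual embedding schedule is the paper's contribution.

```python
import numpy as np

SF_OFFSET = 100  # scale-factor offset used in AAC-style dequantization

def dequantize(q, sf):
    """AAC-style inverse quantization of one scale-factor band:
    x = sign(q) * |q|**(4/3) * 2**((sf - SF_OFFSET) / 4)."""
    return np.sign(q) * np.abs(q) ** (4.0 / 3.0) * 2.0 ** ((sf - SF_OFFSET) / 4.0)

def dequantize_watermarked(q, sf, bit, delta=1):
    """Decode-time embedding sketch: shift the band's scale factor by ±delta
    steps, scaling the whole band and so modulating its short-time envelope."""
    return dequantize(q, sf + (delta if bit else -delta))

band = np.array([3, -5, 2, 7])   # quantized spectral lines of one toy band
sf = 104
plain = dequantize(band, sf)
up = dequantize_watermarked(band, sf, bit=1)    # band scaled by 2**0.25
down = dequantize_watermarked(band, sf, bit=0)  # band scaled by 2**-0.25
```

    Since the scale factors are already parsed during normal decoding, intercepting them adds essentially no work, which is the complexity argument in the abstract.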

  • Collusion-Resistant Video Fingerprinting for Large User Group

    Page(s): 697 - 709

    Digital fingerprinting protects multimedia content from illegal redistribution by uniquely marking copies of the content distributed to users. Most existing multimedia fingerprinting schemes consider a user set on the scale of thousands. However, in such real-world applications as video-on-demand distribution, the number of potential users can be as many as 10-100 million. This large user base demands not only strong collusion resistance but also high efficiency in fingerprint construction and detection, which makes most existing schemes incapable of being applied to these applications. A recently proposed joint coding and embedding fingerprinting framework provides a promising balance between collusion resistance, efficient construction, and detection, but several issues remain unsolved for applications involving a large group of users. In this paper, we explore how to employ the joint coding and embedding framework and develop practical algorithms to fingerprint video in such challenging settings as to accommodate more than ten million users and resist hundreds of users' collusion. We investigate the proper code structure for large-scale fingerprinting and propose a trimming detection technique that can reduce the decoding computational complexity by more than three orders of magnitude at the cost of less than 0.5% loss in detection probability under moderate to high watermark-to-noise ratios. Both analytic and experimental results show a high potential of joint coding and embedding to meet the needs of real-world large-scale fingerprinting applications. View full abstract»
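    The core threat and detection idea can be sketched in a few lines: colluders average their copies, which attenuates each fingerprint by the collusion size, and a non-blind detector correlates against every user and keeps only the top scores. The i.i.d. fingerprints and sizes below are toy assumptions, not the paper's joint coding-and-embedding construction.

```python
import numpy as np

rng = np.random.default_rng(4)
n_users, length = 1000, 8192
fingerprints = rng.choice([-1.0, 1.0], (n_users, length))  # toy i.i.d. fingerprints
host = rng.normal(0.0, 2.0, length)

colluders = [3, 57, 420]
colluded = host + fingerprints[colluders].mean(axis=0)     # averaging collusion

# non-blind detection: subtract the host, correlate with every fingerprint,
# then keep only the top-scoring users (the spirit of trimming the candidate set)
stats = fingerprints @ (colluded - host) / length
suspects = np.argsort(stats)[-len(colluders):]
```

    Each colluder retains roughly 1/K of their correlation energy under K-user averaging, so the separation from innocent users shrinks as collusions grow; that is the scaling problem the paper attacks.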

  • Noise Reduction in Side Channel Attack Using Fourth-Order Cumulant

    Page(s): 710 - 720

    Side channel attacks exploit physical information leaked during the operation of a cryptographic device (e.g., a smart card). The confidential data, which can be leaked from side channels, are timing of operations, power consumption, and electromagnetic emanation. In this paper, we propose a preprocessing method based on the fourth-order cumulant, which aims to improve the performance of side channel attacks. It takes advantage of the Gaussian and non-Gaussian properties that respectively characterize the noise and the signal to remove the effects due to Gaussian noise coupled into side channel signals. The proposed method is then applied to analyze the electromagnetic signals of a synthesized application-specific integrated circuit during a data encryption standard operation. The theoretical and experimental results show that our method significantly reduces the number of side channel signals needed to detect the encryption key.
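    The statistical property being exploited is easy to demonstrate: the fourth-order cumulant is zero for Gaussian data but not for most other distributions. The signals below are synthetic stand-ins, not measured traces.

```python
import numpy as np

def fourth_order_cumulant(x):
    """Sample fourth-order cumulant of a zero-meaned signal:
    C4 = E[x^4] - 3*E[x^2]^2.  It vanishes for Gaussian data, which lets
    a non-Gaussian leakage component be separated from Gaussian noise."""
    x = x - np.mean(x)
    return float(np.mean(x ** 4) - 3.0 * np.mean(x ** 2) ** 2)

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, 200_000)    # models Gaussian noise coupled into traces
leak = rng.choice([-1.0, 1.0], 200_000)  # a bipolar, strongly non-Gaussian component

c4_noise = fourth_order_cumulant(noise)  # close to 0
c4_leak = fourth_order_cumulant(leak)    # close to -2 for a ±1-valued signal
```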

  • A Robust Fingerprint Indexing Scheme Using Minutia Neighborhood Structure and Low-Order Delaunay Triangles

    Page(s): 721 - 733

    Fingerprint indexing is a key technique in automatic fingerprint identification systems (AFIS). However, handling fingerprint distortion is still a problem. This paper concentrates on a more accurate fingerprint indexing algorithm that efficiently retrieves the top N possible matching candidates from a huge database. To this end, we design a novel feature based on minutia neighborhood structure (we call this minutia detail; it contains richer minutia information) and a more stable triangulation algorithm (low-order Delaunay triangles, consisting of order 0 and 1 Delaunay triangles), which are both insensitive to fingerprint distortion. The indexing features include minutia detail and attributes of the low-order Delaunay triangles (handedness, angles, maximum edge, and related angles between the orientation field and edges). Experiments on databases FVC2002 and FVC2004 show that the proposed algorithm considerably narrows down the search space in fingerprint databases and is stable for various fingerprints. We also compared it with other indexing approaches, and the results show our algorithm has better performance, especially on fingerprints with distortion.
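    A minimal sketch of triangle-based indexing features, using an ordinary Delaunay triangulation (not the paper's low-order variant) and a handful of invented minutia coordinates:

```python
import numpy as np
from scipy.spatial import Delaunay

def triangle_features(pts, tri):
    """Translation/rotation-invariant features of one triangle: sorted interior
    angles (degrees), the maximum edge length, and the handedness sign."""
    a, b, c = pts[tri]
    edges = [np.linalg.norm(b - c), np.linalg.norm(c - a), np.linalg.norm(a - b)]
    angles = []
    for i in range(3):
        e1, e2, opp = edges[(i + 1) % 3], edges[(i + 2) % 3], edges[i]
        # law of cosines gives the angle opposite edge i
        angles.append(np.degrees(np.arccos((e1**2 + e2**2 - opp**2) / (2 * e1 * e2))))
    u, v = b - a, c - a
    handedness = np.sign(u[0] * v[1] - u[1] * v[0])  # orientation of the vertex order
    return sorted(angles), max(edges), handedness

minutiae = np.array([[10, 10], [80, 20], [40, 70], [90, 90], [15, 60]], float)
features = [triangle_features(minutiae, t) for t in Delaunay(minutiae).simplices]
```

    In an index, such per-triangle features are quantized into keys so that a query fingerprint votes for candidate database entries; the paper's low-order triangles make those keys more stable under distortion.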

  • A Comparative Study of Fingerprint Image-Quality Estimation Methods

    Page(s): 734 - 743

    One of the open issues in fingerprint verification is the lack of robustness against image-quality degradation. Poor-quality images result in spurious and missing features, thus degrading the performance of the overall system. Therefore, it is important for a fingerprint recognition system to estimate the quality and validity of the captured fingerprint images. In this work, we review existing approaches for fingerprint image-quality estimation, including the rationale behind the published measures and visual examples showing their behavior under different quality conditions. We have also tested a selection of fingerprint image-quality estimation algorithms. For the experiments, we employ the BioSec multimodal baseline corpus, which includes 19 200 fingerprint images from 200 individuals acquired in two sessions with three different sensors. The behavior of the selected quality measures is compared, showing high correlation between them in most cases. The effect of low-quality samples on verification performance is also studied for a widely available minutiae-based fingerprint matching system.
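    One representative local measure from this literature, gradient coherence (orientation certainty), can be sketched directly; the block sizes and test patterns below are invented for illustration.

```python
import numpy as np

def orientation_certainty(block):
    """Gradient-coherence quality of one image block: the normalized eigenvalue
    gap of the gradient covariance matrix.  Near 1 when the block has one clear
    ridge direction, near 0 for directionless noise."""
    gy, gx = np.gradient(block.astype(float))
    cov = np.array([[np.mean(gx * gx), np.mean(gx * gy)],
                    [np.mean(gx * gy), np.mean(gy * gy)]])
    evals = np.linalg.eigvalsh(cov)          # ascending
    return (evals[1] - evals[0]) / (evals[1] + evals[0] + 1e-12)

x = np.arange(32)
ridges = np.sin(2 * np.pi * x / 8)[None, :].repeat(32, axis=0)  # clean parallel ridges
noise = np.random.default_rng(0).normal(size=(32, 32))          # quality-less block
```

    Averaging such block scores over the image gives one of the global quality numbers whose correlations the paper compares.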

  • Fingerprint-Based Fuzzy Vault: Implementation and Performance

    Page(s): 744 - 757

    Reliable information security mechanisms are required to combat the rising magnitude of identity theft in our society. While cryptography is a powerful tool to achieve information security, one of the main challenges in cryptosystems is to maintain the secrecy of the cryptographic keys. Though biometric authentication can be used to ensure that only the legitimate user has access to the secret keys, a biometric system itself is vulnerable to a number of threats. A critical issue in biometric systems is to protect the template of a user, which is typically stored in a database or a smart card. The fuzzy vault construct is a biometric cryptosystem that secures both the secret key and the biometric template by binding them within a cryptographic framework. We present a fully automatic implementation of the fuzzy vault scheme based on fingerprint minutiae. Since the fuzzy vault stores only a transformed version of the template, aligning the query fingerprint with the template is a challenging task. We extract high curvature points derived from the fingerprint orientation field and use them as helper data to align the template and query minutiae. The helper data do not leak any information about the minutiae template, yet contain sufficient information to align the template and query fingerprints accurately. Further, we apply a minutiae matcher during decoding to account for nonlinear distortion, and this leads to significant improvement in the genuine accept rate. We demonstrate the performance of the vault implementation on two different fingerprint databases. We also show that performance improvement can be achieved by using multiple fingerprint impressions during enrollment and verification.
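    The fuzzy vault construct itself can be sketched with integer polynomial arithmetic over a prime field. The field size, minutia codes, and chaff count below are toy assumptions; a real implementation works over GF(2^16) with error-correction coding.

```python
import random

P = 2_147_483_647  # prime modulus for a toy vault over GF(P)

def poly_eval(coeffs, x):
    y = 0
    for c in reversed(coeffs):       # Horner's rule; coeffs[0] is the constant term
        y = (y * x + c) % P
    return y

def lock(coeffs, minutiae, n_chaff=50, seed=7):
    """Genuine points lie on the secret polynomial; chaff points deliberately miss it."""
    rng = random.Random(seed)
    vault = [(x, poly_eval(coeffs, x)) for x in minutiae]
    while len(vault) < len(minutiae) + n_chaff:
        x, y = rng.randrange(1, P), rng.randrange(P)
        if x not in minutiae and y != poly_eval(coeffs, x):
            vault.append((x, y))
    rng.shuffle(vault)               # hide which points are genuine
    return vault

def interpolate_at_zero(points):
    """Lagrange interpolation over GF(P), evaluated at 0 (the constant term)."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def unlock(vault, query_minutiae, degree):
    matches = [(x, y) for x, y in vault if x in query_minutiae]
    if len(matches) < degree + 1:
        return None                  # too few matching minutiae: vault stays locked
    return interpolate_at_zero(matches[: degree + 1])

secret = 123456789
coeffs = [secret, 42, 7, 99]                 # degree-3 polynomial hides the secret
enrolled = {1111, 2222, 3333, 4444, 5555}    # quantized minutiae of one finger
vault = lock(coeffs, enrolled)
```

    A query finger that reproduces enough enrolled minutiae selects enough genuine points to interpolate the polynomial and release the key; an impostor's points mostly hit chaff.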

  • Application of Projective Invariants in Hand Geometry Biometrics

    Page(s): 758 - 768

    Our research focuses on finding mathematical representations of biometric features that are not only distinctive but also invariant to projective transformations. We have chosen hand geometry technology to work with because it has wide public awareness and acceptance and, most importantly, large room for improvement. Unlike traditional hand geometry technologies, the hand descriptor in our hand geometry system is constructed using projective-invariant features. Hand identification can be accomplished from a single view of a hand regardless of the viewing angle. The noise immunity and the discriminability possessed by a hand feature vector using different types of projective invariants are studied. We have found an appropriate symmetric polynomial representation of the hand features with which both noise immunity and discriminability of a hand feature vector are considerably improved. Experimental results show that the system achieves an equal error rate (EER) of 2.1% with a 5-D feature vector on a database of 52 hand images. The EER reduces to 0.0% when the feature vector dimension increases to 18. In this paper, we extend the concept of hand geometry from a geometrical size-based technique that requires physical hand constraints to a projective invariant-based technique that allows free hand motion.
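    The simplest projective invariant, the cross-ratio of four collinear points, illustrates why such features survive arbitrary viewing angles. The point positions and homography below are arbitrary examples.

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross-ratio of four collinear points (given as signed positions along
    the line); it is preserved by every projective transformation."""
    return ((a - c) * (b - d)) / ((b - c) * (a - d))

def project(x, M):
    """1-D homography: x -> (m00*x + m01) / (m10*x + m11)."""
    return (M[0][0] * x + M[0][1]) / (M[1][0] * x + M[1][1])

pts = [1.0, 2.0, 4.0, 7.0]       # e.g. landmark positions along a finger edge
M = [[2.0, 3.0], [0.5, 1.0]]     # an arbitrary projective map (change of viewpoint)
mapped = [project(x, M) for x in pts]
```

    Feature vectors built from such invariants compare equal across views, which is what removes the need for a hand-positioning peg.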

  • Shape-Driven Gabor Jets for Face Description and Authentication

    Page(s): 769 - 780

    This paper proposes, through the combination of concepts and tools from different fields within the computer vision community, an alternative path to the selection of key points in face images. The classical way of attempting to solve the face recognition problem using algorithms which encode local information is to localize a predefined set of points in the image, extract features from the regions surrounding those locations, and choose a measure of similarity (or distance) between correspondent features. Our approach, namely shape-driven Gabor jets, aims at selecting its own set of points and features for a given client. After applying a ridges and valleys detector to a face image, characteristic lines are extracted and a set of points is automatically sampled from these lines where Gabor features (jets) are calculated. So each face is depicted by a set of points and their respective jets. Once two sets of points from face images have been extracted, a shape-matching algorithm is used to solve the correspondence problem (i.e., map each point from the first image to a point within the second image) so that the system is able to compare shape-matched jets. As a byproduct of the matching process, geometrical measures are computed and compiled into the final dissimilarity function. Experiments on the AR face database confirm good performance of the method against expression and, mainly, lighting changes. Moreover, results on the XM2VTS and BANCA databases show that our algorithm achieves better performance than implementations of the elastic bunch graph matching approach and other related techniques.
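    A "jet" is simply the vector of Gabor filter responses at one pixel across several frequencies and orientations; a minimal version can be sketched as follows, with kernel sizes and a filter bank that are assumptions, not the paper's parameters.

```python
import numpy as np

def gabor_kernel(freq, theta, size=21, sigma=4.0):
    """Complex Gabor kernel at one spatial frequency and orientation:
    a Gaussian envelope times a complex sinusoid along direction theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    rot = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return env * np.exp(2j * np.pi * freq * rot)

def jet(image, px, py, freqs=(0.1, 0.2), n_orient=4, size=21):
    """Gabor response magnitudes at one pixel across the whole filter bank."""
    half = size // 2
    patch = image[py - half:py + half + 1, px - half:px + half + 1]
    return np.array([abs(np.sum(patch * gabor_kernel(f, t, size)))
                     for f in freqs
                     for t in np.linspace(0, np.pi, n_orient, endpoint=False)])

face = np.random.default_rng(6).normal(size=(64, 64))  # stand-in for a face image
j = jet(face, 32, 32)
```

    Two shape-matched jets are typically compared with a normalized dot product; the paper's novelty is in where the jets are sampled, not in the jets themselves.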

  • Horizontal and Vertical 2DPCA-Based Discriminant Analysis for Face Verification on a Large-Scale Database

    Page(s): 781 - 792

    This paper first discusses some theoretical properties of 2D principal component analysis (2DPCA) and then presents a horizontal and vertical 2DPCA-based discriminant analysis (HVDA) method for face verification. The HVDA method, which applies 2DPCA horizontally and vertically on the image matrices (2D arrays), achieves lower computational complexity than the traditional PCA and Fisher linear discriminant analysis (LDA)-based methods that operate on high-dimensional image vectors (1D arrays). The horizontal 2DPCA is invariant to vertical image translations and vertical mirror imaging, and the vertical 2DPCA is invariant to horizontal image translations and horizontal mirror imaging. The HVDA method is therefore less sensitive to imprecise eye detection and face cropping, and can improve upon the traditional discriminant analysis methods for face verification. Experiments using the face recognition grand challenge (FRGC) and the biometric experimentation environment system show the effectiveness of the proposed method. In particular, for the most challenging FRGC version 2 Experiment 4, which contains 12 776 training images, 16 028 controlled target images, and 8014 uncontrolled query images, the HVDA method using a color configuration across two color spaces, namely, the YIQ and the YCbCr color spaces, achieves the face verification rate (ROC III) of 78.24% at the false accept rate of 0.1%.
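    Horizontal 2DPCA can be sketched directly: instead of flattening each image to a vector, it diagonalizes a small image-scatter matrix and projects whole image matrices onto its leading eigenvectors. The data and dimensions below are toy assumptions; the vertical variant applies the same steps to transposed images.

```python
import numpy as np

def horizontal_2dpca(images, k):
    """2DPCA on image matrices: eigenvectors of the image scatter matrix
    G = mean((A - mean)^T (A - mean)); projection Y = A @ W keeps row structure
    and only needs an eigendecomposition of a width-by-width matrix."""
    mean = np.mean(images, axis=0)
    width = images.shape[2]
    G = np.zeros((width, width))
    for A in images:
        D = A - mean
        G += D.T @ D
    G /= len(images)
    vals, vecs = np.linalg.eigh(G)      # ascending eigenvalues
    return vecs[:, ::-1][:, :k]         # top-k projection axes

faces = np.random.default_rng(5).normal(size=(20, 32, 32))  # toy face stand-ins
W = horizontal_2dpca(faces, 4)
feat = faces[0] @ W                     # 32x4 feature matrix instead of 32x32 image
```

    The eigenproblem here is 32x32 regardless of how many images there are, versus 1024x1024 for vectorized PCA on the same images, which is the complexity advantage the abstract cites.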

  • Detecting Spoofing and Anomalous Traffic in Wireless Networks via Forge-Resistant Relationships

    Page(s): 793 - 808

    Many wireless networks are susceptible to spoofing attacks. Conventionally, ensuring the identity of the communicator and detecting an adversarial presence are performed via device authentication. Unfortunately, full-scale authentication is not always desirable, as it requires key management and more extensive computations. In this paper, we propose noncryptographic mechanisms that are complementary to authentication and can detect device spoofing with little or no dependency on cryptographic keys. We introduce forge-resistant relationships associated with transmitted packets, and forge-resistant consistency checks, which allow other network entities to detect anomalous activity. We then provide several practical examples of forge-resistant relationships for detecting anomalous network activity. We explore the use of monotonic relationships in the sequence number fields, the use of a supplemental identifier field that evolves in time according to a reverse one-way function chain, and the use of traffic statistics to differentiate between anomalous traffic and congestion. We then show how these relationships can be used to construct classifiers that provide a multilevel threat assessment. We validate these methods through experiments conducted on the ORBIT wireless testbed.
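    The reverse one-way chain mentioned above can be sketched with a hash function: the sender commits to the end of the chain and reveals predecessors one per packet, so a spoofer who observes any value still cannot produce the next one. The chain length and seed below are arbitrary.

```python
import hashlib

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def build_chain(seed: bytes, n: int):
    """Chain x_n = seed, x_{i-1} = H(x_i).  The device commits to x_0 and then
    discloses x_1, x_2, ... in successive packets (a reverse one-way chain)."""
    chain = [seed]
    for _ in range(n):
        chain.append(H(chain[-1]))
    chain.reverse()                  # chain[0] is the public anchor x_0
    return chain

def verify(anchor: bytes, value: bytes, step: int) -> bool:
    """A monitor hashes the claimed identifier back `step` times; matching the
    committed anchor means the sender knew the unrevealed part of the chain."""
    for _ in range(step):
        value = H(value)
    return value == anchor

chain = build_chain(b"device-secret", 10)
anchor = chain[0]
```

    Note this check needs no shared key at the verifier, which is exactly the "little or no dependency on cryptographic keys" property the abstract claims.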

  • Correction to "A framework for robust watermarking of H.264 encoded video with controllable detection performance"

    Page(s): 809

    In the March 2007 paper named above (pp. 14-23), the authors found the following error. The optimality of the detector is only valid for c = 2, which corresponds to a Gaussian distribution. The performance and results of the detector when c ≠ 2, as presented in the paper, are still valid; however, in this case the detector is suboptimal. Equations (21)-(26) and (28) should be modified as described, and the computational complexity of the detector is the same as that of the detector developed in reference [6].

  • List of Reviewers

    Page(s): 810 - 811
    Freely Available from IEEE
  • IEEE Transactions on Information Forensics and Security EDICS

    Page(s): 812
    Freely Available from IEEE
  • IEEE Transactions on Information Forensics and Security Information for authors

    Page(s): 813 - 814
    Freely Available from IEEE
  • Special issue on Integration of Context and Content for Multimedia Management

    Page(s): 815 - 816
    Freely Available from IEEE
  • 2007 Index IEEE Transactions on Information Forensics and Security Vol. 2

    Page(s): 817 - 827
    Freely Available from IEEE
  • 9th International Conference on Signal Processing (ICSP'08)

    Page(s): 828
    Freely Available from IEEE
  • IEEE Signal Processing Society Information

    Page(s): C3
    Freely Available from IEEE

Aims & Scope

The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.

Full Aims & Scope

Meet Our Editors

Editor-in-Chief
Chung C. Jay Kuo
University of Southern California