
IEEE Transactions on Information Forensics and Security

Issue 3 • September 2006


Displaying Results 1 - 17 of 17
  • Table of contents

    Page(s): c1 - c4
    PDF (43 KB)
    Freely Available from IEEE
  • IEEE Transactions on Information Forensics and Security publication information

    Page(s): c2
    PDF (35 KB)
    Freely Available from IEEE
  • Block QIM watermarking games

    Page(s): 293 - 310
    PDF (943 KB) | HTML

    While binning is a fundamental approach to blind data embedding and watermarking, an attacker may devise various strategies to reduce the effectiveness of practical binning schemes. The problem analyzed in this paper is the design of worst-case noise distributions against L-dimensional lattice quantization index modulation (QIM) watermarking codes. The cost functions considered are 1) the probability of error of the maximum-likelihood decoder and 2) the more tractable Bhattacharyya upper bound on error probability, which is tight at low embedding rates. Both problems are addressed under the following constraints on the attacker's strategy: the noise is independent of the marked signal, blockwise memoryless with block length L, and may not exceed a specified quadratic-distortion level. The embedder's quadratic distortion is limited as well. Three strategies are considered for the embedder: optimization of the lattice inflation parameter (also known as the Costa parameter), dithering, and randomized lattice rotation. Critical in this analysis are the symmetry properties of QIM nested lattices and the convexity properties of the probability of error and related functionals of the noise distribution. We derive the minimax-optimal embedding and attack strategies and obtain explicit as well as numerical solutions for the worst-case noise. The role of the attacker's memory is investigated; in particular, we demonstrate the remarkable effectiveness of impulsive-noise attacks as L increases. The formulation proposed in this paper is also used to evaluate the capacity of lattice QIM under worst-case noise conditions.

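    As a rough illustration of the embedding primitive the paper builds on, the sketch below implements scalar (1-D lattice) distortion-compensated QIM with a dither value and the inflation (Costa) parameter. The function names, the step size delta, and the additive-Gaussian attack in the toy run are illustrative assumptions, not the paper's worst-case construction.

```python
import numpy as np

def qim_embed(x, bits, delta=8.0, alpha=0.8, dither=0.0):
    """Scalar distortion-compensated QIM: one bit per host sample.

    The two cosets of the step-`delta` lattice are offset by 0 and delta/2;
    `alpha` (the inflation / Costa parameter) controls how far each host
    sample is pushed toward its coset point.
    """
    offset = dither + bits * delta / 2.0
    q = np.round((x - offset) / delta) * delta + offset   # nearest point of the chosen coset
    return x + alpha * (q - x)

def qim_decode(y, delta=8.0, dither=0.0):
    """Minimum-distance decoding: pick the coset whose lattice point is closer."""
    d0 = np.abs(y - (np.round((y - dither) / delta) * delta + dither))
    d1 = np.abs(y - (np.round((y - dither - delta / 2) / delta) * delta + dither + delta / 2))
    return (d1 < d0).astype(int)

# Toy run: embed random bits in a Gaussian host, add attack noise, measure errors.
rng = np.random.default_rng(0)
host = rng.normal(0.0, 32.0, size=1000)
bits = rng.integers(0, 2, size=1000)
marked = qim_embed(host, bits)
attacked = marked + rng.normal(0.0, 0.5, size=1000)       # a simple additive-noise attack
print("bit error rate:", np.mean(qim_decode(attacked) != bits))
```
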
  • Behavior forensics for scalable multiuser collusion: fairness versus effectiveness

    Page(s): 311 - 329
    PDF (1005 KB) | HTML

    Multimedia security systems involve many users with different objectives, and users influence each other's performance. To better understand multimedia security systems and offer stronger protection of multimedia, behavior forensics formulates the dynamics among users and investigates how they interact with and respond to each other. This paper analyzes behavior forensics in multimedia fingerprinting and formulates the dynamics among attackers during multiuser collusion. In particular, it focuses on how colluders achieve fair play and guarantee that all attackers share the same risk (i.e., the probability of being detected). We first analyze how to distribute the risk evenly among colluders when they receive fingerprinted copies of different resolutions due to network and device heterogeneity. We show that generating a colluded copy of higher resolution imposes more severe constraints on achieving fairness. We then analyze the effectiveness of fair collusion. Our results indicate that the attackers take a larger risk of being captured when the colluded copy has higher resolution, and they have to take this tradeoff into consideration during collusion. Finally, we analyze the collusion resistance of scalable fingerprinting systems in various scenarios with different system requirements and evaluate the maximum number of colluders that the fingerprinting systems can withstand.

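    For orientation only, here is a minimal sketch of the kind of layered averaging collusion the paper analyzes, under the simplifying assumption of equal weights; the paper's point is precisely that equal-weight averaging over a scalable (base plus enhancement layer) system is not automatically fair, and it derives the constraints under which all colluders share the same detection risk. Names and structure below are illustrative, not the paper's scheme.

```python
import numpy as np

def layered_average_collusion(base_copies, enh_copies):
    """Equal-weight averaging collusion on a two-layer (scalable) fingerprinted signal.

    base_copies: list of base-layer copies, one per colluder (everyone has these)
    enh_copies:  list of enhancement-layer copies held only by the subset of
                 colluders who received the high-resolution version

    Colluders who contribute to both layers leave more of their fingerprint
    energy in the colluded copy, so this naive scheme gives them a higher
    probability of being detected; equalizing that risk requires unequal
    weights, which is the fairness problem studied in the paper.
    """
    colluded_base = np.mean(base_copies, axis=0)
    colluded_enh = np.mean(enh_copies, axis=0) if len(enh_copies) > 0 else None
    return colluded_base, colluded_enh
```
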
  • Dual protection of JPEG images based on informed embedding and two-stage watermark extraction techniques

    Page(s): 330 - 341
    PDF (1625 KB) | HTML

    In this paper, the authors propose a watermarking scheme that embeds both image-dependent and fixed-part marks for dual protection (content authentication and copyright claim) of JPEG images. To achieve efficiency, imperceptibility, and robustness, a compressed-domain informed-embedding algorithm is developed that incorporates a Lagrangian multiplier optimization approach and an adjustment procedure. A two-stage watermark extraction procedure provides the dual-protection functionality. In the first stage, the semifragile watermark in each local channel is extracted for content authentication. In the second stage, a weighted soft-decision decoder, which weights the signal detected in each channel according to the estimated channel condition, improves the recovery rate of the fixed-part watermark for copyright protection. The experimental results show that the proposed scheme not only achieves dual protection of the image content but also maintains higher visual quality (an average of 6.69 dB better than a comparable approach) for a specified level of watermark robustness. In addition, the overall computing load is low enough for real-time applications.

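    The second-stage decoder can be pictured as a weighted combination of per-channel detection statistics. The sketch below is a generic weighted soft-decision combiner under assumed inputs (soft detector outputs and per-channel reliability estimates), not the paper's exact decoder or weighting rule.

```python
import numpy as np

def weighted_soft_decision(soft_outputs, channel_reliability):
    """Combine per-channel watermark detections, trusting reliable channels more.

    soft_outputs:        (n_channels, n_bits) soft detector values; positive
                         values lean toward bit 1, negative toward bit 0
    channel_reliability: (n_channels,) nonnegative estimates of channel quality,
                         e.g. derived from the first-stage authentication result
    """
    w = np.asarray(channel_reliability, dtype=float)
    w = w / (w.sum() + 1e-12)                          # normalize the weights
    combined = w @ np.asarray(soft_outputs, dtype=float)
    return (combined > 0).astype(int)                  # hard decision on the payload bits
```
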
  • Gray Hausdorff distance measure for comparing face images

    Page(s): 342 - 349
    PDF (850 KB) | HTML

    Human face recognition is considered one of the toughest problems in pattern recognition. Variations in face images due to differing expression, pose, and illumination are some of the key issues to be addressed in developing a face recognition system. In this paper, a new measure called the gray Hausdorff distance (denoted H_pg) is proposed to compare gray-level face images directly. An efficient algorithm for computing the new measure is presented; its running time is linear in the size of the image. The performance of the measure is evaluated on benchmark face databases. The face recognition system based on the new measure is found to be robust to pose and expression variations, as well as to slight variations in illumination. Comparative studies show that the proposed measure performs better than existing ones in most cases.

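    As background for the proposed H_pg measure, the sketch below computes the classical Hausdorff distance between two point sets (for example, edge pixels of two face images). The paper's gray Hausdorff distance operates on gray levels directly and runs in time linear in the image size; this brute-force illustration only shows the distance it generalizes.

```python
import numpy as np

def directed_hausdorff(A, B):
    """h(A, B) = max over a in A of the distance from a to its nearest point in B."""
    # Brute-force pairwise distances; fine for small point sets.
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return d.min(axis=1).max()

def hausdorff(A, B):
    """Classical symmetric Hausdorff distance H(A, B) = max(h(A, B), h(B, A))."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

# Example: compare two small sets of 2-D edge-pixel coordinates.
A = np.array([[0, 0], [1, 2], [3, 1]], dtype=float)
B = np.array([[0, 1], [2, 2], [3, 0]], dtype=float)
print(hausdorff(A, B))
```
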
  • Face identification using novel frequency-domain representation of facial asymmetry

    Page(s): 350 - 359
    PDF (1237 KB) | HTML

    Face recognition is a challenging task. This paper introduces a novel set of biometrics, defined in the frequency domain and representing a form of "facial asymmetry." A comparison with existing spatial asymmetry measures suggests that the frequency-domain representation provides an efficient approach to human identification in the presence of severe expressions and to expression classification. Error rates of less than 5% are observed for human identification and around 25% for expression classification on a database of 55 individuals. Feature analysis indicates that asymmetry of the different parts of the face helps in these two apparently conflicting classification problems. An interesting connection between asymmetry and the Fourier-domain phase spectra is then established. Finally, a compact one-bit frequency-domain representation of asymmetry is introduced, and a simple Hamming distance classifier is shown to be more efficient than traditional classifiers from the storage and computation points of view, while producing equivalent human identification results. In addition, the application of these compact measures to verification and a statistical analysis are presented.

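    A nearest-neighbor classifier over compact one-bit features reduces to Hamming distance comparisons, roughly as sketched below. The binary vectors stand in for the paper's one-bit frequency-domain asymmetry representation; the gallery structure is an assumption made for illustration.

```python
import numpy as np

def hamming_distance(a, b):
    """Number of positions where two binary (0/1) feature vectors differ."""
    return int(np.count_nonzero(np.asarray(a) != np.asarray(b)))

def identify(probe, gallery):
    """Return the gallery identity whose stored binary template is closest to the probe.

    gallery: dict mapping identity -> binary feature vector of the same length.
    """
    return min(gallery, key=lambda ident: hamming_distance(probe, gallery[ident]))

# Example with 8-bit templates for three enrolled identities.
gallery = {"alice": [1, 0, 1, 1, 0, 0, 1, 0],
           "bob":   [0, 0, 1, 0, 1, 1, 0, 0],
           "carol": [1, 1, 0, 1, 0, 1, 1, 1]}
print(identify([1, 0, 1, 0, 0, 0, 1, 0], gallery))   # -> "alice"
```
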
  • Fake finger detection by skin distortion analysis

    Page(s): 360 - 373
    PDF (5322 KB) | HTML

    Attacking fingerprint-based biometric systems by presenting fake fingers at the sensor can be a serious threat to unattended applications. This work introduces a new approach for discriminating fake fingers from real ones based on the analysis of skin distortion. The user is required to move the finger while pressing it against the scanner surface, thus deliberately exaggerating the skin distortion. Novel techniques for extracting, encoding, and comparing skin-distortion information are formally defined and systematically evaluated over a test set of real and fake fingers. The proposed approach is privacy friendly and does not require expensive additional hardware beyond a fingerprint scanner capable of capturing and delivering frames at a proper rate. The experimental results indicate that the new approach is a very promising technique for making fingerprint recognition systems more robust against fake-finger-based spoofing attempts.

  • Estimation of message source and destination from network intercepts

    Page(s): 374 - 385
    PDF (665 KB)

    We consider the problem of estimating the endpoints (source and destination) of a transmission in a network based on partial measurement of the transmission path. Possibly asynchronous sensors placed at various points within the network provide the basis for endpoint estimation by indicating that a specific transmission has been intercepted at their assigned locations. During a training phase, test transmissions are made between various pairs of endpoints in the network, and the sensors they activate are noted. Sensor activations corresponding to transmissions with unknown endpoints are also observed in a monitoring phase. A semidefinite programming relaxation is used in conjunction with the measurements and linear prior information to produce likely sample topologies given the data. These samples are used to generate Monte Carlo approximations of the posterior distributions of source/destination pairs for measurements obtained in the monitoring phase. The posteriors allow for maximum a posteriori (MAP) estimation of the endpoints, along with computation of some resolution measures. We illustrate the method using simulations of random topologies.

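    Once Monte Carlo samples of topologies consistent with the observations are available, MAP estimation of the endpoints amounts to picking the most frequent source/destination pair among the samples. The sketch below assumes the samples are already given and shows only this final read-out step, not the semidefinite-programming sampler itself.

```python
from collections import Counter

def map_endpoint_estimate(sampled_pairs):
    """MAP estimate of (source, destination) from posterior samples.

    sampled_pairs: iterable of (source, destination) tuples, one per sampled
    topology consistent with the observed sensor activations.  The empirical
    frequency of a pair approximates its posterior probability, so the MAP
    estimate is simply the most frequent pair.
    """
    counts = Counter(sampled_pairs)
    (pair, count), = counts.most_common(1)
    return pair, count / sum(counts.values())   # estimate and its approximate posterior mass

# Example with toy samples.
samples = [("A", "D"), ("A", "D"), ("B", "D"), ("A", "D"), ("A", "C")]
print(map_endpoint_estimate(samples))           # -> (('A', 'D'), 0.6)
```
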
  • Security of autoregressive speech watermarking model under guessing attack

    Page(s): 386 - 390
    PDF (242 KB) | HTML

    The security of the "autoregressive (AR) watermark in AR host" signal model is investigated. It is demonstrated through analysis and Monte Carlo simulation that the AR watermarking model is asymptotically as secure as the "white watermark in white host" model under the guessing attack.

  • Matrix embedding for large payloads

    Page(s): 390 - 395
    PDF (377 KB) | HTML

    Matrix embedding is a previously introduced coding method used in steganography to improve embedding efficiency (i.e., to increase the number of bits embedded per embedding change). Higher embedding efficiency translates into better steganographic security. This gain is more important for long messages than for short ones because longer messages are, in general, easier to detect. In this paper, we present two new approaches to matrix embedding for large payloads suitable for practical steganographic schemes: one based on a family of codes constructed from simplex codes, and the other based on random linear codes of small dimension. The embedding efficiency of the proposed methods is evaluated with respect to theoretically achievable bounds.

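    The classic small-payload instance of matrix embedding uses a binary Hamming code: p message bits are embedded into 2^p - 1 cover bits by changing at most one of them. The sketch below shows that baseline so the syndrome-coding mechanism is concrete; it is not the simplex-code or random-linear-code construction proposed in the paper.

```python
import numpy as np

def hamming_parity_matrix(p):
    """p x (2^p - 1) parity-check matrix: column j is the binary expansion of j."""
    n = 2**p - 1
    return np.array([[(j >> i) & 1 for j in range(1, n + 1)] for i in range(p)], dtype=np.uint8)

def matrix_embed(cover_bits, message_bits, H):
    """Embed p message bits into 2^p - 1 cover bits, flipping at most one bit."""
    x = np.array(cover_bits, dtype=np.uint8)
    diff = np.array(message_bits, dtype=np.uint8) ^ (H @ x % 2)   # required syndrome change
    idx = int(sum(int(b) << i for i, b in enumerate(diff)))       # column index to flip
    if idx != 0:
        x[idx - 1] ^= 1
    return x

def matrix_extract(stego_bits, H):
    """The recipient reads the message as the syndrome of the stego bits."""
    return H @ np.array(stego_bits, dtype=np.uint8) % 2

# Example: 3 bits hidden in 7 cover bits with at most one embedding change.
H = hamming_parity_matrix(3)
rng = np.random.default_rng(1)
cover = rng.integers(0, 2, size=7, dtype=np.uint8)
message = np.array([1, 0, 1], dtype=np.uint8)
stego = matrix_embed(cover, message, H)
assert np.array_equal(matrix_extract(stego, H), message)
assert np.count_nonzero(stego != cover) <= 1
```
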
  • Graphical passwords based on robust discretization

    Page(s): 395 - 399
    PDF (214 KB) | HTML

    This paper generalizes Blonder's graphical passwords to arbitrary images and solves a robustness problem that this generalization entails. The password consists of user-chosen click points in a displayed image. In order to store passwords in cryptographically hashed form, we need to prevent small uncertainties in the click points from having any effect on the password. We achieve this by introducing a robust discretization based on multigrid discretization.

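    A per-coordinate sketch of the multigrid idea, under the assumption of three grids of spacing 3r offset by 0, r, and 2r, so that every click point lies at distance at least r from the cell boundaries of at least one grid. The exact grid parameters and the split between what is stored in the clear and what is hashed follow the paper; this simplified version only approximates it.

```python
import hashlib

def safe_offset(coord, r):
    """Pick an offset in {0, r, 2r} whose spacing-3r grid leaves `coord` at
    distance >= r from its cell boundaries; with these three offsets, at least
    one always works, so any re-click within tolerance r stays in the same cell."""
    for k in range(3):
        offset = k * r
        pos = (coord - offset) % (3 * r)
        if r <= pos <= 2 * r:
            return offset
    raise AssertionError("unreachable: one of the three offsets is always safe")

def _cell_hash(x, y, ox, oy, r):
    cell = (int((x - ox) // (3 * r)), int((y - oy) // (3 * r)))
    return hashlib.sha256(repr(cell).encode()).hexdigest()

def enroll_click(x, y, r):
    """Enrollment: choose safe offsets for this click and hash its cell index.
    The offsets can be stored in the clear; only the hash needs to be kept."""
    ox, oy = safe_offset(x, r), safe_offset(y, r)
    return (ox, oy), _cell_hash(x, y, ox, oy, r)

def verify_click(x, y, offsets, r):
    """Verification: reuse the enrolled offsets to rediscretize the new click."""
    ox, oy = offsets
    return _cell_hash(x, y, ox, oy, r)

# A re-click within distance r of the enrolled point hashes to the same cell.
offsets, stored = enroll_click(103, 58, r=5)
print(verify_click(101, 60, offsets, r=5) == stored)   # -> True
```
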
  • IEEE Transactions on Information Forensics and Security EDICS

    Page(s): 400
    PDF (20 KB)
    Freely Available from IEEE
  • IEEE Transactions on Information Forensics and Security Information for authors

    Page(s): 401 - 402
    PDF (45 KB)
    Freely Available from IEEE
  • Special issue on music information retrieval (MIR)

    Page(s): 403
    PDF (127 KB)
    Freely Available from IEEE
  • Special issue on human detection and recognition

    Page(s): 404
    PDF (300 KB)
    Freely Available from IEEE
  • IEEE Signal Processing Society Information

    Page(s): c3
    PDF (32 KB)
    Freely Available from IEEE

Aims & Scope

The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance and systems applications that incorporate these features.


Meet Our Editors

Editor-in-Chief
Chung C. Jay Kuo
University of Southern California