IEEE Transactions on Information Forensics and Security

Issue 4 • Dec. 2008

  • Table of contents

    Page(s): C1 - C4
    Freely Available from IEEE
  • IEEE Transactions on Information Forensics and Security publication information

    Page(s): C2
    Freely Available from IEEE
  • Editorial Farewell Message

    Page(s): 581
    Freely Available from IEEE
  • Hiding Traces of Resampling in Digital Images

    Page(s): 582 - 592

    Resampling detection has become a standard tool for forensic analyses of digital images. This paper presents new variants of image transformation operations which are undetectable by resampling detectors based on periodic variations in the residual signal of local linear predictors in the spatial domain. The effectiveness of the proposed method is supported with evidence from experiments on a large image database for various parameter settings. We benchmark detectability as well as the resulting image quality against conventional linear and bicubic interpolation and interpolation with a sinc kernel. These early findings on "counter-forensic" techniques call into question the reliability of known forensic tools against smart counterfeiters in general, and might serve as benchmarks and motivation for the development of much improved forensic techniques.
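
The periodic artifact that such resampling detectors exploit is easy to reproduce in one dimension. A toy numpy demonstration (illustrative background only, not the paper's counter-forensic method):

```python
import numpy as np

def predictor_residual(x):
    """Residual of the simple local linear predictor x[i] ~ (x[i-1] + x[i+1]) / 2."""
    return x[1:-1] - 0.5 * (x[:-2] + x[2:])

rng = np.random.default_rng(1)
orig = rng.normal(size=256)

# Upsample by 2 with linear interpolation: each inserted sample is the exact
# average of its neighbors, so the predictor residual vanishes at every other
# position -- the telltale periodicity a resampling detector looks for.
up = np.interp(np.arange(0, 255.01, 0.5), np.arange(256), orig)
res = predictor_residual(up)

# res[0::2] sits on interpolated samples (numerically zero residual);
# res[1::2] sits on original samples (generally nonzero).
```

The counter-forensic transformations in the paper are designed precisely to suppress such periodic patterns in the predictor residual.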
  • Security of Lattice-Based Data Hiding Against the Watermarked-Only Attack

    Page(s): 593 - 610

    This paper presents a security analysis for data-hiding methods based on nested lattice codes, extending the analysis provided by previous works to a more general scenario. Security is quantified as the difficulty of estimating the secret key used in the embedding process, assuming that the attacker has available several signals watermarked with the same secret key. The theoretical analysis in the first part of this paper quantifies security in an information-theoretic sense by means of the mutual information between the watermarked signals and the secret key, addressing important issues such as the possibility of achieving perfect secrecy and the impact of the embedding rate on the security level. In the second part, a practical algorithm for estimating the secret key is proposed, and the extracted information is used to implement a reversibility attack on real images.
  • Multipurpose Watermarking Based on Multiscale Curvelet Transform

    Page(s): 611 - 619

    Multipurpose watermarking for content authentication and copyright verification is accomplished using the multiscale curvelet transform, which yields a sparser representation than most traditional multiscale transforms. In this paper, an image is decomposed into multiscale coefficients with a dyadic number of wedges constructed from a variety of neighboring scales. An image hash is designed to extract image features from an approximate scale. The image features, represented as bit sequences, are then embedded onto the wedges by a quantization based on human visual system behavior. The implementation strategy achieves content authentication via fragile watermarking and copyright verification via robust watermarking. The experiments demonstrate good results that support the feasibility of using this method in multipurpose applications.
  • Hierarchical Watermarking of Semiregular Meshes Based on Wavelet Transform

    Page(s): 620 - 634

    This paper presents a hierarchical watermarking framework for semiregular meshes. Three blind watermarks are inserted in a semiregular mesh with different purposes: a geometrically robust watermark for copyright protection, a high-capacity watermark for carrying a large amount of auxiliary information, and a fragile watermark for content authentication. The proposed framework is based on wavelet transform of the semiregular mesh. More precisely, the three watermarks are inserted in different appropriate resolution levels obtained by wavelet decomposition of the mesh: the robust watermark is inserted by modifying the norms of the wavelet coefficient vectors associated with the lowest resolution level; the fragile watermark is embedded in the high resolution level obtained just after one wavelet decomposition by modifying the orientations and norms of the wavelet coefficient vectors; and the high-capacity watermark is inserted in one or several intermediate levels by considering groups of wavelet coefficient vector norms as watermarking primitives. Experimental results demonstrate the effectiveness of the proposed framework: the robust watermark resists all common geometric attacks even at a relatively strong amplitude; the fragile watermark is robust to content-preserving operations while being sensitive to other attacks, for which it can also provide a precise location; and the payload of the high-capacity watermark increases rapidly with the number of watermarking primitives.
  • Multiclass Detector of Current Steganographic Methods for JPEG Format

    Page(s): 635 - 650

    The aim of this paper is to construct a practical forensic steganalysis tool for JPEG images that can properly analyze single- and double-compressed stego images and classify them among selected current steganographic methods. Although some of the individual modules of the steganalyzer were previously published by the authors, they were never tested as a complete system. The fusion of the modules brings its own challenges and problems, whose analysis and solution are among the goals of this paper. By determining the stego algorithm, this tool provides the first step needed for extracting the secret message. Given a JPEG image, the detector assigns it to one of six popular steganographic algorithms. The detection is based on feature extraction and supervised training of two banks of multiclassifiers realized using support vector machines. For accurate classification of single-compressed images, a separate multiclassifier is trained for each JPEG quality factor from a certain range. Another bank of multiclassifiers is trained for double-compressed images for the same range of primary quality factors. The image under investigation is first analyzed by a preclassifier that detects selected cases of double compression and estimates the primary quantization table. It then sends the image to the appropriate single- or double-compression multiclassifier. The error is estimated from more than 2.6 million images. The steganalyzer is also tested on two previously unseen methods to examine its ability to generalize.
  • Chaotic-Type Features for Speech Steganalysis

    Page(s): 651 - 661

    We investigate the use of chaotic-type features for recorded speech steganalysis. Considering that data hiding within a speech signal distorts the chaotic properties of the original signal, we design a steganalyzer that uses Lyapunov exponents and the fraction of false neighbors as chaotic features to detect the existence of a stego signal. We also discuss the applicability of the proposed method to general audio.
  • Steganalysis Frameworks of Embedding in Multiple Least-Significant Bits

    Page(s): 662 - 672

    Replacement of the least-significant bit plane is a popular steganography technique in digital images because of its extreme simplicity, but it is considerably harder to precisely estimate the rate of a secret message embedded by replacement of multiple least-significant bit (MLSB) planes of a carrier object. In order to model MLSB embedding, a lemma is introduced to prove the transition relationships among some trace subsets. Based on these transition relationships, two novel steganalysis frameworks are designed to detect two distinct kinds of MLSB embedding methods. A series of experiments shows that the proposed steganalysis frameworks are highly sensitive to MLSB steganography and can estimate the rate of the secret message with high accuracy. Furthermore, these frameworks can distinguish stego images even at a low false-positive rate, especially when the embedded message is short.
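
For background, MLSB replacement itself, the embedding operation these frameworks aim to detect, can be sketched in a few lines (a numpy sketch with illustrative names, not the paper's estimator):

```python
import numpy as np

def mlsb_embed(cover, message_bits, k=2):
    """Replace the k least-significant bit planes of the first samples
    of `cover` with the message (k bits packed per sample, MSB first)."""
    flat = cover.astype(np.int64).ravel()
    groups = np.asarray(message_bits).reshape(-1, k)
    vals = groups @ (1 << np.arange(k - 1, -1, -1))   # pack k bits per sample
    flat[: len(vals)] = (flat[: len(vals)] & ~np.int64((1 << k) - 1)) | vals
    return flat.reshape(cover.shape)

def mlsb_extract(stego, n_bits, k=2):
    """Read back n_bits message bits from the k low bit planes."""
    flat = stego.astype(np.int64).ravel()
    vals = flat[: n_bits // k] & ((1 << k) - 1)
    bits = (vals[:, None] >> np.arange(k - 1, -1, -1)) & 1
    return bits.ravel()
```

Packing k message bits into the k low bit planes changes a sample value by at most 2^k - 1, which is why MLSB embedding is harder to detect at low rates than to perform.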
  • Theoretical and Practical Boundaries of Binary Secure Sketches

    Page(s): 673 - 683

    Fuzzy commitment schemes, introduced as a link between biometrics and cryptography, are a way to handle biometric data matching as an error-correction issue. We focus here on finding the best error-correcting code with respect to a given database of biometric data. We propose a method that models discrepancies between biometric measurements as an erasure and error channel, and we estimate its capacity. We then show that two-dimensional iterative min-sum decoding of properly chosen product codes almost reaches the capacity of this channel. This leads to practical fuzzy commitment schemes that are close to theoretical limits. We test our techniques on public iris and fingerprint databases and validate our findings.
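
The fuzzy commitment idea underlying the paper can be sketched with a toy repetition code standing in for the product codes actually studied (all names and parameters are illustrative):

```python
import hashlib
import numpy as np

def _rep_encode(bits, n=3):
    """Repetition-code encoder: repeat each bit n times."""
    return np.repeat(np.asarray(bits) % 2, n)

def _rep_decode(bits, n=3):
    """Majority-vote decoder over groups of n bits."""
    return (np.asarray(bits).reshape(-1, n).sum(axis=1) > n // 2).astype(int)

def commit(biometric_bits, key_bits, n=3):
    """Fuzzy commitment: hide a key by XORing its codeword with the
    biometric template; store only the offset and a hash of the key."""
    c = _rep_encode(key_bits, n)
    offset = c ^ np.asarray(biometric_bits)
    tag = hashlib.sha256(bytes(list(key_bits))).hexdigest()
    return offset, tag

def open_commitment(biometric_bits, offset, tag, n=3):
    """Recover the key from a noisy re-measurement; verify via the hash."""
    noisy_codeword = offset ^ np.asarray(biometric_bits)
    key = _rep_decode(noisy_codeword, n)
    ok = hashlib.sha256(bytes(key.tolist())).hexdigest() == tag
    return key, ok
```

A repetition code with n = 3 corrects one bit error per 3-bit group; the paper's min-sum-decoded product codes push this correction capability toward the capacity of the channel estimated from real biometric data.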
  • Recognizing Rotated Faces From Frontal and Side Views: An Approach Toward Effective Use of Mugshot Databases

    Page(s): 684 - 697

    Mug shot photography has been used by the police to identify criminals for more than a century. However, the common scenario of face recognition using frontal and side-view mug shots as the gallery remains largely uninvestigated in computerized face recognition across pose. This paper presents a novel appearance-based approach using frontal and side-face images to handle pose variations in face recognition, which has great potential in forensic and security applications involving police mugshot databases. Virtual views in different poses are generated in two steps: 1) shape modelling and 2) texture synthesis. In the shape modelling step, a multilevel variation minimization approach is applied to generate personalized 3-D face shapes. In the texture synthesis step, face surface properties are analyzed and virtual views under arbitrary viewing conditions are rendered, taking diffuse and specular reflections into account. Appearance-based face recognition is performed with the augmentation of synthesized virtual views covering possible viewing angles to recognize probe views in arbitrary conditions. The encouraging experimental results demonstrate that the proposed approach, using frontal and side-view images, is a feasible and effective solution for recognizing rotated faces, and can lead to better, practical use of existing forensic databases in computerized human face-recognition applications.
  • Regression From Uncertain Labels and Its Applications to Soft Biometrics

    Page(s): 698 - 708

    In this paper, we investigate two soft-biometric problems, age estimation and pose estimation, in the scenario where uncertainties exist in the available labels of the training samples. These two tasks are generally formulated as the automatic design of a regressor from training samples with uncertain nonnegative labels. First, the nonnegative label is predicted as the Frobenius norm of a matrix, which is bilinearly transformed from the nonlinear mappings of a set of candidate kernels. Two transformation matrices are then learned for deriving such a matrix by solving two semidefinite programming (SDP) problems, in which the uncertain label of each sample is expressed as two inequality constraints. The objective function of the SDP controls the ranks of these two matrices and, consequently, automatically determines the structure of the regressor. The whole framework for the automatic design of a regressor from samples with uncertain nonnegative labels has the following characteristics: 1) the SDP formulation makes full use of the uncertain labels, instead of using conventional fixed labels; 2) regression with the Frobenius norm of a matrix naturally guarantees the nonnegativity of the labels, and greater prediction capability is achieved by integrating the squares of the matrix elements, which to some extent act as weak regressors; and 3) the regressor structure is automatically determined by the pursuit of simplicity, which potentially promotes the algorithmic generalization capability. Extensive experiments on two human age databases, FG-NET and Yamaha, and on the Pointing'04 head pose database demonstrate encouraging improvements in estimation accuracy over conventional regression algorithms that do not take the uncertainties in the labels into account.
  • 3-D Ear Modeling and Recognition From Video Sequences Using Shape From Shading

    Page(s): 709 - 718

    We describe a novel approach for 3-D ear biometrics using video. A series of frames is extracted from a video clip, and the region of interest in each frame is independently reconstructed in 3-D using shape from shading. The resulting 3-D models are then registered using the iterative closest point algorithm. We iteratively consider each model in the series as a reference model and calculate the similarity between the reference model and every model in the series using a similarity cost function. Cross validation is performed to assess the relative fidelity of each 3-D model. The model that demonstrates the greatest overall similarity is determined to be the most stable 3-D model and is subsequently enrolled in the database. Experiments are conducted using a gallery set of 402 video clips and a probe set of 60 video clips. The results (95.0% rank-1 recognition rate and 3.3% equal error rate) indicate that the proposed approach can produce recognition rates comparable to those of systems that use 3-D range data. To the best of our knowledge, we are the first to develop a 3-D ear biometric system that obtains the 3-D ear structure from a video sequence.
  • Use of Identification Trial Statistics for the Combination of Biometric Matchers

    Page(s): 719 - 733

    Combination functions typically used in biometric identification systems consider as input parameters only those matching scores which are related to a single person in order to derive a combined score for that person. We discuss how such methods can be extended to utilize the matching scores corresponding to all people. The proposed combination methods account for dependencies between scores output by any single participating matcher. Our experiments demonstrate the advantage of using such combination methods when dealing with a large number of classes, as is the case with biometric person identification systems. The experiments are performed on the National Institute of Standards and Technology BSSR1 dataset and the combination methods considered include the likelihood ratio, neural network, and weighted sum.
  • Subspace Approximation of Face Recognition Algorithms: An Empirical Study

    Page(s): 734 - 748

    We present a theory for constructing linear subspace approximations to face-recognition algorithms and empirically demonstrate that a surprisingly diverse set of face-recognition approaches can be approximated well using a linear model. A linear model, built using a training set of face images, is specified in terms of a linear subspace spanned by possibly nonorthogonal vectors. We divide the linear transformation used to project face images into this subspace into two parts: a rigid transformation obtained through principal component analysis, followed by a nonrigid affine transformation. The construction of the affine subspace involves embedding a training set of face images constrained by the distances between them, as computed by the face-recognition algorithm being approximated. We accomplish this embedding by iterative majorization, initialized by classical multidimensional scaling (MDS). Any new face image is projected into this embedded space using an affine transformation. We empirically demonstrate the adequacy of the linear model using six different face-recognition algorithms, spanning template-based and feature-based approaches, with a complete separation of the training and test sets. A subset of the face recognition grand challenge training set is used to model the algorithms, and the performance of the proposed modeling scheme is evaluated on the facial recognition technology (FERET) data set. The experimental results show that the average modeling error for the six algorithms is 6.3% at a 0.001 false acceptance rate for the FERET fafb probe set, which has 1195 subjects, the most among all of the FERET experiments. The built subspace approximation not only matches the recognition rate of the original approach, but also models the local manifold structure well, as measured by the similarity of identity of nearest neighbors; we found, on average, 87% similarity of the local neighborhood. We also demonstrate the usefulness of the linear model for algorithm-dependent indexing of face databases and find that it results in a more than twentyfold reduction in face comparisons for Bayesian, elastic bunch graph matching, and one proprietary algorithm.
  • Toward Compression of Encrypted Images and Video Sequences

    Page(s): 749 - 762

    We present a framework for compressing encrypted media, such as images and videos. Encryption masks the source, rendering traditional compression algorithms ineffective. By conceiving of the problem as one of distributed source coding, it has been shown in prior work that encrypted data are as compressible as unencrypted data. However, there are two major challenges in realizing these theoretical results. The first is the development of models that capture the underlying statistical structure and are compatible with our framework. The second is that, since the source is masked by encryption, the compressor does not know what rate to target. We tackle these issues in this paper. We first develop statistical models for images and then extend them to videos, where our techniques really gain traction. As an illustration, we compare our results to a state-of-the-art motion-compensated lossless video encoder that requires unencrypted video input. The latter compresses each unencrypted frame of the "Foreman" test sequence by 59% on average. In comparison, our proof-of-concept implementation, working on encrypted data, compresses the same sequence by 33%. Next, we develop and present an adaptive protocol for universal compression and show that it converges to the entropy rate. Finally, we demonstrate a complete implementation for encrypted video.
  • Using One-Class SVMs and Wavelets for Audio Surveillance

    Page(s): 763 - 775

    This paper presents a method for recognizing environmental sounds in surveillance and security applications. We propose to apply one-class support vector machines (1-SVMs) together with a sophisticated dissimilarity measure to address audio classification and, more specifically, sound recognition. We illustrate the performance of this method on an audio database consisting of 1015 sounds belonging to nine classes. The database presents high intraclass diversity in terms of signal properties and some interclass similarities. A large discrepancy in the number of items in each class implies a nonuniform probability of sound appearance. The method proceeds as follows: first, the use of a set of state-of-the-art audio features is studied; then, we introduce a set of novel features obtained by combining elementary features. Experiments conducted on a nine-class classification problem show the superiority of this novel sound recognition method. The best recognition accuracy (96.89%) is obtained when combining wavelet-based features, MFCCs, and individual temporal and frequency features. Our 1-SVM-based multiclass classification approach outperforms the conventional hidden Markov model-based system in the experiments conducted; the improvement in the error rate can reach 50%. In addition, we provide empirical results showing that the single-class SVM outperforms a combination of binary SVMs. Further experiments demonstrate that our method is robust to environmental noise.
  • Scalar DC–QIM for Semifragile Authentication

    Page(s): 776 - 782

    Semifragile watermarking can be used to detect illegitimate local manipulations while remaining robust to legitimate processing such as lossy compression. The aim of this work is twofold: 1) we first formalize the semifragile authentication problem by proposing an appropriate payoff: considering the watermark as an integrity stamp (i.e., its detectability stands as integrity evidence), a natural game cost is the false-alarm probability of still detecting the watermark in a tampered content; and 2) this approach is then applied to find the optimal power allocation across parallel additive Gaussian channels for a dithered distortion-compensated scalar quantization-based scheme against substitution attacks. The results are specialized to design an efficient discrete cosine transform (DCT) semifragile image watermarking system which allows detecting substitution attacks despite JPEG compression.
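
The dithered distortion-compensated scalar QIM primitive at the heart of such a scheme can be sketched as follows (a minimal numpy sketch with illustrative parameter choices, not the paper's optimized power allocation):

```python
import numpy as np

def qim_embed(x, bits, delta=8.0, dither=0.0, alpha=1.0):
    """Embed one bit per sample: quantize x onto the lattice for its bit
    (the two lattices are offset by delta/2), then distortion-compensate.
    alpha=1 gives plain dither modulation."""
    d = dither + np.asarray(bits) * delta / 2.0
    q = delta * np.round((np.asarray(x) - d) / delta) + d
    return alpha * q + (1 - alpha) * np.asarray(x)

def qim_decode(y, delta=8.0, dither=0.0):
    """Minimum-distance decoding over the two dithered lattices."""
    out = []
    for v in np.atleast_1d(y):
        dists = []
        for b in (0, 1):
            d = dither + b * delta / 2.0
            q = delta * np.round((v - d) / delta) + d
            dists.append(abs(v - q))
        out.append(int(np.argmin(dists)))
    return np.array(out)
```

With alpha = 1 this reduces to plain dither modulation; in the semifragile setting, a change in the decoded bits after an attack serves as the tampering evidence.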
  • A Buyer–Seller Watermarking Protocol Based on Secure Embedding

    Page(s): 783 - 786

    In a forensic watermarking architecture, a buyer-seller protocol protects the watermark secrets from the buyer and prevents false infringement accusations by the seller. Existing protocols encrypt the watermark and the content with a homomorphic public-key cipher and perform embedding under encryption. When used for multimedia data, these protocols create a large computation and bandwidth overhead. In this correspondence, we show that the same functionality can be achieved efficiently using recently proposed secure watermark embedding algorithms.
  • Cryptographic Secrecy of Steganographic Matrix Embedding

    Page(s): 786 - 791

    Some information-hiding schemes are scrutinized in terms of their cryptographic secrecy. The schemes under study appeal to the so-called matrix embedding strategy, designed to optimize embedding capacity under distortion constraints, as opposed to any cryptographic measure. Nonetheless, we establish conditions under which a key equivocation function is optimal, and show that under reasonable key generation models, a perfect secrecy property is nearly satisfied, limited by a mutual information measure that decreases exponentially with the block length.
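
Matrix embedding itself is concrete enough to sketch. Assuming the classic binary [7,4] Hamming-code instance (an illustrative case; the paper's secrecy analysis treats matrix embedding in general), 3 message bits are hidden in 7 cover bits while flipping at most one of them:

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code: column i (1-indexed)
# is the 3-bit binary representation of i, most significant bit first.
H = np.array([[(i >> b) & 1 for i in range(1, 8)] for b in (2, 1, 0)])

def matrix_embed(cover_bits, msg_bits):
    """Hide 3 message bits in 7 cover bits, changing at most one cover bit."""
    x = np.array(cover_bits) % 2
    syndrome = (H @ x + np.asarray(msg_bits)) % 2
    idx = int("".join(map(str, syndrome)), 2)  # 0 means no change needed
    if idx:
        x[idx - 1] ^= 1  # flip the bit whose H-column equals the syndrome
    return x

def matrix_extract(stego_bits):
    """The 3 message bits are the syndrome of the stego bits."""
    return (H @ np.asarray(stego_bits)) % 2
```

In keyed schemes of the kind analyzed, a secret key would, for example, select or scramble which cover positions feed the code; the secrecy question is how much the watermarked data leak about that key.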
  • Fingerprint-Quality Index Using Gradient Components

    Page(s): 792 - 800

    Fingerprint image-quality checking is one of the most important issues in fingerprint recognition because recognition is largely affected by the quality of fingerprint images. In the past, many related fingerprint-quality checking methods have typically considered the condition of input images. However, when using the preprocessing algorithm, ridge orientation may sometimes be extracted incorrectly. Unwanted false minutiae can be generated or some true minutiae may be ignored, which can also affect recognition performance directly. Therefore, in this paper, we propose a novel quality-checking algorithm which considers the condition of the input fingerprints and orientation estimation errors. In the experiments, the 2-D gradients of the fingerprint images were first separated into two sets of 1-D gradients. Then, the shapes of the probability density functions of these gradients were measured in order to determine fingerprint quality. We used the FVC2002 database and synthetic fingerprint images to evaluate the proposed method in three ways: 1) estimation ability of quality; 2) separability between good and bad regions; and 3) verification performance. Experimental results showed that the proposed method yielded a reasonable quality index in terms of the degree of quality degradation. Also, the proposed method proved superior to existing methods in terms of separability and verification performance.
  • List of Reviewers

    Page(s): 801 - 803

    Lists the reviewers who contributed to IEEE Transactions on Information Forensics and Security in 2007.
  • IEEE Transactions on Information Forensics and Security EDICS

    Page(s): 803
    Freely Available from IEEE
  • IEEE Transactions on Information Forensics and Security information for authors

    Page(s): 804 - 805
    Freely Available from IEEE

Aims & Scope

The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance and systems applications that incorporate these features.


Meet Our Editors

Editor-in-Chief
Chung C. Jay Kuo
University of Southern California