IEEE Transactions on Information Forensics and Security

Issue 1 • March 2010

  • Table of contents

    Page(s): C1 - C4
    Freely Available from IEEE
  • IEEE Transactions on Information Forensics and Security publication information

    Page(s): C2
    Freely Available from IEEE
  • Efficient General Print-Scanning Resilient Data Hiding Based on Uniform Log-Polar Mapping

    Page(s): 1 - 12

    This paper proposes an efficient, blind, and robust data hiding scheme that is resilient to both geometric distortion and the general print-scan process, based on a near-uniform log-polar mapping (ULPM). In contrast to performing an inverse log-polar mapping (from the log-polar system to the Cartesian system) on the watermark signal or its index, as done in prior works, we apply the ULPM to the frequency index (u, v) in the Cartesian system to obtain the discrete log-polar coordinate (l1, l2), then embed one watermark bit w(l1, l2) in the corresponding discrete Fourier transform coefficient c(u, v). Mapping the index from the Cartesian system to the log-polar system while embedding the watermark directly in the Cartesian domain not only completely removes the interpolation and interference distortion introduced to the watermark signal in some prior works, but also greatly expands the cardinality of the watermark in the log-polar mapping domain. Both theoretical analysis and experimental results show that the proposed scheme achieves excellent robustness to geometric distortion, normal signal processing, and the general print-scan process. Compared to existing watermarking schemes, our algorithm offers significant improvements in robustness against general print-scan, receiver operating characteristic (ROC) performance, and efficiency of blind resynchronization.
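
The index mapping at the heart of the scheme can be illustrated with a small sketch. The grid sizes, radius bounds, and half-plane angle convention below are illustrative assumptions, not the paper's parameters:

```python
import math

def log_polar_index(u, v, n_radial=32, n_angular=64, r_min=1.0, r_max=128.0):
    """Map a Cartesian DFT frequency index (u, v) to a discrete
    log-polar coordinate (l1, l2).  Frequencies outside the chosen
    annulus are left unused for embedding."""
    r = math.hypot(u, v)
    if not (r_min <= r <= r_max):
        return None
    theta = math.atan2(v, u) % math.pi  # DFT symmetry: a half plane suffices
    l1 = int(n_radial * math.log(r / r_min) / math.log(r_max / r_min))
    l2 = int(n_angular * theta / math.pi)
    # Each coefficient c(u, v) inside the annulus would then carry the
    # watermark bit w(l1, l2) for its mapped coordinate.
    return min(l1, n_radial - 1), min(l2, n_angular - 1)
```

In this direction of mapping, several (u, v) indices may share one (l1, l2), so a watermark bit can be embedded redundantly without any interpolation of the watermark signal itself.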

  • Secure Client-Side ST-DM Watermark Embedding

    Page(s): 13 - 26

    Client-side watermark embedding systems have been proposed as a possible solution for copyright protection in large-scale content distribution environments. In this framework, we propose a new look-up-table-based secure client-side embedding scheme designed for the spread transform dither modulation (ST-DM) watermarking method. A theoretical analysis of the detector performance under the best-known attack models is presented, and the agreement between theoretical and experimental results is verified through several simulations. The experimental results also show that the advantages of the informed embedding technique over the spread-spectrum watermarking approach, which are well known in classical embedding schemes, are preserved in the client-side scenario. The proposed approach permits us to successfully combine the security of client-side embedding with the robustness of informed embedding methods.

  • Step Construction of Visual Cryptography Schemes

    Page(s): 27 - 38

    Two common drawbacks of the visual cryptography scheme (VCS) are the large pixel expansion of each share image and the small contrast of the recovered secret image. In this paper, we propose a step construction to build VCS_OR and VCS_XOR for general access structures by applying a (2,2)-VCS recursively, where a participant may receive multiple share images. The proposed step construction generates VCS_OR and VCS_XOR with optimal pixel expansion and contrast for each qualified set in the general access structure in most cases. Our scheme applies a technique to simplify the access structure, which can reduce the average pixel expansion (APE) in most cases compared with many of the results in the literature. Finally, we give experimental results and comparisons to show the effectiveness of the proposed scheme.
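
The (2,2)-VCS used as the recursive building block can be sketched as follows. This is the textbook OR-based (2,2) scheme with pixel expansion 2, not the paper's full step construction:

```python
import random

def vcs22_shares(secret_bits, rng=None):
    """Textbook (2,2)-VCS: each secret bit (1 = black) expands into a
    pair of 2-subpixel patterns, one pattern per share.  A single share
    always has exactly one dark subpixel, so it leaks nothing."""
    rng = rng or random.Random(0)
    share1, share2 = [], []
    for bit in secret_bits:
        p = rng.choice([(0, 1), (1, 0)])            # random base pattern
        q = tuple(1 - x for x in p) if bit else p   # complement iff black
        share1.append(p)
        share2.append(q)
    return share1, share2

def stack_or(share1, share2):
    """OR-superposition (stacking transparencies): black pixels become
    fully dark, white pixels keep one light subpixel (contrast 1/2)."""
    return [tuple(a | b for a, b in zip(p, q)) for p, q in zip(share1, share2)]
```

An XOR-capable device would instead recover the secret perfectly, since the two patterns XOR to (0, 0) for white and (1, 1) for black; this is the distinction behind the VCS_OR/VCS_XOR contrast gap.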

  • The Subset Keys and Identity Tickets (SKIT) Key Distribution Scheme

    Page(s): 39 - 51

    Probabilistic key predistribution schemes (P-KPSs), which place modest demands on hardware, are good candidates for securing interactions between resource-limited computers. Collusion-susceptible P-KPSs are trade-offs between security and complexity. Facets of complexity include computation, bandwidth, and storage overhead; metrics for security include resistance to passive eavesdropping attacks and active message-injection attacks. The contributions of this paper are threefold: 1) a novel P-KPS, the subset keys and identity tickets (SKIT) scheme; 2) a generic KPS model to facilitate comparison of various facets of the complexity of key predistribution schemes; and 3) a new security model to describe the resistance of P-KPSs to active message-injection attacks. The two models are used to show why SKIT has many compelling advantages over existing P-KPSs in the literature. In particular, while placing lower demands on computation, bandwidth, and storage overhead, SKIT realizes substantial improvements in resistance to passive and active attacks.

  • Mosaicing Touchless and Mirror-Reflected Fingerprint Images

    Page(s): 52 - 61

    Touchless fingerprint sensing technologies have been explored to solve problems in touch-based sensing techniques, because they do not require any contact between a sensor and a finger. While they avoid the problems caused by finger contact, other difficulties emerge, such as the view-difference problem and a limited usable area due to perspective distortion. To overcome these difficulties, we propose a new touchless fingerprint sensing device that captures three different views at one time, along with a method for mosaicing these view-different images. The device is composed of a single camera and two planar mirrors reflecting the side views of a finger, and is an alternative to expensive multiple-camera-based systems. The mosaic method composites the multiple view images using the thin plate spline model to expand the usable area of a fingerprint image. In particular, to reduce the effect of perspective distortion, we select the regions in each view by minimizing the ridge-interval variations in the final mosaiced image. Results are promising: our experiments show that mosaiced images offer 29% more true minutiae and a 28% larger good-quality area than one-view, unmosaiced images. Also, matching side-view images to the mosaiced images yields more matched minutiae than matching against one-view frontal images. We expect that the proposed method can reduce the view-difference problem and increase the usable area of a touchless fingerprint image. Furthermore, the proposed method can be applied to other biometric applications requiring a large template for recognition.

  • Assessing Fingerprint Individuality in Presence of Noisy Minutiae

    Page(s): 62 - 70

    Fingerprint image quality is an important source of intraclass variability. When the underlying image quality is poor, human experts as well as automatic systems are more likely to make errors in minutiae detection and matching, either missing true features or detecting spurious ones. As a consequence, fingerprint individuality estimates change depending on the quality of the underlying images. The goal of this paper is to quantitatively study the effect of noise in minutiae detection and localization, resulting from varying image quality, on fingerprint individuality. The measure of fingerprint individuality is modeled as a function of image quality via a random effects model, and methodology for the estimation of the unknown parameters is developed in a Bayesian framework. Empirical results on two databases, one in-house and another publicly available, demonstrate how the measure of fingerprint individuality increases as image quality becomes poor. The measure corresponding to the “12-point match” with 26 observed minutiae in the query and template fingerprints increases by several orders of magnitude when the fingerprint quality degrades from “best” to “poor”.

  • A Doubly Weighted Approach for Appearance-Based Subspace Learning Methods

    Page(s): 71 - 81

    We propose in this paper a doubly weighted subspace learning approach for face representation and recognition. Motivated by the fact that some face samples and parts are more effectual in characterizing and recognizing faces, we construct two weighting matrices, based on the pairwise similarity of face samples within the same class and the discriminant score of each pixel within a face sample, to duly emphasize both between-sample and within-sample features. We then incorporate these two weighting matrices into three popular subspace learning methods, namely principal component analysis, linear discriminant analysis, and nonnegative matrix factorization, to obtain discriminative features of faces for recognition. Moreover, the proposed doubly weighted technique can be readily extended to other newly proposed subspace learning algorithms to improve their performance. Experimental results show that the proposed approach can effectively enhance the discriminant power of the extracted face features and outperform existing nonweighted subspace learning algorithms. The performance gain is even more apparent for cases with imbalanced training samples.

  • Face Verification Across Age Progression Using Discriminative Methods

    Page(s): 82 - 91

    Face verification in the presence of age progression is an important problem that has not been widely addressed. In this paper, we study the problem by designing and evaluating discriminative approaches, which directly tackle verification tasks without explicit age modeling, a hard problem by itself. First, we find that the gradient orientation, after discarding magnitude information, provides a simple but effective representation for this problem. This representation is further improved when hierarchical information is used, which results in the gradient orientation pyramid (GOP). When combined with a support vector machine, GOP demonstrates excellent performance in all our experiments, in comparison with seven different approaches including two commercial systems. Our experiments are conducted on the FGnet dataset and two large passport datasets, one of them the largest ever reported for recognition tasks. Second, taking advantage of these datasets, we empirically study how age gaps and related issues (including image quality, spectacles, and facial hair) affect recognition algorithms. Surprisingly, we found that the added difficulty of verification produced by age gaps saturates once the gap exceeds four years, for gaps of up to ten years. In addition, we find that image quality and eyewear present more of a challenge than facial hair.

  • A New Framework for Adaptive Multimodal Biometrics Management

    Page(s): 92 - 102

    This paper presents a new evolutionary approach for the adaptive combination of multiple biometrics to ensure optimal performance at the desired level of security. The adaptive combination of multiple biometrics is employed to determine the optimal fusion strategy and the corresponding fusion parameters. The score-level fusion rules are adapted to ensure the desired system performance using a hybrid particle swarm optimization model. The rigorous experimental results presented in this paper illustrate that the proposed score-level approach achieves significantly better and more stable performance than the decision-level approach. There has been very little effort in the literature to investigate the performance of an adaptive multimodal fusion algorithm on real biometric data. This paper therefore also presents the performance of the proposed approach on real biometric samples, which further validates its contributions.

  • A Hybrid Approach for Generating Secure and Discriminating Face Template

    Page(s): 103 - 117

    Biometric template protection is one of the most important issues in deploying a practical biometric system. To tackle this problem, many algorithms that do not store the template in its original form have been reported in recent years. They can be categorized into two approaches: biometric cryptosystems and transform-based methods. However, most (if not all) algorithms in both approaches involve a trade-off between template security and matching performance. Moreover, we believe that no single template protection method is capable of satisfying security and performance simultaneously. In this paper, we propose a hybrid approach that takes advantage of both the biometric cryptosystem approach and the transform-based approach. A three-step hybrid algorithm is designed and developed based on random projection, a discriminability-preserving (DP) transform, and a fuzzy commitment scheme. The proposed algorithm not only provides good security, but also enhances performance through the DP transform. Three publicly available face databases, namely FERET, CMU-PIE, and FRGC, are used for evaluation. The security strengths of the binary templates generated from the FERET, CMU-PIE, and FRGC databases are 206.3, 203.5, and 347.3 bits, respectively. Moreover, a noninvertibility analysis and a discussion of the data leakage of the proposed hybrid algorithm are also reported. Experimental results show that, using Fisherface to construct the input facial feature vector (face template), the proposed hybrid method improves recognition accuracy by 4%, 11%, and 15% on the FERET, CMU-PIE, and FRGC databases, respectively. A comparison with the recently developed random multispace quantization biohashing algorithm is also reported.
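
The first of the three steps, random projection, can be sketched generically. The dimensions, seeding, and orthonormalization choice below are illustrative; the DP transform and fuzzy commitment stages are not shown:

```python
import numpy as np

def random_projection(x, out_dim, seed):
    """Project a feature vector onto a user-specific random subspace.
    The seed plays the role of a user token; the projection rows are
    orthonormalized so distances are not distorted more than necessary."""
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((x.shape[0], out_dim))
    q, _ = np.linalg.qr(a)        # orthonormal columns
    return q.T @ x                # shape (out_dim,)
```

In schemes of this kind, revoking a compromised template amounts to issuing a new seed, since a fresh projection yields an unrelated protected template.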

  • Fuzzy Key Binding Strategies Based on Quantization Index Modulation (QIM) for Biometric Encryption (BE) Applications

    Page(s): 118 - 132

    Biometric encryption (BE) has recently been identified as a promising paradigm to deliver security and privacy, with unique technical merits and encouraging social implications. An integral component of BE is a key binding method: the process of securely combining a signal containing sensitive information to be protected (i.e., the key) with another signal derived from physiological features (i.e., the biometric). A challenge to this approach is the high degree of noise and variability present in physiological signals. As such, fuzzy methods are needed to enable proper operation with adequate performance in terms of false acceptance rate and false rejection rate. In this work, the focus is on a class of fuzzy key binding methods based on dirty-paper coding known as quantization index modulation (QIM). While the methods presented are applicable to a wide range of biometric modalities, the face biometric is selected for illustrative purposes in evaluating QIM-based solutions for BE systems. Performance evaluation of the investigated methods is reported using data from the CMU PIE face database.
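
The core QIM operation, selecting one of two dithered quantizers according to each key bit, can be sketched as follows; the step size and dither values are illustrative:

```python
import numpy as np

def qim_embed(features, key_bits, delta=1.0):
    """Quantize each biometric feature with the quantizer selected by
    its key bit: dither 0 for bit 0, delta/2 for bit 1."""
    d = np.where(np.asarray(key_bits) == 0, 0.0, delta / 2.0)
    return delta * np.round((features - d) / delta) + d

def qim_decode(observed, delta=1.0):
    """Recover key bits: pick the dithered lattice closer to each value."""
    r0 = np.abs(observed - delta * np.round(observed / delta))
    r1 = np.abs(observed - delta / 2.0
                - delta * np.round((observed - delta / 2.0) / delta))
    return (r1 < r0).astype(int)
```

Decoding from a noisy re-acquisition of the biometric succeeds as long as the per-feature noise stays below delta/4, which is the fuzziness such a binding tolerates.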

  • Fragility Analysis of Adaptive Quantization-Based Image Hashing

    Page(s): 133 - 147

    Fragility is one of the most important properties of authentication-oriented image hashing. However, to date, there has been little theoretical analysis of the fragility of image hashing. In this paper, we propose a measure called expected discriminability for the fragility of image hashing and study this fragility theoretically based on the proposed measure. According to our analysis, when Gray code is applied in the discrete-binary conversion stage of image hashing, the value of the expected discriminability, which is dominated by the quantization stage, is no more than 1/2. We further evaluate the expected discriminability of the image-hashing scheme that uses adaptive quantization, the most popular quantization scheme in the field of image hashing. Our evaluation reveals that if deterministic adaptive quantization is applied, the expected discriminability of the image-hashing scheme can reach the maximum value (i.e., 1/2). Finally, experiments are conducted to validate our theoretical analysis and to compare the performance of several quantization schemes for image hashing.
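
The Gray code referred to in the discrete-binary conversion stage is the standard binary-reflected Gray code; under it, adjacent quantization bins differ in exactly one bit, which is what couples the hash's fragility to the quantization stage:

```python
def to_gray(n):
    """Binary-reflected Gray code of a nonnegative integer."""
    return n ^ (n >> 1)

def from_gray(g):
    """Inverse mapping, by cumulative XOR of the shifted codeword."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n
```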

  • On the Location of an Eavesdropper in Multiterminal Networks

    Page(s): 148 - 157

    We study the optimum location of an eavesdropper from a secrecy capacity perspective in multiterminal networks with power control. We determine the logical location of an eavesdropper (represented by the channel gains from all the transmitters) which 1) results in zero secrecy capacity for the maximum number of users in the network and 2) results in zero secrecy capacity for the bottleneck links. We then analyze the asymptotic secrecy capacity of the system and the asymptotic behavior of the optimum logical location of the eavesdropper. Results indicate that power control can make eavesdropping more difficult, as it results in infeasible locations for the eavesdropper. Power control is also shown to provide scenarios that can result in positive asymptotic secrecy capacity for all users.
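
As background for the zero-secrecy-capacity conditions above, the secrecy capacity of a single Gaussian wiretap link can be written in the standard textbook form (not reproduced from the paper):

```latex
C_s = \left[ \log_2\!\left(1 + \frac{P\,g_m}{N}\right)
           - \log_2\!\left(1 + \frac{P\,g_e}{N}\right) \right]^{+}
```

where P is the transmit power, N the noise power, and g_m, g_e the channel gains to the legitimate receiver and the eavesdropper. C_s vanishes exactly when g_e >= g_m, which is the kind of condition an optimally placed eavesdropper's logical location (its gain vector) is chosen to satisfy for as many links as possible.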

  • An Abuse-Free Fair Contract-Signing Protocol Based on the RSA Signature

    Page(s): 158 - 168

    A fair contract-signing protocol allows two potentially mistrusting parties to exchange their commitments (i.e., digital signatures) to an agreed contract over the Internet in a fair way, so that either each of them obtains the other's signature or neither party does. Based on the RSA signature scheme, a new digital contract-signing protocol is proposed in this paper. Like the existing RSA-based solutions for the same problem, our protocol is not only fair but also optimistic, since the trusted third party is involved only in situations where one party is cheating or the communication channel is interrupted. Furthermore, the proposed protocol satisfies a new property: abuse-freeness. That is, if the protocol is executed unsuccessfully, neither of the two parties can demonstrate the validity of intermediate results to others. Technical details are provided to analyze the security and performance of the proposed protocol. In summary, we present the first abuse-free fair contract-signing protocol based on the RSA signature and show that it is both secure and efficient.

  • Insiders Behaving Badly: Addressing Bad Actors and Their Actions

    Page(s): 169 - 179

    We present a framework for describing insiders and their actions based on the organization, the environment, the system, and the individual. Using several real examples of unwelcome insider action (hard drive removal, stolen intellectual property, tax fraud, and proliferation of e-mail responses), we show how the taxonomy helps in understanding how each situation arose and how it could have been addressed. The differentiation among types of threats suggests how effective responses to insider threats might be shaped, what choices exist for each type of threat, and the implications of each. Future work will consider appropriate strategies to address each type of insider threat in terms of detection, prevention, mitigation, remediation, and punishment.

  • Composite Signal Representation for Fast and Storage-Efficient Processing of Encrypted Signals

    Page(s): 180 - 187

    Signal processing tools working directly on encrypted data could provide an efficient solution for application scenarios where sensitive signals must be protected from an untrusted processing device. In this paper, we consider the data expansion required to pass from the plaintext to the encrypted representation of signals, due to the use of cryptosystems operating on very large algebraic structures. A general composite signal representation is proposed that allows us to pack a number of signal samples together and process them as a single sample. The proposed representation permits us to speed up linear operations on encrypted signals via parallel processing and to reduce the size of the encrypted signal. A case study, 1-D linear filtering, shows the merits of the proposed representation and provides some insights regarding which signal processing algorithms are best suited to work on the composite representation.
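
The packing idea can be sketched in the plaintext domain. The base and sample ranges below are illustrative; in the actual system the packed integer would be encrypted, and one operation on the ciphertext then acts on all packed samples at once:

```python
def pack(samples, base):
    """Compose nonnegative samples s_i < base into sum_i s_i * base**i.
    The base must leave headroom so that sums of packed values do not
    carry across component boundaries."""
    acc = 0
    for s in reversed(samples):
        acc = acc * base + s
    return acc

def unpack(value, base, n):
    """Recover n packed samples by repeated division."""
    out = []
    for _ in range(n):
        value, s = divmod(value, base)
        out.append(s)
    return out
```

Adding two packed values adds every component in parallel, and multiplying by a scalar scales every component, which is exactly the pair of operations linear filtering needs.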

  • Reversible Image Watermarking Using Interpolation Technique

    Page(s): 187 - 193

    Watermarking embeds information into a digital signal such as audio, image, or video. Reversible image watermarking can restore the original image without any distortion after the hidden data are extracted. In this paper, we present a novel reversible watermarking scheme using an interpolation technique, which can embed a large amount of covert data into images with imperceptible modification. Unlike previous watermarking schemes, we utilize the interpolation error, the difference between the interpolation value and the corresponding pixel value, to embed the bit “1” or “0” by expanding it additively or leaving it unchanged. Because the pixels are only slightly modified, high image quality is preserved. Experimental results also demonstrate that the proposed scheme provides greater payload capacity and higher image fidelity than other state-of-the-art schemes.
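
The embedding step can be sketched with a generic additive expansion of the interpolation error (e becomes 2e + bit). This is the classic difference-expansion rule, shown for illustration only; the paper's exact expansion and overflow handling differ in detail:

```python
def embed_bit(pixel, interp, bit):
    """Expand the interpolation error e = pixel - interp to 2e + bit.
    The interpolated value must be recomputable from unmodified
    neighbors at extraction time."""
    e = pixel - interp
    return interp + 2 * e + bit

def extract_bit(marked, interp):
    """Recover the hidden bit and restore the original pixel exactly."""
    e2 = marked - interp
    return e2 & 1, interp + (e2 >> 1)
```

Because the same interpolation can be recomputed at the decoder, both the bit and the original pixel are recovered exactly, which is what makes the scheme reversible.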

  • Corrections to “TPM Meets DRE: Reducing the Trust Base for Electronic Voting Using Trusted Platform Modules”  [Dec 09 628-637]

    Page(s): 194

    In the above titled paper (ibid., vol. 4, no. 4, pp. 628-637, Dec. 09), the fourth sentence of the Abstract contains an error. The correct sentence is presented here.

  • Corrections to “Scantegrity II: End-to-End Verifiability by Voters of Optical Scan Elections Through Confirmation Codes” [Dec 09 611-627]

    Page(s): 194

    In the above titled paper (ibid., vol. 4, no. 4, pp. 611-627, Dec. 09), due to a production error, the affiliations of two of the authors were listed incorrectly. The correct affiliations are presented here. Also, the name of the last author in the affiliations footnote was printed incorrectly. The correct name is P. Y. A. Ryan.

  • IEEE Transactions on Information Forensics and Security EDICS

    Page(s): 195
    Freely Available from IEEE
  • IEEE Transactions on Information Forensics and Security Information for authors

    Page(s): 196 - 197
    Freely Available from IEEE
  • Special issue on New Frontiers in Rich Transcription

    Page(s): 198
    Freely Available from IEEE
  • IEEE copyright form

    Page(s): 199 - 200
    Freely Available from IEEE

Aims & Scope

The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.


Meet Our Editors

Editor-in-Chief
Chung C. Jay Kuo
University of Southern California