IEEE Transactions on Information Forensics and Security

Issue 2 • June 2007

Displaying Results 1 - 24 of 24
  • Table of contents

    Publication Year: 2007, Page(s): C1 - C4
    PDF (41 KB) | Freely Available from IEEE
  • IEEE Transactions on Information Forensics and Security publication information

    Publication Year: 2007, Page(s): C2
    PDF (34 KB) | Freely Available from IEEE
  • Noniterative Algorithms for Sensitivity Analysis Attacks

    Publication Year: 2007, Page(s): 113 - 126
    Cited by: Papers (4)
    PDF (977 KB) | HTML

    Sensitivity analysis attacks constitute a powerful family of watermark "removal" attacks. They exploit a vulnerability in some watermarking protocols: the attacker's unlimited access to the watermark detector. This paper proposes a mathematical framework for designing sensitivity analysis attacks and focuses on additive spread-spectrum embedding schemes. The detectors under attack range in complexity from basic correlation detectors to normalized correlation detectors and maximum-likelihood (ML) detectors. The new algorithms precisely estimate and then eliminate the watermark from the watermarked signal. This is accomplished by exploiting geometric properties of the detection boundary and the information leaked by the detector. Several important extensions are presented, including the case of a partially unknown detection function and the case of constrained detector inputs. In contrast with previous art, the algorithms are noniterative and require at most O(n) detection operations to precisely estimate the watermark, where n is the dimension of the signal. Since the cost of each detection operation is O(n), the algorithms can be executed in quadratic time. The method is illustrated with an application to image watermarking using an ML detector based on a generalized Gaussian model for images.
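    The oracle these attacks assume can be illustrated with a minimal normalized-correlation detector; the function name and threshold below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def nc_detect(signal, watermark, threshold=0.1):
    """Return True when the normalized correlation between the (zero-mean)
    signal and watermark exceeds the threshold. An attacker with unlimited
    access to this binary output can probe inputs near the detection
    boundary and infer the watermark, which is the leakage such attacks
    exploit."""
    s = signal - signal.mean()
    w = watermark - watermark.mean()
    rho = float(s @ w) / (np.linalg.norm(s) * np.linalg.norm(w))
    return rho > threshold
```

    A watermarked signal is flagged while an unrelated host is not, and each call costs O(n), matching the per-detection cost cited in the abstract.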

  • Using Perceptual Models to Improve Fidelity and Provide Resistance to Valumetric Scaling for Quantization Index Modulation Watermarking

    Publication Year: 2007, Page(s): 127 - 139
    Cited by: Papers (41)
    PDF (3748 KB) | HTML

    Traditional quantization index modulation (QIM) methods are based on a fixed quantization step size, which may lead to poor fidelity in some areas of the content. A more serious limitation of the original QIM algorithm is its sensitivity to valumetric changes (e.g., changes in amplitude). In this paper, we first propose using Watson's perceptual model to adaptively select the quantization step size based on the calculated perceptual "slack". Experimental results on 1000 images indicate improvements in fidelity as well as improved robustness in high-noise regimes. Watson's perceptual model is then modified such that the slacks scale linearly with valumetric scaling, thereby providing a QIM algorithm that is theoretically invariant to valumetric scaling. In practice, scaling can still introduce errors through cropping and roundoff, which are indirect effects of scaling. Two new algorithms are proposed: the first based on traditional QIM and the second based on rational dither modulation. A comparison demonstrates improved performance over other recently proposed valumetric-invariant QIM algorithms, with only small degradations in fidelity.
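    The fixed-step baseline that the paper improves on can be sketched as plain scalar QIM; the step size and function names here are illustrative, not from the paper:

```python
import numpy as np

def qim_embed(x, bit, step):
    """Embed one bit in every sample by quantizing onto one of two
    lattices, offset from each other by half a quantization step."""
    offset = bit * step / 2.0
    return np.round((x - offset) / step) * step + offset

def qim_detect(y, step):
    """Decode by picking the nearer lattice per sample, then majority vote."""
    d0 = np.abs(y - qim_embed(y, 0, step))
    d1 = np.abs(y - qim_embed(y, 1, step))
    return int(np.sum(d1 < d0) > y.size / 2)
```

    Decoding survives additive noise well below a quarter of the step, but scaling y even slightly moves the samples off both lattices: this is the valumetric sensitivity that the paper's adaptive and rational-dither variants address.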

  • Derivation of Error Distribution in Least Squares Steganalysis

    Publication Year: 2007, Page(s): 140 - 148
    Cited by: Papers (5)
    PDF (560 KB) | HTML

    This paper considers the least squares method (LSM) for estimating the length of a payload embedded by least-significant-bit replacement in digital images. Errors in this estimate have already been investigated empirically, showing a slight negative bias and substantially heavy tails (extreme outliers). In this paper, approximations for the distribution of the estimator over cover images are derived: this requires analysis of the cover-image assumption of the LSM algorithm and a new model for cover images which quantifies deviations from this assumption. The theory explains both the heavy tails and the negative bias in terms of cover-specific observable properties, and suggests improved detectors. It also allows the steganalyst to compute precisely, for the first time, a p-value for testing the hypothesis that a hidden payload is present. This is the first derivation of the performance of a steganalysis estimator.
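    The embedding operation whose payload length the LSM estimator targets, least-significant-bit replacement, can be sketched as follows (array names are illustrative):

```python
import numpy as np

def lsb_embed(pixels, bits):
    """Replace the least-significant bits of the first len(bits) pixels
    with the payload bits. Replacement (unlike LSB matching) only maps
    even values up and odd values down, the structural asymmetry that
    payload-length estimators such as the LSM exploit."""
    out = pixels.copy()
    k = len(bits)
    out[:k] = (out[:k] & np.uint8(0xFE)) | bits
    return out
```

    The payload is readable back as the LSB plane (`stego & 1`), while the even-up/odd-down transition structure is what a statistical estimator measures.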

  • A Constructive and Unifying Framework for Zero-Bit Watermarking

    Publication Year: 2007, Page(s): 149 - 163
    Cited by: Papers (4) | Patents (1)
    PDF (838 KB) | HTML

    In the watermark detection scenario, also known as zero-bit watermarking, a watermark carrying no hidden message is inserted in a piece of content, and the watermark detector checks for the presence of this particular weak signal in received contents. The article looks at this problem from a classical detection-theory point of view, but with side information enabled at the embedding side; that is, the watermark signal is a function of the host content. Our study is twofold. The first step is to design the best embedding function for a given detection function, and the best detection function for a given embedding function. This yields two conditions, which are combined into one 'fundamental' partial differential equation. It appears that many famous watermarking schemes are indeed solutions to this 'fundamental' equation. The study thus gives birth to a constructive framework unifying schemes that were so far perceived as very different.

  • Optical Watermarking for Printed Document Authentication

    Publication Year: 2007, Page(s): 164 - 173
    Cited by: Papers (5) | Patents (2)
    PDF (570 KB) | HTML

    This article describes a novel visual information concealment technique, referred to as optical watermarking, for the authentication of original printed documents. An optical watermark is a two-dimensional binary image; it can be of any shape and can be printed on any part of a document. The optical watermark is constructed by the superposition of multiple two-dimensional binary images (referred to as layers), each with different carrier structural patterns embedding various hidden information. The hidden information is embedded into each layer using phase modulation. Based on properties of the human visual system and the modulation principle, the hidden information becomes visible to the human eye only when the right "key" is positioned on top of the optical watermark with the right alignment. Here, "keys" play a similar role to keys in encryption, that is, they decode the hidden information. Such a "lock and key" approach greatly improves the security level of the optical watermark. In addition, the multiple-layer structure of the optical watermark makes it extremely robust against reverse-engineering attacks. Because of its high security and its tight link with electronic document systems that require documents to be finally printed on paper, the optical watermark has been applied to various electronic document systems, including online ticketing, online bills of lading, and remote signing and printing of documents, where critical and unique information is embedded in watermarks and printed together with individual documents for future authentication. It has also been used in offline and traditional antiforgery applications, such as brand protection, preprinted high-value tickets, and identification documents.

  • Using Support Vector Machines to Enhance the Performance of Bayesian Face Recognition

    Publication Year: 2007, Page(s): 174 - 180
    Cited by: Papers (8)
    PDF (373 KB) | HTML

    In this paper, we first develop a direct Bayesian-based support vector machine (SVM) by combining Bayesian analysis with the SVM. Unlike traditional SVM-based face recognition methods that require training a large number of SVMs, the direct Bayesian SVM needs only one SVM, trained to classify the face difference between intrapersonal variation and extrapersonal variation. However, this simplicity means that the method has to separate two complex subspaces with one hyperplane, which affects the recognition accuracy. In order to improve the recognition performance, we develop three more Bayesian-based SVMs: the one-versus-all method, the hierarchical agglomerative clustering-based method, and the adaptive clustering method. Finally, we combine the adaptive clustering method with multilevel subspace analysis to further improve the recognition performance. We show the improvement of the new algorithms over traditional subspace methods through experiments on two face databases: the FERET database and the XM2VTS database.

  • Hand-Geometry Recognition Using Entropy-Based Discretization

    Publication Year: 2007, Page(s): 181 - 187
    Cited by: Papers (15)
    PDF (842 KB) | HTML

    The hand-geometry-based recognition systems proposed in the literature have not yet exploited user-specific dependencies in the feature-level representation. We investigate the possibility of improving the performance of existing hand-geometry systems through discretization of the extracted features. This paper proposes employing discretization of hand-geometry features, using entropy-based heuristics, to achieve this improvement. The performance gains due to unsupervised and supervised discretization schemes are compared on a variety of classifiers: k-NN, naive Bayes, SVM, and FFN. Our experimental results on a database of 100 users show a significant improvement in recognition accuracy and confirm the usefulness of discretization in hand-geometry-based systems.
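    A supervised entropy-based cut of the kind used for such discretization (a Fayyad-Irani-style binary split; the data and names below are illustrative, not from the paper) can be sketched as:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def best_cut(values, labels):
    """Return the cut point on one feature that minimizes the weighted
    class entropy of the two resulting bins."""
    pairs = sorted(zip(values, labels))
    best_h, best_point = float("inf"), None
    for i in range(1, len(pairs)):
        left = [lab for _, lab in pairs[:i]]
        right = [lab for _, lab in pairs[i:]]
        h = (len(left) * entropy(left) + len(right) * entropy(right)) / len(pairs)
        if h < best_h:
            best_h = h
            best_point = (pairs[i - 1][0] + pairs[i][0]) / 2
    return best_point
```

    Applied recursively per feature with a stopping criterion, this yields the discretization intervals fed to the classifiers.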

  • Human Identification From Dental X-Ray Images Based on the Shape and Appearance of the Teeth

    Publication Year: 2007, Page(s): 188 - 197
    Cited by: Papers (15)
    PDF (2746 KB) | HTML

    Dental biometrics deals with human identification from dental characteristics. In this paper, we present a new technique for identifying people based upon the shapes and appearances of their teeth in dental X-ray radiographs. The new technique represents each tooth by a feature vector obtained from the force-field energy function of the grayscale image of the tooth and from Fourier descriptors of the contour of the tooth. The feature vector is composed of the distances between a small number of potential-energy wells as well as a small number of Fourier descriptors. Given a query image (i.e., a postmortem radiograph), each tooth is matched with the archived teeth in the database (antemortem radiographs) that have the same tooth number. Voting is then used to obtain a list of best matches for the query image based upon the matching results of the individual teeth. Our goal in using appearance- and shape-based features together is to overcome the drawback of using only the contour of the tooth, which can be strongly affected by the quality of the images. Experimental results on a database of 162 antemortem images show that our method is effective in identifying individuals from their dental radiographs.

  • Tracing Malicious Relays in Cooperative Wireless Communications

    Publication Year: 2007, Page(s): 198 - 212
    Cited by: Papers (22) | Patents (1)
    PDF (1282 KB) | HTML

    A cooperative communication system explores a new dimension of diversity in wireless communications to combat the unfriendly wireless environment. While this emerging technology is promising for improving communication quality, some security problems inherent to cooperative relaying also arise. This paper investigates the security issues in cooperative communications in the context of multiple relay nodes using a decode-and-forward strategy, where one of the relay nodes is adversarial and tries to corrupt the communications by sending garbled signals. We show that the conventional physical-layer signal detector leads to a high error rate in signal detection in such a scenario, and that application-layer cryptography alone cannot distinguish the adversarial relay from legitimate ones. To trace and identify the adversarial relay, we propose a cross-layer tracing scheme that uses adaptive signal detection at the physical layer, coupled with pseudorandom tracing symbols at the application layer. Analytical results for the tracing statistics as well as experimental simulations are presented to demonstrate the effectiveness of the proposed tracing scheme.

  • Analysis and Protection of Dynamic Membership Information for Group Key Distribution Schemes

    Publication Year: 2007, Page(s): 213 - 226
    Cited by: Papers (2) | Patents (1)
    PDF (1057 KB) | HTML

    In secure group-oriented applications, key management schemes are employed to distribute and update keys such that unauthorized parties cannot access group communications. Key management, however, can disclose information about the dynamics of group membership, such as the group size and the number of joining and departing users. This is a threat to applications in which group membership is confidential. This paper investigates techniques that can stealthily acquire group dynamic information from key management. We show that insiders and outsiders can successfully obtain group membership information by exploiting the key establishment and key updating procedures of many popular key management schemes. In particular, we develop three attack methods targeting tree-based centralized key management schemes. We then propose a defense technique utilizing batch rekeying and phantom users, and derive performance criteria that describe the security level of the proposed scheme using mutual information. The proposed defense scheme is evaluated on data from MBone multicast sessions. We also provide a brief analysis of the disclosure of group dynamic information in contributory key management schemes.
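    The kind of side channel studied here is visible even in a back-of-the-envelope count of rekey messages. The formula below is the common textbook cost for a binary LKH-style key tree, used as an illustrative assumption rather than a result from the paper:

```python
import math

def leave_rekey_messages(n):
    """Approximate messages sent when one member leaves a balanced binary
    key tree of n members: every key on the departing leaf's path to the
    root is replaced and encrypted for each of its two child subtrees,
    except that the departed leaf's own key is never used, giving
    2 * depth - 1 messages."""
    depth = math.ceil(math.log2(n))
    return 2 * depth - 1

def infer_group_size(messages):
    """What an eavesdropper can do: invert the count to estimate n."""
    return 2 ** ((messages + 1) // 2)
```

    Counting such messages over time reveals group size and join/leave rates, which is why the proposed defense batches rekeying and injects phantom users to flatten this signal.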

  • Defense Against Injecting Traffic Attacks in Wireless Mobile Ad-Hoc Networks

    Publication Year: 2007, Page(s): 227 - 239
    Cited by: Papers (3)
    PDF (713 KB) | HTML

    In ad-hoc networks, nodes need to cooperatively forward packets for each other. Without necessary countermeasures, such networks are extremely vulnerable to injecting-traffic attacks, especially those launched by insider attackers. Injecting an overwhelming amount of traffic into the network can easily cause network congestion and decrease the network lifetime. In this paper, we focus on injecting-traffic attacks launched by insider attackers. After investigating the possible types of such attacks, we propose two sets of defense mechanisms to combat them. The first set of defense mechanisms is fully distributed, while the second is centralized with a decentralized implementation. The detection performance of the proposed mechanisms is also formally analyzed. Both theoretical analysis and experimental studies demonstrate that, under the proposed defense mechanisms, there is almost no gain, from the attacker's point of view, in launching injecting-traffic attacks.

  • Securing Cooperative Ad-Hoc Networks Under Noise and Imperfect Monitoring: Strategies and Game Theoretic Analysis

    Publication Year: 2007, Page(s): 240 - 253
    Cited by: Papers (3)
    PDF (798 KB) | HTML

    In cooperative ad-hoc networks, nodes belong to the same authority, pursue common goals, and will usually unconditionally help each other. Consequently, without necessary countermeasures, such networks are extremely vulnerable to insider attacks, especially under noise and imperfect monitoring. In this paper, we present a game-theoretic analysis of securing cooperative ad-hoc networks against insider attacks in the presence of noise and imperfect monitoring. Focusing on the most basic networking functions, namely routing and packet forwarding, we model the interactions between good nodes and insider attackers as secure routing and packet-forwarding games. We study the worst-case scenarios, in which good nodes initially do not know who the attackers are while insider attackers know which nodes are good. Optimal defense strategies are devised, in the sense that no other strategies can further increase the good nodes' payoff under attack. Meanwhile, the optimal attacking strategies and the maximum possible damage attackers can cause are discussed. Extensive simulation studies have also been conducted to evaluate the effectiveness of the proposed strategies.

  • Performance Analysis of Robust Audio Hashing

    Publication Year: 2007, Page(s): 254 - 266
    Cited by: Papers (12)
    PDF (856 KB) | HTML

    We present a novel theoretical analysis of the Philips audio fingerprinting method proposed by Haitsma, Kalker, and Oostveen (2001). Although this robust hashing algorithm exhibits very good performance, it has only been partially analyzed in the literature. Hence, there is a clear need for a more complete analysis that allows both performance prediction and systematic optimization. We examine here the theoretical performance of the method for Gaussian inputs by means of a statistical model. Our analysis relies on formulating the unquantized fingerprint as a quadratic form, which affords a systematic way to compute the model parameters. We provide closed-form analytical upper bounds on the probability of bit error of the hash for two relevant scenarios: noise addition and desynchronization. We show that these results remain useful when applied to real audio signals.
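    The sub-fingerprint extraction being modeled, from Haitsma, Kalker, and Oostveen's method, takes each bit as the sign of a band-energy difference differenced again across frames; a sketch (with a toy energy matrix, and names that are our own) is:

```python
import numpy as np

def fingerprint_bits(energies):
    """energies: (frames, bands) array of spectral band energies for
    overlapping frames. Bit (n, m) is 1 iff the energy difference between
    bands m and m+1 increases from frame n-1 to frame n."""
    band_diff = energies[:, :-1] - energies[:, 1:]      # difference across bands
    return (band_diff[1:] - band_diff[:-1] > 0).astype(np.uint8)  # across frames
```

    The quadratic-form analysis in the paper models exactly this unquantized double difference before the sign is taken.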

  • A Cascade Framework for a Real-Time Statistical Plate Recognition System

    Publication Year: 2007, Page(s): 267 - 282
    Cited by: Papers (24)
    PDF (2677 KB) | HTML

    This paper describes a plate recognition system that can process images rapidly at high accuracy rates. The system is designed to meet the performance, computational-speed, and adaptation requirements of vehicle surveillance applications, such as stolen-car detection. These requirements are satisfied by adopting a cascade framework, utilizing plate characteristics, and developing fast one-pass algorithms. Our system is composed of three main cascading modules for plate detection, character segmentation, and postprocessing. Each module is further decomposed into several cascading procedures composed of successively more complex rejecters. The first module rapidly rejects the majority of nonplate regions using computationally cheap gradient features and a one-pass scanning algorithm, followed by computationally heavier statistical rejecters. The second module rejects the majority of noncharacter regions in a similar manner; a peak-valley analysis algorithm is proposed to rapidly detect all promising candidate character regions. The third module eliminates detected characters that do not satisfy the plate specifications. In our experiments, the system recognizes plates at over 38 frames per second at a resolution of 640 × 480 pixels on a 3-GHz Intel Pentium 4 personal computer.

  • Performance Analysis of Scalar DC–QIM for Zero-Bit Watermarking

    Publication Year: 2007, Page(s): 283 - 289
    Cited by: Papers (1)
    PDF (3126 KB) | HTML

    Quantization-based schemes, such as scalar distortion-compensated quantization index modulation (DC-QIM), have demonstrated performance merits in data hiding, which is mainly a transmission problem. However, a number of applications can be stated in terms of the watermark detection problem (also named zero-bit watermarking), and this situation has seldom been addressed in the literature for quantization-based techniques. In this context, we carry out a complete performance analysis of dithered uniform-quantizer-based QIM with distortion compensation under additive white Gaussian noise. Using large-deviation theory, performance is evaluated according to the receiver operating characteristic (ROC) and the total probability of detection error. Under the white Gaussian host-signal assumption, scalar DC-QIM is compared with other existing watermarking methods, including quantized projection (QP), spread spectrum (SS), and improved SS (ISS), for which zero-bit watermarking performance with a correlation detector is also derived. Among the compared methods, dithered scalar DC-QIM is shown to be the most relevant choice for zero-bit watermarking, due to its host-independent performance. A short comparison is also provided with respect to the corresponding transmission problem, thus evaluating the loss in performance due to detection. We conclude by measuring the performance gain that could be provided by the use of lattice quantizers more sophisticated than the cubic structure.

  • IEEE Transactions on Information Forensics and Security EDICS

    Publication Year: 2007, Page(s): 290
    PDF (19 KB) | Freely Available from IEEE
  • IEEE Transactions on Information Forensics and Security Information for authors

    Publication Year: 2007, Page(s): 291 - 292
    PDF (44 KB) | Freely Available from IEEE
  • Special issue on Multimedia Applications in Mobile/Wireless Context

    Publication Year: 2007, Page(s): 293
    PDF (160 KB) | Freely Available from IEEE
  • Special issue on MIMO-Optimized Transmission Systems for Delivering Data and Rich Content

    Publication Year: 2007, Page(s): 294
    PDF (119 KB) | Freely Available from IEEE
  • Special issue on Genomic and Proteomic Signal Processing

    Publication Year: 2007, Page(s): 295
    PDF (128 KB) | Freely Available from IEEE
  • 2008 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)

    Publication Year: 2007, Page(s): 296
    PDF (573 KB) | Freely Available from IEEE
  • IEEE Signal Processing Society Information

    Publication Year: 2007, Page(s): C3
    PDF (31 KB) | Freely Available from IEEE

Aims & Scope

The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, and surveillance, as well as systems applications that incorporate these features.


Meet Our Editors

Editor-in-Chief
Mauro Barni
University of Siena, Italy