
IEEE Transactions on Information Forensics and Security

Issue 2 • June 2010

  • Table of contents

    Publication Year: 2010 , Page(s): C1 - C4
    PDF (42 KB)
    Freely Available from IEEE
  • IEEE Transactions on Information Forensics and Security publication information

    Publication Year: 2010 , Page(s): C2
    PDF (36 KB)
    Freely Available from IEEE
  • Edge Adaptive Image Steganography Based on LSB Matching Revisited

    Publication Year: 2010 , Page(s): 201 - 214
    Cited by:  Papers (39)
    PDF (6400 KB) | HTML

    Least-significant-bit (LSB)-based approaches are a popular class of steganographic algorithms in the spatial domain. However, we find that in most existing approaches the choice of embedding positions within a cover image mainly depends on a pseudorandom number generator, without considering the relationship between the image content itself and the size of the secret message. Thus, the smooth/flat regions in the cover images will inevitably be contaminated after data hiding, even at a low embedding rate, and this leads to poor visual quality and low security based on our analysis and extensive experiments, especially for images with many smooth regions. In this paper, we extend LSB matching revisited image steganography and propose an edge adaptive scheme which selects the embedding regions according to the size of the secret message and the difference between two consecutive pixels in the cover image. For lower embedding rates, only sharper edge regions are used, while the smoother regions are kept as they are. When the embedding rate increases, more edge regions can be released adaptively for data hiding by adjusting just a few parameters. Experimental results on 6000 natural images with three specific and four universal steganalytic algorithms show that the new scheme enhances security significantly compared with typical LSB-based approaches as well as their edge adaptive counterparts, such as pixel-value-differencing-based approaches, while preserving higher visual quality of the stego images.
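
    The region-selection step can be pictured with a small sketch. The following is not the authors' exact algorithm, only a minimal illustration of the stated idea: pair consecutive pixels, measure their absolute difference, and lower a threshold until the selected (sharper-edge) pairs offer enough capacity for the message. The threshold range, the pixel-pair unit, and the 2-bits-per-pair capacity are illustrative assumptions.

    ```python
    import numpy as np

    def select_embedding_units(cover, msg_bits, thresholds=range(31, -1, -1)):
        """Pick pixel pairs whose absolute difference exceeds a threshold t,
        lowering t until the selected region can hold the message.
        Each selected pair is assumed to carry 2 bits (as in LSB matching revisited)."""
        pixels = cover.astype(np.int16).flatten()
        pairs = pixels[: len(pixels) // 2 * 2].reshape(-1, 2)   # consecutive pixel pairs
        diffs = np.abs(pairs[:, 0] - pairs[:, 1])
        for t in thresholds:                                    # from sharp edges toward flat areas
            region = np.flatnonzero(diffs >= t)
            if 2 * len(region) >= msg_bits:                     # 2 bits per selected pair
                return t, region
        raise ValueError("message too long for this cover image")

    # Example: a random 8-bit grayscale cover and a 4000-bit message
    cover = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
    t, region = select_embedding_units(cover, msg_bits=4000)
    print("threshold", t, "usable pairs", len(region))
    ```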

  • Steganalysis by Subtractive Pixel Adjacency Matrix

    Publication Year: 2010 , Page(s): 215 - 224
    Cited by:  Papers (60)
    PDF (717 KB) | HTML

    This paper presents a method for detecting steganographic schemes that embed in the spatial domain by adding a low-amplitude independent stego signal, an example of which is least significant bit (LSB) matching. First, arguments are provided for modeling the differences between adjacent pixels using first-order and second-order Markov chains. Subsets of the sample transition probability matrices are then used as features for a steganalyzer implemented with support vector machines. The major part of the experiments, performed on four diverse image databases, focuses on evaluating the detection of LSB matching. The comparison with prior art reveals that the presented feature set offers superior accuracy in detecting LSB matching. Even though the feature set was developed specifically for spatial-domain steganalysis, steganalyzers constructed for ten JPEG-domain algorithms demonstrate that the features detect steganography in the transform domain as well.
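
    As a rough illustration of the feature construction described above (not the paper's exact SPAM feature set), the sketch below computes truncated horizontal pixel differences and uses the sample first-order transition probability matrix as a feature vector; the truncation threshold T and the single scan direction are simplifying assumptions.

    ```python
    import numpy as np

    def spam_first_order(img, T=4):
        """Illustrative first-order SPAM-style features for one direction
        (left-to-right): truncated pixel differences modeled as a Markov
        chain, with the sample transition matrix used as the feature vector."""
        d = img[:, :-1].astype(np.int32) - img[:, 1:].astype(np.int32)   # horizontal differences
        d = np.clip(d, -T, T)
        pairs = np.stack([d[:, :-1].ravel(), d[:, 1:].ravel()], axis=1)  # (d_i, d_{i+1}) pairs
        counts = np.zeros((2 * T + 1, 2 * T + 1))
        for a, b in pairs:
            counts[a + T, b + T] += 1
        row_sums = counts.sum(axis=1, keepdims=True)
        trans = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
        return trans.ravel()    # (2T+1)^2 features; the full method averages several directions

    img = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
    features = spam_first_order(img)
    print(features.shape)       # (81,) for T = 4
    ```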

  • Matrix Embedding With Pseudorandom Coefficient Selection and Error Correction for Robust and Secure Steganography

    Publication Year: 2010 , Page(s): 225 - 239
    Cited by:  Papers (5)
    PDF (1080 KB) | HTML

    In matrix embedding (ME)-based steganography, the host coefficients are minimally perturbed such that the transmitted bits fall in a coset of a linear code, with the syndrome conveying the hidden bits. The corresponding embedding distortion and vulnerability to steganalysis are significantly less than those of conventional quantization index modulation (QIM)-based hiding. However, ME is less robust to attacks, with a single host bit error leading to multiple decoding errors for the hidden bits. In this paper, we employ the ME-RA scheme, a combination of ME-based hiding with powerful repeat-accumulate (RA) codes for error correction, to address this problem. A key contribution of this paper is to compute log-likelihood ratios for RA decoding, taking into account the many-to-one mapping between the host coefficients and an encoded bit for ME. To reduce detectability, we hide in randomized blocks, as in the recently proposed Yet Another Steganographic Scheme (YASS), replacing the QIM-based embedding in YASS with the proposed ME-RA scheme. We also show that the embedding performance can be improved by employing punctured RA codes. Through experiments on a couple of thousand images, we show that for the same embedded data rate and a moderate attack level, the proposed ME-based method results in a lower detection rate than that obtained for QIM-based YASS.
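
    Matrix embedding itself is the prior-art building block here; the ME-RA combination and RA decoding are not reproduced below. As a minimal sketch of syndrome-coding-based embedding, the example uses the [7,4] binary Hamming code to hide 3 bits in 7 host LSBs while changing at most one of them.

    ```python
    import numpy as np

    # Parity-check matrix of the [7,4] binary Hamming code: column j encodes j+1 in binary.
    H = np.array([[0, 0, 0, 1, 1, 1, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [1, 0, 1, 0, 1, 0, 1]], dtype=np.uint8)

    def me_embed(lsb7, msg3):
        """Flip at most one of 7 host LSBs so that their syndrome equals the 3 message bits."""
        syndrome = H.dot(lsb7) % 2
        diff = (syndrome + msg3) % 2                # which syndrome change is needed
        out = lsb7.copy()
        if diff.any():
            col = int(diff[0]) * 4 + int(diff[1]) * 2 + int(diff[2]) - 1  # column with that syndrome
            out[col] ^= 1
        return out

    def me_extract(lsb7):
        return H.dot(lsb7) % 2

    lsbs = np.random.randint(0, 2, 7).astype(np.uint8)
    msg = np.array([1, 0, 1], dtype=np.uint8)
    stego = me_embed(lsbs, msg)
    assert (me_extract(stego) == msg).all() and (stego != lsbs).sum() <= 1
    ```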

  • Information-Theoretically Secret Key Generation for Fading Wireless Channels

    Publication Year: 2010 , Page(s): 240 - 254
    Cited by:  Papers (50)  |  Patents (1)
    PDF (1318 KB) | HTML

    The multipath-rich wireless environment associated with typical wireless usage scenarios is characterized by a fading channel response that is time-varying, location-sensitive, and uniquely shared by a given transmitter-receiver pair. The complexity associated with a richly scattering environment implies that the short-term fading process is inherently hard to predict and best modeled stochastically, with rapid decorrelation properties in space, time, and frequency. In this paper, we demonstrate how the channel state between a wireless transmitter and receiver can be used as the basis for building practical secret key generation protocols between two entities. We begin by presenting a scheme based on level crossings of the fading process, which is well suited to the Rayleigh and Rician fading models associated with a richly scattering environment. Our level-crossing algorithm is simple and incorporates a self-authenticating mechanism to prevent adversarial manipulation of message exchanges during the protocol. Since the level-crossing algorithm is best suited to fading processes that exhibit symmetry in their underlying distribution, we present a second, more powerful approach that is suited to more general channel state distributions. This second approach is motivated by observations from quantizing jointly Gaussian processes, but exploits empirical measurements to set quantization boundaries and a heuristic log-likelihood ratio estimate to achieve an improved secret key generation rate. We validate both proposed protocols through experiments using a customized 802.11a platform, and show for the typical WiFi channel that reliable secret key establishment can be accomplished at rates on the order of 10 b/s.
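
    A minimal sketch of the level-crossing idea (not the authors' full protocol, which also includes excursion-index exchange and self-authentication) is shown below: samples of a correlated channel measurement that stay above an upper threshold or below a lower threshold for at least m consecutive samples each contribute one key bit. The thresholds, the excursion length m, and the simulated RSSI traces are illustrative assumptions.

    ```python
    import numpy as np

    def level_crossing_bits(rssi, m=4, alpha=0.5):
        """Illustrative level-crossing quantizer: q+/q- sit alpha standard deviations
        above/below the mean, and an excursion of at least m consecutive samples on
        one side of the guard band contributes a single key bit."""
        mean, std = rssi.mean(), rssi.std()
        q_plus, q_minus = mean + alpha * std, mean - alpha * std
        labels = np.where(rssi > q_plus, 1, np.where(rssi < q_minus, 0, -1))  # -1 = guard band
        bits, run_val, run_len = [], None, 0
        for v in labels:
            if v == run_val and v != -1:
                run_len += 1
            else:
                if run_val in (0, 1) and run_len >= m:
                    bits.append(int(run_val))
                run_val, run_len = v, 1
        if run_val in (0, 1) and run_len >= m:
            bits.append(int(run_val))
        return bits

    # Two correlated channel observations (Alice and Bob) with independent noise
    common = np.cumsum(np.random.randn(2000)) * 0.1
    alice = level_crossing_bits(common + 0.01 * np.random.randn(2000))
    bob = level_crossing_bits(common + 0.01 * np.random.randn(2000))
    print(len(alice), len(bob))
    ```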

  • Blind Authentication: A Secure Crypto-Biometric Verification Protocol

    Publication Year: 2010 , Page(s): 255 - 268
    Cited by:  Papers (13)
    PDF (1093 KB) | HTML

    Concerns about the widespread use of biometric authentication systems are primarily centered on template security, revocability, and privacy. The use of cryptographic primitives to bolster the authentication process can alleviate some of these concerns, as shown by biometric cryptosystems. In this paper, we propose a provably secure and blind biometric authentication protocol, which addresses the concerns of user privacy, template protection, and trust. The protocol is blind in the sense that it reveals only the identity, and no additional information about the user or the biometric, to the authenticating server, or vice versa. As the protocol is based on asymmetric encryption of the biometric data, it captures the advantages of biometric authentication as well as the security of public-key cryptography. The authentication protocol can run over public networks and provide nonrepudiable identity verification. The encryption also provides template protection, the ability to revoke enrolled templates, and relief from privacy concerns arising from the widespread use of biometrics. The proposed approach makes no restrictive assumptions about the biometric data and is hence applicable to multiple biometrics. Such a protocol has significant advantages over existing biometric cryptosystems, which use a biometric to secure a secret key that is in turn used for authentication. We analyze the security of the protocol under various attack scenarios. Experimental results on four biometric datasets (face, iris, hand geometry, and fingerprint) show that carrying out the authentication in the encrypted domain does not affect accuracy, while the encryption key acts as an additional layer of security.

  • Key Extraction From General Nondiscrete Signals

    Publication Year: 2010 , Page(s): 269 - 279
    Cited by:  Papers (4)
    PDF (442 KB) | HTML

    We address the problem of designing optimal schemes for the generation of secure cryptographic keys from continuous noisy data. We argue that, contrary to the discrete case, a universal fuzzy extractor does not exist. This implies that in the continuous case, key extraction schemes have to be designed for particular probability distributions. We extend the known definitions of the correctness and security properties of fuzzy extractors so that they apply to continuous as well as discrete variables. We propose a generic construction for fuzzy extractors from noisy continuous sources, using independent partitions. The extra freedom in the choice of discretization, which does not exist in the discrete case, is used to advantage to give the extracted key a uniform distribution. We analyze the privacy properties of the scheme and the error probabilities in a one-dimensional toy model with simplified noise. Finally, we study the security implications of incomplete knowledge of the source's probability distribution P. We derive a bound on the min-entropy of the extracted key under the worst-case assumption that the attacker knows P exactly.
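
    The equal-probability discretization idea can be sketched as follows. This is not the paper's construction in full (no helper data or noise reconciliation is shown), just an illustration of partitioning an assumed Gaussian source into 2^b equiprobable cells so that the extracted bits come out close to uniform; the source model and b are assumptions.

    ```python
    import numpy as np
    from scipy.stats import norm

    def equiprobable_quantize(samples, bits_per_sample=2, mu=0.0, sigma=1.0):
        """Map each continuous sample to one of 2^b cells that are equiprobable
        under the assumed N(mu, sigma^2) source, so the key bits are near-uniform."""
        n_cells = 2 ** bits_per_sample
        # Cell boundaries at the distribution's quantiles 1/n, 2/n, ..., (n-1)/n
        edges = norm.ppf(np.arange(1, n_cells) / n_cells, loc=mu, scale=sigma)
        cells = np.searchsorted(edges, samples)          # cell index 0 .. n_cells-1
        return ((cells[:, None] >> np.arange(bits_per_sample)[::-1]) & 1).ravel()

    source = np.random.randn(1000)                        # enrollment measurement
    key = equiprobable_quantize(source)
    print(key[:16], "ones fraction:", key.mean())         # close to 0.5 for a uniform key
    ```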

  • Source Camera Identification Using Enhanced Sensor Pattern Noise

    Publication Year: 2010 , Page(s): 280 - 287
    Cited by:  Papers (25)
    PDF (794 KB) | HTML

    Sensor pattern noise (SPN), extracted from digital images to serve as the fingerprint of an imaging device, has proven to be an effective means of digital device identification. However, as we demonstrate in this work, the limitation of current SPN extraction methods is that the SPN extracted from an image can be severely contaminated by details from the scene; as a result, the identification rate is unsatisfactory unless images of a large size are used. In this work, we propose a novel approach for attenuating the influence of scene details on the SPN so as to improve the device identification rate. The hypothesis underlying our SPN enhancement method is that the stronger a signal component in an SPN is, the less trustworthy that component is, and the more it should be attenuated. This hypothesis suggests that an enhanced SPN can be obtained by assigning weighting factors inversely proportional to the magnitude of the SPN components.
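
    The enhancement hypothesis can be illustrated with a short sketch. The weighting model below (an exponential attenuation with a hypothetical parameter alpha) is only one function that decreases with component magnitude, not the paper's specific model, and a Wiener-filter residual stands in for the usual SPN extraction step.

    ```python
    import numpy as np
    from scipy.signal import wiener

    def enhanced_spn(image, alpha=7.0):
        """Noise residual (image minus its denoised version) with large-magnitude
        components attenuated, following the idea that strong residual components
        are more likely to come from scene detail than from the sensor pattern."""
        img = image.astype(np.float64)
        residual = img - wiener(img, mysize=5)              # basic denoising residual
        weights = np.exp(-(residual ** 2) / alpha)          # one possible attenuation model
        return residual * weights

    def correlate(spn_a, spn_b):
        """Normalized correlation used to match a test residual against a camera fingerprint."""
        a, b = spn_a - spn_a.mean(), spn_b - spn_b.mean()
        return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

    img = np.random.randint(0, 256, (256, 256)).astype(np.float64)
    print(correlate(enhanced_spn(img), enhanced_spn(img)))  # 1.0 against itself
    ```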

  • Predictive Network Anomaly Detection and Visualization

    Publication Year: 2010 , Page(s): 288 - 299
    Cited by:  Papers (7)
    PDF (1701 KB) | HTML

    Various approaches have been developed for quantifying and displaying network traffic information, both for determining network status and for detecting anomalies. Although many of these methods are effective, they rely on the collection of long-term network statistics. Here, we present an approach that uses short-term observations of network features and their respective time-averaged entropies. Acute changes are localized in network feature space using adaptive Wiener filtering and autoregressive moving average modeling. The color-enhanced datagram is designed to allow a network engineer to capture and visually comprehend at a glance the statistical characteristics of a network anomaly. First, the average entropy for each feature is calculated for every second of observation. The resulting short-term measurements are then subjected to first- and second-order time-averaging statistics. These measurements are the basis of a novel approach to anomaly estimation based on the well-known Fisher linear discriminant (FLD). Average port, high port, server ports, and peered ports are some of the network features used for stochastic clustering and filtering. We empirically determine that these network features obey Gaussian-like distributions. The proposed algorithm is tested on real-time network traffic data from Ohio University's main Internet connection. Experimentation has shown that the presented FLD-based scheme is accurate in identifying anomalies in network feature space, in localizing anomalies in network traffic flow, and in helping network engineers prevent potential hazards. Furthermore, it is highly effective in providing a colorized visualization chart to network analysts in the presence of bursty network traffic.
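
    Two of the ingredients named above, per-window feature entropy and the Fisher linear discriminant, can be sketched briefly. The code below is not the paper's pipeline (no Wiener/ARMA prediction or visualization); the feature choice, window contents, and synthetic data are illustrative.

    ```python
    import numpy as np

    def shannon_entropy(values):
        """Empirical Shannon entropy (bits) of the values seen in one observation window."""
        _, counts = np.unique(values, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    def fisher_direction(X_normal, X_anomaly):
        """Fisher linear discriminant: direction w maximizing between-class over
        within-class scatter, w = Sw^-1 (mu1 - mu0)."""
        mu0, mu1 = X_normal.mean(axis=0), X_anomaly.mean(axis=0)
        Sw = np.cov(X_normal, rowvar=False) + np.cov(X_anomaly, rowvar=False)
        return np.linalg.solve(Sw, mu1 - mu0)

    # Entropy of (illustrative) destination-port observations in a 1-second window
    ports = np.random.choice([80, 443, 22, 53], size=500)
    print("port entropy:", shannon_entropy(ports))

    # FLD on two clusters of per-window entropy features
    normal = np.random.randn(200, 4) * 0.3
    anomalous = np.random.randn(50, 4) * 0.3 + 1.5
    w = fisher_direction(normal, anomalous)
    s_normal, s_anomalous = normal @ w, anomalous @ w
    print("mean score normal vs anomalous:", s_normal.mean(), s_anomalous.mean())
    ```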

  • Network-Based Traitor-Tracing Technique Using Traffic Pattern

    Publication Year: 2010 , Page(s): 300 - 313
    Cited by:  Papers (1)
    PDF (1198 KB) | HTML

    Today, with the rapid advance of broadband technology, streaming is used in many applications, such as content delivery systems and web conferencing systems. At the same time, digital rights management (DRM) must be implemented to control the spread of content and to avoid unintended content use. Traitor tracing is one of the key technologies used to construct DRM systems, enabling content distributors to observe and control content reception. Conventional methods make use of watermarking to embed information unique to each user. However, these methods need to produce many individualized copies of the content, which is not realistic for real-time streaming systems in particular. Furthermore, watermarking, the key technology adopted by contemporary methods, has known limitations and attacks against it. For these reasons, the authors have proposed a method to monitor the content stream using traffic patterns constructed solely from traffic volume information obtained from routers. The proposed method can determine who is watching the streaming content and whether or not a secondary content delivery exists. This information can also be used together with conventional methods to construct a more practical traitor-tracing system. A method to cope with random errors and burst errors has also been investigated. Finally, the results of simulations and a practical experiment are provided, demonstrating the effectiveness of the proposed approach.
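
    The underlying intuition, matching the per-interval traffic volume observed at a router against the volume pattern of the distributed stream, can be sketched as below. This is an illustration of the general idea only, not the authors' method; the interval length, correlation measure, and decision threshold are assumptions.

    ```python
    import numpy as np

    def normalized_correlation(x, y):
        """Zero-mean normalized correlation between two traffic-volume time series."""
        x, y = x - x.mean(), y - y.mean()
        return float((x * y).sum() / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12))

    def carries_stream(router_volume, stream_volume, threshold=0.8):
        """Decide whether the per-interval volume seen on a link matches the stream's pattern."""
        return normalized_correlation(router_volume, stream_volume) > threshold

    # Illustrative per-second byte counts of the distributed stream
    stream = np.abs(np.random.randn(300)) * 1e5
    watcher = stream * 0.98 + np.random.randn(300) * 1e3       # link of a receiving user
    unrelated = np.abs(np.random.randn(300)) * 1e5             # link carrying other traffic

    print(carries_stream(watcher, stream))      # True
    print(carries_stream(unrelated, stream))    # False (with high probability)
    ```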

  • Lossless Data Embedding in Electronic Inks

    Publication Year: 2010 , Page(s): 314 - 323
    Cited by:  Papers (3)
    PDF (1062 KB) | HTML

    This paper presents a novel lossless data embedding algorithm for electronic inks. The proposed algorithm first computes the analytical ink-curve for each stroke as a set of smoothly concatenated cubic Bezier curves. During embedding, a set of data carrier points on the ink-curve is evaluated, perturbed, and then inserted into the original point array. Our smooth ink-curve generation method iteratively refines the parameterization based on the local curve geometry. We demonstrate experimentally that the inserted points incur the least perceptual distortion on the marked ink-curves, even after a large amount of secret data is embedded. On extraction, the perturbed data carriers are identified and the original point array can be recovered. By subtracting these perturbed data carriers from their recomputed reference locations, we derive a sequence of perturbation vectors, from which the embedded secret message can be decoded. Based on the proposed embedding technique and public-key cryptography, a hybrid secure ink authentication system is designed to protect electronic writings against tampering.
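
    A much-simplified sketch of the carrier-perturbation idea follows. It uses midpoints of consecutive stroke samples as the recomputable reference locations instead of the paper's Bezier ink-curve, and encodes one bit in the sign of each carrier's vertical offset; the perturbation magnitude DELTA is an illustrative assumption.

    ```python
    import numpy as np

    DELTA = 0.25    # illustrative perturbation magnitude (ink units)

    def embed(stroke, bits):
        """Insert one perturbed midpoint between consecutive samples per message bit."""
        out = [stroke[0]]
        for i, bit in enumerate(bits):
            p, q = stroke[i], stroke[i + 1]
            ref = (p + q) / 2.0                             # recomputable reference location
            out.append(ref + np.array([0.0, DELTA if bit else -DELTA]))
            out.append(q)
        out.extend(stroke[len(bits) + 1:])
        return np.array(out)

    def extract(marked, n_bits):
        """Recover bits from the sign of each carrier's offset, and restore the original stroke."""
        bits, original = [], [marked[0]]
        for i in range(n_bits):
            p, carrier, q = marked[2 * i], marked[2 * i + 1], marked[2 * i + 2]
            bits.append(1 if (carrier - (p + q) / 2.0)[1] > 0 else 0)
            original.append(q)
        original.extend(marked[2 * n_bits + 1:])
        return bits, np.array(original)

    stroke = np.cumsum(np.random.randn(20, 2), axis=0)      # a synthetic ink stroke
    bits = [1, 0, 1, 1, 0]
    marked = embed(stroke, bits)
    recovered_bits, restored = extract(marked, len(bits))
    assert recovered_bits == bits and np.allclose(restored, stroke)
    ```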

  • Providing Witness Anonymity Under Peer-to-Peer Settings

    Publication Year: 2010 , Page(s): 324 - 336
    Cited by:  Papers (2)
    PDF (534 KB) | HTML

    In this paper, we introduce the concept of witness anonymity for peer-to-peer systems, as well as other systems of a peer-to-peer nature. Witness anonymity combines the seemingly conflicting requirements of anonymity (for honest peers who report on the misbehavior of other peers) and accountability (for malicious peers that attempt to misuse the anonymity feature to slander honest peers). We propose the Secure Deep Throat (SDT) protocol to provide anonymity for witnesses of malicious or selfish behavior, enabling such peers to report on this behavior without fear of retaliation. On the other hand, the misuse of anonymity in SDT is restrained in such a way that any malicious peer attempting to send multiple claims against the same innocent peer for the same reason (i.e., the same misbehavior type) can be identified. We also describe how SDT can be used in two modes. The active mode can be used in scenarios with real-time requirements, e.g., detecting and preventing the propagation of peer-to-peer worms, whereas the passive mode is suitable for scenarios without strict real-time requirements, e.g., query-based reputation systems. We analyze the security and overhead of SDT, and present countermeasures that can be used to mitigate various attacks on the protocol. Moreover, we show with a few examples how SDT can be easily integrated with existing protocols and mechanisms. Our analysis shows that the communication, storage, and computation overheads of SDT are acceptable in peer-to-peer systems.

  • Information Leakage in Fuzzy Commitment Schemes

    Publication Year: 2010 , Page(s): 337 - 348
    Cited by:  Papers (13)
    PDF (461 KB) | HTML

    In 1999, Juels and Wattenberg introduced the fuzzy commitment scheme. This scheme is a particular realization of a binary biometric secrecy system with chosen secret keys. It became a popular technique for designing biometric secrecy systems, since it is convenient and easy to implement using standard error-correcting codes. This paper investigates the privacy and secrecy leakage in fuzzy commitment schemes. The analysis is carried out for four cases of biometric data statistics: memoryless totally symmetric, memoryless input-symmetric, memoryless, and stationary ergodic. First, the achievable regions are determined for the cases when the data statistics are memoryless totally symmetric and memoryless input-symmetric. For the general memoryless and stationary ergodic cases, only outer bounds on the achievable rate-leakage regions are provided. These bounds, however, are sharpened for systematic parity-check codes. Given the achievable regions (bounds), the optimality of fuzzy commitment is assessed. The analysis shows that fuzzy commitment is only optimal for the memoryless totally symmetric case, and only if the scheme operates at the maximum secret-key rate. Moreover, it is demonstrated that for the general memoryless and stationary ergodic cases, the scheme leaks information on both the secret and the biometric data.
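
    For reference, the Juels-Wattenberg construction analyzed in this paper can be sketched in a few lines: the helper data is the XOR of a random codeword and the enrollment biometric, and a noisy probe recovers the secret as long as the noise stays within the code's error-correcting capability. The repetition code and the parameters below are illustrative simplifications; a real system would use a stronger code (e.g., BCH).

    ```python
    import numpy as np

    R = 5   # repetition factor of the toy error-correcting code

    def encode(secret_bits):
        return np.repeat(secret_bits, R)                    # repetition-code codeword

    def decode(word):
        return (word.reshape(-1, R).sum(axis=1) > R // 2).astype(np.uint8)  # majority vote

    def commit(secret_bits, biometric):
        """Fuzzy commitment: helper data = codeword XOR enrollment biometric."""
        return encode(secret_bits) ^ biometric

    def open_commitment(helper, probe):
        """Recover the secret from a noisy probe of the same biometric."""
        return decode(helper ^ probe)

    secret = np.random.randint(0, 2, 16, dtype=np.uint8)
    enrollment = np.random.randint(0, 2, 16 * R, dtype=np.uint8)
    noise = np.zeros(16 * R, dtype=np.uint8)
    noise[::R] = 1                  # one flipped bit per block, within the correction capability
    probe = enrollment ^ noise

    helper = commit(secret, enrollment)
    assert (open_commitment(helper, probe) == secret).all()
    ```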

  • SVD-Based Universal Spatial Domain Image Steganalysis

    Publication Year: 2010 , Page(s): 349 - 353
    Cited by:  Papers (13)
    PDF (268 KB) | HTML

    This paper is concerned with the universal (blind) image steganalysis problem and introduces a novel method to detect, in particular, spatial-domain steganographic methods. The proposed steganalyzer models the linear dependencies of image rows/columns in local neighborhoods using the singular value decomposition transform and exploits the content independence provided by a Wiener filtering process. Experimental results show that the novel method has superior performance compared with its counterparts on spatial-domain steganography. Experiments also demonstrate the method's reasonable ability to detect discrete cosine transform-based steganography as well as the perturbed quantization method.
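
    The two stated ingredients, content suppression by Wiener filtering and singular-value features of local neighborhoods, can be sketched as below. The block size, the non-overlapping block layout, and the averaged-spectrum summary are illustrative choices, not the paper's exact feature set.

    ```python
    import numpy as np
    from scipy.signal import wiener

    def svd_block_features(image, block=8):
        """Content-suppressed residual (Wiener filter) followed by singular values
        of non-overlapping local blocks; the averaged spectrum serves as a feature vector."""
        img = image.astype(np.float64)
        residual = img - wiener(img, mysize=3)              # reduce dependency on image content
        h, w = (np.array(residual.shape) // block) * block
        blocks = residual[:h, :w].reshape(h // block, block, w // block, block).swapaxes(1, 2)
        blocks = blocks.reshape(-1, block, block)
        spectra = np.linalg.svd(blocks, compute_uv=False)   # singular values per block
        return spectra.mean(axis=0)                         # one averaged spectrum as features

    img = np.random.randint(0, 256, (128, 128)).astype(np.float64)
    print(svd_block_features(img))                          # length-8 feature vector
    ```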

  • A Method for Automatic Identification of Signatures of Steganography Software

    Publication Year: 2010 , Page(s): 354 - 358
    Cited by:  Papers (3)
    PDF (115 KB) | HTML

    A fully automated, blind, media-type-agnostic approach to steganalysis is presented here. Steganography can sometimes be exposed by automatically characterizing and detecting regularities in output media caused by weak implementations of steganography algorithms. Fast and accurate detection of steganography is demonstrated experimentally across a range of media types and a variety of steganography approaches.

  • IEEE Transactions on Information Forensics and Security EDICS

    Publication Year: 2010 , Page(s): 359
    PDF (21 KB)
    Freely Available from IEEE
  • IEEE Transactions on Information Forensics and Security Information for authors

    Publication Year: 2010 , Page(s): 360 - 361
    PDF (46 KB)
    Freely Available from IEEE
  • Special issue on Using the Physical Layer for Securing the Next Generation of Communication Systems

    Publication Year: 2010 , Page(s): 362
    PDF (140 KB)
    Freely Available from IEEE
  • Special issue on New Frontiers in Rich Transcription

    Publication Year: 2010 , Page(s): 363
    PDF (126 KB)
    Freely Available from IEEE
  • Special issue on Adaptive Sparse Representation of Data and Applications in Signal and Image Processing

    Publication Year: 2010 , Page(s): 364
    PDF (141 KB)
    Freely Available from IEEE
  • IEEE Signal Processing Society Information

    Publication Year: 2010 , Page(s): C3
    PDF (32 KB)
    Freely Available from IEEE

Aims & Scope

The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.

Full Aims & Scope

Meet Our Editors

Editor-in-Chief
Mauro Barni
University of Siena, Italy