IEEE Transactions on Information Forensics and Security

Issue 2 • June 2006

  • Table of contents

    Page(s): c1 - c4
    Freely Available from IEEE
  • IEEE Transactions on Information Forensics and Security publication information

    Page(s): c2
    Freely Available from IEEE
  • Biometrics: a tool for information security

    Page(s): 125 - 143

    Establishing identity is becoming critical in our vastly interconnected society. Questions such as "Is she really who she claims to be?", "Is this person authorized to use this facility?", or "Is he on the watchlist posted by the government?" are routinely posed in a variety of scenarios ranging from issuing a driver's license to gaining entry into a country. The need for reliable user authentication techniques has increased in the wake of heightened concerns about security and rapid advancements in networking, communication, and mobility. Biometrics, described as the science of recognizing an individual based on his or her physical or behavioral traits, is beginning to gain acceptance as a legitimate method for determining an individual's identity. Biometric systems have now been deployed in various commercial, civilian, and forensic applications as a means of establishing identity. In this paper, we provide an overview of biometrics and discuss some of the salient research issues that need to be addressed to make biometric technology an effective tool for providing information security. The primary contributions of this overview include: 1) examining applications where biometrics can solve issues pertaining to information security; 2) enumerating the fundamental challenges encountered by biometric systems in real-world applications; and 3) discussing solutions to address the problems of scalability and security in large-scale authentication systems.

  • EyeCerts

    Page(s): 144 - 153

    In this paper, we propose EyeCerts, a biometric system for identifying people that achieves offline verification of certified, cryptographically secure documents. An EyeCert is a printed document which certifies the association of the content on the document with a biometric feature, in this work a compressed version of a human iris. The system is highly cost-effective since it does not require high-complexity, hard-to-replicate printing technologies. Further, the device used to verify an EyeCert is inexpensive, estimated to cost approximately the same as an off-the-shelf iris-scanning camera. As a central component of the EyeCert system, we present an iris analysis technique that aims to extract and compress the unique features of a given iris with a discrimination criterion using limited storage. The compressed features should be at maximal distance with respect to a reference iris image database. The iris analysis algorithm performs several steps in three main phases: 1) it detects the human iris using a new model which compensates for the noise introduced by the surrounding eyelashes and eyelids; 2) it converts the isolated iris, using a modified Fourier-Mellin transform, into a standard domain where the common radial patterns of the human iris are concisely represented; and 3) it optimally selects, aligns, and near-optimally compresses the most distinctive transform coefficients for each individual user. Using a low-quality imaging system (sub-U.S. $100), a χ² error-distribution model, and a fixed false-negative rate of 5%, EyeCert achieved false-positive rates better than 10⁻⁵, and as low as 10⁻³⁰ for certain users.

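    The modified Fourier-Mellin step above can be approximated by resampling the isolated iris onto a log-polar grid and taking a 2-D FFT magnitude, so that radial patterns become compact and iris rotation reduces to a cyclic shift. The Python sketch below is a hypothetical illustration of that idea, not the authors' implementation; grid sizes and the nearest-neighbor sampling are assumptions.

    ```python
    # Illustrative sketch: log-polar resampling followed by an FFT, the
    # standard route to a Fourier-Mellin-style representation. The paper's
    # coefficient selection and compression stages are not shown.
    import numpy as np

    def log_polar_fft(iris_patch, n_radii=64, n_angles=128):
        h, w = iris_patch.shape
        cy, cx = h / 2.0, w / 2.0
        # Log-spaced radii and uniform angles (assumed sampling grid).
        radii = np.exp(np.linspace(0.0, np.log(min(cy, cx)), n_radii))
        angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
        rr, aa = np.meshgrid(radii, angles, indexing="ij")
        ys = np.clip((cy + rr * np.sin(aa)).astype(int), 0, h - 1)
        xs = np.clip((cx + rr * np.cos(aa)).astype(int), 0, w - 1)
        polar = iris_patch[ys, xs]          # nearest-neighbor resampling
        return np.abs(np.fft.fft2(polar))   # rotation becomes a cyclic shift
    ```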
  • Performance analysis of iris-based identification system at the matching score level

    Page(s): 154 - 168

    Practical iris-based identification systems are easily accessible for data collection at the matching score level. In a typical setting, a video camera is used to collect a single frontal-view image of good quality. The image is then preprocessed, encoded, and compared with all entries in the biometric database, resulting in a single highest matching score. In this paper, we assume that multiple scans from the same iris are available and design the decision rules based on this assumption. We consider cases where vectors of matching scores may be described by a Gaussian model with dependent components under both genuine and impostor hypotheses. Two test statistics are designed: the plug-in log-likelihood ratio and the average Hamming distance. We further analyze the performance of filter-based iris recognition systems. The model fit is verified using the Shapiro-Wilk test for normality. We show that the log-likelihood ratio with well-estimated maximum-likelihood parameters often outperforms the average Hamming distance statistic. The problem of identification with M iris classes is further stated as an (M+1)-ary hypothesis testing problem. We use an empirical approach, the Chernoff bound, and a large-deviations approach to predict the performance of the iris-based identification system. The bound on the probability of error is evaluated as a function of the number of classes and the number of iris scans per class.

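    The two statistics can be written down directly: the average Hamming distance simply averages the per-scan matching scores, while the plug-in log-likelihood ratio evaluates dependent-component Gaussian densities with maximum-likelihood parameter estimates plugged in. A minimal Python sketch, assuming the Gaussian parameters were estimated elsewhere; function names are illustrative:

    ```python
    # Hedged sketch of the two test statistics; decision thresholds and
    # parameter estimation are not shown.
    import numpy as np

    def average_hamming(scores):
        """Average Hamming-distance statistic over multiple iris scans."""
        return float(np.mean(scores))

    def plug_in_llr(scores, mu_gen, cov_gen, mu_imp, cov_imp):
        """Plug-in log-likelihood ratio: Gaussian models with dependent
        components under the genuine and impostor hypotheses."""
        def log_gauss(x, mu, cov):
            x = np.asarray(x, dtype=float)
            d = x - mu
            _, logdet = np.linalg.slogdet(cov)
            return -0.5 * (d @ np.linalg.solve(cov, d) + logdet
                           + d.size * np.log(2.0 * np.pi))
        return (log_gauss(scores, mu_gen, cov_gen)
                - log_gauss(scores, mu_imp, cov_imp))
    ```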
  • An algorithm for distorted fingerprint matching based on local triangle feature set

    Page(s): 169 - 177

    Coping with nonlinear distortions in fingerprint matching is a challenging task. This paper proposes a novel method, a fuzzy feature match (FFM) based on a local triangle feature set, to match deformed fingerprints. The fingerprint is represented by a fuzzy feature set: the local triangle feature set. The similarity between fuzzy feature sets is used to characterize the similarity between fingerprints. A fuzzy similarity measure for two triangles is introduced and extended to construct a similarity vector that includes the triangle-level similarities for all triangles in two fingerprints. Accordingly, a similarity vector pair is defined to illustrate the similarities between two fingerprints. The FFM method maps the similarity vector pair to a normalized value which quantifies the overall image-to-image similarity. The proposed algorithm has been evaluated with the NIST 24 and FVC2004 fingerprint databases. Experimental results confirm that the proposed FFM based on the local triangle feature set is a reliable and effective algorithm for fingerprint matching with nonlinear distortions.

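    As an illustration of a fuzzy similarity between two minutiae triangles, the sketch below compares sorted side lengths through a triangular membership function that is 1 for identical triangles and falls to 0 beyond a tolerance. The real FFM feature set also uses minutiae attributes; the membership function and tolerance here are assumptions.

    ```python
    # Hypothetical fuzzy triangle similarity, not the paper's exact measure.
    import numpy as np

    def triangle_sides(pts):
        """Sorted side lengths of a triangle given three (x, y) minutiae."""
        a, b, c = np.asarray(pts, dtype=float)
        return np.sort([np.linalg.norm(a - b),
                        np.linalg.norm(b - c),
                        np.linalg.norm(c - a)])

    def fuzzy_triangle_similarity(tri1, tri2, tol=10.0):
        """Product of per-side memberships: 1 when sides match exactly,
        0 once any side-length difference exceeds the tolerance."""
        diff = np.abs(triangle_sides(tri1) - triangle_sides(tri2))
        return float(np.prod(np.clip(1.0 - diff / tol, 0.0, 1.0)))
    ```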
  • Teeth segmentation in digitized dental X-ray films using mathematical morphology

    Page(s): 178 - 189

    Automating the process of postmortem identification of individuals using dental records is receiving increased attention. Teeth segmentation from dental radiographic films is an essential step toward highly automated postmortem identification. In this paper, we offer a mathematical morphology approach to the problem of teeth segmentation. We also propose a grayscale contrast-stretching transformation to improve the performance of teeth segmentation. We compare and contrast our approach with other approaches proposed in the literature on both theoretical and empirical grounds. The results show that, in addition to its capability of handling bitewing and periapical dental radiographic views, our approach exhibits the lowest failure rate among all approaches studied.

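    The two ingredients named above can be sketched in a few lines: a grayscale contrast stretch to normalize film exposure, then a white top-hat (image minus its morphological opening) to keep bright, tooth-sized structures. The percentiles, structuring-element size, and threshold below are assumed values, not the paper's settings.

    ```python
    # Minimal morphology-based sketch; the paper's transform and structuring
    # elements differ, and no bitewing/periapical handling is shown here.
    import numpy as np
    from scipy import ndimage

    def contrast_stretch(img, low_pct=2, high_pct=98):
        """Stretch intensities between two percentiles to [0, 1]."""
        lo, hi = np.percentile(img, [low_pct, high_pct])
        return np.clip((img - lo) / max(hi - lo, 1e-9), 0.0, 1.0)

    def tooth_mask(radiograph, size=25, thresh=0.2):
        """White top-hat keeps bright structures smaller than `size`;
        thresholding yields a rough binary teeth mask."""
        stretched = contrast_stretch(radiograph.astype(float))
        tophat = stretched - ndimage.grey_opening(stretched, size=(size, size))
        return tophat > thresh
    ```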
  • Reducing delay and enhancing DoS resistance in multicast authentication through multigrade security

    Page(s): 190 - 204

    Many techniques for multicast authentication employ the principle of delayed key disclosure. These methods introduce delay in authentication, employ receiver-side buffers, and are susceptible to denial-of-service (DoS) attacks. Delayed key disclosure schemes have a binary concept of authentication and do not incorporate any notion of partial trust. This paper introduces staggered timed efficient stream loss-tolerant authentication (TESLA), a method for achieving multigrade authentication in multicast scenarios that reduces the delay needed to filter forged multicast packets and, consequently, mitigates the effects of DoS attacks. Staggered TESLA involves modifications to the popular multicast authentication scheme, TESLA, by incorporating the notion of multilevel trust through the use of multiple, staggered authentication keys in creating message authentication codes (MACs) for a multicast packet. We provide guidelines for determining the appropriate buffer size, and show that the use of multiple MACs and, hence, multiple grades of authentication, allows the receiver to flush forged packets more quickly than in conventional TESLA. As a result, staggered TESLA provides an advantage against DoS attacks compared to conventional TESLA. We then examine two new strategies for reducing the time needed for complete authentication. In the first strategy, the multicast source uses assurance of the trustworthiness of entities in a neighborhood of the source, in conjunction with the multigrade authentication provided by staggered TESLA. The second strategy achieves reduced delay by introducing additional key distributors in the network.

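    The staggered-key mechanism can be sketched as follows: the sender derives a one-way key chain and attaches several MACs to each packet, keyed at increasing interval offsets, so a receiver gains partial trust early and full authentication once the last key is disclosed. The packet format, grade offsets, and chain length here are illustrative assumptions, not the paper's parameters.

    ```python
    # Hedged sketch of staggered MACs over a TESLA-style one-way key chain.
    import hashlib
    import hmac

    def key_chain(seed: bytes, length: int):
        """Hash chain generated by repeated hashing and used in reverse,
        so any disclosed key authenticates all earlier ones."""
        keys = [seed]
        for _ in range(length - 1):
            keys.append(hashlib.sha256(keys[-1]).digest())
        return keys[::-1]  # keys[i] is the MAC key for interval i

    def staggered_macs(payload: bytes, keys, interval: int, grades=(1, 2, 4)):
        """MACs keyed 1, 2, and 4 intervals ahead of the current interval;
        requires interval + max(grades) < len(keys)."""
        return [hmac.new(keys[interval + d], payload, hashlib.sha256).digest()
                for d in grades]
    ```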
  • Digital camera identification from sensor pattern noise

    Page(s): 205 - 214

    In this paper, we propose a new method for identifying a digital camera from its images, based on the sensor's pattern noise. For each camera under investigation, we first determine its reference pattern noise, which serves as a unique identification fingerprint. This is achieved by averaging the noise obtained from multiple images using a denoising filter. To identify the camera from a given image, we consider the reference pattern noise as a spread-spectrum watermark whose presence in the image is established using a correlation detector. Experiments on approximately 320 images taken with nine consumer digital cameras are used to estimate false-alarm and false-rejection rates. Additionally, we study how the error rates change with common image processing, such as JPEG compression or gamma correction.

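    The pipeline described above reduces to three steps: compute a noise residual for each image, average the residuals into a reference pattern, and decide by correlating a test residual against that pattern. In this hedged sketch a Gaussian filter stands in for the paper's denoising filter, and all parameters are assumptions.

    ```python
    # Sketch of sensor-pattern-noise camera identification.
    import numpy as np
    from scipy import ndimage

    def noise_residual(img, sigma=2.0):
        """Noise residual: image minus a denoised version of itself."""
        img = img.astype(float)
        return img - ndimage.gaussian_filter(img, sigma)

    def reference_pattern(images):
        """Camera fingerprint: average residual over many images."""
        return np.mean([noise_residual(im) for im in images], axis=0)

    def correlation(residual, pattern):
        """Normalized correlation used as the detection statistic."""
        r = residual - residual.mean()
        p = pattern - pattern.mean()
        return float(np.sum(r * p) / (np.linalg.norm(r) * np.linalg.norm(p)))
    ```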
  • Robust and secure image hashing

    Page(s): 215 - 230

    Image hash functions find extensive applications in content authentication, database search, and watermarking. This paper develops a novel algorithm for generating an image hash based on Fourier transform features and controlled randomization. We formulate the robustness of image hashing as a hypothesis testing problem and evaluate the performance under various image processing operations. We show that the proposed hash function is resilient to content-preserving modifications, such as moderate geometric and filtering distortions. We introduce a general framework to study and evaluate the security of image hashing systems. Under this new framework, we model the hash values as random variables and quantify their uncertainty in terms of differential entropy. Using this security framework, we analyze the security of the proposed schemes and of several existing representative methods for image hashing. We then examine the security versus robustness tradeoff and show that the proposed hashing methods can provide excellent security and robustness.

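    A toy version of a key-controlled Fourier-feature hash: low-frequency FFT magnitudes are projected onto key-seeded random directions and quantized to bits, and robustness is then judged by the Hamming distance between hashes. The feature band, projection scheme, and bit count are illustrative assumptions, not the paper's construction.

    ```python
    # Hypothetical randomized image-hash sketch (assumes images larger
    # than the selected frequency band).
    import numpy as np

    def image_hash(img, key=0, n_bits=64, band=16):
        rng = np.random.default_rng(key)   # the secret key seeds randomness
        mag = np.abs(np.fft.fft2(img.astype(float)))[:band, :band]
        feats = mag.flatten()
        proj = rng.standard_normal((n_bits, feats.size)) @ feats
        return (proj > np.median(proj)).astype(np.uint8)

    def hamming(h1, h2):
        """Small distances indicate perceptually similar images."""
        return int(np.sum(h1 != h2))
    ```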
  • Joint coding and embedding techniques for multimedia fingerprinting

    Page(s): 231 - 247

    Digital fingerprinting protects multimedia content from illegal redistribution by uniquely marking every copy of the content distributed to each user. The collusion attack is a powerful attack in which several different fingerprinted copies of the same content are combined to attenuate or even remove the fingerprints. One major category of collusion-resistant fingerprinting employs an explicit coding step. Most existing work on coded fingerprinting focuses mainly on code-level issues and treats the embedding issues through abstract assumptions without examining the overall performance. In this paper, we jointly consider the coding and embedding issues for coded fingerprinting systems and examine their performance in terms of collusion resistance, detection computational complexity, and distribution efficiency. Our studies show that coded fingerprinting has efficient detection but rather low collusion resistance. Taking advantage of joint coding and embedding, we propose a permuted subsegment embedding technique and a group-based joint coding and embedding technique to improve the collusion resistance of coded fingerprinting while maintaining its efficient detection. Experimental results show that the number of colluders the proposed methods can resist is more than three times that of conventional coded fingerprinting approaches.

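    The permuted subsegment idea can be pictured as cutting each user's code signal into subsegments and scrambling their positions with a secret permutation before ordinary additive spread-spectrum embedding, so that colluders cannot align code symbols across their copies. The sketch below is an assumption-laden illustration, not the paper's construction.

    ```python
    # Hypothetical permuted subsegment embedding sketch.
    import numpy as np

    def permute_subsegments(code_signal, n_sub, key=0):
        """Split the fingerprint signal into n_sub pieces and reorder them
        with a key-seeded (secret) permutation."""
        rng = np.random.default_rng(key)
        pieces = np.array_split(np.asarray(code_signal, dtype=float), n_sub)
        order = rng.permutation(n_sub)
        return np.concatenate([pieces[i] for i in order])

    # Embedding then proceeds as usual additive spread spectrum, e.g.:
    #   marked = host + alpha * permute_subsegments(user_code, n_sub, key)
    ```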
  • The Boneh-Shaw fingerprinting scheme is better than we thought

    Page(s): 248 - 255

    Digital fingerprinting is a forensic method against illegal copying. The distributor marks each individual copy with a unique fingerprint. If an illegal copy appears, it can be traced back to one or more guilty pirates by means of this fingerprint. To work against a coalition of several pirates, the fingerprinting scheme must be based on a collusion-secure code. This paper addresses binary collusion-secure codes in the setting of Boneh and Shaw (1995/1998). We prove that the Boneh-Shaw scheme is more efficient than originally proven, and we propose adaptations to further improve the scheme. We also point out some differences between our model and others in the literature.

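    For orientation, the Boneh-Shaw base code gives user i ones in the first i column-blocks and zeros afterward, replicates each column d times, and hides the block boundaries behind a secret column permutation; under the marking assumption, a coalition cannot alter positions where all of its copies agree. A small sketch from this standard description (parameters illustrative):

    ```python
    # Sketch of the Boneh-Shaw replication code for n users.
    import numpy as np

    def boneh_shaw_code(n_users, d, key=0):
        """Codewords of length (n_users - 1) * d with a secret column
        permutation; row i has ones exactly in the first i blocks."""
        rng = np.random.default_rng(key)
        base = np.tril(np.ones((n_users, n_users - 1), dtype=np.uint8), -1)
        code = np.repeat(base, d, axis=1)   # replicate each column d times
        return code[:, rng.permutation(code.shape[1])]
    ```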
  • Efficient spatial image watermarking via new perceptual masking and blind detection schemes

    Page(s): 256 - 274

    The aim of this paper is to improve the performance of spatial-domain watermarking. To this end, a new perceptual mask and a new detection scheme are proposed. The proposed spatial perceptual mask is based on the cover image prediction error sequence and matches very well with the properties of the human visual system. It exhibits superior performance compared to existing spatial masking schemes. Moreover, it allows for a significantly increased strength of the watermark while, at the same time, the watermark visibility is decreased. The new blind detection scheme comprises an efficient prewhitening process and a correlation-based detector. The prewhitening process is based on the least-squares prediction error filter and substantially improves the detector's performance. The correlation-based detector that was selected is shown to be the most suitable for the problem at hand. The improved performance of the proposed detection scheme has been justified theoretically for the case of a linear filtering plus noise attack and through extensive simulations. The theoretical analysis is independent of the proposed mask, and the derived expressions can be used for any watermarking technique based on spatial masking. It is shown, however, that in most cases the detector performs better if the proposed mask is employed.

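    The detection chain above can be sketched as a prediction-error (prewhitening) step followed by a correlation statistic. In the sketch below, a fixed four-neighbor predictor stands in for the paper's least-squares prediction-error filter, and the decision threshold is left to the caller; both are assumptions.

    ```python
    # Hedged sketch of prewhitened correlation detection.
    import numpy as np
    from scipy import ndimage

    PREDICT = np.array([[0.0, 0.25, 0.0],
                        [0.25, 0.0, 0.25],
                        [0.0, 0.25, 0.0]])

    def prewhiten(x):
        """Prediction error: the signal minus its neighbor-based prediction."""
        x = x.astype(float)
        return x - ndimage.convolve(x, PREDICT, mode="reflect")

    def detect(received, watermark, threshold):
        """Correlate prewhitened image and watermark; compare to threshold."""
        stat = float(np.mean(prewhiten(received) * prewhiten(watermark)))
        return stat, stat > threshold
    ```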
  • Steganalysis for Markov cover data with applications to images

    Page(s): 275 - 287

    The difficult task of steganalysis, or the detection of the presence of hidden data, can be greatly aided by exploiting the correlations inherent in typical host or cover signals. In particular, several effective image steganalysis techniques are based on the strong interpixel dependencies exhibited by natural images. Thus, existing theoretical benchmarks based on independent and identically distributed (i.i.d.) models for the cover data underestimate attainable steganalysis performance and, hence, overestimate the security of the steganography technique used for hiding the data. In this paper, we investigate detection-theoretic performance benchmarks for steganalysis when the cover data are modeled as a Markov chain. The main application explored here is steganalysis of data hidden in images. While the Markov chain model does not completely capture the spatial dependencies, it provides an analytically tractable framework whose predictions are consistent with the performance of practical steganalysis algorithms that account for spatial dependencies. Numerical results are provided for image steganalysis of spread-spectrum and perturbed quantization data hiding.

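    Under a first-order Markov cover model, the natural detector is a log-likelihood ratio built from the transition probabilities of the observed sequence. The sketch below assumes integer-valued data and cover/stego transition matrices estimated elsewhere; it illustrates the model, not the paper's benchmark computation.

    ```python
    # Sketch of a likelihood-ratio detector for Markov-modeled cover data.
    import numpy as np

    def markov_loglik(seq, P):
        """Log-likelihood of a first-order chain under transition matrix P
        (a tiny floor avoids log(0) for unseen transitions)."""
        seq = np.asarray(seq)
        return float(np.sum(np.log(P[seq[:-1], seq[1:]] + 1e-300)))

    def llr_detector(seq, P_cover, P_stego):
        """Positive values favor the stego (data-hidden) hypothesis."""
        return markov_loglik(seq, P_stego) - markov_loglik(seq, P_cover)
    ```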
  • IEEE Transactions on Information Forensics and Security EDICS

    Page(s): 288
    Freely Available from IEEE
  • IEEE Transactions on Information Forensics and Security Information for authors

    Page(s): 289 - 290
    Freely Available from IEEE
  • Special issue on adaptive waveform design for agile sensing and communication

    Page(s): 291
    Freely Available from IEEE
  • Call for papers on network-aware multimedia processing and communications

    Page(s): 292
    Freely Available from IEEE
  • IEEE Signal Processing Society Information

    Page(s): c3
    Freely Available from IEEE

Aims & Scope

The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance, and systems applications that incorporate these features.

Meet Our Editors

Editor-in-Chief
Mauro Barni
University of Siena, Italy