
IEEE Transactions on Information Forensics and Security

Issue 1 • January 2013


Displaying results 1–25 of 34
  • [Front cover]

    Page(s): C1
  • IEEE Transactions on Information Forensics and Security publication information

    Page(s): C2
  • Table of contents

    Page(s): 1 - 2
  • Table of contents

    Page(s): 3 - 4
  • Internet Traffic Classification by Aggregating Correlated Naive Bayes Predictions

    Page(s): 5 - 15

    This paper presents a novel traffic classification scheme that improves classification performance when only a small amount of training data is available. In the proposed scheme, traffic flows are described using discretized statistical features, and flow correlation information is modeled by a bag-of-flows (BoF). We solve BoF-based traffic classification in a classifier-combination framework and theoretically analyze the performance benefit. Furthermore, a new BoF-based traffic classification method is proposed to aggregate the naive Bayes (NB) predictions of the correlated flows. We also present an analysis of the prediction-error sensitivity of the aggregation strategies. Finally, a large number of experiments are carried out on two large-scale real-world traffic datasets to evaluate the proposed scheme. The experimental results show that the proposed scheme achieves much better classification performance than existing state-of-the-art traffic classification methods.
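
    As a rough illustration of the aggregation idea, the minimal Python sketch below (scikit-learn, entirely synthetic data) averages the per-flow naive Bayes log-posteriors inside one bag of correlated flows and labels the bag with the arg-max class; the paper's actual features, discretization, and aggregation rules are not reproduced here.

        # Hypothetical sketch: label a bag-of-flows (BoF) by aggregating
        # per-flow naive Bayes predictions (synthetic data, not the paper's).
        import numpy as np
        from sklearn.naive_bayes import GaussianNB

        def classify_bag(clf, bag):
            # Average the per-flow log-posteriors, then take the arg-max class.
            log_post = clf.predict_log_proba(bag)        # (n_flows, n_classes)
            return clf.classes_[np.argmax(log_post.mean(axis=0))]

        rng = np.random.default_rng(0)
        X = rng.integers(0, 10, (100, 8)).astype(float)  # "discretized" features
        y = rng.integers(0, 3, 100)                      # three traffic classes
        clf = GaussianNB().fit(X, y)
        bag = rng.integers(0, 10, (5, 8)).astype(float)  # five correlated flows
        print(classify_bag(clf, bag))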

  • User Authentication Through Mouse Dynamics

    Page(s): 16 - 30

    Behavior-based user authentication with pointing devices, such as mice or touchpads, has been gaining attention. As an emerging behavioral biometric, mouse dynamics aims to address the authentication problem by verifying computer users on the basis of their mouse operating styles. This paper presents a simple and efficient user authentication approach based on a fixed mouse-operation task. For each sample of the mouse-operation task, both traditional holistic features and newly defined procedural features are extracted for an accurate and fine-grained characterization of a user's unique mouse behavior. Distance-measurement and eigenspace-transformation techniques are applied to obtain feature components that efficiently represent the original mouse feature space. A one-class learning algorithm is then employed in the distance-based feature eigenspace for the authentication task. The approach is evaluated on a dataset of 5550 mouse-operation samples from 37 subjects. Extensive experimental results demonstrate the efficacy of the proposed approach, which achieves a false-acceptance rate of 8.74% and a false-rejection rate of 7.69%, with a corresponding authentication time of 11.8 seconds. Two additional experiments compare the current approach with other approaches in the literature. Our dataset is publicly available to facilitate future research.
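
    A hedged sketch of the verification stage: the abstract names a generic one-class learner in a distance-based eigenspace, so the choice of PCA plus a one-class SVM below is an assumption, and the feature vectors are random stand-ins.

        # Sketch: one-class learning in a PCA eigenspace of mouse features.
        # The one-class SVM is an illustrative choice, not the paper's.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.svm import OneClassSVM

        rng = np.random.default_rng(1)
        train = rng.normal(size=(200, 40))   # legitimate user's samples (hypothetical)
        probe = rng.normal(size=(10, 40))    # sessions to verify

        pca = PCA(n_components=10).fit(train)              # eigenspace transformation
        model = OneClassSVM(nu=0.1, gamma="scale").fit(pca.transform(train))
        print(model.predict(pca.transform(probe)))         # +1 accept, -1 reject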

  • Latent Fingerprint Matching Using Descriptor-Based Hough Transform

    Page(s): 31 - 45

    Identifying suspects based on impressions of fingers lifted from crime scenes (latent prints) is a routine procedure that is extremely important to forensics and law enforcement agencies. Latents are partial fingerprints that are usually smudgy, small in area, and heavily distorted. Because of these characteristics, latents have significantly fewer minutiae points than full (rolled or plain) fingerprints. The small number of minutiae and the noisy character of latents make it extremely difficult to automatically match latents to their mated full prints stored in law enforcement databases. Although a number of algorithms for matching full-to-full fingerprints have been published in the literature, they do not perform well on the latent-to-full matching problem. Further, they often rely on features that are not easy to extract from poor-quality latents. In this paper, we propose a new fingerprint matching algorithm that is especially designed for matching latents. The proposed algorithm uses a robust alignment algorithm (descriptor-based Hough transform) to align fingerprints and measures similarity between fingerprints by considering both minutiae and orientation-field information. To be consistent with common practice in latent matching (i.e., only minutiae are marked by latent examiners), the orientation field is reconstructed from minutiae. Since the proposed algorithm relies only on manually marked minutiae, it can easily be used in law enforcement applications. Experimental results on two different latent databases (NIST SD27 and the WVU latent database) show that the proposed algorithm outperforms two well-optimized commercial fingerprint matchers. Further, a fusion of the proposed algorithm and the commercial matchers leads to improved matching accuracy.
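
    To make the Hough-alignment idea concrete, here is a deliberately simplified voting loop: every (latent, full) minutia pairing votes for a quantized rotation/translation bin, and the best-supported bin is taken as the alignment. The descriptor weighting and similarity measure of the paper are omitted.

        # Simplified Hough-style alignment vote over minutiae pairs.
        import numpy as np
        from collections import Counter

        def hough_align(latent, full, dtheta=np.pi / 18, dxy=10):
            votes = Counter()
            for (x1, y1, t1) in latent:          # minutiae: (x, y, direction)
                for (x2, y2, t2) in full:
                    r = (t2 - t1) % (2 * np.pi)  # candidate rotation
                    c, s = np.cos(r), np.sin(r)
                    dx = x2 - (c * x1 - s * y1)  # candidate translation
                    dy = y2 - (s * x1 + c * y1)
                    votes[(round(r / dtheta), round(dx / dxy), round(dy / dxy))] += 1
            return votes.most_common(1)[0]       # best-supported alignment bin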

  • Document Clustering for Forensic Analysis: An Approach for Improving Computer Inspection

    Page(s): 46 - 54

    In computer forensic analysis, hundreds of thousands of files are usually examined. Much of the data in those files consists of unstructured text, which is difficult for computer examiners to analyze. In this context, automated methods of analysis are of great interest. In particular, algorithms for clustering documents can facilitate the discovery of new and useful knowledge from the documents under analysis. We present an approach that applies document clustering algorithms to the forensic analysis of computers seized in police investigations. We illustrate the proposed approach by carrying out extensive experimentation with six well-known clustering algorithms (K-means, K-medoids, Single Link, Complete Link, Average Link, and CSPA) applied to five real-world datasets obtained from computers seized in real-world investigations. Experiments have been performed with different combinations of parameters, resulting in 16 different instantiations of the algorithms. In addition, two relative validity indexes were used to automatically estimate the number of clusters. Related studies in the literature are significantly more limited than ours. Our experiments show that the Average Link and Complete Link algorithms provide the best results for our application domain. If suitably initialized, partitional algorithms (K-means and K-medoids) can also yield very good results. Finally, we present and discuss several practical findings that can be useful for researchers and practitioners of forensic computing.
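
    A minimal sketch of one piece of this pipeline: choosing the number of clusters with a relative validity index. The silhouette index and the toy corpus below are illustrative assumptions; the paper uses its own two indexes on seized-computer data.

        # Sketch: estimate the number of clusters with a relative validity index.
        from sklearn.cluster import KMeans
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.metrics import silhouette_score

        docs = ["wire transfer invoice", "holiday beach photos",
                "invoice payment overdue", "family photos album"]   # toy corpus
        X = TfidfVectorizer().fit_transform(docs)
        scores = {k: silhouette_score(X, KMeans(n_clusters=k, n_init=10,
                                                random_state=0).fit_predict(X))
                  for k in (2, 3)}
        print(max(scores, key=scores.get), scores)   # best k by the index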

  • Robust Hashing for Image Authentication Using Zernike Moments and Local Features

    Page(s): 55 - 63

    A robust hashing method is developed for detecting image forgery, including removal, insertion, and replacement of objects and abnormal color modification, and for locating the forged area. Both global and local features are used in forming the hash sequence. The global features are based on Zernike moments representing the luminance and chrominance characteristics of the image as a whole. The local features include position and texture information of salient regions in the image. Secret keys are introduced in feature extraction and hash construction. While robust against content-preserving image processing, the hash is sensitive to malicious tampering and, therefore, applicable to image authentication. The hash of a test image is compared with that of a reference image. When the hash distance is greater than a threshold τ1 and less than a second threshold τ2, the received image is judged to be a fake. By decomposing the hashes, the type of image forgery and the location of forged areas can be determined. The probability of collision between hashes of different images approaches zero. Experimental results are presented to show the effectiveness of the method.
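
    The two-threshold decision rule implied by the abstract can be written directly; the threshold values below are made up, and the actual hash construction is not shown.

        # Decision rule implied by the abstract: distances up to tau1 pass as
        # content-preserving, between tau1 and tau2 indicate a forged version,
        # and beyond tau2 indicate a different image (thresholds hypothetical).
        def judge(hash_distance, tau1=20, tau2=60):
            if hash_distance <= tau1:
                return "authentic (content-preserving processing at most)"
            if hash_distance < tau2:
                return "fake: tampered version of the reference image"
            return "different image"

        print(judge(35))   # -> fake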

  • An Optimized Wavelength Band Selection for Heavily Pigmented Iris Recognition

    Page(s): 64 - 75

    Commercial iris recognition systems usually acquire images of the eye in the 850-nm band of the electromagnetic spectrum. In this work, heavily pigmented iris images are captured at 12 wavelengths, from 420 to 940 nm. The purpose is to find the most suitable wavelength band for heavily pigmented iris recognition. A multispectral acquisition system is first designed for imaging the iris at narrow spectral bands in the range of 420-940 nm. Next, a set of 200 human black irises, corresponding to the right and left eyes of 100 different subjects, is acquired for analysis. Finally, the most suitable wavelength for heavily pigmented iris recognition is identified based on two approaches: 1) quality assessment of the iris texture, and 2) matching performance, measured by equal error rate (EER) and false rejection rate (FRR). This result is supported by visual observation of magnified local iris texture details. The experimental results suggest that there exists a most suitable wavelength band for heavily pigmented iris recognition when a single wavelength band is used as illumination.
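
    Since the band comparison hinges on the EER, here is a small self-contained EER computation over synthetic genuine/impostor similarity scores (accept when score >= threshold); it is the standard definition, not the paper's code.

        # Sketch: equal error rate (EER) from genuine/impostor similarity scores.
        import numpy as np

        def eer(genuine, impostor):
            thr = np.sort(np.concatenate([genuine, impostor]))
            frr = np.array([(genuine < t).mean() for t in thr])    # false rejections
            far = np.array([(impostor >= t).mean() for t in thr])  # false acceptances
            i = np.argmin(np.abs(far - frr))
            return (far[i] + frr[i]) / 2

        rng = np.random.default_rng(2)
        print(eer(rng.normal(0.7, 0.1, 1000), rng.normal(0.4, 0.1, 1000)))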

  • White-Box Traceable Ciphertext-Policy Attribute-Based Encryption Supporting Any Monotone Access Structures

    Page(s): 76 - 88

    In a ciphertext-policy attribute-based encryption (CP-ABE) system, decryption keys are defined over attributes shared by multiple users. Given a decryption key, it may not always be possible to trace the original key owner. As a decryption privilege can be possessed by multiple users who own the same set of attributes, malicious users might be tempted to leak their decryption privileges to third parties, for financial gain for example, without risk of being caught. This problem severely limits the applications of CP-ABE. Several traceable CP-ABE (T-CP-ABE) systems have been proposed to address this problem, but the expressiveness of policies in those systems is limited: only AND gates with wildcards are currently supported. In this paper we propose a new T-CP-ABE system that supports policies expressed in any monotone access structure. The proposed system is also as efficient and secure as one of the best (non-traceable) CP-ABE systems currently available; that is, this work adds traceability to an existing expressive, efficient, and secure CP-ABE scheme without weakening its security or imposing any particular trade-off on its performance.
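
    This is not the cryptosystem itself, but a tiny sketch of what "monotone access structure" means operationally: a nested AND/OR policy over attributes that a user's attribute set either satisfies or does not. The policy and attribute names are invented.

        # Sketch: evaluating a monotone access structure (nested AND/OR over
        # attributes), the policy class the proposed T-CP-ABE supports.
        def satisfies(policy, attrs):
            op, *rest = policy
            if op == "ATTR":
                return rest[0] in attrs
            results = (satisfies(p, attrs) for p in rest)
            return all(results) if op == "AND" else any(results)

        policy = ("OR", ("ATTR", "auditor"),
                        ("AND", ("ATTR", "doctor"), ("ATTR", "cardiology")))
        print(satisfies(policy, {"doctor", "cardiology"}))   # True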

  • Recognizing Surgically Altered Face Images Using Multiobjective Evolutionary Algorithm

    Page(s): 89 - 100

    The widespread acceptance and use of biometrics for person authentication has instigated several techniques for evading identification. One such technique is altering facial appearance using surgical procedures, which has raised a challenge for face recognition algorithms. The increasing popularity of plastic surgery and its effect on automatic face recognition have attracted attention from the research community. However, the nonlinear variations introduced by plastic surgery remain difficult to model with existing face recognition systems. In this research, a multiobjective evolutionary granular algorithm is proposed to match face images before and after plastic surgery. The algorithm first generates non-disjoint face granules at multiple levels of granularity. The granular information is assimilated using a multiobjective genetic approach that simultaneously optimizes the selection of a feature extractor for each face granule along with the weights of individual granules. On the plastic surgery face database, the proposed algorithm yields high identification accuracy compared to existing algorithms and a commercial face recognition system.

  • Heap Graph Based Software Theft Detection

    Page(s): 101 - 110

    As JavaScript becomes more and more popular, JavaScript programs have become valuable assets to many companies. However, the source code of a JavaScript program can be easily obtained, and plagiarism of JavaScript programs is a serious threat to the industry. Techniques such as code obfuscation and watermarking can make the source code of a program difficult for humans to understand and can prove ownership of the program. However, code obfuscation cannot prevent the source code from being copied, and a watermark can be defaced. In this paper, we use a relatively new technique, the software birthmark, to help detect code theft of JavaScript programs. A birthmark is a unique characteristic a program possesses that can be used to identify the program. We extend two recent birthmark systems that extract the birthmark of a piece of software from the run-time heap. We propose a redesigned system with improved robustness and perform extensive experiments to demonstrate its effectiveness and robustness. Our evaluation, based on 200 large-scale websites, shows that our birthmark system exhibits 100% accuracy. We remark that it is solid and ready for practical use.
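
    A hypothetical sketch of the flavor of heap-graph comparison: summarize each run-time heap graph by the local structure around every object and compare the summaries with Jaccard similarity. The actual birthmark and matching used by the system are more elaborate.

        # Hypothetical sketch: compare two heap object graphs by the Jaccard
        # similarity of their local (node type, out-neighbor types) structure.
        import networkx as nx

        def birthmark(g):
            # One signature per node: its type plus the sorted types it points to.
            return {(g.nodes[n].get("type", ""),
                     tuple(sorted(g.nodes[m].get("type", "") for m in g[n])))
                    for n in g}

        def similarity(g1, g2):
            b1, b2 = birthmark(g1), birthmark(g2)
            return len(b1 & b2) / max(len(b1 | b2), 1)

        g = nx.DiGraph()
        g.add_node(0, type="Object"); g.add_node(1, type="Array"); g.add_edge(0, 1)
        print(similarity(g, g))   # identical graphs -> 1.0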

  • Reversible Watermarking Based on Invariant Image Classification and Dynamic Histogram Shifting

    Page(s): 111 - 120

    In this paper, we propose a new reversible watermarking scheme. A first contribution is a histogram shifting modulation that adaptively accounts for the local specificities of the image content. By applying it to the image prediction-errors and by considering their immediate neighborhood, the proposed scheme inserts data in textured areas where other methods fail to do so. Furthermore, our scheme makes use of a classification process for identifying parts of the image that can be watermarked with the best-suited reversible modulation. This classification is based on a reference image derived from the image itself, a prediction of it, which has the property of being invariant to watermark insertion. In that way, the watermark embedder and extractor remain synchronized for message extraction and image reconstruction. The experiments conducted so far, on natural images and on medical images from different modalities, show that for capacities smaller than 0.4 bpp, our method inserts more data with lower distortion than existing schemes. For the same capacity, we achieve a peak signal-to-noise ratio (PSNR) about 1-2 dB greater than with the scheme of Hwang, currently the most efficient approach.
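
    For readers new to histogram shifting on prediction errors, here is a deliberately simplified 1-D version (left-neighbor predictor, peak at error 0): positive error bins shift by +1 to free bin 1, and bin 0 carries one payload bit each. It assumes the extractor treats every 0/1 bin as payload (a real scheme carries the payload length); the paper's adaptive, classification-driven scheme is not reproduced.

        # Simplified 1-D histogram shifting on left-neighbor prediction errors.
        import numpy as np

        def hs_embed(x, bits):
            x = np.asarray(x, dtype=int)
            out, k = [int(x[0])], 0                 # first sample left untouched
            for v in np.diff(x):
                if v >= 1:
                    v += 1                          # shift positive bins
                elif v == 0 and k < len(bits):
                    v = bits[k]; k += 1             # embed a bit in the peak bin
                out.append(out[-1] + v)
            return np.array(out)

        def hs_extract(xw):
            e = np.diff(np.asarray(xw, dtype=int))
            bits = [int(v) for v in e if v in (0, 1)]   # assumes all 0/1 bins are payload
            rec, cur = [int(xw[0])], int(xw[0])
            for v in e:
                v = v - 1 if v >= 2 else (0 if v == 1 else v)
                cur += v; rec.append(cur)
            return bits, np.array(rec)

        x = np.array([5, 5, 6, 6, 8])
        print(hs_extract(hs_embed(x, [1, 0])))      # ([1, 0], original x)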

  • Face Recognition and Verification Using Photometric Stereo: The Photoface Database and a Comprehensive Evaluation

    Page(s): 121 - 135

    This paper presents a new database suitable for both 2-D and 3-D face recognition based on photometric stereo (PS): the Photoface database. The database was collected using a custom-made four-source PS device designed to enable data capture with minimal interaction required from the subjects. The device, which automatically detects the presence of a subject using ultrasound, was placed at the entrance of a busy workplace and captured 1839 sessions of face images with natural pose and expression. As a result, the acquired data are more realistic for everyday use than existing databases, making the database an invaluable test bed for state-of-the-art recognition algorithms. The paper also presents experiments with various face recognition and verification algorithms using the albedo, surface normals, and recovered depth maps. Finally, we have conducted experiments to demonstrate how different methods in the PS pipeline (i.e., normal-field computation and depth-map reconstruction) affect recognition and verification performance. These experiments help to 1) demonstrate the usefulness of PS, and of our device in particular, for minimal-interaction face recognition, and 2) highlight the optimal reconstruction and recognition algorithms for use with natural-expression PS data. The database can be downloaded from http://www.uwe.ac.uk/research/Photoface.
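
    The core computation behind a four-source PS rig is the classic Lambertian least-squares normal recovery, sketched below with invented light directions and random intensities; the database's calibration and the paper's reconstruction variants are not reproduced.

        # Sketch: least-squares surface-normal recovery from four-source
        # photometric stereo, assuming Lambertian reflectance and known lights.
        import numpy as np

        L = np.array([[0, .5, 1], [.5, 0, 1], [0, -.5, 1], [-.5, 0, 1]], float)
        L /= np.linalg.norm(L, axis=1, keepdims=True)   # unit light directions
        I = np.random.rand(4, 64 * 64)                  # 4 images, flattened pixels

        G = np.linalg.lstsq(L, I, rcond=None)[0]        # G = albedo * normal (3 x P)
        albedo = np.linalg.norm(G, axis=0)
        normals = G / np.maximum(albedo, 1e-9)          # unit surface normals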

  • Touchalytics: On the Applicability of Touchscreen Input as a Behavioral Biometric for Continuous Authentication

    Page(s): 136 - 148

    We investigate whether a classifier can continuously authenticate users based on the way they interact with the touchscreen of a smartphone. We propose a set of 30 behavioral touch features that can be extracted from raw touchscreen logs and demonstrate that different users populate distinct subspaces of this feature space. In a systematic experiment designed to test whether this behavioral pattern is consistent over time, we collected touch data from users interacting with a smartphone using basic navigation maneuvers, i.e., up-down and left-right scrolling. We propose a classification framework that learns the touch behavior of a user during an enrollment phase and is able to accept or reject the current user by monitoring interaction with the touchscreen. The classifier achieves a median equal error rate of 0% for intrasession authentication, 2%-3% for intersession authentication, and below 4% when the authentication test is carried out one week after the enrollment phase. While our experimental findings disqualify this method as a standalone authentication mechanism for long-term authentication, it could be implemented as a means to extend screen-lock time or as part of a multimodal biometric authentication system.
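
    As a hedged illustration of what "behavioral touch features" look like, the sketch below derives a handful of per-stroke statistics from raw (time, x, y) samples; these are generic stroke descriptors, not the paper's exact 30-feature set.

        # Sketch: a few behavioral features of one swipe from raw touch samples.
        import numpy as np

        def stroke_features(samples):
            t, x, y = np.asarray(samples, float).T
            steps = np.hypot(np.diff(x), np.diff(y))
            return {
                "duration": t[-1] - t[0],
                "trajectory_length": steps.sum(),
                "end_to_end_distance": np.hypot(x[-1] - x[0], y[-1] - y[0]),
                "mean_velocity": steps.sum() / max(t[-1] - t[0], 1e-9),
                "direction": np.arctan2(y[-1] - y[0], x[-1] - x[0]),
            }

        print(stroke_features([(0.00, 100, 500), (0.02, 105, 430), (0.05, 108, 350)]))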

  • On the Security of End-to-End Measurements Based on Packet-Pair Dispersions

    Page(s): 149 - 162

    The packet-pair technique is a widely adopted method for estimating the capacity of a path. Its use has been suggested in numerous applications, including network management and end-to-end admission control. Recent observations also indicate that this technique can be used to fingerprint Internet paths. However, given that packet-pair measurements are performed in an open environment, end-hosts might try to alter these measurements to increase their gain in the network. In this paper, we explore the security of measurements based on the packet-pair technique. More specifically, we analyze the major threats against bandwidth estimation using the packet-pair technique and demonstrate empirically that current implementations of this technique are vulnerable to a wide range of bandwidth manipulation attacks, in which end-hosts can accurately modify their claimed bandwidths. We propose lightweight countermeasures to detect attacks on bandwidth measurements; our technique can detect whether delays were inserted within the transmission of a packet pair (e.g., by bandwidth shapers). We further propose a novel scheme for remote path identification using the distribution of packet-pair dispersions and evaluate its accuracy, robustness, and potential use. Our findings suggest that the packet-pair technique can reveal valuable information about the identity and location of remote hosts.
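
    The underlying estimate is simple enough to state in a few lines: the narrow link's capacity is the packet size divided by the pair's arrival dispersion (the classic packet-pair relation, not the paper's hardened variant).

        # Classic receive-side packet-pair capacity estimate.
        def capacity_bps(packet_size_bytes, t_first, t_second):
            return 8 * packet_size_bytes / (t_second - t_first)

        print(capacity_bps(1500, 0.0, 120e-6))   # 1500-B pair, 120 us apart -> 100 Mb/s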

  • COKE: Crypto-Less Over-the-Air Key Establishment

    Page(s): 163 - 173

    In this paper, we present a novel probabilistic protocol (COKE) that allows two wireless communicating parties to commit over-the-air (OTA) to a shared secret, even in the presence of a globally eavesdropping adversary. The proposed solution uses no cryptography, only the exchange of plaintext messages. Indeed, the security of the solution relies on the difficulty for the adversary to correctly identify, for each one-bit transmission, the sender of that bit, not its value, which is indeed exchanged in cleartext. Owing to the low requirements of COKE (essentially, the capability to send a few wireless messages), it is particularly suited for resource-constrained wireless devices (e.g., WSNs, wireless embedded systems), as well as for scenarios where energy saving is at a premium, such as smartphones.

  • An Evaluation of Otoacoustic Emissions as a Biometric

    Page(s): 174 - 183

    This paper presents a comprehensive overview of an investigation into the use of otoacoustic emissions (OAE) as an identity verification biometric. OAE could be important as a biometric identifier in applications where users wear headsets, since it is discreet and difficult to spoof. OAE are very low-level (~17 dB sound pressure level (SPL)) sounds emitted from the human ear as part of the normal hearing process. They can occur spontaneously or be evoked by a suitable stimulus, these being known as transient evoked otoacoustic emissions (TEOAE) and distortion product otoacoustic emissions (DPOAE). An initial visual comparison shows that otoacoustic emissions are clearly distinctive and are stable over a six-month period. A biometric analysis based on Euclidean distance measurement of TEOAE recordings in the temporal domain was performed both on prerecorded datasets captured for medical purposes and on data collected specifically for this study. For a database of 23 subjects, the equal error rate (EER) was 1.24% at a 90% confidence interval. DPOAEs also demonstrated biometric potential, but their level of discrimination is inferior to TEOAE. The combination of DPOAE and TEOAE into a multimodal analysis was shown to be feasible, although the potential improvement in performance is yet to be quantified. Finally, the use of maximum length sequences (MLS) was investigated to reduce capture time without decreasing performance; this reduced the capture time for a TEOAE from 1 min to 5 s, with a visual analysis of a fourth-order MLS showing good stability and reproducibility. OAEs can potentially be used as a biometric and benefit from a small template size (512 data points in our TEOAE biometric) and simple analysis. The level of background noise is the most significant practical factor affecting biometric performance.
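
    The distance-based verification the abstract describes reduces to a threshold test on the Euclidean distance between a probe recording and the enrolled template; the sketch below uses random 512-point vectors and a made-up threshold.

        # Sketch: threshold verification on Euclidean distance between TEOAE
        # templates (synthetic vectors; threshold value is hypothetical).
        import numpy as np

        def verify(probe, template, threshold=1.0):
            return float(np.linalg.norm(probe - template)) <= threshold

        template = np.random.rand(512)            # 512-point template, as in the text
        print(verify(template + 0.01 * np.random.rand(512), template))   # accept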

  • A Real-Time Design Based on FPGA for Expeditious Error Reconciliation in QKD System

    Page(s): 184 - 190

    For high-speed quantum key distribution systems, error reconciliation is often the bottleneck affecting system performance. By exchanging common information over a public channel, an identical key can be generated on both communicating sides. However, the need to eliminate the disclosed bits for security reasons lowers the final key rate. To improve the key rate, the amount of disclosed information should be minimized; decreasing the time spent on error reconciliation also improves the key rate. In this paper, we introduce a practical method for expeditious error reconciliation, implemented in a field-programmable gate array (FPGA), for a discrete-variable quantum key distribution system, and demonstrate its superiority over similar algorithms running on a PC. Experimental results confirm the speed of the proposed protocol.
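
    To ground the idea of reconciling by disclosing parities, here is a dichotomic parity search in the style of BBBSS/Cascade, chosen purely for illustration (the abstract does not name its protocol): each compared block parity would be disclosed over the public channel, and the search homes in on one differing bit.

        # Sketch: dichotomic parity search locating one error between keys.
        def parity(bits):
            return sum(bits) % 2

        def locate_error(alice, bob, lo=0, hi=None):
            hi = len(alice) if hi is None else hi
            if hi - lo == 1:
                return lo                                    # differing bit found
            mid = (lo + hi) // 2
            if parity(alice[lo:mid]) != parity(bob[lo:mid]): # parities made public
                return locate_error(alice, bob, lo, mid)
            return locate_error(alice, bob, mid, hi)

        print(locate_error([0, 1, 1, 0, 1, 0, 0, 1],
                           [0, 1, 1, 0, 0, 0, 0, 1]))        # -> 4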

  • Matching Composite Sketches to Face Photos: A Component-Based Approach

    Page(s): 191 - 204

    This paper addresses the problem of automatically matching composite sketches to facial photographs. Previous research on sketch recognition focused on matching sketches drawn by professional artists who either looked directly at the subjects (viewed sketches) or used a verbal description of the subject's appearance provided by an eyewitness (forensic sketches). Unlike sketches hand drawn by artists, composite sketches are synthesized using one of several facial composite software systems available to law enforcement agencies. We propose a component-based representation (CBR) approach to measure the similarity between a composite sketch and a mugshot photograph. Specifically, we first automatically detect facial landmarks in composite sketches and face photos using an active shape model (ASM). Features are then extracted for each facial component using multiscale local binary patterns (MLBP), and a per-component similarity is calculated. Finally, the similarity scores obtained from the individual facial components are fused, yielding an overall similarity score between a composite sketch and a face photo. Matching performance is further improved by filtering the large gallery of mugshot images using gender information. Experimental results on matching 123 composite sketches against two galleries with 10,123 and 1,316 mugshots show that the proposed method achieves promising performance (rank-100 accuracies of 77.2% and 89.4%, respectively) compared to a leading commercial face recognition system (rank-100 accuracies of 22.8% and 52.0%) and densely sampled MLBP on holistic faces (rank-100 accuracies of 27.6% and 10.6%). We believe our prototype system will be of great value to law enforcement agencies in apprehending suspects in a timely fashion.
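
    A rough stand-in for the matcher: uniform LBP histograms per facial component compared with a chi-square distance, fused by the sum rule. This uses a single LBP scale and skips the ASM landmarking, so it only gestures at the paper's multiscale pipeline.

        # Sketch: per-component LBP histograms with sum-rule score fusion.
        import numpy as np
        from skimage.feature import local_binary_pattern

        def component_score(a, b, P=8, R=1):
            ha = np.histogram(local_binary_pattern(a, P, R, "uniform"),
                              bins=P + 2, range=(0, P + 2), density=True)[0]
            hb = np.histogram(local_binary_pattern(b, P, R, "uniform"),
                              bins=P + 2, range=(0, P + 2), density=True)[0]
            return -np.sum((ha - hb) ** 2 / (ha + hb + 1e-9))  # neg. chi-square

        def fused_score(sketch_components, photo_components):
            # Sum-rule fusion over components (eyes, nose, mouth, ...).
            return sum(component_score(a, b)
                       for a, b in zip(sketch_components, photo_components))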

  • Decentralized Hypothesis Testing in Wireless Sensor Networks in the Presence of Misbehaving Nodes

    Page(s): 205 - 215

    Wireless sensor networks are prone to node misbehavior arising from tampering by an adversary (a Byzantine attack) or from other factors, such as node failure resulting from hardware or software degradation. In this paper, we consider the problem of decentralized detection in wireless sensor networks in the presence of one or more classes of misbehaving nodes. Binary hypothesis testing is considered, where the honest nodes transmit their binary decisions to the fusion center (FC) while the misbehaving nodes transmit fictitious messages. The goal of the FC is to identify the misbehaving nodes and to detect the state of nature. We identify each class of nodes with an operating point (false-alarm and detection probabilities) on the receiver operating characteristic (ROC) curve. Maximum likelihood estimation of the nodes' operating points is then formulated and solved using the expectation-maximization (EM) algorithm, with the nodes' identities as latent variables. The solution from the EM algorithm is then used to classify the nodes and to solve the decentralized hypothesis testing problem. Numerical results, compared with those of reputation-based schemes, show a significant improvement in both the classification of the nodes and the hypothesis testing results. We also discuss an inherent ambiguity in the node classification problem, which can be resolved if the honest nodes are in the majority.
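
    A compact EM sketch under a strong simplification: the true state sequence is taken as known, whereas the paper infers it jointly with the latent node identities. Class count, initialization, and data layout are assumptions.

        # EM over latent node classes (simplified). U[j, t] is node j's binary
        # report at time t; s[t] is the (here assumed known) true state.
        import numpy as np

        def em_operating_points(U, s, K=2, iters=100, seed=0):
            rng = np.random.default_rng(seed)
            n1 = U[:, s == 1].sum(1); m1 = (s == 1).sum()   # "1" reports under H1
            n0 = U[:, s == 0].sum(1); m0 = (s == 0).sum()   # "1" reports under H0
            pd, pf = rng.uniform(.2, .8, K), rng.uniform(.2, .8, K)
            pi = np.full(K, 1.0 / K)
            for _ in range(iters):
                # E-step: log-likelihood of each node under each class.
                ll = (np.log(pi)
                      + np.outer(n1, np.log(pd)) + np.outer(m1 - n1, np.log(1 - pd))
                      + np.outer(n0, np.log(pf)) + np.outer(m0 - n0, np.log(1 - pf)))
                r = np.exp(ll - ll.max(1, keepdims=True))
                r /= r.sum(1, keepdims=True)                # class responsibilities
                # M-step: update priors and per-class ROC operating points.
                pi = np.clip(r.mean(0), 1e-9, 1.0)
                pd = np.clip((r * n1[:, None]).sum(0) / (r.sum(0) * m1 + 1e-12),
                             1e-6, 1 - 1e-6)
                pf = np.clip((r * n0[:, None]).sum(0) / (r.sum(0) * m0 + 1e-12),
                             1e-6, 1 - 1e-6)
            return r.argmax(1), pf, pd                      # labels, ROC points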

  • Pixel Group Trace Model-Based Quantitative Steganalysis for Multiple Least-Significant Bits Steganography

    Page(s): 216 - 228

    For analyzing multiple least-significant bits (MLSB) steganography, a pixel group trace model is presented. Based on this model and on some statistical characteristics of images, two quantitative steganalysis methods are proposed for two typical MLSB steganography paradigms. The pixel group trace model simulates MLSB embedding by the exclusive-or operation and traces the transition relationships among the possible structures of a pixel group's value via trace pixel group subsets. The estimation equations for the embedding ratio are then derived from the transition probability matrix among trace subsets and the symmetry of regular and singular pixel group sets. Finally, a series of experimental results for the case of triple pixel groups shows that the proposed steganalysis methods can estimate low embedding ratios with small error; in some cases, the interquartile range of the estimation errors is more than 45% smaller than that of the best competing method.

  • Secrecy Capacity Enhancement With Distributed Precoding in Multirelay Wiretap Systems

    Page(s): 229 - 238

    The secrecy capacity of relay communications in the presence of an eavesdropper is investigated in this paper. Distributed precoding through multiple relay nodes can be used to simultaneously enhance the received signal power at the destination node and mitigate the signal leakage to the eavesdropper. Due to individual power constraints at the relay nodes, the distributed precoding scalar at each relay node is equivalent to a distributed precoding angle. An iterative algorithm is proposed to find suboptimal distributed precoding angles for all the relay nodes, with a substantially reduced channel state information (CSI) sharing overhead among the relay nodes. Each relay node receives the equivalent CSI from its preceding relay node, computes its distributed precoding angle, and updates the equivalent CSI for the next relay node. Compared with the simple decode-and-forward relaying protocol with random distributed precoding, the proposed iterative distributed precoding algorithm further improves the secrecy capacity of the multirelay wiretap system with an acceptable CSI sharing overhead.
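
    The baseline quantity the precoding tries to enlarge is the standard wiretap secrecy rate, the gap between destination and eavesdropper channel rates, sketched with made-up SNR values:

        # Achievable secrecy rate of a wiretap link (bits/s/Hz).
        import numpy as np

        def secrecy_rate(snr_dest, snr_eve):
            return max(np.log2(1 + snr_dest) - np.log2(1 + snr_eve), 0.0)

        print(secrecy_rate(10 ** (15 / 10), 10 ** (5 / 10)))  # 15 dB vs. 5 dB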

  • Component-Based Representation in Automated Face Recognition

    Page(s): 239 - 253

    This paper presents a framework for component-based face alignment and representation that demonstrates improvements in matching performance over the more common holistic approach. This work is motivated by recent evidence from the cognitive science community demonstrating the efficacy of component-based facial representations. The component-based framework presented in this paper consists of the following major steps: 1) landmark extraction using active shape models (ASM); 2) alignment and cropping of components using Procrustes analysis; 3) representation of components with multiscale local binary patterns (MLBP); 4) per-component measurement of facial similarity; and 5) fusion of the per-component similarities. We demonstrate, on three public datasets and an operational dataset consisting of face images of 8000 subjects, that the proposed component-based representation provides higher recognition accuracy than holistic-based representations. Additionally, we show that the proposed component-based representations 1) are more robust to changes in facial pose and 2) improve recognition accuracy on occluded face images in forensic scenarios.
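
    Step 2 of the pipeline in miniature: Procrustes alignment of a component's landmarks to a reference shape, using SciPy's implementation and toy coordinates standing in for ASM output.

        # Sketch: Procrustes alignment of component landmarks to a reference.
        import numpy as np
        from scipy.spatial import procrustes

        reference = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
        detected = np.array([[2.0, 1.0], [4.0, 1.1], [3.0, 3.2]])  # made-up landmarks
        ref_std, aligned, disparity = procrustes(reference, detected)
        print(disparity)   # residual shape difference after alignment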


Aims & Scope

The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance and systems applications that incorporate these features.


Meet Our Editors

Editor-in-Chief
Chung C. Jay Kuo
University of Southern California