IEEE Transactions on Information Forensics and Security

Issue 3 • September 2010

Displaying Results 1 - 25 of 29
  • Table of contents

    Publication Year: 2010, Page(s): C1 - C4
  • IEEE Transactions on Information Forensics and Security publication information

    Publication Year: 2010, Page(s): C2
  • A Wavelet-PCA-Based Fingerprinting Scheme for Peer-to-Peer Video File Sharing

    Publication Year: 2010, Page(s): 365 - 373
    Cited by: Papers (1)

    In order to utilize peer-to-peer (P2P) networks for legal content distribution in a way that benefits legal content providers, copyright protection needs to be enhanced. In this paper, a fingerprint generation and embedding method is proposed for complex P2P file-sharing networks. In this method, wavelet and principal component analysis (PCA) techniques are used for fingerprint generation. First, the wavelet technique obtains a low-frequency representation of the test image (or source file, assumed to be one I-frame of a DVD-quality video), and PCA extracts the features of that representation. Then, a set of fingerprint matrices is created using the proposed algorithm. Finally, each matrix is combined with the low-frequency representation to become a unique fingerprinted matrix. The fingerprinted matrix is not only much smaller than the original image but also contains its most important information; without this information, the quality of the reconstructed image is very poor. The fingerprinted file is thus well suited to distribution in P2P networks: in the distribution stage, the uniquely fingerprinted matrix is dispensed only by the source host, leaving the rest for the P2P network to handle. Among the other frames of the same video, which are not decomposed, some are embedded with sharable fingerprints. The relationship between unique and sharable fingerprints, and the purpose of using them, is discussed in the paper. Our results indicate that the proposed fingerprint exhibits strong robustness against common attacks such as Gaussian noise, median filtering, and lossy compression.
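
    A minimal sketch of the generation step, assuming a grayscale I-frame held in a NumPy array (the paper's own fingerprint-matrix construction algorithm is not reproduced here):

        import numpy as np
        import pywt
        from sklearn.decomposition import PCA

        def lowfreq_features(frame, n_components=16):
            """Low-frequency wavelet representation of an I-frame plus its
            principal components, mirroring the generation step above."""
            # Single-level 2-D DWT; keep only the approximation (low-frequency) band.
            approx, _ = pywt.dwt2(frame.astype(float), "haar")
            # PCA over the rows of the low-frequency band yields the features.
            pca = PCA(n_components=n_components)
            return approx, pca.fit_transform(approx)

        # Stand-in for one DVD-quality I-frame (720 x 480 pixels).
        frame = np.random.randint(0, 256, size=(480, 720))
        approx, feats = lowfreq_features(frame)
        print(approx.shape, feats.shape)   # (240, 360) (240, 16)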

  • An Experimental Study on the Security Performance of YASS

    Publication Year: 2010, Page(s): 374 - 380
    Cited by: Papers (1)

    This paper presents an experimental study of the security performance of Yet Another Steganographic Scheme (YASS). It reports: 1) YASS's security performance with different input images, i.e., uncompressed images and JPEG-compressed images; 2) YASS's security performance compared with two other JPEG steganographic schemes, MB1 and F5; and 3) some experimental results for extended YASS.

  • Automatic Secret Keys From Reciprocal MIMO Wireless Channels: Measurement and Analysis

    Publication Year: 2010, Page(s): 381 - 392
    Cited by: Papers (30)

    Information-theoretic limits for random key generation in multiple-input multiple-output (MIMO) wireless systems exhibiting a reciprocal channel response are investigated experimentally with a new three-node MIMO measurement campaign. As background, simple expressions are presented for the number of available key bits, as well as the number of bits that are secure from a close eavesdropper. Two methods for generating secret keys are analyzed in the context of MIMO channels, and their mismatch rate and efficiency are derived. A new wideband indoor MIMO measurement campaign in the 2.51- to 2.59-GHz band is presented, whose purpose is to study the number of available key bits in both line-of-sight and non-line-of-sight environments. Applying the key generation methods to measured propagation channels indicates the key generation rates that can be obtained in practice for four-element arrays.
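
    A toy illustration of reciprocity-based key agreement, assuming both nodes quantize their noisy estimates of the same channel gains with a one-bit sign quantizer (the paper's key-generation methods and secrecy analysis are considerably more refined):

        import numpy as np

        rng = np.random.default_rng(1)
        n_coeff, snr_db = 128, 20
        noise_std = 10 ** (-snr_db / 20)

        # Reciprocal channel: both ends observe the same gains plus independent noise.
        h = rng.standard_normal(n_coeff)
        h_alice = h + noise_std * rng.standard_normal(n_coeff)
        h_bob = h + noise_std * rng.standard_normal(n_coeff)

        # One bit per coefficient: quantize the sign into candidate key bits.
        key_alice = (h_alice > 0).astype(int)
        key_bob = (h_bob > 0).astype(int)

        # The residual disagreement is the key mismatch rate analyzed in the paper.
        print(f"key mismatch rate: {np.mean(key_alice != key_bob):.3f}")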

  • Purely Automated Attacks on PassPoints-Style Graphical Passwords

    Publication Year: 2010, Page(s): 393 - 405
    Cited by: Papers (11)

    We introduce and evaluate various methods for purely automated attacks against PassPoints-style graphical passwords. For generating these attacks, we introduce a graph-based algorithm to efficiently create dictionaries based on heuristics such as click-order patterns (e.g., five points all along a line). Some of our methods combine click-order heuristics with focus-of-attention scan-paths generated from a computational model of visual attention, yielding significantly better automated attacks than previous work. One resulting automated attack finds 7%-16% of passwords for two representative images using dictionaries of approximately 2^26 entries (where the full password space is 2^43). Relaxing the click-order patterns substantially increases attack efficacy, albeit with larger dictionaries of approximately 2^35 entries, allowing attacks that guessed 48%-54% of passwords (compared to previous results of 1% and 9% on the same dataset for two images with 2^35 guesses). These latter attacks are independent of focus-of-attention models and are based on image-independent guessing patterns. Our results show that automated attacks, which are easier to arrange than human-seeded attacks and are more scalable to systems that use multiple images, require serious consideration when deploying basic PassPoints-style graphical passwords.
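
    The flavor of a click-order heuristic is easy to convey in code. The sketch below is a loose illustration, not the paper's graph-based dictionary algorithm: it enumerates five-point click sets that lie near a common line, drawn from a hypothetical list of salient points:

        import itertools
        import numpy as np

        def collinear_click_sets(points, tol=2.0):
            """Enumerate 5-point click sets lying close to a common line."""
            entries = []
            for combo in itertools.combinations(points, 5):
                pts = np.asarray(combo, dtype=float)
                centered = pts - pts.mean(axis=0)
                # The smallest singular value measures deviation from a line.
                if np.linalg.svd(centered, compute_uv=False)[-1] < tol:
                    entries.append(combo)
            return entries

        # Hypothetical salient points (e.g., output of a visual-attention model).
        points = [(10, 10), (20, 20), (30, 31), (40, 40), (50, 49), (12, 44)]
        print(len(collinear_click_sets(points)))   # 1 near-collinear set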

  • Face Matching and Retrieval Using Soft Biometrics

    Publication Year: 2010, Page(s): 406 - 415
    Cited by: Papers (27)

    Soft biometric traits embedded in a face (e.g., gender and facial marks) are ancillary information and are not fully distinctive by themselves in face-recognition tasks. However, this information can be explicitly combined with the face matching score to improve overall face-recognition accuracy. Moreover, in certain application domains, e.g., visual surveillance, where a face image is occluded or captured in an off-frontal pose, soft biometric traits can provide even more valuable information for face matching or retrieval. Facial marks can also be useful for differentiating identical twins whose global facial appearances are very similar. The similarities found from soft biometrics can also serve as a source of evidence in courts of law because they are more descriptive than the numerical matching scores generated by a traditional face matcher. We propose to utilize demographic information (e.g., gender and ethnicity) and facial marks (e.g., scars, moles, and freckles) to improve face image matching and retrieval performance. An automatic facial mark detection method has been developed that uses (1) the active appearance model for locating primary facial features (e.g., eyes, nose, and mouth), (2) Laplacian-of-Gaussian blob detection, and (3) morphological operators. Experimental results based on the FERET database (426 images of 213 subjects) and two mugshot databases from the forensic domain (1225 images of 671 subjects and 10 000 images of 10 000 subjects, respectively) show that the use of soft biometric traits is able to improve the face-recognition performance of a state-of-the-art commercial matcher.
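
    The middle step of the mark detector is standard enough to sketch. Assuming a normalized grayscale face region, Laplacian-of-Gaussian filtering followed by thresholding yields blob candidates (the active-appearance-model localization and morphological post-processing are omitted):

        import numpy as np
        from scipy import ndimage

        def log_blob_candidates(face, sigma=3.0, thresh=4.0):
            """Laplacian-of-Gaussian response; strong extrema are candidate marks."""
            response = -ndimage.gaussian_laplace(face.astype(float), sigma=sigma)
            labels, n = ndimage.label(response > thresh)
            # Centroids of the connected high-response regions.
            return ndimage.center_of_mass(response, labels, range(1, n + 1))

        face = np.zeros((64, 64))
        face[30:33, 40:43] = 255.0     # stand-in for a small facial mark
        print(log_blob_candidates(face))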

  • Face Recognition in Global Harmonic Subspace

    Publication Year: 2010, Page(s): 416 - 424
    Cited by: Papers (1)

    In this paper, a novel pattern recognition scheme, global harmonic subspace analysis (GHSA), is developed for face recognition. In the proposed scheme, global harmonic features are extracted at the semantic scale to capture the 2-D semantic spatial structure of a face image. Laplacian Eigenmap is applied to discriminate faces in their global harmonic subspace. Experimental results on the Yale and PIE face databases show that the proposed GHSA scheme achieves an improvement in face recognition accuracy compared with conventional subspace approaches, and a further investigation shows that the proposed GHSA scheme is impressively robust to noise.

  • Regional Registration for Expression Resistant 3-D Face Recognition

    Publication Year: 2010, Page(s): 425 - 440
    Cited by: Papers (18)

    Biometric identification from three-dimensional (3-D) facial surface characteristics has become popular, especially in high-security applications. In this paper, we propose a fully automatic, expression-insensitive 3-D face recognition system. Surface deformations due to facial expressions are a major problem in 3-D face recognition. The proposed approach deals with such challenging conditions in several ways. First, we employ a fast and accurate region-based registration scheme that uses common region models, which make it possible to establish correspondence with all the gallery samples in a single registration pass. Second, we utilize curvature-based 3-D shape descriptors. Last, we apply statistical feature extraction methods. Since all the 3-D facial features are regionally registered to the same generic facial component, subspace construction techniques may be employed, and we show that linear discriminant analysis significantly boosts identification accuracy. We demonstrate the recognition ability of our system using the multiexpression Bosphorus database and the most commonly used 3-D face database, the Face Recognition Grand Challenge (FRGCv2). Our experimental results show that on both databases we obtain performance comparable to the best rank-1 correct classification rates reported in the literature so far: 98.19% for the Bosphorus and 97.51% for the FRGCv2 database. We have also carried out the standard receiver operating characteristic (ROC III) experiment for the FRGCv2 database; at an FAR of 0.1%, the verification rate was 86.09%. This shows that model-based registration is beneficial in identification scenarios where speed-up is important, whereas for verification one-to-one registration can be more beneficial.

  • Plastic Surgery: A New Dimension to Face Recognition

    Publication Year: 2010, Page(s): 441 - 448
    Cited by: Papers (30)

    Advances in, and the affordability of, plastic surgery are making such procedures increasingly popular. Facial plastic surgery can be reconstructive, to correct facial feature anomalies, or cosmetic, to improve appearance. Both corrective and cosmetic surgeries alter the original facial information to a large extent, thereby posing a great challenge for face recognition algorithms. The contributions of this research are 1) preparing a face database of 900 individuals who have undergone plastic surgery, and 2) providing an analytical and experimental underpinning of the effect of plastic surgery on face recognition algorithms. The results on the plastic surgery database suggest that this is an arduous research challenge and that current state-of-the-art face recognition algorithms are unable to provide acceptable levels of identification performance. It is therefore imperative to initiate a research effort so that future face recognition systems can address this important problem.

  • Laser Doppler Vibrometry Measures of Physiological Function: Evaluation of Biometric Capabilities

    Publication Year: 2010, Page(s): 449 - 460
    Cited by: Papers (7)

    A novel approach for remotely sensing mechanical cardiovascular activity for use as a biometric marker is proposed. Laser Doppler vibrometry (LDV) is employed to sense vibrations on the surface of the skin above the carotid artery related to arterial wall movements associated with the central blood pressure pulse. Carotid LDV signals are recorded using noncontact methods, and the resulting unobtrusiveness is a major benefit of this technique. Several recognition methods are proposed that use the temporal and/or spectral information in the signal to assess biometric performance both on an intrasession basis and on an intersession basis, where LDV measurements were acquired from the same subjects after delays ranging from one week to six months. A waveform decomposition method that utilizes principal component analysis is used to model the signal in the time domain. Authentication testing for this approach produces an equal-error rate of 0.5% for intrasession testing. However, performance degrades substantially for intersession testing, requiring a more robust approach to modeling. Improved performance is obtained using techniques based on time-frequency decomposition, incorporating a method for extracting informative components. Biometric fusion methods, including data fusion and information fusion, are applied to train models using data from multiple sessions. As currently implemented, this approach yields an intersession equal-error rate of 6.3%.
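
    The equal-error rate quoted above is the standard figure of merit and is simple to compute from genuine and impostor score sets; a generic sketch (independent of the paper's LDV signal modeling):

        import numpy as np

        def equal_error_rate(genuine, impostor):
            """Threshold sweep: EER is where false-accept and false-reject cross."""
            thresholds = np.sort(np.concatenate([genuine, impostor]))
            far = np.array([(impostor >= t).mean() for t in thresholds])
            frr = np.array([(genuine < t).mean() for t in thresholds])
            i = np.argmin(np.abs(far - frr))
            return (far[i] + frr[i]) / 2

        rng = np.random.default_rng(0)
        genuine = rng.normal(2.0, 1.0, 500)    # same-subject comparison scores
        impostor = rng.normal(0.0, 1.0, 500)   # different-subject scores
        print(f"EER = {equal_error_rate(genuine, impostor):.3f}")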

  • Addressing Missing Values in Kernel-Based Multimodal Biometric Fusion Using Neutral Point Substitution

    Publication Year: 2010, Page(s): 461 - 469
    Cited by: Papers (2)

    In multimodal biometric information fusion, it is common to encounter missing modalities for which matching cannot be performed, which implies that scores will be missing at the match score level. We address the multimodal fusion problem involving missing modalities (scores) using support vector machines (SVMs) with the neutral point substitution (NPS) method. The approach starts by processing each modality using a kernel. When a modality is missing, it is substituted at the kernel level by one that is unbiased with regard to the classification, called a neutral point. Critically, unlike conventional missing-data substitution methods, explicit calculation of neutral points may be omitted by virtue of their implicit incorporation within the SVM training framework. Experiments based on the publicly available Biosecure DS2 multimodal (scores) data set show that the SVM-NPS approach achieves very good generalization performance compared to sum-rule fusion, especially with severely missing modalities.

  • On the Dynamic Selection of Biometric Fusion Algorithms

    Publication Year: 2010, Page(s): 470 - 479
    Cited by: Papers (10)

    Biometric fusion consolidates the output of multiple biometric classifiers to render a decision about the identity of an individual. We consider the problem of designing a fusion scheme when 1) the number of training samples is limited, thereby affecting the use of a purely density-based scheme and the likelihood ratio test statistic; 2) the output of multiple matchers yields conflicting results; and 3) the use of a single fusion rule may not be practical due to the diversity of scenarios encountered in the probe dataset. To address these issues, a dynamic reconciliation scheme for fusion rule selection is proposed. The contribution of this paper is two-fold: 1) the design of a sequential fusion technique that uses the likelihood ratio test statistic in conjunction with a support vector machine classifier to account for errors in the former; and 2) the design of a dynamic selection algorithm that unifies the constituent classifiers and fusion schemes in order to optimize both verification accuracy and computational cost. A case study in multiclassifier face recognition suggests that the proposed algorithm can address the issues listed above. Indeed, the proposed method performs well even in the presence of confounding covariate factors, indicating its potential for large-scale face recognition.
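
    The likelihood ratio statistic at the core of the sequential technique can be sketched generically. Gaussian score densities are assumed here purely for brevity; the paper's point is precisely that such density estimates are unreliable with few training samples, which is why an SVM backs the statistic up:

        import numpy as np
        from scipy.stats import norm

        def log_likelihood_ratio(scores, gen_params, imp_params):
            """Sum of per-matcher log-likelihood ratios for a score vector.
            gen_params / imp_params: per-matcher (mean, std) under each hypothesis."""
            llr = 0.0
            for s, (mg, sg), (mi, si) in zip(scores, gen_params, imp_params):
                llr += norm.logpdf(s, mg, sg) - norm.logpdf(s, mi, si)
            return llr   # accept if above a threshold tuned on validation data

        gen = [(2.0, 0.8), (1.5, 1.0)]   # assumed genuine-score parameters
        imp = [(0.0, 1.0), (0.0, 1.2)]   # assumed impostor-score parameters
        print(log_likelihood_ratio(np.array([1.8, 1.1]), gen, imp))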

  • JPEG Error Analysis and Its Applications to Digital Image Forensics

    Publication Year: 2010, Page(s): 480 - 491
    Cited by: Papers (27)

    JPEG is one of the most extensively used image formats, and understanding its inherent characteristics may play a useful role in digital image forensics. In this paper, we introduce JPEG error analysis to the study of image forensics. The main errors in JPEG are quantization, rounding, and truncation errors. By theoretically analyzing the effects of these errors on single and double JPEG compression, we have developed three novel schemes for image forensics: identifying whether a bitmap image has previously been JPEG compressed, estimating the quantization steps of a JPEG image, and detecting the quantization table of a JPEG image. Extensive experimental results show that our new methods significantly outperform existing techniques, especially for small images. We also show that the new method can reliably detect JPEG image blocks as small as 8 × 8 pixels that were compressed with quality factors as high as 98. This performance is important for analyzing and locating small tampered regions within a composite image.
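
    One of the three tasks, quantization-step estimation, can be illustrated crudely: in a decompressed JPEG, blockwise DCT coefficients cluster near integer multiples of the quantization step, so a periodicity score over candidate steps recovers it. A rough sketch of that idea, not the paper's error-analysis scheme:

        import numpy as np
        from scipy.fftpack import dct

        def estimate_quant_step(img, u=1, v=1, max_step=32):
            """Estimate the quantization step of DCT mode (u, v) from how
            tightly coefficients cluster at multiples of a candidate step."""
            h, w = img.shape[0] - img.shape[0] % 8, img.shape[1] - img.shape[1] % 8
            coeffs = []
            for i in range(0, h, 8):
                for j in range(0, w, 8):
                    block = img[i:i + 8, j:j + 8].astype(float)
                    d2 = dct(dct(block.T, norm="ortho").T, norm="ortho")
                    coeffs.append(d2[u, v])
            c = np.abs(np.array(coeffs))
            qs = np.arange(2, max_step)
            scores = np.array([np.mean(np.cos(2 * np.pi * c / q)) for q in qs])
            # Divisors of the true step score equally well; take the largest near-max.
            return int(qs[np.where(scores > 0.95 * scores.max())[0][-1]])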

  • Forensic Detection of Image Manipulation Using Statistical Intrinsic Fingerprints

    Publication Year: 2010, Page(s): 492 - 506
    Cited by: Papers (42)

    As the use of digital images has increased, so have the means and the incentive to create digital image forgeries. Accordingly, there is a great need for digital image forensic techniques capable of detecting image alterations and forged images. A number of image processing operations, such as histogram equalization and gamma correction, are equivalent to pixel value mappings. In this paper, we show that pixel value mappings leave behind statistical traces, which we refer to as a mapping's intrinsic fingerprint, in an image's pixel value histogram. We then propose forensic methods for detecting general forms of globally and locally applied contrast enhancement, as well as a method for identifying the use of histogram equalization, by searching for the identifying features of each operation's intrinsic fingerprint. Additionally, we propose a method to detect the global addition of noise to a previously JPEG-compressed image by observing that the intrinsic fingerprint of a specific mapping will be altered if it is applied to an image's pixel values after the addition of noise. Through a number of simulations, we test the efficacy of each proposed forensic technique. Our simulation results show that, aside from exceptional cases, all of our detection methods correctly detect the use of their designated image processing operation with a probability of 99% given a false alarm probability of 7% or less.
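
    The core observation, that pixel-value mappings punch peak-and-gap artifacts into the histogram, suggests a bare-bones detector: measure high-frequency energy in the histogram's DFT. A simplified sketch of that idea with an illustrative score, not the paper's calibrated detectors:

        import numpy as np

        def fingerprint_score(img):
            """Ratio of high- to low-frequency energy in the pixel-value
            histogram's DFT; mapping-induced gaps/peaks inflate it."""
            hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
            spectrum = np.abs(np.fft.fft(hist / max(hist.sum(), 1)))
            return spectrum[64:129].mean() / (spectrum[1:64].mean() + 1e-12)

        rng = np.random.default_rng(0)
        img = rng.integers(0, 256, size=(256, 256))
        enhanced = np.clip(255 * (img / 255.0) ** 0.6, 0, 255).astype(np.uint8)
        # Gamma correction (a pixel-value mapping) raises the score markedly.
        print(fingerprint_score(img), fingerprint_score(enhanced))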

  • Estimation of Image Rotation Angle Using Interpolation-Related Spectral Signatures With Application to Blind Detection of Image Forgery

    Publication Year: 2010, Page(s): 507 - 517
    Cited by: Papers (9)

    Motivated by the image rescaling estimation method proposed by Gallagher (2nd Canadian Conf. Computer & Robot Vision, 2005: 65-72), we develop an image rotation angle estimator based on the relations between the rotation angle and the frequencies at which peaks due to interpolation occur in the spectrum of the image's edge map. We then use rescaling/rotation detection and parameter estimation to detect fake objects inserted into images. When a forged image contains areas from different sources, or from another part of the same image, rescaling and/or rotation are often involved, and in these geometric operations interpolation is a necessary step. By dividing the image into blocks, detecting traces of rescaling and rotation in each block, and estimating the parameters, we can effectively reveal the forged areas in an image that have been rescaled and/or rotated. If multiple geometric operations are involved, the different processing sequences, i.e., repeated zooming, repeated rotation, rotation then zooming, or zooming then rotation, may be distinguished from the different behaviors of the peaks due to rescaling and rotation. This may also provide a useful clue for image authentication.

  • Centered Hyperspherical and Hyperellipsoidal One-Class Support Vector Machines for Anomaly Detection in Sensor Networks

    Publication Year: 2010, Page(s): 518 - 533
    Cited by: Papers (28)

    Anomaly detection in wireless sensor networks is an important challenge for tasks such as intrusion detection and monitoring applications. This paper proposes two approaches to detecting anomalies in sensor network measurements. The first is a linear-programming-based hyperellipsoidal formulation, called a centered hyperellipsoidal support vector machine (CESVM). While the CESVM approach has advantages in terms of its flexibility in parameter selection and its computational complexity, it offers limited scope for distributed implementation in sensor networks. In our second approach, we propose a distributed anomaly detection algorithm for sensor networks using a one-class quarter-sphere support vector machine (QSSVM). Here, a hypersphere is found that captures normal data vectors in a higher-dimensional space for each sensor node. Summary information about the hyperspheres is then communicated among the nodes to arrive at a global hypersphere, which the sensors use to identify anomalies in their measurements. We show that the CESVM and QSSVM formulations both achieve high detection accuracy on a variety of real and synthetic data sets. Our evaluation of the distributed algorithm using QSSVM reveals that it detects anomalies with accuracy comparable to a centralized approach but with less communication overhead.
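
    In its simplest form, the centered one-class idea reduces to fixing a radius that encloses most of the normal training vectors and flagging anything outside it. A minimal input-space sketch (the paper works in a kernel feature space and adds the distributed hypersphere aggregation):

        import numpy as np

        def fit_radius(train, nu=0.05):
            """Radius of a centered hypersphere capturing a (1 - nu) fraction
            of the training vectors."""
            return np.quantile(np.linalg.norm(train, axis=1), 1 - nu)

        def is_anomaly(x, radius):
            return np.linalg.norm(x, axis=-1) > radius

        rng = np.random.default_rng(0)
        normal = np.abs(rng.normal(0.0, 1.0, size=(500, 3)))   # sensor readings
        r = fit_radius(normal)
        print(is_anomaly(np.array([0.5, 0.4, 0.2]), r),    # False: within radius
              is_anomaly(np.array([4.0, 5.0, 6.0]), r))    # True: anomalous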

  • Audio Authenticity: Detecting ENF Discontinuity With High Precision Phase Analysis

    Publication Year: 2010, Page(s): 534 - 543
    Cited by: Papers (19)

    This paper describes a forensic tool for assessing audio authenticity. The proposed method is based on detecting phase discontinuities in the power grid signal; this signal, referred to as the electric network frequency (ENF), is sometimes embedded in audio signals when the recording equipment is connected to an electrical outlet or when certain microphones are in an ENF magnetic field. After down-sampling and band-filtering the audio around the nominal value of the ENF, the result can be treated as a single tone, so a high-precision Fourier analysis can be used to estimate its phase. The estimated phase provides a visual aid for locating editing points (signaled by abrupt phase changes) and inferring the type of audio editing (insertion or removal of audio segments). From the estimated values, a feature is computed to quantify the discontinuity of the ENF phase, allowing an automatic decision concerning the authenticity of the audio evidence. The theoretical background is presented along with practical implementation issues, and the technique's performance is evaluated on digitally edited audio signals.
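
    The pipeline described, downsampling, narrowband filtering around the nominal ENF, then blockwise phase estimation, can be sketched as follows for a 60-Hz grid; a plain FFT-bin phase stands in for the paper's high-precision Fourier estimator:

        import numpy as np
        from scipy.signal import butter, filtfilt, resample_poly

        def enf_phase_track(audio, fs, nominal=60.0, block_s=1.0):
            """Per-block ENF phase; abrupt jumps suggest editing points."""
            x = resample_poly(audio, 400, int(fs))        # downsample to 400 Hz
            b, a = butter(2, [(nominal - 2) / 200.0, (nominal + 2) / 200.0],
                          btype="band")
            x = filtfilt(b, a, x)                         # isolate the ENF tone
            n = int(block_s * 400)
            k = int(round(nominal * n / 400))             # FFT bin of the ENF
            phases = [np.angle(np.fft.rfft(x[i:i + n] * np.hanning(n))[k])
                      for i in range(0, len(x) - n + 1, n)]
            return np.unwrap(phases)

        fs = 8000
        t = np.arange(4 * fs) / fs
        audio = np.sin(2 * np.pi * 60 * t + np.pi * (t > 2))  # splice at t = 2 s
        print(np.round(np.diff(enf_phase_track(audio, fs)), 2))  # one diff near ±pi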

  • Detecting and Extracting the Photo Composites Using Planar Homography and Graph Cut

    Publication Year: 2010, Page(s): 544 - 555
    Cited by: Papers (10)

    With the advancement of photo and video editing tools, it has become fairly easy to tamper with photos and videos, commonly by inserting visually plausible composites into target images and videos. In this paper, we propose an automatic fake region detection method based on the planar homography constraint, and an automatic extraction method using graph cut with online feature/parameter selection. Our method takes two steps: 1) targeting and 2) segmentation. First, the fake region is located roughly by enforcing the planar homography constraint. Second, the fake object is segmented via graph cut, initialized by the targeting step. To achieve automatic segmentation, the optimal features and parameters for graph cut are selected dynamically via the proposed online feature/parameter selection. The method's performance is evaluated on both semisimulated and real images. It works efficiently on images as long as there are regions satisfying the planar homography constraint, including image pairs captured by approximately cocentered cameras, image pairs photographing planar or distant scenes, and a single image with duplications.
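
    The targeting step's constraint is easy to demonstrate with OpenCV: fit a homography to matched keypoints between two views and flag correspondences with a large transfer error, which is where an inserted composite betrays itself. A compressed sketch assuming matches are already available:

        import numpy as np
        import cv2

        def inconsistent_points(pts_src, pts_dst, thresh=3.0):
            """RANSAC homography fit; matches violating it are fake-region cues."""
            H, _ = cv2.findHomography(pts_src, pts_dst, cv2.RANSAC, thresh)
            proj = cv2.perspectiveTransform(pts_src.reshape(-1, 1, 2), H)
            err = np.linalg.norm(proj.reshape(-1, 2) - pts_dst, axis=1)
            return err > thresh

        # Hypothetical matched keypoints related by a plane-induced homography.
        pts_src = (np.random.rand(20, 2) * 100).astype(np.float32)
        H_true = np.array([[1, 0.1, 5], [0, 1, -3], [0, 0, 1]], np.float32)
        pts_dst = cv2.perspectiveTransform(pts_src.reshape(-1, 1, 2),
                                           H_true).reshape(-1, 2)
        pts_dst[:3] += 25.0            # three "composited" points off the plane
        print(inconsistent_points(pts_src, pts_dst))  # True for the first three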

  • Noninteractive Pairwise Key Establishment for Sensor Networks

    Publication Year: 2010, Page(s): 556 - 569
    Cited by: Papers (11)

    As a security primitive, key establishment plays the most crucial role in the design of security mechanisms. Unfortunately, the resource limitations of sensor nodes pose a great challenge for designing an efficient and effective key establishment scheme for wireless sensor networks (WSNs). In spite of the fact that many elegant and clever solutions have been proposed, no practical key establishment scheme has emerged. In this paper, a ConstrAined Random Perturbation-based pairwise keY establishment (CARPY) scheme and its variant, CARPY+, are presented for WSNs. Compared to all existing schemes, which satisfy only some of the requirements of the so-called sensor-key criteria, including (1) resilience to the adversary's intervention, (2) directed and guaranteed key establishment, (3) resilience to network configurations, (4) efficiency, and (5) resilience to dynamic node deployment, the proposed CARPY+ scheme meets all the requirements. In particular, to the best of our knowledge, CARPY+ is the first noninteractive key establishment scheme designed for WSNs with great resilience to a large number of node compromises. We examine the CARPY and CARPY+ schemes from both theoretical and experimental perspectives, and have implemented them on TelosB-compatible motes to evaluate the corresponding performance and overhead.

  • Online Anonymity Protection in Computer-Mediated Communication

    Publication Year: 2010, Page(s): 570 - 580
    Cited by: Papers (1)

    In any situation where a set of personal attributes is revealed, there is a chance that the revealed data can be linked back to its owner. Examples of such situations include publishing user profile microdata or information about social ties, sharing profile information on social networking sites, and revealing personal information in computer-mediated communication (CMC). Measuring user anonymity is the first step toward ensuring that the identity of the owner of revealed information cannot be inferred. Most current measures of anonymity ignore important factors such as the probabilistic nature of identity inference, the inferrer's outside knowledge, and the correlation between user attributes. Furthermore, in the social computing domain, variations in personal information and various levels of information exchange among users make the problem more complicated. We present an information-entropy-based, realistic estimation of the user anonymity level to deal with these issues in social computing, in an effort to help predict identity inference risks. We then address implementation issues of online protection by proposing complexity reduction methods that take advantage of basic information entropy properties. Our analysis and delay estimation based on experimental data show that our methods are viable, effective, and efficient in facilitating privacy in social computing and synchronous CMC.
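
    The entropy measure underlying the approach fits in a few lines: given the inferrer's posterior over candidate identities after seeing the revealed attributes, Shannon entropy in bits gauges the anonymity level. A toy sketch with hypothetical numbers:

        import numpy as np

        def anonymity_bits(posterior):
            """Shannon entropy (bits) of the identity posterior;
            higher entropy means a larger effective anonymity set."""
            p = np.asarray(posterior, dtype=float)
            p = p[p > 0] / p.sum()
            return float(-(p * np.log2(p)).sum())

        # Six users match the revealed attributes; a uniform posterior gives
        # log2(6) ~ 2.58 bits, while outside knowledge skews it lower.
        print(anonymity_bits(np.ones(6) / 6))
        print(anonymity_bits([0.7, 0.1, 0.1, 0.05, 0.03, 0.02]))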

  • Web Spam Detection: New Classification Features Based on Qualified Link Analysis and Language Models

    Publication Year: 2010, Page(s): 581 - 590
    Cited by: Papers (2)

    Web spam is a serious problem for search engines because the quality of their results can be severely degraded by the presence of such pages. In this paper, we present an efficient spam detection system based on a classifier that combines new link-based features with language-model (LM)-based ones. These features relate not only to quantitative data extracted from Web pages but also to qualitative properties, mainly of the page links. We consider, for instance, the ability of a search engine to find, using the information provided by a page for a given link, the page that the link actually points at; this can be regarded as indicative of the link's reliability. We also check the coherence between a page and another page pointed at by any of its links, since two pages linked by a hyperlink should be semantically related, at least by a weak contextual relation. Thus, we apply an LM approach to different sources of information from a Web page that belong to the context of a link, in order to provide high-quality indicators of Web spam. Specifically, we apply the Kullback-Leibler divergence to different combinations of these sources of information in order to characterize the relationship between two linked pages. The result is a system that significantly improves the detection of Web spam using fewer features, on two large public datasets, WEBSPAM-UK2006 and WEBSPAM-UK2007.
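
    The language-model feature is straightforward to reproduce in spirit: compare the smoothed unigram distribution of a link's context with that of the target page; a large divergence suggests an unreliable, possibly spammy link. An illustrative sketch (the paper combines several such sources of link context):

        from collections import Counter
        import math

        def kl_divergence(text_p, text_q, eps=1e-6):
            """KL(P || Q) between smoothed unigram distributions of two texts."""
            p = Counter(text_p.lower().split())
            q = Counter(text_q.lower().split())
            vocab = set(p) | set(q)
            n_p, n_q = sum(p.values()), sum(q.values())
            kl = 0.0
            for w in vocab:
                pw = (p[w] + eps) / (n_p + eps * len(vocab))
                qw = (q[w] + eps) / (n_q + eps * len(vocab))
                kl += pw * math.log(pw / qw, 2)
            return kl

        anchor_context = "cheap flights to european capitals book now"
        target_page = "casino poker bonus jackpot slots free spins"
        print(kl_divergence(anchor_context, target_page))  # large: incoherent link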

  • Joint Feature Correspondences and Appearance Similarity for Robust Visual Object Tracking

    Publication Year: 2010, Page(s): 591 - 606
    Cited by: Papers (7)

    A novel visual object tracking scheme is proposed that jointly uses point feature correspondences and object appearance similarity. For point-feature-based tracking, we propose a candidate tracker that simultaneously exploits two separate sets of point feature correspondences, one in the foreground and one in the surrounding background, where background features indicate occlusions. Feature points in these two sets are dynamically maintained. For object-appearance-based tracking, we propose a candidate tracker based on an enhanced anisotropic mean shift with a fully tunable (five degrees of freedom) bounding box that is partially guided by the feature point tracker. Both candidate trackers contain a reinitialization process to reset the tracker and prevent accumulated tracking error from propagating across frames. In addition, a novel online learning method is introduced to the enhanced mean-shift-based candidate tracker: the reference object distribution is updated in each time interval when there is an indication of stable and reliable tracking without background interference. By dynamically updating the reference object model, tracking is further improved through a more accurate object appearance similarity measure. An optimal selection criterion is applied to produce the final tracker from the results of the candidate trackers. Experiments have been conducted on several videos containing a range of complex scenarios; the proposed scheme is evaluated using three objective criteria and compared with two existing trackers. Our results show that the proposed scheme is very robust and yields a marked improvement in tracking drift, tightness, and accuracy of the tracked bounding boxes, especially for complex video scenarios containing long-term partial occlusions or intersections, deformation, or background clutter with color distributions similar to the foreground object.

  • IEEE Transactions on Information Forensics and Security EDICS

    Publication Year: 2010, Page(s): 607
  • IEEE Transactions on Information Forensics and Security Information for authors

    Publication Year: 2010, Page(s): 608 - 609

Aims & Scope

The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, and surveillance, as well as systems applications that incorporate these features.

Meet Our Editors

Editor-in-Chief
Mauro Barni
University of Siena, Italy