Information Forensics and Security, IEEE Transactions on

Issue 6 • Date June 2013

Displaying Results 1 - 25 of 29
  • Front Cover

    Publication Year: 2013 , Page(s): C1
    PDF (279 KB)
    Freely Available from IEEE
  • IEEE Transactions on Information Forensics and Security publication information

    Publication Year: 2013 , Page(s): C2
    PDF (130 KB)
    Freely Available from IEEE
  • Table of contents

    Publication Year: 2013 , Page(s): 831 - 832
    PDF (179 KB)
    Freely Available from IEEE
  • Table of contents

    Publication Year: 2013 , Page(s): 833 - 834
    PDF (179 KB)
    Freely Available from IEEE
  • Guest Editorial: Special issue on privacy and trust management in cloud and distributed systems

    Publication Year: 2013 , Page(s): 835 - 837
    PDF (105 KB) | HTML
    Freely Available from IEEE
  • Utility-Privacy Tradeoffs in Databases: An Information-Theoretic Approach

    Publication Year: 2013 , Page(s): 838 - 852
    Cited by:  Papers (10)
    PDF (2958 KB) | HTML

    Ensuring the usefulness of electronic data sources while providing necessary privacy guarantees is an important unsolved problem. This problem drives the need for an analytical framework that can quantify the privacy of personally identifiable information while still providing a quantifiable benefit (utility) to multiple legitimate information consumers. This paper presents an information-theoretic framework that promises an analytical model guaranteeing tight bounds on how much utility is possible for a given level of privacy and vice versa. Specific contributions include: 1) stochastic data models for both categorical and numerical data; 2) utility-privacy tradeoff regions and the encoding (sanitization) schemes achieving them for both classes, together with their practical relevance; and 3) modeling of prior knowledge at the user and/or data source and optimal encoding schemes for both cases.
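
    As an illustration of the kind of tradeoff the paper formalizes (not the authors' exact model), the following Python sketch measures privacy leakage as the mutual information I(X;Y) between a private attribute X and the released value Y, and distortion as the probability that the released value differs from the measured one. The toy joint distribution and the randomized-response channel family are assumptions made for the example.

      import numpy as np

      # Toy joint distribution of a private attribute X (rows) and the
      # measured attribute W (columns) that the database wants to release.
      p_xw = np.array([[0.25, 0.10],
                       [0.05, 0.60]])

      def leakage_and_distortion(channel):
          # channel[w, y] = Pr(release Y = y | observed W = w).
          # Privacy leakage = I(X; Y) in bits; distortion = Pr(Y != W).
          p_xy = p_xw @ channel
          p_x = p_xy.sum(axis=1, keepdims=True)
          p_y = p_xy.sum(axis=0, keepdims=True)
          ratio = np.where(p_xy > 0, p_xy / (p_x * p_y), 1.0)
          leak = float((p_xy * np.log2(ratio)).sum())
          p_w = p_xw.sum(axis=0)
          dist = float(p_w[0] * channel[0, 1] + p_w[1] * channel[1, 0])
          return leak, dist

      # Randomized-response channels: flip the released value with prob. e.
      # More noise means less leakage but more distortion, tracing a tradeoff.
      for e in (0.0, 0.1, 0.3, 0.5):
          ch = np.array([[1 - e, e], [e, 1 - e]])
          print(e, leakage_and_distortion(ch))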

  • Automatic General-Purpose Sanitization of Textual Documents

    Publication Year: 2013 , Page(s): 853 - 862
    Cited by:  Papers (3)
    PDF (962 KB) | HTML

    The advent of new information-sharing technologies has led society to a scenario where thousands of textual documents are published every day. The presence of confidential information in many of these documents motivates the use of measures to hide sensitive data before publication, which is precisely the goal of document sanitization. Even though methods to assist the sanitization process have been proposed, most of them focus on detecting specific types of sensitive entities for concrete domains, lacking generality and requiring user supervision. Moreover, to hide sensitive terms, most approaches simply remove them, a measure that hampers the utility of the sanitized document. This paper presents a general-purpose sanitization method that, based on information theory and exploiting knowledge bases, detects and hides sensitive textual information while preserving its meaning. Our proposal works in an automatic and unsupervised way and can be applied to heterogeneous documents, which makes it especially suitable for environments with massive and heterogeneous information-sharing needs. Evaluation results show that our method outperforms strategies based on trained classifiers in detection recall, while better retaining the document's utility than term-suppression methods.
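
    A minimal sketch of the information-theoretic detection step, assuming term informativeness is estimated as IC(t) = -log2 Pr(t) from a background corpus. The corpus, threshold, and plain redaction used here are illustrative; the paper's method generalizes terms using knowledge bases rather than simply removing them.

      import math
      import re
      from collections import Counter

      # Hypothetical background corpus; in practice, term probabilities would
      # come from a large reference corpus or web hit counts.
      corpus = ("the patient was admitted to the hospital and the doctor noted "
                "that the patient lives in the city and works at the factory")
      counts = Counter(corpus.split())
      total = sum(counts.values())

      def information_content(term):
          # IC(t) = -log2 Pr(t); rare (highly informative) terms are the ones
          # most likely to disclose sensitive information.
          freq = counts.get(term.lower(), 0) + 1          # add-one smoothing
          return -math.log2(freq / (total + len(counts)))

      def sanitize(text, threshold=5.0):
          tokens = re.findall(r"\w+|\W+", text)
          return "".join("[REDACTED]" if t.isalnum() and
                         information_content(t) > threshold else t
                         for t in tokens)

      print(sanitize("the patient John Smith was admitted to Mercy Hospital"))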

  • A Novel Privacy Preserving Location-Based Service Protocol With Secret Circular Shift for K-NN Search

    Publication Year: 2013 , Page(s): 863 - 873
    Cited by:  Papers (3)
    PDF (1864 KB) | HTML

    Location-based services (LBS) have boomed in recent years with the rapid growth of mobile devices and the emergence of the cloud computing paradigm. Among the challenges in establishing LBS, user privacy has become the most important concern. A successful privacy-preserving LBS must be secure and provide accurate query [e.g., k-nearest neighbor (k-NN)] results. In this work, we propose a private circular query protocol (PCQP) to address the privacy and accuracy issues of privacy-preserving LBS. The protocol combines a space-filling curve with a public-key homomorphic cryptosystem. First, we connect the points of interest (POIs) on a map to form a circular structure with the aid of a Moore curve. Then the homomorphism of the Paillier cryptosystem is used to perform secret circular shifts of POI-related information (POI-info) stored on the server side. Since the POI-info after shifting and the amount of shift are encrypted, LBS providers (e.g., servers) have no knowledge of the user's location during the query process. The protocol can resist correlation attacks and support a multiuser scenario as long as the prescribed secret circular shift is performed before each query; in other words, the robustness of the proposed protocol is comparable to that of a one-time-pad encryption scheme. As a result, the security level of the proposed protocol is close to perfect secrecy without the aid of a trusted third party, and simulation results show that the k-NN query accuracy rate of the proposed protocol is higher than 90% even when k is large.
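
    A toy sketch of the homomorphic ingredient: under Paillier encryption, multiplying ciphertexts adds the underlying plaintexts, so a server can apply a secret circular shift to an encrypted index without learning either the index or the shift. The textbook implementation below uses deliberately tiny primes and a hypothetical ring of 64 POIs purely for illustration; it is not the paper's full PCQP protocol, and it is in no way secure.

      import math, random

      # Minimal textbook Paillier with toy primes (illustration only).
      p, q = 1117, 1123
      n, n2 = p * q, (p * q) ** 2
      lam = math.lcm(p - 1, q - 1)
      g = n + 1
      mu = pow(lam, -1, n)                     # valid because g = n + 1

      def enc(m):
          r = random.randrange(2, n)
          while math.gcd(r, n) != 1:
              r = random.randrange(2, n)
          return (pow(g, m, n2) * pow(r, n, n2)) % n2

      def dec(c):
          return ((pow(c, lam, n2) - 1) // n * mu) % n

      # Secret circular shift over a ring of N POIs: the client sends
      # Enc(index); the server multiplies in Enc(shift) and returns the
      # result, never seeing index or shift in the clear.
      N = 64                                   # hypothetical ring size
      index, shift = 53, 20
      shifted = dec(enc(index) * enc(shift) % n2) % N
      print(shifted)                           # (53 + 20) % 64 == 9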

  • TrPF: A Trajectory Privacy-Preserving Framework for Participatory Sensing

    Publication Year: 2013 , Page(s): 874 - 887
    Cited by:  Papers (1)
    PDF (1961 KB) | HTML

    The ubiquity of cheap embedded sensors on mobile devices (e.g., cameras, microphones, and accelerometers) is enabling the emergence of participatory sensing applications. While participatory sensing can greatly benefit individuals and communities, the collection and analysis of participants' location and trajectory data may jeopardize their privacy. Existing proposals mostly focus on participants' location privacy, and few address trajectory privacy. Effective analysis of trajectories that contain spatial-temporal history information can reveal participants' whereabouts and related personal information. In this paper, we propose a trajectory privacy-preserving framework, named TrPF, for participatory sensing. Based on this framework, we improve the theoretical mix-zone model by incorporating the time factor from a graph-theoretic perspective. Finally, we analyze threat models with different background knowledge, evaluate the effectiveness of our proposal on the basis of information entropy, and compare its performance with previous trajectory privacy protections. The analysis and simulation results show that our proposal protects participants' trajectory privacy effectively, with lower information loss and cost than other proposals.
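
    The entropy-based evaluation mentioned above can be pictured with a single-user example (the posterior probabilities below are made up): the more uniform the attacker's belief about which exiting trajectory continues an entering one inside a mix-zone, the more bits of trajectory anonymity the mix-zone provides.

      import math

      # Attacker's posterior over which of four exiting trajectories
      # continues one entering user's trajectory (illustrative values).
      posterior = [0.50, 0.25, 0.15, 0.10]

      entropy = -sum(p * math.log2(p) for p in posterior if p > 0)
      print(f"trajectory anonymity: {entropy:.2f} bits "
            f"(max {math.log2(len(posterior)):.2f} bits for 4 candidates)")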

  • Enforcing Secure and Privacy-Preserving Information Brokering in Distributed Information Sharing

    Publication Year: 2013 , Page(s): 888 - 900
    Cited by:  Papers (1)
    PDF (2108 KB) | HTML

    Today's organizations have an increasing need for information sharing via on-demand access. Information brokering systems (IBSs) have been proposed to connect large-scale, loosely federated data sources via a brokering overlay, in which brokers make routing decisions to direct client queries to the requested data servers. Many existing IBSs assume that brokers are trusted and thus adopt only server-side access control for data confidentiality. However, the privacy of data location and data consumers can still be inferred from metadata (such as queries and access control rules) exchanged within the IBS, and little attention has been paid to protecting it. In this paper, we propose a novel approach to preserve the privacy of the multiple stakeholders involved in the information brokering process. We are among the first to formally define two privacy attacks, namely the attribute-correlation attack and the inference attack, and we propose two countermeasure schemes, automaton segmentation and query segment encryption, to securely share the routing decision-making responsibility among a selected set of brokering servers. Through comprehensive security analysis and experimental results, we show that our approach seamlessly integrates security enforcement with query routing to provide system-wide security at insignificant overhead.
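
    A rough sketch of the query-segment-encryption idea, under the assumption that each brokering server holds the key for exactly one segment of the query. The XPath segments, key handling, and use of Fernet below are illustrative simplifications rather than the paper's construction, and the example requires the cryptography package.

      from cryptography.fernet import Fernet

      # Hypothetical XPath query split into segments; each brokering server
      # holds the key for exactly one segment, so no single broker can
      # reconstruct the full query.
      segments = ["/hospital", "/patient[@state='PA']", "/diagnosis"]
      broker_keys = [Fernet.generate_key() for _ in segments]

      encrypted_query = [Fernet(k).encrypt(seg.encode())
                         for k, seg in zip(broker_keys, segments)]

      # Broker i can only open its own segment while routing the query onward.
      i = 1
      print(Fernet(broker_keys[i]).decrypt(encrypted_query[i]).decode())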

  • Distributed Architecture With Double-Phase Microaggregation for the Private Sharing of Biomedical Data in Mobile Health

    Publication Year: 2013 , Page(s): 901 - 910
    Cited by:  Papers (2)
    PDF (1739 KB) | HTML

    In this paper, we present the concept of double-phase microaggregation as an improvement of classical microaggregation for protecting privacy in distributed scenarios without fully trusted parties. We apply this new concept in the context of mobile health and show that a distributed architecture consisting of patients and several intermediate entities can use it to protect the privacy of patients whose data are released to third parties for secondary use. After recalling some fundamental concepts of statistical disclosure control and microaggregation, we detail the distributed architecture that allows the private gathering, storage, and sharing of biomedical data. We show that double-phase multivariate microaggregation properly fits the privacy-preservation needs of biomedical data in the distributed context of mobile health. Moreover, we show that double-phase microaggregation performs similarly to classical microaggregation in terms of information loss, disclosure risk, and correlation preservation, while avoiding the limitations of a centralized approach.
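
    For readers unfamiliar with the underlying operation, here is a very simplified sketch: microaggregation groups records and replaces each value with its group centroid, and the double-phase idea is imitated by running a coarser second pass over data already aggregated by an intermediate entity. The grouping rule and the values are illustrative, not the paper's algorithm.

      import numpy as np

      def microaggregate(records, k):
          # Simplified univariate microaggregation: sort, group into blocks
          # of k records, and replace each record with its block centroid.
          order = np.argsort(records)
          out = np.empty_like(records, dtype=float)
          for start in range(0, len(records), k):
              block = order[start:start + k]
              out[block] = records[block].mean()
          return out

      # Double-phase sketch: an intermediate entity first aggregates its own
      # patients' values (phase 1); a second pass runs over the partially
      # aggregated data before release to third parties (phase 2).
      heart_rates = np.array([61, 95, 72, 88, 70, 64, 90, 77.0])
      phase1 = microaggregate(heart_rates, k=2)
      phase2 = microaggregate(phase1, k=4)
      print(phase2)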

  • Comparative Study of Trust Modeling for Automatic Landmark Tagging

    Publication Year: 2013 , Page(s): 911 - 923
    PDF (2076 KB) | HTML

    Many images uploaded to social networks are related to travel, since people consider traveling an important event in their lives. However, a significant number of travel images on the Internet lack proper geographical annotations or tags, and in many cases images are tagged manually. One way to make this time-consuming manual tagging more efficient is to automatically propagate tags from a small set of tagged images to the larger set of untagged images. In this paper, we present a system for automatic geotag propagation in images based on the similarity between image content (famous landmarks) and context (associated geotags). In such a scenario, however, an incorrect or spam tag can damage the integrity and reliability of the automated propagation system. Therefore, for reliable geotag propagation, we suggest adopting a user trust model based on social feedback from the users of the photo-sharing system. We compare this socially driven approach with other user trust models via experiments and subjective testing on an image database of various famous landmarks. Results demonstrate that relying on user feedback is more efficient, since the number of propagated tags more than doubles without loss of accuracy compared to using other models or propagating without trust modeling.
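
    A simple way to picture a feedback-driven user trust model (an illustration only, not the paper's exact model): compute a beta-reputation-style trust value from accepted and rejected tag suggestions, and propagate a geotag only when the contributor's trust is high enough.

      # Beta-reputation-style trust from social feedback on a user's tags.
      def trust(accepted, rejected):
          return (accepted + 1) / (accepted + rejected + 2)

      def propagate_tag(tag, tagger_history, threshold=0.7):
          # Propagate a geotag to untagged images only if the contributing
          # user is sufficiently trusted (illustrative rule).
          t = trust(*tagger_history)
          return (tag, t) if t >= threshold else (None, t)

      print(propagate_tag("Eiffel Tower", (9, 1)))   # trusted contributor
      print(propagate_tag("Eiffel Tower", (2, 5)))   # filtered out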

  • LDTS: A Lightweight and Dependable Trust System for Clustered Wireless Sensor Networks

    Publication Year: 2013 , Page(s): 924 - 935
    Cited by:  Papers (3)
    PDF (2560 KB) | HTML

    Resource efficiency and dependability of a trust system are the most fundamental requirements for any wireless sensor network (WSN). However, existing trust systems developed for WSNs cannot satisfy these requirements because of their high overhead and low dependability. In this work, we propose a lightweight and dependable trust system (LDTS) for WSNs that employ clustering algorithms. First, a lightweight trust decision-making scheme is proposed based on the nodes' identities (roles) in the clustered WSN; it is well suited to such networks because it facilitates energy saving. Because feedback between cluster members (CMs) and between cluster heads (CHs) is eliminated, this approach can significantly improve system efficiency while reducing the effect of malicious nodes. More importantly, considering that CHs take on large amounts of data forwarding and communication tasks, a dependability-enhanced trust evaluation approach is defined for cooperation between CHs. This approach can effectively reduce networking consumption while guarding against malicious, selfish, and faulty CHs. Moreover, a self-adaptive weighting method is defined for trust aggregation at the CH level. This approach overcomes the limitations of traditional weighting methods for trust factors, in which weights are assigned subjectively. Theory as well as simulation results show that LDTS demands less memory and communication overhead than current typical trust systems for WSNs.
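
    The following sketch shows one plausible reading of "self-adaptive" weighting, offered purely as an illustration rather than the LDTS formula: trust factors that deviate from the consensus are automatically down-weighted instead of receiving fixed, subjectively chosen weights.

      import numpy as np

      # Hypothetical per-factor trust values a cluster head holds about a
      # peer (e.g., direct observation, base-station feedback, data trust).
      factors = np.array([0.82, 0.78, 0.35])     # the third is inconsistent

      # Self-adaptive weights: down-weight factors that deviate from the
      # consensus rather than fixing weights by hand (illustration only).
      deviation = np.abs(factors - np.median(factors))
      weights = 1.0 / (deviation + 1e-3)
      weights /= weights.sum()

      print("aggregated trust:", float(weights @ factors))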

  • Securing Online Reputation Systems Through Trust Modeling and Temporal Analysis

    Publication Year: 2013 , Page(s): 936 - 948
    Cited by:  Papers (1)
    PDF (2138 KB) | HTML

    With the rapid development of reputation systems in various online social networks, manipulations against such systems are evolving quickly. In this paper, we propose TATA, short for joint Temporal And Trust Analysis, a scheme that protects reputation systems from a new angle: the combination of time-domain anomaly detection and Dempster-Shafer theory-based trust computation. Real user attack data collected from a cyber competition is used to construct the test data set. Compared with two representative reputation schemes and our previous scheme, TATA achieves significantly better performance in identifying items under attack, detecting malicious users who insert dishonest ratings, and recovering reputation scores.
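
    For readers unfamiliar with the trust-computation ingredient, here is a minimal sketch of Dempster's rule of combination over a two-element frame {trustworthy, malicious}; the mass assignments are illustrative, and TATA's temporal anomaly analysis is not shown.

      from itertools import product

      # Frame of discernment: T = trustworthy, M = malicious; mass may also
      # sit on the whole frame TM, representing uncertainty.
      T, M = frozenset("T"), frozenset("M")
      TM = T | M

      def combine(m1, m2):
          # Dempster's rule: renormalize products of masses whose focal
          # elements intersect, discarding conflicting combinations.
          conflict = sum(m1[a] * m2[b] for a, b in product(m1, m2) if not (a & b))
          out = {T: 0.0, M: 0.0, TM: 0.0}
          for a, b in product(m1, m2):
              inter = a & b
              if inter:
                  out[inter] += m1[a] * m2[b]
          return {k: v / (1 - conflict) for k, v in out.items()}

      # Two independent observations about a rater, both leaning trustworthy.
      m1 = {T: 0.6, M: 0.1, TM: 0.3}
      m2 = {T: 0.5, M: 0.2, TM: 0.3}
      print(combine(m1, m2))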

  • A Decentralized Privacy Preserving Reputation Protocol for the Malicious Adversarial Model

    Publication Year: 2013 , Page(s): 949 - 962
    Cited by:  Papers (1)
    PDF (2789 KB) | HTML

    Users hesitate to submit negative feedback in reputation systems for fear of retaliation from the recipient. A privacy-preserving reputation protocol protects users by hiding their individual feedback and revealing only the aggregate reputation score. We present a privacy-preserving reputation protocol for the malicious adversarial model, in which malicious users actively attempt to learn the private feedback values of honest users as well as to disrupt the protocol. Our protocol does not require centralized entities, trusted third parties, or specialized platforms such as anonymous networks and trusted hardware. Moreover, our protocol is efficient, requiring a number of message exchanges that grows with the number of users in the protocol and in the environment.

  • Using Mussel-Inspired Self-Organization and Account Proxies to Obfuscate Workload Ownership and Placement in Clouds

    Publication Year: 2013 , Page(s): 963 - 972
    PDF (2164 KB) | HTML

    Recent research has provided evidence of how a malicious user can perform coresidence profiling and public-to-private IP mapping to target and exploit customers who share physical resources. These attacks rely on two steps: placing a resource on the target's physical machine and extraction. Our proposed solution, partly inspired by mussel self-organization, relies on user-account and workload clustering to mitigate coresidence profiling. Users with similar preferences and workload characteristics are mapped to the same cluster. To obfuscate the public-to-private IP map, each cluster is managed and accessed by an account proxy. Each proxy uses one public IP address, which is shared by all clustered users when accessing their instances, and maintains the mapping to private IP addresses. We describe the capabilities and attack paths an attacker needs for targeted coresidence and argue that our approach disrupts the critical steps in the attack path in most cases. We then perform a risk assessment to determine the likelihood that an individual user will be victimized given that a successful nondirected exploit has occurred. Our results suggest that, while possible, this event is highly unlikely.

  • Towards Trustworthy Resource Scheduling in Clouds

    Publication Year: 2013 , Page(s): 973 - 984
    Cited by:  Papers (2)
    PDF (1203 KB) | HTML

    Managing the allocation of cloud virtual machines to physical resources is a key requirement for the success of clouds. Current cloud schedulers consider neither the entire cloud infrastructure nor the overall user and infrastructure properties, which leads to major security, privacy, and resilience concerns. In this paper, we propose a novel cloud scheduler that considers both user requirements and infrastructure properties. We focus on assuring users that their virtual resources are hosted on physical resources matching their requirements, without requiring users to understand the details of the cloud infrastructure. As a proof of concept, we present a prototype built on OpenStack that implements the proposed scheduler. It also incorporates an implementation of our previous work on cloud trust management, which supplies the scheduler with the trust status of the cloud infrastructure.
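
    The matching step can be pictured as a property filter followed by a trust-based ranking; the host records, field names, and requirements below are hypothetical and do not reflect the OpenStack prototype's actual schema.

      # Toy trust-aware placement filter: keep only hosts whose advertised
      # properties satisfy the user's requirements, then prefer the most
      # trusted one (illustrative field names).
      hosts = [
          {"name": "h1", "location": "EU", "tpm": True,  "trust": 0.90},
          {"name": "h2", "location": "US", "tpm": True,  "trust": 0.80},
          {"name": "h3", "location": "EU", "tpm": False, "trust": 0.95},
      ]
      requirements = {"location": "EU", "tpm": True}

      eligible = [h for h in hosts
                  if all(h.get(k) == v for k, v in requirements.items())]
      best = max(eligible, key=lambda h: h["trust"])
      print(best["name"])        # h1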

  • CAM: Cloud-Assisted Privacy Preserving Mobile Health Monitoring

    Publication Year: 2013 , Page(s): 985 - 997
    Cited by:  Papers (2)
    PDF (2268 KB) | HTML

    Cloud-assisted mobile health (mHealth) monitoring, which applies prevailing mobile communications and cloud computing technologies to provide feedback decision support, has been considered a revolutionary approach to improving the quality of healthcare service while lowering its cost. Unfortunately, it also poses a serious risk to both clients' privacy and the intellectual property of monitoring service providers, which could deter the wide adoption of mHealth technology. This paper addresses this important problem by designing a cloud-assisted privacy-preserving mobile health monitoring system that protects the privacy of the involved parties and their data. Moreover, the outsourcing decryption technique and a newly proposed key-private proxy re-encryption scheme are adapted to shift the computational burden of the involved parties to the cloud without compromising clients' privacy or service providers' intellectual property. Finally, our security and performance analysis demonstrates the effectiveness of the proposed design.

  • The Individuality of Relatively Permanent Pigmented or Vascular Skin Marks (RPPVSM) in Independently and Uniformly Distributed Patterns

    Publication Year: 2013 , Page(s): 998 - 1012
    Cited by:  Papers (3)
    PDF (2469 KB) | HTML

    With recent advances in multimedia technology, the involvement of digital images and videos in crimes has increased significantly, and identifying individuals in them can be challenging. For example, in cases of child sexual abuse, child pornography, and masked gunmen, the faces of criminals or victims are often hidden or covered, and only some body parts (e.g., back, thigh, and arm) can be observed in the digital evidence. Although tattoos and scars can be used for identification in some cases, they are neither universal nor unique. We propose a group of skin marks, named Relatively Permanent Pigmented or Vascular Skin Marks (RPPVSM), as a biometric trait for forensic identification. To support the scientific underpinnings of using RPPVSM patterns as a novel biometric trait, we studied their individuality. RPPVSM on the backs of 269 male subjects were examined. We found that RPPVSM in middle- to low-density patterns tend to form an independent and uniform distribution, while RPPVSM in high-density patterns tend to form clusters. We present an individuality model for independently and uniformly distributed RPPVSM patterns and show that it fits the empirical distribution very well. Finally, the predicted error rates for verification and identification are reported.
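
    The individuality question for independently and uniformly distributed patterns can be pictured with a small Monte Carlo experiment (the pattern sizes, tolerance, and match criterion are arbitrary, not the paper's model): how often do two unrelated uniform patterns happen to agree on several marks within a spatial tolerance?

      import random

      # Monte Carlo illustration of random correspondence between two
      # independent, uniformly distributed skin-mark patterns on a unit area.
      def random_pattern(n):
          return [(random.random(), random.random()) for _ in range(n)]

      def matches(a, b, tol=0.05):
          # Count marks of pattern a that have some mark of b within tol.
          return sum(any((ax - bx) ** 2 + (ay - by) ** 2 <= tol ** 2
                         for bx, by in b) for ax, ay in a)

      trials, needed, hits = 5000, 3, 0
      for _ in range(trials):
          if matches(random_pattern(10), random_pattern(10)) >= needed:
              hits += 1
      print("estimated random-correspondence rate:", hits / trials)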

  • Fine-Grained Refinement on TPM-Based Protocol Applications

    Publication Year: 2013 , Page(s): 1013 - 1026
    PDF (3427 KB) | HTML

    The Trusted Platform Module (TPM) is a coprocessor for measuring platform integrity and attesting that integrity to a remote entity. There are two obstacles in applying the TPM: minimizing the trusted computing base (TCB) to reduce the risk of flaws in it, for which a number of convincing solutions have been developed, and providing formal guarantees at each level of the TCB, where formal methods for analyzing the application level have not been well addressed. To the best of our knowledge, there is no general formal framework for developing TPM-based protocol applications that both guarantees security and eases design. In this paper, we perform fine-grained refinement of TPM-based security protocols to illustrate our formal solution at the application level using the Event-B language. First, we modify the classical Dolev-Yao attacker model, which assumes a normal entity's compliance with the protocol even without the TPM's protection; classical security protocols are vulnerable under this modified attacker model. Second, we carry out stepwise refinement of the security protocol by refining the protocol events and adding security constraints. From the fifth refinement onward, we use a case study to illustrate the entire refinement and formally prove the key agreement protocol of DAAODV, a TPM-based routing protocol, under the extended Dolev-Yao attacker model. The refinement provides another way of formally modeling TPM-based security protocols and a more fine-grained model that satisfies the rigorous security requirements of applying the TPM. Finally, we discharge all the proof obligations generated by Rodin, an Eclipse-based IDE for Event-B, to ensure the soundness of our proposal.

  • FM 99.9, Radio Virus: Exploiting FM Radio Broadcasts for Malware Deployment

    Publication Year: 2013 , Page(s): 1027 - 1037
    Cited by:  Papers (2)
    PDF (750 KB) | HTML

    Many modern smartphones and car radios ship with embedded FM radio receiver chips. The number of devices with such chips could grow significantly if the U.S. Congress makes their inclusion mandatory in portable devices, as suggested by organizations such as the RIAA. While the main goal of embedding these chips is to provide access to traditional FM radio stations, a side effect is the availability of a data channel, the FM Radio Data System (RDS), which connects all these devices. Unlike other existing IP-based data channels among portable devices, this one is open, broadcast in nature, and so far completely ignored by security providers. This paper illustrates for the first time how to exploit the FM RDS protocol as an attack vector to deploy malware that, when executed, gains full control of the victim's device. We show how this attack vector allows the adversary to deploy malware on different platforms. Furthermore, we show that the infection goes undetected on devices running the Android OS, since malware detection solutions are limited by certain features of the Android security model. We support our claims by implementing an RDS-based attack on different devices available on the market (smartphones, car radios, and tablets) running three different versions of the Android OS. We also suggest how to limit the threat posed by this new attack vector and explain which design choices make Android vulnerable. However, there are no straightforward solutions, so we also wish to draw the security community's attention to these attacks and initiate more research into countermeasures.

  • Impacts of Watermarking Security on Tardos-Based Fingerprinting

    Publication Year: 2013 , Page(s): 1038 - 1050
    Cited by:  Papers (1)
    PDF (2729 KB) | HTML

    This paper presents a study of the embedding of Tardos binary fingerprinting codes with watermarking techniques. Taking into account the security of the embedding scheme, we present a new approach to colluding strategies that relies on the possible estimation error rate of the code symbols (denoted ε). We derive a new attack strategy called the “ε-Worst Case Attack” and show its efficiency by computing achievable rates for simple decoding. We then consider the interplay between security and robustness with respect to the accusation performance of the fingerprinting scheme and show that 1) for the same accusation rate, secure schemes can afford to be less robust than insecure ones, and 2) secure schemes make it possible to cast the Worst Case Attack into an interleaving attack. Additionally, we use the security analysis of the watermarking scheme to derive from ε a security attack on a fingerprinting scheme based on Tardos codes and a new scheme called stochastic spread-spectrum watermarking. We compare a removal attack against an AWGN robustness attack and show that, for the same distortion, the combination of a fingerprinting attack and a security attack easily outperforms classical attacks even with a small number of observations.
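
    A compact sketch of the fingerprinting-code side (generation of the biased Tardos code and a symmetric accusation score), with illustrative parameters; the watermarking-layer security analysis that the paper couples to ε is not modeled here.

      import math, random

      # Tardos code: per-position bias p_i drawn from an arcsine-like density
      # restricted to [t, 1-t]; each user's codeword is Bernoulli(p_i).
      m, n_users, t = 2048, 20, 0.01
      lo, hi = math.asin(math.sqrt(t)), math.asin(math.sqrt(1 - t))
      p = [math.sin(random.uniform(lo, hi)) ** 2 for _ in range(m)]
      codewords = [[1 if random.random() < pi else 0 for pi in p]
                   for _ in range(n_users)]

      def accusation(user_code, pirate_copy):
          # Symmetric accusation score: colluders accumulate large values.
          s = 0.0
          for x, y, pi in zip(user_code, pirate_copy, p):
              g = math.sqrt((1 - pi) / pi) if x == 1 else -math.sqrt(pi / (1 - pi))
              s += g if y == 1 else -g
          return s

      # A simple interleaving collusion of the first two users.
      pirate = [random.choice(cw) for cw in zip(codewords[0], codewords[1])]
      scores = [accusation(c, pirate) for c in codewords]
      print(sorted(range(n_users), key=lambda u: -scores[u])[:3])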

  • Adaptive Quality-Based Performance Prediction and Boosting for Iris Authentication: Methodology and Its Illustration

    Publication Year: 2013 , Page(s): 1051 - 1060
    PDF (1416 KB) | HTML

    Three practical methods to improve the performance of a single biometric matcher based on vectors of quality measures associated with biometric data are described. The first two methods adaptively select probe biometric data and matching scores based on predicted values of a Quality of Sample (QS) index (defined here as d-prime) and a Confidence in matching Scores (CS) measure, respectively. The third method, Quality Sample and Template features (QST), treats quality measures as weak but useful features for discriminating between genuine and impostor matching scores. The unifying theme of the three methods is learning a nonlinear mapping between vectors of quality measures and QS, CS, and QST, respectively. For the first method, learning requires a small set of input data in the form of a vector of quality metrics per biometric image and output data in the form of a QS estimate per image. For the second method, learning requires input data in the form of two vectors of quality metrics per matching pair and output data in the form of a CS estimate per matching score. For the third method, learning requires input data in the form of a biometric feature vector (template) concatenated with a vector of quality metrics and output data in the form of matching labels. The proposed methodology is generic and suitable for any biometric modality and any choice of nonlinear mapping between vectors of quality measures and QS, CS, and QST. Experimental results (obtained by means of neural nets) show significant performance improvements for all three methods when applied to iris biometrics.
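
    A sketch of the first method's learning step, assuming a synthetic dataset: the quality vectors and QS targets below are fabricated for illustration, whereas the paper uses real iris quality measures and neural networks.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      # Learn a nonlinear mapping from a vector of quality measures
      # (e.g., focus, illumination, occlusion) to a per-image QS score.
      rng = np.random.default_rng(0)
      quality_vectors = rng.uniform(size=(500, 3))
      qs_scores = (2.0 * quality_vectors[:, 0]      # pretend focus dominates
                   + quality_vectors[:, 1]
                   - 0.5 * quality_vectors[:, 2]
                   + rng.normal(scale=0.05, size=500))

      model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                           random_state=0).fit(quality_vectors, qs_scores)

      # Adaptive selection: keep only probes predicted to have high quality
      # before they are passed to the matcher.
      probe = np.array([[0.9, 0.7, 0.1]])
      print("predicted QS:", float(model.predict(probe)[0]))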

  • A Formal Usability Constraints Model for Watermarking of Outsourced Datasets

    Publication Year: 2013 , Page(s): 1061 - 1072
    Multimedia
    PDF (2174 KB) | HTML

    Large datasets are being mined to extract hidden knowledge and patterns that assist decision makers in making effective, efficient, and timely decisions in an increasingly competitive world. This kind of “knowledge-driven” data mining is not possible without sharing the datasets between their owners and data mining experts (or corporations); as a consequence, protecting ownership of the datasets by embedding a watermark is becoming relevant. The most important challenge in watermarking datasets that are to be mined is preserving the knowledge contained in their features or attributes. Usually, an owner must manually define “usability constraints” for each type of dataset to preserve this knowledge. The major contribution of this paper is a novel formal model that allows a data owner to define usability constraints, which preserve the knowledge contained in the dataset, in an automated fashion. The model aims at preserving the “classification potential” of each feature and other major characteristics of datasets that play an important role during mining, so that learning statistics and decision-making rules remain intact. We have implemented our model and integrated it with a new watermark embedding algorithm to show that the inserted watermark not only preserves the knowledge contained in a dataset but also significantly enhances watermark security compared with existing techniques. We have tested our model on 25 different data-mining datasets to show its efficacy, effectiveness, and ability to adapt and generalize.
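
    One way to picture a "preserve classification potential" constraint (offered as an illustration only, not the paper's formal model) is to accept a candidate watermark perturbation only if it leaves the accuracy of a classifier trained on the original data essentially unchanged; the dataset, noise level, and tolerance below are arbitrary.

      import numpy as np
      from sklearn.datasets import load_iris
      from sklearn.model_selection import train_test_split
      from sklearn.tree import DecisionTreeClassifier

      # Baseline classifier on the unmarked data.
      X, y = load_iris(return_X_y=True)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      clf = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
      baseline = clf.score(X_te, y_te)

      # Candidate watermark modeled as a tiny bit-carrying perturbation;
      # the usability constraint checks that accuracy barely changes.
      rng = np.random.default_rng(1)
      watermark = rng.normal(scale=0.02, size=X_te.shape)
      marked_acc = clf.score(X_te + watermark, y_te)

      print("constraint satisfied:", abs(baseline - marked_acc) <= 0.02)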

  • IEEE Transactions on Information Forensics and Security EDICS

    Publication Year: 2013 , Page(s): 1073
    PDF (81 KB)
    Freely Available from IEEE

Aims & Scope

The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance and systems applications that incorporate these features.


Meet Our Editors

Editor-in-Chief
Mauro Barni
University of Siena, Italy