IEEE Transactions on Dependable and Secure Computing

Issue 5 • Sept.-Oct. 2012

  • [Front cover]

    Publication Year: 2012, Page(s): c1
    PDF (112 KB)
    Freely Available from IEEE
  • [Inside front cover]

    Publication Year: 2012, Page(s): c2
    PDF (122 KB)
    Freely Available from IEEE
  • Guest Editors' Introduction: Special Section on Data and Applications Security and Privacy

    Publication Year: 2012, Page(s): 625 - 626
    Cited by: Papers (1)
    PDF (77 KB) | HTML
    Freely Available from IEEE
  • Privacy-Preserving Enforcement of Spatially Aware RBAC

    Publication Year: 2012, Page(s): 627 - 640
    Cited by: Papers (1)
    PDF (1119 KB) | HTML

    Several models for incorporating spatial constraints into role-based access control (RBAC) have been proposed, and researchers are now focusing on the challenge of ensuring such policies are enforced correctly. However, existing approaches have a major shortcoming, as they assume the server is trustworthy and require complete disclosure of sensitive location information by the user. In this work, we propose a novel framework and a set of protocols to solve this problem. Specifically, in our scheme, a user provides a service provider with role and location tokens along with a request. The service provider consults with a role authority and a location authority to verify the tokens and evaluate the policy. However, none of the servers learn the requesting user's identity, role, or location. In this paper, we define the protocols and the policy enforcement scheme, and present a formal proof of a number of security properties.

    Full text access may be available to subscribers.
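
    The message flow described above can be pictured with a minimal, non-cryptographic sketch. The token format, the opaque() helper, and the role/location values below are hypothetical placeholders; the paper's protocols use cryptographic constructions so that no single server learns the user's identity, role, or location, which a salted hash alone does not achieve.

```python
# Toy walkthrough of the token-based enforcement flow sketched in the abstract.
# Token contents and the hashing trick are illustrative only.
import hashlib
import secrets

def opaque(value: str, nonce: str) -> str:
    """Hide a sensitive value behind a salted hash (stand-in for a real blind token)."""
    return hashlib.sha256((nonce + value).encode()).hexdigest()

# Authorities keep only the opaque tokens they issued, not who asked for them.
role_authority_issued = set()
location_authority_issued = set()

# --- user side: obtain tokens, then issue a request ---------------------------
nonce = secrets.token_hex(8)
role_token = opaque("nurse", nonce)      # role stays hidden inside the token
loc_token = opaque("ward-3", nonce)      # location stays hidden as well
role_authority_issued.add(role_token)
location_authority_issued.add(loc_token)

request = {"resource": "patient-chart-42", "role_token": role_token, "loc_token": loc_token}

# --- service provider side: verify tokens, evaluate policy --------------------
role_ok = request["role_token"] in role_authority_issued      # ask role authority
loc_ok = request["loc_token"] in location_authority_issued    # ask location authority
print("grant" if role_ok and loc_ok else "deny")
```
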
  • Query Profile Obfuscation by Means of Optimal Query Exchange between Users

    Publication Year: 2012, Page(s): 641 - 654
    Cited by: Papers (1)
    PDF (1612 KB) | HTML

    We address the problem of query profile obfuscation by means of partial query exchanges between two users, in order for their profiles of interest to appear distorted to the information provider (database, search engine, etc.). We illustrate a methodology to reach mutual privacy gain, that is, a situation where both users increase their own privacy protection through collaboration in query exchange. To this end, our approach starts with a mathematical formulation, involving the modeling of the users' apparent profiles as probability distributions over categories of interest, and the measure of their privacy as the corresponding Shannon entropy. The question of which query categories to exchange translates into finding optimization variables representing exchange policies, for various optimization objectives based on those entropies, possibly under exchange traffic constraints.

    Full text access may be available to subscribers.
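
    The privacy measure named in the abstract, the Shannon entropy of a user's apparent profile over query categories, is easy to illustrate. The two profiles and the fixed exchange fraction below are invented; the paper derives the exchange policy by solving an optimization problem rather than by the ad hoc mix shown here.

```python
# Shannon entropy of an apparent query profile (distribution over categories),
# before and after two users exchange part of their traffic.
from math import log2

def entropy(profile):
    return -sum(p * log2(p) for p in profile.values() if p > 0)

alice = {"health": 0.7, "sports": 0.2, "finance": 0.1}
bob   = {"health": 0.1, "sports": 0.1, "finance": 0.8}

def mix(own, other, fraction):
    """Apparent profile when `fraction` of the submitted queries come from the other user."""
    cats = set(own) | set(other)
    return {c: (1 - fraction) * own.get(c, 0) + fraction * other.get(c, 0) for c in cats}

for name, own, other in [("Alice", alice, bob), ("Bob", bob, alice)]:
    before = entropy(own)
    after = entropy(mix(own, other, 0.3))
    print(f"{name}: entropy {before:.3f} -> {after:.3f}  (gain {after - before:+.3f})")
```

    With these invented profiles, both entropies rise, which is the "mutual privacy gain" situation the abstract refers to.
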
  • Constraint-Aware Role Mining via Extended Boolean Matrix Decomposition

    Publication Year: 2012, Page(s): 655 - 669
    Cited by: Papers (2)
    PDF (4068 KB) | HTML

    The role mining problem has received considerable attention recently. Among the many solutions proposed, the Boolean matrix decomposition (BMD) formulation has stood out, which essentially discovers roles by decomposing the binary matrix representing user-to-permission assignment (UPA) into two matrices: user-to-role assignment (UA) and permission-to-role assignment (PA). However, supporting certain embedded constraints, such as separation of duty (SoD) and exceptions, is critical to the role mining process. Otherwise, the mined roles may not capture the inherent constraints of the access control policies of the organization. None of the previously proposed role mining solutions, including BMD, takes these underlying constraints into account while mining. In this paper, we extend BMD so that it reflects such embedded constraints by allowing negative permissions in roles or negative role assignments for users. Specifically, by allowing negative permissions in roles, we are often able to use fewer roles to reconstruct the same given user-permission assignments. Moreover, from the resultant roles we can discover underlying constraints such as separation of duty constraints. This feature is not supported by any existing role mining approach. Hence, we call the role mining problem with negative authorizations the constraint-aware role mining problem (CRM). We also explore other interesting variants of the CRM, which may occur in real situations. To enable CRM and its variants, we propose a novel approach, extended Boolean matrix decomposition (EBMD), which addresses the ineffectiveness of BMD in capturing underlying constraints. We analyze the computational complexity of each CRM variant and present heuristics for problems that are proven to be NP-hard.

    Full text access may be available to subscribers.
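
    The BMD formulation mentioned above factors the user-to-permission matrix UPA into UA and PA under Boolean arithmetic. The tiny matrices below are invented and only illustrate the reconstruction condition UPA = UA ⊗ PA that any candidate decomposition must satisfy; the EBMD semantics with negative permissions is richer than what is shown.

```python
# Boolean matrix product: UPA[i][j] = OR_k ( UA[i][k] AND PA[k][j] ).
# A role-mining algorithm would search for UA and PA given UPA; here we only
# check an invented candidate decomposition.

def boolean_product(UA, PA):
    users, roles, perms = len(UA), len(PA), len(PA[0])
    return [[int(any(UA[i][k] and PA[k][j] for k in range(roles)))
             for j in range(perms)] for i in range(users)]

UPA = [[1, 1, 0],          # user 0 holds permissions p0, p1
       [0, 1, 1],          # user 1 holds permissions p1, p2
       [1, 1, 1]]          # user 2 holds all three

UA = [[1, 0],              # user 0 -> role r0
      [0, 1],              # user 1 -> role r1
      [1, 1]]              # user 2 -> both roles

PA = [[1, 1, 0],           # role r0 grants p0, p1
      [0, 1, 1]]           # role r1 grants p1, p2

assert boolean_product(UA, PA) == UPA   # the two roles exactly cover the assignments
print("decomposition reconstructs UPA with", len(PA), "roles")
```
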
  • Access Control with Privacy Enhancements: A Unified Approach

    Publication Year: 2012, Page(s): 670 - 683
    Cited by: Papers (1)
    PDF (458 KB) | HTML

    We describe an approach that aims to unify certain aspects of access control and privacy. Our unified approach is based on the idea of axiomatizing access control in general terms. We show how multiple access control and privacy models and policies can be uniformly represented as particular logical theories in our axiom system. We show that our approach translates into different practical languages for implementation, and we give performance measures for some candidate implementations of our approach.

    Full text access may be available to subscribers.
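
    The idea of representing policies as logical theories can be pictured with a toy rule evaluator. The rule encoding, fact base, and example policy below are hypothetical; the paper works with a general axiom system rather than this ad hoc Horn-style sketch.

```python
# Toy evaluator for policies written as Horn-style rules over (subject, action, object)
# facts.  The rule encoding is invented purely to illustrate "policy as logical theory".

facts = {("alice", "role", "doctor"), ("chart42", "category", "medical")}

# Each rule: a permitted (subject, action, object) pattern guarded by required facts.
rules = [
    {"permit": ("?s", "read", "?o"),
     "if": [("?s", "role", "doctor"), ("?o", "category", "medical")]},
]

def holds(pattern, binding):
    """Check a guard triple under a variable binding against the fact base."""
    concrete = tuple(binding.get(t, t) for t in pattern)
    return concrete in facts

def permitted(subject, action, obj):
    for rule in rules:
        s, a, o = rule["permit"]
        binding = {s: subject, o: obj}
        if a == action and all(holds(g, binding) for g in rule["if"]):
            return True
    return False

print(permitted("alice", "read", "chart42"))    # True: derivable from the theory
print(permitted("alice", "delete", "chart42"))  # False: no rule derives it
```
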
  • A Hybrid Approach to Private Record Matching

    Publication Year: 2012, Page(s): 684 - 698
    Cited by: Papers (1)
    PDF (873 KB) | HTML

    Real-world entities are not always represented by the same set of features in different data sets. Therefore, matching records of the same real-world entity distributed across these data sets is a challenging task. If the data sets contain private information, the problem becomes even more difficult. Existing solutions to this problem generally follow two approaches: sanitization techniques and cryptographic techniques. We propose a hybrid technique that combines these two approaches and enables users to trade off between privacy, accuracy, and cost. Our main contribution is the use of a blocking phase that operates over sanitized data to filter out, in a privacy-preserving manner, pairs of records that do not satisfy the matching condition. We also provide a formal definition of privacy and prove that the participants of our protocols learn nothing other than their share of the result and what can be inferred from that share, their input, and sanitized views of the input data sets (which are considered public information). Our method incurs considerably lower costs than cryptographic techniques and yields significantly more accurate matching results compared to sanitization techniques, even when privacy requirements are high.

    Full text access may be available to subscribers.
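
    The blocking phase highlighted in the abstract discards record pairs using only sanitized (generalized) values, so that expensive cryptographic comparison runs only on the surviving candidates. The generalization rule (a zip-code prefix) and the toy records below are illustrative assumptions, not the paper's actual sanitization method.

```python
# Blocking over sanitized data: pairs whose generalized keys differ cannot match,
# so they are discarded before any expensive (cryptographic) comparison.
from collections import defaultdict
from itertools import product

party_a = [{"id": "a1", "zip": "47906"}, {"id": "a2", "zip": "10027"}]
party_b = [{"id": "b1", "zip": "47907"}, {"id": "b2", "zip": "94305"}]

def sanitized_key(record):
    """Generalize the zip code to its 3-digit prefix (an illustrative sanitization)."""
    return record["zip"][:3]

def block(records):
    buckets = defaultdict(list)
    for r in records:
        buckets[sanitized_key(r)].append(r)
    return buckets

blocks_a, blocks_b = block(party_a), block(party_b)
candidates = [(ra["id"], rb["id"])
              for key in blocks_a.keys() & blocks_b.keys()
              for ra, rb in product(blocks_a[key], blocks_b[key])]

print("pairs before blocking:", len(party_a) * len(party_b))     # 4
print("pairs sent to the expensive matching step:", candidates)  # [('a1', 'b1')]
```
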
  • A Trapdoor Hash-Based Mechanism for Stream Authentication

    Publication Year: 2012, Page(s): 699 - 713
    Cited by: Papers (2)
    PDF (684 KB) | HTML

    Digital streaming Internet applications such as online gaming, multimedia playback, presentations, news feeds, and stock quotes involve end-users with very low tolerance for high latency, low data rates, and playback interruption. To protect such delay-sensitive streams against malicious attacks, security mechanisms need to be designed to efficiently process long sequences of bits. We study the problem of efficient authentication for real-time and delay-sensitive streams commonly seen in content distribution, multicast, and peer-to-peer networks. We propose a novel signature amortization technique based on trapdoor hash functions for authenticating individual data blocks in a stream. Our technique provides: 1) resilience against transmission losses of intermediate blocks in the stream; 2) small and constant memory/compute requirements at the sender and receiver; and 3) minimal, constant communication overhead for transmitting authentication information. Our proposed technique renders authentication of digital streams practical and efficient. We substantiate this claim by constructing DL-SA, a discrete-log-based instantiation of the proposed technique. DL-SA provides adaptive stream verification, where the receiver has control over modulating computation cost versus buffer size. Our performance analysis demonstrates that DL-SA incurs the least per-block communication and signature generation overheads compared to existing schemes with comparable features.

    Full text access may be available to subscribers.
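
    The primitive underlying the scheme is a trapdoor hash function: whoever holds the trapdoor can find a second input hashing to the same value, which is what lets a single signature amortize over many stream blocks. The sketch below is the textbook discrete-log (chameleon) construction with toy parameters, not the paper's DL-SA scheme, and the tiny primes are for illustration only.

```python
# Discrete-log trapdoor (chameleon) hash with toy parameters:
#   CH(m, r) = g^m * y^r mod p, with y = g^x and trapdoor x.
# Knowing x, a collision for a new message m2 is r2 = r1 + (m1 - m2) * x^{-1} mod q.
# Real deployments use large primes; p = 23, q = 11 are for illustration only.

p, q, g = 23, 11, 2          # g has order q modulo p
x = 7                        # trapdoor (kept by the sender)
y = pow(g, x, p)             # public hash key

def ch(m, r):
    return (pow(g, m, p) * pow(y, r, p)) % p

m1, r1 = 4, 9                # originally committed message and randomness
m2 = 8                       # new stream block to authenticate
r2 = (r1 + (m1 - m2) * pow(x, -1, q)) % q   # collision found with the trapdoor

assert ch(m1, r1) == ch(m2, r2)   # same hash value, so the old signature still verifies
print("collision:", (m1, r1), "and", (m2, r2), "->", ch(m1, r1))
```
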
  • Early Detection of Malicious Flux Networks via Large-Scale Passive DNS Traffic Analysis

    Publication Year: 2012, Page(s): 714 - 726
    Cited by: Papers (4)
    PDF (905 KB) | HTML

    In this paper, we present FluxBuster, a novel passive DNS traffic analysis system for detecting and tracking malicious flux networks. FluxBuster applies large-scale monitoring of DNS traffic traces generated by recursive DNS (RDNS) servers located in hundreds of networks scattered across several geographical locations. Unlike most previous work, our detection approach is not limited to the analysis of suspicious domain names extracted from spam emails or precompiled domain blacklists. Instead, FluxBuster is able to detect malicious flux service networks in the wild, i.e., as they are "accessed" by users who fall victim to malicious content, independently of how this malicious content was advertised. We performed a long-term evaluation of our system spanning a period of about five months. The experimental results show that FluxBuster is able to accurately detect malicious flux networks with a low false positive rate. Furthermore, we show that in many cases FluxBuster is able to detect malicious flux domains several days or even weeks before they appear in public domain blacklists.

    Full text access may be available to subscribers.
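
    In passive DNS data, fast-flux domains typically resolve to many distinct, widely scattered IP addresses with short TTLs. The records, features, and thresholds below are invented for illustration and are far simpler than FluxBuster's detection model.

```python
# Toy flux indicators computed from passive DNS resolutions of a single domain:
# number of distinct IPs, number of distinct /16 prefixes, and the mean TTL.

observations = {
    "shop-example.net": [("203.0.113.7", 120), ("198.51.100.9", 90),
                         ("192.0.2.55", 60), ("203.0.113.200", 120),
                         ("198.51.100.77", 30)],
    "static-example.org": [("192.0.2.10", 86400), ("192.0.2.10", 86400)],
}

def flux_features(resolutions):
    ips = {ip for ip, _ in resolutions}
    prefixes = {".".join(ip.split(".")[:2]) for ip in ips}      # crude /16 grouping
    mean_ttl = sum(ttl for _, ttl in resolutions) / len(resolutions)
    return len(ips), len(prefixes), mean_ttl

for domain, res in observations.items():
    n_ips, n_prefixes, mean_ttl = flux_features(res)
    suspicious = n_ips >= 5 and n_prefixes >= 3 and mean_ttl < 300
    print(f"{domain}: ips={n_ips} prefixes={n_prefixes} ttl={mean_ttl:.0f} "
          f"-> {'candidate flux' if suspicious else 'benign-looking'}")
```
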
  • More Hybrid and Secure Protection of Statistical Data Sets

    Publication Year: 2012, Page(s): 727 - 740
    PDF (1170 KB) | HTML

    Different methods and paradigms to protect data sets containing sensitive statistical information have been proposed and studied. The idea is to publish a perturbed version of the data set that does not leak confidential information, but that still allows users to obtain meaningful statistical values about the original data. The two main paradigms for data set protection are the classical one and the synthetic one. Recently, the possibility of combining the two paradigms, leading to a hybrid paradigm, has been considered. In this work, we first analyze the security of some synthetic and (partially) hybrid methods proposed in recent years, and we conclude that they suffer from a high interval disclosure risk. We then propose the first fully hybrid statistical disclosure control (SDC) methods; unfortunately, they also suffer from a quite high interval disclosure risk. To mitigate this, we propose a postprocessing technique that can be applied to any data set protected with a synthetic method, with the goal of reducing its interval disclosure risk. Throughout the paper, we describe a set of experiments performed on reference data sets that support our claims.

    Full text access may be available to subscribers.
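
    Two notions from the abstract are easy to picture: a hybrid release that mixes original and synthetic values, and interval disclosure risk, i.e., how often the true value falls inside a narrow interval around the released value. The mixing rule, synthetic generator, and interval width below are illustrative assumptions, not the methods evaluated in the paper.

```python
# Hybrid release = alpha * original + (1 - alpha) * synthetic, plus a crude
# interval-disclosure check: the fraction of records whose true value lies within
# +/-10% of the released value.  All parameters here are illustrative only.
import random

random.seed(1)
original = [random.gauss(50_000, 12_000) for _ in range(1_000)]   # e.g., incomes
mean = sum(original) / len(original)

synthetic = [random.gauss(mean, 12_000) for _ in original]        # value-independent draws
alpha = 0.5
hybrid = [alpha * o + (1 - alpha) * s for o, s in zip(original, synthetic)]

def interval_disclosure(released, true_values, width=0.10):
    hits = sum(1 for r, t in zip(released, true_values) if abs(r - t) <= width * abs(t))
    return hits / len(true_values)

print("interval disclosure, synthetic release:", interval_disclosure(synthetic, original))
print("interval disclosure, hybrid release:   ", interval_disclosure(hybrid, original))
```
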
  • Pollution Attacks and Defenses in Wireless Interflow Network Coding Systems

    Publication Year: 2012, Page(s): 741 - 755
    Multimedia
    PDF (1023 KB) | HTML

    We study data pollution attacks in wireless interflow network coding systems. Although several defenses for these attacks are known for intraflow network coding systems, none of them are applicable to interflow coding systems. We formulate a model for interflow network coding that encompasses all the existing systems, and use it to analyze the impact of pollution attacks. Our analysis shows that the effects of pollution attacks depend not only on the network topology, but also on the location and strategy of the attacker nodes. We propose CodeGuard, a reactive attestation-based defense mechanism that uses efficient bit-level traceback and a novel cross-examination technique to unequivocally identify attacker nodes. We analyze the security of CodeGuard and prove that it is always able to identify and isolate at least one attacker node on every occurrence of a pollution attack. We analyze the overhead of CodeGuard and show that the storage, computation, and communication overhead are practical. We experimentally demonstrate that CodeGuard is able to identify attacker nodes quickly (within 500 ms) and restore system throughput to a high level, even in the presence of many attackers, thus preserving the performance of the underlying network coding system.

    Full text access may be available to subscribers.
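
    In interflow coding, a relay XORs packets from different flows, so a single polluted coded packet corrupts whatever is decoded from it. The hash check below only shows how pollution manifests at a receiver; it is not CodeGuard's bit-level traceback and cross-examination mechanism, and the packets and digests are invented.

```python
# Interflow XOR coding: a relay combines one packet from each of two flows, and a
# receiver that already overheard one flow recovers the other by XOR.  A polluted
# coded packet yields a decoded packet whose hash no longer matches, which is the
# symptom (not the diagnosis) that a traceback mechanism would then investigate.
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

pkt_flow1 = b"flow-1 payload.."
pkt_flow2 = b"flow-2 payload.."
expected = {digest(pkt_flow1), digest(pkt_flow2)}   # authentic digests, assumed known

coded_ok = xor(pkt_flow1, pkt_flow2)                # honest relay
coded_bad = coded_ok[:-1] + b"\x01"                 # attacker flips bits before forwarding

for label, coded in [("honest relay", coded_ok), ("polluting relay", coded_bad)]:
    recovered = xor(coded, pkt_flow1)               # receiver already holds pkt_flow1
    status = "ok" if digest(recovered) in expected else "pollution detected"
    print(f"{label}: {status}")
```
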
  • Workflow Signatures for Business Process Compliance

    Publication Year: 2012, Page(s): 756 - 769
    Cited by: Papers (2)
    PDF (401 KB) | HTML

    Interorganizational workflow systems play a fundamental role in business partnerships. We introduce and investigate the concept of workflow signatures. Not only can these signatures be used to ensure authenticity and protect the integrity of workflow data, but they can also prove the sequence and logical relationships, such as AND-join and AND-split, of a workflow. Hence, workflow signatures can serve as electronic evidence useful for auditing, that is, for proving compliance of business processes with regulatory requirements. Furthermore, signing keys can be used to grant permissions to perform tasks. Since the signing keys are issued on the fly, authorization to execute a task within a workflow can be controlled and granted dynamically at runtime. In this paper, we propose a concrete workflow signature scheme, based on hierarchical identity-based cryptography, that meets the security properties required by interorganizational workflows.

    Full text access may be available to subscribers.
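
    The claim that signatures can evidence execution order and AND-join structure can be illustrated by having each task sign its own name together with the evidence of all its predecessors, so that out-of-order evidence cannot be produced. The HMAC chaining and the four-task workflow below are only an illustration of that idea, not the hierarchical identity-based scheme proposed in the paper.

```python
# Each completed task "signs" (here: HMACs, as a stand-in for real signatures) its
# own name together with the evidence of all predecessor tasks.  An AND-join task
# therefore cannot produce valid evidence until every incoming branch has finished.
# The keys and workflow are invented; the paper issues hierarchical identity-based
# signing keys on the fly instead of using shared HMAC keys.
import hmac, hashlib

task_keys = {"A": b"kA", "B1": b"kB1", "B2": b"kB2", "C": b"kC"}       # per-task keys
predecessors = {"A": [], "B1": ["A"], "B2": ["A"], "C": ["B1", "B2"]}  # C is an AND-join

def sign_task(task, evidence):
    missing = [p for p in predecessors[task] if p not in evidence]
    if missing:
        raise RuntimeError(f"{task} cannot run yet, waiting on {missing}")
    payload = task.encode() + b"|" + b"|".join(evidence[p] for p in predecessors[task])
    evidence[task] = hmac.new(task_keys[task], payload, hashlib.sha256).digest()

evidence = {}
for task in ["A", "B1", "B2", "C"]:         # a compliant execution order
    sign_task(task, evidence)
print("workflow evidence collected for:", sorted(evidence))

try:
    sign_task("C", {})                      # attempting the AND-join with no predecessors
except RuntimeError as err:
    print("rejected:", err)
```
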
  • An Interconnect Reliability-Driven Routing Technique for Electromigration Failure Avoidance

    Publication Year: 2012, Page(s): 770 - 776
    Cited by: Papers (2)
    PDF (519 KB)

    As VLSI technology enters the nanoscale regime, design reliability is becoming increasingly important. A major design reliability concern arises from electromigration, which refers to the transport of material caused by ion movement in interconnects. Since the lifetime of an interconnect depends drastically on the current flowing through it, the electromigration problem worsens as wires become increasingly thinner. Further, the current-density-induced interconnect thermal issue becomes much more severe with larger current. To mitigate electromigration and the current-density-induced thermal effects, interconnect current density needs to be reduced. Assigning wires to thick metals increases wire volume and thus reduces the current density. However, overusing thick-metal assignment may hurt routability. Thus, it is highly desirable to minimize the thick-metal usage, or total wire cost, subject to the reliability constraint. In this paper, the minimum-cost reliability-driven routing problem, which consists of Steiner tree construction and layer assignment, is considered. The problem is proven to be NP-hard, and a highly effective iterative rounding-based integer linear programming algorithm is proposed. In addition, a unified routing technique is proposed to directly handle multiple current levels, which is critical in analog VLSI design. Further, the new algorithm is extended to handle blockages. Our experiments on 450 nets demonstrate that the new algorithm significantly outperforms the state-of-the-art work, with up to 14.7 percent wire reduction. In addition, the new algorithm can save 11.4 percent wires over a heuristic algorithm for handling multiple currents.

    Full text access may be available to subscribers.
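
    The abstract frames the problem as keeping each wire's current density J = I / (width x thickness) below the electromigration limit while minimizing thick-metal usage (wire cost). The greedy sketch below captures only that trade-off with invented geometry and currents; the paper solves the problem with Steiner tree construction and an iterative rounding-based ILP.

```python
# Greedy layer assignment: keep a wire on the cheap thin layer when its current
# density J = I / (width * thickness) stays under the electromigration limit,
# and promote it to the thick layer (higher cost) only when it must be.

J_MAX = 2.0e6            # allowed current density, A/cm^2 (illustrative)
WIDTH = 2.0e-5           # wire width in cm (illustrative)
layers = {"thin": {"thickness": 2.0e-5, "cost": 1.0},
          "thick": {"thickness": 8.0e-5, "cost": 4.0}}

nets = {"clk": 1.5e-3, "data0": 0.4e-3, "pwr_strap": 3.0e-3}   # currents in A (invented)

total_cost = 0.0
for net, current in nets.items():
    chosen = None
    for name in ("thin", "thick"):                       # cheapest feasible layer first
        density = current / (WIDTH * layers[name]["thickness"])
        if density <= J_MAX:
            chosen = name
            break
    assert chosen is not None, f"{net} violates the EM limit even on the thick layer"
    total_cost += layers[chosen]["cost"]
    print(f"{net}: {chosen} layer, J = {density:.2e} A/cm^2")
print("total wire cost:", total_cost)
```
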
  • Call for Papers: Special Issue on Cloud Computing Assessment

    Publication Year: 2012, Page(s): 777
    PDF (1170 KB)
    Freely Available from IEEE
  • Call for Papers: Transactions on Dependable and Secure Computing

    Publication Year: 2012, Page(s): 778
    PDF (1127 KB)
    Freely Available from IEEE
  • What's new in Transactions [advertisement]

    Publication Year: 2012, Page(s): 779
    PDF (777 KB)
    Freely Available from IEEE
  • New Transactions Newsletter [advertisement]

    Publication Year: 2012, Page(s): 780
    PDF (661 KB)
    Freely Available from IEEE
  • Transactions Media Center

    Publication Year: 2012, Page(s): 781
    PDF (738 KB)
    Freely Available from IEEE
  • IEEE Computer Society OnlinePlus Publishing Model

    Publication Year: 2012, Page(s): 782
    PDF (1577 KB)
    Freely Available from IEEE
  • Stay Connected with the IEEE Computer Society [advertisement]

    Publication Year: 2012, Page(s): 783
    PDF (583 KB)
    Freely Available from IEEE
  • CPS Handles the Details for you [advertisement]

    Publication Year: 2012, Page(s): 784
    PDF (937 KB)
    Freely Available from IEEE
  • [Inside back cover]

    Publication Year: 2012, Page(s): c3
    PDF (122 KB)
    Freely Available from IEEE
  • [Back cover]

    Publication Year: 2012, Page(s): c4
    PDF (112 KB)
    Freely Available from IEEE

Aims & Scope

The purpose of TDSC is to publish papers in dependability and security, including the joint consideration of these issues and their interplay with system performance.

Meet Our Editors

Editor-in-Chief
Elisa Bertino
CS Department
Purdue University