
Parallel and Distributed Systems, IEEE Transactions on

Issue 2 • Date Feb. 2014


Displaying Results 1 - 25 of 25
  • Guest Editors' Introduction: Special Issue on Trust, Security, and Privacy in Parallel and Distributed Systems

    Publication Year: 2014 , Page(s): 279 - 282
    Freely Available from IEEE
  • Verifying Keys through Publicity and Communities of Trust: Quantifying Off-Axis Corroboration

    Publication Year: 2014 , Page(s): 283 - 291

    The DNS Security Extensions (DNSSEC) arguably make DNS the first core Internet system to be protected using public key cryptography. The success of DNSSEC not only protects the DNS, but has generated interest in using this secured global database for new services such as those proposed by the IETF DANE working group. However, continued success is only possible if several important operational issues can be addressed. For example, .gov and .arpa have already suffered misconfigurations where DNS continued to function properly, but DNSSEC failed (thus, orphaning their entire subtrees in DNSSEC). Internet-scale verification systems must tolerate this type of chaos, but what kind of verification can one derive for systems with dynamism like this? In this paper, we propose to achieve robust verification with a new theoretical model, called Public Data, which treats operational deployments as Communities of Trust (CoTs) and makes them the verification substrate. Using a realization of the above idea, called Vantages, we quantitatively show that, under a reasonable DNSSEC deployment model and a typical choice of a CoT, an adversary would need to be able to have visibility into and perform on-path Man-in-the-Middle (MitM) attacks on arbitrary traffic into and out of up to 90 percent of all of the Autonomous Systems (ASes) in the Internet before having even a 10 percent chance of spoofing a DNSKEY. Further, our limited deployment of Vantages has outperformed the verifiability of DNSSEC and has properly validated its data up to 99.5 percent of the time.

  • Trustworthy Operations in Cellular Networks: The Case of PF Scheduler

    Publication Year: 2014 , Page(s): 292 - 300

    Cellular data networks are proliferating to address the need for ubiquitous connectivity. To cope with the increasing number of subscribers and with the spatiotemporal variations of the wireless signals, current cellular networks use opportunistic schedulers, such as the Proportional Fairness scheduler (PF), to maximize network throughput while maintaining fairness among users. Such scheduling decisions are based on channel quality metrics and Automatic Repeat reQuest (ARQ) feedback reports provided by the User's Equipment (UE). Implicit in current networks is the a priori trust in every UE's feedback. Malicious UEs can, thus, exploit this trust to disrupt service by intelligently faking their reports. This work proposes a trustworthy version of the PF scheduler (called TPF) to mitigate the effects of such Denial-of-Service (DoS) attacks. In brief, based on the channel quality reported by the UE, we assign a probability to each possible ARQ feedback. We then use the probability associated with the actual ARQ report to assess the UE's reporting trustworthiness, and adapt the scheduling mechanism to give higher priority to more trusted users. Our evaluations show that TPF 1) does not induce any performance degradation under benign settings, and 2) completely mitigates the effects of the activity of malicious UEs. In particular, while colluding attackers can obtain up to 77 percent of the time slots with the most sophisticated attack, TPF is able to contain this percentage to as low as 6 percent.
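The trust-weighted scheduling rule described above can be sketched concretely. The snippet below is a minimal, hypothetical reading of it: the classic PF metric (instantaneous rate over average throughput) is scaled by a per-UE trust score, and the trust score drifts toward the probability the scheduler assigned to the ARQ feedback actually observed. The function names and the exponential update rule are illustrative assumptions, not the paper's exact mechanism.

```python
def pf_priority(inst_rate, avg_thpt, trust):
    """Proportional-fairness metric scaled by a trust score in [0, 1]."""
    return trust * inst_rate / max(avg_thpt, 1e-9)

def update_trust(trust, p_report, alpha=0.2):
    """Nudge the trust score toward the probability that was assigned to the
    ARQ feedback the UE actually reported (assumed update rule)."""
    return (1 - alpha) * trust + alpha * p_report

def schedule(users):
    """Pick the user with the highest trust-weighted PF priority.

    users: list of dicts with 'rate', 'avg', and 'trust' keys (hypothetical schema).
    """
    return max(users, key=lambda u: pf_priority(u["rate"], u["avg"], u["trust"]))
```

A UE that persistently reports implausible feedback sees its trust, and hence its priority, decay, which is the containment effect the abstract quantifies.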

  • Traffic Pattern-Based Content Leakage Detection for Trusted Content Delivery Networks

    Publication Year: 2014 , Page(s): 301 - 309

    Due to the increasing popularity of multimedia streaming applications and services in recent years, the issue of trusted video delivery to prevent undesirable content leakage has become critical. While preserving user privacy, conventional systems have addressed this issue with methods based on observing the streamed traffic throughout the network. These systems maintain high detection accuracy while coping with some of the traffic variation in the network (e.g., network delay and packet loss); however, their detection performance substantially degrades owing to significant variation in video length. In this paper, we focus on overcoming this issue by proposing a novel content-leakage detection scheme that is robust to variation in video length. By comparing videos of different lengths, we determine a relation between the lengths of the videos to be compared and the similarity between them. We thereby enhance the detection performance of the proposed scheme even in an environment subject to variation in video length. Through a testbed experiment, the effectiveness of our proposed scheme is evaluated in terms of variation of video length, delay variation, and packet loss.

  • Enabling Trustworthy Service Evaluation in Service-Oriented Mobile Social Networks

    Publication Year: 2014 , Page(s): 310 - 320

    In this paper, we propose a Trustworthy Service Evaluation (TSE) system to enable users to share service reviews in service-oriented mobile social networks (S-MSNs). Each service provider independently maintains a TSE for itself, which collects and stores users' reviews about its services without requiring any trusted third authority. The service reviews can then be made available to interested users to help them make wise service selection decisions. We identify three unique service review attacks, i.e., linkability, rejection, and modification attacks, and develop sophisticated security mechanisms for the TSE to deal with these attacks. Specifically, the basic TSE (bTSE) enables users to distributedly and cooperatively submit their reviews in an integrated chain form by using hierarchical and aggregate signature techniques. It prevents service providers from rejecting, modifying, or deleting reviews, thus improving the integrity and authenticity of reviews. Further, we extend the bTSE to a Sybil-resisted TSE (SrTSE) to enable the detection of two typical Sybil attacks. In the SrTSE, if a user generates multiple reviews toward a vendor in a predefined time slot under different pseudonyms, the real identity of that user will be revealed. Through security analysis and numerical results, we show that the bTSE and the SrTSE effectively resist the service review attacks, and that the SrTSE additionally detects the Sybil attacks in an efficient manner. Through performance evaluation, we show that the bTSE achieves better performance in terms of submission rate and delay than a service review system that does not adopt user cooperation.
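The bTSE binds reviews into an integrated chain using hierarchical and aggregate signatures. As a simplified stand-in for those signature techniques, plain hash chaining already illustrates why rejecting, modifying, or deleting an interior review becomes detectable: every later digest depends on every earlier review. This sketch substitutes SHA-256 digests for the paper's actual cryptographic construction.

```python
import hashlib

def link(prev_digest: str, review: str) -> str:
    """Digest of a review chained to its predecessor's digest."""
    return hashlib.sha256((prev_digest + review).encode()).hexdigest()

def build_chain(reviews):
    """Return the list of cumulative digests for a sequence of reviews."""
    digests, d = [], ""
    for r in reviews:
        d = link(d, r)
        digests.append(d)
    return digests

def verify_chain(reviews, digests):
    """True iff the stored digests match a recomputation over the reviews."""
    return digests == build_chain(reviews)
```

Modifying or dropping any review invalidates all digests from that point onward, so a provider cannot silently curate its own review chain.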

  • ReDS: A Framework for Reputation-Enhanced DHTs

    Publication Year: 2014 , Page(s): 321 - 331

    Distributed hash tables (DHTs), such as Chord and Kademlia, offer an efficient means to locate resources in peer-to-peer networks. Unfortunately, malicious nodes on a lookup path can easily subvert such queries. Several systems, including Halo (based on Chord) and Kad (based on Kademlia), mitigate such attacks by using redundant lookup queries. Much greater assurance can be provided; we present Reputation for Directory Services (ReDS), a framework for enhancing lookups in redundant DHTs by tracking how well other nodes service lookup requests. We describe how the ReDS technique can be applied to virtually any redundant DHT including Halo and Kad. We also study the collaborative identification and removal of bad lookup paths in a way that does not rely on the sharing of reputation scores, and we show that such sharing is vulnerable to attacks that make it unsuitable for most applications of ReDS. Through extensive simulations, we demonstrate that ReDS improves lookup success rates for Halo and Kad by 80 percent or more over a wide range of conditions, even against strategic attackers attempting to game their reputation scores and in the presence of node churn.
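The reputation bookkeeping such a framework implies can be sketched in a few lines, assuming an exponentially weighted success rate per contact node and selection of the highest-scoring candidates for the redundant lookup paths. The update rule and field names below are illustrative, not ReDS's actual scoring.

```python
def update_score(score, success, alpha=0.1):
    """Exponentially weighted success rate for a contact node (assumed form)."""
    return (1 - alpha) * score + alpha * (1.0 if success else 0.0)

def choose_paths(candidates, k):
    """Pick the k redundant lookup paths with the highest local reputation.

    candidates: list of dicts with 'id' and 'score' keys (hypothetical schema).
    """
    return sorted(candidates, key=lambda c: c["score"], reverse=True)[:k]
```

Because each node scores its own contacts from firsthand lookup outcomes, no reputation values need to be shared, which matches the paper's finding that score sharing is attackable.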

  • Certificateless Remote Anonymous Authentication Schemes for Wireless Body Area Networks

    Publication Year: 2014 , Page(s): 332 - 342
    Cited by:  Papers (1)

    Wireless body area networks (WBANs) have been recognized as one of the promising wireless sensor technologies for improving healthcare services, thanks to their capability of seamlessly and continuously exchanging medical information in real time. However, the lack of a clear in-depth defense line in such a new networking paradigm would make potential users worry about the leakage of their private information, especially to unauthenticated or even malicious adversaries. In this paper, we present a pair of efficient and lightweight authentication protocols to enable remote WBAN users to anonymously enjoy healthcare services. In particular, our authentication protocols are rooted in a novel certificateless signature (CLS) scheme, which is computationally efficient and provably secure against existential forgery under adaptively chosen message attacks in the random oracle model. Also, our designs ensure that application or service providers have no privilege to disclose the real identities of users. Even the network manager, which serves as the private key generator in the authentication protocols, is prevented from impersonating legitimate users. The performance of our designs is evaluated through both theoretical analysis and experimental simulations, and the comparative studies demonstrate that they outperform the existing schemes in terms of a better trade-off between desirable security properties and computational overhead, nicely meeting the needs of WBANs.

  • LocaWard: A Security and Privacy Aware Location-Based Rewarding System

    Publication Year: 2014 , Page(s): 343 - 352

    The proliferation of mobile devices has driven mobile marketing to surge in the past few years. Emerging as a new type of mobile marketing, mobile location-based services (MLBSs) have attracted intense attention recently. Unfortunately, current MLBSs have many limitations and raise many concerns, especially about system security and users' privacy. In this paper, we propose a new location-based rewarding system, called LocaWard, where mobile users can collect location-based tokens from token distributors and then redeem their gathered tokens at token collectors for beneficial rewards. Tokens act as virtual currency. The token distributors and collectors can be any commercial entities or merchants that wish to attract customers through such a promotion system, such as stores, restaurants, and car rental companies. We develop a security and privacy aware location-based rewarding protocol for the LocaWard system, and prove the completeness and soundness of the protocol. Moreover, we show that the system is resilient to various attacks and that mobile users' privacy can be well protected at the same time. We finally implement the system and conduct extensive experiments to validate the system's efficiency in terms of computation, communication, energy consumption, and storage costs.

  • Internet Traffic Privacy Enhancement with Masking: Optimization and Tradeoffs

    Publication Year: 2014 , Page(s): 353 - 362

    An increasing number of recent experimental works have demonstrated that the supposedly secure channels in the Internet are prone to privacy breaches in many respects, due to packet traffic features leaking information on user activity and traffic content. We aim at understanding whether and how complex it is to obfuscate the information leaked by packet traffic features, namely packet lengths, directions, and times: we call this technique traffic masking. We define a security model that points out what the ideal target of masking is, and then define the optimized traffic masking algorithm that removes any leaking (full masking). Further, we investigate the tradeoff between traffic privacy protection and masking cost, namely the required amount of overhead and realization complexity/feasibility. Numerical results are based on measured Internet traffic traces. The major findings are that: 1) optimized full masking achieves similar overhead values whether padding only is used or fragmentation is also allowed, and 2) if practical realizability is accounted for, optimized statistical masking attains only moderately better overhead than simple fixed-pattern masking does, while still leaking correlation information that can be exploited by the adversary.

  • A Scalable Two-Phase Top-Down Specialization Approach for Data Anonymization Using MapReduce on Cloud

    Publication Year: 2014 , Page(s): 363 - 373
    Cited by:  Papers (2)

    A large number of cloud services require users to share private data like electronic health records for data analysis or mining, bringing privacy concerns. Anonymizing data sets via generalization to satisfy certain privacy requirements such as k-anonymity is a widely used category of privacy-preserving techniques. At present, the scale of data in many cloud applications increases tremendously in accordance with the Big Data trend, thereby making it a challenge for commonly used software tools to capture, manage, and process such large-scale data within a tolerable elapsed time. As a result, it is a challenge for existing anonymization approaches to achieve privacy preservation on privacy-sensitive large-scale data sets due to their insufficient scalability. In this paper, we propose a scalable two-phase top-down specialization (TDS) approach to anonymize large-scale data sets using the MapReduce framework on cloud. In both phases of our approach, we deliberately design a group of innovative MapReduce jobs to concretely accomplish the specialization computation in a highly scalable way. Experimental evaluation results demonstrate that with our approach, the scalability and efficiency of TDS can be significantly improved over existing approaches.
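As background for the k-anonymity requirement the approach targets: a generalized data set is k-anonymous when every combination of quasi-identifier values is shared by at least k records. A direct, single-machine check (the property the MapReduce jobs must preserve at scale) can be written in a few lines; the quasi-identifier names in the usage example are hypothetical.

```python
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    """True iff every quasi-identifier group in `records` has size >= k.

    records: list of dicts; quasi_ids: list of keys treated as quasi-identifiers.
    """
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(count >= k for count in groups.values())
```

Top-down specialization starts from fully generalized values (large groups) and refines them step by step, stopping a refinement whenever it would drive some group below k.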

  • Exploiting Service Similarity for Privacy in Location-Based Search Queries

    Publication Year: 2014 , Page(s): 374 - 383
    Cited by:  Papers (1)

    Location-based applications utilize the positioning capabilities of a mobile device to determine the current location of a user, and customize query results to include neighboring points of interest. However, location knowledge is often perceived as personal information. One of the immediate issues hindering the wide acceptance of location-based applications is the lack of appropriate methodologies that offer fine-grained privacy controls to a user without vastly affecting the usability of the service. While a number of privacy-preserving models and algorithms have taken shape in the past few years, there is an almost universal need to specify one's privacy requirement without understanding its implications on the service quality. In this paper, we propose a user-centric location-based service architecture where a user can observe the impact of location inaccuracy on the service accuracy before deciding the geo-coordinates to use in a query. We construct a local search application based on this architecture and demonstrate how meaningful information can be exchanged between the user and the service provider to allow the inference of contours depicting the change in query results across a geographic area. Results indicate the possibility of large default privacy regions (areas of no change in result set) in such applications.

  • Decentralized Access Control with Anonymous Authentication of Data Stored in Clouds

    Publication Year: 2014 , Page(s): 384 - 394
    Cited by:  Papers (1)

    We propose a new decentralized access control scheme for secure data storage in clouds that supports anonymous authentication. In the proposed scheme, the cloud verifies the authenticity of the user without knowing the user's identity before storing data. Our scheme also has the added feature of access control, in which only valid users are able to decrypt the stored information. The scheme prevents replay attacks and supports creation, modification, and reading of data stored in the cloud. We also address user revocation. Moreover, our authentication and access control scheme is decentralized and robust, unlike other access control schemes designed for clouds, which are centralized. The communication, computation, and storage overheads are comparable to centralized approaches.

  • RRE: A Game-Theoretic Intrusion Response and Recovery Engine

    Publication Year: 2014 , Page(s): 395 - 406

    Preserving the availability and integrity of networked computing systems in the face of fast-spreading intrusions requires advances not only in detection algorithms, but also in automated response techniques. In this paper, we propose a new approach to automated response called the Response and Recovery Engine (RRE). Our engine employs a game-theoretic response strategy against adversaries modeled as opponents in a two-player Stackelberg stochastic game. The RRE applies attack-response trees (ARTs) to analyze undesired system-level security events within host computers and their countermeasures, using Boolean logic to combine lower level attack consequences. In addition, the RRE accounts for uncertainties in intrusion detection alert notifications. The RRE then chooses optimal response actions by solving a partially observable competitive Markov decision process that is automatically derived from the attack-response trees. To support network-level multiobjective response selection and consider possibly conflicting network security properties, we employ fuzzy logic theory to calculate the network-level security metric values, i.e., security levels of the system's current and potential future states in each stage of the game. In particular, inputs to the network-level game-theoretic response selection engine are first fed into the fuzzy system, which is in charge of nonlinear inference and quantitative ranking of the possible actions using its previously defined fuzzy rule set. Consequently, the optimal network-level response actions are chosen through a game-theoretic optimization process. Experimental results show that the RRE, using Snort's alerts, can protect large networks for which attack-response trees have more than 500 nodes.

  • Enabling Data Integrity Protection in Regenerating-Coding-Based Cloud Storage: Theory and Implementation

    Publication Year: 2014 , Page(s): 407 - 416

    To protect outsourced data in cloud storage against corruptions, adding fault tolerance to cloud storage, along with efficient data integrity checking and recovery procedures, becomes critical. Regenerating codes provide fault tolerance by striping data across multiple servers, while using less repair traffic than traditional erasure codes during failure recovery. Therefore, we study the problem of remotely checking the integrity of regenerating-coded data against corruptions under a real-life cloud storage setting. We design and implement a practical data integrity protection (DIP) scheme for a specific regenerating code, while preserving its intrinsic properties of fault tolerance and repair-traffic saving. Our DIP scheme is designed under a mobile Byzantine adversarial model, and enables a client to feasibly verify the integrity of random subsets of outsourced data against general or malicious corruptions. It works under the simple assumption of thin-cloud storage and allows different parameters to be fine-tuned for a performance-security trade-off. We implement and evaluate the overhead of our DIP scheme in a real cloud storage testbed under different parameter choices. We further analyze the security strengths of our DIP scheme via mathematical models. We demonstrate that remote integrity checking can be feasibly integrated into regenerating codes in practical deployment.

  • Balancing Performance, Accuracy, and Precision for Secure Cloud Transactions

    Publication Year: 2014 , Page(s): 417 - 426

    In distributed transactional database systems deployed over cloud servers, entities cooperate to form proofs of authorization that are justified by collections of certified credentials. These proofs and credentials may be evaluated and collected over extended time periods, under the risk of having the underlying authorization policies or the user credentials be in inconsistent states. It therefore becomes possible for policy-based authorization systems to make unsafe decisions that might threaten sensitive resources. In this paper, we highlight the criticality of the problem. We then define the notion of trusted transactions when dealing with proofs of authorization. Accordingly, we propose several increasingly stringent levels of policy consistency constraints, and present different enforcement approaches to guarantee the trustworthiness of transactions executing on cloud servers. We propose a Two-Phase Validation Commit protocol as a solution, a modified version of the basic Two-Phase Commit protocol. We finally analyze the different approaches presented, using both analytical evaluation of the overheads and simulations, to guide decision makers on which approach to use.
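A toy rendering of the two-phase idea, under the assumption that each participant votes by re-validating its proofs of authorization against the current policy version before the commit decision is made; the class layout and version-number check below are illustrative, not the paper's protocol.

```python
class Participant:
    """A cloud server holding a proof of authorization collected earlier."""

    def __init__(self, policy_version, proof_version):
        self.policy_version = policy_version  # policy currently in force here
        self.proof_version = proof_version    # policy version the proof was built against
        self.state = "active"

    def validate(self, txn):
        # Phase 1 vote: the proof is stale if policy has moved on since it was formed
        return self.proof_version >= self.policy_version

    def commit(self, txn):
        self.state = "committed"

    def abort(self, txn):
        self.state = "aborted"

def two_phase_validation_commit(participants, txn):
    """Commit only if every participant's proof is still valid under its policy."""
    votes = [p.validate(txn) for p in participants]  # phase 1: validation votes
    decision = all(votes)
    for p in participants:                            # phase 2: global decision
        (p.commit if decision else p.abort)(txn)
    return decision
```

A single stale proof anywhere forces a global abort, which is what distinguishes a trusted transaction from one committed against inconsistent policy state.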

  • Dynamic Authentication with Sensory Information for the Access Control Systems

    Publication Year: 2014 , Page(s): 427 - 436

    Access card authentication is critical and essential for many modern access control systems, which have been widely deployed in various government, commercial, and residential environments. However, due to the static identification information exchanged between the access cards and access control clients, it is very challenging to defend against access control system breaches caused by loss, theft, or unauthorized duplication of the access cards. Although advanced biometric authentication methods such as fingerprint and iris identification can further identify the user who is requesting authorization, they incur high system costs, and access privileges cannot be transferred among trusted users. In this work, we introduce dynamic authentication with sensory information for access control systems. By combining sensory information obtained from onboard sensors on the access cards with the original encoded identification information, we are able to effectively tackle problems such as access card loss, theft, and duplication. Our solution is backward-compatible with existing access control systems and significantly increases the key space for authentication. We theoretically demonstrate the potential key space increase with sensory information from different sensors, and empirically demonstrate that simple rotations can increase the key space by more than 1,000,000 times with an authentication accuracy of 90 percent. We performed extensive simulations under various environment settings and implemented our design on WISP to experimentally verify the system performance.

  • Distributed, Concurrent, and Independent Access to Encrypted Cloud Databases

    Publication Year: 2014 , Page(s): 437 - 446
    Cited by:  Papers (3)

    Placing critical data in the hands of a cloud provider should come with the guarantee of security and availability for data at rest, in motion, and in use. Several alternatives exist for storage services, while data confidentiality solutions for the database-as-a-service paradigm are still immature. We propose a novel architecture that integrates cloud database services with data confidentiality and the possibility of executing concurrent operations on encrypted data. This is the first solution that allows geographically distributed clients to connect directly to an encrypted cloud database and to execute concurrent and independent operations, including those modifying the database structure. The proposed architecture has the further advantage of eliminating intermediate proxies that limit the elasticity, availability, and scalability properties that are intrinsic in cloud-based solutions. The efficacy of the proposed architecture is evaluated through theoretical analyses and extensive experimental results based on a prototype implementation subject to the TPC-C standard benchmark for different numbers of clients and network latencies.

  • A System for Denial-of-Service Attack Detection Based on Multivariate Correlation Analysis

    Publication Year: 2014 , Page(s): 447 - 456

    Interconnected systems, such as Web servers, database servers, and cloud computing servers, are now under threat from network attackers. As one of the most common and aggressive means, denial-of-service (DoS) attacks cause serious impact on these computing systems. In this paper, we present a DoS attack detection system that uses multivariate correlation analysis (MCA) for accurate network traffic characterization by extracting the geometrical correlations between network traffic features. Our MCA-based DoS attack detection system employs the principle of anomaly-based detection in attack recognition. This makes our solution capable of detecting known and unknown DoS attacks effectively by learning the patterns of legitimate network traffic only. Furthermore, a triangle-area-based technique is proposed to enhance and speed up the process of MCA. The effectiveness of our proposed detection system is evaluated using the KDD Cup 99 data set, and the influence of both non-normalized and normalized data on the performance of the proposed detection system is examined. The results show that our system outperforms two other previously developed state-of-the-art approaches in terms of detection accuracy.
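The triangle-area idea can be illustrated as follows, on the reading that for a feature vector x the triangle spanned by the origin and the projections of x onto the axes of features i and j has area |x_i||x_j|/2, so a record is summarized by the matrix of these pairwise areas and scored by its distance from the mean map of legitimate traffic. The thresholding step is omitted and the scoring function is a simplified assumption, not the paper's exact procedure.

```python
import numpy as np

def triangle_area_map(x):
    """Pairwise triangle areas: TAM[i, j] = |x_i| * |x_j| / 2."""
    ax = np.abs(np.asarray(x, dtype=float))
    return np.outer(ax, ax) / 2.0

def anomaly_score(x, mean_tam):
    """Distance of a record's triangle-area map from the legitimate-traffic mean.

    mean_tam would be estimated from legitimate training traffic only.
    """
    return float(np.linalg.norm(triangle_area_map(x) - mean_tam))
```

Because the profile is built from legitimate traffic alone, any record whose correlation structure deviates sufficiently is flagged, whether or not the attack was seen before.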

  • A UCONABC Resilient Authorization Evaluation for Cloud Computing

    Publication Year: 2014 , Page(s): 457 - 467

    The business-driven access control used in cloud computing is not well suited for tracking fine-grained user service consumption. UCONABC applies continuous authorization reevaluation, which requires usage accounting that enables fine-grained access control for cloud computing. However, it was not designed to work in distributed and dynamic authorization environments like those present in cloud computing. During a continuous (periodic) reevaluation, an authorization exception condition, a disparity between usage accounting and authorization attributes, may occur. This proposal aims to make the UCONABC continuous authorization reevaluation resilient by dealing with individual exception conditions while maintaining suitable access control in the cloud environment. Experiments with a proof-of-concept prototype show a set of measurements for an application scenario (e-commerce) and allow the identification of exception conditions in the authorization reevaluation.

  • Key-Aggregate Cryptosystem for Scalable Data Sharing in Cloud Storage

    Publication Year: 2014 , Page(s): 468 - 477

    Data sharing is an important functionality in cloud storage. In this paper, we show how to securely, efficiently, and flexibly share data with others in cloud storage. We describe new public-key cryptosystems that produce constant-size ciphertexts such that efficient delegation of decryption rights for any set of ciphertexts is possible. The novelty is that one can aggregate any set of secret keys and make them as compact as a single key, while encompassing the power of all the keys being aggregated. In other words, the secret key holder can release a constant-size aggregate key for flexible choices of ciphertext set in cloud storage, while the other encrypted files outside the set remain confidential. This compact aggregate key can be conveniently sent to others or stored in a smart card with very limited secure storage. We provide formal security analysis of our schemes in the standard model. We also describe other applications of our schemes. In particular, our schemes give the first public-key patient-controlled encryption for flexible hierarchy, which was yet to be known.

  • A Distributed Information Divergence Estimation over Data Streams

    Publication Year: 2014 , Page(s): 478 - 487
    Cited by:  Papers (1)

    In this paper, we consider the setting of large-scale distributed systems, in which each node needs to quickly process a huge amount of data received in the form of a stream that may have been tampered with by an adversary. In this situation, a fundamental problem is how to detect and quantify the amount of work performed by the adversary. To address this issue, we propose a novel algorithm, AnKLe, for estimating the Kullback-Leibler divergence of an observed stream compared with the expected one. AnKLe combines sampling techniques and information-theoretic methods. It is very efficient in both space and time complexity, and requires only a single pass over the data stream. We show that AnKLe is an (ε, δ)-approximation algorithm with a space complexity of Õ(1/ε + 1/ε2) bits in "most" cases, and Õ(1/ε + (n-ε-1)/ε2) otherwise, where n is the number of distinct data items in a stream. Moreover, we propose a distributed version of AnKLe that requires at most O(rℓ (log n + 1)) bits of communication between the ℓ participating nodes, where r is the number of rounds of the algorithm. Experimental results show that the estimation provided by AnKLe remains accurate even for adversarial settings in which the quality of other methods dramatically decreases.
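    To make concrete what AnKLe estimates, the sketch below computes the Kullback-Leibler divergence exactly between the empirical distribution of an observed stream and an expected distribution. This is not the AnKLe algorithm itself (which approximates this quantity in a single pass with sublinear space); the exact computation shown here requires storing full frequency counts, and the example data are illustrative.

    ```python
    import math
    from collections import Counter

    def kl_divergence(observed, expected_dist):
        """Exact KL divergence D(p || q), where p is the empirical
        distribution of the observed stream and q is the expected one.
        This is the quantity AnKLe estimates in one pass with small space."""
        counts = Counter(observed)
        n = len(observed)
        d = 0.0
        for item, c in counts.items():
            p = c / n
            q = expected_dist.get(item, 1e-12)  # smooth missing mass
            d += p * math.log(p / q)
        return d

    uniform = {i: 0.25 for i in range(4)}
    clean = [0, 1, 2, 3] * 25                           # matches expectation
    skewed = [0] * 70 + [1] * 10 + [2] * 10 + [3] * 10  # tampered stream
    print(kl_divergence(clean, uniform))    # 0.0: no divergence
    print(kl_divergence(skewed, uniform))   # > 0: adversarial bias shows up
    ```

    A large divergence from the expected distribution is then the signal that an adversary has biased the stream.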

  • FLAP: An Efficient WLAN Initial Access Authentication Protocol

    Publication Year: 2014 , Page(s): 488 - 497

    Nowadays, with the rapid increase of WLAN-enabled mobile devices and the more widespread use of WLAN, it is increasingly important to have a more efficient initial link setup mechanism, and there is a demand for an access authentication method faster than the current IEEE 802.11i. In this paper, through experiments we observe that the authentication delay of 802.11i is intolerable in some scenarios, and we point out that the main cause of this inefficiency is its framework-oriented design, which introduces too many messages. To overcome this drawback, we propose an efficient initial access authentication protocol, FLAP, which realizes authentication and key distribution in two roundtrip messages. We formally prove that our scheme is more secure than the four-way handshake protocol. Our practical measurement results indicate that FLAP can improve the efficiency of EAP-TLS by 94.7 percent. Extensive simulations are conducted in different scenarios, and the results demonstrate that when a WLAN gets crowded the advantage of FLAP becomes more salient. Furthermore, a simple and practical method is presented to make FLAP compatible with 802.11i.

  • Collaborative Policy Administration

    Publication Year: 2014 , Page(s): 498 - 507

    Policy-based management is a very effective method to protect sensitive information. However, the overclaiming of privileges is widespread in emerging applications, including mobile applications and social network services, because the application users involved in policy administration have little knowledge of policy-based management. The overclaim can be leveraged by malicious applications and can then lead to serious privacy leakage and financial loss. To resolve this issue, this paper proposes a novel policy administration mechanism, referred to as collaborative policy administration (CPA for short), to simplify policy administration. In CPA, a policy administrator can refer to other similar policies to set up their own policies to protect privacy and other sensitive information. This paper formally defines CPA and proposes its enforcement framework. Furthermore, to obtain similar policies more effectively, which is the key step of CPA, a text mining-based similarity measure method is presented. We evaluate CPA with data from Android applications and demonstrate that the text mining-based similarity measure method is more effective in obtaining similar policies than the previous category-based method.
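    The key step, finding similar policies, can be sketched with a much simpler stand-in for the paper's text mining-based measure: Jaccard similarity over requested Android permission sets. The app names, permission lists, and the choice of Jaccard here are illustrative assumptions, not the paper's actual method or data.

    ```python
    def jaccard(a, b):
        """Jaccard similarity between two permission sets (0.0 to 1.0)."""
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 1.0

    def similar_policies(target, corpus, k=2):
        """Return the k policies most similar to `target`.
        corpus maps app name -> permission list (hypothetical data)."""
        ranked = sorted(corpus.items(),
                        key=lambda kv: jaccard(target, kv[1]),
                        reverse=True)
        return ranked[:k]

    corpus = {
        "maps_app":  ["ACCESS_FINE_LOCATION", "INTERNET"],
        "game_app":  ["INTERNET", "VIBRATE"],
        "flash_app": ["CAMERA", "FLASHLIGHT"],
    }
    new_policy = ["ACCESS_FINE_LOCATION", "INTERNET", "VIBRATE"]
    for name, perms in similar_policies(new_policy, corpus):
        print(name, round(jaccard(new_policy, perms), 2))
    ```

    An administrator drafting the policy for a new app could then inspect the top-ranked existing policies instead of reasoning about permissions from scratch, which is the workflow CPA aims to support.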

  • An Error-Minimizing Framework for Localizing Jammers in Wireless Networks

    Publication Year: 2014 , Page(s): 508 - 517

    Jammers can severely disrupt the communications in wireless networks, and jammers' position information allows the defender to actively eliminate jamming attacks. Thus, in this paper, we aim to design a framework that can localize one or multiple jammers with high accuracy. Most existing jammer-localization schemes utilize indirect measurements (e.g., hearing ranges) affected by jamming attacks, which makes it difficult to localize jammers accurately. Instead, we exploit a direct measurement: the strength of jamming signals (JSS). Estimating JSS is challenging as jamming signals may be embedded in other signals. As such, we devise an estimation scheme based on the ambient noise floor and validate it with real-world experiments. To further reduce estimation errors, we define an evaluation feedback metric to quantify the estimation errors and formulate jammer localization as a nonlinear optimization problem, whose global optimal solution is close to the jammers' true positions. We explore several heuristic search algorithms for approaching the global optimal solution, and our simulation results show that our error-minimizing framework achieves better performance than the existing schemes. In addition, our error-minimizing framework can utilize indirect measurements to obtain a better location estimation compared with prior work.
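    The optimization formulation can be illustrated with a toy version: predict the JSS each node would observe for a candidate jammer position under a log-distance path-loss model, and search for the position minimizing the squared error against the measured JSS. The propagation constants, the noiseless measurements, and the brute-force grid search (standing in for the paper's heuristic search algorithms) are all simplifying assumptions.

    ```python
    import math

    def predicted_jss(node, jammer, p0=-30.0, alpha=2.0):
        """Log-distance path-loss model: received power in dBm at `node`
        from a jammer at `jammer`. p0 and alpha are illustrative constants."""
        d = max(math.dist(node, jammer), 1e-6)
        return p0 - 10 * alpha * math.log10(d)

    def localize(nodes, measured, step=1.0, size=100):
        """Grid search for the jammer position minimizing the squared
        JSS error -- a stand-in for the paper's heuristic search over
        the same error-minimizing objective."""
        best, best_err = None, float("inf")
        for gx in range(size + 1):
            for gy in range(size + 1):
                guess = (gx * step, gy * step)
                err = sum((predicted_jss(n, guess) - m) ** 2
                          for n, m in zip(nodes, measured))
                if err < best_err:
                    best, best_err = guess, err
        return best

    nodes = [(10, 10), (90, 10), (50, 90), (20, 70)]
    true_jammer = (40.0, 30.0)
    measured = [predicted_jss(n, true_jammer) for n in nodes]
    print(localize(nodes, measured))
    ```

    With noisy JSS estimates the error at the true position is no longer zero, which is why the paper's feedback metric and heuristic search over the nonlinear objective matter in practice.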

  • Securing Broker-Less Publish/Subscribe Systems Using Identity-Based Encryption

    Publication Year: 2014 , Page(s): 518 - 528

    The provisioning of basic security mechanisms such as authentication and confidentiality is highly challenging in a content-based publish/subscribe system. Authentication of publishers and subscribers is difficult to achieve due to the loose coupling of publishers and subscribers. Likewise, confidentiality of events and subscriptions conflicts with content-based routing. This paper presents a novel approach to provide confidentiality and authentication in a broker-less content-based publish/subscribe system. The authentication of publishers and subscribers, as well as confidentiality of events, is ensured by adapting pairing-based cryptography mechanisms to the needs of a publish/subscribe system. Furthermore, an algorithm to cluster subscribers according to their subscriptions preserves a weak notion of subscription confidentiality. In addition to our previous work, this paper contributes 1) the use of searchable encryption to enable efficient routing of encrypted events, 2) multicredential routing, a new event dissemination strategy to strengthen the weak subscription confidentiality, and 3) a thorough analysis of different attacks on subscription confidentiality. The overall approach provides fine-grained key management, and the cost of encryption, decryption, and routing is on the order of the number of subscribed attributes. Moreover, the evaluations show that providing security is affordable w.r.t. 1) the throughput of the proposed cryptographic primitives, and 2) the delays incurred during the construction of the publish/subscribe overlay and the event dissemination.


Aims & Scope

IEEE Transactions on Parallel and Distributed Systems (TPDS) is published monthly. It publishes a range of papers, comments on previously published papers, and survey articles that deal with the parallel and distributed systems research areas of current importance to our readers.

Full Aims & Scope

Meet Our Editors

Editor-in-Chief
David Bader
College of Computing
Georgia Institute of Technology