
IEEE Transactions on Dependable and Secure Computing

Popular Articles (January 2015)

Includes the top 50 most frequently downloaded documents for this publication according to the most recent monthly usage statistics.
  • 1. Database security - concepts, approaches, and challenges

    Publication Year: 2005 , Page(s): 2 - 19
    Cited by:  Papers (49)

    As organizations increase their reliance on, possibly distributed, information systems for daily business, they become more vulnerable to security breaches even as they gain productivity and efficiency advantages. Though a number of techniques, such as encryption and electronic signatures, are currently available to protect data when transmitted across sites, a truly comprehensive approach for data protection must also include mechanisms for enforcing access control policies based on data contents, subject qualifications and characteristics, and other relevant contextual information, such as time. It is well understood today that the semantics of data must be taken into account in order to specify effective access control policies. Also, techniques for data integrity and availability specifically tailored to database systems must be adopted. In this respect, over the years, the database security community has developed a number of different techniques and approaches to assure data confidentiality, integrity, and availability. However, despite such advances, the database security area faces several new challenges. Factors such as the evolution of security concerns, the "disintermediation" of access to data, new computing paradigms and applications, such as grid-based computing and on-demand business, have introduced both new security requirements and new contexts in which to apply and possibly extend current approaches. In this paper, we first survey the most relevant concepts underlying the notion of database security and summarize the most well-known techniques. We focus on access control systems, to which a large body of research has been devoted, and describe the key access control models, namely, the discretionary and mandatory access control models, and the role-based access control (RBAC) model. We also discuss security for advanced data management systems, and cover topics such as access control for XML. We then discuss current challenges for database security and some preliminary approaches that address some of these challenges.

  • 2. k-Zero Day Safety: A Network Security Metric for Measuring the Risk of Unknown Vulnerabilities

    Publication Year: 2014 , Page(s): 30 - 44
    Cited by:  Papers (1)

    By enabling a direct comparison of different security solutions with respect to their relative effectiveness, a network security metric may provide quantifiable evidence to assist security practitioners in securing computer networks. However, research on security metrics has been hindered by difficulties in handling zero-day attacks exploiting unknown vulnerabilities. In fact, the security risk of unknown vulnerabilities has been considered unmeasurable due to the less predictable nature of software flaws. This poses a major difficulty for security metrics, because a more secure configuration would be of little value if it were equally susceptible to zero-day attacks. In this paper, we propose a novel security metric, k-zero day safety, to address this issue. Instead of attempting to rank unknown vulnerabilities, our metric counts how many such vulnerabilities would be required for compromising network assets; a larger count implies more security because the likelihood of having more unknown vulnerabilities available, applicable, and exploitable all at the same time will be significantly lower. We formally define the metric, analyze the complexity of computing the metric, devise heuristic algorithms for intractable cases, and finally demonstrate through case studies that applying the metric to existing network security practices may generate actionable knowledge.

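    The counting idea behind the metric can be illustrated in a few lines. The sketch below is a toy, exhaustive version over a hypothetical attack graph (host names and zero-day labels are invented for illustration); the paper itself shows the exact computation is intractable in general and relies on heuristic algorithms.

```python
from collections import deque

def k_zero_day(graph, source, asset):
    """Toy version of the k-zero-day-safety idea: the smallest number of
    distinct zero-day vulnerabilities an attacker must exploit to reach
    `asset` from `source`.  `graph` maps a host to (next_host, zero_day)
    edges.  Exhaustive search, exponential in the worst case."""
    best = float("inf")
    frontier = deque([(source, frozenset())])
    seen = set()
    while frontier:
        host, used = frontier.popleft()
        if host == asset:
            best = min(best, len(used))
            continue
        if (host, used) in seen:
            continue
        seen.add((host, used))
        for nxt, vuln in graph.get(host, []):
            frontier.append((nxt, used | {vuln}))
    return best

# Two attack paths to the database host: one needs 2 zero days, one needs 3.
g = {
    "internet": [("web", "zd_http"), ("vpn", "zd_vpn")],
    "web":      [("db", "zd_db")],
    "vpn":      [("app", "zd_app")],
    "app":      [("db", "zd_db")],
}
print(k_zero_day(g, "internet", "db"))  # -> 2
```
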
  • 3. A Random Decision Tree Framework for Privacy-Preserving Data Mining

    Publication Year: 2014 , Page(s): 399 - 411

    Distributed data is ubiquitous in modern information-driven applications. With multiple sources of data, the natural challenge is to determine how to collaborate effectively across proprietary organizational boundaries while maximizing the utility of collected information. Since using only local data gives suboptimal utility, techniques for privacy-preserving collaborative knowledge discovery must be developed. Existing cryptography-based work for privacy-preserving data mining is still too slow to be effective on the large-scale data sets of today's big data challenge. Previous work on random decision trees (RDT) shows that it is possible to generate equivalent and accurate models with much smaller cost. We exploit the fact that RDTs can naturally fit into a parallel and fully distributed architecture, and develop protocols to implement privacy-preserving RDTs that enable general and efficient distributed privacy-preserving knowledge discovery.

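    To make the random-decision-tree idea concrete, the sketch below builds a small centralized RDT ensemble over binary features: the tree structure is chosen at random without inspecting labels, and only the leaf class counts depend on the data. The distributed, privacy-preserving protocols that are the paper's actual contribution are not reproduced here; names and data are illustrative.

```python
import random
from collections import Counter, defaultdict

class RandomDecisionTree:
    """Random decision tree over binary features: the split attributes are
    picked at random (no information gain); only leaf class counts depend
    on the training data.  This sketch is centralized, not privacy-preserving."""
    def __init__(self, n_features, depth):
        self.split_order = random.sample(range(n_features), depth)
        self.leaf_counts = defaultdict(Counter)

    def _leaf(self, x):
        return tuple(x[f] for f in self.split_order)

    def fit(self, X, y):
        for xi, yi in zip(X, y):
            self.leaf_counts[self._leaf(xi)][yi] += 1
        return self

    def predict_proba(self, x, label=1):
        counts = self.leaf_counts[self._leaf(x)]
        total = sum(counts.values())
        return counts[label] / total if total else 0.5

def ensemble_proba(trees, x):
    # Average the posterior estimates of all random trees.
    return sum(t.predict_proba(x) for t in trees) / len(trees)

X = [[0, 1, 1, 0], [1, 1, 0, 0], [0, 0, 1, 1], [1, 0, 0, 1]]
y = [1, 1, 0, 0]
trees = [RandomDecisionTree(n_features=4, depth=2).fit(X, y) for _ in range(10)]
print(ensemble_proba(trees, [0, 1, 0, 0]))
```
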
  • 4. Basic concepts and taxonomy of dependable and secure computing

    Publication Year: 2004 , Page(s): 11 - 33
    Cited by:  Papers (540)  |  Patents (5)

    This paper gives the main definitions relating to dependability, a generic concept that includes as special cases such attributes as reliability, availability, safety, integrity, maintainability, etc. Security brings in concerns for confidentiality, in addition to availability and integrity. Basic definitions are given first. They are then commented upon, and supplemented by additional definitions, which address the threats to dependability and security (faults, errors, failures), their attributes, and the means for their achievement (fault prevention, fault tolerance, fault removal, fault forecasting). The aim is to explicate a set of general concepts, of relevance across a wide range of situations and, therefore, helping communication and cooperation among a number of scientific and technical communities, including ones that are concentrating on particular types of system, of system failures, or of causes of system failures.

  • 5. Effective Risk Communication for Android Apps

    Publication Year: 2014 , Page(s): 252 - 265

    The popularity and advanced functionality of mobile devices have made them attractive targets for malicious and intrusive applications (apps). Although strong security measures are in place for most mobile systems, the area where these systems often fail is the reliance on the user to make decisions that impact the security of a device. As our prime example, Android relies on users to understand the permissions that an app is requesting and to base the installation decision on the list of permissions. Previous research has shown that this reliance on users is ineffective, as most users do not understand or consider the permission information. We propose a solution that leverages a method to assign a risk score to each app and display a summary of that information to users. Results from four experiments are reported in which we examine the effects of introducing summary risk information and how best to convey such information to a user. Our results show that the inclusion of risk-score information has significant positive effects in the selection process and can also lead to more curiosity about security-related information.

  • 6. NICE: Network Intrusion Detection and Countermeasure Selection in Virtual Network Systems

    Publication Year: 2013 , Page(s): 198 - 211
    Cited by:  Papers (6)

    Cloud security is one of the most important issues that has attracted a lot of research and development effort in the past few years. Particularly, attackers can explore vulnerabilities of a cloud system and compromise virtual machines to deploy further large-scale Distributed Denial-of-Service (DDoS) attacks. DDoS attacks usually involve early-stage actions such as multistep exploitation, low-frequency vulnerability scanning, and compromising identified vulnerable virtual machines as zombies, and finally launching DDoS attacks through the compromised zombies. Within the cloud system, especially the Infrastructure-as-a-Service (IaaS) clouds, the detection of zombie exploration attacks is extremely difficult. This is because cloud users may install vulnerable applications on their virtual machines. To prevent vulnerable virtual machines from being compromised in the cloud, we propose a multiphase distributed vulnerability detection, measurement, and countermeasure selection mechanism called NICE, which is built on attack graph-based analytical models and reconfigurable virtual network-based countermeasures. The proposed framework leverages OpenFlow network programming APIs to build a monitor and control plane over distributed programmable virtual switches to significantly improve attack detection and mitigate attack consequences. The system and security evaluations demonstrate the efficiency and effectiveness of the proposed solution.

  • 7. Evaluation of Web Security Mechanisms Using Vulnerability & Attack Injection

    Publication Year: 2014 , Page(s): 440 - 453

    In this paper, we propose a methodology and a prototype tool to evaluate web application security mechanisms. The methodology is based on the idea that injecting realistic vulnerabilities in a web application and attacking them automatically can be used to support the assessment of existing security mechanisms and tools in custom setup scenarios. To provide true-to-life results, the proposed vulnerability and attack injection methodology relies on the study of a large number of vulnerabilities in real web applications. In addition to the generic methodology, the paper describes the implementation of the Vulnerability & Attack Injector Tool (VAIT) that allows the automation of the entire process. We used this tool to run a set of experiments that demonstrate the feasibility and the effectiveness of the proposed methodology. The experiments include the evaluation of coverage and false positives of an intrusion detection system for SQL Injection attacks and the assessment of the effectiveness of two top commercial web application vulnerability scanners. Results show that the injection of vulnerabilities and attacks is indeed an effective way to evaluate security mechanisms and to point out not only their weaknesses but also ways for their improvement.

  • 8. Secure Data Aggregation Technique for Wireless Sensor Networks in the Presence of Collusion Attacks

    Publication Year: 2015 , Page(s): 98 - 110

    Due to limited computational power and energy resources, aggregation of data from multiple sensor nodes done at the aggregating node is usually accomplished by simple methods such as averaging. However, such aggregation is known to be highly vulnerable to node compromising attacks. Since WSNs are usually unattended and without tamper-resistant hardware, they are highly susceptible to such attacks. Thus, ascertaining trustworthiness of data and reputation of sensor nodes is crucial for WSNs. As the performance of very low power processors dramatically improves, future aggregator nodes will be capable of performing more sophisticated data aggregation algorithms, thus making WSNs less vulnerable. Iterative filtering algorithms hold great promise for such a purpose. Such algorithms simultaneously aggregate data from multiple sources and provide trust assessment of these sources, usually in the form of corresponding weight factors assigned to data provided by each source. In this paper, we demonstrate that several existing iterative filtering algorithms, while significantly more robust against collusion attacks than the simple averaging methods, are nevertheless susceptible to a novel sophisticated collusion attack we introduce. To address this security issue, we propose an improvement for iterative filtering techniques by providing an initial approximation for such algorithms which makes them not only collusion robust, but also more accurate and faster converging.

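    A generic iterative filtering loop of the kind the abstract refers to can be sketched as follows: readings are aggregated with weights that are recomputed each round from each sensor's distance to the current estimate. The reciprocal weighting and the plain-average initialization are assumptions for illustration; the paper's contribution is a more robust initial approximation that hardens exactly this loop against colluders.

```python
import numpy as np

def iterative_filtering(readings, n_iter=20, eps=1e-9):
    """Generic iterative filtering: `readings` is an (n_sensors, n_samples)
    array.  Each round re-estimates the signal as a weighted mean and
    re-weights every sensor by the inverse of its squared distance to that
    estimate.  Here the loop is seeded with the plain average."""
    estimate = readings.mean(axis=0)          # naive initial approximation
    for _ in range(n_iter):
        dist = ((readings - estimate) ** 2).sum(axis=1)
        weights = 1.0 / (dist + eps)          # far-off sensors get low trust
        weights /= weights.sum()
        estimate = weights @ readings
    return estimate, weights

true = np.linspace(20.0, 25.0, 10)
honest = true + np.random.normal(0, 0.2, (8, 10))
colluders = np.tile(true + 5.0, (2, 1))       # two sensors reporting skewed values
est, w = iterative_filtering(np.vstack([honest, colluders]))
print(np.round(est, 2), np.round(w, 3))       # colluders end up with tiny weights
```
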
  • 9. MOSES: Supporting and Enforcing Security Profiles on Smartphones

    Publication Year: 2014 , Page(s): 211 - 223

    Smartphones are very effective tools for increasing the productivity of business users. With their increasing computational power and storage capacity, smartphones allow end users to perform several tasks and be always updated while on the move. Companies are willing to support employee-owned smartphones because of the increase in productivity of their employees. However, security concerns about data sharing, leakage, and loss have hindered the adoption of smartphones for corporate use. In this paper, we present MOSES, a policy-based framework for enforcing software isolation of applications and data on the Android platform. In MOSES, it is possible to define distinct Security Profiles within a single smartphone. Each security profile is associated with a set of policies that control the access to applications and data. Profiles are not predefined or hardcoded; they can be specified and applied at any time. One of the main characteristics of MOSES is the dynamic switching from one security profile to another. We run a thorough set of experiments using our full implementation of MOSES. The results of the experiments confirm the feasibility of our proposal.

  • 10. STARS: A Statistical Traffic Pattern Discovery System for MANETs

    Publication Year: 2014 , Page(s): 181 - 192
    Cited by:  Papers (2)

    Many anonymity enhancing techniques have been proposed based on packet encryption to protect the communication anonymity of mobile ad hoc networks (MANETs). However, in this paper, we show that MANETs are still vulnerable under passive statistical traffic analysis attacks. To demonstrate how to discover the communication patterns without decrypting the captured packets, we present a novel statistical traffic pattern discovery system (STARS). STARS works passively to perform traffic analysis based on statistical characteristics of captured raw traffic. STARS is capable of discovering the sources, the destinations, and the end-to-end communication relations. Empirical studies demonstrate that STARS achieves good accuracy in disclosing the hidden traffic patterns.

  • 11. Analysis of Field Data on Web Security Vulnerabilities

    Publication Year: 2014 , Page(s): 89 - 100

    Most web applications have critical bugs (faults) affecting their security, which makes them vulnerable to attacks by hackers and organized crime. To prevent these security problems from occurring, it is of utmost importance to understand the typical software faults. This paper contributes to this body of knowledge by presenting a field study on two of the most widely spread and critical web application vulnerabilities: SQL Injection and XSS. It analyzes the source code of security patches of widely used web applications written in weakly and strongly typed languages. Results show that only a small subset of software fault types, affecting a restricted collection of statements, is related to security. To understand how these vulnerabilities are really exploited by hackers, this paper also presents an analysis of the source code of the scripts used to attack them. The outcomes of this study can be used to train software developers and code inspectors in the detection of such faults and are also the foundation for the research of realistic vulnerability and attack injectors that can be used to assess security mechanisms, such as intrusion detection systems, vulnerability scanners, and static code analyzers.

  • 12. Security and Privacy-Enhancing Multicloud Architectures

    Publication Year: 2013 , Page(s): 212 - 224
    Cited by:  Papers (1)

    Security challenges are still among the biggest obstacles when considering the adoption of cloud services. This has triggered a lot of research activities, resulting in a multitude of proposals targeting the various cloud security threats. Alongside these security issues, the cloud paradigm comes with a new set of unique features, which open the path toward novel security approaches, techniques, and architectures. This paper provides a survey on the security merits achievable by making use of multiple distinct clouds simultaneously. Various distinct architectures are introduced and discussed according to their security and privacy capabilities and prospects.

  • 13. A Computational Dynamic Trust Model for User Authorization

    Publication Year: 2015 , Page(s): 1 - 15

    Development of authorization mechanisms for secure information access by a large community of users in an open environment is an important problem in the ever-growing Internet world. In this paper, we propose a computational dynamic trust model for user authorization, rooted in findings from social science. Unlike most existing computational trust models, this model distinguishes trusting belief in integrity from that in competence in different contexts and accounts for subjectivity in the evaluation of a particular trustee by different trusters. Simulation studies were conducted to compare the performance of the proposed integrity belief model with other trust models from the literature for different user behavior patterns. Experiments show that the proposed model achieves higher performance than other models, especially in predicting the behavior of unstable users.

  • 14. PPTP: Privacy-Preserving Traffic Padding in Web-Based Applications

    Publication Year: 2014 , Page(s): 538 - 552

    Web-based applications are gaining popularity as they require fewer client-side resources, and are easier to deliver and maintain. On the other hand, web applications also pose new security and privacy challenges. In particular, recent research revealed that many high-profile web applications might cause sensitive user inputs to be leaked from encrypted traffic due to side-channel attacks exploiting unique patterns in packet sizes and timing. Moreover, existing solutions, such as random padding and packet-size rounding, were shown to incur prohibitive overhead while still failing to guarantee sufficient privacy protection. In this paper, we first observe an interesting similarity between this privacy-preserving traffic padding (PPTP) issue and another well-studied problem, privacy-preserving data publishing (PPDP). Based on such a similarity, we present a formal PPTP model encompassing the privacy requirements, padding costs, and padding methods. We then formulate PPTP problems under different application scenarios, analyze their complexity, and design efficient heuristic algorithms. Finally, we confirm the effectiveness and efficiency of our algorithms by comparing them to existing solutions through experiments using real-world web applications.

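    One simple padding strategy in the spirit of the PPDP analogy is to group the observable packet sizes and pad each size up to its group's maximum, so any padded size is consistent with at least k candidate inputs. The sketch below is a generic k-indistinguishability padding example with made-up sizes, not the paper's specific PPTP algorithms.

```python
def ceiling_pad(sizes, k):
    """Partition the distinct observable packet sizes into blocks of at
    least k and pad every size up to its block maximum, so an eavesdropper
    seeing a padded size cannot distinguish at least k candidate inputs."""
    ordered = sorted(set(sizes))
    groups = [ordered[i:i + k] for i in range(0, len(ordered), k)]
    if len(groups) > 1 and len(groups[-1]) < k:
        groups[-2].extend(groups.pop())   # keep every block at least k wide
    return {s: group[-1] for group in groups for s in group}

# Hypothetical response sizes of an autocomplete widget for different inputs.
observed = [120, 128, 340, 341, 355, 990, 1010, 1024]
print(ceiling_pad(observed, k=3))
# {120: 340, 128: 340, 340: 340, 341: 1024, 355: 1024, 990: 1024, 1010: 1024, 1024: 1024}
```
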
  • 15. Behavior Rule Specification-Based Intrusion Detection for Safety Critical Medical Cyber Physical Systems

    Publication Year: 2015 , Page(s): 16 - 30

    We propose and analyze a behavior-rule specification-based technique for intrusion detection of medical devices embedded in a medical cyber physical system (MCPS) in which the patient's safety is of the utmost importance. We propose a methodology to transform behavior rules to a state machine, so that a device that is being monitored for its behavior can easily be checked against the transformed state machine for deviation from its behavior specification. Using vital sign monitor medical devices as an example, we demonstrate that our intrusion detection technique can effectively trade false positives off for a high detection probability to cope with more sophisticated and hidden attackers to support ultra-safe and secure MCPS applications. Moreover, through a comparative analysis, we demonstrate that our behavior-rule specification-based IDS technique outperforms two existing anomaly-based techniques for detecting abnormal patient behaviors in pervasive healthcare applications.

  • 16. Mobiflage: Deniable Storage Encryption for Mobile Devices

    Publication Year: 2014 , Page(s): 224 - 237

    Data confidentiality can be effectively preserved through encryption. In certain situations, this is inadequate, as users may be coerced into disclosing their decryption keys. Steganographic techniques and deniable encryption algorithms have been devised to hide the very existence of encrypted data. We examine the feasibility and efficacy of deniable encryption for mobile devices. To address obstacles that can compromise plausibly deniable encryption (PDE) in a mobile environment, we design a system called Mobiflage. Mobiflage enables PDE on mobile devices by hiding encrypted volumes within random data in a device's free storage space. We leverage lessons learned from deniable encryption in the desktop environment, and design new countermeasures for threats specific to mobile systems. We provide two implementations for the Android OS, to assess the feasibility and performance of Mobiflage on different hardware profiles. MF-SD is designed for use on devices with FAT32 removable SD cards. Our MF-MTP variant supports devices that instead share a single internal partition for both apps and user-accessible data. MF-MTP leverages certain Ext4 file system mechanisms and uses an adjusted data-block allocator. These new techniques for storing hidden volumes in Ext4 file systems can also be applied to other file systems to enable deniable encryption for desktop OSes and other mobile platforms.

  • 17. A Probabilistic Discriminative Model for Android Malware Detection with Decompiled Source Code

    Publication Year: 2014 , Page(s): 1

    Mobile devices are an important part of our everyday lives, and the Android platform has become a market leader. In recent years, a number of approaches for Android malware detection have been proposed, using permissions, source code analysis, or dynamic analysis. In this paper, we propose to use a probabilistic discriminative model based on regularized logistic regression for Android malware detection. Through extensive experimental evaluation, we demonstrate that it can generate probabilistic outputs with highly accurate classification results. In particular, we propose to use Android API calls as features extracted from decompiled source code, and analyze and explore issues in feature granularity, feature representation, feature selection, and regularization. We show that the probabilistic discriminative model also works well with permissions, and substantially outperforms the state-of-the-art methods for Android malware detection with application permissions. Furthermore, the discriminative learning model achieves the best detection results by combining both decompiled source code and application permissions. To the best of our knowledge, this is the first research that proposes a probabilistic discriminative model for Android malware detection with a thorough study of the desired representation of decompiled source code, and the first work on the Android malware detection task that combines analysis of both decompiled source code and application permissions.

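    A minimal version of the discriminative model described above can be put together with scikit-learn: binary presence features over API call names feed a regularized logistic regression that outputs class probabilities. The API names, tiny data set, and hyperparameters below are placeholders, not the paper's.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each "document" lists the API calls found in an app's decompiled source;
# labels mark malware (1) vs. benign (0).  All values are illustrative.
apps = [
    "sendTextMessage getDeviceId getSubscriberId",
    "getDeviceId openConnection loadLibrary",
    "onDraw invalidate getHeight",
    "findViewById setOnClickListener startActivity",
]
labels = [1, 1, 0, 0]

model = make_pipeline(
    CountVectorizer(binary=True),              # presence/absence of each API call
    LogisticRegression(penalty="l2", C=1.0),   # regularized discriminative model
)
model.fit(apps, labels)
print(model.predict_proba(["getDeviceId sendTextMessage startActivity"]))
```
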
  • 18. Ensuring Distributed Accountability for Data Sharing in the Cloud

    Publication Year: 2012 , Page(s): 556 - 568
    Cited by:  Papers (17)

    Cloud computing enables highly scalable services to be easily consumed over the Internet on an as-needed basis. A major feature of the cloud services is that users' data are usually processed remotely in unknown machines that users do not own or operate. While enjoying the convenience brought by this new emerging technology, users' fears of losing control of their own data (particularly, financial and health data) can become a significant barrier to the wide adoption of cloud services. To address this problem, in this paper, we propose a novel highly decentralized information accountability framework to keep track of the actual usage of the users' data in the cloud. In particular, we propose an object-centered approach that enables enclosing our logging mechanism together with users' data and policies. We leverage the JAR programmable capabilities to both create a dynamic and traveling object, and to ensure that any access to users' data will trigger authentication and automated logging local to the JARs. To strengthen users' control, we also provide distributed auditing mechanisms. We provide extensive experimental studies that demonstrate the efficiency and effectiveness of the proposed approaches.

  • 19. An Architectural Approach to Preventing Code Injection Attacks

    Publication Year: 2010 , Page(s): 351 - 365
    Cited by:  Papers (2)

    Code injection attacks, despite being well researched, continue to be a problem today. Modern architectural solutions such as the execute-disable bit and PaX have been useful in limiting the attacks; however, they enforce program layout restrictions and can oftentimes still be circumvented by a determined attacker. We propose a change to the memory architecture of modern processors that addresses the code injection problem at its very root by virtually splitting memory into code memory and data memory such that a processor will never be able to fetch injected code for execution. This virtual split memory system can be implemented as a software-only patch to an operating system and can be used to supplement existing schemes for improved protection. Furthermore, our system is able to accommodate a number of response modes when a code injection attack occurs. Our experiments with both benchmarks and real-world attacks show the system is effective in preventing a wide range of code injection attacks while incurring reasonable overhead.

  • 20. Toward Secure Multikeyword Top-k Retrieval over Encrypted Cloud Data

    Publication Year: 2013 , Page(s): 239 - 250
    Cited by:  Papers (5)

    Cloud computing has emerged as a promising paradigm for data outsourcing and high-quality data services. However, concerns over sensitive information on the cloud potentially cause privacy problems. Data encryption protects data security to some extent, but at the cost of compromised efficiency. Searchable symmetric encryption (SSE) allows retrieval of encrypted data over the cloud. In this paper, we focus on addressing data privacy issues using SSE. For the first time, we formulate the privacy issue from the aspect of similarity relevance and scheme robustness. We observe that server-side ranking based on order-preserving encryption (OPE) inevitably leaks data privacy. To eliminate the leakage, we propose a two-round searchable encryption (TRSE) scheme that supports top-k multikeyword retrieval. In TRSE, we employ a vector space model and homomorphic encryption. The vector space model helps to provide sufficient search accuracy, and the homomorphic encryption enables users to participate in the ranking while the majority of the computing work is done on the server side by operations only on ciphertext. As a result, information leakage can be eliminated and data security is ensured. Thorough security and performance analysis shows that the proposed scheme guarantees high security and practical efficiency.

  • 21. Secure and Robust Multi-Constrained QoS aware Routing Algorithm for VANETs

    Publication Year: 2015 , Page(s): 1

    Secure QoS routing algorithms are a fundamental part of wireless networks that aim to provide services with QoS and security guarantees. In Vehicular Ad hoc Networks (VANETs), vehicles perform routing functions and at the same time act as end systems; thus, routing control messages are transmitted unprotected over wireless channels. The QoS of the entire network could be degraded by an attack on the routing process, and manipulation of the routing control messages. In this paper, we propose a novel secure and reliable multi-constrained QoS aware routing algorithm for VANETs. We employ the Ant Colony Optimisation (ACO) technique to compute feasible routes in VANETs subject to multiple QoS constraints determined by the data traffic type. Moreover, we extend the VANET-oriented Evolving Graph (VoEG) model to perform plausibility checks on the exchanged routing control messages among vehicles. Simulation results show that the QoS can be guaranteed while applying security mechanisms to ensure a reliable and robust routing service.

  • 22. Credit Card Fraud Detection Using Hidden Markov Model

    Publication Year: 2008 , Page(s): 37 - 48
    Cited by:  Papers (22)

    Due to a rapid advancement in the electronic commerce technology, the use of credit cards has dramatically increased. As the credit card becomes the most popular mode of payment for both online and regular purchases, cases of fraud associated with it are also rising. In this paper, we model the sequence of operations in credit card transaction processing using a hidden Markov model (HMM) and show how it can be used for the detection of frauds. An HMM is initially trained with the normal behavior of a cardholder. If an incoming credit card transaction is not accepted by the trained HMM with sufficiently high probability, it is considered to be fraudulent. At the same time, we try to ensure that genuine transactions are not rejected. We present detailed experimental results to show the effectiveness of our approach and compare it with other techniques available in the literature.

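    The detection rule sketched in the abstract, flagging a transaction if the trained HMM assigns the extended sequence a sharply lower likelihood, might look as follows. The paper quantizes amounts into discrete spending ranges; this sketch uses a Gaussian HMM over raw amounts via hmmlearn to stay short, and the data and threshold are invented.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM   # pip install hmmlearn

# Train an HMM on a cardholder's past transaction amounts; a new transaction
# is flagged if appending it drops the sequence log-likelihood sharply.
history = np.array([[12.0], [8.5], [15.0], [9.9], [11.2], [300.0], [10.4], [13.7]])
hmm = GaussianHMM(n_components=2, n_iter=50, random_state=0).fit(history)

def is_fraudulent(history, amount, threshold=5.0):
    base = hmm.score(history)                             # log P(history)
    extended = hmm.score(np.vstack([history, [[amount]]]))
    return (base - extended) > threshold                  # large drop -> suspicious

print(is_fraudulent(history, 14.0))    # typical amount -> likely False
print(is_fraudulent(history, 2500.0))  # far outside past behavior -> likely True
```
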
  • 23. Continuous and Transparent User Identity Verification for Secure Internet Services

    Publication Year: 2014 , Page(s): 1

    Session management in distributed Internet services is traditionally based on username and password, explicit logouts and mechanisms of user session expiration using classic timeouts. Emerging biometric solutions allow substituting username and password with biometric data during session establishment, but in such an approach a single verification is still deemed sufficient, and the identity of a user is considered immutable during the entire session. Additionally, the length of the session timeout may impact the usability of the service and consequent client satisfaction. This paper explores promising alternatives offered by applying biometrics in the management of sessions. A secure protocol is defined for perpetual authentication through continuous user verification. The protocol determines adaptive timeouts based on the quality, frequency and type of biometric data transparently acquired from the user. The functional behavior of the protocol is illustrated through Matlab simulations, while model-based quantitative analysis is carried out to assess the ability of the protocol to contrast security attacks exercised by different kinds of attackers. Finally, the current prototype for PCs and Android smartphones is discussed.

  • 24. Secure Overlay Cloud Storage with Access Control and Assured Deletion

    Publication Year: 2012 , Page(s): 903 - 916
    Cited by:  Papers (9)

    We can now outsource data backups off-site to third-party cloud storage services so as to reduce data management costs. However, we must provide security guarantees for the outsourced data, which is now maintained by third parties. We design and implement FADE, a secure overlay cloud storage system that achieves fine-grained, policy-based access control and file assured deletion. It associates outsourced files with file access policies, and assuredly deletes files to make them unrecoverable to anyone upon revocations of file access policies. To achieve such security goals, FADE is built upon a set of cryptographic key operations that are self-maintained by a quorum of key managers that are independent of third-party clouds. In particular, FADE acts as an overlay system that works seamlessly atop today's cloud storage services. We implement a proof-of-concept prototype of FADE atop Amazon S3, one of today's cloud storage services. We conduct extensive empirical studies, and demonstrate that FADE provides security protection for outsourced data, while introducing only minimal performance and monetary cost overhead. Our work provides insights into how to incorporate value-added security features into today's cloud storage services.

  • 25. On the Effects of Process Variation in Network-on-Chip Architectures

    Publication Year: 2010 , Page(s): 240 - 254
    Cited by:  Papers (13)

    The advent of diminutive technology feature sizes has led to escalating transistor densities. Burgeoning transistor counts are casting a dark shadow on modern chip design: global interconnect delays are dominating gate delays and affecting overall system performance. Networks-on-Chip (NoC) are viewed as a viable solution to this problem because of their scalability and optimized electrical properties. However, on-chip routers are susceptible to another artifact of deep submicron technology, Process Variation (PV). PV is a consequence of manufacturing imperfections, which may lead to degraded performance and even erroneous behavior. In this work, we present the first comprehensive evaluation of NoC susceptibility to PV effects, and we propose an array of architectural improvements in the form of a new router design, called SturdiSwitch, to increase resiliency to these effects. Through extensive reengineering of critical components, SturdiSwitch provides increased immunity to PV while improving performance and increasing area and power efficiency.

  • 26. A Large-Scale Study of Failures in High-Performance Computing Systems

    Publication Year: 2010 , Page(s): 337 - 350
    Cited by:  Papers (31)

    Designing highly dependable systems requires a good understanding of failure characteristics. Unfortunately, little raw data on failures in large IT installations are publicly available. This paper analyzes failure data collected at two large high-performance computing sites. The first data set has been collected over the past nine years at Los Alamos National Laboratory (LANL) and has recently been made publicly available. It covers 23,000 failures recorded on more than 20 different systems at LANL, mostly large clusters of SMP and NUMA nodes. The second data set has been collected over the period of one year on one large supercomputing system comprising 20 nodes and more than 10,000 processors. We study the statistics of the data, including the root cause of failures, the mean time between failures, and the mean time to repair. We find, for example, that average failure rates differ wildly across systems, ranging from 20 to 1,000 failures per year, and that time between failures is modeled well by a Weibull distribution with decreasing hazard rate. From one system to another, mean repair time varies from less than an hour to more than a day, and repair times are well modeled by a lognormal distribution.

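    The distributional findings quoted above are easy to reproduce on synthetic data with SciPy: fit a Weibull to times between failures (a shape parameter below 1 corresponds to a decreasing hazard rate) and a lognormal to repair times. The parameters used to generate the stand-in data are arbitrary, not the LANL values.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-ins: time between failures (hours) with decreasing hazard
# (Weibull shape < 1) and lognormally distributed repair times.
tbf = stats.weibull_min.rvs(c=0.7, scale=120.0, size=2000, random_state=rng)
repair = stats.lognorm.rvs(s=1.0, scale=4.0, size=2000, random_state=rng)

shape, loc, scale = stats.weibull_min.fit(tbf, floc=0)
s, loc_r, scale_r = stats.lognorm.fit(repair, floc=0)

print(f"Weibull shape {shape:.2f} (<1 means decreasing hazard), scale {scale:.1f} h")
print(f"lognormal median repair {scale_r:.1f} h, sigma {s:.2f}")
```
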
  • 27. PRGA: Privacy-Preserving Recording & Gateway-Assisted Authentication of Power Usage Information for Smart Grid

    Publication Year: 2015 , Page(s): 85 - 97

    The smart grid network facilitates reliable and efficient power generation and transmission. The power system can adjust the amount of electricity generated based on power usage information submitted by end users. Sender authentication and user privacy preservation are two important security issues on this information flow. In this paper, we propose a scheme such that even the control center (power operator) does not know which user makes a request to use more power or an agreement to use less power until the power is actually used. At the end of each billing period (i.e., after electricity usage), the end user can prove to the power operator that it has really requested to use more power or agreed to use less power earlier. To reduce the total traffic volume in the communications network, our scheme allows gateway smart meters to help aggregate power usage information, and the power generators to determine the total amount of power that needs to be generated at different times. To reduce the impact of attacking traffic, our scheme allows gateway smart meters to help filter messages before they reach the control center. Through analysis and experiments, we show that our scheme is both effective and efficient.

  • 28. Robust Multi-Factor Authentication for Fragile Communications

    Publication Year: 2014 , Page(s): 568 - 581

    In large-scale systems, user authentication usually needs the assistance of a remote central authentication server via networks. The authentication service, however, could be slow or unavailable due to natural disasters or various cyber attacks on communication channels. This has raised serious concerns in systems which need robust authentication in emergency situations. The contribution of this paper is two-fold. In a slow connection situation, we present a secure generic multi-factor authentication protocol to speed up the whole authentication process. Compared with another generic protocol in the literature, the new proposal provides the same function with significant improvements in computation and communication. Another authentication mechanism, which we name stand-alone authentication, can authenticate users when the connection to the central server is down. We investigate several issues in stand-alone authentication and show how to add it to multi-factor authentication protocols in an efficient and generic way.

  • 29. Generating Summary Risk Scores for Mobile Applications

    Publication Year: 2014 , Page(s): 238 - 251

    One of Android's main defense mechanisms against malicious apps is a risk communication mechanism which, before a user installs an app, warns the user about the permissions the app requires, trusting that the user will make the right decision. This approach has been shown to be ineffective as it presents the risk information of each app in a “stand-alone” fashion and in a way that requires too much technical knowledge and time to distill useful information. We discuss the desired properties of risk signals and relative risk scores for Android apps in order to generate another metric that users can utilize when choosing apps. We present a wide range of techniques to generate both risk signals and risk scores that are based on heuristics as well as principled machine learning techniques. Experimental results conducted using real-world data sets show that these methods can effectively identify malware as very risky, are simple to understand, and easy to use.

  • 30. SIP Flooding Attack Detection with a Multi-Dimensional Sketch Design

    Publication Year: 2014 , Page(s): 582 - 595

    The session initiation protocol (SIP) is widely used for controlling multimedia communication sessions over the Internet Protocol (IP). Effectively detecting a flooding attack to the SIP proxy server is critical to ensure robust multimedia communications over the Internet. The existing flooding detection schemes are inefficient in detecting low-rate flooding from dynamic background traffic, or may even totally fail when flooding is launched in a multi-attribute manner by simultaneously manipulating different types of SIP messages. In this paper, we develop an online detection scheme for SIP flooding attacks, by integrating a novel three-dimensional sketch design with the Hellinger distance (HD) detection technique. In our sketch design, each SIP attribute is associated with a two-dimensional sketch hash table, which summarizes the incoming SIP messages into a probability distribution over the sketch table. The evolution of the probability distribution can then be monitored through HD analysis for flooding attack detection. Our three-dimensional design offers the benefit of high detection accuracy even for low-rate flooding, robust performance under multi-attribute flooding, and the capability of selectively discarding the offending SIP messages to prevent the attacks from bringing damage to the network. Furthermore, we design a scheme to control the distribution of the normal traffic over the sketch. Such a design ensures our detection scheme's effectiveness even under the severe distributed denial of service (DDoS) scenario, where attackers can flood over all the sketch table entries. In this paper, we not only theoretically analyze the performance of the proposed detection techniques, but also resort to extensive computer simulations to thoroughly examine the performance.

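    The core detection step, hashing one SIP attribute into a fixed-size sketch and monitoring the Hellinger distance between the resulting distributions, can be shown in a few lines. This is a one-attribute, one-period simplification of the paper's three-dimensional design, with made-up URIs and no particular alarm threshold.

```python
import hashlib
import numpy as np

BUCKETS = 64

def sketch(messages):
    """Hash one SIP attribute (here the From URI) of each message into a
    fixed-size table and normalize it into a probability distribution."""
    table = np.zeros(BUCKETS)
    for uri in messages:
        table[int(hashlib.md5(uri.encode()).hexdigest(), 16) % BUCKETS] += 1
    return table / table.sum()

def hellinger(p, q):
    return np.sqrt(0.5 * ((np.sqrt(p) - np.sqrt(q)) ** 2).sum())

normal = sketch([f"sip:user{i % 50}@example.com" for i in range(5000)])
flood  = sketch([f"sip:user{i % 50}@example.com" for i in range(4000)]
                + ["sip:attacker@evil.example"] * 3000)

print(hellinger(normal, normal))   # ~0, distribution unchanged
print(hellinger(normal, flood))    # noticeably larger -> flag flooding
```
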
  • 31. Hiding in the Mobile Crowd: Location Privacy through Collaboration

    Publication Year: 2014 , Page(s): 266 - 279

    Location-aware smartphones support various location-based services (LBSs): users query the LBS server and learn on the fly about their surroundings. However, such queries give away private information, enabling the LBS to track users. We address this problem by proposing a user-collaborative privacy-preserving approach for LBSs. Our solution does not require changing the LBS server architecture and does not assume third party servers; yet, it significantly improves users' location privacy. The gain stems from the collaboration of mobile devices: they keep their context information in a buffer and pass it to others seeking such information. Thus, a user remains hidden from the server, unless all the collaborative peers in the vicinity lack the sought information. We evaluate our scheme against the Bayesian localization attacks that allow for strong adversaries who can incorporate prior knowledge in their attacks. We develop a novel epidemic model to capture the, possibly time-dependent, dynamics of information propagation among users. Used in the Bayesian inference framework, this model helps analyze the effects of various parameters, such as users' querying rates and the lifetime of context information, on users' location privacy. The results show that our scheme hides a high fraction of location-based queries, thus significantly enhancing users' location privacy. Our simulations with real mobility traces corroborate our model-based findings. Finally, our implementation on mobile platforms indicates that it is lightweight and the cost of collaboration is negligible.

  • 32. Context-based Access Control Systems for Mobile Devices

    Publication Year: 2014 , Page(s): 1

    Mobile Android applications often have access to sensitive data and resources on the user device. Misuse of this data by malicious applications may result in privacy breaches and sensitive data leakage. An example would be a malicious application surreptitiously recording a confidential business conversation. The problem arises from the fact that Android users do not have control over the application capabilities once the applications have been granted the requested privileges upon installation. In many cases, however, whether an application may get a privilege depends on the specific user context, and thus we need a context-based access control mechanism by which privileges can be dynamically granted or revoked to applications based on the specific context of the user. In this paper, we propose such an access control mechanism. Our implementation of context differentiates between closely located sub-areas within the same location. We have modified the Android operating system so that context-based access control restrictions can be specified and enforced. We have performed several experiments to assess the efficiency of our access control mechanism and the accuracy of context detection.

  • 33. Extending the Agile Development Process to Develop Acceptably Secure Software

    Publication Year: 2014 , Page(s): 497 - 509
    Cited by:  Papers (1)

    The agile software development approach makes developing secure software challenging. Existing approaches for extending the agile development process, which enables incremental and iterative software development, fall short of providing a method for efficiently ensuring the security of the software increments produced at the end of each iteration. This article (a) proposes a method for security reassurance of software increments and demonstrates it through a simple case study, (b) integrates security engineering activities into the agile software development process and uses the security reassurance method to ensure that the software increments produced at the end of each iteration are acceptably secure in the judgment of the business owner, and (c) discusses the compliance of the proposed method with the agile values and its ability to produce secure software increments.

  • 34. Efficient and Privacy-Aware Data Aggregation in Mobile Sensing

    Publication Year: 2014 , Page(s): 115 - 129
    Cited by:  Papers (1)

    The proliferation and ever-increasing capabilities of mobile devices such as smart phones give rise to a variety of mobile sensing applications. This paper studies how an untrusted aggregator in mobile sensing can periodically obtain desired statistics over the data contributed by multiple mobile users, without compromising the privacy of each user. Although there are some existing works in this area, they either require bidirectional communications between the aggregator and mobile users in every aggregation period, or have high computation overhead and cannot support large plaintext spaces. Also, they do not consider the Min aggregate, which is quite useful in mobile sensing. To address these problems, we propose an efficient protocol to obtain the Sum aggregate, which employs an additive homomorphic encryption and a novel key management technique to support a large plaintext space. We also extend the sum aggregation protocol to obtain the Min aggregate of time-series data. To deal with dynamic joins and leaves of mobile users, we propose a scheme that utilizes the redundancy in security to reduce the communication cost for each join and leave. Evaluations show that our protocols are orders of magnitude faster than existing solutions and have much lower communication overhead.

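    The flavor of privacy-preserving Sum aggregation can be illustrated with a generic additively masked construction: each user's masking key cancels against the others modulo M, so the aggregator learns only the total. This is a textbook masking sketch standing in for the paper's additive homomorphic encryption and key management scheme, which it does not reproduce.

```python
import secrets

M = 2 ** 32        # modulus, chosen large enough to hold the plaintext sum

def setup_keys(n_users):
    """Give each user a random masking key; the keys are chosen so that
    they cancel modulo M (a trusted setup is assumed for this sketch)."""
    keys = [secrets.randbelow(M) for _ in range(n_users - 1)]
    keys.append((-sum(keys)) % M)
    return keys

def encrypt(value, key):
    return (value + key) % M          # each submission looks uniformly random

def aggregate(ciphertexts):
    return sum(ciphertexts) % M       # keys cancel, leaving only the sum

readings = [17, 42, 5, 30]
keys = setup_keys(len(readings))
print(aggregate([encrypt(v, k) for v, k in zip(readings, keys)]))  # -> 94
```
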
  • 35. Detecting Automation of Twitter Accounts: Are You a Human, Bot, or Cyborg?

    Publication Year: 2012 , Page(s): 811 - 824
    Cited by:  Papers (6)

    Twitter is a new web application playing dual roles of online social networking and microblogging. Users communicate with each other by publishing text-based posts. The popularity and open structure of Twitter have attracted a large number of automated programs, known as bots, which appear to be a double-edged sword to Twitter. Legitimate bots generate a large amount of benign tweets delivering news and updating feeds, while malicious bots spread spam or malicious contents. More interestingly, in the middle between human and bot, there has emerged the cyborg, referring to either a bot-assisted human or a human-assisted bot. To assist human users in identifying who they are interacting with, this paper focuses on the classification of human, bot, and cyborg accounts on Twitter. We first conduct a set of large-scale measurements with a collection of over 500,000 accounts. We observe the difference among human, bot, and cyborg in terms of tweeting behavior, tweet content, and account properties. Based on the measurement results, we propose a classification system that includes the following four parts: 1) an entropy-based component, 2) a spam detection component, 3) an account properties component, and 4) a decision maker. It uses the combination of features extracted from an unknown user to determine the likelihood of being a human, bot, or cyborg. Our experimental evaluation demonstrates the efficacy of the proposed classification system.

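    The entropy-based component can be illustrated by measuring how regular an account's posting times are: an account that tweets on a timer concentrates its inter-tweet intervals in very few bins and scores a low entropy, while human posting is burstier. The bin width, timestamps, and any decision threshold below are illustrative assumptions, not the paper's settings.

```python
import math
from collections import Counter

def interval_entropy(timestamps, bin_seconds=60):
    """Shannon entropy of binned inter-tweet intervals (timestamps in
    seconds, ascending).  Highly regular, automated posting yields low
    entropy; irregular human posting yields higher entropy."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    bins = Counter(int(g // bin_seconds) for g in gaps)
    total = sum(bins.values())
    return -sum((c / total) * math.log2(c / total) for c in bins.values())

bot   = [i * 600 for i in range(50)]          # a tweet exactly every 10 minutes
human = [0, 40, 900, 950, 5000, 5400, 20000, 20100, 40000, 41000]
print(interval_entropy(bot), interval_entropy(human))   # bot entropy is 0.0
```
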
  • 36. Private Searching on Streaming Data Based on Keyword Frequency

    Publication Year: 2014 , Page(s): 155 - 167

    Private searching on streaming data is a process to dispatch to a public server a program, which searches streaming sources of data without revealing searching criteria and then sends back a buffer containing the findings. From an Abelian group homomorphic encryption, the searching criteria can be constructed by only simple combinations of keywords, for example, disjunction of keywords. The recent breakthrough in fully homomorphic encryption has allowed us to construct arbitrary searching criteria theoretically. In this paper, we consider a new private query, which searches for documents from streaming data on the basis of keyword frequency, such that the frequency of a keyword is required to be higher or lower than a given threshold. This form of query can help us in finding more relevant documents. Based on state-of-the-art fully homomorphic encryption techniques, we give disjunctive, conjunctive, and complement constructions for private threshold queries based on keyword frequency. Combining the basic constructions, we further present a generic construction for arbitrary private threshold queries based on keyword frequency. Our protocols are semantically secure as long as the underlying fully homomorphic encryption scheme is semantically secure.

  • 37. Wireless Intrusion Detection and Device Fingerprinting through Preamble Manipulation

    Publication Year: 2014 , Page(s): 1
    PDF (1367 KB)

    Wireless networks are particularly vulnerable to spoofing and route poisoning attacks due to the contested transmission medium. Recent works investigate physical-layer features such as received signal strength or radio frequency fingerprints to localize and identify malicious devices. In this paper, we demonstrate a novel and complementary approach to exploiting physical-layer differences among wireless devices that is more energy efficient and invariant with respect to the environment. Specifically, we exploit subtle design differences among transceiver hardware types. Transceivers fulfill the physical-layer aspects of wireless networking protocols, yet specific hardware implementations vary among manufacturers and device types. In this paper, we demonstrate that precise manipulation of the physical-layer header prevents a subset of transceiver types from receiving the manipulated packet. By soliciting acknowledgments from wireless devices using a small number of packets with manipulated preambles and frame lengths, a response pattern identifies the true transceiver class of the device under test. Herein we demonstrate a transceiver taxonomy of six classes with greater than 99% accuracy, irrespective of environment. We successfully demonstrate wireless multi-factor authentication, intrusion detection, and transceiver type fingerprinting through preamble manipulation.
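
    A hypothetical sketch of the response-pattern matching step only; the probe names and the signature table below are invented for illustration and are not the paper's measured six-class taxonomy.

```python
# Illustrative sketch: classify a device by which manipulated probe frames it
# acknowledges. Probe names and the signature table are hypothetical, not the
# paper's measured taxonomy.
PROBES = ["short_preamble", "truncated_sync", "bad_length_field", "nonstd_rate"]

SIGNATURES = {            # 1 = device ACKs that manipulated frame
    (1, 1, 0, 1): "transceiver_class_A",
    (1, 0, 0, 1): "transceiver_class_B",
    (0, 1, 1, 0): "transceiver_class_C",
}


def classify(send_probe):
    """send_probe(name) -> True if the device under test ACKed the frame."""
    pattern = tuple(int(send_probe(p)) for p in PROBES)
    return SIGNATURES.get(pattern, "unknown")


# Stubbed device that ignores only frames with a bad length field.
print(classify(lambda probe: probe != "bad_length_field"))  # transceiver_class_A
```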

  • 38. Camouflage Traffic: Minimizing Message Delay for Smart Grid Applications under Jamming

    Publication Year: 2015 , Page(s): 31 - 44
    PDF (1310 KB) | HTML

    Smart grid is a cyber-physical system that integrates power infrastructures with information technologies. To facilitate efficient information exchange, wireless networks have been proposed for wide use in the smart grid. However, the jamming attack, which constantly broadcasts radio interference, is a primary security threat preventing the deployment of wireless networks in the smart grid. Hence, spread spectrum systems, which provide jamming resilience via multiple frequency and code channels, must be adapted to the smart grid for secure wireless communications, while at the same time providing latency guarantees for control messages. An open question is how to minimize message delay for timely smart grid communication under any potential jamming attack. To address this issue, we provide a paradigm shift from the case-by-case methodology, which is widely used in existing works to investigate well-adopted attack models, to the worst-case methodology, which offers delay performance guarantees for smart grid applications under any attack. We first define a generic jamming process that characterizes a wide range of existing attack models. Then, we show that in all strategies under the generic process, the worst-case message delay is a U-shaped function of network traffic load. This indicates that, interestingly, increasing a fair amount of traffic can in fact improve the worst-case delay performance. As a result, we demonstrate a lightweight yet promising system, transmitting adaptive camouflage traffic (TACT), to combat jamming attacks. TACT minimizes the message delay by generating extra traffic, called camouflage, to balance the network load at the optimum. Experiments show that TACT can decrease the probability that a message is not delivered on time by an order of magnitude.
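
    A minimal sketch of the load-balancing idea only: pick the traffic load that minimizes an assumed U-shaped worst-case delay curve and inject the difference as camouflage traffic. The curve below is a stand-in, not the paper's delay model.

```python
# Minimal sketch of the load-balancing idea: pick the traffic load minimizing an
# assumed U-shaped worst-case delay curve, then inject the difference as
# camouflage traffic. The curve is a stand-in, not the paper's delay model.
def worst_case_delay(load):
    jam_term = 1.0 / load            # too little traffic: jammer dominates
    queue_term = 1.0 / (1.0 - load)  # too much traffic: queueing dominates
    return jam_term + queue_term


def camouflage_needed(current_load, step=0.001):
    candidates = [i * step for i in range(1, int(1 / step))]
    optimum = min(candidates, key=worst_case_delay)
    return max(0.0, optimum - current_load)  # extra traffic to generate


print(round(camouflage_needed(0.2), 3))  # 0.3 -> push the load toward the optimum 0.5
```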

  • 39. Meeting Cardinality Constraints in Role Mining

    Publication Year: 2015 , Page(s): 71 - 84
    Multimedia
    PDF (2059 KB)

    Role mining is a critical step for organizations that migrate from traditional access control mechanisms to role-based access control (RBAC). Additional constraints may be imposed while generating roles from a given user-permission assignment relation. In this paper, we consider two such constraints that are duals of each other. A role-usage cardinality constraint limits the maximum number of roles any user can have. Its dual, the permission-distribution cardinality constraint, limits the maximum number of roles to which a permission can belong. These two constraints impose mutually contradictory requirements on user-to-role and role-to-permission assignments: an attempt to satisfy one of the constraints may result in a violation of the other. We show that the constrained role mining problem is NP-complete and present heuristic solutions. Two distinct frameworks are presented in this paper. In the first approach, roles are initially mined without taking the constraints into account. The user-role and role-permission assignments are then checked for constraint violations in a post-processing step and re-assigned if necessary. In the second approach, constraints are enforced during the process of role mining. The methods are first applied to problems that consider the two constraints individually, and then with both considered together. Both methods are evaluated over a number of real-world data sets.
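
    A small sketch of the post-processing check used in the first framework, over hypothetical assignments; it only detects violations of the two cardinality constraints and omits the paper's re-assignment step.

```python
# Sketch of the post-processing check only: flag users and permissions that
# violate the two cardinality constraints. Assignments here are hypothetical.
def violations(user_roles, role_perms, max_roles_per_user, max_roles_per_perm):
    bad_users = [u for u, roles in user_roles.items() if len(roles) > max_roles_per_user]
    perm_roles = {}
    for role, perms in role_perms.items():
        for p in perms:
            perm_roles.setdefault(p, set()).add(role)
    bad_perms = [p for p, roles in perm_roles.items() if len(roles) > max_roles_per_perm]
    return bad_users, bad_perms


user_roles = {"alice": {"r1", "r2", "r3"}, "bob": {"r1"}}
role_perms = {"r1": {"read"}, "r2": {"read", "write"}, "r3": {"read"}}
print(violations(user_roles, role_perms, max_roles_per_user=2, max_roles_per_perm=2))
# (['alice'], ['read']) -> both constraints violated, triggering re-assignment
```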

  • 40. Privacy-Preserving Multi-Class Support Vector Machine for Outsourcing the Data Classification in Cloud

    Publication Year: 2014 , Page(s): 467 - 479
    PDF (1143 KB)

    Emerging cloud computing infrastructure replaces traditional outsourcing techniques and provides flexible services to clients at different locations via the Internet. This leads to the requirement that data classification be performed by potentially untrusted servers in the cloud. In this context, a classifier built by the server can be utilized by clients to classify their own data samples over the cloud. In this paper, we study a privacy-preserving (PP) data classification technique in which the server is unable to learn any knowledge about clients' input data samples while the server-side classifier is also kept secret from the clients during the classification process. More specifically, to the best of our knowledge, we propose the first known client-server data classification protocol using support vector machines. The proposed protocol performs PP classification for both two-class and multi-class problems. The protocol exploits properties of Paillier homomorphic encryption and secure two-party computation. At the core of our protocol lies an efficient, novel protocol for securely obtaining the sign of Paillier-encrypted numbers.
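
    A toy Paillier demonstration (tiny, insecure parameters) of the additive property the protocol relies on; the secure sign-extraction and SVM steps are not shown.

```python
# Toy Paillier demo (tiny, insecure parameters) of the additive property the
# protocol relies on: Enc(a) * Enc(b) mod n^2 decrypts to a + b.
import math
import secrets

p, q = 2357, 2551                 # demo primes only; real keys are much larger
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)              # valid because g = n + 1


def encrypt(m):
    while True:
        r = secrets.randbelow(n - 1) + 1
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2


def decrypt(c):
    return (((pow(c, lam, n2) - 1) // n) * mu) % n


a, b = 123, 456
print(decrypt((encrypt(a) * encrypt(b)) % n2))  # 579 == a + b
```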

  • 41. Modeling and Analysis on the Propagation Dynamics of Modern Email Malware

    Publication Year: 2014 , Page(s): 361 - 374
    PDF (2064 KB) | HTML

    Due to the critical security threats posed by email-based malware in recent years, modeling the propagation dynamics of email malware has become a fundamental technique for predicting its potential damage and developing effective countermeasures. Compared to earlier versions of email malware, modern email malware exhibits two new features, reinfection and self-start. Reinfection refers to the behavior that modern email malware sends out malware copies whenever any healthy or infected recipient opens the malicious attachment. Self-start refers to the behavior that malware starts to spread whenever compromised computers restart or certain files are visited. In the literature, several models have been proposed for email malware propagation, but they do not take these two features into account and cannot accurately model the propagation dynamics of modern email malware. To address this problem, we derive a novel difference-equation-based analytical model by introducing the new concept of a virtual infected user. The proposed model can precisely represent the repetitious spreading process caused by reinfection and self-start and effectively overcome the associated computational challenges. We perform a comprehensive empirical and theoretical study to validate the proposed analytical model. The results show that our model greatly outperforms previous models in terms of estimation accuracy.
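
    For illustration of the modeling style only, a generic discrete-time difference equation for email-borne spread; it omits the paper's reinfection and self-start refinements and its virtual-infected-user construct, and all parameters are assumed.

```python
# Illustrative discrete-time difference equation only; it omits the paper's
# reinfection/self-start terms and its virtual-infected-user construct.
def spread(n=10000, contacts=30, p_open=0.04, steps=12):
    infected = [1.0]  # expected number of infected users per time step
    for _ in range(steps):
        i = infected[-1]
        # probability that a given healthy user receives and opens at least one copy
        p_hit = 1.0 - (1.0 - p_open) ** (contacts * i / n)
        infected.append(i + (n - i) * p_hit)
    return infected


print([round(x, 1) for x in spread()])
```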

  • 42. SORT: A Self-ORganizing Trust Model for Peer-to-Peer Systems

    Publication Year: 2013 , Page(s): 14 - 27
    Cited by:  Papers (2)
    Multimedia
    PDF (1361 KB) | HTML

    The open nature of peer-to-peer systems exposes them to malicious activity. Building trust relationships among peers can mitigate attacks by malicious peers. This paper presents distributed algorithms that enable a peer to reason about the trustworthiness of other peers based on past interactions and recommendations. Peers create their own trust network in their proximity by using locally available information and do not try to learn global trust information. Two contexts of trust, service and recommendation, are defined to measure trustworthiness in providing services and in giving recommendations. Interactions and recommendations are evaluated based on importance, recentness, and peer satisfaction parameters. Additionally, the recommender's trustworthiness and confidence about a recommendation are considered while evaluating recommendations. Simulation experiments on a file sharing application show that the proposed model can mitigate attacks against 16 different malicious behavior models. In the experiments, good peers were able to form trust relationships in their proximity and isolate malicious peers.
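
    A minimal sketch of the flavor of such a metric (the fading weight and sample history are illustrative, not SORT's exact equations): recent, important, satisfying interactions raise a peer's service trust, while older ones fade.

```python
# Illustrative only (not SORT's exact equations): recent, important, satisfying
# interactions raise a peer's service trust; older ones fade exponentially.
def service_trust(interactions, fading=0.9):
    """interactions: list of (satisfaction, importance) pairs in [0, 1], oldest first."""
    num = den = 0.0
    for age, (sat, imp) in enumerate(reversed(interactions)):
        weight = (fading ** age) * imp
        num += weight * sat
        den += weight
    return num / den if den else 0.0


history = [(0.9, 0.5), (0.8, 1.0), (0.2, 1.0)]  # most recent interaction was poor
print(round(service_trust(history), 3))          # pulled below the plain average by recency
```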

  • 43. Process Authentication for High System Assurance

    Publication Year: 2014 , Page(s): 168 - 180
    PDF (798 KB) | HTML

    This paper points out the need in modern operating system kernels for a process authentication mechanism, whereby a process of a user-level application proves its identity to the kernel. Process authentication is different from process identification. Identification is a way to describe a principal; PIDs or process names are identifiers for processes in an OS environment. However, information such as process names or executable paths that is conventionally used by the OS to identify a process is not reliable. As a result, malware may impersonate other processes, thus violating system assurance. We propose a lightweight secure application authentication framework in which user-level applications are required to present proofs at runtime in order to be authenticated to the kernel. To demonstrate the application of process authentication, we develop a system call monitoring framework for preventing unauthorized use or access of system resources. It verifies the identity of processes before completing the requested system calls. We implement and evaluate a prototype of our monitoring architecture in Linux. The results from our extensive performance evaluation show that our prototype incurs reasonably low overhead, indicating the feasibility of our approach for cryptographically authenticating applications and their processes in the operating system.
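
    A conceptual, user-space sketch only (the paper's framework operates in the kernel): a process presents a cryptographic proof over its identity before a monitored request is served. The application name, the HMAC construction, and the provisioning step are assumptions for illustration.

```python
# User-space analogy only (the paper's mechanism lives in the kernel): a process
# presents an HMAC proof over its identity before a monitored request is served.
# The application name and provisioning step are assumptions for illustration.
import hashlib
import hmac
import secrets

PROVISIONED = {"backup_agent": secrets.token_bytes(32)}  # secret installed at deploy time


def prove(app_name, nonce):
    return hmac.new(PROVISIONED[app_name], nonce + app_name.encode(), hashlib.sha256).digest()


def monitor(app_name, nonce, proof, request):
    expected = hmac.new(PROVISIONED.get(app_name, b"\x00"), nonce + app_name.encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, proof):
        return f"denied {request}: {app_name} failed authentication"
    return f"allowed {request} for authenticated {app_name}"


nonce = secrets.token_bytes(16)
print(monitor("backup_agent", nonce, prove("backup_agent", nonce), "open(/etc/shadow)"))
print(monitor("malware", nonce, b"\x00" * 32, "open(/etc/shadow)"))
```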

  • 44. Dynamic Security Risk Management Using Bayesian Attack Graphs

    Publication Year: 2012 , Page(s): 61 - 74
    Cited by:  Papers (15)
    PDF (2118 KB) | HTML

    Security risk assessment and mitigation are two vital processes that need to be executed to maintain a productive IT infrastructure. On one hand, models such as attack graphs and attack trees have been proposed to assess the cause-consequence relationships between various network states; on the other hand, different decision problems have been explored to identify the minimum-cost hardening measures. However, these risk models do not help reason about the causal dependencies between network states. Further, the optimization formulations ignore the issue of resource availability while analyzing a risk model. In this paper, we propose a risk management framework using Bayesian networks that enables a system administrator to quantify the chances of network compromise at various levels. We show how to use this information to develop a security mitigation and management plan. In contrast to other similar models, this risk model lends itself to dynamic analysis during the deployed phase of the network. A multiobjective optimization platform provides the administrator with all the trade-off information required to make decisions in a resource-constrained environment.
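
    A toy sketch of quantifying compromise probabilities on a small attack graph; the exploit probabilities are assumed values, parent influences are combined with noisy-OR, and the graph is chosen so that parent nodes are independent. The paper's mitigation planning and multiobjective optimization are not reproduced.

```python
# Toy attack-graph sketch: exploit probabilities are assumed values, parents are
# combined with noisy-OR, and the graph is chosen so parents are independent.
exploits = {  # node -> (parent nodes, P(exploit succeeds given a compromised parent))
    "web_server": ([], 0.8),                         # reachable directly from the Internet
    "mail_server": ([], 0.6),
    "app_server": (["web_server", "mail_server"], 0.7),
    "database": (["app_server"], 0.5),
}


def p_compromise(node):
    parents, p_exploit = exploits[node]
    if not parents:
        return p_exploit
    p_no_parent = 1.0
    for parent in parents:
        p_no_parent *= 1.0 - p_compromise(parent)    # parents independent in this toy graph
    return p_exploit * (1.0 - p_no_parent)


for node in exploits:
    print(node, round(p_compromise(node), 3))        # database ends up around 0.322
```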

  • 45. Control Flow-Based Malware Variant Detection

    Publication Year: 2014 , Page(s): 307 - 317
    PDF (1791 KB) | HTML

    Static detection of malware variants plays an important role in system security, and control flow has been shown to be an effective characteristic for representing polymorphic malware. In our research, we propose a similarity search of malware to detect these variants using novel distance metrics. We describe a malware signature by the set of control flowgraphs the malware contains. We use a distance metric based on the distance between feature vectors of string-based signatures. The feature vector is a decomposition of the set of graphs into either fixed-size k-subgraphs or q-gram strings of the high-level source after decompilation. We use this distance metric to perform pre-filtering. We also propose a more effective but less computationally efficient distance metric based on the minimum matching distance. The minimum matching distance uses the string edit distances between programs' decompiled flowgraphs and the linear sum assignment problem to construct a minimum-sum-weight matching between two sets of graphs. We implement the distance metrics in a complete malware variant detection system. The evaluation shows that our approach is highly effective in terms of a limited false positive rate, and our system detects more malware variants than other algorithms.
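
    A sketch of the pre-filtering idea only: represent decompiled control flow as a string, decompose it into q-grams, and compare feature vectors. The signature strings and the q value are illustrative, and the minimum matching distance is not shown.

```python
# Sketch of the pre-filtering idea only: represent decompiled control flow as a
# string, decompose it into q-grams, and compare feature vectors. Signatures and
# the q value are illustrative.
from collections import Counter


def qgrams(s, q=4):
    return Counter(s[i:i + q] for i in range(len(s) - q + 1))


def feature_distance(a, b):
    """L1 distance between q-gram feature vectors (smaller means more similar)."""
    return sum(abs(a[k] - b[k]) for k in set(a) | set(b))


original = "while(x){if(y)call A;else call B;}return;"
variant = "while(x){if(y)call A;else call C;}return;"   # small edit by the author
benign = "for(i=0;i<n;i++){sum+=v[i];}return sum;"

print(feature_distance(qgrams(original), qgrams(variant)))  # small distance
print(feature_distance(qgrams(original), qgrams(benign)))   # much larger distance
```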

  • 46. Collusion-Tolerable Privacy-Preserving Sum and Product Calculation without Secure Channel

    Publication Year: 2015 , Page(s): 45 - 57
    PDF (1195 KB)

    Much research has been conducted to securely outsource multiple parties' data aggregation to an untrusted aggregator without disclosing each individual's privately owned data, or to enable multiple parties to jointly aggregate their data while preserving privacy. However, those works either require secure pair-wise communication channels or suffer from high complexity. In this paper, we consider how an external aggregator or multiple parties can learn some algebraic statistics (e.g., sum, product) over participants' privately owned data while preserving the data privacy. We assume all channels are subject to eavesdropping attacks and that all communications throughout the aggregation are open to others. We first propose several protocols that successfully guarantee data privacy under the semi-honest model, and then present advanced protocols that tolerate up to k passive adversaries who do not try to tamper with the computation. Under this weak assumption, we limit both the communication and computation complexity of each participant to a small constant. Finally, we present applications that solve several interesting problems via our protocols.
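
    A toy sketch (small, insecure parameters) of one way pairwise masks can be derived over public channels: each pair of parties agrees on a Diffie-Hellman mask, one adds it and the other subtracts it, so every mask cancels in the aggregate sum. The paper's protocols differ and additionally tolerate collusion.

```python
# Toy sketch (small, insecure parameters): each pair of parties derives a shared
# Diffie-Hellman mask over the public channel; one adds it, the other subtracts
# it, so every mask cancels in the aggregate sum.
import secrets

P = 2_147_483_647  # public prime modulus (demo size only)
G = 5              # public base


class Party:
    def __init__(self, value):
        self.value = value
        self.secret = secrets.randbelow(P - 2) + 1
        self.public = pow(G, self.secret, P)

    def blinded(self, my_index, parties):
        masked = self.value
        for j, other in enumerate(parties):
            if j == my_index:
                continue
            shared = pow(other.public, self.secret, P)  # same value at both endpoints
            masked += shared if my_index < j else -shared
        return masked % P


parties = [Party(v) for v in (11, 25, 7)]
print(sum(party.blinded(i, parties) for i, party in enumerate(parties)) % P)  # 43
```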

  • 47. WarningBird: A Near Real-Time Detection System for Suspicious URLs in Twitter Stream

    Publication Year: 2013 , Page(s): 183 - 195
    Cited by:  Papers (1)
    Multimedia
    PDF (2064 KB)

    Twitter is prone to malicious tweets containing URLs for spam, phishing, and malware distribution. Conventional Twitter spam detection schemes utilize account features such as the ratio of tweets containing URLs and the account creation date, or relation features in the Twitter graph. These detection schemes are ineffective against feature fabrications or consume much time and resources. Conventional suspicious URL detection schemes utilize several features including lexical features of URLs, URL redirection, HTML content, and dynamic behavior. However, evading techniques such as time-based evasion and crawler evasion exist. In this paper, we propose WARNINGBIRD, a suspicious URL detection system for Twitter. Our system investigates correlations of URL redirect chains extracted from several tweets. Because attackers have limited resources and usually reuse them, their URL redirect chains frequently share the same URLs. We develop methods to discover correlated URL redirect chains using the frequently shared URLs and to determine their suspiciousness. We collect numerous tweets from the Twitter public timeline and build a statistical classifier using them. Evaluation results show that our classifier accurately and efficiently detects suspicious URLs. We also present WARNINGBIRD as a near real-time system for classifying suspicious URLs in the Twitter stream.
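
    A simplified sketch of the correlation step only: redirect chains that share URLs are grouped together, since attackers tend to reuse redirection infrastructure. The chains are invented, and feature extraction and the classifier are omitted.

```python
# Simplified sketch of the correlation step: redirect chains that share URLs are
# grouped, since attackers reuse redirection infrastructure. The chains are
# invented; feature extraction and the classifier are omitted.
from collections import defaultdict

chains = {
    "tweet1": ["bit.ly/a1", "redir.example/x", "landing.example/buy"],
    "tweet2": ["bit.ly/b2", "redir.example/x", "landing.example/buy"],
    "tweet3": ["t.co/c3", "news.example/story"],
}


def correlated_groups(chains):
    tweets_by_url = defaultdict(set)
    for tweet, chain in chains.items():
        for url in chain:
            tweets_by_url[url].add(tweet)
    return {frozenset(group) for group in tweets_by_url.values() if len(group) > 1}


print(correlated_groups(chains))  # {frozenset({'tweet1', 'tweet2'})}
```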

  • 48. Security Analysis and Related Usability of Motion-Based CAPTCHAs: Decoding Codewords in Motion

    Publication Year: 2014 , Page(s): 480 - 493
    PDF (1272 KB)

    We explore the robustness and usability of moving-image object recognition (video) CAPTCHAs, designing and implementing automated attacks based on computer vision techniques. Our approach is suitable for broad classes of moving-image CAPTCHAs involving rigid objects. We first present an attack that defeats instances of such a CAPTCHA (NuCaptcha) representing the state of the art, involving dynamic text strings called codewords. We then consider design modifications to mitigate the attacks (e.g., overlapping characters more closely, randomly changing the font of individual characters, or even randomly varying the number of characters in the codeword). We implement the modified CAPTCHAs and test whether designs modified for greater robustness maintain usability. Our lab-based studies show that the modified CAPTCHAs fail to offer viable usability, even when the CAPTCHA strength is reduced below acceptable targets. Worse yet, our GPU-based implementation shows that our automated approach can decode these CAPTCHAs faster than humans can, and at a relatively low cost of roughly 50 cents per 1,000 CAPTCHAs solved, based on Amazon EC2 rates circa 2012. To further demonstrate the challenges in designing usable CAPTCHAs, we also implement and test another variant of moving text strings using the known emerging-images concept. This variant is resilient to our attacks and also offers usability similar to commercially available approaches. We explain why fundamental elements of the emerging-images idea resist our current attack where others fail.

  • 49. Secure Two-Party Differentially Private Data Release for Vertically Partitioned Data

    Publication Year: 2014 , Page(s): 59 - 71
    PDF (734 KB) | HTML

    Privacy-preserving data publishing addresses the problem of disclosing sensitive data when mining for useful information. Among the existing privacy models, ϵ-differential privacy provides one of the strongest privacy guarantees. In this paper, we address the problem of private data publishing, where different attributes for the same set of individuals are held by two parties. In particular, we present an algorithm for differentially private data release for vertically partitioned data between two parties in the semi-honest adversary model. To achieve this, we first present a two-party protocol for the exponential mechanism. This protocol can be used as a subprotocol by any other algorithm that requires the exponential mechanism in a distributed setting. Furthermore, we propose a two-party algorithm that releases differentially private data in a secure way according to the definition of secure multiparty computation. Experimental results on real-life data suggest that the proposed algorithm can effectively preserve information for a data mining task.
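
    A sketch of the (single-party) exponential mechanism that the two-party protocol realizes jointly; the candidate scores, epsilon, and sensitivity are illustrative.

```python
# Single-party exponential mechanism sketch (the building block the two-party
# protocol computes jointly); candidate scores and epsilon are illustrative.
import math
import random


def exponential_mechanism(candidates, score, epsilon, sensitivity=1.0):
    weights = [math.exp(epsilon * score(c) / (2 * sensitivity)) for c in candidates]
    threshold = random.random() * sum(weights)
    running = 0.0
    for candidate, weight in zip(candidates, weights):
        running += weight
        if threshold <= running:
            return candidate
    return candidates[-1]


# Privately pick a split attribute for the release; higher score = more useful.
scores = {"age": 12.0, "zipcode": 9.5, "gender": 3.0}
print(exponential_mechanism(list(scores), scores.get, epsilon=0.5))
```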

  • 50. A Denial of Service Attack to UMTS Networks Using SIM-Less Devices

    Publication Year: 2014 , Page(s): 280 - 291
    PDF (703 KB) | HTML

    One of the fundamental security elements in cellular networks is the authentication procedure performed by means of the Subscriber Identity Module (SIM), which is required to grant access to network services and hence protect the network from unauthorized usage. Nonetheless, in this work we present a new kind of denial-of-service attack based on properly crafted SIM-less devices that, without any kind of authentication and by exploiting some specific features and performance bottlenecks of the UMTS network attachment process, are potentially capable of introducing significant service degradation, up to disrupting large sections of cellular network coverage. Knowledge of this attack can be exploited by several applications, both in the security sector and in the network equipment manufacturing sector.


Aims & Scope

The purpose of TDSC is to publish papers in dependability and security, including the joint consideration of these issues and their interplay with system performance.

Full Aims & Scope

Meet Our Editors

Editor-in-Chief
Elisa Bertino
CS Department
Purdue University