
11th IEEE High Assurance Systems Engineering Symposium (HASE 2008)

3-5 December 2008


Displaying Results 1 - 25 of 68
  • [Front cover]

    Publication Year: 2008 , Page(s): C1
    PDF (370 KB)
    Freely Available from IEEE
  • [Title page i]

    Publication Year: 2008 , Page(s): i
    PDF (33 KB)
    Freely Available from IEEE
  • [Title page iii]

    Publication Year: 2008 , Page(s): iii
    PDF (67 KB)
    Freely Available from IEEE
  • [Copyright notice]

    Publication Year: 2008 , Page(s): iv
    PDF (46 KB)
    Freely Available from IEEE
  • Table of contents

    Publication Year: 2008 , Page(s): v - ix
    PDF (138 KB)
    Freely Available from IEEE
  • Message from the Chairs

    Publication Year: 2008 , Page(s): x - xi
    PDF (84 KB) | HTML
    Freely Available from IEEE
  • Conference organization

    Publication Year: 2008 , Page(s): xii - xiv
    PDF (103 KB)
    Freely Available from IEEE
  • List of reviewers

    Publication Year: 2008 , Page(s): xv
    PDF (71 KB)
    Freely Available from IEEE
  • Path Sensitive Analysis for Security Flaws

    Publication Year: 2008 , Page(s): 3
    PDF (33 KB)

    Despite increasing efforts in detecting and managing software security flaws, the number of security attacks is still rising every year. As software becomes more complex, security flaws are more easily introduced into a software system and more difficult to eliminate. In this talk, I present our research on the development of a framework for detecting and managing security flaws. The key idea is to develop static analysis tools to determine the program paths that lead to various types of vulnerabilities. I describe a path-sensitive analysis that can handle a number of software vulnerabilities, including buffer overflows, integer errors, violations of safety properties, and flaws that can cause denial of service. The novelty of the work is that we address the scalability of path-sensitive analysis using a demand-driven algorithm, providing both precision and scalability. We first develop a general vulnerability model in which new types of vulnerabilities or application-specific security flaws can easily be specified to guide the demand-driven analysis. The analysis starts at the program points where a vulnerability could possibly occur. A partial reversal of the dataflow analysis is performed to determine the types of paths with regard to feasibility and vulnerability, including the severity of the vulnerability. With this technique, we are able to identify vulnerabilities more precisely. Our experiments show that we detect and classify more vulnerabilities than current tools and that the analysis scales to over one million lines of code. We also provide information about each vulnerability to help the user understand and remove its root cause.
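
    The demand-driven, backward style of analysis described above can be illustrated with a toy sketch (this is a hypothetical illustration, not the authors' tool): starting at a potentially unsafe buffer write, each path is walked backward while the index value is tracked, and each path is classified as safe or vulnerable.

```python
# Toy demand-driven backward analysis over a tiny CFG (hypothetical example).
# CFG: node -> (statement, predecessors). Statement forms used here:
#   ("assign", var, const), ("add", var, const), ("write", var, buf_size)
cfg = {
    "entry": (None, []),
    "a": (("assign", "i", 0), ["entry"]),
    "b": (("add", "i", 8), ["a"]),            # i += 8 on one branch
    "c": (("add", "i", 1), ["a"]),            # i += 1 on the other
    "sink": (("write", "i", 8), ["b", "c"]),  # buf[i] with len(buf) == 8
}

def index_per_path(node, delta=0, path=()):
    """Walk backward from the sink; yield (path, index value at the sink)."""
    stmt, preds = cfg[node]
    if stmt and stmt[0] == "add":
        delta += stmt[2]                      # accumulate increments seen on the way back
    if stmt and stmt[0] == "assign":
        yield tuple(reversed(path + (node,))), stmt[2] + delta
        return
    for p in preds:
        yield from index_per_path(p, delta, path + (node,))

buf_size = cfg["sink"][0][2]
for path, idx in index_per_path("sink"):
    verdict = "VULNERABLE" if idx >= buf_size else "safe"
    print("->".join(path), "index", idx, verdict)
```

    The point of the demand-driven formulation is visible even in this sketch: only the statements that affect the queried index are examined, rather than the whole program.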

  • Transaction Calculus

    Publication Year: 2008 , Page(s): 4
    PDF (118 KB)

    Transaction-based services are increasingly being applied to solve many universal interoperability problems. Compensation is a typical feature of long-running transactions. This paper presents a design model for specifying the behaviour of compensable programs. The new model for handling exception and compensation is built as a conservative extension of the standard relational model. The paper puts forward a mathematical framework for transactions in which a transaction is treated as a mapping from its environment to compensable programs. We propose a transaction refinement calculus and show that every transaction can be converted to a primitive one consisting simply of a forward activity and a compensation module.

  • Assurance Technology of System Test Based on Operators' Aspects

    Publication Year: 2008 , Page(s): 5
    PDF (35 KB)

    As systems have been integrated through networks, they are required to keep operating while heterogeneous demands and modes coexist. The heterogeneous requirements have to be met and the heterogeneous modes have to coexist as the situation evolves, so system management and maintenance costs increase significantly in comparison with construction costs. Changes such as integration, combination, and renewal of a large-scale, complex system strongly affect the operators of that system. However, evaluation from the operators' perspective has not been carried out so far: conventional system evaluation takes the user's viewpoint, measuring the impact of lost service while the system is stopped, and cannot address the above requirement. Therefore, based on the test process for system changes, we define the assurance of a test performed on a running system (an on-line test) as the reduction of its influence on the operators, and we propose on-line test technologies and assurance evaluation technologies from the operator's viewpoint. By using this evaluation technology to introduce appropriate on-line tests and thereby reduce the influence on operators, smooth system changes are enabled and assurance is improved.

  • Security Goal Indicator Trees: A Model of Software Features that Supports Efficient Security Inspection

    Publication Year: 2008 , Page(s): 9 - 18
    Cited by:  Papers (3)
    PDF (357 KB) | HTML

    We analyze the specific challenges of inspecting software development documents for security: Most security goals are formulated as negative (i.e. avoidance) goals, and security is a non-local property of the whole system. We suggest a new type of model for security relevant features to address these challenges. Our model, named security goal indicator tree (SGIT), maps negative and non-local goals to positive, concrete features of the software that can be checked during an inspection. It supports inspection of software documents from various phases of the development process. An SGIT links a security goal with numerous indicators (which may be beneficial or detrimental for the achievement of the goal) and structures the set of indicators by Boolean and conditional relationships enabling an efficient selection of indicator subsets. We present SGIT examples, explain how to use them in an inspection, give advice on creating SGITs, and give an outlook on how SGITs will be embedded in a comprehensive method for software security inspection.

  • Low Cost Secure Computation for the General Client-Server Computation Model

    Publication Year: 2008 , Page(s): 19 - 26
    PDF (325 KB) | HTML

    Due to the large number of attacks on open networks, information theft is becoming an increasingly severe problem. Secure computation can offer highly assured confidentiality protection for critical information and data against external and insider attacks. However, existing secure computation methods are not widely used in practice because of their excessive performance overheads and limited applicability. In this paper, we consider secure computation under a general client-server model that fits the computation of most modern application systems. A novel secure computation protocol has been developed to support highly efficient arithmetic operations. Since the algorithms are based on matrix computations, they are highly efficient. Due to its efficiency and applicability, our secure computation approach can benefit many application systems, including medical databases, secure financial systems, defense information systems, e-commerce systems, agent-based computing, etc.

  • Evaluating Security Risks following a Compliance Perspective

    Publication Year: 2008 , Page(s): 27 - 36
    PDF (504 KB) | HTML

    One of the great challenges in the information security area is the development of methods for measuring the degree of risk to which information is subject, a consequence of the wide gamut of vulnerabilities and potential attacks. The compliance perspective on risk evaluation methodologies can be characterized as the effort to make an information system better aligned with a given security standard, for example ISO 27002. This paper proposes a security assessment procedure for quantifying the current compliance level of information systems (IS) according to a control-based standard. It aims at identifying the controls that should be fully or partially implemented to achieve the maximum return on a given investment (ROI). Basically, to assess compliance, we have investigated different analytical models associated with a set of security attributes and compounds. Lastly, we use hypothetical scenarios to evaluate the behaviour of the proposed models through a comparative analysis under selected requirements.

  • On the Comparison of Network Attack Datasets: An Empirical Analysis

    Publication Year: 2008 , Page(s): 39 - 48
    Cited by:  Papers (1)
    PDF (273 KB) | HTML

    Network malicious activity can be collected and reported by various sources using different attack detection solutions. The granularity of these solutions provides either very detailed information (intrusion detection systems, honeypots) or high-level trends (CAIDA, SANS). The problem for network security operators is often to select the sources of information that best protect their network. How much information from these sources is redundant, and how much is unique? The goal of this paper is to show empirically that while some global attack events can be correlated across various sensors, the majority of incoming malicious activity has local specificities. This study presents a comparative analysis of four different attack datasets offering three different levels of granularity: 1) two high-interaction honeynets deployed at two different locations (i.e., a corporate and an academic environment); 2) ATLAS, a distributed network telescope from Arbor; and 3) Internet Protect™, a global alerting service from AT&T.

  • On the Use of Security Metrics Based on Intrusion Prevention System Event Data: An Empirical Analysis

    Publication Year: 2008 , Page(s): 49 - 58
    PDF (185 KB) | HTML

    With the increasing number of attacks on the Internet, a primary concern for organizations is the protection of their network. To do so, organizations install security devices such as intrusion prevention systems to monitor network traffic. However, the data collected by these devices are often imperfect. The contribution of this paper is to define practical metrics based on imperfect data collected by an intrusion prevention system. Since attacks differ greatly, we propose to group them into several attack-type groups, and we define a set of metrics for each group. We introduce an approach that consists of analyzing the evolution of these metrics per attack-type group, focusing on outliers, in order to give insight into an organization's security. The method is assessed for an organization of about 40,000 computers. The results were encouraging: outliers could be related to security issues that, in some cases, had not been previously flagged.
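
    The outlier-flagging idea can be sketched as follows (a hypothetical illustration with made-up numbers, not the paper's metrics): compute a daily metric per attack-type group, then flag days whose value deviates strongly from that group's history, here with a simple z-score rule.

```python
# Hypothetical sketch: flag days whose alert count for one attack-type
# group deviates from the group's mean by more than `threshold` standard
# deviations. Note: with few samples, a single huge spike inflates the
# sample stdev, which caps the attainable z-score, hence threshold 2.0.
from statistics import mean, stdev

def flag_outliers(daily_counts, threshold=2.0):
    """Return indices of days whose count is an outlier for this group."""
    mu, sigma = mean(daily_counts), stdev(daily_counts)
    if sigma == 0:
        return []                      # constant history: nothing to flag
    return [i for i, c in enumerate(daily_counts)
            if abs(c - mu) / sigma > threshold]

# e.g. alerts per day for a hypothetical "web scan" attack-type group
web_scans = [12, 15, 11, 14, 13, 90, 12]   # day 5 spikes
print(flag_outliers(web_scans))            # -> [5]
```

    In practice one would run this per attack-type group, as the abstract describes, since a count that is normal for one group (e.g. scans) may be anomalous for another.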

  • The Deployment of a Darknet on an Organization-Wide Network: An Empirical Analysis

    Publication Year: 2008 , Page(s): 59 - 68
    Cited by:  Papers (1)
    PDF (296 KB) | HTML

    Darknet sensors have the interesting property of collecting only suspicious traffic, including misconfiguration, backscatter and malicious traffic. The type of traffic collected depends highly on two parameters: the size and the location of the darknet sensor. The goals of this paper are to study empirically the relationship between these two parameters and to try to increase the volume of attackers detected by a given darknet sensor. Our empirical results reveal that on average, on a daily basis, 485 distinct external source IP addresses perform a TCP scan on one of the two /16 networks of our organization's network. Moreover, a given darknet sensor of 77 IP addresses deployed in the same /16 network collects attack traffic from, on average, 26% of these attackers.

  • A Scalable Checkpoint Encoding Algorithm for Diskless Checkpointing

    Publication Year: 2008 , Page(s): 71 - 79
    Cited by:  Papers (4)
    PDF (220 KB) | HTML

    Diskless checkpointing is an efficient technique to save the state of a long-running application in a distributed environment without relying on stable storage. In this paper, we introduce several scalable encoding strategies into diskless checkpointing and reduce the overhead to survive k failures in p processes from 2⌈log p⌉ · k((β + 2γ)m + α) to (1 + O(1/√m)) · k(β + 2γ)m, where α is the communication latency, 1/β is the network bandwidth between processes, 1/γ is the rate at which calculations are performed, and m is the size of the local checkpoint per process. The introduced algorithm is scalable in the sense that the overhead to survive k failures in p processes does not increase as the number of processes p increases. We evaluate the performance overhead of the introduced algorithm using a preconditioned conjugate gradient equation solver as an example. Experimental results demonstrate that the introduced techniques are highly scalable.
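
    The two overhead expressions quoted in the abstract can be compared numerically. In this sketch the parameter values are illustrative (not from the paper), and the O(1/√m) term is taken as exactly 1/√m:

```python
# Worked comparison of the two checkpoint-encoding overhead formulas.
# Parameter values below are illustrative assumptions, not measured data.
import math

def tree_overhead(p, k, m, alpha, beta, gamma):
    """Classical encoding: 2*ceil(log2 p) * k * ((beta + 2*gamma)*m + alpha)."""
    return 2 * math.ceil(math.log2(p)) * k * ((beta + 2 * gamma) * m + alpha)

def scalable_overhead(p, k, m, alpha, beta, gamma):
    """Scalable encoding: (1 + O(1/sqrt(m))) * k * (beta + 2*gamma) * m,
    taking the O(1/sqrt(m)) term as exactly 1/sqrt(m) for illustration."""
    return (1 + 1 / math.sqrt(m)) * k * (beta + 2 * gamma) * m

# p processes, k tolerated failures, m-word local checkpoints
p, k, m = 1024, 4, 10**6
alpha, beta, gamma = 1e-5, 1e-8, 1e-9   # latency, 1/bandwidth, 1/flop-rate
print(tree_overhead(p, k, m, alpha, beta, gamma))      # ~0.96 s
print(scalable_overhead(p, k, m, alpha, beta, gamma))  # ~0.048 s
```

    For these values the ratio is close to 2⌈log p⌉ = 20, which is exactly the factor the new encoding removes: the scalable overhead no longer grows with p.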

  • HyperMIP: Hypervisor Controlled Mobile IP for Virtual Machine Live Migration across Networks

    Publication Year: 2008 , Page(s): 80 - 88
    Cited by:  Papers (11)  |  Patents (3)
    PDF (346 KB) | HTML

    Live migration provides a transparent load-balancing and fault-tolerance mechanism for applications. When a Virtual Machine migrates between hosts residing in two different networks, its network attachment point changes, so the Virtual Machine suffers from the IP mobility problem after migration. This paper proposes an approach called Hypervisor-controlled Mobile IP (HyperMIP) to support live migration of Virtual Machines across networks, which enables virtual machine live migration over distributed computing resources. Since the Hypervisor is capable of predicting the exact time and destination host of a Virtual Machine migration, our approach not only improves migration performance but also reduces the network restoration latency. Comprehensive experiments have been conducted, and the results show that HyperMIP brings negligible overhead to the network performance of Virtual Machines. The network restoration time of HyperMIP-supported migration is only about 3 seconds. HyperMIP is a promising essential component for providing reliability and fault tolerance for network applications running in Virtual Machines.

  • Towards Secure Trust Bootstrapping in Pervasive Computing Environment

    Publication Year: 2008 , Page(s): 89 - 96
    Cited by:  Papers (1)
    PDF (1426 KB) | HTML

    The deployment of small handheld devices in a pervasive environment inevitably raises security concerns when sharing services. Trust models play a major role in guarding against privacy violations and security breaches. Though the assignment of initial trust is an important issue, little work has been done in this area. Most of the prior research on trust models assumes a constant initial trust value. However, in a pervasive smart space, trust is context dependent: the need for security varies from context to context, and some services shared in this environment require high security. To ensure this, security levels should be incorporated into the initial trust calculation. In this paper, we propose a new initial trust model called ICSTB (Integration of Context Security in Trust Bootstrapping). The model categorizes services or contexts into different security levels based on their security needs, and these security needs are considered in trust bootstrapping.

  • Small Logs for Transactional Services: Distinction is Much More Accurate than (Positive) Discrimination

    Publication Year: 2008 , Page(s): 97 - 106
    PDF (298 KB) | HTML

    For complex services, logging is an integral part of many middleware aspects, especially transactions and monitoring. In the event of a failure, the log allows us to deduce the cause of the failure (diagnosis), recover by compensating the logged actions (atomicity), etc. However, for heterogeneous services, logging all the actions is often impracticable due to privacy/security constraints. Also, logging is expensive in terms of both time and space. Thus, we are interested in determining a small number of actions that need to be logged so that the actual sequence of executed actions can be known with certainty from any given partial log. We propose two heuristics to determine such a small set of transitions, with services modeled as finite state machines. The first is based on (positive) discrimination of transitions, using every observation to know (discriminate) that a maximal number of transitions occurred. We characterize it algebraically, giving a very fast algorithm. The second algorithm, the distinguishing algorithm, uses every observation to maximize the number of transitions that are ensured not to have occurred. We show experimentally that the second algorithm gives much more accurate results than the first, although it is also slower (but still fast enough).

  • A Low Energy Soft Error-Tolerant Register File Architecture for Embedded Processors

    Publication Year: 2008 , Page(s): 109 - 116
    Cited by:  Papers (1)
    PDF (366 KB) | HTML

    This paper presents a soft error-tolerant architecture to protect embedded processors' register files. The proposed architecture is based on selective duplication of the most vulnerable register values in a cache memory embedded beside the processor register file, called the register cache. To do this, two parity bits are added to each register of the processor to detect up to three contiguous errors. To recover an erroneous register value, two distinct cache memories are utilized for storing the redundant copies of the vulnerable registers, one for short-lived registers and the other for long-lived registers. The proposed method has two key advantages compared to a fully ECC-protected register file: 1) the proposed architecture corrects up to three contiguous errors while the ECC-protected register file corrects just one bit error, and 2) the proposed architecture consumes about 25% less power than the fully ECC-protected register file. The experimental results show that the AVF of the unprotected register file is improved by about 90% by the proposed architecture while incurring little area overhead.
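
    Why two parity bits suffice to detect a burst of up to three contiguous bit flips can be checked directly (a hypothetical sketch of the scheme's detection property, assuming one parity bit per interleaved half of the register, which is the standard way such a guarantee is obtained):

```python
# Two interleaved parity bits: parity over even bit positions and parity
# over odd bit positions. Any burst of 1-3 contiguous flips changes an
# odd number of bits in at least one of the two groups, so it is detected.
def parities(word, width=32):
    bits = [(word >> i) & 1 for i in range(width)]
    return sum(bits[0::2]) % 2, sum(bits[1::2]) % 2

def detects(word, burst_start, burst_len, width=32):
    """True if flipping burst_len contiguous bits changes either parity."""
    flipped = word ^ (((1 << burst_len) - 1) << burst_start)
    return parities(word, width) != parities(flipped, width)

# every burst of length 1..3, at every position, is detected
assert all(detects(0xDEADBEEF, s, l)
           for l in (1, 2, 3) for s in range(33 - l))
```

    A burst of four flips, by contrast, puts an even number of flips in each interleaved group and slips past both parities, which is why the scheme's guarantee stops at three contiguous errors.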

  • Randomization Based Probabilistic Approach to Detect Trojan Circuits

    Publication Year: 2008 , Page(s): 117 - 124
    Cited by:  Papers (16)
    PDF (176 KB) | HTML

    In this paper, we propose a randomization-based technique to verify whether a manufactured chip conforms to its design or is infected by a trojan circuit. A trojan circuit can be inserted into the design or fabrication mask by a malicious manufacturer such that it monitors for a specific rare trigger condition and then produces a payload error that alters the functionality of the circuit, often causing a catastrophic crash of the system where the chip is being used. Since trojans are activated by rare input patterns, they are stealthy by nature and are difficult to detect through conventional techniques of functional testing. In this paper, we propose a novel randomized approach to probabilistically compare the functionality of the implemented circuit with the design of the circuit. Using hypothesis testing, we provide quantitative guarantees when our algorithm reports that there is no trojan in the implemented circuit. This allows us to trade runtime for accuracy. The technique is sound, that is, it reports the presence of a trojan only if the implemented circuit is actually infected. If our algorithm finds that the implemented circuit is infected with a trojan, it also reports a fingerprint input pattern to distinguish the implemented circuit from the design. We illustrate the effectiveness of our technique on a set of infected and benign combinational circuits.
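
    The randomized-comparison idea can be sketched as follows (a hypothetical illustration, not the paper's algorithm): if a trojan fires on at least a fraction eps of inputs, then n ≥ ln(1/δ)/eps uniformly random trials miss it with probability at most δ, and any observed mismatch is itself the fingerprint input.

```python
# Hypothetical sketch: randomized conformance check of an "implemented"
# circuit against its golden design, with a standard tail-bound trial count.
import math, random

def spec(x, y):                 # golden design: a 4-bit adder
    return (x + y) & 0xF

def infected(x, y):             # implementation with a rare trigger
    if (x, y) == (0xA, 0x5):    # trojan payload on one input pattern
        return 0
    return (x + y) & 0xF

def randomized_check(design, impl, eps, delta, rng=random.Random(0)):
    """Return (True, None) if no mismatch was seen in enough trials to
    bound the miss probability by delta; else (False, fingerprint)."""
    trials = math.ceil(math.log(1 / delta) / eps)
    for _ in range(trials):
        x, y = rng.randrange(16), rng.randrange(16)
        if design(x, y) != impl(x, y):
            return False, (x, y)   # sound: a real mismatch, and a fingerprint
    return True, None

ok, fingerprint = randomized_check(spec, infected, eps=1/256, delta=0.01)
print(ok, fingerprint)
```

    The check is sound in the paper's sense: it can only report a trojan when an actual input/output mismatch was observed, and the quantitative guarantee (here δ = 1%) applies only to the "no trojan found" verdict.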

  • On the Integrity of Lightweight Checkpoints

    Publication Year: 2008 , Page(s): 125 - 134
    Cited by:  Papers (2)
    PDF (202 KB) | HTML

    This paper proposes a lightweight checkpointing scheme for real-time embedded systems. The goal is to separate concerns by allowing applications to take checkpoints independently while providing them with an operating system service to assure the integrity of checkpoints. The scheme takes error detection latency into account and assumes a broad class of application failure modes. In this paper we detail the design of the operating system service, which offers a very simple programming model to application designers and introduces only a small execution overhead for each checkpoint. Moreover, we describe the usage of model checking to ascertain the correctness of our approach.

  • A Fast Performance Analysis Tool for Multicore, Multithreaded Communication Processors

    Publication Year: 2008 , Page(s): 135 - 144
    Cited by:  Papers (2)
    PDF (343 KB) | HTML

    To allow fast communication processor (CP) performance testing of task-to-CP-topology mappings, we propose a fast CP simulation tool with a few novel ideas that make it generic, fast, and accurate. Our major goal is to focus on modeling features common to a wide variety of CP architectures and to incorporate relevant CP-specific features as plug-ins. This tool not only allows user-defined packet arrival processes and code path mixtures to be tested, but also provides a way to quickly estimate the maximum sustainable line rate. Case studies based on a large number of code samples available in IXP1200/2400 workbenches show that the maximum sustainable line rates estimated using our tool are consistently within 6% of cycle-accurate simulation results. Moreover, each simulation run takes only a few seconds to finish on a Pentium III PC, which strongly demonstrates the power of this tool for fast CP performance testing.
