
Technologies for Homeland Security (HST), 2010 IEEE International Conference on

Date 8-10 Nov. 2010


Displaying Results 1 - 25 of 103
  • Welcome message from the Conference Chair

    Page(s): 1 - 3
    PDF (577 KB)
    Freely Available from IEEE
  • Organizational Committee

    Page(s): 1 - 2
    PDF (352 KB)
    Freely Available from IEEE
  • Keynote speaker

    Page(s): 1 - 6
    PDF (144 KB)

    Provides an abstract of the keynote presentation and a brief professional biography of the presenter. The complete presentation was not made available for publication as part of the conference proceedings.

  • Track speaker

    Page(s): 1 - 4
    PDF (184 KB)

    Provides an abstract for each of the presentations and a brief professional biography of each presenter. The complete presentations were not made available for publication as part of the conference proceedings.

  • Thank you to our sponsors and exhibitors

    Page(s): 1
    PDF (79 KB)
    Freely Available from IEEE
  • Westin hotel room layout

    Page(s): i - 365
    PDF (3716 KB)
    Freely Available from IEEE
  • Table of contents

    Page(s): 1 - 11
    PDF (170 KB)
    Freely Available from IEEE
  • Author index

    Page(s): 1 - 10
    PDF (98 KB)
    Freely Available from IEEE
  • [Copyright notice]

    Page(s): 1
    PDF (114 KB)
    Freely Available from IEEE
  • The DETER project: Advancing the science of cyber security experimentation and test

    Page(s): 1 - 7
    PDF (580 KB) | HTML

    Since 2004, the DETER Cybersecurity Testbed Project has worked to create the necessary infrastructure (facilities, tools, and processes) to provide a national resource for experimentation in cyber security. The next generation of DETER envisions several conceptual advances in testbed design and experimental research methodology, targeting improved experimental validity, enhanced usability, and increased size, complexity, and diversity of experiments. This paper outlines the DETER project's status and current R&D directions.

  • Experimental results of cross-site exchange of web content Anomaly Detector alerts

    Page(s): 8 - 14
    PDF (496 KB) | HTML

    We present our initial experimental findings from the collaborative deployment of network Anomaly Detection (AD) sensors. Our system examines ingress HTTP traffic and correlates AD alerts from two administratively disjoint domains: Columbia University and George Mason University. We show that, by exchanging packet content alerts between the two sites, we can achieve zero-day attack detection capabilities with a relatively small number of false positives. Furthermore, we empirically demonstrate that the vast majority of common abnormal data represent attack vectors rather than false positives. We posit that cross-site collaboration enables the automated detection of common abnormal data, which is likely to ferret out zero-day attacks with high accuracy and minimal human intervention.

  • Evaluating information assurance performance and the impact of data characteristics

    Page(s): 15 - 21
    PDF (3076 KB) | HTML

    Research and development of new information assurance techniques and technologies is ongoing and varied. Each new proposal and technique arrives with great promise and anticipated success as research teams struggle to develop new and innovative responses to emerging threats. Unfortunately, these techniques frequently fall short of expectations when deployed, due to difficulties with false alarms, trouble operating in a non-idealized or new domain, or flexibility-limiting assumptions which are only valid with specific input sets. We believe these failures are due to fundamental problems with the experimental method for evaluating the effectiveness of new ideas and techniques. This work explores the effect of a poorly understood data synthesis process on the evaluation of IA devices. The point of an evaluation is to independently determine what a detector can and cannot detect, i.e., the metric of detection. This can only be done when the data contains carefully controlled ground truth. We broadly define the term "similarity class" to facilitate discussion about the different ways data (and more specifically test data) can be similar, and use these ideas to illustrate the prerequisites for correct evaluation of anomaly detectors. We focus on how anomaly detectors function and should be evaluated in two specific domains with disparate system architectures and data: a sensor and data transport network for air frame tracking and display, and a deep space mission spacecraft command link. Finally, we present empirical evidence illustrating the effectiveness of our approach in these domains, and introduce the entropy of a time series sensor as a critical measure of data similarity for test data in these domains.
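
The entropy measure named in the abstract above is not specified there; as a rough illustration of the idea only, the following sketch computes the Shannon entropy of a quantized sensor time series. The function name, equal-width binning, and base-2 entropy are our own assumptions, not the paper's method:

```python
import math
from collections import Counter

def series_entropy(samples, bins=16):
    """Shannon entropy (bits) of a sensor time series, after
    quantizing the readings into equal-width bins."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / bins or 1.0  # degenerate (constant) series -> one bin
    counts = Counter(min(int((x - lo) / width), bins - 1) for x in samples)
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```

A constant series yields entropy 0, while a series alternating uniformly between two values yields 1 bit, which is the sense in which entropy can discriminate "similar" from "dissimilar" test data streams.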

  • Key parameters for modeling information diffusion in populations

    Page(s): 22 - 28
    PDF (460 KB) | HTML

    Modeling and simulation can be an important tool in helping develop techniques to better communicate safety-critical information for disaster preparation and recovery. However, these tools are only moderately useful if they do not capture both the social component (how information diffuses in a population through communication between individuals) and the cognitive component (how individuals integrate information and change behavior). The objective of this paper is to lay the groundwork for more complex simulations by providing a summary of some of the important phenomena identified in the attitude change literature. We describe four processes that are important to capture: (1) the drive for consistency; (2) information distortion; (3) persuasion route; and (4) implicit/explicit attitudes. We describe the experiments that illustrated these phenomena and the factors that influence them (cognitive load, attitude relationships, and the social network). Finally, we describe a conceptual model that captures some of these processes and can be used as a starting point.

  • A framework for supplier-supply chain risk management: Tradespace factors to achieve risk reduction — Return on investment

    Page(s): 29 - 34
    PDF (522 KB) | HTML

    The growing trend in information and communications technology (ICT) globalization and outsourcing provides opportunities for adversaries to attack the supply chains of critical information systems and networks in order to gain unauthorized access to data, alter data, disrupt operations, or interrupt communications by inserting malicious code into or otherwise corrupting components; or to obtain knowledge of the uses and users of systems. A challenging issue is the ability to assure that articles of supply and the suppliers can be trusted to do only that which is expected or specified, and to do so reliably and dependably. This paper describes a framework for discovering, defining, learning, and establishing capabilities to manage the risks of suppliers and supply chains of ICT.

  • Improving supply chain robustness and preventing counterfeiting through authenticated product labels

    Page(s): 35 - 41
    PDF (774 KB) | HTML

    Counterfeiting is a serious problem impacting customers and producers in the global economy. We design authenticated product labels (APL), a cryptography-based practical counterfeit detection method. Our solution can be used by customers, distributors, and law enforcement alike. APLs not only detect counterfeit goods, but also deter counterfeiting by provably pinpointing its source in the supply chain.

  • Information sharing infrastructure for pharmaceutical supply chain management in emergency response

    Page(s): 42 - 48
    PDF (437 KB) | HTML

    For effective and timely acquisition, deployment, and distribution of needed drugs in emergencies, it is crucial to provide end-to-end visibility of the pharmaceutical supply chain (PSC) to the involved parties. Recently, the Science and Technology Directorate of the U.S. Department of Homeland Security has developed a standards-based framework for maintaining and sharing emergency data, called Unified Incident Command Decision Support (UICDS). UICDS provides an infrastructure for information sharing among the disparate organizations responding to an emergency. In this paper, we leverage the UICDS infrastructure and extend it to enable information sharing for pharmaceutical supply chain management in emergency response.

  • Supply chain risk mitigation for IT electronics

    Page(s): 49 - 55
    PDF (1002 KB) | HTML

    Supply Chain Risk Management (SCRM) is one of the 12 Comprehensive National Cybersecurity Initiatives (CNCI), but the range of supply chain problems has not been defined rigorously, and effective defenses have not yet been developed. Risks range from the increased unreliability of counterfeits to data exfiltration and adversary control enabled by hardware Trojan horses embedded in chips. Risks differ for military, non-military government, and civilian organizations. We cite cases that underscore the reality of supply chain risk, and analyze the structure of supply chains that affect different parts of the market for IT electronics, in order to provide a better understanding of attack methods. We discuss techniques for defending against the range of threats, and propose a practical solution based on a suite of simple, inexpensive test procedures that could be used to build an "80% solution" for detection of counterfeits and embedded malicious implants before they are deployed. Tests we have prototyped include power signatures and IR thermographic signatures of boot events. Deployment of such a test suite would change the SCRM game by making it significantly more difficult for supply chain exploits to succeed.

  • Modeling the Federal User Identity, Credential, and Access Management (ICAM) decision space to facilitate secure information sharing

    Page(s): 56 - 62
    PDF (542 KB) | HTML

    Providing the right information to the right person at the right time is critical, especially for emergency response and law enforcement operations. Accomplishing this across sovereign organizations while keeping resources secure is a formidable task. What is needed is an access control solution that can break down information silos by securely enabling information sharing with non-provisioned users in a dynamic environment. Multiple government agencies, including the Department of Homeland Security (DHS) Science and Technology Directorate (S&T), are currently developing Attribute-Based Access Control (ABAC) solutions to do just that. ABAC supports cross-organizational information sharing by facilitating policy-based resource access control. The critical components of an ABAC solution are the governing organizational policies, attribute syntax and semantics, and authoritative sources. The policies define the business objectives and the authoritative sources provide critical attribute attestation, but syntactic and semantic agreement between the information exchange endpoints is the linchpin of attribute sharing. The Organization for the Advancement of Structured Information Standards (OASIS) Security Assertion Markup Language (SAML) standard provides federation partners with a viable attribute sharing syntax, but establishing semantic agreement is an impediment to ABAC efforts. This critical issue can be successfully addressed with conceptual modeling. S&T is sponsoring a research and development effort to provide a concept model of the User Identity, Credential, and Access Management decision space for secure information sharing.

  • Prototyping Fusion Center information sharing; implementing policy reasoning over cross-jurisdictional data transactions occurring in a decentralized environment

    Page(s): 63 - 69
    PDF (649 KB) | HTML

    In 2004, the White House, and then Congress, determined there should be an "Information Sharing Environment" that facilitates the flow of critical information for counter-terrorism, related law enforcement, and disaster management activities. That work has been progressing, but a major challenge is how to create technologies that: ensure compliance with laws and policies of the federal government, 50 states, and individual agencies; convey appropriate data that would support access control and privilege decisions in different jurisdictions; and achieve accountability and transparency for this activity. We have built a prototype of Fusion Center information sharing that shows significant progress in the representation of law in a policy language, the reasoning of that law over data transactions occurring in a web environment (internet or intranet), acquiring necessary information from authoritative sources wherever they reside in the decentralized environment, and providing both a binary response suitable for automated workflow implementation and a detailed justification suitable for human validation of the conclusion. In this paper, we briefly describe the technologies employed for serializing the data and policy, reasoning over the rules contained in the policy, and displaying the results to users. These combine to provide a powerful tool supporting a range of necessary governmental functions including access control, privilege management, audit, periodic reporting, and risk modeling.

  • Score level fusion of hand based biometrics using t-norms

    Page(s): 70 - 76
    PDF (1683 KB) | HTML

    A multimodal biometric system amalgamates the information from multiple biometric sources to alleviate the limitations in performance of each individual biometric system. In this paper a multimodal biometric system employing hand-based biometrics (i.e., palmprint, hand veins, and hand geometry) is developed. A general combination approach is proposed for score level fusion, which combines the matching scores from these hand-based modalities using the t-norms due to Hamacher, Yager, Weber, Schweizer and Sklar. This study aims at exploring the potential usefulness of t-norms for multimodal biometrics. These norms deal with the real challenge of uncertainty and imperfection pervading the different sources of knowledge (scores from different modalities). We construct the membership functions of fuzzy sets formed from the genuine and imposter scores of each of the modalities considered. The fused genuine and imposter scores are obtained by integrating the fuzzified genuine and imposter scores, respectively, from each of the modalities. These norms are relatively simple to apply, unlike other methods (e.g., SVM, decision trees, discriminant analysis), as no training or learning is required. The proposed approach performs very well: it is computationally fast and outperforms score level fusion using the conventional rules (min, max, sum, median). The experimental evaluation on a database of 100 users confirms the effectiveness of score level fusion. The preliminary results are encouraging in terms of decision accuracy and computing efficiency.
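
The paper's full method builds fuzzy membership functions from genuine and imposter score distributions before fusing; as a minimal sketch of just the t-norm combination step, the following applies the Hamacher product (the gamma = 0 member of the Hamacher family) to normalized match scores. The function names and the choice of family member are our own assumptions, not the paper's configuration:

```python
def hamacher_tnorm(a, b):
    """Hamacher product t-norm on scores in [0, 1]:
    T(a, b) = ab / (a + b - ab), with T(0, 0) defined as 0."""
    if a == 0.0 and b == 0.0:
        return 0.0
    return (a * b) / (a + b - a * b)

def fuse_scores(scores):
    """Fuse normalized match scores from several modalities by
    applying the t-norm pairwise (t-norms are associative)."""
    fused = scores[0]
    for s in scores[1:]:
        fused = hamacher_tnorm(fused, s)
    return fused
```

Like any t-norm, this has 1 as its identity element and never exceeds the smallest input score, so a weak match in any one modality pulls the fused score down; no training data is needed, which is the simplicity advantage the abstract claims over learned fusion methods.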

  • An automatic non-native speaker recognition system

    Page(s): 77 - 83
    PDF (701 KB) | HTML

    Identification of non-native personnel is a critical piece of information for making crucial on-the-spot decisions for security purposes. Identification of a non-native speaker is often readily apparent in normal conversation with a native speaker through speech content and accent. Such identification, which requires familiarity with language nuances, may not be possible for a non-native interrogator or intelligence analyst, or when conversing or listening through a machine language translator. Developing an automatic system to identify speakers as native or non-native, as well as their native language, including dialect, within input audio streams, is the major goal of this project. Such a system may be used alone or with other downstream applications such as machine language translation systems. In this paper we present four approaches to identify native and non-native speakers as a binary recognition problem. The approaches can be further categorized into phonetic-based approaches and non-phonetic-based approaches. These approaches were tested on two separate databases, including text-dependent read speech and text-independent spontaneous speech. The results show that our system is competitive in comparison with other published, state-of-the-art non-native speaker recognition systems. Key metrics for automated non-native recognition systems include: 1) positive identification rates, 2) false alarm/identification rates, and 3) length of captured speech sample required to reach a decision.

  • Goal-based assessment for the cybersecurity of critical infrastructure

    Page(s): 84 - 88
    PDF (797 KB) | HTML

    Undertaking a comprehensive cybersecurity risk assessment of the networks and systems of a single infrastructure, or even a single organization of moderate size, requires significant resources. Efforts to simplify the assessment instrument usually obscure the ultimate goal of the assessment and the motivations for the assessment questions. This can make it difficult for assessors to justify the questions and can undermine the credibility of the assessment in the eyes of the organizations assessed. This paper describes the use of assurance cases to help address these problems. Viewing an assessment approach in terms of an assurance case clarifies the underlying motivation for the assessment and supports more rigorous analysis. The paper also shows how the assurance case method has been used to guide the development of an assessment approach called the Cyber Resilience Review (CRR), developed for the U.S. Department of Homeland Security.

  • Automated computation of malware behavior

    Page(s): 89 - 92
    PDF (186 KB) | HTML

    Automated software behavior computation is an emerging technology under development at the Software Engineering Institute that can be applied to analysis of malicious code. Behavior computation is based on the semantics of programming language instructions and the opportunity to compose them to determine net effects of programs. An initial implementation is targeted to malicious code expressed in Intel assembly language.

  • Computer-assisted validation and verification of cybersecurity requirements

    Page(s): 93 - 98
    PDF (668 KB) | HTML

    Errors in requirements are often a contributing cause of the failure of critical infrastructure and their underlying information systems to adequately guard against cyber intrusions and withstand cyber attacks. However, detecting errors in cybersecurity requirements, and in requirements in general, is a challenging task. In this paper we describe how computer-aided formal verification and validation can be leveraged to address the challenge of correctly capturing natural language cybersecurity requirements, converting the natural language statements into formal requirements specifications, and then checking the formal specifications to ensure that they match the original intent of the stakeholders. Our approach centers on creating a one-to-one mapping between natural language requirements and UML statechart assertions. Statechart assertions are Boolean statements about the expected behavior of the system, expressed as UML statecharts. The set of assertions created by the security or software engineer is a formal model of the system's requirements. We demonstrate our approach using examples of formally specifying and validating requirements for correct cyber system behaviors and the detection of illegal business schemes in choreographed web services.

  • Fab forensics: Increasing trust in IC fabrication

    Page(s): 99 - 105
    PDF (1059 KB) | HTML

    Fabrication and design are now performed by different companies, as semiconductor fabrication facilities (fabs or foundries) seek to reduce costs by serving multiple clients and consolidating resources. However, lack of immediate control and observation reduces the trust which IC designers have in some fabs. To help fabs increase trust in their processes, we propose an approach for logging forensic information about the fab process and printing the information on chips, so that examination of a chip reveals provable deviations from the design. Fab owners can benefit by catching rogue employees and by demonstrating high security standards to their customers. Our proposed solution uses a lightweight runtime system that interacts with a trusted platform module (TPM).
