2010 6th IEEE Workshop on Secure Network Protocols (NPSec)

Date: 5 Oct. 2010

  • [Title page]

    Page(s): 1
  • [Copyright notice]

    Page(s): 1
  • Message from the general chairs

    Page(s): 1 - 2
  • Committees

    Page(s): 1
  • SUDOKU: Secure and usable deployment of keys on wireless sensors

    Page(s): 1 - 6

    Initial deployment of secrets plays a crucial role in any security design, but especially in hardware-constrained wireless sensor networks. Many key management schemes assume either manually pre-installed shared secrets or keys authenticated with the aid of out-of-band channels. While manually installing secret keys limits the practicality of key deployment, out-of-band channels require additional interfaces on already hardware-limited wireless sensor nodes. In this work, we present a key deployment protocol that uses pair-wise ephemeral keys generated from physical-layer information, which subsequently enable an authenticated exchange of public keys. Hence, this work presents an elegant solution to the key deployment problem without requiring more capabilities than are already available on common low-cost devices. To demonstrate the feasibility of this solution, we implement and experimentally evaluate the proposed key deployment protocol on commodity wireless sensor motes.
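The derivation step described in the abstract can be sketched as follows. This is a minimal illustration, assuming both nodes hold near-reciprocal RSSI samples of their shared channel; the quantizer, threshold, and function names are hypothetical and not taken from the paper:

```python
import hashlib
import hmac

def quantize(rssi_samples, threshold=-70):
    # Map each channel measurement to one bit; channel reciprocity means
    # both nodes obtain (nearly) the same bit string without transmitting it.
    return bytes(1 if s > threshold else 0 for s in rssi_samples)

def ephemeral_key(rssi_samples, threshold=-70):
    # Hash the quantized measurements into a pairwise ephemeral key.
    return hashlib.sha256(quantize(rssi_samples, threshold)).digest()

def tag_public_key(eph_key, public_key_bytes):
    # Authenticate a public key under the ephemeral key, so the public-key
    # exchange needs no out-of-band channel or pre-installed secret.
    return hmac.new(eph_key, public_key_bytes, hashlib.sha256).digest()
```

Both nodes derive the same key from their own measurements and can then verify each other's public-key MAC.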

  • Dynamic key management for secure continuous handoff in wireless LAN

    Page(s): 7 - 12

    Secure dynamic key management has been introduced in the wireless security standard IEEE 802.11i. However, dynamic keys are very limited and their lifetime is very short. In addition, those keys must be initialized by a central authority before being sent to all communicating hosts, and each key must be renewed after every single use. This paper proposes a novel dynamic key management mechanism for fast authentication and secure continuous roaming in wireless networks. Our work focuses on generating session keys independently after the first pair-wise session key has been established between a mobile node and the home access point. The OMNeT++ network simulator is used for performance evaluation. Roaming performance is measured in three scenarios with different parameters, including moving speed, background workload, and client density. The experimental results show that our proposed scheme outperforms the standard IEEE 802.11i and scales well.
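The independent-generation idea can be sketched as a local key derivation: once the first pair-wise key exists, each side derives per-handoff session keys with no round trip to a central authority. The derivation inputs and function name below are illustrative, not the paper's construction:

```python
import hashlib
import hmac

def session_key(first_pairwise_key: bytes, ap_id: bytes, epoch: int) -> bytes:
    # Both the mobile node and the target access point compute this
    # locally during a handoff; no per-key distribution step is needed.
    msg = ap_id + epoch.to_bytes(4, "big")
    return hmac.new(first_pairwise_key, msg, hashlib.sha256).digest()
```

Different access points and different epochs yield independent keys, so renewing a key after use is a purely local operation.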

  • APRAP: Another privacy preserving RFID authentication protocol

    Page(s): 13 - 18

    Privacy-preserving RFID (Radio Frequency Identification) authentication has been an active research area in recent years. Both forward security and backward security are required to maintain the privacy of a tag, i.e., exposure of a tag's secret key should not reveal the tag's past or future secret keys. We envisage the need for a formal model of backward security for RFID protocol designs in shared-key settings, since RFID tags are too resource-constrained to support public-key settings. However, there has not been much research on backward security for shared-key environments, since Serge Vaudenay, in his Asiacrypt 2007 paper, showed that perfect backward security is impossible to achieve without public-key settings. We propose APRAP, a privacy-preserving RFID authentication protocol for shared-key environments, which minimizes the damage caused by secret key exposure using insulated keys. Even if a tag's secret key is exposed during an authentication session, forward security and 'restricted' backward security of the tag are preserved under our assumptions. The notion of 'restricted' backward security is that the adversary misses the protocol transcripts needed to update the compromised secret key. Although our definition does not capture perfect backward security, it is still suitable for practical implementation, as tags are highly mobile in practice. We also provide a formal security model of APRAP. Our scheme is more efficient than previous proposals in terms of computational requirements.
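The key-insulation idea behind 'restricted' backward security can be sketched with a one-way key update that mixes in a per-session value from the protocol transcript. This is a generic sketch of the principle, not APRAP itself:

```python
import hashlib

def update_key(secret: bytes, transcript_nonce: bytes) -> bytes:
    # One-way update of the tag secret. Exposing secret_i does not reveal
    # secret_{i-1} (forward security), and an adversary who missed the
    # transcript carrying the nonce cannot compute secret_{i+1}
    # ('restricted' backward security).
    return hashlib.sha256(secret + transcript_nonce).digest()
```

Because tags are highly mobile, an adversary rarely captures every transcript, which is exactly the assumption the restricted notion exploits.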

  • Analyzing failures and attacks in Map & Encap protocols

    Page(s): 19 - 24

    This paper examines failures and attacks in Map & Encap routing protocols. In Map & Encap, a packet is routed to an encapsulator, which maps the destination address to a decapsulator and encapsulates the packet. This important and growing class of protocols, ranging from widely used MPLS VPNs to future routing architectures such as LISP, introduces new problems and challenges for handling failures and attacks. To capture the fundamental components, we introduce a Simple Map & Encap Protocol (SMEP). Some failure-handling approaches from traditional routing protocols also apply to SMEP, but these approaches alone are insufficient. SMEP design choices, mapping dissemination in particular, have a large impact on whether new techniques are needed. In some cases the control plane alone cannot adequately handle failures without support from the data plane, and attacks can be much harder to diagnose. The results identify new potential failures and attacks and can help designers improve the robustness of Map & Encap protocols. We illustrate the benefits of our work by analyzing two very different types of Map & Encap protocols, MPLS-VPN and LISP.
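The basic Map & Encap forwarding step can be sketched as follows. This is an illustrative SMEP-style sketch with hypothetical field names, not the paper's definition:

```python
def encapsulate(packet: dict, mapping: dict) -> dict:
    # The encapsulator maps the destination prefix to a decapsulator and
    # wraps the packet in an outer header addressed to it. A missing or
    # stale mapping, one failure mode such protocols must handle, silently
    # black-holes traffic unless the data plane reports the drop.
    decapsulator = mapping.get(packet["dst_prefix"])
    if decapsulator is None:
        raise LookupError("no mapping for " + packet["dst_prefix"])
    return {"outer_dst": decapsulator, "inner": packet}

def decapsulate(outer: dict) -> dict:
    # The decapsulator strips the outer header and forwards the original
    # packet toward its destination.
    return outer["inner"]
```

The sketch makes the core dependency visible: correctness hinges on how the mapping is disseminated and kept fresh, which is where the failure and attack analysis concentrates.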

  • Protecting against DNS cache poisoning attacks

    Page(s): 25 - 30

    DNS is vulnerable to cache poisoning attacks, whereby an attacker sends a spoofed reply to its own query. Historically, an attacker only needed to guess a predictable, or more recently a 16-bit pseudorandom, ID to be successful. The Kaminsky attack demonstrated successful poisoning attacks that required only 6 seconds on typical networks. Since then, source port randomization (SPR) has been used for additional protection. Nevertheless, E. Polyakov demonstrated successful poisoning attacks against SPR on a Gigabit network, on the order of 10 hours. Even with slower network speeds, an attack is likely to be successful in a moderate time period. DNSSEC will provide a strong countermeasure to poisoning as well as other attacks against the DNS. However, until DNSSEC is actually deployed, there is a need for additional countermeasures that can be deployed in the near term. In this paper, we describe a new approach based on detecting a poisoning attack and then sending an additional request for the same DNS Resource Record. Since the defense is only activated when attacks occur, we expect the performance impact to be minimal. The countermeasure requires no changes to the DNS standards and only requires modifications to the caching server, so it can be deployed incrementally to obtain immediate security benefits. We show that our proposed defense makes poisoning attacks substantially more difficult. We have implemented the countermeasure using a local proxy for the BIND caching server, and our tests show that the performance impact is minimal.
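The detect-then-requery idea can be sketched as follows. The interfaces are hypothetical and the logic is simplified (a real resolver would also check the source port and question section); it is not the paper's implementation:

```python
import random

def resolve(name, query_fn):
    # A reply whose transaction ID does not match the outstanding query
    # signals a spoofing attempt. Instead of caching anything, the
    # resolver issues an additional request for the same resource record
    # and only accepts an answer carried in a correctly matching reply.
    txid = random.getrandbits(16)
    reply = query_fn(name, txid)
    if reply["txid"] != txid:
        txid2 = random.getrandbits(16)
        confirm = query_fn(name, txid2)
        if confirm["txid"] != txid2:
            raise RuntimeError("possible cache poisoning; answer discarded")
        return confirm["answer"]
    return reply["answer"]
```

Because the extra query is sent only when a mismatch is observed, the common no-attack path costs nothing extra, which matches the paper's claim of minimal performance impact.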

  • VSITE: A scalable and secure architecture for seamless L2 enterprise extension in the cloud

    Page(s): 31 - 36

    This paper presents an end-to-end architecture, called VSITE, for seamless integration of cloud resources into an enterprise's intranet at layer 2. VSITE allows a cloud provider to carve out its resources to serve multiple enterprises simultaneously while maintaining isolation and security. Resources allocated to an enterprise in the cloud appear "internal" to that enterprise. VSITE achieves this abstraction through the use of VPN technologies, the assignment of different VLANs to different enterprises, and the encoding of enterprise IDs in MAC addresses. Unlike traditional layer 2 VPN technologies such as VPLS, VSITE keeps the broadcast traffic associated with layer 2 MAC learning from reaching remote sites. VSITE uses a location IP (representing a location area) for scalable migration support, and the MAC or IP address of a VM is not visible in the data center core. The VSITE hypervisor enforces security mechanisms that prevent enterprises from attacking one another. Thus, VSITE is scalable, secure, and efficient, and it facilitates common data center operations such as VM migration. Because VSITE extends the enterprise network at layer 2, it is transparent to most existing applications and presents an easy migration path for an enterprise to leverage cloud computing resources.
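The idea of encoding an enterprise ID in a MAC address can be illustrated as follows. The byte layout below is entirely hypothetical (the paper does not publish this encoding in the abstract); it only shows how a locally administered MAC can carry a tenant identifier that switches and hypervisors can key isolation on:

```python
def vm_mac(enterprise_id: int, host_id: int) -> str:
    # Hypothetical layout: first octet 0x02 marks a locally administered
    # unicast MAC; the next three octets carry the enterprise ID and the
    # last two a per-enterprise host ID.
    if not (0 <= enterprise_id < 2**24 and 0 <= host_id < 2**16):
        raise ValueError("id out of range")
    octets = [0x02,
              (enterprise_id >> 16) & 0xFF,
              (enterprise_id >> 8) & 0xFF,
              enterprise_id & 0xFF,
              (host_id >> 8) & 0xFF,
              host_id & 0xFF]
    return ":".join(f"{o:02x}" for o in octets)
```

With such an encoding, a frame's tenant is recoverable from its source MAC alone, without consulting any per-VM state in the data center core.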

  • Anonymous communication for encouraging the reporting of wrongdoing in social groups

    Page(s): 37 - 42

    We propose an application of DC-nets called the Layered DC-net for encouraging the reporting of wrongdoing in social groups. A social group is a set of individuals structured into layered subgroups. Individuals are divided into subgroups called lower groups. One individual in each lower group is a representative who has authority over, and responsibility for, the lower group. The set of all representatives from all lower groups is called the upper group. In social groups, wrongdoing (e.g., bullying in schools) occasionally occurs. If individuals in a lower group discover wrongdoing, they should report it to their representative, and all representatives are responsible for settling it. However, once the individuals reporting such wrongdoing and their representative are identified, the payoffs for both are smaller than those for individuals who do not report it. We focus on reporting bullying in a school as an example of reporting wrongdoing in social groups. Using game theory, we formulate one game among bystanders of bullying in a lower group and another among teachers in the upper group. We evaluate the games with and without the Layered DC-net. The results show that the Layered DC-net implements Nash equilibria that encourage the reporting of wrongdoing in both lower and upper groups and settles wrongdoing in the social group.
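The underlying primitive is the classic DC-net, which lets one member of a group broadcast a bit (e.g., "wrongdoing observed") without revealing who sent it. Below is a textbook one-bit DC-net round for a single group, not the paper's layered construction; it assumes each adjacent pair of participants shares a secret coin:

```python
import secrets

def dcnet_round(num_parties, sender_index, message_bit):
    # Coin i is shared by parties i and i+1 (mod n). Each party announces
    # the XOR of its two coins; the sender additionally XORs in the
    # message bit. Every coin appears in exactly two announcements, so
    # XORing all announcements cancels the coins and yields the message
    # bit, while no single announcement reveals who the sender was.
    coins = [secrets.randbits(1) for _ in range(num_parties)]
    announcements = []
    for i in range(num_parties):
        a = coins[i] ^ coins[(i - 1) % num_parties]
        if i == sender_index:
            a ^= message_bit
        announcements.append(a)
    result = 0
    for a in announcements:
        result ^= a
    return result
```

Layering this primitive (one DC-net per lower group, another among the representatives) is what lets a report surface to the upper group without identifying the reporter or the representative.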

  • Detection and control system for Peer-to-Peer file exchange application

    Page(s): 43 - 48

    As P2P (Peer-to-Peer) file sharing software is widely deployed, it causes some serious problems, such as network traffic congestion and unwanted file sharing by computer viruses that abuse P2P software. Detecting and controlling the traffic of P2P software is an important step toward solving these problems. In this paper, we propose a basic architecture to observe large-scale network traffic, identify the P2P traffic, and control it. The architecture consists of four units: the observation unit, the analysis unit, the control unit, and the managing unit. We evaluate the architecture using 10 Gbps full-duplex traffic and demonstrate that the system can control P2P traffic properly.

  • Towards verifiable parallel content retrieval

    Page(s): 49 - 54

    Many software vendors provide mechanisms for parallel content retrieval over multiple connections, e.g., parallel HTTP channels, to increase the availability and reliability of the download procedure. At the same time, there is no native verification mechanism to support simultaneous content verification from multiple sources. While it is possible to set up multiple TLS tunnels to different sources, there is no guarantee that the data itself is authentic, since the trust is placed in the connection and not in the data itself. In this paper we present a parallel verification mechanism based on hash trees, allowing clients to authenticate data segments regardless of the source from which they were retrieved, trusting only the original provider. Benefits of the proposed mechanism include low CPU processing costs, low verification overhead, and the ability to support legacy data.
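The hash-tree mechanism the abstract refers to can be sketched with a standard Merkle tree (function names are ours, not the paper's): a client that knows only the provider's root hash can verify any segment, no matter which mirror or connection delivered it.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    # Sibling hashes from leaf to root, each tagged with the node's side.
    level = [h(x) for x in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        proof.append((level[index ^ 1], index % 2))
        index //= 2
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return proof

def verify_segment(root, segment, proof):
    # Recompute the path to the root: the server that sent the segment is
    # never trusted, only the provider-published root hash.
    node = h(segment)
    for sibling, node_is_right in proof:
        node = h(sibling + node) if node_is_right else h(node + sibling)
    return node == root
```

Each segment carries a logarithmic-size proof, which is where the low verification overhead comes from.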

  • Policy extension for data access control

    Page(s): 55 - 60

    In this paper, we propose a security framework that examines different policies for data access control in mobile environments. We start by extending the Platform for Privacy Preferences (P3P) policy for controlling data access. The aim is to modify the P3P policy and to use it in the security capsule of a mobile handset. The service provider can publish the P3P policy via Web Services and request the user preferences from the mobile client. With the introduction of the P3P policy into the mobile device, access to the data is controlled, including user preferences and identity mapping. Service provider data is always encrypted, making unauthorized decryption a significant challenge. We then examine the eXtensible Access Control Markup Language (XACML) policy, as it is the way forward for the mobile environment and the most recent policy language that operates smoothly there. Though XACML is a rich framework, it intentionally does not address how to preserve the privacy of authorization entities. This requires well-defined trust relationships between the participants, but first-time business partners may not have pre-existing relationships. Therefore, a mechanism for gradually building trust is needed, and the security capsule presented in this work provides it. This paper identifies the steps involved in performing transactions with the service provider through the retrieval of policy information and proposes an architecture that verifies data access control.

  • Fingerprinting custom botnet protocol stacks

    Page(s): 61 - 66

    This paper explores the use of TCP fingerprints for identifying and blocking spammers. Evidence has shown that some bots use custom protocol stacks for tasks such as sending spam. If a receiver could effectively identify the bot's TCP fingerprint, connection requests from spam bots could be dropped immediately, reducing the amount of spam received and processed by a mail server. Starting from a list of known spammers flagged by a commercial reputation list, we fingerprinted each spammer and found that roughly 90% have only a single known fingerprint, typically associated with well-known operating system stacks. For the spammers with multiple fingerprints, a particular combination of native and custom protocol stack fingerprints becomes very prominent. This allows us to extract the fingerprint of the custom stack and then use it to detect more bots that were not flagged by the commercial service. We applied our methodology to a trace captured at our regional ISP and clearly detected bots belonging to the Srizbi botnet.
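A passive TCP fingerprint is typically built from fields of the initial SYN. The sketch below shows the drop-before-accept idea in miniature; the fingerprint tuple, the example feature values, and the policy labels are illustrative assumptions, not the paper's measured data:

```python
# Hypothetical SYN fingerprint: (initial TTL, window size, TCP option order).
KNOWN_OS_STACKS = {
    (64, 5840, "mss,sackOK,ts,nop,ws"): "Linux 2.6",          # example values
    (128, 65535, "mss,nop,ws,nop,nop,sackOK"): "Windows XP",  # example values
}

def classify_syn(fingerprint, custom_bot_stacks):
    # A connection whose fingerprint matches an extracted custom bot
    # stack can be rejected before the mail transaction even starts.
    if fingerprint in custom_bot_stacks:
        return "drop"
    if fingerprint in KNOWN_OS_STACKS:
        return "accept"
    return "accept-and-monitor"
```

The interesting case in the paper is the middle ground: hosts mixing a native OS fingerprint with a custom-stack fingerprint, which is what exposes the bot stack for extraction.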

  • Tracking multiple C&C botnets by analyzing DNS traffic

    Page(s): 67 - 72

    Botnets have been considered a main source of Internet threats. A common feature of recent botnets is the use of one or more C&C servers with multiple domain names, for the purpose of increasing flexibility and survivability. In contrast with single-domain botnets, these multi-domain botnets are hard to quarantine because they regularly change the domain names used to reach their C&C server(s). In this paper, we introduce a method for tracking botnets by analyzing the relationships among domain names in the DNS traffic they generate. By examining the DNS queries from clients that accessed known malicious domain names, we can find a set of unknown malicious domain names and their relationships. This method makes it possible to track malicious domain names and clients infected by multiple bot codes, which make botnets revivable against existing quarantine methods. In experiments with one hour of DNS traffic from an ISP network, we find tens of botnets, each with tens of malicious domains. In addition to botnet domains, we find a set of other domain names used for spamming or advertising servers. The proposed method can be used to quarantine recent botnets and to limit their survivability by tracking changes in their domain names.
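The expansion step can be sketched as a two-pass pivot over query logs: clients that queried a known malicious domain become suspects, and domains co-queried by enough suspects become candidate C&C aliases. Function names and the threshold are illustrative, not the paper's exact algorithm:

```python
from collections import defaultdict

def expand_malicious_domains(queries, known_bad, min_clients=3):
    # queries: iterable of (client, domain) pairs from DNS traffic.
    suspected = {c for c, d in queries if d in known_bad}
    # Count how many suspected bots queried each other domain; requiring
    # several distinct clients filters out popular benign domains.
    clients_per_domain = defaultdict(set)
    for c, d in queries:
        if c in suspected and d not in known_bad:
            clients_per_domain[d].add(c)
    return {d for d, cs in clients_per_domain.items() if len(cs) >= min_clients}
```

Iterating this pivot (newly found domains seed the next pass) is what lets the method follow a botnet as it rotates domain names.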

  • Implementation and evaluation of bot detection scheme based on data transmission intervals

    Page(s): 73 - 78

    Botnets are one of the most serious issues facing the Internet. A host infected with a bot can be used for collecting personal information, launching DoS attacks, sending spam e-mail, and so on. If such a machine exists in an organizational network, that organization will lose its reputation, so bots in organizational networks must be detected immediately. Several network-based bot detection methods have been proposed; however, traditional methods based on payload analysis or signature matching are impractical at large traffic volumes. Inspecting payloads also raises privacy concerns, so a scheme that is independent of payload analysis is needed. In this paper, we propose a bot detection method that focuses on data transmission intervals, distinguishing human-operated clients from bots by their network behaviors. We assume that a bot communicates with its C&C server periodically, so its data transmission intervals are nearly constant, and we found that such behavior can be detected by applying cluster analysis to these intervals. We implemented the proposed algorithm and evaluated it on normal IRC traffic and bot traffic captured in our campus network, finding that our method can detect IRC-based bots with few false positives.
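The core signal can be sketched without any payload inspection: compute a flow's inter-transmission intervals and flag flows whose intervals are nearly constant. This is a minimal variance-based sketch of the periodicity test, assuming per-flow timestamps; the paper's actual method uses cluster analysis over the intervals:

```python
from statistics import mean, pstdev

def transmission_intervals(timestamps):
    # Inter-arrival gaps between consecutive transmissions of one flow.
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def looks_periodic(timestamps, rel_tolerance=0.1):
    # A bot polling its C&C server produces near-constant intervals; a
    # human-operated client does not. Flag flows whose interval spread is
    # small relative to the mean interval. Threshold is illustrative.
    iv = transmission_intervals(timestamps)
    if len(iv) < 2:
        return False
    return pstdev(iv) <= rel_tolerance * mean(iv)
```

Because only timestamps are used, the check works on encrypted traffic and sidesteps the privacy issue of payload inspection.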

  • A behavior based malware detection scheme for avoiding false positive

    Page(s): 79 - 84

    The number of malware samples is increasing rapidly, and much malware uses stealth techniques such as encryption to evade the pattern-matching detection performed by anti-virus software. To address this problem, behavior-based detection methods, which focus on the malicious behaviors of malware, have been researched. Although they can detect unknown and encrypted malware, they suffer from a serious false-positive problem with benign programs. For example, creating files and executing them are common behaviors performed by malware, but they are also frequently performed by benign programs, which causes false positives. In this paper, we propose a malware detection method based on evaluating suspicious process behaviors on Windows. To avoid false positives, our proposal considers not only malware-specific behaviors but also normal behaviors that malware would usually not perform. Moreover, we implement a prototype of our proposal to analyze program behaviors effectively. Evaluation experiments on our malware and benign-program datasets show a detection rate of about 60% with no false positives. Furthermore, we compare our proposal with purely behavior-based anti-virus software; the results show that our proposal places little burden on users and reduces false positives.
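The false-positive-avoidance idea can be shown in miniature as a two-sided score: suspicious behaviors add evidence, while behaviors malware would usually not perform subtract it. The behavior names, scoring weights, and threshold below are illustrative assumptions, not the paper's evaluation rules:

```python
def classify_process(events, malicious_behaviors, benign_behaviors, threshold=2):
    # Score malware-like behaviors, then let normal behaviors (e.g.
    # showing a window to the user) offset the score, so a benign program
    # that happens to create and execute a file is not flagged.
    score = sum(1 for e in events if e in malicious_behaviors)
    score -= sum(1 for e in events if e in benign_behaviors)
    return "suspicious" if score >= threshold else "benign"
```

The design choice is the subtraction: without the benign-evidence term, any installer that drops and runs an executable would trip the detector.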
