2018 IEEE Security and Privacy Workshops (SPW)

24 May 2018


Displaying results 1-25 of 60
  • [Title page i]

    Publication Year: 2018, Page(s): 1
    PDF (15 KB)
    Freely Available from IEEE
  • [Title page iii]

    Publication Year: 2018, Page(s): 3
    PDF (122 KB)
    Freely Available from IEEE
  • [Copyright notice]

    Publication Year: 2018, Page(s): 4
    PDF (137 KB)
    Freely Available from IEEE
  • Table of Contents

    Publication Year: 2018, Page(s): 5-9
    PDF (122 KB)
    Freely Available from IEEE
  • Message from the Workshops General Chair

    Publication Year: 2018, Page(s): 10
    PDF (128 KB)
    Freely Available from IEEE
  • Message from the DLS Organizers

    Publication Year: 2018, Page(s): 11
    PDF (109 KB)
    Freely Available from IEEE
  • DLS Committees

    Publication Year: 2018, Page(s): 12
    PDF (115 KB)
    Freely Available from IEEE
  • Message from the SADFE Organizers

    Publication Year: 2018, Page(s): 13
    PDF (132 KB)
    Freely Available from IEEE
  • SADFE Committees

    Publication Year: 2018, Page(s): 14-15
    PDF (151 KB)
    Freely Available from IEEE
  • Message from the WRIT Organizers

    Publication Year: 2018, Page(s): 16
    PDF (122 KB)
    Freely Available from IEEE
  • WRIT Committees

    Publication Year: 2018, Page(s): 17
    PDF (519 KB)
    Freely Available from IEEE
  • Message from the BioStar Organizers

    Publication Year: 2018, Page(s): 18
    PDF (129 KB)
    Freely Available from IEEE
  • BioStar Committees

    Publication Year: 2018, Page(s): 19
    PDF (134 KB)
    Freely Available from IEEE
  • Message from the LangSec Organizers

    Publication Year: 2018, Page(s): 20
    PDF (86 KB)
    Freely Available from IEEE
  • LangSec Committees

    Publication Year: 2018, Page(s): 21
    PDF (51 KB)
    Freely Available from IEEE
  • Audio Adversarial Examples: Targeted Attacks on Speech-to-Text

    Publication Year: 2018, Page(s): 1-7
    Cited by: Papers (4)
    PDF (584 KB) | HTML

    We construct targeted audio adversarial examples on automatic speech recognition. Given any audio waveform, we can produce another that is over 99.9% similar, but transcribes as any phrase we choose (recognizing up to 50 characters per second of audio). We apply our white-box iterative optimization-based attack to Mozilla's DeepSpeech implementation end-to-end, and show it has a 100% success rate. ...

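    The attack described in this abstract is an iterative, gradient-based optimization: repeatedly nudge the input toward a target output while penalizing the size of the perturbation. As a hedged sketch of that loop, the example below substitutes a made-up linear "recognizer" for DeepSpeech's CTC loss; every name, weight, and constant here is illustrative, not the paper's implementation.

    ```python
    # Toy sketch of an iterative optimization-based adversarial attack.
    # The paper optimizes a CTC loss against DeepSpeech; here a 1-D linear
    # "recognizer" score(x) = sum(w_i * x_i) stands in, purely for illustration.

    def score(w, x):
        return sum(wi * xi for wi, xi in zip(w, x))

    def attack(w, x, target, steps=500, lr=0.01, c=0.1):
        """Find a small delta so that score(w, x + delta) is close to target."""
        delta = [0.0] * len(x)
        for _ in range(steps):
            err = score(w, [xi + di for xi, di in zip(x, delta)]) - target
            # Gradient of err**2 + c * ||delta||**2 w.r.t. delta_i:
            #   2 * err * w[i] + 2 * c * delta[i]
            for i in range(len(delta)):
                delta[i] -= lr * (2 * err * w[i] + 2 * c * delta[i])
        return delta

    w = [0.5, -1.0, 2.0]          # fixed "model" weights
    x = [1.0, 2.0, 3.0]           # benign input, score(w, x) = 4.5
    delta = attack(w, x, target=-3.0)
    adv = [xi + di for xi, di in zip(x, delta)]   # adversarial input
    ```

    The regularization weight `c` plays the role of the similarity constraint: larger `c` keeps the perturbation smaller at the cost of landing further from the target output.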
  • A Deep Learning Approach to Fast, Format-Agnostic Detection of Malicious Web Content

    Publication Year: 2018, Page(s): 8-14
    PDF (310 KB) | HTML

    Malicious web content is a serious problem on the Internet today. In this paper we propose a deep learning approach to detecting malevolent web pages. While past work on web content detection has relied on syntactic parsing or on emulation of HTML and JavaScript to extract features, our approach operates on a language-agnostic stream of tokens extracted directly from static HTML files wit...

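    One common way to turn a raw, unparsed token stream into fixed-size model input is feature hashing. The sketch below illustrates that generic trick; it is not necessarily the paper's own featurization, and `hashed_token_features` and the 16-bucket dimension are invented for the example.

    ```python
    # Format-agnostic featurizer: hash a raw token stream from static HTML
    # into a fixed-size count vector, with no parsing or JS emulation.
    import re

    def fnv1a(token):
        # Deterministic 32-bit FNV-1a hash (built-in hash() is salted per run).
        h = 2166136261
        for byte in token.encode("utf-8"):
            h = ((h ^ byte) * 16777619) & 0xFFFFFFFF
        return h

    def hashed_token_features(raw_html, dim=16):
        """Map raw HTML text to a dim-sized vector of hashed token counts."""
        vec = [0] * dim
        for token in re.findall(r"[A-Za-z0-9_]+", raw_html):
            vec[fnv1a(token) % dim] += 1
        return vec

    features = hashed_token_features("<script>eval(atob('aGk='))</script>")
    ```

    Because the featurizer never interprets the markup, the same code applies unchanged to HTML, JavaScript, or any other text format.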
  • Mouse Authentication Without the Temporal Aspect – What Does a 2D-CNN Learn?

    Publication Year: 2018, Page(s): 15-21
    PDF (2690 KB) | HTML

    Mouse dynamics as behavioral biometrics are under investigation for their effectiveness in computer security systems. Previous state-of-the-art methods relied on heuristic feature engineering for the extraction of features. Our work addresses this issue by learning the features with a convolutional neural network (CNN), thereby eliminating the need for manual feature design. Contrary to time-serie...

  • Detecting Homoglyph Attacks with a Siamese Neural Network

    Publication Year: 2018, Page(s): 22-28
    PDF (319 KB) | HTML

    A homoglyph (name spoofing) attack is a common technique used by adversaries to obfuscate file and domain names. This technique creates process or domain names that are visually similar to legitimate and recognized names. For instance, an attacker may create malware with the name svch0st.exe so that in a visual inspection of running processes or a directory listing, the process or file name might ...

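    The paper's approach compares rendered images of strings with a Siamese neural network. As a much simpler stand-in that still shows the core problem, the sketch below uses a hand-rolled confusable-character "skeleton" map (a deliberately tiny subset; real confusable tables are far larger); it is explicitly not the paper's method.

    ```python
    # Simplified homoglyph check via a confusable-character "skeleton" map.
    # NOT the paper's Siamese CNN: just a minimal illustration of names that
    # look alike but differ at the codepoint level.

    CONFUSABLES = {
        "0": "o", "1": "l",
        "\u0430": "a",   # Cyrillic small a -> Latin a
        "\u0435": "e",   # Cyrillic small e -> Latin e
        "\u03bf": "o",   # Greek omicron   -> Latin o
    }

    def skeleton(name):
        """Replace each character with a canonical look-alike, lowercased."""
        return "".join(CONFUSABLES.get(ch, ch) for ch in name.lower())

    def is_spoof(candidate, legit):
        """Flag names that render like `legit` but are not identical to it."""
        return candidate != legit and skeleton(candidate) == skeleton(legit)

    print(is_spoof("svch0st.exe", "svchost.exe"))  # True: digit 0 mimics 'o'
    print(is_spoof("svchost.exe", "svchost.exe"))  # False: identical name
    ```

    A learned model like the paper's avoids the obvious weakness of this table: any confusable pair missing from the map slips through undetected.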
  • Machine Learning DDoS Detection for Consumer Internet of Things Devices

    Publication Year: 2018, Page(s): 29-35
    Cited by: Papers (2)
    PDF (709 KB) | HTML

    An increasing number of Internet of Things (IoT) devices are connecting to the Internet, yet many of these devices are fundamentally insecure, exposing the Internet to a variety of attacks. Botnets such as Mirai have used insecure consumer IoT devices to conduct distributed denial of service (DDoS) attacks on critical Internet infrastructure. This motivates the development of new techniques to aut...

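    Detection in this setting typically starts from lightweight per-flow statistics, exploiting the fact that DoS floods tend to send many small packets at a high rate. The sketch below illustrates that idea with invented features and thresholds; it is a stand-in, not the paper's actual pipeline or classifier.

    ```python
    # Illustration of lightweight per-flow statistics for flood detection.
    # Features and thresholds are made up for the example.
    from statistics import mean

    def flow_features(packets):
        """packets: time-ordered list of (timestamp_seconds, size_bytes)."""
        sizes = [size for _, size in packets]
        times = [t for t, _ in packets]
        gaps = [b - a for a, b in zip(times, times[1:])] or [0.0]
        return {"mean_size": mean(sizes), "mean_gap": mean(gaps)}

    def looks_like_flood(packets, size_thresh=100, gap_thresh=0.005):
        feats = flow_features(packets)
        return feats["mean_size"] < size_thresh and feats["mean_gap"] < gap_thresh

    normal = [(i * 0.5, 800) for i in range(20)]   # bulky, widely spaced packets
    flood = [(i * 0.001, 60) for i in range(20)]   # tiny, rapid packets
    ```

    In practice such hand-set thresholds would be replaced by a trained classifier over the same kind of stateless features.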
  • Adversarial Examples for Generative Models

    Publication Year: 2018, Page(s): 36-42
    PDF (485 KB) | HTML

    We explore methods of producing adversarial examples on deep generative models such as the variational autoencoder (VAE) and the VAE-GAN. Deep learning architectures are known to be vulnerable to adversarial examples, but previous work has focused on the application of adversarial examples to classification tasks. Deep generative models have recently become popular due to their ability to model in...

  • Learning Universal Adversarial Perturbations with Generative Models

    Publication Year: 2018, Page(s): 43-49
    PDF (630 KB) | HTML

    Neural networks are known to be vulnerable to adversarial examples: inputs that have been intentionally perturbed to remain visually similar to the source input but cause a misclassification. It was recently shown that, given a dataset and classifier, there exist so-called universal adversarial perturbations: a single perturbation that causes a misclassification when applied to any input. In this...

  • Black-Box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers

    Publication Year: 2018, Page(s): 50-56
    PDF (884 KB) | HTML

    Although various techniques have been proposed to generate adversarial samples for white-box attacks on text, little attention has been paid to black-box attacks, which are a more realistic scenario. In this paper, we present a novel algorithm, DeepWordBug, to effectively generate small text perturbations in a black-box setting that force a deep-learning classifier to misclassify a text input. We...

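    A black-box text attack of this kind typically has two stages: rank tokens by how much they influence the classifier's score (using query access only), then apply small character-level edits to the most influential ones. The sketch below follows that outline in the spirit of DeepWordBug, but its keyword "classifier", scoring heuristic, and edit rule are all invented for illustration.

    ```python
    # Black-box character-perturbation sketch: rank words by how much deleting
    # them moves the classifier's score, then one-char-edit the top word.
    # The "classifier" is a toy keyword scorer, not a real deep model.
    import random

    BAD_WORDS = {"attack", "exploit", "malware"}

    def classify(text):
        """Toy black-box score: fraction of tokens that are flagged keywords."""
        tokens = text.lower().split()
        return sum(t in BAD_WORDS for t in tokens) / max(len(tokens), 1)

    def most_important_word(text):
        base = classify(text)
        tokens = text.split()
        scores = []
        for i in range(len(tokens)):
            without = " ".join(tokens[:i] + tokens[i + 1:])
            scores.append((base - classify(without), i))
        return max(scores)[1]   # index whose removal drops the score most

    def perturb(text, rng=random.Random(0)):
        tokens = text.split()
        i = most_important_word(text)
        j = rng.randrange(len(tokens[i]))
        tokens[i] = tokens[i][:j] + "*" + tokens[i][j + 1:]  # one-char swap
        return " ".join(tokens)

    original = "the malware will attack tonight"
    adversarial = perturb(original)
    ```

    The single character swap is enough to knock the edited word out of the toy keyword set, lowering the score, while a human reader still recognizes the word.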
  • Exploring the Use of Autoencoders for Botnets Traffic Representation

    Publication Year: 2018, Page(s): 57-62
    PDF (304 KB) | HTML

    Botnets are a significant threat to cyber security. Compromised (malicious) hosts in a network have recently been detected by machine learning from hand-crafted features sourced directly from different types of network logs. Our interest is in automating feature engineering while examining flow data from hosts labeled as malicious or not. To automatically express full temporal character ...

  • The Good, the Bad and the Bait: Detecting and Characterizing Clickbait on YouTube

    Publication Year: 2018, Page(s): 63-69
    PDF (867 KB) | HTML

    The use of deceptive techniques in user-generated video portals is ubiquitous. Unscrupulous uploaders deliberately mislabel video descriptors, aiming to increase their views and subsequently their ad revenue. This problem, usually referred to as "clickbait," may severely undermine user experience. In this work, we study the clickbait problem on YouTube by collecting metadata for 206k videos. To a...
