
Robustness of neural ensembles against targeted and random Adversarial Learning


Authors:

Shir Li Wang (School of SEIT, UNSW@ADFA, University of New South Wales, Australia); Kamran Shafi; Chris Lokan; Hussein A. Abbass

Machine learning has become a prominent tool in various domains owing to its adaptability. However, that adaptability can be exploited by an adversary to cause a machine learning system to malfunction, a process known as Adversarial Learning. This paper investigates Adversarial Learning in the context of artificial neural networks. The aim is to test the hypothesis that an ensemble of neural networks trained on the same data manipulated by an adversary is more robust than a single network. We investigate two types of attack: targeted and random. We use Mahalanobis distance and covariance matrices to select targeted attacks. The experiments use both artificial and UCI datasets. The results demonstrate that an ensemble of neural networks trained on attacked data is more robust against the attack than a single network. While many papers have demonstrated that an ensemble of neural networks is more robust against noise than a single network, the significance of the current work lies in the fact that targeted attacks are not white noise.
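The abstract states that targeted attacks are selected using Mahalanobis distance and covariance matrices, but does not spell out the selection rule. The sketch below shows one plausible reading, assuming candidate training points are ranked by their Mahalanobis distance to the sample mean and the nearest points are chosen as attack targets; the function name mahalanobis_ranking and the nearest-to-centroid rule are illustrative assumptions, not the paper's exact procedure.

    # Sketch: rank candidate points for a targeted attack by Mahalanobis
    # distance to the sample mean. The nearest-to-centroid selection rule
    # is an assumption for illustration, not the paper's stated method.
    import numpy as np

    def mahalanobis_ranking(X):
        """Rank rows of X by squared Mahalanobis distance to the mean.

        X: (n_samples, n_features) array of training inputs.
        Returns sample indices sorted from nearest to farthest.
        """
        mean = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)    # feature covariance matrix
        cov_inv = np.linalg.pinv(cov)    # pseudo-inverse for numerical stability
        diff = X - mean
        # Squared Mahalanobis distance of each sample to the mean:
        # d2[i] = diff[i] @ cov_inv @ diff[i]
        d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)
        return np.argsort(d2)

    # Example: flag the 10 samples nearest the centroid as attack targets
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    targets = mahalanobis_ranking(X)[:10]
    print(targets)

Because the covariance matrix rescales each feature by its variance and correlations, this ranking is invariant to feature scaling, which is why Mahalanobis distance is a natural choice over plain Euclidean distance for picking representative points to perturb.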

Published in:

2010 IEEE International Conference on Fuzzy Systems (FUZZ)

Date of Conference:

18-23 July 2010