
DeepIA: An Interpretability Analysis based Test Data Generation Method for DNN


Abstract:

Recently, deep neural networks (DNNs) have been widely applied in various fields, such as image classification, and even replace humans in decision-making for some specific tasks. However, like traditional software, DNNs inevitably contain defects. If defective DNN models are applied in safety-critical fields, such as autonomous driving and medical diagnosis, the consequences may be disastrous. Therefore, effective testing methods are urgently needed to improve the reliability of DNNs. Existing DNN testing methods typically generate test data either by globally modifying the original data or by taking adversarial approaches. The generated test data struggle to simultaneously achieve a small degree of difference from the original data and a high Error-inducing Success Rate (ESR) with respect to the target DNN model. Moreover, perturbation-based methods are difficult for humans to understand. To address these issues, this paper proposes DeepIA, an interpretability analysis based test data generation method for DNNs. DeepIA analyzes the interpretability of the DNN's decision-making behaviors. Guided by the interpretability analysis results, the original training data are split into different regions to evaluate each region's influence on the DNN's decisions. Then, the most significant regions of the original test data are transformed to generate new test data. Experimental results show that the interpretability analysis effectively enhances DeepIA's ability to mislead the DNN model under test. Compared with DeepTest and DeepSearch, DeepIA generates test data with smaller perturbations and a greater ESR.
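The pipeline described above (analyze region influence on the model's decision, then transform only the most significant regions) can be sketched in miniature. The sketch below is an illustrative stand-in, not the paper's implementation: it uses a toy classifier, a fixed quadrant split, and occlusion-based importance as assumed simplifications of DeepIA's interpretability analysis.

```python
# Minimal sketch of an interpretability-guided test generation loop,
# assuming a toy model and occlusion-based region importance.

def toy_model(image):
    # Toy "DNN": predicts 1 if the top-left quadrant outweighs the rest.
    top_left = sum(image[r][c] for r in range(2) for c in range(2))
    rest = sum(sum(row) for row in image) - top_left
    return 1 if top_left > rest else 0

def split_regions():
    # Step 1: split a 4x4 input into four 2x2 quadrant regions.
    return {
        "TL": [(r, c) for r in range(2) for c in range(2)],
        "TR": [(r, c) for r in range(2) for c in range(2, 4)],
        "BL": [(r, c) for r in range(2, 4) for c in range(2)],
        "BR": [(r, c) for r in range(2, 4) for c in range(2, 4)],
    }

def region_importance(image, model):
    # Step 2: estimate each region's influence by occluding it
    # (zeroing its pixels) and checking whether the decision flips.
    base = model(image)
    scores = {}
    for name, coords in split_regions().items():
        occluded = [row[:] for row in image]
        for r, c in coords:
            occluded[r][c] = 0
        scores[name] = 1 if model(occluded) != base else 0
    return scores

def generate_test(image, model):
    # Step 3: transform only the most influential region,
    # yielding a new test input with a localized perturbation.
    scores = region_importance(image, model)
    target = max(scores, key=scores.get)
    perturbed = [row[:] for row in image]
    for r, c in split_regions()[target]:
        perturbed[r][c] = 0
    return perturbed, target

image = [[5, 5, 0, 0],
         [5, 5, 0, 0],
         [1, 1, 1, 1],
         [1, 1, 1, 1]]
new_input, region = generate_test(image, toy_model)
print(region, toy_model(image), toy_model(new_input))  # TL 1 0
```

Here the occluded top-left quadrant is the only region whose removal flips the toy model's decision, so it is the one transformed; in DeepIA the region split and influence scores come from the interpretability analysis rather than a fixed quadrant grid.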
Date of Conference: 22-26 October 2023
Date Added to IEEE Xplore: 25 December 2023
Conference Location: Chiang Mai, Thailand


1. Introduction

Recently, Deep Neural Networks (DNNs) have been widely applied in various fields, such as computer vision [1], speech recognition [2], and natural language processing [3]. A DNN is a multi-layer neural network model composed of neurons. It learns features and representations of data through multiple layers of nonlinear transformations, and through continued training it can improve its performance on large-scale, complex data. In some specific fields, the performance of DNNs has even surpassed that of humans. However, like traditional software, DNNs also inevitably contain defects. In safety-critical fields such as autonomous driving, medical diagnosis, and malware detection, failures caused by DNN defects may have disastrous consequences. For example, in the first robot-assisted heart valve surgery in the UK, the robot's mechanical arm collided with the surgeon's hand, puncturing the patient's aorta [4]. In addition, several serious traffic accidents have been caused by DNN defects in autonomous driving systems. For instance, in 2021, a Tesla vehicle equipped with autonomous driving software collided with a white cargo truck in Detroit, United States, because the software mistakenly identified the truck as the sky [5]. Similarly, in 2016 and 2019, two serious accidents in Florida, United States, were also caused by erroneous behaviors of car autonomous driving software [6], [7].

