Probing the Purview of Neural Networks via Gradient Analysis

We analyze the data-dependent capacity of neural networks and assess anomalies in inputs from the perspective of networks during inference. The notion of data-dependent capacity allows for analyzing the knowledge base of a model populated by learned features from training data. We define purview as the additional capacity necessary to characterize inference samples that differ from the training data. To probe the purview of a network, we utilize gradients to measure the amount of change required for the model to characterize the given inputs more accurately. To eliminate the dependency on ground-truth labels in generating gradients, we introduce confounding labels that are formulated by combining multiple categorical labels. We demonstrate that our gradient-based approach can effectively differentiate inputs that cannot be accurately represented with learned features. We utilize our approach in applications of detecting anomalous inputs, including out-of-distribution, adversarial, and corrupted samples. Our approach requires no hyperparameter tuning or additional data processing and outperforms state-of-the-art methods by up to 2.7%, 19.8%, and 35.6% in AUROC, respectively.


I. INTRODUCTION
Deep neural networks are prone to failure when deployed in real-world environments as they often encounter data that diverge from training conditions [1]- [3]. Neural networks rely on the implicit assumption that any given input during inference is drawn from the same distribution as the training data. Limited to classes seen during training, neural networks classify any input image among such in-distribution classes, even if the image is significantly different from training data. In addition, it is widely accepted that neural networks tend to make overconfident predictions even for inputs that differ from training data [4]- [7], making it more challenging to distinguish inputs of anomalous conditions. This behavior can have serious consequences when utilized in safety-critical applications, such as autonomous vehicles and medical diagnostics [8], [9]. To ensure reliable performance for practical applications of neural networks, models must be able to distinguish inputs that differ from training data and cannot be handled adequately based on their capacity.
The capacity of neural networks is broadly discussed in terms of the size of the networks (i.e., the number of model parameters) [10]- [12]. It is central to the generalization performance of neural networks in traditional statistical learning theories, as models with larger capacity are expected to overfit training data, leading to poor generalization performance [12], [13]. However, recent studies show that over-parameterization of deep neural networks only helps with their generalization performance [14], [15]. Many researchers aim to understand this phenomenon by examining the representational capacity, which is the types of functions a model can learn [16], [17]. Others discuss it from the optimization point of view [12]. They recognize that a learning algorithm, defined by model architecture and training procedure, is unlikely to find the best function among all possible functions, although it can still learn one that performs well for a given task. This notion of capacity, defined as the types of functions that can be reached via some learning algorithm, is denoted as the effective capacity [14], [15]. These perspectives of data-independent network capacity analyze the generalization behavior of neural networks by changing network architectures or forcing memorization. They assume that training data is available to be manipulated for memorization and repetitive training of networks with altered architectures. However, this assumption does not hold for deployed models, in which the training data and network architectures are fixed and novel situations cannot be foreseen or simulated during training. Hence, they are not suitable for analyzing the capacity of deployed models and their ability to handle samples given during inference.
In this work, we examine the capacity of trained neural networks in terms of their knowledge base and reformulate the definition of the model capacity. The knowledge base of a network is established by training data, allowing the model to characterize the inputs observed during inference with its learned features. We argue that the capacity of models should be investigated not only in terms of the learning algorithm (i.e., model architectures and training procedures) but also in terms of the training data. To analyze the data-dependent capacity of the trained networks, we introduce the concept of purview from top-down and bottom-up perspectives, as illustrated in Fig. 1. The top-down purview, derived from the model perspective, is built on the inclusive relationship between representational capacity (RC) and effective capacity (EC), where the latter is a subset of the former. The purview lies in the region described by RC − EC and shown in blue, which denotes the additional capacity in RC that can be utilized to enhance EC. The bottom-up purview, established from the data perspective, is based on the learned features T that populate the knowledge base of a trained model, i.e., the data-dependent EC and its features T_EC. It is the gap between the learned features T*_EC established on the training data and the ideal EC with the additional features in T*_purview that are necessary to handle inference samples in addition to the training data.
To probe the purview of the trained networks, we utilize gradients to determine anomalous inputs from the perspective of the models. Our intuition is that gradients correspond to the amount of change that a model requires to properly represent a given sample. We hypothesize that the required change captured in gradients would be more significant for inputs that cannot be represented accurately with the learned features of a network. From the data-dependent view of model capacity, a model trained with simpler data would have a less extensive feature set, and thus, lower EC. In turn, there is more room for improvement in its feature set (i.e., larger RC − EC region, larger purview), leading to more significant gradient responses. During training, gradients are generated with respect to the model outputs and ground-truth labels for given inputs, the latter of which are unavailable during inference. To remove dependency on information regarding given samples during inference, we introduce confounding labels, which are labels formulated by combining multiple categorical labels. Although the use of manipulated labels has been explored in existing studies [18]- [22], it is only analyzed during model training, and the label designs rely on the statistics of the training data. In comparison, a confounding label does not require any knowledge of the training data and can be used during inference with no access to ground-truth labels. The contributions of this paper are five-fold:
• We define purview by integrating the top-down view from the representational capacity to the effective capacity and the bottom-up view based on the learned features of the trained models.
• We introduce the concept of confounding labels as an unsupervised tool to elicit a model response that can be utilized to probe the purview of trained neural networks.
• We demonstrate the relationship between the degree of exposure to diverse data in model training and the purview of trained networks via gradient analysis.
• We utilize our proposed method in applications of OOD detection, adversarial detection, and corrupted input detection and achieve state-of-the-art performance.
• We conduct extensive ablation studies to study the manipulation of supervised and unsupervised confounding labels.

II. RELATED WORK
A. CAPACITY OF NEURAL NETWORKS
Following the distinction between representational capacity and effective capacity established in Section I, we introduce how they have been studied in the literature. The VC (Vapnik-Chervonenkis) dimension [13] is a traditional approach for quantifying the representational capacity of a learning machine using the cardinality of data samples to provide theoretical bounds. More recent studies have examined the representational capacity of deep neural networks from the viewpoint of generalization, which diverges from the traditional understanding of the overfitting nature of larger models. To do so, some proposed to utilize sample complexity to understand standard generalization [14], [23] and adversarial generalization [24]. Others employed the intrinsic dimensions of networks [25], [26]. Zhang et al. [15] proposed the concept of the effective capacity to analyze the memorization behavior of deep neural networks, extended by Arpit et al. [27]. They demonstrated the memorization behavior by searching for the network of the smallest capacity required to learn perturbed images and labels. These studies of representational capacity and effective capacity analyze networks during training by altering the model architectures. Their interpretation of the model capacity is data-independent. They focus on the varying sizes of the models and the functions they are capable of learning, rather than on the effect of the training data, in addition to the network architectures and training algorithms. We argue that training data is critical in defining the effective capacity of networks. This data-dependent view of the effective capacity of trained models allows for analyzing their ability to handle different inputs during inference and determining the validity of model predictions. We aim to demonstrate the connection between representational capacity and effective capacity from a data-dependent perspective and the proposed concept of purview.

B. ANOMALOUS INPUT DETECTION
Many studies have aimed to detect anomalous samples due to statistical shifts in the data distribution or adversarial generation. This application exploits the insufficient knowledge base of models trained on relatively small datasets to distinguish inputs that differ from the training samples. There are mainly two types of approaches: analyzing the output distributions of classifiers to differentiate anomalous inputs or employing auxiliary networks for detection. Hendrycks and Gimpel [28] introduced a baseline method of thresholding samples based on predicted softmax distributions. Liang et al. [29] employed additional input and output processing to the previous method to further improve detection performance. DeVries and Taylor [30] utilized prediction confidence scores obtained from an augmented confidence estimation branch on a pretrained classifier. Ma et al. [31] proposed to characterize the dimensional properties of adversarial regions with local intrinsic dimensionality. Lee et al. [32] proposed a confidence metric using Mahalanobis distance. Liu et al. [33] utilized energy scores to capture the likelihood of occurrence during inference and training time. These approaches utilize activation-based representations to characterize the anomaly in inputs via learned features that establish the effective capacity of a model, i.e., what the network knows about the given inputs. However, the overconfident nature of neural networks makes it counterintuitive to characterize anomalous inputs solely with activations that are known to be poorly calibrated [5]. Rather, we argue that the anomaly in inputs should be established based on what the model is unfamiliar with and thus incapable of representing accurately. Instead of focusing solely on the learned features based on the effective capacity of the networks, we focus on the additional features necessary to represent the anomaly in the regions of purview.

C. GRADIENTS AS FEATURES
At the core of the advancement of deep neural networks lie gradient-based optimization techniques that allow for finding solutions to tasks at hand [34]. In addition to their utility as an optimization tool, gradients have been utilized for various purposes. Goodfellow et al. [4] first demonstrated that small, hardly perceptible perturbations, known as adversarial attacks, can deliberately fool trained networks into making irrelevant predictions with high confidence, extended by numerous studies including [35], [36]. These gradient-based adversarial attacks force models to deviate from their data-dependent EC and yield perturbed outputs. Gradients are also commonly used in visual explanation techniques to produce localization maps of pixels relevant to model predictions [37], [38], which accentuate the existing features that define the effective capacity. Others expanded their work with contrastive explanations, in which they visualized features of other potential predictions for actual predictions [39], [40]. Another application using gradients is anomaly detection by constraining gradients in autoencoder learning such that inliers form a specific knowledge base that can be used to differentiate outliers [41]. Some have utilized gradients to quantify the uncertainty of trained neural networks [6], [42], [43]. In general, gradients are used to illuminate the learned features of networks and examine the divergence from the established knowledge base. Our focus on purview exploits this nature of gradients in terms of model capacity, bridging the gap between effective capacity and representational capacity. We show that gradients enable the analysis of data samples during inference from the perspective of models by considering the adequacy of the effective capacity established on training data and the additional capacity (i.e., purview), stepping further into the representational capacity needed for more accurate representation.

D. MANIPULATING LABEL ENCODINGS
One-hot encoding is the most widely used approach for formulating labels to handle nominal data (i.e., categorical data with no quantitative relationship between categories). A combination of multiple one-hot encodings is used to indicate information regarding multiple classes in each image. Based on these two, recent studies have analyzed different formulations of labels for various purposes. Some works [18]- [20] have proposed mixing two input images and their labels with a pre-computed ratio as a data augmentation technique. For the interpretability of neural networks, Prabhushankar and AlRegib [44] proposed to extract causal visual features using combinations of binary classification labels. To address the problem of missing labels in a multilabel classification setting, Durand et al. [21] utilized partial labels formulated using the proportions of known labels. Duarte et al. [22] proposed a similar approach to handle imbalanced datasets by masking parts of the labels based on the ratio of positive samples to negative samples. These methods impose constraints on feature learning to improve generalizability and robustness. However, they explored different label encodings only for model training and assumed the availability of information regarding the training data. We propose manipulating label encodings to probe the purview of networks built upon their knowledge base of learned features. Contrary to existing studies, our approach does not depend on the availability of training data statistics or label information for the inference data.
VOLUME 11, 2023

III. PROBING THE PURVIEW OF NEURAL NETWORKS
In this section, we expand the definition of the purview of neural networks based on data-dependent effective capacity.
We discuss the purview of neural networks in terms of the knowledge base established by the training data and the additional knowledge necessary to handle samples during inference. We then introduce our gradient-based approach to probe the purview of the trained models.

A. FEATURE-BASED CAPACITY ANALYSIS
We define purview as the additional capacity required for a model to handle samples observed during inference that differ from the training data. Our definition of purview depends on the effective capacity established by the training data, as well as network architectures and training procedures. Assuming a fixed architecture and training procedure, we first discuss the effective capacity of a trained model in terms of the features learned from the training data. Following the notations utilized by AlRegib and Prabhushankar [39], we introduce our setup for feature-based effective capacity analysis. Let f(·) be a neural network trained for an image classification task, where it maps an input of dimension h × w × c to an output vector of dimension N, which is the number of classes defined in the training dataset. Given an input image x, the model f utilizes its learned features to produce a model output (i.e., logits) y = f(x) ∈ R^N. The predicted class C is determined by taking the index of the largest logit value, C = arg max_i y_i, C ∈ {1, 2, · · · , N}.
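As a minimal sketch of this setup, the logit vector y and the predicted class C can be obtained as below. The tiny network is a hypothetical stand-in for illustration, not an architecture used in this work:

```python
import torch
import torch.nn as nn

N_CLASSES = 10  # N, the number of classes in the training dataset (assumed)

# Hypothetical stand-in for the classifier f(.), mapping an h x w x c
# input to an N-dimensional logit vector.
f = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, N_CLASSES),
)

x = torch.randn(1, 3, 32, 32)   # one input image (c x h x w)
y = f(x)                        # logits y = f(x), shape (1, N)
C = y.argmax(dim=1).item()      # predicted class: index of the largest logit
assert 0 <= C < N_CLASSES
```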
Let T = {T_1, T_2, · · · , T_P} be the set of all features that the network learned to extract during training for the classification task, where P denotes the total number of learned features. Each feature captures a unique characteristic of the training data. Depending on the classes represented in the training data and their similarities, some classes have unique features, whereas others have shared features for their similar characteristics. The model prediction of C for image x means that the network's decision is based on the set of features relevant to class C, which is a subset of all available features T. We describe our interpretation of data-dependent effective capacity using the toy example in Fig. 2. Consider two networks, f and g, which share the same architecture and training procedures but have different training data. We assume that the network architecture has a sufficient number of model parameters to learn their training data and that the training procedures allow for optimization. The first network f is trained on handwritten digit images of 0 and 6, shown in Fig. 2(a). Based on the training data, it learns the features of curves shared among all digits (T_1) and 3-way intersections for the digit 6 (T_2), forming the feature set T_f = {T_1, T_2}. On the other hand, the second network g is trained on the images of the same handwritten digits as f and an additional digit of 7. Some of the learned features of g are similar to those of f and are written as T_1 and T_2. Moreover, the additional training data of digit 7 leads to extra features of straight lines (T_3) and corners (T_4), constructing the overall feature set T_g = {T_1, T_2, T_3, T_4}. Model g has a higher effective capacity with a more comprehensive feature set due to its exposure to relatively diverse training data compared to f, which is depicted with more ridges in Fig. 2(b). Given these two models, we now consider an inference scenario in which an input image x of the handwritten digit 1 is presented.
For model f, the feature span T_f is insufficient to accurately represent x due to the lack of features for straight lines. The best the model f can do to capture the characteristics of the sample x is to utilize the existing features for an approximation, T^f_x = h(T_1, T_2), where h is a function that combines the learned features. For model g, however, the same image of digit 1 can be handled more properly because the feature set T_g includes the straight-line feature, learned from digit 7 during training. The model g can utilize this learned feature to represent the inference sample, T^g_x = T_3. In contrast to T^g_x, T^f_x is still inadequate for representing the sample because each learned feature captures a unique characteristic, and the combination of existing features cannot precisely account for the lack of a relevant feature. While the predictions from both models are irrelevant because the class of 1 does not exist in the training data, this feature-based focus enables the analysis of the anomaly in inputs from the perspective of models that is not limited to the predicted class distributions.
The comparison between these two models highlights the core of our approach, which focuses on the absence of relevant features in the model's knowledge base to distinguish inputs that cannot be represented accurately, i.e., anomalous samples. Although the knowledge base of a model can be broadened via training with comprehensive datasets and various data augmentation techniques, it is still impractical for a model to learn everything in existence. Consequently, in practice, we only have access to models that lack some features (i.e., f). Writing in general terms, T^f_x = h(T_1, · · · , T_P). Ideally, we want the approximation T^f_x to be equivalent to the lacking feature, h(T_1, · · · , T_P) = T_{P+1}, where T_{P+1} is an arbitrary feature that is relevant to input x and exists only in T_g. This would allow for the representation of the sample considered anomalous from the perspective of the model with its learned features. With the purview, we examine the gap in the knowledge base of f that needs to be offset to fully grasp the unfamiliar characteristics of x.
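The inability of a combination of learned features to account for a missing orthogonal feature can be checked numerically. The sketch below is a toy illustration under the orthogonality assumption of Section III-B; the concrete vectors are arbitrary stand-ins for T_1, T_2, and the absent feature:

```python
import numpy as np

# Toy stand-ins: two learned features and an absent feature that is
# orthogonal to both (entirely synthetic, for illustration only).
T1 = np.array([1.0, 0.0, 0.0])
T2 = np.array([0.0, 1.0, 0.0])
T_absent = np.array([0.0, 0.0, 1.0])

# Best linear approximation h(T1, T2) of the absent feature via least squares.
A = np.stack([T1, T2], axis=1)            # columns span the learned features
coef, *_ = np.linalg.lstsq(A, T_absent, rcond=None)
residual = T_absent - A @ coef

assert np.allclose(coef, 0.0)             # no mix of T1, T2 helps
assert np.allclose(residual, T_absent)    # the gap equals the missing feature
```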

B. PURVIEW OF NETWORKS AND GRADIENTS
Building on the effective capacity analysis setup, we discuss the purview of the models. We define purview as the additional feature-based capacity necessary for a trained network to characterize inputs given during inference that differ from the training data. It is centered on the absence of features in the network's feature span that are relevant for the given inputs. Consider the models f and g and the features they use in response to an input x during inference, T^f_x and T^g_x. With the purview, we examine the gap in the knowledge base of f that needs to be offset such that f can represent x more accurately. This gap is effectively the necessary change to be made to f to bring the best approximation h(T_1, · · · , T_P) closer to the necessary feature T_{P+1}.
To assess the necessary change for the model such that h(T_1, · · · , T_P) = T_{P+1}, we make assumptions regarding the learned features. Each learned feature captures a unique aspect of training data, so the features from each model are orthogonal to each other,

T_i ⊥ T_j, ∀ i ≠ j. (13)

We also assume that the function h for approximation is linear. Based on the orthogonality of features in Eq. 13 and the relationship between T^f_x and T^g_x, we write

h(T_1, · · · , T_P) ⊥ T_{P+2} ⊥ · · · ⊥ T_{P+Q}. (14)

This relationship also assures that the approximated feature is orthogonal to the span of the rest of the features, h(T_1, · · · , T_P) ⊥ span(T_{P+2}, · · · , T_{P+Q}). To measure the necessary change, we employ gradients based on their utility in model optimization, where they correspond to the amount of change that a model requires to properly represent inputs. Gradients have a unique property of orthogonality, which ensures that the gradients of the approximated feature lie in the span of the other features, span(T_{P+2}, · · · , T_{P+Q}) ⊃ ∇h(T_1, · · · , T_P).
With the linear assumption on h, the gradients of a linear combination of orthogonal features can be re-written as

∇h(T_1, · · · , T_P) = ∇(Σ_{i=1}^{P} a_i T_i) = Σ_{i=1}^{P} a_i ∇T_i.

This shows that the gap in the knowledge base for the absent feature can be examined by approximating T_{P+2}, · · · , T_{P+Q} with the gradients of the existing features T_f. We hypothesize that the amount of necessary change for a model to represent inputs seen during inference would be more significant for inputs that differ greatly from the training data. Equivalently, the amount of necessary change is inversely related to the effective capacity of the model established by the training data. Networks that are exposed to diverse training data would have enhanced effective capacity with more generalizable features, leading to a smaller amount of required change to handle inference samples. We propose to exploit this relationship and probe the purview using gradients. Similar to the use of gradients during model training, gradients can be generated during the inference of a trained model. The obtained gradients are not applied for the actual model updates. Because the model in question is a converged solution, the required model update would not be drastic for inputs within its knowledge base. These gradients can be used to probe the purview of a trained network.
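This intuition can be illustrated with a fully synthetic toy experiment (our construction, not an experiment from this work): a small model is trained to convergence on one distribution, and the backpropagated gradient magnitudes are compared between familiar and shifted inputs:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Train a tiny model to convergence on a synthetic distribution.
model = nn.Linear(8, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x_train = torch.randn(256, 8)
y_train = (x_train[:, 0] > 0).long()  # simple separable labeling rule
for _ in range(500):
    opt.zero_grad()
    F.cross_entropy(model(x_train), y_train).backward()
    opt.step()

def grad_norm(x: torch.Tensor) -> float:
    """Sum of squared gradient norms after backpropagating a BCE loss
    against an all-positive label (mirroring the confounding label idea)."""
    model.zero_grad()
    target = torch.ones(x.size(0), 2)
    F.binary_cross_entropy_with_logits(model(x), target).backward()
    return sum(p.grad.pow(2).sum() for p in model.parameters()).item()

familiar = grad_norm(torch.randn(64, 8))          # same distribution
shifted = grad_norm(torch.randn(64, 8) * 5 + 5)   # shifted distribution
assert shifted > familiar  # larger required update for unfamiliar inputs
```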

C. CONFOUNDING LABELS FOR GRADIENTS
Consider an image classifier during inference. The model will make a prediction for any given input, but the validity of model predictions remains in question due to the lack of access to labels or information on input data distribution. This presents a challenge in utilizing gradients to probe the purview of a trained network. We introduce confounding labels to remove the dependency on ground-truth labels in gradient generation during inference. A confounding label y_c is a label formulated by combining multiple categorical labels.
In an image classification setting, an ordinary label consists of a single class (i.e., one-hot encodings) for a model trained to minimize cross-entropy loss, whereas a confounding label may include multiple classes or none.
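For concreteness, with N = 5 classes the encodings discussed above can be written as follows (a small illustrative sketch):

```python
import numpy as np

one_hot = np.eye(5)[2]                      # ordinary label: only class 2 positive
multi_hot = np.array([1., 0., 1., 0., 0.])  # multiple classes present in one image
all_hot = np.ones(5)                        # confounding label: every class positive
zero_hot = np.zeros(5)                      # confounding label: no class positive

assert one_hot.sum() == 1 and all_hot.sum() == 5 and zero_hot.sum() == 0
```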
Our approach focuses on the absence of relevant features in the knowledge base of a trained model, examined via gradients, to fully grasp the anomaly in the given inputs during inference. A confounding label provides an unsupervised methodology to elicit a gradient response that can be utilized to probe the purview of trained neural networks. We illustrate the framework for collecting gradient-based representations with confounding labels during inference in Fig. 3. We utilize the binary cross-entropy loss between the logits and a confounding label,

J = − Σ_{i=1}^{N} [ y_{c,i} log ŷ_i + (1 − y_{c,i}) log(1 − ŷ_i) ],

where ŷ_i is the predicted probability for class i and y_{c,i} is the true probability represented by the confounding label.
With backpropagation of the loss, gradients are generated at each set of model parameters (i.e., the weight and bias parameters of the network layers). While any form of gradient that preserves its magnitude would be valid, we measure the squared L_2-norm for each parameter set and concatenate them to represent the given input. The obtained gradient-based representation has the following form:

[ ‖∇_{W_1} J‖²_2, ‖∇_{W_2} J‖²_2, · · · , ‖∇_{W_L} J‖²_2 ],

where J denotes the loss, W_l denotes the l-th parameter set, and L is the number of layers or parameter sets in the given network. We highlight that the gradient generation process involves no hyperparameters compared with other approaches that distinguish between in-distribution and OOD.
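A minimal PyTorch sketch of this procedure is given below; the function and helper names are ours, and the stand-in classifier is for illustration only. The binary cross-entropy loss against an all-hot confounding label is backpropagated, and the squared L2-norms of the parameter-wise gradients are stacked into one representation vector:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def gradient_representation(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Backpropagate BCE loss against an all-hot confounding label and
    return the squared L2-norm of the gradient at each parameter set."""
    model.zero_grad()
    logits = model(x)
    y_c = torch.ones_like(logits)  # all-hot confounding label
    loss = F.binary_cross_entropy_with_logits(logits, y_c)
    loss.backward()
    norms = [p.grad.pow(2).sum() for p in model.parameters()]
    return torch.stack(norms)      # one entry per parameter set

# Hypothetical stand-in classifier (not an architecture from the paper).
f = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64), nn.ReLU(),
                  nn.Linear(64, 10))
rep = gradient_representation(f, torch.randn(1, 3, 32, 32))
assert rep.numel() == 4  # weight and bias gradients of the two Linear layers
```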

1) Gradients vs. Activations vs. Loss
We demonstrate the effectiveness of gradients obtained with confounding labels compared to activations and loss values in differentiating anomalous inputs. In an out-of-distribution detection setup, we show the disparity between the features learned during training and the features necessary to represent test images of both in-distribution and OOD. The key idea is that the model will require more significant updates in its feature set to handle OOD samples than to handle in-distribution samples. Based on our hypothesis, the gradient magnitudes obtained from the OOD samples should be larger than those of the in-distribution samples. As discussed in Section II, most OOD detection approaches use activation-based measures. We will show that gradients can capture distributional shifts more effectively than activations and loss values.
For the demonstration, a ResNet classifier [45] is trained with MNIST [46] and used to generate gradients with confounding labels on the test sets of MNIST, SVHN [47], CIFAR-10/100 [48], LSUN [49], and ImageNet [50], where the last two are resized subsets of their original versions provided by Liang et al. [29]. We employ a network architecture that is sufficiently large for all datasets. We collect the squared L_2-norm of the layer-wise gradients and activations, as well as the loss values, and visualize their distributions in Fig. 4(a). For gradient generation, we utilize a confounding label that combines one-hot encodings of all classes (i.e., all-hot encoding). For visualization, we select a convolutional layer from each residual block of the ResNet architecture. The distributions of the gradient and activation magnitudes and loss values for the in-distribution dataset are highlighted by red circles in each plot for clarity. The separation in the ranges of gradient magnitudes between the in-distribution and OOD datasets is more evident in some parts of the network because each layer captures information about different aspects of the given inputs. Nevertheless, we observe a sharp distinction based on the purview of the model, with smaller gradient magnitudes for the in-distribution datasets and significantly larger magnitudes for the OOD datasets throughout the network layers. Given that the in-distribution dataset is the simplest of all considered datasets, the learned features are insufficient to characterize the OOD samples, leading to larger purview and thus larger gradient magnitudes. In contrast, the activations in Fig. 4(b) show apparent overlaps in the magnitude ranges between the in-distribution and OOD datasets throughout the network.
This supports our intuition that gradients can capture the anomaly (i.e., distributional shift in the case of OOD) in inputs better than activations based on the unfamiliar aspects of the given inputs from the model's perspective. Similar to the activations, we observe the apparent overlap of the loss values across different datasets, as shown in Fig. 4(c). Loss and gradients are intertwined in the process of backpropagation, but the loss is determined based on the last layer of activation from the network and is limited to a single value per sample. On the contrary, the gradients have the same dimensions as their corresponding parameter sets, preserving more information about the current state of the model and the necessary adjustment for better representation of the given inputs. The overlap in the loss value range between the in-distribution and OOD datasets shows that the loss alone is inadequate for differentiating anomalous inputs from familiar inputs from the model's perspective. Overall, gradients are more effective in characterizing anomalies in inputs than activations or loss values.

2) Data-Dependent Capacity and Purview
We demonstrate the effectiveness of our gradient-based approach in probing the purview of neural networks trained with data of varying levels of complexity. In particular, we analyze the asymmetry in the data-dependent effective capacity and the purview observed in the gradients. The core of this analysis is to show that a model trained on simpler data would have lower effective capacity and more room for improvement in its feature set (i.e., purview), leading to larger gradient responses observed throughout the network. The experiment follows the setup of the OOD detection experiment introduced in the previous section, with the exception of the in-distribution dataset for each model. We utilize training datasets of various complexities: MNIST, CIFAR-10, and ImageNet in ascending order of complexity in terms of the represented types and numbers of class categories. The magnitudes of the gradients obtained with the confounding labels are shown in Fig. 5 to compare the models with different degrees of effective capacity. Each row represents the gradient magnitudes obtained from a model trained with the specified dataset, and each column corresponds to the gradient magnitudes from a convolutional layer of the model. As shown in Fig. 5(a), the model trained on MNIST (i.e., the dataset with the lowest complexity) exhibits the largest gap in gradient magnitudes between the in-distribution and OOD datasets. It captures larger model purview in response to the OOD samples, which is attributed to the limited data-dependent effective capacity. In contrast, the increase in the complexity of the in-distribution datasets leads to smaller gaps in the gradient magnitudes between the in-distribution and OOD, as seen in Fig. 5(b) and 5(c). This indicates that by exposing models to datasets of higher complexity in training and enhancing the data-dependent effective capacity, the learned feature sets are sufficient to capture the characteristics of the OOD samples.
In contrast, models with less extensive feature sets require larger model updates to accurately represent the OOD samples. This highlights the value of our data-dependent perspective on model capacity, which allows for learned feature-based analysis.

IV. EXPERIMENTS
In this section, we apply our gradient-based approach to probing the purview of neural networks to detect anomalous inputs: OOD detection, adversarial detection, and corrupted input detection. We distinguish the anomaly based on the application and training datasets. For OOD detection, the in-distribution datasets are referred to as normal, and the OOD datasets are anomalous. For adversarial and corrupted input detection, clean images are considered normal, and adversarial attack images or corrupted images are considered anomalous. From a trained classifier, we obtain gradient-based representations from the test sets of both normal and anomalous datasets using a confounding label where all classes are positive (i.e., all-hot encoding). For each experiment, the collected gradient representations are used to train a simple binary detector with two fully-connected layers as an anomalous input detector. Each reported detection result is an average of 5-fold cross-validation results repeated with two different random seeds, totaling ten rounds with randomly initialized detectors for all detection experiments. Further details regarding the implementation are provided in Section IV-D. In addition, we provide an ablation study on detector network architectures and training, as well as the designs of confounding labels, in Section IV-E. We note that our approach is not specifically devised to perform well for these applications. Rather, it is a byproduct of our focus on understanding the purview of models and the apparent gap in the data-dependent capacity observed in the distribution of gradient responses, which has proven useful in such applications.
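The pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the binary cross-entropy loss against the all-hot label and the per-parameter L2 gradient norms are our assumptions, and the toy classifier stands in for ResNet/DenseNet.

```python
import torch
import torch.nn as nn

def gradient_representation(model, x, num_classes):
    """Backpropagate a loss between the model output and an all-hot
    confounding label, then collect the L2 norm of each parameter
    tensor's gradient as the representation of input x."""
    model.zero_grad()
    logits = model(x)
    # All-hot confounding label: every class marked positive.
    confounding = torch.ones(x.size(0), num_classes)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, confounding)
    loss.backward()
    return torch.tensor([p.grad.norm().item() for p in model.parameters()])

# Toy classifier on 3x8x8 inputs (illustrative only).
toy = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 16), nn.ReLU(),
                    nn.Linear(16, 10))
rep = gradient_representation(toy, torch.randn(4, 3, 8, 8), num_classes=10)
```

The resulting vector of per-layer gradient magnitudes would then serve as the feature fed to the binary anomaly detector.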

A. OUT-OF-DISTRIBUTION DETECTION
We apply our gradient-based approach to OOD detection using various image classification datasets. We utilize the CIFAR-10 and SVHN datasets as in-distribution. For OOD, we consider the following additional datasets: CIFAR-100 [48], resized LSUN and ImageNet [29], and fixed LSUN and ImageNet [51], denoted as LSUN (FIX) and ImageNet (FIX), respectively. ResNet-18 and DenseNet [52] architectures are utilized as classifiers from which gradient-based representations are obtained. We measure the detection performance with the following evaluation metrics:
• Detection accuracy measures the maximum classification accuracy over all possible confidence thresholds δ,

max_δ { 0.5 · P_normal(q(x) ≤ δ) + 0.5 · P_anomalous(q(x) > δ) },

where q(x) denotes the confidence score from the detector. For our method, we fix the threshold δ to 0.5.
• AUROC measures the area under the receiver operating characteristic curve, which plots the true positive rate against the false positive rate obtained with varying threshold settings.
• AUPR measures the area under the precision-recall curve, which plots the precision (the ratio of true positives to all predicted positives) against the recall (true positive rate) with varying threshold values.

The results of OOD detection are reported in Table 1 along with other state-of-the-art methods, including Baseline [28], ODIN [29], Mahalanobis [32], Energy [33], and GradNorm [43]. For the Energy and GradNorm methods, the detection accuracy is omitted because they do not specifically determine threshold values for their OOD scores. We observe that our method, shown as Purview, outperforms all state-of-the-art methods in AUROC in 11 out of 12 cases when SVHN is used as in-distribution. With CIFAR-10 as in-distribution, our method outperforms all others in AUROC when SVHN, LSUN, and LSUN (FIX) are used as OOD. Our method is particularly effective when there is a considerable difference in the complexity of the in-distribution and OOD datasets. We provide sample images from each dataset for visual analysis in Fig. 6. Among the three OOD datasets shown in Fig. 6(b)-(d), ImageNet shows the most visual similarity with CIFAR-10 in terms of the types and scales of objects in the images. SVHN is a street-view house number dataset that is vastly different from the CIFAR-10 dataset of natural images. LSUN is a scene-understanding dataset with scene categories for classification. The results support our intuition behind the purview of a trained network based on its data-dependent effective capacity. The models trained on CIFAR-10 have higher effective capacity than those trained on the simpler SVHN. Their feature sets include more diverse features that can be utilized to approximate the unfamiliar characteristics of the OOD datasets. On the other hand, models trained on SVHN have relatively simple features.
They require more extensive updates to handle OOD datasets, exhibiting a more apparent gap in the gradient responses and leading to better OOD detection performance. Similar reasoning can be applied to the inferior detection performance observed with ImageNet and CIFAR-100 as OOD. With SVHN as in-distribution, our approach achieves the best performance in all cases except one, and a very close second in that case. However, with CIFAR-10 as in-distribution, our approach is outperformed by up to 3.5% AUROC for ImageNet, 3.7% for ImageNet (FIX), and 6.8% for CIFAR-100. Our experimental results show that the more similar the in-distribution and OOD datasets are, the smaller the gap in the spreads of gradient magnitudes, which makes it more challenging to differentiate OOD samples from in-distribution samples. Overall, our gradient-based method outperforms all activation-based methods in 17 out of 24 cases in terms of AUROC.
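Under the metric definitions above, the threshold-based detection accuracy and AUROC can be computed as in this sketch. It assumes higher detector scores q(x) indicate anomalies; the rank-based AUROC formulation is our implementation choice, not the paper's.

```python
import numpy as np

def detection_accuracy(scores_normal, scores_anom, threshold=None):
    """Balanced detection accuracy: mean of the normal true-negative rate
    and the anomalous true-positive rate at a threshold. Without a fixed
    threshold, take the maximum over all candidate thresholds."""
    if threshold is not None:
        thresholds = [threshold]
    else:
        thresholds = np.concatenate([scores_normal, scores_anom])
    best = 0.0
    for d in thresholds:
        tnr = np.mean(scores_normal <= d)  # normals kept below threshold
        tpr = np.mean(scores_anom > d)     # anomalies flagged
        best = max(best, 0.5 * (tnr + tpr))
    return best

def auroc(scores_normal, scores_anom):
    """AUROC via the rank (Mann-Whitney) formulation: the probability that
    a random anomalous score exceeds a random normal score."""
    n, a = np.asarray(scores_normal), np.asarray(scores_anom)
    greater = (a[:, None] > n[None, :]).sum() + 0.5 * (a[:, None] == n[None, :]).sum()
    return greater / (len(a) * len(n))
```

For the paper's fixed-threshold variant, `detection_accuracy(..., threshold=0.5)` corresponds to δ = 0.5.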

B. ADVERSARIAL DETECTION
We utilize ResNet and DenseNet classifiers trained on the CIFAR-10 training set for adversarial detection. The test set of CIFAR-10 is utilized to generate the following adversarial attacks: fast gradient sign method (FGSM) [4], basic iterative method (BIM) [35], Carlini & Wagner attack (C&W) [53], projected gradient descent (PGD) [36], iterative least-likely class method (IterLL) [35], semantic attack [54], and AutoAttack [55]. We utilize the pristine CIFAR-10 test set as normal samples and adversarial images as anomalous samples. For comparison, we employ the Baseline method, local intrinsic dimensionality (LID) scores [31], the Mahalanobis method, and the approach introduced by Hu et al. [56]. The performance is measured in AUROC and reported in Table 2. For the Mahalanobis method (M), we report vanilla results (V), i.e., without the input pre-processing or feature ensemble, and results with both to demonstrate its dependency on the delicate calibration of hyperparameters, neither of which our approach requires.
The Mahalanobis method performs best for the adversarial attack types of FGSM and BIM. In contrast, our approach outperforms all state-of-the-art methods for the C&W, PGD, IterLL, Semantic, and AutoAttack attacks. Note that the Mahalanobis method requires fine-tuning of hyperparameters for each attack type, and we report their results with the best-performing parameters selected within the range of values explored in their study. Interestingly, the Mahalanobis approach shows a significant drop in performance for the types of attacks that were not included in their work. This indicates the need for additional hyperparameter tuning outside of the suggested parameter values. It highlights the benefit of our approach in its simplicity of obtaining gradient-based representations without the need for pre- or post-processing. Our method can effectively characterize adversarial attack samples, even for attack types widely considered highly challenging to detect: C&W, PGD, and AutoAttack.
Along with the known attack detection experiments, we evaluate our approach's unknown attack detection performance against the Mahalanobis method, which showed the second-best performance following ours. The detectors are trained on attack images of BIM and PGD (denoted as "seen") and evaluated on unknown attacks. The detection performance on the known attacks is also included, shown in parentheses for reference. We observe a trend similar to known attack detection, where the Mahalanobis method shows saturated performance on the attacks for which its hyperparameters are tuned. In contrast, our approach shows more consistent performance across all unknown attacks except the Semantic attack, with an average of over 95% AUROC, and outperforms the compared method in 18 out of 28 cases. Our approach can effectively characterize adversarial perturbations even for unknown attacks.
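As a concrete example of one attack family evaluated above, the following is a minimal FGSM sketch: perturb the input by ε in the direction of the sign of the input gradient of the classification loss. The ε value and the toy model are illustrative assumptions, not the paper's settings (see Table 6 for those).

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, eps=8 / 255):
    """One-step FGSM: x_adv = clip(x + eps * sign(grad_x L(f(x), y)))."""
    x = x.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    # Keep the adversarial image in the valid pixel range.
    return x_adv.clamp(0.0, 1.0).detach()

# Toy usage: a tiny linear classifier on 3x8x8 inputs (illustrative only).
toy = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10))
x = torch.rand(4, 3, 8, 8)
y = torch.randint(0, 10, (4,))
x_adv = fgsm_attack(toy, x, y)
```

Iterative attacks such as BIM and PGD repeat this step with a smaller step size and a projection back into the ε-ball.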

C. CORRUPTED INPUT DETECTION
In addition to the widely accepted OOD and adversarial detection setups, we consider corrupted inputs as another type of anomaly. Deployed in the real world, neural networks are known to suffer from imperfect samples due to the data acquisition process and environmental factors, such as motion blur or weather conditions [1]-[3]. We use image classification datasets designed to benchmark the robustness of neural networks under realistic challenging conditions. CIFAR-10-C [2] consists of 19 diverse corruption types in four categories (noise, blur, weather, and digital) at five different severity levels, applied to test images of the CIFAR-10 dataset. CURE-TSR [8] is a traffic sign recognition dataset that includes real-world and simulated challenging conditions of 12 types and five severity levels.
For each dataset, a ResNet model is trained on corruption-free images, and the gradients are collected from pristine images in the test sets and their corrupted versions. We utilize the Mahalanobis method [32] for comparison because it showed the best performance among all the compared methods for OOD detection and adversarial detection.
In the experiments, we observed that the AUROC scores are highly saturated for both methods in many cases, particularly for the CIFAR-10-C dataset, which calls for a more comprehensive comparison. To better facilitate the performance comparison, we employ the corrected repeated k-fold cross-validated (CV) paired t-test [57] as a measure of statistical significance. The paired t-test is a statistical test for comparing two different learning schemes, A and B, based on a number of observations; in this case, predictive accuracies a and b. To improve the stability of the test, Nadeau and Bengio [58] proposed a corrected variant for the CV setup, and Bouckaert and Frank [57] extended it to a repeated CV setup. For the corrected repeated k-fold CV test, each model is trained using k-fold cross-validation sets and the process is repeated r times (r > 1), resulting in a_ij and b_ij for each fold i, 1 ≤ i ≤ k, and each run j, 1 ≤ j ≤ r. Let x_ij be the observed difference x_ij = a_ij − b_ij, and let m and σ² be the estimates of the mean and variance,

m = (1 / (k·r)) Σ_{i=1}^{k} Σ_{j=1}^{r} x_ij,   σ² = (1 / (k·r − 1)) Σ_{i=1}^{k} Σ_{j=1}^{r} (x_ij − m)².

The test statistic t is computed as

t = m / √( (1/(k·r) + n₂/n₁) · σ² ),

where n₁ and n₂ are the numbers of training and testing instances for the variance estimate correction, respectively. t is compared to a threshold value based on the significance level p to determine whether the performance of the two models is significantly different (i.e., statistically significant, SS). In our experiments, we use k = 5, r = 2, and p = 0.05. We report the detection performance on CURE-TSR in Table 3 and CIFAR-10-C in Table 4.

TABLE 5. Classification accuracy for models used in OOD and adversarial detection (%). For OOD detection, we report the accuracy on test sets of the in-distribution datasets. For adversarial detection, the model trained on clean CIFAR-10 is tested on the adversarial attack images generated using the test set. We also include the accuracy of models used in Section III-C2.
As the statistical test is conducted using model predictions, we report the performance in terms of detection accuracy rather than AUROC. We highlight the SS detection accuracy in green and red, where green indicates that our method outperforms the Mahalanobis method, and red shows the opposite case. Both methods show higher variability in performance across different severity levels of all corruption types for CURE-TSR compared to the relatively saturated results for CIFAR-10-C. The proposed method outperforms the compared method in all 95 cases for CIFAR-10-C and in 30 out of 37 SS cases (60 overall cases) for CURE-TSR. The other 23 cases of CURE-TSR show no statistically significant differences. Our approach also shows statistically superior performance in 29 out of 31 cases in the average detection accuracy across all severity levels of each corruption type in both datasets. The remaining two cases show no statistical difference. Note the superior SS performance of our approach, even at lower severity levels for both datasets. This shows that our gradient-based representations can characterize corruption more effectively, even at a subtle degree. Overall, we highlight that our approach outperforms the activation-based approach in terms of SS results: 94.7% among the corruption-level-specific performances and 100% among the corruption-wise average performances.
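The corrected repeated k-fold CV paired t statistic above can be computed directly; in this sketch the variable names follow the text (n_train = n₁, n_test = n₂), and `diffs` holds the k·r observed differences x_ij = a_ij − b_ij.

```python
import math

def corrected_repeated_cv_ttest(diffs, n_train, n_test, k, r):
    """Corrected repeated k-fold CV paired t statistic
    (Nadeau & Bengio correction, extended by Bouckaert & Frank)."""
    n = k * r
    assert len(diffs) == n
    m = sum(diffs) / n
    var = sum((x - m) ** 2 for x in diffs) / (n - 1)
    # The correction replaces the naive 1/n variance factor with
    # 1/n + n_test/n_train to account for overlapping training folds.
    return m / math.sqrt((1.0 / n + n_test / n_train) * var)
```

With k = 5 and r = 2 as in the experiments, |t| is then compared against the critical value of the t-distribution at significance level p = 0.05 to decide statistical significance.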

D. IMPLEMENTATION DETAILS
Classification Networks. For the image classification task, we employ two state-of-the-art neural network architectures, which are then utilized to generate gradient-based representations with confounding labels: ResNet [45] and DenseNet [52]. We use ResNet with 18 layers and DenseNet with 100 layers, a growth rate of k = 12, and a dropout rate of 0. Both networks are trained from scratch to minimize the cross-entropy loss with SGD optimization (Nesterov momentum factor of 0.9 and weight decay of 5 × 10⁻⁴). A learning rate scheduler decays the learning rate by a factor of 0.1 at 50% and 75% of the total number of training epochs. The models are trained for 300 epochs with a starting learning rate of 0.1 and a batch size of 64. The classification accuracy on test sets for the models used in the paper is listed in Table 5.

Detection Networks. For anomalous input detection tasks, we utilize a simple multilayer perceptron (MLP) architecture of two fully-connected layers, with ReLU non-linearity and dropout following the first fully-connected layer and a Sigmoid after the second layer. Given an input of gradient-based representations with dimension d, the fully-connected layers have dimensions of d × 40 and 40 × 1. The detectors are trained to minimize the binary cross-entropy loss using the Adam optimizer and a learning rate of 1 × 10⁻³ for 30 epochs.
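The detector architecture described above (d → 40 → 1 with ReLU, dropout, and Sigmoid) can be sketched as follows; the dropout probability is our assumption, since the text does not specify it.

```python
import torch
import torch.nn as nn

class GradientDetector(nn.Module):
    """Two-layer MLP anomaly detector over gradient-based representations:
    Linear(d, 40) -> ReLU -> Dropout -> Linear(40, 1) -> Sigmoid."""
    def __init__(self, d, hidden=40, p_drop=0.5):  # p_drop is assumed
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

# Toy usage on random 16-dimensional representations.
det = GradientDetector(d=16)
det.eval()
scores = det(torch.randn(8, 16))
```

Training would minimize `nn.BCELoss` between `scores` and binary normal/anomalous labels with Adam at a learning rate of 1 × 10⁻³, as stated above.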
Adversarial Attack Hyperparameters. To generate adversarial attack images, we borrow publicly available code bases 1 and utilize the hyperparameter values listed in Table 6.

E. ABLATION STUDIES
Our approach to probing the purview of neural networks is free of hyperparameters. It involves some parameters that can be adjusted, but they remain fixed for the experiments shown throughout the paper: 1) the design of confounding labels and 2) the anomalous input detector architecture and training. This section provides detailed ablation studies on these two factors to further validate that our method does not require hyperparameter tuning.

1) Confounding Label Designs
We introduced confounding labels as a methodology to elicit a gradient response without relying on any information regarding the inputs given during inference. The confounding labels are devised by combining the one-hot encodings of multiple classes, and we utilize the default choice in which the one-hot encodings of all classes are combined (i.e., an all-hot label) throughout the experiment section. In this section, we present ablation studies on different designs of confounding labels, as opposed to the choice of all-hot labels in generating gradients. We establish two types of confounding labels based on the reference to information regarding inputs during inference: no-reference (NR) and full-reference (FR) designs. The terminologies are borrowed from the field of image quality assessment [59], where the reference to clean images is a critical factor in evaluating the quality of corrupted images. Our goal is to probe the purview of neural networks during inference, where we generally do not have access to any information regarding the given inputs. Nonetheless, we consider both options to explore the effect of confounding label designs and show the most viable options in each case in the OOD and adversarial detection setups. Following the experimental setup presented in Sections IV-A and IV-B, we utilize a ResNet classifier trained on CIFAR-10. For adversarial detection, we employ the FGSM, BIM, C&W, PGD, IterLL, and Semantic attacks. For OOD detection, we utilize SVHN, resized LSUN, and resized ImageNet as before, along with an additional dataset, STL-10 [60], whose classes shared with CIFAR-10 allow for unique insights.
No-Reference (NR) Labels. To construct NR confounding labels, we only utilize the information that is safely assumed to be available at inference time: trained classes of the classifier and model outputs in response to given inputs during inference. We formulate three types of NR labels based on 1) top-k predictions, 2) taxonomy of trained classes, and 3) maximum logit values. The top-k prediction-based confounding labels are devised by combining the one-hot encodings of the top 1 through k classes based on the predicted class probabilities. The class taxonomy-based labels are constructed based on the two superclasses of CIFAR-10: animals (six subclasses: bird, cat, deer, dog, frog, and horse) and vehicles (four subclasses: airplane, car, ship, and truck). For each superclass, the labels are implemented by combining the one-hot encodings of the subclasses. The logit-based label design is based on the literature introduced in Section II-C, where the maximum logit value is directly backpropagated in generating gradients instead of loss values between the model outputs and some labels.
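The NR label constructions above (and the default all-hot label) can be sketched as follows. The logit-based variant backpropagates the maximum logit directly and therefore needs no label, so it is omitted; the class index sets are placeholders, not the paper's exact encodings.

```python
import numpy as np

def all_hot_label(num_classes):
    """Default confounding label: every class marked positive."""
    return np.ones(num_classes)

def topk_label(probs, k):
    """NR label combining the one-hot encodings of the top-k classes
    under the predicted class probabilities."""
    label = np.zeros(len(probs))
    label[np.argsort(probs)[-k:]] = 1.0
    return label

def taxonomy_label(subclass_indices, num_classes):
    """NR label combining the one-hot encodings of one superclass's
    subclasses, e.g. the six CIFAR-10 animal classes."""
    label = np.zeros(num_classes)
    label[list(subclass_indices)] = 1.0
    return label
```

For CIFAR-10, the animal and vehicle superclasses would each supply one subclass index set to `taxonomy_label`.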
We employ the three types of NR labels in gradient generation for adversarial and OOD detection and report the detection performance in AUROC in Table 7. Overall, the all-hot label performs best in 7 out of 10 cases. For adversarial detection, the all-hot label's performance is followed by taxonomy-based labels and maximum logit values. Top-k prediction-based labels are the least effective in generating gradient-based representations that capture perturbations. This is because of the nature of adversarial attacks, which are specifically designed to deceive the classifiers and result in perturbed probability distributions. Consequently, the top-k predictions are unreliable for eliciting meaningful gradient responses. On the other hand, the trained class-based labels can generate gradients for specific semantic categories, which capture adversarial perturbations more effectively. For OOD detection, the results exhibit less variation, except for the inferior performance of taxonomy-based labels for ImageNet and their superior performance for STL-10. Contrary to adversarial attacks, OOD samples are not intentionally devised for incorrect classification but are drawn far from the training data distribution. The trained classes of CIFAR-10 may be insufficient for the far larger number of classes represented in the ImageNet dataset, leading to an inferior performance. Conversely, the shared classes between STL-10 and CIFAR-10 lead to more relevant gradient responses and, thus, superior detection performance. Overall, the all-hot confounding label is the most viable design for generating gradient-based representations that can distinguish adversarial and OOD samples from clean and in-distribution data, with no reference to information regarding the data samples.
Full-Reference (FR) Labels. When there is no available information concerning the given inputs during inference, the all-hot confounding label is proven to be the best option. However, if we have access to information regarding the inputs, we can devise better confounding label designs. For adversarial detection, we consider targeted adversarial attacks where the perturbation is added to the images so that they can be classified as specific target classes. The target classes can then be used to formulate the confounding labels using their one-hot encodings. For OOD detection, we utilize STL-10 dataset for its nine common classes with CIFAR-10 and one unique class. A common class-based confounding label can be constructed by combining the one-hot encodings of the nine common classes and the unique class-based label with the unique class' one-hot encoding.
The detection results obtained using FR labels are shown in Table 8. For adversarial detection, we utilize four attack types (FGSM, BIM, C&W, and PGD) and present the average results across all target classes. The target-class-based confounding labels perform better than the all-hot labels for three out of four adversarial attacks. For OOD detection with STL-10, the unique class-based label performs better than the all-hot label, and the common class-based labels perform worse than the all-hot label. The unique class-based confounding label can elicit gradient responses with respect to the features of the unique class of CIFAR-10, which are absent in STL-10. As a result, the gradient distributions exhibit a more significant gap between the datasets and lead to a better OOD detection performance. Overall, the knowledge about inputs given during inference has proven useful in designing confounding labels to generate gradient responses that can better characterize the anomaly in inputs, leading to improved adversarial and OOD detection performances compared to NR label designs.

2) Detector Designs and Training
We utilize a simple multilayer perceptron (MLP) as an anomalous input detector using gradients. This section provides ablation studies on the detector network design and its training. We consider the ablation settings for the following hyperparameters:
• Layers: number of layers for the MLP
• Neurons: number of neurons in each layer
• Epochs: number of passes through the dataset in training
• Learning rates: step size at each iteration in optimization
Following Section IV-A, we utilize the OOD detection setup with a ResNet classifier and the CIFAR-10 dataset as in-distribution. The detection performances using various hyperparameter values are reported in Table 9, measured in detection accuracy, AUROC, and AUPR, using 5-fold cross-validation. The values used for the experiments in Sections IV-A through IV-C are referred to as the default values and are underlined for reference. For the ablation of each hyperparameter, all other hyperparameters are held constant at the default values, and only the specific hyperparameter in question is changed.
We observe that varying the hyperparameter values has little effect on the detection performance for all hyperparameters. The maximum AUROC gap observed for SVHN and LSUN is only 0.14% when varying the number of epochs, and 0.46% for ImageNet with a different number of neurons. The effect of different hyperparameter values only leads to an average AUROC gap of 0.09% for SVHN, 0.1% for LSUN, and 0.32% for ImageNet. Note that the default values for the hyperparameters do not always lead to the best performance. This ablation study further highlights the simplicity of our approach: no hyperparameter tuning is required in detector design and training, nor in generating gradients that effectively characterize the anomaly in inputs. The gradient-based representations generated with confounding labels have proven useful in differentiating anomalous inputs from the perspective of models without relying on a sophisticated detector.

V. CONCLUSION
In this paper, we propose to examine the data-dependent effective capacity of neural networks to probe their purview. We define the purview of a model as the capacity required to characterize given samples during inference, in addition to the capacity defined by its training data. Inspired by the utility of gradients in model training, we utilize gradients to measure the amount of change required for a model to characterize inputs more accurately during inference. To facilitate gradient generation during inference, we introduce confounding labels that can be formulated with no information regarding the given inputs. We validate the effectiveness of gradients generated with confounding labels in capturing anomalies in inputs for detecting OOD, adversarial, and corrupted inputs. Finally, we provide extensive ablation studies for the design of confounding labels and hyperparameter settings.

GHASSAN ALREGIB (Fellow, IEEE) is currently the John and Marilu McCarty Chair Professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. In the Omni Lab for Intelligent Visual Engineering and Science (OLIVES), he and his group work on robust and interpretable machine learning algorithms, uncertainty and trust, and human-in-the-loop algorithms. The group has demonstrated their work on a wide range of applications, such as autonomous systems, medical imaging, and subsurface imaging. The group is interested in advancing the fundamentals as well as the deployment of such systems in real-world scenarios. He has been issued several U.S. patents and invention disclosures. He is an IEEE Fellow and is active in the IEEE. He served on the editorial boards of several transactions and served as the TPC Chair for ICIP 2020, ICIP 2024, and GlobalSIP 2014. He was an area editor for the IEEE Signal Processing Magazine. In 2008, he received the ECE Outstanding Junior Faculty Member Award. In 2017, he received the 2017 Denning Faculty Award for Global Engagement.
He and his students received the Best Paper Award at ICIP 2019.

VOLUME 11, 2023