Search-and-Attack: Temporally Sparse Adversarial Perturbations on Videos

Modern neural networks are known to be vulnerable to adversarial attacks in various domains. Although most attack methods densely change the input values, recent works have shown that deep neural networks (DNNs) are also vulnerable to sparse perturbations. Spatially sparse attacks on images or on the frames of a video have proven effective, but temporally sparse perturbations on videos have been less explored. In this paper, we present a novel framework, called the Search-and-Attack scheme, to generate temporally sparse adversarial attacks on videos. The Search-and-Attack scheme first retrieves the most vulnerable frames and then attacks only those frames. Since identifying the most vulnerable set of frames involves an expensive combinatorial optimization problem, we introduce surrogate objective functions: Magnitude of the Gradients (MoG) and Frame-wise Robustness Intensity (FRI). Combined with iterative search schemes, extensive experiments on three public benchmark datasets (UCF, HMDB, and Kinetics) show that the proposed method achieves performance comparable to state-of-the-art dense attack methods.


I. INTRODUCTION
In recent years, deep neural networks (DNNs) have shown great performance in various tasks such as image classification [1]-[4] and object detection [5]-[8]. Despite this success, DNNs are known to be vulnerable to adversarial attacks, i.e., carefully crafted perturbations that fool machine learning models. Even though generating optimal adversarial perturbations is an NP-hard problem [9], [10], simple gradient-ascent-based attack algorithms [11]-[14] have proven effective at deceiving deep neural networks. To generate small and ideally imperceptible adversarial perturbations, various constraints (e.g., an l∞- or l2-norm bound) are often imposed.
While much effort has been devoted to adversarial attacks on images, adversarial attacks have been less studied
in the video domain. Recently, several studies [15]-[17] have investigated adversarial attacks on videos, but they do not fully leverage the relations among frames. More precisely, those studies attack either all frames or regularly sampled frames without considering the temporal dependency between frames. This line of approaches is suboptimal for attacking video classification models that utilize the temporal information of videos. Even though most state-of-the-art video classification models use 3D convolution [18]-[20] to capture temporal information, the models cope with temporal dynamics differently, using different strides along the time direction for efficiency and flexibility. SlowFast Networks [21] even use two separate 3D convolutional neural networks with two different frame rates. Exploiting both the common traits of video classification models, such as 3D convolution, and architecture-specific vulnerabilities is key to developing less perceptible (i.e., temporally sparse) and stronger adversarial attacks.

FIGURE 1. Overview of our Search-and-Attack pipeline. At the first stage (frame selection), given the target model parameters θ and the input x, our algorithms find the most vulnerable frames with respect to the surrogate losses MoG (Magnitude of the Gradients) or FRI (Frame-wise Robustness Intensity). Then a multi-step frame selection method, either Greedy Search or Section Search, is applied to generate the index set I: Greedy Search iteratively updates its input video, while Section Search selects one frame from each of the equally divided sections. Finally, at the second stage (perturbation generation), our algorithms create an adversarial perturbation δ_I by FGSM or PGD, where the perturbation exists only on the frames selected at the search stage.
In this paper, we propose a simple yet effective method for temporally sparse adversarial attacks against video classification models. We propose a two-stage pipeline called Search-and-Attack, which finds the most vulnerable frames of a video at the Search stage and then perturbs only those frames at the Attack stage. We formally define temporally sparse attacks and the most vulnerable frame set. Identifying the most vulnerable frames involves a nonlinear mixed-integer program. For efficient optimization, instead of the conventional loss function, we propose surrogate objective functions and find vulnerable frames with respect to these surrogate losses. With the surrogate losses, we explore a single-step method and two iterative methods for frame selection. In the Attack stage, we generate a frame-wise perturbation with modified versions of FGSM [12] and PGD [13] that only perturb the frames selected in the Search stage. To evaluate the proposed method, we carry out experiments on the HMDB, UCF, and Kinetics datasets with widely used action-recognition models such as I3D, R3D, R(2+1)D, SlowFast, and IRCSNv2. The experiments show that attacking only a few vulnerable frames is as strong as attacking all frames. To sum up, our main contributions are fourfold.
• We formally introduce temporally sparse adversarial attacks for video action recognition models.
• We propose frame search algorithms with surrogate objective functions to identify vulnerable frames.
• We study model-specific temporal vulnerability of video classification models.
• Our experiments on HMDB, UCF, and Kinetics datasets demonstrate that the proposed methods successfully search vulnerable frames and generate strong temporally sparse adversarial perturbation on videos.

II. RELATED WORK

A. ACTION RECOGNITION
Many studies have investigated models for action recognition. One approach is CNN+RNN-based models, which train RNNs on sequences of per-frame CNN features over time [22]. Another common approach is 3D convolutional neural networks, which take a set of frames as an input tensor. Some models learn spatial and temporal features concurrently [18], [19], while others use separable 3D convolutions in which temporal and spatial convolutions are applied separately [20]. In addition, there are two-stream CNNs that take RGB images and optical flow; they use short-term motion information to supplement object-appearance information [18].

B. ADVERSARIAL ATTACK
Adversarial attacks have been studied from a variety of angles, mostly demonstrated on image classification. Gradient-based methods are among the most common: they modify the input images using the gradients of the cost function [12], [13]. Although these attacks usually densely change the input values, recent works have shown that DNNs are also vulnerable to sparse perturbations. For example, the Jacobian-based Saliency Map Attack (JSMA) constructs a saliency map, which shows the impact of each pixel on the resulting classification, through forward gradients [23]. Based on the saliency map, the JSMA iteratively attacks the most influential pixel. Moreover, [14] suggests optimization-based adversarial attacks that use the l0-norm as a distance metric. Recent works have studied sparse and imperceptible perturbations to introduce more effective attack methods; for more details, see, e.g., [24]-[26].

C. ADVERSARIAL ATTACK TO VIDEO
Regarding the video domain, [15] explored adversarial attacks on videos, proposing a method that optimizes a spatio-temporally sparse adversarial perturbation with a CNN+RNN model [22]. However, they sampled frames without considering the temporal dependency between frames.
On the other hand, [17] proposed a dense adversarial attack on a two-stream action-recognition model, which consists of optical flow and RGB streams. They extended the FGSM and iterative FGSM attacks to the video domain [12], [27]. Unlike those optimization-based methods, attacks using a generative model have also been suggested; for example, [16] generated black-box adversarial perturbations against a real-time action-recognition model using Generative Adversarial Networks (GANs).

III. METHODS
In this section, we first formally introduce temporally sparse adversarial perturbations. Then, we propose our Search-and-Attack methods, efficient two-stage algorithms: at the first stage (frame selection), the algorithms search for the most vulnerable frames with respect to surrogate objective functions, and at the second stage (perturbation generation), they attack the selected frames with FGSM or PGD.

A. TEMPORALLY SPARSE ADVERSARIAL ATTACKS
Let θ denote the parameters of the neural network and x ∈ R^{T×C×H×W} denote an input video with ground-truth label y, where T, C, H, and W are the number of frames, the number of channels, the height, and the width of a frame, respectively. The i-th frame of an input video x is denoted by x^(i). Our goal is to attack video classification networks with temporally sparse perturbations. The goal can be achieved by finding the most vulnerable set of N frames I ⊂ {1, · · · , T} and perturbing only the selected frames. This can be written as follows:

max_{I, δ_I} L(x + δ_I, y; θ)  subject to  |I| = N,  δ_I = Σ_{i∈I} δ^(i),  ||δ^(i)||_∞ ≤ ε,   (1)

where L is the loss function and δ^(i) ∈ R^{T×C×H×W} denotes the perturbation on the i-th frame of the video, i.e., all elements of δ^(i) are zero except those corresponding to the i-th frame. In other words, the perturbation on the most vulnerable N frames, δ_I, is defined as the sum of the frame-wise perturbations δ^(i) of the selected frames i ∈ I.

Algorithm 1: Search-and-Attack Method
Input: video x and its label y, model parameters θ, the number of attacked frames N, surrogate loss J, attack method A
Output: perturbed video x + δ_I
1: I ← Search(x, y, θ, N, J) // frame selection
2: δ_I ← A(x, y, θ, I) // perturbation generation
3: return x + δ_I

It is worth noting that Eq. (1) is a mixed-integer nonlinear program (MINLP). Since MINLP is an NP-hard combinatorial problem [28], no polynomial-time algorithm is known that finds the global optimum. The feasible set of (1) contains (T choose N) candidate index sets I, which is huge even for small problems, e.g., 35,960 sets for T = 32 and N = 4. Besides, the continuous variables δ^(i) must be optimized simultaneously. Hence, in this paper, we instead decompose the problem in (1) into two subproblems: (i) search (frame selection, optimizing I) and (ii) attack (perturbation generation δ_I for the selected frames). Figure 1 illustrates the overall pipeline of our approach.
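To make the temporal-sparsity constraint concrete, the frame-restricted perturbation δ_I can be viewed as a dense perturbation multiplied by a binary frame mask. The helper below is a hypothetical illustration of that construction, not the paper's implementation.

```python
import numpy as np

def frame_mask(T, C, H, W, I):
    """Binary mask M_I: ones on the frames in the index set I, zeros elsewhere.
    Multiplying a dense perturbation by this mask yields a temporally sparse
    perturbation delta_I that touches only the selected frames."""
    mask = np.zeros((T, C, H, W), dtype=np.float32)
    for i in I:
        mask[i] = 1.0
    return mask

# A dense perturbation restricted to frames {1, 5}: only 2 of the T frames change.
dense = np.random.uniform(-1.0, 1.0, size=(8, 3, 4, 4)).astype(np.float32)
delta_I = dense * frame_mask(8, 3, 4, 4, [1, 5])
```

Only the masked entries of `delta_I` are non-zero, which is exactly the structure assumed for δ_I in Eq. (1).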

B. SURROGATE OBJECTIVE FUNCTIONS
At the search stage, we propose an alternative formulation with a surrogate objective function J to perform the frame selection. The optimization problem in (1) is reduced to an integer program as follows:

I* = arg max_{I ⊂ {1,···,T}, |I|=N} Σ_{i∈I} J(i; x, y, θ),   (2)

where J scores the vulnerability of each frame. Alg. 1 shows how the generalized search method works.

1) MAGNITUDE OF THE GRADIENTS (MoG)
We derive the first surrogate objective function from the first-order Taylor expansion of the loss function L. The Taylor series for the original objective function is given as follows:

L(x + δ_I, y; θ) ≈ L(x, y; θ) + Σ_{i∈I} ∇_x L(x, y; θ)^T δ^(i).   (3)

From the maximum perturbation constraint ||δ^(i)||_∞ ≤ ε in (1), the upper bound of the first-order approximation can be derived as follows:

L(x + δ_I, y; θ) ≤ L(x, y; θ) + ε Σ_{i∈I} ||∇_{x^(i)} L(x, y; θ)||_1.   (4)

The upper bound is determined by the sum of the L1-norms of the frame-wise gradients, so we name our surrogate loss the Magnitude of the Gradients (MoG). For a fixed data point x, since ε and L(x, y; θ) are constant, we define J_MoG as follows:

J_MoG(i) = ||∇_{x^(i)} L(x, y; θ)||_1.   (5)

This has the same optimal solution as maximizing the upper bound in Eq. (4), since the constant terms do not affect the arg max.

2) FRAME-WISE ROBUSTNESS INTENSITY (FRI)
To propose the second surrogate loss function, we assume that the loss function is locally linear around the data point x. More precisely, we assume that the increase of the loss value caused by δ_I is approximately the sum of the increases induced by the individual δ^(i). The second surrogate loss can be derived as follows:

L(x + δ_I) − L(x) ≈ Σ_{i∈I} [L(x + δ^(i)) − L(x)].   (6)

To avoid clutter, y and θ are omitted in the equation. Similar to MoG, the surrogate loss function can be further simplified by removing the constant L(x). The second surrogate objective function J is given as:

J_FRI(i) = L(x + δ^(i), y; θ).   (7)

We name this surrogate loss the Frame-wise Robustness Intensity (FRI). The frame-wise perturbation δ^(i) is generated by a simple method; more details are discussed in Section III-D.
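Assuming the per-frame gradients (for MoG) and the frame-wise perturbations δ^(i) (for FRI) are already available, the two surrogate scores can be sketched as below; `loss_fn` and `frame_deltas` are hypothetical stand-ins for the model loss and the attack output.

```python
import numpy as np

def mog_scores(grad):
    """J_MoG per frame: L1-norm of the loss gradient restricted to each frame.
    `grad` has shape (T, C, H, W) and approximates the gradient of L w.r.t. x."""
    T = grad.shape[0]
    return np.abs(grad).reshape(T, -1).sum(axis=1)

def fri_scores(loss_fn, x, frame_deltas):
    """J_FRI per frame: loss value after perturbing frame i alone by delta^(i).
    `loss_fn` maps a video to a scalar loss; `frame_deltas[i]` is zero outside frame i."""
    return np.array([loss_fn(x + d) for d in frame_deltas])
```

Note the trade-off discussed in Section IV-C: `mog_scores` needs one backward pass, while `fri_scores` needs one forward pass per candidate frame.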
Note that the optimal solution I* with respect to MoG or FRI (given δ) is simply the top-N frames with the largest gradient norms or frame-wise robustness intensities. A single-step frame selection algorithm can therefore be implemented efficiently by sorting.
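The single-step selection itself then reduces to sorting the T surrogate scores, as in this minimal sketch:

```python
import numpy as np

def top_n_frames(scores, N):
    """Single-step frame selection: indices of the N largest surrogate scores."""
    return sorted(np.argsort(scores)[-N:].tolist())

# e.g., top_n_frames([0.1, 0.9, 0.3, 0.8], 2) -> [1, 3]
```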

C. FRAME SELECTION
As aforementioned, the proposed surrogate loss functions (i.e., MoG and FRI) yield single-step frame selection algorithms. These surrogate losses are derived under linearity assumptions, e.g., Eq. (3) and (6). However, as the number of frames to attack increases, the surrogate losses become inaccurate due to the non-linearity of the original loss function L, i.e., the discrepancy between the original loss and the surrogate loss grows, so the frame selection becomes suboptimal. To address this problem, we propose iterative frame selection algorithms. Since δ_I is defined as the sum of the δ^(i), the iterative frame selection estimates only one frame perturbation per step.

Algorithm 3: Section Search for Frame Selection
Input: video x, label y, model parameters θ, the number of frames to attack N, surrogate loss J, attack method A
Output: frame index set I
1: function SectionSearch(x, y, θ, N, J, A)
2: I ← ∅
3: r ← T/N
4: S_n ← {(n − 1)r + j}_{j=1}^{r} for n = 1, · · · , N
5: δ^(i) ← A(x, y, θ, {i}) for each frame i
6: for n = 1 to N do
7: i* ← arg max_{i∈S_n} J(x + δ^(i), y; θ)
8: I ← I + {i*}
9: end for
10: return I
11: end function

1) GREEDY SEARCH
We propose a Greedy Search algorithm for frame selection, which iteratively selects frames while updating the input video x. The greedy search algorithm in Alg. 2 selects one frame at each iteration based on J, where J can be either J_MoG or J_FRI. The selected frame i is then perturbed by δ^(i), i.e., x_p = x_p + δ^(i), as in line 8 of Alg. 2. The updated video is used in the next iteration to find the subsequent vulnerable frame. Although the greedy algorithm is computationally more expensive than single-step frame selection, it generally provides better frame selection. The experimental results for the greedy algorithm are given in Section IV-D.

Algorithm 2: Greedy Search for Frame Selection
Input: video x, label y, model parameters θ, the number of frames to attack N, surrogate loss J, attack method A
Output: frame index set I
1: function GreedySearch(x, y, θ, N, J, A)
2: I ← ∅
3: x_p ← x
4: for n = 1 to N do
5: δ^(i) ← A(x_p, y, θ, {i}) for each frame i ∉ I
6: i* ← arg max_{i∉I} J(x_p + δ^(i), y; θ)
7: I ← I + {i*}
8: x_p ← x_p + δ^(i*)
9: end for
10: return I
11: end function
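A minimal sketch of the greedy loop, assuming caller-supplied helpers: `score_fn(video)` returns per-frame surrogate scores (MoG or FRI) and `perturb_fn(video, i)` returns the video with frame i perturbed. Neither helper is the paper's exact implementation; this only illustrates the select-perturb-rescore structure.

```python
import numpy as np

def greedy_search(x, N, score_fn, perturb_fn):
    """Greedy frame selection: pick the best frame under the surrogate score,
    perturb it, rescore on the updated video, and repeat N times."""
    I, x_p = [], x.copy()
    for _ in range(N):
        scores = np.asarray(score_fn(x_p), dtype=float)
        scores[I] = -np.inf  # never reselect an already chosen frame
        i_star = int(np.argmax(scores))
        I.append(i_star)
        x_p = perturb_fn(x_p, i_star)
    return sorted(I), x_p
```

Because the scores are recomputed on the perturbed video at every step, this costs N times more score evaluations than single-step selection.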

2) SECTION SEARCH
In the experiments with Greedy Search, we observed that the selected frames tend to disperse compared to the non-iterative method. More details are discussed in Section IV-D2 and Section IV-E.
Motivated by this observation, we propose a more efficient iterative method called Section Search, which selects one frame from each of N equally divided sections. The section search algorithm divides the input video into sections {S_n}_{n=1}^{N} of equal length r = T/N, i.e., S_n = {(n − 1)r + j}_{j=1}^{r}, as in line 4 of Alg. 3. Then, for each section S_n, only one frame is selected based on J. Since the section search does not update the input, it is computationally more efficient than the greedy search algorithm, while still generally providing better frame selection than single-step frame selection. Figure 2 illustrates the idea of the section search algorithm.
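Under the assumption that the per-frame surrogate scores are computed once up front (Section Search never updates the input), the split-and-pick step can be sketched as:

```python
def section_search(scores, N):
    """Section Search: split the T frames into N sections of length r = T / N
    and take the highest-scoring frame from each section, so the selected
    frames are evenly dispersed across the video."""
    T = len(scores)
    r = T // N  # assumes N divides T, as in the paper's setting (T = 32, N = 8)
    I = []
    for n in range(N):
        section = range(n * r, (n + 1) * r)
        I.append(max(section, key=lambda i: scores[i]))
    return I
```

One score pass suffices here, which is why Section Search is cheaper than the greedy loop while still spreading the selected frames out.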

D. PERTURBATION GENERATION
We use the FGSM or PGD attack to generate adversarial perturbations. Unlike previous adversarial attacks on videos [15], [17], we apply FGSM and PGD only to the frames I selected at the frame selection stage. The FGSM attack in our framework can be written as follows:

A_FGSM(x, y, θ, I) = ε · sign(∇_x L(x, y; θ)) ⊙ M_I,   (8)

where ⊙ is the Hadamard (element-wise) product and M_I is the binary mask whose elements are one on the frames in I and zero elsewhere. Similarly, the update step of the PGD attack in our framework for iteration k ∈ {1, · · · , K} is defined as below:

δ_k = Π_ε( δ_{k−1} + α · sign(∇_x L(x + δ_{k−1}, y; θ)) ⊙ M_I ),   (9)

where Π_ε denotes the projection onto the set of small perturbations {δ : ||δ||_∞ ≤ ε}. Starting from δ_0 = 0, the perturbation of the PGD attack is obtained after K updates, i.e., A_PGD(x, y, θ, I) = δ_K(x, I).
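The masked attacks can be sketched as follows, assuming a binary frame mask and a caller-supplied `grad_fn` that returns ∇_x L(x + δ, y; θ); this mirrors the masked FGSM/PGD updates rather than reproducing the authors' code.

```python
import numpy as np

def fgsm_masked(grad, mask, eps):
    """Masked FGSM: eps * sign(grad), restricted to the selected frames."""
    return eps * np.sign(grad) * mask

def pgd_masked(grad_fn, mask, eps, alpha, steps):
    """Masked PGD: gradient-sign steps of size alpha, projected back into the
    l-infinity ball of radius eps; `grad_fn(delta)` returns the gradient of L
    at x + delta (hypothetical caller-supplied closure over the model)."""
    delta = np.zeros_like(mask)
    for _ in range(steps):
        delta = delta + alpha * np.sign(grad_fn(delta)) * mask
        delta = np.clip(delta, -eps, eps)  # projection onto {||delta||_inf <= eps}
    return delta
```

The mask multiplication guarantees that both attacks leave the unselected frames untouched, which is the defining property of the temporally sparse perturbation.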

IV. EXPERIMENTS
In this section, we validate the effectiveness of our attack methods on three video classification datasets: UCF101, HMDB51, and Kinetics400. We first briefly introduce the datasets and provide implementation details. We then present the experimental results of single-step attacks, greedy search, and section search. Lastly, we discuss the motivating observation that base networks in video classification have architecture-specific vulnerabilities and in which circumstances each proposed method is recommended.
A. DATASETS
The UCF101 dataset [29] is widely used in video action recognition. It consists of 13,320 video clips from 101 human action categories, with a total length of 27 hours and an average clip duration of 7.2 seconds. All videos were collected from YouTube and have a fixed frame rate of 25 FPS at a resolution of 320 × 240. We split UCF101 according to the official split-1, which divides the dataset into an 8K training set and a 3K testing set.
The HMDB51 dataset [30] is a large collection of realistic videos from various sources, including movies and web videos. It is composed of 6,849 video clips from 51 action categories (such as ''jump'', ''kiss'', and ''laugh''), with each category containing at least 101 clips. The original evaluation scheme uses three different training/testing splits; in each split, each action class has 70 clips for training and 30 clips for testing. We use split-1 for our experiments.
The Kinetics400 dataset [31] is one version of the Kinetics family, a large-scale, high-quality collection of video URLs comprising about 650,000 clips across 400/600/700 human action classes. The videos include human-object interactions, such as playing instruments, as well as human-human interactions, such as handshaking and hugging. Each clip is annotated with a single action class and lasts around 10 seconds. We split Kinetics400 according to the official split in [31]: 250-1000 videos per class for training, 50 videos per class for validation, and 100 videos per class for testing.

C. SINGLE-STEP FRAME SELECTION
We compare the surrogate loss functions MoG and FRI introduced in Section III-B. Table 2 shows the performance of various video classification models against attacks based on MoG or FRI on UCF101; FGSM was used for perturbation generation. Overall, FRI chooses more vulnerable frame sets than MoG, achieving lower classification accuracy. FRI in (7) is the sum of the loss changes induced by 'actually' perturbing frames with δ^(i), which allows a more accurate estimate of frame vulnerability than the norm of the gradients. However, FRI incurs a relatively higher computational cost than MoG, since it needs additional forward passes to evaluate L(x + δ^(i), y; θ) over the candidate frames. In other words, there is a trade-off between MoG and FRI with respect to time complexity and the quality of frame selection (or the attack success rate). We additionally provide experimental results on how the attack performance changes with varying N (see Fig. 3). Although the performance of the proposed attack depends on the choice of N, the results are reported with fixed N = 8: since the difference between the threat models at lower-sparsity attacks (N ≤ 8) is insignificant, we fix N at 8 to clearly show the characteristics of each threat model while maintaining 25% sparsity.

D. ITERATIVE FRAME SELECTION
We evaluate the effectiveness of the iterative frame selection methods: greedy search and section search. We also provide experimental results that show model-specific vulnerability and explain why section search can achieve comparable adversarial attacks at much less computational cost than greedy search.

1) GREEDY SEARCH
We compare greedy search-based frame selection with single-step frame selection methods.

TABLE 3. Classification accuracy on UCF101. We compare single-step frame search and greedy search (denoted with the suffix '-G') for frame selection. The adversarial attacks are generated by FGSM after the frame selection. Overall, greedy search-based frame selection achieves a 2.37% higher success rate on average than single-step frame selection.

With the same surrogate objective functions MoG and FRI as in single-step frame selection, we apply the greedy search, which chooses one frame at a time and updates the surrogate objective function. The greedy search algorithms with MoG and FRI are denoted by MoG-G and FRI-G, respectively. Table 3 shows the classification accuracy on UCF101 against adversarial examples. Adversarial attacks (FGSM) with greedy search-based frame selection achieve 2.37% higher attack success rates than those with single-step frame selection. For R3D, R(2+1)D, and IRCSNv2, the improvement by greedy search is significant for both surrogate objective functions. In particular, the largest improvement by greedy search is 7% with MoG-G on R(2+1)D.

2) MODEL-SPECIFIC VULNERABILITY
We observe that the performance gain by Greedy Search varies depending on the base network, as shown in Table 3. Interestingly, Greedy Search overall improved the power of the adversarial attacks, but the improvement of I3D and SlowFast by Greedy Search is relatively small. We investigated the different behaviors and found that models have different vulnerabilities at the frame level. For some architectures, single-step methods are sufficient to find a set of vulnerable frames. To analyze this hypothesis, we visualize the L1-norm of each frame's gradient for the baseline models (Fig. 4). In the case of I3D and SlowFast, frames can be categorized into two groups: highly vulnerable frames and others, and the vulnerable group has significantly larger gradients than the non-vulnerable frames. For instance, in Fig. 4, I3D and SlowFast have 8 highly vulnerable frames whose gradient norms are larger than 0.4 and 0.6, respectively. We conjecture that this significant difference makes the frame selection problem easy and leads to the small performance gap between single-step methods and greedy search. Another interesting observation is that the vulnerable frames are evenly spaced. For example, in the SlowFast case, every fourth frame is a highly vulnerable frame (e.g., {1, 5, 9, · · · , 29}). Besides, the histogram in Fig. 5 shows that both FRI and MoG with FGSM mostly choose every fourth frame {2, 7, 11, 15, · · · , 31}.

TABLE 4. Classification accuracy on UCF101. We compare single-step frame search and section search (denoted with the suffix '-S') for frame selection. The adversarial attacks are generated by FGSM after the frame selection. Overall, section search-based frame selection achieves a 3.63% higher success rate on average than single-step frame selection.

3) SECTION SEARCH
Motivated by the observation, we proposed Section Search in Section III-C2. Our experiments in Table 4 demonstrate that Section Search (denoted by MoG-S and FRI-S) also significantly improves the quality of frame selection, resulting in stronger adversarial attacks than those with single-step frame selection (denoted by MoG and FRI). Section Search improves the success rate by 3.63% on average on the UCF101 dataset. The largest improvement is 10.2% with R(2+1)D, which is even larger than the improvement by Greedy Search.

E. MAIN RESULTS
For a more comprehensive comparison, we provide experimental results on UCF101, HMDB51, and Kinetics400 in Table 5, Table 6, and Table 7, respectively. Before discussing the performance, we define a metric called Frame Interval Variance (FI-VAR) to summarize the patterns of selected frames. Let I denote the set of selected frame indices and I_i be the i-th smallest element of I. Given the indices of the selected frames, FI-VAR is defined as the variance of the intervals between the indices:

FI-VAR = Var({d_1, · · · , d_N}),   (10)

where d_i = I_{i+1} − I_i for i = 1, · · · , N − 1, and d_N = T − I_N + I_1 is added to construct a cyclic interval. FI-VAR quantifies how evenly the selected frames are spread out. When the frames are evenly spread, e.g., {1, 5, · · · , 29} as discussed for SlowFast in Section IV-D2, FI-VAR ≈ 0. If the selected frames are congregated, then, thanks to d_N, FI-VAR increases. Also, randomly scattered indices may have a large FI-VAR.

FIGURE 6. Attack success rate for a given FI-VAR on UCF101. We randomly select frames with a particular FI-VAR and attack the R3D model on those frames. The horizontal line shows the baseline success rate without a fixed FI-VAR. As the FI-VAR is varied, the smaller the FI-VAR, the stronger the randomly selected attack.
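The FI-VAR metric can be sketched as follows (1-indexed frame indices; the wrap-around formula for the cyclic interval d_N is our reading of the text):

```python
import numpy as np

def fi_var(indices, T):
    """Frame Interval Variance: variance of the intervals between sorted selected
    frame indices, including a cyclic wrap-around interval d_N from the last
    selected frame back to the first."""
    I = sorted(indices)
    d = [I[k + 1] - I[k] for k in range(len(I) - 1)]
    d.append(T - I[-1] + I[0])  # cyclic interval d_N
    return float(np.var(d))

# Evenly spaced frames (every 4th frame of 32) have FI-VAR = 0,
# while congregated frames yield a large FI-VAR via the wrap-around term.
```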

TABLE 5.
Classification accuracy on the UCF101 dataset against the baseline and various Search-and-Attack methods. '-G' and '-S' denote greedy search and section search, respectively. For the PGD attack, we use α = 0.5, ε = 2, n = 4. * and † denote the best and second-best performance, respectively. We select 8 frames for each video. With FGSM, FRI-S shows the best performance or a very small gap to the best. Unlike with FGSM, with PGD the improvement from the iterative methods is significant. Interestingly, the FI-VAR of the FRI-G method decreases much more with PGD than with FGSM: when the frames are updated at each iteration and the attack intensity is strong, the frames are naturally selected to increase the distance between them.

TABLE 6.
Classification accuracy on the HMDB51 dataset against the baseline and various Search-and-Attack methods. '-G' and '-S' denote greedy search and section search, respectively. For the PGD attack, we use α = 0.5, ε = 2, n = 4. * and † denote the best and second-best performance, respectively. We select 8 frames for each video. As with UCF101, FRI-S generally shows the best performance.
We observe that frame selection with a smaller FI-VAR often leads to stronger attacks; in other words, the more dispersed the frames, the better they are as candidates to perturb. Fig. 6 shows the relation between FI-VAR and accuracy: here, we randomly select frames and attack them with FGSM, and the attack success rate is negatively correlated with FI-VAR.
Overall, the surrogate objective function FRI shows better performance than MoG. In addition, the iterative algorithms, Greedy Search and Section Search, outperform the single-step frame selection method. In particular, the combination of Greedy Search for frame selection and PGD for perturbation generation achieves the best performance on average in various settings. Table 5 shows that on the UCF101 dataset, Greedy Search with PGD, specifically FRI-G with PGD, achieves the highest attack success rate (i.e., the lowest classification accuracy) across all base networks. Similarly, on HMDB51, Table 6 demonstrates that FRI-G with PGD achieves the best performance in four out of five settings. On Kinetics400, shown in Table 7, MoG-G with PGD achieves the best performance in three out of five settings, whereas FRI-G with PGD is the best-performing combination in two settings; the performance gap between FRI-G and MoG-G with PGD is marginal. In short, FRI-G with PGD is overall the best Search-and-Attack method.
When considering efficient schemes for temporally sparse attacks, FGSM is preferred to PGD. Interestingly, in this case, Section Search is a better option than Greedy Search. Table 5, Table 6, and Table 7 show that with FGSM, FRI-S outperforms FRI-G by 0.72%, 0.26%, and 0.2% on average in UCF101, HMDB51, and Kinetics400, respectively. It is worth noting that Section Search (FRI-S) is ×1.9 ∼ ×2.3 faster than Greedy Search (FRI-G). We conjecture that if the perturbation method is strong (e.g., PGD), the loss may change drastically after each perturbation, so an iterative method, especially Greedy Search, is more effective. On the other hand, when the perturbation generation is relatively weak (e.g., FGSM), Section Search is more efficient and achieves a comparable attack success rate.

TABLE 7. Classification accuracy on the Kinetics400 dataset against the baseline and various Search-and-Attack methods. '-G' and '-S' denote greedy search and section search, respectively. For the PGD attack, we use α = 0.5, ε = 2, n = 4. * and † denote the best and second-best performance, respectively. We select 8 frames for each video. As with UCF101 and HMDB51, FRI-S generally shows the best performance.

V. CONCLUSION
In this work, we propose a Search-and-Attack framework for temporally sparse adversarial perturbations on videos. The framework has two stages: frame selection (Search) and perturbation generation (Attack). To identify the most vulnerable frames in the Search stage, we explore single-step search methods with surrogate objective functions, MoG and FRI. In addition, for more accurate frame selection at more computational cost, we propose Greedy Search and Section Search. Our extensive experiments on three benchmark datasets, UCF101, HMDB51, and Kinetics400, show that Greedy Search with FRI and PGD (FRI-G + PGD) achieves the best attack success rate on average. We observe that neural networks for video classification have model-specific vulnerabilities in the time domain and that evenly spaced selected frames are often effective. Motivated by this observation, we propose Section Search; interestingly, with FGSM, Section Search with FRI (FRI-S + FGSM) shows the best attack success rate while being computationally more efficient than Greedy Search. In our experiments, we assume that the number of frames to attack, N, is given rather than estimated dynamically. Since a strategy for determining such parameters in an unsupervised manner would be useful, clustering techniques for complex networks [33]-[35] could be used to determine them.
Future directions of this work include studying more imperceptible and efficient adversarial perturbations for videos, as well as robust neural network architectures for video classification. Specifically, since the proposed attack targets neural networks on videos, it can be used to verify the robustness of holistic video understanding systems; for example, it can help verify the robustness of, or build more robust versions of, a video tagging system [36] or an autonomous car [37] that uses video neural networks to make decisions.