Unleashing Dynamic Pipeline Reconfiguration of P4 Switches for Efficient Network Monitoring

As in many fields that require efficient and effective data classification, Machine Learning (ML) is becoming increasingly popular in network management and monitoring. ML algorithms are generally complex, and therefore better suited for execution in the centralized control plane of modern networks, but they are also heavily reliant on data, which is necessarily collected in the data plane. The inevitable consequence is that large volumes of data may need to be transferred from the data plane to the control plane, at the risk of congesting the control communication channel. This can turn into a major drawback, since congestion on the control channel may significantly impact network operations. It is therefore of paramount importance to design systems that minimize the interaction between data and control planes while ensuring good monitoring performance. The most recent generation of data plane programmable switches supporting the P4 language can help mitigate this problem by preprocessing traffic data at line rate. In this manuscript we follow this approach and propose P4RTHENON: an architecture that distills in the data plane the relevant information to be mirrored to the control plane, where complex analysis can be performed. P4RTHENON leverages the P4-native support for runtime data plane pipeline reconfiguration to minimize the interaction between data and control planes while ensuring good monitoring performance. We tested our scheme on the volumetric DDoS detection use case: P4RTHENON reduces the volume of exchanged data by almost 75% compared to a pure control-plane-based solution, guarantees low memory consumption in the data plane, and does not degrade the overall DDoS detection capabilities.


I. INTRODUCTION
Networks are becoming more pervasive by the day in the processes of our daily lives, from work to leisure. This creates unimaginable opportunities but also opens the floor to new threats. For this reason, modern networks should promptly respond to unexpected events to safeguard running services and avoid service disruption. Thus, to overcome the limitations of legacy network technologies, modern infrastructures require in-depth and responsive network monitoring, management, and control.
Two key points, also relevant in the 5G world [1], are (i) control and data plane separation (the so-called CUPS approach) and (ii) network programmability. CUPS improves flexibility and scalability by decoupling the logic problems from the pure data forwarding issues, while programmability allows networks to react to unwanted situations [2]. This envisions a closed-loop approach where the control plane collects real-time information about the status of the underlying network and reacts, by issuing suitable directives to the data plane, to modify its behavior [3], [4].
In this manuscript, we present P4RTHENON, a viable approach to implement a closed-loop monitoring system, which can intercept network behaviors and take actions in a real-time fashion. P4RTHENON stems from the idea of minimizing the congestion of the control channel between data and control planes [5], especially when abnormal behaviors occur.
We validate our solution by devising a volumetric Distributed Denial of Service (DDoS) attack detection over P4RTHENON, to showcase how we can minimize the impact on the control channel and keep high detection rates. P4RTHENON follows the Software Defined Networking (SDN) paradigm: the detection logic is split between a simple data plane logic and a more complex control plane strategy.
SDN is a paradigm that has been extensively investigated in the last decade and has two major flavors in terms of architecture. The first and older approach is based on the OpenFlow protocol and a centralized control plane, separated from the data plane [6]. It provides a high level of control but heavily depends on a network controller [7] that may become a single point of failure. Moreover, it does not allow fine-grained monitoring, since it is not possible to inspect and process packets in the data plane in a customized fashion [8].
The second and more recent approach is focused on making the data plane programmable and is typically based on the P4 language [9]. With P4 it is possible to design in-network monitoring strategies that fully leverage the computation capacity of the network devices [10]. This significantly reduces overhead and increases efficiency, but may affect monitoring performance if not performed properly [11].
In this latter landscape, the goal of our approach is to achieve the best possible trade-off between monitoring performance, computational complexity, and control channel utilization.
To this aim, P4RTHENON splits the monitoring task into two phases, called coarse-grained and fine-grained monitoring (CGM and FGM, respectively), which for the proposed use case have been implemented as (i) the CGM_DDoS strategy, to detect the portion of traffic suspected to belong to a DDoS attack, and (ii) the FGM_DDoS strategy, to deeply analyze the suspect traffic in the control plane and classify it into the right DDoS class if proven to be malicious. We implemented CGM_DDoS as a simple in-network P4-based solution that calculates the degree of traffic asymmetry between two end hosts A and B in the two directions (A → B and B → A), assuming that traffic is strongly asymmetric when a DDoS attack is in place, and flagging as suspect all the flows characterized by a high asymmetry degree. This strategy, based on a Count-min Sketch [12], over-estimates the number of DDoS attack flows, leading to some false positives while keeping the number of false negatives low. Once suspect flows are identified by CGM_DDoS, P4RTHENON triggers FGM_DDoS (i) to extract relevant features from their packets in the data plane and (ii) to mirror this data to the control plane in the form of P4 digests. The collected features are then given as input to a trained Convolutional Neural Network (CNN), i.e., LUCID [13], that performs ML inference, classifying any suspect flow as belonging to a specific DDoS attack class or as benign. According to our results, FGM_DDoS can substantially reduce false positives with respect to CGM_DDoS, thus achieving high Precision while keeping the control channel utilization low.
The main contributions of this work are the following:
• A new architecture, i.e., P4RTHENON, to dynamically reconfigure the data plane pipeline at runtime and remodel the control plane accordingly, minimizing the network overhead.
• A lightweight P4 pipeline to early detect volumetric DDoS flows, i.e., the Asymmetric Count-Min Sketch (ACMS).
• A validation of P4RTHENON on volumetric DDoS attack detection, which analyzes the tradeoff between memory consumption/data transmission and detection performance, leveraging a state-of-the-art control plane agent, namely LUCID [13].
The manuscript is organized as follows. We start by summarizing the state of the art on existing P4-based (i.e., in-network) and ML-based monitoring solutions (Section II), outlining their limitations. Section III details the principles of our proposal, P4RTHENON, and its architectural components. Section IV presents the DDoS detection use case, from the scenario to the implementation of a testbed, and the performance of the proposed solution is evaluated in Section V. Finally, we draw our conclusions in Section VI.

II. RELATED WORK
In this Section, we analyze the state of the art regarding SDN monitoring solutions. First, we sum up the existing programmable data plane-based monitoring solutions. Then, we investigate how ML-based monitoring is exploited in the SDN control plane. We proceed to give an overview of works that integrate data plane and ML-centered control plane solutions for monitoring. In anticipation of our architectural choice, we conclude the Section by analyzing the pipeline reconfiguration methods proposed in the literature.

A. Monitoring Solutions With Programmable Data Planes
Offloading part of the control plane intelligence to the data plane has become increasingly popular in SDN [14] thanks to the rise of data plane programmability (DPP). DPP opened the field to greater monitoring expressiveness on network devices, since it can be leveraged to describe arbitrary, albeit simple, packet manipulation strategies on top of regular forwarding. In recent years, programmable data planes have proven to be effective in supporting complex monitoring strategies by coding part of them directly on the data plane [10], most commonly exploiting the P4 language [9].
The most straightforward approach showing how DPP can be exploited to support network monitoring is In-band Network Telemetry (INT) [15], a framework proposed by some of the biggest networking companies in conjunction with the P4 Working Group. INT allows gathering monitoring information by transparently adding custom headers to users' packets, which are then extracted and forwarded to a centralized collector. INT has been extensively used to support traffic engineering [16], congestion control [17], and routing [18].
Another possibility to take advantage of DPP for network monitoring consists in exploiting the stateful memory made available by P4-based programmable data planes (i.e., P4 registers) to implement customized data structures (i.e., sketches [19]) for advanced in-network monitoring [20]. Thanks to these data structures, it is possible to support complex tasks, such as intrusion or anomaly detection, by keeping track of flows' state and aggregate statistics directly in the data plane. For instance, many strategies have been proposed to detect heavy flows (or heavy hitters) [21], using different data structures such as hash tables [22] or invertible sketches [23].
Recently, some works have also proposed to offload ML inference to the data plane, meaning that the whole ML model is made executable in the data plane pipeline in support of widely different monitoring tasks. For example, pForest [25] and BACKORDERS [26] propose to offload a random forest (RF) [34] to the data plane for in-network inference. These works either require a large number of installed Match-Action Tables (MATs) or use arbitrary F1-score thresholds to rate the detection quality. A further step in this domain has been made by Taurus [24], which offloads CNN [35], Deep Neural Network (DNN) [36], and Support Vector Machine (SVM) [37] models to the data plane exploiting MapReduce [38]. However, Taurus shows limitations already at model-level accuracy, which is below 70% for the DNN, while we could not find accuracy benchmarks for the other two models. Other works, such as [27], [28], [29], offload different ML models, among which Decision Trees (DT) and Binary Neural Networks (BNN), to the data plane. In particular, DT and BNN exhibit low inference performance: the DT can only have limited depth, and the BNN adopts simple binary weights to overcome data plane operational limitations. Razavi et al. [29] implement a DNN, but the authors encoded the weights' floating point numbers as 8-bit integers, leading to performance degradation similar to [27], [28]: this is a major limitation that they also clearly highlight in the paper.
Table I provides a brief summary of the aforementioned ML-based data plane solutions, all exploiting the P4 language, outlining their limitations. To summarize, fitting ML models to the programmable data plane is not simple: it requires nontrivial operations to optimize code and memory consumption, and high inference performance is hard to achieve. These are the reasons why P4RTHENON relies on simple sketch-based strategies in the data plane for CGM, while the ML-based in-depth analysis performed by FGM (see Section II-B) leverages the more powerful control plane computational capabilities.
DDoS attack detection: As specified in the Introduction, in this paper we focus on DDoS attack detection as a use case.
To address this problem, many solutions based on DPP have been proposed [39]. ML-based methods in the data plane have also been proposed, such as BACKORDERS [26], already discussed above. Other works, e.g., [40], [41], adopt a coarse-grained strategy by design, where the data plane is employed as a valid support to grossly detect anomalies. Ding et al. [42] propose INDDoS, a pragmatic way to detect victims targeted by a DDoS attack using a Direct Bitmap combined with a Count-min Sketch. The P4RTHENON DDoS detection implementation indeed overlaps with other P4-based solutions mentioned in the survey by AlSabeh et al. [39]. However, none of the considered schemes provides an in-depth analysis of the memory and control channel utilization across data and control planes, compared with the detection performance. Moreover, the P4 implementation of asymmetric flow detection is a novel contribution. We argue that this analysis not only validates the scalability of P4RTHENON, but demonstrates how it is possible to match the detection performance of state-of-the-art solutions while drastically reducing the management overhead. Our proposed CGM_DDoS strategy takes inspiration from these previous works, but simplifies the strategy even further at the expense of an increased false positive rate, which is then corrected by FGM_DDoS.

B. ML-Based Monitoring With SDN Control Planes
The effectiveness of ML-based solutions involving the SDN centralized control plane for monitoring has been thoroughly demonstrated [43]. The most important factors contributing to their success are the following [44]: (i) a single ML model can be deployed on top of the centralized controller to monitor network-wide scenarios; (ii) centralizing data collection is key for precise prediction; (iii) relevant data can be retrieved in real-time by the controller. In the following, we will specifically focus on monitoring tasks related to ML-based DDoS attack detection, from which we took inspiration to design FGM_DDoS.

DDoS attack detection:
A thorough high-level analysis of ML techniques to detect DDoS attacks is proposed by He et al. [45], which outlines the detection performance differences observed when selecting various features and models. This work also suggests that classic ML approaches are usually highly dependent on feature choice and datasets. To better generalize the model and loosen the constraint of selecting a fixed set of features, Deep Learning-based schemes have become extremely popular to detect DDoS attacks. Among them, Ghanbari and Kinsner [30] propose a solution that leverages a CNN, achieving high detection rates on a very well-known dataset, i.e., the UNB ISCX intrusion detection evaluation dataset [33]. DeepDefense [32] is another example that combines a CNN and a Recurrent Neural Network (RNN), evaluated with good performance on the CAIDA DDoS 2007 attack dataset [31]. However, both solutions require a high number of features and do not suit real-time scenarios, given their complexity. LUCID, proposed by Doriguzzi-Corin et al. [13], adopts similar concepts, but in a way that makes the trained ML model suitable for online scenarios. LUCID is a lightweight CNN that classifies each traffic flow as belonging to a known DDoS class or as benign. With a rather fast training phase and a limited number of needed features, LUCID is still able to achieve high detection rates.
Table II reports a comparison summary of the three works discussed above. LUCID ensures limited training time while keeping detection performance high, and it is thus a suitable solution to be adopted on top of an SDN control plane. However, it needs to inspect all the network traffic to provide good prediction rates. Mirroring packets to the control plane during a volumetric attack could congest the control channel [46], effectively propagating the attack. Our proposed FGM_DDoS strategy exploits LUCID to classify network traffic, but it adds a data plane aggregation logic to relieve the control channel, by delivering to the control plane, via P4 digests, only features extracted from suspect traffic in the data plane.

C. Interaction Between Data and ML-Based Control Planes
Some works in the literature have proposed monitoring architectures envisioning a tight interaction between programmable data and control planes in an SDN environment, with the goal of implementing refined strategies to optimize such an interaction: this is to some extent also the main objective of P4RTHENON. It is important to note that all the previous work on this topic focuses on anomaly/attack detection tasks.
Zhang et al. propose POSEIDON [47], a framework to map attack countermeasures to the programmable data plane and to servers. They propose a language for hardware abstraction and a runtime environment to orchestrate the real-time reaction to attacks. However, this solution requires multiple technologies and components and is bound to the language proposed by the authors, making it hardly replicable. A general solution that jointly maps ML-assisted detection onto a programmable data plane and a control plane is IIsy [48]. Two models are proposed: a lighter one, fully deployed in the data plane, and a heavier one, in the control plane. The authors intensively tested the deployment of different models in the data plane, but do not consider the possibility of swapping between different configurations at runtime. ORACLE [49] and the work proposed by Musumeci et al. [50] focus on two architectural approaches that envision a collaboration between control and programmable data plane to detect DDoS attacks. In both cases, aggregated statistics on packet flows are computed by the programmable data plane and forwarded to the control plane, where they are processed by an ML engine to detect attacks through ML inference. Although simple and effective, these solutions require intense and constant communication between the data and the control plane, even when no attack is happening, as they require the data plane to constantly send the aggregated statistics to the control plane. Our solution takes inspiration from these proposals, but it is the first attempt to design a two-phase system that optimizes control channel utilization: in-network monitoring is autonomously performed by the programmable data plane to detect suspect traffic, and a more refined ML-based analysis happens in the control plane. Differently from [49], [50], the latter is performed on categorical features extracted from packets belonging to suspect flows. Extracting packets' categorical features (e.g., IP flags) instead of computing flows' aggregated statistics is another peculiarity of our proposal. Thanks to this, multi-class classification can be performed (e.g., determining to which type of DDoS attack a packet belongs, or whether it is benign) instead of only binary classification (DDoS/benign) as done in [49], [50].
FlowLens [51] is another work that resembles our solution in purpose and scope. Its authors propose an SDN architecture that leverages programmable switches to efficiently support multi-purpose ML-based security applications.
FlowLens collects features related to packet distributions at line speed and classifies flows directly in the switches, using their CPU. However, though highly flexible and reliable, FlowLens cannot benefit from the network-wide view provided by a centralized SDN control plane. In addition, it does not envision any data plane pipeline reconfiguration at runtime, as supported by P4RTHENON. Reconfiguring the data plane pipeline makes it possible to install specialized pipelines, instead of using a general-purpose one, and to optimize the amount of data exchanged between data and control planes.

D. Programmable Data Planes Pipeline Reconfiguration
The potential advantages of runtime data plane pipeline reconfiguration have already attracted the attention of the research community. We argue that the work presented by Xing et al. [52] is the most convincing attempt to (re)program a switch at runtime. In this work, the authors propose an extension of the P4 language that enables partial reconfiguration of the data plane with minimum resource overhead, without service disruption, and with guaranteed consistent packet processing. By allowing developers to load new features at runtime into a reserved memory area, the authors propose a solution to the notorious problems of repopulating all existing tables and of the delay introduced when switch firmware is replaced. This work does not consider any specific application domain, and it is not clear what the impact would be if the whole pipeline had to be reconfigured. A similar proposal was advanced by Feng et al. [53]. The authors designed a specific real-time upgradable architecture called In-situ Programmable Switch Architecture (IPSA). This approach allows implementing reconfiguration in a more efficient way, but at the cost of having to upgrade the whole network to adopt switches following the IPSA architecture, which may not be feasible in large-scale scenarios.
In contrast to the existing works, P4RTHENON leverages the native feature made available by the P4Runtime specification [54] that allows P4 pipeline reconfiguration at runtime. We choose this approach because P4Runtime is a well-established Application Programming Interface (API) for controlling the data plane elements of a device whose behavior is specified by a P4 program, and thus no architectural change is needed as long as a P4-enabled device is adopted. To the best of our knowledge, the few experiments we were able to find about partial or total pipeline reconfiguration have never focused on optimizing the burden on the control channel. Conversely, P4RTHENON is specifically designed to minimize the amount of data exchanged between the involved planes, while ensuring high monitoring performance.

III. P4RTHENON: MONITORING ARCHITECTURE
In this Section, we describe the main concepts behind P4RTHENON, our general-purpose real-time solution to describe and implement monitoring policies. P4RTHENON considers the network traffic as a composition of multiple flows, where each flow is identified by its <IP_src, IP_dst> pair. The main goal of P4RTHENON is minimizing the amount of data exchanged between data and control planes while achieving high performance of the monitoring engines running in the control plane that require detailed features extracted from packets, such as ML-based ones.
P4RTHENON executes monitoring tasks in two phases: (i) in the first phase, an approximate traffic analysis is performed, which identifies the flows that should be monitored more in depth (i.e., Coarse-Grained Monitoring); (ii) in the second phase, an accurate analysis is done on the flows selected by CGM, with the aim of further discriminating which flows meet the behavior specified by the monitoring policies (i.e., Fine-Grained Monitoring). Each phase is associated with a specific strategy deployed by the control plane, which requires a runtime reconfiguration of the data plane pipeline.
Specifically, CGM is meant to run completely in the data plane, meaning that just a few data points need to be forwarded to the control plane in this phase. Based on the information gathered during CGM, when some pre-defined condition is met, the control plane triggers FGM, with a consequent reconfiguration of the data plane pipeline. FGM is data-intensive, as it requires traffic features to be mirrored from the data plane to the control plane. However, the features mirrored during FGM are only those extracted from flows selected by CGM: this significantly reduces the amount of data to be forwarded and analyzed, enhancing Precision by lowering the input detection noise and reducing the burden on the control channel. Whenever some other pre-defined condition is met (e.g., after a timeout expiration), P4RTHENON triggers the return to the execution of CGM, and the pipeline is reconfigured to the previous status accordingly. Figure 1 illustrates the architecture of the system, also showing the specific strategies adopted in our DDoS detection use case, whose design and implementation will be detailed in Section IV. In the following we provide further details on CGM and FGM, on the design principles, and on the main enabling technologies.
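The closed loop just described can be modeled as a simple two-state machine in the control plane. The snippet below is a minimal illustrative sketch, not the actual P4RTHENON controller: the two callbacks are placeholders for, respectively, a wrapper around P4Runtime's pipeline (re)configuration and a reader of the digest stream; names and the timeout default are assumptions.

```python
import time

class MonitoringLoop:
    """Illustrative sketch of P4RTHENON's two-phase control loop.
    install_pipeline(name) and read_digests() are placeholder callbacks:
    in a real deployment they would wrap the P4Runtime pipeline
    reconfiguration RPC and the digest stream from the switch."""

    def __init__(self, install_pipeline, read_digests, fgm_timeout=30.0):
        self.install_pipeline = install_pipeline
        self.read_digests = read_digests
        self.fgm_timeout = fgm_timeout   # pre-defined condition to fall back to CGM
        self.phase = "CGM"
        self.fgm_started = None
        self.install_pipeline("CGM")     # CGM is the default pipeline

    def step(self, now=None):
        now = time.monotonic() if now is None else now
        if self.phase == "CGM":
            # CGM runs entirely in the data plane; an incoming digest
            # (alert) is the pre-defined condition that triggers FGM.
            if self.read_digests():
                self.install_pipeline("FGM")   # runtime pipeline reconfiguration
                self.phase, self.fgm_started = "FGM", now
        elif now - self.fgm_started >= self.fgm_timeout:
            # Timeout expired: reconfigure back to the CGM pipeline.
            self.install_pipeline("CGM")
            self.phase = "CGM"
```

In this sketch the "pre-defined condition" for returning to CGM is a plain timeout, matching the example given above; any other condition could be plugged into `step`.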

A. Coarse-Grained Monitoring
CGM is the default strategy installed in the data plane. It is designed to be executed on top of the regular forwarding with low added overhead. It relies on a very simple monitoring strategy that (i) continuously monitors the traffic and (ii) identifies the set of flows that meet some monitoring requirements. CGM is fully executed by the programmable data plane and exchanges minimal data with the control plane. In fact, the data plane occasionally sends management messages to the control plane, updating it with a summary of the current network status. Then, the control plane inspects the collected data and, if a condition is met, the execution of FGM is triggered.

B. Fine-Grained Monitoring
FGM is deployed by the control plane whenever some traffic needs to be inspected with higher Precision. CGM is responsible for identifying the traffic flows worth being monitored by FGM. Unlike CGM, FGM's logic is evenly split between the data and control planes. In the data plane, FGM extracts relevant features (e.g., IP flags, TCP ports, etc.) from packets of selected flows and mirrors them to the control plane. In the control plane, a specialized agent (e.g., a trained ML model) takes the extracted features as input and performs deeper monitoring (e.g., flow classification).

C. Main Design Principles of P4RTHENON
CGM and FGM should be designed to ensure that CGM is able to recognize (ideally) all the flows that may need attention, even though it may also include flows that are wrongly selected as interesting. Instead, FGM should be capable of further discriminating, from the set of flows selected by CGM, the flows that are truly relevant. The use case presented later in this manuscript (see Section IV), which refers to DDoS detection, will show that CGM_DDoS is effective in identifying a superset of the flows that belong to DDoS attacks, hence finding all the true positives at the cost of a certain degree of false positives, while FGM_DDoS is very efficient at trimming out all the false positives. As long as these design principles are met, P4RTHENON can be adopted for widely different monitoring tasks other than DDoS detection.

D. Programming and Interaction of the Architectural Elements
The data plane pipeline's behavior is specified by a program written in the P4 language [9]. Each P4-programmable pipeline consists of a set of processing blocks, which can modify the packet headers and gather packet-related data (e.g., the features required by FGM). As Southbound Interface (SBI) we adopt the well-known P4Runtime [54], which is exploited to (i) install match-action rules (enabling the selective per-flow feature mirroring in FGM) and (ii) send data to the control plane (e.g., extracted features) by means of digest messages.
The digest is a type of message specified in the P4Runtime specification [54] that can be used to send data one-way from the data plane to the control plane. As the documentation explains, it differs from packet-in messages [55] in that it is optimized to only send selected packet header fields and metadata, while packet-in is generally used to also send the payload. Multiple digests can be aggregated by P4Runtime into larger messages to reduce their number. The control plane retrieves the digest data as a JSON collection, where each JSON object encapsulates the digest associated with a packet. The FGM specialized agent (see Fig. 1), which is implemented as a Python script, is continuously fed with the JSON collection, relying on RESTful communication.
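As an illustration, a control plane consumer of such a JSON collection could look as follows. This is a hedged sketch: the per-digest field names (`ip_src`, `ip_dst`, `ip_flags`, `proto`) are assumptions for illustration, since the actual layout is dictated by the digest definition in the P4 program.

```python
import json

def parse_digest_collection(raw):
    """Decode the JSON collection retrieved via P4Runtime, where each
    JSON object encapsulates the digest of one mirrored packet, into
    feature rows ready to be fed to the FGM specialized agent.
    NOTE: the field names below are hypothetical examples."""
    rows = []
    for digest in json.loads(raw):
        rows.append({
            "ip_src": digest["ip_src"],
            "ip_dst": digest["ip_dst"],
            "ip_flags": digest["ip_flags"],
            "proto": digest["proto"],
        })
    return rows
```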

IV. P4RTHENON USE CASE: DDOS DETECTION
This Section illustrates the use case we chose to validate our approach. The produced code has been open-sourced.1 We considered volumetric DDoS detection as an example to showcase P4RTHENON's peculiarities. We will refer to the specialized versions of CGM and FGM as CGM_DDoS and FGM_DDoS, respectively. A preliminary investigation on the considered use case can be found in [56].

A. Asymmetric Count-Min Sketch (ACMS)
To detect suspect DDoS attacks in the data plane, we devised a simple sketch-based algorithm implemented in P4 called Asymmetric Count-min Sketch (ACMS, see Fig. 2).
ACMS was designed by observing the behavior of volumetric DDoS attacks, which usually generate a large number of packets toward the victim by means of a large number of compromised clients belonging to a botnet. In particular, ACMS is designed to detect flows with an unexpected asymmetry rate. In this condition, the traffic volume from the compromised client to the victim is expected to be much larger than the traffic volume in the opposite direction. This P4RTHENON specialization is designed to detect heavily asymmetric flows; however, P4RTHENON can be simply configured to support different types of attacks and multiple flavors of DDoS attacks, e.g., DDoS attacks that target a specific destination (analyzed in [42]). ACMS incorporates two algorithms, i.e., Count-min Sketch and asymmetric flow detection:
1) Count-Min Sketch (CMS) [12]: It exploits a probabilistic, low-memory data structure (i.e., a sketch) that can be used to estimate flows' packet count, i.e., the number of packets carried by any network flow in a time window. It relies on two operations carried out on the sketch: (i) Update, to keep the count of incoming packets updated in the sketch; (ii) Query, to estimate the number of counted packets for a given flow. CMS relies on d different pairwise-independent hash functions, each with an output space of size w. The data structure is composed of a matrix of d × w counters: the packet-count estimation accuracy increases as the two dimensions increase, and vice versa, with theoretical bounds that have been proven [12].
2) Asymmetric Flow Detection: It is a simple in-network algorithm (proposed in P-SCOR [4]) that calculates whether a flow is part of a potential DDoS attack. It uses a fixed Threshold, a data structure R that includes w counters, and a hash function h that returns a number between 0 and w − 1. Every time a packet crosses the switch, k is calculated as the hash of the string s = <IP_src, IP_dst>, i.e., h(s) = k. The counter of R in the k-th position, i.e., R(k), is then incremented (R(k) = R(k) + 1). The algorithm then calculates h(s′) = j, where s′ = <IP_dst, IP_src>, and the asymmetry rate asym = |R(k) − R(j)|: if asym > Threshold, the flow is marked as a potential DDoS attack, as the difference of the traffic volume in the two directions is abnormal. The choice to keep track of network flows using only s = <IP_src, IP_dst>, rather than other flow data such as source or destination port, has been made to prioritize a slim strategy. Distinguishing the protocols of the respective streams was outside the scope, since it was a task left to LUCID.
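The two algorithms combined by ACMS can be sketched in a few lines of Python. The snippet below is an illustrative software model of the logic described above, not the P4 register-based implementation: CRC32 with per-row salts stands in for the pairwise-independent hash functions, and the class name, defaults, and method names are our own.

```python
import zlib

class ACMS:
    """Illustrative model of the Asymmetric Count-min Sketch:
    a d x w counter matrix (CMS) plus the asymmetry test."""

    def __init__(self, d=4, w=1024, threshold=100):
        self.d, self.w, self.threshold = d, w, threshold
        self.rows = [[0] * w for _ in range(d)]

    def _cols(self, ip_src, ip_dst):
        # One deterministic salted hash per row; CRC32 is a stand-in
        # for the pairwise-independent hash functions of the real CMS.
        key = f"{ip_src},{ip_dst}".encode()
        return [zlib.crc32(key, salt) % self.w for salt in range(self.d)]

    def update(self, ip_src, ip_dst):
        """CMS Update: increment one counter per row for this flow key."""
        for row, col in zip(self.rows, self._cols(ip_src, ip_dst)):
            row[col] += 1

    def query(self, ip_src, ip_dst):
        """CMS Query: the estimate is the minimum over the d counters."""
        return min(row[col] for row, col in zip(self.rows, self._cols(ip_src, ip_dst)))

    def is_suspect(self, ip_src, ip_dst):
        """Asymmetry test: |forward count - backward count| > Threshold."""
        src_to_dst = self.query(ip_src, ip_dst)
        dst_to_src = self.query(ip_dst, ip_src)
        return abs(src_to_dst - dst_to_src) > self.threshold
```

A roughly symmetric flow (e.g., 200 packets one way, 180 the other) stays below a threshold of 50, while a one-sided flood is flagged as suspect, barring the rare hash collisions inherent to any CMS.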

B. Strategies Description
1) CGM_DDoS:
The strategy leverages ACMS as follows (Fig. 2). When a packet enters the data plane pipeline, the algorithm updates a CMS to increase the packet counter for the considered flow. Then, the CMS is queried to retrieve the packet count estimation for the flow in the forward direction, represented by the key <IP_src, IP_dst>, i.e., src_to_dst. The CMS is then queried again using the key <IP_dst, IP_src> to retrieve the estimated packet count in the backward direction, i.e., dst_to_src. The asymmetry rate is then computed as asym = |src_to_dst − dst_to_src|: if asym is higher than a value Threshold, the flow is labeled as suspect, and an alert is sent to the control plane in the form of a digest. The CMS is reset by the control plane every time a fixed time window expires.
It must be noted that setting the most appropriate Threshold is not trivial and could affect the detection performance in both CGM_DDoS and FGM_DDoS. In Section V we report the results of a sensitivity analysis aimed at determining which Threshold best suits our scenario.
2) FGM_DDoS: It comes into play, through a data plane pipeline reconfiguration, after an alert is sent to the control plane during CGM_DDoS. It includes both data plane and control plane logic.
The data plane logic in the P4-programmable pipeline combines two sub-strategies, namely (i) ACMS and (ii) optimized mirroring. ACMS is the same as that deployed in CGM_DDoS, and it is needed to keep monitoring any new suspect flow once the data plane pipeline has been reconfigured. Optimized mirroring is instead deployed to extract relevant features from packets and forward them to the control plane through digests. We call it optimized mirroring because it is meant to minimize the amount of data flowing on the control channel. It only mirrors features from packets belonging to flows deemed suspect by ACMS, both those marked as such during CGM_DDoS and, if any, those detected during FGM_DDoS. To further reduce the burden on the control channel, it also employs packet sampling, meaning that features are forwarded from only 1 out of N suspect packets flowing through the pipeline (i.e., a sampling rate of 1/N). N is a parameter that needs to be carefully set to strike the best balance between detection performance and control channel utilization, as we show in Section V.
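The 1-out-of-N sampling of optimized mirroring can be sketched as follows. This is an illustrative counter-based sketch, assuming a simple global packet counter (the actual P4 logic is register-based); all names are ours.

```python
N = 50       # sampling rate 1/N
counter = 0  # counts suspect packets only

def should_mirror(is_suspect):
    """Mirror features only for suspect packets, at a 1/N sampling rate."""
    global counter
    if not is_suspect:
        return False   # features from non-suspect flows are never mirrored
    counter += 1
    return counter % N == 0

mirrored = sum(should_mirror(True) for _ in range(1000))
print(mirrored)  # 1000 suspect packets at 1/50 -> 20 digests mirrored
```

The choice of N directly trades control channel load for the amount of data LUCID receives, as analyzed in Section V-D.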
The control plane collects and stores the features extracted from the network traffic and mirrored through the control channel. This data is then fed to a specialized online ML algorithm based on a pre-trained CNN model, i.e., LUCID [13], which preprocesses it and performs a classification task to determine which suspect flows truly belong to a DDoS attack and which are instead benign.

C. Transition Between CGM_DDoS and FGM_DDoS
Time is slotted into windows, each starting at an integer time reference t_s = t and lasting until t_e = t + 1. At the beginning of each time window, it is possible to switch from CGM_DDoS to FGM_DDoS or vice versa. Figure 3 shows an example of how the transition between the two strategies occurs. The top part of the figure reports a flow diagram showing state transitions in the face of a DDoS attack, while the bottom part focuses on a time perspective. Let us assume, as shown in the bottom part of the figure, that a DDoS attack starts during the second time window (between t_s = 1 and t_e = 2) and expires in the sixth time window (between t_s = 5 and t_e = 6). No other DDoS attack is in place in our time horizon, meaning that at the beginning of the first time window, CGM_DDoS is installed for coarse-grained traffic analysis.
During the first time window, nothing is detected by ACMS and no interaction between data and control plane occurs. During the second time window, as soon as the DDoS attack begins, CGM_DDoS starts sending alerts to the control plane every time a flow is considered suspect, as its asymmetry rate computed by ACMS exceeds the pre-defined threshold. After being notified of a possible attack, the control plane waits until the end of the current time window and then switches to FGM_DDoS, which requires a data plane pipeline reconfiguration: this happens at the beginning of the third time window, i.e., at t_s = 2. The reconfigured data plane starts extracting and mirroring features from (sampled) packets of the suspect flows identified by ACMS during CGM_DDoS, and at the same time monitors the rest of the traffic for potential new suspect flows. In the meantime, the control plane feeds the ML-based agent with the packets' extracted features to identify malicious flows with high confidence. This condition holds until the DDoS attack ends, in this case during the sixth time window. As soon as this happens, the asymmetry rate of all flows falls below the specified threshold, and at the beginning of the seventh time window, CGM_DDoS can replace FGM_DDoS again.
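The window-by-window transition logic can be condensed into a tiny state machine. This is a hedged sketch of the behavior described above, assuming transitions only fire at window boundaries and that the final attack window produces no alerts (as in the example, where the asymmetry rate drops below the threshold during the sixth window); the alert counts are invented for illustration.

```python
def next_strategy(current, alerts_in_window):
    """Strategy installed for the NEXT window, given this window's alerts."""
    if current == "CGM_DDoS" and alerts_in_window > 0:
        return "FGM_DDoS"   # reconfigure the pipeline for fine-grained analysis
    if current == "FGM_DDoS" and alerts_in_window == 0:
        return "CGM_DDoS"   # no suspect flows left: restore coarse-grained mode
    return current

# Illustrative alerts per window: attack active during windows 2-5.
alerts = [0, 12, 30, 28, 25, 0, 0]
strategy, timeline = "CGM_DDoS", []
for a in alerts:
    timeline.append(strategy)
    strategy = next_strategy(strategy, a)
print(timeline)
# ['CGM_DDoS', 'CGM_DDoS', 'FGM_DDoS', 'FGM_DDoS', 'FGM_DDoS', 'FGM_DDoS', 'CGM_DDoS']
```

FGM_DDoS is installed at the start of the third window (one window after the first alerts) and CGM_DDoS is restored at the start of the seventh, matching the timeline of Fig. 3.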

D. Implementation
1) CGM_DDoS:
To develop CGM_DDoS, we wrote ∼250 lines of P4 code. Our implementation of Asymmetric Count-min Sketch is summarized in Algorithm 1, including a description of the functions developed in P4.
The P4 program specifies a CMS data structure as an array of P4 registers, which is used to summarize the number of packets per flow (i.e., the packet count) in each direction. The CMS is updated and queried leveraging a set of CRC32 hash functions (H), and the asymmetry Threshold used to evaluate abnormal packet count differences between the forward and backward flow directions is hard-coded in the program. Every time a packet enters the P4 pipeline, the following operations are sequentially performed: • updateCMS: the CMS is updated. The packet count for the <ip_src, ip_dst> flow is increased by one unit. This is done by accessing, for each row i of the data structure, the cell with index equivalent to the hash value of <ip_src, ip_dst>, obtained by considering the i-th hash function from the set H, and increasing its value accordingly (see [12]). • queryCMS: the operation accesses the same cells as in updateCMS but, instead of updating their values, keeps the minimum among the values stored in the cells of each row i to estimate the packet count for the corresponding flow (see [12]). Every Δt (the time window size; in this paper we consider Δt = 30 s) the switch sends a digest notifying the expiration of the window, which can result in two different outcomes: (i) if no flow is deemed suspect during the time slot, no action is required apart from resetting the counters of the CMS; (ii) if at least one alert has been sent to the control plane during the window, the controller triggers FGM_DDoS.
2) FGM_DDoS: The P4-based data plane logic of FGM_DDoS is a superset of the logic of CGM_DDoS. In fact, it includes ACMS (see Alg. 1) in full, with in addition: 1) a feature extraction logic to retrieve relevant features from packets flowing through the pipeline; 2) a feature forwarding logic to forward to the control plane only features (i) extracted from packets pertaining to flows deemed suspect by ACMS and (ii) meeting the sampling requirements. Together, 1) and 2) define the optimized mirroring strategy described in Section IV-B. The feature extraction logic is detailed in Alg. 2, while the feature forwarding logic encapsulates the extracted metadata in a digest with a total size of 281 bits, which is sent to the control plane through the control channel using P4Runtime [54]. The procedure is shown in Alg. 3. The control plane then decodes the digest's data and saves the features in a JSON list.
The control plane exploits LUCID [13] for a finer-grained detection of DDoS attacks. LUCID includes a trained ML model (i.e., a CNN) and a preprocessing algorithm, needed to reorganize the retrieved features as required by the ML model (i.e., on a per-flow basis). For an online detection (i.e., classification) of malicious flows, the JSON list including the features is continuously sent to LUCID via RESTful communication. LUCID then aggregates and splits the traffic into flows, marking them as malicious or benign by means of ML inference. The JSON list is emptied every Δt seconds, i.e., at every time window expiration (Δt = 30 s in our case). This is done to reduce the amount of data stored in the control plane and to keep it updated on the current shape of the underlying traffic. If LUCID is fed with the most recent traffic, it is possible to spot whether a flow previously deemed as belonging to a DDoS attack starts behaving legitimately. In this case, the flow can be ruled out from the list of malicious flows.

V. PERFORMANCE EVALUATION
This Section presents a performance evaluation of P4RTHENON with respect to the considered use case of DDoS detection, from both a resource consumption and a detection capability point of view.

A. Evaluation Metrics and Methodology
The tests presented here are based on a labeled PCAP dataset (details in Section V-B), containing both true positives (TP, i.e., flows that belong to DDoS classes) and true negatives (TN, i.e., flows of benign traffic). The total number of flows is T = TP + TN. In each experiment, we obtain both false positives (FP, i.e., all the flows wrongly deemed as belonging to a DDoS attack) and false negatives (FN, i.e., all the flows wrongly deemed benign). The detection performance is thus analysed by means of three metrics: • Precision = TP/(TP + FP). It measures how many of the positive predictions are correct. The higher the value, the lower the noise from false positives.
• Recall = TP/(TP + FN). It measures how many positive cases are recognized. The higher the value, the lower the number of attacks escaping detection.
• F1Score = 2 · Precision · Recall/(Precision + Recall). It is computed as the harmonic mean of Precision and Recall, indicating the overall quality of the detection. We also measure the average Control Channel Utilization (CCU), defined as the amount of data transmitted on the control channel, which we call collectedData_size, in an observation time window Δt, i.e., CCU = collectedData_size/Δt.
The higher the CCU, the less efficient the strategy in terms of data-control plane interaction. We divided our evaluation into four parts: • CGM_DDoS evaluation, which presents a performance evaluation of our solution when only in-network data plane detection is performed. We compare it to an effective state-of-the-art in-network solution.
• FGM_DDoS evaluation, which analyses and validates our solution when the ML-based control plane logic is installed. We evaluate the effectiveness and efficiency of the strategy over multiple combinations of ACMS thresholds and sampling rates. • Overall evaluation, which summarises the results of CGM_DDoS and FGM_DDoS when combined, clearly pointing out the benefits of P4RTHENON with respect to other approaches. • Data plane pipeline reconfiguration evaluation, which reports a brief discussion on the time needed by P4RTHENON to reconfigure the data plane pipeline when swapping between CGM_DDoS and FGM_DDoS. Before delving into the obtained results, in the following we give a concise description of the testbed and its settings.
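The metrics defined above translate directly into code. The counts and window values below are invented for illustration only; the formulas follow the definitions in this Section.

```python
def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

def f1(p, r):
    # Harmonic mean of Precision and Recall.
    return 2 * p * r / (p + r)

def ccu(collected_bits, delta_t):
    # Average Control Channel Utilization over a window of delta_t seconds.
    return collected_bits / delta_t  # bits per second

p, r = precision(90, 10), recall(90, 10)
print(round(p, 2), round(r, 2), round(f1(p, r), 2))  # 0.9 0.9 0.9
print(ccu(3_000_000, 30))  # 100000.0 bps
```

The harmonic mean penalizes imbalance: a strategy with perfect Recall but poor Precision (or vice versa) still scores a low F1Score.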

B. Description of the Testbed Environment and Parameters
Our experiments were carried out in a virtual environment that consists of: • an emulated single-switch network running on Mininet [57] with bmv2 [58] as the P4 virtual switch target; • a controller, developed in Go [59], responsible for (i) the information exchange with the data plane and (ii) reconfiguring the pipeline using the P4Runtime APIs; • a process running LUCID, interacting with the controller via RESTful communication. LUCID was pre-trained using a dataset provided in its official repository, and the model hyperparameters were set to the default ones specified in the paper [13]. For further details on LUCID's configuration the reader should refer to [13]; • a process simulating the DDoS attack by means of tcpreplay [60], which replays network traffic at 50 Mbps for a 6-minute-long attack. We generated a PCAP sample dataset containing roughly 2 Gb of traffic. It is composed of 10% benign traffic (taken from the CIC-IDS2017 dataset [61]) and 90% DDoS traffic (generated with the hping3 [62] Linux utility). The attack speed is designed to saturate the switch, while the 6-minute duration allows replaying the dataset ∼2 times. All the components were executed on a KVM machine running Ubuntu 20.04 LTS Server with 14 GB of RAM and 3 CPU cores.

C. CGM_DDoS Evaluation
In this Section, we evaluate CGM_DDoS performance.

1) Sensitivity Analysis of ACMS:
To choose the right CMS d and w, we conducted a brief performance analysis by varying these two parameters. Table IV shows a comparison between different values of d for w = 1024. It shows how the F1Score slightly improves for d = 2, and does not significantly improve further for d = 3. Figure 4 shows the orthogonal analysis, for fixed d = 2. Here, Precision, Recall, and F1Score are collected over a variable number of flows, with fixed w (Figure 4(a)) and fixed thresholds (Figure 4(b)). This analysis suggests that for w = 1024 the F1Score is significantly higher for each number of flows compared with w = 512, and matches w = 2048. On the other hand, for Threshold = 750 the F1Score outperforms every other configuration. The threshold analysis anticipates the result we further discuss in Section V-D. This investigation suggests that the optimal parameters for CGM_DDoS are w = 1024 and d = 2.

2) Comparison With the State of the Art: We compare CGM_DDoS with an open-source, state-of-the-art solution called INDDoS [42]. Like CGM_DDoS, INDDoS is an in-network P4-based solution that detects hosts targeted by volumetric DDoS attacks. It is threshold-based like ACMS, i.e., the core strategy of CGM_DDoS: it estimates the per-destination flow cardinality (the number of sources contacting a specific destination) and, if this is above a threshold value, the destination is considered under attack. To estimate it, the BACON sketch is used: a data structure that combines a CMS and a Bitmap to Update and Query the per-destination flow cardinality once a packet enters the P4 pipeline. When the queried value exceeds the specified threshold, a digest wrapping the IP destination of the victim is sent to the control plane.
The main difference between INDDoS and ACMS is that they focus on two different properties of volumetric DDoS attacks: the former on per-destination flow cardinality (which is expected to be high for destinations under attack), the latter on the flows' asymmetry rate (which is expected to be high for malicious flows). We want to stress that INDDoS could replace ACMS as the core in-network algorithm of CGM_DDoS. However, if we look at Table V, some aspects can be highlighted. We decided to test CGM_DDoS and INDDoS considering their best configurations in terms of detection performance (F1Score), obtained after an exhaustive search, which are: • INDDoS: Threshold = 60, BACON sketch of size (d = 3) × (w = 1024) × (m = 1024) [42]; • CGM_DDoS: Threshold = 750, CMS of size (d = 2) × (w = 1024). From Table V it can be seen that much more memory is used by INDDoS than by CGM_DDoS. In fact, the memory occupied by the BACON sketch, considering that 1 bit is allocated to each cell [42], is 3 · 1024 · 1024 = 3145.7 Kb, which is more than 300 times the memory occupied by the CMS adopted by CGM_DDoS (i.e., 9.8 Kb). However, INDDoS outperforms CGM_DDoS in terms of Precision, Recall, and F1Score. This is explained by the higher complexity (and required memory) of INDDoS compared with CGM_DDoS, which makes it a better-performing stand-alone solution. However, the Recall of both solutions is high (1 or close to 1), while the Precision of both is just decent, with slightly worse performance for CGM_DDoS.
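The memory comparison above can be checked with a couple of lines of arithmetic (using the Kb = 1000 bits convention implied by the figures in the text; the 9.8 Kb CMS footprint is taken from Table V, not derived here).

```python
# BACON sketch: d=3 rows x w=1024 x m=1024 cells, 1 bit per cell [42].
bacon_bits = 3 * 1024 * 1024
print(round(bacon_bits / 1000, 1))  # 3145.7 Kb, matching the value in the text

# Ratio against the 9.8 Kb Count-Min Sketch used by CGM_DDoS (from Table V).
print(round((bacon_bits / 1000) / 9.8))  # ~321, i.e., "more than 300 times"
```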
In addition, the CCU of CGM_DDoS, although only in the order of tens of bps, is higher than the CCU of INDDoS. This happens for two reasons: (i) the higher frequency of sent alerts, as CGM_DDoS sends an alert every time it spots a suspect flow, while INDDoS groups alerts by destination; (ii) the larger size of the digest payload, as CGM_DDoS sends 8-byte alerts (IP source and IP destination of the flow), while INDDoS sends only 4-byte alerts (the IP address of the victim). In our testbed, the size of a CGM_DDoS alert is around 100 bits after being encapsulated in a JSON structure, while the size of an INDDoS alert is around 50 bits.
To summarize, INDDoS provides superior detection performance compared to CGM_DDoS. However, as specified in Section III-C, P4RTHENON requires a low number of false negatives for CGM, which is guaranteed by both strategies (high Recall), while it is tolerant to false positives, which can be filtered out by FGM. So, although CGM_DDoS's Precision is slightly lower and its CCU higher (but still low in absolute terms), its adoption in place of INDDoS is fully justified by its much smaller memory footprint.

D. FGM_DDoS Evaluation
In this Section, we analyze the benefits of FGM_DDoS. We provide an overview of the configurations we tested in the environment described in Section V-B, setting different ACMS thresholds and different sampling rates. The goal is to explore the existing trade-offs between detection performance (in terms of Recall, Precision, F1Score) and Control Channel Utilization, as these two configuration parameters are the most impactful on the above-mentioned metrics. Furthermore, we compare these results with a naïve strategy, inspired by [63], that we call Mirror All: it does not provide ACMS-aided optimized mirroring, but simply performs feature extraction and forwarding from any packet, regardless of whether it belongs to a suspect flow or not. In other words, it does not embed any ACMS logic, and discriminating between benign and malicious flows is fully left to the control plane. As in FGM_DDoS, it is possible to reduce the burden on the control channel through sampling, i.e., by only forwarding features extracted from one packet out of N.
We tested different combinations of ACMS thresholds and sampling rates: in the following we report only the most significant combinations for the sake of conciseness. Figures 5 and 6(c) report the results of our tests, where Precision, Recall, F1Score, and CCU are reported for each configuration. With respect to the reported CCU values, we want to stress that in FGM_DDoS the size of a digest, including the packet's features, is around 2 Kb, i.e., 20 times the size of the CGM_DDoS one. Moreover, in CGM_DDoS a much lower number of digests is sent to the control plane, as only one digest per suspect flow, in any time window, is forwarded to the control plane. This is why the CCU for FGM_DDoS is several orders of magnitude higher than for CGM_DDoS (as reported in Table V). Figure 5 reports the detection performance and CCU while fixing the sampling rate to four values, i.e., 1, 1/50, 1/75, 1/100, and varying the ACMS threshold. Note that Mirror All is insensitive to the threshold, as ACMS is not adopted, and thus in the left-hand-side subfigures its Precision, Recall, and F1Score values are reported as single points. We can see that increasing the threshold has a negative impact on Recall, as the number of false negatives significantly increases. In fact, only flows with very high asymmetry rates are deemed suspect by ACMS, and thus some malicious flows with lower asymmetry rates are neglected by ACMS. On the other hand, choosing a higher threshold has a very good impact on CCU, as features extracted from packets belonging to fewer flows (i.e., only the suspect ones) need to be forwarded to the control plane. Instead, Precision is not strongly affected and is always high, meaning that LUCID has a very good ability to filter out false positives.
Figure 5 also shows that lowering the sampling rate from 1 to 1/50 is beneficial for both detection performance and Control Channel Utilization for high thresholds. CCU is lowered by one order of magnitude, while the detection performance (in terms of F1Score) increases. This phenomenon may seem counter-intuitive; however, by lowering the amount of data sent to the control plane, congestion on the control channel is reduced, with consequent benefits on detection performance. In fact, congestion causes uncontrolled discarding of digests, meaning that lower congestion reduces the amount of noise (in terms of flows' pattern alteration) given as input to LUCID. By further decreasing the sampling rate, e.g., to 1/75 and 1/100, the number of packets' features sent to the control plane decreases to the point that LUCID does not have enough data to perform a proper classification. CCU is low, but Precision, Recall, and F1Score are also low regardless of the threshold value.
The same trend is confirmed by looking at Figs. 6(c) and 7, which report the detection performance (Fig. 6(c)) and CCU (Fig. 7) while fixing the ACMS threshold to four values, i.e., 750, 850, 950, 1450, and varying the sampling rate. Fig. 6(c) shows that the detection performance peaks for sampling rates higher than 1/50. However, the most important trend is clearly highlighted in Fig. 7: whenever sampling is performed, the CCU for both Mirror All and FGM_DDoS drops significantly. For very low sampling rates (<1/75) the same considerations made for Fig. 5 with respect to high thresholds apply: in these cases, the amount of informative data sent to the control plane is too limited to ensure robust detection performance. The case of threshold values 750 and 850 is also interesting. With respect to detection performance (Fig. 6(c)) they behave the same for any sampling rate, but CCU (Fig. 7) is reduced by 30% in the case of a threshold of 850.
By comparing FGM_DDoS with Mirror All, we can see that Mirror All performs best for sampling rates of 1/50 and 1/75. Its counter-intuitively worse detection performance with a sampling rate of 1 is due to the high congestion on the control channel. However, in all cases, Mirror All leads to a much higher CCU than FGM_DDoS. Specifically, the same detection performance as Mirror All can be obtained by FGM_DDoS with a sampling rate of 1/50 and a threshold of 750, while reducing CCU by around a factor of three.
In summary, our results show that by choosing the most appropriate sampling rate and ACMS threshold, our strategy makes it possible to find a good balance between detection performance and the amount of traffic on the control channel.

E. Overall Evaluation
Table VI summarizes the results obtained in Sections V-C and V-D with respect to the considered strategies and their related configuration parameters. In terms of F1Score, P4RTHENON, FGM_DDoS, and Mirror All outperform the in-network strategies that are fully executed in the data plane (i.e., CGM_DDoS and INDDoS). However, Recall is always high, meaning that all strategies, including those fully executed in the data plane, are good at effectively identifying true positives (i.e., the malicious traffic). It follows that the strategies relying on LUCID as an ML engine in the control plane (i.e., FGM_DDoS, Mirror All, and P4RTHENON) have a much higher Precision, meaning that deeply analyzing in the control plane the traffic features extracted from packets is very effective in keeping the number of false positives low. In the case of FGM_DDoS and P4RTHENON, this property can be effectively exploited to filter out in the control plane the suspect flows, identified by ACMS in the data plane, that are instead benign. But it is when considering resource allocation and utilization that our proposed solution stands out. If we analyze the CCU, we can see that the in-network strategies lead to minimal usage of the control channel, while the others, for which feature extraction and forwarding to the control plane are needed, pay the price of a much higher average channel occupation. However, FGM_DDoS and especially P4RTHENON have a reduced CCU with respect to Mirror All, of around 40% and 75% respectively, as they benefit from the presence of ACMS to only forward features from suspect flows. P4RTHENON reduces CCU even further by having almost no interaction between control and data plane when CGM_DDoS is installed and no attacks are in place.
Moreover, by looking at the occupied memory in the switch, we can stress again how INDDoS allocates a much higher amount of memory (3145.7 Kb) than ACMS (9.8 Kb), which is used in CGM_DDoS, FGM_DDoS, and P4RTHENON. Instead, Mirror All does not require any data structure in the data plane, so it does not consume memory. This, however, comes at the expense of a significantly higher CCU.
Finally, a look at Table VI shows how P4RTHENON, thanks to its peculiarities, strikes the best balance between detection performance, CCU, and memory occupation with respect to the other strategies. Figure 8 confirms the conclusions we drew from Table VI and reports a comparison between the two strategies with the best detection performance, i.e., Mirror All and P4RTHENON, in terms of CCU over time during an attack, which is marked by a red area. The attack starts at t = 30 s: for P4RTHENON, CGM_DDoS is in place before this time instant, and a negligible amount of data is sent on the control channel. After t, CGM_DDoS starts identifying suspect flows and, after another Δt, at t = 60 s, FGM_DDoS is installed and optimized mirroring starts (which is why CCU increases considerably). Then, the attack ends at t = 420 s and, in the next time window, CGM_DDoS is restored and CCU drops to almost zero. By looking instead at Mirror All, we can see an almost constant CCU of around 90 Kbps, as features are extracted and forwarded from any packet, even when no attack is happening. Moreover, when the attack is in place, the data plane logic adopted by P4RTHENON (i.e., ACMS) makes it possible to save much control channel bandwidth by only forwarding features from suspect flows.

F. Data Plane Pipeline Reconfiguration Evaluation
We performed some experiments to evaluate the system downtime when a real-time P4 pipeline reconfiguration is performed to swap between CGM_DDoS and FGM_DDoS. It is important to stress that such an evaluation is strongly dependent on the adopted emulated environment and software switch target, and further tests will be performed as future work on hardware testbeds to confirm our findings. In our experiment we swapped between CGM_DDoS and FGM_DDoS 100 times and measured the downtime during each transition, then calculated the mean and variance. The computed mean is 263.4 ms, with a very low variance (0.6). Reconfiguring the pipeline, at least on Mininet with bmv2, is quick and stable.

VI. CONCLUSION
Minimizing the data exchanged on the control channel for data-driven monitoring tasks is pivotal in complex networks. In fact, introducing a new feature or service should not be detrimental to the system. P4RTHENON is a scheme that supports the deployment of lightweight and precise monitoring tasks to meet these requirements. It leverages P4-assisted real-time reconfiguration of programmable network devices, with minimal overhead and traffic loss.
We demonstrate the validity of our scheme by formulating a P4RTHENON-assisted solution to detect volumetric DDoS attacks. This strategy leverages two phases: (i) a pre-filtering stage to select the important portions of suspect traffic to analyze, and (ii) a fine-grained strategy that leverages optimized mirroring of packet features from the data plane towards the control plane, where an ML-based specialized agent attests which portion of the suspect traffic is indeed malicious. This use case shows how P4RTHENON can reduce the cross-plane communication overhead by almost 80% while keeping high DDoS detection rates.
Since the use case is of practical significance, we foresee proceeding with its improvement by investigating its performance on a hardware testbed and by automating the optimization of its parameters (i.e., the ACMS threshold and sampling rate) according to the traffic shape. In addition, we believe that the presented approach could be profitably applied to other, more complex network monitoring scenarios; our line of research will be correspondingly widened to encompass other use cases for a broader validation of P4RTHENON.

Fig. 1. P4RTHENON architecture with a focus on the DDoS detection use case (Section IV).

Fig. 5. FGM_DDoS vs. Mirror All: detection performance and Control Channel Utilization for different thresholds (sampling rate fixed).

Fig. 8. Control Channel Utilization over time for P4RTHENON and Mirror All in their best configurations.

TABLE I
LIMITATIONS OF CURRENT P4-BASED ML DATA PLANE SOLUTIONS

Algorithm 1 (excerpt, reconstructed): the ACMS update and query functions.

function UPDATECMS(CMS, H, src, dst)
    i ← 0
    for all hash ∈ H do
        h ← hash(src, dst)
        CMS_i[h] ← CMS_i[h] + 1    // row i
        i ← i + 1
    end for
    return CMS
end function

function QUERYCMS(CMS, H, src, dst)
    i ← 0
    min ← ∞
    for all hash ∈ H do
        h ← hash(src, dst)
        if CMS_i[h] < min then    // row i
            min ← CMS_i[h]
        end if
        i ← i + 1
    end for
    return min
end function

queryCMS is executed twice: first to estimate the packet count for the forward flow <ip_src, ip_dst>, and then for the backward flow <ip_dst, ip_src>. Those values are called min_fwd and min_bwd, respectively. The asymmetry rate (asym) is finally computed as asym = |min_fwd − min_bwd| and, if it exceeds the value Threshold, the <ip_src, ip_dst> flow is considered suspect of belonging to a DDoS attack. When this happens, an alert is sent to the control plane in the form of a digest, which wraps 64 bits containing the ip_src and ip_dst of the flow. To reduce the burden on the control channel, such an alert is generated only the first time, within a time window, that <ip_src, ip_dst> leads to an asym value greater than the Threshold.

TABLE III
PACKET FEATURES ENCAPSULATED IN THE DIGEST SENT TO THE CONTROL PLANE BY OPTIMIZED MIRRORING. THE FEATURES IN RED ARE USED BY LUCID [13] IN THE PREPROCESSING STAGE, BUT NOT FOR DETECTION

TABLE IV
DETECTION COMPARISON OF ACMS VARYING d FOR w = 1024 FIXED