Abstract:
Artificial Intelligence (AI) will play a critical role in future networks, exploiting real-time data collection for optimized utilization of network resources. However, current AI solutions predominantly emphasize model performance enhancement, engendering substantial risk when AI encounters irregularities such as adversarial attacks or unknown misbehavior due to its "black-box" decision process. Consequently, AI-driven network solutions necessitate enhanced accountability to stakeholders and robust resilience against known AI threats. This paper introduces a high-level process, integrating Explainable AI (XAI) techniques and illustrating their application across three typical use cases: encrypted network traffic classification, malware detection, and federated learning. Unlike existing task-specific qualitative approaches, the proposed process incorporates a new set of metrics, measuring model performance, explainability, security, and privacy, thus enabling users to iteratively refine their AI network solutions. The paper also elucidates future research challenges we deem critical to the actualization of trustworthy, AI-empowered networks.
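To make the iterative refinement idea concrete, the following is a minimal sketch (not the paper's implementation) of how per-dimension metrics for performance, explainability, security, and privacy might be aggregated and fed into a refinement loop. The scoring functions, weights, threshold, and helper names (`evaluate`, `improve`) are illustrative assumptions, not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class TrustReport:
    """Per-dimension scores in [0, 1]; dimension names follow the abstract,
    but how each score is computed here is a hypothetical choice."""
    performance: float      # e.g., classification accuracy or F1
    explainability: float   # e.g., fidelity/stability of XAI explanations
    security: float         # e.g., robustness under adversarial perturbation
    privacy: float          # e.g., resistance to membership-inference attacks

def aggregate(report: TrustReport, weights=(0.4, 0.2, 0.2, 0.2)) -> float:
    """Hypothetical weighted aggregation into a single trust score."""
    scores = (report.performance, report.explainability,
              report.security, report.privacy)
    return sum(w * s for w, s in zip(weights, scores))

def refine_until_trusted(evaluate, improve, model, threshold=0.8, max_iters=10):
    """Iterative refinement: evaluate the model on all four dimensions and
    stop once the aggregate score clears an (assumed) acceptance threshold."""
    for _ in range(max_iters):
        report = evaluate(model)        # user-supplied: returns a TrustReport
        if aggregate(report) >= threshold:
            return model, report
        model = improve(model, report)  # e.g., adversarial training, DP noise
    return model, evaluate(model)
```

In practice, `evaluate` and `improve` would be supplied per use case (e.g., a traffic classifier audited with XAI attribution checks), which is what allows the same loop to cover the three use cases named above.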
Published in: 2024 Joint European Conference on Networks and Communications & 6G Summit (EuCNC/6G Summit)
Date of Conference: 03-06 June 2024
Date Added to IEEE Xplore: 19 July 2024