Abstract:
Trustworthy, resilient, and interpretable artificial intelligence (AI) is essential for the effective operation of the Internet of Things (IoT) in adversarial environments. Such robust and interpretable AI is needed to improve tactical coordination through scalability, corroboration, and context-aware intelligence. It is crucial to have robust machine learning (ML) models with characteristics such as low-supervision adaptability, decision explanations, and adaptive inference. Pre-trained large language models (LLMs) and foundation models (FMs) address some of these challenges, but they are unpredictable and cannot directly solve complex tasks in mission-critical scenarios. However, their generalization capabilities make them potential building blocks for high-assurance AI/ML systems that compose multiple FMs and LLMs. In this paper, we propose composing neural foundation models using symbolic programs, resulting in more effective AI for adversarial conditions. Neuro-symbolic composition of FMs to solve complex tasks requires interactive and unambiguous specification of the intent, task decomposition into subtasks that can be solved by individual FMs, program synthesis for composing FMs, and neuro-symbolic inference that schedules the inference of different FMs and combines their results. We give examples of such neuro-symbolic programs that use foundation models to solve visual question-answering tasks such as out-of-context detection. This position paper identifies the challenges and opportunities in the neuro-symbolic composition of large language models and foundation models.
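To make the composition idea concrete, the sketch below shows a minimal, illustrative symbolic "glue" program for the out-of-context detection task mentioned in the abstract. It composes two hypothetical FM wrappers, an image captioner and a text entailment checker, whose names, signatures, and canned outputs are assumptions for illustration only (the paper does not specify this API); the stubs make the control flow runnable while a real system would dispatch to pretrained vision and language models.

```python
# Illustrative sketch: a symbolic program that decomposes out-of-context (OOC)
# detection into two foundation-model subtasks, schedules them, and combines
# their outputs with an explicit, auditable rule. All FM calls are stubbed.

from dataclasses import dataclass


@dataclass
class Evidence:
    source: str        # which FM produced this intermediate result
    content: str       # the FM's textual output
    confidence: float  # calibrated score used by the symbolic combiner


def caption_fm(image_path: str) -> Evidence:
    """Stub for a vision FM that describes the image (hypothetical)."""
    return Evidence("caption_fm", "soldiers distributing food supplies", 0.82)


def entailment_fm(premise: str, hypothesis: str) -> Evidence:
    """Stub for a language FM that checks whether the caption supports the claim."""
    # A real system would prompt an LLM or NLI model here.
    contradiction = "combat" in hypothesis and "combat" not in premise
    verdict = "contradicted" if contradiction else "supported"
    return Evidence("entailment_fm", verdict, 0.74)


def out_of_context(image_path: str, claim: str, threshold: float = 0.6) -> dict:
    """Symbolic composition: caption the image, test the claim against the
    caption, and flag OOC only when the contradiction is confident enough."""
    caption = caption_fm(image_path)
    relation = entailment_fm(caption.content, claim)
    flagged = relation.content == "contradicted" and relation.confidence >= threshold
    return {
        "out_of_context": flagged,
        "explanation": [caption, relation],  # decision trace for interpretability
    }


if __name__ == "__main__":
    result = out_of_context("scene.jpg", "Troops engaged in combat operations")
    print("out_of_context:", result["out_of_context"])
    for step in result["explanation"]:
        print(f"  {step.source} -> {step.content} ({step.confidence:.2f})")
```

The symbolic layer keeps the task decomposition, scheduling, and combination rule explicit, which is what provides the decision trace and adaptive inference the abstract calls for; the neural FMs are confined to well-scoped subtasks.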
Date of Conference: 30 October 2023 - 03 November 2023
Date Added to IEEE Xplore: 25 December 2023