Toward Interpretable Hybrid AI: Integrating Knowledge Graphs and Symbolic Reasoning in Medicine

Figure: AI Models in Medicine. Figures (a) and (b) illustrate clinical objectives for LC treatment and the limitations in transparency of traditional AI models. Hybrid AI integra...

Abstract:
Abstract:

Knowledge Graphs (KGs) are data structures that enable the integration of heterogeneous data sources and support both knowledge representation and formal reasoning. This paper introduces TrustKG, a KG-based framework designed to enhance the interpretability and reliability of hybrid AI systems in healthcare. Positioned within the context of lung cancer, TrustKG supports link prediction, which uncovers hidden relationships within medical data, and counterfactual prediction, which explores alternative scenarios to understand causal factors. These tasks are addressed through two specialized hybrid AI systems, VISE and HealthCareAI, which combine symbolic reasoning with inductive learning over KGs to provide interpretable AI solutions for clinical decision-making. Leveraging KGs to represent biomedical properties and relationships, and augmenting them with learned patterns through symbolic reasoning, our hybrid approach produces models that are both accurate and transparent. This interpretability is particularly important in medical applications, where trust and reliability in AI-driven predictions are paramount. The empirical analysis demonstrates the effectiveness of VISE and HealthCareAI in improving the predictive accuracy and clarity of model outputs. By addressing challenges in link prediction—such as discovering previously unknown connections between medical entities—and in counterfactual prediction, TrustKG, with VISE and HealthCareAI, underscores the potential of integrating KGs with symbolic AI to create trustworthy, interpretable AI systems in healthcare. This paper contributes to the advancement of semantic AI, offering a pathway for robust and reliable AI solutions in clinical settings.
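To make the idea of symbolic link prediction over a KG concrete, the following is a minimal illustrative sketch, not the actual VISE or HealthCareAI implementation: the entities, relation names, and the single inference rule are hypothetical examples chosen for clarity. It shows how a symbolic rule applied to existing triples can propose a previously unstated link between medical entities.

```python
# A toy biomedical knowledge graph as a set of (subject, relation, object)
# triples. All names are illustrative placeholders, not real clinical data.
KG = {
    ("drug_A", "targets", "gene_EGFR"),
    ("gene_EGFR", "associated_with", "lung_cancer"),
    ("drug_B", "targets", "gene_KRAS"),
}

def predict_links(kg):
    """Apply one hypothetical symbolic rule for link prediction:
    (d targets g) AND (g associated_with dz)  =>  (d may_treat dz).
    Returns only triples not already present in the graph."""
    predicted = set()
    for (d, r1, g) in kg:
        if r1 != "targets":
            continue
        for (g2, r2, dz) in kg:
            if r2 == "associated_with" and g2 == g:
                triple = (d, "may_treat", dz)
                if triple not in kg:
                    predicted.add(triple)
    return predicted

# drug_A targets gene_EGFR, which is associated with lung_cancer,
# so the rule proposes the new link (drug_A, may_treat, lung_cancer).
print(predict_links(KG))
```

In a real hybrid system, such symbolic rules would be combined with patterns learned inductively from the KG, and each predicted link would carry the rule instantiation that produced it, which is what makes the prediction interpretable rather than a black-box score.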
Published in: IEEE Access ( Volume: 13)
Page(s): 39489 - 39509
Date of Publication: 13 January 2025
Electronic ISSN: 2169-3536
