[Figure: AI Models in Medicine. Panels (a) and (b) illustrate clinical objectives for lung cancer (LC) treatment and the limitations in transparency of traditional AI models. Hybrid AI integra...]
Abstract:
Knowledge Graphs (KGs) are data structures that enable the integration of heterogeneous data sources and support both knowledge representation and formal reasoning. This paper introduces TrustKG, a KG-based framework designed to enhance the interpretability and reliability of hybrid AI systems in healthcare. Positioned within the context of lung cancer, TrustKG supports link prediction, which uncovers hidden relationships within medical data, and counterfactual prediction, which explores alternative scenarios to understand causal factors. These tasks are addressed through two specialized hybrid AI systems, VISE and HealthCareAI, which combine symbolic reasoning with inductive learning over KGs to provide interpretable AI solutions for clinical decision-making. Leveraging KGs to represent biomedical properties and relationships, and augmenting them with learned patterns through symbolic reasoning, our hybrid approach produces models that are both accurate and transparent. This interpretability is particularly important in medical applications, where trust and reliability in AI-driven predictions are paramount. The empirical analysis demonstrates the effectiveness of VISE and HealthCareAI in improving the predictive accuracy and clarity of model outputs. By addressing challenges in link prediction—such as discovering previously unknown connections between medical entities—and in counterfactual prediction, TrustKG, with VISE and HealthCareAI, underscores the potential of integrating KGs with symbolic AI to create trustworthy, interpretable AI systems in healthcare. This paper contributes to the advancement of semantic AI, offering a pathway for robust and reliable AI solutions in clinical settings.
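To make the two tasks concrete, the following minimal sketch illustrates symbolic link prediction and counterfactual prediction over a toy KG of (subject, predicate, object) triples. The entities, relations, and inference rule here are hypothetical illustrations chosen for this sketch; they are not TrustKG's actual data, ontology, or method.

```python
# Toy KG as a set of (subject, predicate, object) triples.
# Entities and relations are illustrative only, not from TrustKG.
KG = {
    ("Gefitinib", "targets", "EGFR"),
    ("EGFR", "mutated_in", "LungCancer"),
    ("Osimertinib", "targets", "EGFR"),
}

def predict_links(kg):
    """Symbolic link prediction: apply the rule
    targets(drug, gene) AND mutated_in(gene, condition)
        => candidate_therapy(drug, condition)
    to infer candidate links not explicitly stated in the KG."""
    inferred = set()
    for (drug, p1, gene) in kg:
        if p1 != "targets":
            continue
        for (gene2, p2, condition) in kg:
            if p2 == "mutated_in" and gene2 == gene:
                inferred.add((drug, "candidate_therapy", condition))
    return inferred

# Link prediction: both drugs are inferred as candidate therapies.
print(sorted(predict_links(KG)))

# Counterfactual prediction: what if the EGFR mutation were absent?
# Remove the causal triple and re-run inference; the candidate links vanish,
# exposing the mutation as the causal factor behind the prediction.
kg_cf = {t for t in KG if t != ("EGFR", "mutated_in", "LungCancer")}
print(sorted(predict_links(kg_cf)))
```

Running the sketch, the factual KG yields two inferred `candidate_therapy` links, while the counterfactual KG yields none, showing how removing a single triple lets a symbolic system attribute a prediction to an explicit, inspectable cause rather than an opaque model weight.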
Published in: IEEE Access (Volume: 13)