EXPLAINABLE AI IN CLINICAL DECISION SUPPORT: INTERPRETABLE NEURAL MODELS FOR TRUSTWORTHY HEALTHCARE AUTOMATION
Abstract
Clinical decision support (CDS) involves the use of AI-based systems that synthesize patient information and recommend options for diagnosis or treatment. These systems help clinicians manage the growing volume of patient data while maintaining safety and performance. However, interpretability is crucial: patients have a right to know the reasons behind important clinical decisions, and clinicians must trust the outputs before acting on them. Recent regulatory statements have underscored the increasing focus on AI interpretability in healthcare. An interpretable model is one whose rationale for a prediction users can readily comprehend. Empirical evidence shows that trust in a prediction is strongly influenced by its explanation. Explanations should therefore be tailored to the audience's knowledge and expectations, support clinical decision-making processes, and be authoritative in guiding action. Achieving trustworthy healthcare automation requires converging interpretability and safety: interpretable models complement risk assessment, governance, and continuous evaluation, and integrate with safety measures such as monitoring, fail-safe design, and auditing.
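To make the notion of an interpretable model concrete, the following minimal sketch (not taken from the paper) shows how a linear risk model yields a per-feature rationale for each prediction: the log-odds decompose exactly into an intercept plus coefficient-times-feature terms that a clinician can inspect. The feature names, coefficient values, and patient record are illustrative assumptions, and only NumPy is assumed.

```python
# Minimal illustrative sketch: a linear clinical risk model whose per-feature
# contributions serve as a human-readable rationale for each prediction.
# Feature names, coefficients, and the patient record are hypothetical.
import numpy as np

feature_names = ["age_decades", "systolic_bp_z", "hba1c_z", "prior_admissions"]

# Hypothetical standardized coefficients of an already-fitted risk model.
coefficients = np.array([0.45, 0.30, 0.80, 0.25])
intercept = -2.0

# One hypothetical standardized patient record.
patient = np.array([1.2, 0.5, 1.8, 0.0])

# For a linear model the log-odds are exactly the intercept plus the sum of
# coefficient * feature value, so each term is an exact contribution.
contributions = coefficients * patient
log_odds = intercept + contributions.sum()
risk = 1.0 / (1.0 + np.exp(-log_odds))

print(f"Predicted risk: {risk:.2f}")
for name, contrib in sorted(zip(feature_names, contributions),
                            key=lambda pair: -abs(pair[1])):
    print(f"  {name:>18}: {contrib:+.2f} to log-odds")
```

Listing the contributions in order of magnitude gives the kind of audience-tailored explanation the abstract describes; more complex neural models would need post-hoc attribution methods to produce a comparable rationale.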
License

This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.