EXPLAINABLE DEEP NEURAL NETWORK (X-DNN) FOR PREDICTING MENTAL HEALTH OUTCOMES FROM MULTISOURCE MEDICAL AND PSYCHOLOGICAL DATA
Keywords:
Mental health prediction, explainable AI, deep neural networks, multisource data fusion, SHAP

Abstract
Mental health conditions arise from a complex interplay of biological, psychological, and social factors, and accurately predicting outcomes is essential for early intervention and personalized treatment. Deep learning models have achieved strong predictive performance on mental health tasks, but they operate as black boxes: they reveal little about how their decisions are made, which undermines clinicians' trust and limits adoption. Explainable models that fuse data from multiple sources, such as medical records, psychological assessments, and wearable sensor data, can improve both predictive accuracy and transparency. This study proposes the Explainable Deep Neural Network (X-DNN), a framework for predicting mental health outcomes that integrates multisource data through interpretable layers, uses attention mechanisms to weight salient inputs, and applies SHAP (SHapley Additive exPlanations) for post-hoc feature attribution. Experiments on a large clinical dataset show that X-DNN outperforms baseline models (F1-score of 0.87) and produces explanations that are coherent and consistent with established clinical knowledge.
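To illustrate the SHAP-style attribution the abstract refers to, the sketch below computes exact Shapley values for a toy predictor over three hypothetical features (a PHQ-9 score, sleep hours, and heart-rate variability). The feature names, weights, and baseline values are illustrative assumptions, not part of the paper; the real X-DNN applies SHAP to a trained deep network, whereas this uses a simple linear scorer so the attributions can be verified by hand.

```python
from itertools import combinations
from math import factorial

# Hypothetical linear scorer standing in for X-DNN's output head
# (illustration only; weights and baselines are invented).
WEIGHTS = {"phq9": 0.6, "sleep_hours": -0.3, "hrv": -0.1}
BASELINE = {"phq9": 5.0, "sleep_hours": 7.0, "hrv": 50.0}

def predict(values):
    """Risk score: weighted sum of feature values."""
    return sum(WEIGHTS[f] * values[f] for f in WEIGHTS)

def shapley_values(x):
    """Exact Shapley attributions: each feature's marginal contribution,
    averaged over all subsets of the other features, with absent
    features replaced by their baseline values."""
    feats = list(x)
    n = len(feats)
    phi = {}
    for f in feats:
        others = [g for g in feats if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                # Standard Shapley weighting: |S|! (n - |S| - 1)! / n!
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_f = {g: x[g] if (g in subset or g == f) else BASELINE[g]
                          for g in feats}
                without_f = {g: x[g] if g in subset else BASELINE[g]
                             for g in feats}
                total += w * (predict(with_f) - predict(without_f))
        phi[f] = total
    return phi

patient = {"phq9": 14.0, "sleep_hours": 5.0, "hrv": 42.0}
phi = shapley_values(patient)
# For a linear model, phi[f] reduces to WEIGHTS[f] * (x[f] - BASELINE[f]),
# and the attributions sum to predict(patient) - predict(BASELINE).
```

For the linear case the attributions have a closed form, which makes the sketch easy to check; for a deep network like X-DNN, libraries such as `shap` approximate the same quantity because enumerating all feature subsets is exponential in the number of features.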
License

This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.