AGMFU-NET++: A UNIFIED ARCHITECTURE FOR DENOISING AND MODALITY-AWARE SEGMENTATION IN NOISY AND INCOMPLETE MEDICAL IMAGING
Keywords:
Multimodal medical image segmentation; Skip-Fusion Decoding; Attention-Guided Multimodal Fusion; Transformer-Based Global Contextualisation; Information Blending

Abstract
Multimodal medical image segmentation is essential in computer-aided diagnosis, but it remains challenging due to varying noise levels, heterogeneous resolution, and missing modality information. To address these problems, we propose AGMFU-Net++, a novel architecture that combines Transformer-Based Global Contextualisation, Attention-Guided Multimodal Fusion, and Gated Skip-Fusion Decoding for reliable and accurate segmentation. Registration, intensity normalisation, and stochastic augmentation are followed by modality-specific encoding, cross-modal attention fusion, and transformer bridging at the bottleneck. The decoder integrates learnable gating for modality-aware information blending, with deep supervision and uncertainty estimation stabilising training. Extensive experiments on the BraTS (2018–2023) and MSD datasets were conducted to verify the efficacy of AGMFU-Net++. The proposed model outperformed state-of-the-art baselines in both denoising and segmentation, achieving the highest PSNRs of 32.7 dB (BraTS) and 34.0 dB (MSD), with corresponding SSIM values of 0.94 and 0.96. In segmentation-aware evaluation, denoising with AGMFU-Net++ raised Dice scores from 0.90 to 0.93 (BraTS) and from 0.92 to 0.95 (MSD). Robustness experiments showed a Dice drop of only 2.6% under Gaussian noise and 4.2% under modality dropout, substantially smaller than that of competing models. Cross-dataset generalisation further demonstrated its scalability, with Dice scores of 0.86 (BraTS→MSD) and 0.85 (MSD→BraTS). With its improved denoising performance, segmentation accuracy, and robustness across imaging conditions, AGMFU-Net++ is a promising candidate for practical clinical deployment in multimodal medical imaging scenarios.
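To make the two fusion mechanisms named in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' released code: a cross-modal attention block in which one modality's encoder features attend to another's, and a learnable sigmoid gate that blends skip-connection features with decoder features. All module and parameter names (CrossModalAttentionFusion, GatedSkipFusion, channel sizes) are illustrative assumptions.

import torch
import torch.nn as nn

class CrossModalAttentionFusion(nn.Module):
    """Fuse two modality feature maps with multi-head cross-attention."""
    def __init__(self, channels, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, feat_a, feat_b):
        # feat_*: (B, C, H, W) -> flatten spatial dims into token sequences
        b, c, h, w = feat_a.shape
        qa = feat_a.flatten(2).transpose(1, 2)   # (B, HW, C), queries
        kb = feat_b.flatten(2).transpose(1, 2)   # (B, HW, C), keys/values
        fused, _ = self.attn(qa, kb, kb)         # modality A attends to B
        fused = self.norm(fused + qa)            # residual + normalisation
        return fused.transpose(1, 2).reshape(b, c, h, w)

class GatedSkipFusion(nn.Module):
    """Blend a skip-connection feature with the decoder feature through a
    learned, spatially varying gate in [0, 1] (modality-aware mixing)."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, skip, decoder):
        g = self.gate(torch.cat([skip, decoder], dim=1))
        return g * skip + (1.0 - g) * decoder

# Toy usage on random features
if __name__ == "__main__":
    a = torch.randn(2, 64, 16, 16)
    b = torch.randn(2, 64, 16, 16)
    fused = CrossModalAttentionFusion(64)(a, b)
    out = GatedSkipFusion(64)(fused, torch.randn(2, 64, 16, 16))
    print(out.shape)  # torch.Size([2, 64, 16, 16])

The convex combination g * skip + (1 - g) * decoder is one plausible reading of "learnable gating for modality-aware information blending": when a modality is noisy or dropped, the gate can learn to down-weight its skip features per location rather than mixing them uniformly.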
License

This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.