Abstract
Neurodegenerative diseases such as Alzheimer’s disease (AD) and Parkinson’s disease (PD) pose significant obstacles to early diagnosis because of the complex interplay between structural and functional biomarkers. Multi-modal neuroimaging provides complementary information, yet integrating heterogeneous features remains a persistent challenge. In this work, we propose the Dense Feature Fusion Network (DFF-Net), an end-to-end deep learning framework that fuses MRI and PET modalities through a dense feature fusion block and a cross-modal attention mechanism. This design enables richer representation learning by preserving both modality-specific and shared features. We evaluate DFF-Net on the ADNI and PPMI benchmark datasets, where it outperforms baseline fusion strategies in accuracy, AUC, and F1-score. An interpretability analysis of the attention maps further pinpoints brain regions critical to neurodegenerative progression. These results demonstrate the strong potential of dense feature fusion to improve clinical decision-making for early and accurate detection of neurodegenerative disorders.
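To make the two fusion components concrete, the sketch below shows one way a dense feature fusion block and a cross-modal attention mechanism of this kind could be wired together in PyTorch. It is a minimal illustration under assumed interfaces: the token shapes, dimensions, and module names (DenseFusionBlock, CrossModalAttention, DFFNetSketch) are hypothetical placeholders, not the published implementation, and the modality-specific encoders that would produce the feature sequences are omitted.

```python
# Minimal sketch of the fusion components described in the abstract,
# assuming MRI/PET volumes were already reduced to per-region feature
# sequences by modality-specific encoders. All names, dimensions, and
# wiring here are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn


class DenseFusionBlock(nn.Module):
    """DenseNet-style block: each layer receives the concatenation of all
    preceding outputs, preserving modality-specific and shared features."""

    def __init__(self, in_dim: int, growth: int = 64, num_layers: int = 3):
        super().__init__()
        self.layers = nn.ModuleList()
        dim = in_dim
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(nn.Linear(dim, growth), nn.ReLU()))
            dim += growth  # concatenated input grows with each layer
        self.out_dim = dim

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=-1)))
        return torch.cat(features, dim=-1)


class CrossModalAttention(nn.Module):
    """Each modality queries the other, so MRI tokens can attend to PET
    evidence and vice versa before the fused representation is classified."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.mri_to_pet = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.pet_to_mri = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, mri: torch.Tensor, pet: torch.Tensor):
        mri_attended, _ = self.mri_to_pet(mri, pet, pet)  # MRI queries PET
        pet_attended, _ = self.pet_to_mri(pet, mri, mri)  # PET queries MRI
        return mri_attended, pet_attended


class DFFNetSketch(nn.Module):
    """Cross-modal attention, then dense fusion, then a classifier head."""

    def __init__(self, dim: int = 128, num_classes: int = 2):
        super().__init__()
        self.attn = CrossModalAttention(dim)
        self.fusion = DenseFusionBlock(in_dim=2 * dim)
        self.head = nn.Linear(self.fusion.out_dim, num_classes)

    def forward(self, mri: torch.Tensor, pet: torch.Tensor) -> torch.Tensor:
        mri_a, pet_a = self.attn(mri, pet)
        # Pool token sequences, concatenate modalities, densely fuse.
        fused = torch.cat([mri_a.mean(dim=1), pet_a.mean(dim=1)], dim=-1)
        return self.head(self.fusion(fused))


if __name__ == "__main__":
    mri = torch.randn(2, 32, 128)  # (batch, tokens, dim) per modality
    pet = torch.randn(2, 32, 128)
    print(DFFNetSketch()(mri, pet).shape)  # torch.Size([2, 2])
```

The attention weights returned by nn.MultiheadAttention (discarded here) are what an interpretability analysis like the one described above would visualize as attention maps over brain regions.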