Lightweight 3D-CNN for MRI-Based Alzheimer's Disease Classification
Issued Date
2025-01-01
Scopus ID
2-s2.0-105014449848
Journal Title
22nd International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON 2025)
Rights Holder(s)
SCOPUS
Bibliographic Citation
22nd International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON 2025) (2025)
Suggested Citation
Yenvaree R., Thiennviboon P., Intarawichian S., Sungkarat W., Laothamatas J. Lightweight 3D-CNN for MRI-Based Alzheimer's Disease Classification. 22nd International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON 2025) (2025). doi:10.1109/ECTI-CON64996.2025.11100812. Retrieved from: https://repository.li.mahidol.ac.th/handle/123456789/112016
Title
Lightweight 3D-CNN for MRI-Based Alzheimer's Disease Classification
Abstract
Alzheimer's disease (AD) is a leading cause of dementia worldwide, and early detection is essential for timely intervention. While deep learning has shown promise in AD classification, many existing models rely on multi-modal data and complex architectures, limiting their feasibility in resource-constrained settings. This study proposes a lightweight 3D Convolutional Neural Network (3D-CNN) for binary classification of AD versus cognitively normal (CN) individuals using only Magnetic Resonance Imaging (MRI) data, eliminating the need both for additional modalities such as Positron Emission Tomography (PET) and for handcrafted feature extraction. The proposed model has approximately 0.4 million trainable parameters, significantly fewer than in many existing deep learning models. It was evaluated on 726 MRI scans from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database using 6-fold and 5-fold cross-validation. The model achieved an accuracy of 91.46%, sensitivity of 89.75%, specificity of 92.82%, and AUROC of 94.98% with 6-fold cross-validation, while 5-fold cross-validation yielded an accuracy of 90.63%, sensitivity of 89.44%, specificity of 91.58%, and AUROC of 94.42%. Compared with selected existing models, these results suggest that the proposed model achieves competitive classification performance while requiring significantly fewer parameters and less computation. Its reliance on MRI-only data and its lightweight architecture make it a potentially practical option for settings where computational resources and multi-modal imaging data are limited.
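The abstract does not describe the network layer by layer, so the PyTorch sketch below only illustrates the general scale of such a lightweight 3D-CNN. The five-block depth, the layer widths, the 96x96x96 input size, and the two-class head are all assumptions rather than the authors' design; this particular configuration lands near 0.3 million trainable parameters, the same order as the reported 0.4 million.

import torch
import torch.nn as nn

class Lightweight3DCNN(nn.Module):
    """Illustrative small 3D-CNN for AD-vs-CN classification from one MRI volume.
    Not the authors' architecture; a sketch at a comparable parameter budget."""

    def __init__(self, in_channels: int = 1, num_classes: int = 2):
        super().__init__()

        def block(c_in: int, c_out: int) -> nn.Sequential:
            # Conv -> BatchNorm -> ReLU -> downsample: a common lightweight pattern
            return nn.Sequential(
                nn.Conv3d(c_in, c_out, kernel_size=3, padding=1, bias=False),
                nn.BatchNorm3d(c_out),
                nn.ReLU(inplace=True),
                nn.MaxPool3d(2),
            )

        self.features = nn.Sequential(
            block(in_channels, 8),
            block(8, 16),
            block(16, 32),
            block(32, 64),
            block(64, 128),
        )
        self.pool = nn.AdaptiveAvgPool3d(1)        # global average pooling keeps the head tiny
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = self.pool(x).flatten(1)
        return self.classifier(x)

model = Lightweight3DCNN()
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {n_params:,}")       # ~295k at these widths

# One single-channel T1-weighted volume, resampled to 96x96x96 voxels (assumed size)
x = torch.randn(1, 1, 96, 96, 96)
logits = model(x)                                  # shape (1, 2): CN vs AD scores

Keeping the classifier head to a single linear layer after global average pooling, rather than large fully connected layers over a flattened feature map, is what keeps models in this class down to a few hundred thousand parameters.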

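The cross-validated metrics reported in the abstract (accuracy, sensitivity, specificity, AUROC) follow mechanically once per-fold scores exist; the sketch below shows that bookkeeping with scikit-learn's StratifiedKFold for the 6-fold setting. The labels and scores here are synthetic placeholders standing in for the 726 ADNI scans and the trained network's outputs, and the 0.5 decision threshold and random seed are assumptions.

import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import confusion_matrix, roc_auc_score

def fold_metrics(y_true, y_score, threshold=0.5):
    """Accuracy, sensitivity, specificity and AUROC for one held-out fold."""
    y_pred = (y_score >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),   # AD scans correctly flagged
        "specificity": tn / (tn + fp),   # CN scans correctly cleared
        "auroc": roc_auc_score(y_true, y_score),
    }

# Synthetic stand-ins for the 726 labels (0 = CN, 1 = AD) and the model's
# AD probabilities; in the real pipeline y_score comes from the trained
# 3D-CNN applied to each test fold.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=726)
y_score = np.clip(y * 0.6 + rng.normal(0.2, 0.25, size=726), 0.0, 1.0)

skf = StratifiedKFold(n_splits=6, shuffle=True, random_state=0)
results = [
    fold_metrics(y[test_idx], y_score[test_idx])
    for _, test_idx in skf.split(np.zeros((len(y), 1)), y)
]
mean = {k: float(np.mean([r[k] for r in results])) for k in results[0]}
print(mean)  # per-metric means across the 6 folds

Stratified splitting keeps the AD/CN ratio roughly constant across folds, which matters for stable sensitivity and specificity estimates on a cohort of this size.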