MGFormer: A lightweight multi-granular transformer for subject-independent Alzheimer’s classification

July 9, 2025


Raiyan Rahman, Abul Kalam al Azad, Sifat Momen

Transformer-based models have shown great potential for detecting Alzheimer’s Disease (AD) from EEG signals. However, these models often struggle due to their high computational demands and limited ability to capture EEG features across multiple scales. Most existing methods use shallow feature extraction techniques, failing to fully exploit the spatial relationships and spectral information essential for identifying subtle disease-related changes. We propose MGFormer, a lightweight Convolutional Neural Network (CNN)–Transformer hybrid architecture designed for Electroencephalogram (EEG)-based AD classification. MGFormer features a Multi-Granular Token Encoder (MgTE) that extracts spatial–temporal features at multiple granularities, and a Hybrid Feature Fusion (HFF) module that combines long-range temporal modeling (via self-attention), local feature extraction (via a 1D CNN), and Fast Fourier Transform (FFT)-based spectral information from the embeddings. We perform a subject-independent evaluation across 113 subjects, which ensures realistic generalizability in the medical domain. MGFormer achieves 70.48% accuracy on the AUT-AD dataset and 97.85% on the FSU-AD dataset, outperforming nine state-of-the-art models across six metrics. Ablation studies confirm the critical contributions of the FFT features, the 1D CNN, and the HFF depth, while a model complexity analysis shows superior efficiency in parameters and FLOPs compared to the baselines. MGFormer effectively addresses key limitations in EEG-based AD classification, offering strong accuracy, generalizability, and computational efficiency.
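To make the HFF idea concrete, here is a minimal NumPy sketch of fusing the three branches the abstract names: self-attention for long-range temporal context, a 1D convolution for local features, and FFT magnitudes for spectral information. The shapes, the random stand-in projection weights, the fixed smoothing kernel, and the concatenation-based fusion are all illustrative assumptions, not the paper’s actual design or parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def hybrid_feature_fusion(tokens, seed=0):
    """Toy sketch of an HFF-style module (hypothetical, not the paper's code).

    tokens: (T, D) array of T embedding vectors of dimension D.
    Returns a (T, 3*D) fused feature map from three branches.
    """
    rng = np.random.default_rng(seed)
    T, D = tokens.shape

    # Branch 1: single-head self-attention for long-range temporal modeling
    # (random projections stand in for learned weight matrices).
    Wq, Wk, Wv = (rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(3))
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(Q @ K.T / np.sqrt(D)) @ V                 # (T, D)

    # Branch 2: depthwise 1D convolution along time for local features
    # (a fixed smoothing kernel stands in for a learned one).
    kernel = np.array([0.25, 0.5, 0.25])
    local = np.stack(
        [np.convolve(tokens[:, d], kernel, mode="same") for d in range(D)],
        axis=1,
    )                                                        # (T, D)

    # Branch 3: spectral features via the real FFT along the time axis;
    # np.resize crudely tiles the (T//2+1, D) spectrum back to T rows.
    spec = np.abs(np.fft.rfft(tokens, axis=0))               # (T//2+1, D)
    spec_feat = np.resize(spec, (T, D))                      # (T, D)

    # Fuse the three branches by concatenating along the feature axis.
    return np.concatenate([attn, local, spec_feat], axis=1)  # (T, 3*D)
```

For example, a sequence of 32 eight-dimensional tokens yields a `(32, 24)` fused map; a real implementation would learn the projections and convolution and choose a more principled alignment for the spectral branch.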