Enhancing Sport Activity Recognition Based on Wearable Sensors Using Hybrid Deep Neural Networks
Issued Date
2026-01-01
Resource Type
Scopus ID
2-s2.0-105036985464
Journal Title
11th International Conference on Digital Arts, Media and Technology and 9th ECTI Northern Section Conference on Electrical, Electronics, Computer and Telecommunications Engineering (ECTI DAMT & NCON 2026)
Start Page
676
End Page
681
Rights Holder(s)
SCOPUS
Bibliographic Citation
11th International Conference on Digital Arts, Media and Technology and 9th ECTI Northern Section Conference on Electrical, Electronics, Computer and Telecommunications Engineering (ECTI DAMT & NCON) (2026), 676-681
Suggested Citation
Mekruksavanich S., Hnoohom N., Jitpattanakul A. Enhancing Sport Activity Recognition Based on Wearable Sensors Using Hybrid Deep Neural Networks. 11th International Conference on Digital Arts, Media and Technology and 9th ECTI Northern Section Conference on Electrical, Electronics, Computer and Telecommunications Engineering (ECTI DAMT & NCON) (2026), 676-681. doi:10.1109/ECTIDAMTNCON67592.2026.11459997. Retrieved from: https://repository.li.mahidol.ac.th/handle/123456789/116505
Title
Enhancing Sport Activity Recognition Based on Wearable Sensors Using Hybrid Deep Neural Networks
Author(s)
Mekruksavanich S.; Hnoohom N.; Jitpattanakul A.
Author's Affiliation
Corresponding Author(s)
Other Contributor(s)
Abstract
Human activity recognition (HAR) is increasingly important in sports science and fitness for real-time monitoring of athlete performance. This paper proposes a hybrid deep learning architecture for sport activity recognition using wearable sensor data. The model integrates convolutional neural networks (CNNs) for automatic spatial feature extraction with bidirectional gated recurrent units (BiGRUs) to model temporal dynamics, enabling effective learning of complex sport movements. Multisensor fusion of tri-axial accelerometer data from wrist, neck, and thigh placements is employed to capture comprehensive motion patterns. The proposed CNN-BiGRU framework is evaluated on the publicly available IM-Sporting Behaviors dataset, which includes six sport activities performed by 20 subjects. Experimental results demonstrate superior performance over conventional machine learning methods and baseline deep models. The approach achieves accuracies of 99.83% for wrist-based recognition, 99.83% for neck-based recognition, and 100.00% for thigh-based recognition, with corresponding F1-scores of 99.78%, 99.77%, and 100.00%, respectively. Low performance variance across all experiments indicates strong robustness and generalization. Ablation studies further confirm that the hybrid architecture significantly outperforms standalone CNN and BiGRU models, validating the effectiveness of jointly learning spatial and temporal representations for sport activity recognition.
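The abstract describes a pipeline in which a CNN extracts spatial features from tri-axial accelerometer windows and a bidirectional GRU models their temporal dynamics before classification into six sport activities. The following is a minimal NumPy sketch of that CNN-BiGRU structure; the window length, filter width, channel counts, hidden size, and random weights are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(42)
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

def conv1d_relu(x, w, b):
    """Valid 1D convolution over time, then ReLU.
    x: (T, C_in), w: (K, C_in, C_out), b: (C_out,)"""
    K = w.shape[0]
    t_out = x.shape[0] - K + 1
    out = np.stack([np.tensordot(x[t:t + K], w, axes=([0, 1], [0, 1])) + b
                    for t in range(t_out)])
    return np.maximum(out, 0.0)

def gru_params(d_in, d_h):
    # Small random weights for update (z), reset (r), and candidate (n) gates.
    return {g: {"W": rng.normal(0, 0.1, (d_in, d_h)),
                "U": rng.normal(0, 0.1, (d_h, d_h)),
                "b": np.zeros(d_h)}
            for g in ("z", "r", "n")}

def gru_step(h, x, p):
    # Standard GRU cell: update gate, reset gate, candidate state.
    z = sigmoid(x @ p["z"]["W"] + h @ p["z"]["U"] + p["z"]["b"])
    r = sigmoid(x @ p["r"]["W"] + h @ p["r"]["U"] + p["r"]["b"])
    n = np.tanh(x @ p["n"]["W"] + (r * h) @ p["n"]["U"] + p["n"]["b"])
    return (1.0 - z) * n + z * h

def bigru_last(xs, pf, pb, d_h):
    # Run one GRU forward and one backward over time; concatenate final states.
    hf = np.zeros(d_h)
    for x in xs:
        hf = gru_step(hf, x, pf)
    hb = np.zeros(d_h)
    for x in xs[::-1]:
        hb = gru_step(hb, x, pb)
    return np.concatenate([hf, hb])  # shape (2 * d_h,)

def classify(window, params):
    feats = conv1d_relu(window, params["w_conv"], params["b_conv"])
    h = bigru_last(feats, params["gru_f"], params["gru_b"], params["d_h"])
    logits = h @ params["W_out"] + params["b_out"]
    e = np.exp(logits - logits.max())  # softmax over activity classes
    return e / e.sum()

# Illustrative sizes: a 128-sample window of tri-axial accelerometer data,
# 16 conv filters of width 5, hidden size 8, 6 sport-activity classes.
d_h, n_classes = 8, 6
params = {
    "w_conv": rng.normal(0, 0.1, (5, 3, 16)),
    "b_conv": np.zeros(16),
    "gru_f": gru_params(16, d_h),
    "gru_b": gru_params(16, d_h),
    "d_h": d_h,
    "W_out": rng.normal(0, 0.1, (2 * d_h, n_classes)),
    "b_out": np.zeros(n_classes),
}
window = rng.normal(size=(128, 3))  # one sensor window (time steps, axes)
probs = classify(window, params)
```

In the paper's multisensor setting, one such window would come from each placement (wrist, neck, thigh); the sketch processes a single placement to keep the structure visible.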
