A Novel Deep BiGRU-ResNet Model for Human Activity Recognition using Smartphone Sensors
Issued Date
2022-01-01
Resource Type
Scopus ID
2-s2.0-85136230076
Journal Title
2022 19th International Joint Conference on Computer Science and Software Engineering, JCSSE 2022
Rights Holder(s)
SCOPUS
Bibliographic Citation
2022 19th International Joint Conference on Computer Science and Software Engineering, JCSSE 2022 (2022)
Suggested Citation
Mekruksavanich S., Jantawong P., Hnoohom N., Jitpattanakul A. A Novel Deep BiGRU-ResNet Model for Human Activity Recognition using Smartphone Sensors. 2022 19th International Joint Conference on Computer Science and Software Engineering, JCSSE 2022 (2022). doi:10.1109/JCSSE54890.2022.9836276. Retrieved from: https://repository.li.mahidol.ac.th/handle/20.500.14594/84412
Title
A Novel Deep BiGRU-ResNet Model for Human Activity Recognition using Smartphone Sensors
Author's Affiliation
Other Contributor(s)
Abstract
Human activity recognition (HAR) using wearable sensors is employed in several applications, including remote health monitoring and exercise performance assessment. Most HAR research builds either on traditional machine learning or on more recent deep learning methodologies. While machine learning techniques have proven effective for HAR, they require manual feature extraction. Deep learning methods have therefore been developed to learn features automatically and circumvent this constraint. This paper presents an innovative deep learning approach that combines bidirectional gated recurrent units (BiGRU) with deep residual modeling techniques. The objective of the proposed model, BiGRU-ResNet, is to increase accuracy while decreasing the number of parameters. The model comprises two BiGRU layers, three residual layers, one global average pooling layer, and one softmax layer. It was evaluated on the publicly available UCI-HAR dataset. Experimental results indicate that the proposed model outperforms previous deep learning-based models under 5-fold cross-validation, achieving 99.09% accuracy and a 99.15% F1-score.
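The abstract describes the network only at a high level: two BiGRU layers, three residual layers, global average pooling, and a softmax classifier. The following is a minimal PyTorch sketch of such an architecture, not the authors' implementation; the hidden sizes, kernel sizes, residual-block design, and the class names BiGRUResNet and ResidualBlock are illustrative assumptions, while the input shape (128 timesteps, 9 inertial channels, 6 activity classes) follows the public UCI-HAR dataset description.

```python
# Minimal sketch of a BiGRU-ResNet-style HAR classifier, assuming the layer
# arrangement given in the abstract (BiGRU x2 -> residual blocks x3 -> GAP -> softmax).
# Hyperparameters are illustrative, not the paper's.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualBlock(nn.Module):
    """1-D convolutional residual block with an identity (or projected) skip connection."""

    def __init__(self, in_channels: int, out_channels: int, kernel_size: int = 3):
        super().__init__()
        padding = kernel_size // 2
        self.conv1 = nn.Conv1d(in_channels, out_channels, kernel_size, padding=padding)
        self.bn1 = nn.BatchNorm1d(out_channels)
        self.conv2 = nn.Conv1d(out_channels, out_channels, kernel_size, padding=padding)
        self.bn2 = nn.BatchNorm1d(out_channels)
        # Project the skip path only when the channel counts differ.
        self.shortcut = (
            nn.Conv1d(in_channels, out_channels, kernel_size=1)
            if in_channels != out_channels
            else nn.Identity()
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = self.shortcut(x)
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + residual)


class BiGRUResNet(nn.Module):
    """Sketch: two BiGRU layers, three residual blocks, global average pooling, softmax."""

    def __init__(self, n_channels: int = 9, n_classes: int = 6, hidden: int = 64):
        super().__init__()
        # Two stacked bidirectional GRU layers over the raw sensor time series.
        self.bigru = nn.GRU(n_channels, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        # Three residual blocks applied to the BiGRU feature sequence.
        self.res1 = ResidualBlock(2 * hidden, 128)
        self.res2 = ResidualBlock(128, 128)
        self.res3 = ResidualBlock(128, 128)
        self.fc = nn.Linear(128, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels), e.g. UCI-HAR windows of 128 samples x 9 channels.
        out, _ = self.bigru(x)                  # (batch, time, 2*hidden)
        out = out.transpose(1, 2)               # (batch, 2*hidden, time) for Conv1d
        out = self.res3(self.res2(self.res1(out)))
        out = out.mean(dim=2)                   # global average pooling over time
        # Softmax output layer; during training one would typically feed the raw
        # logits self.fc(out) to nn.CrossEntropyLoss instead.
        return F.softmax(self.fc(out), dim=1)


if __name__ == "__main__":
    model = BiGRUResNet()
    dummy = torch.randn(4, 128, 9)              # 4 windows, 128 timesteps, 9 channels
    print(model(dummy).shape)                   # torch.Size([4, 6])
```

Pairing recurrent layers with 1-D convolutional residual blocks in this way is one common reading of the abstract's description; the paper itself should be consulted for the exact block internals and training configuration.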