Multi-resolution CNN for Lower Limb Movement Recognition Based on Wearable Sensors
Issued Date
2022-01-01
Resource Type
ISSN
03029743
eISSN
16113349
Scopus ID
2-s2.0-85142674497
Journal Title
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume
13651 LNAI
Start Page
111
End Page
119
Rights Holder(s)
SCOPUS
Bibliographic Citation
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) Vol.13651 LNAI (2022) , 111-119
Suggested Citation
Hnoohom N., Chotivatunyu P., Mekruksavanich S., Jitpattanakul A. Multi-resolution CNN for Lower Limb Movement Recognition Based on Wearable Sensors. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) Vol.13651 LNAI (2022), 111-119. doi:10.1007/978-3-031-20992-5_10 Retrieved from: https://repository.li.mahidol.ac.th/handle/20.500.14594/87134
Title
Multi-resolution CNN for Lower Limb Movement Recognition Based on Wearable Sensors
Author's Affiliation
Other Contributor(s)
Abstract
Human activity recognition (HAR) remains a difficult challenge in human-computer interaction (HCI). The Internet of Healthcare Things (IoHT) and other technologies are expected to be used primarily in conjunction with HAR to support healthcare and elder care. Within HAR research, lower limb movement recognition is a challenging topic that can be applied to the daily care of the elderly, frail, and disabled. Recent advances in deep learning have made high-level autonomous feature extraction feasible, which is used to increase HAR efficiency, and deep learning approaches have been applied to sensor-based HAR in various domains. This study presents a novel method that uses convolutional neural networks (CNNs) with different kernel dimensions, referred to as multi-resolution CNNs, to detect high-level features at various resolutions. Recognition performance was evaluated on HARTH, a publicly available benchmark dataset containing acceleration data of the lower limb movements of 22 participants. The experimental results show that the proposed approach improves recognition performance, achieving an F1 score of 94.76%.
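The core idea described in the abstract — parallel convolutional branches with different kernel sizes extracting features at multiple temporal resolutions from an acceleration window — can be sketched as below. This is a minimal illustration only: the paper's actual architecture, kernel sizes, filter counts, and pooling choices are not given in this record, so those values here are assumptions.

```python
import numpy as np

def conv1d(signal, kernels):
    """Valid-mode 1D convolution of a (channels, time) signal with a bank
    of (out_ch, channels, k) kernels; returns (out_ch, time - k + 1)."""
    out_ch, in_ch, k = kernels.shape
    t = signal.shape[1] - k + 1
    # Sliding windows over time: shape (in_ch, t, k)
    windows = np.stack([signal[:, i:i + k] for i in range(t)], axis=1)
    # Sum over input channels and kernel taps for each output channel.
    return np.einsum('ctk,ock->ot', windows, kernels)

def multi_resolution_features(window, kernel_sizes=(3, 5, 9), filters=8, rng=None):
    """Sketch of a multi-resolution CNN block: parallel conv branches with
    different kernel sizes (resolutions), ReLU, global max pooling, and
    concatenation into one feature vector. Weights are random stand-ins."""
    rng = np.random.default_rng(rng)
    feats = []
    for k in kernel_sizes:  # one branch per temporal resolution
        kernels = rng.standard_normal((filters, window.shape[0], k)) * 0.1
        fmap = np.maximum(conv1d(window, kernels), 0.0)  # ReLU
        feats.append(fmap.max(axis=1))                   # global max pooling
    return np.concatenate(feats)

# A hypothetical 3-axis accelerometer window of 128 samples
x = np.random.default_rng(0).standard_normal((3, 128))
f = multi_resolution_features(x, rng=0)
print(f.shape)  # (24,) = 8 filters x 3 branches
```

In a trained model the kernels would be learned and the concatenated vector fed to a classifier head; here random weights simply demonstrate the branch-and-concatenate structure.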