Deep Learning Models for Daily Living Activity Recognition based on Wearable Inertial Sensors
Issued Date
2022-01-01
Resource Type
Scopus ID
2-s2.0-85136204854
Journal Title
2022 19th International Joint Conference on Computer Science and Software Engineering, JCSSE 2022
Rights Holder(s)
SCOPUS
Bibliographic Citation
2022 19th International Joint Conference on Computer Science and Software Engineering, JCSSE 2022 (2022)
Suggested Citation
Mekruksavanich S., Jantawong P., Hnoohom N., Jitpattanakul A. Deep Learning Models for Daily Living Activity Recognition based on Wearable Inertial Sensors. 2022 19th International Joint Conference on Computer Science and Software Engineering, JCSSE 2022 (2022). doi:10.1109/JCSSE54890.2022.9836239 Retrieved from: https://repository.li.mahidol.ac.th/handle/123456789/84371
Title
Deep Learning Models for Daily Living Activity Recognition based on Wearable Inertial Sensors
Author's Affiliation
Other Contributor(s)
Abstract
Due to the breadth of its application domains, Human Activity Recognition (HAR) is a challenging area of human-computer interaction. HAR can be applied in remote monitoring for elderly healthcare and in intelligent manufacturing, among other applications. HAR based on wearable inertial sensors has been studied considerably more than vision-based HAR because of its identification efficiency across many kinds of human actions. Sensor-based HAR is generally applicable in both indoor and outdoor settings without the privacy concerns of vision-based implementations. In this research, we explore the recognition performance of multiple deep learning (DL) models in recognizing everyday living human activities. We developed a deep residual neural network that employs aggregated multi-branch transformations to boost identification performance. The proposed model is called the ResNeXt model. To evaluate its performance, three standard DL models (CNN, LSTM, and CNN-LSTM) are investigated and compared to our proposed model using a standard HAR benchmark called the Daily Living Activity dataset. This dataset gathers motion signal data from multimodal sensors (accelerometer, gyroscope, and magnetometer) at three distinct body locations (wrist, hip, and ankle). The experimental findings reveal that the proposed model surpasses the other benchmark DL models with the highest accuracy and F1-scores. Furthermore, the findings show that the ResNeXt model is more robust than the other models while using fewer training parameters.
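The core idea behind the aggregated multi-branch transformation the abstract describes can be illustrated with a minimal NumPy sketch: the input feature vector is passed through several parallel low-dimensional branches whose outputs are summed and added to the identity shortcut, as in a ResNeXt-style residual block. The feature width, bottleneck width, cardinality, and the use of plain linear transforms (rather than grouped convolutions over the raw sensor windows) are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def resnext_block(x, w_in, w_out):
    """One ResNeXt-style residual block on a 1D feature vector.

    Each (w_in[k], w_out[k]) pair is one branch: reduce to a narrow
    bottleneck with ReLU, expand back to the input width. Branch
    outputs are aggregated by summation, then the identity shortcut
    is added and a final ReLU applied. Sizes here are hypothetical.
    """
    branches = []
    for wi, wo in zip(w_in, w_out):
        h = np.maximum(wi @ x, 0.0)   # branch bottleneck + ReLU
        branches.append(wo @ h)       # expand back to input width
    return np.maximum(x + sum(branches), 0.0)  # shortcut + ReLU

# Hypothetical dimensions: 64 input features, 16-wide bottleneck,
# cardinality 4 (four parallel branches).
features, bottleneck, cardinality = 64, 16, 4
x = rng.normal(size=features)
w_in = [0.1 * rng.normal(size=(bottleneck, features)) for _ in range(cardinality)]
w_out = [0.1 * rng.normal(size=(features, bottleneck)) for _ in range(cardinality)]

y = resnext_block(x, w_in, w_out)
print(y.shape)  # (64,)
```

Increasing cardinality (the number of parallel branches) while keeping each branch narrow is what lets ResNeXt-style models improve accuracy without a large growth in parameter count, which is consistent with the abstract's observation that the proposed model performs well with fewer training parameters.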