Deep Learning Networks for Eating and Drinking Recognition based on Smartwatch Sensors
Issued Date
2022-01-01
Scopus ID
2-s2.0-85141791034
Journal Title
Proceedings - 2022 Research, Invention, and Innovation Congress: Innovative Electricals and Electronics, RI2C 2022
Start Page
106
End Page
111
Rights Holder(s)
SCOPUS
Bibliographic Citation
Proceedings - 2022 Research, Invention, and Innovation Congress: Innovative Electricals and Electronics, RI2C 2022 (2022) , 106-111
Suggested Citation
Mekruksavanich S., Jantawong P., Hnoohom N., Jitpattanakul A. Deep Learning Networks for Eating and Drinking Recognition based on Smartwatch Sensors. Proceedings - 2022 Research, Invention, and Innovation Congress: Innovative Electricals and Electronics, RI2C 2022 (2022), 106-111. doi:10.1109/RI2C56397.2022.9910318 Retrieved from: https://repository.li.mahidol.ac.th/handle/123456789/84341
Title
Deep Learning Networks for Eating and Drinking Recognition based on Smartwatch Sensors
Abstract
Smartwatches are becoming increasingly popular for recognizing and monitoring human actions in everyday life. These wearable devices are equipped with various IMU sensors for ubiquitous recording and processing of human physical activity data. Sensor-based human activity recognition (HAR) has become one of the most active research topics due to its wide range of real-world applications in practical domains such as healthcare monitoring, sports and exercise tracking, and misbehavior prevention. Many machine learning and deep learning approaches have recently been proposed to solve the problem of human activity recognition, focusing on activities of daily living. However, an exciting and challenging HAR topic deals with more complex human activities, such as eating-related activities. This paper proposes a sensor-based HAR framework using data from eating-related activities recorded by a smartwatch sensor. In this framework, six deep learning networks (CNN, LSTM, BiLSTM, Stacked LSTM, CNN-LSTM, and LSTM-CNN) are evaluated on the recognition of eating-related activities. To ensure the model's dependability, data from eating-related activities in the standard publicly available dataset WISDM-HARB are utilized to evaluate the proposed framework using state-of-the-art metrics: accuracy and confusion matrices. Experimental findings demonstrate that the Stacked LSTM model outperforms the other deep learning models, achieving an accuracy of 97.37%.
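The best-performing architecture named in the abstract, a Stacked LSTM over windowed smartwatch IMU signals, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the channel count (3-axis accelerometer + 3-axis gyroscope = 6), window length, hidden size, and number of eating-related activity classes are all assumed values, not taken from the paper.

```python
import torch
import torch.nn as nn

class StackedLSTMHAR(nn.Module):
    """Sketch of a stacked-LSTM classifier for sensor-based HAR.

    Hyperparameters below are illustrative assumptions, not the
    paper's configuration.
    """

    def __init__(self, n_channels=6, hidden=64, n_layers=2, n_classes=6):
        super().__init__()
        # num_layers=2 stacks two LSTM layers, i.e. a "Stacked LSTM"
        self.lstm = nn.LSTM(n_channels, hidden,
                            num_layers=n_layers, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):
        # x: (batch, time_steps, channels) -- one window of IMU samples
        out, _ = self.lstm(x)
        # classify each window from the final timestep's hidden state
        return self.fc(out[:, -1])

model = StackedLSTMHAR()
# a batch of 4 windows, each 200 timesteps of 6 IMU channels
logits = model(torch.randn(4, 200, 6))
print(logits.shape)  # torch.Size([4, 6])
```

In practice, such a model would be trained with cross-entropy loss on labeled activity windows and evaluated with accuracy and a confusion matrix, the metrics the abstract reports.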