An Efficient ResNetSE Architecture for Smoking Activity Recognition from Smartwatch
Issued Date
2023-01-01
Resource Type
ISSN
10798587
eISSN
2326005X
Scopus ID
2-s2.0-85132160676
Journal Title
Intelligent Automation and Soft Computing
Volume
35
Issue
1
Start Page
1245
End Page
1259
Rights Holder(s)
SCOPUS
Bibliographic Citation
Intelligent Automation and Soft Computing Vol.35 No.1 (2023), 1245-1259
Suggested Citation
Hnoohom N., Mekruksavanich S., Jitpattanakul A. An Efficient ResNetSE Architecture for Smoking Activity Recognition from Smartwatch. Intelligent Automation and Soft Computing Vol.35 No.1 (2023), 1245-1259. doi:10.32604/iasc.2023.028290. Retrieved from: https://repository.li.mahidol.ac.th/handle/20.500.14594/81807
Title
An Efficient ResNetSE Architecture for Smoking Activity Recognition from Smartwatch
Author(s)
Hnoohom N.; Mekruksavanich S.; Jitpattanakul A.
Author's Affiliation
Other Contributor(s)
Abstract
Smoking is a major cause of cancer, heart disease and other afflictions that lead to early mortality. An effective smoking classification mechanism that provides insights into individual smoking habits would assist in implementing addiction treatment initiatives. Smoking activities often accompany other activities such as drinking or eating. Consequently, smoking activity recognition can be a challenging topic in human activity recognition (HAR). A deep learning framework for smoking activity recognition (SAR) employing smartwatch sensors was proposed, together with a deep residual network combined with squeeze-and-excitation modules (ResNetSE), to increase the effectiveness of the SAR framework. The proposed model was tested against basic convolutional neural networks (CNNs) and recurrent neural networks (LSTM, BiLSTM, GRU and BiGRU) to recognize smoking and other similar activities such as drinking, eating and walking using the UT-Smoke dataset. Three different scenarios were investigated, and their recognition performances were evaluated using standard HAR metrics (accuracy, F1-score and the area under the ROC curve). Our proposed ResNetSE outperformed the other basic deep learning networks, with a maximum accuracy of 98.63%.
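Note: the record does not include the authors' implementation. The following is a minimal illustrative sketch, in PyTorch, of the core idea named in the abstract: a 1D convolutional residual block followed by a squeeze-and-excitation (SE) module, applied to windows of smartwatch sensor signals. All layer sizes, the kernel width, the SE reduction ratio, the number of sensor channels and the window length are assumptions, not values taken from the paper.

# Minimal sketch (not the authors' implementation) of a squeeze-and-excitation
# module attached to a 1D residual block for smartwatch sensor windows.
import torch
import torch.nn as nn

class SEBlock1D(nn.Module):
    """Squeeze-and-excitation: average-pool over time, then learn per-channel weights."""
    def __init__(self, channels: int, reduction: int = 16):  # reduction ratio is an assumption
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                    # x: (batch, channels, time)
        w = x.mean(dim=-1)                   # squeeze: (batch, channels)
        w = self.fc(w).unsqueeze(-1)         # excitation: (batch, channels, 1)
        return x * w                         # recalibrate channel responses

class ResidualSEBlock1D(nn.Module):
    """Two Conv1d layers with batch norm, an SE module, and a shortcut connection."""
    def __init__(self, in_ch: int, out_ch: int, kernel: int = 5):  # kernel width is an assumption
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, kernel, padding=kernel // 2),
            nn.BatchNorm1d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv1d(out_ch, out_ch, kernel, padding=kernel // 2),
            nn.BatchNorm1d(out_ch),
        )
        self.se = SEBlock1D(out_ch)
        self.shortcut = nn.Conv1d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.se(self.body(x)) + self.shortcut(x))

# Usage example: a batch of 8 windows, 6 sensor channels (3-axis accelerometer
# plus 3-axis gyroscope) and 128 time steps; the window length is an assumption.
x = torch.randn(8, 6, 128)
block = ResidualSEBlock1D(in_ch=6, out_ch=64)
print(block(x).shape)   # torch.Size([8, 64, 128])

The SE module lets the network reweight sensor feature channels per window, which is the mechanism the abstract credits for improving on plain CNN and recurrent baselines; in a full model several such blocks would be stacked before a global pooling layer and a softmax classifier over the activity classes.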