A Deep Residual-based Model on Multi-Branch Aggregation for Stress and Emotion Recognition through Biosignals
Issued Date
2022-01-01
Scopus ID
2-s2.0-85133321953
Journal Title
19th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology, ECTI-CON 2022
Rights Holder(s)
SCOPUS
Bibliographic Citation
19th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology, ECTI-CON 2022 (2022)
Suggested Citation
Mekruksavanich S., Hnoohom N., Jitpattanakul A. A Deep Residual-based Model on Multi-Branch Aggregation for Stress and Emotion Recognition through Biosignals. 19th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology, ECTI-CON 2022 (2022). doi:10.1109/ECTI-CON54298.2022.9795449 Retrieved from: https://repository.li.mahidol.ac.th/handle/20.500.14594/84389
Abstract
Stress and emotion recognition (SER) is a rapidly growing field of study with applications in areas such as psychological wellbeing, rehabilitation services, athletic training, and human-computer interaction. Biosignals such as the electrocardiogram (ECG), electromyogram (EMG), and electrodermal activity (EDA) have frequently been utilized in learning-based approaches to SER. This study introduces a convolutional neural network motivated by ResNeXt to support multimodal learning. The proposed model, named StressNeXt, can extract high-level features from raw biosignals and classify emotional states effectively. We conduct a series of experiments on a publicly available benchmark dataset (WESAD) to determine the optimal configuration of the proposed solution for recognizing stress and emotion. After applying early fusion, we evaluated the deep learning models using 5-fold cross-validation. Our study demonstrates that the proposed technique can learn robust multimodal representations, achieving an accuracy of 87.73% using EDA alone. Furthermore, recognition accuracy improved to 99.92% when EDA was fused with accelerometer sensor data.
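The split-transform-merge design behind ResNeXt-style blocks, which the abstract alludes to, can be sketched as follows. This is a minimal illustration only: the layer sizes, random weights, and the `resnext_block` helper are assumptions for exposition, not the paper's actual StressNeXt architecture.

```python
import numpy as np

def branch(x, w1, w2):
    # One parallel branch: a bottleneck transform followed by a projection
    # back to the input dimension (ReLU in between). Shapes are illustrative.
    h = np.maximum(x @ w1, 0.0)
    return h @ w2

def resnext_block(x, branches):
    # Split-transform-merge: run the same-shaped transform in every branch,
    # sum (aggregate) the branch outputs, then add the residual identity path.
    agg = sum(branch(x, w1, w2) for w1, w2 in branches)
    return np.maximum(x + agg, 0.0)

rng = np.random.default_rng(0)
d, bottleneck, cardinality = 8, 4, 3  # hypothetical sizes
branches = [(rng.standard_normal((d, bottleneck)) * 0.1,
             rng.standard_normal((bottleneck, d)) * 0.1)
            for _ in range(cardinality)]
x = rng.standard_normal((2, d))  # batch of 2 feature vectors (e.g. biosignal features)
y = resnext_block(x, branches)
print(y.shape)  # (2, 8): the block preserves the input feature dimension
```

Increasing `cardinality` (the number of parallel branches) widens the aggregation without changing the input/output shape, which is the core ResNeXt idea the model builds on.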