Emotion Classification Using Transformer-Based Language Model
Issued Date
2025-01-01
Resource Type
Scopus ID
2-s2.0-105004559256
Journal Title
10th International Conference on Digital Arts, Media and Technology, DAMT 2025 and 8th ECTI Northern Section Conference on Electrical, Electronics, Computer and Telecommunications Engineering, NCON 2025
Start Page
191
End Page
196
Rights Holder(s)
SCOPUS
Bibliographic Citation
10th International Conference on Digital Arts, Media and Technology, DAMT 2025 and 8th ECTI Northern Section Conference on Electrical, Electronics, Computer and Telecommunications Engineering, NCON 2025 (2025), 191-196
Suggested Citation
Laojampa K., Wongpatikaseree K., Hnoohom N., Marukatat R. Emotion Classification Using Transformer-Based Language Model. 10th International Conference on Digital Arts, Media and Technology, DAMT 2025 and 8th ECTI Northern Section Conference on Electrical, Electronics, Computer and Telecommunications Engineering, NCON 2025 (2025), 191-196. doi:10.1109/ECTIDAMTNCON64748.2025.10962105. Retrieved from: https://repository.li.mahidol.ac.th/handle/123456789/110140
Title
Emotion Classification Using Transformer-Based Language Model
Author(s)
Laojampa K., Wongpatikaseree K., Hnoohom N., Marukatat R.
Author's Affiliation
Mahidol University
Corresponding Author(s)
Other Contributor(s)
Abstract
This paper presents emotion classification using transformer-based language models, motivated by the growing interest in emotion-related studies, particularly for the Thai language. Emotions play a crucial role in perception, influencing text input on various platforms and responses in chatbot interactions. Text messages, when combined into sentences, may convey diverse emotions, leading to misunderstandings and potentially inappropriate behavior. To classify emotions in text, experiments were conducted on a dataset from the Jubjai chatbot. The study aimed to identify the most effective pre-trained model and to compare the effect of data cleansing on fine-tuned and non-fine-tuned models, evaluating accuracy on 7-, 5-, and 3-class emotion sets. The experimental results demonstrated that the fine-tuned WangchanBERTa model outperformed XLM-RoBERTa, achieving accuracy of 73% for 7 emotions, 76% for 5 emotions, and 82% for 3 emotions.
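As a concrete illustration of the approach described in the abstract, the following is a minimal fine-tuning sketch using the Hugging Face transformers library with the public WangchanBERTa checkpoint. The Jubjai chatbot dataset is not publicly available, so the training texts and labels below are hypothetical placeholders, and the hyperparameters are illustrative assumptions rather than values taken from the paper.

# Minimal sketch: fine-tuning WangchanBERTa for Thai emotion classification.
# The dataset and hyperparameters here are placeholders, not the paper's.
import torch
from torch.utils.data import Dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "airesearch/wangchanberta-base-att-spm-uncased"  # WangchanBERTa
NUM_EMOTIONS = 7  # the paper also evaluates 5- and 3-class settings

class EmotionDataset(Dataset):
    """Wraps tokenized Thai sentences and integer emotion labels."""
    def __init__(self, texts, labels, tokenizer):
        self.enc = tokenizer(texts, truncation=True, padding=True, max_length=128)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=NUM_EMOTIONS
)

# Hypothetical placeholder data; the paper uses Jubjai chatbot messages.
train_texts, train_labels = ["..."], [0]

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="out",
        num_train_epochs=3,
        per_device_train_batch_size=16,
    ),
    train_dataset=EmotionDataset(train_texts, train_labels, tokenizer),
)
trainer.train()

The same setup applies to the XLM-RoBERTa baseline by swapping MODEL_NAME for "xlm-roberta-base", and to the 5- and 3-class experiments by changing NUM_EMOTIONS and remapping the labels.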
