Multi-views Emotional Knowledge Extraction for Emotion Recognition in Conversation

dc.contributor.author: Jian Z.
dc.contributor.author: Wu D.
dc.contributor.author: Wang S.
dc.contributor.author: He J.
dc.contributor.author: Yao J.
dc.contributor.author: Liu K.
dc.contributor.author: Wu Q.
dc.contributor.correspondence: Jian Z.
dc.contributor.other: Mahidol University
dc.date.accessioned: 2025-05-26T18:12:33Z
dc.date.available: 2025-05-26T18:12:33Z
dc.date.issued: 2025-07-08
dc.description.abstract: Emotion Recognition in Conversation (ERC) is a challenging task due to the scarcity and dispersion of contextual information across utterances. Most existing methods attempt to integrate comprehensive information to enhance utterance semantics, which, however, also introduces noise and irrelevant content, misleading the model and limiting its potential in emotion recognition. To this end, we introduce the concept of Conversational Clique (ConvClique) and propose CC-ERC, a multi-view emotional knowledge extraction method designed to capture the most relevant emotional cues within the ConvClique from complementary perspectives and collaboratively predict utterance emotions. Specifically, CC-ERC comprises two modules: 1) the Utterance Spatial Relationship (USR) module, which predicts emotions by modeling structural correlations among utterances, and 2) the Emotion Temporal Relationship (ETR) module, which captures emotion sequence patterns to determine utterance emotions. These modules are integrated to obtain the final prediction, contributing to the robustness and accuracy of emotion recognition. The effectiveness of CC-ERC is validated on three widely used ERC datasets, evaluated in both online and offline settings. Compared to the state-of-the-art methods, CC-ERC achieves average improvements of 0.63% in accuracy and 0.94% in weighted F1 scores. Ablation studies further validate the significance of ConvClique-based knowledge extraction and demonstrate the effectiveness of the USR and ETR modules in modeling utterance structural correlations and emotion sequence patterns.
dc.identifier.citation: Knowledge-Based Systems Vol.322 (2025)
dc.identifier.doi: 10.1016/j.knosys.2025.113601
dc.identifier.issn: 0950-7051
dc.identifier.scopus: 2-s2.0-105005491351
dc.identifier.uri: https://repository.li.mahidol.ac.th/handle/123456789/110377
dc.rights.holder: SCOPUS
dc.subject: Business, Management and Accounting
dc.subject: Computer Science
dc.subject: Decision Sciences
dc.title: Multi-views Emotional Knowledge Extraction for Emotion Recognition in Conversation
dc.type: Article
mu.datasource.scopus: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=105005491351&origin=inward
oaire.citation.title: Knowledge-Based Systems
oaire.citation.volume: 322
oairecerif.author.affiliation: College of Management Mahidol University
oairecerif.author.affiliation: Xiamen University