Please use this identifier to cite or link to this item: http://repository.li.mahidol.ac.th/dspace/handle/123456789/31604
Title: Speech and prosodic processing for assistive technology
Authors: Lalita Narupiyakul
Vlado Keselj
Nick Cercone
Booncharoen Sirinaovakul
Mahidol University
Dalhousie University
York University
King Mongkut's University of Technology Thonburi
Keywords: Computer Science
Issue Date: 1-Dec-2013
Citation: Frontiers in Artificial Intelligence and Applications. Vol.253, (2013), 36-48
Abstract: A speaker's utterance may convey a different meaning to a hearer than the speaker intended. Such ambiguities can be resolved by placing emphatic accents at different positions. In human communication, an utterance is emphasized at its focus to distinguish the important content and reduce ambiguity. In our Focus-to-Emphasize Tone (FET) system, we determine how a speaker's utterances are influenced by focus and by the speaker's intention. The relationships among focus information, the speaker's intention, and prosodic phenomena are investigated to recognize intonation patterns and annotate a sentence with prosodic marks. We propose the Focus-to-Emphasize Tone (FET) analysis, which includes: (i) generating constraints for foci, the speaker's intention, and prosodic features; (ii) defining the intonation patterns; and (iii) labelling a set of prosodic marks for a sentence. We also design the FET structure to support this analysis and to hold the focus, speaker-intention, and prosodic components. An implementation of the system is described, and evaluation results on the CMU Communicator (CMU-COM) dataset are presented. © 2013 The authors and IOS Press. All rights reserved.
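The labelling step (iii) of the abstract can be illustrated with a toy sketch: given a tokenized sentence and the index of the focused word, attach a pitch-accent mark to the focus and a boundary tone to the utterance-final word. This is purely illustrative; the function name, the rule, and the mark inventory (ToBI-style `H*` and `L-L%`) are assumptions for the example, not the paper's actual FET constraint system.

```python
# Hypothetical, simplified sketch of prosodic-mark labelling in the
# spirit of FET step (iii). Marks and rules are illustrative only.

def label_prosodic_marks(words, focus_index):
    """Return (word, mark) pairs: a pitch accent on the focused word,
    a boundary tone on the final word, and no mark elsewhere."""
    marks = []
    for i, word in enumerate(words):
        if i == focus_index:
            marks.append((word, "H*"))    # emphasize the focus word
        elif i == len(words) - 1:
            marks.append((word, "L-L%"))  # utterance-final boundary tone
        else:
            marks.append((word, None))    # unmarked
    return marks

# Focusing "booked" disambiguates the utterance toward the action:
print(label_prosodic_marks(["I", "booked", "a", "flight"], 1))
```

Moving `focus_index` changes which word carries the accent, mirroring how the same word string can convey different intentions under different focus placements.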
URI: https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84894597328&origin=inward
http://repository.li.mahidol.ac.th/dspace/handle/123456789/31604
ISSN: 0922-6389
Appears in Collections:Scopus 2011-2015

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.