Publication: Robust visual voice activity detection using Long Short-Term Memory recurrent neural network
Issued Date
2016-01-01
ISSN
1611-3349
0302-9743
Other identifier(s)
2-s2.0-84959019631
Rights
Mahidol University
Rights Holder(s)
SCOPUS
Bibliographic Citation
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Vol.9431, (2016), 380-391
Suggested Citation
Zaw Htet Aung, Panrasee Ritthipravat. Robust visual voice activity detection using Long Short-Term Memory recurrent neural network. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Vol.9431, (2016), 380-391. doi:10.1007/978-3-319-29451-3_31. Retrieved from: https://repository.li.mahidol.ac.th/handle/20.500.14594/43477
Title
Robust visual voice activity detection using Long Short-Term Memory recurrent neural network
Abstract
© Springer International Publishing Switzerland 2016. Many traditional visual voice activity detection systems rely on features extracted from mouth-region images, which are sensitive to noisy observations in the visual domain. In addition, the hyperparameters of the feature extraction process, which modulate the desired compromise between robustness, efficiency, and accuracy of the algorithm, are difficult to determine. Therefore, a visual voice activity detection algorithm is proposed that uses only simple lip shape information as features and a Long Short-Term Memory recurrent neural network (LSTM-RNN) as a classifier. Face detection is performed by a structural SVM based on histogram of oriented gradients (HOG) features. The detected face template is used to initialize a kernelized correlation filter tracker. Facial landmark coordinates are then extracted from the tracked face. A centroid distance function is applied to the geometrically normalized landmarks surrounding the outer and inner lip contours. Finally, discriminative (LSTM-RNN) and generative (Hidden Markov Model) methods are used to model the temporal lip shape sequences during speech and non-speech intervals, and their classification performances are compared. Experimental results show that the proposed algorithm using an LSTM-RNN achieves a classification rate of 98% in labeling speech and non-speech periods. It is robust and efficient for real-time applications.
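The centroid distance function mentioned in the abstract is a standard contour shape descriptor: the Euclidean distance of each contour point from the contour's centroid. A minimal sketch of that computation (the landmark coordinates below are illustrative placeholders, not data from the paper):

```python
import math

def centroid_distance(landmarks):
    """Centroid distance function: distance of each (x, y) contour
    point from the centroid of all points on the contour."""
    n = len(landmarks)
    cx = sum(x for x, _ in landmarks) / n
    cy = sum(y for _, y in landmarks) / n
    return [math.hypot(x - cx, y - cy) for x, y in landmarks]

# Hypothetical lip-contour landmarks (unit-square corners for clarity):
lips = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
print(centroid_distance(lips))  # each point is sqrt(0.5) from (0.5, 0.5)
```

Because the descriptor depends only on distances to the centroid, it is invariant to translation; the paper's geometric normalization of the landmarks would additionally handle scale and rotation before this step.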